VSFX 420 Blog
Monday, Feb. 8th:
I did more work on my shaders, but I think I need more help with my compositing, because what is really throwing me off is using all the layers together to make the final image.
I think I need a better understanding of what each layer is used for and how to put them together. I can see clearly in my render that my model has too many reflections on it, the shadow looks horrible, and I still need to include my shadow occlusion. I'm not sure what is going on, but I plan on attending some extra help sessions for this.
Monday, Feb. 6th:
Everything is set up for the compositing layers; I just need to animate my model and get it to line up with my background video plate. Once I figure out how to get the layers together in Nuke, I can start blending and shifting my textures together.
I fought a little to get that Fresnel to work, but thankfully I got it working.
Unfortunately I haven't been dedicating enough time to this project, and as much as I would love to work on it, I keep running into other events that take up my time.
By Thursday, Feb. 8th, I want to have it all integrated into my scene for critique. More will be posted then.
Sunday/Monday, Jan. 29th-Jan. 30th:
There were a lot of things between these dates that set me back and had me jumping through more hoops than I thought I would have to, but I am happy with the idea that I have now.
Tuesday, Jan. 24th:
After creating the breakdown for my video and fixing some render issues with the main video, it was finally time to submit what I had. I made a lot of adjustments since the last render of my green goblin grenades; I got some notes back after critique on my last render and made more changes to the final look. The adjustments I made were the animation speed of the green goblin grenades, the path the grenades travel, and a roto paint clean plate of my shadow for the explosion (since there shouldn't be any shadows during an explosion). Here is the final video from those changes!
When my video was finally done, my friends and I went out to get some food and sleep after the project, here are some of the photos we took:
Sunday, Jan. 22nd:
Over the past two days, I made a lot of progress in Maya, Photoshop, and Nuke to bring my final result together. I haven't rendered the whole scene yet, only every fifth frame, but I want to get feedback from my peers on what I can improve and hear their opinions. Linked below is the video of my progress. The final look for this project will be done by Tuesday!
Friday, Jan. 20th:
On Friday, I started bringing more integration elements into my scene that took the sense of realism around my object to another level, beginning with how an environment shadow is made.
It turns out an environment shadow needs some perspective trickery to achieve this faux look. The first step is to place a camera at the position of the spotlight in your scene. You do this by looking through your selected light, then going to the view tab in your viewer and choosing "Create camera through view". This creates a camera and another (unnecessary) light in your scene; I deleted the extra light and looked through the new camera, which I renamed "KeyLightCamera".
From this perspective, I created another render layer in Maya called "EnvShadowPrep" to generate the environment shadow paint for the next render layer. When the image is rendered from this perspective, I open Photoshop to create my new environment shadow paint. It's simple enough: select the shadows in Photoshop with the magic wand tool and fill them with a red value, then invert the selection and fill that with black.
Once the paint layer is made, you take it back into Maya and use this projection paint to match how the shadows look in your regular scene. I had to pin the projection to fit the resolution gate because it didn't cover the scene entirely. When the floor gets this surface shader, you apply the same surface shader to the ball as well; this creates a warping effect around the ball, making it look like it adapts to the shadows in the scene.
When this is done, all the layers are re-rendered and brought back into Nuke. I read in the new environment shadow paint layer, separate from the foreground and background work, and use a Shuffle node to move the red channel into the alpha. Now that the red is a cut-out alpha, it can be used as the mask for a color correct under both the diffuse direct and specular direct Shuffle nodes.
Masking the color correct nodes with the newly made alpha and setting each node's gain to zero now creates this shadow in the scene. Further adjustments will be needed to achieve a better integrated look, but it's a big step into the next part of the project. It now just needs more tweaking in Maya with the lights, in Photoshop for painting more of the projection, and in Nuke to layer it all back together for the final result.
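The masked gain-to-zero trick can be sketched in plain Python (this is not Nuke code, and the four-pixel "image" and its values are made up for illustration). A color correct with gain g applied under a mask m blends each pixel toward pixel × g where the mask is on, so a gain of zero simply blacks out the masked region:

```python
# Sketch of the masked ColorCorrect step from the environment-shadow setup.
# Plain Python over a tiny strip of per-pixel values, not actual Nuke nodes.

def shuffle_red_to_alpha(rgb):
    """Take the red channel of the painted projection as the matte (alpha)."""
    return [r for (r, g, b) in rgb]

def color_correct_gain(pixels, gain, mask):
    """A ColorCorrect under a mask blends toward pixel * gain where mask = 1."""
    return [p * (1.0 - m) + p * gain * m for p, m in zip(pixels, mask)]

# Hypothetical 4-pixel strip: red paint marks where the environment shadow falls.
paint = [(1.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 0.0, 0.0), (0.0, 0.0, 0.0)]
diffuse_direct = [0.8, 0.8, 0.8, 0.8]

matte = shuffle_red_to_alpha(paint)                     # [1.0, 1.0, 0.0, 0.0]
shadowed = color_correct_gain(diffuse_direct, 0.0, matte)
print(shadowed)                                         # [0.0, 0.0, 0.8, 0.8]
```

With gain at zero, only the pixels under the red matte go dark, which is exactly the shadow cut the two color correct nodes produce.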
Wednesday, Jan. 18th:
On the 18th, after the Martin Luther King Jr. Day break, I took further steps in moving my ball through the scene and using the render layers that I have been building since the start of my project.
I got to a point in my project where I started animating my gray ball in the scene to see how it moves in my lighting. This ball animation will be useful later when I add my environmental shadows. When the animation was done, I moved ahead and brought this Maya project into Nuke to composite all the layers together.
When in Nuke, I made some changes to the project settings under the color tab. There, I changed my color management from Nuke to OCIO, then changed the OCIO config to aces_1.1. This color management keeps the colors true from what I see in Maya to what I see in Nuke, without any issues. After this, I started loading all my rendered layers into separate parts of the Nuke tree.
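The same project settings can also be set from Nuke's Script Editor. This is only a sketch using Nuke's Python API; the knob names and the exact config string can vary between Nuke versions, so treat them as assumptions to verify against your own build:

```python
# Run inside Nuke's Script Editor -- configures the project settings
# described above (OCIO color management with the aces_1.1 config).
# Knob names are assumptions; confirm them in your Nuke version.
import nuke

root = nuke.root()
root['colorManagement'].setValue('OCIO')    # switch from "Nuke" to "OCIO"
root['OCIO_config'].setValue('aces_1.1')    # pick the ACES 1.1 config
```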
The ball render is going to need a lot of work, because I need to separate the diffuse and specular direct and indirect AOVs. I use Shuffle nodes, declaring which channel I want to shuffle my RGB values into. After each Shuffle node is separated and connected back to the ball render, I use Merge nodes, with the operation set to plus, to connect them all together. Three Merge (plus) nodes are used: one merging diffuse direct and indirect, one merging specular direct and indirect, and one more merging the two previous merges together. The result gave a faded look to the alpha, which was an issue; this was resolved with a Copy node that copies the alpha from the ball render onto the final Merge (plus) node so the alpha is full. After putting this all together, I made a separate node tree for all the layers that sit behind the ball layer: the shadow, the occlusion layer, and the ground occlusion layer.
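The AOV rebuild above can be sketched in plain Python (not Nuke itself; the three-pixel AOV values are invented for illustration). A Merge set to plus is just per-pixel addition, and the beauty is the sum of the four lighting AOVs:

```python
# Sketch of the AOV rebuild: beauty = (diffuse direct + indirect)
#                                   + (specular direct + indirect).
def merge_plus(a, b):
    """A Merge node with operation 'plus' adds its inputs per pixel."""
    return [x + y for x, y in zip(a, b)]

# Hypothetical three-pixel AOVs from the ball render:
diffuse_direct    = [0.30, 0.25, 0.20]
diffuse_indirect  = [0.05, 0.05, 0.05]
specular_direct   = [0.10, 0.00, 0.00]
specular_indirect = [0.02, 0.01, 0.00]

diffuse  = merge_plus(diffuse_direct, diffuse_indirect)
specular = merge_plus(specular_direct, specular_indirect)
beauty   = merge_plus(diffuse, specular)   # the third Merge (plus)

# Summing the AOVs also sums their alphas, which is what fades the alpha
# out; a Copy node stamps the original render's alpha back onto the result:
render_alpha = [1.0, 1.0, 0.0]
rebuilt = {"rgb": beauty, "alpha": render_alpha}
```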
Monday, Jan. 16th:
Because of Martin Luther King Jr. Day, our buildings were closed. As a result, I was at home making the light in my scene match my shadow better than it did before.
When going back to my lighting, I noticed more and more details about my shadow. The shadow in my scene cast by the gray ball was pretty sharp, which threw me off a little, because all the other shadows in my scene were more faded than the original shadow. So I took my sun spotlight in Maya and started adjusting where the light should be according to my reference photo. I also went into my spotlight settings and adjusted the roundness of the cast light to get a sharper shadow than before. Eventually I found a sweet spot, and the pixel falloff in my render started to match the reference. This process took some time, but I am glad I did it, because it prepares me for the next step in this process.
Friday, Jan. 13th:
On Friday, I got back to work on this project, lining up the cube in my scene with the cube in the photo. My first step was going through Dropbox to grab the photos needed, then looking at the metadata in my reference photo to find the camera type, focal length, and dimensions of the photo. After finding this information, I relate this back to the camera in Maya.
After setting up my scene settings, I fit the image to the resolution gate and got started on lining up a cube with the cube in my reference. When creating the cube in Maya, I made it the same size as the cube in the photo (4"x4"x4"). Changing the grid in Maya to inches instead of centimeters helps as well; you can change this in the settings window under this path: Windows > Preferences > Settings > Working Units. When the cube was correctly sized, I started moving my camera to be extremely close to where the cube is in my reference photo.
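Matching the Maya camera to the photo comes down to the focal length and film back (sensor size) from the metadata, since together they determine the angle of view. A quick sanity check of that relationship can be done in Python; the focal length and sensor width below are hypothetical stand-ins, not the values from my actual camera:

```python
import math

# Hypothetical camera metadata, for illustration only:
focal_length_mm = 35.0   # focal length from the photo's EXIF data (assumed)
sensor_width_mm = 36.0   # full-frame horizontal film aperture (assumed)

# Maya derives the camera's angle of view from focal length + film back,
# so matching both to the photo makes the CG camera see the scene the
# same way the real one did.
horizontal_fov = 2 * math.degrees(math.atan(sensor_width_mm / (2 * focal_length_mm)))
print(round(horizontal_fov, 1))   # → 54.4
```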
After aligning the cube, I wanted to make sure my preferences were set correctly for the rest of the project. Going into my preferences, I changed the Preferred Render Setup System to Legacy Render Layers instead of the default render set up. Then, I went to my Color Management section in Preferences and did more tweaking there. I checked the box to "Enable Color Management", changed my render space to be "ACEScg", changed the view transform to "sRGB Gamma" and in the Output Color Transform Preferences, I checked the box to "Apply Output Transform to Renderer." With all this set, my render layers will be in order, and my colors for the final look will be correct.
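The Color Management side of those preferences can also be set from Maya's Script Editor with the colorManagementPrefs command. This is a sketch from my understanding of Maya's Python API; the flag names and transform strings may differ between Maya versions, so check them against the command reference:

```python
# Run inside Maya's Script Editor -- mirrors the Color Management
# preferences described above. Flag names are assumptions to verify.
import maya.cmds as cmds

cmds.colorManagementPrefs(edit=True, cmEnabled=True)                  # Enable Color Management
cmds.colorManagementPrefs(edit=True, renderingSpaceName='ACEScg')     # render space
cmds.colorManagementPrefs(edit=True, viewTransformName='sRGB gamma')  # view transform
cmds.colorManagementPrefs(edit=True, outputTransformEnabled=True)     # apply output transform to renderer
```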
Now, I'm going to create the HDR image for the project's lighting using Photoshop. The HDR image will help me light my scene to illuminate it exactly the way the reference scene's lighting did. The first step to getting this HDR is gathering my photos with different exposures together for Photoshop. There were seven exposure levels shot, and I combined them using "Merge to HDR Pro" from this path: File > Automate > Merge to HDR Pro. When Photoshop was done putting the photos together, I checked the option for "Remove Ghosts" and set the mode to 32-bit color depth with the option for "Complete Toning in Adobe Camera Raw" checked.
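Merge to HDR Pro's internals aren't public, but the principle behind merging brackets can be sketched in Python (the three-shot bracket and the clipping thresholds below are invented for illustration; the real shoot used seven exposures). Each shot records scene radiance scaled by its exposure time, so dividing by exposure time and averaging the unclipped shots recovers a value beyond any single shot's range:

```python
# Not Photoshop's algorithm -- a toy sketch of how bracketed exposures
# combine into one HDR value for a single pixel.
def merge_exposures(shots, lo=0.01, hi=0.99):
    """shots: (pixel_value, exposure_time) pairs; clipped values are skipped."""
    usable = [v / t for v, t in shots if lo < v < hi]
    return sum(usable) / len(usable)

# Hypothetical bracket of one bright pixel at three shutter speeds:
bracket = [(0.999, 1 / 30),    # blown out in the long exposure: skipped
           (0.800, 1 / 125),
           (0.200, 1 / 500)]
radiance = merge_exposures(bracket)   # both usable shots agree on ~100
```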
Wednesday, Jan. 11th:
On Wednesday, Bridget brought five cameras and five camera stands for all of us in class to try out. We went over all the functions of the camera, setting up the equipment, and getting more familiar with its features. We learned about aperture, exposure, and ISO, and about using the histogram to get a good balance of low and high values. We also got a good understanding of white balancing and its impact on the final picture. The camera had different white balance settings for us to explore, and we saw the effect of setting the camera's white balance to orange, which turned the entire picture through the lens a purple tint.
Tuesday, Jan. 10th:
After looking through the photos, I have decided that I like Hollar, Shelton, and Evie - Fence. Listed below are the photos that I have in mind, as I'm going to bring these choices into class for further discussion:
Monday, Jan. 9th:
On Monday, we were assigned our first project. We are given a library of images to choose from, and then we will learn how to set up a scene to match the lighting. For now, I have a kit that Bridget gave me for taking my own photography, in case I want to create my own clean plate.