I’m excited to release Precomp Switcher v1.0, a Nuke plugin that speeds up your precomp workflow.
Features:
A quick panel to create your precomp setup
A color-coded Backdrop shows the precomp state at a glance:
White – Bypass
Blue – Rendering
Red – Precomped
Toggle between Precomped and Bypass simply by disabling the Switch node.
A User Guide for everyday artists, and a Developer Guide for hooking it into your pipeline.
An optional feature for localizing your precomps, for folks working with large files over a network.
Why use it?
Consistency. Every precomp in every script looks and behaves the same.
Readability. Tile color and label tell you at a glance whether a precomp is live, mid-render, or rendered.
Speed. One hotkey to build, one to render, one to read back and flip/toggle.
Pipeline-friendly. If your studio has its own render command, custom Write node, custom Read node, or read-from-write tool, a TD can configure this tool to use those pipeline hooks instead of the native Nuke ones.
Hotkeys
Shift+W to create the Write, Switch, and Backdrop precomp setup
In this final installment of the CG Compositing Series, we focus on using LightGroups and Material AOVs together in a single workflow, and solving the paradox that comes with it.
Why do these 2 rebuild methods seem to clash?
We cover the following topics in the video and in this blog post:
The complications of splitting LightGroups per Material AOV
A method for transferring changes between setups using a Difference Map
The pitfalls of using Subtraction and the advantages of using Division
A comparison of math operations: Add/Subtract vs Multiply/Divide
A stress test of the Division-based setup
Template layout strategies and rules to keep your rebuilds stable
Carrying changes across the template from the 1st rebuild to the passes of the 2nd for the most interactive user experience.
Ideas and techniques you can apply in your own CG Templates.
We could decide to brute force split out each pass even further, into Material AOVs per Light Group.
When we rebuild it, you could either prioritize larger buckets of Material AOVs, each made up of every LightGroup.
Or prioritize larger buckets of LightGroups, each made up of every Material AOV, like a mini Beauty Rebuild per light.
There are many problems with this workflow however:
Many more layers and channels are rendered, making file sizes larger and Nuke slower to process and more difficult to work with.
There is often a need to clone or expression-link grades and color corrections across different parts of the setup, in order to affect all the lights at once, or all the Material AOVs at once, creating a clone-and-expression hellscape.
There are also cases where you will see a master control and expression links, so the user does not get lost in the linked/cloned nodes.
You may also see the entire setup in a Group Node, to hide it and only expose necessary controls.
Compositing is never that straightforward, however, and we should not be compositing from within a Group node. We often need to pull masks, rotos, elements, etc. from other parts of the main node graph, and if everything is in a Group, it becomes difficult to get that information inside of the Group to use.
Most Compositing should happen exposed in the main node graph to avoid any headache, and not hidden away in a Group that a user needs to jump in and out of.
This extra split workflow has many cons, let’s look at some other workflows to solve our paradox problem.
Transferring Changes from 1st Setup to the 2nd Setup
Another workflow is trying to capture and transfer the changes from the 1st Rebuild Setup to the 2nd Rebuild Setup. This is the basic idea of the workflow at its core:
An example of this technique could be illustrated from Machine Learning or Generative AI workflows, and is called Style Transfer.
In the below image, I start with an image of a bearded man. I have 2 separate models that are making changes. The first might be for facial expressions and shaves, and the second is for applying makeup. On the left side, I make a change to make the man beardless, and with an angry expression. On the right side, I’ve told it to apply clown makeup. If we want to combine the 2, I might want to package the “Beardless Angry” Changes, and apply that over to the clown makeup side. My result would be a Beardless Angry Clown.
This is a silly example but illustrates the workflow we want to use in Nuke to capture our first changes and apply them to our second changes for a combined change.
But how can we capture and package those changes from the first setup?
Subtractive (Absolute) Difference Method
We can find the difference between the 1st Rebuild and the Beauty Render using Subtraction
Temporarily store the changes in a subtractive difference map
Apply the 1st changes to the 2nd Rebuild Setup
Taking one of your rebuilds, either Material AOV comp or LightGroup comp, and subtracting the original Beauty Render will give you the Subtractive Difference Map, as seen below:
Subtractive Difference Map
The image itself is a map of positive and negative values, telling us how much we would need to add/subtract from the Beauty Render in order to get the result of our changed Rebuild.
Values of Zero will have No Change
Positive Values will get Brighter
Negative Value will get Darker
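As a quick sanity check, the sign semantics above can be sketched with plain Python on a few toy pixel values (illustrative numbers, not Nuke code):

```python
# Toy per-pixel sketch of the Subtractive Difference Map.
beauty  = [0.5, 0.25, 0.75]   # Beauty Render pixel values
rebuild = [0.5, 0.5, 0.5]     # the same pixels after grading the 1st Rebuild

# Difference = Rebuild - Beauty
diff = [r - b for r, b in zip(rebuild, beauty)]
print(diff)  # [0.0, 0.25, -0.25] -> no change / brighter / darker

# Adding the map back onto the Beauty restores the Rebuild exactly
restored = [b + d for b, d in zip(beauty, diff)]
assert restored == rebuild
```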
Let’s get into some equations to help us understand the math behind this workflow.
First let’s define a helpful math symbol: Delta, which stands for “The Change” or “The Difference”
First we’ll do a basic inverse operation with subtraction and addition.
Material AOVs – Beauty = Difference
Beauty + Difference = Material AOVs
Instead of adding the difference back to the Beauty, let’s swap the Beauty out for the result of our LightGroups comp. So I am adding the difference of the Material AOVs comp onto the LightGroups comp, to hopefully get the combined changes.
It’s important to realize that we do not need to start with the Material AOVs and transfer to the LightGroups, but we could also just as easily start with the LightGroups and transfer those changes over to the Material AOVs, it’s a matter of preference, but the result will be the same.
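A one-pixel sketch, with made-up values, shows both the transfer and the fact that the order does not matter:

```python
# Hypothetical single-pixel values for the Beauty and the two rebuilds.
beauty      = 0.5
mat_aovs    = 0.75    # pixel out of the graded Material AOV rebuild
lightgroups = 0.625   # pixel out of the graded LightGroup rebuild

# Capture the Material AOV change, then add it onto the LightGroup comp
combined_a = lightgroups + (mat_aovs - beauty)

# Starting from the LightGroups instead yields the identical result
combined_b = mat_aovs + (lightgroups - beauty)
assert combined_a == combined_b == 0.875
```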
Let’s try this in Nuke by taking the Material AOVs output, subtracting the Beauty Render, and then applying our Subtractive Difference Map to the LightGroups output:
The resulting image kind of works, but it is also full of problems, with odd colors and seemingly black-hole areas:
Subtractive Method Failure
Let’s take a look at what is going wrong with the Subtraction Difference Method.
Subtractive (Absolute) Difference Problems
The Subtractive Difference Map represents Absolute Values
This tells you the exact values to add/subtract to bring the Beauty Render to the Changed Rebuild
The Subtractive Method (Absolute) only works well if you Brighten values in the Rebuilds, or only Darken them slightly
Brightening both setups will be fine, as the results will only increase.
Darkening both setups however, runs the risk of going below zero and into negative values when the change is applied to the 2nd Setup. The darker the changes on both sides, the higher the risk of negative values.
Remember that the Rebuild passes are embedded in each other’s setups. If we darken some lights, and then darken the Specular, since the specular also contains all the lights, we are essentially subtracting those light groups twice and getting negative values.
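The double-darkening failure can be reproduced with toy numbers (hypothetical values, chosen so the float math is exact):

```python
# A pixel driven mostly by one light, darkened in BOTH rebuild setups.
beauty = 0.25
dark_1 = beauty - 0.1875   # 1st rebuild: the light graded down
dark_2 = beauty - 0.1875   # 2nd rebuild: the same light darkened again via Specular

diff   = dark_1 - beauty   # subtractive change map: -0.1875
result = dark_2 + diff     # apply the change to the 2nd setup
assert result == -0.125    # the darkening lands twice and goes negative
```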
So if this Subtractive Difference Method is giving us issues, let’s look at any other ways to get the difference map.
Division (Relative) Difference Method
Let’s ask ourselves: How can I go from 8 to 4?
Obviously we could subtract 4, and 8 – 4 = 4
But if we had a new, lower number, such as 2, and we also subtracted 4, we’d get -2.
We could also divide 8 by 2, therefore halving it, and we’d also arrive at 4.
Then trying to divide 2 by 2 will get us 1, it is also halved.
The amount of change from 8 was -4, but from 2 it was only -1. This amount of change is relative to the input number: it is a ratio, or a percentage of the starting number, so it adapts to our input.
Of course, this could also be represented as multiplication: dividing by 2 is the same as multiplying by 0.5.
So instead of subtraction and addition, let’s now try division and multiplication.
The result is a Division Difference Map that looks quite different from our Subtraction Difference Map:
Division Difference Map
Now let’s multiply this with our 2nd Rebuild, the LightGroups side:
Side Note: Nuke’s Merge node does not have a native B / A operation. So if you ever wanted to swap the A and B inputs and have the disabled state default to the Rebuild instead of the Beauty (for templating reasons), you would need a special MergeDivide.
The Result from applying the Division Difference below looks a lot better than the Subtraction Method, and there are no longer any Negative Values in the image.
Division Difference Method
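With made-up pixel values, the divide-then-multiply version can be sketched the same way; note the result can reach zero but never drops below it:

```python
# A pixel darkened in BOTH rebuild setups, now with divide/multiply.
beauty = 0.25
dark_1 = 0.0625   # 1st rebuild: the light graded down to a quarter
dark_2 = 0.0625   # 2nd rebuild: the same light darkened there too

div_map = dark_1 / beauty   # Division Difference Map: 0.25, i.e. 25%
result  = dark_2 * div_map  # apply the change to the 2nd setup
assert result == 0.015625
assert result >= 0          # darker still, but never negative
```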
So why does this suddenly work? And what is going on with that Division Difference Map?
Division (Relative) Difference Map
This new Difference map is answering a different question than the subtraction difference map was:
How much do we need to Multiply the Beauty Render by in order to end up with the Rebuild Output?
What percent do I need to increase or decrease this Beauty Render by to get to the Rebuild Output?
Multiplication / Percentage will not get us Negative values
That Division Difference Map appears all white, but in fact it has values over 1, superwhites, that we cannot see by default. Let’s darken it a bit so we can see the pixels over the value of 1.
Darkened Division Difference Map – for Visualization
Let’s break it down:
Values above 1 will get brighter
Values between 0 and 1 will get darker
Value of 1 means No Change
So any number multiplied by 1, is itself, and does not change. That is why the map is mostly white.
Multiplication can also be represented as a percentage, so we could express the pixels on this map as percentages.
Our new map will be increasing or decreasing our 2nd Rebuild input by a specific percent.
Let’s go over the math equation to see how it works. Once again we have our inverse operation, Starting and returning to Material AOVs using division and multiplication:
Then we are swapping out the Beauty Render, in the second step, with our LightGroup output. So we are applying our Division Difference Changes on top of the LightGroup Changes.
It’s worth mentioning again, that just like before, it does not matter which order you divide or multiply the Rebuilds, Material AOV 1st & LightGroup 2nd or LightGroup 1st & Material AOV 2nd, will yield the same result.
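The order-independence claim is easy to verify with toy single-pixel values:

```python
# Hypothetical single-pixel values for the Beauty and the two rebuilds.
beauty      = 0.5
mat_aovs    = 0.75
lightgroups = 0.625

# Material AOVs first, changes applied onto the LightGroups...
combined_a = lightgroups * (mat_aovs / beauty)
# ...or LightGroups first, changes applied onto the Material AOVs
combined_b = mat_aovs * (lightgroups / beauty)
assert combined_a == combined_b == 0.9375
```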
So why does the Division Difference work so much better than the Subtractive Difference?
Below is an animation showing the difference between add/subtract and multiply/percentage.
Notice that the subtraction will go past zero towards negative values, while multiplication will only approach zero or be zero, but never go negative. We don’t really ever see a negative percent.
Going back to that embedded layers image. This time, instead of subtracting the pass on both sides, we are multiplying to zero on both sides, but we don’t run into negatives, because if you multiply something by zero twice, it is still only zero. 4 x 0 x 0 = 0. So we are actually still safe.
I encourage you to stress test this Division Difference Method with your own renders and unique cases. You are able to push the limits to an extreme level without noticing anything breaking or feeling off.
Template Layout Options
We have to decide if we want to set up our template with our 2 Rebuilds:
side by side
top to bottom
We also need to decide which Rebuild will be first and which will be second, the first will be the one captured in the change map. So either Material AOVs or LightGroups.
We could also go right to left instead of left to right, on the side by side, if we so choose:
Here are some possible template layouts in the node graph:
One thing that is a bit annoying is that while using these Templates and making changes, we can only really see the effect of our changes by looking at the very bottom, after the changes are combined and both setups are taken into consideration. Is there any way for us to have a more interactive experience, by seeing some of the changes affect different parts of the Template? Let’s explore that idea.
Interactive Changes throughout the Template
Instead of considering the Rebuild as 1 whole output, like our Beauty, we need to remember that it is made up of individual pieces, like our pie chart from before. The passes were split, adjusted, and added all up to equal the Beauty.
So instead of multiplying the Division Difference Change Map to the output of the 2nd Rebuild, we could multiply it to each individual pass separately. This would give us the same result once we add all the passes together.
Let’s explore the math of this, it becomes a little easier to understand.
If we split the Output into smaller components, we can apply the multiply to each component and then add them up after. This would be the same result as us just multiplying the whole.
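This is just the distributive property; a one-pixel sketch with hypothetical LightGroup passes:

```python
# Hypothetical single-pixel LightGroup passes and a Division Difference value.
key, fill, rim = 0.25, 0.125, 0.0625
div_map = 1.5   # the change captured from the 1st setup

# Multiply the whole output once...
whole = (key + fill + rim) * div_map
# ...or multiply every pass separately and sum afterwards
per_pass = key * div_map + fill * div_map + rim * div_map
assert whole == per_pass == 0.65625
```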
The Equation for use would look something like this (Delta being the Difference, and T being Total Changes):
In Nuke, we can set this up in our templates. I am going to stick to Top to Bottom Templates for the example, as they’re a little easier to set up and understand.
It’s SUPER IMPORTANT to realize that we are only capturing the changes from the 1st setup, and applying them to the 2nd setup. There is no way to make the changes of the 2nd look back around and apply to the first, because you would create a paradoxical change loop: Changing the 1st, which changes the 2nd, which changes the 1st, which changes the 2nd, which changes the 1st…. you get the idea.
So that decision about the flow of your Template, and which setup you want to see the changes reflected in, is very important to make as you build your CG Template.
So, let’s say that we have our Material AOVs 1st, and we are applying the changes to the LightGroups. We’ll need to multiply each LightGroup pass with the Division Difference Map.
And if we started with LightGroups, we’d need to multiply the 2nd setup Material AOVs with the division difference map.
base LightGroups – LightGroups with Material AOV Changes applied per pass
or if you were to use the LightGroups first, you could transfer your changes to each individual Material AOV:
base Material AOVs – Material AOVs with LightGroup Changes applied per pass
The result is an interactive user experience where we can see our changes trickle down throughout our template and influence all the downstream passes. This can really help visualize what is happening at a local level.
Rules and Caveats
Material AOVs passes must add up to equal Beauty
Light Groups passes must also add up to equal Beauty
Do not do color corrections that introduce negative values (for example, aggressive Saturation changes)
Treat the CG Template as a glorified Color Correction
On the 1st Rebuild side (The Captured Change side) avoid:
Transforms / Warps
Filters: Blur, Defocus, Median, Glow
Chromatic Aberration
Replacing / Merging a totally different image on top
Texture changes should happen at the albedo level
You want to try to consider the entire CG Template as one big color correction. Each pixel is tracked all the way through the setup by the change map, compared back to the Beauty, and applied to the second Rebuild. Things like Transforms or Filters move pixels around or blend them together, and will cause artifacting, because the change map is not able to capture those changes properly. Also, some filters are post effects and really should not be adjusted after use, such as a Glow.
Example of Glowing 1st rebuild and viewing result in 2nd rebuild:
glow problems
Transforms, or moving pixels around, will also not allow the setup to track each pixel the whole way through, and will lead to various artifacts, as shown below:
transform problems
You will want to apply your filters and transforms either after the CG Template, or possibly only in the 2nd Rebuild section. Basically, avoid the Division change map, which is unable to capture them, and only apply those operations afterwards.
Template Examples
I will be providing you examples of Side by Side, Top to Bottom, and Interactive Change Templates for each renderer: Blender, RedShift, Arnold, and Octane.
All Template Examples: Blender, RedShift, Arnold, Octane. Side by Side, Top to Bottom, Interactive
Template Ideas and Inspiration
There are just too many variations for me to cover every situation. However, I can share some ideas and inspiration that I have seen and worked with, which you could consider implementing into your CG Template if it fits your style of comping.
Managing Div-Map with Exposed Pipes
Using Stamps or Hidden inputs for Div-Map
Storing Div-Map in a Layer / Channel for later use
Grouping Sections for less clutter
Template Controller, pick which parts are in use:
Beauty
Material AOVs Only
LightGroups Only
Combined LG / AOV
Reversed Direction
Conclusion
This Division Difference Multiplication Technique used to solve the LightGroup / AOV Paradox is fairly unknown at the moment. There seemed to be a huge black hole of knowledge out there on this subject. I’d like to give a huge shout out to Ernest Dios for being one of the true masterminds behind this technique, and for first introducing me to it. Also a big thank you to Alexey Kuchinski for all of his mentorship.
My hope with this whole CG Compositing Series was to equip you with the knowledge of every piece of the CG Template. What all the passes are, Why they are important, How to use them, Where to put them and how to organize them to Rebuild the Beauty, and When to adjust them for specific notes.
And of course, the final piece of the puzzle. How to combine it all and use the LightGroups and Material AOVs together in an elegant way. To help you push your CG Renders to their absolute limits, without the need for a rerender.
I hope you got value out of this video, or out of any video in the CG Compositing Series.
If I could ask one small favor from you, it would be to help share this video, or this blog, to compositing or VFX friends and colleagues. Whether it’s in a group chat, work chat, discord, linkedin post, I believe this knowledge is too important to keep secret. I would love to see this amazing workflow become more commonplace in the world of Compositing.
Thank you so much for all of your support over the years. It’s been a long journey since the first CG Compositing Series Intro video, and we are finally at the end…for now. I hope it was worth the wait.
Until next time.
Downloads
Nuke scripts
1 Demo nk script, and 1 Template & Idea Proposal nk script, 2 total:
I’ve created a new Junkyard Render specifically for this Light Groups video, please download the Render and the Cryptomatte file here in order to relink it in the Demo nuke script:
The project files and the Renders are separate downloads, so if you have already downloaded 1.1 What and Why files or the Fruitbowl Renders, there are a couple ways to combine them to work.
Either add the .nk script to the previous package (in the folder above SourceImages, with the other .nk scripts)
Or simply drop the Render files into the SourceImages folder of the project folder
In this video we move away from the Material AOVs and cover an equally important Beauty Rebuild using Light Group renders. This is another set of passes you can render to adjust the lights in your render, that all add up to the Beauty Render.
A Light Group is a render pass of a light (or a set of lights) in the scene, that is rendered in isolation from the rest of the scene’s lighting.
All other lights are “off” and only the Light Group’s light is “on” and affecting the scene.
All the Light Groups should add together to produce the full lighting in the scene; together they add up and build back the Beauty Render.
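The additive property can be sketched with hypothetical per-light pixel values:

```python
# Hypothetical Light Group passes for a 3-light scene, single pixel.
key, fill, rim = 0.5, 0.25, 0.125

# The groups add back up to the Beauty Render
beauty = key + fill + rim
assert beauty == 0.875
```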
Importance of Light Groups
Creating good looking CG is not just about the materials of the objects, but also the Lights in the scene, that interact with those materials, and tell a story.
Different Light types can drive the aesthetic, style, realism, or story of your CG render.
Understanding lighting basics is important for being an effective CG compositor.
Types of Light Groups
Key – Primary Light Source
Fill – Lift and soften Shadows
Rim – Enhance silhouette & Separation
Practical – Light Sources serving a purpose and illuminating the scene (they are part of the environment)
The Exposure Triangle refers to 3 settings on a camera that help balance the Exposure / Brightness of the Image. If you increase the brightness of 1 of the 3 sides by 1 stop (double the brightness), then you need to choose 1 of the other 2 sides to lower the brightness by 1 stop (half the brightness) in order to maintain the same exposure level of the photo.
Only Aperture and Shutter Speed are referring to the amount of physical light reaching the sensor through the lens. ISO refers to the amplification (multiplication) of the analog signal before it gets converted digitally.
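In multiplier terms, balancing the triangle means the gains cancel out; a tiny sketch with assumed one-stop adjustments:

```python
# One stop = a factor of 2 in brightness.
aperture_gain = 2.0   # open the aperture one stop (+1 EV)
shutter_gain  = 0.5   # halve the shutter time (-1 EV)

# The two changes cancel, so overall exposure is unchanged
assert aperture_gain * shutter_gain == 1.0
```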
Check out this AMAZING website that lets you play around with the settings and balance the image brightness in a very interactive way. I loved playing around with the sliders, it is such a cool idea.
The larger the lens opening, the more light gets through, the brighter the image.
Also the bigger opening results in a shallower Depth of Field, or smaller zone of focus. This results in larger Bokeh and separation of foreground and background.
Digital ISO is a lot like a volume knob on a radio. If the signal is weak (aka there is not much light making it to the sensor) then increasing the volume will make the sound louder (make the image brighter) but will also increase the static, or digital noise (sometimes referred to as grain).
For dealing with Exposure in Nuke, I would recommend using either the Exposure node, the Multiply node, or the Grade node’s Gain or Multiply knobs.
In the Exposure node you can change the stops directly by switching the mode to stops. You can also just multiply by 2, 4, 8, or enter 1/2, 1/4, 1/8 in the Multiply slider of a Multiply or Grade node. With a normal Multiply, we can use the expression pow(2, x), where x is the stop number, the same math the Exposure node uses.
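The stop-to-multiplier conversion above is just powers of 2; a small helper sketch (the function name is my own, not a Nuke API):

```python
def stops_to_multiply(stops):
    """Convert camera stops to a Multiply value, as in the pow(2, x) expression."""
    return 2.0 ** stops

assert stops_to_multiply(1)  == 2.0   # +1 stop doubles the brightness
assert stops_to_multiply(-1) == 0.5   # -1 stop halves it
assert stops_to_multiply(0)  == 1.0   # no change
```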
I tend to use either an Exposure node for Luminance and a Grade node’s Multiply knob for Color
Or I use a single Grade node, using Gain for Exposure changes, and Multiply for color changes
I also prefer to change my color using the Temperature and Magenta settings of the Color Panel, which allow intuitive corrections while also giving fine control.
This is also an important way to separate your Luminance correction from your color correction, by making sure the Intensity stays around 1 and Luminance is preserved while changing color.
Adjusting Light Groups with Exposure (Gain or Multiply) for Intensity / Luminance, and a Multiply for Color, are my preferred way to Color Grade my Light Groups
beauty – Light Group Tweaks
Saturation of Light Groups
Remember that Light Groups are like individual Beauty Renders with only 1 light at a time. So you cannot simply desaturate a light group if you want to desaturate the light color.
You would either have to separate the lighting information from the material information using a color pass, though even then you may encounter some issues and artifacting.
Or, you can simply shift the colors of the light group toward a more neutral color.
Destructive vs Non Destructive workflows
You can use Gamma corrections, but be mindful that they require an exact reversal of the order of operations in order to fully restore the original image. So they can be difficult to undo later if your corrections start to stack up.
ColorCorrect nodes can be especially destructive because they are impossible to reverse, due to the fact that each one pulls a luminance key on its input to determine the shadows, midtones, and highlights.
This locks the input of the ColorCorrect, because if you make a change above, you are affecting the result of the ColorCorrect
It means that you either need to keep going, adding more nodes and changes on top, or perhaps start over.
Imagine each ColorCorrect is dependent on all of the previous ColorCorrects; this can cause a ripple, or chain-reaction, effect, altering the results of any or all of the ColorCorrects when earlier ones are changed.
Of course, at the end of the day, use whatever you need to get the shot done! But be mindful that you might be tangling a knot that you cannot untie later.
My advice would be to try Exposure and Multiply changes for Luminance and Color first, and see how far you can get; save the fancy ColorCorrects as a last resort, for when you need to go the extra mile to complete the shot.
In the Demo Nuke script, you will find AOV and Light Group Rebuilds for:
Blender (Junkyard Scene)
Arnold (Fruitbowl)
Octane (Fruitbowl)
Redshift (Fruitbowl)
You will also find sections demoing:
Exposure
A junkyard light group rebuild that I have tweaked with Exposure and Multiply as an example
Saturation demo dealing with saturation of Light Groups
A section breaking down destructive and non-destructive workflows in Nuke.
Downloads
Junkyard
I’ve created a new Junkyard Render specifically for this Light Groups video, please download the Render and the Cryptomatte file here in order to relink it in the Demo nuke script:
The project files and the Renders are separate downloads, so if you have already downloaded 1.1 What and Why files or the Fruitbowl Renders, there are a couple ways to combine them to work.
Either add the .nk script to the previous package (in the folder above SourceImages, with the other .nk scripts)
Or simply drop the Render files into the SourceImages folder of the project folder
References
Below are some links to the various research I used to create this video:
First, big shout out again to the Exposure Triangle Simulator website:
In this video we aim to understand the problem with Refraction (Transmission) and Reflections (Indirect Specular) and explore potential solutions. The problem with the Indirect Specular (Mirror Reflections) and Transmission (Refraction) passes is that they reflect or refract the entire beauty of the environment, locking that information into 1 pass. There often seems to be not much we can do as compositors to separate those passes further.
Here we have a nightmare scenario from an AOV rebuild point of view: a glass jar full of balloons, which is also reflected in a mirror surface. Everything in the mirror reflection shows up only in the Specular Indirect pass, and everything seen through the glass jar shows up only in the Transmission (Refraction) pass.
We notice as well that objects that end up in the Transmission (Refraction) pass are missing from the Diffuse Pass.
Mirror Reflections, for example ground plane reflections for our subjects, are also limited to the Indirect Specular pass:
What is Transparency?
Transparency is the ability to see through an object or surface to what’s behind it.
It’s as if the object or material is ignored or nonexistent; it does not involve light interacting with the material.
The light passing through is not distorted (Refraction), nor does it scatter or change color (which could be the case with Translucency or Transmission).
Transparency basically has only 1 setting: Amount, or “How much can I see through this?”
Transmission can sometimes cause the light to inherit a color tint as it passes through and interacts with the material. Think of colored liquids or tinted glass.
Refraction is the change in direction and speed of a light ray as it travels through or “Transmits” through different mediums, ie. from Air to Glass or Water or Plastic
The 2 terms differ in what they require:
Transmission refers only to light passing through an object.
Refraction requires the light to change direction as it passes through.
The render pass is doing both things, so some Render Engines decided to call the pass Transmission, because it’s referring to light passing through the material
Other renderers call the pass Refraction, referring to the Change of Direction, “bending” or distortion of the light
Both terms in this case are referring to the same phenomena, just focusing on different aspects of the light’s behaviour
Transmission might even be a more accurate label, because technically a material could have a Refraction index of 1.0, meaning no refraction/distortion is occurring, but the light is still Transmitting.
All Refractions require Transmission
Not all Transmissions require Refraction
Why is Light Redirected during Refraction?
Light travels through different mediums at different speeds, depending on the density and make up of the medium.
Examples of Mediums: Vacuum (space), Air, Glass, Plastic, Water, gases, etc.
The change of light speed while passing from 1 medium into the next, causes the light to change direction when entering the 2nd medium.
When the Light goes from a fast medium, to slower medium, and back into the fast medium on the other side, it has another refraction turn
This time, instead of one side of the light wavelength slowing first, one side speeds up first
If the exit angle is the same as the entrance angle, it will reverse the lightwave back to the original direction; the exit ray is parallel to the original light direction, just offset.
Complex shapes create complex caustics, and moving surfaces, like water, create dynamic and organic moving Caustic patterns.
What is Translucency?
Transmissive materials have a Roughness or Glossiness setting that works in the same way as it does on Specular Highlights
Increasing the Transmission Roughness causes the light rays traveling through to scatter / “diffuse” or blur together. Think of Frosted Glass or Plastics.
This effect of “Blurring” or Scattering the Transmitted light is called Translucency
When looking at curved mirrors, it is very obvious that the object we are looking at, is a redirected and distorted view of our surrounding environment
When looking at curved glass, or lenses, the light that we are seeing through the glass is a redirected and distorted view of our surrounding environment.
photo by betül balcı on pexels / photo by shukhrat-umarov on pexels
Concave Refractions
With Refractions, light passes through the material and, depending on the surface shape, changes direction as it refracts.
Concave shapes cause the refracted light to Diverge – spread apart
Looking at them all next to each other, we can see Reflections and Refractions are both re-directing the light rays from another part of the scene. The biggest difference is Reflect = Light Bounces off, Refract = Light passes through.
There is No Spoon
photo by chait goli on pexels / photo by otoniel alvarado on pexels
Diffuse – All Light Interaction with Material / Object
Specular – All Surface Reflections (Bounces)
Transmission – All Pass Through Refractions
Here is an Example Scene with 1 sided Glass on the left, and 2 sided Glass on the right:
We can see the Direct Transmission shows the Light Source through only the 1 sided glass, but not the 2 sided glass
Almost all information in the 2 sided glass is stored in the Indirect Transmission:
Almost all objects that contain glass in 3D are supposed to be modelled with a thickness, meaning 2 or more sides. So more often than not, your Direct Transmission Pass will be empty and all information will go to the Indirect Transmission. This is also why very often it is not even split up and is just rendered combined as Overall Transmission.
Recap #2
Transmission – Light passes through
Refraction – Light redirects.
The CG pass could be named either one, but both names refer to the same phenomenon.
Specular and Transmission are both similar in that they are capturing light redirecting and showing a virtual image of the distorted surroundings
Emission is the light source
Diffuse describes the object itself
Specular Events captures light bouncing off the object’s surface
Transmission Events capture light passing through an object.
These all get separated into their own categories.
Both Specular and Transmission have:
A Direct pass that shows the first reflection or first transmission of light
An Indirect pass showing all subsequent bounces or pass throughs
An Albedo Filter (mask)
Transmissive surfaces like glass are often modelled with 2 sides
Therefore the light usually passes through 2+ sides and ends up in the indirect pass, and the direct Transmission shows up empty
Often rendered as just an overall combined Transmission pass, for convenience.
Incorporating Transmission (Refraction) Into AOV Template
Since most of the Refraction ends up in the Indirect pass, there is no need to reserve space for splitting and adjusting separate Direct and Indirect passes, like we do with the Diffuse or Spec. I recommend combining them and keeping the Transmission section slim to save space in the template. I also recommend this layering order: Diffuse, Transmission, Specular, Emission, Other. To me this is the clearest layering.
I updated the Material AOV Rebuild Templates in the FruitBowl Renders for Arnold, Redshift and Octane to incorporate the new Transmission / Refraction section.
See the Downloads section at the bottom for links to the full Nuke scripts for learning, plus template scripts updated for each render engine: Arnold, Octane, Redshift.
Handling Planar Mirror Reflections
One approach to rendering Planar Reflections with AOVs is flipping the Camera along the Mirror Plane
Flipping the Camera along the normal of the Mirror Plane will produce a Virtual camera for you to render the Mirrored Virtual Image from the right perspective
If your Object is sitting on top of the 3D origin ground plane, this can be as easy as making an Axis Node, Scaling the Y to -1 and plugging your camera Axis Input into this Axis Node.
This will view your scene from the perspective of your Mirror. In the image above, you can see that after flipping the Camera in -Y, the Nuke-rendered result is aligned with the rendered Indirect Specular pass. We’ll need to do this in the render application on the Lighting side, or pass this camera back to the lighter, in order to render the reflection with full AOVs.
Here is the re-rendered Mirror Camera Perspective of the Armored Mech, with full AOVs, matching the original reflection angle:
What about non-ground plane mirrors?
For all oriented mirror planes, the same concept applies: you want to flip the world from the pivot point and orientation of that card, along its normal-facing angle. This is easier to do in 3D applications, but can be done in Nuke with a little Matrix Inversion.
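If you want to sanity-check the math, reflecting a point across an arbitrary plane is only a few lines. Here is a pure-Python sketch (the function name and the point/normal parameters are mine, just for illustration, not anything from a Nuke node):

```python
# Reflect a 3D point x across a plane defined by a point on the
# plane (p) and a unit normal (n): x' = x - 2 * ((x - p) . n) * n
def reflect_point(x, p, n):
    # signed distance from x to the plane, along the normal
    d = sum((xi - pi) * ni for xi, pi, ni in zip(x, p, n))
    return tuple(xi - 2.0 * d * ni for xi, ni in zip(x, n))

# A ground-plane mirror at the origin with its normal pointing up (+Y):
# a camera at (1, 2, 3) mirrors to (1, -2, 3) -- the same result as
# scaling an Axis node's Y by -1, as described earlier.
print(reflect_point((1.0, 2.0, 3.0), (0.0, 0.0, 0.0), (0.0, 1.0, 0.0)))
```

For a tilted mirror, you plug in that card's pivot and normal instead, which is exactly the "flip the world along the card's orientation" idea.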
I’ve made a tool called MirrorDimension to make this Camera Mirroring super easy. Just stick this node between the Mirror Card in Nuke (which must have its translations and rotations set) and the Camera node. The gizmo acts as an Axis Node and simply flips the world along the orientation of the Card input.
No Settings on the node, just the following instructions:
1.) Plug in the MirrorCard input to the Card or Axis node you would like to be the mirror.
– The scale of the Card does not matter as long as the orientation (translation/rotation) is correct.
– The Card’s +Z axis is the front of the mirror; point it towards the subject / camera. This is the blue Z arrow in the 3D viewer.
2.) Duplicate your Camera, and plug in the “axis” input of this new Camera to the output of this node.
3.) Your new Camera will be Mirrored according to the plane / card / axis.
4.) Render using this New Camera Setup to get the mirrored CG output.
Before MirrorDimension Node – Original Camera Position:
After Mirror Dimension Node Applied –
You would either do this in your 3D scene and render the AOVs or pass this camera to a Lighter to render from this mirror perspective.
Faking Reflections in Comp
If you suddenly need reflections but have no renders, you can use some of the above techniques to fake your reflections.
If you have your Geometry of the object, try projecting the rgba onto the geometry, and rendering it in nuke from the mirror dimension:
If you have no Geometry but have a Position pass, try using a PositionToPoints node plugged into your render, with its Position input plugged into your shuffled-out Position pass (or select it in the dropdown). You can then render an RGB 3D point cloud of the object with the mirror camera and fake some reflections. It won’t be perfect, but in a pinch it can save your ass and add more realism:
So the next question becomes: what can we do if it’s not a Planar Reflection? What if there are multiple planar reflections, or the surface is curved? And what about Refractions (Transmission)?
Getting Help from Lighters
There is a serious limit to how much we can do in comp when encountering Indirect Specular or Refraction (Transmission) passes. Many times, if this is a big feature of our shot and requires a lot of comp tweaks, we’ll need some help from our Lighting Department.
Julius Ihle – Head of Lighting and LookDev at Trixter
We talk to Julius Ihle, Head of Lighting and LookDev at Trixter, about potential Lighting solutions to these problems.
Julius is super knowledgeable, and introduces us to Light Path Expressions and Open Shading Language where lighters can help Build Additional AOVs and help us when the situation calls for it.
Julius is also an online educator and keeps a Lighting Blog discussing exactly these topics, check these tutorials out for more details:
Here is the drawing Julius used to explain how renderers handle Reflection and Refraction Events:
In a nutshell, the render engine keeps track of the light ray’s path and all the events it undergoes on its journey from the Camera back towards the Light
Lighters can create new AOVs with custom expressions telling the render engine exactly what parts and what events they want to see in the outputted pass.
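As a mental model only (this is not real LPE syntax; the event letters below are my own toy notation), you can think of each custom AOV as a filter over the string of events a ray goes through on its way from camera to light:

```python
# Toy classifier for light-path event strings, e.g. "CDL" means
# camera -> one diffuse bounce -> light. Real renderers express this
# with Light Path Expressions (LPEs); this only illustrates the idea.
def classify_path(path):
    bounces = path[1:-1]          # strip camera 'C' and light 'L'
    if not bounces:
        return "emission"         # camera sees the light source directly
    kind = "diffuse" if bounces[0] == "D" else "specular"
    reach = "direct" if len(bounces) == 1 else "indirect"
    return f"{reach} {kind}"

print(classify_path("CDL"))   # direct diffuse
print(classify_path("CDDL"))  # indirect diffuse
print(classify_path("CSL"))   # direct specular
print(classify_path("CSDL"))  # indirect specular
print(classify_path("CL"))    # emission
```

Real LPEs are regex-like patterns evaluated by the renderer per light path; the point is simply that a lighter can name exactly which event sequences land in which output pass.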
There are also Shaders that have been written that can Reflect various AOVs, such as Utility passes and Alpha channel so that reflections can be more useful for us in comp. Julius has written his own shader to do just that, download it from GitHub:
The project files and the Renders are separate downloads, so if you have already downloaded 1.1 What and Why files or the Fruitbowl Renders, there are a couple ways to combine them to work.
Either add the .nk script to the previous package (in the folder above SourceImages, with the other .nk scripts)
Or simply drop the Render files into the SourceImages folder of the new 1.2 project folder
Project Files for this Video:
Along with the fruitbowl renders above, here are the nuke script and project files from this video, so you can follow along:
I am linking to the gizmo on the Nuke Survival Toolkit github, where you can download the raw file or copy/paste the RAW source code from your browser into nuke:
Since I am using Stamps in the script, all renders can be swapped out at the top of the script where the “SourceImages” Backdrop is, and the rest of the script will get populated correctly.
Slide show PDF
Here is a PDF version of my slideshow in case you would like to save for future research or review:
In this tutorial, we go further down the levels of complexity into the most complex category, which includes Albedo and RAW Lighting. These are the smallest components of AOVs, the building blocks, and they reveal how lights, textures, and materials come together to produce the beauty render.
What is Albedo?
An Albedo Map is the base color or texture map that defines either the diffuse color or specular tint of the surface.
Remember that in Physically Based Rendering (PBR), whether a material is Metallic or Dielectric (non-metallic) determines whether the albedo color is used as the Diffuse Color or the Specular Color.
The renderer knows what to use the albedo for based on a black-and-white metallic map
The RAW Specular pass is what objects in the scene would look like if they had a 100% reflective chrome shader applied. It renders everything uniformly reflective.
The Specular Filter is like a mask or an albedo multiplier that limits the visibility of the RAW Specular reflective pass to certain areas. The thought process is: might as well render everything reflective, and then decide where and how much of it is needed.
Just like the albedo and the RAW Lighting, RAW Specular and Specular Filter are multiplied together to form the final Specular pass
Specular = RAW Specular * Specular Filter
What is RAW Reflection and Reflection Filter?
RAW Reflection and Reflection Filter are essentially the same thing as RAW Specular and Specular Filter. You might see these terms depending on the renderer. Sometimes Specular refers to Direct Specular and Reflection refers to Indirect Specular.
The more important takeaway is that you want to pair the “RAW” pass with its “Filter” or “Albedo” pass. They get multiplied together to equal the final pass
Reflection = RAW Reflection * Reflection Filter
RAW Direct Diffuse & RAW Indirect Diffuse
Just like the normal Diffuse pass, RAW Lighting passes can also be split into Direct and Indirect Lighting. So you can end up with RAW Direct Lighting and RAW Indirect Lighting. Both passes use the same Diffuse Albedo, so it is only the lighting that is split, not the albedo.
Total RAW Diffuse = RAW Direct Diffuse + RAW Indirect Diffuse
RAW Direct Specular & RAW Indirect Specular
And just like the Diffuse RAW passes, we can also break up the RAW Specular passes into RAW Direct Specular and RAW Indirect Specular.
Again both Direct and Indirect Specular will use the same Specular Filter pass.
Total RAW Specular = RAW Direct Specular + RAW Indirect Specular
Diffuse Equation
Knowing the diffuse equation helps us understand how the pass is built and, more importantly, the math behind splitting the Diffuse pass into its individual components of Albedo and RAW Lighting. Let’s go over a basic equation and reinforce some math concepts:
x = Albedo
y = RAW Light
Diffuse = ( Albedo * RAW Light )
Diffuse = ( x * y )
In math, certain operations cancel each other out. Just like Subtraction cancels out Addition, Division cancels out Multiplication
( x + y ) - y = x
( x * y ) ÷ y = x
By taking the Diffuse pass and dividing by the component we do not want, we get the component we do want.
What that means is: if you have the Diffuse pass and one other component, either the Albedo or the RAW Lighting, you can always generate the remaining missing pass.
x = Albedo
y = RAW Light

Diffuse = ( Albedo * RAW Light )
Diffuse = ( x * y )

( x * y ) ÷ y = x
( x * y ) ÷ x = y

Diffuse ÷ Albedo = RAW Light
Diffuse ÷ RAW Light = Albedo
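A quick numeric sanity check of these identities, on a single made-up pixel value:

```python
# One hypothetical pixel: Diffuse = Albedo * RAW Light, so dividing
# the Diffuse by either component recovers the other.
albedo = 0.5       # made-up albedo value for one pixel
raw_light = 1.5    # made-up RAW lighting value
diffuse = albedo * raw_light   # the combined Diffuse pass: 0.75

print(diffuse / albedo)     # 1.5 -- recovers the RAW Light
print(diffuse / raw_light)  # 0.5 -- recovers the Albedo
```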
Division Problems
You can divide 0 by any non-zero number and the result is 0. But if you try the reverse, you run into a classic math problem: you cannot divide by 0. The result is undefined… not possible.
0 ÷ x = 0
x ÷ 0 = undefined
This can cause serious problems in nuke when dividing, and we need to be careful.
Using Expression node to test math in nuke
If we use an expression node we can enter the following equation:
0/r
0/g
0/b
0/a
The Nuke Expression node has some predefined variables for accessing the channels, so it will carry out this math on a per-pixel basis for each channel:
r = red channel
g = green channel
b = blue channel
a = alpha channel
We can see that once we start dividing by 0-value pixels, we get issues. Nuke’s answer for an undefined result is nan pixels
nan stands for “Not A Number”
inf stands for Infinity
Testing for nan or inf pixels
We can use another Expression node to write a little TCL expression that shows 1.0 (white) for any illegal-value pixel. If it’s a normal number, it displays as 0.0, or black. This lets us easily and visibly test whether our image has “problem pixels” such as nan and inf
isnan() tests for nan (not a number) pixels. You enter the channel you want to check inside the parentheses, for example isnan(g), and it displays 1.0 for nan values and 0.0 for normal values
isinf() tests for infinity-value pixels. Again, you enter the channel inside the parentheses, for example isinf(g), and it displays 1.0 for inf values and 0.0 for normal values
We can just add them together to get a full mapping of “illegal values” to warn us
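You can play with the same checks outside Nuke. Python’s math module has isnan() and isinf() too, and it also shows how a nan, once created, poisons any further math:

```python
import math

nan = float("nan")   # what Nuke shows for an undefined result like 0/0
inf = float("inf")

print(math.isnan(nan))        # True
print(math.isinf(inf))        # True
print(math.isnan(nan + 1))    # True -- nan corrupts all further math
print(math.isnan(inf - inf))  # True -- even inf math can produce nan

# Combined "illegal value" check, like adding isnan() + isinf() in Nuke:
def is_illegal(v):
    return math.isnan(v) or math.isinf(v)

print(is_illegal(0.5))  # False -- a normal value passes the check
```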
So dividing by 0 in Nuke can give you illegal values. Luckily, the Merge (divide) operation in the Merge node avoids these issues. It has built-in protections so that 0/0 = 0, and any other number divided by 0 is bypassed, or skipped: the node does no math at all and just shows you the A input value.
There is a limitation to the Merge node, however: there is only one divide operation, and that is A/B.
We know that when we disable nodes in Nuke, they default to the B input. But if we swap the inputs, we do not get the same result, meaning we are locked into our inputs based on which image needs to be divided by the other.
So since there is no B/A operation, we’ll need to recreate it ourselves
MergeExpression Node
We can use a MergeExpression node, which is basically a combination of a Merge node and an Expression node; in fact, the properties look identical to an Expression node.
The MergeExpression has access to the same variables as the normal Expression node, namely the r, g, b, a variables representing the different channels:
r = red channel
g = green channel
b = blue channel
a = alpha channel
But the MergeExpression also has 2 inputs, and we can choose what input we are sourcing from in our equations with capital letters A and B
A = A input
B = B input
Because both inputs now have a red channel, we need to be more specific and say which input we are grabbing it from. Therefore:
Ar = A input red channel
Bg = B input green channel
So we specify which input first and then the channel we want.
So now we can do a simple equation of B input divided by A input:
Br/Ar
Bg/Ag
Bb/Ab
Ba/Aa
Fixing the MergeExpression
Unfortunately, the MergeExpression is pure math and does not have the built-in divide protections that the normal Merge node does. So if we end up dividing by 0 in a MergeExpression, we will end up with nan and inf pixel values. That is very dangerous, because it breaks the image: you cannot do further math with those values, and they corrupt everything downstream.
But it’s OK: we can implement the fix ourselves, so that we get safe values just like the Merge node.
The solution is to enter a little TCL expression into the node:
Ar == 0 ? Br : Br/Ar
Ag == 0 ? Bg : Bg/Ag
Ab == 0 ? Bb : Bb/Ab
Aa == 0 ? Ba : Ba/Aa
This code basically reads as follows:
First we need to check whether the A input pixel is 0, since that is what we are dividing by, and dividing by 0 is what causes the problem.
So the expression asks: does the A input pixel equal 0? If yes, skip the math entirely, bypass, and revert to the B input pixel. If the A input pixel is not 0, it proceeds with the B/A operation and outputs the result.
This fixes the issue, as all the zero-value pixels are skipped. The result is identical to a Merge node set to divide,
except now it is B/A, and when we disable the node, it reverts to the B stream that we want.
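In plain Python terms, the per-channel logic of that expression reads like this (just a sketch of the idea; the real node evaluates it per pixel, per channel):

```python
# Safe B/A divide, matching the expression "Ar == 0 ? Br : Br/Ar":
# if the A pixel is 0, skip the math and pass B through untouched.
def safe_divide(b, a):
    return b if a == 0 else b / a

print(safe_divide(0.75, 0.5))  # normal case: 1.5
print(safe_divide(0.75, 0.0))  # A is 0: bypass, returns 0.75, no nan/inf
```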
you can just copy/paste the code below into your nuke to get the MergeDivide that I created:
Think of Multiply like combining, fusing, mixing, linking, joining, locking
Think of Divide like separating, splitting, unlinking, disjoining, unlocking
Start with the combined pass
Separate with division
Change individual component
Recombine with multiplication
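On one made-up pixel, the whole round trip looks like this:

```python
albedo = 0.5                   # made-up albedo value for one pixel
raw_light = 1.5                # made-up RAW lighting value
diffuse = albedo * raw_light   # 1. start with the combined pass

recovered_light = diffuse / albedo          # 2. separate with division
new_albedo = 0.25                           # 3. change the individual component
new_diffuse = new_albedo * recovered_light  # 4. recombine with multiplication

print(new_diffuse)  # 0.375 -- the pixel re-lit with the altered albedo
```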
How can we use Albedo and RAW Lighting as Compositors?
1.) The first reason to separate Albedo and RAW Lighting is to make an adjustment to only the texture and not the RAW Lighting, or vice versa.
If you desaturate the Diffuse pass, you risk desaturating the lighting and the texture at the same time. If you want to desaturate just the object but keep the tinting of the lighting, you need to separate them first.
Here is an example in the Blender Room where on one side we desaturate the entire Diffuse pass, and on the other we desaturate only the Albedo pass. Notice that on the right side, the light is still warmer and maintains the warmth of the sunlight. This is what a gray object would look like in that environment.
left side: desaturating entire diffuse pass right side: desaturating the albedo only
Here is the same example on the VRAY scene, where you can see the desaturation affecting the bounce lighting:
left side: desaturating entire diffuse pass right side: desaturating the albedo only
2.) There are many non-linear Color Corrections or operations that you might specifically want to do while these passes are separated, to get better or cleaner results.
Whether it is removing light / shadow from a texture CC, or removing texture info so that you can adjust specific lighting, operations such as:
keying
despilling / desaturating
gamma
ColorCorrect nodes
HueCorrects
HSV node – to pull color keys
3.) The next big reason is to alter or replace a texture in the scene without needing to go back to the CG department.
In this example we replace the picture on the wall with a checkerboard, and it still maintains the lighting of the scene. So you could add noise or blood textures, change billboard ads, etc., and they would still appear to live inside your shot.
left side: original painting right side: replacing the albedo with another image
Different ways to rebuild AOVs at the complex level
Variation 01:
Add the Direct, Indirect, and SSS passes together first, generating your Diffuse pass. Then do the Albedo divide / multiply as a second step.
variation 01 rebuild structure
Variation 02
We could instead do the Albedo divide / multiply on a per-pass basis, so the RAW Direct and RAW Indirect are split out first. We can make changes to the Albedo and return to normal, and then add the Direct, Indirect, and SSS together as a second step.
variation 02 rebuild structure
Variation 03
Similar to Variation 02, we do the Albedo changes on a per-pass basis first. But instead of immediately reverting back to normal and then plussing the Direct, Indirect, and SSS together, we plus them at the RAW level. The final step is just to multiply the Albedo back.
Basically, Variation 02 was 3 divides, 3 multiplies, and 2 plusses,
while Variation 03 is 3 divides, 2 plusses, and 1 multiply.
variation 03 rebuild structure
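All three variations are algebraically the same rebuild. Here is a quick check on one made-up pixel, with the Direct, Indirect, and SSS RAW contributions sharing a single albedo:

```python
# Made-up per-pixel values: three diffuse contributions and one albedo
albedo, direct, indirect, sss = 0.5, 0.375, 0.125, 0.0625

# Variation 01: sum the passes first, then divide/multiply the albedo once
v1 = ((direct + indirect + sss) / albedo) * albedo

# Variation 02: divide/multiply the albedo per pass, then sum
v2 = ((direct / albedo) * albedo) + ((indirect / albedo) * albedo) \
     + ((sss / albedo) * albedo)

# Variation 03: divide per pass, sum at the RAW level, multiply once at the end
v3 = ((direct / albedo) + (indirect / albedo) + (sss / albedo)) * albedo

print(v1, v2, v3)  # all equal: 0.5625 0.5625 0.5625
```

The choice between them is therefore about layout, node count, and where you want to make your changes, not about the result.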
Realistic Proposal for CG AOV Rebuild
The above setups are more for learning, with labels and backdrops to help break down the workflow and structure.
Below is the setup that I gravitate towards when setting up CG Templates. I try my best to apply logical flow and convenience, maximizing organization and flexibility while staying clean and fast. I leave space for Albedo / RAW Lighting changes, but I keep it off by default and turn it on when needed.
We see all levels of complexity being implemented:
You can find realistic template Nuke scripts of these setups for each renderer in the Downloads section below. I exported individual templates for Arnold, Redshift, Octane, and Blender.
I recommend waiting for future videos, where I will keep expanding the template and making it more robust. But if you are eager, feel free to download it, try it out, and modify it for your needs. More and better additions will come in future posts.
Downloads:
If you haven’t downloaded the FruitBowl Renders already, you can do so now:
The project files and the Renders are separate downloads, so if you have already downloaded 1.1 What and Why files or the Fruitbowl Renders, there are a couple ways to combine them to work.
Either add the .nk script to the previous package (in the folder above SourceImages, with the other .nk scripts)
Or simply drop the Render files into the SourceImages folder of the new 1.2 project folder
Project Files for this Video:
Along with the fruitbowl renders above, here are the nuke script and project files from this video, so you can follow along:
Since I am using Stamps in the script, all renders can be swapped out at the top of the script where the “SourceImages” Backdrop is, and the rest of the script will get populated correctly.
Slide show PDF
Here is a PDF version of my slideshow in case you would like to save for future research or review:
Often there are a lot of render passes to sort, and it’s useful to divide them into categories based on their functions. We can divide up all the render passes by how they are used.
There are 2 Overarching Types of CG Passes:
Beauty Rebuild Passes – Will recreate the Beauty Render
Data Passes – Helper passes
There are 4 Main Categories of CG Render Passes
Material AOVs
Light Groups
Utilities
IDs
Material AOVs
Used to adjust the Material Attributes (Shader) of objects in the scene
Examples:
Diffuse, Specular, Reflection, Sub-Surface Scattering, Refraction, Texture/Color, Emission, Raw Lighting, etc.
The passes in this category should add up to recreate the beauty render, as demonstrated in the previous video
From now on in the series, if I only say “AOVs”, I am referring to this category. I will try my best to say Material AOVs, but the term is ingrained in my vocabulary, and I don’t find the “all render passes” definition of AOV very useful.
Material AOVs are passes related to the shader or material from the 3D application. When we use these passes, we are wanting to manipulate the material or the shader of the object
Key, Rim, Fill, HDRI, Light-Emitting Objects, etc.
You can separate your lights however you like. Usually you see things like the 3 point lighting set up broken out into different lights. Along with HDRI and light emitting objects separated.
We are usually adjusting light attributes such as temperature and intensity
The ID category could probably live under the Utilities Category, but I do think the separation of these 2 categories is useful.
ID’s sole purpose is to pull out an alpha or matte channel, whereas Utilities can have many use cases beyond just that.
Many times a texture artist working on characters will make custom texture matte passes that can be rendered out as Texture RGB IDs to help isolate those important parts of the texture for adjustment in comp.
These also do not add up to the Beauty Render
Nuke Script: Breaking out Categories of the Renderers
The Nuke script is a node-graph representation of the slides table we looked at, with the passes broken out into categories for each of the 3 render engines.
In order for the LayerContactSheet node to display just the passes for each category, I am removing all layers from the other categories.
I’ve also broken out each Category’s Layers into Shuffles, with a text of the layer name, feeding into a contact sheet. The main difference is that this contact sheet is renderable, while the UI text on the LayerContactSheet is not.
In the Beauty Rebuild Passes Section, underneath we have a Material AOV rebuild and a Light Group Rebuild, showing that these passes add up to equal the Beauty.
Please look through the different categories and different Render Engines to familiarise yourself.
Tips and Tricks for making contact sheets
Split Layers
Here are some links to some various Split out layers / shuffle layers python scripts found on nukepedia:
Place the FruitBowl renders files into the /SourceImages/ folder of the project files and nuke will reconnect the read nodes.
What is a CG multi-pass Render?
A CG Render with multiple extra layers or passes that are to be used to recreate the Beauty Render and to aid in further manipulation while Compositing.
Why do we need it?
Renders are Expensive, and Changes are often necessary. It can take too long to make tweaks and hit notes if you have to re-render the image.
Sometimes it’s faster to find the “look” you are going for in Comp, rather than waiting for the Render results.
Some effects are better achieved in Comp and need additional passes to help achieve the effect in Compositing.
Terms and Definitions
Here are some useful Terms and Definitions that I will be using in this series. They are commonly used in the industry, but they can sometimes be confusing or interchangeable, so I will define them here to help while discussing CG Compositing.
Render – The output image or final result of the export calculation from the CG software.
Renderer – The Render Engine or algorithm used to produce the render.
Render Passes – A general term for additional layers exported by the CG renderer meant to be used alongside the main render. These might come contained within a multi-pass EXR or be rendered as separate images.
SourceImages and Stamps
All of the read nodes and source images in the nuke scripts will be located at the top of each nuke script under a “Source Images” Backdrop
You will need to re-link the files in this area if you are following along
We will be using Adrian Pueyo’s “Stamps” add-on to nuke in order to populate our nuke script with the files in the source image folder.
LayerContactSheet is the easiest, fastest, and most convenient way to get a visual overview of all the passes contained in your render.
Turn on Show Layer Names to get UI labels of each pass name. This is only a GUI overlay, so you cannot render it out; it’s just for viewing purposes, but it’s great for identifying the pass names we are looking at
The Viewer
The Viewer shows an alphabetical dropdown list of the channels in the stream the Viewer is plugged into.
Remember to set the Viewer back to RGBA when you are done viewing that layer
You can use the PageUp PageDown hotkeys to cycle through layers in the Viewer
Along the bottom left of the Viewer, it also lists all the channels, separated by commas. It’s good to glance at this part of the Viewer occasionally to keep track of whether you’ve lost layers from the stream, or are accidentally carrying layers you no longer need.
Shuffle node
The Old Shuffle node will show a list of all layers in the stream which it is plugged into if you use the “in 1” dropdown
It’s a good way to quickly check what layers are in your stream, but not as visual as the LayerContactSheet
ShuffleCycleLayers python script:
I wrote a tool called “ShuffleCycleLayers” that lets you use hotkeys like Page Up / Page Down or + / – to cycle through the layers of the selected Shuffle node, just like the Viewer layer cycler. Some people may find this handy if they prefer cycling through Shuffle node layers instead of changing the Viewer channel dropdown.
Old shuffle only displays list of layers within the stream the input is plugged into
New shuffle displays list of every layer in the nuke script
If you’d like to exclusively use the old shuffle node instead of the new shuffle node, you can add this line of code to your menu.py in your User/.nuke/ folder
Split Layers is a python script that shuffles out all available layers from a selected node
This will make 1 shuffle per layer all connected to the source.
You can then just view and toggle between all the layers in the nodegraph
Selecting all the Shuffles and hitting the hotkey Alt + P will toggle on the postage stamp feature, giving you visual thumbnails for all the passes. This can be useful for grouping and organising the passes.
Here are some links to some various Split out layers / shuffle layers python scripts found on nukepedia:
Channels are the individual pieces that make up a Layer, or Channel Set. The most common example is the red, green, blue, and alpha channels that make up the rgba layer.
A layer must contain at least 1 channel, but often has multiple channels.
Nuke prefers layers to have a maximum of 4 Channels per layer, any more and it has difficulty displaying them in the GUI interface
It becomes significantly more difficult to see the channels beyond 4 that are in 1 layer. Nuke’s interface is built around displaying 4 channels.
An individual channel in nuke is written as LayerName.ChannelName, to let you know what layer it belongs to
Depth.Z for example, in which Depth is the LayerName, and Z is the ChannelName
Whenever there is only 1 channel, it displays in the viewer as the red channel, since red is the first channel visible in rgba
There are also many cases where someone will just say “The Depth Channel”, really meaning the Layer; but since it commonly has only 1 channel, they are talking about the same thing.
Some nodes in Nuke deal with layers and channels differently, or prefer one over the other
A Shuffle dropdown displays LayerNames, for example, whereas a Copy node displays Channels; the Copy node’s list is therefore much bigger, since it displays the individual pieces of each layer
Blur node “channels” dropdown actually lists layers, and then you can toggle the channels of that layer on/off
Basically any node with a mask input is dealing with channels since it only needs 1 channel to function
The first 4 channels of a layer are mapped to, and will display as, Red, Green, Blue, and Alpha in the viewer, regardless of the actual channel names. Any more than 4 channels in a layer and Nuke has a hard time displaying them
A motion pass, for example, describes motion in the X and Y directions: left-right and up-down. So only 2 channels are needed in the layer, and they display as Red and Green
A position pass, for example, is usually describing XYZ – 3D space coordinates, and sometimes the channels are actually named x, y, and z. So Position.x, Position.y, Position.z
Since X, Y, and Z are taking up the first 3 channels in this layer, they will display as red, green, blue
AOVs
AOVs stand for Arbitrary Output Variables
Arbitrary output variables (AOVs) allow data from a shader or renderer to be output during render calculations to provide additional options during compositing. This is usually data that is being calculated as part of the beauty pass, so comes with very little extra processing cost.
They can be considered ”checkpoints” or “steps” in the rendering process. The render engine splits up many calculations while making the final image (Beauty) and is exporting these smaller steps out to disk so we can combine them and manipulate them in Comp.
The important thing to take away is that the renderer takes these “pieces”, these AOVs, and combines them together to form the final Beauty render. We are essentially trying to recreate this process with our CG rebuild, while retaining control over the individual pieces.
One of the best things about AOVs is we get them “for free” since the renderer was going to calculate them anyway.
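In miniature, the rebuild is just this: if the pieces for one pixel are stored separately (made-up values below), adding them back together reproduces the beauty value:

```python
# Made-up per-pixel Material AOV values for a single pixel
aovs = {
    "diffuse": 0.5,
    "specular": 0.25,
    "transmission": 0.125,
    "emission": 0.0,
}

# The rebuild: the pieces add back up to the beauty value
beauty = sum(aovs.values())
print(beauty)  # 0.875
```

Any change you make to one of the pieces then flows straight into the rebuilt beauty, which is the whole point of rebuilding in comp.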
AOVs can sometimes be just a “catch all term” for all layers/passes you will render out
“What AOVs are you exporting?” is a common question, and many 3D applications use the term AOVs for any render passes (even though some of them require extra work to get, like IDs or custom passes)
Differences in the Render AOVs
All the renderers are essentially doing the same thing: crunching the numbers with different algorithms to produce the final renders.
Since all the renderers are basically doing the same steps / calculations, you just have to get used to what each renderer chooses to name its AOVs or lighting passes. All the passes will combine together and add up to the final Beauty output.
There are certain similarities or patterns between all the renderers.
Sometimes we’ll be looking at one renderer while explaining concepts, but the concepts usually translate to the other renderers in some way. So keep an eye out for the patterns described and apply what is being taught to your renderer’s output.
Our renderers differ in how many AOVs they export and in the naming conventions they use for those AOVs.
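To make the naming differences concrete, here is an illustrative (not exhaustive, and from memory, so always verify against your own render output and your renderer's documentation) cheat sheet of lighting-pass names for a few common renderers:

```python
# Illustrative lighting-pass names for a few common renderers. Exact AOV
# names vary by renderer version and studio setup -- treat these as
# examples of the pattern, not a definitive reference.
LIGHTING_AOVS = {
    "arnold":   ["diffuse", "specular", "coat", "transmission",
                 "sss", "emission"],
    "vray":     ["VRayLighting", "VRayGI", "VRaySpecular",
                 "VRayReflection", "VRayRefraction", "VRaySSS2"],
    "redshift": ["DiffuseLighting", "GI", "Reflections",
                 "Refractions", "SSS", "Emission"],
}

# Whatever each renderer calls them, the full set of lighting AOVs
# sums back to that renderer's Beauty pass.
for renderer, aovs in LIGHTING_AOVS.items():
    print(f"{renderer}: {' + '.join(aovs)} = beauty")
```

The names differ, but the shape of the rebuild is the same everywhere: find the full set of lighting passes, and plus them together.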
For a long time I wanted to release a CG compositing series. Many things stopped me in the past:
Time constraints
Access to good Render examples to work with
Not thinking I had much to contribute to the subject matter
This series will be focused on answering the following question
How do I best rebuild my CG passes for the most flexibility as a Compositor?
Download the FruitBowl Renders for the Series
My friend and fellow artist, Chase Bickel, has kindly provided us with some high-quality renders of a FruitBowl to download for free and play around with.
Download the FruitBowl renders now, or grab them later – the links will always be posted at the top of each video and blog post:
Place the FruitBowl render files into the /SourceImages/ folder of the project files accompanying each video, and Nuke will reconnect the Read nodes.
For example:
These Renders are full of common passes you would find in production, including:
AOVs
Lightgroups
IDs
Utility
Gameplan
Start with the Basics -> Build our way to more advanced topics -> End with a proposed template for your CG Rebuild
I will go through the different types of AOV passes you would typically find at a studio: what they are, how they are used, and how to think about them in relation to one another. We will categorise and group different AOVs in order to define them better, and to help us find the commonality and patterns between renderers.
This series aims to be useful no matter what renderer your CG comes from, as the principles are the same.
Topics Covered
Differences between Additive and Subtractive Workflows, and the pros and cons of both
Explaining the difference between Material AOVs and LightGroups and how to work with them together seamlessly
This includes an elegant solution to the infamous AOV/LightGroup paradox
I will cover the importance of making mattes and alphas to help us isolate and automate our CG manipulations. We will go over common utility passes and IDs, and show how to do some cool things with them.
Using a Full CG Render
Will not cover how to integrate CG renders into a live-action plate
Will focus on the CG rebuild and various methods of manipulation to get the most out of your CG renders
Something for everyone
Juniors, Mids, Seniors, TDs, Comp Supervisors
There will be knowledge to gain at every level
Perhaps this will one day be a prerequisite for a full course on compositing CG into live-action plates
This series will take some time to release in full, so please be patient.