CG Compositing Series – 4.1 LightGroup / AOV Paradox

In this final installment of the CG Compositing Series, we focus on using LightGroups and Material AOVs together in a single workflow, and on solving the paradox that comes with it.

Why do these 2 rebuild methods seem to clash?

We cover the following topics in the video and in this blog post:

  • The complications of splitting LightGroups per Material AOV
  • A method for transferring changes between setups using a Difference Map
  • The pitfalls of using Subtraction and the advantages of using Division
  • A comparison of math operations: Add/Subtract vs Multiply/Divide
  • A stress test of the Division-based setup
  • Template layout strategies and rules to keep your rebuilds stable
  • Carrying changes across the template from the 1st rebuild to the passes of the 2nd for the most interactive user experience.
  • Ideas and techniques you can apply in your own CG Templates.

SlideShow PDF Download here:


What is the LightGroup / Material AOV Paradox?

Why do these two rebuild methods seem to clash?

We basically have 2 setups that are incompatible with one another, making it hard to use them at the same time.

Both the Light Groups and the Material AOV Rebuilds are different ways to slice the CG Beauty Render.

But this is not the full story; for a better overview of the situation, we need to look at the same image from a slightly different angle.

The Passes of the Opposite Rebuild actually exist within each slice of the Current Rebuild

They are fully embedded and intertwined in one another.


The Paradox:
How do you make changes to both Rebuilds if the Passes are already embedded within each other?


Possible Solutions to the Paradox:

Let’s explore some possible solutions to this problem.


Split Pass Workflow: Split out Material AOVs per LightGroup

Download the larger LightGroup-per-AOV split Render here or at the bottom of this blog

Junkyard_LightGroup_AOV_Split.exr ( 223 mb )

We could decide to brute-force split each pass out even further, into Material AOVs per Light Group.

When we rebuild it, we could either prioritize it as larger buckets of Material AOVs, each made up of every LightGroup,

or prioritize it as larger buckets of LightGroups, each made up of every Material AOV, like a mini-Beauty Rebuild per light.

There are many problems with this workflow however:

There are many more layers and channels rendered, making file sizes larger and Nuke slower to process and more difficult to work with.

There is often a need to clone or expression-link grades and color-correction changes across different parts of the setup in order to affect all the lights at once, or all the Material AOVs at once, creating a clone/expression hellscape.

There are also cases where you will see a master control and expression links, so the user does not get lost in the linked/cloned nodes.

You may also see the entire setup in a Group Node, to hide it and only expose necessary controls.

Compositing is never that straightforward, however, and we should not be compositing from within a Group node. We often need to pull masks, rotos, elements, etc. from other parts of the main node graph, and if everything is in a Group, it becomes difficult to get that information inside of the Group to use.

Most Compositing should happen exposed in the main node graph to avoid any headache, and not hidden away in a Group that a user needs to jump in and out of.

This extra split workflow has many cons, let’s look at some other workflows to solve our paradox problem.


Transferring Changes from 1st Setup to the 2nd Setup

Another workflow is trying to capture and transfer the changes from the 1st Rebuild Setup to the 2nd Rebuild Setup. This is the basic idea of the workflow at its core:

An example of this technique can be borrowed from Machine Learning and Generative AI workflows, where it is called Style Transfer.

In the below image, I start with an image of a bearded man. I have 2 separate models that are making changes. The first might be for facial expressions and shaves, and the second is for applying makeup. On the left side, I make a change to make the man beardless, and with an angry expression. On the right side, I’ve told it to apply clown makeup. If we want to combine the 2, I might want to package the “Beardless Angry” Changes, and apply that over to the clown makeup side. My result would be a Beardless Angry Clown.

This is a silly example but illustrates the workflow we want to use in Nuke to capture our first changes and apply them to our second changes for a combined change.

But how can we capture and package those changes from the first setup?


Subtractive (Absolute) Difference Method

  • We can find the difference between the 1st Rebuild and the Beauty Render using Subtraction
  • Temporarily store the changes in a subtractive difference map
  • Apply the 1st changes to the 2nd Rebuild Setup

Taking one of your rebuilds, either Material AOV comp or LightGroup comp, and subtracting the original Beauty Render will give you the Subtractive Difference Map, as seen below:

Subtractive Difference Map

The image itself is a map of positive and negative values, telling us how much we would need to add/subtract from the Beauty Render in order to get the result of our changed Rebuild.

  • Values of Zero will have No Change
  • Positive Values will get Brighter
  • Negative Values will get Darker

Let’s get into some equations to help us understand the math behind this workflow.

First let’s define a helpful math symbol: Delta, which stands for “The Change” or “The Difference”

Next, we'll do a basic inverse operation with subtraction and addition.

Material AOVs – Beauty = Difference

Beauty + Difference = Material AOVs

Instead of adding the difference back to the Beauty, let’s swap the Beauty out for the result of our LightGroups comp. So I am adding the difference of the Material AOVs comp onto the LightGroups comp, to hopefully get the combined changes.
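Written out with the Delta symbol defined above, those two steps are, as a notation sketch:

\Delta = \text{MaterialAOVs}_{\text{comp}} - \text{Beauty}

\text{Combined} = \text{LightGroups}_{\text{comp}} + \Delta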

It’s important to realize that we do not need to start with the Material AOVs and transfer to the LightGroups, but we could also just as easily start with the LightGroups and transfer those changes over to the Material AOVs, it’s a matter of preference, but the result will be the same.

Let's try this in Nuke, by taking the Material AOVs output, subtracting the Beauty Render, and then applying our subtractive difference map on top of the LightGroups output.

The resulting image kind of works, but it is also full of problems, with odd colors and seemingly black-hole areas:

Subtractive Method Failure

Let’s take a look at what is going wrong with the Subtraction Difference Method.


Subtractive (Absolute) Difference Problems

  • The Subtractive Difference Map represents Absolute Values 
  • This tells you the exact values to add/subtract to bring the Beauty Render to the Changed Rebuild
  • The Subtractive Method (Absolute) only works well if you Brighten values in the Rebuilds, or only Darken them slightly

Brightening both setups will be fine, as the results will only increase.

Darkening both setups however, runs the risk of going below zero and into negative values when the change is applied to the 2nd Setup. The darker the changes on both sides, the higher the risk of negative values.

Remember that the Rebuild passes are embedded in each other’s setups. If we darken some lights, and then darken the Specular, since the specular also contains all the lights, we are essentially subtracting those light groups twice and getting negative values.
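As a quick hypothetical single-pixel example of that double subtraction (the values here are made up):

\text{Beauty} = 0.8,\qquad \text{AOV comp (Spec darkened)} = 0.1 \;\Rightarrow\; \Delta = 0.1 - 0.8 = -0.7

\text{LightGroup comp (lights darkened)} = 0.2 \;\Rightarrow\; 0.2 + (-0.7) = -0.5

The combined result dips below zero, which is exactly the black-hole artifact seen above.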

So if this Subtractive Difference Method is giving us issues, let's look at other ways to get the difference map.


Division (Relative) Difference Method

Let’s ask ourselves: How can I go from 8 to 4?

Obviously we could subtract 4, and 8 – 4 = 4

But if we had a new, lower number, such as 2, and we also subtracted 4, we'd get -2.

We could also divide 8 by 2, therefore halving it, and we’d also arrive at 4.

Then dividing 2 by 2 gets us 1; it is also halved.

The amount of change from 8 was -4, but from 2 it was only -1. This amount of change is Relative to the input number. It is a ratio or a percentage of the start number, so it adapts to our input.

Of course, this could also be represented as multiplication: dividing by 2 is the same as multiplying by 0.5.

So instead of subtraction and addition, let's now try divide and multiply.

The result is a Division Difference Map that looks a lot different from our Subtraction Difference Map:

Division Difference Map

Now let’s multiply this with our 2nd Rebuild, the LightGroups side:

Side Note: Since Nuke’s Merge node does not have a native B / A operation, if you ever wanted to swap the A and B inputs and have the disable default to the Rebuild instead of the Beauty (for Templating reasons), then you would need a special MergeDivide.

Feel free to download this tool here: MergeDivide.nk
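If you'd rather not use the gizmo, here is a minimal sketch of one way to build a B-divided-by-A node yourself, assuming Nuke's standard MergeExpression node (the expressions and node name here are illustrative, not the internals of the MergeDivide tool):

import nuke

# B / A divide: plug the changed Rebuild into the B input and the original
# Beauty into the A input, so disabling the node passes the Rebuild straight
# through (the templating behaviour described above).
def create_b_over_a_divide():
    n = nuke.nodes.MergeExpression()
    for i, ch in enumerate(['r', 'g', 'b']):
        # guard against divide-by-zero where the Beauty (A input) is black
        n['expr{}'.format(i)].setValue('A{0} == 0 ? 0 : B{0} / A{0}'.format(ch))
    n['expr3'].setValue('Ba')  # keep the Rebuild's alpha unchanged
    n.setName('DivisionDifferenceMap')
    return n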

The Result from applying the Division Difference below looks a lot better than the Subtraction Method, and there are no longer any Negative Values in the image.

Division Difference Method

So why does this suddenly work? And what is going on with that Division Difference Map?


Division (Relative) Difference Map

This new Difference map is answering a different question than the subtraction difference map was:

  • How much do we need to Multiply the Beauty Render by in order to end up with the Rebuild Output?
  • What Percent do I need to increase or decrease this Beauty Render by to get to the Rebuild Output?

Multiplication / Percentage will not get us Negative values

That Division Difference Map appears all white, but in fact it has values over 1, superwhites, that we cannot see by default. Let's darken it a bit so we can see the pixels over the value of 1.

Darkened Division Difference Map – for Visualization

Let’s break it down:

  • Values above 1 will get brighter
  • Values between 0 and 1 will get darker
  • Value of 1 means No Change

Any number multiplied by 1 is itself and does not change. That is why the map is mostly white.

Multiplication can also be represented as a percentage:

So we could express the pixels on this map as percentages:

Our new map will therefore increase or decrease our 2nd Rebuild input by a specific percent.

Let’s go over the math equation to see how it works. Once again we have our inverse operation, Starting and returning to Material AOVs using division and multiplication:

Then we are swapping out the Beauty Render, in the second step, with our LightGroup output. So we are applying our Division Difference Changes on top of the LightGroup Changes.

It's worth mentioning again that, just like before, it does not matter in which order you divide or multiply the Rebuilds: Material AOV 1st & LightGroup 2nd, or LightGroup 1st & Material AOV 2nd, will yield the same result.
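As a notation sketch of those two steps:

\Delta_{\text{div}} = \dfrac{\text{MaterialAOVs}_{\text{comp}}}{\text{Beauty}}

\text{Combined} = \text{LightGroups}_{\text{comp}} \times \Delta_{\text{div}}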


So why does the Division Difference work so much better than the Subtractive Difference?

Below is an animation showing the difference between add/subtract and multiply/percentage.

Notice that the subtraction will go past zero towards negative values, while multiplication will only approach zero or be zero, but never go negative. We don’t really ever see a negative percent.

Going back to that embedded layers image. This time, instead of subtracting the pass on both sides, we are multiplying to zero on both sides, but we don’t run into negatives, because if you multiply something by zero twice, it is still only zero. 4 x 0 x 0 = 0. So we are actually still safe.
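To see why in plain numbers, here is a tiny single-pixel sketch in Python, with made-up values, comparing the two transfer methods when both setups darken heavily:

# one pixel value from the original render (all values here are made up)
beauty = 0.8

aov_comp = 0.1   # 1st Rebuild (Material AOVs) darkened heavily
lg_comp = 0.2    # 2nd Rebuild (LightGroups) also darkened heavily

# Subtractive (absolute) transfer: can dip below zero
delta_sub = aov_comp - beauty        # -0.7
combined_sub = lg_comp + delta_sub   # -0.5  -> negative pixel, broken

# Division (relative) transfer: scales what is left, never goes negative
delta_div = aov_comp / beauty        # 0.125
combined_div = lg_comp * delta_div   # 0.025 -> darker, but still >= 0

print(combined_sub, combined_div)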

I encourage you to stress test this Division Difference Method with your own renders and unique cases. You are able to push the limits to an extreme level without noticing anything breaking or feeling off.


Template Layout Options

We have to decide if we want to set up our template with our 2 Rebuilds:

  • side by side
  • top to bottom

We also need to decide which Rebuild will be first and which will be second, the first will be the one captured in the change map. So either Material AOVs or LightGroups.

We could also go right to left instead of left to right, on the side by side, if we so choose:

Here are some possible template layouts in the node graph:

One thing that is a bit annoying is that while using these Templates and making changes, we can really only see the effect of our changes by looking at the very bottom, after the changes are combined and both setups are taken into consideration. Is there any way for us to have a more interactive experience, seeing some of the changes affect different parts of the Template? Let's explore that idea.


Interactive Changes throughout the Template

Instead of considering the Rebuild as 1 whole output, like our Beauty, we need to remember that it is made up of individual pieces, like our pie chart from before. The passes were split, adjusted, and added back up to equal the Beauty.

So instead of multiplying the Division Difference Change Map to the output of the 2nd Rebuild, we could multiply it to each individual pass separately. This would give us the same result once we add all the passes together.

Let's explore the math of this; it makes the idea a little easier to understand.

If we split the Output into smaller components, we can apply the multiply to each component and then add them up after. This would be the same result as us just multiplying the whole.

The equation would look something like this (Delta being the Difference, and T being the Total Changes):
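In other words, because multiplication distributes over addition, applying the change map to each pass and summing gives the same total T as applying it to the whole:

\Delta \times (P_1 + P_2 + \dots + P_n) = \Delta P_1 + \Delta P_2 + \dots + \Delta P_n = T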

In Nuke, we can set this up in our templates. I am just going to stick to Top to Bottom Templates for the example, as it's a little easier to set up and understand.

It’s SUPER IMPORTANT to realize that we are only capturing the changes from the 1st setup, and applying them to the 2nd setup. There is no way to make the changes of the 2nd look back around and apply to the first, because you would create a paradoxical change loop: Changing the 1st, which changes the 2nd, which changes the 1st, which changes the 2nd, which changes the 1st…. you get the idea.

So that decision about the flow of your Template, and which setup you want to see the changes reflected in, is very important to make as you build your CG Template.

So, let's say that we have our Material AOVs 1st, and we are applying the changes to the LightGroups. We'll need to multiply each LightGroup pass with the division map.

And if we started with the LightGroups, we'd need to multiply the 2nd setup's Material AOVs with the division difference map.

base LightGroups
LightGroups with Material AOV Changes applied per pass

or if you were to use the LightGroups first, you could transfer your changes to each individual Material AOV:

base Material AOVs
Material AOVs with LightGroup Changes applied per pass

The result is an interactive user experience where we can see our changes trickle down throughout our template and influence all the downstream passes. This can really help visualize what is happening at a local level.


Rules and Caveats

  • Material AOVs passes must add up to equal Beauty
  • Light Groups passes must also add up to equal Beauty
  • Do not do color corrections that introduce negative values (saturation)
  • Treat the CG Template as a glorified Color Correction
  • On the 1st Rebuild side (The Captured Change side) avoid:
    • Transforms / Warps
    • Filters: Blur, Defocus, Median, Glow
    • Chromatic Aberration
    • Replacing / Merging a totally different image on top
      • Texture changes should happen at the albedo level

You want to try and consider the entire CG Template as one big color correction. The pixel is being tracked all the way through the setup, in the change map, compared back to the Beauty, and applied to the second Rebuild. Things like Transforms or filters are changing pixel positions, or blending pixels together, and will cause artifacting because the change map is not able to capture those changes properly. Also, some filters are a post effect and really should not be adjusted after use, such as a Glow.

Example of Glowing 1st rebuild and viewing result in 2nd rebuild:

glow problems

Transforms, or moving pixels around, will also not allow the setup to track the pixel the whole way through, and will lead to various artifacting, as shown below:

transform problems

You will want to apply your filters and transforms either after the CG Template, or possibly only on the 2nd Rebuild section. Basically, keep them out of the division change map, which is unable to capture them, and only apply those operations afterwards.


Template Examples

I will be providing you examples of Side by Side, Top to Bottom, and Interactive Change Templates for each renderer: Blender, RedShift, Arnold, and Octane.

All Template Examples: Blender, RedShift, Arnold, Octane. Side by Side, Top to Bottom, Interactive

Template Ideas and Inspiration

There are just way too many variations for me to provide an example for every situation. However, I can give some ideas and inspiration that I have seen and worked with, which you could consider implementing into your CG Template if it fits your style of comping.

  • Managing Div-Map with Exposed Pipes
  • Using Stamps or Hidden inputs for Div-Map
  • Storing Div-Map in a Layer / Channel for later use
  • Grouping Sections for less clutter
  • Template Controller, pick which parts are in use:
    • Beauty
    • Material AOVs Only
    • LightGroups Only
    • Combined LG / AOV
  • Reversed Direction

Conclusion

This Division Difference Multiplication Technique used to solve the LightGroup / AOV Paradox is fairly unknown at the moment. There seemed to be a huge black hole of knowledge out there on this subject. I’d like to give a huge shout out to Ernest Dios for being one of the true masterminds behind this technique, and for first introducing me to it. Also a big thank you to Alexey Kuchinski for all of his mentorship.

My hope with this whole CG Compositing Series was to equip you with the knowledge of every piece of the CG Template. What all the passes are, Why they are important, How to use them, Where to put them and how to organize them to Rebuild the Beauty, and When to adjust them for specific notes.

And of course, the final piece of the puzzle. How to combine it all and use the LightGroups and Material AOVs together in an elegant way. To help you push your CG Renders to their absolute limits, without the need for a rerender.

I hope you got value out of this video, or out of any video in the CG Compositing Series.

If I could ask one small favor from you, it would be to help share this video, or this blog, with compositing or VFX friends and colleagues. Whether it's in a group chat, work chat, Discord, or a LinkedIn post, I believe this knowledge is too important to keep secret. I would love to see this amazing workflow become more commonplace in the world of Compositing.

Thank you so much for all of your support over the years. It's been a long journey since the first CG Compositing Series Intro video, and we are finally at the end…for now. I hope it was worth the wait.

Until next time.


Downloads

Nuke scripts

1 Demo nk script, and 1 Template & Idea Proposal nk script, 2 total:

CG_Comp_Series_4_1_LG_AOV_Paradox_Demo_Scripts.zip ( 164 kb )

Tools

MergeDivide tool that was demoed:

MergeDivide.nk


Junkyard

I’ve created a new Junkyard Render specifically for this Light Groups video, please download the Render and the Cryptomatte file here in order to relink it in the Demo nuke script:

Download Render files here:
Junkyard_LightGroups.zip ( 115 mb )

Junkyard_LightGroup_AOV_Split.exr ( 223 mb )


Fruitbowl

If you haven't downloaded the FruitBowl Renders already, you can do so now:

You can Choose to either download all 3 FruitBowls at once:
FruitBowl_All_Renders_Redshift_Arnold_Octane.zip (1.61 GB)

Or Each FruitBowl Render Individually for faster downloads:

FruitBowl_Redshift_Render.zip (569.1 MB)

FruitBowl_Arnold_Render.zip (562.8 MB)

FruitBowl_Octane_Render.zip (515.4 MB)

The project files and the Renders are separate downloads, so if you have already downloaded the 1.1 What and Why files or the FruitBowl Renders, there are a couple of ways to combine them:

  1. Either add the .nk script to the previous package (in the folder above SourceImages, with the other .nk scripts)
  2. Or simply drop the Render files into the SourceImages folder of the project folder

CG Compositing Series – 3.1 Light Groups

In this video we move away from the Material AOVs and cover an equally important Beauty Rebuild using Light Group renders. This is another set of passes you can render to adjust the lights in your render, that all add up to the Beauty Render.

SlideShow PDF Download here:


What are Light Groups?

  • A Light Group is a render pass of a light (or a set of lights) in the scene, that is rendered in isolation from the rest of the scene’s lighting.
  • All other lights are “off” and only the Light Group’s light is “on” and affecting the scene.
  • All the Light Groups should add together to produce the full Lighting in the Scene; they all add up to rebuild the Beauty Render.

Importance of Light Groups

  • Creating good looking CG is not just about the materials of the objects, but also the Lights in the scene, that interact with those materials, and tell a story.
  • Different Light types can drive the aesthetic, style, realism, or story of your CG render.
  • Understanding lighting basics is important for being an effective CG compositor.

Types of Light Groups

Key – Primary Light Source
Fill – Lift and soften Shadows
Rim – Enhancing silhouette & Separation


Practical – Light Sources serving a purpose and illuminating the scene (they are part of the environment)

https://www.soundstripe.com/blogs/how-to-master-the-art-of-practical-lighting
https://www.therookies.co/projects/20802

Interactive – Dynamic Lights Changing over time


Light Groups for Compositors

A Compositor is usually focused on 2 main aspects of the Lights using Light Groups:

  1. Exposure – How Bright the Lights are
  2. Color Temperature – What Color (Hue) the Lights are

Exposure

  • Exposure is referring to how bright the image is.
  • Exposure is usually measured in “stops” of light.
  • Stops are relative, meaning they are based on the current image you are looking at.
  • +1 stop higher is 2x as bright. Doubled
  • -1 stop lower is 1/2 as bright. Halved
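Put as a formula, a change of n stops scales the brightness by a power of two:

\text{new brightness} = \text{old brightness} \times 2^{\,n}

so +1 stop is ×2, +2 stops is ×4, and -1 stop is ×0.5.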
https://www.john-rowell.com/blog/2017/3/27/what-is-a-stop-of-light
https://www.photographytalk.com/exposure-compensation-explained
https://www.diyphotography.net/what-is-middle-grey-and-why-does-it-even-matter/

Exposure Triangle in Photography

The Exposure Triangle refers to 3 settings on a camera that help balance the Exposure / Brightness of the Image. If you increase the brightness of 1 of the 3 sides by 1 stop (doubling the brightness), then you need to choose 1 of the other 2 sides to lower the brightness by 1 stop (halving the brightness) in order to maintain the same exposure level in the photo.

Only Aperture and Shutter Speed refer to the amount of physical light reaching the sensor through the lens. ISO refers to the amplification (multiplication) of the analog signal before it gets converted to digital data.

https://www.photopills.com/articles/exposure-photography-guide
https://petapixel.com/exposure-triangle/

Check out this AMAZING website that lets you play around with the settings and balance the image brightness in a very interactive way. I loved playing around with the sliders, it is such a cool idea.

http://www.andersenimages.com/tutorials/exposure-simulator/


Aperture

https://www.photopills.com/articles/exposure-photography-guide
https://www.studiobinder.com/blog/what-is-the-exposure-triangle-explained/
  • How big the opening of the lens is.
  • The larger the lens opening, the more light gets through, the brighter the image.
  • Also the bigger opening results in a shallower Depth of Field, or smaller zone of focus. This results in larger Bokeh and separation of foreground and background.
https://robynsphotographyacademy.com/understanding-aperture/

Shutter Speed

https://www.studiobinder.com/blog/what-is-the-exposure-triangle-explained/
  • How much time the shutter remains open for, measured in fractions of a second.
  • Leaving the shutter open for longer lets in more light and brightens the image.
  • Longer exposure times will result in more motion blur, depending on the shutter speed and the speed of the object being shot.
https://isblens.weebly.com/shutter-speed.html
https://snapsnapsnap.photos/a-beginners-guide-for-manual-controls-in-iphone-photography-shutter-speed/

ISO

  • ISO used to refer to the sensitivity level of film stock in film cameras.
  • ISO stands for the International Organization for Standardization
  • The higher the film stock ISO, the grainier the image appeared, due to the materials used for film stocks made for lower light intensities.
  • With digital cameras, sensors have only one sensitivity level.
  • Digital ISO refers to the Amplification (intensity multiplier) of the analog signal before it gets converted to digital data.
https://skylum.com/how-to/what-is-iso-in-photography
https://www.alanranger.com/blog-on-photography/what-is-iso-in-photography

Digital ISO is a lot like a volume knob on a radio. If the signal is weak (aka there is not much light making it to the sensor) then increasing the volume will make the sound louder (make the image brighter) but will also increase the static, or digital noise (sometimes referred to as grain).

pexels.com photo by githirinick
Back to the Future
https://www.photopills.com/articles/exposure-photography-guide

F-Number Meaning

  • F-number is the focal length divided by the aperture diameter (the size of the lens opening).
  • The “f/” notation is a convenient way to say “some fraction of the focal length”
  • They are called f-stops because each stop, or notch in the settings, halves or doubles the light admitted into the camera.
https://www.photopills.com/articles/exposure-photography-guide
https://www.originalartphotography.co.uk/2015/03/what-is-a-stop-photography-jargon/

Doubling the Area of a Circle

https://www.chilimath.com/lessons/geometry-lessons/area-of-a-circle/
  • Doubling the Amount of Light requires doubling the Area of the Circle (lens opening)
  • Doubling the Radius does not double the Area, it actually quadruples it: 2² = 4, but (2 × 2)² = 4² = 16
  • What do we need to multiply the Radius by to get double the Area?

The Square Root of 2 is roughly 1.4142

Doubling the Area of the circle requires us to multiply the Radius by roughly 1.4, which is why the numbers on the stops are written as they are.
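As a small worked sketch of that: with area A = \pi r^2, asking for double the area and solving for the new radius gives

2\pi r^2 = \pi (k r)^2 \;\Rightarrow\; k = \sqrt{2} \approx 1.414

which is why the f-stop scale steps through multiples of roughly 1.4: f/1, f/1.4, f/2, f/2.8, f/4, and so on.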

https://www.photopills.com/articles/exposure-photography-guide

Exposure in nuke

For dealing with Exposure in Nuke, I would recommend using either the Exposure node, the Multiply node, or the Grade node's Gain or Multiply knobs.

In the Exposure node you can change the stops directly by setting the mode to stops.
You can also just multiply by 2, 4, 8, or enter 1/2, 1/4, 1/8 in the Multiply slider of a Multiply or Grade node.
With a normal Multiply, we can use an expression to be able to enter our stop number:
pow(2, x), where x is the stop number, the same math the Exposure node is using.
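As a small Python sketch of that expression-driven option, assuming Nuke's standard Python API ("stops" is a hypothetical user knob added purely for convenience):

import nuke

# A Multiply node whose value is driven by a user-facing "stops" knob
def create_stops_multiply():
    n = nuke.nodes.Multiply()
    n.addKnob(nuke.Double_Knob('stops', 'stops'))  # hypothetical helper knob
    n['stops'].setValue(0)
    # same math as the Exposure node in stops mode: multiply by 2^stops
    n['value'].setExpression('pow(2, stops)')
    return n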


Temperature (Color)

https://www.rmd-leuchten.de/en/color-temperature/
  • Temperature describes the hue, or color of the light, measured in Kelvin (K).
  • Heated objects emit light photons as they heat up, in a process called Black-Body Radiation.
  • As objects get hotter they emit different frequency wavelengths of light, shifting from red to orange to white to blue.
https://lednetwork.ca/blogs/the-led-network-blog/what-is-colour-temperature-why-is-it-important-for-lighting
https://gvmled.com/what-is-the-color-temperature-in-lighting/
https://rbw.com/blog/understanding-color-temperature-of-led-lighting

Color            Kelvin (K)      Celsius (°C)     Fahrenheit (°F)
Red              1000–2000 K     700–1700 °C      1300–3100 °F
Orange/Yellow    2000–3500 K     1700–3200 °C     3100–5800 °F
White            3500–6500 K     3200–6200 °C     5800–11200 °F
Blue             6500+ K         6200+ °C         11200+ °F
https://www.autoevolution.com/news/staged-combustion-engine-fires-up-for-the-first-time-spits-out-350000-hp-in-one-second-235304.html

pexels.com photo by CottonBro
pexels.com photo by ClickerHappy

Color Grading in Nuke

I tend to use either an Exposure node for Luminance and a Grade node's Multiply knob for Color,

or I use a single Grade node, using Gain for Exposure changes and Multiply for color changes.

I also prefer to change my color using the Temperature and Magenta settings of the Color Panel, which allow intuitive corrections while also giving fine control.

This is also an important way to separate your Luminance correction from your color correction: by making sure the Intensity stays around 1, Luminance is preserved while changing color.

Adjusting Light Groups with Exposure (Gain or Multiply) for Intensity / Luminance, and a Multiply for Color, is my preferred way to Color Grade my Light Groups.
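As a small Python sketch of that preferred per-LightGroup setup, assuming Nuke's standard Python API (the function name and the example values are placeholders):

import nuke

# Per-LightGroup grade: Gain (the Grade node's "white" knob) for exposure,
# Multiply for colour / temperature
def grade_light_group(stops=0.5, colour=(1.0, 0.95, 0.9)):
    g = nuke.nodes.Grade()
    g['white'].setValue(2 ** stops)        # exposure change expressed in stops
    for i, value in enumerate(colour):
        g['multiply'].setValue(value, i)   # colour shift, intensity kept near 1
    return g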

beauty
Light Group Tweaks

Saturation of Light Groups

Remember that Light Groups are like individual Beauty Renders with only 1 light at a time. So you cannot simply desaturate a Light Group if you want to desaturate the light color.

You would either have to separate the Lighting information from the Material information using a color pass, but even then you may encounter some issues and artifacting,

or you can simply shift the colors of the Light Group to a more neutral color.


Destructive vs Non Destructive workflows

You can use Gamma corrections, but be mindful that they require an exact reversal of the order of operations in order to fully restore the original image. So they can be difficult to undo later if your corrections start to stack up.

ColorCorrect nodes can be especially destructive because they are effectively impossible to reverse, due to the fact that they pull a luminance key on their input to determine the shadows, midtones, and highlights.

This locks the input of the ColorCorrect, because if you make a change above, you are affecting the result of the ColorCorrect.

It means that you either need to keep going, adding more nodes and changes on top, or perhaps start over.

Imagine each ColorCorrect is dependent on all of the previous ColorCorrects; this can cause a ripple, or chain-reaction effect, altering the results of all of the ColorCorrects if any of them are changed.

Of course, at the end of the day, use whatever you need to get the shot done! But be mindful that you might be tangling a knot that you cannot untie later.

My advice would be to try using Exposure and Multiply changes for Luminance and Color first, and see how far you can get, and save the fancy ColorCorrects as a last resort, when you need to go the extra mile to complete the shot.


Demo Nuke Script

Download the Demo Nuke Script here:
CG_Compositing_Series_LightGroups_Demo_v07.nk

In the Demo Nuke script, you will find AOV and Light Group Rebuilds for:

  • Blender (Junkyard Scene)
  • Arnold (Fruitbowl)
  • Octane (Fruitbowl)
  • Redshift (Fruitbowl)

You will also find sections demoing:

  • Exposure
  • A junkyard light group rebuild that I have tweaked with Exposure and Multiply as an example
  • Saturation demo dealing with saturation of Light Groups
  • A section breaking down Destructive and Non-Destructive workflows in Nuke.

Downloads

Junkyard

I’ve created a new Junkyard Render specifically for this Light Groups video, please download the Render and the Cryptomatte file here in order to relink it in the Demo nuke script:

Download Render files here:
Junkyard_LightGroups.zip ( 115 mb )


Fruitbowl

If you haven't downloaded the FruitBowl Renders already, you can do so now:

You can Choose to either download all 3 FruitBowls at once:
FruitBowl_All_Renders_Redshift_Arnold_Octane.zip (1.61 GB)

Or Each FruitBowl Render Individually for faster downloads:

FruitBowl_Redshift_Render.zip (569.1 MB)

FruitBowl_Arnold_Render.zip (562.8 MB)

FruitBowl_Octane_Render.zip (515.4 MB)

The project files and the Renders are separate downloads, so if you have already downloaded the 1.1 What and Why files or the FruitBowl Renders, there are a couple of ways to combine them:

  1. Either add the .nk script to the previous package (in the folder above SourceImages, with the other .nk scripts)
  2. Or simply drop the Render files into the SourceImages folder of the project folder

References

Below are some links to the various research I used to create this video:

First, big shout out again to the Exposure Triangle Simulator website:

Exposure Triangle

http://www.andersenimages.com/tutorials/exposure-simulator/

http://photography-mapped.com/interact.html

https://www.adorama.com/alc/9-online-camera-simulators-to-help-your-photography-skill/


3D Rendering Light Groups

https://www.blog.poliigon.com/blog/4-simple-steps-to-set-up-light-groups-in-blender

https://help.maxon.net/r3d/katana/en-us/Content/html/Light+Group+AOVs.html#LightGroupAOVs-LightGroupsinDetail

https://www.premiumbeat.com/blog/the-role-of-light-groups-in-arnold-for-maya/

YouTube – Light Groups in Arnold for Maya | Francesco Furneri

https://garagefarm.net/blog/blender-light-groups

Julius Ihle – HDR Prepper Nuke Gizmo for IBL (Updated!)


Photography

https://www.canonoutsideofauto.ca/

https://www.john-rowell.com/blog/2017/3/27/what-is-a-stop-of-light

https://www.photopills.com/articles/exposure-photography-guide

https://www.studiobinder.com/blog/what-is-iso/

https://photographylife.com/what-is-iso-in-photography

https://photo.stackexchange.com/questions/35136/is-it-better-to-shoot-with-a-higher-iso-or-use-lower-iso-and-raise-the-exposure

https://theauroraguy.com/blogs/blog/iso-is-not-what-you-think-it-is-what-is-iso-really

https://photo.stackexchange.com/questions/52163/digital-iso-vs-post-exposure-correction

https://www.alanranger.com/blog-on-photography/what-is-iso-in-photography

https://skylum.com/how-to/what-is-iso-in-photography

https://petapixel.com/exposure-triangle/

https://www.outdoorphotographyschool.com/aperture-and-f-stops-explained/

https://www.exposureguide.com/focusing-basics/

https://manualmodebasics.weebly.com/shutter-speed.html

https://digitalphotographylive.com/shutter-speed/

https://isblens.weebly.com/shutter-speed.html

https://snapsnapsnap.photos/a-beginners-guide-for-manual-controls-in-iphone-photography-shutter-speed/

https://www.diyphotography.net/what-is-middle-grey-and-why-does-it-even-matter/

https://silentpeakphoto.com/photography-tips/stops-in-photography-explained/


3 Point Lighting

https://lightingpixels.blogspot.com/2013/01/tutorials-does-three-point-lighting-suck.html

https://academyofanimatedart.com/understanding-the-basics-of-3-point-lighting/

Youtube – CINEMATIC LIGHTING: 3 Point Lights | Kriscoart

https://www.linkedin.com/pulse/what-crucial-role-lighting-3d-animation-incredimate-jhkac/

https://www.soundstripe.com/blogs/how-to-master-the-art-of-practical-lighting

BEST Resource you will ever find on the subject of CG Cinematography and Lighting – Online Book:

https://chrisbrejon.com/cg-cinematography/


Area of Circle

https://www.chilimath.com/lessons/geometry-lessons/area-of-a-circle

https://mathmonks.com/circle/area-of-a-circle

YouTube – Video 878.1 – How do you double the area of a circle? – Practice | Chau Tu

Youtube – Area of a circle | Perimeter, area, and volume | Geometry | Khan Academy

Youtube – If I double the diameter of a circle, what happens to the perimeter and area? | Wendy Maths

Youtube – Circle Area (classic visual proof) | Mathematical Visual Proofs


Color Temperature

https://www.studiobinder.com/blog/what-is-color-temperature-definition/

https://nofilmschool.com/what-is-color-temperature-and-how-should-filmmakers-utilize-it

https://www.therookies.co/projects/76980

https://step1dezignsblog.wordpress.com/2017/10/06/how-to-choose-the-right-color-temperature-for-your-led-lighting-applications/

https://www.wonderopolis.org/wonder/what-is-the-color-of-fire

CG Compositing Series – 2.5 Material AOVs – Refractions & Reflections


Refraction & Reflection Passes (Exceptions)

In this video we aim to understand the problem with refraction (transmission) and reflections (indirect specular) and explore potential solutions. The problem with Indirect Specular (Mirror Reflections) and Transmission (or Refraction) passes is that they reflect or refract the entire beauty of the environment, locking that information into 1 pass. It often seems there is not much we can do as compositors to separate those passes further.


Here we have a nightmare scenario from an AOV rebuild point of view: a glass jar full of balloons, that is also reflected in a mirror surface. Everything in the mirror Reflection shows up only in the Specular Indirect pass, and everything seen through the glass jar shows up only in the Transmission (Refraction) pass.

We notice as well that objects that end up in the Transmission (Refraction) pass are missing from the Diffuse Pass.

Mirror Reflections, for example ground plane reflections for our subjects, are also limited to the Indirect Specular pass:


What is Transparency?

  • Transparency is the ability to see through an object or surface to what's behind
  • It's as if the object or material is ignored or nonexistent; it does not involve Light interacting with the material.
  • The light passing through is not Distorted (Refracted), nor does it Scatter or change Color (which could be the case with Translucency or Transmission)

Transparency basically has only 1 setting: Amount, or "How much can I see through this?"

YouTube: Opacity Maps – PixPlant

What is Transmission?

  • Transmission is the passing of light completely through a material
  • Refractive, Transparent, and Translucent materials all transmit light, but Opaque materials do not. 
  • If light is not transmitted, it may have been reflected (specular) or absorbed.
https://abnercabuang.wordpress.com/2017/11/19/reflection-refraction-transmission-and-absorption-of-light/

Transmission can sometimes cause the light to inherit a color tint as it passes through and interacts with the material.  Think of colored liquids or tinted glass.

https://documentation.3delightcloud.com/display/SFRP/3Delight+Glass

What is Refraction?

  • Refraction is the change in direction and speed of a light ray as it travels through or “Transmits” through different mediums, ie. from Air to Glass or Water or Plastic

The 2 most important characteristics of Refraction are:

1.) The Light passes through the material

2.) The Light changes direction

  • The amount of distortion, "bending", or change in direction of a light's path while passing through the material depends on factors like:
  • Thickness of the material, Angle of View, and the material's Index of Refraction
https://lightcolourvision.org/dictionary/definition/index-of-refraction/
https://en.wikipedia.org/wiki/Refraction
Photo by Jill Burrow – Pexels
drinking-straw-in-a-glass-of-water-refraction_congerdesign_Pixabay

Refractions vs Transmission?

  • Transmission is only referring to Light passing through an object
  • Refraction requires the light to change direction, and to pass through
  • The render pass is doing both things, so some Render Engines decided to call the pass Transmission, because it’s referring to light passing through the material
  • Other renderers call the pass Refraction, referring to the Change of Direction, “bending” or distortion of the light
  • Both terms in this case are referring to the same phenomena, just focusing on different aspects of the light’s behaviour
  • Transmission might even be a more accurate label, because technically a material could have a Refraction index of 1.0, meaning no refraction/distortion is occurring, but the light is still Transmitting. 
  • All Refractions require Transmission
  • Not all Transmissions require Refraction

Why is Light Redirected during Refraction?

  • Light travels through different mediums at different speeds, depending on the density and makeup of the medium. 
  • Examples of Mediums: Vacuum (space), Air, Glass, Plastic, Water, gases, etc.
  • The change of light speed while passing from 1 medium into the next, causes the light to change direction when entering the 2nd medium.
https://stoplearn.com/refraction-of-light/

Light Wave “Turning” or “Bending”

Light is a Wave:

One side of the wave hits the new medium and slows down first, turning/bending/redirecting the light wave towards a new direction.

https://en.wikipedia.org/wiki/Refraction
https://www.telescope-optics.net/reflection.htm
https://blog.soton.ac.uk/soundwaves/wave-interaction/3-refraction/

Color Light Wave Frequencies

Remember that Different Frequencies of Light Spectrum show up as different colors

Different frequencies of light refract at slightly different angles, causing the colors to separate. This is what happens with Color Prisms.

https://en.wikipedia.org/wiki/Dispersive_prism
https://en.wikipedia.org/wiki/Refraction
https://sciencenotes.org/refraction-definition-refractive-index-snells-law/

Refraction / Reflection in Rainbows

A Combination of this Refraction Color Separation and Reflections within water droplets is what allows us to see Rainbows.

https://www.quora.com/Why-is-high-humidity-required-for-the-formation-of-rainbow
https://www.quora.com/Are-specific-conditions-needed-for-Rainbow-to-occur
https://www.quora.com/If-light-travel-at-the-same-speed-in-rainbows-as-it-travels-in-air-would-we-still-have-rainbows

Index of Refraction

  • Different materials have different densities and makeups and will cause light waves to move through at different speeds
  • This is measured with an Index of Refraction, which measures how fast light moves through that medium, and therefore how much it refracts
  • An Index of 1.0 is light’s speed in a Vacuum – or no change in direction
  • Higher numbers mean light travels through the medium slower and light bends more
https://micro.magnet.fsu.edu/optics/lightandcolor/refraction.html
https://www.scratchapixel.com/lessons/3d-basic-rendering/introduction-to-shading/reflection-refraction-fresnel.html

In CG, this Index of Refraction is an attribute setting on Materials that will make it more or less refractive

https://documentation.3delightcloud.com/display/SFRP/3Delight+Glass

Refraction Re-Entering Original Medium

  • When the Light goes from a fast medium, to slower medium, and back into the fast medium on the other side, it has another refraction turn
  • This time, instead of one side of the light wavelength slowing first, one side speeds up first
  • If the exit angle is the same as the entrance angle, it will turn the light wave back to the original direction, parallel to the original light direction, just offset
https://www.quora.com/Will-the-angle-of-refraction-of-a-ray-of-light-passing-from-glass-to-air-be-equal-to-the-angle-of-incidence-greater-than-the-angle-of-incidence-smaller-than-the-angle-of-incidence-or-45-What-are-the-reasons-for-your
https://en.wikipedia.org/wiki/Refraction
https://micro.magnet.fsu.edu/optics/lightandcolor/refraction.html

Refraction Angle

  • The Angle that the light wave hits the surface also matters
  • If the light hits the material exactly perpendicular to the surface normal then it will pass through and the light does not bend at all
  • The more extreme the angle, the more refraction. This is why light appears most warped at the edges of curved surfaces.
https://www.hanlin.com/archives/695184
Pexels – Photo by Burak The Weekender

This is exactly what causes lens distortion to be more extreme at the edges of frame vs the center of frame

https://en.wikipedia.org/wiki/Fisheye_lens
https://help.shopmoment.com/article/181-superfish-distortion-correction

Chromatic Aberration

Combining the more extreme distortion with the Color separation is why we get Chromatic Aberration more in the edges of frame as well.

https://en.wikipedia.org/wiki/Chromatic_aberration
http://www.tlc-systems.com/artzen2-0047.htm

Caustics

Light refracting through complex-shaped objects changes direction and concentrates towards certain areas more than others, creating Caustics.

Pexels – Photo by Maria Orlova
https://en.wikipedia.org/wiki/Caustic_(optics)

Complex shapes create complex caustics, and moving surfaces, like water, create dynamic and organic moving Caustic patterns.


What is Translucency?

  • Transmissive materials have a Roughness or Glossiness setting that works in the same way as it does on Specular Highlights
  • Increasing the Transmission Roughness causes the light rays traveling through to scatter / “diffuse” or blur together.  Think of Frosted Glass or Plastics.
  • This effect of “Blurring” or Scattering the Transmitted light is called Translucency
https://medium.com/@stevesi/on-bigco-leaks-transparency-and-disclosure-6d7812e227a0
https://sitelikeet.life/product_details/15285792.html
https://slideplayer.com/slide/8349700/ – Light and Color Presentation – by Elijah Dixon

Roughness Blurs Everything Together

Specular Roughness Setting:

https://documentation.3delightcloud.com/display/SFRP/3Delight+Glass

Transmission Roughness Setting:

https://documentation.3delightcloud.com/display/SFRP/3Delight+Glass

Recap

Transparency – You can see through to BG, as if the material or object is not visible or ignored

Transmission – Light allowed to pass through the surface / material

Refraction – Light changes direction as it passes through the material / surface

Translucency – Light passes through material and gets scattered / blurred 


Virtual Images / Worlds

When looking at fully reflective and refractive objects, we are seeing a distorted representation of our surroundings.

https://www.scratchapixel.com/lessons/3d-basic-rendering/introduction-to-shading/reflection-refraction-fresnel.html
https://en.wikipedia.org/wiki/Refraction

Concave/Convex Reflections

When looking at curved mirrors, it is very obvious that the object we are looking at is a redirected and distorted view of our surrounding environment.

https://www.simply.science/images/content/physics/waves_optics/reflection/Concept_map/Convexconcave_mirrors.html
https://wbbsesolutions.guru/wbbse-solutions-for-class-10-physical-science-and-environment-chapter-5/

Convex Reflections

  • With Reflections, light bounces off the material and, depending on the surface shape, changes direction upon reflecting
  • Convex shapes cause the light to Diverge – spread apart
https://www.shokabo.co.jp/sp_e/optical/labo/lens/lens.htm

Concave Reflections

  • Concave shapes cause the light to Converge – come together
https://www.shokabo.co.jp/sp_e/optical/labo/lens/lens.htm

Concave / Convex Refractions

When looking at curved glass, or lenses, the light that we are seeing through the glass is a redirected and distorted view of our surrounding environment.

photo by betül balcı on pexels
photo by shukhrat-umarov on pexels

Concave Refractions

  • With Refractions, light passes through the material and, depending on the surface shape, changes direction upon refracting
  • Concave shapes cause the refracted light to Diverge – spread apart 
https://www.britannica.com/technology/lens-optics

Convex Refractions

Convex shapes cause the light to Converge – come together

https://www.britannica.com/technology/lens-optics

Looking at them all next to each other, we can see Reflections and Refractions are both re-directing the light rays from another part of the scene. The biggest difference is Reflect = Light Bounces off, Refract = Light passes through.


There is No Spoon

photo by chait goli on pexels
photo by otoniel alvarado on pexels

There is No Glass Either…

https://wifflegif.com/gifs/490974-pouring-water-reverses-arrow-gif
https://www.cleverpatch.com.au/ideas/by-product-type/paper-and-card/refraction-in-action

Diffuse – Specular – Transmission (New Category)

Diffuse – All Light Interaction with Material / Object

Specular – All Surface Reflections (Bounces)

Transmission – All Pass Through Refractions

Here is an Example Scene with 1 sided Glass on the left, and 2 sided Glass on the right:

We can see the Direct Transmission shows the Light Source through only the 1-sided glass, but not the 2-sided glass.

Almost all information in the 2 sided glass is stored in the Indirect Transmission:

Almost all objects that contain glass in 3D are supposed to be modelled with a thickness, meaning 2 or more sides. So more often than not, your Direct Transmission Pass will be empty and all information will go to the Indirect Transmission. This is also why very often it is not even split up and is just rendered combined as Overall Transmission.


Recap #2

  • Transmission – Light passes through  
  • Refraction – Light redirects.  
  • The CG pass could be named either, but it is referring to the same phenomenon.
  • Specular and Transmission are both similar in that they are capturing light redirecting and showing a virtual image of the distorted surroundings
  • Emission is the light source
  • Diffuse describes the object itself
  • Specular Events captures light bouncing off the object’s surface
  • Transmission Events capture light passing through an object. 
  • These all get separated into their own categories.
  • Both Specular and Transmission have: 
  • A Direct pass that show the first reflection or first transmission of light
  • An Indirect pass showing all subsequent bounces or pass throughs
  • An Albedo Filter (mask)
  • Transmissive surfaces like glass are often modelled with 2 sides
  • Therefore the light usually passes through 2+ sides and ends up in the indirect pass, and the direct Transmission shows up empty
  • Often rendered as just an overall combined Transmission pass, for convenience.

Incorporating Transmission (Refraction) Into AOV Template

Since most of the Refraction is in the Indirect, there is no need to make room for splitting up and adjusting separate Direct and Indirect passes like we do with the Diffuse or Spec. I recommend combining them and keeping the Transmission section slim, to save space in the Template. I also recommend the layering go: Diffuse, Transmission, Specular, Emission, Other. To me, this was the clearest layering.

I updated the Material AOV Rebuild Templates in the FruitBowl Renders for Arnold, RedShift and Octane incorporating the new Transmission / Refraction Section.

See the Downloads section at the bottom for links to the full Nuke scripts for learning, and the template scripts updated per render engine: Arnold, Octane, Redshift.


Handling Planar Mirror Reflections

One approach to rendering Planar Reflections with AOVs is flipping the Camera along the Mirror Plane

Flipping the Camera along the normal of the Mirror Plane will produce a Virtual camera for you to render the Mirrored Virtual Image from the right perspective

If your Object is sitting on top of the 3D origin ground plane, this can be as easy as making an Axis Node, Scaling the Y to -1 and plugging your camera Axis Input into this Axis Node.

This will view your scene from the perspective of your Mirror. In the above image, you can see that after flipping the Camera in -Y, the Nuke-rendered result is aligned with the rendered Indirect Specular pass. We'll need to do this method in the render application on the Lighting side, or pass this camera back to the lighter in order to render the reflection with full AOVs.
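A hedged Python sketch of that ground-plane case, using standard Axis and Camera connections (apply it to a duplicate of your shot camera; the function name is a placeholder):

import nuke

# Mirror a camera across the Y = 0 ground plane by parenting it to an Axis
# node whose Y scale is -1 (a Camera node's first input is its axis input).
def mirror_camera_over_ground(camera_node):
    axis = nuke.nodes.Axis2()
    axis['scaling'].setValue([1, -1, 1])   # flip the world in Y
    camera_node.setInput(0, axis)          # parent the duplicated camera to it
    return axis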

Here is the re-rendered Mirror Camera Perspective of the Armored Mech, with full AOVs, matching the original reflection angle:


What about non-ground plane mirrors?

For all oriented mirror planes, the same concept applies: you want to flip the world from the pivot point and orientation of that card, along its normal-facing angle. This is easier to do in 3D applications, but can be done in Nuke with a little Matrix Inversion.
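One way to express that matrix math, assuming the card's local +Z axis is the mirror normal and a column-vector convention (C is the card's world matrix, Cam is the camera's world matrix):

M_{\text{mirror}} = C \cdot \mathrm{diag}(1,\,1,\,-1,\,1) \cdot C^{-1}

\text{Cam}_{\text{mirrored}} = M_{\text{mirror}} \cdot \text{Cam}

The gizmo described below effectively does the same thing, acting as a parent Axis that flips the world along the card's normal.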

I've made a tool called MirrorDimension to make this Camera Mirroring super easy. Just stick this node between the Mirror Card in Nuke (it must have its transformations and rotations set) and the Camera node. The gizmo is acting as an Axis node and is just flipping the world along the orientation of the Card input.

No Settings on the node, just the following instructions:

1.) Plug in the MirrorCard input to the Card or Axis node you would like to be the mirror.

– The scale of the Card does not matter as long as the orientation (translation/rotation) is correct.

– The Card's +Z axis is the front of the mirror; point that towards the subject / camera. This is the blue Z arrow in the 3D viewer.

2.) Duplicate your Camera, and plug in the “axis” input of this new Camera to the output of this node.

3.) Your new Camera will be Mirrored according to the plane / card / axis.

4.) Render using this New Camera Setup to get the mirrored CG output.

Before MirrorDimension Node – Original Camera Position:

After Mirror Dimension Node Applied –

You would either do this in your 3D scene and render the AOVs or pass this camera to a Lighter to render from this mirror perspective.


Faking Reflections in Comp

If you suddenly need reflections but have no renders, you can use some of the above techniques to fake your reflections.

If you have your Geometry of the object, try projecting the rgba onto the geometry, and rendering it in nuke from the mirror dimension:

If you have no Geometry but have a Position Pass, try using a PositionToPoints node, plugged into your render, with the Position input plugged into your shuffled-out Position pass (or selected in the dropdown). You can render an RGB 3D point cloud of the object with the mirror camera and fake some reflections. It won't be perfect, but perhaps in a pinch it can save your ass and add more realism:

So the next question becomes: what can we do if it's not a Planar Reflection? Or if there are multiple planar reflections, or the surface is curved? And what about Refractions (Transmission)?


Getting Help from Lighters

There is a serious limit to how much we can do in comp when encountering Indirect Specular or Refraction (Transmission) passes. Many times, if this is a big feature of our shot and requires a lot of comp tweaks, we'll need some help from our Lighting Department.


Julius Ihle – Head of Lighting and LookDev at Trixter

We talk to Julius Ihle – Head of Lighting and LookDev at Trixter for potential Lighting Solutions to these problems.

Julius is super knowledgeable, and introduces us to Light Path Expressions and Open Shading Language where lighters can help Build Additional AOVs and help us when the situation calls for it.

Julius is also an online educator and keeps a Lighting Blog discussing exactly these topics, check these tutorials out for more details:

Julius’ Blog:
https://julius-ihle.de/?page_id=346


Light Path Expressions

Julius’ Tutorial: LPE Quick Tip #1: Light Path Splitting for Transmission
https://julius-ihle.de/?p=2619

Here is an illustration of the drawing Julius used to explain how renderers are handling Reflection and Refraction Events

In a nutshell, the render engine keeps track of the light ray's path and all the events that it undertakes on its journey from the Camera back towards the Light.

Lighters can create new AOVs with custom expressions telling the render engine exactly what parts and what events they want to see in the outputted pass.

Here is a link to the Light Path Expression community GitHub:
https://github.com/AcademySoftwareFoundation/OpenShadingLanguage/wiki/OSL-Light-Path-Expressions

And here is the Arnold User Guide that Julius Mentions in the video to check out for more education:
https://help.autodesk.com/view/ARNOL/ENU/?guid=arnold_user_guide_ac_output_aovs_ac_expression_aovs_html

LPEs are supported by many renderers, so check if the one you are using supports them.


Open Shading Language

Julius’ Tutorial: Playing with OSL #5: Arnold Reflection Alpha + Utilities
https://julius-ihle.de/?p=2788

There are also Shaders that have been written that can Reflect various AOVs, such as Utility passes and Alpha channel so that reflections can be more useful for us in comp. Julius has written his own shader to do just that, download it from GitHub:

https://github.com/julsVFX/osl


Downloads:

If you haven’t downloaded the FruitBowl Renders yet, you can do so now:

You can choose to either download all 3 FruitBowls at once:
FruitBowl_All_Renders_Redshift_Arnold_Octane.zip (1.61 GB)

Or Each FruitBowl Render Individually for faster downloads:

FruitBowl_Redshift_Render.zip (569.1 MB)

FruitBowl_Arnold_Render.zip (562.8 MB)

FruitBowl_Octane_Render.zip (515.4 MB)

The project files and the Renders are separate downloads, so if you have already downloaded the 1.1 What and Why files or the Fruitbowl Renders, there are a couple of ways to combine them:

  1. Either add the .nk script to the previous package (in the folder above SourceImages, with the other .nk scripts)
  2. Or simply drop the Render files into the SourceImages folder of the new 1.2 project folder

Project Files for this Video:

Along with the fruitbowl renders above, here are the nuke script and project files from this video, so you can follow along:

All Nuke Project Files and template scripts:
CG_Compositing_Series_MaterialAOVs_RefractionsReflections_AllScripts.zip (88 KB)

The Nuke scripts included in the above download, which can also be downloaded individually, are:

CG_Compositing_Series_2_5_Material_AOVs_RefractionsReflections_DemoScript.nk


CG_Compositing_Series_2_5_Material_AOVs_ArmorMech_ReflectionsMirror_Demo.nk


CG_Compositing_Series_2_5_Material_AOVs_Updated_Transmission_Templates.nk


I have also updated these Individual AOV Rebuild Templates scripts for specific render engines to include a Transmission Section:

Realistic_AOV_Bebuild_Arnold_Template.nk

Realistic_AOV_Bebuild_Redshift_Template.nk

Realistic_AOV_Bebuild_Octane_Template.nk

Realistic_AOV_Bebuild_Blender_Template.nk


Glass Balloons (Houdini Solaris)

GlassBalloons_Renders.zip (2 EXRs – 101.4MB)


Armor Mech (Rendered in Blender):

Original model by Numata3D_98 on turbosquid:
https://www.turbosquid.com/3d-models/3d-attack-mecha-quadpod-1993489

4 EXR Renders and Geo (for nuke geo projection demo):

ArmorMech_RendersAndGeo.zip (179.4MB)


MirrorDimension

I am linking to the gizmo on the Nuke Survival Toolkit github, where you can download the raw file or copy/paste the RAW source code from your browser into nuke:

MirrorDimension gizmo

Or download the .nk file here:
MirrorDimension.nk

Or on Nukepedia:

https://www.nukepedia.com/gizmos/3d/mirrordimension


Blender JunkYard Scene:

Scene from https://www.blender.org/download/demo-files/

JunkShop_v01.exr (144.7MB )


Blender ClassRoom Scene:

Scene from https://www.blender.org/download/demo-files/

3 Render Files:

BlenderClassRoom_All_Renders.zip (213.6MB)


VRay Room Render:

Vray Room – Can be downloaded from this website, look for “download example scene” (36.6MB):

https://www.chaos.com/blog/how-to-use-cryptomatte-render-elements-in-v-ray-for-sketchup


Since I am using Stamps in the script, all renders can be swapped out at the top of the script where the “SourceImages” Backdrop is, and the rest of the script will get populated correctly.


Slide show PDF

Here is a PDF version of my slideshow in case you would like to save for future research or review:


References / Research


Light Path Expression Doc:
Github Wiki: OSL Light Path Expressions

Arnold Light Path Expression Help and Examples:
Arnold Help: Light Path Expression AOVs – Arnold User Guide

Julius Ihle Blog
Julius Ihle’s Github page : julsVFX/osl
Playing with OSL #5: Arnold Reflection Alpha + Utilities
LPE Quick Tip #1: Light Path Splitting for Transmission


Websites:

Refraction Wikipedia

Transparency_and_translucency – Wikipedia

https://notes.thatother.dev/physics/refraction

https://help.maxon.net/r3d/cinema/en-us/Content/html/Integrated+AOVs.html

https://www.scratchapixel.com/lessons/3d-basic-rendering/introduction-to-shading/reflection-refraction-fresnel.html

https://abnercabuang.wordpress.com/2017/11/19/reflection-refraction-transmission-and-absorption-of-light

https://study.com/learn/lesson/transmission-light-wave-examples.html

Basics of creating glass materials in Corona renderer and 3Ds Max

V-Ray Materials

3Delight Glass – Storage for referenced pages – 3DL Docs

https://macdesignstudio.wordpress.com/tag/reflection

Light Pipe Design: How TIR & Refraction Come into Play

Light and color. – ppt video online download

On BigCo Leaks: Transparency and disclosure

Can You See Through Me? | Lesson Plan

https://wbbsesolutions.guru/wbbse-solutions-for-class-10-physical-science-and-environment-chapter-5

FAQ/Combining 3D Passes – VFXPedia

Refraction – Definition, Refractive Index, Snell’s Law

The Physics Behind Rainbow Formation

Refraction | Sound Waves

Refraction Of Light – 2023

https://www.geocities.ws/rmackrell509/4thSpring.html

What are the uses of refraction in our daily life?

What is the Index of Refraction? Measurement, Definition & More –

Autodesk – arnold – Help

View topic – Help understanding Refraction, SSS and Transmission passes?

PPT – The Basics of Refraction PowerPoint Presentation, free download – ID:2558034

Molecular Expressions: Science, Optics, and You: Light and Color – Refraction of Light

https://slideplayer.com/slide/16831983

https://www.researchgate.net/figure/Distortions-of-the-light-field-generated-by-refractive-a-and-reflective-b-convex_fig1_308768656

https://global.canon/en/technology/s_labo/light/003/02.html

Delivering VR in Perfect Focus With Nanostructure Meta-lenses

https://osa.magnet.fsu.edu/teachersparents/articles/lensesgeometricaloptics.html

https://www.simply.science/images/content/physics/waves_optics/reflection/Concept_map/Convexconcave_mirrors.html

What is the difference between Translucency and Transparency?

https://www.linkedin.com/pulse/transparency-vs-translucency-whats-difference-between-archie-blake-3acne

Transparent vs Translucent


YouTube Links:

Light Absorption, Reflection, and Transmission

How is Light Absorbed, Reflected and Refracted

Why does light bend when it enters glass?

Refraction of Light

Reflection, Refraction and Absorption

Opacity Maps – PixPlant

Refractive index of water

How To Demonstrate Light Bending or Refraction

How Lenses Function (CanonOfficial)

Refraction Explained

Compositing/Render layers in Blender

CG Compositing Series – 2.4 Material AOVs – Albedo & RAW Lighting


Albedo & RAW Lighting (Complex) Passes

In this tutorial, we go further down the levels of complexity into the most complex category, which includes Albedo and RAW Lighting. These are the smallest components of AOVs, the building blocks, and they unveil how lights, textures, and materials come together to produce the beauty render.


What is Albedo?

  • An Albedo Map is the base color or texture map that defines either the diffuse color or specular tint of the surface.
  • Remember that in Physically Based Rendering (PBR), whether a material is Metallic or Dielectric (non-metallic) determines whether the albedo color is used as the Diffuse Color or the Specular Color.
  • The renderer knows what to use the albedo for based on a black-and-white metallic map.
https://meshlogic.github.io/posts/blender/materials/nodes-pbr-basic-shader/

What’s the difference between Albedo and Diffuse?

  • Diffuse contains lighting and shading information such as highlights, shadow and light color.  It’s the object’s color / texture in the lit scene.
  • An Albedo Map is basically the object’s texture as it would appear under uniform lighting, without the influence of shadows or highlights.
https://www.cgdirector.com/albedo-map/
https://bryanray.name/2015/05/24/blackmagic-fusion-the-texture-node/

Other names for Albedo

  • Texture
  • Color
  • Base Color
  • Diffuse Map
  • RAW Diffuse Color
  • Diffuse Filter

Common terms:

  • “Map”
  • “Filter”

What is RAW Lighting?

  • RAW Lighting is the pure lighting information of the scene, without any specular, object colors, or textures.
  • A pass that describes how light is affecting the scene.
  • This, multiplied with the Albedo, makes up the Diffuse Pass.
RAW Lighting Pass – Fruitbowl Render

How are they combined?

Albedo and RAW Lighting are always multiplied together, not plussed

Diffuse = Albedo * RAW Lighting

https://bryanray.name/2015/05/24/blackmagic-fusion-the-texture-node/

What is RAW Specular and Specular Filter?

The RAW Specular pass is what objects in the scene would look like if they had a 100% reflective chrome shader on. It renders everything uniformly reflective.

Specular Filter is like a mask or an albedo multiplier that limits the visibility of the RAW Specular reflective pass to certain areas. The thought process is: might as well render everything reflective, and then decide where and how much of it is needed.

Just like the albedo and the RAW Lighting, RAW Specular and Specular Filter are multiplied together to form the final Specular pass

Specular = RAW Specular * Specular Filter


What is RAW Reflection and Reflection Filter?

RAW Reflection and Reflection Filter are essentially the same thing as RAW Specular and Specular Filter. You might see this term depending on the renderer. Sometimes Specular refers to Direct Specular and Reflection refers to Indirect Specular.

The more important takeaway is that you want to pair the “RAW” pass with its “Filter” or “Albedo” pass. They get multiplied together to equal the final pass

Reflection = RAW Reflection * Reflection Filter


RAW Direct Diffuse & RAW Indirect Diffuse

Just like the normal Diffuse pass, RAW Lighting passes can also be split into Direct and Indirect Lighting. So you can end up with RAW Direct Lighting and RAW Indirect Lighting. Both passes use the same Diffuse Albedo, so it is only the lighting that is split, not the albedo.

Total RAW Diffuse  = RAW Direct Diffuse + RAW Indirect Diffuse


RAW Direct Specular & RAW Indirect Specular

And just like the Diffuse RAW passes, we can also break up the RAW Specular passes into RAW Direct Specular and RAW Indirect Specular.

Again both Direct and Indirect Specular will use the same Specular Filter pass.

Total RAW Specular  = RAW Direct Specular + RAW Indirect Specular


Diffuse Equation

Knowing the diffuse equation will help us understand how it is built, and more importantly, the math behind splitting the Diffuse pass into its individual components of Albedo and RAW Lighting. Let’s go over a basic equation and reinforce some math concepts:

x = Albedo
y = RAW Light
Diffuse = ( Albedo * RAW Light )
Diffuse = ( x * y )

In math, certain operations cancel each other out. Just like Subtraction cancels out Addition, Division cancels out Multiplication

( x + y ) - y = x
( x * y ) ÷ y = x

We can take the Diffuse pass and, by dividing by the component we do not want, get the component we do want.

What that means is if you have the Diffuse pass and 1 other component, albedo or RAW Lighting, we can always generate the remaining missing pass.


x = Albedo
y = RAW Light

Diffuse = ( Albedo * RAW Light )
Diffuse = ( x * y )

( x * y ) ÷ y = x
( x * y ) ÷ x = y

Diffuse ÷ Albedo = RAW Light
Diffuse ÷ RAW Light = Albedo
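A quick worked example with made-up pixel values makes this concrete:

Albedo = 0.5
RAW Light = 2.0

Diffuse = 0.5 * 2.0 = 1.0

Diffuse ÷ Albedo = 1.0 ÷ 0.5 = 2.0 = RAW Light
Diffuse ÷ RAW Light = 1.0 ÷ 2.0 = 0.5 = Albedo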

Division Problems

You can divide 0 by any non-zero number and you get the result of 0. But if you try to do the reverse, you run into a classic math problem: you cannot divide by 0, the result is undefined… not possible.

0 ÷ x = 0
x ÷ 0 = undefined

This can cause serious problems in nuke when dividing, and we need to be careful.


Using Expression node to test math in nuke

If we use an expression node we can enter the following equation:

0/r
0/g
0/b
0/a

The nuke Expression node has some predefined variables for using the channels. So it will carry out this math on a per-pixel basis for each channel.

r = red channel
g = green channel
b = blue channel
a = alpha channel

We can see that once we start dividing by 0-value pixels, we get issues. Nuke’s answer for an undefined result is nan pixels

nan stands for “Not A Number”

inf stands for Infinity


Testing for nan or inf pixels

We can use another expression node to write a little tcl expression that will show 1.0 (white) for any illegal-value pixels. If it’s a normal number, it will display 0.0, or black. This lets us easily and visibly test whether we have “problem pixels” such as nan and inf in our image

isnan() tests for nan (not a number) pixels. You need to enter the channel you want to check inside the parentheses, for example isnan(g), and it will display 1.0 for nan values and 0.0 for normal values

isinf() tests for infinity-value pixels. You need to enter the channel you want to check inside the parentheses, for example isinf(g), and it will display 1.0 for inf values and 0.0 for normal values

We can just add them together to get a full mapping of “illegal values” to warn us

isnan(r) + isinf(r)
isnan(g) + isinf(g)
isnan(b) + isinf(b)
isnan(a) + isinf(a)
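If you want a paste-able version of that checker, here is a minimal Expression node sketch (assuming the node’s default mapping of expr0–expr3 to the red, green, blue, and alpha channels):

Expression {
expr0 "isnan(r) + isinf(r)"
expr1 "isnan(g) + isinf(g)"
expr2 "isnan(b) + isinf(b)"
expr3 "isnan(a) + isinf(a)"
name IllegalValueCheck
label "nan / inf check"
}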

So dividing by 0 in nuke can give you illegal values. Luckily, the Merge(divide) operation in the Merge node avoids these issues. It has built-in protections so that 0/0 = 0, and any other number divided by 0 is bypassed, or skipped, and it does nothing: it will just show you the A input value and not do any math at all.

There is a limitation to the Merge node however. There is only 1 operation for divide, and that is A/B

We know that when we disable nodes in nuke, they default to the B input. But if we switch the inputs, we do not get the same result. This means we are locked into our inputs based on which image we need to divide by the other.

So there is no B/A operation; we’ll need to recreate it ourselves


MergeExpression Node

We can use a MergeExpression node, which is basically a combination of a Merge node and an Expression node; in fact, the properties look identical to an Expression node’s.

The MergeExpression has access to the same variables as the normal Expression node, namely the r, g, b, a variables representing the different channels:

r = red channel
g = green channel
b = blue channel
a = alpha channel

But the MergeExpression also has 2 inputs, and we can choose what input we are sourcing from in our equations with capital letters A and B

A = A input
B = B input

Because there are two inputs, we also need to specify which input’s channel we are grabbing, for example the A or the B red channel. Therefore:

Ar = A input red channel
Bg = B input green channel

So we specify which input first and then the channel we want.

So now we can do a simple equation of B input divided by A input:

Br/Ar
Bg/Ag
Bb/Ab
Ba/Aa

Fixing the MergeExpression

Unfortunately, the MergeExpression is pure math and does not have the built-in protections that the normal Merge node does when it comes to dividing. So if we end up dividing by 0 using the MergeExpression, we will end up with nan and inf pixel values. And that is very dangerous, because it will break the image: you cannot do further math with those values, and they corrupt everything downstream.

But it’s ok, we can implement the fix ourselves, so that we can have safe values just like the Merge node

The solution is to enter a little tcl expression into the node

Ar == 0 ? Br : Br/Ar
Ag == 0 ? Bg : Bg/Ag
Ab == 0 ? Bb : Bb/Ab
Aa == 0 ? Ba : Ba/Aa

This code basically reads as follows: 

First we check whether the A input pixel is 0, since that is what we are dividing by, and dividing by 0 is where we get a problem.

So the first part asks: does the A input pixel equal 0? If yes, just skip, bypass, and revert to the B input pixel; don’t even do any math. If the A input pixel is not 0, then it proceeds to do the operation B/A and gives the result.

This will fix the issue as all the zero pixels will be skipped. This result is identical to the Merge node set to divide

Except now it is B/A and when we disable the node, it will revert to the B stream that we want.

You can just copy/paste the code below into your nuke to get the MergeDivide that I created:


MergeExpression {
inputs 2
expr0 "Ar == 0 ? Br : Br/Ar"
expr1 "Ag == 0 ? Bg : Bg/Ag"
expr2 "Ab == 0 ? Bb : Bb/Ab"
expr3 "Aa == 0 ? Ba : Ba/Aa"
name MergeDivide
label "( B / A )"
note_font_color 0xffffffff
selected true
}

Otherwise you can download the nuke tool here and add it to your toolsets:

MergeDivide.nk
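As a quick usage example: to pull the RAW Lighting out of a Diffuse pass with this MergeDivide, plug the Albedo into the A input and the Diffuse into the B input. The output is B ÷ A, in other words Diffuse ÷ Albedo = RAW Light, matching the equations above, and disabling the node simply passes the Diffuse (B) straight through.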


Multiply / Divide Concepts

  • Think of Multiply like combining, fusing, mixing, linking, joining, locking
  • Think of Divide like separating, splitting, unlinking, disjoining, unlocking
  • Start with the combined pass
  • Separate with division
  • Change individual component
  • Recombine with multiplication

How can we use Albedo and RAW Lighting as Compositors?

1.) The first reason to separate albedo and RAW lighting would be to make an adjustment to only the texture and not the RAW Lighting or vice versa.

  • If you desaturate the diffuse pass, you risk desaturating the lighting and the texture at the same time. But if you want to desaturate just the object while keeping the tinting of the lighting, you need to separate them first.

Here is an example of the Blender Room where, on one side, we desaturate the entire diffuse pass, and on the other, we desaturate only the albedo pass. Notice on the right side that the light is still warm, maintaining the warmth of the sunlight. This is what a gray object would look like in that environment.

left side: desaturating entire diffuse pass
right side: desaturating the albedo only

Here is the same example on the VRAY scene, where you can see the desaturation affecting the bounce lighting:

left side: desaturating entire diffuse pass
right side: desaturating the albedo only
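Just to illustrate the divide → change → multiply idea in a single node, here is a rough MergeExpression sketch of my own (not part of the downloadable template): with the Albedo in the A input and the Diffuse in the B input, it divides out the Albedo, swaps in a crude average-based desaturated Albedo, and multiplies it straight back, per channel:

MergeExpression {
inputs 2
expr0 "Ar == 0 ? Br : (Br / Ar) * ((Ar + Ag + Ab) / 3)"
expr1 "Ag == 0 ? Bg : (Bg / Ag) * ((Ar + Ag + Ab) / 3)"
expr2 "Ab == 0 ? Bb : (Bb / Ab) * ((Ar + Ag + Ab) / 3)"
expr3 "Ba"
name DesatAlbedoOnly
label "(B / A) * gray(A)"
}

In practice you would do this with separate nodes (a MergeDivide, a proper desaturation, then a multiply Merge) so each step stays adjustable, but the math is the same.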

2.) There are many non-linear Color Corrections or operations that you might also specifically want to do while these passes are separated, to get better or cleaner results.

Whether it is to remove light / shadow from a texture CC, or removing texture info so that you can adjust specific lighting. Operations such as:

  • keying
  • despilling / desaturating
  • gamma
  • ColorCorrect nodes
  • HueCorrects
  • HSV node – to pull color keys

3.) The next big reason would be to alter or change the texture in the scene and not need to go back to the CG department.

In this example we replace the picture on the wall with a checkerboard, but it still maintains the lighting of the scene. So you could add noise or blood textures, change billboard ads, etc, and they would still appear to live inside your shot.

left side: original painting
right side: replacing the albedo with another image

Different ways to rebuild AOVs at the complex level

Variation 01:

Add the direct, indirect, and SSS passes together first, generating your diffuse pass. Then do a divide / multiply with the albedo pass afterwards as a second step.

variation 01 rebuild structure

Variation 02

We could do the albedo divide / multiply on a per-pass basis. So basically we split out the RAW Direct and RAW Indirect first. We could make changes to the albedo and return to normal, and then add the direct, indirect, and SSS together as a second step.

variation 02 rebuild structure

Variation 03

Similar to variation 02, we do the albedo changes on a per-pass basis first. But instead of immediately reverting back to normal and then plussing the direct, indirect, and SSS together, we could instead plus them at the RAW level. The final step would just be to multiply the albedo back.

Basically, variation 02 was 3 divide, 3 multiply and 2 plus

and variation 03 is 3 divide, 2 plus, and 1 multiply

variation 03 rebuild structure
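Written out in the same plain-equation style as earlier (and assuming all three passes share the same diffuse Albedo, as described above), the only real difference between the variations is where the Albedo divide and multiply happen:

Variation 01:
Diffuse = Direct + Indirect + SSS
RAW Lighting = Diffuse ÷ Albedo   →   make changes   →   Diffuse = RAW Lighting * Albedo

Variation 02 (per pass):
Direct = ( Direct ÷ Albedo )   →   make changes   →   * Albedo   (same for Indirect and SSS)
Diffuse = Direct + Indirect + SSS

Variation 03:
Diffuse = ( (Direct ÷ Albedo) + (Indirect ÷ Albedo) + (SSS ÷ Albedo) ) * Albedo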

Realistic Proposal for CG AOV Rebuild

The above setups are more for learning, with labels and backdrops to help break down the workflow and structure.

Below is the setup that I gravitate towards when setting up CG Templates. I try my best to apply logical flow and convenience, maximizing organization and flexibility while still being clean and fast. I have space for albedo / RAW Lighting changes, but I keep it off by default and allow the user to turn it on when needed.

We see all levels of complexity being implemented:

Basic : Diffuse, Specular, Emission

Intermediate: Direct, Indirect, SubSurface Scattering

Complex: Albedo and RAW Lighting

Realistic proposal for a CG AOV Rebuild

You can find these realistic template nuke scripts of these setups for each renderer below in the downloads section. I exported the individual templates for Arnold, Redshift, Octane, and Blender.

I would recommend waiting for future videos, where I will keep expanding on the template and making it more robust. But if you are eager, feel free to download it, try it out, and modify it for your needs. More and better additions will come in future posts.


Downloads:

If you haven’t downloaded the FruitBowl Renders yet, you can do so now:

You can choose to either download all 3 FruitBowls at once:
FruitBowl_All_Renders_Redshift_Arnold_Octane.zip (1.61 GB)

Or Each FruitBowl Render Individually for faster downloads:

FruitBowl_Redshift_Render.zip (569.1 MB)

FruitBowl_Arnold_Render.zip (562.8 MB)

FruitBowl_Octane_Render.zip (515.4 MB)

The project files and the Renders are separate downloads, so if you have already downloaded the 1.1 What and Why files or the Fruitbowl Renders, there are a couple of ways to combine them:

  1. Either add the .nk script to the previous package (in the folder above SourceImages, with the other .nk scripts)
  2. Or simply drop the Render files into the SourceImages folder of the new 1.2 project folder

Project Files for this Video:

Along with the fruitbowl renders above, here are the nuke script and project files from this video, so you can follow along:

All Nuke Project Files and template scripts:
CG_Compositing_Series_MaterialAOVs_Albedos_RAW_Lighting_nkscripts.zip (155 KB)

Individual Template scripts for specific renderers:

Realistic_AOV_Bebuild_Arnold_Template.nk

Realistic_AOV_Bebuild_Redshift_Template.nk

Realistic_AOV_Bebuild_Octane_Template.nk

Realistic_AOV_Bebuild_Blender_Template.nk


Blender Cube Room Render

Blender Cube Room Diorama zip ( ~ 70MB)

original cube diorama blender files from blender demo file site:
https://www.blender.org/download/demo-files/


VRay Room Render:

Vray Room – Can be downloaded from this website, look for “download example scene” (36.6MB):

https://www.chaos.com/blog/how-to-use-cryptomatte-render-elements-in-v-ray-for-sketchup


Since I am using Stamps in the script, all renders can be swapped out at the top of the script where the “SourceImages” Backdrop is, and the rest of the script will get populated correctly.


Slide show PDF

Here is a PDF version of my slideshow in case you would like to save for future research or review:


References

VNTANA – What Are Texture Maps And Why Do They Matter For 3D Fashion?

A23D – Difference between Albedo and Diffuse map

cgdirector – What is an Albedo Map and How to use it?

Youtube – TorQueMoD - WTF are Albedo textures and how do I make them?

Youtube – Zeracheil – Texture Maps Explained – PBR Workflow

DIGITAL COMPOSITING IN THE VFX PIPELINE – PDF

steakunderwater – FAQ/Combining 3D Passes – VFXPedia

xuan prada – RAW LIGHTING AND ALBEDO AOVS IN ARNOLD

photoshop essentials – The Overlay Blend Mode in Photoshop

Bryan Ray – Blackmagic Fusion: The Texture Node

Youtube – 3DAS – 3ds Max Export Multiple Render Passes (EXR) into Photoshop Extended

Adam Lindsey – Nuke Notes

Youtube – Hugo’s Desk – How to use the VRay AOVs in Nuke (render passes)

CG Compositing Series – 2.3 Material AOVs – Direct, Indirect, SSS


Direct, Indirect, SSS (intermediate) passes

In this tutorial, we move down the levels of complexity into the Intermediate category and explore breaking diffuse and specular apart further into Direct Lighting, Indirect Lighting, and SubSurface Scattering


What is Direct Lighting?

  • Direct Lighting is when the Light Source directly illuminates a surface.  This could be considered the “first bounce” or the first time the light ray is hitting a surface.
https://en.wikipedia.org/wiki/Global_illumination

What is Indirect Lighting?

  • Indirect Lighting is all subsequent bounces of the Light.  This can be known as “Bounce Lighting”.  Light is often diffused throughout the scene, and also will pick up some of the surface colors.
https://en.wikipedia.org/wiki/Global_illumination

Direct and Indirect Passes are rendered / calculated separately and combined to equal the beauty render. Direct is only the “first bounce”, or whatever is directly in view of a light source. Indirect is all bounces after the first hit (excluding the first bounce).

https://sinmantyx.wordpress.com/2015/03/18/perfect-clamp-1/

Direct and Indirect Lighting in the real world is used to describe a harsh light source directly hitting a room or object and casting harsh shadows, versus indirect or “bounce lighting”, in which the light is aimed at a wall, ceiling, or bounce card and diffused throughout the scene, creating a more ambiently lit environment.

https://www.olamled.com/direct-lighting-vs-indirect-lighting-which-is-better/

Raytracing – Direct Lighting

https://developer.nvidia.com/discover/ray-tracing
  • Ray tracing is a render calculation used to find Direct Lighting, shadows, and specular highlights.
  • Instead of calculating from the Light Source outwards in every direction in the scene, it saves time by going from the Render Camera backwards, only needing to calculate the light rays hitting the camera that are necessary for creating the final image.
  • It starts from a pixel on the final render and follows the light path until it reflects off or through a surface/material. It then asks “Am I directly illuminated by a light source?” and if so follows the path back to the light source, and determines the distance, intensity, and color of light hitting the surface.
  • If the area is not hit by direct light, it renders as black. This calculation ends after the “first bounce”.
https://www.dualshockers.com/xbox-one-exclusive-quantum-breaks-wip-screenshots-show-advanced-effects-and-comparisons/

Global Illumination “GI” – Indirect Lighting

https://www.scratchapixel.com/lessons/3d-basic-rendering/global-illumination-path-tracing/introduction-global-illumination-path-tracing.html

  • Global Illumination or “GI” involves various techniques to calculate the indirect lighting that occurs when light bounces around in a scene.
  • This process helps to subtly illuminate shadowed areas and contributes to the overall color and intensity of the scene, especially around areas that are hit by direct lighting.
  • There are often many bounces allowed, depending on render time and settings. Each bounce inherits color from objects and materials and further distributes light into the scene.
  • The result is a more realistic and natural-looking shot, as it mimics the complex ways light interacts in the real world.

I mention this amazing Raytracing video from Josh’s Channel that breaks down how raytracing works in the renderer with great visuals. The video itself is excellent and entertaining. I highly recommend watching the whole thing if you want to know about state-of-the-art raytracing techniques.

The section I clipped from Josh’s video is between 1:24 and 2:14


Direct + Indirect = Total Lighting

https://www.dualshockers.com/xbox-one-exclusive-quantum-breaks-wip-screenshots-show-advanced-effects-and-comparisons/
https://www.dualshockers.com/xbox-one-exclusive-quantum-breaks-wip-screenshots-show-advanced-effects-and-comparisons/

Image Property of DreamWorks – SIGGRAPH 2010

Image Property of DreamWorks – SIGGRAPH 2010

Real Time Raytracing / Global Illumination – RTX Graphics

Real Time Global Illumination is becoming the new normal in Real Time Renderers such as Unreal Engine and Unity. More powerful graphics cards, such as Nvidia’s RTX 3090 or 4090 series, are able to handle these immense calculations, allowing for real-time bounce lighting and reflections instead of traditionally baked lighting in environments. This all adds significant realism to scenes and games, and shows just how important this process is to photorealism.


How can we use Direct & Indirect Passes in Compositing?

1.) Contrast / Color Correction

direct / indirect pass decontrast
  • Individual control of the mids/lows and highlights. Gives more flexibility over the color correction in order to increase or decrease contrast and better match CG to plate.

2.) Filters and FX

  • Adding glow filters to Direct Lighting pass to “punch” the lighting and adding some realistic camera lens fx.  Using direct or indirect lighting passes to drive other FX and filters.

3.) Denoising CG

  • Indirect passes (and Sub Surface Scattering) are very expensive renders, and often arrive with some unwanted render noise and chattering.  Instead of applying denoise techniques to the whole beauty render, applying denoise to only necessary passes can help preserve details and improve final quality of your renders in comp.

CG Denoising Techniques in Nuke

1.) Nuke’s Denoiser

Nuke Denoise Node

We can simply use Nuke’s built-in denoiser; it is the easiest to test and doesn’t do a bad job after some settings adjustments. No plugin or external tool required

2.) Neat Video Denoise Plugin

https://www.neatvideo.com/
Neat Video is the best denoiser on the market. It is fairly affordable, and chances are your studio already has a license. It can be a bit heavy, so I would recommend pre-rendering the results instead of leaving them live in your comp script.

3.) Motion Vector Denoise

This technique involves using the Motion Vector utility pass to distort the previous and next frames’ pixels back into the position of the current frame. Usually you see a 3-frame or 5-frame average (the current frame plus, for example, 2 frames ahead and 2 frames before).
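Conceptually, the 3-frame version of that average looks something like this, where “warp” means distorting a neighbouring frame back into the current frame’s position using the motion vector pass:

denoised( frame N ) ≈ ( warp( frame N-1 → N ) + frame N + warp( frame N+1 → N ) ) ÷ 3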

It’s also common to use a TemporalMedian Node to help smooth out noise chattering over pixels that are not changing that much frame to frame.

It’s important to note that we should always try to minimise artifacting and quality loss by isolating degrain techniques to only the problematic render passes, and not every layer or the beauty overall. Typically most of the problematic CG noise occurs on the Indirect and SubSurface Scattering Passes.

MotionVector Denoise Technique

Some Great tools for Motion Vector Denoising:

Vector Median:

https://www.nukepedia.com/gizmos/filter/vectormedian

Deflicker Velocity:

https://www.nukepedia.com/gizmos/time/deflicker-with-velocity-pass

I do believe more tools could be made using these techniques and shared with the community. If you want to have a go at using this technique to come up with different tools that reduce grainy CG while minimizing artifacting, I am sure the Nuke community would be grateful!


Downloads:

If you haven’t downloaded the FruitBowl Renders yet, you can do so now:

You can choose to either download all 3 FruitBowls at once:
FruitBowl_All_Renders_Redshift_Arnold_Octane.zip (1.61 GB)

Or Each FruitBowl Render Individually for faster downloads:

FruitBowl_Redshift_Render.zip (569.1 MB)

FruitBowl_Arnold_Render.zip (562.8 MB)

FruitBowl_Octane_Render.zip (515.4 MB)

The project files and the Renders are separate downloads, so if you have already downloaded the 1.1 What and Why files or the Fruitbowl Renders, there are a couple of ways to combine them:

  1. Either add the .nk script to the previous package (in the folder above SourceImages, with the other .nk scripts)
  2. Or simply drop the Render files into the SourceImages folder of the new 1.2 project folder

Project Files for this Video:

Along with the fruitbowl renders above, here are the nuke script and project files from this video, so you can follow along:

Nuke Project File:
CG_Compositing_Series_MaterialAOVs_Intermediate_DirectAndIndirect.nk


Blender Cube Room Diorama zip ( 3 renders ~ 70MB each, zip file total 204.4MB)

original cube diorama blender files from blender demo file site:
https://www.blender.org/download/demo-files/


Cornell Box noisy Render zip (1.55GB) exr img seq

Special thanks to Valentin Nicolini for providing the cornell box render

Please note the render is using ACES colorspace, so you’ll need to set your nuke OCIO settings to ACES to view this render correctly.


Vray Room – Can be downloaded from this website, look for “download example scene” (36.6MB):

https://www.chaos.com/blog/how-to-use-cryptomatte-render-elements-in-v-ray-for-sketchup


Vray Teapots can be downloaded from this website ~35MB:

https://www.lucamignardi.com/2-5d-relighting-nuke/


The Foundry spheres examples can be downloaded here:

https://learn.foundry.com/nuke/content/reference_guide/toolsets_nodes/toolsets_nodes.html


Since I am using Stamps in the script, all renders can be swapped out at the top of the script where the “SourceImages” Backdrop is, and the rest of the script will get populated correctly


Finally here is a PDF version of my slideshow in case you would like to save for future research or review:


Research links:

https://www.pluralsight.com/blog/film-games/understanding-global-illumination

https://en.wikipedia.org/wiki/Global_illumination

https://www.ledyilighting.com/direct-lighting-vs-indirect-lighting/

https://manual.reallusion.com/iClone_7/ENU/Content/iClone_7/Pro_7.4/27_GI/GI_Basic_Intro_and_Benefits.htm

https://sinmantyx.wordpress.com/2015/03/18/perfect-clamp-1/

https://blogs.nvidia.com/blog/direct-indirect-lighting/

https://lightingdistinctions.com/direct-light-vs-indirect-light-how-to-make-the-most-of-both/

https://3dheven.com/what-is-global-illumination-and-how-does-it-differ-from-other-rendering-techniques/

https://cg.informatik.uni-freiburg.de/course_notes/graphics2_09_pathTracing.pdf


Thank you for all your patience, I’m hoping to publish more tutorials in this series soon.
Best,
-Tony

VFX Nomads Podcast: Episode 001


Following the well-received VFX Community Nuke webinar hosted by the Foundry a couple of months ago (link here), Josh Parks, Tony Lyons, and Adrián Pueyo wanted to do more. So we decided to start recording more of our conversations.

We’re excited to introduce the VFX Nomads Podcast

Alongside us is Senior Compositor/Compositing Supervisor and good friend Gautama Murcho, who shares his wealth of knowledge, offering insights into his experiences in the VFX Industry.

The podcast is also on Spotify if you prefer!

Please Subscribe if you’d like to have new episodes on your radar. Feel free to post comments, feedback, or questions for us to talk about in the future. We hope you enjoy the first episode! 

Why Join the VFX Community? Foundry YouTube Live Panel with Josh Parks and Adrian Pueyo

I recently had the pleasure of teaming up with Josh Parks and Adrian Pueyo in a Foundry Live Panel event on YouTube Live. We talk about advice for people starting in the industry, getting into teaching, how to keep learning, and the importance of networking and community.

Josh, Adrian, and I are friends and former colleagues. I couldn’t be more proud and excited to see them evolve in their careers and see their various contributions to the VFX Compositing Community over the years. It was an honor to talk alongside them in what felt like a typical chat we might have if we all met up in person over lunch.


Back in December we decided to create a space on LinkedIn to be a place for folks to share cool nuke and compositing posts. The LinkedIn news feed can be a little bit of a fire hose of information, and if you don’t save something, it can quickly disappear into the ether. If you’d like to be part of the nuke community there, for articles, tutorials, news, and questions, we’d be happy to have you.

Foundry Nuke Compositors LinkedIn Group

I had an absolute blast speaking alongside Adrian and Josh, and in my opinion, it went by too fast! I hope you enjoy the talk and maybe get a little inspiration out of it. I really hope to chat with them again in the future.

If you’re interested in checking out Josh or Adrian’s websites and courses, here are some links:

Josh Parks:
https://www.compositingpro.com/
https://www.nukecompositingtutorials.com/

Check out Josh’s newsletter, Training Courses, and Masterclass series

Adrian Pueyo:
https://adrianpueyo.com/

Adrian just released a brand new Python Course tailored for nuke compositors on his new training platform. Check out his courses page for more info.

CG Compositing Series – 2.2 Material AOVs (Bonus) – Cross Polarization Photography


Download the PDF here ^


Cross Polarization Photography

In this Bonus video on Material AOVs, I cover Cross Polarization photography, which is a technique that allows us to separate diffuse and specular components of everyday objects. I go into detail about the lighting concepts that allows this separation to occur, and how it’s used to gather reference and textures to recreate objects in 3D.


Electromagnetic Spectrum

  • Visible Light is a section of the Electromagnetic Spectrum
  • Light / Color is represented in 2D as a Sine Wave with a specific frequency

3D Light Wave Representation

  • The 2D representation looks a bit different in 3D space, since the light waves could be oriented in any and all directions along its forward axis
  • A light beam with randomly oriented light waves is referred to as Unpolarized Light

Linear Polarization of Light

Linear Polarization isolates one specific orientation of the light wave, only allowing through the filter the portion of the light waves that were oriented in that direction


Cross Polarization of Light

  • Cross Polarization uses 2 Polarizers that are perpendicular to each other, effectively eliminating the light wave passing through.
  • The first polarizer isolates the light wave to only one orientation
  • The second polarizer, if parallel to the first, continues to allow the polarized light through, but as it becomes more perpendicular, the light gets dimmer, and eventually blocked entirely


Polarization Upon Reflection

  • When unpolarized light hits a reflective surface (with a refractive index different than the surrounding medium, such as glass, snow, or water) the specular reflection is polarized or partially polarized to the angle perpendicular to the plane of incidence. (along the surface)
  • How polarized the Reflection depends on many factors; angle of incidence, material type, etc.

Brewster’s Angle

  • At a specific angle, the specular reflection is completely polarized to the angle perpendicular to the plane of incidence. 
  • This angle is known as Brewster’s Angle.

Unpolarized Diffuse Component

  • Only the Specular Reflection has the effect of the Brewster’s Angle Polarization 
  • The Diffuse Component is Unpolarized, because it consists of newly emitted photons from excited atoms
  • This phenomenon only happens when the light is reflected off dielectric materials such as water or glass.
  • When reflection occurs on a metallic surface, neither a Brewster Angle nor refracted light exists

Polarized Specular Reflections

  • Placing a Linear Polarizer filter in front of the observer will Cross Polarize some Specular Reflections if angled correctly.  It blocks the polarized reflection light wave from shining through it
  • This is how Polarized Sunglasses are able to eliminate harsh glares and reflections from dielectric surfaces such as glass, water, snow, etc.


Cross Polarized Photography

  • If you polarize the light source, the Specular Reflection is also polarized (because it’s a mirror reflection of the light wave).
  • The Diffuse Component is unpolarized light because it is newly created lightwaves oriented randomly.  Adding a second polarizer on the Camera, means we can block the Specular Component entirely depending on the angle of the Polarizers.  When the 2 polarizers are parallel, we see Specular + Diffuse , and when they are perpendicular we will see only Diffuse.

  • The Parallel Polarized image gives us the Specular and Partial Diffuse (only the Diffuse Component of that orientation)
  • The Cross Polarized image, negates the Specular, and only shows the other half of the Diffuse Component
  • To isolate the Specular Component, take the Parallel Polarized image (Specular + Partial Diffuse) and minus the Cross Polarized image (Partial Diffuse).  The Diffuse Components cancel out, and all that is left is the Specular Component (see the equation after this list)

  • This Cross Polarization Photography allows CG Artists to collect photogrammetry data of everyday objects, and allows  them to recreate these objects in 3D with accurate Diffuse and Specular Maps for Physically Based Rendering
  • What seems just like theoretical Diffuse/Specular Render Pass separation in CG is actually a lighting phenomenon that can be separated into Diffuse and Specular Components in the real world
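In comp terms, that extraction is just a subtraction:

Specular Component = Parallel Polarized − Cross Polarized

In Nuke, for example, this could be a Merge node set to “from” (which computes B − A), with the Parallel Polarized image in the B input and the Cross Polarized image in the A input.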

Notice that Metallic Materials have no real Diffuse Color to them; they show up as completely black in the Cross Polarized result.  Metals are entirely surface-level Specular Reflections


  • Occasionally, the Diffuse Components of the Parallel Polarized and Cross Polarized Images are slightly different, (brighter or a shift in color for example)
  • In this case, when we minus the Cross Polarized result from the Parallel Polarized result, we are left with leftover color information or artifacts.  The Specular Component can be desaturated to compensate for those color artifacts
  • Remember that in Dielectric Materials the Specular Component is the same color as the light source, but Metals can sometimes tint the Specular color depending on the type of Metal

Light Stage: Cross Polarization

  • The light stage used in films captures evenly lit, cross polarized textures of various facial expressions.
  • This helps separate Diffuse and Specular and aids in tracking features of the face

References:

Here are some great websites that go into more detail about polarizations:

Reflection and Polarization of light in machine vision – Toshiba Teli Corporation

FilmicWorlds – How to Split Specular and Diffuse in Real Images

Polarization Explained: The Sony Polarized Sensor

Youtube – Cross Polarization Tutorial: Removing Specular Highlights and Reflections – Classy Dog Studios

Youtube – Cross Polarisation Explained by Grzegorz Baran

Youtube – The Key to Cleaner 3D Scans: Cross-Polarization – William Faucher

PetaPixel – Cross Polarization: What It Is and Why It Matters

optometryzone – How do Polarised glasses work?

Youtube – ScholarSwing – 16 – Class 12 – Physics – Wave Optics – Polarisation

Youtube – Khan Academy Polarization of light, linear and circular | Light waves | Physics

Youtube – xmtutor – What is Polarisation?

Youtube – xmtutor – Third Polariser

The Light Stages and Their Applications to Photoreal Digital Actors – PDF

Light Stages https://vgl.ict.usc.edu/

CG Compositing Series – 2.1 Material AOVs – Diffuse, Specular, Emission

Material AOVs

In this post we are going to be focusing in on the Material AOVs Category.

Levels of Complexity

There are different levels of complexity to rebuilding Material AOVs into the beauty, and it all depends on how much flexibility and control you want with the cost of complexity and speed.


Simple

  • Diffuse
  • Specular
  • Emission
  • Other – Refraction / True Reflection

Intermediate

  • Diffuse
    • Direct Diffuse
    • Indirect Diffuse
    • Sub Surface Scattering
  • Specular
    • Direct Specular
    • Indirect Specular
    • Reflection
    • Coat
    • Sheen
  • Emission
  • Other – Refraction / True Reflection

Complex

  • Diffuse
    • Direct Diffuse
    • Indirect Diffuse
    • Sub Surface Scattering
      • Raw Diffuse
      • Albedo / Color / Texture
  • Specular
    • Direct Specular
    • Indirect Specular
    • Reflection
    • Coat
    • Sheen
      • Raw Specular
      • Albedo / Filter / Texture
  • Emission
  • Other – Refraction / True Reflection

Diffuse, Specular, and Emission are the Foundational Categories, and the complexities are subdivisions of the Diffuse and Specular Categories

So let’s first focus on the Simple category of Diffuse, Specular, and Emission and really break those down and understand them fully. This will make the future subdivisions easier, familiarise us with terms and concepts, and help us have a grounded foundation of knowledge for what we are adjusting when using these passes.


The full presentation from the video can be downloaded here in pdf format, for those who want to keep or study it offline:

Adjectives of Specular, Diffuse, and Emission

Specular

  • Reflection
  • Mirror
  • Shiny
  • Glossy
  • Wet
  • Metallic
  • Highlights
  • Pings
  • Crisp
  • Sharp
  • Polished

Diffuse

  • Soft
  • Flat
  • Ambient
  • Natural
  • Rough
  • Earthy
  • Organic
  • Matte
  • Weathered
  • Dull

Emission

  • Bright
  • Radiant
  • Luminescent
  • Glowing
  • Self-Illuminating
  • Incandescent
  • Electric
  • Beaming
  • Shining
  • Luminous
  • Illuminated

Emission

  • Emission is any object, material, or texture that is actively emitting light into the scene
  • This includes any Lights, Super-heated metals, or Elemental FX like fire/ sparks / lightning / magic etc
  • Neon Lights, Screens, Monitors are all examples of real life Emission objects

Diffuse vs Specular

Specular – Surface Level Reflections

Diffuse – Light passes through surface and interacts with the material at a molecular level, Scattering and Absorption allow certain colors to re-exit and scatter into scene

Let’s talk about Specular first and Surface-level Reflections


Specular

Law of Reflection

  • The angle of incidence is equal to the angle of reflection

Smooth Surface – Specular Reflections

  • Light Beam = a bundle of parallel light rays
  • Light Beam remains parallel on incidence and parallel on reflection

Planar Mirror and Virtual Image

  • An Image created by planar specular reflection that does not actually exist as a physical object is referred to as a Virtual Image.
  • The Virtual Image appears to be located “behind” the mirror
  • Virtual Image distance =  Object to Mirror + Mirror to Observer.
  • Speculum is the Latin word for “mirror”, which is where “Specular” derives from

The people are witnessing a virtual image of themselves looking back, that is double the distance from them to the mirror. The light travels from them -> to the mirror, and then from the mirror -> back to their eye

Notice the reflected virtual image of the chess piece is in focus, even though the real piece (in the foreground) is out of focus. The camera lens is respecting the mirror’s virtual image distance, even though the mirror itself is out of focus.

Here you can see a ground plane mirror appearing to invert the tree in its virtual image


Rough Surface – Diffused Reflection

  • The uneven surface causes the Incidence Rays to hit at different angles
  • The outgoing reflection rays scatter in different directions

Here you see some examples of different CG materials along the Roughness / Glossiness spectrum


Wet Surface Reflections

When a surface is wet, the water fills the gaps and flattens the surface, causing a more specular reflection


Microscopic Surface Details

In these slides and examples we are discussing surfaces at a microscopic level. You might think a piece of paper looks smooth, but under a microscope it has quite a bit of roughness to it, which is what makes it so evenly lit and diffuse.


Metallic vs Dielectric Surfaces

The diffuse and specular terms describe two distinct effects going on.  Light interacts with materials differently depending on whether the material is a metal or a non-metal (Dielectric)

Dielectric – Absorbs and Scatters light

Metallic – Does not Absorb light. Only Reflects


Dielectric (Non-Metal)

  • Light penetrates the surface level and the molecules of the material absorb and scatter the light within
  • The light photons excite the atoms they hit below the surface. Some of the light is absorbed, and this energy is converted to heat. Then new light rays (photons) are emitted from the excited atoms. Those might excite nearby atoms or exit the surface as new photons. These new photons are the same color as our material.
  • The Base Color Texture (Albedo Map) – determines the color of the diffusely scattered photons from excited atoms.  It’s the color that is scattered back out and not absorbed by the material

Metallic

  • Does not Allow light to penetrate the surface and does not Absorb light. They only Reflect light on the surface
  • Metals can be thought of as positively charged ions suspended in a “sea of electrons” or “electron gas”.  Attractions hold electrons near the ions, but not so tightly as to impede the electrons flow.  This explains many of the properties of metals, like conductivity of heat and electricity
  • The incoming photon does not excite the atoms, but bounces directly off the electron gas
  • The Base Color (Albedo) is used to describe the color tint of the specular reflection
“Electron Gas” Model

Notice the Specular Reflections are tinted a certain color depending on the metal type:

On Dielectric Plastic balls, the material color changes, but notice the specular highlights are the same color, maintaining the color of the light or surrounding environment.

Comparison of a Metallic vs Dielectric Material in CG


Chrome Sphere and Diffuse Ball

Used as a reference to see what something 100% Smooth and Metal (Specular) and 100% Rough and Dielectric (Diffuse) looks like in the scene.

Resources:

DAIWTONG824@OUTLOOK.COM

10" 50/50 Chrome and Grey VFX Ball


Diffuse

The diffuse component includes light that penetrates the surface and interacts with the material’s molecules. This happens in different ways in the real world

Transmission

  • Light passing through the material / surface
  • Can be thought of as “transparency”

Refraction

  • When light changes angles as it goes through different materials or mediums

Absorption

  • When certain wavelength colors of light get absorbed by the material

Scattering

  • When light is dispersed in many directions as it comes into contact with small particles or structures in the material

Simplified Diffuse Calculation

When the distance that light travels beneath the surface is insignificant and negligible, the calculation can be simplified by the renderer and just calculated at the surface point where the light hits. It uses the Base Color Texture (Albedo) as the Diffuse Color that will scatter. 


Sub Surface Scattering

When the distance the light travels beneath the surface of the material is significant, the interior scattering must be calculated. This is referred to as Sub Surface Scattering (SSS)


Physically Based Rendering Terminology

Albedo

  • Base Color Texture Map
  • On Dielectrics (non-metal) refers to color of material
  • On Metals, refers to the color tint of the specular reflection
  • Texture map is without highlights, shadows, or ambient occlusion

Metalness Map

  • Defines which areas are metallic or not (and therefore how the Albedo Color is used). Usually black or white

Roughness (Glossiness) Map

  • How blurry or how sharp the reflection will be

Real life objects often have a diffuse and a specular component

Diffuse describes the color of the billiard balls, but the specular highlights are all the same color (reflecting the color of the light above the table)


Iridescence

  • There are also Iridescent materials that change specular color depending on viewing angle.
  • Iridescence is a kind of structural coloration due to wave interference of light in microstructures or thin films.

Nuke – Simple Material AOV setup

We can break our fruit bowl render into the 3 simple components: Diffuse, Specular, and Emission. The layers look like this:

You can download the nuke script shown in the Tutorial. I created the mini setups for 3 different renderers, Arnold, RedShift, and Octane, dividing the Beauty render up into its Diffuse, Specular, and Emission components and recombining them.
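At this simple level the recombine itself is purely additive; roughly (exact layer names vary per renderer):

Beauty = Diffuse + Specular + Emission ( + Refraction / other passes, if output separately )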

Download nuke script project file


If you haven’t downloaded the FruitBowl Renders yet, you can do so now:

You can choose to either download all 3 FruitBowls at once:
FruitBowl_All_Renders_Redshift_Arnold_Octane.zip (1.61 GB)

Or Each FruitBowl Render Individually for faster downloads:

FruitBowl_Redshift_Render.zip (569.1 MB)

FruitBowl_Arnold_Render.zip (562.8 MB)

FruitBowl_Octane_Render.zip (515.4 MB)

The project files and the Renders are separate downloads, so if you have already downloaded the 1.1 What and Why files or the Fruitbowl Renders, there are a couple of ways to combine them:

  1. Either add the .nk script to the previous package (in the folder above SourceImages, with the other .nk scripts)
  2. Or simply drop the Render files into the SourceImages folder of the new 1.2 project folder

This will help the Read nodes auto-reconnect to the sourceImages for you.


Recap

  • Emission / Illumination materials emit light
  • Specular and Diffuse can be separated by Surface Level Reflections and below surface Material Interactions
  • Each individual light ray follows the Law of Reflection.
  • The smoother a surface is, the more mirror-like the specular reflection will be.
  • The roughness of a surface will cause the reflected rays to scatter, and reflection to be blurred.
  • Metallic materials do not allow light to enter the surface.  They only reflect light
  • Dielectric materials allow light to enter the surface.  Light rays are refracted, absorbed, and scattered by the material’s molecules. Certain color wavelengths re-exit the surface in random directions, which is what we perceive as the material’s color
  • Albedo – Base Color Texture. On Dielectrics – color of material | On Metals – color tint of the specular reflection.
  • Sub Surface Scattering is when light below the surface travels a significant distance before re-exiting
  • Iridescent materials tint the color of the specular reflection depending on viewing angle.

References, Resources, Credits

Firstly, thanks to Pexels for providing such a good resource for stock reference images.

I did a hell of a lot of research on this topic before creating the video, and I really encourage you to dig a little further and explore the topics more using these great resources:


Naty Hoffman

Youtube – 2015 Siggraph Presentation – Naty Hoffman – Physics and Math of Shading | SIGGRAPH Courses

2015 Siggraph Presentation – Naty Hoffman PDF Paper:


Khan Academy

Video – Specular and diffuse reflection

Video – Specular and diffuse reflection 2

Video – Virtual Mirror


Scientific websites:

Website – The Physics Classroom – Specular vs Diffuse Reflection

Youtube – The Physics Classroom – Specular vs Diffuse Reflection

Website – Olympus LS – Interactive Explanation of Diffuse and Specular

Youtube – Specular vs. Diffuse Reflection, Incident and Reflected Angles | Geometric Optics | Doc Physics

Website – Erika Jame Site – Reflection of Light

Youtube – Specular vs Diffuse Reflection | Physics with Professor Matt Anderson | M27-05

Youtube – Physics with Professor Matt Anderson | Physics with Professor Matt Anderson

Youtube – Reflection of Light Explained Clearly – MooMooMath and Science


CGI Blog Posts

Master of Light – Vector Perez Mindmap

Website – CG Learn – Physically Based Shading

Website – PBR Texture Conversion – Marmoset

Website – Basic Theory of Physically-Based Rendering – Marmoset

Website – JORGEN HDRI Explained

Website – THE PBR GUIDE – PART 1 – Adobe Substance 3D

Website – Material physics in context of PBR texturing – HandlesPixels

Website – PHYSICALLY BASED RENDERING ENCYCLOPEDIA

Website – Tutorial: Blender – Quixel/Substance – Sketchfab: A Proper PBR Workflow

Website – Omniverse MDL Materials

Website – What is an Albedo Map and How to use it ? by Alex Glawion

Website – to buy chrome sphere and diffuse balls – VFX Super Store

Website – Physics Stack Exchange – Why don’t dielectric materials have coloured reflections like conductors?


As always thank you for watching, hope you learned something. More videos to come.

The Foundry Nuke: Gizmo Creation – Tips and Tricks

Hi everyone,

The Foundry has recently published a video I created for them earlier this year, Gizmo Creation: Tips and Tricks. Let me know if you found it useful; hopefully there will be more parts in the future. Thanks!

I will just grab the great description from the video the Foundry has provided:

Gizmos are user-created super-tools in Nuke, which are an easy way to package up parts of your node graph into a single group – or Gizmo – so that it can be shared across projects, teams, and Nuke Scripts.

In this video, Tony Lyons gives an insight into how he creates Gizmos in Nuke.

He starts with the User Knob Interface and how it’s been revamped, making Gizmo creation more straightforward and faster than ever before.

Tony then looks at how we can elevate the flexibility and versatility of our Gizmos by adding multiple inputs and a switch node so you can switch between inputs easily.

He also touches on how parameters from nodes within your Gizmos can be added to the Gizmo itself, allowing you to adjust things from the node graph without needing to dive into your tools to find a specific knob tweak.
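As a tiny illustration of that last point (with hypothetical knob names, not taken from the video): if your Gizmo exposes a user knob called blur_amount, you can link an internal Blur node’s size knob to it by giving the size knob this expression:

parent.blur_amount

That way the artist only ever touches the knob exposed on the Gizmo itself, without needing to dive inside the group.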

Want to know more about Gizmos? Check out

https://learn.foundry.com/course/1023…
https://learn.foundry.com/course/6585…

Want to see more of Tony Lyons? Check out https://www.creativelyons.com/

Interested in Nuke? Try it for free here! https://www.foundry.com/products/nuke…

Chapters:
0:00 Introduction
0:31 User Knob Editing Toolbar
1:29 Linking Parameters Between Nodes
3:27 Changing Knob Properties
4:28 Speed Up Gizmo Creation
5:13 Customising Input Names
8:01 Changing the Default Input
10:50 Switching Between Multiple Inputs
14:52 Adding Mix and Mask Options
20:13 Adding Channel Options

About Us: We are the creators of industry-standard visual effects, computer graphics and 3D design software for the Digital Design, Media and Entertainment industries. Since 1996, Foundry has strived to bring artists and studios the best tools for their workflows so they can battle industry constraints whilst staying creative. Subscribe to our channel and get the latest news, tutorials, webinars and updates from the Foundry team.