The Foundry has recently published a video I created for them earlier this year, Gizmo Creation: Tips and Tricks. Let me know if you find it useful; hopefully there will be more parts in the future. Thanks!
I'll just borrow the great description the Foundry provided for the video:
Gizmos are user-created super-tools in Nuke, which are an easy way to package up parts of your node graph into a single group – or Gizmo – so that it can be shared across projects, teams, and Nuke Scripts.
In this video, Tony Lyons gives an insight into how he creates Gizmos in Nuke.
He starts with the User Knob Interface and how it’s been revamped, making Gizmo creation more straightforward and faster than ever before.
Tony then looks at how we can elevate the flexibility and versatility of our Gizmos by adding multiple inputs and a switch node so you can switch between inputs easily.
He also touches on how parameters from nodes within your Gizmos can be added to the Gizmo itself, allowing you to adjust things from the node graph without needing to dive into your tools to find a specific knob tweak.
Chapters
0:00 Introduction
0:31 User Knob Editing Toolbar
1:29 Linking Parameters Between Nodes
3:27 Changing Knob Properties
4:28 Speed Up Gizmo Creation
5:13 Customising Input Names
8:01 Changing the Default Input
10:50 Switching Between Multiple Inputs
14:52 Adding Mix and Mask Options
20:13 Adding Channel Options
The project files and the Renders are separate downloads, so if you have already downloaded the 1.1 What and Why files or the Fruitbowl Renders, there are a couple of ways to combine them so everything works.
Either add the .nk script to the previous package (in the folder above SourceImages, with the other .nk scripts)
Or simply drop the Render files into the SourceImages folder of the new 1.2 project folder
This will help the Read nodes auto-reconnect to the sourceImages for you.
Often there are a lot of render passes to sort, and it’s useful to divide them into categories based on their function. We can divide up all the render passes by how they are used.
There are 2 Overarching Types of CG Passes:
Beauty Rebuild Passes – Will recreate the Beauty Render
Data Passes – Helper passes
There are 4 Main Categories of CG Render Passes
Material AOVs
Light Groups
Utilities
IDs
Material AOVs
Used to adjust the Material Attributes (Shader) of objects in the scene
Examples:
Diffuse, Specular, Reflection, Sub-Surface Scattering, Refraction, Texture/Color, Emission, Raw Lighting, etc.
The passes in this category should add up to recreate the beauty render, as demonstrated in the previous video
From now on in the series, if I only say “AOVs”, I am referring to this category. I will try my best to say Material AOVs, but the term is so ingrained in my vocabulary, and I don’t find the “AOV means every render pass” definition very useful.
Material AOVs are passes related to the shader or material from the 3D application. When we use these passes, we want to manipulate the material or shader of an object.
Light Groups
Examples:
Key, Rim, Fill, HDRI, Light-Emitting Objects, etc.
You can separate your lights however you like. Usually you see something like a 3-point lighting setup broken out into separate lights, along with the HDRI and light-emitting objects separated out.
We are usually adjusting light attributes such as temperature and intensity
The ID category could probably live under the Utilities Category, but I do think the separation of these 2 categories is useful.
An ID’s sole purpose is to pull out an alpha or matte channel, whereas Utilities can have many use cases beyond just that.
Many times a texture artist working on characters will make custom texture matte passes that can be rendered out as Texture RGB IDs to help isolate those important parts of the texture for adjustment in comp.
These also do not add up to the Beauty Render
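As a quick illustration of that single purpose, here is a minimal sketch of pulling one channel of an ID pass into the alpha so it can drive a mask input. The layer name is hypothetical; check what your renderer actually calls its ID layers.

```python
import nuke

# Route one channel of an ID layer into the output alpha of an old-style Shuffle,
# so it can be piped into any node's mask input.
read = nuke.selectedNode()
shuffle = nuke.nodes.Shuffle(inputs=[read])
shuffle['in'].setValue('IDs')        # hypothetical layer name for the ID pass
shuffle['alpha'].setValue('red')     # use the red ID channel as the matte
```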
Nuke Script: Breaking out Categories of the Renderers
The Nuke script is a node-graph representation of the table from the slides; I’ve broken out the passes into categories for each of the 3 render engines.
In order for the LayerContactSheet node to display just the passes for each category, I am removing all layers from the other categories.
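For reference, here is a minimal sketch of that removal step, using assumed layer names. The Remove node only exposes four channel slots, so longer categories need a second Remove chained on.

```python
import nuke

# Keep only the layers of one category so a LayerContactSheet downstream
# displays just that category.
category_layers = ['diffuse', 'specular', 'sss', 'emission']   # assumed Material AOV names

keep = nuke.nodes.Remove(inputs=[nuke.selectedNode()])
keep['operation'].setValue('keep')
for knob, layer in zip(['channels', 'channels2', 'channels3', 'channels4'], category_layers):
    keep[knob].setValue(layer)
```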
I’ve also broken out all of each category’s layers into Shuffles, with a Text of the layer name, combined into a ContactSheet. The main difference is that this contact sheet would be renderable, whereas the UI text on the LayerContactSheet is not.
In the Beauty Rebuild Passes Section, underneath we have a Material AOV rebuild and a Light Group Rebuild, showing that these passes add up to equal the Beauty.
Please look through the different categories and different Render Engines to familiarise yourself.
Tips and Tricks for making contact sheets
Split Layers
Place the FruitBowl render files into the /SourceImages/ folder of the project files and Nuke will reconnect the Read nodes.
What is a CG multi-pass Render?
A CG Render with multiple extra layers or passes that are to be used to recreate the Beauty Render and to aid in further manipulation while Compositing.
Why do we need it?
Renders are Expensive, and Changes are often necessary. It can take too long to make tweaks and hit notes if you have to re-render the image.
Sometimes it’s faster to find the “look” you are going for in Comp, rather than waiting for the Render results.
Some effects are better achieved in Comp and need additional passes to help achieve the effect in Compositing.
Terms and Definitions
Here are some useful Terms and Definitions that I will be using in this series. They are commonly used in the industry, but they can sometimes be confusing or used interchangeably, so I will define them here to help while discussing CG Compositing.
Render – The output image or final result of the export calculation from the CG software.
Renderer – The Render Engine or algorithm used to produce the render.
Render Passes – A general term for additional layers exported by the CG renderer meant to be used alongside the main render. These might come contained within a multi-pass EXR or be rendered as separate images.
SourceImages and Stamps
All of the read nodes and source images in the nuke scripts will be located at the top of each nuke script under a “Source Images” Backdrop
You will need to re-link the files in this area if you are following along
We will be using Adrian Pueyo’s “Stamps” add-on to nuke in order to populate our nuke script with the files in the source image folder.
LayerContactSheet is the easiest, fastest, and most convenient way to get a visual overview of all the passes contained in your render.
Turn on Show Layer Names to get UI labels for each pass name. This is only a GUI overlay, so you cannot render it out; it’s just for viewing purposes, but it’s great for identifying the pass names we are looking at.
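If you build these from Python, a minimal sketch looks something like this. The knob name showLayerNames is how it appears in my install; double-check it in yours.

```python
import nuke

# LayerContactSheet over the selected node, with layer-name labels turned on.
sheet = nuke.nodes.LayerContactSheet(inputs=[nuke.selectedNode()])
sheet['showLayerNames'].setValue(True)
```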
The Viewer
The Viewer shows an alphabetical dropdown list of the channels in the stream it is plugged into.
Remember to set the viewer back to RGBA when you are done viewing that layer.
You can use the PageUp PageDown hotkeys to cycle through layers in the Viewer
Along the bottom left of the viewer, it also lists all the channels separated by commas. It’s good to look at this part of the viewer occasionally to keep track of whether you’ve lost layers from the stream, or are accidentally carrying layers you no longer need.
Shuffle node
The old Shuffle node will show a list of all layers in the stream it is plugged into if you use the “in 1” dropdown.
A good way to quickly check what layers are in your stream, but not as visual as a LayerContactSheet.
ShuffleCycleLayers python script:
I wrote a tool called “ShuffleCycleLayers” which lets you use hotkeys like Page Up / Page Down or + / – to cycle through the layers of the selected Shuffle node, just like the Viewer layer cycler. Some people may find this handy if they don’t like to change the Viewer channel dropdown and would prefer to cycle through Shuffle node layers instead.
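The tool itself is in the downloads, but the core idea is simple. Here is a stripped-down sketch (not the actual ShuffleCycleLayers code) of stepping a selected old-style Shuffle to the next layer in its stream:

```python
import nuke

# Step the selected (old-style) Shuffle node's 'in' knob to the next layer
# available in its input stream, wrapping around at the end.
# Assumes the Shuffle has an input connected.
node = nuke.selectedNode()
layers = sorted(nuke.layers(node.input(0)))
current = node['in'].value()
index = layers.index(current) if current in layers else -1
node['in'].setValue(layers[(index + 1) % len(layers)])
```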
Old shuffle only displays list of layers within the stream the input is plugged into
New shuffle displays list of every layer in the nuke script
If you’d like to exclusively use the old shuffle node instead of the new shuffle node, you can add this line of code to your menu.py in your User/.nuke/ folder
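The exact line isn’t reproduced here, but one common way to do it (an assumption on my part, not necessarily the line from the original post) is to re-point the Channel > Shuffle menu entry at the old node class:

```python
# In ~/.nuke/menu.py: make the toolbar's Shuffle entry create the old Shuffle node.
import nuke
nuke.menu('Nodes').addCommand('Channel/Shuffle', "nuke.createNode('Shuffle')", icon='Shuffle.png')
```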
Split Layers is a python script that shuffles out all available layers from a selected node
This will make 1 shuffle per layer all connected to the source.
You can then just view and toggle between all the layers in the nodegraph
Selecting them all and hitting the hotkey Alt + P will toggle on the postage stamp feature in all the Shuffles, giving you visual thumbnails for all the passes. This can be useful for grouping and organising the passes.
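Conceptually, the core of such a script is just a loop over the layers. A minimal sketch (not any particular Nukepedia script):

```python
import nuke

# One Shuffle per layer, all connected to the selected source node,
# with postage stamps on so each pass shows a thumbnail in the node graph.
src = nuke.selectedNode()
for layer in sorted(nuke.layers(src)):
    shuffle = nuke.nodes.Shuffle(label=layer, inputs=[src], postage_stamp=True)
    shuffle['in'].setValue(layer)
```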
Here are some links to some various Split out layers / shuffle layers python scripts found on nukepedia:
Channels are the individual pieces that make up a Layer, or Channel Set. The most common example is the red, green, blue, and alpha channels that make up the rgba layer.
A layer must contain at least 1 channel, but often has multiple channels.
Nuke prefers layers to have a maximum of 4 channels; any more and it has difficulty displaying them in the GUI.
It becomes significantly more difficult to see the channels beyond 4 that are in 1 layer. Nuke’s interface is built around displaying 4 channels.
An individual channel in nuke is written as LayerName.ChannelName, to let you know what layer it belongs to
Depth.Z for example, in which Depth is the LayerName, and Z is the ChannelName
Whenever there is only 1 Channel, this displays in the viewer as the red channel, since it’s the first channel visible in rgba
There are also many cases where someone will just refer to it as “The Depth Channel”, where they are actually referring to the Layer, but since it commonly has only 1 channel, they are talking about the same thing.
Some nodes in Nuke deal with layers and channels differently, or prefer to deal with one vs the other.
A Shuffle dropdown displays LayerNames, for example, whereas a Copy node displays Channels, and therefore the list is much bigger since it is displaying the individual pieces of each layer.
The Blur node’s “channels” dropdown actually lists layers, and then you can toggle the channels of that layer on/off.
Basically, any node with a mask input is dealing with channels, since it only needs 1 channel to function.
The first 4 channels of a layer are mapped to, and will display as, Red, Green, Blue, and Alpha in the Viewer, regardless of the actual channel names. Any more than 4 channels in a layer and Nuke has a hard time displaying them.
A motion pass, for example, describes motion in the X and Y directions (left-right and up-down), so only 2 channels are needed in the layer, and they display as Red and Green.
A position pass, for example, is usually describing XYZ – 3D space coordinates, and sometimes the channels are actually named x, y, and z. So Position.x, Position.y, Position.z
Since X, Y, and Z are taking up the first 3 channels in this layer, they will display as red, green, blue
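If you want to poke at this from the Script Editor, here is a minimal sketch of inspecting layers vs channels, and of defining a layer with the LayerName.ChannelName convention:

```python
import nuke

node = nuke.selectedNode()
print(nuke.layers(node))   # layer names in the stream, e.g. ['rgba', 'depth', 'motion']
print(node.channels())     # individual channels, e.g. ['rgba.red', ..., 'depth.Z']

# Defining a new 3-channel layer; its channels will display as red/green/blue in the Viewer.
nuke.Layer('Position', ['Position.x', 'Position.y', 'Position.z'])
```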
AOVs
AOVs stand for Arbitrary Output Variables
Arbitrary output variables (AOVs) allow data from a shader or renderer to be output during render calculations to provide additional options during compositing. This is usually data that is being calculated as part of the beauty pass, so comes with very little extra processing cost.
They can be considered ”checkpoints” or “steps” in the rendering process. The render engine splits up many calculations while making the final image (Beauty) and is exporting these smaller steps out to disk so we can combine them and manipulate them in Comp.
The important thing to take away is that the renderer takes these “pieces”, these AOVs, and combines them together to form the final Beauty render. We are essentially trying to recreate this process with our CG rebuild, while retaining control over the individual pieces.
One of the best things about AOVs is we get them “for free” since the renderer was going to calculate them anyway.
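As a rough picture of what that rebuild looks like in practice, here is a minimal sketch that shuffles out a few assumed Material AOV layer names and plus-merges them back together. Your renderer’s names will differ.

```python
import nuke

read = nuke.selectedNode()                                              # a multi-pass EXR Read node
aov_layers = ['diffuse', 'specular', 'sss', 'refraction', 'emission']  # assumed layer names

# One Shuffle per AOV, then a chain of plus Merges: diffuse + specular + ... ~= beauty
shuffles = []
for layer in aov_layers:
    shuffle = nuke.nodes.Shuffle(label=layer, inputs=[read])
    shuffle['in'].setValue(layer)
    shuffles.append(shuffle)

rebuild = shuffles[0]
for shuffle in shuffles[1:]:
    rebuild = nuke.nodes.Merge2(operation='plus', inputs=[rebuild, shuffle])
```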
AOVs can sometimes just be a “catch-all” term for all the layers/passes you will render out.
“What AOVs are you exporting?” is a common question, and many 3D applications use the term AOVs for any render pass (even though some of them require extra work to get, like IDs or custom passes).
Differences in the Render AOVs
All the renderers are essentially doing the same thing. They are crunching the numbers, using different algorithms, and coming up with the math needed to produce the final renders.
Since all the renderers are basically doing the same steps / calculations, you just have to get used to what each renderer chooses to name these AOVs or lighting passes. All the passes will combine together and add up to the final Beauty output.
There are certain similarities or patterns between all the renderers.
Sometimes we’ll be looking at 1 renderer while explaining concepts, but they often translate over to the other renderers in some way. So keep an eye out for the patterns described and apply what is being taught to your renderer’s output.
Our renders differ in the number of AOVs exported and in the naming conventions used for the AOVs.
For a long time I wanted to release a CG compositing series. Many things stopped me in the past:
Time constraints
Access to good Render examples to work with
Not thinking I had much to contribute to the subject matter
This series will be focused on answering the following question
How do I best rebuild my CG passes, for the most flexibility as a Compositor?
Download the FruitBowl Renders for the Series
My Friend and fellow artist, Chase Bickel, has kindly provided us with some high quality renders of a FruitBowl to download for free and play around with.
Download the FruitBowl renders now, or grab them later; I will always post the links at the top of each video and blog post:
You can place the FruitBowl render files into the /SourceImages/ folder of the project folder accompanying each video and Nuke will reconnect the Read nodes.
For Example:
These Renders are full of common passes you would find in production, including:
AOVs
Lightgroups
IDs
Utility
Gameplan
Start with the Basics –> Build our way to more advanced topics –> End with a proposed template for your CG Rebuild
I will go through the different types of AOV passes you would typically find at a studio, what they are, how they are used, and how you should think about them in relation to one another. We will categorise and group different AOVs in order to define them better and help us find the commonality and patterns between renderers.
This series aims to be useful no matter what renderer your CG comes from, as the principles are the same.
Topics Covered
Differences between Additive and Subtractive Workflows, and the pros and cons of both
Explaining the difference between Material AOVs and LightGroups and how to work with them together seamlessly
This includes an elegant solution to the infamous AOV – Lightgroup paradox
I will cover the importance of making Mattes and alphas, to help us isolate, and automate our CG manipulation. We will go over common utility passes and IDs and show how to do some cool things with them
Using Full CG Render
Will not cover how to integrate CG renders into a live-action plate
Will focus on the CG rebuild and various methods of manipulation to get the most out of your CG renders
Something for everyone
Juniors, Mids, Seniors, TDs, Comp Supervisors
There will be knowledge to be learned across all levels
Perhaps this will one day be a prerequisite for a full “CG Compositing into live-action plates” course
This series will take some time to release all episodes, so please have patience
I was recently on VFXforFilmmakers channel doing a keying demo using my advanced keying template. Matt has kindly filmed some 4K ACES blackmagic footage for all of you to practice on, and we’ve included this nuke script, original footage, pre-renders and final render in the work files for you to play around and dive into.
It’s a great resource and a practical example of how I would use the techniques and templates that I developed in the series. It’s by no means the only way to key, but hopefully you will find many parts interesting and valuable.
The FREE working files can be downloaded from Matt’s website VFXforFilm.com
If you already have the package installed, it should be as easy as swapping out the old folder with the new one. In the future I plan to do a monthly release update, provided there is enough material to add, bug fix, change, etc.
Please let me know in the comments if there are any tools you think I missed that would make a good addition, as well as any bugs or unusual behavior. Thanks!
I’m happy to bring you a side project I’ve been working on for a while, The Nuke Survival Toolkit!
The Nuke Survival Toolkit is a portable tool menu for the Foundry’s Nuke with a hand-picked selection of nuke gizmos collected from all over the web, organized into 1 easy-to-install toolbar.
Many thanks to all the tool contributors out there who made this tool menu possible.
Special thanks and shout-out to Adrian Pueyo for the inspiration and guidance to be able to finish this project. This toolkit contains exclusive AP tools from Adrian and myself that have not been released publicly until now! Make sure to check out all the tools with an AP or TL tag at the end.
Select the rotation angle and size of the blur. Choose between blur and defocus. It also has a perpendicular blur that blurs perpendicular to the chosen angle.
Binary Alpha is a very simple, yet super convenient expression that I use all the time, and decided to turn into a quick gizmo.
It analyzes a choice of the RGB, RGBA, or Alpha input and outputs an Alpha Channel (or RGBA result) that is Binary, 0 or 1. Any Pixels that are not 0 will be turned into 1 (negative numbers also), and 0 will remain 0.
This is perfect for those “blur, unpremult, set alpha, blur” tricks for extending colors, or if you need a quick matte of any rgb value above or below 0, in CG render passes for example.
The good ol’ blur/unpremult/blur ❤ :
Basic properties:
The literal tcl expression is just:
r != 0 || g != 0 || b != 0 || a != 0 ? 1 : 0
Which, in English, translates to something like: “if red is not 0, or green is not 0, or blue is not 0, or alpha is not 0, then output 1, otherwise output 0.” So negative pixels will also output 1.
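If you’d rather not build the gizmo, the same idea dropped straight into an Expression node looks roughly like this (a sketch, writing the result into the alpha only):

```python
import nuke

# Expression node: alpha becomes 1 wherever any of r, g, b or a is non-zero (including negatives).
binary = nuke.nodes.Expression()
binary['channel3'].setValue('alpha')
binary['expr3'].setValue('r != 0 || g != 0 || b != 0 || a != 0 ? 1 : 0')
```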
Super simple but hopefully a time saver if you are like me and hate remembering expressions.
For those who just want to quickly see what the tool does, I’ll include a time-stamped link to that part of the Demo here: https://youtu.be/Kw3bcsmkGuk?t=2145
BlacksMatch recreates a Toe operation with Merge nodes, meaning you can now plug in an external image as your black color and it will perform the operation taking each pixel’s value into account as the blackpoint.
You can control the Multiply, which sets how far above the blackpoint the blacks match stops affecting your midtones and highlights. For example, if you plugged in 0.15 and had the multiply set to 2, then values above 0.3 remain unaffected.
The “falloff” or Gamma control just controls the falloff of the curve into your blackpoint color. If it’s really high, it will act more like a screen or plus (still ending at the blackpoint color times your multiply control), and if it’s really low, it will act more like a clamp. Your image will never fall below your blackpoint color while you manipulate the curves.
There is a preview plotscan button that helps you visualize how your curve is behaving with your settings. Just move the plotscan picker around and it will sample your blackpoint color at that area and give you an overlay of your curve. (Don’t forget to turn it off when you are done)
I personally think this is a tool every comper should have in their toolkit, as it’s by far the most controllable way to match your blacks properly!
The settings of the BlackMatch Tool and a wipe from the tutorial:
There is a full video tutorial about the BlacksMatch workflow, along with a tool demonstration at the end. If you want to know how I made it and what’s going on under the hood, please watch the whole video. It might give you some ideas of how to re-think your blacks-matching workflow.
Here is the Flow Chart for the Blacks Match Workflow:
Our goals are:
1.) Nothing should fall below the blackpoint value
2.) The blackpoint should affect the mids/highs as little as possible.
The most important thing to remember is to try not to adjust any color corrections after you apply your blackpoint.
Here are a few examples of how important matching blacks can be to your image:
Here is a picture with just some beauty-rendered statues, color corrected and placed into our scene, with no blacks match… they stand out quite a bit:
Here is a before picture if we just turn off all the color and detail and place “pure black” statues into our scene:
If we start sampling the colors in the surrounding areas of the statues and applying these as our blackpoint, still ignoring any midtone/highlight color or detail, we can actually see our statues are fitting in quite nicely. You can think of it like “if there were pure black objects in my scene in that area, what would they look like?”, and we are getting pretty decent results:
And here is the image with our matched blacks properly combined with our midtones and highlights. But there are a lot of operations that can be used to combine the blackpoint with the midtones and highlights, so let’s take a look at all of them and study the best way of combining these:
For the second part of our goal, the blacks should affect our midtones and highlights as little as possible. We have to look at different operations of how to apply our blackpoint:
Here are some graphs comparing the most common operations for matching the blackpoint and what they are doing to a 0-1 curve.
Here’s a closer look at the curves next to each other:
Here is just an overlay of all the curves on top of each other to compare them to one another:
Here is a close up of a Clamp operation:
Here’s a close up of the plus operation:
Here’s an example of a Screen operation:
A close up of a Hypot operation:
A close up of a Toe operation:
Let’s now talk about the Good, the Bad, and the Ugly… starting with the bad:
A screen and a lift do a similar operation between 0-1, but the screen’s influence stops at 1, whereas a lift actually uses 1 as a pivot point to lift the blacks and lower the highlights above 1. If you set a lift to 1, it will completely decontrast the image, sandwiching every pixel and turning the entire frame to 1.
No matter if you leave a ColorCorrect at its default range or start adjusting the range curves, the ColorCorrect produces some very strange results because of the S-curve it generates. Because it is sampling the luminance from the bg image, if you enter a blackpoint number higher than the luma key it is calculating, then the curve will first be your blackpoint color, then dip back down to the midtone color and rise back up to your highlights. This creates a really strange image that you’ll want to avoid.
Avoid Lift on a Grade, and avoid ColorCorrect nodes for adjusting your blackpoint.
The Ugly:
Both Clamp and Plus are at the extremes of our operations, and have the least appealing qualities. You can achieve much more control and better results using our remaining screen, hypot, and toe operations. Here is the gif of the curves compared to one another again so you can see that clamp and plus are at the extremes:
Screen and Hypot are perfectly fine operations, but offer limited control. And Toe… well, we can’t even input an image, and we don’t even know exactly what it is doing. There’s very little documentation on it. Let’s try to reconstruct it:
With a little bit of fiddling around, we can see the top of the toe operation is exactly double the value of the blackpoint… We need to start by re-creating a screen, which is basically an inverted luminance key, used as a mask, that is plusing our blackpoint. From there we can create a screen operation that, instead of ending at 0-1, ends at 0 to 2x the blackpoint value, and you can see in the example above we have a mini triangle encompassing our toe operation. Then it’s a matter of using a gamma of 0.5 on the luma-key mask and we have our toe.
So to reiterate:
A toe is an inverted luma-key that, instead of going 0-1, goes from 0 to ‘2x the blackpoint color’, is then gamma’d by 0.5, and is used as a mask to plus the blackpoint color over the image.
I know… that’s a mouthful. But what we take away from building this toe ourselves is that we have control over 2 things: the multiply, which is how far above the black color it affects our midtones and highlights, and the gamma curve, which controls the falloff of the curve towards the blackpoint value.
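My reading of that reconstruction, written out as a formula (a sketch only, assuming Nuke’s gamma knob applies pow(x, 1/gamma), so a gamma of 0.5 squares the mask; b is the blackpoint value and L(x) the input luminance):

```latex
\text{toe}(x) \;\approx\; x + b\left(\max\!\left(1 - \frac{L(x)}{2b},\; 0\right)\right)^{2}
```

At pure black this returns b, and its influence fades out by the time the input reaches 2b, matching the “top of the toe is double the blackpoint” observation above.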
With this knowledge and math, we can create a tool that uses Merges to do our math operations, which means we can plug in an external image as our blackpoint and expose controls for the mult (above the blackpoint) and gamma (falloff) of the curve. And now we have our BlacksMatch tool.
I’ve received a few requests for the script and images I used in the tutorial, so I’ve put together a folder on my Dropbox for you guys to download and play around with. This is a preview of the part of the script I am saving for you. It includes the statues-over-the-temple example, a couple of the simple shapes over complex black-level images, and the part of the script where I recreated the toe, with the animating graph.
I’m also adding a reference image folder, with some of the cool hazy/foggy complex black point images I found while researching this topic. Maybe they will be good practice for you to bring into nuke and play around.
Finally I am adding in the original statue exr render, with some passes: beauty, depth, position, and normals, in case you want to try and color correct and match the statue render into any of these images or your own backgrounds. Thanks to Ernest Dios for the render.