Advanced grain profiling and compositing
Film emulation lives or dies by texture. Color is usually seen as the main attribute of film, but the feel comes from grain, and grain in real film is exposure (density) dependent. That single fact is the spine of my workflow. Most plugins just slap on a fixed grain layer and fade it by region; the pattern doesn’t breathe with exposure, so it never reads as organic. My approach builds exposure-aware grain plates and composites them by luminance band so the texture tracks the image the way film does. I’m not saying that those plug-ins or Resolve grain are bad, but using them I more often than not ended up with grain that didn’t look and feel as organic as it should have.
There is an incredible paper, “A Stochastic Film Grain Model for Resolution-Independent Rendering” (A. Newson, J. Delon and B. Galerne), where they show how they modeled the grain distribution and clumping, and how the grain changes with exposure. This is the important thing to understand: grain is density dependent (density = film exposure). Which means that if we were to look at the grain structure of a piece of film at varying exposures under a microscope, we would see something like the images below, which are taken from “A Stochastic Film Grain Model for Resolution-Independent Rendering”.
It’s clear, then, that if we want to accurately replicate the appearance of film grain we need to take density into account. Before diving in, a little background on this whole process and how the idea came to be.
Livegrain is a very expensive plug-in used to texture digitally shot images. It’s priced per project and normally used by big Hollywood productions. You can check it out here. Some time ago I found a patent about their process, which happens to be pretty straightforward: they record a full-frame grey chart with a motion picture film camera at different exposures. Those different exposures of the grey chart become the grain assets that are then composited onto the digital image in their respective luminance ranges. Below are two diagrams from their patent.
This is something that isn’t normally done by Resolve grain or many other grain plug-ins. Take Resolve grain for example: it’s a fixed grain layer overlaid onto the image. The control sliders for shadows, midtones and highlights change the opacity of the overlay in that luminance region, but they don’t change the underlying grain layer.
To achieve what Livegrain does, I thought of two methods.
If you’re not interested and you just want to download some of the grain layers I generated, completely for free, you can click here. Also included with the download is the amazing Kodak Ektachrome LUT that I showcased in the previous blog post. If you’re already a subscriber to the mailing list, you should have received an email with the download link.
Method 1 (generating grain with the paper’s algorithm)
To create physically accurate, exposure-dependent grain, I took the algorithm from the Newson et al. paper and adapted it to Python. It needs to run in Colab because the algorithm is quite compute intensive and requires GPU acceleration. Depending on the size of the grain it can take around 10-20 seconds to render a 4K picture, which makes it unusable for real-time texturing. But I never set out to use it for real-time texturing; I wanted to create the grain assets for each luminance range using the physically accurate algorithm from the paper and then composite those assets onto an image in their respective luminance ranges, like the people over at Livegrain do.
To build a grain layer we need to generate as many frames as required to cover the desired length (time wise) of the asset. I normally generate 700 frames for a 28-second grain layer at 25fps. Generating the layers takes some time with the paper’s algorithm, but once it’s done for each luminance level we can run that grain in real time on moving images while respecting the density-based requirement.
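As a sketch of how that generation can be driven in Colab (the paper’s renderer itself isn’t reproduced here; render_grain_frame is a hypothetical wrapper around it), the loop below writes one plate per exposure level, 700 frames each:

```python
# Minimal batch loop for generating exposure-dependent grain plates.
# render_grain_frame() is a placeholder name for an implementation of the
# Newson et al. stochastic grain renderer; exposure is a linear grey level
# and is what drives the grain density of each plate.
import numpy as np
import imageio.v3 as iio

def render_grain_frame(exposure: float, grain_radius: float,
                       width: int, height: int, seed: int) -> np.ndarray:
    """Placeholder for the paper's GPU-accelerated grain renderer."""
    raise NotImplementedError

FRAMES = 700                    # 28 seconds at 25 fps
WIDTH, HEIGHT = 4096, 2160      # 4K plate
GRAIN_RADIUS = 0.1              # grain size, stock dependent

for exposure in (0.045, 0.09, 0.18, 0.36, 0.72):   # one plate per exposure level
    for i in range(FRAMES):
        frame = render_grain_frame(exposure, GRAIN_RADIUS, WIDTH, HEIGHT, seed=i)
        iio.imwrite(f"grain_{exposure:.3f}_{i:04d}.tif",
                    (np.clip(frame, 0.0, 1.0) * 65535).astype(np.uint16))
```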
Once the frames have been generated, we need to remove the exposure bias and center the grain layers around middle grey before we can composite them onto an image. We do this with another, this time lightweight, Python script that you can run locally. All we’re left with is the grain itself, with no exposure bias. It might not be visible in the image below, but it’s pretty clear once observed full frame.
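A minimal version of that local step could look like this (file names and bit depth are illustrative): subtract each frame’s own mean and re-center it on 0.5 so only the grain deviation survives.

```python
# Remove the exposure bias from each rendered frame: subtract the frame's own
# mean and re-center the grain around 0.5 (middle grey).
import glob
import numpy as np
import imageio.v3 as iio

for path in sorted(glob.glob("grain_*.tif")):
    frame = iio.imread(path).astype(np.float64) / 65535.0
    centered = frame - frame.mean() + 0.5        # kill the exposure bias, keep the grain
    iio.imwrite("centered_" + path,
                (np.clip(centered, 0.0, 1.0) * 65535).astype(np.uint16))
```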
NB. The paper’s algorithm was developed for monochrome grain. That doesn’t mean it can’t be used to generate colored grain layers: for a colored image/grain the paper says to apply the algorithm per channel. This works, but it generates grain that is too high in saturation. It’s easily fixed by desaturating the grain asset a bit so it behaves like film, where the grain is mainly luma driven.
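A quick way to do that desaturation on a centered RGB grain plate, as a sketch (the Rec.709 luma weights are my assumption; swap in the weights of your working space):

```python
# Pull per-channel grain back toward its luma so the plate behaves like film,
# where grain is mainly luminance-driven. sat is a taste parameter.
import numpy as np

def desaturate_grain(grain_rgb: np.ndarray, sat: float = 0.25) -> np.ndarray:
    luma = (grain_rgb @ np.array([0.2126, 0.7152, 0.0722]))[..., None]
    return luma + sat * (grain_rgb - luma)   # sat=0 -> mono grain, sat=1 -> unchanged
```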
Method 2 (profiling grain from scans)
After reading about the Livegrain workflow I was set on mimicking their process to the letter. But I quickly realized there was a downside: it’s pretty pricey. To pull it off, you’d need to run a 35mm motion picture camera (expensive to rent) at different exposures for around 30 seconds per grain asset (around 5 to 10 exposures depending on how granular you want to be), and that’s a lot of film. I wasn’t ready to spend that amount of money without even being sure it would look better than other readily available options.
Then I realized I could do something a little different without sacrificing authenticity. The real grain can be analyzed mathematically to understand the distribution of the grains within the frame, and with those statistics figured out it’s possible to reproduce as many grain frames as needed to build a grain asset. Since the analysis is done on a single frame, the whole process becomes much more budget friendly: I can create assets for the whole exposure range by shooting less than a dozen frames with a still camera. I’m basically mimicking the Livegrain process, but instead of shooting a grey chart at 24/25 frames per second for many seconds at each exposure, I shoot only one frame per exposure, analyze it, and generate additional frames with a random seed (random seed meaning the grain pattern never repeats itself) that share the statistics of the original frame.
The grain frames are shot by lighting a grey chart evenly. I chose the following exposure steps: 0EV, +2EV, +4EV, +5EV, -2EV, -4EV, -5EV. One could be more granular than this, but I don’t think it would yield better results. Shooting 0EV, +3EV, +5EV, -3EV, -5EV would already be enough, especially when placing the grain under the S-curve of the viewing LUT.
Before starting the analysis I needed to turn each frame into a zero-mean frame centered around middle grey. As shot, the frames differ not only in grain level but also in brightness. Those differences in exposure need to be equalized so that the only thing we’re left with is the difference in grain. Once we have the middle-grey-centered, zero-mean grain frames, we can start the grain analysis and grain generation using another Python script I developed. The results are incredibly accurate.
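My actual analysis script isn’t reproduced here, but the sketch below shows the general idea under simple assumptions: take the centered scan’s spectral fingerprint and re-synthesize new frames by giving that spectrum random phases, so the size and clumping statistics carry over while the pattern itself never repeats. File names are illustrative, and a monochrome plate is assumed.

```python
# Simplified stand-in for the analyse-and-regenerate step: measure the power
# spectrum of the centered scan, then shape white noise with that spectrum.
import numpy as np
import imageio.v3 as iio

scan = iio.imread("grey_chart_plus2EV_centered.tif").astype(np.float64) / 65535.0
if scan.ndim == 3:
    scan = scan.mean(axis=-1)                    # collapse to mono for the analysis

deviation = scan - 0.5                           # grain only, no exposure bias
amplitude = np.abs(np.fft.rfft2(deviation))      # spectral "fingerprint" of the grain

rng = np.random.default_rng()
for i in range(700):                             # 28 s at 25 fps
    phase = np.exp(2j * np.pi * rng.random(amplitude.shape))
    frame = np.fft.irfft2(amplitude * phase, s=deviation.shape)
    frame *= deviation.std() / frame.std()       # match the overall grain level
    out = np.clip(frame + 0.5, 0.0, 1.0)         # re-center on middle grey
    iio.imwrite(f"generated_plus2EV_{i:04d}.tif", (out * 65535).astype(np.uint16))
```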
Compositing the grain layers
These assets are zero-mean plates centered on middle grey. The composite mode to use is Linear Light at 50% opacity, and there’s no guesswork about how much strength or opacity to use if you want to match the behavior of your scanned grain. Linear Light interprets 0.5 (mid grey) as no change, because it subtracts it from the image; our plates are centered around middle grey, and the amplitude of the grain creates variations both upward and downward from it. Linear Light applies deviations from 0.5 at double strength: if the plate adds +0.1 brightness, Linear Light adds +0.2, and if it subtracts 0.1, Linear Light subtracts 0.2. That’s why, if you leave the blend layer at 100% opacity, everything comes out too strong. Setting the opacity to 50% halves the effect and brings it back to the correct level, so the plate affects the image with exactly the amplitude it was designed for.
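To make the arithmetic explicit, here is a minimal numpy sketch of Linear Light with an opacity control (my own illustration, not a Resolve internal): at 50% opacity the doubled deviation collapses back to exactly (plate - 0.5), the amplitude the plate was built with.

```python
import numpy as np

def linear_light(base: np.ndarray, plate: np.ndarray, opacity: float = 0.5) -> np.ndarray:
    """Composite a middle-grey-centered grain plate over a base image."""
    blended = base + 2.0 * (plate - 0.5)       # Linear Light: deviations from 0.5 doubled
    return base + opacity * (blended - base)   # opacity mixes back toward the untouched base

# At opacity=0.5 this reduces to base + (plate - 0.5): the plate's own amplitude.
```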
Every grain asset gets composited onto the image in its respective luminance range. It doesn’t matter which log encoding you’re using: you know which exposure level each asset corresponds to, so you simply take the luminance range of your log encoding and composite the assets onto it. We are basically masking the log image per luminance band and compositing the corresponding asset into each band. You can download the node tree or follow the video part of the lesson to understand this more clearly.
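As an illustration of the masking idea (a hedged sketch, not my actual node tree), the function below builds a soft weight per exposure band from the log image’s luminance and composites the matching plate, at Linear Light 50%, only where that band lives. The band centres and width are placeholders; map them from the code values of the log encoding you actually work in.

```python
# Per-band grain compositing on a log-encoded image (float, 0-1 range).
import numpy as np

def band_weight(luma: np.ndarray, centre: float, width: float) -> np.ndarray:
    # Soft triangular mask so neighbouring bands cross-fade instead of seaming.
    return np.clip(1.0 - np.abs(luma - centre) / width, 0.0, 1.0)

def composite_grain(log_image: np.ndarray, plates: dict, width: float = 0.15) -> np.ndarray:
    # plates maps a band centre (log code value) to its centered grain plate,
    # e.g. {0.25: minus2ev_plate, 0.42: zero_ev_plate, 0.60: plus2ev_plate}
    luma = log_image.mean(axis=-1, keepdims=True)
    out = log_image.astype(np.float64).copy()
    for centre, plate in plates.items():
        p = plate[..., None] if plate.ndim == 2 else plate
        grained = log_image + (p - 0.5)          # Linear Light at 50% opacity
        w = band_weight(luma, centre, width)
        out += w * (grained - log_image)         # add this band's contribution
    return np.clip(out, 0.0, 1.0)
```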
If the grain layers come from a negative stock, it’s a good idea to composite them onto the image before the viewing LUT/S-curve. This way the grain behaves like real film: more present in the midtones, less present in the highlights and shadows because of the S-curve. If you want the grain at the end of the chain, it could be a good idea to synthesize the grain of a print or slide film instead, which is already meant for display.
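To make the ordering concrete, here is a toy sketch; s_curve is a generic sigmoid standing in for the viewing transform, not any real LUT.

```python
# Negative-stock grain goes under the S-curve: the curve's shoulder and toe
# compress the grain in highlights and shadows, like a print compresses real
# negative grain.
import numpy as np

def s_curve(x: np.ndarray, contrast: float = 6.0) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-contrast * (x - 0.5)))

# Order of operations:
#   grained_log = composite_grain(log_image, plates)   # grain under the curve
#   display     = s_curve(grained_log)                 # then the display transform
```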
If you wish to get access to the tools I discussed and generate the most organic grain you’ve ever used, you can subscribe to my color science classes and get access to all of my tools. There you’ll find in-depth lessons on how to profile film stocks using ColourMatch, the most advanced color matching algorithm out there. You can take a look at a comparison between ColourMatch and other available options on the market here. If you’re interested, find out more here.
If you’re not interested and you just want to download some of the grain layers I generated, completely for free, you can click here. If you’re already a subscriber to the mailing list, you should have received an email with the download link.
A quick demonstration video of the grain assets you can download for free