Color Management is Ruining Your Images

This article’s gonna be a little controversial. Some of you will dismiss it outright and go back to doing things the way you’ve always done. That’s fine. But if a few of you come away questioning the systems and workflows we use to make images — then this article has done its job.

I'm not here to change your mind. Just to spark a conversation. One that might help us all make better images.

You’re probably wondering: is the title just clickbait, or is color management actually messing with your images?

Well — color management is incredibly useful. It helps map images to different standards and ensures consistency. No argument there. But there’s one core idea in modern color management that deserves a closer look:

Scene-Referred.

What does “scene-referred” actually mean?

Let’s start at the scene. We’ve got light. We’ve got objects. Their interaction produces a color stimulus — which the camera captures.

In a scene-referred workflow, the goal is to map that stimulus as faithfully as possible from the scene to the display. The idea is to represent the scene accurately — preserving the original colorimetric values.

That’s how most colorists and camera engineers think about images today. But where did this concept come from?

Kodak. 1990s.

It all started in the 1990s with Kodak, as they were building the Kodak Photo CD system. The system was quite advanced for its time and needed to be able to ingest images coming from different mediums: slide film, negative film, scans of prints, early digital cameras.

Since the images were meant to be displayed simultaneously or in sequence, the goal of the system was to minimize or eliminate the differences between each medium. Displaying sequences of images from different mediums, each with its own “look,” would have produced a jarring effect (they didn’t cut well with each other). To solve this, Kodak came up with the scene-referred concept.

Each source image would be transformed to match a “reference camera” — essentially a ground-truth capture of the scene. This way, regardless of where the image came from, it would display consistently, as if seen by the same eye.

Each medium had its own input transform, converting it into a unified, scene-referred color space.

Sound familiar? It should. This workflow is now the backbone of systems like ACES, DaVinci Wide Gamut, and even how RAW software handles input profiles. Your DSLR, mirrorless, phone camera — they all reference the scene.

But before we dive into what happens inside a digital camera and how scene-referred processing works, let’s pause and ask a bigger question:

Is an accurate image actually a better image?

To read more about this, see Digital Color Management: Encoding Solutions, Second Edition (Edward J. Giorgianni, Thomas E. Madden).

Accuracy vs Art

Just because we can display an image accurately, should we?

Does “realistic” mean “beautiful”?

Let’s be honest — images, throughout history, have rarely been literal. From cave paintings to oil portraits, images have always been interpretations of reality. Abstractions. Not measurements.

Ever seen a painting of a simple still life — maybe fruit on a table — and found it captivating?

Now try photographing that same scene with a modern camera and a colorimetrically accurate workflow. Does it hit the same? Probably not.

And that should tell us something. The images we love — the ones we feel something from — aren’t copies of the world. They’re interpretations.

Kodak understood this a long time ago. David MacAdam, one of the head researchers at Kodak, wrote an interesting paper in the ’50s.

Quality of Color Reproduction, by David MacAdam, published in Proceedings of the IRE (Vol. 39, Issue 5, May 1951):

“Two conclusions are indicated by the diagram in Fig. 1. First, optimum reproduction of skin color is not ‘exact’ reproduction. The print represented by the point closest to the square (‘exact reproduction’) is rejected almost unanimously as ‘beefy.’”

(link to the paper below)

This paper reveals two things. First: this isn’t just a random guy talking nonsense in a blog post.

Second: they already had a way of creating colorimetrically accurate images back in the ’50s. They could already engineer film stocks with those properties; that ability didn’t come with the digital age. But they stayed away from it, because cognitively it was not our preferred color reproduction. Film wasn’t engineered to reproduce accurate color; it was refined over decades with one simple goal in mind: make it look good!

The Scientism Problem

Somewhere along the way, the industry adopted a kind of scientism — the belief that accuracy equals quality.

But as we’ve seen, that’s not true. An accurate image is just that — accurate. That doesn’t mean it’s better.

Yes, there are times when colorimetric fidelity is necessary — digitizing artwork, for instance. You want precision when you're reproducing a painting. No abstraction on top of abstraction.

But when it comes to filmmaking or photography — the tools and workflows we've standardized might actually be holding us back.


Now let’s look at some of the less philosophical, more technical problems with using scene-referred colorimetry to process our images. Every digital camera (cinema cameras and still cameras alike) records a raw image that, once debayered, sits in the camera’s native color space.

From there a colorimetric matrix is applied which brings the image into the camera’s output color space:

  • ARRI: native → ArriWideGamutRGB

  • Sony: native → SGamut3.cine

  • RED: native → REDWideGamutRGB

The problem with this matrix, as Steve Yedlin has demonstrated a number of times, is that it expands everything rectilinearly (because of the nature of a matrix).

Remember: this matrix is used to bring the image into a colorimetrically accurate state as defined by the manufacturer. The problem is that, by the time skin tones reach a pleasing level of saturation, the already very saturated colors have been pushed even further out (the matrix is linear, so it scales everything by the same proportion).
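Here is a minimal numerical sketch of that behavior. The matrix below uses made-up illustrative values (its rows sum to 1 so neutrals are preserved), not any manufacturer’s real data:

```python
import numpy as np

# Illustrative camera-native -> wide-gamut matrix. Made-up values:
# each row sums to 1 so that neutral (gray) colors are preserved.
M = np.array([
    [ 1.6, -0.4, -0.2],
    [-0.1,  1.3, -0.2],
    [ 0.0, -0.3,  1.3],
])

def apply_matrix(rgb):
    """Apply the colorimetric matrix to a linear RGB triplet."""
    return M @ np.asarray(rgb, dtype=float)

# A moderately saturated color stays inside the gamut...
print(apply_matrix([0.5, 0.4, 0.3]))
# ...but an already saturated color is pushed even further out:
# one channel goes negative, i.e. outside the destination gamut.
print(apply_matrix([0.9, 0.05, 0.05]))
```

Because the matrix scales every color’s distance from neutral by the same fixed coefficients, colors that are already near the gamut edge are the ones that get shoved past it.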

Below are the same exact datasets. On the left, the colorimetric matrix has been neutralized/removed (essentially yielding camera native); on the right, SGamut3 straight out of camera. Both datasets are in Slog3. You can see how the dataset on the right explodes some of the colors near the edge of the gamut. This is log straight out of camera: there is no display output color management, which would exacerbate the issue even further.

This causes breakage in those very saturated areas of the image, which we then try to recover using gamut mapping and gamut compression. None of that would be necessary if the image had been built starting from camera native space, avoiding matrices altogether.
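To make the “patch” concrete, here is a toy sketch of what a gamut-compression step does: it pulls out-of-range values back toward an achromatic anchor until every channel fits. The function name and the mean-based anchor are my own simplifications, not any production operator:

```python
import numpy as np

def compress_toward_neutral(rgb, max_val=1.0):
    """Toy gamut compression: scale a color's distance from its own
    achromatic anchor (here, the channel mean) down until every channel
    lies inside [0, max_val]. Real operators are more sophisticated,
    but the idea is the same: patch out-of-gamut values after the fact."""
    rgb = np.asarray(rgb, dtype=float)
    neutral = rgb.mean()
    t = 1.0  # fraction of the original saturation we can keep
    for c in rgb:
        d = c - neutral
        if abs(d) < 1e-12:
            continue
        if c < 0.0:
            t = min(t, (0.0 - neutral) / d)
        elif c > max_val:
            t = min(t, (max_val - neutral) / d)
    return neutral + t * (rgb - neutral)

# An out-of-gamut color (negative green) gets pulled back in range:
print(compress_toward_neutral([1.4, -0.05, 0.1]))
# An in-gamut color passes through untouched:
print(compress_toward_neutral([0.5, 0.4, 0.3]))
```

Note that even this tiny sketch desaturates the color to rescue it, which is exactly the “tape over the leak” the article is describing.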

It’s like building a boat with poor craftsmanship and patching the leaks with tape.

Wouldn’t it be better to build a sound, non-flawed boat to begin with?

The images down below show the same image going through the Adobe Standard profile and through my Portra profile, which was built by first undoing the colorimetric matrix. (I’m not showing you this image because it’s a good picture; just notice the different behavior in the bright saturated areas and neon signs.)

Unfortunately, camera manufacturers don’t share these matrices, treating them as a trade secret. The only manufacturer that does share its colorimetric matrix with customers is ARRI. For other cameras we are forced to reverse engineer these matrices in order to build a non-flawed color pipeline. (More about this process in a future blog post.)
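As a sketch of what “undoing” the matrix involves: when the matrix is published, inverting it is a one-liner; when it isn’t, it can be estimated from paired samples via least squares. The matrix values and sample data below are illustrative placeholders, not real manufacturer data:

```python
import numpy as np

# Illustrative camera-native -> output matrix (placeholder values).
M = np.array([
    [ 1.6, -0.4, -0.2],
    [-0.1,  1.3, -0.2],
    [ 0.0, -0.3,  1.3],
])

# Case 1: the matrix is published. Undoing it is a matrix inversion.
M_inv = np.linalg.inv(M)
rgb_native = np.array([0.9, 0.05, 0.05])
rgb_output = M @ rgb_native            # what the camera hands us
recovered = M_inv @ rgb_output         # back to camera native
print(np.allclose(recovered, rgb_native))  # True

# Case 2: the matrix is not published. Estimate it from paired samples
# (e.g. chart patches expressed in both spaces) via least squares.
rng = np.random.default_rng(0)
native_samples = rng.uniform(0.05, 0.95, (24, 3))
output_samples = native_samples @ M.T  # simulated out-of-camera values
M_est_T, *_ = np.linalg.lstsq(native_samples, output_samples, rcond=None)
print(np.allclose(M_est_T.T, M))  # True
```

In practice the paired samples would come from real captures rather than a simulation, and noise means the least-squares fit is an estimate rather than an exact recovery.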

This brings us to the conclusions of this article.

Why do we like the look of film?

Philosophical answer: Because film sits in a sweet spot: a reproduction of reality that gives us the feeling of reality, without being a faithful colorimetric reproduction of it. It’s an abstraction, and that’s what makes it visually appealing.

Technical answer: Film wasn’t engineered to match real-life stimulus values; it was a system essentially engineered on a “make it look good” basis.

Being an analog medium, it doesn’t suffer from some of the problems that we experience with poorly handled digital color pipelines.

If we want to author images that are nuanced and not technically flawed, we should probably step away from the colorimetric viewpoint of image making, and from the scientistic belief that an accurate image is a better image.

To do this, the best approach is to undo the camera manufacturers’ matrices and work directly with the sensor’s native color space when building a color pipeline aimed at an artful rendition of the image, using more complex tools than the ones that come in our NLE of choice.



More to come in the next articles!

Link to MacAdam’s paper, pages 468–485: https://www.worldradiohistory.com/Archive-IRE/50s/IRE-1951-05.pdf#:~:text=...%20Quality%20of%20Color%20Reproduction,Optimum%20reproduction
