Author Archives: Richard.blogs@ps6.com

ProPhoto RGB: Benefit or Bust?

>> I am considering a ProPhoto workflow.
>> I notice in your discussions of workflow
>> you mention only sRGB and AdobeRGB. Why
>> don’t you include ProPhoto?

I think my real answer is that I prefer not to discuss ProPhoto or leave it on anyone’s plate for consideration. That may seem harsh, but it is close to the mark, and a completely different stance from that of most experts. Why? I have some good reasons. I don’t think ProPhoto is totally implausible, I just think it is ill-advised. I try hard enough to encourage people not to go with AdobeRGB because of its potential problems, and the supposed largess of ProPhoto is just the grand illusion that people considering the AdobeRGB route might fall into… head first. ProPhoto has more potential pitfalls than even AdobeRGB, and the results can be more unpredictable, more costly to repair, or even impossible to repair.

Sure, there are some well-respected experts in the field who swear by ProPhoto and encourage people to use it, but I think their reasoning is faulty. Here’s the short list of why ProPhoto will likely do more harm than good to your images:

* A larger colorspace has just as many colors as a smaller one [whatt??!!!]
* ProPhoto’s additional range of color can’t be printed or seen on a monitor
* People/humans can’t actually distinguish the difference in the broader range of colors
* When you use ProPhoto, the additional color bandwidth necessarily gets compressed for use and output
* If you send a ProPhoto image to a service, you will do well to pray, even if you are an atheist

I’ll explain these things further below.

Broader, Not Better
A “larger” color space, contrary to popular opinion, does NOT have more colors. It just has different colors. If you are working with 16-bit images defined by 3 color components, ANY given color space will have some 281,474,976,710,656 color assignment possibilities [2^16]^3 (though I believe the actual 16-bit color count is 35,184,372,088,832 possibilities — [2^15]^3 — based on the fact that the 16th bit is actually used for some purpose other than defining color). An 8-bit image with three color components has 16,777,216 color possibilities. That is the number of color variations that each individual pixel might represent. There are 35-trillion (16-bit) or 16-million (8-bit) color possibilities, regardless of which color space you are in: ProPhoto, AdobeRGB, sRGB, MyMonitorRGB, SamsBestBitColorRGB, ColorGuruRGB, NitchColorHSB, MegaLSD… It is the bit count and number of components that determine the number of colors, not the size of the color map.
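The arithmetic above is easy to verify (a quick sketch; the 35-trillion figure assumes, as noted, that only 15 of the 16 bits per channel actually define color):

```python
# The number of representable colors depends only on bits per component
# and component count -- never on which color space the values map into.

colors_8bit = (2 ** 8) ** 3    # 3 components, 8 bits each
colors_16bit = (2 ** 16) ** 3  # 3 components, 16 bits each
colors_15bit = (2 ** 15) ** 3  # if one of the 16 bits is reserved

print(f"{colors_8bit:,}")   # 16,777,216
print(f"{colors_16bit:,}")  # 281,474,976,710,656
print(f"{colors_15bit:,}")  # 35,184,372,088,832
```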

The color space is a mapping of color. It is just a graph of a segment of the color spectrum defined by boundaries — limitations, if you like. Originally, sRGB was defined as the color space that represented what range of color you could actually see on most monitors. It is pretty much still true (with some high-end exceptions). AdobeRGB was an attempt to define a color space that encompassed both sRGB and the CMYK ranges of color. These two theoretically very different spaces overlapped handsomely, but each had its forte. AdobeRGB took the broadest of both of these color spaces, and presented the model as one that could not be rendered by either. Say that again: the AdobeRGB model could not be rendered accurately in print or on screen. It was a new animal in the color space world.

Color spaces compared. This comparison is an approximation.

Every color model is designed to represent something, from the response of a particular monitor, to a grouping of all monitors, to hypothetical landscapes of color based on theory, fact and fantasy. You might, for example, define a color model based on green, red and infrared light. Let’s call it IRGRERED. It is not a bad model simply because of what it defines; it is a model, and it defines what it should. IRGRERED has the same number of colors as ProPhoto in 8-bit images, because it has three components, with 8 bits per component. It has as many colors, but it has no interest in recording blues. Because of that limitation, IRGRERED is actually better equipped to render definition in the areas where the infrared component can influence “color” and the recorded result. The result just happens to be a recording of what we can’t see as humans, because infrared is not part of the visible spectrum. IRGRERED might serve a purpose for rendering black-and-white images with a particular infrared feel or special effects, but you probably don’t want to use it for true color photography.

What this should leave you understanding is:

  • Color space is a map of a range of color, not the number of colors
  • All color spaces rely on bit-count to determine the number of colors
  • Color space can have a specific purpose

The Invisible Portion of ProPhoto
Our fictional colorspace, IRGRERED, is made up of three components of light. One of these components, infrared, is not in the visible spectrum for humans. In essence, the color space defines light we cannot see. If we could see it, it is hard to really determine what we would see it as, and how it would affect our perception of color.

ProPhoto defines something very close to what comprises the visible spectrum for humans. It is what we can USUALLY see. People with color blindness will have limitations. People wearing rose-colored glasses will have limitations. People looking at ProPhoto through an sRGB landscape have limitations. People using a ProPhoto model to print an image will find the result compressed to fit the CMYK model (or some flavor of color used by the output device). Portions of the color in the model get limited by the model you can see it through.

The pure fact: if you work on an image in ProPhoto, you are looking at it through your monitor. Best case scenario, you have an extremely high-end monitor that claims to render AdobeRGB color. You can’t see all of ProPhoto, just what the limitations of AdobeRGB let you see. As for the rest of the color beyond the reach of AdobeRGB, you have no real idea what you are looking at. Every color correction you make leaves you with no real way to see what you have done. Nearly half (!!) of the color mapping area of the ProPhoto colorspace has to be re-mapped to fit into the AdobeRGB landscape. The percentage of actual image color that falls in that area (the larger the color space, the more you split hairs over shades of medium malachite) is a mystery. But, by area, only about 54% of the color will be accurately rendered, according to what you see on the most advanced monitor. The other 46% of the color area needs to be re-mapped. 46%!

Much of that remapping will be swapping in colors at the single-pixel level, substituting nearly indistinguishable colors a 16-millionth of the range away. Regrettably, this can lead to color puddling and banding when too much of the image color in an area of the ProPhoto image is outside the AdobeRGB range. That translates to potential loss of detail when the image is used. Yes, a loss of detail, not a gain.
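A toy illustration of why that compression costs detail (a one-dimensional sketch using a hard clip, which is cruder than the perceptual re-mapping real color engines use, but shows the same collapse):

```python
# 100 distinct "out-of-gamut" intensities get squeezed into an 8-bit
# range by clipping; many distinct source values collapse into one,
# which is the puddling/banding effect described above.

source = list(range(200, 300))           # 100 distinct intensities
clipped = [min(v, 255) for v in source]  # hard-clip to the 0-255 range

print(len(set(source)))   # 100 distinct values before
print(len(set(clipped)))  # 56 distinct values after; 44 merged into 255
```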

A further confounding factor of going with the larger space is that, in reality, the color in the fringe ranges of color models is uncommon. As suggested above, the differences in color become fractional as you move away from the center of the color model. Meanwhile, the core colors (the basic rainbow) are well covered even in simple color models. The upshot is that the fringe color won’t have a major effect on the appearance of an image. In the next section you can try an experiment to see how all this works.

What you know if you understand this:

  • Color in a mapping might not be a mapping of what you can see
  • Color mapping might reflect what you can see, but you may be limited by your medium
  • The difference in color at the fringe of a color map is not as dramatic as at the core.
  • Broader color mappings expend energy on color that can’t be used.
  • Compression from a larger color space to a smaller one can lead to a loss of detail.

How Humans See
Humans see digital images as a composite of individual pixels, not as the individual pixels — unless they are using zoom or a microscope. The fact is that humans can’t actually distinguish colors to the level that they can currently be recorded. That is not to say that things can’t possibly be improved, but it suggests that concern over the minutiae of fractional color may play less of a role than expected.

There is a pretty simple experiment you can do to see the effect of compounding color maps, and the reason you should have very little interest in ProPhotoRGB in the first place. I can generate a GIF image with 256 total colors that you will be hard-pressed to tell is not a JPEG with 16-million colors, and that in meager 8-bit sRGB. Don’t believe it? Try this now:

Try It Now

  1. Open a full color image, be sure the view is 100% (see the lower left of the image window for the viewing size).
  2. Convert the image to Indexed Color (Image > Mode > Indexed Color). There may be some slight difference in the image, but it should be negligible.
  3. Zoom into the image at about 600%+. You will see individual pixels clearly on screen.
  4. Undo the color mode change. You will see the reverse mapping occur to the original bit count.

If you did this correctly, what you see is the image converted to 256 colors (256!!) and then back to 16-million. If you view the results side-by-side at 100%, you will not be able to tell the difference. And that is the reality of color. An image is NOT made up of the extremes and fractional differences that occur in a larger color mapping; the result is made up from the composite. With 10-million or more pixels in an image, you are hardly worried about the few that represent uncommon color and fractional differences.
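The same round trip can be simulated in code. This is only a sketch: it uses a fixed, uniform 3-3-2 bit palette, far cruder than the adaptive palette Photoshop’s Indexed Color mode builds, yet even this keeps every pixel within a small distance of its original value:

```python
# Quantize 24-bit RGB pixels to a fixed 256-entry palette (3 bits red,
# 3 bits green, 2 bits blue) and measure the worst per-channel error.

def quantize(r, g, b):
    # Keep only the top bits of each channel: 2**(3+3+2) = 256 colors.
    return ((r >> 5) << 5, (g >> 5) << 5, (b >> 6) << 6)

# A synthetic spread of full-color pixels standing in for a photo.
pixels = [(i % 256, (i * 3) % 256, (i * 7) % 256) for i in range(10_000)]
mapped = [quantize(*p) for p in pixels]

palette_size = len(set(mapped))
worst_error = max(abs(a - b) for p, q in zip(pixels, mapped)
                  for a, b in zip(p, q))

print(palette_size)  # never more than 256 colors
print(worst_error)   # 63 -- worst case for the 2-bit blue channel
```

Even with this crude palette, no channel of any pixel drifts more than 63 steps out of 255; an adaptive palette does far better, which is why the side-by-side comparison above looks identical.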

The same image in 16-million and 256 colors.

Zoomed to 600% you can see the difference pixel to pixel.

Take a look at the following link, and just the table at the top. These experts all agree we can’t even see and distinguish 8-bit color (16-million colors)… SO, we make the color space bigger for what purpose? You can’t ever tell the difference anyway.

http://hypertextbook.com/facts/2006/JenniferLeong.shtml

Another interesting self-evaluation you can do is to take this test:

http://www.xrite.com/online-color-test-challenge

This should help confirm how difficult it is to actually see the difference in colors. If a few pixels in the 10,000,000+ pixel stew that we call an image are off by a 16-millionth of the range either way (and it is probably more like a 35-trillionth if you work in 16-bit), are we EVER going to see that in an image result? Not without a microscope and an extremely sensitive colorimeter.

What you know if you understand this:

  • 256 colors can look about as good as 16-million.
  • Humans might not actually be able to see more than 100,000 colors and differentiate between them.
  • You might be able to align color and tone in an exercise like the one above, but you really have to concentrate to do it. In practice, you never analyze an image at the pixel level.

…And Here Are The Real Advantages of ProPhoto
Technology would have to take a mighty leap for ProPhoto to be relatively viable in the current science of color viewing and output. That doesn’t make it impossible, just improbable.

Not only does any technological leap need to be justified, but it has to be accepted by the consumer. Marketing will have to clearly define and demonstrate the difference, and consumers will have to be willing to lay out the cash for the advantage. The advantage may be difficult to promote.

You can’t see ProPhoto on color displays because it is out of gamut for even the most advanced monitors, and you can’t print it by any current means. It leaves you working blindly on fringe color in your images, and praying the result corresponds — to whatever it is that you can’t really see in either case.

When you go to use the ProPhoto image, you have to compress the information to fit a smaller space, which can lead to a loss of detail rather than the gain expected from using a larger space. That compression can damage the image, and it will have to happen no matter how the image is used.

If you are lucky enough that the service you use understands what a color profile is (OK, usually they do these days), they can still wildly mishandle it, replace it, drop it, or do any number of things that will render your image in a far different way than you expect, even if you ALWAYS use the same service. With ProPhoto, problems handling the file and profile are pronounced, and errors will result in a dramatic loss of color and intensity.

The common conclusion experts render about ProPhoto RGB seems to be that “bigger has to be better!” when it comes to color graphing. Generally that is where the logic stops. Regrettably, that isn’t logic, it’s emotion.

I have seen articles stating that choosing ProPhoto over AdobeRGB follows the same reasoning as using 16-bit over 8-bit images. Well, no, it does not. 16-bit color has roughly 2-million TIMES the color possibilities of 8-bit color. ProPhoto has NO MORE information than AdobeRGB; it just stores it differently, in fractional differences of rendered color. 16-bit, on the other hand, allows more detail to be captured over the entire tonal range of the image, leaving more wiggle room for manipulations.
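Working those ratios out (a quick check, using the 15-usable-bits-per-channel convention discussed earlier; with a full 16 bits the ratio is larger still):

```python
# Per-channel and total ratios between "16-bit" (15 usable bits
# per channel) and 8-bit color.

levels_8 = 2 ** 8    # 256 levels per channel
levels_15 = 2 ** 15  # 32,768 levels per channel

per_channel = levels_15 // levels_8          # 128x the levels per channel
total = (levels_15 ** 3) // (levels_8 ** 3)  # ~2.1 million times the colors

print(per_channel)  # 128
print(total)        # 2,097,152
```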

In the end, using ProPhoto for its ‘benefit’ appears to be a psychological illusion: it currently serves no real purpose and gives false hope of additional fidelity that just cannot be discerned.

What you know if you understand this:

  • The technology isn’t there and it may never be.
  • If the technology were there, it might not make a difference because we are human.
  • Much of the expected benefit of using ProPhoto for image editing is based on emotion rather than logic.
  • Don’t use ProPhoto as there are no real advantages and there may be pitfalls.

ProPhoto RGB, Benefit or Bust?
When I first started answering this question, I was willing to come to a different conclusion. But the logic leads to: can’t see it, can’t use it, and it can lead to reduced — rather than enhanced — detail.

What this entry does not say but should imply:

  • Take good care of your source images with archiving
  • Use safe color workflows that give you excellent, predictable results
  • Use non-destructive editing and store the changes so you can replicate them
  • Trust what you see, not what you can’t
  • “Experts” are not always right, including me