Dan Heller's Photography Business Blog Industry analysis from www.danheller.com

The photography world -- the business, the culture, the art, the politics, the technology.

Wednesday, July 05, 2006

The business aspects of RAW vs. JPG mode

To shoot in RAW or JPG mode on your digital camera? Do you really get the extra fine detail that 16-bit RAW mode purports to deliver, and that everyone is raving about? Or is it a case of the Emperor's New Clothes, and no one wants to say anything? Or, more precisely, do people even notice?

The answers to these questions may vary depending on what you want out of your images, but the focus of this article is the business implications. To get there, we need to start at the beginning.

For that, join me as we climb into my way-back machine for a short trip back about 25 years, to a time when music CDs were just beginning to hit the consumer market. The huge controversy of the day was that CDs were lower in audio quality than their vinyl counterparts. Audiophiles were up in arms, insisting that analog sound was so much better than digital because it had a broader dynamic range--the high pitches were crisper, the bass notes didn't sound muddy, and the "continuous" tones were more lifelike than the 1's and 0's of digital sound. Or so they said.

The problem for the audiophiles was that CDs were so much easier to deal with than vinyl records, and they didn't have those annoying skips, pops and scratches that come with everyday use. Yet, as audiophiles yelled louder, the general public showed more than indifference--they felt that CDs sounded not just better than vinyl, but even closer to real life! The marketplace responded with its wallet; any difference in actual sound quality was imperceptible to the human ear. For the record companies, this was a windfall. The cost of manufacturing CDs was pennies on the dollar compared to vinyl records, and the public was willing to pay more than double the price for the CD version of an album. Since the music business is all about volume, nothing better could have happened to the industry.

And today, it's déjà vu all over again: exactly the same story is unfolding as it did 25 years ago, but this time, the new music technology is MP3. In case you've been living in a cave (or working in your 1970s photography darkroom), MP3 is a form of compression that strips music data down to what many consider a bare minimum, all for the purpose of stuffing more songs onto smaller storage devices. This compression limits the dynamic tonal range even further, yet consumers are buying more and more music in this format. As before, few people can actually hear the difference between MP3 files and the original CDs from which they were copied, even though there is definitely a technical difference. I confess that I listen to my MP3 files on my fairly advanced stereo, and to me, Aerosmith and Bach sound just as good as they do on my analog vinyl records. (Except for the skips, pops and scratches.)

And, as before, the business implications of MP3 are dramatic. With the exception of the brief period when the music industry stupidly failed to adopt the internet as a distribution channel--an error they have since remedied--record companies have once again struck gold: sales have been further fueled by an even easier and cheaper distribution channel, not to mention a broader, worldwide audience.

What can we learn from the music industry that maps to the photography world? Two things, mainly: first, a reminder that "volume distribution" is the key to sales and profitability in the internet domain; and second, that volume requires a very efficient production workflow. What does this have to do with RAW vs. JPG shooting modes in digital photography? Everything.

The first lesson one might extrapolate from the music industry is that the "quality" of the product only matters if the audience can tell the difference. Ironically, photography technology is moving in the other direction: we get higher-resolution cameras with better dynamic range every year. Have we gone "far enough" that a profitable business can be built on today's technology? Again, as postulated at the beginning of this article, "it depends." And that dependence gets into aspects of the economics of the photography industry that don't map over directly from the music industry.

Now, I know what you're thinking: a non-professional music listener like me may not notice or care about losing dynamic range in sound, but professional photographers do care about losing dynamic range in images. Let's assume that's true for the sake of argument. So, there's no question that RAW is better, right?

Not so quick, Bucko.

As a professional photographer myself, I really do need the most dynamic range I can get out of my camera, so preserving as much of the original data as possible is critical. But, unlike sound editing, I can do a lot to an image in Photoshop that audio engineers simply can't do to music files. For all the practical differences between RAW and JPG, you can still edit individual colors in either format. The question is what degree of granularity is "enough." For that, we need to examine whether the extra data is important enough to preserve, and at what cost.

It's important to recognize that JPG itself has ten different levels of compression, each controlling how much data is "lost" when storing pixel data. When the compression level is set to 1, a lot of data is thrown out to produce a very small file. (That is, many pixels are "compressed" into a smaller space.) At the "10" setting, almost no data is discarded, yet the file still comes out smaller than its RAW counterpart. The catch is that JPG's method degrades image quality if applied repeatedly to the same file. Therefore, the "workflow" for shooting in JPG mode assumes that you only capture the initial image as a JPG, and then immediately save the image in TIF (or another standard format) during the editing process. The important thing to recognize is that effectively no data is "lost" on the initial image capture in JPG mode, which, in this respect, makes it roughly the same as RAW mode insofar as data loss.
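
To make that workflow concrete, here's a minimal sketch using Python's Pillow imaging library. Note that Pillow's quality scale runs 1-95 rather than a camera's 1-10, and the file names are hypothetical placeholders:

```python
from PIL import Image

# Open the camera's original JPG capture. Decoding is lossless, so this
# is the last time JPG compression touches the pixel data.
im = Image.open("capture_0001.jpg")  # hypothetical file name

# Save the working copy as TIF immediately; TIF is lossless, so repeated
# opens and saves during editing lose no further data.
im.save("capture_0001_working.tif")

# For comparison, re-saving as JPG applies lossy compression again:
im.save("recompressed_low.jpg", quality=30)   # heavy compression, small file
im.save("recompressed_high.jpg", quality=95)  # light compression, larger file
```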

At this point, there isn't much controversy--most (knowledgeable) people agree up to here. It's here, however, that the major differences exist, and where the controversy begins: RAW images store 16 bits of data per color channel, whereas JPG images store 8. There's also the fact that RAW mode doesn't apply any color profile or white balance setting at all, which allows you to change them after the fact in Photoshop. I hand-wave this "benefit" away, since most any professional photographer is skilled enough to know, 99% of the time, what color balance he should be using for any given shot. (And he shoots enough that the occasional miss isn't a problem.) As for color profile, I'll come back to that later.

The most important part of RAW mode, therefore, is its 16 bits per channel. At first blush, one would say that the 16-bit value is better because it's more precise--closer to the original color that was in the scene. In theory, yes, but in reality? Is it actually "better" data? And if so, can one actually perceive the difference? How significant is it? Or, could it be like the analog-versus-digital audio debate all over again, where there is technically a difference, but one too insignificant to bother worrying about?

When you have more bits per pixel--that is, more data in the image--you get what appears to be a smoother transition from one color to the next, thereby eliminating undesirable effects like banding. But can you have so much data that nothing can make use of it, including your display device? Or even your eyes? In a sense, our Emperor could possibly be wearing new clothes, and your camera may even be able to take a picture of them, but it may also be physically impossible to see either the clothes or the photo of the clothes, because neither the device nor your eyes have the capacity to see them.
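
A quick numerical sketch (in Python with numpy) shows where the extra bits matter in principle: 8 bits give 256 levels per channel, while 16 bits give 65,536. The ramp width here is an arbitrary illustrative choice.

```python
import numpy as np

width = 4096

# An idealized, perfectly smooth ramp from black to white.
ramp = np.linspace(0.0, 1.0, width)

# Quantize it the way 8-bit and 16-bit files would store it.
ramp_8 = np.round(ramp * 255).astype(np.uint8)
ramp_16 = np.round(ramp * 65535).astype(np.uint16)

# The 8-bit ramp can hold only 256 distinct values, so across 4096
# pixels each value repeats about 16 times--the source of visible banding.
print(len(np.unique(ramp_8)))    # 256
print(len(np.unique(ramp_16)))   # 4096 (every pixel gets its own level)
```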

And that's the problem with 16-bit RAW images. Almost nothing, except for some computer monitors, has the ability to render images with that much data in them. Tests on the human eye have also demonstrated that the dynamic range of colors we see, while much larger than what can be captured with digital cameras, still lies within the range of colors that can be represented with 8 bits. While it would be great to capture color and light intensity that better approximates real life, there is still another major limitation: no printing system can produce prints from 16-bit images. Regardless of what you start with, you eventually have to convert the image to 8 bits anyway. This is somewhat analogous to the fact that most people can't perceive the difference between an analog recording and a high-quality audio CD. The better data may be there, but the human ear (and most speakers) can't discern it.

The first objection people raise to this is:

"but what about the banding of colors I see in my image when I bring it up in Photoshop? I can see the banding in 8-bit images, and less of it in my 16-bit RAW files."


And here's where the rubber meets the road. The answer is simply a matter of having more room to approximate that data in the first place. Since the camera's sensor is not capable of capturing the actual colors of the real world, it has to approximate what the missing data might have been. That approximation just happens to reside in 16-bit pixel data rather than 8-bit data. Finer precision would be nice if the data coming off the sensor weren't so inaccurate in the first place. It's like asking a calculator to figure out the 10th decimal place of pi when the chip inside can't compute that many digits. Sure, you can build a wider display that appears to show more decimal places, and you can guess what those added digits might be based on the previous ones, but bad data is bad data, no matter how many decimal places you go out. In the imaging world, the additional data in 16-bit RAW images may give you a perception of accuracy, but it isn't actually more accurate. It's just fiddling with data intelligently.
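
Here's a rough numpy sketch of that point: when the sensor's reading is noisy, quantizing it at 16 bits instead of 8 barely changes the error against the true scene. The noise level chosen here is an illustrative assumption, not a measured camera figure.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

true_signal = rng.uniform(0.0, 1.0, n)                            # the real scene
noisy = np.clip(true_signal + rng.normal(0, 0.01, n), 0.0, 1.0)   # sensor reading

# Quantize the *noisy* reading at 8 and at 16 bits per channel.
q8 = np.round(noisy * 255) / 255
q16 = np.round(noisy * 65535) / 65535

# The error versus the true scene is dominated by the sensor noise,
# not by the bit depth:
print(np.abs(q8 - true_signal).mean())   # ~0.008
print(np.abs(q16 - true_signal).mean())  # ~0.008 -- nearly identical
```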

People may then respond,

"perception or not, it's a better image. And if it was done by approximation, what's wrong with that?"


If you accept approximation in your pixel values, then the question becomes: what tools are best at doing that kind of approximation? If the RAW image doesn't actually contain any more useful data than the 8-bit version, the job of doing the extended math is open to more candidates than just the camera. What if you just need to be really good at Photoshop? Indeed, the camera's firmware isn't going to be as sophisticated at color approximation as Photoshop is, even without those extra bits.

In my own experiments, I have found that I can get an enormous amount of shadow and highlight detail from my 8-bit images using Photoshop techniques that many people may be familiar with, but aren't skilled at. I've also seen people rave about 16-bit RAW mode, only to find that they weren't that experienced with more sophisticated tools in Photoshop that could have yielded perfectly good images in 8-bit mode.
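
As an illustration of the kind of 8-bit shadow work I mean, here's a minimal sketch using Pillow: a tone-curve lookup table, the same idea that underlies Photoshop's Curves tool. The gamma value and file names are illustrative assumptions.

```python
from PIL import Image

im = Image.open("underexposed.jpg")  # hypothetical 8-bit capture

# Build a 256-entry tone curve that lifts shadows more than highlights
# (a simple gamma curve; Curves lets you shape this freely).
gamma = 0.6
lut = [round(255 * (i / 255) ** gamma) for i in range(256)]

# Apply the same curve to each of the R, G, and B channels.
lifted = im.point(lut * 3)
lifted.save("shadows_lifted.tif")
```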

What RAW mode essentially becomes is a short-cut crutch for those who aren't as familiar with image editing as they could be. Personally, I have yet to find an image in both RAW and JPG format that I can't make look at least as good as the other, even if it means doing a little more hands-on work. But then again, I'm not really busting my rear-end to try.

Still, the reluctant skeptic may continue,

"but those bits are there, and the image looks better in 16 bits on my screen, and I like my results..."


Even if you don't buy the argument that the rendering on the screen is merely a matter of really good image approximation, there's still the final matter of output. All printing devices, whether ink-jet or those that project light onto photographic paper, are limited to 8-bit color profiles. In the end, you're going to have to convert the 16-bit image to an 8-bit image anyway, and once you do that conversion, you must choose a color profile for the device that will render the image. If you don't know Photoshop well enough to do this, the "convenience" of not having to apply those skills in 16-bit mode is moot.
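
The conversion itself is trivial arithmetic. A minimal numpy sketch, for the curious: since 65535 / 255 = 257, dividing each 16-bit value by 257 maps it onto the 8-bit range.

```python
import numpy as np

def to_8bit(img16: np.ndarray) -> np.ndarray:
    """Map a 16-bit-per-channel array onto 8 bits per channel."""
    assert img16.dtype == np.uint16
    return np.round(img16 / 257.0).astype(np.uint8)

# Example: one 16-bit pixel value and where it lands in 8 bits.
pixel = np.array([51400], dtype=np.uint16)
print(to_8bit(pixel))  # [200] -- roughly 256 input levels collapse into each output level
```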

Making matters worse, since printing devices require 8-bit images, we now have to revisit the whole discussion of color profiles. You know how paint companies each have their own unique set of colors, and how those colors may not match another company's? The same is true of printing devices. But here, the industry did at least one thing right (or tried to): it created a lowest-common-denominator color profile that all devices can be aware of, so as to permit a sufficient degree of interoperability between products like cameras and printers. This profile is called "sRGB," which stands for "standard RGB" (Red Green Blue).

One could conclude that shooting in the sRGB colorspace is good enough, but here is where we want to swing back in the other direction, toward maximizing image data. Although sRGB is common to all devices, it is still the lowest common denominator among 8-bit color profiles. Most printing devices have a broader dynamic range than sRGB, but they support sRGB for compatibility reasons. To access a given device's extended colors, you need access to its color profile.
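
If you're curious whether a given file is tagged with sRGB or something wider, here's a minimal Pillow sketch that reads the embedded ICC profile (the file name is a hypothetical placeholder):

```python
import io
from PIL import Image, ImageCms

im = Image.open("photo.jpg")  # hypothetical file name
icc_bytes = im.info.get("icc_profile")

if icc_bytes:
    profile = ImageCms.ImageCmsProfile(io.BytesIO(icc_bytes))
    print(ImageCms.getProfileName(profile))
else:
    # No embedded profile; most devices will simply assume sRGB.
    print("no embedded profile (treated as sRGB)")
```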

Professionals who use more advanced printers with wider color spaces (also called "color gamuts") often convert their images from the camera's color profile to that of the output device, a profile they obtain from the service provider doing the printing, from the manufacturer of the device, or from the paper manufacturer. Again, each is different; the Cymbolic Sciences LightJet has a very narrow space in the yellow range (fewer shades of yellow) on some papers, and deeper reds on others, whereas Fuji printers are very wide in the greens (making deep-forest images look very lifelike). The same effect was seen in photographic film back in the day.

So, here's where it all comes together to affect your business: as a photographer, your goal is to capture as much image data as possible while still maintaining an efficient workflow in your image-management process. While 16-bit RAW files capture good-quality data, the format introduces so much overhead in memory, disk storage, and processing time that the extremely minimal added color data simply isn't worthwhile. For most general photographers who wish to run a business, profit is all about volume, and you can't produce enough product if your time and resources are spent dealing with 16-bit RAW images. It's like having to put up with the skips, pops and scratches of vinyl records just because the analog music is "technically" better.

An obvious exception to that axiom is when the image itself is part of a mission-critical deliverable to a client. Specialized photographers, such as assignment photographers shooting products in the studio for ad agencies, are good examples. They don't shoot nearly enough images in the course of a day for the overhead of RAW to negatively impact their workflow. Similarly, food photographers and fashion/glamor photographers often need to do precise color-matching of people or products, and RAW image data can be easier to work with there because extended editing tools are available only for those formats. But such examples are rare in the broader photography industry, where the vast majority are either freelance, or work for clients who would never notice whether an image was initially shot in RAW or JPG mode.

Still, the most ardent skeptic of this discussion may say,

"16-bits is still better than 8, and since I know I'm not losing any data, I'm sticking with it."


Then at least consider this: the biggest downside of RAW mode is that it is proprietary--not just from one camera manufacturer to another, but from one iteration of your own camera to the next. Yes, the very RAW data you shoot today is not guaranteed to be readable at any given point in the future, even by your own camera's software if you ever upgrade it (or buy a new model). If you archive your images in RAW mode, they may be readable for a while, but one day they may suddenly become unreadable. Should you ever need to recover old images from backup disks, they could be totally inaccessible.

This leaves 8-bit images not just perfectly acceptable, but a vastly easier format to deal with. They're compatible with everything, usable in many contexts, and easily portable. Using JPG makes life easier in the same way that music got easier going from vinyl to CDs, and then again to MP3.

The business issues concerning RAW vs. JPG are one thing, but once you choose to work with JPG, we need to revisit the color profile issue again. When you shoot in JPG mode, you need to select a color profile, or the data may not make sense to any device. While most digital cameras won't give you an option (they force you into sRGB), professional-level cameras (like Canon's) offer a variety of them. Among that set is the "Adobe RGB (1998)" profile, which contains the largest range of colors that can be mathematically expressed in 8 bits. Using this profile, you can work with the image in Photoshop without having to do conversions. Later, when you want to make a print, you convert copies of those images to new images whose color spaces are defined by whatever device (or paper) you're going to print on. (Or use sRGB if you don't have the device's profile.)
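
For those who script their print preparation, here's a minimal sketch of that convert-a-copy step using Pillow's ImageCms module (a LittleCMS wrapper). The .icc file paths are hypothetical; real profiles come from the printer, paper, or device manufacturer, as noted above.

```python
from PIL import Image, ImageCms

im = Image.open("master_adobe_rgb.jpg")  # edited master in Adobe RGB (1998)

src = ImageCms.getOpenProfile("AdobeRGB1998.icc")          # hypothetical path
dst = ImageCms.getOpenProfile("printer_glossy_paper.icc")  # hypothetical path

# Convert a copy for this specific output device; the master stays untouched.
print_copy = ImageCms.profileToProfile(
    im, src, dst,
    renderingIntent=ImageCms.INTENT_PERCEPTUAL,  # a common choice for photos
)
print_copy.save("for_print.tif")
```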

I should note that shooting in JPG mode is different from archiving images in JPG format. TIF is a much better format for image archival, and because it is a standard, it's guaranteed to be around for a long time. Other formats are becoming available, such as PNG, and there may come a time when cameras shoot in PNG mode, making this entire controversy (and this article) entirely moot.
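
A minimal sketch of that archival step, batch-converting a folder of JPG captures to TIF copies (the folder names are hypothetical placeholders):

```python
from pathlib import Path
from PIL import Image

src_dir = Path("captures_jpg")   # hypothetical folder of original JPGs
dst_dir = Path("archive_tif")
dst_dir.mkdir(exist_ok=True)

for jpg in sorted(src_dir.glob("*.jpg")):
    with Image.open(jpg) as im:
        # TIF stores the decoded pixels losslessly; no further JPG
        # recompression ever touches the archived copy.
        im.save(dst_dir / (jpg.stem + ".tif"))
```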

If the issue of RAW vs. JPG is so academic, why the big fuss? It comes largely from two cultural dispositions shared by photographers and the technology industry. One half of the group tends to be intensely focused on minute technical details, which is an important factor in the development phase of any technology; external forces are necessary to keep a legitimate and forceful "push" on camera makers to continue improving their products. But the translation of the message they send to the rest of the world taps into the other half of the photography world: those who don't understand what they're told, and misinterpret (or misapply) the message they hear. These are often pro photographers and the media, both of whom act as amplifiers of the mistranslated message. This amplification shapes the message that camera companies' marketing departments home in on, completing the circular feedback loop. The camera manufacturers end up developing technologies their marketing departments think customers want, even though this "need" was really a misimpression, passed along by a media source that misapplied a technical review from a technophile with good intentions but a misguided sense of real-world applications.

The "fuss" comes into play when someone in the crowd (like me) actually takes a more pragmatic view of the emperor and proclaims that he is not, in fact, wearing any clothes--at least none that are visible to the naked eye.

In summary, "yes!"--the techies are right in the most academic sense that 16-bit images are "better" than 8-bit images. But in the real world, that benefit is outweighed by the work and other problems that come with RAW mode in the first place. RAW is a very important aspect of photography that must exist, if for no other reason than to keep track of what cameras do internally, but its practical use to 99% of today's real-world photographers is close to nil.