Sunday, August 28, 2011

It's Time for Digital Cameras To Depart from the Film Model


Digital cameras are computers, ones which lack even a basic predictable response to exposure. There, I said it. It's a weight off my mind. Our image-capturing computers offer far more opportunities than problems. I'll explain in a moment, but first, let me give you some background.

I worked as an application engineer at the world's leading color printer company for years. I doubt a week went by when we weren't talking about imperfect color response in CCD imaging chips (in scanners at that time), differences across color space representations, additive vs. subtractive color, and the imperfect frequency transmission response of coatings, filters and glass. You'll probably never hear this from the industry leaders; no manufacturer wants to reveal that the fundamental reality of their product is that it is imperfect. This isn't to say that cameras and today's post-processing aren't doing a relatively good job. I'm just pointing out that the entire process is a chain of small errors and attempted corrections: light passes through a filter and a lens, is recorded on a CCD, stored in a RAW format, interpreted by software for display on an LCD, converted to TIFF or RGB JPEG, and finally printed on a printer. What is not well known or generally accepted in the broader market is that there is no possibility of perfection anywhere in that chain, only a result which is good enough, as far as we can tell. In practice, things get worse from there if you're overly obsessed with trying to capture exactly "what's there" when you trigger the shutter.
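To make the "chain of small errors" point concrete, here's a toy sketch of my own (not any vendor's actual pipeline): take every possible 8-bit sRGB channel value, pass it through a single limited-precision intermediate stage, and count how many come back unchanged. Real pipelines use more bits and smarter transforms, but every hop between representations gives up a little accuracy of this same kind.

```python
import numpy as np

# Toy illustration only: one 8-bit intermediate stage stands in for any
# limited-precision hop (scanner buffer, working space, file format).

def srgb_to_linear(c):
    """Decode an sRGB-encoded value in [0, 1] (IEC 61966-2-1)."""
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(c):
    """Re-encode a linear value back to sRGB."""
    return np.where(c <= 0.0031308, 12.92 * c, 1.055 * c ** (1 / 2.4) - 0.055)

codes = np.arange(256)                      # every possible 8-bit channel value
srgb = codes / 255.0

linear = srgb_to_linear(srgb)
stored = np.round(linear * 255) / 255       # stand-in for one 8-bit linear stage
back = np.round(np.clip(linear_to_srgb(stored), 0, 1) * 255).astype(int)

survivors = int((back == codes).sum())
print(f"{survivors} of 256 values survive the round trip; the rest shift or get crushed")
```

The dark end of the scale suffers most in this toy, which is also where real cameras and printers have the hardest time.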

Let's take a simple case. Adobe Lightroom makes it easy to compare adjacent bracketed exposures side by side, and one of the main reasons I use bracketing is this: change the exposure even one stop on a digital camera and you can forget the theory you learned about stops of light. On a real digital camera, after correcting all the RAW files to theoretically identical exposure, a -1EV exposure will typically need one full "stop" of contrast increase in Lightroom to approach the color and contrast of the -2EV exposure, and the 0EV exposure needs two "stops" of contrast increase. Even then the three exposures may need further adjustment (raising black levels on the lighter frames) to approach each other, and they may never become fully identical. So the bottom line is, even on my Canon 5D Mark II, which cost a few pennies shy of $3000 (after California sales tax), the concept of "one proper exposure" pretty much goes out the window. No doubt others have noticed this, but the greater photography community hasn't fully identified, accepted or acknowledged it yet.
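Here's a toy model of why this happens, under assumptions that are entirely mine: the sensor itself is linear, but the raw converter applies a fixed rendering curve (a made-up toe-plus-gamma curve below) that the exposure correction knows nothing about. In theory, +1 EV is exactly a doubling of light, so dividing it back out should make bracketed frames identical; with any undisclosed nonlinearity in the loop, it doesn't.

```python
import numpy as np

# Toy model, not a measurement of any real camera: a linear sensor followed by
# a fixed rendering curve. The "correction" assumes a plain gamma, as a simple
# exposure slider might, so the frames refuse to line up.

scene = np.linspace(0.02, 0.5, 8)                 # relative scene luminances

def render(ev, toe=0.015, gamma=2.2):
    """Simulated camera output for a frame shifted by `ev` stops."""
    exposure = np.clip(scene * 2.0 ** ev, 0, 1)   # +1 EV = double the light
    return ((exposure + toe) / (1 + toe)) ** (1 / gamma)

def naive_correct(frame, ev, gamma=2.2):
    """Undo the exposure the way theory says: decode, divide by 2^ev, re-encode.
    Assumes a pure gamma curve and knows nothing about the toe."""
    linear_guess = frame ** gamma
    return np.clip(linear_guess / 2.0 ** ev, 0, 1) ** (1 / gamma)

for ev in (-2, -1, 0):
    print(ev, np.round(naive_correct(render(ev), ev), 3))
# In a perfectly linear world these three rows would be identical; they aren't,
# and in this toy the mismatch is largest in the shadows.
```

The size and direction of the mismatch on a real camera depend on the actual rendering curve, which the manufacturer doesn't publish, and that is exactly the problem.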

The myth of "the perfect exposure" persists, and many photographers are reluctant to use bracketing (or at least to admit that they do), as if nailing a decent exposure in one try, like a good technician, were some sort of accomplishment or prize. Really? Sure, it saved film and cost in the past, but in today's digital world with immediate previews, is being minimally competent really such a challenge, the pinnacle of achievement we should hope for? Are we not pursuing art, some final result? Is using a different or innovative technique frowned upon in other artistic mediums such as oil painting, or are the people who successfully break from tired techniques celebrated when appropriate? In the case of the single-exposure photo, given the clear issues with digital sensor response, it's simply a holdover from a different era, an overused technique that has become a less than optimal approach. The prejudice and misconceptions on this minor procedural point are so bad in the industry that, I hear, some major photo contests disqualify entries whose EXIF data shows that auto exposure bracketing was turned on. I've never seen that disclosed in any contest rules, so apparently they don't even reveal that judging prejudice (based on technical and experiential ignorance of digital) before accepting your entry fee! Is it really so taboo to overcome the fact of non-linear response to light in digital sensors? Ignorance is bliss among Luddites; protect the illusions of the old analog paradigm of photography at all costs.

The solution? In some respects, the performance characteristics of digital cameras don't really matter much. Fortunately, humans essentially lack the capacity for color memory, so you can produce whatever you think is realistic, and neither you nor anyone else will ever know any better.

There are other reasons why it's unfortunate that digital photography is simply trying to mimic the operation of an analog camera. Consider that practically all of the acknowledged leaders in the photography market, in every nook and cranny of the business, from product designers to CEOs to the most prominent and successful photographers, started their careers in film. Change means risk; assuming (even pretending) that digital photography is simply a new way of performing analog photography is safe for everyone involved. They maintain their job security. Everyone's happy. Consumers don't know any better... they're getting a better camera... without the per-frame costs of film (although we're consuming storage and computers at an alarming rate instead).

But today's digital cameras are like the first IBM Selectric typewriters with electronics inside... a computer added simply to emulate a previous manual device. Hooray, we can go back and correct any of the last 40 characters typed! Isn't life good? It took another decade or more before computers developed enough to put document design into end users' hands, but now individuals can design entire books at home.

Today's cameras are already computers, so we don't need to wait for technology; the main barrier is inertia, the laziness of established companies lacking imagination and true innovation. Any manufacturer can work towards empowering today's very different buyers and users of cameras. But that work is best started with the assumption that they can, perhaps must, break the old analog camera design rules and introduce revolutionary, much more effective image-capturing tools and techniques. What the Mac was to text-based personal computers, and what the iPhone is to traditional cellphones, a re-imagined digital camera could be to film cameras. This leads me into my next topic, where I'll provide some examples which strike me as pretty obvious to anyone who really puts a digital camera through its paces today: what Canon should have put into the Canon 5D Mark III.

5 comments:

  1. http://www.findpeopleonplus.com/profiles/jean-bernard Breu

  2. I have no idea what you're really trying to say. Are you proposing something here?

  3. What are you talking about? Most of this post doesn't make any sense at all. Are you proposing something or just complaining about the weird way digital exposure works?
    How can you say that we lack color memory? There is a reason we name colors things like Fire Engine Red and Sky Blue... Because people remember what those colors are, and understand the reference.

  4. Max, If Canon and Nikon stopped trying to simply emulate an analog film camera, what features could and would they put inside? The whole approach of capturing a single exposure is a severely "dumbed down" version of what our eyes capture and our brains perceive. How could a camera capture a scene in a way more compatible with our perception on site? First you'd have a different exposure at every point in the scene, then you'd do some post-processing to interpret and reassemble the scene as a whole (a rough code sketch of this idea follows the comments below). Film cameras couldn't do that. Today's computer-based cameras could. Apple's iPhone HDR mode is a good example of this, and it should be extremely embarrassing to Canon and Nikon that they seem to be behind in this emerging area of photographic imaging.

  5. Andrew, We are absurdly overconfident in our ability to remember color.

    "while humans can distinguish thousands (some say millions) of physically present colors, one study suggests that they can identify only 17 in memory."
    http://www.visualexpert.com/Resources/eyewitnessmemory.html

    This has been demonstrated over and over again during presentations at trade shows... show a series of colors including a few shades of "fire engine red", ask the audience to remember one, and they will be completely unable to do so with any accuracy whatsoever. So camera and printer companies get away with color representation that is unreliable, and that cannot be anticipated, compensated for, or corrected even at exposures one stop apart. Meanwhile consumers fall for marketing pitches touting deeper color spaces (14-bit RAW, etc.). If we did not cling to the myth that digital capture can be accurate (if only we calibrate color at every step in the process, for example), what functions might a computer-based imaging system be free to perform? I had to separate some of the immediate answers to that question into a follow-up post on the features Canon and Nikon should be putting into their cameras if they would only stop trying to emulate the old hunks of steel.

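To make the idea in comment 4 a bit more concrete, here is a minimal sketch of one way to reassemble bracketed frames so that each part of the scene is drawn from whichever exposure rendered it best, using OpenCV's Mertens exposure fusion. The file names are placeholders of mine, the frames are assumed to be already aligned (a tripod, or cv2.createAlignMTB() beforehand), and this is only one of several possible approaches (a Debevec-style HDR merge plus tone mapping is another).

```python
import cv2
import numpy as np

# Rough sketch of the idea in comment 4: merge bracketed frames so each pixel
# comes mostly from whichever exposure captured that part of the scene best.
# The three file names are placeholders for an aligned bracket set.

frames = [cv2.imread(name) for name in ("bracket_minus2ev.jpg",
                                        "bracket_0ev.jpg",
                                        "bracket_plus2ev.jpg")]

# Mertens exposure fusion weights each pixel by contrast, saturation and
# "well-exposedness", then blends; no separate tone-mapping step is needed.
fused = cv2.createMergeMertens().process(frames)     # float32, roughly 0..1
out = np.clip(fused * 255, 0, 255).astype(np.uint8)
cv2.imwrite("fused.jpg", out)
```

A camera maker could do this (and far better, with per-pixel metering and alignment) inside the body itself, which is exactly the kind of feature the post is arguing for.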