I see that question posed a lot online, in discussion groups, even under photos. To me, liking Photoshop but not liking HDR would be analogous to liking wrenches but not liking hammers. Sure, many people wield HDR poorly, but many carpenters wield a hammer poorly too... what could that have to do with hammers? In other words, what does a poor result have to do with the value or utility of the tool?
Many people vilify HDR; I don't get it. Most people play guitar poorly, but that won't keep me from enjoying the work of many talented guitarists. Of course everyone's entitled to their opinion and their own tastes. If classical music fans want to say, "Ugh, I think I hear a guitar in that piece!", or photography fans want to say "Ugh, Galen Rowell used graduated neutral density filters!", that's their privilege. Surely HDR software will get better and better at expanding dynamic range while producing unobtrusive results, and as that value is delivered for more and more shots, I'll have terabytes of exposure-bracketed images to draw upon.
I find HDR a useful tool about 80% of the time, with maybe 5-10% of all shots I choose to keep being simply not possible without it.
My example above is pretty obvious, and results like that may be an acquired taste, but can you identify which of the following photos were processed with HDR software and which were not?
Sunset at Mono Lake, Eastern Sierra, California
Fall colors reflecting in the Merced River, Yosemite National Park
Half Dome and fall double rainbow around sunburst in Yosemite Valley
Perhaps more to the point, which do you like better? If you can't tell how an image was produced, does the process or tool used matter? As I browse folders of processed results, I often can't tell how my images were produced until I look at the file name. The images where the processing does not speak louder than the subject are the successes.
As for whether or not a result matches an original scene, no photograph does (unless the entire scene is pure white or pure black).
Consider the scene's brightness. An original scene can contain light spanning up to 17 stops; our eyes can handle about 13 stops, a film camera about 11 stops, and the best full-frame digital cameras at most 8-9 stops. The small-sensor digital cameras that most people shoot with are probably closer to 4-5 stops. How do you restore some fraction of the shadow and highlight detail in those 8-9 lost stops of light, if not with High Dynamic Range techniques?
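To make that stop arithmetic concrete, here's a rough back-of-the-envelope sketch. Each stop doubles the light, so the counts above translate into contrast ratios; the 2-stop bracket spacing is just an assumption for illustration, not a recommendation from any camera manual:

```python
# Back-of-the-envelope stop arithmetic (a sketch; the stop counts are the
# ones cited above, and the 2-stop bracket spacing is an assumed example).
scene_stops = 17    # very high-contrast scene
camera_stops = 9    # best-case single digital exposure

print(f"Scene contrast ratio:  {2**scene_stops}:1")    # 131072:1
print(f"Camera contrast ratio: {2**camera_stops}:1")   # 512:1

# Exposure bracketing fills the gap: frames spaced 2 stops apart,
# enough of them to cover the missing 8 stops of range.
spacing = 2
missing = scene_stops - camera_stops
frames = missing // spacing + 1
print(f"Bracketed frames needed: {frames}")            # 5
```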
Then consider the color. The CCD sensor has one range of colors it can sense. The RAW format the file is saved in has another range it can store. The monitor you display it on has yet another. Eventually the image gets converted to 8-bit JPEG format for printing, trying to represent the infinite shades of natural color while preserving only 256 levels each for red, green, and blue. Then there's the printer, which uses a subtractive CMYK scheme of Cyan, Magenta, Yellow, and blacK, a color space that doesn't match or directly overlap any of the others used along the way.
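To illustrate that 8-bit bottleneck with a hypothetical example (the 14-bit raw depth below is my assumption; it's common in current cameras but not something stated above):

```python
# Hypothetical sketch of the 8-bit JPEG bottleneck: a 14-bit raw sample
# (16,384 levels per channel) gets squeezed into 256 JPEG levels.
raw_levels = 2**14   # assumed raw bit depth
jpeg_levels = 2**8   # 8-bit JPEG

def to_8bit(raw_value: int) -> int:
    """Naive linear remap from a 14-bit value to an 8-bit value."""
    return raw_value * (jpeg_levels - 1) // (raw_levels - 1)

print(raw_levels // jpeg_levels)     # 64 raw tones collapse into each JPEG tone
print(to_8bit(0), to_8bit(16383))    # 0 255
```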
Then consider human perception. Our brains try to assign the brightest thing in a scene to be white. That's why we have our cameras and software adjust images to a certain "white balance" (strictly a human perceptual distortion). The ambient light available when viewing an image (outdoors in sun or shade, under incandescent or fluorescent light, etc.) seriously affects our perception of the result as well.
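As a minimal sketch of what that adjustment amounts to, here's a generic per-channel gain approach (illustrative only, not any particular camera's algorithm; the "warm gray" numbers are made up):

```python
# Minimal white-balance sketch: scale R and B so a reference patch that
# should be neutral actually comes out neutral (gains normalized to green).
def white_balance(pixel, reference):
    r_ref, g_ref, b_ref = reference
    gains = (g_ref / r_ref, 1.0, g_ref / b_ref)
    return tuple(min(255, round(c * g)) for c, g in zip(pixel, gains))

warm_gray = (220, 180, 120)                     # a gray card shot under tungsten light
print(white_balance(warm_gray, warm_gray))      # (180, 180, 180): neutral again
print(white_balance((100, 90, 60), warm_gray))  # (82, 90, 90)
```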
Our eyes and brains are not carbon copies from person to person. Some people report noticeably different perception even from eye to eye. There's truly no such thing as "reality" when it comes to white balance and human color perception.
So given the essentially insurmountable issues at every step of the process, how can anyone claim to produce an accurate copy of a given moment? What would that even mean... accurate to an electronic device, to one person, or to which subset of people, and under which ambient lighting conditions for viewing?
Must we "go with the flow" and pretend with the charlatans that accuracy is possible (or even a desirable goal), or is it safe to observe that the "just as it happened" emperor truly has no clothes?
To each his own, though... everyone is entitled to like or not like something for any reason or for no reason. HDR simply happens to be one tool that I find not just extremely useful, but indispensable. I'd sooner part with even basics like UV filters and circular polarizers.
If photographers aspire to be some sort of sterile recording device, then they can be replaced by webcams nailed to trees or doorjambs. The very definition of art requires human involvement and influence... a departure from sterile reality. Exercise your human side, your artistic side... any departure from the fruitless pursuit of perfection will set you free.
If you decide to buy Photomatix HDR software, I recommend the version with an interface to Lightroom and Photoshop, to give you the most control. You can get a 15% discount by using the coupon code JeffSullivan when you buy it from its publisher HDRsoft:
http://www.hdrsoft.com/order.php