*This is quite an old article now! The AHDRIA links are down/broken, I assume due to software like DSLR Remote and other similar, user-friendly options becoming available.*
An interview with Sean O’Malley
By Jay Weston
Can you introduce yourself to our readers?
As for my background, I have a B.S. in Computer Science, but studied computer graphics and computer vision mainly as a hobbyist until I emerged from undergraduate study. Prior to college, I did some contract work but mainly programmed for problem-solving purposes (going from GW-BASIC to QBasic to Turbo Pascal to C to C++), unleashing my creations on BBS’s, then FTP, then the WWW as time went by. For instance, I authored an early MIME encoder/decoder back when support for binary e-mail attachments wasn’t as ubiquitous as it is today. At various points in time I’ve been involved in projects covering such topics as music typesetting, topography/remote sensing, camera-control scripting, L-systems, and, of course, high dynamic range imaging. In the computer-graphics community, my primary contributions include my work in HDRI, and various tools for the POV-Ray raytracer and Terragen landscape engine.
Currently, I am a graduate student of computer science at the University of Houston (Texas). My “official” research involves biomedical imaging; I’ve published papers and presented work on such topics as intravascular ultrasound (i.e., for heart disease prevention), morphological neuron reconstruction from microscopy data, and seismic data analysis. While this is almost purely computer-vision related, I still keep up with the latest developments in the computer graphics community and high dynamic range imaging in particular.
How did you initially get into HDR and what were your reasons for developing software such as hdr2tiff and AHDRIA?
I’ve been curious about HDRI for some time, but there were several factors that led to my current interest. In general, I wanted a way to import natural (or natural-looking) light into computer-generated scenes. In 2000, I added HDRI output to Terragen as a plugin, and sometime after that started working on ways to get this lighting data into POV-Ray (even though unpatched POV-Ray is inherently “low dynamic range”).
Of course, computer-generated terrains and skies can keep one occupied for only so long (though Terragen creates the most realistic terrains and skies of almost any software of its type); I wanted a way to gather my own HDR data. At this point, everyone who starts thinking in this direction runs into a huge price wall: commercial systems for HDRI acquisition are extremely expensive and can’t be used for much else (unlike standard cameras). If you have some background in computer graphics/computer vision, you know it’s possible to do HDR with a camera (digital or otherwise), but this requires much skill and patience.
Fortunately, at the time I noticed that Canon cameras have a high level of remote camera control and an API that seemed designed specifically with “camera hackers” in mind. After that, the first (barely-stable) version of AHDRIA was released at the end of July, 2004.
Can you explain what function AHDRIA serves as a HDR tool?
AHDRIA’s purpose is to allow almost anyone to get involved in HDR photography. Industrial users of HDRI have expensive (but excellent) HDRI capture tools. Academic users of HDRI can design their own hardware and crunch image data to obtain adequate results. Everyone else is usually out of luck. It is for these users–those with a digital camera, a computer (a laptop if you want to be able to photograph anything outside your office), and a desire to create HDR images–that AHDRIA and AHDRIC were designed.
What is the basic workflow and process for creating HDR images with AHDRIA?
To create an HDR image with AHDRIA/AHDRIC, the camera is connected to the computer through its USB interface and activated as usual. AHDRIA can then communicate with the camera.
If I’m shooting a scene for which I’m not sure about the light levels, I adjust the lower and upper shutter speeds and check the viewfinder to see if an adequate light range is being captured. (Though, of course, care must be taken when “eyeballing” since the viewfinder displays only a very rough guesstimate of the resulting image.) I usually shoot with low ISO to limit the amount of noise in the scene and, while HDR images are very forgiving when it comes to post-processing for white balance, I usually apply a white balance setting before shooting anyway (to get a better idea of the “result” and maybe save some work later). Unless lighting is a problem, I leave the aperture small so that most of the scene remains sharp and lens flares are better-defined.
After AHDRIA is set up, it can start capturing images. Once finished, more images can be captured immediately, or the JPEG images from a previous capture can be synthesized using AHDRIC (either by running AHDRIC and locating an ExposureInfo.txt file generated by AHDRIA, or by dragging that file onto AHDRIC and waiting for it to finish). The result is a brand-new HDR file, ready for use.
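AHDRIA’s actual capture logic isn’t shown in the interview, but the basic idea of stepping the shutter between a lower and upper speed can be sketched as follows. This is a hypothetical illustration only: the function name, the stop spacing, and the speed range are all assumptions, not AHDRIA’s real parameters.

```python
# Hypothetical sketch of an exposure ladder, as a bracketing capture
# tool might generate one. Not AHDRIA's actual code.

def exposure_ladder(slowest, fastest, stops=1.0):
    """Return shutter speeds (in seconds) from slowest to fastest,
    spaced `stops` EV apart (each stop halves the captured light)."""
    speeds = []
    t = slowest
    while t >= fastest:
        speeds.append(t)
        t /= 2 ** stops
    return speeds

# e.g. from 1 s down to 1/1024 s in 2-stop steps:
for t in exposure_ladder(1.0, 1 / 1024, stops=2.0):
    print(t)
```

A capture tool would then shoot one frame at each of these speeds, holding aperture and ISO fixed so that only exposure time varies between frames.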
Are AHDRIA/AHDRIC standalone, or do they require HDR Shop or other 3rd-party software?
AHDRIA/AHDRIC do not require any 3rd-party software (other than Canon’s drivers, of course) as they are completely self-contained HDR capture and synthesis tools. However, other tools can be used in lieu of AHDRIC if needed. Specifically, the series of JPEG images AHDRIA produces can be synthesized into an HDR image in one of three ways: (1) automatically using AHDRIC, (2) using a generic camera curve (modeled by a gamma function), or (3) using a hand-designed camera curve. The latter two methods are available in HDR Shop. Method #1 is recommended since AHDRIC will automatically read the camera parameters written by AHDRIA and handle some of the special cases arising from this type of HDRI acquisition. If more control is needed, however, methods #2 and #3 are also useful in special cases.
Method #2 typically doesn’t produce any banding in the resulting images (a frequent problem) as the gamma curves are always “smooth,” but colors may be unbalanced or washed-out as the curves are not calibrated to the particular camera. Method #3 requires some skill, but is useful for those users with experience (and practice!) who have been unable to reconstruct a particular HDR image using other methods.
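As a rough illustration of method #2, the merge can be sketched as inverting an assumed gamma response for each exposure and taking a weighted average of the resulting radiance estimates. This is a generic sketch, not AHDRIC’s actual algorithm; the gamma value and the hat-shaped weighting (which distrusts near-clipped pixels) are assumptions.

```python
# Minimal per-pixel HDR merge using a generic gamma "camera curve".
# GAMMA = 2.2 is an assumed response, not calibrated to any camera.

GAMMA = 2.2

def weight(z):
    """Hat-shaped weight for an 8-bit pixel value: zero at the
    clipping extremes (0 and 255), largest mid-range."""
    return min(z, 255 - z)

def merge_pixel(samples):
    """samples: list of (pixel_value, exposure_seconds) for one pixel
    across the exposure series. Returns a relative radiance estimate."""
    num = den = 0.0
    for z, dt in samples:
        w = weight(z)
        if w == 0:
            continue                          # fully clipped: skip
        e = (z / 255.0) ** GAMMA / dt         # invert assumed gamma curve
        num += w * e
        den += w
    return num / den if den else 0.0
```

Because the gamma curve is smooth everywhere, adjacent pixel values map to adjacent radiance values, which is why this method tends to avoid the banding that a poorly reconstructed per-camera curve can introduce.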
What would you say is the difference between images created using AHDRIC vs. just straight HDR Shop?
The most common problem I have heard about (and experienced myself) is color banding when using HDR Shop. Of course, this isn’t a problem with HDR Shop itself, but requires some skill to solve. Having little patience myself, I use AHDRIC exclusively if I have a JPEG series captured by AHDRIA. If I have a JPEG series for which I no longer have the camera parameters, I use HDR Shop with a gamma curve in lieu of a true “camera curve.” My reasoning behind this is that image-wide color imbalance is much easier to correct than the color banding often induced by building your own calibrated camera curve. (Building a camera-specific curve is a task I usually only perform if I can’t get the results I want in any other way.)
What is the difference between AHDRIA and AHDRIC?
The purpose of splitting AHDRIA’s and AHDRIC’s functionality is that, most often, the “image capture” task and the “HDR creation” task are two different problems. Image capture typically needs to be as streamlined as possible, which is AHDRIA’s specialty. After all the needed images are collected, AHDRIC can casually do its number crunching to produce a high-quality output.
However, for anyone who needs to design a different capture pipeline, it is possible to trigger a complete AHDRIA capture–with no user guidance–from any external software package. Similarly, the resulting data could be passed to AHDRIC (again, with no guidance), opening the door to many unusual HDRI applications, including timelapse photography.
How have you found people’s interest in AHDRIA? What sort of feedback have you received?
I’m happy to say I’ve received a lot of positive feedback on this project–more than I expected. I assumed when I started working on AHDRIA that homebrew HDRI was already a solved problem to which I was contributing a not-so-new tool, but apparently not!
Of course, I also get feedback from users who have an incompatible Canon camera or a camera from a different manufacturer, but hopefully other companies will follow Canon’s lead and generate similarly “programmable” cameras in the future. Canon’s newer cameras also seem to be more computer-friendly than their older models, so I see incompatibility being less of a problem as time goes by.
Have you found more interest from 3d artists or photographers? What do you think about some of the latest photographs coming out that use HDR?
I’ve mainly heard from 3-D artists–until HDRI editing tools become more convenient for photographers, I don’t foresee HDRI being in wide usage merely for artistic purposes. Multiple-exposure blending has been done for some time using bracketing or other methods which are currently much more conducive to a Photoshop-style editing environment than HDRI. At the same time, HDR compression algorithms are still a work-in-progress in the computer vision community, though I’m sure it won’t be long until an intuitive and useful method emerges which can be used to render HDR images on low dynamic range photographic media in a pleasing way.
What are your plans for AHDRIA in the future? Will you be developing it any further?
I’d like AHDRIA to work with DSLR cameras and have a Matlab interface to make it a more convenient tool for researchers such as myself. Of course, ultimately, it would be nice if AHDRIA could work not only on Canon digicams and DSLRs, but on cameras from other manufacturers as well. Though, from where AHDRIA sits now, this would be a rather ambitious undertaking.
Have you got any news for us? What has been happening in your world in the past few months?
nVidia has chosen one of my images for use in one of its HDRI demos, demonstrating its cards’ internal use of HDR.