13 Aug 2015

Smart Exposure Bracketing

“Automatic” exposure bracketing in cameras is dumb: you have to decide whether to turn it on, and if so, how many EVs apart you want each photo to be.

Exposure bracketing should have an automatic setting, in addition to on and off, where the camera would decide for itself whether to bracket. When you press the shutter button, it would analyse the scene and measure its dynamic range [1]. If it’s less than that of the sensor [2], the camera would take one photo, as usual [3].

If not, it would first determine the smallest number of photos needed to cover the dynamic range, which would usually be two. For example, instead of taking three photos separated by ±2EV (that is, at -2EV, 0EV and +2EV), the camera may take just two, at -2EV and +2EV.

Or at 0EV and +2EV, if that’s best for the scene. For example, if, at normal exposure, the shadows are clipping but not the highlights, the camera would take one photo at normal exposure, and another at +2EV to capture the shadows. Or at +3EV, if the shadows are really dark.

In this way, the camera would determine the minimum number of photos needed to capture the dynamic range, how far apart those exposures should be, and what the exposures should be (like 0EV and +2EV, or -1EV and +2EV, both of which are 2EV apart).
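Here’s a minimal sketch of that planning step, assuming the scene’s brightness range has already been measured in EV relative to the metered exposure (see [1] and [2]); the symmetric-window model and the function name are my own simplifications, not how any real camera works:

```python
import math

def plan_bracket(scene_lo, scene_hi, sensor_dr):
    """Plan exposure compensations (in EV, relative to the metered
    exposure) that together cover scene brightness in [scene_lo, scene_hi].

    Simplifying assumption: a frame at compensation c records scene
    brightness in the window [-sensor_dr/2 - c, sensor_dr/2 - c], so
    shooting brighter (+EV) reaches deeper into the shadows, and
    shooting darker (-EV) protects the highlights.
    """
    scene_dr = scene_hi - scene_lo
    if scene_dr <= sensor_dr:
        return [0.0]                         # one photo, as usual
    n = math.ceil(scene_dr / sensor_dr)      # fewest frames that fit
    # Window centres, spread so the frames just cover both extremes.
    first = scene_lo + sensor_dr / 2
    last = scene_hi - sensor_dr / 2
    step = (last - first) / (n - 1)
    return sorted(round(-(first + i * step) + 0.0, 1) for i in range(n))

# A 16EV scene on a 12EV sensor: two frames, at -2EV and +2EV.
print(plan_bracket(-8, 8, 12))   # -> [-2.0, 2.0]
# Shadows clip but highlights don't: frames at 0EV and +2EV.
print(plan_bracket(-8, 6, 12))   # -> [0.0, 2.0]
```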

This would automate away a task that’s hard [4] and time-consuming to do manually. Worse, you may realise only when it’s too late that you should have bracketed to capture the entire dynamic range. Smart exposure bracketing would fix all these problems.


[1] Perhaps ignoring the brightest and darkest x% of pixels. Cameras can, of course, have more sophisticated algorithms to measure the dynamic range of the scene.
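For instance, given a linear luminance map from the metering sensor, the percentile-based measurement could look like this (the cutoff value and function name are illustrative):

```python
import numpy as np

def scene_dynamic_range(luminance, clip_percent=0.1):
    """Estimate the scene's dynamic range in EV from a linear luminance
    map, ignoring the darkest and brightest clip_percent of pixels so
    that stray speculars or dead shadows don't inflate the estimate."""
    lum = luminance[luminance > 0]          # log2 needs positive values
    lo = np.percentile(lum, clip_percent)
    hi = np.percentile(lum, 100 - clip_percent)
    return float(np.log2(hi / lo))          # brightness ratio -> stops
```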

[2] Which depends on its ISO. For example, the RX100 has a dynamic range of 12.4EV at base ISO, and 11EV at ISO 300 or so. This table of dynamic range at each ISO would be measured by the manufacturer and programmed into the camera.
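That table could be a simple lookup with interpolation. The two RX100 figures below come from the text; the other entries are placeholders I’ve made up, and interpolating linearly in log2(ISO) is my assumption (dynamic range falls very roughly one stop per doubling of ISO):

```python
import math

# ISO -> dynamic range in EV. The 125 and 320 entries are the RX100
# figures from the text; the rest are made-up placeholders.
SENSOR_DR = {125: 12.4, 320: 11.0, 800: 10.0, 1600: 9.1, 3200: 8.2}

def sensor_dynamic_range(iso):
    """Interpolate the sensor's dynamic range at an arbitrary ISO,
    linearly in log2(ISO)."""
    points = sorted(SENSOR_DR.items())
    if iso <= points[0][0]:
        return points[0][1]
    if iso >= points[-1][0]:
        return points[-1][1]
    for (i0, d0), (i1, d1) in zip(points, points[1:]):
        if i0 <= iso <= i1:
            t = (math.log2(iso) - math.log2(i0)) / (math.log2(i1) - math.log2(i0))
            return d0 + t * (d1 - d0)
```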

[3] Except that it could expose to the right. It would meter as usual, take a photo, and then check whether the brightest pixel in the photo has a value less than the maximum possible value in the RAW format. If so, the camera would increase the exposure by the amount needed to drive that pixel to saturation, or one number below saturation, like 254 on an 8-bit sensor. It would take care not to increase ISO, and not to slow the shutter so much as to cause blur from camera shake. It would then write the adjustment factor into the RAW metadata, so that the viewer can apply the same amount of darkening to the photo, restoring normal exposure. This reduces noise in the photo.
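A rough sketch of that expose-to-the-right adjustment, assuming the camera can read back the brightest RAW pixel value, and using the 1/focal-length rule of thumb as the blur limit (both the rule and the names here are illustrative):

```python
import math

def ettr_boost(raw_max, brightest, shutter_s, focal_mm):
    """Expose-to-the-right boost in EV after a metered test shot.

    Headroom is the gap, in stops, between the brightest pixel and one
    value below saturation (like 254 on an 8-bit sensor). The boost is
    capped so the shutter never gets slower than 1/focal-length seconds,
    the usual hand-holding rule of thumb; ISO is left untouched. The
    returned value would be written into the RAW metadata so the viewer
    can darken the photo back to normal exposure.
    """
    brightest = max(brightest, 1)              # avoid division by zero
    if brightest >= raw_max:                   # already clipping
        return 0.0
    headroom = math.log2((raw_max - 1) / brightest)
    slowest_shutter = 1.0 / focal_mm           # e.g. 1/50s at 50mm
    shutter_room = math.log2(slowest_shutter / shutter_s)
    return max(0.0, min(headroom, shutter_room))

# Brightest pixel at 64/255, metered at 1/200s with a 50mm lens:
# ~2EV of headroom and 2EV of shutter room -> boost by ~1.99EV.
print(ettr_boost(255, 64, 1/200, 50))
```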

[4] Since histograms on cameras are calculated from the JPEG, they don’t show shadow or highlight clipping in the RAW file, which is the crucial thing here.
