23 Sep 2013

Improving Cameras

There are many ways we can improve cameras. These span the gamut: taking better photos, taking better videos, hardware and software improvements, and making cameras simpler and more convenient to use.

To begin with, get off the megapixel bandwagon and use the right number of pixels for the best quality. I noticed on multiple cameras that choosing the second-highest resolution resulted in better photos than the highest. Imagine how much better it would be if the sensor had fewer pixels to begin with.

And under low light, downscale the photos, which improves quality by reducing noise. This is called pixel binning. The iPhone does this, binning the 8-megapixel image down to a 2-megapixel one [1].
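As a sketch of what binning does to noise, here is a minimal 2x2 binning in NumPy; the frame size, noise level, and binning factor are simulated assumptions, not figures from any real sensor:

```python
import numpy as np

def bin_pixels(image: np.ndarray, factor: int = 2) -> np.ndarray:
    """Average each factor x factor block of pixels into one.

    Averaging n independent noisy samples cuts the noise standard
    deviation by roughly sqrt(n), at the cost of resolution.
    """
    h, w = image.shape[:2]
    h, w = h - h % factor, w - w % factor  # crop to a multiple of factor
    blocks = image[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

# A flat gray frame with simulated sensor noise:
rng = np.random.default_rng(0)
frame = 128 + rng.normal(0, 10, size=(480, 640))
binned = bin_pixels(frame, 2)  # noise std drops from ~10 to ~5
```

Binning 2x2 averages four samples per output pixel, so the noise standard deviation roughly halves while the pixel count drops to a quarter.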

Another idea is to take multiple photos and combine them together, again to reduce noise. The excellent iOS app Cortex Camera does this, producing photos that don’t look like they came from an iPad (I tried it out on an iPad).
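A toy illustration of the multi-shot idea (without the frame alignment a real app like Cortex Camera would need to handle hand shake):

```python
import numpy as np

def stack_frames(frames):
    """Average a burst of frames of the same scene.

    With N frames, averaging reduces random sensor noise by roughly
    sqrt(N). Real apps also align the frames first; omitted here.
    """
    return np.mean(np.stack(frames), axis=0)

rng = np.random.default_rng(1)
scene = rng.uniform(0, 255, size=(120, 160))            # the "true" image
burst = [scene + rng.normal(0, 12, scene.shape) for _ in range(16)]

single_err = np.abs(burst[0] - scene).mean()            # one noisy frame
stacked_err = np.abs(stack_frames(burst) - scene).mean()
# a 16-frame stack cuts the error to roughly a quarter of one frame's
```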

On the topic of stitching photos, panorama is obvious. It's implemented in both the Android and iOS built-in camera apps, but not on some traditional cameras, like my point-and-shoot. This is why a third-party app ecosystem is critical: there are many ideas, and not all of them can be thought of by the manufacturer of the device, any more than Apple can build all the iOS apps one may need. Not supporting apps amounts to declaring that we, the manufacturers, have thought of and implemented everything there is to implement in photography. Concretely, apps could provide panorama, HDR and its sibling exposure fusion, pixel binning, and fusing photos a la Cortex Camera. And automatic upload to Dropbox (or whatever other service you use, like Google Drive, Facebook or Google+).

Another obvious failing of my point-and-shoot is the lack of time-lapse video at full resolution. It's an 8-megapixel camera, but it can't shoot time-lapse video at any resolution higher than VGA. With time-lapse photography, you may take a photo only once every several seconds or minutes, which is more than enough time for any camera to handle full-size photos. An app would fix this.
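Back-of-the-envelope arithmetic makes the point; the timing figures below are assumptions, not measurements of any particular camera:

```python
# At one photo every 5 seconds, even a slow camera that needs ~1 s to
# capture and save a full-size 8 MP photo is idle 80% of the time, so
# full-resolution time-lapse should be easy.
interval_s = 5.0       # seconds between shots (assumed)
save_time_s = 1.0      # time to capture + write one full-size photo (assumed)
duty_cycle = save_time_s / interval_s      # 0.2 -> busy 20% of the time

# An hour of shooting at that interval, played back at 30 fps:
playback_fps = 30
shoot_minutes = 60
frames = shoot_minutes * 60 / interval_s   # 720 frames captured
clip_seconds = frames / playback_fps       # 24 seconds of video
```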

I also want camera sensors to move from 4:3 to 16:9 or maybe even 21:9. Photos look better to the eye when they are wider, for the same reason videos do.

Camera sensors can also have white pixels in addition to red, green and blue ones. This helps because the eye is more sensitive to variations in brightness than to changes in color. Or more green pixels than red or blue, if that helps, since the eye is more sensitive to green than to red or blue.
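The eye's bias toward green is easy to quantify with the standard Rec. 709 luma weights, which measure how much each primary contributes to perceived brightness (and which is why Bayer sensors already devote half their pixels to green):

```python
# Rec. 709 luma coefficients: green dominates perceived brightness.
WEIGHTS = {"r": 0.2126, "g": 0.7152, "b": 0.0722}

def luma(r: float, g: float, b: float) -> float:
    """Perceived brightness of an RGB pixel under Rec. 709 weighting."""
    return WEIGHTS["r"] * r + WEIGHTS["g"] * g + WEIGHTS["b"] * b

# A pure-green pixel looks over three times as bright as an equally
# intense pure-red one, and about ten times as bright as pure blue:
# luma(0, 255, 0) ~ 182, luma(255, 0, 0) ~ 54, luma(0, 0, 255) ~ 18
```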

Cameras should also shoot 72p video. Why 72? Because that results in more lifelike video than 60. Apparently, frame rate matters more than resolution for producing realistic video. And 72 is the frame rate that results in the best video.

Ideally the 72p video would be recorded at 4k, but I’ll settle for 1080 or even 720 since, after all, the frame rate is more important. And, needless to say, the video should look good. There’s no point in a camera that shoots 72p video at 4k if it looks like crap compared to a 1080p60 video.

Camera UIs should also get simpler. Cameras have an automatic mode, and if you want more control, you're forced to make more choices than you want to. Modes are too complex: I don't want to think about whether I want aperture priority or shutter priority or the other gobbledygook. If I want to change just one setting, like ISO, I should be able to, without bothering with settings I don't want to fiddle with. Camera manufacturers are apparently so incompetent that they haven't figured out a solution to this basic UI design problem: let the user change just the one setting he wants to change, and have the camera figure out the rest. For example, I should be able to set the ISO to 400, or tell the camera to pick any value as long as it's less than 400, and have the camera figure out the shutter speed and aperture. Another common-sense UI design matter is correcting obviously wrong values, like an exposure of 15 seconds in broad daylight. Modes are clunky and for power users.
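The "pin one setting, solve for the rest" idea falls straight out of the standard exposure equation. A minimal sketch, where the scene brightness (EV at ISO 100) and f-stops are illustrative values:

```python
def shutter_for(ev100: float, iso: float, aperture: float) -> float:
    """Solve the exposure equation log2(N^2 / t) = EV100 + log2(ISO/100)
    for the shutter time t, given a metered scene brightness (EV at
    ISO 100), a user-pinned ISO, and a chosen aperture N.
    """
    return aperture ** 2 / (2 ** ev100 * iso / 100)

# "Sunny 16" sanity check: EV 15, ISO 100, f/16 -> 1/128 s,
# close to the rule of thumb's 1/125 s.
t_base = shutter_for(ev100=15, iso=100, aperture=16)

# The UI the post asks for: the user pins ISO at 400, and the camera
# solves for the rest. Same scene at f/16 now needs only 1/512 s.
t_pinned = shutter_for(ev100=15, iso=400, aperture=16)
```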

Cameras should invariably have touchscreens. They can also have dedicated hardware buttons for common operations, but they should definitely have touchscreens, because a touchscreen lets you label a button based on what it does. You don't have to do things like pressing down on a D-pad to change the ISO, right to activate the self-timer, or DISP to activate long exposure. These actions are arbitrary, and often unlabeled; obviously, you can't change the label of a physical button. Worse, they are often wrongly labeled: I can imagine pressing DISP to change display settings, but not to perform an unrelated action like taking a long exposure. This would be like building an email app that, instead of having a compose or a reply button, required the user to remember to press F3 to compose a mail and Shift-F6 to reply.

Touchscreens enable more actions and options than there can be buttons on the device. This is necessary for any camera, but becomes critical for apps.

Cameras should also have large Retina displays (displays with a pixel density of at least 300 pixels per inch), because they let you preview photos better. Thanks to its excellent high-res screen, taking a photo on my Nexus 4 was an impressive experience compared to my point-and-shoot, even though the image quality was worse. Ideally I'd like a 10-inch screen, like a Retina iPad's, where the preview is so big and so high-res that you can see what photo you'll get before taking it. That makes the preview live up to its name, showing you the photo ahead of time rather than an approximation of what you'll get. Unfortunately, space limitations on a camera may rule out even a 5-inch screen, to say nothing of a 10-inch one, but manufacturers should at least look for ways to reduce the bezels on all four sides of the screen, like smartphone manufacturers do.

Cameras should also have WiFi, but implemented in a sane way. Rather than requiring you to use a clunky phone app and connect to a WiFi network created by your camera, the camera should automatically connect to your WiFi network when you return home, and upload all your photos either to a cloud service of your choice or to your computer.
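A hypothetical sketch of that behaviour: once the camera detects the home network, it syncs any photo it hasn't uploaded yet. Here a local folder stands in for the cloud service or computer, and the function names are illustrative:

```python
import shutil
from pathlib import Path

def sync_new_photos(camera_dir: Path, dest_dir: Path) -> list[str]:
    """Copy photos that haven't been synced yet; return their names.

    A real camera would run this whenever it joins the home WiFi
    network, uploading to the user's chosen service instead of a folder.
    """
    dest_dir.mkdir(parents=True, exist_ok=True)
    uploaded = []
    for photo in sorted(camera_dir.glob("*.jpg")):
        target = dest_dir / photo.name
        if not target.exists():          # skip photos already synced
            shutil.copy2(photo, target)  # copy2 preserves timestamps
            uploaded.append(photo.name)
    return uploaded
```

Tracking what's already uploaded (here, by checking the destination) is what makes the sync automatic and idempotent: re-running it after every reconnect uploads only the new photos.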

Cameras should also have LTE, but that requires LTE prices to become reasonable. And no one wants the nuisance of another bill (even if the money weren't a concern), so maybe LTE could be paid for by the manufacturer (like Amazon's Whispersync), perhaps with a decent usage limit like 1 GB per month on average.

Perhaps we can have devices that are cameras first, and phones second. The Lumia 1020 is one of the few such devices so far.

Cameras, both standalone and phone / tablet ones, have a lot of scope for improvement, especially standalone cameras, which are being displaced by phone cameras. To stay relevant, they should innovate, though, sadly, I don't know if the camera companies have the competence and the skills to innovate rather than just churning out incrementally better products year after year. But that's okay, since most of these ideas apply to phone cameras, too, which is where they're likely to be adopted, making standalone cameras even less relevant.

[1] But it then does something stupid: it upscales the 2-megapixel photo back to an 8-megapixel one. Needless to say, this reduces quality, increases storage space and bandwidth use, and requires more resources to display.
