A major focus of any smartphone release is the camera. For a while, all eyes were on the camera’s hardware — megapixels, sensors, lenses, and so on. But since Google’s Pixel was introduced, there’s been a lot more interest in the camera’s software and how it takes advantage of the computer it’s attached to. Marc Levoy, former distinguished engineer at Google, led the team that developed computational photography technologies for the Pixel phones, including HDR+, Portrait Mode, and Night Sight, and he’s responsible for a lot of that newfound focus on camera processing. An excerpt from the wide-ranging interview:

Nilay Patel: When you look across the sweep of smartphone hardware, is there a particular device or style of device that you’re most interested in expanding these techniques to? Is it the 96-megapixel sensors we see in some Chinese phones? Is it whatever Apple has in the next iPhone? Is there a place where you think there’s yet more to be gotten?
Marc Levoy: Because of the diminishing returns due to the laws of physics, I don’t know that the basic sensors are that much of a draw. I don’t know that going to 96 megapixels is a good idea. The signal-to-noise ratio will depend on the size of the sensor. It is more or less a question of how big a sensor you can stuff into the form factor of a mobile camera. Before the iPhone, smartphones were thicker. If we could go back to that, if that would be acceptable, then we could put larger sensors in there. Nokia experimented with that; it wasn’t commercially successful.
Other than that, I think it’s going to be hard to innovate a lot in that space. I think it will depend more on the accelerators, how much computation you can do during video or right after photographic capture. I think that’s going to be a battleground.
Nilay Patel: When you say 96 is a bad idea — much like we had megahertz wars for a while, we did have a megapixel war for a minute. Then there was, I think, much more excitingly, an ISO war, where low-light photography on DSLRs got way better, and then soon, that came to smartphones. But we appear to be in some sort of megapixel count war again, especially on the Android side. When you say it’s not a good idea, what makes it specifically not a good idea?
Marc Levoy: As I said, the signal-to-noise ratio is basically a matter of the total sensor size. If you want to put in 96 megapixels and you can’t physically squeeze a larger sensor into the form factor of the phone, then you have to make the pixels smaller, and you end up close to the diffraction limit, and those pixels end up worse. They are noisier. It’s just not clear how much advantage you get.
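To make Levoy’s arithmetic concrete, here is a rough sketch (not from the interview; the sensor dimensions and f-number are assumed, typical-phone values) of how pixel pitch shrinks when you cram more megapixels onto a fixed sensor, compared against the Airy-disk diameter 2.44·λ·N that sets the diffraction limit:

```python
import math

def pixel_pitch_um(sensor_w_mm: float, sensor_h_mm: float, megapixels: float) -> float:
    """Pixel pitch in micrometers, assuming square pixels on a fixed sensor area."""
    area_um2 = (sensor_w_mm * 1000.0) * (sensor_h_mm * 1000.0)
    return math.sqrt(area_um2 / (megapixels * 1e6))

def airy_disk_um(f_number: float, wavelength_nm: float = 550.0) -> float:
    """Diameter of the Airy disk (first diffraction minimum): 2.44 * lambda * N."""
    return 2.44 * (wavelength_nm / 1000.0) * f_number

# Assumed: a roughly 1/2"-class phone sensor of about 6.4 x 4.8 mm, f/1.8 lens.
pitch_12mp = pixel_pitch_um(6.4, 4.8, 12)   # ~1.6 um per pixel
pitch_96mp = pixel_pitch_um(6.4, 4.8, 96)   # ~0.57 um per pixel
airy = airy_disk_um(1.8)                    # ~2.4 um blur spot at f/1.8
```

With these assumed numbers, the 96-megapixel pitch is several times smaller than the diffraction blur spot, so each pixel collects less light (more noise) without resolving proportionally more detail — which is Levoy’s point.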
There might be a little bit more headroom there. Maybe you can do a better job of de-mosaicing — meaning computing the red, green, blue in each pixel — if you have more pixels, but there isn’t going to be that much headroom there. Maybe the spec on the box attracts some consumers. But I think, eventually, like the megapixel war on SLRs, it will tone down, and people will realize that’s not really an advantage.
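For readers unfamiliar with demosaicing: each sensor pixel sits behind a single-color filter, so two of the three color values at every pixel must be interpolated from neighbors. A minimal sketch of bilinear demosaicing (assuming an RGGB Bayer layout; this is a textbook baseline, not Google’s pipeline):

```python
import numpy as np

def bayer_masks(h: int, w: int):
    """Boolean masks marking which pixels sample R, G, and B in an RGGB pattern."""
    r = np.zeros((h, w), bool); r[0::2, 0::2] = True
    b = np.zeros((h, w), bool); b[1::2, 1::2] = True
    g = ~(r | b)
    return r, g, b

def demosaic_bilinear(mosaic: np.ndarray) -> np.ndarray:
    """Fill each color channel by averaging the known samples in a 3x3 neighborhood."""
    h, w = mosaic.shape
    out = np.zeros((h, w, 3))
    for ch, mask in enumerate(bayer_masks(h, w)):
        vals = np.pad(np.where(mask, mosaic, 0.0), 1)
        cnt = np.pad(mask.astype(float), 1)
        # Sum of known samples and their count over each 3x3 window.
        s = sum(vals[i:i + h, j:j + w] for i in range(3) for j in range(3))
        c = sum(cnt[i:i + h, j:j + w] for i in range(3) for j in range(3))
        out[:, :, ch] = s / np.maximum(c, 1.0)
    return out
```

On a uniformly gray mosaic this reconstructs a uniformly gray image, and pixels keep their directly measured channel unchanged; more pixels give the interpolator more samples to work with, which is the modest headroom Levoy alludes to.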
Read more of this story at Slashdot.