Rethinking the smartphone camera
With every millimeter shaved off the thickness of laptops and smartphones over the years, the cameras on those devices have gotten substantially worse because the lens optics had to shrink along with them. Cameras on ultra-thin notebooks like the MacBook Air are especially poor because of their tiny lenses and image sensors. But even with a better, larger image sensor, a small lens limits light-gathering capability and still results in inferior image quality.
GigaOm mentioned a new startup in Mountain View, California called Pelican Imaging, which aims to change all this with its new image array sensor. The company's webpage doesn't offer many details, but the image below is somewhat self-explanatory and features a Pelican array camera inside a much thinner phone. If the image quality is at least equivalent to existing solutions, the thinness alone would be a tremendous selling point. If the image quality is better, it's almost certain to revolutionize cameras in smartphones and other thin devices.
Image credit – Pelican Imaging
With an array of 25 image sensors spread out, the fixed lens for each sensor can be substantially thinner, and the combined sensor area of the array can be larger than the single traditional sensor used in existing smartphone or laptop cameras. Using the fast processors in modern smartphones and laptops, I would assume the 25 images are “stitched” together to form a single image. I wonder how practical that is for video, given the computational requirements of merging 25 images together 30 times per second.
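To put a rough number on that video workload, here is a back-of-the-envelope sketch. The per-sensor resolution is an assumption for illustration; Pelican hasn't published specs.

```python
# Back-of-the-envelope cost of merging a 5x5 sensor array at video rates.
# The 0.3 MP per-sensor resolution is an assumed value, not a Pelican spec.
sensors = 25
pixels_per_sensor = 640 * 480   # assumed ~0.3 MP per sub-sensor
fps = 30

pixels_per_frame = sensors * pixels_per_sensor
pixels_per_second = pixels_per_frame * fps
print(f"{pixels_per_frame / 1e6:.1f} MP per frame")   # 7.7 MP per frame
print(f"{pixels_per_second / 1e6:.0f} MP/s at {fps} fps")  # 230 MP/s at 30 fps
```

Even at modest per-sensor resolutions, the merge step has to touch a couple hundred megapixels per second, which is why the question of whether it runs in software or hardware matters.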
The idea looks extremely promising, and I definitely have more questions about the technology. I think it's safe to surmise that this array technology only works in fixed-lens applications and not zoom, so it won't replace most point-and-shoot cameras, because people still value optical zoom. I'll try to get in contact with the company because they're in my neck of the woods.
The more I think about this, the more I like the array. Pelican may not even need to do stitching: the hardware could simply output a single image file by reading the pixels it wants from the 25 sensors. Just think of it as a single sensor that happens to be chopped up. The alignment of the array should be very accurate, since the array is machine manufactured rather than hand assembled. The software processing mentioned by GigaOm might only refer to the 3D mode.
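The "single sensor that happens to be chopped up" idea can be sketched as a simple coordinate mapping: given a pixel in the combined image, find which sub-sensor holds it and where. The grid size and sub-sensor resolution below are assumptions for illustration.

```python
# Treat a 5x5 array of W x H sub-sensors as one large sensor: map a
# global pixel (x, y) to the sub-sensor that holds it plus the local
# coordinate within that sub-sensor. Dimensions are assumed values.
GRID = 5          # 5x5 array -> 25 sensors (assumption)
W, H = 640, 480   # assumed per-sub-sensor resolution

def locate(x, y):
    """Return ((row, col) of the sub-sensor, (local_x, local_y))."""
    row, col = y // H, x // W
    return (row, col), (x % W, y % H)

print(locate(0, 0))        # ((0, 0), (0, 0)) -> top-left sensor
print(locate(1300, 1000))  # ((2, 2), (20, 40)) -> a middle sensor
```

In other words, a dumb readout controller walking the combined image row by row could assemble the output without any "stitching" math at all, ignoring parallax between sub-sensors.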
The tiny lenses should have extremely deep depth of field and low f-stop characteristics, and shouldn't even need any kind of focusing mechanism. I wouldn't be surprised if the array were sharp from a few inches to infinity.
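The standard hyperfocal-distance formula supports that hunch: focus a lens at its hyperfocal distance H and everything from H/2 to infinity is acceptably sharp, and H shrinks with the square of focal length. The focal length, f-number, and circle of confusion below are assumed values for a tiny sub-sensor lens, not Pelican's numbers.

```python
# Hyperfocal distance H = f^2 / (N * c) + f. Focused at H, everything
# from H/2 to infinity is acceptably sharp. All three inputs are
# assumptions chosen to be plausible for a tiny camera-array lens.
f = 1.5    # focal length in mm (assumption)
N = 2.4    # f-number (assumption)
c = 0.002  # circle of confusion in mm for a tiny sensor (assumption)

H = f**2 / (N * c) + f   # hyperfocal distance in mm
near = H / 2             # near limit of acceptable sharpness
print(f"hyperfocal: {H:.0f} mm; sharp from {near / 25.4:.0f} in to infinity")
```

With these assumed numbers the lens is sharp from roughly nine inches out, with no focusing mechanism at all, so "a few inches to infinity" is at least in the right ballpark for a short enough focal length.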