This MIT Project Could Change Photography Forever

MIT's Media Lab wants to bring post-shot refocusing to every smartphone.

Tesseract is an MIT Media Lab project that has since moved out of the lab and into the real world.


Smartphone photography has enjoyed a meteoric rise in popularity over the last few years, and it's no wonder why. According to studies, some 58 percent of American adults and 79 percent of teenagers own a smartphone. That means they carry a camera in their pockets nearly everywhere they go.

Sharing platforms like Flickr and Instagram and photo editing apps like VSCO Cam are essential downloads for any photographer on the go. These apps let you adjust exposure, sharpness, and color, or even add creative filters, but they're basically neutered versions of what you can do with a "real" camera and a copy of Adobe Photoshop. Thus far, the only advantage to smartphone photography is its go-everywhere nature.

One project from a team out of MIT's Media Lab could change all that.

Dubbed "Tesseract," this new technology could teach your old smartphone camera a whole new bag of tricks. Some examples of Tesseract's powers include full-resolution, post-capture refocusing (one-upping the oddball Lytro cameras) and a "green screen" effect that could isolate a subject in the real world and place it on any background.

The tech can even mimic the bokeh effects produced by high-end lenses, giving your smartphone photos the creamy out-of-focus backgrounds that people usually pay thousands of dollars to get.

How It Works

The secret sauce behind Tesseract is called compressive light field photography, achieved by using a simple physical mask and incredibly complex math.

Similar methods have been around for a couple of years, but this group's implementation is surprisingly straightforward: Between the lens and the image sensor, you insert a patterned attenuation mask printed with millions of tiny codes—almost like QR codes—that the phone can then use to rebuild the entire scene in three dimensions.

Tesseract does this by reconstructing what is called a light field. The light field encompasses not just the intensity of the incoming light—which all cameras capture—but also the direction in which each ray of light is traveling.
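To make the capture step concrete, here is a toy NumPy sketch of the forward model: a coded mask attenuates each ray according to its direction before the sensor sums everything into a single 2D image. All the data and dimensions here are random stand-ins, not the team's actual mask pattern or optics.

```python
import numpy as np

# Toy forward model of coded light-field capture (random stand-in data).
rng = np.random.default_rng(0)
n_dirs, h, w = 9, 32, 32                   # 9 ray directions, 32x32 sensor
light_field = rng.random((n_dirs, h, w))   # intensity per direction, per pixel
mask = rng.random((n_dirs, h, w))          # coded attenuation seen by each ray

# Each sensor pixel records one number: the mask-weighted sum of every ray
# direction that hits it. Undoing this sum is the reconstruction problem.
coded_image = (mask * light_field).sum(axis=0)
```

The point of the coded mask is that it tags each direction differently, so the single summed image still carries directional information that clever math can recover.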

The light field is rebuilt from a single 2D image by comparing the captured image to a known set of codes—called an "overcomplete dictionary"—that is developed ahead of time using a very powerful computer. With dictionaries pre-built specifically for each phone's sensor/lens combination, you could easily produce light field images on the fly.
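That dictionary-based recovery amounts to sparse coding: find a handful of dictionary atoms whose combination reproduces the coded measurement. A generic greedy solver—orthogonal matching pursuit—on a toy 1D problem illustrates the idea; the random dictionary here stands in for the carefully trained, sensor-specific dictionaries the researchers describe.

```python
import numpy as np

rng = np.random.default_rng(1)
dim, n_atoms = 50, 200                    # overcomplete: more atoms than dimensions
D = rng.standard_normal((dim, n_atoms))
D /= np.linalg.norm(D, axis=0)            # unit-norm dictionary atoms

# A measurement that really is sparse in D: three active atoms.
true_coef = np.zeros(n_atoms)
true_coef[[5, 40, 120]] = [1.0, -0.7, 0.5]
y = D @ true_coef                         # the "captured" coded measurement

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily add the atom most correlated
    with the residual, then re-fit the selected atoms by least squares."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    out = np.zeros(D.shape[1])
    out[support] = coef
    return out

recovered = omp(D, y, k=3)
```

In the real system the "signal" is a 4D light-field patch rather than a 1D vector, and the heavy lifting is in training the dictionary ahead of time—which is why a powerful computer is needed offline but not at capture time.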

Why It's Awesome

The only light-field camera available to consumers today—the Lytro—is frustratingly limited to 1.2-megapixel output because it uses an array of microlenses to capture the light field. Tesseract's coded projections, by contrast, preserve the sensor's full resolution for everyday 2D photos.
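Once a light field has been recovered, refocusing itself is mostly geometry. The classic shift-and-add approach—a generic light-field technique, not necessarily the team's exact method—shifts each directional view in proportion to its angular offset and averages them; the shift amount selects the focal plane. A toy NumPy sketch with stand-in data:

```python
import numpy as np

# Toy light field as a 5x5 grid of sub-aperture views, each 64x64 px.
# (Random stand-in data; a real light field comes from a plenoptic capture.)
views = np.random.rand(5, 5, 64, 64)

def refocus(views, slope):
    """Shift-and-add refocusing: shift each sub-aperture view in proportion
    to its angular offset from center, then average. slope=0 keeps the
    original focal plane; other values focus nearer or farther."""
    nu, nv, h, w = views.shape
    cu, cv = (nu - 1) / 2, (nv - 1) / 2
    out = np.zeros((h, w))
    for u in range(nu):
        for v in range(nv):
            du = int(round(slope * (u - cu)))
            dv = int(round(slope * (v - cv)))
            out += np.roll(views[u, v], (du, dv), axis=(0, 1))
    return out / (nu * nv)

refocused = refocus(views, slope=2.0)
```

Because the averaging blurs everything off the chosen plane, the same machinery also produces the creamy synthetic bokeh mentioned above—larger angular spreads mimic wider apertures.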

There are two tradeoffs, though: You lose half the incoming light, and the technique requires substantial computing power. The benefit is that, unlike apps such as Nokia's Refocus, Tesseract can rebuild the entire scene from a single exposure.

Currently, processing power is the real stumbling block for Tesseract. Though it's feasible for current phones to make use of the tech, they would either need to use an unusually low-resolution sensor or take a lot of time to produce the final image.

For example, though the team has a working proof-of-concept phone, it only outputs images at 940 x 560 pixels—or 0.5 megapixels. Given the exponential growth we've seen in smartphone processing power, however, it shouldn't be long before full-size images can be processed reasonably quickly.

Where It Came From

Tesseract is the work of a team led by Kshitij Marwah—a graduate of the Indian Institute of Technology and the MIT Media Lab. The project was born from a paper Marwah and his co-authors submitted to the SIGGRAPH 2012 conference.

That paper described a technology called "Focii," which used the same coded projections on a printed transparency placed in front of a DSLR sensor—essentially turning the camera into a full-resolution Lytro. Tesseract, the mobile version of that tech, was presented at SIGGRAPH 2013.

While promising, this technology isn't slated to appear in any commercially available phones in the near future, and there are still plenty of kinks to be worked out if Tesseract-equipped cameras are to produce the sharp images we expect these days. But Marwah and his colleagues paint a tantalizing picture of the future of photography—one where taking a picture with your phone is only the beginning of the experience.
