Photos of the future

Thought that going digital was the biggest breakthrough in photography? Think again. Scientists are working on a way of taking pictures that could change the way we point and shoot forever. Mark Piesing investigates

Wednesday 24 February 2010 01:00 GMT

Deep in the bowels of Stanford University in California, a monster is being brought to life, not – for once – by a mad scientist playing God, but by a computer scientist, Professor Marc Levoy, and his team. Made from the spare parts of the photographic industry bolted on to a powerful computer, the Frankencamera may be incredibly ugly, but it will be the world's first open-source camera.

By giving researchers, programmers and the curious who buy the almost-£600 Frankencamera control, for the first time, over every function of a camera, Professor Levoy hopes those users will develop, as with the iPhone, the innovative ideas and applications necessary for the next revolution in photography: computational photography. That is the coming revolution that few people have heard of.

"Computational photography will change how we do photography," says the Professor of Computer Science. "It should allow you to fix things that you can't currently – whether by combining pictures in a different way, or by fiddling with optics so that more is recorded than on a normal camera; basically to do what photoshop can do, but the moment you take the photograph."

And so the only angry mob the Frankencamera will meet will be the photo fans desperate to get their hands on it.

Although the origins of computational photography lie in the Nineties, the term was first used by futurologists like Professor Steve Mann of the University of Toronto in 2004. Its big idea is to turn the camera into a powerful computer that doesn't just digitise an image but performs extensive computations on the image data as well. After all, despite the arrival of the digital camera, the camera itself has remained at heart largely unchanged for over 100 years; all digital technology really did was replace the film with a sensor. Even the quality of a digital photograph is still largely judged by its closeness to its chemical ancestor.

So a future computational photography camera wouldn't so much take photos as compute them. Not only that, but innovations such as a micro array of multiple lenses would allow much more complex data to be collected than was ever possible before; for example, every image captured would include data on its depth. This would enable future cameras to do things that until now have been considered impossible.
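How can a flat sensor record depth? Each lenslet in a micro array sees the scene from a slightly different position, and nearby objects shift more between those views than distant ones. Below is a deliberately crude sketch of that principle in Python; the function name, array shapes and brute-force block matching are illustrative only, not how any production camera works.

```python
import numpy as np

def depth_from_two_views(left, right, max_disparity=16, block=7):
    """Toy depth cue from two sub-aperture views of a light field.

    A microlens array records many slightly offset views of the scene.
    How far a patch shifts between two views (its disparity) shrinks
    with distance, so matching patches yields a rough per-pixel depth map.
    """
    H, W = left.shape
    half = block // 2
    depth = np.zeros((H, W), dtype=np.int32)
    for y in range(half, H - half):
        for x in range(half + max_disparity, W - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            errors = [np.sum((patch - right[y - half:y + half + 1,
                                            x - d - half:x - d + half + 1]) ** 2)
                      for d in range(max_disparity)]
            depth[y, x] = int(np.argmin(errors))  # larger shift means a closer object
    return depth
```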

At its most tame, it means not only capturing shots of a visual richness unimagined before, but even changing the focus of a shot after it has been taken. At its most extreme, it could mean 3D photographs, and photographs that could be converted into a drawing, a diagram or even a watercolour at the press of a button. According to Shree Nayar, Professor of Computer Science at Columbia University, this isn't just changing how we do photography; it is actually a "new visual medium". "It allows photographers to manipulate photos after the fact. And yes, there is a boundary here beyond which one gets into the realm of art."

This may sound like a big boast just years after the digital revolution changed – and is still changing – how we take photographs, but big names like Nokia, Adobe, Kodak and Hewlett-Packard are betting with their dollars that Levoy is right.

However, Levoy and his team aren't alone.

In Silicon Valley, a company called Refocus Imaging has joined the race to turn theory into commercial reality; this time by looking at an idea so revolutionary that it is hard for people to get their heads around it.

Founded in 2007 by Dr Ren Ng, the company has spent three years transforming Ng's Stanford PhD work on refocus imaging, which won an Association for Computing Machinery award in 2007, into a viable camera that we can buy on the high street.

According to Refocus Imaging's Alex Fishman, the refocus camera will, by replacing the single sensor behind the lens with microsensors, solve a number of problems that digital cameras present photographers with. "Currently, you are forced to take focus decisions (or depth-of-field decisions) before you take the picture," he says. "What's more, you can't change your mind after taking the picture. It also takes time for the autofocus to work, as there is an inherent delay before exposure happens – anything from several hundred milliseconds to a second. The result of this is that you miss your shot."

Although "For the end user the camera looks the same", he adds, "from an experience point of view you won't have to wait to take a photograph, as it will take a photo immediately, with no shutter lag, and you won't have to worry about focusing your shot as you can do it afterwards – as many times as you want. In fact the same exposure can yield multiple pictures after wards – each focused at a different object."

Not only that, but it will be cheaper, he believes: refocus cameras will allow lenses to be much cheaper, lighter and smaller, since the digital processor will do the work that the optics had to do.

"Its smartness will be loaded on to cheap digital processes rather than expensive optics – in which there have been no major breakthroughs in decades."

What's more, it's a transformational leap that we may not have to wait long for. According to Professor Ramesh Raskar, it may be only two years before technology such as this appears on our high streets, as camera manufacturers and software companies are working hard to deliver it; and he should know.

As Associate Professor of Media Arts and Sciences at the Massachusetts Institute of Technology (MIT) and head of its camera culture research group, Raskar has been described as one of the most important individuals shaping the future of visual imaging today. He is also co-author of the book Computational Photography, due out this year.

"The first wave of this revolution is already here", he says. "It is about trying to improve the performance of the existing cameras in the attempt to make the digital camera as good as a film camera." Features that broaden the dynamic range of a digital camera are examples of this. "The second wave should hit in the next couple of years, and it is about coded photos. Trying to make the camera into something else, like the refocus camera."

However, Raskar believes that this is not the end of the transformational process. "The third wave will take us beyond traditional photography, and companies like Microsoft are already developing software like Photosynth and Photo Tourism to deliver 'sense photographs'", he adds.

'Sense' photographs are all about capturing the sense of the real experience, not just what the camera and computer are capable of recording. "If you're on a rollercoaster, you can never get a good picture," says Raskar. "If you're at a candle-lit dinner you can never take pictures that make the food look appetising."

Beyond that, who knows? But as computing power comes to matter more than optical power, one view is that what we know as "the camera" may disappear entirely, to be replaced by one piece of kit that meets all of our needs, whose screen is in fact also its lens. Or that the "future is bionic" – our eyes will be augmented to become the camera of the future.

Almost one billion mobile phones are sold globally every year with cameras of ever-increasing quality, and computational photography may well first appear on a mobile. This is an outcome made more likely by research at MIT into how a small aperture on a mobile-phone camera can simulate that of a single-lens reflex (SLR) camera, and by Levoy's work with Nokia to deliver the Frankencamera's software package for the Nokia N900.

However, before this revolution can occur, the problem of resolution has to be solved. So far, prototype computational cameras have at times struggled to produce pictures with the same high resolution as the sensors they are built on, and this is a big turn-off for consumers, and thus manufacturers, even if researchers are less worried.
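The arithmetic behind that worry is straightforward: in a plenoptic design, the patch of sensor pixels behind each microlens is spent on directional information rather than spatial detail. The numbers below are hypothetical, chosen purely to illustrate the trade-off.

```python
# Hypothetical figures, only to illustrate the spatial/directional trade-off.
sensor_megapixels = 16           # a respectable sensor by 2010 standards
pixels_per_lenslet = 10 * 10     # a 10x10 pixel patch behind each microlens
output_megapixels = sensor_megapixels / pixels_per_lenslet
print(output_megapixels)         # 0.16 megapixels per computed photograph
```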

In the end, the exact length of time it will take to get computational photography out of the lab and on to the high street will, according to Raskar, depend on a "negotiation between the wish list of the customers, such as speed, and that of the producers, such as cost", as well as "the possibility of Walkman-like moments of inspiration".

For Levoy at Stanford, the future of computational photography and the Frankencamera depends on the manufacturers. "The megapixel war is winding down. So the Asian camera manufacturers can't compete on pixels any more, only on extra features. We would like them to offer the features that the research community is working on all at once, not just when they want to compete."

Photo finish: The experts' wish list

* The "iPhone camera" – which allows you to download apps.

* Micro arrays and coded apertures.

* Refocusing the shot after it has been taken.

* Elimination of blur and time delay.

* Wireless cameras, to allow instant sharing of photographs and comparison with existing photographs of the same shot.

* The ability to take a photograph on your computer, using a camera that you are linked to wirelessly.

* Three-dimensional photography – taking photographs in 3D, and changing 2D photos into 3D.

* Studio-quality lighting on a mobile phone.
