Engineers from MIT were able to recover audio from video footage of everyday objects, including a potted plant, a glass of water and a teapot

Sound travels as vibrations through the air, and whatever it hits vibrates too. If this is your ear, then specialized hair cells translate these movements into something you can hear, but for pretty much anything else these vibrations are simply lost.

Now, however, computer scientists have created an algorithm that translates these visual cues into an audio signal, 'reading' the near-invisible vibrations caused by people speaking or music playing from everyday objects ranging from a glass of water to a crisp packet.
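The researchers' actual algorithm analyses tiny phase shifts across a video's frames; as a loose and much simpler illustration of the underlying idea (an assumed sketch, not the MIT team's method), you can collapse each frame-to-frame intensity change into one sample of a 1-D signal and treat that signal as recovered audio:

```python
import numpy as np

def frames_to_signal(frames):
    """Collapse a stack of video frames (T, H, W) into a 1-D signal.

    Crude stand-in for the 'visual microphone' idea: average the
    signed intensity change between consecutive frames, so motion of
    the filmed object becomes one audio-like sample per transition.
    """
    frames = np.asarray(frames, dtype=np.float64)
    diffs = np.diff(frames, axis=0)       # (T-1, H, W) change per frame pair
    signal = diffs.mean(axis=(1, 2))      # one sample per frame transition
    signal -= signal.mean()               # remove DC offset
    peak = np.max(np.abs(signal))
    return signal / peak if peak > 0 else signal

# Synthetic demo: a flat grey 'object' whose brightness wobbles at 50 Hz,
# filmed at 2000 fps (high frame rates keep the audio band recoverable).
fps, seconds, tone_hz = 2000, 0.1, 50
t = np.arange(int(fps * seconds)) / fps
frames = 128 + 10 * np.sin(2 * np.pi * tone_hz * t)[:, None, None] * np.ones((1, 8, 8))
recovered = frames_to_signal(frames)
```

In this toy setup the dominant frequency of `recovered` sits near the 50 Hz tone that was driving the "object", which is the essence of the technique; the real work lies in detecting motions thousands of times smaller than one pixel.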

Using video footage recorded by high-speed cameras, researchers from the Massachusetts Institute of Technology were able to listen to test subjects reciting nursery rhymes from behind soundproof glass, using only video of a crisp packet in the same room.

Previous techniques for recovering sound from vibration have all relied on some sort of active interaction with the vibrating object (for example, a 'laser microphone' bouncing an infrared beam off a window), but this new research needs only a passive visual signal.

Computer science graduate Abe Davis and his team were even able to extract audio from footage recorded by an ordinary digital camera, and although the quality was not good enough to make out distinct speech, it was enough to identify information such as the number of people in a room and their gender.

These new techniques could have applications in law enforcement and forensics, but Davis has said that he’s more interested in a “new kind of imaging” that would allow scientists to analyse the material and structural properties of an object just by recording a video of it.

“We’re recovering sounds from objects,” he said in a press release. “That gives us a lot of information about the sound that’s going on around the object, but it also gives us a lot of information about the object itself, because different objects are going to respond to sound in different ways.”