Robot able to learn like a baby and predict future outcomes being developed by Berkeley scientists

New technology takes inspiration from babies playing and enables the machine to imagine its next actions

Josh Gabbatiss
Science Correspondent
Tuesday 05 December 2017 14:17 GMT

Scientists have created a robot, called Vestri, that can learn like a growing baby and predict future outcomes.

The technology, termed “visual foresight”, enables the robot to imagine the outcomes of its possible next actions and then act on the one with the best predicted result.

In the future, it is thought this could enable self-driving cars to anticipate events on the road ahead, but, for the time being, the robot uses its skill to move objects around a table.

The researchers took inspiration from the way babies learn while they play – a process known as “motor babbling”. They allowed Vestri a week of playing with various objects before giving it the task of moving certain objects from one position to another.

“Children can learn about their world by playing with toys, moving them around, grasping, and so forth,” said Professor Sergey Levine, whose University of California, Berkeley, lab developed the visual foresight technology.

“Our aim with this research is to enable a robot to do the same: To learn about how the world works through autonomous interaction,” he said.
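In software terms, that “play” phase is a self-supervised data-collection loop: perform a random movement, and record what the camera saw before and after it. The sketch below is purely illustrative, using toy numpy stand-ins (render_camera, apply_push) for the camera and the arm; it is an assumption-laden miniature, not the Berkeley team's code.

```python
import numpy as np

rng = np.random.default_rng(0)
SIZE = 32  # toy image resolution

def render_camera(object_xy):
    # Toy stand-in for a camera frame: a blank image with a single
    # bright pixel at the object's current position.
    img = np.zeros((SIZE, SIZE))
    x, y = np.clip(object_xy, 0, SIZE - 1).astype(int)
    img[y, x] = 1.0
    return img

def apply_push(object_xy, action):
    # Toy stand-in for the arm: a push simply shifts the object.
    return object_xy + action

dataset = []                                  # (frame_before, action, frame_after)
object_xy = np.array([16.0, 16.0])

for step in range(1000):                      # a "week of play", in miniature
    before = render_camera(object_xy)
    action = rng.uniform(-2.0, 2.0, size=2)   # random push, no human labels
    object_xy = apply_push(object_xy, action)
    after = render_camera(object_xy)
    dataset.append((before, action, after))

print(f"collected {len(dataset)} self-supervised interaction examples")
```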

Having spent time getting familiar with the objects, Vestri was able to predict, several seconds into the future, what its cameras would see if it performed a particular sequence of movements.

These predictions were produced by the robot as video scenes that had not actually happened, but could happen if the object was pushed in a specific way.

During this phase, Vestri taught itself to avoid any potential obstructions. Using this knowledge, the robot then chose the most efficient path for moving the object in question.

Overall, Vestri chose the right path around 90 per cent of the time.
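That predict-and-choose step can be pictured as a simple loop: sample many candidate sequences of pushes, let a prediction model imagine the camera frames each sequence would produce, and execute the sequence whose imagined final frame looks most like the goal image. The sketch below illustrates the loop under the same toy assumptions as the earlier example; imagine_rollout is a hypothetical stand-in for the learned video-prediction model the real system trains from its play data.

```python
import numpy as np

rng = np.random.default_rng(1)
SIZE = 32  # toy image resolution

def render_camera(object_xy):
    # Toy camera frame: a soft bright blob centred on the object, so that
    # pixel-wise error varies smoothly with the object's position.
    ys, xs = np.mgrid[0:SIZE, 0:SIZE]
    x, y = object_xy
    return np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / 8.0)

def imagine_rollout(start_xy, actions):
    # Hypothetical stand-in for the learned video-prediction model: roll toy
    # push dynamics forward and render an imagined frame after each action.
    xy, frames = np.array(start_xy, dtype=float), []
    for a in actions:
        xy = xy + a                       # "pushing" just shifts the object
        frames.append(render_camera(xy))
    return frames

def plan(start_xy, goal_image, horizon=5, num_candidates=200):
    # Pick the candidate action sequence whose imagined final frame best
    # matches the goal image (lowest pixel-wise squared error).
    best_actions, best_cost = None, np.inf
    for _ in range(num_candidates):
        candidate = rng.uniform(-2.0, 2.0, size=(horizon, 2))
        imagined = imagine_rollout(start_xy, candidate)
        cost = float(np.sum((imagined[-1] - goal_image) ** 2))
        if cost < best_cost:
            best_cost, best_actions = cost, candidate
    return best_actions

goal = render_camera([25.0, 8.0])           # where we want the object to end up
actions = plan([16.0, 16.0], goal)
print("first planned push:", actions[0])
```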

“In the past, robots have learned skills with a human supervisor helping and providing feedback. What makes this work exciting is that the robots can learn a range of visual object manipulation skills entirely on their own,” said Chelsea Finn, a doctoral student in Professor Levine’s lab who invented the deep learning model Vestri is based on.

The Berkeley researchers now want to expand the number of objects Vestri is able to manipulate, as well as the movements it is capable of making.

By expanding its repertoire in this way, they hope to pave the way for robots that can learn and adapt in all sorts of environments.

“In the same way that we can imagine how our actions will move the objects in our environment, this method can enable a robot to visualise how different behaviours will affect the world around it,” said Professor Levine.

“This can enable intelligent planning of highly flexible skills in complex real-world situations,” he said.
