Blade Runner, autoencoded: The strange film that sums up our fears of AI and the future

The work is a glimpse at how computers remember

Andrew Griffin
Tuesday 30 May 2017 17:14 BST

Terence Broad’s Blade Runner sometimes looks a lot like the classic 1982 film. Sometimes it looks completely different.

His autoencoded version of Blade Runner is the film as a computer sees it – or, more specifically, as a computer sees it, remembers it, and then regurgitates it.

The film is being shown as part of the Barbican’s science fiction exhibition-meets-festival, Into The Unknown. And it’s perhaps the most cutting-edge of all the work featured there – not only being about science fiction, but also being created in a way that sounds like it comes straight out of the work of Philip K Dick.

Broad’s project works by analogy with memory, and uses cutting-edge artificial intelligence to do so. It relies on an autoencoder – a neural network that encodes a big data sample, in this case individual frames of the film, into a tiny representation of itself, which it can then reconstruct later on.

When it does so, a great deal has been lost in the shrinking. But strange things can be found in that reconstruction, too – the technology looks to make up for what it can’t remember by filling in the gaps.
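
For the technically curious, the core idea can be sketched in a few lines of Python. What follows is a minimal illustration of an autoencoder, not Broad’s actual model (his was considerably more sophisticated): the network squeezes each frame down to a few hundred numbers – its “memory” of the frame – and then tries to rebuild the picture from that code alone.

```python
# A minimal convolutional autoencoder, for illustration only.
# Broad's actual model was more sophisticated; the sizes here
# (64x64 frames, a 200-number code) are assumptions.
import torch
import torch.nn as nn

class FrameAutoencoder(nn.Module):
    def __init__(self, latent_dim=200):
        super().__init__()
        # Encoder: a 3x64x64 frame is shrunk to `latent_dim` numbers.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # -> 16x16
            nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), # -> 8x8
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )
        # Decoder: the frame is rebuilt from those numbers alone.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128 * 8 * 8),
            nn.ReLU(),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # -> 16x16
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),   # -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),    # -> 64x64
            nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, x):
        code = self.encoder(x)     # "remember": shrink the frame
        return self.decoder(code)  # "recall": rebuild it from the code
```

Everything the network cannot fit through that few-hundred-number bottleneck is, in effect, forgotten – which is where the strangeness of the reconstructions comes from.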

“The reconstructions are in no way perfect, but the project was more of a creative exploration of both the capacity and limitations of this approach,” wrote Broad in an introduction to his work that would later go viral.

In that way it seems remarkably and uncannily similar to human memory. It shrinks everything down and stores it away, so that it can be opened back up and relived with the gaps filled in at a later date.

And the idea was inspired by strange experiments with humans, too. One of the first inspirations was a talk by scientists who had managed to make an MRI machine reconstruct things that people were looking at, simply by reading the patterns of activity in their brains. It could literally see through other people’s eyes, by looking right into their heads.

But all of those human inspirations and influences are taken and turned into a work that is undeniably technological. If the MRI experiment showed us what people are watching from inside their heads, the autoencoded Blade Runner almost allows us to watch how a computer sees, peering inside its own brain in the same way.

It definitely remembers in a different way to how humans do. It’s terrible at recalling and reconstituting faces, for instance, and can’t recognise that the same face belongs to the same person and so needs to move in a straight line. It also appears to find it impossible to remember a black frame: because there are so few in the film, there is no point storing the black, so it remembers the frame instead as an average of all the other parts of the film, throwing out a beautiful but decidedly un-black green image.
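
That green un-black frame has a simple explanation in the mathematics of training. A network taught to minimise its average reconstruction error has little incentive to spend scarce capacity on a one-off outlier, so its best guess for a frame it never bothered to memorise drifts towards the average of everything else it has seen. A toy calculation (illustrative only, not Broad’s code) makes the point:

```python
# Why a rare black frame is "remembered" as an average: under mean
# squared error, the best reconstruction for an input the model has
# no dedicated capacity for is close to the mean of the training data.
import numpy as np

rng = np.random.default_rng(0)
frames = rng.uniform(0.2, 0.9, size=(999, 3))  # 999 ordinary frames (mean RGB)
black = np.zeros((1, 3))                       # one lone black frame
data = np.vstack([frames, black])

# The best single reconstruction under MSE is the dataset mean...
print(data.mean(axis=0))  # roughly [0.55, 0.55, 0.55], nowhere near black

# ...so a network that cannot afford a dedicated code for one black
# frame in thousands returns a blend of everything else instead.
```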

For now the limits of the project are where the interest is found, and the imperfections of the reconstruction make it a work of art. But theoretically computers could eventually become perfect at the work – watching, shrinking and then reconstituting the film as it actually is.

It all sounds eerily like a question that would plague the noir world of Blade Runner. But Blade Runner wasn’t always the aim. The film began as a project for a university course, and required learning techniques that are at the very forefront of AI and visual technology.

“Originally it was really just an experiment; the whole thing started out as a research project,” says Broad.

“For a long time I was training it on these videos of really long train journeys. After a couple of months I got really bored of looking at pictures of trains, so I thought it would be interesting to do it with Blade Runner.

“But I didn’t think it was going to work – I thought there’d be too much variety in the images. Normally, with the type of model that I trained, you train it with lots of pictures of faces, or of bedrooms – lots of the same kind of thing. But then it handled it pretty well.

“It got into being an art project near the end.”

(Even once it became very much an aesthetic piece, it was still fascinating to some people as a technical exercise. When the work was posted on Hacker News, a number of people – apparently inspired by the TV series Silicon Valley – were interested in whether Broad had developed a new kind of compression algorithm. Broad is clear that’s not the case, since the programme uses a great deal of energy and “only works for Blade Runner at a very low resolution”.)

The choice of film happened by a kind of intentional coincidence, but it fits the project perfectly because the themes mesh so well. Blade Runner explores the edges of artificial intelligence, the beginnings of humanity and how to know the difference between the two; the autoencoded Blade Runner does the same thing, but with the film itself.

“I’d always had the idea [of Blade Runner] in the back of my mind. But I didn’t think it would work.

“But then as soon as we did it we saw that it obviously should be the sole focus for the project.”

Because the computer processes things over time – and takes a while to do it – the discovery that the idea would work well revealed itself gradually.

“When you’re training it, you would give it a batch of images of random frames. Then it would start giving you the output. So I was just looking at this output while it was training.

“I saw this image and saw you could recognise some of the scenes. But this was at a really small resolution. So we saw this and then it was like, right, we need to do this in order and remake the video.

“Then we did a little 10-minute sample. And it was kind of mind-blowing, for me and my supervisor. I’ve got the original 10 minutes – it’s really noisy and really grainy.

“You can see what’s going on and it’s kind of mind-blowing. Then I thought - let’s just remake Blade Runner, the whole thing.”
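
In code, the workflow Broad describes – train on batches of random frames, peek at the output as it learns, then run the frames through in order to remake the film – might look roughly like this. It is a sketch under assumed settings, reusing the illustrative FrameAutoencoder above, not Broad’s actual training script:

```python
# Illustrative training loop: random batches in, periodic
# reconstructions out. All names and numbers are assumptions.
import torch
import torch.nn.functional as F
from torchvision.utils import save_image

def train(model, frames, steps=100_000, batch_size=64, lr=1e-4):
    # `frames` is assumed to be a (N, 3, 64, 64) tensor holding
    # every frame of the film, scaled to [0, 1].
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for step in range(steps):
        idx = torch.randint(0, frames.shape[0], (batch_size,))
        batch = frames[idx]              # a batch of random frames
        recon = model(batch)
        loss = F.mse_loss(recon, batch)  # how badly it "remembers"
        opt.zero_grad()
        loss.backward()
        opt.step()
        if step % 1000 == 0:
            # Peek at the reconstructions while it trains, as Broad did.
            save_image(recon[:16], f"recon_{step:06d}.png")

# Afterwards, reconstructing the frames *in order* remakes the film:
# video = torch.cat([model(frames[i:i+64]) for i in range(0, len(frames), 64)])
```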

That decision put the film squarely in the realm of science fiction – a decision that would see it sit among the greatest work of the genre in the Barbican exhibition. Not simply because it took such a seminal science fiction film, but also because it was a kind of science fiction itself, using brand new techniques to reprocess a film in a way that would be unimaginable and inexplicable to people even 10 or 20 years ago.

But Broad doesn’t seem to see his work so much in the history of science fiction as in the history of work using Blade Runner; his piece might be one of the most high-profile reworkings of the film, but it’s far from the first. (One of those was released only this month, putting the sound of the new sequel Blade Runner 2049 over the advert for the Google Home, to neat and chilling effect.)

Broad also likens it to other films that explored the very limits of film and technology as a medium – and of film production studios as copyright owners. Works like 24 Hour Psycho and The Clock both made heavy use of other films, and skirted takedowns and copyright claims as they did so.

Despite having worked on a spectacular film about the dangers of AI (and making it even more spectacular), Broad isn’t concerned about the grand predictions of cinema – like those from Terminator.

“I’m not really troubled by Skynet super AIs taking over everything,” he says. “I think what’s more troubling is there’s lots of evidence that neural networks take on biases.”

That isn’t a conscious process – on the part of the AI or the person training it. It’s just a consequence of the fact that computers can only learn from what humans give them, and anything humans give them will be only as good or as bad as the people providing it.

“It’s just picking up on all the biases of the training data you’re giving it; it’s whatever’s inherent in the people” that have put the data together.

“Until you have some system that really was intelligent, that could empirically understand these things and correct itself,” it doesn’t seem like it would be possible to fix such a problem – “you’re always going to be trying to fit some kind of training data that people have labelled in some sense”.
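
The effect is easy to demonstrate on paper. In the toy example below (illustrative only, nothing to do with Broad’s project), human labellers systematically under-rate one arbitrary group, and a simple model fitted to their labels dutifully learns the same skew – with no conscious bias anywhere in the pipeline:

```python
# A toy demonstration of a model inheriting bias from its labels.
# All of the data here is synthetic and the setup is an assumption.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, n).astype(float)  # an arbitrary attribute
skill = rng.normal(0, 1, n)                  # what we actually care about

# Biased labels: annotators systematically under-rate group 1.
label = ((skill - 0.8 * group + rng.normal(0, 0.1, n)) > 0).astype(float)

# Fit a plain linear model to those labels (ordinary least squares):
X = np.column_stack([np.ones(n), skill, group])
coef, *_ = np.linalg.lstsq(X, label, rcond=None)
print(coef)  # a clear negative weight on `group`: the model has
             # faithfully reproduced the labellers' bias
```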

People are always arguing for the transformative power of film, and its ability to make the people who watch it better. So couldn’t robots like Broad’s – with their ability to watch a film – eventually learn away their bad habits in the way that we hope people can?

Probably not, says Broad. At least not yet.

“The autoencoder thing is just working on images,” he says, talking of his own creation. “It doesn’t know any context about what’s going on in the film; it doesn’t have any capacity to understand that. It just understands patterns in images.”

But artificial intelligence has been developing at a stunning rate; it’s one of the rare fields of technology where predictions tend to be conservative and end up looking too small in their scope. So is it possible to imagine that such a computer could be created in the near future, even if it’s not possible to imagine one based on what we have today?

No. Not really, says Broad. AI might be stunningly advanced and developing shockingly fast, but that shouldn’t be our concern for the time being.

“There’s still a lot of research going on,” he says. “People have been able to develop quite efficient ways of doing particular tasks.

“But having something that can do everything and constantly learning on the fly,” he says, before cautioning himself about the kind of certainty about AI that Blade Runner warns about. “Maybe they’re not - maybe it’ll happen really quickly.”
