
US election 2020: New tool detecting deepfakes created by Microsoft

It has also provided tools for creators so their work can more easily be authenticated

Adam Smith
Thursday 03 September 2020 15:46 BST

Microsoft has announced a new tool that it has developed in order to combat the spread of deepfakes, ahead of the US presidential election in November.

A “deepfake” is a video in which artificial intelligence and deep learning – an algorithmic method used to train computers – have been used to make a person appear to say something they have not.

“Microsoft Video Authenticator” is able to analyse a still photo or a video, and give the viewer a rating on the likelihood that it has been altered.

For videos, Microsoft says the tool can analyse each frame individually, providing a percentage score “in real-time” for every frame.

It works by detecting the blending boundary of the deepfake, along with subtle fading or greyscale elements in the images that might not be visible to the human eye.
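Microsoft has not published the detector itself, but the still-by-still scoring it describes has a simple overall shape: run a classifier over every frame and return a manipulation-likelihood score for each. The sketch below illustrates only that shape – the `toy_model` stand-in is hypothetical and bears no relation to Microsoft's actual model.

```python
from typing import Callable, List

def score_frames(frames: List[bytes], model: Callable[[bytes], float]) -> List[float]:
    """Return a manipulation-likelihood score (0.0 to 1.0) for each frame."""
    return [model(frame) for frame in frames]

# Hypothetical stand-in "model": flags frames containing a marker pattern.
# A real detector would look for blending boundaries and fading artefacts.
def toy_model(frame: bytes) -> float:
    return 0.9 if b"blend" in frame else 0.1

scores = score_frames([b"clean frame", b"blend artefact"], toy_model)
print(scores)  # [0.1, 0.9]
```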

“We expect that methods for generating synthetic media will continue to grow in sophistication. As all AI detection methods have rates of failure, we have to understand and be ready to respond to deepfakes that slip through detection methods,” the company said in a blog post.

“Thus, in the longer term, we must seek stronger methods for maintaining and certifying the authenticity of news articles and other media.”

Microsoft also said that it has developed new technology that can detect manipulated content and “assure people that the media they’re viewing is authentic”.

This is done through two components: the first is built into Microsoft Azure, the company's cloud computing service, which allows content makers to add digital hashes and certificates to their content.

These hashes travel with the content as metadata – information about the media, such as its date and location. The second component is a reader – which Microsoft says could exist as a browser extension – that checks the certificates, matches the hashes, and tells users whether the content is authentic.
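The hash-and-verify step described above follows a standard pattern: the creator publishes a cryptographic digest of the content, and any reader can re-hash what it receives and compare. The following is a minimal sketch of that general idea using SHA-256 – the function names are hypothetical, and Microsoft's real system additionally binds the hashes to certificates, which this sketch omits.

```python
import hashlib

def content_hash(media_bytes: bytes) -> str:
    """Derive a digest from the raw media bytes (illustrative only)."""
    return hashlib.sha256(media_bytes).hexdigest()

def verify(media_bytes: bytes, published_hash: str) -> bool:
    """Re-hash received media and compare against the published digest."""
    return content_hash(media_bytes) == published_hash

# The creator publishes the hash alongside the media...
original = b"example video frame data"
published = content_hash(original)

# ...and a reader (e.g. a browser extension) re-hashes and compares.
print(verify(original, published))        # authentic content passes
print(verify(b"tampered data", published))  # altered content fails
```

Any change to the bytes produces a different digest, which is why the reader can flag manipulated content without needing the original file.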

The authenticator will initially only be available through a partnership with the AI Foundation, an American artificial intelligence company.

The AI Foundation is introducing a “Reality Defender 2020” (RD2020) initiative that will make the tool available to campaign organisations and news outlets.

A number of media companies, including the BBC, Radio-Canada, and the New York Times, will test the authentication technology.

Microsoft says that it hopes to work with more technology companies, news publishers and social media companies over the next few months.

At their current stage, deepfakes are primarily used for pornography: research in June 2020 indicated that 96 per cent of all deepfakes online are pornographic.

However, a report from University College London last month suggested that deepfakes are the most dangerous form of cybercrime.

This is because they are so difficult to detect and could be used for a variety of nefarious purposes, such as blackmail or fraud.

“People now conduct large parts of their lives online and their online activity can make and break reputations. Such an online environment, where data is property and information power, is ideally suited for exploitation by AI-based criminal activity”, said Dr Matthew Caldwell, who authored the research.
