
Apple to start scanning people’s photos and messages to check for child abuse

Andrew Griffin
Thursday 05 August 2021 21:03 BST

Apple is launching new features that will allow its devices to scan through people’s photos and messages to check for signs of abuse.

The company says the features will be introduced in a way that keeps those communications hidden from Apple and ensures that users’ privacy is protected.

But the features are certain to raise concerns among privacy advocates, especially given Apple’s public commitment that privacy is a human right.

The company is introducing three new measures, covering messages, photos and additional guidance that will be added to Siri. The additions are coming “later this year”, Apple said, in updates that will come to all of its platforms: iOS 15, iPadOS 15, watchOS 8 and macOS Monterey.

For the time being, the features are limited to the US.

The first of the three features will use the phone’s on-device machine learning to check the content of children’s messages for photos that look as if they may be sexually explicit. That analysis will be done entirely on the phone, Apple said, and the company will not be able to see those messages.

If a child receives such a message, the photo will be blurred and the child will be warned that it could be sensitive, given information about such messages, and offered the option to block the contact. If the child decides to view the photo anyway, they will be told that their parents will be alerted, and a parent will then be notified.

Similar protections are in place when children send messages, Apple said. Children will be warned before the photo is sent and parents can set up notifications when their child sends a photo that triggers the feature.
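Apple has not published how the feature is implemented, but the flow it describes can be pictured as a short on-device routine: a local model scores each photo, and anything above a sensitivity threshold is blurred and gated behind warnings. The Python sketch below is purely illustrative; the classify_image stub, the 0.9 threshold and the notify_parent hook are hypothetical stand-ins, not Apple’s API.

from dataclasses import dataclass

@dataclass
class IncomingPhoto:
    pixels: bytes
    sender: str

def classify_image(photo: IncomingPhoto) -> float:
    # Placeholder for the on-device machine-learning model: returns a
    # probability that the image is sexually explicit. Hypothetical stub.
    return 0.0

SENSITIVITY_THRESHOLD = 0.9  # assumed value, not published by Apple

def handle_incoming_photo(photo: IncomingPhoto, child_account: bool,
                          parental_alerts: bool) -> dict:
    score = classify_image(photo)  # analysis runs entirely on the device
    if not child_account or score < SENSITIVITY_THRESHOLD:
        return {"display": "normal"}
    # Blur the photo, warn the child and offer to block the contact.
    action = {"display": "blurred", "warn": True, "offer_block": True}
    if parental_alerts:
        # Viewing anyway triggers the parental notification the child
        # was warned about.
        action["on_view"] = "notify_parent"
    return action

The design point the sketch preserves is that classify_image never leaves the device: only the on-screen behaviour changes, not what is shared with Apple.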

But more likely to prove controversial is a second feature that looks through photos for possible Child Sexual Abuse Material, or CSAM. Technology in iOS and iPadOS will scan through people’s iCloud photo library looking for such images – but in a way that the company claims will be done “with user privacy in mind”.

Once again, the scanning will not take place in the cloud, but on the device itself. The iPhone or iPad will look through the photos in a user’s library and check whether any of them match a database of known abuse images provided by child safety organisations.

If the similarity to those known images is sufficiently high, the image will be revealed to Apple, which will then be able to see its contents. Apple will then manually review the images to confirm that there is a match – and if there is, the user’s account will be disabled and reported to the authorities.
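In outline, that matching step works like a fingerprint comparison: each photo is reduced to a compact perceptual hash, compared against a database of known-image hashes, and only escalated once enough matches accumulate. The sketch below is a heavily simplified illustration, and the hash values, the 8-bit match distance and the 10-match reporting threshold are all assumed for the example. Apple’s published design also adds cryptographic machinery (a neural perceptual hash plus blinded matching) so that neither the device nor Apple learns anything before the threshold is crossed; none of that is shown here.

def hamming_distance(a: int, b: int) -> int:
    """Count differing bits between two 64-bit fingerprints."""
    return bin(a ^ b).count("1")

# Dummy stand-ins for fingerprints of known abuse images, as would be
# supplied by child-safety organisations. Values are arbitrary.
KNOWN_HASHES = {0x1F3A5C7E9B2D4F60, 0x8800FF0012345678}

MATCH_DISTANCE = 8      # assumed: fingerprints within 8 bits count as a match
REPORT_THRESHOLD = 10   # assumed: matches needed before human review

def library_exceeds_threshold(photo_hashes: list[int]) -> bool:
    """True if enough photos match known material that the account
    would be escalated for Apple's manual review."""
    matches = sum(
        1
        for h in photo_hashes
        if any(hamming_distance(h, k) <= MATCH_DISTANCE
               for k in KNOWN_HASHES)
    )
    return matches >= REPORT_THRESHOLD

Put in these terms, the objection raised by critics below is that whoever controls KNOWN_HASHES controls what the system searches for.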

Apple stressed that both features are designed with privacy in mind, and published four technical assessments carried out by professors to illustrate its point.

But the news proved incredibly controversial even before it was announced. On Wednesday, cryptography expert Matthew Green revealed that Apple had been working on the feature – and both he and a number of security and privacy experts warned that it could mark a departure from Apple’s previous record.

Professor Green, who works at Johns Hopkins, said that while such scanning technologies are better than more traditional tools, they are still “mass surveillance tools”. He noted that anyone who controls the list of possible images could use it to search for any picture – not just those associated with child abuse – and that there would be no way to know if the system was being abused in that way.

“The theory is that you will trust Apple to only include really bad images,” he wrote in a tweet thread. “Say, images curated by the National Center for Missing and Exploited Children (NCMEC).

“You’d better trust them, because trust is all you have.”

Alan Woodward, a computing expert at the University of Surrey, was one of many who echoed Professor Green’s concerns. He said it could be a “double edged sword: the road to hell is paved with good intentions” and called for more public discussion before it was launched.

The third feature is more straightforward, and adds new information to Siri and Search aimed at giving children and parents resources to deal with possible abuse. Users can ask specific questions – such as how to report possible abuse – as well as more general ones, and they will receive more detailed information.
