
Apple responds to growing alarm over iPhone photo scanning feature

Andrew Griffin
Monday 09 August 2021 17:37 BST

Apple has responded to growing alarm from privacy experts and competitors over its new iPhone scanning feature.

Last week, the company announced that it would be rolling out new tools that would be able to look through the files on a user’s phone and check whether they included child sexual abuse material, or CSAM.

Apple said that the feature had been designed with privacy in mind and that the actual analysis happens on a person’s iPhone rather than on Apple’s systems. The company will only be able to see photos if they are found to be similar enough to an existing database of child sexual abuse imagery, it said.
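Apple’s technical documentation describes that on-device analysis as comparing hashes of a user’s photos against a database of hashes of known abuse imagery. The Python sketch below is a loose illustration of that idea only: the hash values, threshold and function names are invented for the example, and Apple’s real system relies on its own NeuralHash algorithm and cryptographic protocols that are not reproduced here.

```python
from typing import Set

HAMMING_THRESHOLD = 8  # assumed tolerance for "similar enough"; not Apple's real parameter


def hamming_distance(a: int, b: int) -> int:
    """Count the differing bits between two 64-bit perceptual hashes."""
    return bin(a ^ b).count("1")


def matches_known_database(photo_hash: int, known_hashes: Set[int]) -> bool:
    """True if the photo's hash is close to any hash in the known-image database."""
    return any(hamming_distance(photo_hash, h) <= HAMMING_THRESHOLD
               for h in known_hashes)


# Example: a device-side check run before a photo is uploaded to iCloud Photos.
known = {0x9F3A_55C0_1234_ABCD}                               # placeholder database entry
print(matches_known_database(0x9F3A_55C0_1234_ABCF, known))   # True: only 1 bit differs
```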

Despite those assurances, the new feature has been met with intense criticism from privacy and security campaigners who say that it could weaken fundamental protections on the iPhone and be abused to attack innocent users.

Critics have suggested, for instance, that governments could force Apple to add other kinds of imagery to its database, allowing despotic regimes to use the tool to track dissidents. Other fears include the possibility that the system will go wrong and flag other kinds of images for review by Apple, stripping the privacy from entirely innocent images.


Apple has responded to those criticisms in a new frequently asked questions document posted to its website under the name “Expanded Protections for Children”.

In the introduction to that document, it recognised that while the features have gained support from some organisations, others had “reached out with questions”.

It first looks to address questions about the tool known as “communications safety in Messages”, which analyses photos sent to children for signs of abuse. It notes that Apple never gains access to those communications, that the end-to-end encryption is still ensured, and that children will be warned before any information is shared with their parents.

It then goes on to address the more controversial feature, known as “CSAM detection”. In that section, Apple makes a number of commitments designed to quell concern about the new feature.

It says that Apple will not scan all photos, but rather only those that have been uploaded to iCloud Photos, suggesting that any phones with that feature turned off will be exempt. Apple had not previously explicitly said that there would be a way of opting out of that scanning feature.

Apple also commits that the system has been designed only to detect child sexual abuse images, apparently in response to concerns that the scope of the feature could be widened in the future.

It says that if it is asked to add other kinds of images to its database it will “refuse any such demands”.

“We have faced demands to build and deploy government-mandated changes that degrade the privacy of users before, and have steadfastly refused those demands,” Apple says. “We will continue to refuse them in the future. Let us be clear, this technology is limited to detecting CSAM stored in iCloud and we will not accede to any government’s request to expand it.”

It also denies that there would be a way to expand the system without Apple’s help, by “injecting” other kinds of images into the database so that they would be flagged. It says that Apple is not able to add new images to that database, which comes from child safety organisations, and that because the database is the same for everyone it would not be possible to change it for one specific user.

It also says that “there is no automated reporting to law enforcement”, and so any report that was passed to authorities would be seen by Apple first. “In the unlikely event of the system flagging images that do not match known CSAM images, the account would not be disabled and no report would be filed to NCMEC,” it says.
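To make that sequence concrete, the sketch below lays out the order of checks the document implies: nothing happens until an account crosses a match threshold, a human reviewer then confirms whether the flagged images really match known CSAM, and only then is a report filed to NCMEC. Apple’s wider technical summary mentions such a threshold, but the value used here and the function names are assumptions for illustration, not Apple’s actual internals.

```python
MATCH_THRESHOLD = 30  # assumed value; the real threshold was not given in the FAQ


def handle_account(match_count: int, human_confirms_csam) -> str:
    """Decide the outcome for an account, mirroring the review order described above."""
    if match_count < MATCH_THRESHOLD:
        return "no action"              # below threshold: nothing is surfaced to Apple
    if not human_confirms_csam():
        return "no action"              # images don't match known CSAM: no report, account stays active
    return "report filed to NCMEC"      # only after human confirmation; never sent automatically to police


# Example: an account crosses the threshold but review finds no real matches.
print(handle_account(35, human_confirms_csam=lambda: False))  # "no action"
```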
