
Apple gives more detail on new iPhone photo scanning feature as controversy continues

Andrew Griffin
Monday 23 August 2021 09:45 BST

Apple has released yet more details on its new photo-scanning features, as the controversy over whether they should be added to the iPhone continues.

Earlier this month, Apple announced that it would be adding three new features to iOS, all of which are intended to fight against child sexual exploitation and the distribution of abuse imagery. One adds new information to Siri and search, another checks messages sent to children to see if they might contain inappropriate images, and the third compares photos on an iPhone with a database of known child sexual abuse material (CSAM) and alerts Apple if it is found.
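
As a rough illustration of how that third feature’s matching step works in principle, the sketch below checks a photo’s identifier against a set of known entries. It is a minimal sketch under stated assumptions: the identifiers are placeholders, and plain SHA-256 stands in for Apple’s NeuralHash perceptual hash and private set intersection protocol, which this code does not reproduce.

```python
import hashlib

# Hypothetical database of identifiers derived from known abuse imagery.
# In Apple's system these would be NeuralHash values supplied by child-safety
# organisations; plain SHA-256 is used here purely as a stand-in, and unlike a
# perceptual hash it will only match byte-identical copies.
KNOWN_IDENTIFIERS = {
    hashlib.sha256(b"known image 1").hexdigest(),
    hashlib.sha256(b"known image 2").hexdigest(),
}

def identifier_for(photo_bytes: bytes) -> str:
    """Stand-in for computing a photo's identifier on the device."""
    return hashlib.sha256(photo_bytes).hexdigest()

def photo_matches_database(photo_bytes: bytes) -> bool:
    """True if the photo's identifier appears in the known database."""
    return identifier_for(photo_bytes) in KNOWN_IDENTIFIERS

print(photo_matches_database(b"known image 1"))   # True
print(photo_matches_database(b"holiday photo"))   # False
```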

It is the last of those three features that has proven especially controversial. Critics say that it contravenes Apple’s commitment to privacy, and that it could in the future be used to scan for other kinds of images, such as political pictures on the phones of people living under authoritarian regimes.

Apple has repeatedly said that it will not allow the feature to be used for any other material, that it will not run on phones that do not store photos in the cloud, and that a number of safeguards exist to ensure the process is done in a way that preserves users’ privacy. Since the feature was announced, Apple has defended it in a range of interviews and publications, and says that it is still adding the feature as planned.

Now it has published a new paper, titled ‘Security Threat Model Review of Apple’s Child Safety Features’, that aims to give further reassurance that the feature will only be used as intended. The paper responds to a number of the security and privacy concerns that have been raised since the feature was announced.

One of the specific announcements in the paper is that the database of image identifiers will not come from just one country’s official organisation. Pictures will only be matched if their identifiers appear in the databases of at least two different groups, which should ensure that no single government is able to inject other content into the database.
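
A minimal sketch of that two-source rule follows, using a simple set intersection: only identifiers supplied independently by both organisations become eligible for matching. The organisation names and identifier values are assumptions for illustration, not Apple’s actual data or protocol.

```python
# Hypothetical identifier sets from two independent child-safety organisations
# (names and values are made up for illustration).
org_a_identifiers = {"id_001", "id_002", "id_003"}
org_b_identifiers = {"id_002", "id_003", "id_004"}

# Only identifiers present in both sources are eligible for on-device matching,
# so no single organisation (or the government behind it) can unilaterally add
# new targets to the database.
eligible_identifiers = org_a_identifiers & org_b_identifiers
print(eligible_identifiers)   # {'id_002', 'id_003'}
```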

Apple will also allow auditors to inspect that database: the full set of identifiers the feature looks for will be made available so that others can verify it is only scanning for child abuse imagery. The same database will be included on every device running iOS and iPadOS, even though the feature is only active in the US, so there will be no way for one specific phone to be made to look for different images.

Apple’s own moderators will also be instructed not to report other kinds of images, the company says in the report, with much the same aim.

It also says that an account will only be flagged if its photo library includes at least 30 images that seem to be CSAM. That is to ensure that there are as few false positives as possible, and should mean that the chance of an account being incorrectly flagged is one in a trillion.
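
That threshold behaviour can be sketched as below; the function name and the way matches are counted are illustrative assumptions, while the figure of 30 comes from Apple’s paper.

```python
MATCH_THRESHOLD = 30  # the figure given in Apple's paper

def should_flag_for_review(apparent_match_count: int) -> bool:
    """Flag an account for human review only once the threshold is reached."""
    return apparent_match_count >= MATCH_THRESHOLD

print(should_flag_for_review(3))    # False: a handful of false positives is not enough
print(should_flag_for_review(30))   # True: only now is the account surfaced for review
```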
