
Apple employees are reportedly raising concerns about the company’s on-device image scanning to curb child abuse

Employees express worries that repressive governments could exploit the feature

Vishwam Sankaran
Friday 13 August 2021 11:19 BST
File: Apple says the feature will use the phone’s on-device machine learning to assess the content of children’s messages for photos that may be sexually explicit (AFP via Getty Images)

Apple employees are reportedly raising concerns internally about the tech giant’s plans to roll out a feature that would allow its devices to scan through people’s photos and messages to check for signs of child abuse.

Employees have flooded an internal Slack channel with more than 800 messages about the plan, which was announced a week ago, news agency Reuters reported.

Many Apple workers reportedly expressed worries in a thread of messages on Slack that repressive governments could exploit the feature to find materials for censorship or arrests.

Apple announced a week ago that new features under the plan to be rolled out “later this year” would use the phone’s on-device machine learning to assess the content of children’s messages for photos that may be sexually explicit.

“When receiving this type of content, the photo will be blurred and the child will be warned, presented with helpful resources and reassured it is okay if they do not want to view this photo,” Apple noted in a blog post.

It said the features would come as updates across its platforms, including iOS 15, iPadOS 15, watchOS 8 and macOS Monterey.

“We want to help protect children from predators who use communication tools to recruit and exploit them, and limit the spread of Child Sexual Abuse Material (CSAM),” Apple noted.

The tech company said children would be warned before they send sexually explicit photos and parents could set up notifications when their child sends a photo which triggers the new system.

In one of the features, Apple said it would use a database of known CSAM images provided by child safety organisations and apply on-device machine learning to look for matches in the photos stored on the device.

“Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child safety organizations. Apple further transforms this database into an unreadable set of hashes that is securely stored on users’ devices,” the company noted.
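
For illustration only, the following is a minimal Swift sketch of the general idea of checking a photo against a set of known image hashes stored on the device. It is not Apple's actual system: Apple says it transforms the database into an unreadable set of hashes, which this plain SHA-256 example does not attempt to reproduce, and the sample hash value and function names below are hypothetical.

    import CryptoKit
    import Foundation

    // Sketch only: match a photo's digest against an on-device set of known hashes.
    // Apple's real system uses a transformed, unreadable hash database; this
    // simplified example uses plain SHA-256 and a hypothetical blocklist entry.
    let knownImageHashes: Set<String> = [
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
    ]

    // Returns true if the photo's digest appears in the on-device blocklist.
    func matchesKnownImage(_ photoData: Data) -> Bool {
        let digest = SHA256.hash(data: photoData)
        let hex = digest.map { String(format: "%02x", $0) }.joined()
        return knownImageHashes.contains(hex)
    }

    print(matchesKnownImage(Data()))  // true: the empty blob matches the sample entry

In this simplified model, only a match against the stored hashes, rather than the photo itself, would need to be flagged, which is the broad idea behind performing the comparison on the device instead of in the cloud.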

While Apple said the feature is designed so the company does not gain access to the messages, the plan has raised concerns among privacy advocates, given the tech giant’s long-standing commitment to securing the privacy of its users.

Core security employees were reportedly not among those complaining, and some reportedly said they thought the company’s response was a reasonable way to crack down on illegal content.
