
Microsoft retires controversial AI that can guess your emotions

Tech giant warns that ‘new guardrails’ are required for artificial intelligence

Anthony Cuthbertson
Wednesday 22 June 2022 11:29 BST
Microsoft ruled on 21 June that its emotion-, age- and gender-guessing AI should not be made public due to fears of unethical use (Microsoft)

Microsoft has announced that it will halt sales of an artificial intelligence service that can predict a person’s age, gender and even emotions.

The tech giant cited ethical concerns surrounding the facial recognition technology, which it claimed could subject people to “stereotyping, discrimination, or unfair denial of services”.

In a blog post published on Tuesday, Microsoft outlined the measures it would take to ensure its Face API is developed and used responsibly.

“To mitigate these risks, we have opted to not support a general-purpose system in the Face API that purports to infer emotional states, gender, age, smile, facial hair, hair, and makeup,” wrote Sarah Bird, a product manager at Microsoft’s Azure AI.

“Detection of these attributes will no longer be available to new customers beginning 21 June, 2022, and existing customers have until 30 June, 2023, to discontinue use of these attributes before they are retired.”
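For readers curious what the now-retired capability looked like in practice, the sketch below shows an illustrative request to the Face API’s detect endpoint asking for the attributes Microsoft listed. The endpoint, key and image URL are placeholders rather than details from the article; the attribute names follow Microsoft’s public Face API documentation.

```python
import requests

# Placeholders for illustration only; substitute a real Azure Face resource and key.
FACE_ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
SUBSCRIPTION_KEY = "<your-face-api-key>"

# The attributes named in Microsoft's announcement, retired for new customers
# from 21 June 2022 and for existing customers after 30 June 2023.
RETIRED_ATTRIBUTES = "age,gender,emotion,smile,facialHair,hair,makeup"

response = requests.post(
    f"{FACE_ENDPOINT}/face/v1.0/detect",
    params={"returnFaceAttributes": RETIRED_ATTRIBUTES},
    headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY},
    json={"url": "https://example.com/portrait.jpg"},
)

# Once the attributes are retired, a request like this is rejected rather than
# returning age, gender or emotion estimates for the detected face.
print(response.status_code, response.json())
```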

Microsoft’s Face API was used by companies such as Uber to verify that the driver using the app matched the account on file. However, unionised drivers in the UK called for it to be removed after it failed to recognise legitimate drivers.

The technology also raised fears about potential misuse in other settings, such as firms using it to monitor applicants during job interviews.

Despite retiring the product for customers, Microsoft will continue to use the controversial technology within at least one of its products. An app for people with visual impairments called Seeing AI will still make use of the machine vision capabilities.

Microsoft also announced that it would be making updates to its ‘Responsible AI Standard’ – an internal playbook that guides its development of AI products – in order to mitigate the “socio-technical risks” posed by the technology.

The update process involved consultations with researchers, engineers, policy experts and anthropologists to help understand which safeguards can help prevent discrimination.

“We recognize that for AI systems to be trustworthy, they need to be appropriate solutions to the problems they are designed to solve,” wrote Natasha Crampton, Microsoft’s chief responsible AI officer, in a separate blog post.

“We believe that industry, academia, civil society, and government need to collaborate to advance the state-of-the-art and learn from one another... Better, more equitable futures will require new guardrails for AI.”
