
Who is to blame for bad AI? Companies are an easy target – but we also need to look at our own habits

It's easy to understand why tech ethics advocates focus their attention on companies. To examine our own complicity, and that of our colleagues and loved ones, hits close to home, writes Andrew Sears

Monday 15 June 2020 17:35 BST
A number of companies in the US have stopped selling facial recognition software to police departments (David McNew/AFP/Getty)

In 2018, Amazon began selling a facial recognition AI product to police departments. It didn’t take long for "Amazon Rekognition" to attract the condemnation of human rights groups and AI experts, who criticised the product’s high error rate and propensity for mistaking black Congresspeople for known criminals.

Despite this, Amazon continued selling their AI to police departments for two full years, only stopping when the killing of yet another unarmed black man – George Floyd – drew mainstream attention to Rekognition’s flaws. Amazon has committed only to a one-year moratorium on the sale of Rekognition to police departments. In recent days, Microsoft has also said it will stop selling such technology to police departments until there is more regulation in place, while IBM has said it will no longer offer its facial recognition software for “mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values”.

However, Amazon has so far declined to comment on whether it will continue to market the product to federal law enforcement agencies.

How do we make sense of these events? A common narrative would focus on Amazon as the principal actor, employing the “evil corporation” trope to explain the company’s behaviour. But a more constructive analysis begins with the realisation that companies – especially public ones – have very little agency of their own.

Rather, companies respond deterministically to two powers greater than themselves: systemic forces at the macro level and the consumption habits of individuals. This doesn’t let companies off the hook; they are still responsible for the technologies they create. But to effectively fight bad tech, we have to address these more fundamental causes.

To start, we need to widen our lens and put into focus the system within which technology companies operate. The system is the collection of beliefs, incentives, and institutions within which companies make their decisions, including concepts like regulation, the market, consumerism, and liberalism. We also need to put into focus the individual people that technology products serve, along with all of their desires, virtues, vices, fears, and aspirations. In particular, we need to focus on the power that individuals have to effect change and their responsibility to wield that power. If we change either the rules of the system or the behaviours of individuals, companies must change in response.

To make this clearer, let’s return to the example of Amazon and Rekognition. In response to criticism from human rights groups, Amazon shareholders considered a proposal last year to ban the sale of Rekognition to police departments. Only 2.4 per cent of shareholders voted in favour of the ban. Even worse: less than a third voted in favour of an independent human rights assessment of the product. Not only did Amazon's shareholders overwhelmingly want to keep selling flawed facial recognition AI to police departments – they didn’t even want to verify whether or not the product violated basic human rights.

Amazon’s past behaviour suggests that unless underlying systemic flaws are addressed, the company will resume its business with police departments one year from now. Its announcement even hints at this fact, challenging Congress to pass regulation and change the rules of the system before the 12-month clock is up. But regulating facial recognition technology is only part of the picture.

America has a system that says the sole responsibility of public companies is to maximise shareholder wealth; that rewards speed-to-market and punishes thoughtful innovation; that gives no ownership stake or voice to the communities that bear the consequences of corporate decisions. As we are reminded at this moment, we have a system that reflects America’s darkest sins, that perpetuates centuries of inequality and oppression. A temporary change to Amazon’s sales policy does nothing to address these deeper problems that allowed Rekognition to go ahead in the first place.

We can understand this situation further by examining it through the lens of individual behaviour and responsibility. Powerful people have a responsibility to defend the vulnerable; Amazon’s shareholders have seemingly failed to exercise this responsibility because they have been conditioned to value wealth accumulation above social duty. Engineers and product managers have a responsibility for the ethical quality of the products they build; sadly, many have been conditioned to leave ethical questions to academics and legislators. Police officers have a responsibility to protect and serve; so why are they purchasing powerful software that’s known to be racially biased?

Here’s the even harder truth: during the time that Amazon has been selling Rekognition, the number of Amazon Prime subscribers has grown 50 per cent, from 100 to 150 million people. Everyday people like you and me have joined regulators and shareholders in failing to hold Amazon accountable for its behaviour. Prime turns out to be an irresistible product for those of us who have been conditioned to express ourselves through consumption, to love immediacy and convenience, to place our hope in the acquisition of material goods.

It's easy to understand why tech ethics advocates focus their attention on companies. To examine our own complicity, and the complicity of our colleagues and loved ones – that cuts quite close to home.

Companies may be the most visible and easiest target, but that doesn't mean they're the right one. Amazon will likely be right back to selling AI to police departments one year from now unless the flawed systems and values that gave rise to Rekognition are changed in the meantime. Until we shift our focus, lasting progress may prove elusive.

Andrew Sears is an advisor at the tech ethics organisation All Tech is Human and the founder of the technovirtuism blog
