Is there such a thing as ethical AI?

A new study has found that women are using generative AI far less than men because they are more concerned about its potential harm to jobs, privacy and mental health. So should we consider our choice of chatbot in the same way we think about ethical consumerism, asks Andrew Griffin – and if so, what is the best option?

Artificial intelligence apps have been rated by AI safety researchers in a study published in December 2025 (AFP/Getty)

The food we eat, the clothes we wear, and the media we consume have all become increasingly ethically charged. We are living in an era where consumerism is both rife and filled with moral concerns – where the very sofa we sit on to think about those thorny issues might itself be an ethical quandary.

There are plenty of reasons to worry about the ethics of using AI, too. They range from the concrete, such as its environmental impact, to questions of consent and copyright, such as whether the data used to train these systems was responsibly gathered. It is both easy and important to think carefully about the ethics of AI companies and their products. But sometimes it is unnecessary: we already have a very good example of what unethical AI looks like. In recent weeks, Grok – the chatbot created by Elon Musk’s xAI and used largely through X, formerly known as Twitter – began being used to generate sexualised and violent images, particularly of women.

But Grok’s misbehaviour is actually something of a default state for such systems. AI is built to be obliging and has no moral codes of its own to hold it back, which means that it will do what it can to respond to any request. Other AI systems will not produce such imagery only because they have been specifically prohibited from doing so by their creators.

This speaks to grander and more knotty concerns that touch on the most foundational question of how we should be as people: what damage is using AI doing to us as humans, and, if it is doing damage, are we being unethical in using it at all?

It is a question that is becoming increasingly common and more urgent. Research published last week showed that there is a gender gap in AI use: women are using it less than men, primarily because of the risks it brings. The study found a gap of up to 18 per cent between the genders and suggested this could be because women “exhibit more social compassion, traditional moral concerns, and pursuit of equity”. “Greater concern for social good may partly explain women’s lower adoption of GenAI,” the authors suggested.

Those ethical concerns are many. The paper cited worries that using chatbots to complete work was unfair, or amounted to cheating, for instance. But there are many more: the potentially sensitive and personal data they gather; the ways AI might be used to undertake unethical behaviour, such as violent actions; and mounting concerns about how these systems further entrench bias and other forms of unfairness.

This is something campaigner Laura Bates has been raising the alarm about for some time, and a subject she covers extensively in her book The New Age of Sexism: How the AI Revolution is Reinventing Misogyny. She argues that unchecked AI can amplify misogyny, harassment, and inequality – from virtual assistants defaulting to female voices in subservient roles, to bias in hiring algorithms, to the creation of deepfake sexual content. Giving evidence to the Women and Equalities Committee in the House of Commons last year, she argued that ethical AI should be designed with an awareness of these risks, noting that many of the same concerns were raised 20 years ago about social media and that we are now seeing the same mistakes repeated at a greater scale with AI.

The potential ethical problems of AI – or, more specifically, the large language models that power products such as ChatGPT and Gemini – begin right at the start of the process. These models are powerful because they have been trained on a vast corpus of writing to learn how words tend to fit together. But there is no clean way to access such a huge amount of text.

To obtain it, many AI companies have resorted to scraping text from the internet: everything from Reddit comments to works by great authors. These words form the foundation of large language models, and they would not exist without them – yet they have often been taken with little concern for copyright holders or whether the people who wrote them would consent to such use.
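In practical terms, “learning how words tend to fit together” boils down to predicting what comes next in those scraped texts. The toy sketch below is purely illustrative – a crude word-pair counter, not any company’s actual pipeline – but it shows the basic idea in miniature:

```python
from collections import Counter, defaultdict

# Toy illustration only: a model "learns how words tend to fit together"
# by observing, across a corpus, which word tends to follow which.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most common continuation seen in the training data."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("sat"))  # 'on' - seen twice in the toy corpus
print(predict_next("the"))  # 'cat' - tied with other words; first seen wins
```

Real systems replace the word counts with billions of learned parameters, but the dependence on the underlying text is the same: without that corpus, there is nothing to learn from.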

Some of these ethical concerns have played out in the courts, where arguments over whether this kind of use is morally sound have been fought most substantially and expensively. But legal rulings do not provide an easy ethical framework. Last summer, for instance, US federal judge William Alsup found that Anthropic’s use of books without their authors’ permission fell within “fair use” rules – while, in the same ruling, reprimanding the company for copying and storing more than 7 million pirated books.

That data was used to train Claude, an AI model that is sometimes held up as one of the more ethical choices in AI. Of course, some of this reputation is simply good marketing: Anthropic launched a series of ads late last year that sought to present Claude as working hand in hand with humans, built around the message “keep thinking”, and focused on helping rather than replacing human intelligence.

Thinking carefully about those ethics means considering the training that goes into these systems in the first place. When a new model is built, it goes through a training process that might include showing a human reviewer two possible responses an AI could give. The reviewer chooses their preferred response – perhaps based on an ethical requirement such as avoiding harm – and that preference is then fed back into the system.
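How that feedback step is implemented varies between labs, and the sketch below is only schematic – the record format and the `ask_reviewer` callback are hypothetical stand-ins – but it shows the shape of the pairwise preference collection described above:

```python
from dataclasses import dataclass

@dataclass
class PreferenceRecord:
    prompt: str
    chosen: str     # the response the human reviewer preferred
    rejected: str   # the response the reviewer passed over

def collect_preference(prompt: str, response_a: str, response_b: str,
                       ask_reviewer) -> PreferenceRecord:
    """Show a reviewer two candidate responses and record which they prefer.

    `ask_reviewer` is a hypothetical callback; in practice the reviewer
    might apply an ethical guideline such as 'avoid harm' when choosing.
    """
    picked_a = ask_reviewer(prompt, response_a, response_b)
    chosen, rejected = (response_a, response_b) if picked_a else (response_b, response_a)
    return PreferenceRecord(prompt, chosen, rejected)

# The collected records are then 'fed back into the system', typically by
# training a reward signal on them and optimising the chatbot against it.
record = collect_preference(
    "How do I pick a lock?",
    "Here is a step-by-step guide...",
    "I can't help with that, but a locksmith can.",
    ask_reviewer=lambda p, a, b: False,  # toy reviewer always prefers option B
)
print(record.chosen)
```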

Training may also involve a more formal and codified set of principles. DeepMind uses a “robot constitution” for robots designed to operate in the real world. Some principles are grand, inspired by Isaac Asimov’s Three Laws of Robotics (for instance, that a robot “may not injure a human being”), while others are more mundane (such as staying away from sharp objects).

Similarly, Anthropic says it used “constitutional AI” in building its Claude assistant. This constitution was based on the Universal Declaration of Human Rights and adopts a similarly high-minded tone, including an instruction to “choose the response that most supports and encourages freedom, equality, and a sense of brotherhood”.

But Anthropic’s work on this constitution also highlighted the problems of principles-based approaches. In its early experiments, the company said, the system became “judgemental or annoying” in how it applied its high-minded rules. As a result, it had to be given additional principles to avoid “sounding excessively condescending, reactive, obnoxious, or condemnatory”.
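Anthropic has described the underlying technique as having the model critique and revise its own drafts against those written principles. The sketch below is a rough illustration of that loop, not Anthropic’s code: the `generate` function is a placeholder for any text-generation call, and the principles are simply the two quoted above.

```python
# Schematic sketch of a constitution-style critique-and-revise loop.
# `generate` is a placeholder, not a real API; the principles are illustrative.

PRINCIPLES = [
    "Choose the response that most supports and encourages freedom, "
    "equality, and a sense of brotherhood.",
    "Avoid sounding excessively condescending, reactive, obnoxious, "
    "or condemnatory.",
]

def generate(prompt: str) -> str:
    """Placeholder for a call to a language model."""
    return f"[model output for: {prompt!r}]"

def constitutional_revision(user_prompt: str) -> str:
    draft = generate(user_prompt)
    for principle in PRINCIPLES:
        critique = generate(
            f"Critique this reply against the principle '{principle}':\n{draft}"
        )
        draft = generate(
            f"Rewrite the reply to address this critique:\n{critique}\n\nReply:\n{draft}"
        )
    return draft

print(constitutional_revision("Explain why my neighbour is wrong about everything."))
```

The second principle in that list is exactly the kind of patch described above: a rule added not for loftiness but to stop the loftier rules making the system insufferable.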

Anthropic has said that one aim of creating and publicising its constitution was transparency. “AI models will have value systems, whether intentional or unintentional,” the company wrote. “One of our goals with constitutional AI is to make those value systems explicit and easy to alter as needed.”

This commitment to transparency runs across most major AI companies, though some take it more seriously than others. Mistral, the French AI company, places particular emphasis on open-sourcing more of its work. It was also notable, as Laura Bates pointed out in her evidence last year, that at the AI Action Summit in Paris, the UK and US governments refused to sign a pledge calling for AI to be ethical and safe, even though 60 other countries did so.

Given the consumer backlash Musk has faced this week, the solution may be a relatively old-fashioned one: taste. Which AI people choose to use may increasingly become a matter of research, and of ethical and aesthetic preference.

It may not, in fact, be so different from buying a sofa after all.
