
Zendesk on trust in the age of generative AI


Wednesday 01 November 2023 09:05 GMT
Working together: with AI helping with workload, humans will be freed up to help with more complex issues (iStock)

Zendesk is a Business Reporter client.

Pablo Kenney, Vice President at Zendesk, explains why trust is such an important consideration for organisations adopting generative AI.

Business Reporter: What are the benefits of generative AI to organisations?

Pablo Kenney: Organisations are eager to create online experiences that people feel good about. Take customer support as an example: organisations need to understand what a customer needs and why they are asking a particular question. The response, whether it comes from a chatbot or a human, needs to provide the right information quickly, accurately and fully. This is where AI can play such an important role: it can find the right data and generate new information by analysing existing data. And at its best, it can do so faster and more accurately than any human.

But AI isn’t just useful for external interactions. It can be a vital tool for the employee experience (EX) as well. Many jobs contain tasks that are routine and mundane. AI can enhance productivity and morale by taking on those routine tasks, allowing people to focus on the parts of their job where they add real human value – the parts they enjoy most. More than that, AI gives people access to information they wouldn’t have otherwise – deep data analysis and new scenarios generated in real time. This gives people the opportunity to do things that simply aren’t possible without AI.

BR: Why is trust so important for organisations that are using generative AI?

PK: Our clients want to provide high-quality service to their customers. Underlying their ability to do this are some basic tenets that involve trust: trust that personal data will be handled securely, and trust in fair treatment. If your customers don’t trust you, you won’t keep many of them. And companies that lose customers know all too well how hard it is to win them back.

As well as working to maintain trust, companies need to remember that, to their customers, generative AI tools are like a third party in the room: customers need to trust generative AI just as much as they trust any humans they are dealing with. That of course means any interactions with AI-powered chatbots must be appropriate, accurate and safe.

BR: How can organisations build trust in AI systems?

PK: Building responsible AI is part of our commitment to provide customers with dependable products and solutions. In our AI development process, the most important things are transparency and accountability. Transparency involves a few things. You need to tell people when they are dealing with an AI – even if it sounds just like a human. And you need to explain to people what an interaction with an AI involves: for example, what will happen to their data and how decisions will be made.
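
To make that transparency point concrete, here is a minimal sketch in Python of a chat flow that discloses up front that the customer is talking to an AI and explains what will happen to their data. The names (ChatSession, DISCLOSURE) and the wording are illustrative assumptions, not part of any Zendesk product:

```python
from dataclasses import dataclass, field

# Hypothetical disclosure text: a real deployment would link to a full
# privacy notice and tailor the wording to the audience and jurisdiction.
DISCLOSURE = (
    "You are chatting with an AI assistant. Your messages are used to "
    "answer your question and to improve service quality. "
    "Type 'agent' at any time to reach a human."
)

@dataclass
class ChatSession:
    messages: list = field(default_factory=list)
    disclosed: bool = False

    def send(self, text: str) -> None:
        # Transparency: the very first thing the customer sees identifies
        # the AI and explains what the interaction involves.
        if not self.disclosed:
            self.messages.append(("system", DISCLOSURE))
            self.disclosed = True
        self.messages.append(("assistant", text))

session = ChatSession()
session.send("Hi! How can I help with your order today?")
for role, text in session.messages:
    print(f"{role}: {text}")
```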

Companies must hold themselves accountable for the decisions they make. A lot of people may be responsible for different parts of a decision – there will rarely be just one person who decides. So there must be a process where the organisation as a whole accepts accountability, and where consumers can contest a decision that they are unhappy with by interacting with the organisation as a whole, rather than getting passed from department to department.

BR: What can destroy trust? What happens if trust is broken?

PK: Personal data breaches, chatbots exhibiting inappropriate humour, or AI failures in timely problem-solving can all break trust. AI systems that don’t show empathy can be very frustrating to customers, even if they are aware that they are dealing with a machine.

All of these things can destroy trust, especially if previous experience with an organisation’s products or people has already damaged it. And once trust is destroyed, it will be very hard to get people to interact with you, or buy from you, in the future.

BR: How can organisations manage personal data to ensure AI systems are trusted?

PK: As the demand for AI-driven and personalised experiences grows, companies are finding it increasingly important to secure data across the customer journey. There are some basic principles that need to be followed: privacy by design and security by design. We help our commercial clients follow these so that they can be trusted by their customers. Privacy by design means that personal data is only processed when it is necessary to do so, and that organisations track why they collect personal data and how it is subsequently used.
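
A minimal sketch of that privacy-by-design principle: personal data is only passed on when a declared purpose actually requires it. The field names and purposes here are hypothetical, not drawn from any real schema:

```python
# Data minimisation sketch: personal data is only passed on when the
# declared purpose requires it. Field and purpose names are hypothetical.
REQUIRED_FIELDS = {
    "order_status": {"order_id"},
    "shipping_update": {"order_id", "postcode"},
}

def minimise(record: dict, purpose: str) -> dict:
    """Return only the fields needed for this purpose, and log the rest."""
    allowed = REQUIRED_FIELDS.get(purpose)
    if allowed is None:
        # No declared purpose means no processing at all.
        raise ValueError(f"Undeclared purpose: {purpose}")
    kept = {k: v for k, v in record.items() if k in allowed}
    dropped = sorted(set(record) - allowed)
    print(f"purpose={purpose} kept={sorted(kept)} dropped={dropped}")
    return kept

customer = {"order_id": "A123", "postcode": "SW1A 1AA", "email": "jo@example.com"}
minimise(customer, "order_status")  # the email never leaves this function
```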

Security by design is just as important. This requires basic hygiene to be put in place to protect data: for example, software must be kept up to date; access to confidential data and key systems needs to be managed and protected by strong authentication; data should be backed up securely; and third-party suppliers need to be audited to ensure they are also secure.
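
As an illustration of access to confidential data being protected by strong authentication, here is a small sketch, assuming a hypothetical session object that records whether multi-factor authentication has been completed:

```python
from functools import wraps

def require_mfa(func):
    """Refuse access to confidential data unless the caller's session
    has completed multi-factor authentication."""
    @wraps(func)
    def wrapper(session, *args, **kwargs):
        if not session.get("mfa_verified"):
            raise PermissionError("strong authentication required")
        return func(session, *args, **kwargs)
    return wrapper

@require_mfa
def read_customer_record(session, customer_id):
    return {"customer_id": customer_id, "status": "active"}

print(read_customer_record({"mfa_verified": True}, "A123"))
# read_customer_record({"mfa_verified": False}, "A123") raises PermissionError
```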

These are the basics of any privacy and IT security system. But with AI there are other risk factors that need to be considered. For example, the outputs of the system might be low-quality – poor customer advice from a chatbot perhaps, or unsafe recommendations. Data might be passed to a third party without appropriate controls and transparency, and AI might create new avenues for social engineering.

Organisations must think carefully and imaginatively about the potential risks of using AI, put guardrails in place to mitigate potential damage, and rely on trusted partners with a track record of security and privacy protection.
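
One simple form such a guardrail could take is a check on every model reply before it reaches the customer, with anything that fails withheld and escalated. This sketch is illustrative only – the checks, blocked topics and confidence threshold are assumptions, not Zendesk’s actual safeguards:

```python
# An output guardrail: every model reply is checked before it reaches the
# customer, and anything that fails is withheld and routed to a human.
BLOCKED_TOPICS = ("legal advice", "medical advice")

def guard(reply: str, confidence: float) -> tuple[bool, str]:
    """Return (safe_to_send, reason)."""
    if confidence < 0.7:
        return False, "low confidence: route to a human agent"
    if any(topic in reply.lower() for topic in BLOCKED_TOPICS):
        return False, "out-of-scope topic: route to a human agent"
    return True, "ok"

print(guard("Your parcel is due on Friday.", confidence=0.93))
# (True, 'ok')
print(guard("Here is some legal advice about your claim.", confidence=0.95))
# (False, 'out-of-scope topic: route to a human agent')
```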

BR: Is there such a thing as too much trust? How can you get the right level of trust?

PK: You can certainly have too much trust. Consumers need to be prepared to question the outputs from AI – and need to have a route to do so, such as escalating a problem from a chatbot to a human.
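
In practice, that escalation route could look something like the sketch below, in which a customer can always break out of the bot and reach a person. The trigger phrases and the hand-off threshold are assumptions for illustration:

```python
# Escalation routing: the customer can always break out of the bot and
# reach a person. Trigger phrases and the retry limit are assumptions.
ESCALATION_TRIGGERS = ("agent", "human", "complaint")

def route(message: str, failed_bot_turns: int) -> str:
    """Decide whether the bot keeps the conversation or hands it off."""
    wants_human = any(t in message.lower() for t in ESCALATION_TRIGGERS)
    if wants_human or failed_bot_turns >= 2:
        return "human_agent"  # hand off, with the full conversation context
    return "bot"

print(route("Where is my parcel?", failed_bot_turns=0))             # bot
print(route("Let me talk to a human, please", failed_bot_turns=0))  # human_agent
```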

It’s just as important for customer service agents to question AI systems. They will earn their customers’ trust by showing that they don’t trust blindly in the outputs from an automated system but are prepared to question it. In fact, the use of AI in customer service should enable this to happen: if agents are freed from having to deal with lots of routine questions, they will have the time to address complex issues where a personalised and more creative solution is needed.

BR: How can consumers tell whether an AI system is trustworthy?

PK: I don’t believe that consumers should have to worry about whether an AI system is trustworthy or not. AI will be just a small part of the business that a consumer is interacting with. The focus should be on building trust in the company as a whole, not just in the AI system. If consumers trust the company, they will trust the AI system as part of that.


For more information please visit www.zendesk.com.
