Article • 2 min read
Ethics of AI in CX
AI ethics act as a safeguard against biases, privacy violations, and unintended consequences that can harm others (and your business). Learn how to use AI for CX ethically.
By Staff Writer Hannah Wren
Last updated: October 9, 2023
The ethics of AI in CX include:
- Developing ethical AI guidelines
- Teaching customer service agents the code of AI ethics
- Opting for transparency and explainability
- Informing customers when they’re interacting with an AI chatbot
- Building chatbots that are contextual and use-case specific
- Rating AI against your own biases
When it comes to the customer experience (CX), consumers are ready for artificial intelligence (AI).
According to the Zendesk Customer Experience Trends Report 2023, 73 percent of consumers expect more interactions with AI in their daily lives and believe it will improve customer service quality.
Like Spider-Man uses his spidey senses for good, businesses can use AI to improve their customer experience. But as we know, “With great power comes great responsibility”—businesses and individuals must commit to prioritizing the ethics of AI in CX to create a safe and positive experience for all.
To learn more about the ethics of AI in CX, you’ll need to understand the basics: what AI ethics means, ethical considerations to keep in mind, and how to ethically implement AI across your CX.
Table of contents:
- What are AI ethics?
- Important AI considerations for CX leaders
- 6 tips for using AI ethically for CX
- Frequently asked questions
- Empower agents with ethical AI
What are AI ethics?
AI ethics is a collection of moral principles that shape the responsible development, deployment, and use of AI technologies. As AI becomes more prominent in products, services, and everyday life, organizations should establish policies to ensure they use AI responsibly.
AI ethics are also encouraged nationally. In October 2022, the White House Office of Science and Technology Policy (OSTP) released the Blueprint for an AI Bill of Rights, an ethical framework for using AI in the U.S.
The Blueprint for an AI Bill of Rights contains five principles that the OSTP believes every American should be entitled to:
- Safe and effective systems: AI systems should be safe and effective.
- Algorithmic discrimination protections: AI algorithms should prioritize fairness and equity and must not contribute to discrimination in any way.
- Data privacy: AI systems should include built-in protections to safeguard data.
- Notice and explanation: Businesses should inform users whenever they use an AI system.
- Human alternatives, consideration, and fallback: Users should be able to opt out of using an AI system and receive help from a person when applicable.
Whether at a national or organizational level, prioritizing AI ethics can help ensure the safety of businesses and individuals. This allows for the responsible design and use of AI while minimizing any unintended negative consequences.
The impact of AI ethics on CX
From AI chatbots to customer experience software, businesses turn to AI technology to enhance CX. When doing so, it is crucial to keep the ethics of AI in mind. According to IBM, 85 percent of consumers believe it is important for businesses to consider ethics when using AI technology.
Failure to do so may lead to unexpected consequences. For example, if you aren’t transparent about how your AI technology collects and uses customer data, you may lose customer trust, turning your customer service superheroes into devious supervillains.
On the other hand, ethical use of AI can transform how service teams work, providing your customers with immersive and efficient customer service interactions. According to our CX Trends Report, 74 percent of consumers believe AI will improve customer service efficiency.
Important AI considerations for CX leaders
As a business leader, you’ve likely considered implementing AI to improve your customer experience. And you’re not alone—72 percent of business leaders are making it a priority to expand AI across the customer experience in the coming year, according to our CX Trends Report. To ensure your business uses AI for good, keep the following questions top of mind.
How can we optimize operations and productivity while maintaining empathetic customer experiences?
Businesses can use AI to optimize their operations and deliver valuable, empathetic customer experiences by reducing wait times and providing 24/7 support. However, if this is done incorrectly, AI can lead to less empathetic customer interactions and poorer experiences. To avoid this, treat AI as a sidekick to your support agents rather than a robot designed to replace them. For example:
- Businesses can build customer empathy by providing fast and effective resolutions. AI bots can immediately answer questions and route customers to live agents when needed. In some instances, AI chatbots can completely resolve a customer’s issue without involving an agent. This reduces customer wait times and frees up your agents, allowing them to save the day elsewhere.
- Similarly, AI-enabled customer service software can give your live agents the pertinent information they need to quickly solve issues with an empathetic human touch.
AI can also help your human agents adapt on the fly by automatically suggesting helpful articles based on real-time customer messages. That way, your agents can spend more time assisting customers and less time manually sorting through resources.
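The bot-to-agent handoff described above can be sketched as a simple confidence threshold with a human fallback. This is a minimal, hypothetical illustration: the intents, canned answers, confidence scores, and the 0.8 threshold are all illustrative assumptions, not any particular product's behavior.

```python
# Minimal sketch of the bot-with-human-fallback pattern (hypothetical values).
HANDOFF_THRESHOLD = 0.8  # below this confidence, escalate to a live agent

CANNED_ANSWERS = {
    "reset_password": "You can reset your password from the login page.",
    "order_status": "Your order status is available under Account > Orders.",
}

def classify(message):
    """Stand-in for a real intent model: returns (intent, confidence)."""
    if "password" in message.lower():
        return "reset_password", 0.95
    if "order" in message.lower():
        return "order_status", 0.90
    return "unknown", 0.20

def handle(message):
    """Answer with the bot when confident; otherwise route to a person."""
    intent, confidence = classify(message)
    if confidence >= HANDOFF_THRESHOLD and intent in CANNED_ANSWERS:
        return ("bot", CANNED_ANSWERS[intent])
    # Fallback: a human takes over, per the "human alternatives" principle.
    return ("agent", "Routing you to a live agent who can help.")

print(handle("I forgot my password"))  # handled by the bot
print(handle("My widget is on fire"))  # escalated to an agent
```

The key design choice is that uncertainty routes toward a person, never toward a guess, which is what keeps the automated path empathetic rather than frustrating.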
Businesses can also maintain empathy by leveraging customer data to provide personalized experiences (which most customers prefer).
59% of consumers believe businesses should use customer data to personalize their customer support experiences. Source: The Zendesk CX Trends Report 2023
If you use AI without prioritizing empathetic connections, fast resolutions, and personalized experiences, you could make your customers feel uncared for and ready to leave. In our CX Trends Report, 66 percent of consumers state that a bad support interaction can ruin their day, and 73 percent will switch to a competitor if the problem continues.
How can we allow for a diverse set of viewpoints?
It’s also important to allow for varying perspectives. If you don’t, you could end up with AI technology that amplifies racist, sexist, or ageist biases. For example:
- Steven T. Piantadosi, head of the computation and language department at the University of California, Berkeley, got ChatGPT to write code that stated only white or Asian men could make good scientists.
- Studies show that an AI lending platform charged higher interest rates and loan fees to individuals who attended historically Black and Latino colleges than those who went to New York University.
Businesses need to hire diverse teams and test algorithms on diverse groups—or they’ll get unsettling outcomes that magnify hidden prejudices of a monocultural workforce. Additionally, businesses can test against biases by:
- Continually monitoring AI interactions for biases
- Assessing the data their AI technology feeds off of for biased information
- Conducting third-party audits
- Using AI bias detection tools
Fortunately, testing for AI bias is becoming more common, and it is even a legal requirement in some areas. In 2023, a New York City law took effect that requires businesses using automated employment decision tools to pass a third-party audit showing the AI system is free of racial or gender bias.
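The monitoring step above can be sketched with one of the simplest bias checks: comparing approval rates across demographic groups (demographic parity). This is a hypothetical sketch, not a complete audit; the groups, decisions, and the 0.10 tolerance are illustrative assumptions, and real audits use several fairness metrics, not just this one.

```python
# Hypothetical demographic-parity check (one bias metric among many).
def approval_rates(decisions):
    """Approval rate per group. decisions: list of (group, approved) pairs."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest difference in approval rates between any two groups."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Illustrative decision log: group A approved 3/4, group B approved 1/4.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

gap = parity_gap(decisions)
print(f"Approval-rate gap between groups: {gap:.2f}")
if gap > 0.10:  # illustrative tolerance, to be set by your own policy
    print("Flag for human review: possible disparate impact")
```

Running a check like this continually over live decisions, rather than once before launch, is what turns bias testing into the ongoing monitoring the list above calls for.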
Can our AI software protect customer privacy and security?
When using AI, customer privacy and security should be a top priority. This includes obtaining consent and implementing data security best practices to help prevent the abuse, theft, or loss of customer data. Research any AI technology before you use it to ensure it provides sufficient cybersecurity measures (depending on where you operate). This could include maintaining your compliance with things like:
- General Data Protection Regulation (GDPR): GDPR is a data privacy and security law passed by the European Union (EU) to help protect the data of people in the EU.
- California Consumer Privacy Act (CCPA): CCPA grants California consumers data privacy, including the right to opt out of the sale and sharing of personal data.
- Personal Information Protection and Electronic Documents Act (PIPEDA): PIPEDA is a Canadian law that requires private-sector organizations to obtain consent before collecting or using the data of Canadian consumers.
- Health Insurance Portability and Accountability Act (HIPAA): HIPAA is a federal law designed to prevent sensitive health information from being shared without a patient’s consent. Companies in the healthcare industry must be sure they maintain HIPAA compliance while using AI.
Using AI technology that complies with security standards relevant to your geographic location and industry can help protect both the data of your company and your customer base.
6 tips for using AI ethically for CX
Now that you understand ethics in AI and its importance in the CX world, it’s time for you to use artificial intelligence as a customer service superpower. For those ready to empower their super team of agents with AI, we’ve gathered six tips to help you avoid ethical issues along the way.
1. Develop ethical AI guidelines
One of the best ways to ensure you’re using AI ethically for CX is to develop an AI code of ethics to regulate its use. These guidelines may vary depending on your use of AI but generally include:
- Adhering to data privacy regulations
- Prioritizing transparency
- Ensuring fairness, diversity, and inclusion
- Giving users the freedom to opt out of AI interactions
- Committing to continual testing and auditing
Once your AI ethics guidelines are set, stakeholders can use them as a guiding light when implementing AI across your CX.
2. Teach customer service agents the code of AI ethics
After creating your AI guidelines, be sure to educate your employees about the ethical concerns of using AI for CX—and remember, machines rely on the information we give them.
“They’re just regurgitating what’s already there,” says Mikey Fischer, who developed a system that translates natural language into code and recently completed his Ph.D. in computer science at Stanford University, specializing in AI and natural language processing.
According to Fischer, that’s why it’s important to keep a human in the loop to evaluate for AI bias even after testing a system, making sure it works for specific users and looking at individual use cases instead of global demographics.
It also helps to have a fallback system so customers can contact a real person if they encounter something unexpected or inaccurate.
3. Opt for transparency and explainability
No matter what a business uses AI for, it must commit to transparency about how it operates with regard to AI, and to explainability, which means being able to understand why an AI system made a certain decision.
“Companies that are transparent with technology or make it open access tend to get into fewer issues,” says Fischer. “Businesses get into trouble with AI when they over-promise or hide things, not necessarily because the technology isn’t perfect—everyone understands there are limitations to technology.”
Just like companies are transparent about pricing or business values, they can be open about AI, like what their technology can and can’t do and the internal processes around it. This also means being honest when something goes wrong.
4. Inform customers when they’re interacting with an AI chatbot
Transparent businesses also ensure customers know when they’re interacting with AI, according to Fischer. For instance, the “B.O.T law” requires companies to inform consumers in California when they’re talking to a chatbot instead of a human.
That way, a customer never has to second-guess whether they’re talking to a chatbot or a customer service agent. This is becoming increasingly important, as 65 percent of business leaders believe the AI technology they use is becoming more human-like, according to our CX Trends Report.
5. Build chatbots that are contextual and use-case specific
Businesses can reduce bias by creating AI technology that is contextually relevant and use-case specific.
“When AI is set for a task that’s too broad, there is no way for it to be unbiased because a lot of what is ethical is contextually relevant,” Fischer says. “When we’re all forced into a monoculture of technology, it’s not really possible for it to hit all the nuances of a given demographic or whoever the AI is trying to serve.”
Examples of domain-specific chatbots—chatbots set for a specific task—include Bank of America’s Erica, which helps clients manage their finances, and The World Health Organization’s WhatsApp bot, which provides users with reliable information about COVID-19.
“It’s about being user- and use-case specific, so the chatbot has enough context to give it the ability to be ethical,” says Fischer. “If it has a specific task, the user and the system have more of the melding of the minds.”
6. Rate AI against your own biases
AI technology learns from the inputs we give it, absorbing the world as it is or has been, not as it should be. As a result, we pass on our biases, conscious or unconscious.
“There is no such thing as an unbiased system,” Fischer explains. “AI is always based on some definition of fairness that it’s trying to optimize for, and there are many definitions of what fair means.”
The example Fischer gives is a chatbot for creditworthiness: do you define creditworthiness as the likelihood that someone will repay a loan, or do you optimize for maximum profit?
And even when we think we’ve programmed AI without bias, it can learn prejudices we may not realize we have.
“AI systems have millions of parameters, and sometimes it’s not immediately clear to a human the reason why a decision was made,” says Fischer. “Even the most careful parents can produce a child that is far from what they expected.”
For example, a bank might find that its creditworthiness algorithm has a racial bias and remove race as an input. However, the algorithm can statistically deduce race from other factors, like geographical location or where someone went to college.
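The proxy effect described above can be illustrated with a toy sketch: even after the protected attribute is removed, a correlated feature such as zip code can statistically reconstruct it. The data here is entirely hypothetical and deliberately skewed to make the effect visible.

```python
# Toy illustration (hypothetical data): dropping a protected attribute does not
# remove bias when a remaining feature is strongly correlated with it.
from collections import Counter, defaultdict

# Each record: (zip_code, protected_group). A model trained without the group
# column still "sees" it indirectly through the zip code in this skewed sample.
records = [
    ("10001", "A"), ("10001", "A"), ("10001", "A"), ("10001", "B"),
    ("20002", "B"), ("20002", "B"), ("20002", "B"), ("20002", "A"),
]

# Build a trivial proxy predictor: the majority group per zip code.
by_zip = defaultdict(Counter)
for zip_code, group in records:
    by_zip[zip_code][group] += 1
proxy = {z: counts.most_common(1)[0][0] for z, counts in by_zip.items()}

# The zip-only predictor recovers the protected group 6 out of 8 times here,
# well above the 50 percent a blind guess would achieve.
correct = sum(proxy[z] == g for z, g in records)
print(f"Recovered protected group from zip code alone: {correct}/{len(records)}")
```

This is why removing an input is not the same as removing a bias: auditing has to look at outcomes across groups, not just at which columns the model was given.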
“It’s hard to fully remove discriminating factors,” Fischer explains. That’s why, as people responsible for building the AI experiences of the future, we need to rate chatbots against our own biases.
Empower agents with ethical AI
While using AI for customer service can enhance the customer experience and streamline your support processes, it’s critical to do so responsibly and ethically by maintaining empathy, diversity, and privacy.
By empowering human agents with AI, you can extend your agents’ abilities and provide meaningful automated customer support while boosting productivity, increasing customer satisfaction, and saving time and money.