AI has a customer trust problem. Here’s how to solve it.

To reap the benefits of intelligent automation, companies must put transparency first

Dan Tynan
The Works Contributor

Jul 25, 2025 · 5 MIN READ

A corporate event agency in Nashville wanted to get a better handle on how its clients’ conferences were being received. Did the speakers connect with the people in the room? Were men more engaged than women? At what point did the audience lose interest? 

They asked Bob Hutchins, an AI strategist and founder of Human Voice Media, for ideas. He suggested that instead of handing out surveys after each event, they use computer vision and AI algorithms to analyze the audience’s demographics, body language, and facial expressions in real time.
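To give a rough sense of how such a system might work, here is a minimal Python sketch: sample frames from the room camera, detect faces, and average a per-face engagement score over time. The OpenCV calls are real; score_engagement is a hypothetical stand-in for whatever trained facial-expression model an actual implementation would use.

```python
import cv2  # OpenCV for camera capture and face detection

# Stock Haar-cascade face detector that ships with OpenCV.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def score_engagement(face_crop) -> float:
    """Hypothetical placeholder: a trained facial-expression model
    would go here, returning an engagement score from 0.0 to 1.0."""
    return 0.5

def frame_engagement(frame) -> float | None:
    """Average engagement across every face detected in one video frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None  # no faces found; no signal for this frame
    scores = [score_engagement(frame[y:y + h, x:x + w]) for (x, y, w, h) in faces]
    return sum(scores) / len(scores)

# Sampling one frame every few seconds is enough to chart when,
# during a keynote, the room's attention starts to drift.
```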

At first, this suggestion was met with skepticism. Could a machine really do this better than humans? How would they know the results weren’t just another AI hallucination? 

Hutchins eventually won them over, but it took some persuading. 

“When you’re spending $50,000 on a keynote speaker, you want to measure the return on that investment,” he says. “But first, you have to convince the CFO to trust that the technology actually works. And that means listening to their concerns and dealing with their fears.”

AI offers a wealth of potential benefits for consumers, including faster service, better support, and a more personalized experience. But companies still need to persuade their customers to trust the technology—and so far, that's been a struggle. 

Late last year, a research team at the University of Melbourne surveyed more than 48,000 people across 47 countries about their attitudes toward AI. Globally, only 43% say they're willing to put their trust in AI systems, down from 52% two years earlier. Their biggest concerns: misinformation, inaccurate results, and the loss of human connection. 

“AI delivers results—when customers use it,” says Murali Swaminathan, CTO at Freshworks. “Companies that get trust right, with sufficient guardrails and human oversight, can unlock significant competitive advantage.” 

Building customer trust requires being honest about AI’s limitations, being transparent about when customers are interacting with a bot, knowing exactly when to hand off to a human, and designing systems that work so seamlessly that customers don’t mind the automation.

Don't oversell what AI can do

Blindly trusting in AI can certainly come back to bite you. If you don't believe that, ask the many attorneys who've been caught filing AI-generated court documents citing fictional case law and wound up getting fined or reprimanded.

But other organizations are getting ahead of the problem by setting realistic expectations, a guard against the kind of spectacular failure that can damage customer relationships and employee confidence.

That approach shaped how ContractPodAi built Leah, its generative AI legal solution. The company was using AI to help attorneys review contracts and other complex documents a decade before ChatGPT and other LLMs arrived on the scene.

Making sure customers understand what the technology can do—and where humans still need to play a key role—is vital to gaining their trust, says Anurag Malik, the company’s president and CTO. 

ContractPodAi has instituted guardrails to minimize hallucinations, but it doesn't claim that Leah's output is 100% accurate for every use case, and it encourages firms to keep human attorneys in the loop to review the results. 

"We'll never go in claiming this technology will replace all your lawyers or solve all your problems," he says. "If you tell your customers they can just close their eyes and everything is going to work, they will be very unhappy when they find out it doesn't. Be honest about the outcomes and you'll have a happy customer."

Don't pretend that chatbot is a person

The need for transparency also applies to how and when companies employ AI in public-facing roles. Fooling customers into thinking a human-sounding bot is a real person can backfire badly when the ruse is revealed.

When researchers at the University of Zurich conducted a covert experiment, deploying chatbots that posed as humans on Reddit to test AI's ability to influence opinions, the blowback was immediate and intense. The discussion forum banned the researchers, threatened legal action, and said it would expand its user verification procedures to prevent future bots from registering as humans.

A handful of U.S. states have passed or are weighing laws requiring companies to disclose when they are using AI, but in the meantime, some companies are using transparency as a competitive advantage rather than a compliance burden. 

When digital marketing firm Helium SEO deployed AI chatbots to handle customer interactions, it took pains to make sure its customers knew they were dealing with AI, says CTO Paul DeMott.

"The moment we introduced a chatbot to help manage incoming client queries, I knew the tech could not pretend to be a person," he says. "We were upfront about when clients were speaking with the bot and exactly what it could help them with,” says DeMott. “That transparency alone cuts down on frustration."

Establish clear boundaries between machines and humans

Just as important are the things you don't allow AI to do. Every company needs to know when and where to draw the line, says Hutchins.

"Just because AI can automate something doesn't mean it should," he says. "If the stakes of an interaction are personal, permanent, or meaningful, I don't think you want a machine involved. So a bot can handle a late delivery notice, but I would not advise having it write an apology letter for a medical delay." 

Smart organizations also train chatbots to recognize when customers are getting frustrated. Hutchins works with businesses to build online chatbots and voice-activated response systems that detect “psycholinguistic triggers” indicating a customer is unhappy with the bot's responses, then escalate to a human agent quickly.

That handoff must be as seamless as possible, notes Freshworks’ Swaminathan. 

"Ideally, the conversation is going so well you don't even realize you're talking to a bot," he says. "But the moment you're experiencing any frustration, the bot needs to say, 'I'm sorry, I'm not able to help you; let me connect you to a live agent so you can resolve your issue more quickly.'"

Be transparent about how you're using AI 

Ultimately, the goal is to create interactions so smooth that customers don't care that they're conversing with a machine, but know there's a human ready to help when things don't work, says Swaminathan.

This is the “invisible until it isn’t” principle, he says.

Building this kind of experience demands proactive transparency rather than hidden automation.

"Trust is fragile," notes Hutchins. "And once broken, winning it back from a customer can only come from human-centric engagement. It’s nearly impossible to do it with an algorithm.”