Teaching employees to trust AI

Nearly every company has embraced AI in principle, but full adoption still lags. Here’s how to get employees to actually use it.

Dan Tynan, The Works Contributor

Jul 15, 2025 · 4 min read

These days, it's hard to find a large organization that isn't experimenting with AI in some way. But while enterprise adoption continues to climb, making it a part of employees' daily workflows remains elusive. 

Just a fraction of employees use AI daily, according to Freshworks’ Global AI Workplace Report, a survey of 4,000 knowledge workers across seven countries. But it’s a priority for 61% of business leaders, who say that winning over AI “holdouts” is important for success.

What’s holding them back? Concerns regarding the accuracy of generative AI models, uncertainty about where the technology should be applied, and worries about job security all play a role in limiting adoption. Call it a trust gap, says Freshworks CTO Murali Swaminathan.

“It’s keeping organizations from realizing the full benefits of intelligent automation,” he says. “Building trust in AI agents is similar to building trust in humans. Once you understand they're working well, you stop micromanaging them. That allows you to automate more tasks and get more done.”

So how can organizations sow trust among employees to reap AI’s benefits? By helping employees overcome fears of job displacement, proving AI’s value through tangible results, and maintaining human oversight that employees can see and trust.


Assuage their fears

If you want your employees to trust AI, you need to start by talking about the elephant in the server room: the fear that these systems will ultimately replace them. A recent study by Stanford University's Social and Language Technologies Lab (SALT) found that 1 out of 4 respondents are concerned that AI will take their jobs.  

"Trust starts with how people feel," says Bob Hutchins, an AI strategist and founder of Human Voice Media. "Many people are worried about becoming irrelevant."


The question business leaders need to address is whether AI is replacing a person or a task. 

People who fear they will be replaced by AI—or employers who believe they can reduce headcount using autonomous agents—tend to assume AI is a lot more capable than it actually is, notes Lisa Simon, chief economist at Revelio Labs, a workforce intelligence firm. 

"Sure, AI can replace tasks, but someone needs to pull all these tasks back together into an actual job," she says. "There's a lot of value in the employee's expertise and experience, and the coordination of tasks isn't going away. That's the part that AI can't do yet, at least not well."

Demonstrate the benefits

You can't expect employees to take it on faith that AI will make their jobs easier; you need to prove it, says Tim Cooley, founder and president of DynamX Consulting.

"Most people start out distrustful of AI," says Cooley, whose firm provides modeling and simulation analysis for the defense industry and works with SMBs to develop their own AI strategies. "You get them to trust AI by letting them use it enough to see that it works."


One approach, says Cooley, is to set up a pilot project where employees compare existing workflows against AI-augmented ones. Say you post a job opening online and get 500 resumes in response. Instead of having your AI recruiting software scan all 500 to identify the top three candidates, have it scan 50 of them and ask your HR managers to do the same. Did they pick the same three? The more the AI's results align with your managers', the more trust they will place in these systems over time. 
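A pilot like this is easy to score objectively with a simple overlap metric. Here is a minimal sketch in Python; the function name and candidate IDs are hypothetical, not part of any particular recruiting tool:

```python
def shortlist_agreement(ai_picks, human_picks):
    """Fraction of the human shortlist that the AI also selected."""
    overlap = set(ai_picks) & set(human_picks)
    return len(overlap) / len(set(human_picks))

# Hypothetical pilot: AI and HR managers each pick their top 3
# candidates from the same batch of 50 resumes.
ai_picks = ["cand_07", "cand_19", "cand_33"]
human_picks = ["cand_07", "cand_19", "cand_41"]

print(f"Agreement: {shortlist_agreement(ai_picks, human_picks):.0%}")
# Agreement: 67%
```

Tracking this number across successive pilots gives teams a concrete, shared measure of whether trust in the tool is warranted, rather than a gut feeling.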

"You start with low-risk projects and then work your way up,” he says. “Once people get familiar with the technology, they'll use it without thinking and then spread the word to everyone else."

After employees see the outputs of AI are predictable and reliable, they'll develop more confidence in these tools and begin to use them on a regular basis, says Swaminathan.

Keep humans visibly in the loop 

Companies need to be clear about which tasks are being automated and when human oversight kicks in. Clear communication about AI’s role helps employees understand how the technology fits into their workflow rather than replacing it. 

"Employers shouldn't just send out an email saying, 'We've approved using ChatGPT, go nuts,'" says Daniel Space, principal of DanFromHR, a human resources consultant and creator of the popular DanFromHR TikTok channel. "Instead, they need to ask employees what they think AI can do. What prompts have they used and what results have they seen? What are some areas where they would recommend against using GPT?"

But some workers may be reluctant to admit they're using AI on the job, for fear their employers will think they're cheating. Managers need to help them overcome those fears by pointing out how using AI can benefit them personally, adds Simon. 

"On average, people adopting AI have much higher salaries, which makes a lot of sense," she adds. "If you're open about your use and can get twice the work done, you're automatically more valuable to your employer. Companies need to encourage workers to become better versions of themselves." 

Establishing a regular feedback and measurement process is key, says Swaminathan. 

"When AI performs a task to our satisfaction, employees can give it a thumbs-up," he says. "If the AI didn't work as intended, they can give it a thumbs-down, which automatically creates a support ticket for us to figure out what went wrong."

The biggest friction for employees usually comes from ambiguity, Hutchins adds. 

"I’ve found that when teams are brought in early, given clarity on what the AI does and doesn’t do, and shown how it fits into their role rather than overwrites it, adoption improves," he says. "Transparency, yes, but also empathy."