Don’t ban shadow AI. Bring it into the light.

The signal from unsanctioned AI use isn’t always a compliance issue; sometimes it’s employees telling you what they need

Howard Rabinowitz

The Works contributor

Apr 27, 2026 · 5 MIN READ

Key takeaways
  • Shadow AI isn't just a security problem; it can also be a demand signal that employees aren't getting the AI tools they need to do their jobs effectively

  • IT asset management tools can turn shadow AI governance from a one-time audit into a continuous function by tracking license compliance, login activity, and patch status across the organization

  • Organizations that de-stigmatize unsanctioned AI use and surface what's working can turn employee-led adoption into a company-wide productivity advantage


Shadow AI is taking a toll. Workers are bringing the AI tools that they use off the clock into their workflows, and 86% of IT leaders say they have experienced at least one negative incident—a security breach, internal data leak, or compliance violation—tied to its use.

That’s according to a new Freshworks survey of 1,000 mid-market IT leaders, which finds, paradoxically, that shadow AI is not all bad news. Nearly 80% of those surveyed admit that the use of unsanctioned AI is making their workers more productive.

The scale of the governance challenge isn’t just the number of employees using shadow AI (more than 70% of workers, according to the survey), but how and where they are using it. It’s entering organizations through every channel at once—public AI platforms, browser plugins, personal productivity tools and SaaS features, the Freshworks data reveals. 

But most organizations are still trying to govern AI with defensive security frameworks built for a different era, with policies designed to block unauthorized software rather than manage tools that employees are actively using to do their jobs better. Closing that gap starts with understanding how workers are actually using AI, and building governance around that reality.

A “demand signal” for IT leaders

Instead of viewing shadow AI as strictly a security issue, IT leaders should see it as a “demand signal” for where organizations are failing to meet employees’ needs, argues Emily Lewis-Pinnell, founder of AI consultancy Evaila.

“You don’t tend to see people using illicit AI tools when the right tool is there,” says Lewis-Pinnell. “You see it when there’s a shortcoming with the IT-sanctioned tool or the better tool is not available to employees.” 

For employees, the use of shadow AI is usually tied to their job performance. One in three workers “go rogue” and use unsanctioned AI when they feel their productivity is hampered by the AI tools available to them, according to a 2026 Qualtrics study.

The question is whether IT leaders are able to hear that demand signal. More than 90% of leaders surveyed by Freshworks say they’re confident they already have full visibility into the AI tools used across their organization—while in the same survey, 71% acknowledge that unsanctioned shadow AI use is happening sight unseen, right under their noses.




Fighting fire with (AI) fire

While there is no single strategy to bolster governance of shadow AI, the best bet might be to “fight fire with fire” and use AI itself to accelerate AI governance, says David Talby, CEO of John Snow Labs, an AI healthcare platform developer. 

“It’s not about bending the rules or rubber-stamping approvals,” says Talby. “It’s about using the very tool we’re trying to govern to streamline and improve the governance process itself.” 

The first step toward governing shadow AI is to use an AI-powered IT asset management tool to map and inventory AI usage across the organization, according to Talby. These tools (including Freshworks’ ITAM platform) can surface where unsanctioned AI is being used, what it is being used for, and by whom—they can also track license compliance, login activity, and patch status, providing ongoing governance. Depending on what they find, they can assign a level of risk. Low-risk tools require no committee review. Medium-risk AI may be paused, while high-risk cases are escalated to the governance committee for review.

“Email summarization and spend filtering—that should not be something that you need six senior people in the room to discuss,” says Talby. “The governance committee should really only review the 5-10% of use cases that are riskier.” 
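The triage workflow Talby describes can be sketched in code. This is a minimal illustration, not a real ITAM API: the tool attributes, risk rules, and thresholds below are all hypothetical assumptions chosen to show the shape of the idea, with only high-risk cases reaching the governance committee.

```python
# Hypothetical sketch of risk-tiered AI governance triage.
# All names, attributes, and rules are illustrative, not a real ITAM product.

from dataclasses import dataclass

@dataclass
class AITool:
    name: str
    handles_customer_data: bool  # assumed attribute discovered during inventory
    vendor_approved: bool        # assumed flag from a vendor-review list

def risk_level(tool: AITool) -> str:
    """Classify a discovered tool as low, medium, or high risk."""
    if tool.handles_customer_data and not tool.vendor_approved:
        return "high"    # escalate to the governance committee
    if tool.handles_customer_data or not tool.vendor_approved:
        return "medium"  # pause pending a lightweight review
    return "low"         # auto-approve; no committee time needed

def triage(inventory: list[AITool]) -> dict[str, list[str]]:
    """Bucket the inventory so the committee sees only the risky cases."""
    buckets: dict[str, list[str]] = {"low": [], "medium": [], "high": []}
    for tool in inventory:
        buckets[risk_level(tool)].append(tool.name)
    return buckets

tools = [
    AITool("email-summarizer", handles_customer_data=False, vendor_approved=True),
    AITool("browser-plugin-x", handles_customer_data=False, vendor_approved=False),
    AITool("public-chatbot", handles_customer_data=True, vendor_approved=False),
]
print(triage(tools))
# {'low': ['email-summarizer'], 'medium': ['browser-plugin-x'], 'high': ['public-chatbot']}
```

The design choice mirrors the quote above: routine, low-risk uses are cleared automatically, so senior reviewers spend their time only on the small fraction of cases that genuinely need judgment.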

There's a downstream benefit to keeping asset data current, too. When infrastructure data is complete and up to date, AI-powered service tools can act on reliable information. This means routing tickets accurately, assessing impact, and resolving issues faster. When it's stale or fragmented, even the best AI agent is working blind.

But AI alone can’t solve shadow AI. Many mid-market companies face deeper organizational and structural governance problems, the Freshworks survey found. Only a third of those surveyed say their companies have clearly designated IT or security as the owner of AI governance—and 9% have no designated owner at all. One in three acknowledge that even their existing AI governance policies are applied inconsistently. 

Without that foundation of reliable asset data feeding governed workflows, even the most sophisticated AI governance framework is operating on guesswork.

A positive light on shadow AI

Organizations don’t just need to put the right guardrails in place to govern AI. They need to remove the stigma of workers using AI tools to boost their productivity. 

Research shows that employees hide AI use not only from leaders, but from each other. A 2025 KPMG survey of 48,000 people across 47 countries found that 57% of employees admitted to concealing their AI use from colleagues, not just their managers.

In fact, shadow AI may be hidden due to shame. “Studies show that people do sometimes think less of their coworkers’ work if they find out they have used AI than if they haven’t,” says Lewis-Pinnell. “Tools are hard, but people are harder. We just haven’t quite figured out when it’s okay to use AI and when it isn’t.”

While the Freshworks survey suggests apprehension among mid-market IT leaders about shadow AI’s security risks, it also reveals that they think it's helping their workers do their jobs better. Of the 79% who believe unapproved AI is improving workers’ productivity, 35% say that it’s boosting their workers’ performance significantly.

That should be celebrated, argues Lewis-Pinnell. Organizations that surface success strategies from AI tools, she says, can seed productivity across the company: “If you can tap into that adaptation among your workforce, that’s the way to start driving toward competitive advantages.”

For mid-market IT leaders, bringing shadow AI into the light starts with the employee. The workers already using AI on their own aren’t the problem to manage—they’re the signal of what a supported, AI-enabled workforce could look like.