IT leaders say they can see ‘shadow AI.’ Their own data says otherwise
New research reveals a striking gap between how much control mid-market IT leaders believe they have over unsanctioned AI, and how much they actually do
Key takeaways
Ninety-two percent of IT leaders say they have full visibility into AI tools across their environment, yet 71% acknowledge that unapproved AI use is already common, a contradiction that points to a risky blind spot.
Nearly 8 in 10 (79%) IT leaders believe “shadow AI” (employees using unapproved AI tools) is making employees more productive, even as 86% report negative incidents in the past year — creating a governance dilemma with no clean resolution.
Mid-market organizations face a structural disadvantage: Governance ownership is fragmented, enforcement is inconsistent, and 72% say the legacy platforms built for large enterprises are too complex or heavy for their needs.
Ask a thousand mid-market IT leaders whether they can see the AI tools running across their organizations, and a full 92% will say yes. Yet 71% of those same leaders readily acknowledge that unauthorized AI use is already common in their organizations, according to a new survey from Freshworks.
"Shadow AI isn't spreading because people are ignoring the rules, it's spreading because governance hasn’t kept up. Most organizations know AI is everywhere, they just don’t have the systems to control it," says Murali Swaminathan, CTO at Freshworks. "The real challenge for mid-market companies isn’t visibility, it’s control. They can see AI adoption is happening, but don’t yet have the mechanisms to manage it.”
Shadow AI is already everywhere, IT knows it, and governance hasn’t caught up. That conundrum is the central finding of the Freshworks survey of 1,000 IT decision-makers at U.S. mid-market companies, one that suggests the risks of unsanctioned AI adoption are accumulating faster than most organizations realize.
The gap between confidence and control
IT leaders aren't just missing what's happening outside their purview; they're missing what's happening inside it. The survey reveals an overconfidence pattern that cuts across every dimension of AI visibility, including IT's own behavior:
99% say they're confident in their ability to monitor AI agents, yet 13% admit the controls applied to those agents are significantly weaker than those applied to human users.
39% identified their own department as the one most likely to adopt AI without approval, a larger share than for any other team in the organization.
Unsanctioned AI enters through every channel: public AI platforms (40%), browser plugins and extensions (34%), personal productivity tools (32%), SaaS features (34%).
The breadth of entry points reflects how thoroughly AI capabilities have been embedded into everyday workplace tools, often without any deliberate decision to adopt them.
Productive and dangerous at the same time
The governance challenge is compounded by the fact that IT leaders largely believe shadow AI is delivering results. That perception shapes how seriously organizations treat the risks:
79% say employees using unapproved tools are more productive, with 35% calling those gains significant.
86% report at least one negative incident tied to unapproved AI in the past year.
Nearly a quarter (23%) have experienced more than three such incidents.
46% express extreme concern about employees unintentionally sharing sensitive data with external tools.
Organizations are managing real productivity gains and real risk exposure simultaneously, yet most governance frameworks weren't designed with that trade-off in mind.
A structural problem, not only a policy one
The data suggests that inconsistent enforcement also reflects genuine gaps in how governance authority is assigned and resourced:
Only a third (33%) of respondents identify IT or security as the clear authority on AI governance.
One-third (33%) admit their formal AI governance policies are inconsistently enforced.
9% have no clear governance owner at all.
Over a quarter (27%) say slow approval processes are the primary reason employees circumvent IT in the first place.
The mid-market faces a particular bind: These organizations are large enough to experience enterprise-grade AI sprawl, yet they lack the resources larger enterprises typically have to manage it. Until that gap closes, the problems will only compound.
