To best manage AI, bone up on “policy fluency”
HR tech expert Julia Bersin unpacks the skills that managers need today to get the best out of humans and AI
Here’s what most companies know: Today’s AI agents are capable of reasoning, taking autonomous action, and completing multi-step tasks without human hand-holding, and organizations are rushing to put them to work—literally. Some companies are even assigning AI agents names, giving them defined roles, and dropping them into the org chart alongside their human colleagues.
But Julia Bersin, director of research at the advisory firm The Josh Bersin Company, a global leader in HR thought leadership, thinks that “AI as employees” is the wrong way to view teams and talent. “AI agents are tools,” she says. As such, managers need a distinct set of skills: implementing guardrails that keep AI agents’ independent work error-free while helping human workers stay engaged and motivated.
We talked with Bersin recently about the expanding definition of “manager,” the need to keep the personification of AI in check, and the essential skill of “policy fluency.”
The rise of AI agents raises an obvious question: Do the same skills that make a great people manager translate to managing AI?
Julia Bersin: It’s important to distinguish between the two. AI agents are tools—they’re technology. Managing an agent looks very different from managing a human. What we’re seeing is that anyone can manage an agent. It’s not just people managers. Employees who previously did the work an agent now does are suddenly having to oversee it. That requires AI literacy: understanding what the agent is good at, where it fails, and how to design workflows around it. Data governance, quality control, human oversight—those all come into play.
When it comes to managing humans in an AI-driven world, that’s where human-centered leadership skills really kick in. We recently published research on what we call “supermanagers”—managers who are using AI to become better people leaders while helping drive AI transformation on their teams. Those skills look a lot more like trust-building, communication, listening, and fostering psychological safety.
Even so, some companies are not treating AI agents as “tools” but are naming them and putting them on the org chart. Is that just a gimmick or a harbinger of real transformation?
The personification question is interesting. AI in its conversational form can sound a lot like a human, but I think it’s important not to blur that line. When companies put AI agents with names on the org chart, they need to clearly label them as AI, so nobody confuses them for an actual person. The real goal is clarity around roles and accountability: What is the AI agent responsible for, and who is managing it? If the department leader oversees the agent, that reporting line is shown. If it’s someone else on the team, same thing. Roles and responsibilities are the most important parts.
When it comes to managing humans in an AI-driven world, that’s where human-centered leadership skills really kick in.
Julia Bersin
Director of Research, The Josh Bersin Company
If, as you note, AI literacy is a key skill for managers, how does that change the specific tech expertise companies are looking for when they hire managers today?
There’s a baseline of knowledge that matters: What is the AI agent actually doing? Beyond that, it’s less about technical depth and more about policy fluency—data integrity, where the AI is pulling its information from, and how to manage ethical concerns. And then there’s a whole set of skills around translating AI-related communications down to the team level: What does this mean for a team’s function? How will managers make sure their teams feel confident using AI within the bounds of what’s allowed?
There’s conflicting research on whether AI agents are creating more or less friction on teams. What are you actually seeing?
It’s genuinely a mix. PwC’s 2025 survey found that workers were twice as likely to be excited about AI’s impact as they were worried. But Pew Research Center found last year that about half of workers are anxious about AI’s future impact on their jobs. The framing matters—excitement about potential is very different from anxiety about job security. There are also big industry differences: workers in technology and financial services are more likely to embrace AI, while workers in deskless industries like food and beverage are much less so. And the headlines around layoffs aren’t helping. That’s going to create apprehension regardless of the data.
What does it actually take culturally to get the best out of hybrid human-AI teams?
One of the top qualities we’ve identified in “supermanagers” is their ability to create a culture of experimentation. That means structures that let employees experiment safely with AI, and transparency: celebrating team members for innovation and creating open dialogue. A lot of this comes back to trust, which is the foundation of employees feeling safe with AI and using it safely. What does that look like in practice? Companies running hackathons, competitions, and community spaces where people share AI experiments. CarGurus has run what they call “AI proto-thons.” Starting team meetings by having someone walk through one AI experiment they ran builds an experimentation muscle over time.
It sounds like managers need to strike a balance between leading on governance of AI agents and empowering human workers to innovate new ways of working with them. Is that accurate?
Yes. The best use cases for innovation are coming from the front lines, not from leadership. AI is a democratizing technology—anyone can reinvent how work gets done. Driving transformation purely top-down isn’t going to capture its full potential. We call this worker empowerment. With the right guardrails, secure tools, and education in place, there’s a genuine opportunity to celebrate and amplify employee innovation in collaborating with AI agents. Not every experiment is going to land, but exercising that muscle is where the breakthroughs happen.
