AI for ITSM: What to Consider First
AI comes in two general forms: large language models (LLMs), which power chatbots trained on public and private information sources to answer questions in plain English, and agentic AI, in which AI agents ask multi-step probing questions, carry out tasks, and integrate with third-party tools to automate workflows.
When it comes to IT service management (ITSM), combining the two forms of AI is a powerful way to reduce the human effort required to answer common end-user questions and complete repetitive, time-consuming tasks. However, most ITSM practitioners are still in the early stages of implementing AI and are asking themselves important questions like these:
“Where should we start?”
“How concerned should we be about security and privacy?”
“Where can we achieve the highest return on resource investment?”
"How do we build the organizational capability to support AI initiatives?"
In this article, we discuss practical considerations for assessing your ITSM readiness and implementing AI where it will have the most significant impact, starting with the basics.
Summary of key considerations before adopting AI for ITSM
| Consideration | Purpose |
|---|---|
| Diagnose before you deploy | Run a needs-first assessment to ensure that every AI feature addresses a real operational gap. |
| Keep AI human-centered | Use AI to enhance human capability and keep service experiences personal and efficient. |
| Adopt AI in measured stages | Scale AI gradually to minimize disruption and prove value at each adoption stage. |
| Map AI to the incident lifecycle | Deploy AI across different stages of the incident lifecycle to create compound value and measurable ROI. |
Understanding AI’s role in service management
Over the years, ITSM automation has focused on making current workflows faster rather than questioning whether those workflows should exist at all. This often happens because IT leaders gravitate toward solutions that promise efficiency gains without disrupting established procedures.
Look closely at your ITSM practice and you'll find many opportunities to eliminate manual reviews that create bottlenecks. The biggest wins are in automating the decisions that currently require analyst expertise. For example, AI can analyze service ticket content, user impact, and system dependencies to automatically categorize incidents and assign priority levels that would typically require the judgment of an experienced analyst.
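To make this concrete, here is a minimal Python sketch of LLM-assisted triage. The category list, prompt structure, and `call_llm` helper are all illustrative assumptions (a stub stands in for whatever model endpoint your platform exposes); the point is the pattern: constrain the model to known values and validate its output before trusting it.

```python
# A minimal sketch of AI-assisted ticket triage, under the assumptions above.
import json

CATEGORIES = ["hardware", "software", "network", "application access"]
PRIORITIES = ["P1", "P2", "P3", "P4"]

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call; returns canned JSON for this demo."""
    return json.dumps({"category": "application access", "priority": "P2",
                       "reason": "User-facing CRM outage affecting sales team"})

def triage_ticket(description: str, user_role: str) -> dict:
    prompt = (
        "Classify this IT ticket.\n"
        f"Allowed categories: {CATEGORIES}\nAllowed priorities: {PRIORITIES}\n"
        f"Reporter role: {user_role}\nTicket: {description}\n"
        "Respond as JSON with keys category, priority, reason."
    )
    result = json.loads(call_llm(prompt))
    # Validate the model's output against the allowed values before trusting it.
    if result.get("category") not in CATEGORIES or result.get("priority") not in PRIORITIES:
        result = {"category": "unclassified", "priority": "P3",
                  "reason": "Model output failed validation; route to a human."}
    return result

print(triage_ticket("I can't access the customer database", user_role="account executive"))
```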
You will likely find dozens of similar opportunities within your other ITIL practices. The illustration below is based on the Freshservice 2025 ITSM Benchmark report, which suggests that with AI, leading enterprises can achieve first contact resolution (FCR) for over 80% of issues, with assignment happening in 8 hours instead of days and complete resolution cycles dropping to 14 hours rather than weeks.
How AI eliminates manual touchpoints and transforms service delivery
Virtual agents can handle the majority of routine requests by engaging users in diagnostic conversations rather than forcing them to navigate knowledge bases or submit vague email descriptions. When users report issues, conversational AI collects complete information up front, asks relevant follow-up questions, and either resolves problems immediately or creates properly formatted tickets with full context.
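A minimal sketch of that intake pattern follows, assuming a hypothetical set of required fields: the virtual agent keeps asking follow-up questions until every field is filled, then emits a structured ticket instead of a vague email.

```python
# A sketch of conversational intake; field names and questions are illustrative.
REQUIRED_FIELDS = {
    "symptom": "What exactly isn't working?",
    "scope": "Is it just you, or are colleagues affected too?",
    "started": "When did the problem start?",
    "error_text": "Do you see an error message? If so, what does it say?",
}

def intake_conversation(answers: dict) -> dict:
    """Return the next question to ask, or a fully formed ticket."""
    for field, question in REQUIRED_FIELDS.items():
        if not answers.get(field):
            return {"next_question": question}
    return {"ticket": {**answers, "status": "new", "source": "virtual agent"}}

# Simulated exchange: each turn adds one answer until the ticket is complete.
collected = {}
for field in REQUIRED_FIELDS:
    step = intake_conversation(collected)
    print(step["next_question"])
    collected[field] = f"<user answer for {field}>"
print(intake_conversation(collected)["ticket"])
```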
The "Zero Manual Touch" shown in the illustration refers specifically to initial routing and categorization processes—not complete human elimination. It is worth noting that human agents remain essential for complex problem-solving, escalations, and situations particularly requiring empathy and judgment. AI, in the meantime, can handle the repetitive classification and routing tasks.
The key is knowing where to start. Certain workflows are perfect AI candidates, while others need significant restructuring before automation makes sense. You need to be strategic about which processes to automate first, and there are several factors to consider when selecting your targets.
Diagnose before you deploy
Real transformation is only possible when you treat AI as an enabler and focus on your actual service delivery problems first. Platform choice becomes relevant only after you understand what you're really trying to fix.
Start by targeting the ITIL processes where simple rule-based automation has failed because the work requires contextual understanding or judgment calls that only experienced staff can make, even though those staff can't scale their expertise across every ticket or decision. Also look at standardized processes that have persisted despite multiple automation attempts because they involve complexity, variability, or human judgment that traditional tools can't handle effectively.
As shown in the illustration below, consider mapping all the pain points of your ITSM practices.
A typical process-wise mapping that can benefit from AI-based automation
Once you identify where processes consistently break down due to missing context or complex decisions, evaluate which AI capabilities can address the underlying causes. Only after mapping these specific pain points should you evaluate AI tools on their ability to solve them.
Discard vanity metrics—focus on outcomes
Another point to remember is that most AI tools are built for generic ITSM challenges that may not be the real causes of concern for your organization. If the vendor you’re considering doesn't spend the first 30 minutes understanding your operational headaches, they're likely selling you a prebuilt solution designed for a different organization's specific challenges, not yours.
Don't fall for vanity metrics like “tickets processed by AI” or “chatbot interaction rates.” These metrics don't necessarily mean that AI solved, understood, or improved anything, only that it was involved in the process. Instead, push your vendors to show how their tools actually improve KPIs like mean-time-to-resolution (MTTR) reduction, escalation reduction, first-touch success rates, ticket deflection rates, and end-user satisfaction scores.
For example, consider whether an AI-driven asset discovery tool justifies its cost if it's simply scanning for the same device types your current tools already find. In that case, you're paying more just to consolidate the same inventory list. More valuable would be an AI discovery tool that predicts which assets will hit capacity or fail based on usage patterns.
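As a rough illustration of what outcome-focused measurement looks like, the sketch below computes MTTR, FCR, and deflection rates from raw ticket records. The field names are assumptions about a generic ticket export, not any particular vendor's schema.

```python
# Outcome KPIs computed from raw ticket records (illustrative schema).
from datetime import datetime

tickets = [
    {"opened": datetime(2025, 1, 6, 9), "resolved": datetime(2025, 1, 6, 13),
     "touches": 1, "deflected": False},
    {"opened": datetime(2025, 1, 6, 10), "resolved": datetime(2025, 1, 7, 10),
     "touches": 3, "deflected": False},
    # Resolved by self-service before ever reaching an agent:
    {"opened": datetime(2025, 1, 6, 11), "resolved": datetime(2025, 1, 6, 11, 5),
     "touches": 0, "deflected": True},
]

resolved = [t for t in tickets if t["resolved"]]
# Mean time to resolution, in hours, across all resolved tickets.
mttr_hours = sum((t["resolved"] - t["opened"]).total_seconds() for t in resolved) / len(resolved) / 3600
# First-contact resolution: solved with exactly one agent touch.
fcr_rate = sum(t["touches"] == 1 and not t["deflected"] for t in tickets) / len(tickets)
# Deflection: resolved via self-service with no agent involvement.
deflection_rate = sum(t["deflected"] for t in tickets) / len(tickets)

print(f"MTTR: {mttr_hours:.1f} h | FCR: {fcr_rate:.0%} | deflection: {deflection_rate:.0%}")
```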
For a comprehensive framework on measuring AI ROI and identifying the right KPIs for your organization, see our CIO's Guide to Maximizing ROI: AI Powered IT Operations.
Most importantly, remember that the AI implementation should focus on problems where the expertise of humans is valuable but their capacity is limited. Scope out scenarios like:
Senior technicians who can diagnose complex technical issues in minutes but only handle 5-10 tickets per day
Expert technicians who solve complex escalations but rarely document their troubleshooting process due to a lack of time or interest
Change managers who intuitively know which platform updates will cause cascading failures but can't review every change request
Problem managers who spot the early warning signs of major outages but only when they manually correlate incident patterns
Do you see the opportunity here? Traditional manual triaging creates a capacity bottleneck where only a handful of tickets receive deep analytical attention daily. With AI-enabled automation, you can extend that same analytical depth and rigor across your entire service delivery pipeline to hundreds of tickets simultaneously.
Keep AI human-centered
Don't mistake scaling expert analysis for eliminating experts altogether. The point is worth emphasizing because it's a common implementation mistake where enterprises try to automate away expertise rather than amplifying it.
The principle is straightforward, but it isn't always implemented well. Some enterprises choose AI-for-ITSM tools that promise maximum ticket automation and fewer service desk staff. In such cases, agent experience is treated as an afterthought, tackled through training programs only after the AI platform is selected.
Here is what to do instead.
Combine AI efficiency with human empathy
Service desk quality depends on human judgment for complex user issues. AI can spot emotional cues and make helpful suggestions, but it can't actually feel empathy or have a genuine human conversation. The support person will still need to bring real understanding and emotional intelligence to solve the customer's problem in a way that feels personal and caring.
Consider designing AI-driven automation that supports rather than threatens service desk expertise. Here, AI sentiment analysis can guide service desk interactions without replacing human empathy, balancing efficiency with a personal touch.
To put this in perspective, consider a solution where AI can detect emotions in customer messages and suggest helpful actions, but the human support person still makes the final decision about how to respond with genuine care.
The illustration below highlights how this works in practice.
AI/human collaboration for intelligence, empathy, and judgment
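A minimal sketch of that division of labor follows, using a toy keyword lexicon in place of a real sentiment model: the AI flags emotional cues and drafts a suggestion, but nothing is sent until a human agent decides how to respond.

```python
# Sentiment-guided assistance; the lexicon is a toy stand-in for a real model.
FRUSTRATION_CUES = ("third time", "still broken", "unacceptable", "urgent", "again")

def analyze_message(message: str) -> dict:
    lowered = message.lower()
    frustrated = any(cue in lowered for cue in FRUSTRATION_CUES)
    return {
        "sentiment": "frustrated" if frustrated else "neutral",
        "suggested_action": ("acknowledge the repeated issue and offer escalation"
                             if frustrated else "proceed with standard troubleshooting"),
    }

def agent_workflow(message: str) -> None:
    hint = analyze_message(message)
    print(f"AI hint: sentiment={hint['sentiment']}, suggest: {hint['suggested_action']}")
    # The final wording stays with the human agent; the AI never auto-sends.
    print("Awaiting agent-approved response...")

agent_workflow("This is the third time my VPN has dropped today. Unacceptable.")
```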
Design interfaces for different user skill levels
Your AI platform selection should account for the fact that the employees submitting IT tickets may not be technical experts. While they may know that there is an issue with a system, they can't independently diagnose the root cause or select the proper ITIL category when raising a ticket. Meanwhile, there is always a mix of IT service users, ranging from executives who require simple guided experiences to power users like developers who want detailed diagnostic tools.
Design AI chatbots and service portals that speak business language and can also understand casual slang. If users can't choose from among the “hardware,” “software,” and “network” categories, give them the option to describe problems naturally and let the AI analyze the description, mapping keywords, phrases, and context to predefined ITIL categories. For example, when someone says, “I can't access the customer database,” the AI would categorize it as an “application access issue” in the CRM/Database subcategory, assign it to the appropriate team, and set priority based on the user's role.
Interactions should also adapt based on user roles and their evident technical knowledge. Evaluate whether the AI engine is capable enough to learn and build a profile over time, and adapt to each user's actual technical knowledge rather than making blanket assumptions based on their role. With this capability, the AI can provide guided troubleshooting with screenshots and simple language for basic users. For technical users, the AI can offer advanced diagnostic tools, direct access to system logs, and the ability to provide detailed environment information.
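Here is one way such profiling might work, sketched with an illustrative vocabulary-based signal and threshold (a real engine would use far richer signals): the response style adapts to each user's demonstrated technical depth, not their job title.

```python
# A rolling per-user profile decides between guided and advanced responses.
from collections import defaultdict

TECH_SIGNALS = ("log", "stack trace", "dns", "registry", "ssh", "kernel")

profiles = defaultdict(lambda: {"messages": 0, "technical_hits": 0})

def update_profile(user: str, message: str) -> None:
    profiles[user]["messages"] += 1
    if any(s in message.lower() for s in TECH_SIGNALS):
        profiles[user]["technical_hits"] += 1

def response_style(user: str) -> str:
    p = profiles[user]
    ratio = p["technical_hits"] / p["messages"] if p["messages"] else 0
    # Users who consistently use technical vocabulary get diagnostic detail;
    # everyone else gets guided, screenshot-level instructions.
    return "advanced diagnostics" if ratio > 0.5 else "guided walkthrough"

update_profile("dev_anna", "The ssh tunnel drops and the kernel log shows timeouts")
update_profile("exec_raj", "My email looks weird today")
print(response_style("dev_anna"))  # advanced diagnostics
print(response_style("exec_raj"))  # guided walkthrough
```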
Adopt AI in measured stages
Integrating AI automation with legacy ITIL tools creates failure modes that are nearly universal in enterprise environments. Legacy tools weren't built for AI integration, so API timeouts, data format mismatches, and authentication issues are common, and the more systems you connect, the more failure points you create. Model hallucination is another common issue: general-purpose language models can convincingly provide incorrect information. You may also encounter resistance from staff and users who can't understand how the AI makes decisions or predict when it might fail them.
The urge to automate everything at once is understandable, considering that most AI solutions come preloaded with every available feature, all of which work perfectly in controlled demos until real users with unpredictable requests and edge cases start using the system. A better option is to activate AI capabilities gradually, prove that each one works, and implement fallback procedures before adding the next level of complexity.
Start with simple, transparent AI functions like ticket categorization and basic auto-responses where outcomes are easy to validate. Once teams trust these foundational capabilities, add contextual AI that provides recommendations based on historical data and similar incidents. Only after establishing confidence in AI judgment should you consider implementing predictive capabilities that your team will actually trust and use.
In practice, a typical progression looks like this (a minimal stage-gating sketch follows the list):
Level 1 - Awareness Stage (months 1-3): Evaluate AI's potential for your specific ITSM challenges, focusing on cost, complexity, and security implications before any implementation. Use this time to assess your data quality and process readiness.
Level 2 - Active Stage (months 3-9): AI begins to assist in everyday operations while enhancing productivity. Implement ticket categorization and basic auto-responses where outcomes are easy to validate. AI reads incoming tickets and suggests categories like "password reset" or "software installation," and auto-responses acknowledge receipt and provide basic status updates. These functions are transparent and easy for agents to verify or correct.
Level 3 - Operational Stage (months 6-15): AI is embedded into optimized processes to drive service efficiency. The system suggests knowledge base articles that solved similar previous problems, recommends appropriate assignment groups based on ticket content, and identifies related incidents that might indicate larger issues. Agents can see the reasoning behind recommendations and choose whether to follow them. Most importantly, in this phase, the chatbot retains a long-term memory of past interactions with the same user and gathers environmental information from closed tickets.
Level 4 - Systemic Stage (months 12-24): Scale AI across significant workloads. AI begins managing substantial portions of routine tasks, assisting service agents extensively, and automating complex workflows. At this stage, AI becomes critical to daily operations, enabling major efficiency gains and significantly reducing agent workload.
Level 5 - Transformational Stage (18+ months): AI is integral to business decision-making. Implement proactive capabilities like predicting system failures based on monitoring data, identifying users likely to submit tickets before problems occur, and automatically adjusting resource allocation based on anticipated demand patterns. AI becomes integral to business decision-making and organizational strategy.
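The sketch below shows one way to enforce this staged gating, with each capability tied to a maturity level. The capability names and level assignments mirror the progression above but are illustrative, not a product configuration format.

```python
# Stage-gating AI capabilities: nothing above the current maturity level
# can be switched on, so each stage proves out before the next is added.
CAPABILITY_LEVELS = {
    "ticket_categorization": 2,
    "auto_acknowledgement": 2,
    "kb_article_suggestions": 3,
    "assignment_recommendations": 3,
    "autonomous_ticket_deflection": 4,
    "failure_prediction": 5,
}

CURRENT_LEVEL = 3  # promoted only after the previous stage proves out

def is_enabled(capability: str) -> bool:
    required = CAPABILITY_LEVELS.get(capability)
    return required is not None and required <= CURRENT_LEVEL

for cap in CAPABILITY_LEVELS:
    print(f"{cap}: {'on' if is_enabled(cap) else 'off (gated)'}")
```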
AI maturity progression can also follow different approaches. Rather than following a linear path from basic to advanced, you may prioritize high-impact use cases first, regardless of complexity. For instance, if your organization is facing severe agent burnout, you can jump directly to implementing AI agents for ticket deflection (typically a Level 4 capability) while still working on basic categorization. The key is to match your AI roadmap to your most pressing business problems rather than rigidly following a predetermined sequence.
The illustration below shows process-level layering of your AI deployment.
Layered AI for ITSM approaches
That said, if your AI solution leverages a general-purpose language model, hallucination is an inherent characteristic that you cannot eliminate, only mitigate. Gradual implementation helps you catch hallucinations faster through human oversight and validation loops, but it does not solve the core technical limitations of the underlying platform.
To overcome this challenge, start with AI suggestions that require human approval, then gradually increase automation for well-defined, low-risk scenarios where errors have minimal impact and can be easily detected. A well-implemented ITSM infrastructure already includes validation checkpoints. Cross-reference AI suggestions against your knowledge base of proven solutions, use CMDB data to verify AI recommendations about system relationships, and leverage change advisory board processes for any AI-suggested modifications. When evaluating AI platforms, prioritize those that can integrate with these existing validation workflows.
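A minimal sketch of such a guardrail follows, with toy stand-ins for the knowledge base lookup and risk classification: a suggestion is auto-applied only when it is both grounded in a known-good KB article and low-risk; everything else routes to a human.

```python
# A hallucination guardrail; the KB set and risk categories are illustrative
# stand-ins for real knowledge base and CMDB validation checks.
KNOWN_SOLUTIONS = {"KB-0042", "KB-0107", "KB-0311"}
LOW_RISK_CATEGORIES = {"password reset", "software installation"}

def route_suggestion(suggestion: dict) -> str:
    grounded = suggestion["kb_article"] in KNOWN_SOLUTIONS
    low_risk = suggestion["category"] in LOW_RISK_CATEGORIES
    if grounded and low_risk:
        return "auto-apply"
    if grounded:
        return "human approval required (grounded but not low-risk)"
    return "human approval required (no matching KB article: possible hallucination)"

print(route_suggestion({"kb_article": "KB-0042", "category": "password reset"}))
print(route_suggestion({"kb_article": "KB-9999", "category": "network"}))
```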
Map AI to the incident lifecycle
Even as you take a layered automation approach, consider how AI can change the workflow across your incident lifecycle. Each stage of the lifecycle generates data that can improve AI performance in the other stages. Instead of just processing incidents faster, AI can prevent incidents from entering the workflow, deflect appropriate issues to self-service, and reduce MTTR for complex problems.
Map your AI strategy across these critical lifecycle stages to maximize this data value. However, before deploying this integrated approach, assess whether your incident management practice is ready for a lifecycle-wide AI implementation.
Unlike other audits that are supported by standardized tools, ITSM AI assessments typically require custom evaluation because each organization's incident lifecycle and data maturity vary significantly. Essentially, you audit your current incident management processes against AI implementation requirements, evaluating each stage to identify readiness gaps and opportunities.
The following illustration shows a comprehensive framework for evaluating an organization's readiness to implement AI across its ITSM incident lifecycle. It breaks down the assessment into four critical stages, with each stage containing three specific evaluation criteria that organizations need to assess before deploying AI capabilities.
Evaluate each lifecycle stage of incident management for AI implementation readiness
Note that the mock assessment scores above demonstrate how an organization might have varying levels of readiness across these areas, with some criteria showing stronger foundations than others.
As general guidance, score each criterion by taking concrete measurements and converting them to percentages. For instance, if 75 out of 100 sampled tickets were categorized correctly, that criterion scores 75%. If only 40% of your major incidents have documented root causes, that's your score for that criterion. Focus on concrete metrics like accuracy rates and utilization percentages rather than theoretical capabilities.
If you're scoring below 70% (the threshold can vary by use case) in any area, expect AI implementation to struggle until you address those foundational gaps first. For example, if data quality scores low in the detection stage, focus on standardizing monitoring data collection before deploying anomaly detection AI. If multiple areas score poorly, prioritize foundational elements like data standardization and process consistency first, as these enable improvements across all stages.
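The scoring logic is simple enough to sketch. The criteria and counts below are invented for illustration; the pattern is converting raw counts to percentages and flagging anything under your threshold as a foundational gap.

```python
# Readiness scoring: raw counts become percentages, and anything under the
# (adjustable) threshold is flagged as a gap to close before AI rollout.
THRESHOLD = 0.70

samples = {
    "detection: monitoring data standardized": (55, 100),
    "triage: tickets categorized correctly": (75, 100),
    "resolution: KB articles current": (82, 120),
    "review: major incidents with documented root cause": (40, 100),
}

for criterion, (passed, total) in samples.items():
    score = passed / total
    status = "ready" if score >= THRESHOLD else "GAP: fix before AI rollout"
    print(f"{criterion}: {score:.0%} -> {status}")
```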
Addressing the gaps is a dedicated effort because it requires practitioners to understand the interconnections and make systematic changes to workflows, data structures, and team responsibilities. Also plan for collaboration among IT operations, service management, and business stakeholders to agree on changes that aren't purely technical, such as standardizing incident categorization schemas or redesigning escalation procedures. These foundational elements will determine whether your AI-driven ITSM practice truly delivers exponential value.
Leveraging AI to improve each incident lifecycle stage
Factor in the compound value, where each improvement multiplies the others. In other words:
Fewer incidents + smarter routing + faster resolution = exponential ROI.
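To see why the effect is multiplicative rather than additive, consider this worked example (the percentages are illustrative assumptions, not benchmarks): each lever acts on the workload the previous one left behind.

```python
# Worked example of compounding improvements (all figures illustrative).
baseline_incident_hours = 1000          # monthly analyst hours on incidents
prevention = 0.20                       # 20% of incidents prevented outright
deflection = 0.30                       # 30% of the rest resolved by self-service
faster_resolution = 0.40                # remaining tickets resolved 40% faster

remaining = baseline_incident_hours * (1 - prevention) * (1 - deflection) * (1 - faster_resolution)
print(f"Residual effort: {remaining:.0f} h ({1 - remaining / baseline_incident_hours:.0%} total reduction)")
# 1000 * 0.8 * 0.7 * 0.6 = 336 h, a 66% reduction: beyond any single lever alone.
```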
However, this compound effect will only materialize when your foundational processes are solid. This multiplicative effect also explains why the assessment framework discussed above is so critical, as weak performance in any area can break the entire value chain.
Conclusion
Within the next few years, organizations still running traditional ITSM will be as obsolete as the companies that refused to adopt email in the 1990s. While you debate whether to implement AI chatbots, your competitors are eliminating incidents before they occur, resolving problems before users notice, and delivering services that adapt in real time. The window for gradual adoption is closing. Are you ready to start building an AI-native ITSM practice?
Platforms like Freshservice's Freddy AI are already demonstrating how AI can handle increasingly sophisticated tasks that previously required human judgment. Freddy AI can analyze ticket content to automatically suggest solutions from past resolutions and predict escalation probability before a human agent even reviews the ticket. The AI engine can also perform root cause analysis by correlating incidents across multiple systems, helping you identify underlying problems that human analysts might miss.
Book a demo to learn more.