Building trust in every interaction
Introducing the Freddy AI Trust framework
At Freshworks, we believe that trust isn’t just an aspiration—it’s a requirement.
As businesses around the world embrace the potential of AI, it’s easy to focus on speed, scale, and automation. But real progress can’t come at the cost of confidence. Whether it’s a support agent relying on Freddy AI Copilot for recommendations or a customer engaging with a virtual agent for help, each interaction must be secure, explainable, and respectful of organizational (and human!) boundaries.
That’s why we’ve developed the Freddy AI Trust framework—our commitment to earning and sustaining trust in every one of our AI-powered experiences. It’s more than a checklist or compliance statement. Our trust framework is a system of safeguards, design principles, and governance practices that underpin how Freddy AI is built, deployed, and used across our platform.
Why trust matters now
As AI adoption accelerates, trust has become the deciding factor in whether businesses move from pilots to production. Customers want to know: What happens to their data? Can they control how AI behaves? Will the outcomes be safe, fair, and accurate?
These are real concerns, and we’ve built the Freddy AI Trust framework to address them head-on. Our goal is simple: to ensure every AI interaction is not just powerful, but trustworthy.
The Freddy AI Trust framework: Five pillars of protection
The Freddy AI Trust framework is built on five key pillars, each designed to address specific concerns about enterprise AI adoption. Together, they form the foundation for how Freddy AI operates across our customer and employee experience products.
1. Safety
We’ve engineered Freddy AI to reduce the common risks that come with fast-moving AI advancements. Multiple content validation layers prevent harmful or unintended outputs: guardrails detect and block toxic or unsafe content, and alignment checks ensure Freddy AI acts only within its intended uses. This approach helps maintain alignment with service-specific objectives, safeguard against misuse, and reduce the risk of AI going off track.
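To make the layering concrete, here is a minimal sketch of how stacked validation checks can gate a request before it reaches a model. The function names, blocklist, and intent labels are illustrative assumptions, not Freddy AI internals; production systems use trained classifiers rather than keyword lists.

```python
# Hypothetical layered content-validation pipeline (illustrative only).
UNSAFE_TERMS = {"exploit", "self-harm"}  # placeholder blocklist, not a real policy
ALLOWED_INTENTS = {"ticket_summary", "reply_draft", "kb_answer"}  # assumed intents

def toxicity_check(text: str) -> bool:
    """Layer 1: reject text containing unsafe terms."""
    lowered = text.lower()
    return not any(term in lowered for term in UNSAFE_TERMS)

def alignment_check(intent: str) -> bool:
    """Layer 2: allow only requests that match an intended use."""
    return intent in ALLOWED_INTENTS

def validate(text: str, intent: str) -> str:
    """Run each layer in order; the first failing layer blocks the request."""
    if not alignment_check(intent):
        return "blocked: out-of-scope request"
    if not toxicity_check(text):
        return "blocked: unsafe content"
    return "allowed"

print(validate("Summarize this ticket", "ticket_summary"))  # prints "allowed"
```

The point of the layering is that each check has a single responsibility, so a failure in one layer blocks the request without depending on the others.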
2. Privacy
Protecting customer data is fundamental to Freddy AI. Our privacy approach revolves around de-identification techniques to mask and safeguard sensitive information within our system. Certain Freddy AI features are also powered by models that are fine-tuned using a customer’s own service data. When these features are used, the data remains isolated and powers models built exclusively for delivering personalized, account-specific intelligence to that customer.
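As a rough illustration of de-identification, the sketch below masks email addresses and phone-like numbers before text would reach a downstream model. The regex patterns are simplified examples, not Freshworks' actual masking rules, which would cover far more entity types.

```python
import re

# Illustrative de-identification pass (simplified patterns, not production rules).
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s-]{7,}\d")

def deidentify(text: str) -> str:
    """Replace sensitive spans with placeholder tokens."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

masked = deidentify("Reach me at jane.doe@example.com or +1 555-010-0042.")
print(masked)  # prints "Reach me at [EMAIL] or [PHONE]."
```

A real pipeline would typically pair masking with a reversible token vault so agents can re-identify values when they have permission, but the one-way masking above is the core idea.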
We also support regional data residency, allowing customers to store data in specific geographic locations based on their needs. These privacy controls help ensure compliance with global data protection laws while giving organizations clear oversight of how their data is handled.
3. Security
AI systems with conversational interfaces like Freddy AI face unique security risks when it comes to prompt injections, jailbreak attempts, and data exfiltration. To combat these, Freddy AI is built with multilayered defenses that actively scan for and block malicious inputs using Azure AI Content Safety and Prompt Shields. These protections help detect and reduce the risk of manipulative prompts reaching the underlying LLMs. Combined with enterprise-grade infrastructure, access controls, and encryption, Freddy AI aims to support secure and trustworthy AI interactions.
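The sketch below shows where input screening sits in the request path. Services like Azure AI Content Safety's Prompt Shields use trained classifiers for this; the keyword heuristic here is only a stand-in to show the control flow, and every name in it is an assumption.

```python
# Simplified illustration of prompt-injection screening (heuristic stand-in
# for a real classifier such as Prompt Shields; phrases are examples only).
INJECTION_HINTS = (
    "ignore previous instructions",
    "reveal your system prompt",
    "disregard your rules",
)

def looks_safe(user_text: str) -> bool:
    """Return True if the input shows no known injection pattern."""
    lowered = user_text.lower()
    return not any(hint in lowered for hint in INJECTION_HINTS)

def handle(user_text: str) -> str:
    """Screen the input before it ever reaches the model."""
    if not looks_safe(user_text):
        return "Request blocked by input screening."
    return "forwarded to model"  # placeholder for the actual LLM call
```

The essential design choice is that screening happens before the model call, so a manipulative prompt is rejected without the LLM ever seeing it.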
4. Controls
Freddy AI is designed to help customers feel in control of their AI experiences. Freddy AI combines the enterprise-grade power of Azure OpenAI for generative intelligence with the precision of customer-specific discriminative models trained exclusively on your data. This personalization helps improve accuracy, relevance, and resolution speed, enhancing the experience for your teams and customers alike.
Certain Freddy AI features draw on account-exclusive models to deliver stronger AI personalization and performance. While customers can opt out if needed, doing so limits the full value Freddy AI is designed to deliver.
5. Traceability
Transparency is essential for building trust. Freddy AI includes traceability features that show how certain responses were generated, helping users and administrators validate the accuracy of AI-generated answers. Our citation system grounds answers in trusted knowledge sources, providing clarity on the origin of information shared with users. This visibility helps users build trust in Freddy AI tools and helps organizations build confidence in AI-assisted service delivery.
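One way to picture citation grounding is an answer object that carries references to the knowledge articles it drew from. The data shapes and the toy word-overlap retrieval below are hypothetical, chosen only to show an answer traveling together with its sources.

```python
from dataclasses import dataclass, field

# Hypothetical shape of a cited answer; field names are illustrative.
@dataclass
class Citation:
    article_id: str
    title: str

@dataclass
class CitedAnswer:
    text: str
    citations: list = field(default_factory=list)

def answer_with_citations(question: str, articles: dict) -> CitedAnswer:
    """Toy retrieval: cite any article whose title shares a word with the question."""
    words = set(question.lower().split())
    hits = [Citation(aid, title) for aid, title in articles.items()
            if words & set(title.lower().split())]
    return CitedAnswer(text=f"Answer drawn from {len(hits)} source(s).",
                       citations=hits)

kb = {"KB-101": "Password reset guide"}
ans = answer_with_citations("How do I reset my password", kb)
print([c.article_id for c in ans.citations])  # prints "['KB-101']"
```

Because the citations ride along with the answer, a user or administrator can trace any statement back to the article it came from.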
While no safeguard is foolproof, we’ve invested deeply in building and applying the right controls to reduce risk, support responsible AI use, and uphold Freshworks service integrity. Because AI systems are constantly evolving, our Trust approach is designed to continuously adapt and improve as we continue Freddy AI innovation.
Secure AI, successful AI
Trust isn’t something you add at the end—it has to be woven into every part of the experience. That’s why the Freddy AI Trust framework touches everything from architecture to user interface design.
For example, when an agent uses Freddy AI Copilot to draft a response, they see the source knowledge used to generate that suggestion. When an AI agent responds to a customer, the AI stays within predefined flows—guided by business logic, not freeform improvisation. And when a support leader wants to understand how AI is being used across their team, they can access detailed reports and configuration options directly from the admin console.
It’s trust in action—not just in principle.
We’ve built the Freddy AI Trust framework not just to meet today’s expectations, but to grow with our customers as AI continues to evolve. It reflects years of customer conversations, internal innovation, and an unwavering focus on delivering safe, helpful, and enterprise-ready AI.