Building with AI

The philosophy and long-term bets behind how Freshworks builds

Srinivasan Raghavan
Chief Product Officer at Freshworks

May 07, 2026 · 3 min read

It started with one of my product leaders, Jason Aloia, who built a planning app in two days. PRD drafted with AI, design mockups the same day, working prototype before the week was out. It completely reframed our AOP planning conversations, not because it was polished, but because it was real enough to react to.

I asked if I could share the story. Not as a curiosity, but because it represents exactly how I think product development should work. And increasingly, it's how we're building at Freshworks.

Bolted-on vs. built-in: The distinction that matters

Most companies talk about building with AI. In practice, most are doing something different: bolting AI onto existing products and calling it transformation. Bolt-on means your product has AI features. Built-in means AI has changed how the product gets built: the process, the roles, the tools, and the pace. Those are very different things. We're doing the second one at Freshworks.


How AI shows up across the full build lifecycle

The shift isn't in one function. It's everywhere.

PMs use AI to generate PRDs in minutes, not weeks. Specific prompts produce epics and user stories that flow directly into the development cycle without a handoff meeting. UX teams go from that PRD to design mockups the same day using Figma Make. What used to take two weeks of back-and-forth now happens in a single afternoon.


UX Research uses AI to synthesize customer feedback at a scale no manual process can match, surfacing patterns across hundreds of conversations that would previously take weeks to analyze. The insight loop between customers and product decisions has compressed dramatically.

LLMs like Claude and tools like Cursor are part of the workflow daily for code generation, component development, and database schema design. Deployment pipelines connect directly from code to production via GitHub, with AI assisting in testing and CI/CD.

And content—documentation, release notes, in-product copy—is no longer the last thing that gets done before a launch. AI keeps it in sync with what's built throughout the cycle, not bolted on at the end.

What this means for us as a team

The shift is immediately evident in how we work.

The first visible change is speed. PRD to working prototype in days instead of weeks. A concept that used to take a full sprint to validate can now be stress-tested before the meeting ends. But speed alone isn't the point.

The bigger change is in collaboration quality. When you can spin up something real instead of presenting a deck, the conversation changes entirely. PMM, UX, engineering, and product stop debating assumptions and start reacting to evidence. We move faster, but more importantly, we move with more clarity. We focus on the problems that actually matter rather than the ones that look important on a slide.

Research has changed, too. Our UX Research team can now synthesize feedback across hundreds of customer conversations in a fraction of the time. That means larger signals inform product decisions, not just the loudest voices in the last three customer calls.

And documentation, historically the thing everyone puts off, now stays live throughout the build cycle. AI keeps it in sync as the product evolves, so when we hand off to engineering or go-to-market, there's no scramble to catch up.

The cumulative effect: Builders spend significantly less time on process and significantly more time on the actual problem. That's the shift that matters.

The architecture bet

This directly shapes how fast and flexibly we can build. We've built a pluggable, multi-model architecture that lets us swap between OpenAI, Claude, Mistral, and others depending on the task, performance, and cost. Not a single model bet. A flexibility bet.
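To make the idea concrete, here is a minimal sketch of what a pluggable, multi-model setup can look like: every provider sits behind the same `complete(prompt)` interface, and a router picks one based on cost. The provider names, prices, and routing rule here are illustrative assumptions, not Freshworks' actual implementation; the stub lambdas stand in for real SDK calls.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Provider:
    """One LLM provider behind a uniform interface (names/costs illustrative)."""
    name: str
    cost_per_1k_tokens: float
    complete: Callable[[str], str]  # stand-in for a real SDK call

class ModelRouter:
    """Routes a prompt to a provider; swapping models is config, not a rewrite."""

    def __init__(self) -> None:
        self.providers: Dict[str, Provider] = {}

    def register(self, provider: Provider) -> None:
        self.providers[provider.name] = provider

    def route(self, prompt: str, max_cost: float) -> str:
        # Pick the cheapest provider that fits the cost ceiling.
        eligible = [p for p in self.providers.values()
                    if p.cost_per_1k_tokens <= max_cost]
        if not eligible:
            raise ValueError("no provider fits the cost budget")
        cheapest = min(eligible, key=lambda p: p.cost_per_1k_tokens)
        return cheapest.complete(prompt)

# Stub adapters stand in for real provider SDKs.
router = ModelRouter()
router.register(Provider("gpt", 0.030, lambda p: f"[gpt] {p}"))
router.register(Provider("claude", 0.015, lambda p: f"[claude] {p}"))
router.register(Provider("mistral", 0.002, lambda p: f"[mistral] {p}"))

print(router.route("Summarize this PRD", max_cost=0.02))
```

The point of the pattern is that when a new model wins on a task, or a price drops, you change a registration line, not your product.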

I've seen this pattern before. At Five9, I watched automatic speech recognition costs fall roughly 100x over five years. The same curve is playing out with LLMs. The companies that locked into a single model early are already paying for that decision. Flexibility isn't a feature. It's a survival strategy.

The long game

Staying relevant when AI moves this fast isn't about chasing every model release or shipping an AI feature for every press cycle. It's about committing to a different way of working before the market makes it obvious.

The companies that win the next decade won't be the ones that added AI fastest. They’ll be the companies that changed how products got built before the rest of the market caught up.