How we manage deployment with Freshworks Cloud Platform
Kubernetes is said to be the platform to build platforms on. In Freshworks’ case, this is very true. Once Kubernetes established itself as the de facto container orchestration platform, we set out to create a platform around it with certain very specific capabilities in mind.
Kubernetes is a complex beast and takes some effort to tame, but once that is done, the advantages are tremendous. The idea was to build a platform that layers various automations on top of Kubernetes so that our engineering teams can leverage them easily without reinventing the wheel. We concentrated on automating best practices in two areas: deployment and security.
This article discusses the thinking and implementation details of our deployment platform and how we leverage Kubernetes, Istio and other technologies to achieve a modern cloud-native system.
The benefits of Freshworks Cloud Platform
As Freshworks grew, we realized that we needed standardization around deployment that kept the bar high for security, observability, scaling, updates, and cost optimization. Freshworks follows a DevOps model of operating: the same team that builds a product is responsible for running it in production while also being on call.
Each team, however, is different, and it was normal for them to have different practices and priorities. It was unfair to expect every one of these teams to uniformly clear a certain bar on the aspects mentioned above.
This is where Freshworks Cloud Platform plays a part, ensuring that a certain bar is always maintained, irrespective of the team.
Improving the security posture of applications deployed via Freshworks Cloud Platform is one of its foremost goals. To this end, Freshworks Cloud Platform verifies the security posture of any container before it is deployed. It will refuse to deploy containers that are not from an allowed set of container registries, or containers that have not passed SAST and DAST checks.
This is possible since the Freshworks Cloud Platform is designed to work well with Runway, our internal CI/CD system that is based on Jenkins. Freshworks Cloud Platform can look at the results of the various security-related jobs, especially those that ran SAST and DAST on the code.
Freshworks Cloud Platform (FCP) has other security features as well. It uses multiple tools to secure workloads at various levels:
- Deployments based on Kubernetes use GoAudit, which is automatically injected into all deployed pods. GoAudit helps detect system tampering.
- KubeAudit ensures that the Kubernetes cluster set up by the Freshworks Cloud Platform follows the recommended security best practices.
- Open Policy Agent is quickly becoming the standard in enforcing security policies at runtime for Kubernetes clusters. For example, we use it to enforce that our clusters only run containers from Freshworks container registries.
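An allowed-registries rule of the kind described above can be sketched as an OPA policy. The post does not say how OPA is deployed, so this example assumes the common Gatekeeper setup, and the parameter and template names are illustrative:

```yaml
# Sketch of a Gatekeeper ConstraintTemplate that rejects pods whose images
# do not come from one of the registries listed in the constraint's
# parameters. All names here are illustrative, not Freshworks' actual setup.
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: allowedregistries
spec:
  crd:
    spec:
      names:
        kind: AllowedRegistries
      validation:
        openAPIV3Schema:
          type: object
          properties:
            registries:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package allowedregistries

        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          not allowed(container.image)
          msg := sprintf("image %v is not from an allowed registry", [container.image])
        }

        allowed(image) {
          registry := input.parameters.registries[_]
          startswith(image, registry)
        }
```

A matching `AllowedRegistries` constraint would then list the permitted registry prefixes, and the admission webhook blocks anything else at deploy time.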
At Freshworks, we have our internally developed and managed services for the three pillars of observability: metrics, logging, and tracing, named Trigmetry, Haystack, and Distrace, respectively. Information about observability endpoints and agents is automatically included when containers are deployed on Freshworks Cloud Platform. Accounts on the observability services are created on-the-fly using Lambda helpers where required.
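One simple way such endpoint information can be injected is as environment variables on the deployed container. Every variable name and address below is hypothetical; the post does not describe the actual injection mechanism:

```yaml
# Sketch of a Deployment with observability endpoints injected as env vars.
# All variable names and endpoint addresses are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: app
          image: registry.example.com/example-app:1.0
          env:
            - name: METRICS_ENDPOINT   # metrics service (hypothetical)
              value: "trigmetry.internal:4317"
            - name: LOGGING_ENDPOINT   # logging service (hypothetical)
              value: "haystack.internal:514"
            - name: TRACING_ENDPOINT   # tracing service (hypothetical)
              value: "distrace.internal:9411"
```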
Freshworks Cloud Platform supports Kubernetes’ Horizontal Pod Autoscaler for scaling applications up and down automatically, and scaling parameters can be configured from the Freshworks Cloud Platform UI. Since FCP also supports background/deferred jobs, whose load is not well captured by CPU-based scaling, we are working on a scaler based on custom metrics.
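For reference, a minimal Horizontal Pod Autoscaler looks like the manifest below; the deployment name, replica bounds, and CPU threshold are illustrative values of the kind configurable from a UI:

```yaml
# Minimal HorizontalPodAutoscaler scaling a Deployment on average CPU
# utilization; name and thresholds are illustrative.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```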
The great thing about Kubernetes is that it not only creates and terminates containers in response to changing load; it can also add and remove compute nodes from the cluster automatically, making the system fairly efficient.
Having a service mesh such as Istio is very powerful. A service mesh provides a lot of functionality that was previously built into applications themselves. This simplifies application architecture while providing a single place from which policies are enforced. Functionality such as timeouts, retries, circuit-breaking, authentication, and observability can be centrally managed via clearly defined policies.
FCP uses Istio to power its Blue/Green deployments. When deploying a new release of a running application, developers specify various Blue/Green parameters, and FCP automates the rest via Istio.
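The Istio primitive behind this kind of cutover is weighted routing between two subsets of the same service. The host and subset names below are illustrative; a controller would shift the weights from 100/0 to 0/100 as the Green release is verified:

```yaml
# Sketch of Istio Blue/Green routing: two subsets of one service, with
# traffic weights the deployment controller can shift. Names illustrative.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: app
spec:
  host: app.default.svc.cluster.local
  subsets:
    - name: blue
      labels:
        version: blue
    - name: green
      labels:
        version: green
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: app
spec:
  hosts:
    - app.default.svc.cluster.local
  http:
    - route:
        - destination:
            host: app.default.svc.cluster.local
            subset: blue
          weight: 100   # current (Blue) release
        - destination:
            host: app.default.svc.cluster.local
            subset: green
          weight: 0     # new (Green) release, ramped up after verification
```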
FCP is aware of the various types of workloads. For example, it is aware if a workload is production or non-production. For non-production workloads, it automatically leverages spot instances for cost-savings. Users can also opt in for spot instances for certain types of workloads.
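One common way to steer non-production workloads onto spot capacity is a node label plus a matching toleration on the pod spec. The label and taint keys below are illustrative, since the post does not describe FCP's scheduling internals:

```yaml
# Sketch of scheduling a non-production Deployment onto spot nodes via a
# nodeSelector and toleration; label/taint keys are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: staging-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: staging-app
  template:
    metadata:
      labels:
        app: staging-app
    spec:
      nodeSelector:
        capacity-type: spot        # illustrative node label for spot nodes
      tolerations:
        - key: "spot"              # illustrative taint on spot nodes
          operator: "Exists"
          effect: "NoSchedule"
      containers:
        - name: app
          image: registry.example.com/staging-app:1.0
```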
At Freshworks, while we have multiple SaaS products we also have more than 20 in-house platform services. Products rely on them heavily for capabilities such as logging, rate limiting, and communication. A deployment system then needs to support both products and platform services.
The main workloads
The bulk of our workloads are products and platform services. While most of our products are based on the Ruby on Rails framework, some use Java. Most platform services are based on Java, with a smattering of Go and Node. These products and services are independently developed, tested, and deployed.
A shift towards micro-services
About a year ago, Freshdesk, our largest product in terms of both revenue and deployment footprint, started a slow but sure journey towards microservices. Currently, about half a dozen Java-based microservices dot its architecture diagram. These co-exist along with the Ruby on Rails monolith.
While we started by implementing only new features and capabilities as microservices, over time these microservices are expected to implement existing features as well, slowly supplanting the Ruby on Rails monolith.
Microservices-based architectures can greatly benefit from more cloud-native forms of deployment, monitoring and management, and the Freshworks Cloud Platform greatly helps in this regard.
Freshworks products depend on a variety of background workers to process deferred work. Depending on the language and framework the application instantiating the background worker is written in, background workers themselves can be implemented in a variety of ways. Freshworks Cloud Platform needs to support the instantiation and management of these background workers.
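In Kubernetes terms, a background worker is typically just another Deployment running a different command against the same application image. The sketch below assumes a Sidekiq-style Rails worker; the image, command, and queue name are illustrative, as the post does not name the worker frameworks in use:

```yaml
# Sketch of a background worker as its own Deployment, reusing the app
# image with a worker command; all names are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mailer-worker
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mailer-worker
  template:
    metadata:
      labels:
        app: mailer-worker
    spec:
      containers:
        - name: worker
          image: registry.example.com/app:1.0
          command: ["bundle", "exec", "sidekiq"]  # worker process, not the web server
          args: ["-q", "mailers"]                 # illustrative queue name
```

Running workers as separate Deployments lets them scale independently of the web tier, which is what makes a custom-metrics scaler (e.g. on queue depth) attractive for this workload type.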
While preparing applications for containerization and container orchestration deserves a separate article in itself, this section gives you a quick overview of what was on our mind. For more detail on some of the concerns we dealt with, please read A Manager’s Guide to Kubernetes Adoption.
The various workloads described thus far (the monoliths and the background workers, but not the microservices) were initially deployed onto virtual machines provisioned with Chef. To orchestrate those workloads with Kubernetes, we first had to containerize them. At this point, we took a step back to consider how best to do it rather than blindly converting services to containers.
To this end, many aspects of a 12-factor app were implemented. Since we already operated at scale, it was safe to assume that none of our applications had any locally saved state, which would have made them hard to scale and orchestrate.
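The most relevant 12-factor principle here is storing config in the environment rather than in the image. In Kubernetes that maps naturally onto a ConfigMap injected into the container; every name and value below is illustrative:

```yaml
# Sketch of 12-factor config: settings live in a ConfigMap and are injected
# as environment variables, so the same image runs in any environment.
# All names and values are illustrative.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DATABASE_HOST: "db.internal"
  REDIS_URL: "redis://cache.internal:6379"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: registry.example.com/app:1.0
          envFrom:
            - configMapRef:
                name: app-config  # all keys become env vars in the container
```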