Network access for private clusters

[This is the third blog in a series on #Kuberneteslearnings by Freshworks engineers.]

In our previous article, we looked at how we solved ‘access control’ for all our engineers across the various Kubernetes clusters we manage. Another problem we had to solve was network connectivity between the Kubernetes clusters and our internal tools.

One of our internal tools automates all our deployment pipelines. As mentioned in our previous article, we had to operate in a dual world of AWS OpsWorks (the existing) and Kubernetes (the new) during the migration. So we had to ensure our deployment tooling would work across both (until we transitioned to something native to Kubernetes).

We also wanted to continue to provide the same experience to our engineers. Our engineers rely on the deployment automation tool to define all the stages/steps of their deployment pipeline while the system takes care of orchestrating all of it. It also includes a lot of controls to shape traffic after deployments so that we can roll out changes to a desired subset of our customers, watch for errors, and decide whether to roll out further (or roll back).

The challenge

The deployment automation tool, which lives in its own AWS account and Virtual Private Cloud (VPC), has to interact with various Kubernetes API endpoints during deployment. All our Kubernetes API server endpoints are private (we need guarantees that they are accessible only from networks we trust). This meant we needed private connectivity from our central deployment automation tool to the VPCs (where the Kubernetes clusters are provisioned) across various AWS accounts.

By default, the EKS cluster endpoint is publicly accessible. If you configure it to be private instead (highly recommended for all production deployments), the endpoint can be accessed only from the following (a short configuration sketch follows this list):

  • The VPC itself where the worker nodes are provisioned,
  • Any peered VPC,
  • Any network that is connected through Direct Connect or VPN.
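
For illustration, locking an existing cluster’s endpoint down to private-only access looks roughly like this with boto3 (the cluster name and region are placeholders; our clusters are actually provisioned with these settings from the start through Terraform):

```python
import boto3

eks = boto3.client("eks", region_name="us-east-1")

# Restrict the API server endpoint to private access only.
# "demo-cluster" and the region are placeholders for illustration.
eks.update_cluster_config(
    name="demo-cluster",
    resourcesVpcConfig={
        "endpointPublicAccess": False,
        "endpointPrivateAccess": True,
    },
)
```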

We obviously didn’t want peered VPCs, as this would create a complex mesh of networks given the number of AWS accounts and VPCs we have. We also operate out of multiple AWS regions to serve our customers across the world. So we didn’t want to go down the route of creating a large number of peering connections.

VPC PrivateLink to the rescue

VPC PrivateLink allows you to securely connect your VPC with AWS services (such as SQS or Kinesis), or with any other service/application hosted by a third party, through Interface Endpoints. This means traffic doesn’t need to flow out of your VPC through an internet gateway or NAT device to reach those services.

So we piggybacked on VPC PrivateLink to create a network architecture that would help solve the challenges we faced.

For every EKS cluster provisioned (through Terraform templates), we created a Network Load Balancer (NLB) and a VPC Endpoint Service (that points to the NLB). The NLB was configured with the same Subnets that the EKS Control Plane was provisioned in.
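
Our provisioning is done through Terraform, but the per-cluster wiring boils down to roughly the following boto3 sketch (the names, subnet IDs, and VPC ID are placeholders; the ‘cluster_name’ tag matches what our templates add):

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")
ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. An internal NLB in the same subnets as the EKS control plane.
#    All names and IDs below are placeholders.
nlb = elbv2.create_load_balancer(
    Name="demo-cluster-master-nlb",
    Type="network",
    Scheme="internal",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222", "subnet-cccc3333"],
    Tags=[{"Key": "cluster_name", "Value": "demo-cluster"}],
)["LoadBalancers"][0]

# 2. A TCP/443 target group with IP targets; the Lambda function described
#    later keeps it populated with the API server's current IP addresses.
tg = elbv2.create_target_group(
    Name="demo-cluster-master-tg",
    Protocol="TCP",
    Port=443,
    VpcId="vpc-0123456789abcdef0",
    TargetType="ip",
)["TargetGroups"][0]

elbv2.create_listener(
    LoadBalancerArn=nlb["LoadBalancerArn"],
    Protocol="TCP",
    Port=443,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)

# 3. A VPC Endpoint Service fronted by the NLB, so that other VPCs can
#    connect to it over PrivateLink.
svc = ec2.create_vpc_endpoint_service_configuration(
    NetworkLoadBalancerArns=[nlb["LoadBalancerArn"]],
    AcceptanceRequired=False,
)["ServiceConfiguration"]

# This service name is what the interface endpoint in the tools VPC points at.
print(svc["ServiceName"])
```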

On the VPC where our automation tools reside, we created a VPC PrivateLink Interface Endpoint for every Endpoint Service created in the above step. That way, if we have to reach any specific EKS private endpoint, we can simply hit the VPC PrivateLink Endpoint and the network traffic remains completely private.
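
On the automation-tools side, the matching per-cluster piece is roughly this (again a boto3 sketch; the VPC ID, subnet IDs, security group, and service name are placeholders, with the service name coming from the endpoint service created above):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Interface endpoint in the automation-tools VPC, pointing at the
# per-cluster endpoint service. All IDs and the service name are placeholders.
endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0aaaa1111bbbb2222c",  # tools VPC
    ServiceName="com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0",
    SubnetIds=["subnet-dddd4444", "subnet-eeee5555"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
)["VpcEndpoint"]

# Traffic sent to the endpoint's DNS name stays on AWS's private network
# all the way to the cluster's NLB (and therefore the EKS API server).
print(endpoint["DnsEntries"][0]["DnsName"])
```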

Lambda sync

One key missing part that we still haven’t discussed is how the NLB discovers the EKS endpoint IPs it needs to send traffic to. The private endpoint that EKS provides is a Domain Name System (DNS) endpoint, and the EKS control plane is deployed as multiple EC2 Instances across Availability Zones for High Availability. This means the DNS endpoint resolves to multiple IP addresses, and those IP addresses could change in the future (if the EKS service decides to scale or replace Instances).

So we wrote a Lambda function that keeps our NLB target groups up to date with the IP addresses of the EKS private endpoint.

Here’s what the Lambda function does:

  1. It describes all the NLBs that are tagged as front-ending a Kubernetes Master (the tags are added to the NLBs as part of our Terraform template that orchestrates the cluster provisioning),
  2. It picks up the ‘cluster_name’ from the tag and makes EKS API calls to get the cluster’s DNS endpoint,
  3. It performs a DNS resolution to get all the current IP addresses of the EKS Master DNS endpoint,
  4. It updates the respective NLB’s target groups with the fresh set of IPs.

Here’s a simplified sketch of what the Lambda function looks like (boto3-based; the ‘cluster_name’ tag key mirrors our Terraform templates, and pagination, batching, and error handling are omitted for brevity):
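
```python
import socket

import boto3

elbv2 = boto3.client("elbv2")
eks = boto3.client("eks")


def lambda_handler(event, context):
    # 1. Find the NLBs tagged as front-ending a Kubernetes Master.
    #    (describe_tags accepts up to 20 ARNs per call; batching is omitted here.)
    load_balancers = elbv2.describe_load_balancers()["LoadBalancers"]
    arns = [lb["LoadBalancerArn"] for lb in load_balancers if lb["Type"] == "network"]
    if not arns:
        return

    for desc in elbv2.describe_tags(ResourceArns=arns)["TagDescriptions"]:
        tags = {t["Key"]: t["Value"] for t in desc["Tags"]}
        cluster_name = tags.get("cluster_name")
        if not cluster_name:
            continue

        # 2. Ask EKS for the cluster's (private) API server endpoint.
        endpoint = eks.describe_cluster(name=cluster_name)["cluster"]["endpoint"]
        hostname = endpoint.replace("https://", "")

        # 3. Resolve the DNS name to the current set of IP addresses.
        current_ips = {
            info[4][0] for info in socket.getaddrinfo(hostname, 443, socket.AF_INET)
        }

        # 4. Reconcile the NLB's target group with the fresh set of IPs.
        target_group = elbv2.describe_target_groups(
            LoadBalancerArn=desc["ResourceArn"]
        )["TargetGroups"][0]
        tg_arn = target_group["TargetGroupArn"]

        registered_ips = {
            t["Target"]["Id"]
            for t in elbv2.describe_target_health(TargetGroupArn=tg_arn)[
                "TargetHealthDescriptions"
            ]
        }

        new_ips = current_ips - registered_ips
        stale_ips = registered_ips - current_ips
        if new_ips:
            elbv2.register_targets(
                TargetGroupArn=tg_arn,
                Targets=[{"Id": ip, "Port": 443} for ip in new_ips],
            )
        if stale_ips:
            elbv2.deregister_targets(
                TargetGroupArn=tg_arn,
                Targets=[{"Id": ip, "Port": 443} for ip in stale_ips],
            )
```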

With that, we can now access all of our EKS Master endpoints through VPC PrivateLink, keeping our control plane accessible only from trusted networks.

We just have one last step left.

Access across regions

As mentioned at the beginning of this article, we run our infrastructure across many AWS regions to serve our customers in different geographies. So our Kubernetes clusters are also provisioned across multiple AWS regions. This meant establishing private connectivity from one central VPC to many VPCs running in different regions.

So here’s what we did:

  1. In the AWS account where our automation tools reside, we created ‘Empty VPCs’ in each of the regions where we would have our EKS clusters. These ‘Empty VPCs’ do NOT run any infrastructure at all.
  2. We created ‘Cross Region VPC peering’ connections between each of these ‘Empty VPCs’ and the VPC where our automation tools reside (a sketch of this peering setup follows below).
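
For completeness, the cross-region peering handshake looks roughly like this with boto3 (the regions and VPC IDs are placeholders):

```python
import boto3

# The tools VPC lives in us-east-1; the 'empty' VPC lives in eu-central-1.
# Regions and VPC IDs are placeholders.
ec2_tools = boto3.client("ec2", region_name="us-east-1")
ec2_remote = boto3.client("ec2", region_name="eu-central-1")

# Request a cross-region peering connection from the tools VPC
# to the empty VPC in the remote region.
peering = ec2_tools.create_vpc_peering_connection(
    VpcId="vpc-0aaaa1111bbbb2222c",      # tools VPC
    PeerVpcId="vpc-0dddd3333eeee4444f",  # empty VPC
    PeerRegion="eu-central-1",
)["VpcPeeringConnection"]

# Accept the request from the remote region's side (in practice you may
# need to wait briefly for the request to propagate before accepting).
ec2_remote.accept_vpc_peering_connection(
    VpcPeeringConnectionId=peering["VpcPeeringConnectionId"]
)

# Route table entries on both sides (not shown) send the relevant
# CIDR ranges over the peering connection.
```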

Once we had the above setup in place, we were equipped to create EKS clusters in any region, along with the corresponding ‘VPC PrivateLink Endpoints’ in that region’s ‘Empty VPC’, reachable from the VPC where our automation tools reside over the peering connections. And with that, we had private connectivity to all of our EKS endpoints flowing through VPC PrivateLink.

All the above steps were written as Terraform templates so we could confidently create the infrastructure whenever required (say, when we want to bring a new region online).

Other options considered

When we started designing this, we also looked at enabling Route53 Resolver inbound and outbound endpoints for our EKS clusters (as described in this blog). While this option allows DNS resolution to happen outside of the cluster’s VPC, it still requires a peering connection to each VPC. So we quickly pivoted to the architecture described above.

We hope you found this interesting and can apply this architecture for some of your use cases. Do share your thoughts (including any other ways of simplifying the architecture) in the comments below.