Kubernetes CRD for Egress IP Address Management


A common problem for enterprise Kubernetes deployments is using a fixed IP address for outbound (egress) traffic. By default, the egress IP address in Kubernetes may vary and is shared across workloads. While other solutions exist, they are either cloud-provider specific or not designed in a Kubernetes-native manner.

In this post, we are announcing an open source Kubernetes Custom Resource Definition (CRD), called kube-static-egress-ip, that can be used to assign a static egress IP address for a specific Kubernetes workload.

The Problem

In an enterprise environment, application components running in a Kubernetes cluster will often communicate with services running outside the cluster, such as databases and messaging systems. In a secure environment, these external services are protected by a firewall that maintains a white-list of IP addresses allowed to access them.

Kubernetes networking provides several rich constructs to manage traffic across workloads using Network Policies, and to manage inbound traffic using Services and Ingresses. However, how traffic from Kubernetes workloads appears to external services is left up to the Kubernetes CNI network plug-in implementation. Most CNI implementations use the IP address of the node that the pod is running on as the egress address. This means that the egress address will vary based on which node a pod is scheduled on.

In addition, all outbound traffic from a node will share that node's IP address. This does not allow fine-grained control over traffic, as all workloads on the node appear to external services with the same IP address.

The Solution

Today, we are announcing an open source and Kubernetes native solution to the challenge of managing outbound traffic from a Kubernetes cluster – kube-static-egress-ip. The solution consists of a Kubernetes Custom Resource Definition (CRD) that is used to define egress traffic rules, and a DaemonSet that acts as a controller for these resources and manages egress traffic flows from pods.
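Conceptually, a controller like this can provide a static egress IP by routing the selected pods' outbound traffic through a designated gateway node and applying source NAT (SNAT) there. As a rough, illustrative sketch (not the project's exact implementation; the addresses in angle brackets are placeholders):

```
# On the gateway node, rewrite traffic from the selected pods to the
# protected destination so it leaves with the reserved static egress IP:
iptables -t nat -A POSTROUTING -s <pod-ip> -d <external-service-cidr> \
  -j SNAT --to-source <static-egress-ip>
```

The firewall in front of the external service then only needs to white-list the single static egress IP, regardless of which node the pods land on.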

Let’s take a closer look at how this solution works:

In the diagram below, we have a cluster with three nodes and two workloads (applications). Traffic from pods with the label “app=x” should be allowed to communicate with the external service, while traffic from other pods, for example pods with the label “app=y”, should not be allowed to communicate with it. A firewall controls access to the external service using an IP address white-list.
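For illustration, the “app=x” workload could be a Deployment like the following (the name and image are hypothetical; the point is the “app=x” label that identifies the pods whose egress traffic should use the static IP):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: x
spec:
  replicas: 2
  selector:
    matchLabels:
      app: x          # the label used to select these pods
  template:
    metadata:
      labels:
        app: x
    spec:
      containers:
      - name: x
        image: nginx  # hypothetical image; any workload that makes outbound calls
```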


Here are the steps to configure this:

  1. Run the kube-static-egress-ip DaemonSet
  2. Install the kube-static-egress-ip CRD
  3. Create a CR as follows:
apiVersion: staticegressips.nirmata.io/v1alpha1
kind: StaticEgressIP
metadata:
  name: eip
spec:
  rules:
  - egressip:          # the static IP that egress traffic should use
    service-name: x    # the Service whose backing pods this rule applies to

And, that’s it!

For full details on how this works, check out our Git repo.


Kubernetes is an extensible system, and its true power is in the solutions that it enables. kube-static-egress-ip provides an open source and Kubernetes-native solution to a common problem with enterprise adoption of Kubernetes. The current implementation is functional, but is only the beginning of what we intend to deliver.

You can try out kube-static-egress-ip, provide feedback, or submit your contributions at:


