Kubernetes elegantly automates the application container lifecycle, but it can be complex to configure and manage. In this post, I will introduce ten Kubernetes best practices and show you how to easily apply them to your clusters.
Let’s start with the best practices!
Kubernetes Best Practices
Disallow root user
By default, all processes in a container run as the root user (uid 0). To prevent potential compromise of the container host, it is important to specify a non-root, least-privileged user when building the container image and to make sure that all application containers run as a non-root user.
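For example, a pod spec can enforce a non-root user through its securityContext. This is a minimal sketch; it assumes an image (named myapp here for illustration) that supports running as uid 1000.

apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  securityContext:
    runAsNonRoot: true   # reject the pod if the image attempts to run as uid 0
    runAsUser: 1000      # example non-root uid; must exist in the image
  containers:
  - name: app
    image: myapp:latest  # hypothetical image name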
Disallow privileged containers
A privileged container is any container where the container's uid 0 is mapped to the host's uid 0. A process within a privileged container can gain unrestricted host access. Without proper settings, a process can also gain more privileges than its parent. Application containers should not be allowed to execute in privileged mode, and privilege escalation should not be allowed.
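For example, the container securityContext can explicitly disable both behaviors. A sketch, using standard Kubernetes pod spec fields and a hypothetical image name:

spec:
  containers:
  - name: app
    image: myapp:latest   # hypothetical image
    securityContext:
      privileged: false               # do not map container uid 0 to host uid 0
      allowPrivilegeEscalation: false # process cannot gain more privileges than its parent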
Disallow adding new capabilities
Linux allows defining fine-grained permissions using capabilities. With Kubernetes, it is possible to add capabilities that escalate the level of kernel access and allow other potentially dangerous behaviors. Ensure that application pods cannot add new capabilities at runtime.
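For example, a container can drop all default capabilities and add none. A sketch with a hypothetical image name:

spec:
  containers:
  - name: app
    image: myapp:latest   # hypothetical image
    securityContext:
      capabilities:
        drop:
        - ALL   # drop all default capabilities; with no "add" list, no new capabilities can be acquired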
Disallow changes to kernel parameters
The sysctl interface allows modifications to kernel parameters at runtime. In a Kubernetes pod, these parameters can be specified as part of the pod's securityContext. Kernel parameter modifications can be used for exploits, so adding or changing parameters should be restricted.
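For reference, this is how a pod would request a kernel parameter change; a policy should block or tightly restrict this pattern (net.core.somaxconn is just an illustrative parameter):

spec:
  securityContext:
    sysctls:
    - name: net.core.somaxconn   # example kernel parameter a pod might try to set
      value: "1024"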
Disallow use of bind mounts (hostPath volumes)
Kubernetes pods can use host bind mounts (i.e. directories and volumes mounted from the container host) in containers. Access to host resources can expose shared data or allow privilege escalation. In addition, using host volumes couples application pods to a specific host. Bind mounts should not be allowed for application pods.
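For reference, this is the hostPath pattern such a policy would flag; the image name and path are illustrative:

spec:
  containers:
  - name: app
    image: myapp:latest    # hypothetical image
    volumeMounts:
    - name: host-data
      mountPath: /data
  volumes:
  - name: host-data
    hostPath:
      path: /var/lib/app-data   # bind mount from the node; couples the pod to this host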
Disallow access to the docker socket bind mount
The docker socket bind mount allows access to the Docker daemon on the node. This access can be used for privilege escalation and to manage containers outside of Kubernetes. Hence, access to the docker socket should not be allowed for application workloads.
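For reference, this is the bind mount that should be blocked for application workloads:

  volumes:
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock   # grants direct access to the Docker daemon on the node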
Disallow use of host network and ports
Sharing the container host's network interfaces gives pods access to the host networking stack and allows potential snooping of network traffic across application pods. Application pods should not be allowed to use the host network or host ports.
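For reference, these are the pod settings such a policy would disallow; the image name and port are illustrative:

spec:
  hostNetwork: true      # pod shares the node's network namespace
  containers:
  - name: app
    image: myapp:latest  # hypothetical image
    ports:
    - containerPort: 8080
      hostPort: 8080     # binds directly to a port on the node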
Require read-only root filesystem
A read-only root filesystem helps enforce an immutable infrastructure strategy; the container should only write to mounted volumes, which can persist state even if the container exits. An immutable root filesystem also prevents malicious binaries from writing to the container's filesystem.
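For example, a container can run with a read-only root filesystem and write only to a mounted volume. A sketch; the image name and mount path are illustrative:

spec:
  containers:
  - name: app
    image: myapp:latest   # hypothetical image
    securityContext:
      readOnlyRootFilesystem: true   # the container cannot write to its own root filesystem
    volumeMounts:
    - name: tmp
      mountPath: /tmp                # writable scratch space provided by a volume
  volumes:
  - name: tmp
    emptyDir: {}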
Require pod resource requests and limits
Application workloads share cluster resources. Hence, it is important to manage resources assigned for each pod. It is recommended that requests and limits are configured per pod and include at least CPU and memory resources.
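For example, each container can declare its requests and limits; the values below are illustrative and should be sized for your workload:

spec:
  containers:
  - name: app
    image: myapp:latest   # hypothetical image
    resources:
      requests:
        cpu: 100m         # guaranteed share used for scheduling
        memory: 128Mi
      limits:
        cpu: 500m         # hard ceiling enforced at runtime
        memory: 256Mi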
Require livenessProbe and readinessProbe
Liveness and readiness probes help manage a pod’s lifecycle during deployments, restarts, and upgrades. If these checks are not properly configured, pods may be terminated while initializing or may start receiving user requests before they are ready.
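For example, a container might expose HTTP health endpoints for its probes. A sketch; the image name, paths, and port are hypothetical:

spec:
  containers:
  - name: app
    image: myapp:latest    # hypothetical image
    livenessProbe:
      httpGet:
        path: /healthz     # hypothetical health endpoint
        port: 8080
      periodSeconds: 10
    readinessProbe:
      httpGet:
        path: /ready       # hypothetical readiness endpoint
        port: 8080
      periodSeconds: 5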
Applying the Best Practices
Now that we understand the best practices, let’s take a look at how to apply these to your clusters and ensure that all workloads comply with them.
For several of these pod security settings, Kubernetes itself provides a policy object called the Pod Security Policy (PSP). However, PSPs are complex to configure and manage across workloads, and misconfigurations can result in pods failing to schedule. PSPs are also a beta resource that is unlikely to reach general availability (GA) due to inherent limitations and usability issues. Hence, better solutions are needed.
Kyverno is an open-source, Kubernetes-native policy management framework that can validate, mutate, and generate workload configurations. Kyverno installs as an admission controller webhook, which means that all requests to create, update, and delete workload configurations can be inspected by Kyverno.
Kyverno policies are Kubernetes resources and are simple to write. For example, here is a policy that checks for liveness and readiness probes in each pod:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-pod-probes
spec:
  rules:
  - name: validate-livenessProbe-readinessProbe
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "Liveness and readiness probes are required"
      pattern:
        spec:
          containers:
          - livenessProbe:
              periodSeconds: ">0"
            readinessProbe:
              periodSeconds: ">0"
Kyverno is designed for Kubernetes and hence has deep knowledge of how Kubernetes workloads are managed. For example, if a pod cannot be deployed due to policy enforcement, Kyverno will look up the pod's controller (e.g. the Deployment that created the pod) and report a policy violation on that controller. The policy violations are created as Kubernetes resources in the same namespace as the workload. This makes for a good developer experience, as it is easy to inspect and fix non-compliant configurations.
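For example, you can list the violations reported in a workload's namespace; the exact resource name may vary with the Kyverno version, and the namespace below is illustrative:

kubectl get policyviolations -n my-app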
Installing Kyverno in your cluster is easy using this single command:
kubectl create -f https://github.com/nirmata/kyverno/raw/master/definitions/install.yaml
Next, you can apply the best practice policies that include the ones above and several more:
git clone https://github.com/nirmata/kyverno.git
cd kyverno
kubectl create -f samples/best_practices
The policies can be set to simply validate new and existing configurations and report violations, or to enforce checks and block configuration changes that do not comply with the policies.
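In Kyverno, this behavior is controlled by the policy's validationFailureAction field. A sketch; the exact accepted values depend on the Kyverno version in use:

spec:
  validationFailureAction: enforce   # block non-compliant resources; use "audit" to only report violations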
You can read more about Kyverno at the GitHub site:
https://github.com/nirmata/kyverno/blob/master/README.md
Using Kyverno with Nirmata
An easier way to operate Kyverno and manage policies is with Nirmata. Simply sign up for a free account and register your cluster by selecting the option to manage an existing Kubernetes cluster. Once your cluster is registered, Nirmata can automatically deploy Kyverno as a cluster add-on service.
As Kyverno runs in your cluster, Nirmata collects policy violations and correlates them back to workloads. Nirmata provides easy ways to centrally manage policies for multiple clusters, manage violations, create exceptions for workloads, and even generate alarms for violations.
Summary
Kubernetes is powerful and provides many configuration options for workloads. However, Kubernetes is not secure out of the box, and securing cluster resources and workloads requires careful configuration.
Kyverno makes it easy to audit workloads for best practice compliance and enforce policies. The Kyverno repository has several best practice policies you can immediately use to get started.
Nirmata provides a simple way to register any Kubernetes cluster and install add-on services like Kyverno. Nirmata integrates with Kyverno to provide central visibility, reporting, and customizable alerts across all your clusters. You can get started with a 30-day free trial and then continue using the free tier.