Kubernetes YAML Example: How to Enforce Best Practices with Kyverno Policies

One of the great things about Kubernetes is its ease of configuration, which comes largely from its resource specifications being YAML-based. The plethora of YAML syntax-checking and formatting tools makes our experience with Kubernetes resources smoother, but these tools only address half of our problems: the syntax-related issues. The bulk of the issues that cost significant cycles to fix come from the semantics. How can we ensure best practices in our cluster? For example, you may want to ensure that the production namespace only uses stable images from Docker Hub, i.e., ones not tagged with ‘latest’. This is where Policies come into the picture. A Policy is a set of rules applied to a Kubernetes environment to ensure best practices, security, and compliance.

Kyverno Policies

Let’s see Policies in action with a couple of examples. We’ll use the community favorite, the Kyverno policy engine. Kyverno is easy to use and lets you write policies as Kubernetes-style YAML resources.

Example 1

To prevent over-utilization of the cluster’s CPU and memory, incoming Pods should specify their resource requests and limits. We can ensure this in our cluster with one simple Kyverno Policy:


apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-requests-limits
  annotations:
    policies.kyverno.io/category: Best Practices
    policies.kyverno.io/severity: medium
    policies.kyverno.io/subject: Pod      
spec:
  validationFailureAction: audit
  background: true
  rules:
  - name: validate-resources
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "CPU and memory resource requests and memory limits are required."
      pattern:
        spec:
          containers:
          - resources:
              requests:
                memory: "?*"
                cpu: "?*"
              limits:
                memory: "?*"
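
Under this policy, a Pod like the following passes validation, because every container declares CPU and memory requests and a memory limit (the name, image, and values below are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo            # illustrative name
spec:
  containers:
  - name: nginx
    image: nginx:1.25         # pinned tag, not 'latest'
    resources:
      requests:
        memory: "64Mi"        # satisfies the "?*" pattern: any non-empty value
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
```

Because validationFailureAction is set to audit, a Pod that omits these fields is still admitted, but the violation is recorded in a policy report; setting it to enforce would reject the Pod outright.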

Kyverno isn’t restricted to Kubernetes native resources. It also works with Custom Resource Definitions (CRDs) from other tools, such as Argo CD, Flux, and OpenShift. Let’s see an Argo CD example.

Example 2

You may want to prevent updates to the ‘project’ field after creating an Argo Application. You could do this with a very simple Kyverno Policy as follows:


apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: application-prevent-updates-project
  annotations:
    policies.kyverno.io/category: Argo
    policies.kyverno.io/severity: medium
    policies.kyverno.io/subject: Application
spec:
  validationFailureAction: audit
  background: true
  rules:
    - name: project-updates
      match:
        any:
        - resources:
            kinds:
              - Application
      preconditions:
        all:
        - key: "{{ request.operation }}"
          operator: Equals
          value: UPDATE
      validate:
        message: "The spec.project cannot be changed once the Application is created."
        deny:
          conditions:
            any:
            - key: "{{request.object.spec.project}}"
              operator: NotEquals
              value: "{{request.oldObject.spec.project}}"
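
To see what this rule catches, consider an Application created with spec.project set to team-a (all names and URLs below are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook                 # illustrative name
  namespace: argocd
spec:
  project: team-a                 # set at creation time
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps
    path: guestbook
    targetRevision: HEAD
  destination:
    server: https://kubernetes.default.svc
    namespace: guestbook
```

On a later UPDATE, {{request.oldObject.spec.project}} resolves to team-a; if the submitted {{request.object.spec.project}} differs, the NotEquals condition matches and the rule denies the change (blocking it outright once validationFailureAction is set to enforce).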

These are just two of the 190+ policies in the Kyverno policy library, the largest library among all policy engines.

There’s a lot more that Kyverno can do. The above policies each contain a single validation rule, but Kyverno supports three types of rules: validate, mutate, and generate. Further, the above policies are ClusterPolicies, which apply to matching resources across the whole cluster. We could also make them namespace-specific by creating them as Namespace Policies (kind: Policy) inside the desired namespace.
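
As a sketch of what a mutate rule looks like (the label key and default value here are assumptions for illustration), the following policy adds a team label to Pods that don’t already have one:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-team-label    # illustrative name
spec:
  rules:
  - name: add-team-label
    match:
      resources:
        kinds:
        - Pod
    mutate:
      patchStrategicMerge:
        metadata:
          labels:
            +(team): unassigned   # '+()' adds the label only if it is not already set
```

Unlike a validate rule, this never rejects a resource; it rewrites the incoming Pod during admission.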

How does Kyverno perform its magic? Kyverno runs as a dynamic admission controller inside a Kubernetes cluster. It registers validating and mutating admission webhooks with the kube-apiserver and applies the configured Policies to matching resources as they are admitted. [Figure: high-level overview of Kyverno’s architecture] You can read more about Kyverno on kyverno.io.

Nirmata Policy Manager for Kyverno Policies

As the name suggests, Policy Management is the task of managing policies. We saw how to implement and deploy policies with Kyverno. However, this can get complicated over time, especially at the scale of multiple in-production clusters. Policy Managers help us solve this problem.

Nirmata Policy Manager (NPM) offers a way to efficiently manage your policies by allowing Kubernetes users to curate their own set of Policies, monitor policy violations, generate policy reports, set alarms, and more. NPM allows you to supervise your Kyverno deployment and facilitates a suite of management tasks to help you ensure security, compliance, and best practices across the board.

It takes less than five minutes to onboard a cluster to NPM and start enforcing security best practices. [Screenshot: the NPM dashboard] Check out more on NPM here.

Summary

Managing Kubernetes deployments can be complex, and the complexity grows with both the number of clusters and the scale of the clusters themselves. This article showed how we can secure our clusters using easy-to-use Kyverno policies, and manage those policies with Nirmata Policy Manager.

For more information on the topics above, please reach out to Nirmata to have a conversation. Discover more about Kyverno by visiting this Nirmata page.
