Kubernetes Policy Management Made Easy Using the Enterprise Kyverno Operator

Managing Upgrades and Policies in Kyverno

Kyverno’s rapid growth and steady stream of new features make it challenging to keep up with in the DevSecOps space. Staying current with the latest version requires frequent upgrades, which means frequent Day 2 activities. It is also essential to verify that the already installed Kubernetes policies continue to function correctly after a Kyverno upgrade. Another often overlooked aspect is Kyverno’s compatibility with Kubernetes versions. Kyverno OSS supports a limited range of Kubernetes versions (see the compatibility matrix here), which can block users from moving to newer Kyverno releases when their cluster’s Kubernetes version falls outside the supported range.

The need for an orchestration tool

Managing the lifecycle of any application, including Kyverno, becomes increasingly challenging as the use cases become more complex. To streamline and automate tasks like deploying, scaling, upgrading, and deleting applications, we require a higher level of abstraction. This is where orchestration tools come in handy – they provide an optimal level of abstraction and flexibility to the user to interact with the underlying application.

What are operators?

Although not a new concept in Kubernetes, the Operator pattern aims to capture the behavior of a human operator and automate it in a reliable and extensible way, letting us add capabilities to Kubernetes itself. For further information on the Operator pattern, please refer to the official documentation here.

What is the Enterprise Kyverno Operator?

Enterprise Kyverno Operator

The Enterprise Kyverno Operator offers comprehensive lifecycle management capabilities that go beyond just Kyverno itself, extending to related components like policies and adapters. Managing these components at scale can be challenging due to compatibility and upgrade issues. However, the Operator provides a seamless solution for your policy and governance ecosystem, ensuring stability and smooth operation. The Operator itself can be used with `nctl` (Nirmata CLI) and is also available as a Helm chart.

Enterprise Kyverno Operator in Action

The Operator is installed via its Helm chart. If your cluster has three nodes or fewer, you can deploy the Operator without a license key; for larger clusters, you will need to provide a license key to the install command. If you do not already have a license key, you can request a free trial license here.

A detailed explanation of the operator chart options can be found here.

# add kyverno-charts helm repo
helm repo add nirmata https://nirmata.github.io/kyverno-charts

# update nirmata helm repo
helm repo update nirmata

# install the operator
# licenseKey field is needed only for clusters with > 3 nodes
helm install enterprise-kyverno-operator nirmata/enterprise-kyverno-operator -n enterprise-kyverno-operator --create-namespace --set licenseManager.licenseKey=<your license key here>

Let’s look at the components installed as part of the Operator.

kubectl get all -n enterprise-kyverno-operator

The enterprise-kyverno-operator deployment is a set of controllers responsible for reconciling the Kyverno, PolicySet, and KyvernoAdapter custom resources.
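
To see which custom resources the Operator manages, you can list the CRDs it registers. This is a quick check, assuming they all live in the security.nirmata.io API group used by the resources shown later in this post:

# list the custom resource definitions registered by the Operator
# (assumes they share the security.nirmata.io API group seen below)
kubectl get crds | grep security.nirmata.io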

By default, the Operator also installs Kyverno policies for Pod Security Standard (PSS) restricted profile and RBAC Best Practices. These policies (and more) are curated by Nirmata and can be found here.

To view the list of installed policies:

kubectl get cpol

Now let’s look at the Custom Resources created by the Operator. Use the -o yaml option to view the complete resource spec.

> kubectl get kyvernoes.security.nirmata.io -n enterprise-kyverno-operator                                                       
NAME      NAMESPACE   VERSION                RUNNING   HA MODE
kyverno   kyverno     v1.9.5-n4k.nirmata.1   true      true

> kubectl get policysets.security.nirmata.io -n enterprise-kyverno-operator                                                                    
NAME                      ALL POLICIES READY
pod-security-baseline     true
pod-security-restricted   true
rbac-best-practices       true

# inspect the status field
> kubectl get kyvernoes.security.nirmata.io -n enterprise-kyverno-operator kyverno -o json | jq '.status'
{
  "isHA": true,
  "isRunning": true,
  "lastUpdated": "2023-06-16T13:41:38Z"
}

> kubectl get policysets.security.nirmata.io -n enterprise-kyverno-operator pod-security-restricted -o json | jq '.status'
{
  "allPoliciesReady": true,
  "lastUpdated": "2023-06-16T13:41:41Z"
}

Now consider a scenario where a malicious user gains access to your cluster and tries to tamper with the Kyverno deployment. Suppose they delete the kyverno deployment. In a regular Kyverno installation, this would mean there is no longer an entity to stop bad requests from being admitted into the cluster. With the Operator in place, however, the kyverno deployment is automatically recreated whenever it is deleted.

> kubectl delete deploy kyverno -n kyverno                                                                                                 
deployment.apps "kyverno" deleted

> kubectl get deploy -A -w
NAMESPACE                     NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
enterprise-kyverno-operator   enterprise-kyverno-operator   1/1     1            1           27m
…
kyverno                       kyverno                       1/1     1            1           31s
kyverno                       kyverno-cleanup-controller    1/1     1            1           32s
kyverno                       kyverno                       1/1     1            1           38s

As we can see, even though the kyverno deployment was deleted, the Operator creates a new one for us. In the future, the Operator will also be able to block such tampering actions.
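
As a quick sanity check, you can also confirm that the Operator has reconciled Kyverno back to a running state by inspecting the status of the Kyverno Custom Resource, just as we did earlier:

# confirm Kyverno is reported as running again after the recreation
# (should print true once the deployment is back up)
kubectl get kyvernoes.security.nirmata.io kyverno -n enterprise-kyverno-operator -o json | jq '.status.isRunning'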

By default, the Operator deploys 3 replicas of Kyverno. Now consider the case where a user wants to reduce the Kyverno replica count to 1. As we saw with the delete operation, simply updating the replica count on the deployment will result in the Operator setting it back to 3. To avoid this, and to signal the actual intent to scale down, we have to either update the Kyverno Custom Resource or run the helm upgrade command with the new replica count. Let’s look at both approaches.

# scale by patching the Kyverno Custom Resource
> kubectl -n enterprise-kyverno-operator patch kyvernoes kyverno --type=merge -p '{"spec": {"replicas": 1}}'
kyverno.security.nirmata.io/kyverno patched

# verify if the deployment is scaled
> kubectl get deploy kyverno -n kyverno
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
kyverno     1/1     1            1           17m

# update replica count using helm
# licenseKey field is needed only for clusters with > 3 nodes
> helm upgrade enterprise-kyverno-operator nirmata/enterprise-kyverno-operator -n enterprise-kyverno-operator --set licenseManager.licenseKey=<your license key here> --set kyverno.replicaCount=1

Kyverno Adapters are an integral part of the Enterprise Operator. In this video, we saw how to install the standalone Kyverno AWS Adapter, and here is a blog post on writing policies for EKS Best Practices based on this adapter. With the Enterprise Operator, it is now easy to manage the lifecycle of various adapters. Let’s look at how to install the Kyverno AWS Adapter using the Operator.

# install the adapter
# licenseKey field is needed only for clusters with > 3 nodes
> helm upgrade enterprise-kyverno-operator nirmata/enterprise-kyverno-operator -n enterprise-kyverno-operator \
--set licenseKey=<your-license-key> \
--set awsAdapter.createCR=true \
--set awsAdapter.roleArn=<role-arn> \
--set awsAdapter.eksCluster.name=<cluster-name> \
--set awsAdapter.eksCluster.region=<cluster-region>

# verify if the adapter is running
> kubectl get kyvernoadapters aws-adapter-config -n enterprise-kyverno-operator                        
NAME                 ADAPTER TYPE   NAMESPACE             VERSION   RUNNING
aws-adapter-config   AWS            kyverno-aws-adapter   v0.3.0    true

# view the complete adapter resource spec
> kubectl get awsacfg -n kyverno-aws-adapter kyverno-aws-adapter -o yaml

To generate CIS Benchmark reports, we need to configure the cis-adapter similarly to the aws-adapter above (a rough sketch follows). Detailed instructions can be found here.
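
As a rough sketch only: the cisAdapter value names below are assumptions made by analogy with the awsAdapter values above, so refer to the linked instructions for the actual chart options.

# hypothetical sketch: enable the CIS adapter through chart values
# (cisAdapter.createCR is an assumed value name, by analogy with awsAdapter.createCR)
> helm upgrade enterprise-kyverno-operator nirmata/enterprise-kyverno-operator -n enterprise-kyverno-operator \
--set cisAdapter.createCR=true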

Conclusion

In this post, we have gained an understanding of how the Operator simplifies the management of a Kyverno deployment, including its policies and data adapters. As Kyverno plays a critical role in Kubernetes policy and governance by sitting in the admission review path, it is crucial to ensure its maintenance and stability. The Operator assumes the responsibility of human operators in ensuring that Kyverno continuously reconciles with the configuration spec’s desired state. Additionally, the Operator functions as a “control plane” for all policy and governance requirements in the cluster, providing Kubernetes-native installation, update, and deletion of components.

What’s next for the Enterprise Kyverno Operator?

This is just the initial release of the Operator. Soon it will be able to perform complex Day 2 operations, such as upgrading Kyverno and backing up and restoring policies, with minimal human intervention. Additionally, there are advanced features in the works, including audit logging, tamper detection and prevention, and signed policies. Keep an eye on this space for upcoming updates!

Explore a complete Kubernetes policy and governance solution at: https://try.nirmata.io 

Check out (and download) our free guide to K8s policy management and security while you are here.
