How it works: Kubernetes Policy Management with Nirmata Policy Manager

Earlier this year, the CNCF Kubernetes Security Special Interest Group (SIG) and Policy Working Group (WG) published a free white paper on Kubernetes Policy Management to educate the community on best practices for managing Kubernetes configurations and runtimes using policies. The paper explains why Kubernetes policy management is becoming necessary for the security and automation of Kubernetes clusters and workloads.

 

As discussed in our previous blog post, DevSecOps teams can use the paper as a guide to improve their Kubernetes policy management. While the paper is comprehensive in describing the policy management framework and its concepts, it does not provide implementation guidance, such as which tools to use. In this article, we discuss how the policy architecture components can be applied in DevOps environments by mapping them to Nirmata products.

Mapping Architecture Components with Nirmata Products

Kubernetes policy management, as described in the paper, builds on the architecture defined by XACML (eXtensible Access Control Markup Language). The OASIS XACML standard defines a policy language, a reference architecture, and a processing model, and is intended to express security policies such as allow/deny decisions based on user, resource, action, and environment attributes. Because it modularizes Kubernetes policy management through abstraction, the architecture has grown popular with DevOps teams: the separation of concerns simplifies how policies, requests, and responses are written.

 

When applied to Kubernetes policy management, the XACML architecture relies on several components: the Policy Administration Point (PAP), Policy Enforcement Point (PEP), Policy Decision Point (PDP), and Policy Information Point (PIP). With the CNCF paper adopting the XACML architecture as its reference model, Nirmata's policy management solutions have been designed to take advantage of it. Nirmata solutions for Kubernetes map directly to the XACML components to enforce policies across Kubernetes clusters and at key stages of the cloud native delivery pipeline, as detailed below.

SOURCE: https://github.com/kubernetes/sig-security/blob/main/sig-security-docs/papers/policy/images/XACML-architecture.png

 

The Policy Administration Point (PAP)

In Kubernetes, the Policy Administration Point (PAP) is a multi-cluster policy administration interface that allows policies to be bound to managed clusters. The PAP is also used to author, deploy, and manage policy changes based on those bindings.

Nirmata Policy Manager for Kubernetes is the Nirmata product responsible for policy administration. It serves as a PAP by covering the entire lifecycle of Kyverno policies, including compliance reports, alerts, and detailed insight into policy violations. Other functional advantages of the Policy Manager include:

  • Multi-cluster Kyverno management
  • Policy-as-code (GitOps) for deploying policies across clusters (a sketch of this workflow appears later in this section)
  • Policy customizations
  • Policy violation reports
  • DevOps collaboration to eliminate friction and delays
  • Supply chain security enforcement and integrations
  • Adherence to the CIS Benchmarks
  • Compliance standards
  • Engine health maintenance
  • Policy drift and tamper detection

Companies can use the Nirmata Policy Manager alongside any other Kubernetes platform or distribution, such as OpenShift and Rancher.

Nirmata’s Policy Manager 
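As a sketch of the policy-as-code (GitOps) workflow listed above, Kyverno policies can be kept in a Git repository and synced to managed clusters by a GitOps tool. The repository layout and kustomization.yaml below are illustrative assumptions, not Nirmata Policy Manager's actual repository structure:

```yaml
# Hypothetical Git repository layout for policies-as-code:
#
#   policies/
#   ├── kustomization.yaml
#   ├── require-labels.yaml        # Kyverno ClusterPolicy
#   └── restrict-registries.yaml   # Kyverno ClusterPolicy
#
# policies/kustomization.yaml — lets a GitOps tool (or `kubectl apply -k`)
# sync every policy in this directory to a managed cluster.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - require-labels.yaml
  - restrict-registries.yaml
```

Pointing a GitOps controller at such a directory keeps the policies on every managed cluster in sync with what is committed to Git, which is the outcome the policy-as-code feature aims for centrally.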

 

Policy Enforcement Point (PEP)

PEPs enforce policies that keep Kubernetes workloads and clusters in the desired state defined by policy. They can also audit configurations and alert on any API resources that violate the configured policies. PEPs support three types of enforcement: admission enforcement, runtime enforcement, and enforcement using built-in Kubernetes policy objects.
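As a minimal illustration of the third category, Kubernetes ships policy objects that enforce constraints on their own, without an external engine. The default-deny NetworkPolicy below is a standard Kubernetes example (the namespace name is hypothetical), blocking all ingress traffic to pods until more specific policies allow it:

```yaml
# Built-in Kubernetes policy object: deny all ingress traffic
# to every pod in the "demo" namespace (hypothetical namespace name).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: demo
spec:
  podSelector: {}        # empty selector matches all pods in the namespace
  policyTypes:
    - Ingress            # no ingress rules listed, so all ingress is denied
```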

Kyverno acts as a PEP through the Kubernetes API server, helping keep Kubernetes workloads and clusters compliant. Outside the cluster, the Kyverno CLI applies policies to YAML resource manifest files as part of a software delivery pipeline: it integrates Kyverno into GitOps workflows and checks resource manifests for policy compliance before they are committed to version control and applied to clusters. This lets you test policies and validate resources in your CI/CD pipelines before anything reaches your cluster.
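A minimal sketch of such a pipeline step is shown below, using GitHub Actions syntax as one example CI system; the job name and file paths are hypothetical, and the sketch assumes the Kyverno CLI is already installed on the runner:

```yaml
# Hypothetical CI job: check manifests against Kyverno policies
# before they are merged and applied to a cluster.
name: policy-check
on: [pull_request]

jobs:
  kyverno-validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # `kyverno apply <policy> --resource <manifest>` evaluates the policy
      # against the manifest and reports any violations.
      - name: Validate deployment manifests
        run: |
          kyverno apply policies/require-labels.yaml \
            --resource manifests/deployment.yaml
```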

 

Policy Decision Point (PDP)

Policy engines serve as the PDP for Kubernetes, working with the extensible Kubernetes API server as admission controllers or as runtime scanners inside the cluster. They are responsible for making policy decisions in the areas of security, resilience, and automation.

Nirmata's enterprise-grade policy engine, based on Kyverno, functions as a Policy Decision Point alongside the Kubernetes API. Inside a cluster it runs as a dynamic admission controller: it receives validating and mutating admission webhook HTTP callbacks from the kube-apiserver, applies matching policies, and returns results that enforce admission policies or reject requests.

So how does Kyverno work as a PDP? Kyverno runs as pods and services deployed into your existing cluster and registers several admission webhook configurations. These webhooks handle API requests coming into Kubernetes by either validating part of the request (validating admission webhook) or modifying the request before it is applied (mutating admission webhook). Typical uses include ensuring deployments are secure, that they meet organizational criteria (for example, carrying a cost center label), or that all deployments mount the same volume. For more detail on Kyverno, see the architecture diagram below.

 [kyverno architecture diagram]

SOURCE: https://kyverno.io/docs/introduction/
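As an illustration of the cost center example above, the Kyverno ClusterPolicy sketched below validates that every Deployment carries a cost-center label; the policy name, label key, and Enforce action are illustrative choices, not a policy shipped by Nirmata:

```yaml
# Illustrative Kyverno ClusterPolicy: reject Deployments that do not
# carry a "cost-center" label (hypothetical policy and label names).
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-cost-center
spec:
  validationFailureAction: Enforce   # use Audit to report without blocking
  rules:
    - name: check-cost-center-label
      match:
        any:
          - resources:
              kinds:
                - Deployment
      validate:
        message: "A cost-center label is required on every Deployment."
        pattern:
          metadata:
            labels:
              cost-center: "?*"      # any non-empty value
```

When the corresponding admission webhook callback arrives, Kyverno evaluates this rule and returns an admission response that either allows the request or rejects it with the message above.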

 

Policy Information Point (PIP)

Kubernetes policy engines frequently use PIPs to query additional information, such as metadata and configuration data, from the Kubernetes API server in order to make data-driven policy decisions. Kyverno implements the PIP component through helpers or collectors, which retrieve data from external systems and make it available to Kyverno for policy decisions. For example, a helper can retrieve configuration from cloud provider APIs, or obtain certificates from KMS tools for image verification policies.

How Kyverno Employs PIP (SOURCE: https://kyverno.io/docs/installation/)
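A minimal sketch of how a policy can pull in external data is shown below: the rule reads a value from a ConfigMap through Kyverno's context feature and uses it in a deny condition. The ConfigMap name, namespace, and data key are hypothetical:

```yaml
# Illustrative Kyverno ClusterPolicy using external data (PIP-style lookup):
# the expected "team" value comes from a ConfigMap rather than being hard-coded.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-registered-team
spec:
  validationFailureAction: Audit     # report violations without blocking
  rules:
    - name: check-team-label
      match:
        any:
          - resources:
              kinds:
                - Deployment
      context:
        # Hypothetical ConfigMap "policy-data" in the "kyverno" namespace,
        # with a data key "team" holding the expected label value.
        - name: policydata
          configMap:
            name: policy-data
            namespace: kyverno
      validate:
        message: "The team label must match the value registered in policy-data."
        deny:
          conditions:
            any:
              - key: "{{ request.object.metadata.labels.team || '' }}"
                operator: NotEquals
                value: "{{ policydata.data.team }}"
```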

 

Learning More: Policy Management with Nirmata

Kubernetes policy is a powerful capability that directly affects security, but challenges remain. Because applications are deployed with so many moving parts, scaling policies is hard: there are numerous things to secure, each in its own way, and it is easy to overlook something. This is addressed by employing a Policy as Code (PaC) engine such as Kyverno, which brings version and access controls, automated testing, and automated deployment to policies. Kyverno enables rapid infrastructure growth while maintaining security, and it provides clear visibility and control over the policies enforced across your clusters, environments, and applications.

Nirmata also enables centralized policy administration across enterprise Kubernetes environments and supports upstream Git for policy management across clusters. Sign up for a demo of Nirmata Policy Manager to start your Kubernetes policy management journey the right way. Please reach out to Nirmata if you have any questions about this post, Kyverno, or our Kubernetes policy management solution.
