SecOps Automation in Openshift Clusters using Kyverno

Guest Contributors: Benoit Schipper (HCS), Marcel Booms (HCS)

OpenShift’s Built-In Security Features

OpenShift is renowned for its robust out-of-the-box security features, including Role-Based Access Control (RBAC), built-in network policies, and default admission controllers. These features collectively establish a secure default state for OpenShift clusters. However, it is crucial to recognize that security is a dynamic and evolving process. There are always areas for improvement, particularly in addressing specific organizational requirements, supply chain security, and continuous compliance.

The Need for Enhanced Security Measures

While OpenShift provides a solid foundation, certain security aspects require further configuration to meet an organization’s specific needs. Custom policies for compliance, resource management, and supply chain security are essential to tailor the security posture of OpenShift clusters effectively. This is where Kyverno, an open-source CNCF policy engine designed explicitly for Kubernetes, becomes invaluable.

Automating Security and Operational Tasks with Kyverno

Kyverno automates several critical security and operational tasks:

  1. Resource Quotas: Ensuring that namespaces do not exceed their resource quotas.
  2. Resource Limits: Enforcing CPU and memory limits on containers to prevent resource exhaustion (see the policy sketch after this list).
  3. Policy Enforcement: Defining and enforcing granular policies for resource configurations to ensure compliance with security standards.
  4. Supply Chain Security: Validating container images so that only trusted images are deployed, maintaining the integrity and authenticity of the supply chain.
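As a minimal sketch of what such a guardrail can look like, the ClusterPolicy below requires CPU and memory limits on every container. The policy name and message are illustrative rather than taken from a real environment.

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-resource-limits   # illustrative name
spec:
  validationFailureAction: Enforce
  background: true
  rules:
    - name: check-container-limits
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "CPU and memory limits are required for all containers."
        pattern:
          spec:
            containers:
              # "?*" means the field must be present with a non-empty value
              - resources:
                  limits:
                    cpu: "?*"
                    memory: "?*"

With validationFailureAction set to Enforce, non-compliant Pods are rejected at admission; setting it to Audit would only record the violations in policy reports.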

Policy Enforcement on Multi-Tenant OpenShift Platforms

Whenever you have multiple tenants (DevOps teams, in our case) on a single Kubernetes-like platform, it is challenging to balance freedom and guardrails without the platform becoming inflexible for your DevOps teams. Platform engineers or enablers should never forget this when deciding on a guardrail that affects everyone on the platform. What will this guardrail prevent the DevOps teams from doing? What will happen if we do not implement it?

When considering these questions, engaging in open communication with your DevOps teams is crucial to understanding their needs and workflows. This ensures that the guardrails you implement foster productivity rather than hinder it. It also ensures that the platform’s multi-tenant services stay up without a single DevOps team being able to hinder the others. Finally, it helps to have hands-on experience with the services a DevOps team is responsible for, so you understand the nuances involved when deciding on certain guardrails.

Let’s look at a practical example of enabling a specific service while implementing guardrails to prevent any DevOps team, or a few of them, from causing issues for others. The “Red Hat OpenShift Pipelines” operator provides a CI/CD (Continuous Integration/Continuous Delivery) service on OpenShift. It is based on the open-source project Tekton. Tekton is a Kubernetes-native framework for creating CI/CD systems, allowing developers to build, test, and deploy applications across cloud providers or on-premises environments.

 

Key features of Red Hat OpenShift Pipelines include:

  1. Pipeline as Code: Define and manage pipelines using YAML files (a minimal example follows this list).
  2. Kubernetes Native: Seamlessly integrates with Kubernetes, leveraging its native features for scaling and managing resources.
  3. Extensibility: Customize and extend pipelines with reusable tasks and custom resources.
  4. Security: Utilize Kubernetes security features, including role-based access control (RBAC), to secure the CI/CD process.
  5. Scalability: Automatically scale pipelines to handle increased workloads.
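To make the “Pipeline as Code” idea concrete, a minimal, hypothetical Pipeline definition could look like this; the pipeline name, container image, and script are purely illustrative:

apiVersion: tekton.dev/v1          # tekton.dev/v1beta1 on older releases
kind: Pipeline
metadata:
  name: hello-build                # illustrative name
spec:
  tasks:
    - name: say-hello
      taskSpec:
        steps:
          - name: echo
            image: registry.access.redhat.com/ubi9/ubi-minimal
            script: |
              echo "Hello from OpenShift Pipelines"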

 

Integrating the open-source project Tekton within OpenShift Pipelines allows developers to utilize robust CI/CD capabilities while ensuring a consistent and scalable approach to application delivery on the OpenShift platform. This cluster-wide operator enables teams to create CI/CD-related objects within the scope of their own projects in OpenShift, which means a single operator installation is shared by all DevOps teams that choose to use it. In our case, all of our DevOps teams are encouraged to use it to build their code and container images. What could go wrong? Nothing, right? It turns out something can!

If you have many teams creating Tekton resources on your cluster, it can put a strain on the OpenShift platform (or any Kubernetes-based platform), more specifically on etcd. Kubernetes uses etcd as its primary data store; etcd is a distributed key-value store that provides a reliable way to store data across a cluster of machines. Once we noticed the effects on the OpenShift platform, we had to devise a plan to remediate the issue and prevent it from happening again. Using the Kyverno operator within the OpenShift cluster, we decided to implement a ClusterPolicy that does the following (a sketch of such a policy follows the list):

 

  1. Whenever a Tekton Pipeline resource is created within a namespace, we mutate the namespace to add the following annotation: operator.tekton.dev/prune.schedule: "50 */6 * * *"
    1. Setting this annotation creates a resource within the “openshift-pipelines” namespace that prunes old Pipeline runs in that namespace, effectively removing older resources.
    2. DevOps teams can use other annotations to specify their own needs, such as keeping only the last two Tekton Pipeline runs or only the runs from the previous two hours.
  2. Within the same ClusterPolicy, we prevent DevOps teams from setting the annotation operator.tekton.dev/prune.skip on their namespace, regardless of its value (the `*?` wildcard in the policy).
    1. This part of the ClusterPolicy blocks the annotation that would otherwise allow a team to skip the pruning of its Tekton Pipeline resources.
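A condensed sketch of what such a ClusterPolicy can look like is shown below. This is not the exact policy from our cluster: the policy and rule names are illustrative, the Tekton API version in the match may differ per OpenShift Pipelines release, the mutate-existing-resources approach requires a recent Kyverno version with permission to update namespaces, and the skip-annotation check is expressed here with Kyverno’s negation anchor.

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: tekton-prune-guardrails        # illustrative name
spec:
  validationFailureAction: Enforce
  background: false
  rules:
    # Rule 1: when a Tekton Pipeline is created, annotate its namespace with a
    # prune schedule. The +() anchor only adds the annotation if it is absent.
    - name: add-prune-schedule
      match:
        any:
          - resources:
              kinds:
                - tekton.dev/v1beta1/Pipeline
      mutate:
        targets:
          - apiVersion: v1
            kind: Namespace
            name: "{{ request.namespace }}"
        patchStrategicMerge:
          metadata:
            annotations:
              +(operator.tekton.dev/prune.schedule): "50 */6 * * *"
    # Rule 2: reject any namespace that tries to disable pruning.
    - name: block-prune-skip
      match:
        any:
          - resources:
              kinds:
                - Namespace
      validate:
        message: "Disabling Tekton PipelineRun pruning is not allowed."
        pattern:
          metadata:
            =(annotations):
              X(operator.tekton.dev/prune.skip): "null"

Because the +() anchor only adds the prune.schedule annotation when it is not already present, teams that have configured their own retention keep their settings.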

 

In conclusion, managing the strain on the OpenShift platform’s etcd caused by multiple teams creating Tekton resources requires strategic intervention. Utilizing the Kyverno operator within the OpenShift cluster, we implemented a ClusterPolicy to automatically prune old Pipeline runs, thereby reducing the load on etcd. This policy ensures namespaces are annotated to trigger regular pruning while allowing teams to customize retention based on their needs. Additionally, we enforced a rule to prevent teams from turning off this pruning mechanism, ensuring the platform remains efficient and responsive. This approach balances team autonomy with necessary guardrails to maintain platform performance and stability. Problem solved and prevented for the future!

Self-Service for DevOps Teams

In today’s IT world, organizations strive to offer more self-service and autonomy to their users. OpenShift, Red Hat’s Kubernetes platform, facilitates this by providing users with extensive rights to manage their own resources. This leads to more efficient workflows and greater flexibility. However, how do you manage this freedom without losing control over your infrastructure? Sometimes, for example, the standard RBAC model alone is not enough. Recently, we deployed Kyverno for a client to make life easier for DevOps teams while simultaneously enforcing specific rules. Let’s have a look at a few examples.

Namespace Management

OpenShift allows users to manage various aspects of their environments themselves, such as creating namespaces in a self-service manner. By default, users can be given complete freedom or be entirely restricted in their rights. However, without proper policies, this freedom can lead to uncontrolled growth and potential security risks.

A concrete example is enforcing naming conventions for namespaces. You can set policies to ensure that all new namespaces adhere to specific naming rules, such as requiring a particular prefix. For instance, DevOps teams may only be allowed to create namespaces starting with their team identifier.
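A validation policy along the following lines could enforce such a convention; the team prefixes and the policy name here are made up for illustration:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-namespace-names     # illustrative name
spec:
  validationFailureAction: Enforce
  background: false
  rules:
    - name: require-team-prefix
      match:
        any:
          - resources:
              kinds:
                - Namespace
      validate:
        message: "Namespace names must start with your team identifier, for example team-a-*."
        pattern:
          metadata:
            # wildcard patterns separated by | act as a logical OR
            name: "team-a-* | team-b-*"

In practice you would also exclude cluster administrators and platform service accounts (for example with an exclude block), so that system namespaces are not affected.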

Another example is using a suffix in the namespace name to trigger actions automatically. Installing namespaced operators, for instance, requires privileges that shouldn’t be given to regular users. A Kyverno policy can instead install the operator automatically based on the namespace name, providing self-service capabilities to DevOps teams.

The following example installs the Keycloak operator in a namespace when the namespace name ends with -keycloak:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
 name: rhbkeycloak-namespace
 annotations:
   policies.kyverno.io/title: Allowed Namespace Names for Regular Users
   policies.kyverno.io/description: |
     This policy adds an OperatorGroup, Subscription, several Roles, several RoleBindings, and several NetworkPolicies to namespaces ending with *-keycloak
spec:
 validationFailureAction: Enforce
 background: false

Definition of the policy

rules:
 - name: keycloak-operatorgroup
   context:
   - name: namespaceprefix
     variable:
       jmesPath: split(request.object.metadata.name, '-')[0]

The keycloak-operatorgroup rule starts by extracting the namespaceprefix variable from the namespace name (the part before the first hyphen).

match:
     any:
     - resources:
         kinds:
           - Namespace
         names:
           - "*-keycloak"

The rule matches any Namespace whose name ends with -keycloak; the restriction to create events is handled by the preconditions below.

preconditions:
     all:
     - key: "{{ namespaceprefix }}"
       operator: AnyNotIn
       value:
       - openshift
       - default
     - key: "{{ request.operation }}"
        operator: Equals
        value: CREATE

The preconditions ensure that the rule does not fire for namespaces whose prefix is openshift or default, and that it only fires when the namespace is created.

generate:
     apiVersion: operators.coreos.com/v1
     kind: OperatorGroup
     name: "{{ namespaceprefix }}-keycloak"
     namespace: "{{request.object.metadata.name}}"
     data:
       spec:
         targetNamespaces:
           - "{{request.object.metadata.name}}"

Generate an OperatorGroup resource when all conditions are met.

generate:
     apiVersion: "operators.coreos.com/v1alpha1"
     kind: Subscription
     name: "{{ namespaceprefix }}-rhsso-operator"
     namespace: "{{request.object.metadata.name}}"
     data:
       spec:
         channel: "{{ kyvernoparameters.data.keycloakchannel }}"
         installPlanApproval: Automatic
         name: keycloak-operator
         source: redhat-operators
         sourceNamespace: openshift-marketplace

Each generate rule must have its own match block and context. This Subscription rule therefore repeats the same match rules and preconditions as the OperatorGroup rule above.
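For completeness, the Subscription rule’s own context and match could look like the sketch below. The kyvernoparameters variable used for the channel is assumed to come from a ConfigMap context entry; the ConfigMap name and namespace shown here are placeholders, not the actual values.

- name: keycloak-subscription
  context:
    - name: namespaceprefix
      variable:
        jmesPath: split(request.object.metadata.name, '-')[0]
    - name: kyvernoparameters
      configMap:
        name: kyverno-parameters        # placeholder ConfigMap name
        namespace: kyverno-parameters   # placeholder namespace
  match:
    any:
      - resources:
          kinds:
            - Namespace
          names:
            - "*-keycloak"
  preconditions:
    all:
      - key: "{{ namespaceprefix }}"
        operator: AnyNotIn
        value:
          - openshift
          - default
      - key: "{{ request.operation }}"
        operator: Equals
        value: CREATE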

Kyverno significantly enhances self-service capabilities within OpenShift by allowing for the enforcement of policies that maintain control and consistency. By automating tasks and enforcing naming conventions and permissions, Kyverno empowers DevOps teams to efficiently manage their environments without compromising security or governance. This balance of flexibility and control makes Kyverno an invaluable tool for any organization looking to streamline their Kubernetes operations.

How can we help?

Expert guidance can make all the difference in implementing and optimizing Kyverno in your OpenShift environment. Here’s how HCS Company and Nirmata can help you achieve your goals:

Achieving Your Goals with Nirmata and HCS Company

HCS Company specializes in Containerization and has extensive experience with OpenShift. We can help you effectively deploy, manage, and optimize your OpenShift clusters. By leveraging our expertise, you can ensure your OpenShift environment is robust, efficient, and tailored to your organizational needs. Whether you’re looking to streamline operations, enhance security, or improve resource management, HCS Company provides the strategic and technical support necessary to achieve these objectives.

Realizing Your Vision

As Kyverno experts, we can help you fully leverage Kyverno’s capabilities to enhance policy management and automation within your OpenShift or any other Kubernetes-like environment. Together with HCS, we can work closely with you to develop and implement policies that align with your specific requirements, ensuring that your infrastructure remains secure and compliant. With our support, you can maintain high flexibility for your DevOps teams while enforcing the necessary standards and controls.

Conclusion

Kyverno offers a robust solution for managing user permissions and automation within an OpenShift environment. With policies, you can regulate user freedom, enforce consistent naming conventions, and enhance the security and manageability of your cluster. Integrating Kyverno into your OpenShift workflows creates a powerful combination of flexibility and control, enabling your organization to operate efficiently and securely.

With Kyverno, teams can onboard and prepare their environments to meet their needs without requiring additional privileges. This capability allows for setting resource limits, labeling namespaces, deploying operators, and adding privileges for specific products—all while maintaining standards and control.

Are you curious about more applications of Kyverno in OpenShift or any other Kubernetes-like distro? Are you looking for expertise in the field? Feel free to reach out to us at Contact Nirmata or Contact HCS.
