Reloading Secrets and ConfigMaps with Kyverno Policies

Policy is commonly thought of as being primarily (if not solely) useful for security, blocking the “bad” while allowing the “good”. This misconception is understandable because many tools which implement “policy” are limited to those tasks and offer little beyond them. In reality, policy is far more versatile: it can be an extremely powerful tool for operations teams to automate common tasks in a declarative fashion, saving time and reducing risk.

If you’ve used Kubernetes for any length of time, chances are high you’ve consumed either a ConfigMap or a Secret in a Pod in one way or another. If not directly then certainly indirectly, possibly without your knowledge. Both are extremely common and useful sources of configuration data. Challenges arise, however, when you need to update that data while Pods are already running. While there are various small tools you can cobble together to make this happen, in this article I’ll share how you can do it more easily and reliably, define it all as policy as code, and consolidate all those tools down to one by using Kyverno.

Protecting Secrets and ConfigMaps

Secrets and ConfigMaps consumed by Pods have special status because those Pods depend on their existence. One level down, the app running within the container depends on the contents of that Secret or ConfigMap. Should the Secret or ConfigMap get deleted or altered, your app can malfunction or, worse, Pods can fail on start-up. For example, if you have a running Deployment which consumes a ConfigMap key as an environment variable and that ConfigMap is later deleted, the next rollout of that Deployment will fail because the kubelet cannot find the item, preventing Pods from starting.

You now have to think about how to protect this dependency graph. Kubernetes RBAC only goes so far before it becomes counterproductive: your cluster and Namespace operators need permissions to do their jobs, yet people still make mistakes (even when driven by automation). And while there is a nifty immutable option for both ConfigMaps and Secrets, it has serious side effects if used. Aside from the obvious fact that immutability prevents further updates, once that option is set it cannot be changed without recreating the resource.

This is where Kyverno can step in and help. Kyverno can augment Kubernetes RBAC by limiting updates and deletes on a highly granular basis. For example, even if cluster operators have broad permissions to update or delete Secrets and ConfigMaps within the cluster, Kyverno can lock down specific resources based on name, label, user name, ClusterRole, and more.
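For reference, the immutable option mentioned above is a single top-level boolean on the resource itself; this is a minimal sketch with a hypothetical name and data:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config          # hypothetical name
  namespace: platform
immutable: true             # once set, data cannot change and the field cannot be unset
data:
  log-level: "info"         # placeholder content
```

The only way to change this ConfigMap afterwards is to delete and recreate it, which is exactly the side effect described above.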


Making Secrets and ConfigMaps Available

These API resources are Namespaced, which means they exist in one and only one Namespace. A Pod cannot cross-mount a Secret or ConfigMap from an adjacent Namespace, so you need a way to make these resources available in the Namespaces where they’re needed. Duplicating them, even with a GitOps controller, creates bloat and adds complexity. And what if you forget to make one of them available in a new Namespace? See the previous section for what happens. Supplying them into existing Namespaces is yet another point of complexity. Kyverno solves this with its unique generation capabilities: define and deploy a ConfigMap or Secret once in a single Namespace, and Kyverno will clone and synchronize it not only to new Namespaces but to existing ones as well. Your Namespace administrators can even delete the copies with no problem, as Kyverno will restore the Secret or ConfigMap automatically by re-cloning it. All of this is defined in a policy, dictated by you, with no programming language required.


Reloading Pods

Now you’ve got a handle on your Secrets and ConfigMaps, which is awesome. But your containers still don’t know about the changes you’ve made to them. The final concern is how to intelligently and automatically reload those Pods so they pick up the new data. A few tools out there can do this, but they’re point solutions. Once again, Kyverno comes to the rescue, and in a declarative way: when changes are propagated to all those downstream ConfigMaps and Secrets, Kyverno can trigger a new rollout on the Pod controllers which consume them.

Putting it all together

This is the architecture we’ll put together, illustrated below.

[Figure: Overall Kyverno Secret reloading architecture.]

Let’s walk through this before diving into the policies which enable the solution.
  1. A Namespace called platform contains the “base” Secret called blogsecret (this article is written with a Secret in mind, but everything works equally well for a ConfigMap). This is a simple string value, but imagine it is, for example, a certificate or other sensitive material. We’ll use a Kyverno validate rule to ensure this Secret is protected from accidental tampering or deletion by anyone other than holders of the cluster-admin ClusterRole or the Kyverno ServiceAccount itself.
  2. This Secret will be cloned to all existing Namespaces (except system-level Namespaces like kube-* and Kyverno’s own) and to any new Namespaces created after the policy exists. Further updates to blogsecret will be immediately propagated, and any deletion of a synchronized Secret in one of those downstream Namespaces will cause Kyverno to replace it from the base.
  3. Once the changes land in the cloned Secrets, Kyverno will notice and write or update an annotation on the Pod template of only those Deployments which consume this Secret, thereby triggering a new rollout of their Pods.
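For reference, the base Secret might look like the following sketch; the key and value here are placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: blogsecret
  namespace: platform
type: Opaque
stringData:
  secret: "initial-value"   # placeholder; imagine a certificate or credential here
```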

Now let’s look at the Kyverno policies which bring it all together.

Validation

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: validate-secrets
spec:
  validationFailureAction: enforce
  background: false
  rules:
    - name: protect-blogsecret
      match:
        any:
        - resources:
            kinds:
              - Secret
            names:
              - blogsecret
            namespaces:
              - platform
      exclude:
        any:
        - clusterRoles:
          - cluster-admin
        - subjects:
          - name: kyverno
            kind: ServiceAccount
            namespace: kyverno
      preconditions:
        all:
        - key: "{{ request.operation }}"
          operator: AnyIn
          value:
          - UPDATE
          - DELETE
      validate:
        message: "This Secret is protected and may not be altered or deleted."
        deny: {}

In this policy, we tell Kyverno exactly which Secret, and in which Namespace, we want to protect. The validationFailureAction mode here is set to enforce, which causes a blocking action if anyone or anything other than the cluster-admin ClusterRole or the kyverno ServiceAccount attempts to change or delete the Secret. A disallowed attempt is prevented and the following message displayed:

$ kubectl -n platform delete secret/blogsecret
Error from server: admission webhook "validate.kyverno.svc-fail" denied the request:
resource Secret/platform/blogsecret was blocked due to the following policies
validate-secrets:
  protect-blogsecret: This Secret is protected and may not be altered or deleted.

Generation

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: generate-secret
spec:
  generateExistingOnPolicyUpdate: true
  rules:
  - name: generate-blogsecret
    match:
      any:
      - resources:
          kinds:
          - Namespace
    generate:
      kind: Secret
      apiVersion: v1
      name: blogsecret
      namespace: "{{request.object.metadata.name}}"
      synchronize: true
      clone:
        name: blogsecret
        namespace: platform

In the above policy, Kyverno is told to create and synchronize blogsecret from the platform Namespace into existing and new Namespaces. The variable {{request.object.metadata.name}} applies the rule to each Namespace by name without selecting any one specifically.
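To verify the behavior, you could check for the cloned copies and test the re-clone on deletion; team-a below is a hypothetical Namespace and output is omitted:

```shell
# List every copy of the cloned Secret across the cluster
kubectl get secret blogsecret --all-namespaces

# Delete one downstream copy; Kyverno should re-clone it from the base
kubectl -n team-a delete secret blogsecret
kubectl -n team-a get secret blogsecret
```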

Mutation

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restart-deployment-on-secret-change
spec:
  mutateExistingOnPolicyUpdate: false
  rules:
  - name: reload-deployments
    match:
      any:
      - resources:
          kinds:
          - Secret
          names:
          - blogsecret
    preconditions:
      all:
      - key: "{{request.operation}}"
        operator: Equals
        value: UPDATE
    mutate:
      targets:
        - apiVersion: apps/v1
          kind: Deployment
          namespace: "{{request.namespace}}"
      patchStrategicMerge:
        spec:
          template:
            metadata:
              annotations:
                ops.corp.com/triggerrestart: "{{request.object.metadata.resourceVersion}}"
            (spec):
              (containers):
              - (env):
                - (valueFrom):
                    (secretKeyRef):
                      (name): blogsecret

Finally, in this mutation policy, Kyverno is asked to watch for updates to blogsecret and, in the Namespace where the update occurs, mutate any Deployment that consumes blogsecret in an environment variable via a secretKeyRef reference. When all of that is true, it adds an annotation called ops.corp.com/triggerrestart to the Pod template portion of the spec containing the resourceVersion of that Secret, so there is always some unique information available to constitute a valid update to the Deployment. Incidentally, adding an annotation is also what happens when performing kubectl rollout restart, except there the key kubectl.kubernetes.io/restartedAt is added with the current time.
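To make the conditional anchors concrete, here is a hypothetical Deployment fragment that would match this rule because it consumes blogsecret via a secretKeyRef; the names, image, and key are all placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blog-app              # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blog-app
  template:
    metadata:
      labels:
        app: blog-app
    spec:
      containers:
      - name: app
        image: nginx          # placeholder image
        env:
        - name: BLOG_SECRET   # hypothetical variable name
          valueFrom:
            secretKeyRef:
              name: blogsecret   # matches the (name): blogsecret anchor in the policy
              key: secret        # placeholder key
```

Deployments in the same Namespace that do not reference blogsecret this way are left untouched, thanks to those conditional anchors in the patch.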

Demo

Rather than show a ton of step-by-step terminal output, I figured it’d be nicer to record this one. This is a real-time demo just to show I’m not trying to hide anything.

So that’s it. Hopefully this idea produced a “that’s cool!” response from you, as this combination of abilities can be used to solve all sorts of use cases beyond what most admission controllers can provide. It also perfectly highlights one real-world (and common) scenario in which policy is not just a security feature but a powerful ally for operations, used to build some fairly complex automation. Kyverno enables all of this, and more, with simple policies which can be understood and written by anyone in a matter of minutes, with no programming language required.

Thanks for reading; feedback and input are always welcome. Feel free to hit me up on Twitter, LinkedIn, or come by the very active #kyverno channel on Kubernetes Slack.

You can also learn more about Kyverno and policy management from Nirmata, or reach out to Nirmata with any specific questions you may have.
