Day-0 in IT parlance refers to the initial design, build, and deployment plan of an IT solution. Typically, this is the phase where users launch the minimum viable product (MVP) or a basic configuration to validate the design and gain some runtime experience to build upon in subsequent phases.
For Kubernetes environments, this typically includes cluster build automation, add-on management, and deployment of basic cluster services to make the environment available for initial test applications and a pilot development team. Once users have had some experience running these environments, they focus on additional aspects of cluster governance, workload security, and advanced use cases to standardize the environments for different developer use cases. These aspects are typically covered in the Day-1 and Day-2 phases of the project.
That said, Day-1/2 work quite often focuses on tactical issues, and most customer environments play catch-up until they are hit by a known vulnerability or, even worse, a security breach. This was clearly evident in our survey, where we found that 70% of the organizations we spoke to had no workload security implementation and were doing nothing proactively to govern their environments. When problems arise, the lack of governance can force drastic actions: environments are shut down or reverted to older stacks, causing a significant loss of confidence, increased costs, and reset timelines. There are plenty of Kubernetes stories going around on Reddit boards, Twitter, and discussion forums calling for solutions to these issues.
While different solutions exist to address these issues, there is ample evidence that platform teams who implement policy-based guardrails for their Kubernetes environments on Day-0 are significantly better off, avoiding resets, unexpected costs, and downtime.
Kubernetes Policy Management as a Day-0 Imperative
Policy management in Kubernetes can be implemented in the background or via admission control at the API server level, and it allows administrators to manage workload and environment configuration with a set of Kubernetes policies. A best practice is to use the admission control capability of the Kubernetes API server, as it allows administrators to enforce controls before resources are deployed in the runtime environment. This not only ensures that the workloads being deployed comply with security policies and best practices, but also acts as a checkpoint that prevents non-compliant configuration from entering runtime environments in the first place.
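As a sketch of what admission-time enforcement looks like in practice, here is a minimal Kyverno ClusterPolicy (the policy name and label key are illustrative) that blocks any Pod missing a required label at admission:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-app-label            # illustrative name
spec:
  validationFailureAction: Enforce   # reject non-compliant resources at admission
  rules:
    - name: check-app-label
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "The label `app.kubernetes.io/name` is required."
        pattern:
          metadata:
            labels:
              app.kubernetes.io/name: "?*"   # any non-empty value
```

With `validationFailureAction: Enforce`, the API server rejects the request outright; switching it to `Audit` would instead record a policy violation while still admitting the resource.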
Here are some key reasons why Kubernetes planners should include policy management in their Day-0 strategy:
- Secure cluster add-on services – To make a Kubernetes cluster operational, anything beyond the control-plane components is essentially a workload and needs pod security and best-practice compliance. This includes the CNI, DNS, ingress, and other resources critical to making the cluster operational. Pod security standards, workload security, and best practices apply not just to end-user applications but to cluster services and DevOps tool chains as well.
- Enforce best practices – As cluster components are built, many Day-0 requirements have to be addressed: label compliance, naming conventions, security contexts, resource quotas, health checks, etc. All of these can be enforced and automated using policy management.
- Generate default resources – Many Day-0 cluster-level resources may need to be generated by the platform team based on specific conditions, which can be automated with policy management, e.g., volume creation or DevOps tool setup based on certain namespace labels.
- Implement tenant and app isolation – Multi-tenancy is a key requirement for most enterprise environments, and policy management is well suited to address this use case from the get-go.
- Define granular access management – Many access requirements cannot be addressed via standard role-based access control (RBAC). This is where granular policies can be applied using policy management. Check out this example.
- Automate operations workflows – In many use cases a resource has to be modified to ensure compliance. Validation alone is not enough; a remediation workflow has to be triggered. With certain policy management approaches, resource compliance for many use cases can be automated using mutate and generate policies.
- Govern infrastructure – Kubernetes is no longer just about application management; it is increasingly used for infrastructure lifecycle management with projects like Cluster API, Crossplane, AWS Controllers for Kubernetes, and others. Policies become a must-have to provide the necessary governance in the management cluster as well as in tenant or workload clusters.
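To illustrate the mutate case above, a Kyverno policy can patch incoming resources into compliance rather than merely rejecting them. This is a sketch (the policy name is illustrative) that adds `runAsNonRoot: true` to Pods that do not already set it, using Kyverno's add-if-not-present anchor `+()`:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-run-as-non-root        # illustrative name
spec:
  rules:
    - name: set-run-as-non-root
      match:
        any:
          - resources:
              kinds:
                - Pod
      mutate:
        patchStrategicMerge:
          spec:
            securityContext:
              +(runAsNonRoot): true   # add only if the field is not already set
```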
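Similarly, for the generate case, a policy can create default resources whenever a triggering resource appears. This sketch (resource names are illustrative) creates a default-deny NetworkPolicy in every new Namespace:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-networkpolicy   # illustrative name
spec:
  rules:
    - name: default-deny
      match:
        any:
          - resources:
              kinds:
                - Namespace
      generate:
        apiVersion: networking.k8s.io/v1
        kind: NetworkPolicy
        name: default-deny
        namespace: "{{request.object.metadata.name}}"
        synchronize: true            # keep the generated resource in sync with the rule
        data:
          spec:
            podSelector: {}          # select all pods in the namespace
            policyTypes:
              - Ingress
              - Egress
```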
Prevention is better than cure
It is critical to prevent non-compliant resources from entering the Kubernetes runtime environment rather than discover what is broken when it is too late. Equally important, implementing these Kubernetes policies becomes much harder once the cluster is operational and actively used. Not to mention, the lack of these guardrails may end up encouraging bad behaviors.
Nirmata for Kubernetes Policy & Governance
Nirmata open-sourced Kyverno, a policy engine that is now a CNCF project with incubation status. Its ability to mutate and generate resources, in addition to validating them, uniquely positions it to address compliance along with remediation.
The Kyverno community maintains over 250 policies for different use cases, including many recommended for Day-0 use. You will find many that fit platform-level use cases and illustrate why policy management should be part of your Day-0 plan.
At Nirmata, we offer complete policy management for Kubernetes. Our cloud native policy management solution, powered by Kyverno, facilitates the autonomy, agility, and alignment necessary for DevSecOps teams by automating the creation, deployment, and lifecycle management of policy-based intelligent guardrails. Nirmata delivers policy insights, reports, tamper detection, alerts, and collaboration by integrating with external tools, processes, and workflows. Nirmata offers an Enterprise distribution of Kyverno and the SaaS-based Nirmata Policy Manager. A free trial of both products is available from the respective product pages. Let us know what you think about Nirmata products.