Virtualization technologies make things easier to manage. In this post, we discuss how virtualizing Kubernetes can help address the complexity of Kubernetes cluster management. We also describe available techniques and best practices for virtual Kubernetes clusters.
A Brief History of Virtualization
In the technology domain, virtualization means creating a software-defined or “virtual” form of a physical resource, e.g. compute, network, or storage. Users of the virtual resource should see no significant difference from using the actual physical resource. Virtualized resources are typically subject to restrictions on how the underlying physical resource is shared.
The most commonly used form of virtualization is server virtualization, where the physical server is divided into multiple virtual servers. Server virtualization is implemented by a software layer called a virtual machine manager (VMM) or hypervisor. There are two types of hypervisors:
- Type 1 Hypervisor: a hypervisor that runs directly on a physical server and coordinates the sharing of resources for the server. Each virtual machine (VM) will have its own OS.
- Type 2 Hypervisor: a hypervisor that runs on an operating system (the Host OS) and coordinates the sharing of resources of the server. Each VM will also have its own OS, referred to as the Guest OS.
There is another form of compute virtualization, called operating system (OS) virtualization. With this type of virtualization, the OS kernel itself allows secure sharing of resources. If this sounds familiar, it’s because what we commonly refer to as “containers” today is a form of OS virtualization.
Server virtualization technologies became mainstream in the early 2000s, enabled a giant leap forward for information technology, and made cloud computing services possible. The initial use case for server virtualization was to make it easy to run multiple types and versions of server operating systems, such as Windows or Linux, on a single physical server. This was useful for the software test and quality-assurance industry, but did not trigger broad adoption of virtualization technologies. A few years later, with VMware’s ESX Type 1 hypervisor, server consolidation became a way to drive efficiencies for enterprise IT by sharing servers across workloads, and hence reducing the number of physical servers required. And finally, VMware’s VMotion feature, which allowed the migration of running virtual servers across physical servers, became a game changer: physical servers could now be patched and updated without any downtime, and high levels of business continuity became easily achievable for IT.
Why Virtualize Kubernetes
Kubernetes has been widely declared the de facto standard for managing containerized applications. Yet most enterprises are still in the early stages of adoption. A major inhibitor to faster adoption is that Kubernetes is fairly complex to learn and manage at scale. In a KubeCon survey, 50% of respondents cited lack of expertise as a leading hurdle to wider adoption of Kubernetes.
Most enterprises have several applications that are owned by different product teams. As these applications are increasingly packaged in containers and migrated to Kubernetes, and as DevOps practices are adopted, a major challenge for enterprises is to determine who is responsible for the Kubernetes stack and how Kubernetes skills and responsibilities should be shared across the enterprise. It makes sense to have a small centralized team that builds expertise in Kubernetes and allows the rest of the organization to focus on delivering business value. Another survey shows that an increasing share of deployments (from 17.01% in 2018 to 35.5% in 2019) is driven by centralized IT Operations teams.
One approach that enterprises take is to put existing processes around new technologies to make adoption easier. In fact, traditional platform architectures tried to hide containers and container orchestration from developers, and provided familiar abstractions. Similarly, enterprises adopting Kubernetes may put it behind a CI/CD pipeline and not provide developers access to Kubernetes.
While this may be a reasonable way to start, this approach cripples the value proposition of Kubernetes, which offers rich cloud native abstractions for developers.
Managed Kubernetes services make it easy to spin up Kubernetes control planes. This makes it tempting to simply assign each team their own cluster, or even use a “one cluster per app” model (if this sounds familiar, our industry did go through a “one VM per app” phase).
There are major problems with the “one cluster per team / app” approach:
- Securing and managing Kubernetes is now more difficult. The Kubernetes control plane is not that difficult to spin up; most of the heavy lifting is in configuring and securing Kubernetes once the control plane is up, and in managing workload configurations.
- Resource utilization is highly inefficient as there is no opportunity to share the same resources across a diverse set of workloads. For public clouds, the “one cluster per team / app” model directly leads to higher costs.
- Clusters become the new “pets” (see “pets vs cattle”), eventually leading to cluster sprawl, where it becomes impossible to govern and manage deployments.
The solution is to leverage virtualization for proper separation of concerns across developers and cluster operators. Using virtualization, the Ops team can focus on managing core components and services shared across applications. A development team can have self-service access to a virtual cluster, which is a secure slice of a physical cluster.
The Kubernetes Architecture
Kubernetes automates the management of containerized applications.
Large system architectures, such as Kubernetes, often use the concept of architectural layers or “planes” to provide separation of concerns. The Kubernetes control plane consists of services that manage placement and scheduling, and provide an API for configuration and monitoring of all resources.
Application workloads typically run on worker nodes. Conceptually, the worker nodes can be thought of as the “data plane” of Kubernetes. Worker nodes also run a few Kubernetes services responsible for managing local state and resources. All communication across services happens via the API server, making the system loosely coupled and composable.
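This split is easy to see on a running cluster: on kubeadm-style clusters the control plane components run as pods in the kube-system namespace, while worker nodes appear as Node resources. A minimal sketch, assuming kubectl is already configured for the cluster (managed services typically hide the control plane pods):

```sh
# Control plane components (API server, scheduler, controller manager) on a kubeadm-style cluster.
kubectl -n kube-system get pods -l tier=control-plane

# Worker nodes -- conceptually, the Kubernetes "data plane".
kubectl get nodes -o wide
```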
Kubernetes Virtualization Techniques
Much like how server virtualization includes different types of virtualization, virtualizing Kubernetes can be accomplished at different layers of the system. The possible approaches are to virtualize the control plane, virtualize the data plane or virtualize both planes.
Virtualizing the Kubernetes Data Plane
Here is the definition of a Kubernetes namespace:
Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called namespaces.
Easy, eh? Well, not quite. For a namespace to be used as a virtual cluster, proper configuration of several additional Kubernetes resources is required. The Kubernetes objects that need to be properly configured for each namespace are shown and discussed below:
- Access Controls: Kubernetes access controls allow granular permission sets to be mapped to users and teams. This is essential for sharing clusters, and ideally is integrated with a central system for managing users, groups and roles.
- Pod Security Policies: this resource allows administrators to configure exactly what pods (the Kubernetes unit of deployment and management) are allowed to do. In a shared system, it is critical that pods are not allowed to run as root and have only limited access to other shared resources such as host disks and ports, as well as the API server.
- Network Policies: Network policies are Kubernetes firewall rules that allow control over inbound and outbound traffic from pods. By default, Kubernetes allows all pods within a cluster to communicate with each other. This is obviously undesirable in a shared cluster, and hence it is important to configure default network policies for each namespace and then allow users to add firewall rules for their applications.
- Limits and quotas: Kubernetes allows granular configuration of resources. For example, each pod can specify how much CPU and memory it requires. It is also possible to limit the total usage for a workload and for a namespace. This is required in shared environments to prevent one workload from consuming a majority of the resources and starving other workloads.
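Putting these together, a minimal sketch of the per-namespace configuration could look like the following (the team-a namespace, the team-a-developers group, and the quota values are illustrative assumptions; pod security settings are omitted for brevity):

```sh
# Create an isolated namespace for a team (names are illustrative).
kubectl create namespace team-a

# Apply baseline guardrails: RBAC, a default-deny NetworkPolicy, and a ResourceQuota.
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-edit
  namespace: team-a
subjects:
  - kind: Group
    name: team-a-developers        # assumes an identity provider supplies this group
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                       # built-in role: manage workloads, no RBAC changes
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: team-a
spec:
  podSelector: {}                  # selects all pods in the namespace
  policyTypes:
    - Ingress                      # deny all inbound traffic unless explicitly allowed
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
    pods: "50"
EOF
```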
Virtualizing the Kubernetes Control Plane
Virtualizing the Kubernetes control plane means that users can get their own virtual instance of the control plane components. Having separate copies of the API server and other Kubernetes control plane components allows users to run separate versions and fully isolated configurations.
For example, different users can even have namespaces with the same name. This approach also allows different users to run different versions of custom resource definitions (CRDs). CRDs are becoming increasingly important for Kubernetes, as new frameworks, such as Istio, are implemented using CRDs. This model is also great for service providers that offer managed Kubernetes services or want to dedicate one or more clusters to each tenant. One option service providers may use for hard multi-tenancy is to require separate worker nodes per tenant.
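For instance, with per-tenant control planes each team simply points kubectl at its own API endpoint. A hypothetical sketch, assuming kubeconfig contexts named tenant-a and tenant-b that target two separate virtual control planes:

```sh
# Each context targets a separate virtual control plane (context names are illustrative).
kubectl --context tenant-a create namespace payments
kubectl --context tenant-b create namespace payments   # no conflict: different API servers

# Each tenant can also install and list its own CRD versions independently.
kubectl --context tenant-a get crds
kubectl --context tenant-b get crds
```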
Current State and Activities
The Kubernetes multi-tenancy working group is chartered with exploring functionality related to the secure sharing of a cluster. A great place to catch up on the latest developments is its bi-weekly meetings. The working group is looking at ways to simplify the provisioning and management of virtual clusters, ranging from managing namespaces with mechanisms like CRDs and nested namespaces, to control plane virtualization. The group is also creating security profiles for different levels of multi-tenancy.
A proposal for Kubernetes control plane virtualization was provided by the team at Alibaba (here is a related blog post). In their design, a single “Super Master” coordinates scheduling and resource management across users, and worker nodes can be shared. The Alibaba Virtual Cluster proposal also uses namespaces and related controls underneath for isolation at the data plane level. This means the proposal provides both control plane and data plane multi-tenancy.
What Nirmata Provides
Nirmata is an infrastructure-agnostic management plane for Kubernetes. The platform has roles for Kubernetes cluster operators, as well as development teams that share clusters. With Nirmata, operators can easily manage shared clusters and enable self-service access to development teams for creating and managing virtual clusters.
Underneath, Nirmata utilizes Kubernetes and Kyverno (an open source Kubernetes policy engine) to provision and manage namespaces and shared services for each team.
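As a rough sketch of this kind of automation (illustrative, not Nirmata's actual policy set), a Kyverno ClusterPolicy can generate a default-deny NetworkPolicy in every newly created namespace:

```sh
# A minimal sketch of a Kyverno "generate" policy: whenever a new Namespace is created,
# Kyverno generates a default-deny ingress NetworkPolicy in it.
kubectl apply -f - <<EOF
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-network-policy
spec:
  rules:
    - name: default-deny-ingress
      match:
        resources:
          kinds:
            - Namespace
      generate:
        apiVersion: networking.k8s.io/v1
        kind: NetworkPolicy
        name: default-deny-ingress
        namespace: "{{request.object.metadata.name}}"
        synchronize: true
        data:
          spec:
            podSelector: {}
            policyTypes:
              - Ingress
EOF
```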
Here is a CLI example that shows how a developer can request a virtual cluster by specifying the required memory and CPU for the virtual cluster:
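The command below is only a hypothetical sketch of that interaction; the command name and flags are placeholders, not the actual Nirmata CLI syntax:

```sh
# Hypothetical illustration only -- not the actual Nirmata CLI syntax.
# A developer requests a virtual cluster with CPU and memory bounds:
nirmata create virtual-cluster dev-team-a --cpu 8 --memory 16Gi
```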
Now the standard Kubernetes command line (kubectl), or any other compatible tool, can be used to manage workloads:
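For example, assuming the virtual cluster maps to a team-a namespace like the one sketched earlier, everyday workload management is plain kubectl (names are illustrative):

```sh
# Deploy and inspect a workload inside the team's virtual cluster.
kubectl -n team-a create deployment nginx --image=nginx
kubectl -n team-a get deployments,pods
kubectl -n team-a logs deployment/nginx
```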
And access to shared resources, like the kube-system namespace, is prevented:
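For example, with the RBAC bindings sketched earlier, an attempt to read kube-system is rejected by the API server (the user name and output below are illustrative):

```sh
kubectl -n kube-system get pods
# Error from server (Forbidden): pods is forbidden: User "dev@team-a.example"
# cannot list resource "pods" in API group "" in the namespace "kube-system"
```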
To enable this experience in Nirmata, the IT Operations team sets up policies for automated cluster lifecycle management of the physical and virtual clusters.
Conclusion
Virtualization technologies have transformed compute, storage and networking and led to cloud computing. By making it easy to provision and manage resources, virtualization technologies enable the mass adoption of complex technology.
Kubernetes provides a powerful set of tools but can be complex to configure and operate at scale. This complexity is one of the largest factors inhibiting enterprise adoption of Kubernetes.
Virtualizing Kubernetes allows separation of concerns across developers and operators, without crippling developer access to the underlying Kubernetes constructs that developers care about. Virtual Kubernetes clusters give developers the freedom and flexibility they require, while allowing operators to add the necessary guardrails for production workloads and cross-team access.
Try Nirmata for free at https://try.nirmata.io