Use Virtual Clusters to Tame Sprawl in Kubernetes

Virtual clusters are the answer to a tricky balancing act: developers need the freedom to self-serve, but a single developer's error should not create a security or operational risk that compromises the entire system.

Virtualization in general is a way to apply software-defined segmentation, isolation and management to virtual representations of physical resources, whether those resources are storage, machines or clusters. Among its other benefits, virtualization allows developers to self-serve safely. Many technologies have followed a similar trajectory; think of servers and virtual machines. Now, as more organizations adopt Kubernetes and start to struggle with enforcing best practices, as well as with the management and resource utilization problems that come with cluster sprawl, they are applying the same virtualization techniques to clusters.

How virtual clusters work

Virtual clusters improve resource utilization by packing multiple virtual clusters onto a single physical cluster. Although the terms are often used interchangeably, virtual clusters and namespaces are not identical. Namespaces are the foundation for virtual clusters, but virtual clusters allow (and require) each cluster to be configured independently, so access controls, security policies, network policies and resource limits can differ for each virtual cluster. The advantage for users is that virtual clusters mostly appear and behave just like physical clusters.
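To make that concrete, here is a minimal sketch, using the Python Kubernetes client, of the namespace-level building blocks a virtual cluster sits on: a dedicated namespace plus per-tenant resource limits. The namespace name, label and quota values are illustrative, not taken from any specific product.

    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()

    # Each virtual cluster is backed by a namespace on the host cluster.
    ns = client.V1Namespace(
        metadata=client.V1ObjectMeta(
            name="team-a-vcluster",                # hypothetical name
            labels={"virtual-cluster": "team-a"},  # hypothetical label convention
        )
    )
    core.create_namespace(ns)

    # Each tenant gets its own resource limits, configured independently.
    quota = client.V1ResourceQuota(
        metadata=client.V1ObjectMeta(name="team-a-quota", namespace="team-a-vcluster"),
        spec=client.V1ResourceQuotaSpec(
            hard={"requests.cpu": "8", "requests.memory": "16Gi", "pods": "50"}
        ),
    )
    core.create_namespaced_resource_quota("team-a-vcluster", quota)

Network policies and role bindings scoped to the same namespace round out the per-tenant configuration.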

Using virtual clusters can also make it easier to ensure that organizational governance policies are enforced uniformly. Adding the virtualization layer makes it easier to use automation tools, including a policy engine like Kyverno, to audit, mutate and generate resource configurations, and to ensure that best practices for security settings, resource utilization and monitoring are in place.
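As a hedged example, the sketch below registers a Kyverno ClusterPolicy in audit mode that flags pods without CPU and memory limits, applied through the Python client's generic CustomObjectsApi. The policy and rule names are illustrative.

    from kubernetes import client, config

    config.load_kube_config()

    policy = {
        "apiVersion": "kyverno.io/v1",
        "kind": "ClusterPolicy",
        "metadata": {"name": "require-resource-limits"},
        "spec": {
            "validationFailureAction": "Audit",  # switch to "Enforce" to block violations
            "rules": [{
                "name": "check-container-limits",
                "match": {"any": [{"resources": {"kinds": ["Pod"]}}]},
                "validate": {
                    "message": "CPU and memory limits are required.",
                    "pattern": {
                        "spec": {
                            "containers": [
                                {"resources": {"limits": {"cpu": "?*", "memory": "?*"}}}
                            ]
                        }
                    },
                },
            }],
        },
    }

    # Register the policy on the host cluster so it applies to every virtual cluster.
    client.CustomObjectsApi().create_cluster_custom_object(
        group="kyverno.io", version="v1", plural="clusterpolicies", body=policy
    )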

Tackling cluster sprawl

As Kubernetes scales across the enterprise and individual applications scale up, cluster sprawl becomes a real threat to both Day 2 management and the budget. Organizations face a Goldilocks problem when it comes to cluster sizing, and thus cluster sprawl: clusters that are too large (more than a hundred nodes) present networking challenges, while too many very small clusters become hard to manage and don't use resources efficiently.

Virtual clusters are a way to tackle cluster sprawl by putting more virtual clusters into each physical cluster. In addition, the fact that the clusters are software-defined makes it easier to use automation tools to audit and monitor the clusters. 
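Because each virtual cluster is just data in the Kubernetes API, a simple audit loop can walk through all of them. This sketch assumes virtual-cluster namespaces carry a "virtual-cluster" label, which is an illustrative convention rather than a standard one.

    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()

    # List namespaces that back virtual clusters and report their quota usage.
    for ns in core.list_namespace(label_selector="virtual-cluster").items:
        for quota in core.list_namespaced_resource_quota(ns.metadata.name).items:
            print(ns.metadata.name, quota.metadata.name,
                  "hard:", quota.status.hard, "used:", quota.status.used)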

How this looks to developers

With virtual clusters, the end experience for users is essentially the same as if they were spinning up a physical cluster. Both the Kubernetes data plane and control plane are virtualized, so the interface that developers see is unchanged. 
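One way to see this is that the same client code runs unchanged whether the kubeconfig context points at a physical cluster or a virtual one; only the context name differs. The context names below are hypothetical.

    from kubernetes import client, config

    for context in ("physical-cluster", "team-a-virtual-cluster"):
        api = client.CoreV1Api(
            api_client=config.new_client_from_config(context=context)
        )
        # Identical API call in both cases; developers see the usual Kubernetes interface.
        for pod in api.list_pod_for_all_namespaces().items:
            print(context, pod.metadata.namespace, pod.metadata.name)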

Using virtual clusters is a way to balance the ability of developers to self-serve with the need for centralized control over best practices. Operators can easily manage both the physical and virtual clusters centrally, without becoming a bottleneck for development teams who want to spin up new (virtual) clusters whenever needed. 

Virtual clusters and Day 2

Because virtual clusters are software-defined, it's easier to audit, track and manage them. Many Day 2 problems come down to managing complexity. When clusters are set up correctly from the beginning, are neither too large nor too small, and have appropriate monitoring capabilities baked in, Day 2 becomes much easier.

Are virtual clusters challenging?

One of the most persistent misconceptions about virtual clusters is that they are very difficult to set up and manage. A related misconception is that virtual clusters are insecure because they don't provide the same level of isolation you would see between, for example, virtual machines.

First of all, software multitenancy is not particularly challenging to manage once it's automated. Kubernetes itself has a fairly steep learning curve; if the organization is already successful with Kubernetes, the additional skills needed to set up virtual clusters are relatively easy to acquire, especially with a platform like Nirmata.

Secondly, virtual clusters do provide workload isolation and can be secure enough for most enterprises. Organizations that use virtual clusters correctly, with a policy engine enforcing adherence to best practices, can ensure that every virtual cluster has the right configuration for resource quotas, network policies, access controls and namespaces. In Kubernetes, isolation is achieved through configuration best practices. As long as those best practices are followed, virtual clusters remain isolated from each other even when they run on the same physical cluster.
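For example, a default-deny ingress NetworkPolicy scoped to a virtual cluster's namespace is one of those configuration steps: it blocks traffic from workloads in other virtual clusters on the same physical cluster unless explicitly allowed. This is a minimal sketch; the namespace name is illustrative.

    from kubernetes import client, config

    config.load_kube_config()
    netv1 = client.NetworkingV1Api()

    deny_all_ingress = client.V1NetworkPolicy(
        metadata=client.V1ObjectMeta(
            name="default-deny-ingress", namespace="team-a-vcluster"
        ),
        spec=client.V1NetworkPolicySpec(
            pod_selector=client.V1LabelSelector(),  # empty selector = every pod in the namespace
            policy_types=["Ingress"],               # no ingress rules listed, so all ingress is denied
        ),
    )
    netv1.create_namespaced_network_policy("team-a-vcluster", deny_all_ingress)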

With Nirmata, platform and IT operations teams can make it easy for developers to request specific resources and services and have a virtual cluster automatically spun up for them. Cluster administrators retain visibility into resource consumption and can adjust capacity limits and other configurations if a cluster is approaching its limits or isn't performing according to its SLAs. See how it works here.
