Scaling, especially autoscaling, is one of Kubernetes’ biggest attractions. Using Kubernetes makes it easier to scale applications up and down, either because of fluctuations in usage patterns or because the application is adding users.
Just because Kubernetes simplifies scaling doesn’t mean organizations can stop thinking about it. There are different ways to scale in Kubernetes, as well as best practices to follow. Smart organizations will evaluate the pros and cons of the different scaling strategies and decide how best to manage scaling given their technical and organizational priorities. Kubernetes can handle the actual scaling automatically, but humans need to determine the strategy and tell Kubernetes what to do.
What does ‘scaling’ mean?
There are actually two ways to think about scaling. The first is probably what most readers picture: scaling an application up or down as usage fluctuates. But that’s not the only thing to think about. Scaling Kubernetes can also mean expanding Kubernetes’ footprint across the enterprise and increasing the number of applications running on it. As organizations think about scaling strategies, they should consider both types of scaling, because the strategies involved are related. An organization with a single but very popular application running in Kubernetes will have different challenges from an organization with many lesser-used applications, but both are questions of scale.
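For the first kind of scaling, Kubernetes can do much of the work on its own. As a minimal sketch (it assumes a Deployment named `web` already exists and a metrics server is installed), a HorizontalPodAutoscaler like this one grows and shrinks the workload as CPU demand changes:

```yaml
# Minimal autoscaling sketch. The "web" Deployment is a hypothetical
# example; the replica bounds and 70% CPU target are illustrative.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

With this in place, Kubernetes adds replicas when average CPU utilization climbs above 70% and removes them as load subsides, always staying within the declared bounds.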
What to centralize?
One of the most important questions in any scaling strategy is how to centralize the visibility, governance, and security controls that really should be handled by a team of specialists while still giving developers as much freedom as possible to self-serve. Where to draw the line isn’t always obvious, but the choice has ramifications both for how resilient the organization’s tech stack is and for how smoothly Day 2 operations go.
Choosing the right guardrails to put in place — and ensuring that Kubernetes isn’t used differently by each team in the organization — is essential to the project’s long-term success in the enterprise.
Best practices
There’s no one-size-fits-all best strategy for managing scaling. However, there are best practices that all organizations should follow, regardless of how they choose to approach scaling.
Automate everything
The beauty of Kubernetes is its declarative configuration management: humans declare the desired outcome, and the system works to match the current state to the desired state. Automate everything that can be automated, and let highly trained team members focus on the technical decisions a computer can’t make.
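As a minimal sketch of that declarative model (the `web` name and the nginx image are placeholders), the Deployment below declares three replicas and their resource requests; the controller then works continuously to make reality match:

```yaml
# Declarative sketch: we state the desired outcome (3 replicas of a
# hypothetical "web" app); the Deployment controller reconciles toward it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.27
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
```

Delete one of the three pods and the controller immediately creates a replacement, with no human in the loop: that is the automation to build on.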
Keep systems of record
There should be a centralized auditing and record-keeping process that enables organizations to manage version control, service catalogs, and other organizational records. Without some central record-keeping, Kubernetes deployments rapidly become impossible to track.
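GitOps is one common way to get that system of record: a Git repository becomes the audited, version-controlled source of truth, and a controller keeps clusters in sync with it. As an illustrative sketch using Argo CD (one tool among several; the repository URL, paths, and names here are hypothetical):

```yaml
# GitOps sketch: the Git repo is the system of record; Argo CD keeps
# the cluster in sync with it. Repo URL, path, and names are hypothetical.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: payments
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/platform-config.git
    targetRevision: main
    path: services/payments
  destination:
    server: https://kubernetes.default.svc
    namespace: payments
  syncPolicy:
    automated:
      prune: true     # remove resources deleted from the repo
      selfHeal: true  # revert out-of-band changes to match the repo
```

Every deployment change then leaves a commit behind it, which doubles as the audit trail and the rollback mechanism.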
Avoid cluster sprawl
Cluster sprawl happens when the number of clusters becomes unmanageable, usually because organizations are using one cluster per application or per team and neither sharing resources appropriately nor centralizing enough of the management. This leads to problems with security and management as well as resource inefficiency.
Finding the right balance
As Kubernetes scales, there are also technical trade-offs to consider. There are practical limits to how large a single cluster should grow: Kubernetes officially supports up to 5,000 nodes per cluster, but in practice many organizations start seeing network pressure and scheduling delays at a few hundred nodes. On the other hand, you wouldn’t want to break up a 300-node cluster into 100 three-node clusters, either, because that would lead to unmanageable cluster sprawl and high costs.
Likewise, greater central control can come at the cost of developer agility and the ability to self-serve. On the other hand, when each team has its own bespoke Kubernetes stack, it’s impossible to effectively manage Day 2 operations.
In our experience, it’s important to enable shared clusters, so that each cluster can be securely used by multiple teams and applications. In practice, this involves creating virtual clusters: using namespaces and other Kubernetes isolation constructs on top of shared physical clusters. This lets developers continue self-serving, spinning up their own virtual cluster whenever needed, while still giving operators the centralized control needed to ensure Day 2 operations go smoothly.
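Implementations of virtual clusters vary, but they all rest on namespace-level isolation and quotas. As a simplified sketch (the team name and limits are hypothetical), a per-team namespace with a ResourceQuota keeps one tenant from starving the others on the shared cluster:

```yaml
# Multi-tenancy sketch: a hypothetical namespace for "team-a" plus a
# quota capping the aggregate resources its workloads may request.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
  labels:
    tenant: team-a
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
    pods: "50"
```

A full virtual-cluster setup layers RBAC, network policies, and its own API endpoint on top, but the quota is what makes sharing a physical cluster safe.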
Automating configuration management is crucial to an organizational scaling strategy based on virtual clusters. Developers see a virtualized Kubernetes endpoint for each virtual cluster, which could otherwise let each of them choose different versions and configurations per virtual cluster. When configurations are automated with a policy engine, organizations can set guardrails on the available configurations, ensuring consistency among all the virtual clusters running on the same physical cluster.
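As a sketch of such a guardrail (the policy name and the specific check are illustrative), a Kyverno ClusterPolicy can reject any Pod whose containers omit CPU and memory requests, keeping workloads consistent across every virtual cluster on the shared hardware:

```yaml
# Guardrail sketch: reject Pods without CPU/memory requests. The policy
# name and the specific requirement are illustrative choices.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-resource-requests
spec:
  validationFailureAction: Enforce
  rules:
    - name: check-requests
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "CPU and memory requests are required."
        pattern:
          spec:
            containers:
              - resources:
                  requests:
                    cpu: "?*"     # any non-empty value
                    memory: "?*"
```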
Using a policy engine like Kyverno that not only offers pass/fail guardrails but also automatically generates and mutates configurations is key to automating whatever is possible while ensuring the system’s operational needs don’t become unmanageable as it scales; a generate rule, for example, can put safe defaults in place the moment a resource is created (see the sketch below). Nirmata’s unified management plane also helps organizations tame the operational complexity that compounds with scale by providing a single dashboard to manage the entire system and get visibility into it. This allows organizations to get centralized operational control without sacrificing the ability of developers to self-serve. Try it out to see how it works.
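As a sketch of that generate capability (adapted from a common default-deny pattern; names are illustrative), the policy below creates a default-deny NetworkPolicy in every new namespace, so safe defaults appear automatically rather than depending on each team to remember them:

```yaml
# Generation sketch: whenever a Namespace is created, Kyverno generates
# a default-deny NetworkPolicy inside it. Names are illustrative.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-networkpolicy
spec:
  rules:
    - name: default-deny
      match:
        any:
          - resources:
              kinds:
                - Namespace
      generate:
        apiVersion: networking.k8s.io/v1
        kind: NetworkPolicy
        name: default-deny
        namespace: "{{request.object.metadata.name}}"
        synchronize: true   # keep the generated resource from drifting
        data:
          spec:
            podSelector: {}
            policyTypes:
              - Ingress
              - Egress
```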