Amazon Elastic Kubernetes Service (EKS) is a popular managed service for building cloud-native applications, thanks to its feature-rich offering and seamless integration with other AWS services. However, Kubernetes itself is considered insecure by default, prioritizing functionality over security. Although AWS provides several recommendations for securing Amazon EKS clusters, it’s crucial to enforce best practices and prevent the misconfigurations that leave clusters open to attack. To help with this, AWS has released an official EKS Security Best Practices guide, available at https://aws.github.io/aws-eks-best-practices/security/docs/.
How can policies help?
Policy engines, such as Kyverno, allow developers to define and enforce custom policies for their clusters. When it comes to securing Amazon EKS clusters, Kyverno policies can be an effective tool to ensure compliance with best practices and industry standards. In this post, we’ll explore how Kyverno policies can be used to secure your Amazon EKS clusters by writing simple policies for the guidelines laid out by AWS. So, whether you’re new to Amazon EKS or looking to enhance your existing security practices, read on to learn more about using Kyverno policies to improve Amazon EKS security.
Setup & Execution
Let me take you through the setup. I have an EKS Cluster running on Kubernetes version 1.25, which you can set up yourself by following the AWS official guide. In addition, I’ve installed Nirmata Enterprise for Kyverno v1.9.1 and the Kyverno AWS adapter v0.3.0. If you’d like to test out the enterprise version of Kyverno, you can request a free trial here or install Kyverno as an Amazon EKS add-on using the AWS Marketplace. Alternatively, you can also install the open-source Kyverno version v1.9.1.
The Kyverno AWS adapter, an open-source project from Nirmata, securely fetches the cluster configuration information using AWS APIs and stores it as a Custom Resource called AWSAdapterConfig. This allows us to write Kyverno policies just as easily as we would write for any other Kubernetes resource. You can find instructions on how to install the adapter here.
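As a rough illustration, the Custom Resource looks something like the sketch below. The API group/version, field names, and values shown are assumptions pieced together from the examples later in this post; consult the adapter’s CRD for the authoritative schema.

```yaml
# Illustrative sketch only: apiVersion and field names are assumptions,
# not the authoritative schema of the kyverno-aws-adapter CRD.
apiVersion: security.nirmata.io/v1alpha1
kind: AWSAdapterConfig
metadata:
  name: kyverno-aws-adapter
  namespace: nirmata-aws-adapter
spec:
  name: demo-blog        # EKS cluster name
  region: us-west-1
status:
  lastPollInfo:
    status: success      # shown as LAST POLLED STATUS in kubectl output
  ecrRepositories:       # populated by the adapter from AWS APIs
    - repositoryName: my-app          # hypothetical repository
      imageTagMutable: false
  eksCluster:
    kubernetesVersion: "1.25"
```

Because the cloud configuration is surfaced as an ordinary Kubernetes resource, Kyverno can match and validate it like any other kind.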
Nirmata offers a curated policy pack, which includes Amazon EKS best practices. You can find the entire policy pack here, but for this exercise, you will only need the EKS best practices.
Now let’s take a look at the pods and the resources –
# kyverno-aws-adapter pod
$ kubectl get pods -n nirmata-aws-adapter
NAME                                   READY   STATUS    RESTARTS   AGE
kyverno-aws-adapter-6d88f6dcdd-k6bc5   1/1     Running   0          45s
# kyverno-aws-adapter Custom Resource
$ kubectl get awsacfg -n nirmata-aws-adapter kyverno-aws-adapter
NAME                  CLUSTER NAME   REGION      CLUSTER STATUS   KUBERNETES VERSION   LAST POLLED STATUS
kyverno-aws-adapter   demo-blog      us-west-1   ACTIVE           1.25                 success
To view the complete status details, which contain the cloud configuration information, fetch the full YAML using –
$ kubectl get awsacfg -n nirmata-aws-adapter kyverno-aws-adapter -o yaml
# kyverno pod
$ kubectl get pods -n kyverno
NAME                       READY   STATUS    RESTARTS   AGE
kyverno-7c444878f7-zmmd2   1/1     Running   0          2m30s
# EKS Best Practices policies
$ kubectl get cpol
NAME                               BACKGROUND   VALIDATE ACTION   READY
add-networkpolicy                  true         audit             true
add-networkpolicy-dns              true         audit             true
add-ns-quota                       true         audit             true
check-amazon-inspector             true         audit             true
check-ami-deprecation-time         true         audit             true
check-cluster-endpoint             true         audit             true
check-cluster-logging              true         audit             true
check-cluster-remote-access        true         audit             true
check-cluster-rolearn              true         audit             true
check-cluster-secrets-encryption   true         audit             true
check-cluster-tags                 true         audit             true
check-immutable-tags-ecr           true         audit             true
check-instance-profile-access      true         audit             true
check-public-dns                   true         audit             true
check-vpc-flow-logs                true         audit             true
require-pod-probes                 true         audit             true
require-requests-limits            true         audit             true
restrict-image-registries          true         audit             true
...
Nirmata Policies in Action
Let’s take an example to understand how Nirmata policies and the Kyverno AWS Adapter work together to achieve the best practices guidelines set forth by AWS.
Consider the recommendation to use immutable tags with ECR. The guideline states: “Immutable tags force you to update the image tag on each push to the image repository. This can thwart an attacker from overwriting an image with a malicious version without changing the image’s tags. Additionally, it gives you a way to easily and uniquely identify an image.”
Without automation, it is up to the security admin to track every repository in ECR and ensure its tags are immutable per the guideline, a manual and error-prone task at scale.
Let’s look at a Kyverno policy for checking immutable tags with ECR. The resource kind we are matching on is the `AWSAdapterConfig`, which we get from the Kyverno AWS Adapter. The policy has a single rule that checks whether every ECR repository has the `imageTagMutable` field set to true.
Inspect the AWSAdapterConfig resource (for example, with `-o yaml` as shown earlier) to see the configuration information the AWS Adapter makes available for writing policies. In this example, we use the `status.ecrRepositories[]` list. Note: if you do not have any ECR repositories configured for your account, this field may be absent. That means the list is empty, and the cluster is automatically compliant with this guideline.
$ cat check-immutable-tags-ecr.yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: check-immutable-tags-ecr
  annotations:
    policies.kyverno.io/title: Check Immutable Tags for ECR
    policies.kyverno.io/category: EKS Best Practices
    policies.kyverno.io/severity: medium
    policies.kyverno.io/subject: Cluster
    policies.kyverno.io/description: >-
      Immutable tags are not enabled on all ECR repositories.
spec:
  validationFailureAction: audit
  background: true
  rules:
    - name: check-immutable-tag
      match:
        any:
          - resources:
              kinds:
                - AWSAdapterConfig
      validate:
        message: "The `imageTagMutable` field must be set to true on all ECR repositories."
        foreach:
          - list: "request.object.status.ecrRepositories[]"
            pattern:
              imageTagMutable: true
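Conceptually, the foreach rule above is equivalent to the small Python sketch below: iterate over `status.ecrRepositories[]` and flag any repository whose `imageTagMutable` field is not true. The status excerpt and repository names here are hypothetical.

```python
# Conceptual equivalent of the Kyverno foreach/pattern check:
# every entry in status.ecrRepositories[] must have imageTagMutable == true.
status = {
    "ecrRepositories": [  # hypothetical excerpt of the AWSAdapterConfig status
        {"repositoryName": "app-frontend", "imageTagMutable": True},
        {"repositoryName": "app-backend", "imageTagMutable": False},
    ]
}

def violations(status: dict) -> list[str]:
    """Return the names of repositories that still allow tag overwrites."""
    return [
        repo["repositoryName"]
        for repo in status.get("ecrRepositories", [])  # absent field == empty list == compliant
        if not repo.get("imageTagMutable", False)
    ]

print(violations(status))  # ['app-backend']
```

An empty `ecrRepositories` list produces no violations, which matches the note above: no repositories means automatic compliance.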
If there are any violations, we can view them with the kubectl CLI by inspecting the PolicyReport.
$ kubectl get polr cpol-check-immutable-tags-ecr -n nirmata-aws-adapter
NAME                            PASS   FAIL   WARN   ERROR   SKIP   AGE
cpol-check-immutable-tags-ecr   0      1      0      0       0      4d3h
With just a few lines of YAML, we can enforce the guideline recommended by AWS. Many of the other recommendations can also be expressed as code, which relieves the security team from manually validating every configuration and makes the approach easy to scale.
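For instance, the secrets-encryption recommendation could follow the same shape as the policy above. This is a minimal sketch only: the `secretsEncrypted` field path is a hypothetical stand-in, and the actual path exposed in your AWSAdapterConfig status may differ, so check the resource’s YAML before relying on it.

```yaml
# Sketch only: the status field path below is a hypothetical assumption,
# not the adapter's documented schema.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: check-cluster-secrets-encryption
  annotations:
    policies.kyverno.io/category: EKS Best Practices
spec:
  validationFailureAction: audit
  background: true
  rules:
    - name: check-secrets-encryption
      match:
        any:
          - resources:
              kinds:
                - AWSAdapterConfig
      validate:
        message: "Envelope encryption of Kubernetes Secrets should be enabled on the cluster."
        pattern:
          status:
            eksCluster:
              secretsEncrypted: true   # hypothetical field name
```

The pattern is always the same: match on `AWSAdapterConfig`, then validate whichever slice of the cloud configuration the guideline covers.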
Nirmata Policy Manager for centralized visibility and management
Nirmata Policy Manager (NPM) provides a cloud-native solution for centralized policy management and automation of Kubernetes clusters, allowing organizations to enforce compliance, governance, and security policies across their entire infrastructure. By implementing NPM, application teams can focus on writing business logic without the burden of ensuring adherence to the organization’s best practices and standards.
At Nirmata, we provide an extensive mapping of compliance standards to policies. We have codified the list of EKS Best Practices into Kyverno policies which can be found here. The Kyverno AWS Adapter captures all the AWS cloud configuration information in the `AWSAdapterConfig` Custom Resource, which makes it easy to write YAML-based Kyverno policies.
Using the kubectl CLI to view Policy Reports is limited to a single cluster at a time. NPM provides centralized visibility of Policy Reports across all your Kubernetes clusters; you can easily apply filters based on clusters, namespaces, and PolicyReport status.
NPM also provides a Compliance Score for each cluster, which can be used to demonstrate compliance to auditors, stakeholders, and other parties. In addition, it offers detailed reporting on compliance status: you can view all the controls listed in a compliance standard and the policy execution results for each individual control.
Conclusion
In this blog post, we have demonstrated how policies can be utilized to codify Amazon EKS security best practices. However, the policies we implemented are only the starting point. At Nirmata, we continually revise our policy list to relieve you of the burden of imposing guardrails and enable you to concentrate on your business applications.
We also witnessed the capabilities of the Nirmata Policy Manager, which offers a comprehensive view of all the Kubernetes security and governance activities for your cluster fleet through a single pane of glass. Furthermore, NPM is set to introduce several exciting features that will further streamline your workflows and enhance cluster governance and visibility.
Bonus Use Case
You can also watch this use case demo on YouTube.
EKS nodes can be provisioned through custom-built AMIs or AMIs provided by your platform vendor. If you attempt to provision a deprecated or deregistered AMI, AWS returns an error. However, what happens when nodes have already been provisioned, and the AMIs become stale beyond their deprecation timeline? It is crucial to ensure that nodes are not running on deprecated AMIs as there would be no official support or updates available. Fortunately, detecting this through Kyverno policy is as easy as writing any other validation policy.
Let’s look at the below policy –
$ cat check-ami-deprecation-time.yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: check-ami-deprecation-time
  annotations:
    policies.kyverno.io/title: Check AMI Deprecation Time
    policies.kyverno.io/category: EKS Best Practices
    policies.kyverno.io/severity: medium
    policies.kyverno.io/subject: Cluster
    policies.kyverno.io/description: >-
      AMIs past their deprecation time
spec:
  validationFailureAction: audit
  background: true
  rules:
    - name: check-ami-deprecation-time
      match:
        any:
          - resources:
              kinds:
                - AWSAdapterConfig
      validate:
        message: "This rule audits for AMIs that are past their deprecation time"
        foreach:
          - list: "request.object.status.eksCluster.compute.nodeGroups[].amazonMachineImage"
            deny:
              conditions:
                any:
                  - key: "{{ time_before('{{ element.deprecationTime }}', '{{ time_now_utc() }}') }}"
                    operator: Equals
                    value: true
AMI information is captured in the AWSAdapterConfig under `status.eksCluster.compute.nodeGroups[].amazonMachineImage`, including each AMI’s deprecation time. In the validate rule, if the deprecation time lies in the past, the AMI is considered stale and the rule reports a failure, which then shows up in the policy report for this policy and rule.
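The `time_before` comparison at the heart of the rule can be sketched in a few lines of Python: an AMI fails the check exactly when its deprecation timestamp is earlier than the current UTC time. The timestamps below are illustrative values, not real AMI metadata.

```python
from datetime import datetime, timezone

def ami_is_deprecated(deprecation_time: str) -> bool:
    """Sketch of time_before(deprecationTime, time_now_utc()):
    True when the AMI's deprecation time is already in the past."""
    # fromisoformat() in older Pythons needs "+00:00" instead of the "Z" suffix
    deprecated_at = datetime.fromisoformat(deprecation_time.replace("Z", "+00:00"))
    return deprecated_at < datetime.now(timezone.utc)

print(ami_is_deprecated("2023-01-01T00:00:00Z"))  # True  -> rule reports a failure
print(ami_is_deprecated("2099-01-01T00:00:00Z"))  # False -> AMI still supported
```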
You can check the output by fetching the PolicyReport Custom Resource.
$ kubectl get polr cpol-check-ami-deprecation-time -n nirmata-aws-adapter
NAME                              PASS   FAIL   WARN   ERROR   SKIP   AGE
cpol-check-ami-deprecation-time   1      0      0      0       0      2m26s
You can also view this information on the Nirmata Policy Manager.
You can now easily audit for AMIs that have exceeded their deprecation time and ensure that your EKS Clusters won’t encounter any issues, even if the nodes remain operational for an extended period! Let us know if you have more such use cases or how you would use Kyverno policies to achieve operational efficiency. Get in touch!
Additional Information
Kyverno 1.9 introduces an array of new features that offer interesting use cases. Check out the release blog here. To get a sneak peek of Kyverno 1.10 features, check out this CNCF webinar presented by the lead maintainers of Kyverno – Chip & Jim.
In addition, Nirmata offers a robust version of Kyverno that includes several benefits, such as the Operator for lifecycle management and Adapters for seamless integration with other cloud services and tools. Sign up for a free trial of Nirmata Policy Manager.