With the community Ingress NGINX controller reaching its retirement this month, many of us are facing a looming migration deadline.
This guide focuses on moving to the F5 NGINX Open Source Ingress Controller: the free, open-source version maintained by the NGINX engineering team at F5, not the commercial NGINX Plus product. It offers a production-grade replacement without licensing fees, but there is a catch: it operates quite differently from the community version you’re currently running.
Why This Isn’t a Simple Swap
Before you run a single kubectl command, it’s important to understand one thing:
This is not an image replacement exercise.
Even though both controllers use NGINX under the hood, their control planes are entirely different implementations. That means:
- Annotation formats differ
- Feature behavior is not always identical
- Some configurations don’t map directly
If you assume compatibility, you’ll break things.
Phase 1: Assessment and Discovery
The first step is not installation; it’s visibility.
Every Ingress resource in your cluster needs to be reviewed because the annotation syntax changes between the two controllers.
Start with a full audit:
- Enumerate all Ingress resources across namespaces
- Extract the metadata.annotations section
- Identify which annotations are actively used
Your goal here is simple: understand what needs to change before making changes.
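As a starting point, here is a one-liner sketch (assuming kubectl access to the cluster and jq installed) that prints every Ingress along with its annotation keys:

```shell
# List every Ingress (namespace/name) with its annotation keys, one per line.
# Assumes kubectl is configured for the target cluster and jq is installed.
kubectl get ingress -A -o json | jq -r '
  .items[]
  | "\(.metadata.namespace)/\(.metadata.name): \(.metadata.annotations // {} | keys | join(", "))"'
```

Grepping this output for `nginx.ingress.kubernetes.io/` gives you a quick first pass at the migration surface area before any policy tooling is involved.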
This is where automation helps.
Instead of manually inspecting resources, you can use Kyverno policies to scan your cluster and surface deprecated resources with a list of annotations. The policy reports give you a clear, centralized view of what needs attention; no guesswork.
```yaml
# Kyverno ClusterPolicy: audit-only; every matched resource produces a policy report
# whose message includes the full metadata.annotations map (blocking is disabled).
# Apply:   kubectl apply -f list-annotations-audit-policy.yaml
# Inspect: kubectl get policyreport -A
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: list-metadata-annotations-audit
spec:
  validationFailureAction: audit
  background: true
  rules:
    - name: report-metadata-annotations
      match:
        any:
          - resources:
              kinds:
                - Ingress
      validate:
        message: "metadata.annotations={{ to_string(object_from_lists(items(request.object.metadata.annotations || `{}`, 'key', 'value')[?!(regex_match('^kubectl\\.kubernetes\\.io/last-applied-configuration$', @.key))].key, items(request.object.metadata.annotations || `{}`, 'key', 'value')[?!(regex_match('^kubectl\\.kubernetes\\.io/last-applied-configuration$', @.key))].value)) }}"
        deny:
          conditions:
            any:
              - key: "{{ request.object.metadata.name }}"
                operator: NotEquals
                value: ""
```
Once the policy is applied, review the message field in the generated policy reports. This is where you’ll get a clear view of all annotations and the retired resources in your cluster that are using them.
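To pull just those messages out of the generated reports, a jq sketch like the following can help (field names follow the wgpolicyk8s.io PolicyReport schema; `polr` is the short name for PolicyReport):

```shell
# Print the message of every result produced by the audit policy above.
# Assumes kubectl access to the cluster and jq installed.
kubectl get polr -A -o json | jq -r '
  .items[].results[]?
  | select(.policy == "list-metadata-annotations-audit")
  | .message'
```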
With Nirmata AI assistance, you can now scan all your clusters offline and generate a report to review all the annotations in your environment. The platform aggregates these policy reports into a centralized dashboard, giving you a real-time view of annotations used by retired ingress resources across clusters. You can also export these reports for auditing and tracking purposes, which is especially useful when coordinating changes across teams.
Phase 2: Annotation Mapping (Not Always 1-to-1)
If you decide to continue using standard Kubernetes Ingress resources, you’ll need to update annotation prefixes.
- Community controller: nginx.ingress.kubernetes.io/
- F5 controller: nginx.org/
For example:
nginx.ingress.kubernetes.io/client-body-buffer-size → nginx.org/client-body-buffer-size
Sounds simple, but it’s not always that clean.
Before jumping into mapping, it’s important to understand what each annotation actually does. Avoid the temptation to map everything 1:1 without validation. Since the annotation syntax and underlying behavior differ between controllers, a direct translation can lead to unexpected behavior or broken configurations.
Working from your Phase 1 audit, identify which annotations are actually in use. Common examples include rewrite targets, SSL redirects, proxy buffer sizing, and client body size limits. Each of these needs to be evaluated carefully before being mapped to its equivalent in the F5 ecosystem.
Some annotations:
- Don’t exist in the F5 version
- Behave differently
- Require structural changes instead of direct mapping
This is where most manual migrations become tedious and error-prone.
Once you’ve identified and finalized the list of annotations that need to be migrated, this process can be automated using a Kyverno mutation policy. This allows you to consistently update retired or legacy annotations across all Ingress resources in your cluster.
Start by deploying the policy in a single namespace to validate the behavior. Once you’re confident with the results, you can gradually expand the scope to the entire cluster and apply the mutation more broadly.
This approach helps you avoid manual edits altogether and move toward a more automated, policy-driven migration path.
```yaml
apiVersion: policies.kyverno.io/v1
kind: MutatingPolicy
metadata:
  annotations:
    policies.kyverno.io/category: Other
    policies.kyverno.io/description: Converts all nginx.ingress.kubernetes.io/ annotation keys to nginx.org/ prefix for Ingress resources
    policies.kyverno.io/severity: medium
    policies.kyverno.io/subject: Ingress
    policies.kyverno.io/title: Convert nginx.ingress.kubernetes.io annotations to nginx.org
  name: convert-nginx-annotations
spec:
  matchConditions:
    - expression: has(object.metadata.annotations) && object.metadata.annotations.exists(key, key.startsWith('nginx.ingress.kubernetes.io/'))
      name: has-nginx-ingress-annotations
  matchConstraints:
    resourceRules:
      - apiGroups:
          - networking.k8s.io
        apiVersions:
          - v1
        operations:
          - CREATE
          - UPDATE
        resources:
          - ingresses
  mutations:
    - jsonPatch:
        expression: |
          variables.nginxIngressKeys.map(oldKey, [
            JSONPatch{
              op: "add",
              path: "/metadata/annotations/" + jsonpatch.escapeKey(oldKey.replace('nginx.ingress.kubernetes.io/', 'nginx.org/')),
              value: object.metadata.annotations[oldKey]
            },
            JSONPatch{
              op: "remove",
              path: "/metadata/annotations/" + jsonpatch.escapeKey(oldKey)
            }
          ]).flatten()
      patchType: JSONPatch
  variables:
    - expression: "has(object.metadata.annotations) ? object.metadata.annotations.filter(key, key.startsWith('nginx.ingress.kubernetes.io/')).map(key, key) : []"
      name: nginxIngressKeys
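To make the effect concrete, here is a hypothetical Ingress before admission (all names are illustrative):

```yaml
# Before admission (hypothetical example):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
  annotations:
    nginx.ingress.kubernetes.io/client-body-buffer-size: "16k"
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
# After admission, the policy rewrites the annotation key in place:
#   nginx.org/client-body-buffer-size: "16k"
```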
Consider Moving to VirtualServer CRDs
If you’re already making changes, it’s worth asking: should you stick with standard Ingress resources at all?
The F5 ecosystem provides the VirtualServer CRD, which is designed to replace annotation-heavy configurations with a more structured, Kubernetes-native approach.
Benefits:
- Eliminates “annotation soup”
- Improves readability and maintainability
- Aligns better with advanced routing use cases
If you’re doing a large-scale migration, this path is often cleaner in the long run.
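As a rough illustration, a minimal VirtualServer might look like the following; hostnames, secret, and service names are placeholders, and the exact fields available are documented in the controller’s VirtualServer reference:

```yaml
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: app
spec:
  host: app.example.com
  tls:
    secret: app-tls              # TLS config moves into spec instead of annotations
  upstreams:
    - name: backend
      service: backend-svc
      port: 80
      client-max-body-size: 8m   # replaces the client body size annotation
  routes:
    - path: /
      action:
        pass: backend
```

Settings that were scattered across annotations become typed fields the API server can validate, which is a large part of the maintainability win.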
Using Kyverno to Reduce Migration Risk
Migration at scale is where Kyverno really shines.
Instead of relying on manual updates or one-off scripts, you can:
- Detect deprecated resources and annotations
- Mutate resources to align with new formats
- Enforce consistency across teams and namespaces
This turns what is usually a one-time painful migration into a repeatable, policy-driven process.
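The enforcement point can itself be a policy. Once migration is complete, a sketch like the following (the JMESPath expression may need adjusting for your environment) rejects any new or updated Ingress that still carries the retired prefix:

```yaml
# Sketch: reject admission of Ingresses that still use community-controller annotations.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: block-legacy-nginx-annotations
spec:
  validationFailureAction: enforce
  rules:
    - name: deny-community-annotation-prefix
      match:
        any:
          - resources:
              kinds:
                - Ingress
      validate:
        message: "Use nginx.org/ annotations; the nginx.ingress.kubernetes.io/ prefix is retired."
        deny:
          conditions:
            any:
              - key: "{{ length(keys(request.object.metadata.annotations || `{}`)[?starts_with(@, 'nginx.ingress.kubernetes.io/')]) }}"
                operator: GreaterThan
                value: 0
```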
Phase 3: DNS Cutover (Don’t Rush This Part)
Once your configuration is updated and the new controller is running, the final step is traffic migration.
A few practical tips:
- Test the new LoadBalancer IP using curl with the correct host header
- Verify routing behavior before exposing it publicly
- Check controller logs for errors or unexpected rewrites
- Confirm SSL termination is working as expected
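The first two checks can be scripted with curl’s --resolve flag, which pins the hostname to the new LoadBalancer IP without touching public DNS (the IP and hostname below are placeholders):

```shell
# Send a request to the new controller while public DNS still points at the old one.
# --resolve forces app.example.com to resolve to the new LoadBalancer IP, so both
# SNI and the Host header are correct for TLS and routing verification.
NEW_LB_IP=203.0.113.10   # placeholder: replace with your new controller's external IP
curl -sv -o /dev/null -w 'HTTP %{http_code}\n' \
  --resolve "app.example.com:443:${NEW_LB_IP}" \
  https://app.example.com/
```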
When you’re ready:
- Gradually shift traffic (start small, around 5%)
- Monitor error rates and latency
- Watch for spikes in 4xx/5xx responses
If anything looks off, roll back immediately. DNS-based cutovers are easy to revert; use that to your advantage.
Final Thoughts
Migrating from the community NGINX Ingress Controller is not difficult, but it is easy to underestimate.
The biggest mistake teams make is treating it like a drop-in replacement. It’s not.
By taking a policy-driven approach with Kyverno, you:
- Gain visibility before making changes
- Reduce manual effort
- Lower the risk of production issues
And most importantly, you make the migration repeatable across all your clusters.
