From Policy Engine to AI-Native Platform: Introducing Cloud Agents for Infrastructure Governance

2 April 2026


PRODUCT LAUNCH 

Nirmata’s new Cloud Agents give platform engineers a one-click way to run deterministic, LLM-powered diagnostics directly on their clusters — no scripts, no setup, no surprises.

When we launched Nirmata, the goal was straightforward: give teams a better way to govern Kubernetes at scale. Kyverno has become the CNCF standard for Kubernetes policy enforcement. Nirmata Control Hub has become the enterprise control plane layered on top. And for the last few years, that has been the story — policy-as-code, applied at the cluster level, governed centrally.

Then something changed. AI started showing up not just in the applications engineers were deploying, but in the tools they used to build and operate infrastructure itself. And we started asking a harder question: what does infrastructure governance actually look like in an AI-native world?

Today, we’re sharing our answer: Cloud Agents — a new capability in Nirmata Control Hub that brings deterministic, AI-powered infrastructure analysis directly to your clusters, with a single click.

A journey in three chapters

We’ve been thinking carefully about AI agents — not just as a category of software, but as a spectrum of trust and autonomy. In a recent post, we laid out a practical taxonomy:

Chapter 1: Personal Agents

Chat assistants on your device, acting on your behalf. High autonomy, creative, human in the loop (HITL).

Chapter 2: Service Agents

Production workers with their own identity. Low autonomy, constrained, reliable. LLM used only where needed.

Chapter 3 (now): Cloud Agents

Agents-as-a-service. Run across your clusters on our platform. Deterministic workflows, AI reasoning on top.

Cloud Agents sit in the most constrained — and most trustworthy — tier of this taxonomy. They run on our infrastructure, use their own identity, have restricted tool access, and follow highly deterministic workflow graphs. The LLM is the analyst, not the operator. It reasons over data collected by the workflow. It never touches your cluster with a write operation.

This is a deliberate architectural choice. Creativity is a liability in production. The reliability and repeatability you expect from Kubernetes operators and policy engines — that’s the standard Cloud Agents are held to.

What Cloud Agents do

Cloud Agents catalog — six specialized agents, each pre-built for a specific infrastructure governance task.

 

We are launching with six agents, each targeting a problem that platform engineers spend real hours on every week:

Cost Analyzer

Identifies over-provisioned resources and idle workloads. Quantifies wasted CPU and memory. Recommends right-sizing actions with estimated savings.

Workload Troubleshooter

Diagnoses CrashLoopBackOff, OOMKilled, Pending pods, and node pressure using read-only API queries. Produces prioritized root-cause reports.

RBAC Blast Radius

Maps the full access scope of every ServiceAccount. Surfaces cluster-admin grants, wildcard permissions, privilege escalation paths, and MITRE ATT&CK mappings.

Policy Recommender

Scans live workloads and generates Kyverno policies tailored to your environment — security hardening, resource governance, and best practices.

Compliance Auditor

Runs a compliance scan against a selected framework. Maps violations to controls. Produces a pass/fail summary with remediation guidance.

Remediator

Analyzes Kyverno violations and generates LLM-powered YAML fixes. Reports original vs. remediated resources. No changes applied to the cluster.
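To make the RBAC Blast Radius idea concrete, here is a standalone sketch of the kind of wildcard-permission check such an analysis might run. The rule dictionaries mirror Kubernetes RBAC PolicyRule fields, but `risky_rules` and the sample data are hypothetical, not Nirmata's implementation:

```python
# Flag RBAC rules that grant '*' on verbs or resources -- the kind of
# grant that expands a ServiceAccount's blast radius cluster-wide.
def risky_rules(rules):
    """Return the subset of rules containing wildcard verbs or resources."""
    return [r for r in rules
            if "*" in r.get("verbs", []) or "*" in r.get("resources", [])]

# Sample rules shaped like Kubernetes RBAC PolicyRule objects.
sample = [
    {"apiGroups": ["*"], "resources": ["*"], "verbs": ["*"]},       # cluster-admin-like
    {"apiGroups": [""], "resources": ["pods"], "verbs": ["get", "list"]},
]
print(len(risky_rules(sample)))  # 1
```

A real analyzer would walk every ClusterRoleBinding to the roles it references, but the core test per rule is this simple.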

 

Under the hood: deterministic workflows, not freestyle agents

What distinguishes Cloud Agents from a general-purpose AI assistant querying your cluster? The architecture.

Workload Troubleshooter execution DAG — parallel data collection steps feed into LLM-powered analysis and report generation.

 

Each agent runs as a structured DAG — a directed acyclic graph of workflow steps. Data collection happens in parallel across namespaces, nodes, and workloads. The results are aggregated and passed to an LLM only at the analysis step, where natural language reasoning adds genuine value. Final reports are generated from structured findings, not freeform generation.
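The shape of such a pipeline can be sketched in a few lines of Python: a toy fan-out/fan-in DAG, not Nirmata's engine. Every function name and data value below is invented for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

# Stubbed read-only collectors standing in for Kubernetes API queries
# (list pods, list nodes, list events).
def collect_pods():
    return {"pods": ["api-7f9", "worker-2c1"]}

def collect_nodes():
    return {"nodes": ["node-a", "node-b"]}

def collect_events():
    return {"events": ["OOMKilled: worker-2c1"]}

def analyze(findings):
    # The one step where an LLM would reason over the aggregated data;
    # stubbed here as a deterministic summary.
    return f"{len(findings['events'])} event(s) across {len(findings['pods'])} pods"

def run_agent():
    # Fan out: collection steps execute in parallel.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda f: f(), (collect_pods, collect_nodes, collect_events)))
    # Fan in: merge structured findings, then run the single analysis step.
    findings = {k: v for r in results for k, v in r.items()}
    return analyze(findings)

print(run_agent())  # 1 event(s) across 2 pods
```

The point of the structure is that only `analyze` would ever touch a model; everything before it is plain, repeatable data collection.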

The foundation beneath every Cloud Agent is Nirmata’s AI-native workflow engine — our secret sauce. It is purpose-built for infrastructure governance workloads: horizontally scalable, customizable with CEL (Common Expression Language) for policy-driven control flow, and designed with built-in observability and governance from day one. Every workflow step is logged, every decision is auditable, and every execution graph is visible in Nirmata Control Hub. This isn’t a general-purpose orchestration framework bolted onto an LLM. It is a governance-first runtime where AI reasoning is one constrained, auditable step in a deterministic pipeline.
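As a rough illustration of what CEL-driven control flow looks like in a workflow definition — the field names and schema below are hypothetical, not Nirmata's actual format; only the `when` expression is real CEL:

```yaml
# Illustrative workflow step schema (not Nirmata's actual one).
steps:
  - name: collect-pods
    action: k8s.list        # read-only data collection
    resource: pods
  - name: analyze
    action: llm.analyze
    # CEL guard: run the analysis step only if collection found
    # workloads with no resource limits set.
    when: "size(steps['collect-pods'].output.filter(p, !has(p.resources.limits))) > 0"
```

Expressing the guard in CEL keeps the branching logic declarative and auditable rather than buried in model prompts.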

This architecture matters for three specific reasons:

  • Repeatability. Run the same agent twice and you get the same analytical framework, not a different answer depending on the model’s mood.
  • Cost efficiency. LLM calls happen at the edges of the workflow — analysis and synthesis — not on every data fetch. Token usage stays bounded and predictable.
  • Safety. The agent has no write access. No kubectl exec. No mutations. It runs in a restricted, sandboxed mode. It collects, analyzes, and reports. Your cluster state is never at risk.

A real example: Cost Analyzer in action

Cost Analyzer report on a live EKS cluster — 85% CPU waste, 88% memory waste, and concrete remediation guidance generated in under 30 seconds.

This EKS cluster exhibits significant over-provisioning with 85% CPU waste and 88% memory waste. The cluster utilizes only 14.9% of CPU capacity and 11.5% of memory capacity, indicating substantial cost optimization opportunities.

 

This report — identifying 7 out of 16 pods running as best-effort workloads with no resource requests, 7 pods lacking resource limits, and an estimated 75–85% infrastructure cost waste — was generated in 24 seconds. No custom scripts. No analyst time. One click.
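The waste percentages are simple arithmetic over requested versus used capacity. A sketch, with inputs chosen to roughly match the report above rather than taken from it:

```python
def waste_pct(requested_cores: float, used_cores: float) -> float:
    """Percentage of requested CPU capacity left idle."""
    return round(100 * (1 - used_cores / requested_cores), 1)

# ~14.9% utilization of requested capacity implies ~85% waste.
print(waste_pct(requested_cores=10.0, used_cores=1.49))  # 85.1
```

The agent's value is not the formula, but collecting accurate per-pod request and usage figures across the whole cluster before applying it.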

That is the compound value proposition: the agent collects data faster than a human can, reasons across the entire cluster at once, and produces an immediately actionable report.

Scheduling and operational integration

Cost Analyzer schedule configuration — daily at 12AM UTC, with cron expression, cluster selection, and optional namespace scoping.

Cloud Agents are not just for on-demand investigation. Every agent can be scheduled on any cron expression — daily cost analysis, weekly compliance audits, on-push policy recommendations. The Agent Runs view gives you a full audit trail: who triggered each run, which cluster, what trigger type, how long it took.
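For intuition on how a cron expression like the daily 12AM UTC schedule resolves to a next run, here is a toy matcher supporting only plain numbers and `*` (no ranges, steps, or the Sunday-vs-Monday day-of-week convention) — a sketch, not a full cron implementation:

```python
from datetime import datetime, timedelta, timezone

def next_run(cron: str, after: datetime) -> datetime:
    """Find the next time matching a 5-field cron expression.

    Minimal matcher: each field is either '*' or a single integer.
    """
    minute, hour, dom, month, dow = cron.split()

    def match(field, value):
        return field == "*" or int(field) == value

    # Advance minute by minute until all five fields match.
    t = after.replace(second=0, microsecond=0) + timedelta(minutes=1)
    while not (match(minute, t.minute) and match(hour, t.hour)
               and match(dom, t.day) and match(month, t.month)
               and match(dow, t.weekday())):
        t += timedelta(minutes=1)
    return t

# "0 0 * * *" = daily at 12AM UTC, as in the schedule shown above.
start = datetime(2026, 4, 2, 15, 30, tzinfo=timezone.utc)
print(next_run("0 0 * * *", start))  # 2026-04-03 00:00:00+00:00
```

Any standard cron expression works the same way in the schedule configuration: weekly audits ("0 6 * * 1"-style), nightly cost runs, and so on.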

Agent Runs view — complete history of manual and scheduled runs across all clusters, with status, trigger, and duration at a glance.

 

This transforms agents from interactive tools into persistent governance signals. Your compliance posture is not something you check before an audit — it is something you measure continuously, with findings automatically available to engineering leadership.

The platform beneath the agents

Cloud Agents are built on the same foundation that powers Nirmata Control Hub’s policy enforcement and compliance management — Kyverno’s policy engine, the go-nctl governance library, and our multi-cluster control plane. This means the reports agents produce can be directly linked to existing policies, violations, and remediation workflows already in NCH.

The Cost Analyzer doesn’t just tell you a pod is over-provisioned — it can surface the Kyverno policy gap that allowed it. The Compliance Auditor maps to the same framework controls your team is already tracking. The Remediator generates YAML fixes aligned with policies you own and maintain.

Agents amplify the governance infrastructure you have already built, rather than creating a parallel system you have to maintain separately.

What comes next

This launch is the first chapter. We are building toward a world where every cloud infrastructure decision — resource sizing, RBAC scope, compliance posture, workload health — has an agent that can analyze it continuously, explain it in plain language, and surface it to the right person at the right time.

If you are a platform engineer running Kubernetes at scale, we built this for you. If you are an engineering leader who needs visibility into cost, compliance, and risk without assembling a team of analysts, this is your on-ramp.

Try Cloud Agents today

Available now in all tiers of the Nirmata Platform. Free trial included — no credit card required.

Start free trial   |   Request a demo

 

 

 

 
