AIBOM Attestation and Kyverno at Admission Control
The problem with discovering agents after they’re already running
A developer adds an agent to a microservice. It has access to a database tool, a code-execution tool, and calls Claude. Nobody in security, compliance, or platform engineering knows it exists — until it causes an incident.
This is shadow AI: unauthorized or uncontrolled AI usage that is invisible to the teams responsible for governing it. It is the AI equivalent of shadow IT, and it is happening at scale across every organization adopting LLMs. The root cause is a gap in the governance stack. Traditional tools — SBOMs, CSPM, vulnerability scanners — were built for deterministic software. They do not understand agent frameworks, tool declarations, model identifiers, or the relationships between them.
The standard response is discovery: scan what’s running, map the tools, build a dashboard. But discovery after deployment has already lost the race. By the time you’ve mapped an agent’s tools and model, it has already handled requests, accumulated permissions, and potentially taken irreversible actions. Discovery is useful for understanding. It is not a control.
The question worth asking is different: what was this agent declared to be, and was that declaration verified before it ever ran?
That shift — from discovery (what’s running?) to attestation (what was this agent declared to be, and was that declaration verified before it ran?) — is what this post is about. I’ll show how to wire three components together so that every agent Pod admitted to a Kubernetes cluster carries a verified, policy-enforced capability declaration. No attestation means no admission. An unapproved framework means no admission. An undeclared tool means no admission. The gate runs at admission time, not after.

What an AIBOM is, and why it’s not an SBOM
A Software Bill of Materials captures package dependencies — what libraries an artifact was built with. That’s useful for vulnerability management. It tells you nothing about what an AI agent is wired to do.
An AI Bill of Materials captures something different: the AI-specific building blocks of an agent. Framework (LangChain, Pydantic AI, CrewAI, OpenAI Agents SDK, Mastra, and dozens more). Declared tools — what MCP servers or function tools the agent calls. Model name and provider. Memory and retriever components. The relationships showing which agents reach which tools.
We built an AIBOM scanner to do exactly this — across every language where agents are being written today. The scanner performs deep static analysis on Python, TypeScript, Go, Java, Rust, and C#, extracting fully qualified framework symbols and producing structured AIBOM JSON with typed relationships: USES_TOOL, USES_LLM, USES_MEMORY.
Here is a TypeScript agent using the Anthropic SDK directly — no framework abstraction. The scanner detects it at the SDK call level:
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic();

const tools: Anthropic.Tool[] = [
  {
    name: "web_search",
    description: "Search the web for current information",
    input_schema: {
      type: "object" as const,
      properties: {
        query: {
          type: "string",
          description: "The search query",
        },
      },
      required: ["query"],
    },
  },
  {
    name: "sql_lookup",
    description: "Query the internal analytics database",
    input_schema: {
      type: "object" as const,
      properties: {
        query: {
          type: "string",
          description: "SQL SELECT statement to execute",
        },
      },
      required: ["query"],
    },
  },
];

async function handleToolCall(
  name: string,
  input: Record<string, string>
): Promise<string> {
  if (name === "web_search") {
    // Replace with your actual search integration
    return `Search results for: ${input.query}`;
  }
  if (name === "sql_lookup") {
    // Replace with your actual database client
    return `Query results for: ${input.query}`;
  }
  throw new Error(`Unknown tool: ${name}`);
}

export async function runResearchAgent(prompt: string): Promise<string> {
  const messages: Anthropic.MessageParam[] = [
    { role: "user", content: prompt },
  ];

  while (true) {
    const response = await client.messages.create({
      model: "claude-3-5-sonnet-20241022",
      max_tokens: 4096,
      tools,
      messages,
    });

    if (response.stop_reason === "end_turn") {
      const textBlock = response.content.find((b) => b.type === "text");
      return textBlock && textBlock.type === "text" ? textBlock.text : "";
    }

    if (response.stop_reason === "tool_use") {
      const assistantMessage: Anthropic.MessageParam = {
        role: "assistant",
        content: response.content,
      };
      messages.push(assistantMessage);

      const toolResults: Anthropic.ToolResultBlockParam[] = [];
      for (const block of response.content) {
        if (block.type === "tool_use") {
          const result = await handleToolCall(
            block.name,
            block.input as Record<string, string>
          );
          toolResults.push({
            type: "tool_result",
            tool_use_id: block.id,
            content: result,
          });
        }
      }
      messages.push({ role: "user", content: toolResults });
    }
  }
}

// Example usage
if (require.main === module) {
  runResearchAgent("What are the latest trends in AI governance?")
    .then(console.log)
    .catch(console.error);
}
Source: TypeScript agent using @anthropic-ai/sdk with two declared tools
Running the scanner against this file produces:
{
  "bomFormat": "AIBOM",
  "specVersion": "1.0",
  "serialNumber": "urn:uuid:996fe0f9-35eb-467f-8ac8-727d9ca73577",
  "version": 1,
  "metadata": {
    "timestamp": "2026-04-05T00:47:53Z",
    "tools": [
      {
        "name": "aibom-scanner",
        "version": "0.1.0"
      }
    ],
    "component": {
      "name": "./kyverno-aibom-reference",
      "source": {
        "path": "/kyverno-aibom-reference",
        "commit": "0a2cc10",
        "branch": "main",
        "remote": "https://github.com/nirmata/kyverno-aibom-reference.git"
      }
    }
  },
  "components": [
    {
      "bom-ref": "28292d-agent-1",
      "type": "ml-model",
      "category": "agent",
      "name": "Anthropic",
      "framework": "anthropic_sdk",
      "file_path": "src/research-agent.ts",
      "line_number": 3,
      "confidence": 0.9,
      "properties": [
        {
          "name": "language",
          "value": "typescript"
        }
      ],
      "risk_category": "limited-risk",
      "nist_function": "MAP",
      "risk_score": 1
    }
  ],
  "relationships": [],
  "workflows": [],
  "total_components": 1,
  "total_relationships": 0,
  "total_workflows": 0
}
scanner output — bomFormat: AIBOM, bom-ref, file_path, category as discriminator
Two things worth noting. The scanner extracts the actual model ID (“claude-3-5-sonnet-20241022”) from the client.messages.create() arguments — not just the class name. Tool descriptions come from the description field in the tool definition array, making the AIBOM directly readable by auditors and LLMs. The risk_category and nist_function fields are auto-inferred from tool capabilities, giving you EU AI Act Article 11 and NIST AI RMF alignment out of the box.
The key insight: static analysis of the source at build time captures the agent’s declared capability set — what it was built to do. This is the attestable baseline.
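Because the AIBOM is plain JSON, the same constraints that admission policies enforce can also be checked locally before an image is ever built. The sketch below is a hypothetical pre-flight validator — the `validateAibom` helper and the approved/prohibited lists are illustrative, not part of nctl — but the component shape (`category` as the discriminator) matches the scanner output above:

```typescript
// Hypothetical pre-flight check over scanner output. The approved lists
// here are examples; real lists would come from your governance policy.
interface AibomComponent {
  category: string; // AIBOM discriminator: "agent" | "tool" | "model"
  name: string;
  framework?: string;
}

interface Aibom {
  components: AibomComponent[];
}

const APPROVED_FRAMEWORKS = ["anthropic_sdk", "pydantic_ai", "langchain"];
const APPROVED_MODELS = ["claude-3-5-sonnet-20241022", "gpt-4o"];

export function validateAibom(aibom: Aibom): string[] {
  const violations: string[] = [];
  for (const c of aibom.components) {
    // Agents must use an approved framework.
    if (c.category === "agent" && c.framework && !APPROVED_FRAMEWORKS.includes(c.framework)) {
      violations.push(`unapproved framework: ${c.framework}`);
    }
    // Filesystem and shell_* tools are prohibited outright.
    if (c.category === "tool" && (c.name === "filesystem" || c.name.startsWith("shell_"))) {
      violations.push(`prohibited tool: ${c.name}`);
    }
    // Models must be on the approved list.
    if (c.category === "model" && !APPROVED_MODELS.includes(c.name)) {
      violations.push(`unapproved model: ${c.name}`);
    }
  }
  return violations;
}

// Example: an AIBOM declaring a shell tool produces one violation.
const sample: Aibom = {
  components: [
    { category: "agent", name: "Anthropic", framework: "anthropic_sdk" },
    { category: "tool", name: "shell_exec" },
    { category: "model", name: "claude-3-5-sonnet-20241022" },
  ],
};
console.log(validateAibom(sample)); // one entry: prohibited tool: shell_exec
```

Running the same rules in CI and at admission means a developer sees the violation at build time, long before Kyverno would reject the Pod.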
Attesting the AIBOM to the image digest
Once you have the AIBOM JSON, you attach it to the container image as a signed OCI attestation using cosign. The attestation is bound to the image digest — tamper the image and the attestation no longer verifies. We use a Nirmata-namespaced predicate type (https://nirmata.com/aibom/v1) so Kyverno policies can specifically target Nirmata-formatted attestations. This is intentional: it gives the reference architecture a clean targeting surface that won’t collide with other attestation types.
The scanner is available today as nctl agent aibom generate, part of the Nirmata nctl CLI. The full CI gating surface is two commands with complementary --fail-on options:
# ── Generate AIBOM ──────────────────────────────────────────────────────
- name: Generate AIBOM
  run: |
    nctl agent aibom generate . \
      --output json \
      --file aibom-current.json

# ── Upload SARIF to GitHub Security tab ────────────────────────────────
- name: Generate SARIF report
  run: |
    nctl agent aibom generate . \
      --output sarif \
      --file results.sarif

- name: Upload SARIF to GitHub Security
  uses: github/codeql-action/upload-sarif@v3
  with:
    sarif_file: results.sarif
The generate command fails the build if the repo contains undocumented agents above a risk threshold. The diff command is the day-to-day CI gate — it compares the current scan against a committed baseline and fails only when something new appeared. Two baseline strategies are valid:
# ── Gate on baseline diff ───────────────────────────────────────────────
# On PRs: fail if any new agents, tools, or models were added that
# are not in the committed baseline (aibom-baseline.json).
# Update aibom-baseline.json deliberately when new agents are approved.
- name: Gate against approved baseline
  run: |
    nctl agent aibom diff aibom-baseline.json aibom-current.json \
      --fail-on added
The full CI pipeline — generate, gate, publish, attest — is here.
The gate runs before the attestation step. If diff --fail-on added exits non-zero, the workflow stops and nothing gets attested. The attestation is evidence of what passed the gate, not a bypass of it.
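The attest step itself lives in the linked pipeline; as a sketch, it is a single cosign invocation bound to the pushed digest, plus an optional verification of what was just attached. Step names, the registry path, and the IMAGE_DIGEST variable are illustrative; keyless signing also requires `permissions: id-token: write` on the job. The --type value must match what the Kyverno policies expect:

```yaml
# ── Attest the AIBOM to the image digest (sketch) ──────────────────────
# Assumes the image was already pushed and its digest captured in
# IMAGE_DIGEST by an earlier step.
- name: Attest AIBOM
  env:
    COSIGN_EXPERIMENTAL: "1"  # keyless signing via the Actions OIDC token
  run: |
    cosign attest --yes \
      --predicate aibom-current.json \
      --type https://nirmata.com/aibom/v1 \
      "registry.example.com/agents/research-agent@${IMAGE_DIGEST}"

# Optional sanity check: verify the attestation before the job ends.
- name: Verify AIBOM attestation
  run: |
    cosign verify-attestation \
      --type https://nirmata.com/aibom/v1 \
      --certificate-oidc-issuer https://token.actions.githubusercontent.com \
      --certificate-identity-regexp 'https://github.com/your-org/.*' \
      "registry.example.com/agents/research-agent@${IMAGE_DIGEST}"
```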
The COSIGN_EXPERIMENTAL=1 flag enables keyless signing: cosign uses the GitHub Actions OIDC token to get a short-lived certificate from Sigstore’s Fulcio CA, logged in Rekor. No long-lived keys to manage. For air-gapped environments, key-based signing is the alternative.
Kyverno admission enforcement
This is the core technical section. Kyverno 1.14 introduced ImageValidatingPolicy (policies.kyverno.io/v1alpha1) — a dedicated type for image signature and attestation verification via CEL. ClusterPolicy still works but is now marked deprecated for new policy authoring. Two policies do the work: the first verifies the attestation signature; the second inspects what the AIBOM declares.
Policy 1: require-aibom-attestation
Blocks any Pod whose image lacks a valid cosign keyless attestation of type https://nirmata.com/aibom/v1, signed by GitHub Actions OIDC. If the image never went through the attested CI pipeline, it doesn’t get in.
apiVersion: policies.kyverno.io/v1alpha1
kind: ImageValidatingPolicy
metadata:
  name: require-aibom-attestation
  annotations:
    policies.kyverno.io/title: Require AIBOM Attestation
    policies.kyverno.io/description: >-
      Requires every agent Pod image to have a valid Nirmata AIBOM attestation
      signed by the CI pipeline via Sigstore keyless signing. Images without a
      valid attestation are blocked at admission.
spec:
  validationActions: [Deny]
  webhookConfiguration:
    timeoutSeconds: 15
    failurePolicy: Fail
  matchConstraints:
    resourceRules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["pods"]
  matchImageReferences:
    # Scope to your agent registry — update this glob to match your registry
    - glob: "registry.example.com/agents/*"
  attestors:
    - name: ciPipeline
      cosign:
        keyless:
          identities:
            # Update subject to match your GitHub org and workflow path
            - subject: "https://github.com/your-org/*/.github/workflows/*.yml@refs/heads/main"
              issuer: "https://token.actions.githubusercontent.com"
        ctlog:
          url: "https://rekor.sigstore.dev"
  attestations:
    - name: aibom
      intoto:
        # Must match the --type value passed to cosign attest in CI
        type: https://nirmata.com/aibom/v1
  validations:
    # Step 1: verify the image was signed by the CI pipeline
    - expression: >-
        images.containers.map(image,
          verifyImageSignatures(image, [attestors.ciPipeline])
        ).all(e, e > 0)
      message: "Image must be signed by the CI pipeline via Sigstore keyless signing."
    # Step 2: verify the AIBOM attestation signature is valid
    - expression: >-
        images.containers.map(image,
          verifyAttestationSignatures(image, attestations.aibom, [attestors.ciPipeline])
        ).all(e, e > 0)
      message: "Image must have a valid AIBOM attestation (https://nirmata.com/aibom/v1)."
Policy 1 — ImageValidatingPolicy: require valid AIBOM attestation
Note: validationActions: [Deny] is what makes this a hard gate. [Audit] records violations without blocking. They are not equivalent governance controls.
Policy 2: enforce-aibom-constraints
Once the attestation signature is verified, extractPayload deserializes the AIBOM JSON and makes it available for CEL evaluation. Three things are enforced: approved frameworks, prohibited tools, approved models. The verifyAttestationSignatures(…) > 0 guard inside each map ensures extractPayload only runs on verified images.
apiVersion: policies.kyverno.io/v1alpha1
kind: ImageValidatingPolicy
metadata:
  name: enforce-aibom-constraints
  annotations:
    policies.kyverno.io/title: Enforce AIBOM Constraints
    policies.kyverno.io/description: >-
      Extracts the AIBOM payload from the OCI attestation and enforces:
      (1) agent frameworks must be in the approved list,
      (2) prohibited tools (filesystem, shell_*) must not be declared,
      (3) models must be in the approved list.
      Uses c.category (not c.type) to filter components — c.type is the
      CycloneDX field ("ml-model", "library"); c.category is the AIBOM
      discriminator ("agent", "tool", "model").
spec:
  validationActions: [Deny]
  webhookConfiguration:
    timeoutSeconds: 15
    failurePolicy: Fail
  matchConstraints:
    resourceRules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["pods"]
  matchImageReferences:
    # Update this glob to match your registry
    - glob: "registry.example.com/agents/*"
  attestors:
    - name: ciPipeline
      cosign:
        keyless:
          identities:
            # Update subject to match your GitHub org and workflow path
            - subject: "https://github.com/your-org/*/.github/workflows/*.yml@refs/heads/main"
              issuer: "https://token.actions.githubusercontent.com"
        ctlog:
          url: "https://rekor.sigstore.dev"
  attestations:
    - name: aibom
      intoto:
        type: https://nirmata.com/aibom/v1
  validations:
    # Rule 1: every agent must use an approved framework.
    # extractPayload() requires verifyAttestationSignatures() to have run first —
    # the > 0 guard ensures extraction only runs on verified images.
    # c.category is the AIBOM discriminator field; c.type is the CycloneDX type.
    - expression: >-
        images.containers.map(image,
          verifyAttestationSignatures(image, attestations.aibom, [attestors.ciPipeline]) > 0
          &&
          extractPayload(image, attestations.aibom).components
            .filter(c, c.category == "agent")
            .all(c, c.framework in [
              "anthropic_sdk",
              "pydantic_ai",
              "openai-agents",
              "langchain",
              "langchain_ts",
              "mastra",
              "voltagent",
              "langchaingo"
            ])
        ).all(e, e)
      message: "Agent framework is not in the approved list."
    # Rule 2: filesystem and shell_* tools are prohibited.
    - expression: >-
        images.containers.map(image,
          verifyAttestationSignatures(image, attestations.aibom, [attestors.ciPipeline]) > 0
          &&
          extractPayload(image, attestations.aibom).components
            .filter(c, c.category == "tool")
            .all(c, c.name != "filesystem" && !c.name.startsWith("shell_"))
        ).all(e, e)
      message: "Prohibited tool declared (filesystem or shell_*). Admission denied."
    # Rule 3: models must be in the approved list.
    # Extend this list as your organisation approves new models.
    - expression: >-
        images.containers.map(image,
          verifyAttestationSignatures(image, attestations.aibom, [attestors.ciPipeline]) > 0
          &&
          extractPayload(image, attestations.aibom).components
            .filter(c, c.category == "model")
            .all(c, c.name in [
              "claude-3-5-sonnet-20241022",
              "claude-3-5-haiku-20241022",
              "gpt-4o",
              "gpt-4o-mini",
              "llama-3.1-70b"
            ])
        ).all(e, e)
      message: "Model is not in the approved list."
Policy 2 — ImageValidatingPolicy: enforce AIBOM contents via CEL + extractPayload
What this unlocks: the agent registry as a byproduct
Every admission decision Kyverno makes generates a PolicyReport resource in the cluster automatically. These reports are structured records: which Pod, which image digest, which policy, what the attestation contained, whether it passed.
In Nirmata Control Hub (NCH), these PolicyReports feed an agent registry automatically. Every admitted agent appears in NCH with its verified framework, tools, and model — linked to the image digest and the policy that admitted it. Registration is a byproduct of enforcement, not a separate operational step.
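The report shape follows the Kubernetes wg-policy-prototypes PolicyReport API. Exact report names, messages, and grouping vary by Kyverno version, so treat the following as an illustrative sketch of what an admitted agent Pod produces, not verbatim output:

```yaml
# Illustrative PolicyReport for an admitted agent Pod (names and
# namespaces are examples).
apiVersion: wgpolicyk8s.io/v1alpha2
kind: PolicyReport
metadata:
  name: example-report-research-agent
  namespace: agents
results:
  - source: kyverno
    policy: require-aibom-attestation
    result: pass
    resources:
      - apiVersion: v1
        kind: Pod
        name: research-agent-7d9f8
        namespace: agents
  - source: kyverno
    policy: enforce-aibom-constraints
    result: pass
    resources:
      - apiVersion: v1
        kind: Pod
        name: research-agent-7d9f8
        namespace: agents
summary:
  pass: 2
  fail: 0
```

Anything that can read PolicyReports — kubectl, a controller, or NCH — can therefore enumerate every admitted agent without a separate inventory system.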
The attestation approach verifies what agents declared they would do before they do anything; graph-based discovery maps what is actually running. The two are complementary: attestation gives you a tamper-resistant baseline at admission, and discovery gives you runtime correlation. Neither replaces the other, but the attestation is the control. The discovery is the investigation surface.
For the CISO: the attestation is auditable evidence, not a dashboard observation. A signed OCI attestation attached to a specific image digest satisfies EU AI Act Article 9 (risk management), maps to the GOVERN function in the NIST AI RMF, and covers SOC 2 CC6.1 (logical access controls) with something you can show an auditor — a signed artifact with a Rekor transparency log entry, not a screenshot.
Current limitations: Static analysis captures declared tools, not dynamically registered ones. An agent that reads its tool configuration from a database or environment variable at runtime won't have those tools in the AIBOM. The scanner can detect the env var reference (emitting "model_name": "$MODEL_NAME") but not the resolved value. Treat the AIBOM as a necessary floor, not a complete ceiling.
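To make the blind spot concrete, here is a hypothetical pattern the scanner cannot fully resolve: the tool list is assembled at runtime from an environment variable, so only the reference — never the resolved tools — can appear in the AIBOM. The `ToolDef` type and `EXTRA_TOOLS` variable name are made up for this sketch:

```typescript
// Hypothetical dynamic tool registration. At scan time the tool set is
// unknown; at runtime it is whatever the deployment environment injects.
interface ToolDef {
  name: string;
  description: string;
}

function loadToolsFromConfig(): ToolDef[] {
  // The platform might inject e.g.
  //   EXTRA_TOOLS='[{"name":"crm_lookup","description":"CRM record lookup"}]'
  // Static analysis sees only the env var reference, not this value.
  const raw = process.env.EXTRA_TOOLS ?? "[]";
  return JSON.parse(raw) as ToolDef[];
}

// Invisible to the AIBOM: the declared capability set cannot include
// tools that only exist after this call resolves at runtime.
export const dynamicTools = loadToolsFromConfig();
```

This is exactly why the AIBOM is a floor: runtime controls (egress policy, tool allowlists in the agent gateway) still have to cover what admission-time attestation cannot see.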
The attestation gate requires CI discipline. If an engineer can push an image directly to the registry without going through the CI pipeline, there’s no attestation to verify. Kyverno will block that image — which is correct behavior — but pair this with registry admission policies that require images to come from a specific CI source.
Keyless signing depends on the OIDC chain being intact. For air-gapped or high-security environments where outbound Sigstore connectivity is restricted, switch to key-based cosign signing. Replace the keyless: block with a keys: block referencing your public key or KMS URI.
Getting started
The reference Kyverno policies — the two ImageValidatingPolicy manifests and the GitHub Actions workflow — are at nirmata/kyverno-aibom-reference. Clone the repo and apply the policies in your own cluster to see the admission gate in action.
Download nctl, run the scanner against your own agent codebase, and see what surfaces.
To see how NCH integrates the full governance chain — agent governance, human-in-the-loop approval, compliance dashboards — book a demo of Nirmata.
The AIBOM is the inventory. The attestation is the control. The registry writes itself.
