AI, Open Source, and the Human Bottleneck

20 February 2026

Open source has always evolved alongside shifts in technology.

From distributed version control to CI/CD, and from containers to Kubernetes, each wave of tooling has reshaped how we build, collaborate, and contribute. Generative AI is the newest wave, and it introduces a tension that open source communities can no longer afford to ignore.

AI has made it simple to generate contributions. It has not, however, made the necessary review process any simpler.

Recently, the Kyverno project introduced an AI Usage Policy. This decision was not driven by resistance to AI. It was driven by something far more practical: the scaling limits of human attention.

 

Where This Conversation Began

Like many governance changes in open source, this one didn’t begin with theory. It began with a Slack message.

“20 PRs opened in 15 minutes 😱”

What followed was a mixture of humor, curiosity, and a familiar undertone many maintainers recognize immediately as discomfort.

“Were they good PRs?”
“Maybe they were generated by bots?”
“Are any of them helpful, or are they mostly noise?”

One maintainer captured the sentiment perfectly:

“Just seeing this number is discouraging enough.”

Another jokingly suggested we might need a:

“Respect the maintainers’ life policy.”

Behind the jokes was something deeply real. Our maintainers, and the project at large, were feeling the weight of something very new, very real, and clearly on the verge of changing how open source projects like ours are maintained.

 

The Maintainer Reality Few People See

Modern AI tools are extraordinary productivity amplifiers.

They generate code, documentation, tests, refactors, and design suggestions in seconds. But while output scales almost without limit, review does not. The bottleneck in open source has never been code generation.

It has always been human cognition.

Every pull request, regardless of how it was produced, must still be:

  • Read
  • Understood
  • Evaluated for correctness
  • Assessed for security implications
  • Considered for long-term maintainability
  • More often than not, commented on, questioned, or simply clarified
  • Viewed by more than one set of eyes
  • Merged

In open source, there is always a human in the loop. That human is typically a maintainer, a reviewer, or a combination of both.

When low-effort or poorly understood AI-generated PRs flood a project, the burden of validation shifts entirely onto the humans in that loop. Even the most well-intentioned contributions become costly when they lack clarity, context, demonstrated understanding, and ownership.

Low-effort AI contributions don’t just exhaust maintainers; they quietly tax every thoughtful contributor waiting in the queue.

AI Boomers, AI Rizz, and the Reality of Change

We’re currently living through a fascinating cultural split in the developer ecosystem.

On one side, we see what might playfully be called “AI boomers”: folks deeply skeptical of AI, hesitant to adopt it, or resistant to its growing presence in development workflows. While it might be hard to believe, many of these people work in and contribute to open source software development.

On the other side, we see contributors with undeniable “AI rizz”: enthusiastic adopters eager to automate, generate, accelerate, and experiment with AI tooling in the open source space and everywhere else possible.

Both reactions are understandable.

Both are human.

But history has taught us something consistent about technological change:

Projects, like businesses, that refuse to adapt rarely remain relevant.

It’s become clear that AI is not a passing trend. It is a structural shift in how software is created. Resisting it entirely is unlikely to be sustainable, and blindly embracing it without guardrails is equally risky.

AI as Acceleration vs. AI as Substitution

Open source contributions have traditionally served as one of the most powerful learning engines in our industry. Developers deepen expertise, explore systems, build portfolios, and give back to the communities they rely on.

But the arrival of AI has changed how many contributors produce work. Unfortunately, this has not happened in a broadly productive way; instead, it often undermines the one thing that a meaningful contribution requires:

Understanding.

Using AI to bypass understanding is not acceleration. It’s debt for both the contributor and the project.

Superficially correct code that cannot be explained, reasoned about, or defended introduces risk. It also deprives contributors of the very growth that open source participation has historically enabled.

Across open source communities, the same message is being shared with AI-enthusiastic contributors: AI can amplify learning, but it cannot replace learning.

Ownership Still Matters — Perhaps More Than Ever

During an internal discussion about AI-generated contributions, Jim Bugwadia, Nirmata CEO and Kyverno founder, made a deceptively simple observation about what needs to happen with AI-generated and AI-assisted contributions:

“Own your commit.”

In a world of AI-assisted development, that idea expands naturally.

If AI helped generate your contribution, you must also own your prompt and whatever is generated by it.

Ownership means:

  • Understanding intent
  • Verifying correctness
  • Taking responsibility for outcomes
  • Standing behind the change

AI can generate output, but it can’t and shouldn’t assume accountability. Having a human in the loop can never be only maintainer-facing; it must be contributor-facing too.

Disclosure As Trust Infrastructure

Transparency has always been foundational to open source collaboration.

AI introduces new complexities around licensing, copyright, provenance, and tool terms of service. Legal frameworks are still evolving, and uncertainty remains a defining characteristic of this space.

Disclosure is not about tools or bureaucracy.

Disclosure is about accountability. It is trust infrastructure.

Requiring contributors to disclose meaningful AI usage helps preserve:

  • Transparency
  • Reviewer trust
  • Licensing integrity
  • Contribution clarity
  • Responsible authorship
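As one illustration, disclosure can be as lightweight as a commit-message trailer. The trailer name and wording below are hypothetical, not a format mandated by Kyverno’s policy:

```text
fix: handle empty match block in policy validation

Add a nil check before iterating match.resources to avoid
a panic during admission review.

Assisted-by: AI (describe the model and the scope of assistance)
Signed-off-by: Jane Developer <jane@example.com>
```

Because trailers are machine-readable, reviewers and tooling can spot AI-assisted changes at a glance without any extra process.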

This approach aligns with guidance from the Linux Foundation and discussions across the broader CNCF community, both of which acknowledge that AI-generated content can be contributed provided contributors ensure compliance with licensing, attribution, and intellectual property obligations.

Why Kyverno Chose to Lead Here

Kyverno is not a hobby project. Our project is used globally, in production, across organizations ranging from startups to enterprise-scale companies. Adoption continues to grow, and the project is actively moving toward CNCF Graduation.

Kyverno itself exists to create:

  • Clarity
  • Safety
  • Consistency
  • Sustainable workflows 

All through policy as code.

In this case, we are applying the same philosophy to something new: AI usage.

If policy as code provides guardrails and golden paths in platform engineering, then we should be considering how to provide similar guidance in the AI-assisted development space.
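As a reminder of what “guardrails through policy as code” looks like in practice, here is a minimal Kyverno validation policy. It is a generic illustration of the philosophy, not part of the AI Usage Policy itself:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label
spec:
  validationFailureAction: Enforce
  rules:
    - name: check-team-label
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "The label `team` is required on all Pods."
        pattern:
          metadata:
            labels:
              team: "?*"
```

A small, declarative rule like this encodes an expectation once and enforces it consistently; the AI Usage Policy aims to do the same for contribution norms.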

Developers can’t sustainably leverage AI within open source ecosystems if projects fail to define clear expectations for contributors to follow as they develop.

AI-Friendly Does Not Mean AI-Unbounded

There is an important distinction emerging across open source communities: being AI-friendly does not mean accepting unreviewed AI output.

Maintainers themselves are often enthusiastic adopters of AI tools, and rightly so. Across projects, maintainers are using AI to:

  • Accelerate repetitive tasks
  • Improve documentation
  • Generate scaffolding
  • Explore design alternatives

One emerging pattern is the use of AGENT.md-style configurations, designed to guide how AI tools interact with repositories and project conventions.

Kyverno is actively exploring similar approaches. The goal is not simply to manage AI-assisted contributions, but to improve their quality at the source.
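A hypothetical sketch of what such an agent-guidance file might contain follows; the rules below are illustrative, not Kyverno’s actual configuration:

```markdown
# Agent guidelines for this repository

- Run `make test` before proposing changes; never open a PR with failing tests.
- Follow the existing code style; do not reformat files unrelated to the change.
- Keep each PR small and focused on a single issue.
- Disclose AI assistance in the PR description, per the project's AI Usage Policy.
```

Conventions like these shift quality checks upstream: the AI tooling itself is steered toward producing contributions that respect reviewer time.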

Discomfort, Growth, and Privilege

AI is forcing open source communities to confront unfamiliar challenges:

  • Scaling review processes
  • Defining authorship norms
  • Navigating licensing uncertainty
  • Re-thinking contributor workflows

Discomfort is inevitable. But as Jim often reminds our team:

“Discomfort in newness is typically a sign of growth.”

The pressure to navigate these new challenges and answer these pressing questions is not a burden. Rising to this challenge is a privilege. It means:

  • Our project matters
  • The ecosystem is evolving
  • We’re participating in shaping the future

A Shared Challenge Across Open Source

Kyverno’s AI policy work was informed by thoughtful discussions and examples across the ecosystem. We dove into a variety of projects, each reflecting different constraints and priorities for us to keep in mind as we embark on our own journey.

Moving forward, what matters most is that communities and community members from different projects and industries around the globe engage deliberately with these questions rather than simply responding reactively to the tooling.

Open source sustainability increasingly depends on shared governance patterns, not isolated experimentation.

An Invitation to the Ecosystem

AI is not going away, nor should it.

The question is not whether AI belongs in open source. The question is how we integrate it responsibly.

Sustainable open source in the AI era requires:

  • Human ownership
  • Transparent authorship
  • Respect for reviewer time
  • Context-aware contributions
  • Community-driven guardrails

AI is a powerful tool. But open source remains, at its core, a human system.

While AI changes the tools and accelerates output, it does not change the responsibility.

Acknowledgements & Influences

Kyverno’s AI Usage Policy was shaped by the openness and thoughtfulness of many communities and leaders, including:

  • Ghostty
  • KubeVirt
  • Linux Foundation working groups
  • QEMU maintainers
  • Mitchell Hashimoto’s writings on AI adoption

Open source benefits enormously when governance knowledge is shared. Thanks to everyone who has already shared and to those who will help us continue to adapt our AI policies as we grow our project.
