AI-slop threatens open source: why your company must audit now

AI-generated code is flooding open-source repositories with low-quality contributions, creating maintenance burdens and security risks. For businesses relying on open-source software, this is a hidden risk to your software supply chain — and it's growing fast.

Here's something most business leaders haven't considered: the free, open-source software your company depends on is being quietly overwhelmed by AI-generated code that nobody properly reviews. And that code may already be in your systems.

Let me be direct. I'm not anti-AI. Artificial intelligence is transforming how we work, and AI-assisted coding tools genuinely help experienced developers move faster. But there's a growing problem that the tech industry is only now confronting — and your business needs to understand it before it becomes your problem.

What's happening: the flood of AI slop

Open-source software — Linux, curl, VLC, Blender, and thousands of other tools — runs much of the modern internet. Companies like yours use it daily, often without realizing it. These projects rely on volunteer contributors who submit improvements through what are called "pull requests."

Until recently, submitting code required real skill. You had to understand the project, write proper code, and present it for review. That natural friction kept quality high.

AI coding tools eliminated that friction overnight.

Now anyone can prompt an AI to "fix this bug" or "add this feature," generate plausible-looking code, and submit it as a contribution. The problem? Much of this code is superficially correct but functionally broken, insecure, or completely missing the point.

"We started receiving pull requests that supposedly fixed reported issues, but which in hindsight were clearly one-off 'fix this problem' requests from an author using AI code tools. When writing code is easy and bad work is almost indistinguishable from good work, the value of external contributions is probably less than zero."

— Steve Ruiz, developer of tldraw (January 2026)

Ruiz isn't alone. In January 2026, Daniel Stenberg — the creator of curl, one of the most widely used tools in computing — shut down his bug bounty program entirely. He was drowning in AI-generated security reports that were either wrong, irrelevant, or nonsensical. As he put it: "In the old days, someone actually invested time in the security report. There was a built-in friction, but now the floodgates are open."

The real risk to your business

You might think: "We don't contribute to open source. How does this affect us?"

Two ways.

First, your software supply chain. Your business almost certainly runs on open-source components — directly or through the software vendors you pay. If those components are being maintained by overwhelmed volunteers who now spend their days sifting through AI-generated garbage instead of improving the software, quality suffers. Bugs linger. Security patches get delayed. The foundation your business rests on gets shakier.

Second, security vulnerabilities. GitGuardian's 2026 State of Secrets Sprawl report found that leaks of API keys, passwords, and other secrets surged 81% year-over-year, with 29 million secrets exposed on public GitHub repositories. AI-assisted repositories showed 40% higher secret exposure rates. AI tools generate code quickly, but they don't care about security — they'll happily hardcode a database password or include an exposed API key if the prompt doesn't explicitly say otherwise.
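To make the hardcoded-secret problem concrete, here is a minimal sketch of the kind of pattern matching that secret scanners apply to code before it ships. The patterns below are deliberately simple illustrations; real tools such as GitGuardian or gitleaks use far more sophisticated detection, and the function name is my own invention, not any tool's API.

```python
import re

# Illustrative regex patterns for common hardcoded secrets.
# Real scanners use hundreds of patterns plus entropy checks;
# these three exist only to show the idea.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "hardcoded_password": re.compile(r"""(?i)password\s*=\s*["'][^"']+["']"""),
    "generic_api_key": re.compile(r"""(?i)api[_-]?key\s*=\s*["'][^"']{16,}["']"""),
}

def find_hardcoded_secrets(source: str) -> list[str]:
    """Return the names of secret patterns found in a code snippet."""
    return sorted(name for name, pattern in SECRET_PATTERNS.items()
                  if pattern.search(source))
```

An AI tool that hardcodes a database password would trip the second pattern immediately — which is exactly why this kind of automated check belongs in your review pipeline, not after deployment.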

The analogy: contaminated water supply

Think of open source as your water supply. For years, it's been clean because only qualified engineers worked at the treatment plant. Now, thousands of people with garden hoses are adding "water" to the system. Most of it looks like water. Some of it isn't. And the plant workers — who are volunteers — are too busy filtering out the bad water to improve the system. You're still drinking from it.

What the industry is doing (and why it's not enough)

GitHub introduced new settings in early 2026 allowing project maintainers to restrict or disable external pull requests. Some projects are using AI tools like CodeRabbit to pre-screen contributions. Developer Mitchell Hashimoto created a system to limit contributions to "vouched" users only.

These are band-aids. The open-source community is built on the principle that anyone can contribute. Closing that door goes against everything these projects stand for — and maintainers know it.

The Godot game engine team explicitly rejected GitHub's new restrictions. As project manager Emilio Coppola told Tweakers: "We don't want to limit who can contribute to the project, so we won't be using the new GitHub features."

Meanwhile, Jean-Baptiste Kempf of VLC described the quality of merge requests from junior contributors as "abysmal." Blender's CEO noted that AI-assisted contributions "wasted reviewers' time and affected their motivation."

These are the maintainers of software your company uses. They're telling you they're struggling. Are you listening?

What you can do today

Not tomorrow, not next quarter. Today.

Your action list:

  • Inventory your open-source dependencies. Ask your IT team or vendor for a Software Bill of Materials (SBOM). You need to know what you're running. Most companies can't answer this question.
  • Check when those dependencies were last updated. An unmaintained package is a vulnerable package. If a project's maintainers are burned out and walking away — and they are — your risk increases.
  • Set a policy for AI-generated code. If your own developers use AI tools, establish clear rules: all AI-generated code must be reviewed by a human who understands it. No exceptions.
  • Monitor security advisories. Subscribe to CVE feeds for your critical dependencies. Don't rely on your vendors to tell you — they're often slower than the public disclosures.

The contrarian take

Here's where I'll push back against the panic: not all AI-generated code is bad. Experienced developers using AI as a tool — reviewing, refining, and understanding what the AI produces — are genuinely more productive. The problem isn't the technology. It's the absence of expertise.

AI code without human understanding is like a first-year medical student performing surgery after watching a YouTube video. The instruments are right. The steps are all written down. But without judgment, context, and experience, the result is dangerous.

Your business doesn't need to fear AI. It needs to demand that AI-generated code — whether in open-source projects you depend on or in your own codebase — is always reviewed by someone who actually understands what they're looking at.

That's not an AI policy. That's a competence policy. And it's one you should have had regardless.