
The Pincer Attack — How AI Killed Open Source

All of the assumptions behind Open Source are breaking simultaneously.

Economic Scarcity

The best argument for open source was economic. Don't reinvent the wheel. Use what already exists. Pool effort across organizations. This made sense when building a wheel was expensive — when writing a library took hundreds of hours, when the alternative was paying a team to duplicate work someone else had already done. The cost of production was high, so sharing production made sense.

Open source was a rational response to programmer scarcity. AI ended the scarcity. Now the entire open source ecosystem is experiencing a pincer attack: the trust model is collapsing, vendors are cloning entire open source stacks, and individuals are yoinking ideas, having LLMs reimplement what they need from a package without ever installing it.

94 Million Downloads, Three Hours

Recently, LiteLLM, a Python library downloaded over 94 million times a month, was compromised on PyPI. A threat actor pushed malicious versions containing a credential stealer that executes automatically on every Python process startup. You didn't even have to import the library. Just having it installed was enough.

The payload harvested secrets, established persistence through systemd, and could move laterally across Kubernetes clusters to deploy privileged pods on every node. The compromised versions were live for three hours before PyPI quarantined the package. Three hours, 94 million monthly downloads, and a maintainer whose GitHub issue about the compromise was closed as "not planned."
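The post doesn't spell out the mechanism, but Python has a documented feature that makes "installed is enough" possible: any line in a `.pth` file under `site-packages` that starts with `import` is executed at interpreter startup, before your code runs. Here is a harmless sketch that triggers the same `.pth` processing on a temporary directory via `site.addsitedir`; the file and environment variable names are illustrative, not the actual payload.

```python
import os
import site
import tempfile

# Create a stand-in for a site-packages directory.
pth_dir = tempfile.mkdtemp()

# A .pth line beginning with "import" is executed, not treated as a path.
# A real installer can drop a file like this into site-packages,
# so the code runs in every Python process on the machine.
with open(os.path.join(pth_dir, "demo.pth"), "w") as f:
    f.write("import os; os.environ['PTH_RAN'] = 'yes'\n")

# addsitedir processes .pth files the same way startup does.
site.addsitedir(pth_dir)

print(os.environ.get("PTH_RAN"))  # the import line already executed
```

Nothing was imported by the user, yet code ran. That is why an installed-but-unused dependency is still attack surface.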

The Trust Model Is Collapsing

Open source always ran on trust. You pulled in a dependency and trusted that the author wasn't malicious, wasn't compromised, wasn't having a bad day. You trusted that the community reviewing the code was large enough and competent enough to catch problems. You trusted that the volume of contributions was low enough that humans could actually review them.

The intent behind open source contributions is unknowable, and AI makes contributing incredibly inexpensive. A pull request generated by AI looks exactly like one written by a human. A subtle backdoor introduced across three seemingly innocent commits by three seemingly unrelated accounts is now a weekend project for anyone with bad intentions and a prompt. You can't code-review faster than AI can generate plausible-looking attacks.

Every company running open source dependencies — which is every company — is now running code from an ecosystem where the cost of contributing maliciously has dropped to near zero while the cost of detecting malice has stayed the same or gone up. This is a supply chain with the locks removed.

The same companies that spent years telling programmers to contribute for free are about to discover what happens when anyone can contribute, for any reason, at machine scale. The signal-to-noise ratio in open source is collapsing, and so is the trust model that made the whole thing work.

Attack From Above

Cloudflare put one developer on the task of cloning Next.js — a framework representing years of work by Vercel and hundreds of open source contributors. It took a week and roughly $1,100 in inference costs. One person, one week, a thousand bucks. That's what years of community effort is worth now when a sufficiently motivated company points AI at your project. They didn't need to fork it, contribute to it, or even engage with the community. They just rebuilt it. And they have every right to — the code was open, the LLMs learned from it, and now the economics favor cloning over collaborating.

Attack From Below

Andrej Karpathy — the guy who coined "vibe coding" and now apparently "yoinking" — recently laid out a philosophy of writing code like bacterial genomes: small, modular, self-contained. His test for good code? "Can you imagine someone going 'yoink' without knowing the rest of your code or having to import anything new?" The anti-dependency model. When AI can read a library, understand the three functions you actually use, and rewrite them inline in your project in seconds — why would you ever pip install anything? You don't need the package. You don't need the maintainer. You just need the idea, and the LLM extracts it for you.
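The yoink move in concrete terms: suppose you need one small function, say a URL slugifier, and a package exists for it. Instead of adding the dependency, you keep a self-contained copy inline. This is a hypothetical illustration, not taken from any particular library.

```python
import re

# "Yoinked": instead of pip-installing a slugify package for one
# function, keep a tiny self-contained version in your own codebase.
def slugify(text: str) -> str:
    """Lowercase the text, drop punctuation, join words with hyphens."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return "-".join(words)

print(slugify("The Pincer Attack: How AI Killed Open Source"))
# -> the-pincer-attack-how-ai-killed-open-source
```

No package, no maintainer, no supply chain. Ten lines you own and can fix at 2 a.m. That trade used to be obviously wrong; with an LLM doing the extraction, it's increasingly the default.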

Fame

You wrote a library because you hit a problem and thought other people probably hit it too. You published it. People started using it. Then a lot of people started using it. It felt great. Then you started getting issues at 3 a.m. from people who didn't read the README. Unpleasant emails from people who talked to you like you owed them something for using your code. Then came feature demands from billion-dollar companies who never sent you a dime. You kept going because you felt that there was a social contract: community, contributions, reputation, maybe a job offer.

Downloads are dropping. Not because your library got worse. It's yoinkers taking your ideas into their own code. Some published their own versions of your library but left your contact information in the README, so now you're fielding demands to fix things you didn't break, things that aren't even in your library.

The PRs have tripled, and tripled again. Everybody with a ChatGPT account is trying to help by adding more code. The patches look reasonable, but they're AI slop. Reviewing them takes all of your time.

Then someone compromises your PyPI credentials. Now you're in the news. "Why wasn't he using 2FA?" "Why was one person a single point of failure?" You — the person who built this for free, maintained it for free, absorbed the entitlement for free — are the villain in someone else's supply chain postmortem.

And underneath it all, the LLMs are training on your code, your docs, your issue responses. The AI that's replacing your library learned how to do it from you.

Reinvent the Wheel

So how do we build systems now? The old way.

Cash has always been the cheapest way to acquire anything, including software. You want to have that relationship — vendor, customer, SLA, accountability. What you cannot buy, write yourself, with AI. Either way, own it. Understand it. Be able to fix it at 2 a.m. without filing an issue into the void and hoping a volunteer in another hemisphere wakes up feeling generous.

LinuxToaster is a set of Unix tools re-imagined for the AI era. From toast — sed with a brain — to ito, version control built for AI, to squawk, a messaging bus for AI and humans. Small, composable components you can put together to harness AI to solve problems, Unix style. Commercial software. Vendor, customer, accountability.
