Is Open Source Like the Dinosaurs Waiting to Become Oil Fields?

For the last four decades, open source has defined how we build software. It was never just about code; it was about empowerment, about breaking large problems into smaller tasks through collaboration. Open source took the means of producing critical software out of the hands of well-resourced corporations and put them into the hands of collective action. It enabled the development of everything from Linux to the modern web, forming the backbone of the internet itself.
But what happens when software writes itself?
AI is already far beyond being a glorified autocomplete. Look at Replit as an example. A year ago, it was an interesting idea—you could type out some code, and it would help you fill in the gaps. It wasn’t very good. It made mistakes. But now? You can generate entire applications from a prompt without touching a single line of code. And it is only going to get better at doing it.
We’ve gone from tools that help developers write functions to systems that generate entire applications. This isn’t just about making developers more efficient. It’s a fundamental shift in how software is created, how it evolves, and who—or what—is doing the creating.
It feels like an extinction-level event for open source. The dynamics of collaboration, ownership, and licensing are about to undergo a radical transformation.
The Original Premise of Open Source Is Being Undermined
Open source exists because code, in its traditional form, is a scarce resource. Writing software takes time, effort, and expertise. The four freedoms of free software (to use, study, modify, and share) were designed to ensure that once software was written, it could be freely adapted and improved.

This model made sense in a world where software development was inherently manual and labor-intensive. But AI is starting to break that model.
Imagine an AI that can:
- Be pointed at an open-source repository and not just clone it, but rewrite it.
- Restructure the code, rename methods and variables, optimize performance, and even modify the architecture.
- Take a description of a problem and generate a software solution from scratch, bypassing the need for a pre-existing codebase entirely.
I think this future is close. If not this year, then in a few years.
At what point does an AI-generated fork stop being a fork? If an AI reworks 100% of a project’s codebase, is it still under the original license? If AI generates a functionally identical but legally distinct codebase, does the idea of licensing even matter?
If AI can endlessly generate and regenerate software, then code is no longer scarce. And if code is no longer scarce, what happens to open source? Here are some scenarios I've been thinking about...
1. A Fragmented Landscape of Infinite Forks
Instead of large, centralized projects with many contributors, we could see millions of AI-generated forks. If AI can tailor software to a specific use case, why would people contribute to a single upstream repository when they can generate a custom version for themselves?
Collaboration doesn’t disappear, but it becomes radically distributed: small, ephemeral, specialized projects replace the large upstream repositories with hundreds of contributors.
2. The Rise of AI Maintainers
If AI dominates software creation, humans won’t disappear from the equation entirely. Instead of writing code line by line, they may become curators, validators, and strategists.
An AI maintainer (a human who supervises machine-generated code) might:
- Oversee AI-generated changes to ensure they align with the project’s goals.
- Verify AI-created code for security, correctness, and efficiency.
- Manage the governance of AI-assisted open-source projects.
But this raises a fundamental question: if AI writes and maintains most of the software, does anyone still need to collaborate at scale?
3. The Rise of the Fair Code Idea
A relatively new phenomenon in open source is the Fair Code model: licenses that impose ethical or commercial restrictions on use. The Sustainable Use License (SUL) used by n8n is an example. These licenses aren’t about absolute freedom; they aim to ensure that software contributors aren’t 'exploited'.
Free software and open source folks like me don't consider these licenses to be open source, and neither does the Open Source Initiative, since they fail its Open Source Definition.
These licenses may gain traction as more developers recognize that if software is infinitely reproducible, its original development community may no longer be needed. If collaboration itself is at risk, why not use licensing as a defensive measure—to protect the values that made open source possible in the first place? It’s a strange inversion of the original open-source logic, but ultimately, it comes down to priorities. As new and existing open source software projects face dwindling communities, will they turn to restrictive licensing as a way to prevent their own decline?
Will we see a surge in restrictive licenses designed to curb AI-driven exploitation of open-source repositories? If AI can rewrite and repackage open-source projects into proprietary software, developers may push back by making AI-specific exclusions a standard part of licensing.
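To make that concrete, here is one hypothetical sketch of what such an exclusion might say. It is not drawn from any existing license, and real license text would need a lawyer's drafting:

```
No-AI Clause (hypothetical). Neither the Software nor any work
derived from it may be used as training data for a machine-learning
model, and no automated system may generate, rewrite, or distribute
modified versions of the Software without review and approval by a
human contributor.
```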
4. A Robots.txt for Open Source?
Will we see new mechanisms to prevent AI-powered IDEs from scraping and cloning repositories?
Much as robots.txt signals that a website doesn’t want to be crawled (even though it can’t actually prevent it), we could see a new kind of “AI exclusion” mechanism for software projects.
- NoAI.txt? Projects could adopt a standard declaring their repositories off-limits to AI training and automated code generation (a sketch of such a file follows this list).
- GitHub Compliance? Platforms like GitHub might introduce features to block AI tools from cloning repositories that opt out (though there will always be workarounds).
- Legal Barriers? Licenses might explicitly prohibit AI-generated modifications without human review or contribution.
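As a sketch of the first idea, a noai.txt could borrow the simple key-value syntax of robots.txt. To be clear, no such standard exists; every directive below is invented purely for illustration:

```
# noai.txt (hypothetical, modeled on robots.txt)
# None of these directives belong to any real standard.

User-Agent: *                # applies to all AI crawlers and agents
Disallow-Training: /         # do not use any file as model training data
Disallow-Generation: /src/   # do not use source code as generation context
Allow: /docs/                # documentation may still be read and indexed
```

Like robots.txt itself, this would be purely advisory: it works only if the crawler chooses to honor it.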
But will any of this matter? Open-source projects have always relied on good-faith compliance. AI changes that equation. Just as web scraping exists despite robots.txt, AI will always find ways to ingest open-source repositories unless hard enforcement mechanisms exist.
5. The Collapse of Open Source Collaboration?
Open source has always relied on a steady stream of human contributors. But what happens if:
- AI writes the bulk of the software?
- Human maintainers dwindle as fewer people need to code?
- Projects stop getting contributions because developers generate their own AI-customized versions instead?
Open source, as a movement, could shrink into an archival function rather than an active development model. Some projects might survive, but only as reference points—over time they may become historical artifacts that AI pulls from when generating new code.
Or, more dramatically, some projects could close their repositories entirely and go proprietary. If AI erodes the collaborative incentives of open source, why not monetize what’s left? A once-free tool that now has dwindling human contributors might decide to lock itself down.
Is Open Source Waiting to Become Oil?
AI could wipe out open source as we know it: it is poised to automate large parts of software development and to rewrite codebases on demand.
The question is not whether open source will survive this evolution—it will, in some form. The real question is what it will become—a vital, evolving movement, or just raw material for AI systems that don’t need collaboration in the first place.
If open source wants to avoid becoming an oil field—something mined for value but no longer alive—it has to figure out how to adapt.