Open Source SOS! 🚨 Code Chaos Explained 🚀
The open source landscape came under significant strain in 2025: developers merged nearly 45 million pull requests per month, up 23% year over year, according to GitHub's Octoverse report. Projects struggled to manage the volume. tldraw closed its pull requests, and Fastify shut down its HackerOne program after inbound reports became unmanageable. Maintainers reported burnout from the sheer number of requests for review and mentorship. In response, projects such as Codex and Gemini CLI now require contributors to open an issue and get maintainer approval before submitting a pull request, while projects like Processing use AGENTS.md files to set ground rules for AI coding agents. The trend highlights the need for structured contributions and a renewed focus on community norms.
THE INCREASING VOLUME OF OPEN SOURCE CONTRIBUTIONS
Picture it: a polished pull request lands in your repository from a first-time contributor. Forty-five minutes later, you’ve crafted a thoughtful, encouraging response with a few clarifying questions. Who knows: maybe this person could be a great new person to mentor, so it’s worth your time if they put in theirs. And then…nothing. Or the follow-up makes it clear the contributor doesn’t have the context needed to explain the change, often because AI made it easy to submit something plausible before they were ready to maintain it. Or you realize you’ve just spent your afternoon debugging someone’s LLM chat session.

This is becoming more common. Not because contributors are acting in bad faith, but because it’s never been easier to generate something that looks plausible. The cost to create has dropped. The cost to review hasn’t. Open source is experiencing its own “Eternal September”: a constant influx of contributions that strains the social systems we rely on to build trust and mentor newcomers.

Projects across the ecosystem are seeing the same pattern. tldraw closed their pull requests. Fastify shut down their HackerOne program after inbound reports became unmanageable at scale. And the overall volume keeps climbing: the Octoverse 2025 report notes that developers merged nearly 45 million pull requests per month in 2025, up 23% year over year. More pull requests, same maintainer hours.

The old signals, like clean code, fast turnaround, and handling complexity, used to mean someone had invested time in understanding the codebase. Now AI can help users generate all of that in seconds, so those signals aren’t as telling.
THE NEED FOR STRATEGIC MENTORSHIP
If I asked a room of open source contributors how they got started, they’d all say it began with a good mentor. When you mentor someone well, you’re not just adding one contributor. You’re multiplying yourself. They learn to onboard others, who do the same. That’s the multiplier effect.

But maintainers are burning out trying to mentor everyone who sends a pull request. If we stop mentoring newcomers, we lose the multiplier entirely. We can’t abandon mentorship, especially as many long-time maintainers step back from active contribution. (I wrote more about this generational challenge in “Who will maintain the future?”) So we need to be strategic about who we invest in.
THE 3 Cs: COMPREHENSION, CONTEXT, AND CONTINUITY
Looking at what’s working across projects, I see three filters maintainers are using. I call them the 3 Cs: Comprehension, Context, and Continuity.

Comprehension asks: does the contributor understand the problem well enough to propose this change? Some projects now test comprehension before code is submitted. Codex and Gemini CLI, for example, both recently added guidelines: contributors must open an issue and get approval before submitting a pull request. The comprehension check happens in that conversation. I’m also seeing in-person code sprints and hackathons thriving here, where maintainers can have real-time conversations with potential contributors to gauge both interest and comprehension.

I’m not expecting contributors to understand the whole project. That’s unrealistic. But you want to make sure they’re not committing code above their own comprehension level. As they grow, they can always take on more.
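As a minimal sketch of how an issue-first policy like this might be automated, here is a small triage function in the spirit of the Codex and Gemini CLI guidelines. The function name, the regex, the message text, and the idea of keeping a set of maintainer-approved issue numbers are all my own illustrative assumptions, not any project’s actual tooling:

```python
import re

# Hypothetical "comprehension gate": a pull request qualifies for human review
# only if its description links an issue a maintainer has already approved.
# Matches GitHub-style closing keywords such as "Fixes #12" or "Closes #40".
ISSUE_REF = re.compile(
    r"(?:close[sd]?|fix(?:e[sd])?|resolve[sd]?)\s+#(\d+)", re.IGNORECASE
)

def triage_pull_request(pr_body: str, approved_issues: set[int]) -> str:
    """Return 'review' if the PR references an approved issue, else a close message."""
    linked = {int(n) for n in ISSUE_REF.findall(pr_body or "")}
    if linked & approved_issues:
        return "review"
    return (
        "Thanks for the contribution! Please open an issue first and wait "
        "for maintainer approval before sending a pull request."
    )

# Usage: suppose issues 12 and 40 carry a maintainer's "approved" label.
print(triage_pull_request("Fixes #12 by caching the parser", {12, 40}))  # review
print(triage_pull_request("Refactor everything", {12, 40}))  # polite close message
```

In practice a check like this would run in CI and post the close message as a comment; the point is that the comprehension conversation happens in the issue, before any code arrives.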
AI-ASSISTED CONTRIBUTIONS AND THE IMPORTANCE OF DISCLOSURE
Disclosing AI use is about giving reviewers context. When I know a pull request was AI-assisted, I can calibrate my review. That might mean asking more clarifying questions, or focusing on whether the contributor understands the trade-offs, not just whether the code runs.

There’s also AGENTS.md, which provides instructions for AI coding agents, a bit like robots.txt for crawlers. Projects like scikit-learn, Goose, and Processing use AGENTS.md to give agents instructions: follow our contribution guidelines, check whether an issue is already assigned, respect our norms. This helps shift the burden of gathering the context needed for a review onto the contributor (or their tools).
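To make this concrete, an AGENTS.md encoding norms like these might look something like the sketch below. This is an illustrative example I wrote for this article, not the actual file from scikit-learn, Goose, or Processing:

```markdown
# Guidance for AI coding agents

- Read CONTRIBUTING.md before proposing any change.
- Do not open a pull request unless it references an issue that a
  maintainer has approved and assigned.
- Disclose AI assistance in the pull request description.
- Keep changes small: one issue per pull request.
- Run the project's test suite and linters before submitting.
```

Because coding agents read this file automatically, the project’s norms travel with the contribution instead of depending on every newcomer finding them.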
BUILDING A HEALTHY OPEN SOURCE COMMUNITY
Drive-by contributions can be helpful, but limit your mentorship investment to people who come back and engage thoughtfully. Your mentorship can scale up over time: Comprehension and Context get you reviewed. Continuity gets you mentored. As a maintainer, this means: don’t invest deep mentorship energy until you see all three.

What this looks like: let’s compare this to our first example above. This time, a polished pull request lands without following the guidelines. Close it. Guilt-free. Protect your time for contributions that matter. If someone comes back and engages in issues, if they submit a second pull request and respond thoughtfully to feedback, now you pay attention. That’s when you invest. This is how you protect the multiplier effect. You’re not abandoning newcomers. You’re being strategic.

There’s another benefit too: clear criteria reduce bias. When you rely on vibes, you tend to mentor people who look like you or share your cultural context. The 3 Cs give you a rubric instead of gut feelings, and that makes your mentorship more equitable.

Pick a C to implement: start with one, but look for all three when deciding who to mentor. This isn’t about restricting AI-assisted contributions. It’s about building guardrails that protect human mentorship and keep communities healthy. AI tools are here to stay. The question is whether we adapt our practices to maintain what makes open source work: human relationships, knowledge transfer, and the multiplier effect. The 3 Cs give us a framework for exactly that.

Adapted from my FOSDEM 2026 talk. Thanks to Anne Bertucio, Ashley Wolf, Daniel Stenberg, Tim Head, Bruno Borges, Emma Irwin, Helen Hou-Sandí, Hugo van Kemenade, Jamie Tanna, John McBride, Juan Luis Cano Rodríguez, Justin Wheeler, Matteo Collina, Camilla Moraes, Raphaël de Courvill…
This article is AI-synthesized from public sources and may not reflect original reporting.