>I write code with AI tools. I expect my team to use AI tools too. If you know the codebase and know what you're doing, writing great code has never been easier than with these tools.
This statement describes a transitional state. Contributors became qualified in this way before AI. New contributors using AI from day one will not be qualified in the same way.
>The question is more fundamental. In a world of AI coding assistants, is code from external contributors actually valuable at all?
...
>When code was hard to write and low-effort work was easy to identify, it was worth the cost to review the good stuff. If code is easy to write and bad work is virtually indistinguishable from good, then the value of external contribution is probably less than zero.
The easy response to a negative net value of external contributions is to make the decision: end external contributions.
For the purpose of thinking up a new model, though, unpacking that net value is the interesting part. I don't mean sorting between high- and low-effort contributions. I mean making productive use of low-effort one-shots.
AI tools have moved the old bottlenecks and we are trying to find where the new ones are going to settle down.
If you're not writing your code, why do you expect people to read it and follow your lead on whatever conventions you prefer?
I get why people who hand-write code are fussy about this, but you start the article off devaluing coding entirely, then pivot to how your codebase is written having value that needs to be followed.
It's either low value or it isn't. You can't approach code as worthless and then complain when others view your code as worthless and not worth reading too.
The idea of pull requests by anyone everywhere at any time as the default was based on the assumption that we'd only ever encounter other hackers like us. For a time, public discourse acknowledged that this wasn't exactly true, but was very busy framing it as a good thing. Because something something new perspectives, viewpoints, whatever.
Some of that framing was actually true, of course, but often happened to exist in a vacuum, pretending that reality did not exist; downplaying (sometimes to the point of actual gaslighting) the many downsides that came with reduced friction.
Which leads us back to current day, where said reality got supercharged by AI and crashed their car (currently on fire) into your living room.
I feel like we could've avoided these extremes with a bit more modesty, honesty, and time. But those values weren't really compatible with our culture over the last 15+ years.
Which leaves me wondering where we will find ourselves 15+ years from now.
Everything comes down to this. It's not just open source projects; companies are also slowly adjusting to this reality.
There are roughly two characteristics that humans need in this new environment: long-range technical leadership over how the system should be built (Lead+ Software Engineer), and deep product knowledge about how it's used (PM).
Love the transparency. To be fair, rewrites are almost impossible to review. Anything over a 5k-line diff takes multiple review cycles at minimum. I don't know how some maintainers do it while also working on the codebase themselves.
I had an odd experience a few weeks ago, when I spent a few minutes trying to find a small program I had written. It suddenly struck me that I could have asked for a new one, in less time than it took to find it.
> ...
> But if you ask me, the bigger threat to GitHub's model comes from the rapid devaluation of someone else's code. When code was hard to write and low-effort work was easy to identify, it was worth the cost to review the good stuff. If code is easy to write and bad work is virtually indistinguishable from good, then the value of external contribution is probably less than zero.
> If that's the case, which I'm starting to think it is, then it's better to limit community contribution to the places it still matters: reporting, discussion, perspective, and care. Don't worry about the code, I can push the button myself.
When was writing code ever the hard part?
If contributors aren't solving problems, what good are they? Code that doesn't solve a problem is cruft. And if a problem could be solved trivially, you probably wouldn't need contributions from others to solve it in the first place.
An alternative idea: Use a TODO list and stop using GitHub Issues as your personal dumping ground, whether you use AI to pad them or not. If the issue requires discussion or more detail and would warrant a proper issue, then make a proper issue.
Seems like the shitty AI issue did more harm than good?
If you don't have time, just write the damn issue as you normally would. I don't quite understand why one would waste so many resources and so much compute to expand some lazily conceived half-sentence into 10 paragraphs, as if it scores them points.
If you don't have time to write an issue yourself or carefully proofread whatever the LLM makes up for you, whom are you trying to fool by making it look pretty? At least if it is visibly lazy, anyone knows to treat it with an appropriate grain of salt.
Even if you are one of those who likes to code by correcting LLMs all the time, surely you understand that if your LLM can make candy out of poo when you post an issue, then it can do the exact same thing when it processes the issue and makes a PR. Next month it will likely do a better job at parsing your quick writing, and having it immediately "upscaled" would only hinder future performance.
I’ll call this what it is: a commercial product (they have a pricing page) that uses open source as marketing to sell more licenses.
The only PRs they want are ones that offer free professional level labor.
They’re too uncaring about the benefits of an open community to come up with a workflow to adapt to AI.
It honestly leaves me with little confidence that they can maintain their own code quality standards with their own employees.
Think about it: when/if this company grows to a larger size, if they can't handle AI slop from contributors, how can they handle AI slop from a large employee base?
Arguably, LLM tokens are expensive, so LLM-generated code could be considered a donation? But then so is the labor involved, so it's kind of moot. I don't believe people pay software developers to write code for them to contribute to open source projects either (if that makes any sense).
Exactly my takeaway from current AI developments as well. I am also confused by corporate or management types who seem to think they are immune to AI developments. If AI ever does get to the point where it can write flawless code, what exactly makes them think they will do any better at composing these tools than the developers who've been working with this technology for years? Their job security hinges precisely ON THE FACT that we are limited by time and need managed teams of humans to create larger projects. If this limitation falls, I feel like their jobs would be the first on the chopping block, long before mine as a developer. Competition from tech-savvy individuals would be massive overnight. Very weird horse to bet on unless you are part of a frontier AI company that actually controls the resources.
(The web site doesn't make it very obvious, but it appears to be a UK company, and Ruiz seems to live in the UK. So not impossible!)
Modern low/no-code platforms are advertised as exactly that... and yet they don't replace software developers. (In fact, some of the projects my employer runs are migrations away from low/no-code platforms in favor of code, because performance and other nonfunctional concerns are hidden away. We had a major outage as a result when traffic increased.)
Because it does. The goal here isn't to create good code, it's to create the impression of a person who writes good code. Even now, with the software career path in freefall, for many people in poor countries it's still their only way out of poverty, so they'll try everything possible to build a portfolio and get a job; the suffering of your little pet project isn't part of the equation. Those people aren't trying to win Nobel prizes, they're trying to get any job that isn't farming with literal medieval-era technology.
My very radical personal opinion is that either we have small elitist circles of trust, or the internet will remain a global ghetto.
Their slop issues do not actually have value because the fixes based on the slop are equal in their sloppiness.
The author could instead create these slop issues in a place where external contributors can't see them, rather than shitting on the contributors for not reading their mind.
Really bizarre lack of self awareness. How do the internal contributors deal with the slop? I wonder what they say about this person in private.
A proper bug report consists of:
1. Clear steps to reproduce (ideally using the prepared testcase as input, if applicable)
2. A description of the behavior observed from the program
3. A description of the expected behavior
4. Optionally, your justification for why the program should be changed to behave the way described in #3 rather than #2
Everything else belongs on a message board, mailing list, or social media.
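For concreteness, a hypothetical report in that shape (invented program and details, not from any real tracker) might look like:

```text
Title: "frobnicate --dry-run" still writes to the output file

1. Steps to reproduce: run `frobnicate --dry-run input.txt` (testcase attached)
2. Observed: output.txt is created and contains the transformed text
3. Expected: no files are written; the transformation is only printed
4. Justification: --dry-run is documented as "make no changes on disk"
```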
But this is all totally foreign to, like, 80% of GitHub's userbase (including the majority of the project managers aka maintainers who are in charge of allowing/disallowing the sorts of things that people post as a way of shaping the tone and tenor of the space).
E.g. maybe you have your application open in a browser and are currently viewing a page with a very prominent red button. You hit that /issue command with "button should be yellow not red".
That half-sentence makes sense if you also have that open browser window as context, but would be completely cryptic without it.
An AI could use both the input and the browser window to generate a description like "The background color of the #submit_unsafe button widget in frontend/settings/advanced.tsx should be changed from red to yellow." or something.
Sort of like a semantic equivalent to realpath if you want.
I do see utility in that.
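For what it's worth, here is a minimal sketch of that idea in TypeScript. Everything in it is hypothetical: the `UiContext` fields, the `expandReport` name, and the output shape are all invented, and the model call is left as an injected `complete` function rather than any real API.

```ts
// Hypothetical sketch only: none of these names come from tldraw or any
// real tool, and the model call is injected rather than a specific API.

interface UiContext {
  route: string;             // page the reporter was looking at
  focusedElementId?: string; // element under the cursor, if known
  sourceFile?: string;       // component file mapped from that element
}

interface DraftIssue {
  title: string;
  body: string;
}

// `complete` stands in for whatever LLM client you actually use.
async function expandReport(
  utterance: string, // e.g. "button should be yellow not red"
  ctx: UiContext,
  complete: (prompt: string) => Promise<string>,
): Promise<DraftIssue> {
  const prompt = [
    "Rewrite this rough bug note as a precise issue.",
    `Note: ${utterance}`,
    `Page: ${ctx.route}`,
    ctx.focusedElementId ? `Element: #${ctx.focusedElementId}` : "",
    ctx.sourceFile ? `Component: ${ctx.sourceFile}` : "",
    "Reply with a one-line title, then the body.",
  ]
    .filter(Boolean)
    .join("\n");

  // The model sees both the half-sentence and the surrounding context,
  // which is what makes the terse input resolvable at all.
  const text = await complete(prompt);
  const [title = "", ...rest] = text.split("\n");
  return { title: title.trim(), body: rest.join("\n").trim() };
}
```

The point being that the browser window, not the half-sentence, carries most of the information; the model's job is mostly to join the two.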
BigBlueButton had to fork tldraw because of this. https://docs.bigbluebutton.org/new-features/#we-have-forked-...
Ignoring AI for a moment: I don't expect anyone to be able to write a design doc from my own random notes about a problem. They are semi-formed, disconnected ideas that need a lot of refinement. I know that, I have plans around them, and I know much more context, but if some random person were to take them, the outcome would be very bad, or at least require a lot more effort.
A random person has very little chance of being successful with that.
This issue is very similar, only with some AI tools intermediating the notes.
There's a reason that "issues" on collaborative code platforms (not just GH but also GL) end up being used for much more than bugs:
- message boards suffer from the SSO friction issue. No thanks I will not sign up at some phpBB board of questionable admin quality that will get 0wned sooner than later, or have the board owner bombard me with advertising themselves.
- mailing lists are even worse usability-wise because these by design leak your email address, on top of that their management UI often enough is Mailman which means it probably still stores passwords in cleartext, and spam filters, attachment size limits and overeager virus scanners make it a living hell
- IRC suffers from context loss. Netsplit, go for a smoke and the laptop goes to sleep, whoops, you disconnected and don't see what happened in the meantime. Yes, there's bouncers, but honestly, the UX sucks hard. Also, no file transfers to a channel, no native screenshot/paste functionality.
- Discord, Slack etc. solve the pains of IRC but are walled gardens
- Social media... yikes. No, no, no. Eventually, people that follow both you and the author of some FOSS software get pissed off by your conversation spamming their feed. (Too) many are still only active on Twitter which excludes people who don't want to be on that hellsite. Bluesky, good luck finding non-commies there. Mastodon, good luck and pray that your instance operator and the instance operator of the project team didn't end up in some bxtchfight escalating in defederation. Facebook groups, not everyone wants to leak their real name.
- messenger groups (especially Telegram)... blergh. You will drown in spam.
GH/GL are the sweet spot between UX/SSO friction (because pretty much everyone who would want to file an issue has an account) and features, and on top of that both platforms have deals with email providers that keep their mail from getting blocked. That's why these two platforms are so far superior to everything else mentioned.
> /issue you know that paint bucket in google docs i want that for tldraw so that I can copy styles from one shape and paste it to another, if those styles exist in the other shape. i want to like slurp up the styles
What kind of context may be there?
Also, the entire repository and issue tracker is context. Over time it gets only more complete.
Wouldn’t it be easier to just open the inspector, find the css class, grep the source code, and then edit the properties? It could be even easier in an SPA where you just have to find the component file.
>Socialism never took root in America because the poor see themselves not as an exploited proletariat, but as temporarily embarrassed millionaires.
"CEO creates low-effort bug report" -> "CEO uses the low-effort bug report as a starting point to further refine the report and eventually fix the issue in his company's product"
GitHub is an SSO provider and has been for a long time. This criticism is ignorant.
Aside from that, there's nothing stopping anyone from using GitHub's dedicated message boards for message board stuff, or, before those existed, shunting it all off into the "issues" of a separate "$PROJECT/community-bullshit" "repo" instead of cluttering up the actual bugtracker.
> Social media... yikes. No, no, no.
I'm talking about the appropriate-for-social-media stuff people are already posting on GitHub issues. It's like you started writing your comment and lost the context. People are today already misusing GitHub issues for this. I'm saying keep the stuff best kept to social media and email... on social media and email. Don't clutter the bugtracker with it, and for project managers: don't let other users do it either. (You will lose contributors who know how to use a bugtracker efficaciously and are accustomed to it but have a fixed time budget and don't want to have to sift through junk for the privilege of doing free and thankless QA on your software.)
> You will drown in spam.
The irony. Help. It burns.
For emphasis: Everything that isn't a bug belongs on a message board, mailing list, or social media, and not on the bugtracker. Anyone who can't abide by this simple, totally reasonable request should be booted.
So does git and GitHub. Last I checked, authoring a git commit with an email address associated with your GitHub account is what makes GitHub attribute that commit to your account. I assume Gitlab works in a very similar way.
"But 'git clone' is soooo much harder than reading through mailing list archives!" Nah.
This can be a substantial effort, especially if you're not familiar with the project.
There are a lot of people who are not programmers at all. I can teach my plumber everything you said (learning it myself is the easy part), but it will take years. In the end they just know "that button I'm pointing my finger at should be yellow, not red." How do we transfer that pointed finger to a ticket? That is the question here.
The problem is, a "sign up to contribute" step is a source of friction. It will almost always leak my email. In contrast, I'm already logged in to GitHub.
[1] https://docs.github.com/en/account-and-profile/reference/ema...
There’s a reason the Support and IT Technician role exists. They’re there for talking to the end user. And they in turn will write a proper report to Engineering.
If you want to wear both hats at once, that is fine. If you want an agent to be your support middleman, that is also fine. But most LLM proponents are acting like it’s a miracle solution to some engineering bottleneck.
Oh, I was unaware of that. I've not seen anyone use it, [0] but I've only paid any attention to the Big Corporate and Traditional Hacker populations.
Thanks much for the information.
[0] I'm certain that folks do use it, so folks shouldn't bother pointing out people that do.
If you set user.email using git-config on your machine to a real email address and decide to author and publish commits with it, then GitHub will, of course, not be able to stop you (aside from maybe rejecting the commits when you tried to push them). It can't just arbitrarily rewrite the email address in the commit. That would break Git's data model.
This week I wrote an issue on tldraw's repository about a new contributions policy. Due to an influx of low-quality AI pull requests, we would soon begin automatically closing pull requests from external contributors.

I was expecting to have to defend my decision, but the response has been surprisingly positive: the problem is real, tldraw's solution seems reasonable, maybe we should do the same.
The post did prompt some good conversation on Twitter and Hacker News, though. Much of the discussion focused on recognizing AI code, using AI to evaluate pull requests, and generally preventing people from using AI tools to contribute.
I think this misses the point. We already accept code written with AI. I write code with AI tools. I expect my team to use AI tools too. If you know the codebase and know what you're doing, writing great code has never been easier than with these tools.
The question is more fundamental. In a world of AI coding assistants, is code from external contributors actually valuable at all?
If writing the code is the easy part, why would I want someone else to write it?
Every repository gets bad pull requests. A few years ago I submitted a full TypeScript rewrite of a text editor because I thought it would be fun. I hope the maintainers didn't read it. Sorry.
One of my first big open source contributions was implementing more arrowheads for Excalidraw. I wrote the code and submitted a PR. The next day, the maintainers kindly pointed me toward their issues-first policy and closed my request.
I stuck with it, though. I was sketching state charts, I wanted the little dots on my arrows, and I cared enough to make it happen. So I stayed engaged and got caught up on the history of the problem. It turned out not to be an implementation question but a design problem. How do we let a user pick the start and end arrowheads? Which components can be adapted, which need to be created? Do we need new icons?
It took a few rounds of constructive iterations, moved along by my research and design work, before we all aligned on a solution. I wrote a new PR, we landed it, and now you can put a little dot or whatever on the end of your Excalidraw arrows.
If this were happening today, what would be different?
We would still need to have discussed the problem and designed a solution. It would have still required the sustained attention of a contributor who cared and wanted to see the change go in. My prototypes would still be useful for research and design but their cost to produce would be lower. I would have made more of them.
Once we had the context we needed and the alignment on what we would do, the final implementation would have been almost ceremonial. Who wants to push the button?
We'd started getting pull requests that purported to fix reported issues but which were, in retrospect, obvious "fix this issue" one-shots by an author using AI coding tools. Without broader knowledge of the codebase or the purpose of the project, the AI agents were taking the issue at face value and producing a diff. Any problems with the issue were multiplied in the pull request, leading to some of the strangest PRs I'd ever seen.
These pull requests would have been incredible a few months prior. They all looked good. They were formally correct. Tests and checks passed. We'd even started landing some. Then I started to notice unusual patterns. Authors almost always ignored our PR template. Even large PRs would be abandoned, languishing because their authors had neglected to sign our CLA. Commits would be spaced out strangely, with too-brief gaps between each commit. Checking into the authors revealed dozens of PRs across dozens of repositories. And why not?
Many other problematic contributions were more obvious by sheer weirdness. Authors would solve a problem in a way that ignored existing patterns, inlined other parts of the codebase, or went hard into a random direction. Once or twice, I would begin fixing and cleaning up these PRs, often asking my own Claude to make fixes that benefited from my wider knowledge: use this helper, use our existing UI components, etc. All the while thinking that it would have been easier to vibe code this myself.
From the discussions this week, it sounds like this is the default experience of every public repository maintainer right now. I could be describing any repo that takes external contributions. To use Excalidraw as another example: they received more than twice as many PRs in Q4 of 2025 as in Q3. While we receive far fewer contributions, our problem recently got worse in a way that forced my hand.



More recently, we started getting PRs that were better-formed but still so far off-base that I knew something had changed. These were pull requests that claimed to solve a problem we didn't have or fix a bug that didn't exist. Each was claiming to close an issue.
A glance at the linked issue confirmed the problem: one of my own AI scripts, a Claude Code /issue command, was giving bad directions.
As a high-powered tech CEO, I'm constantly running into bugs and small UX nits that I don't have time to fully document but do want to capture for later. Previously, I would fire these into Linear or our issues as empty tickets—"fix bug in sidebar"—then hope I remembered what exactly it was when I came back to it later.
To help with this, I'd created a /issue command that turned these low-specificity inputs into well-formed issues. I would run into a bug and write something quickly like /issue dot menu should stay visible when the menu is open sidebar only desktop, fire off Claude Code to try and figure it out, and continue with my work. I'd get an issue, then a follow-up reply with a root-cause analysis for bugs or a suggested implementation.
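To give a sense of the shape, here's a simplified sketch (not the exact command): Claude Code custom slash commands are markdown prompt files, so a /issue command can be as small as a .claude/commands/issue.md along these lines, with $ARGUMENTS standing in for whatever you type after the command.

```markdown
<!-- .claude/commands/issue.md (simplified sketch, wording invented) -->
Turn the rough note below into a well-formed GitHub issue for this repository.
Search the codebase for the relevant components, write clear reproduction
steps (or a concrete feature description), note anything you had to guess at,
and file the issue. Then reply with a root cause analysis or a suggested
implementation.

Note: $ARGUMENTS
```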

As an example, I'm pretty sure my input for this one was something like /issue you know that paint bucket in google docs i want that for tldraw so that I can copy styles from one shape and paste it to another, if those styles exist in the other shape. i want to like slurp up the styles. It was probably even fewer words than that. "Slurp" sounds like me.
When it worked—when the issue was obvious or my input was detailed enough—Claude would go off and make a well-researched issue that was ready to be solved. I'd often follow up with another command to /take the issue and take a shot at implementing it. But if the issue was complex or my input contained too little information (or if I drew an unlucky seed) then the AI would head off in the wrong direction and produce an issue with an imagined bug or junk solution. I'd close the issue, spend more time refining it, or sometimes decide that the idea was wrong to begin with.
In this system, slop is lubrication. My old "fix button" tickets were poor quality but useful. They added entropy and noise to the system but were worth it to capture a certain type of bug or idea. This new system is a similar exchange, introducing noise in exchange for other things. What's different is that this noise is well-formed noise. To an outsider, the result of my fire-and-forget "fix button" might look identical to a professional, well-researched, intellectually serious bug report.
In the past, my awful issues would have been ignored. The issues looked wrong and the effort required to produce a fix would have fallen on the author. The noise was never a problem because no one would have tried to write a high-effort diff based on such a low-effort issue.
AI changed all of that. My low-effort issues were becoming low-effort pull requests, with AI doing both sides of the work. My poor Claude had produced a nonsense issue causing the contributor's poor Claude to produce a nonsense solution.
The thing is, my shitty AI issue was providing value. The contributor's shitty AI solution was not. There was a piece in between—a part of the process I would have done but the contributor could not have done—which was to read the issue and decide whether it made sense. Instead, I'd accidentally put out a call for pointless contribution.
This leaves us at a place where the best thing to do for our codebase is to shut down external contributions, at least until GitHub offers better support for controlling who can contribute, when, and where. Assuming the social contract of code contribution remains unchanged, then we need better tools to maintain the peace.
But if you ask me, the bigger threat to GitHub's model comes from the rapid devaluation of someone else's code. When code was hard to write and low-effort work was easy to identify, it was worth the cost to review the good stuff. If code is easy to write and bad work is virtually indistinguishable from good, then the value of external contribution is probably less than zero.
If that's the case, which I'm starting to think it is, then it's better to limit community contribution to the places it still matters: reporting, discussion, perspective, and care. Don't worry about the code, I can push the button myself.