> If you truly wish to be helpful, please direct your boundless generative energy toward a repository you personally own and maintain.
This is a habit humans could learn from. Publishing a fork is easier than ever. If you aren’t using your own code in production you shouldn’t expect anyone else to.
If anyone at GitHub is out there: look at the stats for how many different projects, on average, a user opens PRs against per day (projects they aren't a maintainer of). My analysis of a recent day using GH Archive showed 99% at 1, 1% at 2, and 0.1% at 3. There are so few people opening PRs against 5+ repos that I was able to review them manually. They are all bots/scripts. Please rate limit unregistered bots.
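A rough sketch of how such a GH Archive analysis might look, assuming the public githubarchive dataset on BigQuery and a placeholder date; it bins users by how many distinct repos they opened PRs against that day (filtering out repos the user maintains would take extra joins and is not shown):

```python
# Sketch: distribution of distinct repos each user opened PRs against on one day,
# from the public GH Archive dataset on BigQuery. The table date is a placeholder.
from google.cloud import bigquery

client = bigquery.Client()

QUERY = """
SELECT repos_hit, COUNT(*) AS users
FROM (
  SELECT actor.login AS login, COUNT(DISTINCT repo.name) AS repos_hit
  FROM `githubarchive.day.20250601`  -- placeholder date
  WHERE type = 'PullRequestEvent'
    AND JSON_EXTRACT_SCALAR(payload, '$.action') = 'opened'
  GROUP BY login
)
GROUP BY repos_hit
ORDER BY repos_hit
"""

for row in client.query(QUERY).result():
    print(f"{row.users} users opened PRs against {row.repos_hit} distinct repo(s)")
```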
> If you can't explain what your changes do and how they interact with the greater system without the aid of AI tools, do not contribute to this project.
I think part of the deeper issue is that contributing to an OSS project has become a rite of passage, a way to strengthen your profile. If you need to have contributed to look good but you don't really care about the contribution itself then you resort to this kind of trick.
We had a similar plague for vulnerability disclosures, with people reporting that they had "discovered" vulnerabilities like "if you call this function with null you get a NullPointerException". D'uh.
There is also the fact that we're measuring the wrong things, like speed of development. At my previous employer people had jumped fully onto the AI bandwagon, and everyone marvelled at how fast they were. Once I was reviewing a PR and had to tell the author, "dude, all your tests are failing". He just laughed it off. Everyone can produce software very fast if it's not required to work.
> Q: "Isn't it your job as an open-source maintainer/developer to foster a welcoming community?"
The answer to this implies that the requirement to be welcoming only applies to humans, but even in this hostile and sarcastic document, it doesn't go far enough.
Open source maintainers can be cruel, malicious, arbitrary, whatever they want. They own the project, there is no job requirements, you have no recourse. Suck it up, fork the thing, or leave.
The bigger issue is that that kind of statement is highly manipulative, and indicates someone who is playing politics instead of focusing on results.
The better response is to call the bluff, something along the lines of: "Running an open-source project is quite time consuming. Please don't waste our time with emotional manipulation to get your way. Instead, take the time to understand why your LLM-generated pull request is not useful. You can start by understanding that we have access to LLMs too, and realize that a significant amount of work needs to happen after an LLM proposes changes."
While I am with you on hoping, someone shamelessly PRing slop just is not going to feel shame when one of their efforts fails. It's like being mean to a phone scammer: they just hang up and do it again.
No, when people attend courses, paying money for the privilege no less, and get told "Now open a pull request", they don't care about your project; they care about getting their instructor to say they've done a good job.
It's actually a valuable signal to the phone scammer if you're mean, because that means they can stop wasting their own effort of scamming you, and call somebody else.
That is hilarious. I love that you believe that. Being mean to a phone scammer is about your feelings and your time. They do not care. More importantly, the next person who calls you is not gonna be the same person. It’s like slamming the door on some Mormons expecting that that’ll be the end of that, when there’s just two entirely different Mormons that are gonna come by a month later. They cannot have a memory of the thing you did to the other Mormons.
Congratulations on not getting the point, which was that for the scammer this is a business transaction, and if they can get an early signal that it is not going to work, they can cancel and move on to the next one. So they optimize for getting unpromising targets off the line as early as possible once they figure it isn't going to happen.
I think some folks genuinely don’t realize how selfish and destructive they’re being or at least believe they help more than they hinder. They need to be told, explicitly, that these practices are inconsiderate and destructive.
We need to develop some ethics, or at least "community standards" (as they may vary significantly between different use cases), around some of the things this essay talks about. I know I've really been pondering the mismatch between human attention and the ability of LLMs to generate things that consume human attention.
We are still mostly running on the inertia of a world where a PR required a certain amount of human attention to generate 500 lines of proposed changes, and even then, nothing stopped such a PR from being garbage. But at least the rate of garbage PRs was bounded by the supply of that very specific kind of developer who was A: capable of writing 500 lines of diffs in the first place but B: didn't realize these particular 500 lines were a bad idea. Certainly not an empty set, but also certainly much more restricted than "everyone with the ability to set up a code bot and type something".
Code used to be rare, and therefore, worth a lot. Now it's not rare. 1500 lines of 2026 code is not the same as 1500 lines of 2006 code. The ceiling of the value of a contribution is how much work the user put in and how high quality that work is. If "the work the user put in" is 30 seconds typing a prompt, that's the value, no matter how many lines of code some AI expanded it into. I'd honestly rather have an Issue filed with your proposed prompt in it than the actual output of your AI, if that's all you're going to put into the PR. There's a lot I can do with that prompt that may make it better; it's way harder to do that with the code.
You know, stuff like that. That might actually be a useful counter to some of these slop posts, especially the ones that may be a good idea but need someone to treat the prompt itself as the starting point rather than the code. Maybe that's a decent response that's somewhat less hostile: close out these PRs with a request to file an Issue with the prompt instead.
I see plenty of well meaning people use ChatGPT and think they’re being helpful. You’re better off with patience and polite explanation than assuming they’re all cynical/selfish assholes trying to cut corners. Some people just get excited and don’t really think about what they’re doing. It doesn’t excuse the behavior, but you should at least try to explain it to them once. Never know when you might educate someone.
I've seen a variety of approaches used (I'm not usually the one doing the confronting) but I still haven't seen any shame, etc. Which is weird, because it's not like it's one monolithic group? But it's still what I've seen.
It might be that people have their change of heart more privately, of course.
Cheap, nearly free voice phone calls killed old fashioned phone service. Once the incoming spam exceeded 95% I shut off the ringer and no longer use voice phone calls.
Once the cost of generating push media drops low enough (close enough to zero) the media is dead.
Pull requests are (ironically) a push media, and infinite zero effort PRs can be generated, therefore PRs are dead.
The proper way to handle the situation is to no longer accept PRs.
On GitHub, enter a repo, go to "Settings" > "General", scroll down to Features, then uncheck "Pull requests", or at least set it to collaborators only. You probably need to shut off Issues too.
On GitLab (I'm not as certain about this), enter a repo, go to "Settings" > "Visibility", and change "Merge Requests" to "Only project members".
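If you maintain many repos, the same lockdown can be scripted. A minimal sketch, assuming a GitHub token with admin rights and a GitLab token with Maintainer access; it uses GitHub's repository API to switch off the issue tracker and GitLab's project API to restrict merge requests and issues to project members (the exact UI labels above may differ by version, and the repo/project identifiers below are placeholders):

```python
# Sketch: lock down contribution surfaces via the GitHub and GitLab REST APIs.
# Tokens and repo/project identifiers are placeholders.
import requests

GITHUB_TOKEN = "ghp_..."    # needs admin rights on the repository
GITLAB_TOKEN = "glpat-..."  # needs Maintainer access on the project

# GitHub: disable the issue tracker on a repository.
requests.patch(
    "https://api.github.com/repos/OWNER/REPO",
    headers={
        "Authorization": f"Bearer {GITHUB_TOKEN}",
        "Accept": "application/vnd.github+json",
    },
    json={"has_issues": False},
).raise_for_status()

# GitLab: restrict merge requests (and issues) to project members only.
requests.put(
    "https://gitlab.com/api/v4/projects/PROJECT_ID",
    headers={"PRIVATE-TOKEN": GITLAB_TOKEN},
    json={
        "merge_requests_access_level": "private",
        "issues_access_level": "private",
    },
).raise_for_status()
```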
It's a post-AI world; those features cannot be left enabled on the internet anymore. Anything that accepts pushes from the public will get spammed into unusability. As a social activity, PRs are dead. They were nice, but they are dangerous to leave enabled on the internet now. Oh well, that's the cost of AI.
I recently had a quandary at work. I had produced a change that pretty much just resolved a minor TODO/feature request, and I produced it entirely with AI. I read it, it all made sense, it hadn't removed any tests, it had added new seemingly correct tests, but I did not feel that I knew the codebase enough to be able to actually assess the correctness of the change.
I want to do good engineering, not produce slop, but for 1 min of prompting, 5 mins of tidying, and 30 mins of review, we might save 2 days of eng time. That has to be worth something.
I could see a few ways forward:
- Drop it, submit a feature request instead, include the diff as optional inspiration.
- Send it, but be clear that it came from AI, I don't know if it works, and ask the reviewers to pay special attention to it because of that...
- Or Send it as normal, because it passes tests/linters, and review should be the same regardless of author or provenance.
I posted this to a few chat groups and got quite a range of opinions, including varying the approach by how much I like the maintainer. Strong opinions for (1), weak preferences for (2), and a few advocating for (3).
Interestingly, the pro-AI folks almost universally doubled down and said that I should use AI more to gain more confidence – ask how can I test it, how can we verify it, etc – to move my confidence instead of changing how review works.
I thought that was an interesting idea that I hadn't pushed enough, so I spent a further hour or so prompting around ways to gain confidence, throughout which the AI "fixed" so many things to "improve" the code that I completely lost all confidence in the change because there were clearly things that were needed and things that weren't, and disentangling them was going to be way more work than starting from scratch. So I went with option 1, and didn't include a diff.
Here’s what you could do if you somehow found yourself with an LLM-generated change to a codebase implementing a feature you want, and you wanted to do the most to expedite the implementation of that feature without disrespecting and alienating maintainers:
1. Go through all changes, understand what changed and how it solves the problem.
2. Armed with that understanding, write (by hand) a high-level summary of what can be done (and why) to implement your feature.
3. Write a regular feature request, and include that summary in it (as an appendix).
Not long ago I found myself on the receiving end of a couple of LLM-generated PRs and partly LLM-generated issue descriptions with purported solutions. Both were a bit of a waste of time.
The worst thing about the PRs is when you cannot engage in a good-faith, succinct and quick "why" sort of discussion with the submitter as you go through the changes. Also, when a PR fails to notice a large-scale pre-existing pattern I would want it to follow to reduce mental overhead, and instead writes something completely new, I have to discard it.
For issues and feature requests, there was some "investigation" the submitter thought would be helpful to me. It ended up being a bit misleading, and at the same time I noticed that people may want to spend the same total amount of effort on writing it up, except now part of that effort goes towards their interaction with some LLM. So I asked them to just focus on describing the issue from their human perspective; if they feel like they have extra time and energy, they should put more into that instead.
If it happens at work, I obviously still get paid to handle this, but I would have to deprioritise submissions from people who ignore my requests.
> Go through all changes, understand what changed and how it solves the problem.
GP has said that they can't do this, since they're unfamiliar with the language and that specific part of the codebase. Their best bet AIUI is (1) ask the AI agent to reverse engineer the diff into a high-level plan that they are qualified to evaluate and revise, if feasible, so that they can take ownership of it and make it part of the feature request, and (2) attach the AI-generated code diff to the feature req as a mere convenience, labeling it very clearly as completely unrevised AI slop that simply appears to address the problem.
Not being familiar with a part of a codebase is not an incurable condition.
If the conclusion you drew from that is that there is no workaround, then let that be the entire point. The alternatives are to get over yourself and ask people to implement the feature, or to understand how to help and then help.
The former is what OP did; the latter is what I described: an efficient way of achieving it while making use of an LLM-produced PR.
Thanks, doing my best. It's one of the reasons I want to get more of my AI-skeptical colleagues onboard with AI development. They're skeptical for good reasons, but right now so much progress is being driven by those who lack skills, taste, or experience. I understand those with lots of experience being skeptical at the claims, I like to think I am too, but I think there's clearly something here, and I want more people who are skeptical to shape the direction and future of these technologies.
Being a skeptic doesn't make one an irrational hater (surely such people exist, and they might be noisy enough to taint all skeptics by association).
I am learning how to make good use of agent assisted engineering and while I'm positively impressed with many things they can do, I'm definitely skeptical about various aspects of the process:
1. Quality of the results
2. Maintainability
3. Overall saved time
There are still open problems, because we're introducing a significant change in the tooling while keeping the rest of the process unchanged (often for good reasons). For example, consider the imbalance in code review cost: some people produce tons of changes and the rest of the team drowns under the review burden.
This new wave of tooling is undoubtedly going to transform the way software is developed, but I think many jump too quickly to the conclusion that we've already figured out exactly what that is going to look like.
I'd say that the worst thing that can happen to a developer using Claude etc is detachment from the code.
At some point the code starts to be "not yours"; you don't recognise it anymore. You don't have a connection to it. It's like every day you're working at a different company...
> I did not feel that I knew the codebase enough to be able to actually assess the correctness of the change.
> I want to do good engineering, not produce slop, but for 1 min of prompting, 5 mins of tidying, and 30 mins of review, we might save 2 days of eng time.
I don't really understand where the "2 days of engineering time" comes from.
What exactly would prevent someone who does know the codebase from doing "1 min of prompting, 5 mins of tidying, and 30 mins of review", but then actually understanding whether the changes make sense or not?
More general question: why do so many slopposters act like they are the only ones who have access to a genAI tool? Trust me, I also have access to all this stuff, so if I wanted to read a bunch of LLM-slop I could easily go and prompt it myself, there is no need to send it to me.
> Interestingly, the pro-AI folks almost universally doubled down and said that I should use AI more to gain more confidence – ask how can I test it, how can we verify it, etc – to move my confidence instead of changing how review works.
I think this is a good suggestion, and it's what I usually do. If, at work, Claude generated something I don't fully understand already, and what it generated works as expected when experimentally tested, I ask it "why did you put this here? what is this construct for? how will this handle this edge case?" and specifically tell it not to modify anything, just answer the question. This way I can process its output "at human speed" and actually make it mine.
Unfortunately not possible in this case for technical reasons, not a library in the traditional sense, significant work to fork, etc. This is in the Google monorepo.
To be entirely fair, "sort of working, solving a problem but not really all that great for the rest of the codebase" PRs are a human thing too.
The problem is AI generating them en masse, and frankly most people put in far less effort than even your first paragraph describes, blindly pushing stuff they have not even read, let alone understood.
> Interestingly, the pro-AI folks almost universally doubled down and said that I should use AI more to gain more confidence – ask how can I test it, how can we verify it, etc – to move my confidence instead of changing how review works.
Well, it's not terrible at just getting your bearings in a codebase; the most productive use I've gotten out of it is treating it as "turbo grep" to look around existing codebases and figure things out.
>I thought that was an interesting idea that I hadn't pushed enough, so I spent a further hour or so prompting around ways to gain confidence, throughout which the AI "fixed" so many things to "improve" the code that I completely lost all confidence in the change because there were clearly things that were needed and things that weren't, and disentangling them was going to be way more work than starting from scratch.
I feel this so much.
In my opinion, all of the debate around accepting AI generated stuff can be boiled down to one attribute, which is effort. Personally, I really dislike AI generated videos and blogs for example, and will actively avoid them because I believe I "deserve more effort".
Similarly, for AI generated PRs, I roll my eyes when I see an AI PR, and I'm quicker to dismiss it than a human written one. In my opinion, if the maintainers cannot hold the human accountable for the AI generated code, then it shouldn't be accepted. This involves asking questions, and expecting the human to respond.
I don't know if we should gatekeep based on effort or not. Obviously the downside is, you reduce the "features shipped" metric a lot if you expect the human to put in the same amount of effort, or a comparable amount of effort as they would've done otherwise. Despite the downside, I'm still pro gatekeeping based on effort (It doesn't help that most of the people trying to convince otherwise are using the very same low effort methods that they're trying to convince us to accept). But, as in most things, one must keep an open mind.
> but I did not feel that I knew the codebase enough to be able to actually assess the correctness of the change.
The good engineering approach is to verify that the change is correct. More prompts for the AI do nothing; instead, play with the code, try to break it, and write more tests yourself.
I exhausted my ability to do this (without AI). It was a codebase I don't know, in a language I don't know, solving a problem that I have a very limited viewpoint of.
These are all reasons why pre-AI I'd never have bothered to even try this, it wouldn't be worth my time.
If you think this is therefore "bad engineering", maybe that's true! As I said, I ended up discarding the change because I wasn't happy with it.
> I exhausted my ability to do this (without AI). It was a codebase I don't know, in a language I don't know, solving a problem that I have a very limited viewpoint of.
And that's the critical point! I think it's fine to send the diff in; and clearly mark it as AI / vibe-coded. (Along with your prompts.)
> but I did not feel that I knew the codebase enough to be able to actually assess the correctness of the change.
> I want to do good engineering, not produce slop, but for [...]
If this is true, you can already stop. This will never be good engineering. Guess-and-check, which is what you're describing, means letting the statistical probability machine make a prediction and then, instead of verifying it, assuming the tests will check your work for you. That's... something, but it's not good engineering.
> That has to be worth something.
if it was so easy, why hasn't someone else done it already? Perhaps the cost/value trade-off, in a codebase you don't understand, isn't actually worth that specific something?
> I could see a few ways forward:
> Send it, but be clear that it came from AI, I don't know if it works, and ask the reviewers to pay special attention to it because of that...
so, offload all the hard work onto the maintainers? Where are those 2 days of eng time you're claiming in that case?
> Or Send it as normal, because it passes tests/linters, and review should be the same regardless of author or provenance.
Guess-and-check is not good engineering.
> Interestingly, the pro-AI folks almost universally doubled down and said that I should use AI more to gain more confidence – ask how can I test it, how can we verify it, etc – to move my confidence instead of changing how review works.
the pro-ai groups are pro AI? I wouldn't call that interesting. What did the Anti-AI groups suggest?
> the AI "fixed" so many things to "improve" the code that I completely lost all confidence in the change because there were clearly things that were needed and things that weren't, and disentangling them was going to be way more work than starting from scratch.
Yeah, that's the problem with AI, isn't it? It's not selling anything of significant value... it's selling false confidence in something of minimal value, which only becomes valuable with a lot of additional work from someone who understands the project. Work that, as you already pointed out, can only be offloaded onto the maintainers who understand the codebase...
General follow up question... if AI is writing all the PRs, what happens when eventually no one understands the code base?
The keywords "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted exactly as how much we do not want to review your generated submission.
I know it is in jest, but I really hate that so many documents include “shall”. The interpretation of which has had official legal rulings going both ways.
You MUST use less ambiguous language and default to “MUST” or “SHOULD”
Around 1990 I attended ISO/JTC1 meetings generating standards for data communication. I still recall my surprise at the heated arguments over these words between the UK and the US delegations (I'm from Denmark). In particular, 'shall' and 'should' meant different things in British and American English. ISO's first standard, ISO 1, states that ISO Standards shall be written in English, so we had to do that, the US delegation too. Similarly, Scott Bradner stated in RFC 2119 how American conventions should be followed for future IETF STDs.
So I'm confident that the word 'shall' has a strong meaning in English; whether it does too in American legalese I cannot tell.
Not a lawyer, but I have heard of a few American legal cases where the judge had to decide the meaning of a “shall”, so it does not seem well settled from my vantage.
Right. I think when these appear in some documentation related to computing, they should also mention whether it is using these words in compliance with RFC 2119 or RFC 6919.
One (possibly weak) counterpoint I can offer is that in some languages, "must not" is a false friend, easily misinterpreted as "is not required to" ("it is not the case that they must").
Must is a strict requirement, no flexibility. Shall is a recommendation or a duty, you should do it. You must put gas in the car to drive it. You shall get an oil change every 6000 miles.
Legal documents use "may" to allow for something. Usually it only needs to be allowed so that it can happen. So I read terms of service and privacy policies like all "may" is "will". "Your data may (will) be shared with (sold to) one or more of (all of) our data processing partners. You may (will) be asked (demanded) to provide identity verification, which may (will) include (but is not limited to) [everything on your passport]." And so on.
To quote TFA: "...outputs strictly designed to farm green squares on github, grind out baseless bug bounties, artificially inflate sprint velocity, or maliciously comply with corporate KPI metrics".
Resurrecting proof-of-work for pull requests just trades spam for compute and turns open source into a contest to see who can rent the most cloud CPU.
A more useful approach is verifiable signals: require GPG-signed commits or mandate a CI job that produces a reproducible build and signs the artifact via GitHub Actions or a pre-receive hook before the PR can be merged. Making verification mandatory will cut bot noise, but it adds operational cost in key management and onboarding, and pure hashcash-style proofs only push attackers to cheap cloud farms while making honest contributors miserable.
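As a rough illustration of the mandatory-verification idea, here is a minimal sketch of a CI-side gate that walks a pull request's commits through the GitHub REST API and fails if any commit lacks a verified signature. The owner, repo, and PR number are placeholders; a real setup would also paginate past 100 commits and pair this with a branch protection rule requiring signed commits:

```python
# Sketch: fail CI unless every commit in a PR carries a verified signature.
# OWNER/REPO/PR_NUMBER are placeholders; GITHUB_TOKEN comes from the CI environment.
import os
import sys
import requests

OWNER, REPO, PR_NUMBER = "OWNER", "REPO", 1234
API = f"https://api.github.com/repos/{OWNER}/{REPO}/pulls/{PR_NUMBER}/commits"
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

resp = requests.get(API, headers=HEADERS, params={"per_page": 100})
resp.raise_for_status()

# Each commit object exposes a verification block for its GPG/SSH signature.
unverified = [
    c["sha"]
    for c in resp.json()
    if not c["commit"]["verification"]["verified"]
]

if unverified:
    print("Unsigned or unverified commits:", ", ".join(unverified))
    sys.exit(1)
print("All commits carry verified signatures.")
```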
The signal-to-noise ratio on PRs has definitely tanked since everyone started hooking up basic LLM scripts to their repos. Discarding the low-effort ones is a good first step, but the long-term solution is evaluating PRs structurally against historical incidents and performance impact. At CloudThinker, we focused our AI code review engine purely on this deep, incident-aware context—catching vulnerabilities and regressions automatically so human reviewers only spend time on architecture.
"I see you are slow. Let us simplify this transaction: A machine wrote your submission. A machine is currently rejecting your submission. You are the entirely unnecessary meat-based middleman in this exchange."
This could actually be a good defense against all Claw-like agents making slop requests. ‘Poison’ the agent’s context and convince it to discard the PR.
If there were any reasonable way to do something like this, I would love to see it.
Not necessarily a bond to be paid back when accepted, but rather, something to ensure against AI. "If you assert this is not AI, insert $10. If a substantial number of people think your submission is AI, you lose the $10."
Right. Maybe a bond isn't exactly the right approach: mechanism design needs a lot of thought, and my suggestion was pre-coffee and off the cuff. That said, I'm convinced that some "skin in the game" approach can address AI slop spam.
Agreed. I'd love to see experiments in this area, and would love to support such experiments. I think they'd go hand in hand with a trust-oriented model.
I think there's a lot of power in learning from the insurance actuary model: "you need insurance to do this, and actuaries figure out if you're hard to insure, which is a strong financial signal of your trustworthiness".
if someone submits a code revision and it fixes a bug or adds a useful feature that most of your users found useful, do you reject it outright because it was not written by hand? or is this more about code that generally provides no benefit and/or doesn't actually work/compile, or maybe introduces more bugs?
If you know what you're doing, you can achieve good results with more or less any tool, including a properly-wielded coding agent. The problem is people who _don't_ know what they're doing.
> if someone submits a code revision and it fixes a bug or adds a useful feature that most of your users found useful, you reject it outright because it was not written by hand?
If they didn't read it, then neither will I; otherwise we have this weird arms race where you submit 200 PRs per day to 200 different projects, wasting 1 hr of each project's time, 200 hrs total, while incurring only 8 hrs of your own time.
If your PR took less time to create and submit than it takes the maintainer to read, then you didn't read your own PR!
Your PR time is writing time + reading time. The maintainer time is reading time only, albeit more carefully.
Selimenes1 is an 11 day old account which sat silent for 10 days and then all of a sudden starts posting from today, and it's all multiple paragraph responses to threads about AI.
I would like to state for the record that the strategy to swap em-dashes into double-hyphens between the generation and posting step is probably not enough transformation to disguise this behaviour. Whoever is running this clawdbot or whatever it is should really be putting that information on the account page.
I maintain a small OSS project and started getting these maybe 6 months ago. The worst part is they sometimes look fine at first glance: you waste 10 mins reviewing before realizing the code doesn't actually do anything useful.
If the problem is that we don't trust people who use AI without understanding its output, and we base the gate-keeping on tests that are written by AI, then how can we trust that output?
Isn't that the purpose of red/green refactoring though? To establish working software that expresses regression, and builds trust (in the software)?
If your premise is that people would shift to using AI to write tests they don't understand, then that's not necessarily a failing of the contributor.
The contributor might not understand the output, but the maintainer would be able to critique a spec file and determine pretty quickly if implementation would be worthwhile.
This would necessitate a need for small tickets, thereby creating small spec files, and easier review by maintainers.
Also, any PR that included a non spec file could be dismissed patently.
It is possible for users of AI to learn from reading specs.
But if agents are doing the entire thing (reading the ticket, generating the PR, submitting the PR)...then the point of people not understanding is moot.