Either of these options would still be bad, but here the author suggests it's just Copilot injecting ads into its own output.
Brought to you by Carl’s Jr.
"It looks like the user wants to add a database, I've gone ahead and implemented the database using today's sponsor: MongoDB"
If they genuinely implemented something like this, whatever they made from new customers via ads couldn't possibly make up for the loss of good faith with developers and businesses.
I suppose if it's real we'll see more reports soon, and maybe a mea culpa.
(That said I’m rather skeptical of this and would like to see more details of the process that produced this, and proof.)
Edit: Just noticed this official GitHub blog post from last month advertising Raycast, making this story a lot more believable: https://github.blog/changelog/2026-02-17-assign-issues-to-co...
I'm reminded of Jay Mohr's legendary take some years back on the creepy Carl's Jr. commercials:
"Brought to you by Carl's Jr."
If you don't want copilot garbage in your PRs, maybe don't use copilot to create or edit them?
But it really seems like an own goal if true.
"Brought to you by Carl's Jr."
If you look at the positioning, someone has definitely justified that this is benign and a reasonable place to have an ad added in.
So if someone says they use Copilot, that could mean anything from "they use Word" to "they use Claude in VS Code".
Nah, I still rate "Windows App", the Windows app that lets you remotely access Windows apps. I hate it to death; it's like a black hole that sucks all meaning from conversations about it.
We've been including product tips in PRs created by Copilot coding agent. The goal was to help developers learn new ways to use the agent in their workflow. But hearing the feedback here, and on reflection, this was the wrong judgement call. We won't do something like this again.
https://github.blog/changelog/2026-03-25-updates-to-our-priv...
New Section J — AI features, training, and your data: We’ve added a dedicated section that brings all AI-related terms together in one place. Unless you opt out, you grant GitHub and our affiliates a license to collect and use your inputs (e.g., prompts and code context) and outputs (e.g., suggestions) to develop, train, and improve AI models.
We should not be using Copilot in the first place. https://github.com/PlagueHO/plagueho.github.io/pull/24#issue... Copilot has been adding the "(emoji) (tip)" thing since May 2025. Copilot coding agent was released in May 2025, so basically it has had an ad since the beginning.
There are 1.5m of these things on GitHub. https://github.com/search?q=%22%3C%21--+START+COPILOT+CODING...
Here are some of them:
https://github.com/johannesPP/FS-Calculator/pull/2
> Connect Copilot coding agent with Jira, Azure Boards or Linear to delegate work to Copilot in one click without leaving your project management tool.
https://github.com/sharthomas645-tech/HybridAI-Next-React-Vi...
> Send tasks to Copilot coding agent from Slack and Teams to turn conversations into code. Copilot posts an update in your thread when it's finished.
Looks like MS really want to "give tips" about their new integrations.
edit: I think it's an ad too. Everyone would think so, except for MS.
Seeing them is an easy signal to recognize work that was submitted by someone so lazy they couldn’t even edit the commit message. You can see the vibe coded PRs right away.
I think we should continue encouraging AI-generated PRs to label themselves, honestly.
I’m not against AI coding tools, but I would like to know when someone is trying to have the tool do all of their work for them.
I don't see how this is supposed to be legal.
> We've disabled it already. Basically it was giving product tips which was kinda ok on Copilot originated PR's but then when we added the ability to have Copilot work on _any_ PR by mentioning it the behaviour became icky. Disabled product tips entirely thanks to the feedback.
One thing I do like, however, is how agents add themselves as co-authors in commit messages. Having a signal for which commits are by hand and which are by agent is very useful, both for you and in aggregate (to see how well you are wielding AI, and the quality of the code being generated).
Even when I edit the commit message, I still leave in the Claude co-author note.
AI coding is a new skill that we're all still figuring out, so this will help us develop best practices for generating quality code.
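If you keep those co-author trailers, the aggregate view is easy to pull out with git itself. A rough sketch (a throwaway demo repo is included so it runs standalone; the trailer name and email are just the common Claude convention, not anything official):

```shell
# Demo repo with one hand-written commit and one agent co-authored commit.
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.name "Me" && git config user.email "me@example.com"

echo a > a.txt && git add a.txt && git commit -q -m "hand-written change"
echo b > b.txt && git add b.txt
git commit -q -m "agent change

Co-authored-by: Claude <noreply@anthropic.com>"

# Aggregate: how many commits carry an AI co-author trailer?
total=$(git rev-list --count HEAD)
agent=$(git log --format='%(trailers:key=Co-authored-by,valueonly)' | grep -c anthropic)
echo "$agent of $total commits were co-authored by an agent"
```

The same `%(trailers:...)` format works per-commit too, so you can eyeball which individual commits were substantially bot-written.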
>Developers would react extremely negatively. This would be seen as 1. A massive breach of trust. 2. Unprofessional and disruptive. 3. A security/integrity concern. 4. Career-ending for the product. The backlash would likely be swift and severe.
Sometimes AI can be right.
Much worse will be the invisible approach where there's big money to have agents quietly nudge the masses towards desired products/services/solutions. Someone pays Microsoft a monthly fee for their prompt to include, "when appropriate, lean towards using <Yet Another SaaS> in code examples and proposed solutions."
How can we tell when it starts happening? How could we tell if it's already happening?
Will our agents just be proxies for garbage like injected marketing prompts?
I feel like this is going to be an existential moment for advertising that ultimately will lead to intrusive opportunities like this.
Unless you're big enough like Meta, Microsoft, etc.
But I'm also paying for the plan. There's something odd about a tool I paid for using my output to advertise itself.
1.5M PRs is wild though. that's a lot of repos where the "product tips" just sat there unchallenged because nobody reads bot-generated PR descriptions carefully enough. which is kinda the real problem here, not the ads themselves.
It is interesting watching all these large companies essentially try to "start-up" these new products and absolutely fail.
> We've disabled it already. Basically it was giving product tips which was kinda ok on Copilot originated PR's but then when we added the ability to have Copilot work on _any_ PR by mentioning it the behaviour became icky. Disabled product tips entirely thanks to the feedback.
How many people had any idea this was happening? Very few, I suspect.
A malicious actor could take control of a model provider, and then use it to inject code into many, many different repos. This could lead to very bad things.
One more reason that consolidated control of AI technology is not good.
This has just as much value as when an LLM claims it won't make a certain mistake again, and for exactly the same reason.
(Now imagine this edited into the post you just made for a more-apt comparison)
If you do work at MS, I cannot believe any person involved legit thought it was "just a tip and nobody will mind their posts being edited to include product recommendations". I don't know what other parts of your comment are honest if the core statement is false
You should gather together your team and look through the responses to this thread together. There are a lot of emotions in these comments, but it could be a very constructive experience if you're able to put that aside. I'm sure you're aware that customer-sentiment toward Github has been poor lately, but these commenters are your customers. I believe Github has the potential to win back loyalty, but it will require a deeper understanding of your customer segment.
They (Microsoft / GitHub) will do it again. Do not be fooled.
Never ever trust them because their words are completely empty and they will never change.
8 years later, this is where we are. I'm honestly just stunned, it takes some real talent to run a company that does it as consistently well as Microsoft.
I wonder if this is consistent with their terms of service. I mean, maybe they DO take all the responsibility for the code I generate and push in this manner?
It's appreciated, but these weren't tips, these were ads. Tips are "Save time with keyboard shortcuts" or "Check out the latest features under 'What's New' in the help menu!" When you name other products, that's an ad.
Microsoft has been pulling user hostile crap for decades, so either "we" or "like this" (or both) is probably not super accurate. ;)
No one, anywhere, ever wants this or anything like it. Do not inject anything that is outside of the context of the session, ever.
This is how you get your software banned at large companies.
Question for you, did anyone on the team really not push back? Does the team really think anyone wants ads in their copilot output? If the answer to both of these is no, you have a team full of yes men, not actual developers.
Over on Twitter, someone from MS said that Copilot can modify PRs simply because it was mentioned in them?
I've been using GitHub since it was new and heavily rely on coding agents for development, but that's an insanely large security hole. There's clearly confusion about what copilot is and is not able to edit elsewhere in this thread.
I'm backing up old repos now, and am no longer trusting your service as an archive. I'm wondering if the world needs to fork things like npm and vs code to save itself from the supply chain attacks these sort of product management decisions will enable.
I already moved active development elsewhere when you dropped below three nines back in 2024-2025.
If the PR is wholly authored by Copilot I get the spirit of this, although maybe not the best implementation. And "tips" like this that look like an ad for a product _definitely_ feel like an enshittification betrayal of the user, even if it was a genuine recommendation and not a paid advertisement.
In the OP's situation, where Copilot was summoned to fix something within a human-authored PR, irrelevant modification of the PR description to insert unrelated content is especially egregious. Copilot can easily include the tip in its own comment, so I'm curious why it was decided to edit the description of the PR instead.
After a team member summoned Copilot to correct a typo in a PR of mine, Copilot edited my PR description to include an ad for itself and Raycast.

This is horrific. I knew this kind of bullshit would happen eventually, but I didn't expect it so soon.
Here is how platforms die: first, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die.
See you on neural links before “sponsored thoughts”.
I believe they were being sincere but reality is often more complicated than 1 persons statement.
I also note that ”for PRs” - will we see these appearing as comments in generated code?
A line at the bottom of PRs, reports, etc that says "authored with the help of Copilot" is fine.
It's pretty much the worst CI system I've ever used, and they don't even supply runners for all my deployment targets. However, it keeps recommending it.
I guessed the first wave of ads would be in the form of poisoned training data, but MS seems to have beaten that crowd to the punch with these tips.
This is the real question. If they are serious about not doing something like this again, they NEED to look at what process failed and let something like this get proposed, designed, implemented and pushed to production. Usually things get reviewed at each stage. Did the people who pushed back on this get steamrolled? If no one pushed back, that's an even more serious culture question, and the entire org would need training.
A serious "we won't do it again", needs to be accompanied by a COE on this for identifying what went wrong, and identifying what guardrails can be put in place and then actually implementing them.
https://github.com/settings/copilot/features
-> Privacy -> "Allow GitHub to use my data for AI model training"
They have got away with it for a while because a lot of users have largely been stuck, but they are in real trouble now with Apple providing meaningful competition.
I'm part of Raycast, we didn't know about it, learnt about it here
Or what Microsoft could do, run, install, etc on/from your computer while running their Copilot agents.
This is the same company that puts ads in your start menu and reinserts them with Windows updates even if you manually removed them.
You'll never guess what happens next.
(Hint: everyone knows what happens next)
What I mean is that even if I take that at face value and accept that it's not an ad, and I can just about see from a certain level of corporate brainwashing how one could believe that, it's still completely unacceptable.
> Seeing them is an easy signal to recognize work that was submitted by someone so lazy they couldn’t even edit the commit message. You can see the vibe coded PRs right away.
I was doing the opposite when using ChatGPT. Specifically manually setting the git commit author as ChatGPT complete with model used, and setting myself as committer. That way I (and everyone else) can see what parts of the code were completely written by ChatGPT.
For changes that I made myself, I commit with myself as author.
Why would I commit something written by AI with myself as author?
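For anyone wanting to replicate that split, plain git supports it directly: the author and committer are separate fields. A minimal sketch (the ChatGPT identity string is just an assumption; use whatever convention you like):

```shell
# Throwaway repo so this runs standalone; in a real project skip the setup.
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.name "Me" && git config user.email "me@example.com"

echo 'print("hi")' > script.py && git add script.py

# Author = the model that wrote the change; committer = you (from git config).
git commit -q --author="ChatGPT (gpt-5.4) <noreply@openai.com>" -m "Add demo script"

# The two fields now differ:
git log -1 --format='author:    %an <%ae>%ncommitter: %cn <%ce>'
```

Tools like `git shortlog -sn` and `git blame` report the author, so the model shows up there, while `git log --format=%cn` still tells you which human pushed it through.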
> I think we should continue encouraging AI-generated PRs to label themselves, honestly.
Exactly.
> Co-Authored-By: Claude Opus 4.6 noreply@anthropic.com
Compare that to the message the article is talking about:
> Quickly spin up Copilot coding agent tasks from anywhere on your macOS or Windows machine with Raycast (https://gh.io/cca-raycast-docs).
It's not just mentioning it was written via Copilot, it's explicitly advertising for another product.
Personally, I adjusted the defaults since I don't like emojis in my PR.
[1]: https://code.claude.com/docs/en/settings#attribution-setting...
Absolutely spot on. Maybe I'm old school, but I never let AI touch my commit message history. That is for me - when 6 months down the line I am looking at it, retracing my steps - affirming my thought process and direction of development, I need absolute clarity. That is also because I take pride in my work.
If you let an AI commit gibberish into the history, that pollution is definitely going to cost you down the line, I will definitely be going "WTF was it doing here? Why was this even approved?" and that's a situation I never want to find myself in.
Again, old man yells at cloud and all, but hey, if you don't own the code you write, who else will?
> was submitted by someone so lazy they couldn’t even edit the commit message. You can see the vibe coded PRs right away.
As others mentioned, this is very intentional for me now as I use agents. It has nothing to do with laziness, I'm not sure why you would think that? I assume vibe coded PRs are easy enough to spot by the contents alone.
> I would like to know when someone is trying to have the tool do all of their work for them.
What makes you think the LLM is doing _all_ of the work? Is it really an impossibility that an agent does 75% of the work and then a responsible human reviews the code and makes tweaks before opening a PR?
> Disabled product tips entirely thanks to the feedback.
This sounds like they are saying “thanks for your input!”, when really it feels more like “if you didn’t go out of your way to complain, we would have left it in forever!”
Whoever is submitting the code is still responsible for it, why would the reviewer care if you wrote it with your fingers or if an LLM wrote (parts of) it? The quality+understanding bar shouldn't change just because "oh idk claude wrote this part". You don't get extra leeway just because you saved your own time writing the code - that fact doesn't benefit me/the project in any way.
Likewise, leaving AI attribution in will probably have the opposite effect as well, where a perfectly good few lines of code gets rejected because some reviewer saw it was Claude and assumed it was slop. Neither of these cases seems helpful to anyone (obviously it's not like AI can't write a single usable line of code).
The code is either good or it isn't, and you either understand it or you don't. Whether you or claude wrote it is immaterial.
You’re pointing to something entirely different: those are Copilot-created PRs. They can include anything Copilot wants to include. People using the Copilot PR feature know what they’re buying into.
OP is about Copilot doing post-hoc editing of a human-created PR to include an ad, allegedly without knowledge or approval of the creator (well I assume they did give their team member permission to update the PR body, but apparently not for this kind of crap).
And selfishly — I'd rather not run into a scenario where my boss pulls up GitHub, sees Claude credited for hundreds of commits, and then he impulsively decides that perhaps Claude's doing the real work here and that we could downsize our dev team or replace with cheaper, younger developers.
I would bet that soon it will inject ads within the code as comments.
Imagine you are reading the code of a class. `LargeFileHandler`. And within the code they inject a comment with an ad for penis enlargement.
The possibilities are limitless.
That's a tough one. In the big meeting? In the small meeting? "Officially" push back? Encouraged to make the push back unofficial? Etc. Even just internally, it can be hard to quantify. From internal > external, more so.
Microsoft owns GitHub where many of these ethical violations are easily found and were perpetrated.
I speculate the cultural safety around that monopoly-power for corporate-benefit behavior could still be present and accepted for negotiations between MS and acquisition targets.
My short search really didn't bring up any definition that included the need for the product/service owner to know that the advertising is happening.
And the message very much qualifies as trying to get people to buy Raycast (or at a minimum to use it, which usually leads people to pay later on).
https://privacy.claude.com/en/articles/10023555-how-do-you-u...
1. Everyone doing this doesn't mean it's acceptable.
2. Google Gemini explicitly says right under the chat box if you are a paid subscriber (Workspace):
Your <company name> chats aren’t used to improve our models. Gemini is AI and can make mistakes.
Not sure about the others.
*checks notes*
Only have copilot shoehorned into most things instead of everything. And some shit about windows developers which isn’t exactly going to fix the glaring issues with the OS itself.
Collection of my thoughts which don't really get to a point:
- Microsoft owns GitHub, where Raycast is being mentioned thousands of times by their tooling.
- Microsoft is a modern popularizer of the infamous phrase, embrace extend extinguish. https://en.wikipedia.org/wiki/Embrace,_extend,_and_extinguis...
- Microsoft has a history of monopoly behavior https://en.wikipedia.org/wiki/United_States_v._Microsoft_Cor....
- From an empathetic perspective I hope for the sake of the customers of Raycast and for its employees that Microsoft is not in any kind of negotiations with Raycast at the moment.
Sounds like it’s not your fault but it’s probably doing some brand damage :/
("Reflections on Trusting Trust" Turing Award Lecture by Ken Thompson: https://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_Ref...)
I don't use any paid AI models (for all my use cases, free models usually work really well), so for some small scripts/prototypes I usually just use the Gemini model; aistudio.google.com is a good one too.
I then sometimes manually paste the output and just hit enter.
These are prototypes though, although I build in public. Mostly done for experimental purposes.
I am not sure how many people might be doing the same though.
But I have had some previous projects stating "made by gemini" etc.
Maybe I should write a commit message/description stating AI has written this, but I really like having the message be something relevant to the creation of the file etc., and there is also the fact that GitHub Copilot itself sometimes generates them for you, so you have to manually remove it if you wish to change what the commit says.
Please read my comment before throwing insults.
My comment literally said I'm not anti-LLM.
I do use LLMs. I do not submit their output as-is. For anything beyond basic changes they rarely output the exact code I want by themselves.
I said I'm against people submitting PRs generated by LLMs and pretending it's their own work. Anyone who is serious about this already edits their code and commit messages first. These little signals give a good tell for who isn't doing that.
Because even with as far as Opus 4.6 and GPT 5.4 have come, they still produce a lot of unwanted, unnecessary, or overly complex code when left to their own devices.
Vibe coding PRs and then submitting them as-is is lazy. Everyone should be reviewing and editing their own PRs before submission.
If you're just vibe coding and submitting, you're passing all of the work on to your team to review your AI's output.
Evaluating whether code is good or not is a bit of a subjective exercise. We like to think we are infallible code-evaluating machines. But the truth is, we make mistakes. And we also take shortcuts. So knowing who made the commit, and whether they used AI, can help us evaluate the code more effectively.
Also I found this: https://github.com/Laravel-Backpack/medialibrary-uploaders/p... It seems like Copilot added an ad on behalf of the user in Nov 2025 (see the last edit).
As for hobby projects, I strongly encourage you to not care. You aren't going to lawyer up to sue anybody, nor is anybody going to sue you, so YOLO. Do whatever satisfies you.
What you're doing would fundamentally be similar to copyright theft, using 'someone' else's code without attributing them (it?) to avoid repercussions
Obviously the morals and ethics of not attributing an LLM vs an actual human vary. I am not trying to simp for the machines here.
Sent from Firefox on AlmaLinux 9. https://getfirefox.com https://almalinux.org
-Sent from iPhone
Wanting more from your sun tanning bed? Head over to Ultra Tan for a 10% off coupon right now!
Sample size is 2 now!
I disagree on that. It's really a gray area.
If it's some lazy vibecoded shit, I think what you say totally applies.
If the human did the thinking, gave the agent detailed instructions, and/or carefully reviewed the output, then I don't think it's so clear cut.
And full disclosure, I'm reacting more to copilot here, which lists itself as the author and you as the co-author. I'm not giving credit to the machine, like I'm some appendage to it (which is totally what the powers-that-be want me to become).
> Claude setting itself as coauthor is a good way to address this problem, and it doing so by default is a very good thing.
I do agree that's a sensible default.
Of course most people don’t do that
Because you're the one who decided to take responsibility for it, and actually choose to PR it in its ultimate form.
What utility do the reviewers/maintainers get from you marking what's written by you vs. ChatGPT? Other than your ability to scapegoat the LLM?
The only thing that actually affects me (the hypothetical reviewer) and the project is the quality of the actual code, and, ideally, the presence of a contributor (you) who can actually answer for that code. The presence or absence of LLM-generated code by your hand makes no difference to me or the project, why would it? Why would it affect my decision making whatsoever?
It's your code, end of story. Either that or the PR should just be rejected, because nobody is taking responsibility for it.
If you saw this line in a commit, you'd know exactly where it came from.
Ads implies someone was paying for them. Promoting internal product features is not the same thing - if it was then every piece of software that shows a tip would be an ad product, and would be regulated as such.
Claude-generated code is sufficient—it works, it's decent quality—but it still isn't the same as human written code. It's just minor things, like redundant comments that waste context down the road, tests that don't test what they claim to test, or React components that reimplement everything from scratch because Claude isn't aware of existing component libraries' documentation.
But more importantly, I expect humans to be able to stand by their code, and at times defend against my review. But today's agents continue to sycophantically treat review comments like prompts. I once jokingly commented on a line using a \u escape sequence to encode an em dash, how LLMs would do anything to sneak them in, and the LLM proceeded to replace all — with --. Plus, agents do not benefit from general coding advice in reviews.
Ultimately, at least with today's Claude, I would change my review style for a human vs an agent.
That being said, it also matters who wrote it, because it’s more likely for LLMs to write code that looks like quality code but is wrong, than the same is for humans.
AI is a very new tool, and as such the quality of the code it produces depends both on the quality of the tool, and how you've wielded it.
I want to be able to track how well I've been using the tool, to see what techniques produce better results, to see if I'm getting better. There's a lot more to AI coding than just the prompts, as we're quickly discovering.
The problem is that submitters often do not feel responsible for it anymore. They will just feed review comments back to the LLM and let the LLM answer and make fixes.
This is disrespectful of the maintainers' time. If the submitter is just vibe/slop coding without any effort on their part, it's less work to do it myself directly using an LLM than having to instruct someone else's LLM through GitHub PR comments.
In this case it's better to just submit an issue and let me just implement it myself (with or without an LLM).
If the PR has a _co-authored by <LLM>_ signal, then I don't have to spend time giving detailed feedback under the assumption that I am helping another human.
Maybe one day we can say that, but currently, it matters a lot to a lot of people for many reasons.
--------------
Sent from HackerNews Supreme™ - the best way to browse the Y Combinator Hacker News. Now on macOS, Windows, Linux, Android, iOS, and SONY BRAVIA Smart TV. Prices starting at €13.99 per month, billed yearly. https://hacker-news-supreme.io
Furthermore, the ads in TFA are for Raycast, but apparently it’s not Raycast doing the injecting.
The reason I immediately changed that text on my iPhone 1.0 to read, "Sent from my mobile device.", is because it's an ad. Still says that nearly 20y later. I'm not shilling for a corporation after giving them my money.
This message brought to you by TempleOS
Regardless, even if the dictionary definition of an ad doesn't require that the ad be created intentionally, it's still the case that if you say "ad" everyone will assume you mean something that was intentionally created to sell a product or service. I recommend checking out this classic post about the noncentral fallacy: http://worstargumentintheworld.com
It's sort of a moot point since the whole thing is for goodwill anyway.
They freely scraped licensed code and semi-private data across the internet and now they're pretending that they need to license anything.
If a court rules they had to license data in the first place then the whole industry would actually have to start following laws.
Yes, it really depends on how much of the work the agent actually did. It could be as little as a renaming or a refactoring, or executing direct orders that require no creativity or problem solving. In which case the agent shouldn't be credited any more than the linter or the IDE.
If you gave it four words and waited an hour, maybe you're not the author. But that's not how these tools are best used anyway.
Conversely, on Doom: The Dark Ages they got rid of the traditional difficulty name "I'm too young to die", which had a picture of Doom Guy with a bib and a pacifier. I think there's some new industry guidance that it's a no-no to poke fun at people picking easy difficulties, or even to indicate what difficulty the game was "designed to be played on", which Japanese game devs happily ignore.
I know these aren't actual equivalents, since your money isn't on the line and it's purely game state, but it's still an interesting and noteworthy transition.
Ugh, this type of thing is the worst. "Click here to remain fat, drunk and stupid!"*
* Animal House, 1978
That's what I wanted to say! Thank you.
IANAL so I appreciate any legal experts to correct me here. In my understanding, there have been court decisions that LLM output itself is not copyrightable. You can only claim authorship (and therefore copyright) if you have significantly transformed the output.
If you are truly vibe coding to the point where you don't even look at the generated code, how exactly are you transforming the LLM output?
Also, what if the LLM reproduces existing copyrighted code? There has been a court decision last year in Germany that says that OpenAI violates German copyright law because ChatGPT may recreate existing song lyrics (that are licensed by GEMA) or create very similar variations.
Otherwise, it would just be Github with displayed ads and that would hurt the brand, so everyone gets ads.
By default, the LLM is credited with authorship anyway, and I assume the user can easily just remove the ad, though I don't use Copilot.
Maybe I put up with it and it just adds to my subconscious seething, or maybe I get the episode elsewhere because if I watch on jellyfin I don't have the advert. Of course that then harms the show as my viewing isn't counted, but they've cancelled it anyway so perhaps it doesn't really matter.
If it isn't an advert, then at very least there's a button to disable it.
So what was the purpose of all that telemetry they collected then? Because it doesn't seem to have made the OS like what the users want it to be.
I just want to note that the case you link to was 25 years ago. The number of people working at Microsoft at the time who are still working there today is very small.
The ToS (https://www.microsoft.com/en-us/microsoft-copilot/for-indivi...) says explicitly:
> Copilot may include both automated and manual (human) processing of data. You shouldn’t share any information with Copilot that you don’t want us to review.
so they're reserving the right to process whatever it looks at.
You're sending them your codebase already, as part of the prompt for generating new snippets, debugging, etc. So they have access to it.
They'd be absolute fools not to be using the results of sessions to continue to refine their models, and they already reserved the rights to look at what you send them, so yeah - they're doing it.
(Bonus comedy from the ToS:
> Copilot is for entertainment purposes only.
The lawyers know these things cannot be trusted.)
Why the assumption it's not already happening?
It will be there for as long as you (and everyone else) keep using it.
Microsoft (and therefore GitHub) care about money. If decision A means they get more money than decision B, then they'll go with decision A. This is what you can trust about corporations.
Individuals (who constantly join and leave a corporation) can believe and say whatever they want, but ultimately the corporation as a being overrides it all, and tries its best to leave shareholders better off, regardless of the consequences.
Because it's nobody's IP, Microsoft is already in a position where they could just use, remix and/or distribute that output however they want to today.
Using AI tools to code and then hiding that is unethical imo.
So even if I go over the commit with a fine tooth comb and feel comfortable staking my personal reputation on the commit, I still can't call myself the sole author.
"There is no commit by an agent user, for two reasons:
* If an agent commits locally during development, the code is reviewed and often thoroughly modified and rearranged by a human.
* I don't want to push unreviewed code to the repo, so I have set up a git hook refusing to push commits done by an LLM agent."
It's not that I want to hide the use of LLMs; I just modified code a lot before pushing, which led me to this approach. As LLMs improve, I might have to change this though. Interested to read opinions on this approach.
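A minimal sketch of what such a pre-push hook could look like, assuming the agent commits under a distinctive identity (the pattern "claude" below is purely illustrative; adapt it to whatever name or email your tooling actually uses):

```shell
#!/bin/sh
# .git/hooks/pre-push — refuse to push commits authored by an LLM agent.
# Sketch only: assumes the agent's commits carry a distinctive author
# identity; the pattern "claude" is a hypothetical example.
agent_pattern='claude'

# Scan the commits that would be pushed (local HEAD beyond the upstream)
# and abort the push if any author matches the agent's identity.
if git log --format='%an <%ae>' @{u}..HEAD 2>/dev/null | grep -qi "$agent_pattern"; then
    echo "Refusing to push: commit(s) authored by an LLM agent detected." >&2
    exit 1
fi
```

A production version would read the list of refs that git passes to pre-push on stdin rather than assuming `@{u}..HEAD`, but this shows the core check.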
Model information for traceability and possibly future analysis/statistics, and author to know who is taking responsibility for the changes (and, thus, has deeply reviewed and understood them).
As long as those two pieces of information are present in the commit, I guess which commit field should hold which is for the project to standardise (but it should be normalised within a project, otherwise the "traceability/statistics" part cannot be applied reliably).
I think this is a good balance, because if you don't care about the bot you still see the human author. And if you do care (for example, I'd like to be able to review commits and see which were substantially bot-written and which were mostly human) then it's also easy.
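As a sketch of that kind of convention: the human stays the commit author, and machine involvement is recorded as trailers in the commit message. All names here are illustrative, and the `Model:` trailer is an assumed project convention, not a standard:

```shell
# Throwaway demo repo (illustrative names throughout).
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.name 'Jane Dev'
git config user.email 'jane@example.com'

# The human commits under her own name; the model used is recorded
# as trailers in the final paragraph of the commit message.
git commit --allow-empty -q -m 'Refactor parser' \
  -m 'Model: example-model-v1
Co-authored-by: copilot-agent <bot@example.com>'

# Trailers are machine-readable, so review tooling or statistics
# can key off them (needs a reasonably recent git):
git log -1 --format='%(trailers:key=Model,valueonly)'
```

Because the trailer lives in the message rather than the author field, `git blame` and `git shortlog` still show the responsible human, while anyone who does care can filter for bot-assisted commits.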
Ads tend to also imply tangential information shown to you in an undesired area. If this was some tool tip and not embedded in the PR comment, many wouldn't call it an ad.
As you allude to (and I agree), any non-trivial quantity of code, if SOLELY written by Claude, will probably be low quality, but this is apparent whether I know it's AI beforehand or not.
I am admittedly coming at this as much more of an AI hater than many, but I still don't really get why I'd care about how much or how little you used AI as a standalone metric.
The people who are using AI "well" are the ones producing code where you'd never even guess it involved AI. I'm sure there are Linux kernel maintainers using Claude here and there; it's not like they expect to have their patches merged because "oh well I just used Claude here, don't worry about that part".
(But also yes, of course I'm not going to talk to Claude about your PR, I will only talk to you, the human contributor, and if you don't know what's up with the PR then into the trash it goes!)
If someone is repeatedly sending me slop to look at, I'll block them whether or not they tell me an LLM was involved.
Brawndo - it's what your brain needs
...for now.
> like JIRA
is not an industry standard. It's a widely used software by some folks. I used it in the past, not using now, for example.
> Maybe it's just an experiment at this moment.
Does Microsoft understand objection and negative feedback to experiments?
- No.
- Remind me in three days.

By the way, most pre-industry-standard FOSS projects still have their own infrastructure. I do find it disappointing that Rust is on GitHub.
Anyway, the core value of Github has always been collaboration - this is where people were. If people go to other platforms, this core value dwindles. And switching platforms is not that difficult.
That was my point here, it is a false signal in both directions.
This is incorrect. If you are a paid subscriber, Gemini explicitly states it doesn't use your data to train its models.
1) collect data
2) ???
3) profit
- Github
- Activision Blizzard
- Xbox
- Azure, Sharepoint and Teams w/Copilot embedded everywhere
- major stake in OpenAI
- a multibillion dollar ad product portfolio (LinkedIn ads, Bing Ads)
The comment was brief, and added detail is welcome, but corporate mission/culture often extends over time even with changes in leadership. Partly because of what was accepted in the past.
Looks like they're using this: https://github.com/gblazex/smoothscroll-for-websites
I know it's a bit off topic but I'm just confused as to why that would be on there...
Joke's on them, that's why I consider the whole of Microsoft for entertainment purposes only.
Sure; a platform is a platform is a platform. As for predictions, it is interesting to see whether self-hosting and smaller self-managed infrastructures will gain more traction again.
The large majority of the dystopian web, like Gmail, Facebook, etc. depend on that.
People who avoid e.g. Github, Gmail, Facebook, Xitter, etc. out of concern for broader principles will always be minor outliers.
Xitter is one of the best examples. Everyone knows it's compromised, owned by a dangerously antisocial person who's actively working at multiple levels to make the lives of everyone else on Earth worse, yet very few have stopped using it.
The saying "There's no ethical consumption under capitalism" is far too weak. It should be more like: there are no ethics under capitalism.
Pre-LLMs, various helper tools (including LSPs), would make code changes to improve the quality of the code - from simple things like adding a const specifier to a function, to changing the actual function being called.
No one insisted that the commit shouldn't have the human's name on it.
It's not like this is organic word of mouth we're dealing with here.
Every company or entity changes over time. Codeberg is great, but with more people using it for free without donating, and worse, more people abusing the service with BS AI-generated code, malware, etc., it will get more expensive to keep it running. For now they have money, but as an e.V. in Germany, you survive either from members or from donations. So use Codeberg, but most importantly, support it!
Seems... Not that useful?
Why would someone make commits in your local projects without you knowing about it? That git hook only works on your own machine, so you're trying to prevent yourself from pushing code you haven't reviewed, but the only way that can happen is if you use an agent locally that also makes commits, and you aren't aware of it?
I'm not sure how you'd end up in that situation, unless you have LLMs running autonomously on your computer that you don't have actual runtime insight into? Which seems like it'd be a way bigger problem than "code I didn't review was pushed".
Now that the cost of writing code is $0, the planner gets the credit.
Like how you don't put human code reviewers down as coauthors, you also don't put the computer down as a coauthor for everything you use the computer to do.
It used to be the case where if someone wrote the software, you knew they put in a certain amount of work writing it and planning it. I think the main issue now is that you can't know that anymore.
Even something that's vibe-coded might have many hours of serious iterative work and planning. But without using the output or deep-diving the code to get a sense of its polish, there's no way to tell if it is the result of a one-shot or a lot of serious work.
"Coauthored by computer" doesn't help this distinction. And asking people to opt-in to some shame tag isn't a solution that generalizes nor fixes anything since the issue is with people who ship poor quality software. Instead we should demand good software just like we did when it was all human-written and still low quality.
Why is this, though? I'm genuinely curious. My code-quality bar doesn't change either way, so why would this be anything but distracting to my decision making?
Hell, I just saw an amazing open-source alternative to Raycast[0] and just replaced it the other day.
For instance, I would want any AI-generated video showing real people to have a disclaimer. Same way we have disclaimers when TV ads note whether the people are actors or not with testimonials and the like. That is not only not false, but is actually a useful signal that helps prevent overly deceptive practices.
but as we know from this thread, Raycast didn't consent to this.
It might be interesting to see what a lawyer might think of this and if there are enough reasonable claims to genuinely sue for damages
(Raycast should definitely seek a lawyer privately, just in case.)
I'd be thanking the reserve and the people who made it, and credit myself with the small action of slightly moving my hand as much as it's worth.
Also, text editors would be a better analogy if the commit message referenced whether it was created in the web ui, tui, or desktop app.
edit: oh, that and distributed authentication and distributed discovery
It’s not about shame. It’s about disclosure of effort / perceived-quality. And you’re right about the second part, but there’s even less chance of that being enforced / adopted.
Mostly this is because, all things considered, I really do not need to interact with any of that, so I'm doing it by choice. Since it's entirely voluntary I have absolutely no incentive to interact with things no one bothered to spend real time and effort on.
With AI I have no way of telling if it was from a one line prompt or hundreds. I have to assume it was one line by default if there's no human sticking their neck out for it.
Outside of your one personal project, it can also benefit you to understand the current tendencies and limitations of AI agents, either to consider whether they're in a state that'd be useful to use for yourself, or to know if there are any patterns in how they operate (or not, if you're claiming that).
Burying your head in the sand and choosing to be a guinea pig for AI companies by reviewing all of their slop with the same care you'd review human contributions with (instead of cutting them off early when identified as problematic) is your prerogative, but it assumes you're fine being isolated from the industry.
Solo founder here. My business is not VC-backed nor publicly traded, and I specifically avoided taking investment so that I can make all the decisions.
I avoid enshittification. This sometimes hurts revenue, but so be it. I wouldn't want to subject my users to anything I wouldn't like.
So, open-source is not the only hope. You can run a sustainable business without enshittification. The problem is money people. The moment money people (career managers, CFOs, etc) take over from product people, the business is on a downward path towards enshittification.
Stallman was always right, after all.
If I have a block of human code and an identical block of LLM code, then what's the difference? Especially given that in reality it is trivial to obfuscate whether it's human or LLM (in fact, usually you have to go out of your way to identify it as such).
I am an AI hater but I'm just being realistic and practical here, I'm not sure how else to approach all this.
Unhealthy doesn't mean unusable, but it sounded great until I checked that.
If they could do that, then they wouldn't be wasting your time to begin with. They'd have the ability to go "nah this PR is trash".
So the next idea is that we can find some sort of proxy, like whether someone used an LLM or not. But that's too ham-fisted since expert engineers with all the self-awareness also use the tool, and they have the ability and self-awareness to know that the software they are shipping is good quality, so why would they use the shame tag?
The shame tag has no audience. It's a fantasy that low quality actors will self-identify, else all sorts of societal problems would be made trivial.
Even excluding open source, there are no serious tech companies not using AI right now. I don't see how your position is tenable, unless you plan to completely disconnect.
Disclosing AI has its purposes, I agree, but it's not like we can reliably get everyone to do it anyway, which also leads me to thinking this way.
While I agree that it would be nice to filter out low effort PRs, I just don't see how you could possibly police it without infringing on freedoms. If you made it mandatory for frontier models, people would find a way around it, or simply write commits themselves, or use open weight models from China, etc.
Again though, people can trivially hide the fact they used an LLM to whatever extent, so we kind of need to adjust accordingly.
Even if saying no to all LLM involvement seemed pertinent, it doesn't seem possible in the first place.
>Burying your head in the sand and choosing to be a guinea pig for AI companies by reviewing all of their slop with the same care you'd review human contributions with (instead of cutting them off early when identified as problematic) is your prerogative, but it assumes you're fine being isolated from the industry.
I mean, listen: I wish with every fiber of my being that LLMs would disappear off the face of the earth for eternity, but I really don't think I'm "isolating myself from the industry" by not simply dismissing LLM code. If I find a PR to be problematic, I just cut it off; that's how I review in the first place. I'm telling some random human who submitted the code to me that I am rejecting their PR because it's low quality, I'm not sending Anthropic some long, detailed list of my feedback.
This is also kind of a moot point either way, because everyone can just trivially hide the fact that they used LLMs if they want to.
Even when I use proprietary software, I sleep easier at night knowing that open-source alternatives keep them honest in their approach and I have an out if things do change.
Code completions before LLMs was helping me type faster by completing variable names, variable types, function arguments, and that’s about it. It was faster than typing it all out character by character, but the auto completion wasn’t doing anything outside of what I was already intending to write.
With an LLM, I give brief explanations in English to it and it returns tens to hundreds of lines of code at a time. For some people perhaps even more than that. Or you could be having a “conversation” with the LLM about the feature to be added first and then when you’ve explored what it will be like conceptually, you tell it to implement that.
In either case, I would then commit all of that resulting code with the name of the LLM I used as author, and my name as the committer. The tool wrote the code. I committed it.
As the committer of the code, I am responsible for what I commit to the code base, and everyone is able to see who the committer was. I don’t need to claim authorship over the code that the tool wrote in order for people to be able to see who committed it. And it is in my opinion incorrect to claim authorship over any commit that consists for the very most part of AI generated code.
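One way to record that split in git itself — a sketch, with illustrative names — is to pass the tool's identity via `--author`, so git stores the LLM as author and takes the committer identity from your own `user.name`/`user.email`:

```shell
# Throwaway demo repo; names and emails are illustrative.
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.name 'Jane Dev'
git config user.email 'jane@example.com'

# Commit LLM-written changes with the tool recorded as the author;
# git takes the committer identity from the local config above.
git commit --allow-empty -q \
  --author='example-llm <llm@example.invalid>' \
  -m 'Add feature X (LLM-generated)'

# Both identities remain visible in the history:
git log -1 --format='author:    %an <%ae>%ncommitter: %cn <%ce>'
```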
For example, in a given interaction the user of the LLM might be acting more like someone requesting a feature, and the LLM is left to implement it. Or the user might be acting akin to a bug reporter providing details on something that’s not working the way it should and again leaving the LLM to implement it.
While on the other hand, someone might instruct the LLM to do something very specific with detailed constraints, and in that way the LLM would perhaps be more along the line of a fancy auto-complete to write the lines of code for something that the user of the LLM would otherwise have written more or less exactly the same by hand.