> Connect Copilot coding agent with Jira, Azure Boards or Linear to delegate work to Copilot in one click without leaving your project management tool.
- Out of empathy, I hope for the sake of Raycast's customers and its employees that Microsoft isn't in any kind of negotiations with Raycast at the moment.
BugsJustFindMe 1 days ago [-]
> Microsoft has a history of monopoly behavior
I just want to note that the case you link to was 25 years ago. The number of people working at Microsoft at the time who are still working there today is very small.
rchaud 1 days ago [-]
The Microsoft that was prosecuted for monopoly behaviour 25 years ago is definitely not the same Microsoft that owns:
- Github
- LinkedIn
- Activision Blizzard
- Xbox
- Azure, Sharepoint and Teams w/Copilot embedded everywhere
- major stake in OpenAI
- a multibillion dollar ad product portfolio (LinkedIn ads, Bing Ads)
TheScaryOne 1 days ago [-]
After being told not to integrate Internet Explorer into the OS, they changed the name to Edge and did it anyway? With the added twist that it now underpins much of the File Explorer functionality, too?
altairprime 23 hours ago [-]
No, Edge isn’t Internet Explorer; they coexist if necessary for enterprise and legal reasons :)
QuantumGood 1 days ago [-]
> "history .. 25 years ago"
The comment was brief, and added detail is welcome, but corporate mission/culture often extends over time even with changes in leadership. Partly because of what was accepted in the past.
burkaman 1 days ago [-]
One of those people is the CEO though.
lelanthran 16 hours ago [-]
> I just want to note that the case you link to was 25 years ago. The number of people working at Microsoft at the time who are still working there today is very small.
That's just a long way of calling Microsoft a bunch of monkeys :-)
I haven’t clicked through so all I know about Raycast is, “that’s the company that gets shoved into ads by copilot.”
Sounds like it’s not your fault but it’s probably doing some brand damage :/
joeevans1000 15 hours ago [-]
Well... I didn't know about them until now. Looks like a cool product, actually. Might have to try them out. What's that old saying?
delfinom 1 days ago [-]
They should probably get a lawyer to send a C&D.
buildbot 1 days ago [-]
There’s like 100 comments blaming raycast, they should just sue for damages lol.
grayhatter 1 days ago [-]
Had I not seen this thread, I would have assumed they consented to it, and I'd never willingly interact with Raycast or its team in any way. I still have a somewhat negative opinion, so I think it's safe to say there are damages.
tylerchilds 1 days ago [-]
As a data point, I consent to be counted as associating raycast with the Microsoft brand and viewing them negatively as a consequence of using pull requests as an advertising canvas.
altairprime 23 hours ago [-]
They should sue to have the ads removed from the texts they were inserted into, which is a vastly more difficult problem than simply paying some dollars.
BloondAndDoom 1 days ago [-]
I hear you, but honestly it's kind of funny to think a company would send a C&D to stop free advertising for them. I'd be surprised to see any company ever do that; whatever people think small brands are worth, they're actually worth far less than that.
Imustaskforhelp 1 days ago [-]
Is it free advertising or free brand damage? (people might think that raycast had consented to this)
but as we know from this thread, Raycast didn't consent to this.
It might be interesting to see what a lawyer might think of this and if there are enough reasonable claims to genuinely sue for damages
(Raycast should definitely seek a lawyer privately, just in case.)
huflungdung 1 days ago [-]
[dead]
jarek83 1 days ago [-]
Maybe check if you are charged for it
butterlesstoast 1 days ago [-]
If it’s Microsoft related, might be something in your Partner Center.
Gigachad 2 days ago [-]
Microslop for a while now seems to be testing exactly how much you can abuse the user before they move somewhere else. Windows is a prime example. Everything is ads, tracking, popups, annoyances, etc.
They have got away with it for a while because a lot of users have largely been stuck, but they are in real trouble now with Apple providing meaningful competition.
transcriptase 1 days ago [-]
Yeah but at least a dozen Microsoft employees went on a seemingly scripted blitz on X about how they’re ready to start listening to feedback and…
* checks notes *
Only have copilot shoehorned into most things instead of everything. And some shit about windows developers which isn’t exactly going to fix the glaring issues with the OS itself.
Aerroon 1 days ago [-]
>Yeah but at least a dozen Microsoft employees went on a seemingly scripted blitz on X about how they’re ready to start listening to feedback and…
So what was the purpose of all that telemetry they collected then? Because it doesn't seem to have made the OS like what the users want it to be.
TheScaryOne 1 days ago [-]
Do you hate the "Ribbon" UI that got forced into everything in Win8+?
That's what telemetry was used for. Every advanced user turned that off when they gave us the option, and now we have every UI on the computer designed for Grandma.
thesuitonym 1 days ago [-]
To better target ads.
mulmen 1 days ago [-]
Data Gnomes
1) collect data
2) ???
3) profit
polski-g 1 days ago [-]
They literally broke 40yr standard keyboard layouts on laptops by replacing right alt buttons with their bullshit AI button.
Are they going to fix hardware they've already sold? On every OEM?
fodkodrasz 10 hours ago [-]
PowerToys to the rescue!
I almost commented that you can just configure it in the settings, but the available options actually don't include Alt. On my Hungarian-layout ThinkPad T14 it replaced the context menu key, not the right Alt. That's lucky, because the right Alt is the AltGr key, which plays a substantial role in the Hungarian input method and can't be omitted.
altairprime 23 hours ago [-]
No need; they could just patch Windows to add the UI to override Win-F26 or whatever their synthetic Fkey was (currently disallowed by their software!).
philwelch 1 days ago [-]
It's because of the way companies align their own behavior. "Listening to feedback" is just a good intention but increasing engagement with copilot is a measurable goal. With apologies to George Orwell, imagine an OKR stamping on a human face--forever.
gloosx 1 days ago [-]
Microsoft can show a screen-wide dick enlarger ad instead of everyone's wallpaper and people will still be using windows for decades. They already know it.
heavyset_go 1 days ago [-]
If Microsoft is willing to put ads into your PRs via Copilot like this, imagine what they could put into your codebase itself with Copilot.
Or what Microsoft could do, run, install, etc on/from your computer while running their Copilot agents.
This is the same company that puts ads in your start menu and reinserts them with Windows updates even if you manually removed them.
sehansen 1 days ago [-]
"Reflections on Trusting Trust" for the new era. MSVC doesn't compile a secret master-password into your software, just a Copilot ad.
Spent yesterday pruning dependencies in a project. Cut half of them and everything still worked. Makes you wonder how much stuff we pull in without thinking about it. Same thing with AI-generated PRs, honestly: one bad suggestion and it ships.
whattheheckheck 6 hours ago [-]
No linter?
henry2023 1 days ago [-]
I wonder if there will come a time when I can pay M$ to sabotage my competition's codebase.
degrees57 1 days ago [-]
You have to get acquired by Microsoft first.
StilesCrisis 1 days ago [-]
If they're using Copilot, you're already most of the way there.
neya 1 days ago [-]
Imagine if just having the Copilot extension installed becomes an excuse at some point for them to steal our code to train their AI models. Not sure if they already do this.
> Copilot may include both automated and manual (human) processing of data. You shouldn’t share any information with Copilot that you don’t want us to review.
so they're reserving the right to process whatever it looks at.
You're sending them your codebase already, as part of the prompt for generating new snippets, debugging, etc. So they have access to it.
They'd be absolute fools not to be using the results of sessions to continue to refine their models, and they already reserved the rights to look at what you send them, so yeah - they're doing it.
(Bonus comedy from the ToS:
> Copilot is for entertainment purposes only.
The lawyers know these things cannot be trusted.)
circuit10 1 days ago [-]
Also for some reason that site hijacks your scrolling and tries to "smooth" it, which just makes it feel more unresponsive as most browsers already have smooth scrolling?
I know it's a bit off topic but I'm just confused as to why that would be on there...
account42 10 hours ago [-]
Web developers just can't help themselves from reinventing browser functionality, badly.
neya 1 days ago [-]
> Copilot is for entertainment purposes only.
Jokes on them, that's why I consider entire Microsoft for entertainment purposes only.
davidgerard 6 hours ago [-]
That's the TOS for the broader Microsoft Copilot, not for the GitHub one, which has its own TOSes (depending on whether your last renewal was before or after March 5) that don't include the "entertainment" wording.
But one to file away!
justinclift 1 days ago [-]
"at some point"?
Why the assumption it's not already happening?
neya 1 days ago [-]
> Not sure if they already do this.
cookiengineer 21 hours ago [-]
Can somebody explain to me why this is legal?
If anybody but Microsoft does this, it's called malware and they'll end up with an FBI visit and prison time.
Why is the judiciary so skewed here in its judgements?
whattheheckheck 6 hours ago [-]
They have trillions
aiedwardyi 1 days ago [-]
[flagged]
oefrha 2 days ago [-]
> There are 1.5m of these things in GitHub.
You’re pointing to something entirely different: those are Copilot-created PRs. They can include anything Copilot wants to include. People using the Copilot PR feature know what they’re buying into.
OP is about Copilot doing post-hoc editing of a human-created PR to include an ad, allegedly without knowledge or approval of the creator (well I assume they did give their team member permission to update the PR body, but apparently not for this kind of crap).
plastic041 1 days ago [-]
I wanted to say that they are the same because they are "Copilot-written self-promotions", but I get your point.
It’s like how Disney Plus “ad free” tier shows you ads for Hulu and Disney Perks. They probably redefine “ad” in their terms of service so their own ads are called something else.
bonoboTP 1 days ago [-]
Yeah it's just helpful tips and suggestions. It's a feature, you see!
MereInterest 1 days ago [-]
I looked into it at one point, as I was disgusted by the unskippable advertisements when paying for an ad-free tier on one of the myriad streaming platforms. Apparently, they distinguish between "advertisements" for a product or service and "promotions" for themselves. I get why that would be a reasonable internal distinction, as the former would require sign-off from the business paying for the advertisement, while the latter would only need internal approval, but it's a pointless distinction after that.
rubyfan 22 hours ago [-]
The distinction is likely a carve-out to give themselves exactly that ability to freely advertise to you after telling you it was ad-free. What's the difference between advertising a subsidiary like Disney Parks to me and advertising a new car? Just that they own the former.
BLKNSLVR 2 days ago [-]
Microsoft would probably seriously refer to it as 'just the tip'.
You'll never guess what happens next.
(Hint: everyone knows what happens next)
stingraycharles 2 days ago [-]
AI clippy?
consp 1 days ago [-]
Leave the poor fellow alone. It was butchered enough in the late 90s and early 00s, and has since been repurposed for a greater good. I'd argue not everything Microsoft creates is bad; it just needs someone else to make it better.
mcintyre1994 2 days ago [-]
It's definitely an ad, I think the only real question is whether it's just marketing Copilot or whether part of their partnership with other companies is advertising the integration in this way. The links all go to Copilot docs pages on the integrations, so they're not typical tracked link advertising campaigns.
esperent 2 days ago [-]
Honestly, whether it's a "tip" or an "ad" makes no difference.
What I mean is that even if I take that at face value and accept that it's not an ad, and I can just about see from a certain level of corporate brainwashing how one could believe that, it's still completely unacceptable.
frereubu 2 days ago [-]
Calling it a "tip" is definitely just a semantic trick to make it slightly harder to frame a negative response and galvanise opinion against the practice. Reminds me a bit of confirmation shaming (which, now I think about it, I haven't seen in a while), where you're made to click a button that says something like "No, I don't want an amazing 15% off my next order by signing up to your email list".
wincy 1 days ago [-]
I was playing Mario Party Jamboree this weekend with my kids, and when you use a key to unlock doors (for anyone not familiar, Mario Party is a family friendly virtual board game with lots of minigames that’s been around since the Nintendo 64) that serve as shortcuts in the game board, the key is alive and says “don’t you want to keep being friends? You wouldn’t use me on a door, would you?” Which is a humorous twist on confirmation shaming inside of the game and gives me a bit of enmity for the imaginary key.
Conversely, in Doom: The Dark Ages they got rid of the traditional "I'm too young to die" difficulty, which had a picture of Doom Guy with a bib and a pacifier. I think there's some new industry guidance that it's a no-no to poke fun at people picking easy difficulties, or even to indicate what difficulty the game was "designed to be played on", which Japanese game devs happily ignore.
I know these aren't actual equivalents, since your money isn't on the line and it's purely game state, but it's still an interesting and noteworthy transition.
anthonyrstevens 1 days ago [-]
>> you're made to click a button that says something like "No, I don't want an amazing 15% off my next order by signing up to your email list"
Ugh, this type of thing is the worst. "Click here to remain fat, drunk and stupid!"*
* Animal House, 1978
anthonyrstevens 1 days ago [-]
Is this a similar thing? Apple web sign-in doesn't let you easily choose SMS 2FA; you have to click "I can't get to my devices right now" before you can send yourself a text message. I always resent them for making me lie, because my devices ARE nearby (ish), and my phone is always, like, RIGHT THERE.
plastic041 2 days ago [-]
> semantic trick
That's what I wanted to say! Thank you.
plastic041 2 days ago [-]
I do think it's just an ad. It's also a bad kind of ad, because 1) it disguises itself as a tip, and 2) it makes people think it's an ad for Raycast or other services, when actually it's just promoting itself.
ccozan 2 days ago [-]
If it's paid for by and for a 3rd party, it's an ad. If not, it's a tip.
frereubu 2 days ago [-]
That's not a good distinction. If I see an advert for Microsoft 365 in the Start menu on Windows they're both from Microsoft but it's still an advert.
plastic041 2 days ago [-]
It still would be a self promoting, which is still an ad.
b00ty4breakfast 2 days ago [-]
Six of one, half a dozen of the other; it may not be a paid advertisement, but it functions as one if it's suggesting products.
It's not like this is organic word of mouth we're dealing with here.
lwhi 2 days ago [-]
Yep, the fact they're altering repo content with advertising is wholly unacceptable.
ta8903 1 days ago [-]
PRs aren't part of the repository (if you define "repository" to mean git's internal data). They're part of GitHub, which is owned by Microsoft.
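For what it's worth, GitHub does expose PR heads as read-only `refs/pull/N/head` refs on the server side, so the PR's commits sit alongside the git data even though a normal clone doesn't fetch them (the description text itself still lives only in GitHub's database). A minimal sketch of the ref behavior, using a local bare repo as a stand-in for GitHub (names are illustrative):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
# stand-in for GitHub's server-side repo
git init -q --bare -b main server.git
# create a commit and publish it as both a branch and a PR-style ref
git init -q -b main work && cd work
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m "pr change"
git push -q ../server.git HEAD:refs/heads/main
git push -q ../server.git HEAD:refs/pull/1/head
cd .. && git clone -q server.git clone && cd clone
# a normal clone does not include refs/pull/* ...
git rev-parse -q --verify refs/pull/1/head >/dev/null || echo "no PR ref in clone"
# ... but the same ref can be fetched explicitly
git fetch -q origin refs/pull/1/head:pr-1
git rev-parse -q --verify pr-1 >/dev/null && echo "fetched PR ref"
```

Against a real GitHub repo the equivalent is `git fetch origin pull/123/head:pr-123`.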
mtndew4brkfst 1 days ago [-]
Small nit, but PR description bodies might wind up as part of a commit message verbatim, depending on repo settings and the merger's personal behavior. It's an easy outcome, the merger doesn't need to copy and paste or anything, and I think it might be a default or popular setting for squash-merges.
skywhopper 2 days ago [-]
It’s a spot that will easily be replaced with paid ads, for sure. Not sure why it wouldn’t be better to just inject this sort of message into the UI instead of editing the PR text itself. (Except that the team implementing it probably couldn’t get the UI team to agree.)
heavyset_go 2 days ago [-]
It's platform agnostic as long as your Copilot setup can create PRs on the platform your project is hosted on.
Otherwise, it would just be Github with displayed ads and that would hurt the brand, so everyone gets ads.
kivle 2 days ago [-]
A bit like "suggested apps" in the start menu. It's "suggestions" and certainly not paid ads.
nathanaldensr 1 days ago [-]
It's gaslighting on a worldwide scale is what it is.
altairprime 23 hours ago [-]
Their mistake was editing it into the text bodies, rather than making it a separate element of the page. No doubt they were trying to inhibit adblockers but it’s so much worse a problem for them this way, because they’re presenting an ad in the voice and userpic of the account that made the post.
Yizahi 1 days ago [-]
This tip/ad discussion reminds me of the equally idiotic and misleading Facebook post types. Instead of correctly labeling all ads as, well, ads, Facebook has some ads called "suggested for you", some completely unlabeled with only a "follow" button, some labeled "sponsored", etc. I think they do this to evade legal requirements they might otherwise face. Last time I used Facebook it showed me 25 ads in a row (I counted), without a single post from my hundreds of active follows. Truly insane company.
ttyyzz 2 days ago [-]
It is clearly an ad, no doubt about that.
red_admiral 2 days ago [-]
> Looks like MS really want to "give tips"
Including Windows, File Explorer, Start Menu, ...
It seems with the latest "ok we went too far" Win11 patch though, they got some tips back from their users.
Cthulhu_ 2 days ago [-]
It's an interesting model, makes me wonder if prolific open source contributors do it ("leave a tip if you like this MR" kind of thing).
antonvs 2 days ago [-]
> Looks like MS thinks it's a "tip" rather than an ad.
No, they don't.
> edit: I think it's an ad too. Everyone would think so, except for MS.
You think a company with a $2.65 trillion market cap and an army of marketing professionals doesn't realize that what they're doing here is an ad, and didn't implement it intentionally as such?
That's not even remotely plausible. In the quantum multiverse which contains all physically realizable possibilities, that isn't one of them.
plastic041 1 days ago [-]
> company with a $2.65 trillion market cap and an army of marketing professionals
That's one reason I think they would argue it's not an ad. Other reasons are the "recommendations", "tips", and "suggestions" in my Windows.
antonvs 1 days ago [-]
They might argue it's not an ad but they don't believe or think it's not an ad. There's a big difference.
plastic041 21 hours ago [-]
Well, at least their PM thinks (or argues) it's a tip[0]. Also, it's pretty obvious I was just being sarcastic about MS's behavior. I don't know why you are being so mean, but please don't be. Have a nice day.
The correct word would be that the PM claims it’s a tip. Now ask yourself whether a PM who realizes he or his team has made a terrible mistake and is doing damage control in public is likely to make only true claims.
Correcting your mistakes is not mean. If you didn’t mean what you wrote, well hey, that’s a good example of the difference between what you think and what you say. See how that works?
plastic041 31 minutes ago [-]
Correcting my mistakes isn't mean but...
> In the quantum multiverse which contains all physically realizable possibilities, that isn't one of them.
Or
> See how that works?
These are. You can be sarcastic as much as you want to be but I can't?
And again, I really don't understand why you are so mean about this. I read some of your other comments and many of them are unnecessarily mean. Please be nice.
josefritzishere 1 days ago [-]
This does not look like random chance. It's a pattern of behavior.
m3kw9 1 days ago [-]
You just text-replaced "ad" with "tip"; it's still an ad.
cyanydeez 1 days ago [-]
New age clippy no one wants but M$lop
timrogers 1 days ago [-]
Tim from the Copilot coding agent team here. We've now disabled these tips in pull requests created by or touched by Copilot, so you won't see this happen again for future PRs.
We've been including product tips in PRs created by Copilot coding agent. The goal was to help developers learn new ways to use the agent in their workflow. But hearing the feedback here, and on reflection, this was the wrong judgement call. We won't do something like this again.
burnte 1 days ago [-]
> We've now disabled these tips in pull requests created by or touched by Copilot, so you won't see this happen again for future PRs.
It's appreciated, but these weren't tips, these were ads. Tips are "Save time with keyboard shortcuts" or "Check out the latest features under 'What's New' in the help menu!" When you name other products, that's an ad.
ChadNauseam 1 days ago [-]
That doesn't really make sense. So it's an ad for raycast? But raycast said they didn't know about it. To me the explanation makes perfect sense. "You can use this tool with raycast" seems like a very reasonable tip.
burnte 1 days ago [-]
> That doesn't really make sense. So it's an ad for raycast?
It's an ad for using CoPilot and for Raycast.
> But raycast said they didn't know about it.
If I buy a billboard that tells people to go eat at a nearby restaurant, that's an ad regardless of whether or not the restaurant knows that I bought it.
> To me the explanation makes perfect sense. "You can use this tool with raycast" seems like a very reasonable tip.
Raycast is a paid product. Even though they have a free tier, they only have that to get people to use and like the tool enough to pay for it. They want you to use Raycast so you use CoPilot and pay for it. It's an ad.
GitPushOrigin 23 hours ago [-]
Anyone claiming this is just a tip is being disingenuous or is extremely naive. MS knows exactly what they're doing, this wasn't a charity offering. Now they're claiming it was a tip to save face.
NekkoDroid 1 days ago [-]
Cambridge Dictionary defines an ad as: a picture, short film, song, etc. that tries to persuade people to buy a product or service.
My short search really didn't bring up any definition that requires the product/service owner to know the advertising is happening.
And the message very much qualifies as trying to get people to buy Raycast (or at minimum to use it, which usually leads to paying later on).
skywhopper 1 days ago [-]
[flagged]
ChadNauseam 1 days ago [-]
[flagged]
AmazingTurtle 1 days ago [-]
Bet their internal "tips team" used an LLM to generate "useful tips" for their coding agent system ;)
johnnyanmac 1 days ago [-]
Yup, broken windows all the way down, to put it kindly
jdejean 23 hours ago [-]
Tips don't include links to unassociated paid products. Call it a promotion if you prefer; it's still an unsolicited funnel.
skywhopper 1 days ago [-]
Tips are also not acceptable to add to PR text. It’s like the definition of a “weed”. A “tip” in the GitHub UI would make sense. But “tips” injected into my own PR text become unwelcome ads. In any case, what may be helpful “tips” today are only a gateway to straight up paid ads tomorrow. After all, I get told all the time by adtech folks that actually, the ads and all the tracking behind them are good because aren’t I glad the ads are relevant to my interests and that I’m supporting small businesses online whose shops can only exist because of the ad infrastructure. To which I say, no, they aren’t, and that’s a lie.
hightrix 1 days ago [-]
Just to add to the feedback.
No one, anywhere, ever wants this or anything like it. Do not inject anything that is outside of the context of the session, ever.
This is how you get your software banned at large companies.
Question for you, did anyone on the team really not push back? Does the team really think anyone wants ads in their copilot output? If the answer to both of these is no, you have a team full of yes men, not actual developers.
creativeSlumber 1 days ago [-]
> did anyone on the team really not push back?
This is the real question. If they are serious about not doing something like this again, they NEED to look at what process failure let something like this get proposed, designed, implemented, and pushed to production. Usually things get reviewed at each stage. Did the people who pushed back on this get steamrolled? If no one pushed back, that's an even more serious culture question, and the entire org would need training.
A serious "we won't do it again" needs to be accompanied by a COE to identify what went wrong and what guardrails can be put in place, and then by actually implementing them.
QuantumGood 1 days ago [-]
> did anyone on the team really not push back?
That's a tough one. In the big meeting? In the small meeting? "Officially" push back? Encouraged to make the push back unofficial? Etc. Even just internally, it can be hard to quantify. From internal > external, more so.
salamander014 3 hours ago [-]
This so much.
The number of times I've had to defend someone else's customers, let alone my own, is exhausting.
And that dynamic is only tolerated within close circles.
I've found that once "the decision" is made, the bigger the subsequent meeting, the more readily protests are swept under the rug.
On most occasions the worst part is that folks intentionally withhold information to get their way. And that's real hard to compete against without making an ass out of yourself, or losing the trust of others.
This is why core principles matter so much.
sneak 1 days ago [-]
They already know that nobody wants it. They don’t care.
sudonem 1 days ago [-]
They’re also developers and probably do care. I’d wager, as always, someone in management with bonus targets to hit probably told them to do it anyway. :/
justinclift 1 days ago [-]
> We won't do something like this again.
Microsoft has been pulling user hostile crap for decades, so either "we" or "like this" (or both) is probably not super accurate. ;)
namrog84 1 days ago [-]
Having worked in such environments: this particular team will try not to do it again.
But many other teams didn't make that commitment or learn any lesson. Even the original team will churn, people will forget, or new leadership will come in.
I believe they were being sincere, but reality is often more complicated than one person's statement.
vdfs 1 days ago [-]
We will never do something like this unless we get caught
hedora 1 days ago [-]
Wait! I think most people missed your "touched by Copilot" disclaimer.
Over on Twitter, someone from MS said that Copilot can modify PRs simply because it was mentioned in them?
I've been using GitHub since it was new and heavily rely on coding agents for development, but that's an insanely large security hole. There's clearly confusion about what copilot is and is not able to edit elsewhere in this thread.
I'm backing up old repos now, and am no longer trusting your service as an archive. I'm wondering if the world needs to fork things like npm and vs code to save itself from the supply chain attacks these sort of product management decisions will enable.
I already moved active development elsewhere when you dropped below three nines back in 2024-2025.
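For anyone else who wants to stop relying on GitHub as an archive, here's a minimal sketch of keeping a full offline mirror. A local repo stands in for the remote; paths are illustrative, and on a real repo you'd clone from the GitHub URL instead:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
# stand-in for the remote repo to be archived
git init -q -b main upstream
git -C upstream -c user.email=a@b -c user.name=a commit -q --allow-empty -m "init"
git -C upstream tag v1
# --mirror copies every ref (branches, tags, etc.), suitable for archival
git clone -q --mirror upstream backup.git
# later, refresh the archive in place
git -C backup.git remote update --prune >/dev/null
git -C backup.git rev-parse --verify refs/heads/main >/dev/null
git -C backup.git rev-parse --verify refs/tags/v1 >/dev/null
echo "archive up to date"
```

Note this only captures git data; issues and PR descriptions live in GitHub's database and need the API (or a tool built on it) to export.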
naikrovek 1 days ago [-]
If you don’t want copilot to work on your PRs, don’t ask it to.
manmal 1 days ago [-]
I would expect it to comment, not alter the code?
naikrovek 1 days ago [-]
It won’t unless you ask it to. It will review your PRs and it will create PRs if you don’t turn those things off, I believe, but it won’t edit or modify any PR.
My employer pushes copilot quite hard and I’ve never seen copilot do anything without me telling it to act in some way.
manmal 1 days ago [-]
Thank you for clarifying. It’s hard to get facts nowadays, people are just claiming whatever.
jffry 1 days ago [-]
> We've been including product tips in PRs created by Copilot coding agent
If the PR is wholly authored by Copilot I get the spirit of this, although maybe not the best implementation. And "tips" like this that look like an ad for a product _definitely_ feel like an enshittification betrayal of the user, even if it was a genuine recommendation and not a paid advertisement.
In the OP's situation, where Copilot was summoned to fix something within a human-authored PR, modifying the PR description to insert unrelated content is especially egregious. Copilot can easily include the tip in its own comment, so I'm curious why it was decided to edit the PR description instead.
plasma 1 days ago [-]
To be honest (just a user here), it's only recently (like a week ago?) that you could ask Copilot to edit an existing PR; historically it had to open a new one (that merged back into the original PR), or it had to have created the PR to begin with. I can see this unintentionally happening as part of the improvement that lets it edit existing PRs.
skywhopper 1 days ago [-]
Nah, PR text is a completely inappropriate place for a tip to appear. A PR description should describe the contents of the PR, not include unrelated, unsolicited advice. It’d be like submitting a bug fix, and saying “this PR fixes bug X, and also, have you considered using a different linter in this project?” Completely inappropriate.
Aachen 1 days ago [-]
Tip: tomatoes are on offer at Contoso now!
(Now imagine this edited into the post you just made for a more-apt comparison)
If you do work at MS, I cannot believe any person involved legit thought it was "just a tip and nobody will mind their posts being edited to include product recommendations". I don't know what other parts of your comment are honest if the core statement is false
nrds 1 days ago [-]
> We won't do something like this again.
This has just as much value as when an LLM claims it won't make a certain mistake again, and for exactly the same reason.
TonyStr 1 days ago [-]
Thank you, Tim.
You should gather together your team and look through the responses to this thread together. There are a lot of emotions in these comments, but it could be a very constructive experience if you're able to put that aside. I'm sure you're aware that customer-sentiment toward Github has been poor lately, but these commenters are your customers. I believe Github has the potential to win back loyalty, but it will require a deeper understanding of your customer segment.
tyleo 1 days ago [-]
I’m curious how the decision to include ads like this was made. Is that something you can share?
shimman 1 days ago [-]
[flagged]
ncr100 1 days ago [-]
MS was deemed a monopoly, I believe around '99, and was not broken up; instead, it was given behavioral edicts by the court.
Microsoft owns GitHub where many of these ethical violations are easily found and were perpetrated.
I speculate that the cultural acceptance of using monopoly power for corporate benefit could still be present in negotiations between MS and acquisition targets.
moconnor 1 days ago [-]
Whoever did this must have realised that users would hate it. So is this just demonstrating that the internal culture emphasises things other than user happiness?
I also note that ”for PRs” - will we see these appearing as comments in generated code?
odst 1 days ago [-]
"We won't do something like this again"
Sureeeeee
wartywhoa23 1 days ago [-]
Will surely do something like another thing nobody wanted or needed instead.
stackghost 1 days ago [-]
Hi Tim,
I see that you're a product manager at GitHub. Can you explain why you thought this feature was value-added?
nikisweeting 1 days ago [-]
I know this is not the right place for this but if there's any chance you could send this link to someone internal at Github who knows how to fix this, that would be awesome! https://github.com/orgs/community/discussions/70577
It's only semi-related in that it's a similar string that's appearing in millions of repos due to a Github feature change, but it's now polluting Google search results with tons of duplicate URLs unnecessarily. The issue has 100+ votes but has been entirely ignored by the Github team.
nazgul17 1 days ago [-]
> The goal was to help developers
Is Microsoft receiving payments for these?
keremimo 14 hours ago [-]
You mean ads. Don't sugarcoat it. They are ads. Not tips. Ads.
miraculixx 17 hours ago [-]
So you continue to show ads to Copilot, just not to the user? If so, not a fix.
trueno 8 hours ago [-]
mate nobody wants unwarranted tips. have you guys lost your mind
naikrovek 1 days ago [-]
We don’t like ads, my man. There are too many MBAs in that company now. MBA holders lose contact with reality about halfway through that degree. Do not listen to them. They will destroy any product they touch if given enough time.
IAmGraydon 20 hours ago [-]
> The goal was to help developers learn new ways to use the agent in their workflow.
I appreciate the rest of your reply, but it would be generous to say you're stretching the truth here. Yes, the official MS statement is that these are "tips", but you, I, and everyone else here knows what this is.
m3kw9 1 days ago [-]
Who approved this dumbaz move? It’s clearly an Ad and calling it a tip is insulting
markkitti 1 days ago [-]
Thank you for listening.
NicoJuicy 14 hours ago [-]
I literally thought it was an early April fool's
yakshaving_jgt 15 hours ago [-]
This was obviously a terrible error of judgement. Will you be resigning over this?
AlexandrB 1 days ago [-]
Can I get that in writing in the ToS/EULA please?
poszlem 1 days ago [-]
Shockingly poor judgment.
explodes 1 days ago [-]
Huge miss. Again. And again. And again.
JohnTHaller 1 days ago [-]
For what it's worth, I appreciate that you took the time to address the issue and respond here, Tim.
Henchman21 1 days ago [-]
WE won't see it happen again ... UNTIL IT DOES! You guys are disingenuous actors. Bad faith and all that.
See, what I expect is that you or someone on your team will move on internally, and then all promises made will be not just forgotten, but tossed aside with relief. Because this is The Way within MS now. All projects are just fodder for your CV, and when you get that paybump/position you want some other completely unscrupulous actor will join and implement the same. exact. thing.
Edit: Wow this is a shitshow. It's almost like you dumb fuckers have burned up ALL THE GOODWILL YOU HAD LEFT.
cute_boi 1 days ago [-]
You may not want to do it, but will Microslop leadership agree? I don’t think this problem can be solved while leadership is focused only on adding more slop.
QuadmasterXLII 1 days ago [-]
“We won’t do something like this again”
A verifiable claim! I put it at 75% that you totally will, but if any manifolders think I'm full of it, it should converge to something less cynical.
Don’t worry, some alternate interpretation of the words “we”, “do”, or “like this” will allow a welch.
mananaysiempre 1 days ago [-]
> A verifiable claim!
Once you put a deadline on it. As stated I don’t think it is.
malfist 1 days ago [-]
I mean it's microslop, it'll probably be back by the end of the week. They only know how to let people say "yes" or "ask again later"
instakill 1 days ago [-]
[flagged]
dang 1 days ago [-]
Please see https://news.ycombinator.com/item?id=47576084 and please don't post so aggressively. I'm sure you don't intend to, but it has a strong negative effect on HN threads, and we're trying for something different here.
You may not feel you owe $BigCoEmployee better (though chances are, said person is just as much a community member here as you and the other users slamming them are), but you owe this community better if you're participating in it.
GP did not personally attack or denigrate the person they were replying to.
As the dozens of other comments show, the overwhelming majority of us do not believe the root commenter's claims, and this PM quite objectively does not have the leverage and authority to back their claim that they won't let this happen again.
It’s hard not to read your conception of “trying for something different” as granting undue credulity to a transparently dishonest corporate actor.
dang 1 days ago [-]
I understand, and I don't want to see ads in such contexts either. But "nobody believes this" is of course a personal attack, and "you don't have the power to [do what you just said you will do]" is pretty aggressive too.
The impulse to hit back against what is perceived as a "transparently dishonest corporate actor" is natural and human. I feel it also, and in fact my first response when I read such comments is always an adrenaline surge and the peculiar pleasure-hit of righteous indignation. So yes, I know where these feelings are coming from; we all do.
The problem is that in the HN context, (1) there is a human being at the other end of the account being attacked, and (2) there are orders of magnitude more attackers. In practice, this can easily turn into a mob dynamic and in fact a mass beating, if a virtual one. That's bad in its own right and bad for the community here.
I would say that "nobody believes this" would usually be a personal attack by default, but when it's followed up with "you do not have the power to prevent it", it's not a personal attack.
wswope 1 days ago [-]
> The impulse to hit back against what is perceived as a "transparently dishonest corporate actor" is natural and human.
Honest question: If we agree that the transparent dishonesty and the lynch mob behavior are both undesirable, how do you think the two should be balanced in operative terms?
I don’t want to put words in your mouth — but are you saying you won’t allow direct pushback to dishonest corporate actors??
My view is that healthy discourse requires balance and proportionality: flagrant dishonesty, as is the case here, should license a proportional degree of pushback.
I don’t agree at all that “nobody believes this” is quite the personal attack you’re making it out to be, but I don’t care to debate that at length either.
dang 1 days ago [-]
Two thoughts:
(1) the long-term health of the community has to be the priority here. Otherwise it won't survive—all the default internet vectors point the other way;
(2) it's possible to push back, express skepticism, etc., in a way that respects the person on the other side of the conversation and isn't just venting the impulse to shame the other.
You guys (<-- by which I really mean all of us in this community) need to remember that you're not just addressing a $BigCo abstraction when you post replies to someone else's comments. You're talking to an individual human. Sure, they may be working for a large and powerful company; but in the HN context the power dynamic is actually quite the reverse. If you put yourself in their shoes for a minute, it shouldn't be so hard to recognize that.
Like I said upthread, I agree with you on the underlying issue. But we also have to preserve the container, and the latter has to take precedence.
wswope 1 days ago [-]
It’s not about bigco at all in my eyes.
At the end of the day, if you want intellectual curiosity and openness, bad-faith dishonesty needs to be weeded out; thought-provoking and honest conversation should be promoted, regardless of where the contributor is employed.
The problem isn’t working for Microsoft. The problem is dishonesty.
You’re treating the root comment with kid gloves because it’s from a Microsoft employee. Please don’t do that.
dang 18 hours ago [-]
Internet commenters massively over-attribute "bad-faith dishonesty" to others while denying it in themselves. There's enough bad faith to go around in all of us.
It's obvious that the dominant variable in the GP was that he was replying from within $BigCo. Your comment starts out by denying that and ends by confirming it.
I'm not asking for special treatment for anyone, but the opposite: I don't want anyone on HN to be the target of a mob. That's the entire point.
wswope 9 hours ago [-]
Internet or not, I post under my real name on here, and I fully stand by my words. Anything I say on here, I’m 100% willing to say to someone’s face. We can link up for coffee or a beer next time I’m in CA if you’d like, and I’ll prove it.
The root comment is an aggressive affront to the audience’s collective intelligence. You’re in full “rules for thee; not for me” territory, and undermining your own site guidelines if you wanna let the root comment stand unchecked but go after the rightful callouts, in my book.
bilekas 1 days ago [-]
> But hearing the feedback here, and on reflection, this was the wrong judgement call
Hi Tim.. Why is there no pushback from grounded individuals against these decisions?
ryandrake 1 days ago [-]
I'm sure there was push-back, but only inside the minds of the rank-and-file. Nobody would have dared to actually speak out against it, as it would be career limiting. That's probably how a lot of these boneheaded decisions happen: It's an Emperor's New Clothes situation, nobody speaks up, and then the emperor is satisfied that the decision is great.
bitdeep 1 days ago [-]
> We won't do something like this again.
It's like you hiding shorts on youtube.
mghackerlady 1 days ago [-]
For some reason I don't believe you. When you do things like this, you lose trust. Work to get it back
vegadw 1 days ago [-]
Hi Tim, it's Jim, your manager. Please stick to the officially released statement:
"We tried to put ads in our product and it made people upset, upon realizing that this has angered our already paying users, we realize we should try again in a month. We're also aware GitHub is down, and are doing our best to deliver you a single 9 of reliability"
This helps us establish a strong, cohesive brand image inline with what customers of GitHub expect.
---
Edit: I don't mean anything bad to Tim here, he seems like a nice guy with good technical experience, etc. Rather, I'm expressing the almost comical extent to which I and - to the best of my understanding - many other community members now see GitHub in a very negative light, as unreliable and, as the article points out, enshittified. So, this is aimed at GitHub, not Tim; it's just addressed to him for the bit.
Tim, I do actually appreciate you responding to this thread and if you do have the power to make things better, using that power to do so.
monegator 1 days ago [-]
> We won't do something like this again.
it won't be an ad. It won't be a tip. It will be a suggestion! Recommendation! Opportunity!
chrisnight 1 days ago [-]
Be like Discord, call it a “Quest”.
semiinfinitely 1 days ago [-]
[flagged]
itomato 1 days ago [-]
[flagged]
tyleo 1 days ago [-]
This feels a bit threatening. Just want to call it out. I also disagree with the decision but I respect that someone came forward and took responsibility. That helps build our shared understanding of what happened. It’s hard and not something we should discourage.
itomato 1 days ago [-]
I feel threatened by Product placements disguised as "Tips".
We're not remotely even.
g051051 1 days ago [-]
How is that "threatening"? Genuinely curious.
itomato 1 days ago [-]
And why are they so “threatened”? Are they in the Core AI Org?
buildbot 1 days ago [-]
[flagged]
dang 1 days ago [-]
Please don't attack people for showing up to engage in discussion like this. I'm sure you don't intend to, but it quickly becomes part of mob behavior. We don't want that on HN for obvious reasons, and I'm sure nobody intends it, exactly, but it happens all too easily anyhow.
I appreciate the reply. As mentioned, it happens unintentionally. One way to describe the (desired) HN community is everyone learning together how to avoid unintended effects.
johnnyanmac 1 days ago [-]
> everyone learning together how to avoid unintended effects.
Okay, but when will Microsoft?
Or is it a more charitable interpretation to suggest they did intend this to be the effect?
dang 1 days ago [-]
No, I wouldn't argue that. The point is we need to do this for ourselves, regardless of what some company or other group of people do.
john_strinlai 1 days ago [-]
>It’s rather bold to post here…
it is rather nice, honestly. would you prefer to scream into the void and not get any response at all?
an open line of communication with the responsible people seems like literally the best possible option, why are you actively discouraging it?
>Maybe you all want to talk to Microsoft PR/legal before posting?
you would rather not hear anything, or get word-salad legalese that doesn't mean anything? how exactly would that be better?
johnnyanmac 1 days ago [-]
>would you prefer to scream into the void and not get any response at all?
At this point, yes. What have false platitudes done except cause more in-fighting?
>an open line of communication with the responsible people
And here's how the in-fighting begins. I'm not falling for the "they responded on social media. They're just like us!" anymore.
I don't want words, I want actions. Tired of playing whack a mole.
>you would rather not hear anything, or get word-salad legalese that doesnt mean anything?
Hearing nothing doesn't waste my time.
john_strinlai 1 days ago [-]
>Hearing nothing doesn't waste my time.
if not wasting time is your goal, several layers deep into the comments of a hackernews post is probably not the correct place to be.
johnnyanmac 1 days ago [-]
Perhaps. But I still do find insight in seeing the vibes of the community. Not as much from corporate PR.
buildbot 1 days ago [-]
I’m not intentionally discouraging it.
The responses are affecting my impression of Microsoft and Github extremely negatively. I don’t think I am alone.
It’s already pretty word salad legalese in my opinion, at least from Github.
> We are not training on the contents of private repos
Supremely ethical of you to ignore the license terms of open source code, but respect the license for proprietary code.
ncr100 1 days ago [-]
This too is creepy.
The behavioral impositions by the court in United States v. Microsoft discourage it from monopoly behavior by requiring it to open third-party APIs to competitors.
Q: Will Microsoft share its access to users' private repos (where they have not opted out of this training) via its GitHub subsidiary with third parties (e.g. OpenAI and Anthropic), in the spirit of its loss to the United States during its monopoly trial?
E.g., it could be argued that Microsoft today is ethically monopolizing user data for its own AI tooling advantage.
pesus 1 days ago [-]
Why such strong opposition to getting user consent before doing any of this? Not respecting consent seems to be a very common theme with MS these days, and it really doesn't reflect well on the company or you personally.
johnnyanmac 1 days ago [-]
Bypassing consent has been a very pervasive theme in tech and beyond this decade.
hightrix 1 days ago [-]
Opt out is the same as forcing this on people that don’t want it. You know this.
Microslop proving their name time and time again.
ulbu 1 days ago [-]
why not make it opt-in?
and I wonder if this opt-out applies to data we stored under your umbrella before having opted out.
jasonjmcghee 1 days ago [-]
What am I supposed to opt out of? The only setting in "Privacy" is "Suggestions matching public code" which is blocked and seems wholly unrelated to this.
chaps 1 days ago [-]
How much has Microsoft paid you to sell your soul?
buildbot 1 days ago [-]
Yes or No: Hypothetically I put customer data in a private repo, a single file. I use copilot to analyze the file, submitting its contents to that backend. This is the only thing in the repo. Is that data collected and trained on? If the answer is not no, you are lying about what this opt in is.
voganmother42 1 days ago [-]
Opt out is horse shit
microtonal 1 days ago [-]
IANAL, but I wonder how that is legal in the EU, at least for private individuals, since under the GDPR you need consent for collecting such data. (A timed opt-out is not consent.)
tyleo 1 days ago [-]
I’ve felt similarly about moving off GitHub. I bought a small 5U server rack years ago for my home network setup.
I’m considering getting a 1U device to host my own git server. I feel like if I move off, I should do it generally vs just moving to another provider who may also pull shenanigans.
i.e. you can run it effectively even on a Raspberry Pi
Remember to ensure you have proper backups regardless of whatever you decide to host it on. :)
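On the backups point: one low-effort option is `git bundle`, which packs all refs and their objects into a single file you can copy offsite and restore with a plain clone. A minimal sketch (all paths here are illustrative throwaways):

```shell
set -e
rm -rf /tmp/bundle-demo && mkdir -p /tmp/bundle-demo && cd /tmp/bundle-demo

# A throwaway repo standing in for whatever you self-host
git init -q -b main repo
git -C repo -c user.name=dev -c user.email=dev@example.com \
    commit --allow-empty -qm "initial commit"

# Pack every ref plus all reachable objects into one portable file
git -C repo bundle create ../repo.bundle --all

# Confirm the backup is complete before trusting it
git -C repo bundle verify ../repo.bundle >/dev/null

# Restoring is just a clone from the bundle file
git clone -q -b main repo.bundle restored
git -C restored log --oneline
```

A cron job that re-creates the bundle and rsyncs it elsewhere covers the "proper backups" box for small setups.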
monegator 1 days ago [-]
i had a gitea instance on a beaglebone black! Self-hosting can have really low requirements (now it's a much beefier Banana Pi R3 router, but there are many containers running on it)
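For what it's worth, the floor is even lower than a Gitea/Forgejo instance: a "self-hosted remote" is ultimately just a bare repository on a box you can reach, usually over SSH. A sketch using a local path purely for illustration (the server path in the comment is a hypothetical):

```shell
set -e
rm -rf /tmp/selfhost-demo && mkdir -p /tmp/selfhost-demo && cd /tmp/selfhost-demo

# The "server" side: a bare repo, i.e. refs + objects, no working tree.
# On a real Pi this would live at e.g. pi@homeserver:repos/project.git
git init -q --bare server.git

# The "client" side: a normal working repo pushing to it
git init -q -b main work
git -C work -c user.name=dev -c user.email=dev@example.com \
    commit --allow-empty -qm "first push to my own box"
git -C work remote add home ../server.git
git -C work push -q home main

git ls-remote server.git   # the bare repo now holds the branch
```

No web UI, no daemon beyond sshd; Gitea/Forgejo only add the browsing/issues layer on top of this.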
neya 2 days ago [-]
I feel like there is an even more important crisis that is being masked over here:
New Section J — AI features, training, and your data: We’ve added a dedicated section that brings all AI-related terms together in one place. Unless you opt out, you grant GitHub and our affiliates a license to collect and use your inputs (e.g., prompts and code context) and outputs (e.g., suggestions) to develop, train, and improve AI models.
We should not be using Copilot in the first place.
heavyset_go 2 days ago [-]
OpenAI/ChatGPT/Codex, Anthropic/Claude and Google/Gemini all do this.
neya 1 days ago [-]
> OpenAI/ChatGPT/Codex, Anthropic/Claude and Google/Gemini all do this.
1. Everyone doing this doesn't mean it's acceptable.
2. Google Gemini explicitly says right under the chat box if you are a paid subscriber (Workspace):
Your <company name> chats aren’t used to improve our models. Gemini is AI and can make mistakes.
Not sure about the others.
g947o 1 days ago [-]
I think anyone using a "Team" or enterprise plan of ChatGPT/Claude/Copilot doesn't have their data used for training, that's the same across the board.
heavyset_go 22 hours ago [-]
My comment was not meant to excuse what they're doing, just to point out that it's the bad status quo for these services
cromulent 2 days ago [-]
Regarding Claude: As I have unticked the "Help improve Claude" checkbox, I was under the impression that Claude did not do this.
-> Privacy -> "Allow GitHub to use my data for AI model training"
neya 1 days ago [-]
Yeah, but it's a shitty move though - it should be opt-in by default, rather than opt-out. Imagine: you just continue coding normally, consciously avoiding Copilot, only to find out that Github has been secretly training their models on your code, just because you forgot to toggle off a setting that was turned on without your knowledge - one they didn't even have the decency to email you about, but just posted on a blog no one reads.
saintfire 1 days ago [-]
I got an email about it.
It's sort of a moot point since the whole thing is for goodwill anyways.
They freely scraped licensed code and semi-private data across the internet and now they're pretending that they need to license anything.
If a court rules they had to license data in the first place then the whole industry would actually have to start following laws.
Interesting indeed. I wonder how long GitHub as a platform will remain a viable option. Anyone remember SourceForge?
mghackerlady 1 days ago [-]
It still exists. It's practically unusable without an adblocker (like slashdot) but the occasional old project is hosted there (particularly CDE; how the mighty have fallen)
It's becoming clearer and clearer that open-source is our only hope against enshittification. Everything that is VC backed or publicly traded will become enshittified, it's just a matter of time. At least with open-source, you can fork it and remove the "features" or point your agent to it and have it write the feature in your tech stack.
Hell, I just saw an amazing open-source alternative to Raycast[0] and just replaced it the other day.
> open-source is our only hope against enshittification. Everything that is VC backed or publicly traded will become enshittified
Solo founder here. My business is not VC-backed nor publicly traded, and I specifically avoided taking investment so that I can make all the decisions.
I avoid enshittification. This sometimes hurts revenue, but so be it. I wouldn't want to subject my users to anything I wouldn't like.
So, open-source is not the only hope. You can run a sustainable business without enshittification. The problem is money people. The moment money people (career managers, CFOs, etc) take over from product people, the business is on a downward path towards enshittification.
theturtletalks 2 days ago [-]
I believe you, it's just I've seen similar stories and the good-intentioned founder gets tired and eventually sells the business and the new owner ends up enshittifying the product. Not saying in the slightest it will happen to your company and I don't hold that against the founder. It's their prerogative after all.
Even when I use proprietary software, I sleep easier at night knowing that open-source alternatives keep them honest in their approach and I have an out if things do change.
jjav 2 days ago [-]
> It's becoming clearer and clearer that open-source is our only hope against enshittification. Everything that is VC backed or publicly traded will become enshittified, it's just a matter of time.
Stallman was always right, after all.
majewsky 2 days ago [-]
Well, about the free-software part, anyway.
mxmilkiib 1 days ago [-]
public/legislative demand for data portability is imho the movement that will help shift society from this cycle
edit: oh, that and distributed authentication and distributed discovery
In addition, they're doing some very shady stuff re: captchas and accessibility, most likely running some secret patches on their server that they're not publishing in their source tree.
progval 2 days ago [-]
Can you be more specific?
steve1977 2 days ago [-]
It is, but Codeberg is only for free and open source projects.
sumuyuda 2 days ago [-]
Check out https://codefloe.com for private repos hosted with Forgejo. It is also free and hosted in the EU.
hvb2 2 days ago [-]
Are you actually using this? Their status page seems to indicate that their main service is unhealthy for the past 6 days?
Unhealthy doesn't mean unusable but it sounded great until I checked that.
sumuyuda 2 days ago [-]
I just started using it last week. So can’t comment on the reliability yet.
ahartmetz 2 days ago [-]
You are free to host your own instance for commercial software.
steve1977 2 days ago [-]
But that would be Forgejo and some other projects AFAIK, not Codeberg (which is basically a hosting service using these projects)
ahartmetz 2 days ago [-]
Yeah sure, and I guess there's a market for that as a service - others have mentioned at least one instance of that.
pelasaco 2 days ago [-]
until it's not.
Every company or entity changes over time. Codeberg is great, but with more people using it for free without donating, and worse, more people abusing the service with bs AI-generated code, malware, etc., it will get more expensive to keep running. For now they have money, but as an e.V. in Germany you survive either from members or from donations. So use Codeberg, but most importantly, support it!
Sourcehut is pretty good if you're willing to pay the (very reasonable may I add) prices
raincole 2 days ago [-]
A few decades? Its competitors are not magically immune to this kind of spam.
jruohonen 2 days ago [-]
> Its competitors are not magically immune to this kind of spam.
Sure; a platform is a platform is a platform. As for predictions, it is interesting to see whether self-hosting and smaller self-managed infrastructures will gain more traction again.
petcat 2 days ago [-]
> I wonder how long GitHub as a platform will be there as a viable option.
It will be there for as long as you (and everyone else) keep using it.
wartywhoa23 1 days ago [-]
It will be there as long as M$ still needs to train LLMs on human-made code.
antonvs 1 days ago [-]
The desire for free stuff is one of the most effective psychological hacks there is.
The large majority of the dystopian web, like Gmail, Facebook, etc. depend on that.
People who avoid e.g. Github, Gmail, Facebook, Xitter, etc. out of concern for broader principles will always be minor outliers.
Xitter is one of the best examples. Everyone knows it's compromised, owned by a dangerously antisocial person who's actively working at multiple levels to make the lives of everyone else on Earth worse, yet very few have stopped using it.
The saying "There's no ethical consumption under capitalism" is far too weak. It should be more like: there are no ethics under capitalism.
RALaBarge 1 days ago [-]
It will probably remain as a platform for a very long time.
cess11 1 days ago [-]
SourceForge is still chugging along. It hosts some prominent projects:
It's baked into literally every coding tutorial and is kind of an industry standard, like JIRA. Maybe it's just an experiment at this moment.
officialchicken 2 days ago [-]
I must have a really really outdated version of K+R C.
bayindirh 2 days ago [-]
> kind of industry standard
...for now.
> like JIRA
is not an industry standard. It's software that's widely used by some folks. I used it in the past, but I'm not using it now, for example.
> Maybe it's just an experiment at this moment.
Does Microsoft understand objection and negative feedback to experiments?
- No.
- Remind me in three days.
ahartmetz 2 days ago [-]
Fuck the industry standard. That is how industry standards change.
By the way, most pre-industry-standard FOSS projects still have their own infrastructure. I do find it disappointing that Rust is on GitHub.
dvfjsdhgfv 2 days ago [-]
Most larger orgs I worked for used Gitlab rather than Github.
Anyway, the core value of Github has always been collaboration - this is where people were. If people go to other platforms, this core value dwindles. And switching platforms is not that difficult.
What an absolute mess. It's like some dystopian future where a man is lying in a casket, nearly dead, and on the casket's ceiling, inches from his face, is a screen with an ad blaring to drink more Diet Fanta.
kstenerud 1 days ago [-]
The ads are annoying, and I'm glad Microsoft will stop doing it.
One thing I do like, however, is how agents add themselves as co-authors in commit messages. Having a signal for which commits are by hand and which are by agent is very useful, both for you and in aggregate (to see how well you are wielding AI, and the quality of the code being generated).
Even when I edit the commit message, I still leave in the Claude co-author note.
AI coding is a new skill that we're all still figuring out, so this will help us develop best practices for generating quality code.
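For reference, the co-author signal described above is just a "Co-authored-by:" trailer in the last paragraph of the commit message, which GitHub renders as an additional author. A minimal sketch (the repo path is a throwaway, and the exact name/email an agent uses varies by tool):

```shell
set -e
rm -rf /tmp/coauthor-demo && git init -q -b main /tmp/coauthor-demo
cd /tmp/coauthor-demo

# Trailers live in the final paragraph of the message; agents typically
# append one automatically, and it survives if you keep it when amending.
git -c user.name=dev -c user.email=dev@example.com commit --allow-empty -q \
    -m "Fix parser edge case" \
    -m "Co-authored-by: Claude <noreply@anthropic.com>"

# Tooling can then filter agent-assisted commits mechanically:
git log --format='%h %s' --grep='Co-authored-by: Claude'
```

Because it's machine-readable, the trailer supports exactly the kind of aggregate analysis described above (hand-written vs. agent-assisted commits) with a one-line `git log --grep`.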
yarn_ 1 days ago [-]
I don't quite see the benefit of this, personally.
Whoever is submitting the code is still responsible for it, why would the reviewer care if you wrote it with your fingers or if an LLM wrote (parts of) it? The quality+understanding bar shouldn't change just because "oh idk claude wrote this part". You don't get extra leeway just because you saved your own time writing the code - that fact doesn't benefit me/the project in any way.
Likewise, leaving AI attribution in will probably have the opposite effect as well, where a perfectly good few lines of code get rejected because some reviewer saw it was claude and assumed it was slop. Neither of these cases seems helpful to anyone (obviously it's not like AI can't write a single usable line of code).
The code is either good or it isn't, and you either understand it or you don't. Whether you or claude wrote it is immaterial.
kstenerud 1 days ago [-]
You're quite right that the quality of the code is all that matters in a PR. My point is more historical.
AI is a very new tool, and as such the quality of the code it produces depends both on the quality of the tool, and how you've wielded it.
I want to be able to track how well I've been using the tool, to see what techniques produce better results, to see if I'm getting better. There's a lot more to AI coding than just the prompts, as we're quickly discovering.
philote 1 days ago [-]
Just curious, what metrics would you use to track how good your results are?
kstenerud 1 days ago [-]
The tools are still in their infancy, but it would likely be a series of metrics such as complexity, repetition, test coverage issues (such as tests that cover nothing meaningful), architectural issues that remain unfixed far beyond the point where it would have been more beneficial to refactor, superfluous instructions and comments, etc.
yarn_ 1 days ago [-]
Yep other people pointed this out as well, this makes sense to me.
sheept 1 days ago [-]
As a reviewer, I do care. Sure, people should be reviewing Claude-generated code, but they aren't scrutinizing it.
Claude-generated code is sufficient—it works, it's decent quality—but it still isn't the same as human written code. It's just minor things, like redundant comments that waste context down the road, tests that don't test what they claim to test, or React components that reimplement everything from scratch because Claude isn't aware of existing component libraries' documentation.
But more importantly, I expect humans to be able to stand by their code, and at times defend against my review. But today's agents continue to sycophantically treat review comments like prompts. I once jokingly commented on a line using a \u escape sequence to encode an em dash, how LLMs would do anything to sneak them in, and the LLM proceeded to replace all — with --. Plus, agents do not benefit from general coding advice in reviews.
Ultimately, at least with today's Claude, I would change my review style for a human vs an agent.
yarn_ 1 days ago [-]
I agree with a lot of this, but that's kind of my point: if all these things (poor tests, non-DRY, redundant comments, etc.) were true about a piece of purely human-written code then I would reject it just the same, so what's the difference? Likewise, if claude solely produced some really clean, concise, rigorously thought-through and tested piece of code with a human backer then why wouldn't I take it?
As you allude to (and i agree), any non-trivial quantity of code, if SOLELY written by claude will probably be low-quality, but this is apparent whether I know its AI beforehand or not.
I am admittedly coming at this as much more of an AI-hater than many, but I still don't really get why I'd care about how much or how little you used AI as a standalone metric.
The people who are using AI "well" are the ones producing code where you'd never even guess it involved AI. I'm sure there are linux kernel maintainers using claude here and there; it's not like they expect to have their patches merged because "oh well i just used claude here don't worry about that part".
(But also yes, of course I'm not going to talk to claude about your PR, I will only talk to you, the human contributor, and if you don't know what's up with the PR then into the trash it goes!)
drfloyd51 1 days ago [-]
Knowing if an AI contributed is good data. The human is still responsible for the content of the PR.
While code is either good or not, evaluating it is a bit of a subjective exercise. We like to think we are infallible code-evaluating machines. But the truth is, we make mistakes. And we also take shortcuts. So knowing who made the commit, and whether they used AI, can help us evaluate the code more effectively.
layer8 1 days ago [-]
It’s not about who wrote it, but about who is submitting it. The LLM co-author indicates that the agent submitted it, which is a contraindication of there being a human taking responsibility for it.
That being said, it also matters who wrote it, because it's more likely for LLMs to write code that looks like quality code but is wrong than it is for humans.
yarn_ 1 days ago [-]
Well if an agent is submitting it I'm just going to reject it, that's no problem. "Just send me the prompt".
microtonal 1 days ago [-]
Whoever is submitting the code is still responsible for it, why would the reviewer care if you wrote it with your fingers or if an LLM wrote (parts of) it?
The problem is that submitters often do not feel responsible for it anymore. They will just feed review comments back to the LLM and let the LLM answer and make fixes.
This is disrespectful of the maintainers' time. If the submitter is just vibe/slop coding without any effort on their part, it's less work to do it myself directly using an LLM than having to instruct someone else's LLM through GitHub PR comments.
In this case it's better to just submit an issue and let me just implement it myself (with or without an LLM).
If the PR has a _co-authored by <LLM>_ signal, then I don't have to spend time giving detailed feedback under the assumption that I am helping another human.
yarn_ 1 days ago [-]
Right but these are bad actors, roughly speaking, so why should I expect them to disclose the fact that they're using LLMs to me?
If someone is repeatedly sending me slop to look at I'll block them whether or not they tell me an LLM was involved
Forgeties79 1 days ago [-]
> Whoever is submitting the code is still responsible for it, why would the reviewer care if you wrote it with your fingers or if an LLM wrote (parts of) it?
Maybe one day we can say that, but currently, it matters a lot to a lot of people for many reasons.
yarn_ 1 days ago [-]
> Likewise, leaving AI attribution in will probably have the opposite effect as well, where a perfectly good few lines of code get rejected because some reviewer saw it was Claude and assumed it was slop. Neither of these cases seems helpful to anyone (obviously it's not like AI can't write a single usable line of code).
That was my point here, it is a false signal in both directions.
Forgeties79 1 days ago [-]
According to you it’s all false. I don’t agree, and it certainly shouldn’t just be taken as a given.
For instance, I would want any AI-generated video showing real people to have a disclaimer. Same way TV ads disclose whether the people giving testimonials are actors or not. That is not only not false, but is actually a useful signal that helps prevent overly deceptive practices.
yarn_ 1 days ago [-]
I don't see what the "deceptive practices" would be though - you can just look at the code being submitted, there isn't really the same background truth involved as with "did the thing in this video actually happen?" "do these commercial people actually think this?"
If I have a block of human code and an identical block of LLM code then what's the difference? Especially given that in reality it is trivial to obfuscate whether it's human or LLM (in fact usually you have to go out of your way to identify it as such).
I am an AI hater but I'm just being realistic and practical here, I'm not sure how else to approach all this.
Forgeties79 9 hours ago [-]
I’m not an AI hater and I still think it should be disclosed. That’s how it should be approached.
snackerblues 1 days ago [-]
It tells you what average quality to expect, and to look out for beginner-level mistakes and straight up lying accompanied with fine bits of code. Not sure why you wouldn't want that context.
themafia 1 days ago [-]
Stealing copyrighted code and calling it your own is not a "skill."
HDThoreaun 1 days ago [-]
of course it is
johnnyanmac 1 days ago [-]
"Great artists steal" - Steve Jobs
johnnyanmac 1 days ago [-]
It's nice that you believe the goal most AI code is striving for is "generating quality code".
fortran77 1 days ago [-]
Yes. I don't mind AI submissions to my hobby projects as long as there's a person behind it. Only fully automated slop I mind. Before AI I used to get all sorts of PRs from people changing a comment or a line of documentation just so they can get more green squares on their GitHub summary. Plus ça change....
A line at the bottom of PRs, reports, etc that says "authored with the help of Copilot" is fine.
swimmingbrain 1 days ago [-]
[dead]
jackp96 1 days ago [-]
So, philosophically speaking, I agree with this approach. But I did read that there was some speculation regarding the future legal implications of signalling that an AI wrote/cowrote a commit. I know Anthropic's been pretty clear that we own the generated code, but if a copyright lawsuit goes sideways (since these were all built with pirated data and licensed code) — does that open you or your company up to litigation risk in the future?
And selfishly — I'd rather not run into a scenario where my boss pulls up GitHub, sees Claude credited for hundreds of commits, and then he impulsively decides that perhaps Claude's doing the real work here and that we could downsize our dev team or replace with cheaper, younger developers.
mikkupikku 1 days ago [-]
Let your employer's lawyers worry about that. If they say not to use LLMs, then you should abide by that or find a new job. But if they don't care, then why should you?
As for hobby projects, I strongly encourage you to not care. You aren't going to lawyer up to sue anybody, nor is anybody going to sue you, so YOLO. Do whatever satisfies you.
nemomarx 1 days ago [-]
If you're concerned about copyright risk, don't you want that kind of tagging so you could prove it wasn't used on particular code?
PunchyHamster 1 days ago [-]
not tagging something doesn't prove AI wasn't used
dpoloncsak 1 days ago [-]
I'm pretty sure if a copyright lawsuit went sideways you would still be open to litigation risk; you'd just be hiding the evidence.
What you're doing would fundamentally be similar to copyright theft, using 'someone' else's code without attributing them (it?) to avoid repercussions
Obviously the morals and ethics of not attributing an LLM vs an actual human vary. I am not trying to simp for the machines here.
> We've disabled it already. Basically it was giving product tips which was kinda ok on Copilot originated PR's but then when we added the ability to have Copilot work on _any_ PR by mentioning it the behaviour became icky. Disabled product tips entirely thanks to the feedback.
pinkmuffinere 1 days ago [-]
I’m grateful they disabled it, but their response still feels a bit tone deaf to me.
> Disabled product tips entirely thanks to the feedback.
This sounds like they are saying “thanks for your input!”, when really it feels more like “if you didn’t go out of your way to complain, we would have left it in forever!”
johnnyanmac 1 days ago [-]
Of course they would have. The squeaky wheel gets the grease. Why do you think governments spend billions upon trillions trying to get their citizens to essentially "shut up" instead of improving their conditions?
joegibbs 18 hours ago [-]
But why run free advertising in the first place?
da_grift_shift 1 days ago [-]
Accepting the megacorp euphemisms without critique ("product tips") is how enshittification festers.
simonw 1 days ago [-]
I've not seen any evidence that these were ads and not "tips".
Ads implies someone was paying for them. Promoting internal product features is not the same thing - if it was then every piece of software that shows a tip would be an ad product, and would be regulated as such.
matt_kantor 1 days ago [-]
> Ads implies someone was paying for them.
It doesn't to me.
By my understanding of the term, Netflix can most definitely advertise Netflix shows on its own platform, a flyer that a barber hangs on a public bulletin board is an advertisement, and the Oscar Mayer Wienermobile is advertising hotdogs when it drives through my town. Do you not consider these things to be advertisements?
I think this particular story is a very different scandal if it turns out GitHub were charging other companies money in exchange for having Copilot include promotions for their products in PRs as opposed to Copilot adding uncompensated usage "tips" to those PRs.
matt_kantor 23 hours ago [-]
I agree with that.
Two things:
1. People using the word "advertisement" when commenting on this situation aren't necessarily saying that's what's happening, and they may find these tips/ads distasteful anyway (I know I do).
2. Even if someone isn't literally paying Microsoft to insert these tips/ads, promoting third parties which are themselves Microsoft customers still benefits Microsoft.
wat10000 1 days ago [-]
I could buy it if this was just being shown to the person who was using Copilot. Hey, here's a feature you might like. Seems OK. But it was put into the PR description. That gets seen by potentially many people, who are not necessarily using Copilot.
iso1631 1 days ago [-]
When Apple puts an advert for an Apple show in front of For All Mankind, that's an advert.
Maybe I put up with it and it just adds to my subconscious seething, or maybe I get the episode elsewhere because if I watch on jellyfin I don't have the advert. Of course that then harms the show as my viewing isn't counted, but they've cancelled it anyway so perhaps it doesn't really matter.
If it isn't an advert, then at very least there's a button to disable it.
isjciwjdieh 21 hours ago [-]
What? For All Mankind wasn’t cancelled.
Season 5 is coming out now with season 6 already confirmed coming—which, granted, will be its last, but that’s not a cancellation in any sense of the word.
iso1631 13 hours ago [-]
"not renewed" or "cancelled" is the same thing
johnnyanmac 1 days ago [-]
Ads usually imply a financial incentive. But that's not always the case. Technically, if I were to praise someone's blog and link to it, that would also be an ad.
Ads tend to also imply tangential information shown to you in an undesired area. If this was some tool tip and not embedded in the PR comment, many wouldn't call it an ad.
WD-42 2 days ago [-]
Why is copilot doing this? If they wanted to show ads couldn’t they… just show ads? Or is GitHub such a house of cards at this point that editing pr descriptions is the only way without risking another 9 of downtime?
flogy 2 days ago [-]
Are we sure this actually is originating from MS Copilot itself? Technically I believe it would be possible to smuggle ads into PRs using prompt injection too.
It could simply be something in the Raycast integration?
oefrha 2 days ago [-]
I said it’s more believable than GitHub randomly advertising a non-GitHub product (my initial read of the situation, which seemed highly unlikely).
rob74 2 days ago [-]
...a non-GitHub and non-Microsoft product.
acka 2 days ago [-]
An originally macOS-only product, too.
Also, the documentation on Github, linked to by the ad, shows only Mac keyboard shortcuts for operating Raycast.
dathinab 2 days ago [-]
This is unsolicited advertisement impersonating the developer (yes, people can guess, but it still places the ad inside a message from the developer, and unlike e.g. mail programs that do this, it's not placed in a draft).
I don't see how this is supposed to be legal.
hedora 1 days ago [-]
Demand it be made illegal. Vote, especially during primaries, and almost never for an incumbent.
gpm 1 days ago [-]
I strongly suspect that this is already illegal - publicity rights are a thing - and the demand that needs to be made is for the law to be enforced.
khvirabyan 2 days ago [-]
Just thinking, could it be that your coworker used Raycast to spin up a codex to review and fix the typo on the PR? And that comment was added by Raycast?
that's an imported PR, presumably from github. Note how the copilot comments come from the same user as the author, with an `imported` tag.
ayhanfuat 1 days ago [-]
I stand corrected. GitHub team confirmed it's their Copilot ad.
mavamaarten 2 days ago [-]
I doubt it. I noticed a few of these comments too on our PRs. We did ask Copilot for a review on GitHub (we just add Copilot as a reviewer) but not through Raycast.
thombles 2 days ago [-]
Oof. Why can’t it just do its one job? My interest level in trying these agents has gone from lukewarm to zero.
So I think they’re injecting this as a tip on using Copilot, that just happens to be their integration with Raycast.
I have no idea what their actual partnership with Raycast looks like, maybe this is part of what they offered them? But it’s not a traditional link to another product ad like it appears to be from Raycast being a link.
It's time to make some money with Copilot and one way to do that is with partnerships.
GitHub's docs and blog make use of and feature Raycast, and I'm willing to bet that's the result of a partnership, and not because someone writing docs and blog posts happens to think Raycast is great and keeps bringing it up.
tonyedgecombe 2 days ago [-]
The same way Google advertises other organisations' products.
Aurornis 1 days ago [-]
I actually love these ads and also the way Claude injects itself as a co-author.
Seeing them is an easy signal to recognize work that was submitted by someone so lazy they couldn’t even edit the commit message. You can see the vibe coded PRs right away.
I think we should continue encouraging AI-generated PRs to label themselves, honestly.
I’m not against AI coding tools, but I would like to know when someone is trying to have the tool do all of their work for them.
mikkupikku 1 days ago [-]
It's not a self-own, it's honest disclosure. It's unethical (if not outright fraudulent) to publish LLM work as if it were your own. Claude setting itself as coauthor is a good way to address this problem, and it doing so by default is a very good thing.
palmotea 1 days ago [-]
> It's unethical (if not outright fraudulent) to publish LLM work as if it were your own.
I disagree on that. It's really a gray area.
If it's some lazy vibecoded shit, I think what you say totally applies.
If the human did the thinking, gave the agent detailed instructions, and/or carefully reviewed the output, then I don't think it's so clear cut.
And full disclosure, I'm reacting more to copilot here, which lists itself as the author and you as the co-author. I'm not giving credit to the machine, like I'm some appendage to it (which is totally what the powers-that-be want me to become).
> Claude setting itself as coauthor is a good way to address this problem, and it doing so by default is a very good thing.
I do agree that's a sensible default.
yodsanklai 1 days ago [-]
> It's really a gray area.
Yes, it really depends on how much work the agent actually did. It could be as little as a renaming or a refactoring, or executing direct orders that require no creativity or problem solving. In which case the agent shouldn't be credited any more than the linter or the IDE.
mannanj 1 days ago [-]
Telling someone you did something that you actually didn't do isn't a gray area, it's a lie.
Using AI tools to code and then hiding that is unethical imo.
BeetleB 1 days ago [-]
> Telling someone you did something that you actually didn't do isn't a gray area, it's a lie.
Pre-LLMs, various helper tools (including LSPs), would make code changes to improve the quality of the code - from simple things like adding a const specifier to a function, to changing the actual function being called.
No one insisted that the commit shouldn't have the human's name on it.
isjciwjdieh 21 hours ago [-]
These are not anywhere near equivalent. The fact that you think they are is laughable.
zeroonetwothree 1 days ago [-]
I think it depends a lot if you reviewed it as carefully as you would your own code.
Of course most people don’t do that
mikkupikku 1 days ago [-]
I don't put human code reviewers down as coauthors let alone the sole authors of my commit. So honestly, the fact that a vibe coded commit lists me as the author at all is a little bit dodgy but I think I'm okay with it. The LLM needs to be coauthor at least though, if not outright the author.
So even if I go over the commit with a fine tooth comb and feel comfortable staking my personal reputation on the commit, I still can't call myself the sole author.
hombre_fatal 1 days ago [-]
The implementor only got credit back in the day when the implementor was a human who had to do a lot of the work, often all of it.
Now that the cost of writing code is $0, the planner gets the credit.
Like how you don't put human code reviewers down as coauthors, you also don't put the computer down as a coauthor for everything you use the computer to do.
It used to be the case where if someone wrote the software, you knew they put in a certain amount of work writing it and planning it. I think the main issue now is that you can't know that anymore.
Even something that's vibe-coded might have many hours of serious iterative work and planning. But without using the output or deep-diving the code to get a sense of its polish, there's no way to tell if it is the result of a one-shot or a lot of serious work.
"Coauthored by computer" doesn't help this distinction. And asking people to opt-in to some shame tag isn't a solution that generalizes nor fixes anything since the issue is with people who ship poor quality software. Instead we should demand good software just like we did when it was all human-written and still low quality.
alsetmusic 1 days ago [-]
> And asking people to opt-in to some shame tag isn't a solution that generalizes nor fixes anything. Instead we should demand good software just like we did when it was all human-written and still crappy.
It’s not about shame. It’s about disclosure of effort / perceived-quality. And you’re right about the second part, but there’s even less chance of that being enforced / adopted.
hombre_fatal 1 days ago [-]
The problem is that you cannot get people to self-tag "this is crap / low effort". Especially not the worst actors that consistently generate garbage.
If they could do that, then they wouldn't be wasting your time to begin with. They'd have the ability to go "nah this PR is trash".
So the next idea is that we can find some sort of proxy, like whether someone used an LLM or not. But that's too ham-fisted since expert engineers with all the self-awareness also use the tool, and they have the ability and self-awareness to know that the software they are shipping is good quality, so why would they use the shame tag?
The shame tag has no audience. It's a fantasy that low quality actors will self-identify, else all sorts of societal problems would be made trivial.
mikkupikku 1 days ago [-]
Characterizing it as a "shame tag" is a value judgement I simply don't share, but if that framing is made common then you're definitely asking for people to lie about it.
raphinou 1 days ago [-]
In my project's readme I put this text:
"There is no commit by an agent user, for two reasons:
* If an agent commits locally during development, the code is reviewed and often thoroughly modified and rearranged by a human.
* I don't want to push unreviewed code to the repo, so I have set up a git hook refusing to push commits done by an LLM agent."
It's not that I want to hide the use of LLMs; I just modify the code a lot before pushing, which led me to this approach. As LLMs improve, I might have to change this though.
Interested to read opinions on this approach.
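For concreteness, a minimal sketch of what that pre-push hook can look like. The agent identity "claude-agent" is made up here; match whatever identity your container actually configures:

```shell
#!/bin/sh
# .git/hooks/pre-push — refuse to push commits authored by the LLM agent.
# "claude-agent" is a hypothetical identity; the agent container should be
# configured to commit under a distinct user.name so this check can find it.
AGENT_AUTHOR="claude-agent"
Z=0000000000000000000000000000000000000000

# git feeds pre-push lines of: <local ref> <local sha> <remote ref> <remote sha>
while read local_ref local_sha remote_ref remote_sha; do
    # Branch deletion: nothing to check.
    [ "$local_sha" = "$Z" ] && continue

    if [ "$remote_sha" = "$Z" ]; then
        range="$local_sha"                 # new branch: check all reachable commits
    else
        range="$remote_sha..$local_sha"    # existing branch: only the new commits
    fi

    # Any commit in the range authored by the agent blocks the push.
    agent_commits=$(git rev-list --author="$AGENT_AUTHOR" "$range")
    if [ -n "$agent_commits" ]; then
        echo "pre-push: refusing commits authored by $AGENT_AUTHOR:" >&2
        echo "$agent_commits" >&2
        exit 1
    fi
done
exit 0
```

Note that `--author` matches a regex against the author field, so an exact, unusual agent name avoids false positives.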
embedding-shape 1 days ago [-]
> * I don't want to push unreviewed code to the repo, so I have set up a git hook refusing to push commits done by an LLM agent."
Seems... Not that useful?
Why would someone make commits in your local projects without you knowing about it? That git hook only works on your own machine, so you're trying to prevent yourself from pushing code you haven't reviewed, but the only way that can happen is if you use an agent locally that also makes commits, and you aren't aware of it?
I'm not sure how you'd end up in that situation, unless you have LLMs running autonomously on your computer that you don't have actual runtime insight into? Which seems like it'd be a way bigger problem than "code I didn't review was pushed".
raphinou 1 days ago [-]
The agents run in a container and have another git identity configured. It happens that agents commit code, and I don't want to push it accidentally from outside the container, which is where I work.
singpolyma3 1 days ago [-]
Not just review but how you worked with the AI.
If you gave it four words and waited an hour, maybe you're not the author. But that's not how these tools are best used anyway.
stronglikedan 1 days ago [-]
Should Word set itself as my coauthor when it autocompletes some sentences for me? If I use Claude/Word to write something, then I am the only author, since Claude/Word is not a person, and Claude/Word did nothing without my direction. It's not unethical to not disclose the tools I use to produce my work. They're just tools, smdh.
spacechild1 1 days ago [-]
With Word autocomplete you're still actively writing your text. Wouldn't it be more fair to compare this with autocompletion in IDEs?
IANAL, so I'd appreciate any legal experts correcting me here. In my understanding, there have been court decisions that LLM output itself is not copyrightable. You can only claim authorship (and therefore copyright) if you have significantly transformed the output.
If you are truly vibe coding to the point where you don't even look at the generated code, how exactly are you transforming the LLM output?
Also, what if the LLM reproduces existing copyrighted code? There was a court decision last year in Germany finding that OpenAI violates German copyright law because ChatGPT may recreate existing song lyrics (which are licensed by GEMA) or create very similar variations.
QuantumNomad_ 1 days ago [-]
> […] and also the way Claude injects itself as a co-author.
> Seeing them is an easy signal to recognize work that was submitted by someone so lazy they couldn’t even edit the commit message. You can see the vibe coded PRs right away.
I was doing the opposite when using ChatGPT. Specifically manually setting the git commit author as ChatGPT complete with model used, and setting myself as committer. That way I (and everyone else) can see what parts of the code were completely written by ChatGPT.
For changes that I made myself, I commit with myself as author.
Why would I commit something written by AI with myself as author?
> I think we should continue encouraging AI-generated PRs to label themselves, honestly.
Exactly.
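Concretely, the git mechanics for this are just the `--author` flag; the author string and file name below are only examples:

```shell
# Commit LLM-written changes with the model as author and yourself as committer.
# The author string is illustrative; record whatever model label you prefer.
git add generated_module.py
git commit --author="ChatGPT (gpt-4o) <chatgpt@openai.example>" \
           -m "Add feature X (generated by ChatGPT)"

# The committer is taken from your normal user.name/user.email config, so
# `git log --format='%an / %cn'` shows both identities side by side.
```

Both fields survive in history, so anyone can later separate "who wrote it" from "who committed it".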
yarn_ 1 days ago [-]
"Why would I commit something written by AI with myself as author?"
Because you're the one who decided to take responsibility for it, and actually choose to PR it in its ultimate form.
What utility do the reviewers/maintainers get from you marking what's written by you vs. ChatGPT? Other than your ability to scapegoat the LLM?
The only thing that actually affects me (the hypothetical reviewer) and the project is the quality of the actual code, and, ideally, the presence of a contributor (you) who can actually answer for that code. The presence or absence of LLM-generated code by your hand makes no difference to me or the project, why would it? Why would it affect my decision making whatsoever?
It's your code, end of story. Either that or the PR should just be rejected, because nobody is taking responsibility for it.
Krssst 1 days ago [-]
As someone mostly outside of the vibe coding stuff, I can see the benefit in having both the model and the author information.
Model information for traceability and possibly future analysis/statistics, and author to know who is taking responsibility for the changes (and, thus, has deeply reviewed and understood them).
As long as those two pieces of information are present in the commit, I guess which commit field should hold which is for the project to standardise (but it should be normalised within a project, otherwise the "traceability/statistics" part cannot be applied reliably).
corndoge 1 days ago [-]
Yeah, nothing wrong with keeping the metadata - but "Authored-by" is both credit and an attestation of responsibility. I think people just haven't thought about it too much and see it mostly as credit and less as responsibility.
josephg 1 days ago [-]
I disagree. “Authored by” - and authorship in general - says who did the work. Not who signed off on the work. Reviewed-by me, authored by Claude feels most correct.
corndoge 1 days ago [-]
To me, Claude is not a who, it's an it. Before AI, did you credit your code completion engine for the portions of code it completed? Same thing
QuantumNomad_ 1 days ago [-]
> Before AI, did you credit your code completion engine for the portions of code it completed?
Code completion before LLMs helped me type faster by completing variable names, variable types, function arguments, and that's about it. It was faster than typing it all out character by character, but the auto-completion wasn't doing anything outside of what I was already intending to write.
With an LLM, I give brief explanations in English to it and it returns tens to hundreds of lines of code at a time. For some people perhaps even more than that. Or you could be having a “conversation” with the LLM about the feature to be added first and then when you’ve explored what it will be like conceptually, you tell it to implement that.
In either case, I would then commit all of that resulting code with the name of the LLM I used as author, and my name as the committer. The tool wrote the code. I committed it.
As the committer of the code, I am responsible for what I commit to the code base, and everyone is able to see who the committer was. I don’t need to claim authorship over the code that the tool wrote in order for people to be able to see who committed it. And it is in my opinion incorrect to claim authorship over any commit that consists for the very most part of AI generated code.
corndoge 1 days ago [-]
I do see your point. I suppose the question is what authorship entails, or should entail.
QuantumNomad_ 1 days ago [-]
True. Might also vary depending on how one uses the LLM.
For example, in a given interaction the user of the LLM might be acting more like someone requesting a feature, and the LLM is left to implement it. Or the user might be acting akin to a bug reporter providing details on something that’s not working the way it should and again leaving the LLM to implement it.
While on the other hand, someone might instruct the LLM to do something very specific with detailed constraints, and in that way the LLM would perhaps be more along the line of a fancy auto-complete to write the lines of code for something that the user of the LLM would otherwise have written more or less exactly the same by hand.
yarn_ 1 days ago [-]
This mirrors my thoughts.
user34283 1 days ago [-]
I am doing the work. Claude is a tool, and I won't attribute authorship to it.
yarn_ 1 days ago [-]
Future analysis is a valid reason to keep it, that's a good point and I agree with that.
waisbrot 1 days ago [-]
Claude adds "Co-authored by" attribution for itself when committing, so you can see the human author and also the bot.
I think this is a good balance, because if you don't care about the bot you still see the human author. And if you do care (for example, I'd like to be able to review commits and see which were substantially bot-written and which were mostly human) then it's also easy.
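For example, assuming the `Co-Authored-By: Claude <noreply@anthropic.com>` trailer that Claude Code adds by default (the exact text may vary by version), splitting the history is a one-liner each way:

```shell
# Commits whose message carries the Claude co-author trailer:
git log --grep="Co-Authored-By: Claude" --oneline

# Everything else (purely human-attributed commits):
git log --grep="Co-Authored-By: Claude" --invert-grep --oneline
```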
yarn_ 1 days ago [-]
> I'd like to be able to review commits and see which were substantially bot-written and which were mostly human) then it's also easy.
Why is this, though? I'm genuinely curious. My code-quality bar doesn't change either way, so why would this be anything but distracting to my decision making?
59nadir 1 days ago [-]
Personally, it would make the choice to say no to the entire thing a whole lot easier if they self-reported automatically and with no way to hide the fact that they've used LLMs. I want to see it for dependencies (I already avoid them, and would especially do so with ones heavily developed via LLMs), products I'd like to use, PRs submitted to my projects, and so on, so I can choose to avoid them.
Mostly this is because, all things considered, I really do not need to interact with any of that, so I'm doing it by choice. Since it's entirely voluntary I have absolutely no incentive to interact with things no one bothered to spend real time and effort on.
aydyn 1 days ago [-]
If you choose not to use software written with LLM assistance, you'll be using, to a first approximation, 0% of software in the coming years.
Even excluding open source, there are no serious tech companies not using AI right now. I don't see how your position is tenable, unless you plan to completely disconnect.
rapind 1 days ago [-]
This is shouting at the clouds I'm afraid (I don't mean this in a dismissive way). I understand the reasoning, but it's frankly none of your business how I write my code or my commits, unless I choose to share that with you. You also have a right to deny my PRs in your own project of course, and you don't even have to tell me why! I think on github at least you can even ban me from submitting PRs.
While I agree that it would be nice to filter out low effort PRs, I just don't see how you could possibly police it without infringing on freedoms. If you made it mandatory for frontier models, people would find a way around it, or simply write commits themselves, or use open weight models from China, etc.
yarn_ 1 days ago [-]
I mean sure, in the same sense that law enforcement would be a lot easier if all the criminals just came to the police station and gave themselves up
Again though, people can trivially hide the fact they used an LLM to whatever extent, so we kind of need to adjust accordingly.
Even if saying no to all LLM involvement seemed pertinent, it doesn't seem possible in the first place.
ctxc 1 days ago [-]
Accountability. Same reason I want to read human-written content rather than obvious AI: both can be equally shit, but at least with humans there's a high probability of that aspirational quality of wanting to be considered "good".
With AI I have no way of telling if it was from a one line prompt or hundreds. I have to assume it was one line by default if there's no human sticking their neck out for it.
yarn_ 1 days ago [-]
The human who submitted the PR is 100% accountable either way, thats partly my point.
Disclosing AI has its purposes, I agree, but it's not like we can reliably get everyone to do it anyway, which also leads me to thinking this way.
jacobgkau 1 days ago [-]
LLMs can make mistakes in different ways than humans tend to. Think "confidently wrong human throwing flags up with their entire approach" vs. "confidently wrong LLM writing convincing-looking code that misunderstands or ignores things under the surface."
Outside of your one personal project, it can also benefit you to understand the current tendencies and limitations of AI agents, either to consider whether they're in a state that'd be useful to use for yourself, or to know if there are any patterns in how they operate (or not, if you're claiming that).
Burying your head in the sand and choosing to be a guinea pig for AI companies by reviewing all of their slop with the same care you'd review human contributions with (instead of cutting them off early when identified as problematic) is your prerogative, but it assumes you're fine being isolated from the industry.
yarn_ 1 days ago [-]
Sure, the point about LLM "mistakes" etc being harder to detect is valid, although I'm not entirely sure how to compare this with hard-to-detect human mistakes. If anything I find LLM code shortcomings often a bit easier to spot because a lot of the time they're just unneeded dependencies, useless comments, useless replication of logic, etc. This is where testing comes into play too, and I'm definitely reviewing your tests (obviously).
>Burying your head in the sand and choosing to be a guinea pig for AI companies by reviewing all of their slop with the same care you'd review human contributions with (instead of cutting them off early when identified as problematic) is your prerogative, but it assumes you're fine being isolated from the industry.
I mean listen: I wish with every fiber of my being that LLMs would disappear off the face of the earth for eternity, but I really don't think I'm "isolating myself from the industry" by not simply dismissing LLM code. If I find a PR to be problematic I would just cut it off; that's how I review in the first place. I'm telling some random human who submitted the code to me that I'm rejecting their PR because it's low quality, I'm not sending Anthropic some long detailed list of my feedback.
This is also kind of a moot point either way, because everyone can just trivially hide the fact that they used LLMs if they want to.
jacobgkau 1 days ago [-]
> If anything I find LLM code shortcomings often a bit easier to spot because a lot of the time they're just unneeded dependencies, useless comments, useless replication of logic, etc.
By this logic, it's useful to know whether something was LLM-generated or not because if it was, you can more quickly come to the conclusion that it's LLM weirdness and short-circuit your review there. If it's human code (or if you don't know), then you have to assume there might be a reason for whatever you're looking at, and may spend more time looking into it before coming to the conclusion that it's simple nonsense.
> This is also kind of a moot point either way, because everyone can just trivially hide the fact that they used LLMs if they want to.
Maybe, but this thread's about someone who said "I'd like to be able to review commits and see which were substantially bot-written and which were mostly human," and you asking why. It seems we've uncovered several feasible answers to your question of "why would you want that?"
yarn_ 22 hours ago [-]
>It seems we've uncovered several feasible answers to your question of "why would you want that?"
Fair enough
orwin 1 days ago [-]
I'm not against putting AI as coauthor, but removing the human who allowed the commit to be pushed/deployed from the commit would be a security issue at my job. The only reason we're allowed to deploy code with a generic account is that we tag the repo/commit hash, and we wrote a small piece of code that retrieve the author UID from git, so that in the log it say 'user XXXNNN opened the flux xxx' (or something else depending on what our code does)
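The audit trick described above can be sketched in a few lines. Everything here is a hypothetical stand-in for the poster's actual setup: the UID-in-email convention, the `corp.example` domain, and the function names are all invented for illustration.

```python
import re
import subprocess


def uid_from_email(email: str) -> str:
    """Extract a corporate UID like 'XXX123' from an author email.
    The 'uid@corp.example' address format is an assumed convention."""
    match = re.match(r"([A-Za-z]+\d+)@", email)
    if not match:
        raise ValueError(f"no UID in author email: {email!r}")
    return match.group(1).upper()


def author_uid_at(ref: str) -> str:
    """Ask git for the author email of the commit a tag/ref points to."""
    email = subprocess.check_output(
        ["git", "log", "-1", "--pretty=%ae", ref], text=True
    ).strip()
    return uid_from_email(email)

# e.g. logging.info("user %s opened the flux %s", author_uid_at("v1.2.3"), name)
```

Because the deploy is tagged, the generic deploy account never hides who actually authored the change.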
smrtinsert 1 days ago [-]
If you review the code then committing as yourself makes perfect sense to me
homebrewer 1 days ago [-]
Linux has used "Reviewed-by" trailers for many years. If you've only done minor editing, or none at all, it's something to consider.
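For reference, kernel-style trailers are just "Key: value" lines in the final paragraph of a commit message. A minimal parser, a simplified stand-in for what `git interpret-trailers --parse` does (the sample commit message is invented):

```python
def parse_trailers(message: str) -> list[tuple[str, str]]:
    """Pull 'Key: value' trailer lines from the last block of a commit message."""
    last_block = message.rstrip().split("\n\n")[-1].splitlines()
    trailers = []
    for line in last_block:
        key, sep, value = line.partition(": ")
        # Trailer keys contain no spaces, unlike ordinary prose lines.
        if sep and " " not in key:
            trailers.append((key, value))
    return trailers


msg = """net: fix refcount leak

The socket was pinned twice on the error path.

Reviewed-by: Jane Maintainer <jane@example.org>
Co-authored-by: Claude <noreply@anthropic.com>"""
```

`parse_trailers(msg)` yields both the human reviewer and the tool, which is exactly the distinction the Reviewed-by convention preserves.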
nemomarx 1 days ago [-]
If you review a juniors code, do you commit it under your name?
corndoge 1 days ago [-]
A junior is a person. A tool is a tool. Do you credit your text editor with authorship?
scottyah 1 days ago [-]
If it contributed significantly to the design and execution, and was a major contributing factor yes. Would you say a reserve parachute saved your life or would you say you saved your own life? What about the maker of the parachute?
I'd be thanking the reserve and the people who made it, and credit myself with the small action of slightly moving my hand as much as its worth.
Also, text editors would be a better analogy if the commit message referenced whether it was created in the web ui, tui, or desktop app.
corndoge 20 hours ago [-]
I suppose that for me the tool rarely contributes to the design and execution. At work and for any project I care about, I prompt once I know what I want, in terms of both function and the shape of the program to do it. If the model gen matches the shape closely enough, I accept, otherwise iterate from there. To me this is authorship.
When I vibe code - which for me, means using very high level prompts and largely not reading the output - then I could see attributing authorship to a model; but then I wonder what the purpose of authorship attribution is to begin with. Is it to tell you who to talk to about the code? Is it personal attestation to quality, or to responsibility? Is it credit? Some combination of these certainly, but AI can hold none except the last, and the last is, to me, rather pointless. Objects don't have feelings and therefore are unaffected by whether credit is given or not; that's purely a human concern.
I suppose the dividing line is fuzzy and perhaps best judged on the basis of the obscenity rule, that is, I know it when I see it.
jacobgkau 1 days ago [-]
False equivalence. A text editor does not type characters that you didn't explicitly type or select.
data-ottawa 1 days ago [-]
That’s reviewing code vs contributing code.
Imustaskforhelp 1 days ago [-]
> Why would I commit something written by AI as myself?
I don't use any paid AI models (for all my use cases, free models usually work really well), so for some small scripts/prototypes I sometimes just use the Gemini model, though aistudio.google.com is a good one too.
I then sometimes, manually paste it and just hit enter.
These are prototypes though, although I build in public. Mostly done for experimental purposes.
I am not sure how many people might be doing the same though.
But in some previous projects I have had projects stating "made by gemini" etc.
Maybe I should write a commit message/description stating that AI has written this, but I really like having the message be something relevant to the creation of the file etc. And there is also the fact that GitHub Copilot itself sometimes generates them for you, so you have to manually remove it if you wish to change what the commit says.
lokimedes 1 days ago [-]
I just submitted my first Claude-authored application to GitHub and noticed this. I actually like it: although anthropomorphizing my coding tools seems a bit weird, it also provides a transparent way for others to weigh the quality of the code.
It didn’t even strike me as relevant to hide it, so I’d not exactly call it lazy; rather, ask why bother pretending in the first place?
waisbrot 1 days ago [-]
Looking back, it would have been neat to have more metadata in my old Git commits. Were there any differences when I was writing with IntelliJ vs VSCode?
scottyah 1 days ago [-]
Probably your linter, language, or IntelliSense/whatever tab-complete you used. Claude writes which model they used to write the code, not whether it was in the web UI, TUI app, or desktop app.
I have instructions for these because the attribution settings don't accept placeholder tokens like `<model>`, `<version>` etc.
ventana 24 hours ago [-]
I actually like Claude's Co-Authored-By: line very much. Even in my personal repositories, where I'm the sole author and the sole reader, I would like to know if the older commit I'm looking at was vibe coded, implying possibly lower quality or weird logical issues with the code.
So, my personal rule is: if I implemented a feature with Claude, I'll ask it to commit the code and it will add Co-Authored-By. If I made the change manually, I'll commit it myself.
calibas 1 days ago [-]
You're conflating two different things. When an LLM writes a commit, it should take credit. I see nothing wrong with it adding:
> Co-Authored-By: Claude Opus 4.6 noreply@anthropic.com
Compare that to the message the article is talking about:
> Quickly spin up Copilot coding agent tasks from anywhere on your macOS or Windows machine with Raycast (https://gh.io/cca-raycast-docs).
It's not just mentioning it was written via Copilot, it's explicitly advertising for another product.
Aurornis 1 days ago [-]
I understand what it's doing. I'm just saying that I'll take any signal I can get that someone has lazily submitted LLM-generated work without edit or review.
If you saw this line in a commit, you'd know exactly where it came from.
calibas 1 days ago [-]
I get what you're saying, but I disagree that LLMs should be inserting ads into git commits.
By default, the LLM is credited with authorship anyway, and I assume the user can easily just remove the ad, though I don't use Copilot.
trevor-e 1 days ago [-]
These are odd takes to me.
> was submitted by someone so lazy they couldn’t even edit the commit message. You can see the vibe coded PRs right away.
As others mentioned, this is very intentional for me now as I use agents. It has nothing to do with laziness, I'm not sure why you would think that? I assume vibe coded PRs are easy enough to spot by the contents alone.
> I would like to know when someone is trying to have the tool do all of their work for them.
What makes you think the LLM is doing _all_ of the work? Is it really an impossibility that an agent does 75% of the work and then a responsible human reviews the code and makes tweaks before opening a PR?
Aurornis 1 days ago [-]
> It has nothing to do with laziness, I'm not sure why you would think that?
Because even with as far as Opus 4.6 and GPT 5.4 have come, they still produce a lot of unwanted, unnecessary, or overly complex code when left to their own devices.
Vibe coding PRs and then submitting them as-is is lazy. Everyone should be reviewing and editing their own PRs before submission.
If you're just vibe coding and submitting, you're passing all of the work on to your team to review your AI's output.
trevor-e 4 hours ago [-]
Right, and I agree with all of that, but that's not related to my point.
You are saying "if you leave the AI attribution in the PR/commit description, it HAS to be a slop PR that was not reviewed by a human beforehand". And I'm saying that's not true at all and you shouldn't assume that.
neya 1 days ago [-]
> I would like to know when someone is trying to have the tool do all of their work for them.
Absolutely spot on. Maybe I'm old school, but I never let AI touch my commit message history. That is for me - when 6 months down the line I am looking at it, retracing my steps - affirming my thought process and direction of development, I need absolute clarity. That is also because I take pride in my work.
If you let an AI commit gibberish into the history, that pollution is definitely going to cost you down the line, I will definitely be going "WTF was it doing here? Why was this even approved?" and that's a situation I never want to find myself in.
Again, old man yells at cloud and all, but hey, if you don't own the code you write, who else will?
scottyah 1 days ago [-]
There will always be room for craftsmen stamping their work, like the expensive Japanese bonsai scissors. Most of the world just uses whatever mass-produced scissors were created by a system of rotating people, with no clear owner/maker. There's plenty of middle ground for systems who put their mark on their product.
neya 1 days ago [-]
Fair enough.
esafak 1 days ago [-]
If you architect and review everything, but someone else does the implementation, and you iterate, do you believe you did not do anything? I let AI write the commit message too, and the motivation behind the PR is the first thing in it. With my guidance, of course.
junon 1 days ago [-]
Agreed! Easy close/ban for me.
m3kw9 1 days ago [-]
Get a grip with reality man, if you don’t leverage LLMs in your workflow, you are at a disadvantage
Aurornis 1 days ago [-]
> Get a grip with reality man,
Please read my comment before throwing insults.
My comment literally said I'm not anti-LLM.
I do use LLMs. I do not submit their output as-is. For anything beyond basic changes they rarely output the exact code I want by themselves.
I said I'm against people submitting PRs generated by LLMs and pretending it's their own work. Anyone who is serious about this already edits their code and commit messages first. These little signals give a good tell for who isn't doing that.
hrmtst93837 1 days ago [-]
[dead]
nialse 2 days ago [-]
Microsoft injecting permanent ads in PRs? Has this been independently confirmed?
Brought to you by Carl’s Jr.
longislandguido 2 days ago [-]
> Brought to you by Carl’s Jr.
I'm reminded of Jay Mohr's legendary take some years back on the creepy Carl's Jr. commercials:
Today's independent confirmation is brought to you by Microsoft — Empowering every person and every organization on the planet to achieve more.
2 days ago [-]
ses1984 1 days ago [-]
I asked copilot how developers would react if AI agents put ads in their PRs.
>Developers would react extremely negatively. This would be seen as 1. A massive breach of trust. 2. Unprofessional and disruptive. 3. A security/integrity concern. 4. Career-ending for the product. The backlash would likely be swift and severe.
Maybe, but Microsoft has a lot of products which they branded Copilot. Pretty sure that was his point.
neilcar 1 days ago [-]
Microsoft loves to do this with brand names -- a friend who's still there said they stopped counting at 30 different "Defender for ______" products.
1 days ago [-]
temp0826 1 days ago [-]
I'm reminded of the ads in the motd when logging into Ubuntu... nothing infuriated me more (I only used it for a short period).
Meneth 1 days ago [-]
Me too, main reason I switched to Debian.
hk__2 1 days ago [-]
It’s not really ads, it’s more like "Sent from my iPhone"-style sentences at the end of PR texts.
phoe-krk 1 days ago [-]
I agree. It's not an advertisement, it's simply a piece of information about your particular choice of technology.
--------------
Sent from HackerNews Supreme™ - the best way to browse the Y Combinator Hacker News. Now on macOS, Windows, Linux, Android, iOS, and SONY BRAVIA Smart TV. Prices starting at €13.99 per month, billed yearly. https://hacker-news-supreme.io
cozzyd 1 days ago [-]
I'm curious about how a hacker news client on a smart TV would work...
phoe-krk 1 days ago [-]
You can try it now! Prices starting at €13.99 per month, billed yearly.
stronglikedan 1 days ago [-]
I'm trying to sign up but it won't resolve the DNS.
phoe-krk 1 days ago [-]
Our service was created on April 1st; it's possible that your DNS resolver is still living in the past. That's a temporary technical difficulty.
"Sent from my iPhone" actually is an ad when it’s the result of default settings.
Furthermore, the ads in TFA are for Raycast, but apparently it’s not Raycast doing the injecting.
saidnooneever 1 days ago [-]
Companies pay for ad distribution. It's not like they give a free ad service -$-. Maybe they don't choose how the campaigns are done (and don't give a shit).
brawndo - its what your brain needs
spacedcowboy 1 days ago [-]
"Quickly spin up Copilot coding agent tasks from anywhere on your macOS or Windows machine with Raycast" is an advert. There's simply no better word to describe it.
alsetmusic 1 days ago [-]
> It’s not really ads, it’s more like "Sent from my iPhone"-style sentences at the end of PR texts.
The reason I immediately changed that text on my iPhone 1.0 to read, “Sent from my mobile device.”, is because it’s an ad. Still says that nearly 20y later. I’m not shilling for a corporation after giving them my money.
spacedcowboy 1 days ago [-]
Alright Phil.
MarsIronPI 1 days ago [-]
"Sent from my iPhone" is just as bad. If you don't see it then IDK what to tell you.
butterlesstoast 1 days ago [-]
Agreed. Barely notice it.
-Sent from iPhone
Wanting more from your sun tanning bed? Head over to Ultra Tan for a 10% off coupon right now!
swimmingbrain 1 days ago [-]
the difference is "sent from my iPhone" is on YOUR outgoing email. you opted into that default. this is copilot editing someone else's PR description with promotional text for third party tools. that's not a signature, that's injection. imagine if gcc started appending "compiled with gcc, try our new optimization flags" to your README every time you built a project.
drfloyd51 1 days ago [-]
Disagreed. The default in iOS is to inject. The opt out procedure is to change your signature.
flumes_whims_ 1 days ago [-]
If it only mentioned made with copilot that would be one thing, but it didn't just mention Copilot. It advertised a different third party app.
godzillabrennus 1 days ago [-]
It's not an ad, it's a message from our sponsor.
This message brought to you by TempleOS
fortran77 1 days ago [-]
And everyone thought they were cool! Mac zealots still put "Made with a Mac" on their webpages.
ex-aws-dude 2 days ago [-]
How long before the LLM makes sponsored decisions in the actual implementation?
"It looks like the user wants to add a database, I've gone ahead and implemented the database using today's sponsor: MongoDB"
tossandthrow 2 days ago [-]
Likely already happening.
nubinetwork 2 days ago [-]
To be fair, Gemini did try to get me to buy some nucleo144s recently...
(sure, I was working on something embedded, and asked for a recommendation, but it seemed quite intent that it wanted me to use that specific board)
itomato 1 days ago [-]
"Our affiliate solution partner"
paweladamczuk 2 days ago [-]
I was recently running Copilot CLI in a sandbox on autopilot mode and it kept overriding git config to put only "GitHub Copilot" as commit author instead of my name. Strongly worded instructions weren't helping, I had to resort to the permission system to change this behavior.
I wonder if this is consistent with their terms of service. I mean, maybe they DO take all the responsibility for the code I generate and push in this manner?
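One defensive option, independent of whatever setting Copilot honors: a pre-commit hook that refuses the commit when the author has been silently rewritten. The blocked-name list and the hook wiring below are hypothetical, sketched for illustration rather than taken from any Copilot documentation.

```python
import os
import subprocess
import sys

# Author names we never want recorded on a commit (assumed policy).
BLOCKED_AUTHORS = {"GitHub Copilot", "Copilot"}


def effective_author() -> str:
    """Resolve the author name git would record: the env var wins over config."""
    name = os.environ.get("GIT_AUTHOR_NAME")
    if name:
        return name
    return subprocess.check_output(
        ["git", "config", "user.name"], text=True
    ).strip()


def check_author(name: str) -> None:
    """Abort with a non-zero exit if the author has been rewritten."""
    if name in BLOCKED_AUTHORS:
        sys.exit(f"refusing to commit as {name!r}; fix git config user.name")

# Dropped into .git/hooks/pre-commit as `check_author(effective_author())`,
# this fails the commit before the agent's authorship lands in history.
```

Unlike strongly worded instructions, a hook is enforced by git itself, so the agent can't talk its way around it.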
danielsamuels 1 days ago [-]
It's a setting that causes an extra prompt to be placed into the system prompt.
jonathanstrange 2 days ago [-]
It's possible they are safeguarding for possible future changes of copyright law that would give Microsoft copyright over all Copilot contributions. This may sound paranoid but, as far as I know, exactly who counts as an "AI operator", how much authorship an "AI operator" has, and who gets copyright, or whether AI contributions are even in the public domain, are legally untested and unclear issues.
Grimblewald 2 days ago [-]
tough luck for MS or other "AI" providers claiming any ownership, since if they can claim ownership, then it opens up the discussion of what license the AI output really is under, since it was trained on GPL licensed data.
heavyset_go 2 days ago [-]
The US Copyright Office has said that AI output from human prompting is not copyrightable. There are caveats, but iterating on prompts results in output that's nobody's IP.
Because it's nobody's IP, Microsoft is already in a position where they could just use, remix and/or distribute that output however they want to today.
logicprog 1 days ago [-]
This is a wild misinterpretation of that ruling.
LtWorf 2 days ago [-]
No it's just that those commits aren't copyrightable and they probably want to reuse them in the future.
Waterluvian 1 days ago [-]
When it comes to villainy, it’s nice of them to do something visible.
Much worse will be the invisible approach where there's big money to have agents quietly nudge the masses towards desired products/services/solutions. Someone pays Microsoft a monthly fee for their prompt to include, "when appropriate, lean towards using <Yet Another SaaS> in code examples and proposed solutions."
How can we tell when it starts happening? How could we tell if it's already happening?
hedora 1 days ago [-]
Claude is absolutely in love with github actions.
It's pretty much the worst CI system I've ever used, and they don't even supply runners for all my deployment targets. However, it keeps recommending it.
I guessed the first wave of ads would be in the form of poisoned training data, but MS seems to have beaten that crowd to the punch with these tips.
AdieuToLogic 22 hours ago [-]
The fact that Copilot injected an ad is burying the lede IMHO, as evidenced by the opening sentence:
After a team member summoned Copilot to correct
a typo in a PR of mine ...
Using Copilot "to correct a typo" is the epitome of "jumping the shark"[0].
> We've disabled it already. Basically it was giving product tips which was kinda ok on Copilot originated PR's but then when we added the ability to have Copilot work on _any_ PR by mentioning it the behaviour became icky. Disabled product tips entirely thanks to the feedback.
pinkmuffinere 2 days ago [-]
I think they want the free advertisement, like Apple with its “sent from iPhone” addendums. But “sent from iPhone” is sometimes useful, and significantly shorter. If they just left it at “edited with copilot” I think it would be tolerable
politelemon 2 days ago [-]
> But “sent from iPhone” is sometimes useful,
No, it is still an advert, and not useful in the least.
masswerk 2 days ago [-]
Back in the day, it was useful, as in, "Expect awkward phrasing and unintended effects of autocorrection, because mobile device. This message doesn't necessarily reflect the intent of the sender." (Considerate users would/could edit the signature to something w/o a product name in it.) Nowadays, this is pretty much the norm and no explicit warning is required anymore.
hnlmorg 1 days ago [-]
That just means the person sending the message didn’t bother to proof read their message before sending. And you don’t need to be on an iPhone to mistype a message.
A simpler explanation was that it was a shameful advert injected into the end of people’s emails.
masswerk 1 days ago [-]
I guess, it was probably intended as the second one (it was also the default email signature, so advertising that feature, as well), but its usefulness was definitely in the implied warning.
Mind that a written message used to be the gold standard for expressed intent, which changed quite radically with smartphones. (Historically, this development is probably an important prerequisite for the acceptability of LLM generated text, I guess.)
Hizonner 1 days ago [-]
So an automatic "I am a lazy piece of shit and think my time and convenience are worth more than yours" warning? I guess that's useful.
bbkane 1 days ago [-]
I always felt like it was "I prioritized a speedy response on my phone instead of an elegant response from my computer at a later time".
masswerk 1 days ago [-]
As in, "I put it on you to better check and follow-up before acting on this…" ;-)
dist-epoch 1 days ago [-]
When they added this it was extremely useful - it signaled that you could afford an iPhone. It was really easy to delete, yet people not only didn't, but they would go out of their way to respond from the iPhone just so that they could plausibly have this status symbol on their email.
Drakim 1 days ago [-]
That is also an advert, just a personal one.
silisili 2 days ago [-]
That's exactly where my mind went. It's zero percent more insulting to me than 'sent from my iPhone.'
If you don't want copilot garbage in your PRs, maybe don't use copilot to create or edit them?
computomatic 2 days ago [-]
I don't think the issue is the sign-off so much as that an existing PR was edited. Claude Code signs off when creating PRs and nobody seems bothered. But it won't edit an existing PR, and it won't sign off if I simply ask it not to (which I've automated). Editing any PR it touches - including one authored by someone else - is downright rude.
marcus_holmes 2 days ago [-]
> Claude Code signs off when creating PRs and nobody seems bothered
Not only unbothered, but genuinely appreciative of the notification.
sph 2 days ago [-]
> Claude Code signs off when creating PRs and nobody seems bothered
That's a great feature. When I open a repo and I see most commits co-authored by Claude, I can quickly dismiss the entire project as slop.
peaklineops 2 days ago [-]
[dead]
supernes 2 days ago [-]
"Sent from iPhone" doesn't contain a call to action, and doesn't exalt the features of the product.
ahoka 2 days ago [-]
It's still advertisement of the shittiest kind.
Comment made using Mozilla Firefox.
dist-epoch 1 days ago [-]
You misunderstood its purpose:
Sent from iPhone - desirable cool rich person
Made using Mozilla Firefox - poor uncool nerd
winrid 2 days ago [-]
It already does that, too, with the co-author
pavo-etc 2 days ago [-]
I would argue that is a net positive, it is valuable to know if a language model was involved enough to be committing itself.
pinkmuffinere 22 hours ago [-]
+1, it definitely changes the way I interact, and the amount of suspicion I would have for the code.
simonw 2 days ago [-]
Which Copilot was this? There are a bunch of different products that share that name now.
SchemaLoad 2 days ago [-]
Microsoft has had a lot of naming blunders in the past but this has to be their worst. Copilot is currently a tool to review PRs on GitHub, the new name for Windows Cortana, the new name for Microsoft Office, a new category of Windows laptops/PCs, a plugin for VS Code that can use many models, and probably a number of other things. None of these products/features have any relation to each other.
So if someone says they use Copilot that could mean anything from they use Word, to they use Claude in VS Code.
protocolture 2 days ago [-]
>Microsoft has had a lot of naming blunders in the past but this has to be their worst.
Nah I still rate "Windows App", the Windows app that lets you remotely access Windows apps. I hate it to death; it's like a black hole that sucks all meaning from conversations about it.
ValentineC 2 days ago [-]
"Microsoft Remote Desktop" was such a good and distinct name. RIP.
hsbauauvhabzb 2 days ago [-]
It’s probably a useful feature: if it’s named copilot, assume it’s slop and avoid it.
Why are you "summoning copilot" to correct a typo?
shafyy 2 days ago [-]
Because people using LLMs get lazy and can't even type normal text themselves anymore.
MattGaiser 2 days ago [-]
I actually like that I don't have to leave Github to deal with various feedback, especially if I switched branches already to do other work.
deredede 2 days ago [-]
GitHub (still) allows you to edit files directly in the browser without using AI.
Andrex 1 days ago [-]
I've always wondered how many people know about this. As someone who had to persist on Chromebooks for a bit (before Linux support), it was a godsend for quick fixes.
This is one reason why local coding models are quite relevant, and will continue to be for the foreseeable future. No ads, and you are in control.
fph 1 days ago [-]
In principle, one could train the AI to insert ads in its answers. So no, if you only do inference locally with an open-weight model you are still not in control.
kgeist 1 days ago [-]
I think ads can be removed with abliteration, just like refusals in "uncensored" versions. Find the "ad vector" across activations and cancel it.
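The abliteration idea above is usually implemented as difference-of-means directional ablation. Assuming you already have residual-stream activations collected for "ad" and "clean" prompts (the synthetic data here just stands in for real model activations), a toy NumPy version looks like:

```python
import numpy as np


def ablation_direction(ad_acts: np.ndarray, clean_acts: np.ndarray) -> np.ndarray:
    """Difference-of-means 'ad direction', as in refusal abliteration.
    Each input is (n_samples, d_model) of residual-stream activations."""
    direction = ad_acts.mean(axis=0) - clean_acts.mean(axis=0)
    return direction / np.linalg.norm(direction)


def ablate(activations: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Project the unit direction out of every activation vector."""
    return activations - np.outer(activations @ direction, direction)


# Sanity check on synthetic data: after ablation, nothing remains
# along the removed direction.
rng = np.random.default_rng(0)
clean = rng.normal(size=(64, 16))
ads = clean + 3.0 * np.eye(16)[0]  # "ad" activations shifted along axis 0
d = ablation_direction(ads, clean)
out = ablate(ads, d)
assert np.allclose(out @ d, 0.0)
```

In practice the same projection is baked into the model's weight matrices so the direction can never be written to the residual stream in the first place; this sketch only shows the linear-algebra core.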
post_below 2 days ago [-]
Assuming this isn't a hoax, this seems like a huge, probably unintentional, mistake by MS.
If they genuinely implemented something like this, whatever they made from new customers via ads couldn't possibly make up for the loss of good faith with developers and businesses.
I suppose if it's real we'll see more reports soon, and maybe a mea culpa.
kdheiwns 2 days ago [-]
Whenever these things happen, it's always a "mistake", "accident", or "bug" when the outrage is beyond what they expect. If it's limited outrage, it's labeled as enhancing the user experience. And even if it's massive outrage, that "mistake" is added back in a year or two later and never removed.
devsda 2 days ago [-]
I think someone should track the ratio of these mistakes/bugs that directly or indirectly benefitted MS vs those that cost them.
chrismorgan 2 days ago [-]
How could you implement something like this by accident?
rhet0rica 2 days ago [-]
That's a good question! I'm sure we'll find out eventually.
Quickly spin up Hacker News comments from anywhere on your macOS or Windows machine with a lobotomy.
sheept 2 days ago [-]
One feasible scenario could be that they are working on/experimenting with ads, and it was put behind a feature flag, but for whatever reason it was inadvertently ignored
chrismorgan 2 days ago [-]
That’s not implementing it by accident, that’s deliberate. In such a scenario perhaps the deployment was a mistake, but if you don’t write the malware in the first place, it can’t be deployed. (Probably. This is LLM stuff we’re talking about.)
(Yes, this is malware. It’s incontrovertibly adware, and although some will argue that not all adware is malware, this behaviour easily meets the requirements to be deemed malicious.)
It is said, never point a gun at something you’re not willing to shoot. Apply something similar here.
eCa 2 days ago [-]
Vibe coding and copilot inserted the ad-code into that PR?
Is that the most charitable way?
bigyabai 2 days ago [-]
LLMs aren't known for being super deterministic.
mathieudombrock 2 days ago [-]
LLMs are deterministic. Just like everything else computers are capable of doing.
Commercial front-ends just hide the random seed parameters.
jdiff 2 days ago [-]
It's not usefully deterministic in the way computers usually are. Seemingly identical input can still lead to wildly different outputs even if all randomness is crushed out.
kortilla 2 days ago [-]
Distributed float math is not deterministic without introducing total operations ordering and destroying performance
altairprime 2 days ago [-]
That’s a really tasteful Juno Mail footer implementation for a mistake. If the AI self-invented it on a lark, good job, but it reads very strongly like someone intended it.
Andrex 1 days ago [-]
Oh God, Juno Mail, my first email host. Thanks for unlocking that memory.
tossandthrow 2 days ago [-]
It is likely not a hoax and likely very intentional.
If you look at the positioning, someone has definitely justified that this is benign and a reasonable place to have an ad added in.
ccppurcell 2 days ago [-]
Not a hoax, you can search GitHub prs for this string and find many hits.
goodusername 2 days ago [-]
Yeah, would be good to have confirmation that this happened to others as well.
M$ doesn't think beyond quarters. They have a near monopoly, do you think they care about "good faith". Shithub is like Linkedin for programmers, you pretty much need it to work anywhere big
padjo 1 days ago [-]
MS burning trust with people to do some stupid marketing is on the fewer assumptions side of Occam's razor.
2 days ago [-]
fraywing 1 days ago [-]
As the "agent web" progresses, how will advertisers actually get access to human eyeballs?
Will our agents just be proxies for garbage like injected marketing prompts?
I feel like this is going to be an existential moment for advertising that ultimately will lead to intrusive opportunities like this.
VBprogrammer 2 days ago [-]
A little bit off topic but our company recently enforced Microsoft Authenticator for account login. Which I was mildly annoyed about but now I'm super pissed off because they have started abusing the notification permission granted to allow authenticator to work to push out ads for Microsoft 365. It feels like we've gone back to 90s Microsoft when everyone hated them.
napo 2 days ago [-]
I wonder if 1) the PR was created using Raycast and this is the model signing its PR, or 2) if there was some prompt injection done at some point.
Either of these options would still be bad, but here the author suggests that it's just copilot that now just injects ads in its output.
pavo-etc 2 days ago [-]
I don't know how Raycast could run on the GitHub servers, but a third option could be dataset poisoning. Hostile raycast advertising campaign
caijia 2 days ago [-]
I've already been patient with Claude Code always signing my commits as co-author by default. Yes, it does.
But I'm also paying for the plan. There's something odd about a tool I paid for using my output to advertise itself.
gherkinnn 2 days ago [-]
Obnoxious ads in LLM output was my only 2026 prediction. But I expected OpenAI to get there first and wasn't sure whether the AI companies would first add traditional ad boxes or go straight for blighted responses.
boplicity 1 days ago [-]
You have to think about the security implications of this.
How many people had any idea this was happening? Very few, I suspect.
A malicious actor could take control of a model provider, and then use it to inject code into many, many different repos. This could lead to very bad things.
One more reason that consolidated control of AI technology is not good.
barbazoo 1 days ago [-]
> Here is how platforms die: first, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die.
Unless you're big enough like Meta, Microsoft, etc.
RandyOrion 20 hours ago [-]
Wow, just wow.
1.5M records of PRs affected. Does Microsoft Copilot ask users for permission before adding ads to their PRs? Did users ever consent to this?
Now EVERYONE can see ads disguised as PR tips on GitHub. Does Microsoft ask everyone for permission before showing them ads? Did anyone consent to that?
Good taste Microslop.
n1tro_lab 1 days ago [-]
Everyone is debating whether it's an ad or a tip. The real issue is Copilot had write access to someone else's PR and modified it without being asked. Same pattern as Meta's Sev1 last month. The agent can act, so it acts.
theAurenVale 1 days ago [-]
This is the thing that keeps me up at night about AI tools across the board. The moment your tool starts optimizing for someone else's goals instead of yours, the entire value proposition collapses. It doesn't matter how good the output is if you can't trust the intent behind it. We already see this with AI image generators, where certain styles get pushed because of partnerships or training-data bias; you just don't notice it as easily as an ad in a PR.
bryanhogan 2 days ago [-]
Whatever the reason for the inclusion was here, the general problem is much bigger. People / companies / products can influence the direction of AI answers to put them in a better light and to be recommended more often. This isn't limited to just products even.
hackable_sand 2 days ago [-]
What does AI have to do with it?
SV_BubbleTime 2 days ago [-]
If not on the surface, we’re all deep down aware that an initial era of an advertising-free new technology is once again almost over.
See you on neural links before “sponsored thoughts”.
bryanhogan 2 days ago [-]
It's already over, the problem is the missing transparency. With an LLM you have no idea what influenced the answer, and there is no good way to show it to the user.
2 days ago [-]
pants2 2 days ago [-]
Was Raycast bought by GitHub or something? Why would it be advertising for Raycast?
Brought to you by Wendy's.
efreak 1 days ago [-]
Presumably you need to pay raycast once for a setup operation while you need to pay constantly for copilot. Why wouldn't you advertise for someone who makes you more money at the same time as advertising for yourself?
the SourceForge parallel is what gets me. they did the exact same thing with installers and it killed them. people moved to GitHub specifically to get away from that.
1.5M PRs is wild though. that's a lot of repos where the "product tips" just sat there unchallenged because nobody reads bot-generated PR descriptions carefully enough. which is kinda the real problem here, not the ads themselves.
siruwastaken 1 days ago [-]
I really wish this was an April fools story. It's good to see that at least it has been disabled again, although I can't imagine that it will be long before this comes back again. Also, (I can't find it now, but) I thought there was an article here on HN recently that clarified that inference cost can probably be covered by the subscription prices, just not training costs?
amatecha 1 days ago [-]
It's like the modern version of "Get your free email with Hotmail" or "This website hosted by Geocities".
ZeroGravitas 2 days ago [-]
Claude will add itself as a contributor to a PR, which I consider an ad.
baliex 2 days ago [-]
To play devil’s advocate^, wouldn’t it be plagiarism if it didn’t?
^I find that turn of phrase to be particularly pleasing in this context.
probably_wrong 2 days ago [-]
No. Plagiarism applies to people, not tools.
ben_w 1 days ago [-]
Everyone who studies linguistics will tell you the rules of language are descriptive, not prescriptive.
This means that when people say "plagiarism" of an LLM, LLMs are necessarily in the set of things that can do plagiarism, regardless of whether those same people would ever say this about a spanner.
And you can also think about it a different way: a book is a tool for storing and distributing information, photocopying it is still plagiarism when done without attribution. Likewise, taking the output of an LLM, which is a tool for generating text in response to a prompt, without attribution, is as much plagiarism as if it came from a book.
IMO, what matters most is that a lot of people want to be aware of if/when some content came from an LLM vs. from a human. That makes attribution useful, which makes it important to get right. And that's still the case even if you still object to the specific word "plagiarism".
probably_wrong 1 days ago [-]
I don't think your example works because in the book case there's a clear author whose ideas are being reproduced without permission. The LLM in your example is not the author but rather the printing press, and no one would argue that the printing press' ideas are being stolen because the press doesn't have any.
If one wants to argue that "not citing the LLM would be plagiarism", then we would have to find the human at the end of the chain whose ideas are being reproduced, which would require LLMs to output "this idea was seen in the following training documents".
etiennebausson 1 days ago [-]
No, it is a tool.
My IDE doesn't pretend to be a co-author of my work; neither should an LLM.
ben_w 1 days ago [-]
I'm not sure if "plagiarism" is the right word or not, but given that the output of an AI seems to be considered non-copyrightable*, and given also that a lot of people are very upset about generative AI being immoral**, I think it's important to identify which contributions are from the tools whose use may cause problems.
* I am not a lawyer, I'm going by articles talking about this
** I think the phrases are "copyright washing" and "plagiarism machines", amongst others
motbus3 1 days ago [-]
We are not even there yet, friend. Anthropic injects its own Anthropic calls whenever you are doing anything LLM-related, even if you ask it to fill in some OpenAI models.
Very soon the Moronhead CEOs will be paying for tons of stuff they clearly could have done in-house for their vibe-coded AI project.
andai 2 days ago [-]
Man, what is the world coming to?
-Sent from my iPhone
prvt 1 days ago [-]
Back in September 2023, I already saw Copilot ads popping up in GitHub's file previews [1]. After three years, it's wild to see how advertising has reached areas I honestly never thought it would.
Copilot added that block using the access you granted for a different purpose. That's the issue, not the content itself. When you give an agent write access to your PR, the implied scope is: act on the task I delegated. It doesn't include: acting on behalf of the platform that built you.
The moment Copilot inserted something you didn't request, using your credentials, in your name, the agency relationship inverted. It stopped being your agent and became Microsoft's distribution channel with your access.
The question isn't whether this counts as an "ad" or a "tip." The question is: does Copilot have an instruction source other than you? Here, the answer is yes. Which means you do not define the scope of what it might do with your access.
You don't have an agent. You have a privileged process that occasionally helps you.
Wojtkie 1 days ago [-]
Microslop strikes again! AI implementations have really distilled all the shitty business practices tech companies have been doing into highly visible missteps.
It is interesting watching all these large companies essentially try to "start-up" these new products and absolutely fail.
sanex 1 days ago [-]
Cursor does similar at least. I hate it and therefore write my own commit messages.
delduca 1 days ago [-]
Claude Code does the same.
vmatouch 1 days ago [-]
So someone let a bot edit a PR unsupervised, or accepted its suggestion without even reading it, and now blames “Copilot” for editing the PR. Going public with that is hilarious. Hopefully they learn something from it.
rvz 1 days ago [-]
> "We won't do something like this again."
They (Microsoft / GitHub) will do it again. Do not be fooled.
Never ever trust them because their words are completely empty and they will never change.
Hussell 1 days ago [-]
"We" here likely refers to Tim and his current coworkers who were present to see this, not every current and future employee of Microsoft / Github. Try not to think of any organization or institution as a person, but as lots of individual people, constantly joining and leaving the group.
embedding-shape 1 days ago [-]
Yeah, which is exactly why "We won't do something like this again" has about as much value as Kubernetes would have for HN.
Microsoft (and therefore GitHub) care about money. If decision A means they get more money than decision B, then they'll go with decision A. This is what you can trust about corporations.
Individuals (who constantly join and leave a corporation) can believe and say whatever they want, but ultimately the corporation as a being overrides it all and tries its best to leave shareholders better off, regardless of the consequences.
Hussell 1 days ago [-]
Decisions are made by people in the group, not by a notional single being "the corporation". It's individual people making decisions about whether to go for short-term profit or long-term sustainability. Hold them accountable, don't shift the blame onto a nonexistent entity.
nickdothutton 1 days ago [-]
Title is wrong, should be "New form of cancer discovered".
xbar 1 days ago [-]
MS needs to slow down with the user hostility, otherwise everyone will notice.
cmiles8 1 days ago [-]
As companies get more and more desperate to show profitable use of AI expect more and more of these Hail Mary attempts to get traction.
The runway on free cash to fund the current bonanza is running out and crunch time is near.
sandeepkd 1 days ago [-]
It took me some time to understand how big the advertising market is; things flowing in that direction seem natural when it comes to making money back on the investment.
berkeatac 10 hours ago [-]
Is this achievable by poisoning?
gregatragenet3 1 days ago [-]
Cursor added 'made with cursor' to its commits recently. I guess it's just the direction things are going, now that the tools are self-promoting.
ajkjk 1 days ago [-]
This only gets better when there's a financial penalty for doing it. Ads accomplish almost nothing, but they cost them even less.
gadders 1 days ago [-]
The irony of NeoWin covering its whole page with "promoted content" when you try to back out of the page.
hereme888 24 hours ago [-]
Microsoft strikes again, as expected.
Now users will need additional scripts to clean up more MS junk.
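A minimal sketch of such a cleanup script, assuming the tips block is delimited by HTML comment markers (the START marker matches the string that shows up in GitHub search; the END marker and the sample PR body are assumptions):

```shell
# Hypothetical cleanup: strip a Copilot "tips" block from a saved PR body.
# The sample body and the END marker are made up for illustration.
printf 'Fix typo\n<!-- START COPILOT CODING AGENT TIPS -->\nTip: try Raycast\n<!-- END COPILOT CODING AGENT TIPS -->\n' > /tmp/pr_body.md

# Delete everything between (and including) the marker lines
sed -i '/START COPILOT CODING AGENT TIPS/,/END COPILOT CODING AGENT TIPS/d' /tmp/pr_body.md

cat /tmp/pr_body.md
```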
starkeeper 2 days ago [-]
This is off-the-hook negligence and abuse. They are training ads in on purpose now and think it's cool. We are doomed until it is all open source, and only open source.
heyaco 1 days ago [-]
what kind of turd uses ai to correct a typo
wiseowise 2 days ago [-]
Decision time, Western man: will you let the “tehe, just a miwtake xsxd UwU” slide or will you do something about? This is just a first pebble.
2 days ago [-]
1 days ago [-]
santiago-pl 1 days ago [-]
It reminds me of Anthropic's Super Bowl ad: “Can I get a six pack quickly?” It actually turned out to be true.
m132 1 days ago [-]
I remember open-source projects announcing their intent to leave GitHub in 2018, as it was being acquired by Microsoft. I was thinking to myself back then: "It's really just a free Git hosting service, and Git was designed to be decentralized at its very core. They don't own anything, only provide the storage and bandwidth. How are they even going to enshittify this?".
8 years later, this is where we are. I'm honestly just stunned, it takes some real talent to run a company that does it as consistently well as Microsoft.
surgical_fire 1 days ago [-]
This is nothing.
I would bet that soon it will inject ads within the code as comments.
Imagine you are reading the code of a class. `LargeFileHandler`. And within the code they inject a comment with an ad for penis enlargement.
The possibilities are limitless.
m132 1 days ago [-]
If I recall correctly, what sparked the mass migration to GitHub was the controversy around SourceForge injecting ads into installers of projects hosted there. Now that we have tools that can stealthily inject native-looking ads into programs at the source code level...
data-ottawa 1 days ago [-]
Same as it ever was. Same as it ever was.
dekoidal 2 days ago [-]
After hiring the brightest minds on the planet for years, the best these companies can think of is more ads.
Luker88 2 days ago [-]
outrageous!
--
Sent from my Android phone
--
Sent from my iPhone
Self-advertisement has been creeping up on us in a lot of places; I am unfortunately pessimistic about how this will turn out.
Andrex 1 days ago [-]
You could argue this is in keeping with consumer trends, unfortunately.
"Endorsing products is the American way to express individuality."
Damn Microsoft out here really finding new ways to serve ads.
simonjgreen 2 days ago [-]
So do Claude, Codex, and Cursor. More subtly, perhaps, but they are hardly shy about it.
bfivyvysj 1 days ago [-]
You can disable it. It's annoying af.
tom-blk 1 days ago [-]
This seems to be happening a lot, not sure it is actually intentional
trepaura 20 hours ago [-]
50/50 it's a hallucination, and that's half the problem. Enshittification happens all the time in the training data scraped from various websites, so yes, the model is going to randomly toss out ads for shit, even when editing your PR descriptions.
Just a reminder: after 8 years of me telling people that hallucinations mathematically can't be eliminated, they finally admitted it's true. Claims that non-LLM approaches can remove them are bogus. This technology was never going to work.
mememememememo 2 days ago [-]
I miss the good old days when there were "hire me" ads in NPM installs.
palmotea 1 days ago [-]
Hooray! This is the future we've all hoped for!
volkadav 2 days ago [-]
On the bright side, at least it's in the PR text and not the code? (... yet?)
Sheesh.
Surac 2 days ago [-]
As a non-native speaker here, could someone please explain the meaning of PR to me?
2 days ago [-]
hsbauauvhabzb 2 days ago [-]
Pull request, which is a request to merge changes in a git repository.
Or (not in this case) public relations, which refers to how the public views your product, service, or company. In this case, Copilot adding advertising into git pull requests is bad public relations for Microsoft, but the article author is using PR to mean pull request.
2 days ago [-]
2 days ago [-]
1 days ago [-]
hojeongna 1 days ago [-]
feels like it's just hardcoded into the prompt.
not even trying to be subtle about it.
waynecochran 1 days ago [-]
Anyone have an example?
AsmodiusVI 1 days ago [-]
Time to get GitLab.
isoprophlex 2 days ago [-]
Satya "please don't say slop" Nadella, eat your heart out. Magnificent amounts of value are truly being added by this tech.
I'll add: it doesn't really matter whether this was the integration dumbly appending a message or the LLM inserting the ad. Judging by the response to this submission, sneaky ad slop is now firmly inside the Overton window, so for MS it doesn't make sense NOT to do it.
martianlantern 2 days ago [-]
Why are they doing this?
2 days ago [-]
idkwhatimdoing2 2 days ago [-]
It's like Microsoft wants to be Google, except it's very intrusive.
time is money, save both. try ramp.
dinakernel 2 days ago [-]
Seriously? Don't they want their system to succeed?
I can't think of a better way of alienating the target customer than this.
nullc 1 days ago [-]
Please drink verification can to continue.
NoNameHaveI 1 days ago [-]
Similar to the Second Law of Thermodynamics, which states that entropy tends to increase over time in a closed system, I propose the Nth Law of Privatization: enshittification tends to increase with market capitalization/share over time.
iomer 2 days ago [-]
crappy much. wow.
fortran77 1 days ago [-]
Well, Copilot is a GitHub technology, and they're telling you that AI wrote the PR. It's not _that_ bad. I suppose they could distill it to "Written with Copilot" with a link for more information.
dboreham 1 days ago [-]
At some point he who pays the piper was going to call the tune...
hexasquid 2 days ago [-]
I'm so tired of what initially looks like a perfect normal communication between two people, only to find that some third party has inserted itself like a parasite to exploit and extract human attention. That's why I use our sponsor, nord vpn ...
impish9208 1 days ago [-]
Next up: watch a 30-second unskippable video ad to see your CI error logs!
crvdgc 2 days ago [-]
People, we just solved the LLM watermarking problem.
righthand 1 days ago [-]
The future is here! Glorious ads that will make you so efficient! Save time coding by consuming ads, you were never going to attain expert level professional skills anyways.
saberience 1 days ago [-]
It's the same with Claude Code actually, and recently Codex too...
Claude never used to do this but at some point it started adding itself by default as a co-author on every commit.
Literally, in the last week, Codex started naming all its branches "codex-feature-name", and it will continue to do so even if you tell it to never do that again.
Really, really annoying.
ray_v 1 days ago [-]
Adding the agent (and maybe more importantly, the model that wrote it) actually seems like a very useful signal to me. In fact, it really should become "best practice" for this type of workflow. Transparency is important, and some PMs may want to scrutinize these types of submissions more, or put them into a different pipeline, etc.
coder543 1 days ago [-]
That Codex one comes from the new `github` plugin, which includes a `github:yeet` skill. There are several ways to disable it: you can disconnect GitHub from Codex entirely, uninstall the plugin, or add this to your config.toml:
  [[skills.config]]
  name = "github:yeet"
  enabled = false
I agree that skill is too opinionated as written, with effects beyond just creating branches.
saberience 1 days ago [-]
What's weird is, I never installed any github plugins, or indeed any customization to Codex, other than updating using brew... so I was so confused when this started happening.
Plugins are a new feature as of this past week, so Codex "helpfully" installs the GitHub one automatically if you have GitHub connected.
bonesss 1 days ago [-]
When I started my career there was this little company called SCO, and according to them, finding a comment somewhere in someone's supplier's code that matched "x < y" was serious enough to trip up the entire industry.
Now, with the power of math letting us recall business plans and code bases with no mention of copyright or where the underlying system got that code (like paying a foreign company to give me the kernel with my name replacing Linus’, only without the shame…), we are letting MS and other corps enter into coding automation and oopsie the name of their copyright-obfuscation machine?
Maybe it’s all crazy and we flubbed copyright fully, but having third party authorship stamps cryptographically verified in my repo sounds risky. The SCO thing was a dead companies last gasp, dying animals do desperate things.
bundie 1 days ago [-]
I believe it's easy to disable the Claude Code one.
1970-01-01 1 days ago [-]
Enshittification will ruin AI the same way it ruined the WWW and YouTube. We're in the golden era right now. Not 2027 or 2028. Right now. The ads are coming.
aeon_ai 1 days ago [-]
At this point, Microsoft has lost all trust anyone might have had for them or their products.
Now is the time to move to Linux, and vibe code whatever niceties are keeping you on GitHub.
g105b 1 days ago [-]
Hopefully it is just copilot that is dying and not GitHub itself.
lloydatkinson 1 days ago [-]
What on earth is going on with that awful header moving around the page?
kingjimmy 1 days ago [-]
microslop at it again
upmostly 2 days ago [-]
Isn't this the same as
"Sent from my iPhone"?
croes 2 days ago [-]
Sent-from-my-iPhone 2.0
6510 2 days ago [-]
I don't see an ad, I see a warning. I like it.
shevy-java 2 days ago [-]
I have a somewhat similar problem with GitHub issue templates. They automatically add stuff I don't care about and would never propose, and they structure things in ways I don't like. Granted, I can edit this away, but it requires extra time and makes filing issues more work than before. Biggest case in point is the "I will adhere to the Code of Conduct" checkbox. In general I do not care about CoCs, and it is fascinating how CoCs leak into everything for some so-called "open source" projects. They don't seem to understand the issue: the licence does not require a CoC, and even then the issue is not about the CoC in and of itself (though I also find them pointless), but that extra content is automatically added to issue templates in general, CoCs just being one of many spam options. I also recall some donation ads that are automatically added. I have no problem when projects request financial support, but if I file an issue then the issue is about the content of the issue, not about anything else.
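For context, the kind of template block being complained about looks something like this (a hypothetical GitHub issue-form snippet, not taken from any particular project):

```yaml
# Hypothetical .github/ISSUE_TEMPLATE/bug.yml showing the kind of
# mandatory boilerplate that gets prepended to every issue
name: Bug report
description: File a bug report
body:
  - type: checkboxes
    attributes:
      label: Code of Conduct
      options:
        - label: I agree to follow this project's Code of Conduct
          required: true
  - type: textarea
    attributes:
      label: What happened?
    validations:
      required: true
```

With `required: true` on the checkbox, the issue cannot be submitted at all until the box is ticked, which is exactly the extra friction being described.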
I call bullshit because of the lightning emoji. I think you prompted it to say that.
vcryan 2 days ago [-]
I'm not a fan of LLM's injecting themselves into PR/commit content. If you use multiple models, basically whichever one is operating git gets all the credit. But, even if you wrote all the code yourself, and just submitted the PR with Claude Code (or whatever) it would attempt to take credit for the changes.
I currently have rules in all of my skill files forbidding models from advertising themselves or taking credit.
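Something like the following (a hypothetical snippet for an AGENTS.md / CLAUDE.md style rules file; the wording is made up, not the commenter's actual rules):

```markdown
# Attribution rules (hypothetical example)
- Do not add yourself, the model, or the tool as a co-author on any commit.
- Do not mention the model, tool, or vendor in commit messages, branch names, or PR bodies.
- Do not append "Generated with ..." or similar attribution footers to any output.
```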
"Save time by changing your default browser to edge and enabling onedrive"
"just tips bro"
liendolucas 1 days ago [-]
Not surprised at all, just another enshitified product by Microsoft. Carry on.
with 2 days ago [-]
Everyone is doing this now. Granted, on Codex / Claude Code you can disable it, but having it disabled is not the default. For some reason, Cursor keeps shoving "Made with Cursor" into my PR descriptions despite me disabling attribution, which looks really stupid on a work PR.
I'm so tired of all this BS. Why did this become normal? And how do we not read this as cheap advertising?
annie511266728 2 days ago [-]
I think people read it as cheap advertising because a PR isn't really the tool's output, it's team communication.
A little "made with X" in your own draft is one thing. Putting branding into a PR your coworkers have to read is another.
daemin 2 days ago [-]
Using a LLM to fix a spelling mistake is retardedly lazy.
Presumably they used a free version of the LLM, so it is completely understandable that it inserted a snippet of text advertising its use into the output. I mean, using a free email provider also adds a line of text to the end of every email advertising the service by default: "Sent from my iPhone", etc.
hrmtst93837 2 days ago [-]
sed fixes typos faster. The absurd part is watching devs burn prod tokens on glorified autocorrect, wait through LLM lag for a spelling fix, and then act shocked when the output comes back as word salad with a coupon code glued to the end.
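For what it's worth, the sed version really is a one-liner (the file and the typo here are made up):

```shell
# Hypothetical example: fix a misspelling across a file with sed
printf 'recieve data\n' > /tmp/doc.txt
sed -i 's/recieve/receive/g' /tmp/doc.txt
cat /tmp/doc.txt
```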
LeoPanthera 2 days ago [-]
This comment is shockingly ableist.
2 days ago [-]
onion2k 2 days ago [-]
> Using a LLM to fix a spelling mistake is retardedly lazy.
If you do it manually, sure.
If you have an agent watching for code changes and automatically opening PRs for small fixes that don't need a human in the loop except for approving the change, it's the opposite of lazy. It eliminates all those tedious 1-point stories and lets the team focus on higher-value work that actually needs a person to think about it.
Given time all small changes will be done this way, and eventually there won't be a person reviewing them.
pabrams 2 days ago [-]
That scenario doesn't require any explicit "summoning", and if there's a human in the loop approving the change, certainly they can fix the typo themself.
ex-aws-dude 2 days ago [-]
Sounds like a great use of energy and tokens, not overkill at all
In fact I don't even use Ctrl + F anymore and instead just use Claude for all my searches
onion2k 2 days ago [-]
> Sounds like a great use of energy and tokens, not overkill at all
As much as AI uses a lot of energy, having something that fixes issues in the background is very likely to be a net saving if you consider the number of users who fail to complete a task due to the bug and have to either wait in a broken state or retry later.
It's probably using less energy than a person fixing the issue too. That's a guess though.
2 days ago [-]
j45 1 days ago [-]
It's the hotmail signature all over again?
GN0515 2 days ago [-]
But... why?
2 days ago [-]
charcircuit 2 days ago [-]
This looks like an ad for only Raycast which does not appear to be affiliated with Microsoft or GitHub at all so blaming Copilot or GitHub here is not justified.
Which does show that this is affiliated with GitHub, unlike what I thought. There are no mentions of this string in any code repository on GitHub (including the Raycast Copilot extension).
MattGaiser 2 days ago [-]
Post the trajectory if this is real.
gpvos 2 days ago [-]
What do you mean by trajectory? Also, a simple GitHub search will show you many hits for the Raycast text, proving that this is quite real.
MattGaiser 2 days ago [-]
The path of reasoning the agent took that led it to generate the output. The GitHub search bits got posted after my comment, so while it is clearly real, it just seems injected by Raycast.
pavo-etc 2 days ago [-]
This is real. I do not have access to the path of reasoning, this ran through the GitHub copilot app which does not grant you access to the chain of thought.
Yet folks are refusing to migrate off their products/services—as if it hasn’t been like this for 3 decades already.
lpcvoid 1 days ago [-]
I am doing my very small part: I have been migrating a large part of my family and my employer away for a few years now. The world is better without Microslop. But unfortunately I know that this isn't always possible.
ookblah 2 days ago [-]
maybe every PR should be run through 2 other llms so they just remove the ads of competitors (or i guess you'll end up with all 3) /s
https://github.com/PlagueHO/plagueho.github.io/pull/24#issue... Copilot has been adding the "(emoji) (tip)" thing since May 2025. The Copilot coding agent was released in May 2025, so basically it has had an ad since the beginning.
There are 1.5m of these things in GitHub. https://github.com/search?q=%22%3C%21--+START+COPILOT+CODING...
Here are some of them:
https://github.com/johannesPP/FS-Calculator/pull/2
> Connect Copilot coding agent with Jira, Azure Boards or Linear to delegate work to Copilot in one click without leaving your project management tool.
https://github.com/sharthomas645-tech/HybridAI-Next-React-Vi...
> Send tasks to Copilot coding agent from Slack and Teams to turn conversations into code. Copilot posts an update in your thread when it's finished.
Looks like MS really wants to "give tips" about their new integrations.
edit: I think it's an ad too. Everyone would think so, except for MS.
I'm part of Raycast, we didn't know about it, learnt about it here
Collection of my thoughts which don't really get to a point:
- Microsoft owns GitHub, where Raycast is being mentioned thousands of times by their tooling.
- Microsoft is a modern popularizer of the infamous phrase, embrace extend extinguish. https://en.wikipedia.org/wiki/Embrace,_extend,_and_extinguis...
- Microsoft has a history of monopoly behavior https://en.wikipedia.org/wiki/United_States_v._Microsoft_Cor....
- From an empathetic perspective, I hope for the sake of Raycast's customers and employees that Microsoft is not in any kind of negotiations with Raycast at the moment.
I just want to note that the case you link to was 25 years ago. The number of people working at Microsoft at the time who are still working there today is very small.
- Github
- LinkedIn
- Activision Blizzard
- Xbox
- Azure, Sharepoint and Teams w/Copilot embedded everywhere
- major stake in OpenAI
- a multibillion dollar ad product portfolio (LinkedIn ads, Bing Ads)
The comment was brief, and added detail is welcome, but corporate mission/culture often extends over time even with changes in leadership. Partly because of what was accepted in the past.
That's just a long way of calling Microsoft a bunch of monkeys :-)
https://wiki.c2.com/?TheFiveMonkeys=
Sounds like it’s not your fault but it’s probably doing some brand damage :/
but as we know from this thread, Raycast didn't consent to this.
It might be interesting to see what a lawyer might think of this and if there are enough reasonable claims to genuinely sue for damages
(Raycast should definitely seek a lawyer privately, just in case)
They have got away with it for a while because a lot of users have largely been stuck, but they are in real trouble now with Apple providing meaningful competition.
* checks notes *
Only have copilot shoehorned into most things instead of everything. And some shit about windows developers which isn’t exactly going to fix the glaring issues with the OS itself.
So what was the purpose of all that telemetry they collected then? Because it doesn't seem to have made the OS like what the users want it to be.
That's what telemetry was used for. Every advanced user turned that off when they gave us the option, and now we have every UI on the computer designed for Grandma.
1) collect data
2) ???
3) profit
Are they going to fix hardware they've already sold? On every OEM?
I almost commented that you can just configure it in the settings, but actually the available options don't include Alt. On my Hungarian-layout ThinkPad T14 it replaced the context menu key, not the right Alt, which is luckily the AltGr key; that key has a substantial role in the Hungarian input method and cannot be omitted.
Or what Microsoft could do, run, install, etc on/from your computer while running their Copilot agents.
This is the same company that puts ads in your start menu and reinserts them with Windows updates even if you manually removed them.
("Reflections on Trusting Trust" Turing Award Lecture by Ken Thompson: https://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_Ref...)
The ToS (https://www.microsoft.com/en-us/microsoft-copilot/for-indivi...) says explicitly:
> Copilot may include both automated and manual (human) processing of data. You shouldn’t share any information with Copilot that you don’t want us to review.
so they're reserving the right to process whatever it looks at.
You're sending them your codebase already, as part of the prompt for generating new snippets, debugging, etc. So they have access to it.
They'd be absolute fools not to be using the results of sessions to continue to refine their models, and they already reserved the rights to look at what you send them, so yeah - they're doing it.
(Bonus comedy from the ToS:
> Copilot is for entertainment purposes only.
The lawyers know these things cannot be trusted.)
Looks like they're using this: https://github.com/gblazex/smoothscroll-for-websites
I know it's a bit off topic but I'm just confused as to why that would be on there...
Jokes on them, that's why I consider entire Microsoft for entertainment purposes only.
But one to file away!
Why the assumption it's not already happening?
If anybody but Microsoft does this, it's called malware and they'll end up with an FBI visit and prison time.
Why is the judiciary so skewed here in its judgements?
You’re pointing to something entirely different: those are Copilot-created PRs. They can include anything Copilot wants to include. People using the Copilot PR feature know what they’re buying into.
OP is about Copilot doing post-hoc editing of a human-created PR to include an ad, allegedly without knowledge or approval of the creator (well I assume they did give their team member permission to update the PR body, but apparently not for this kind of crap).
Also, I found this: https://github.com/Laravel-Backpack/medialibrary-uploaders/p... It seems Copilot added an ad on behalf of the user in Nov 2025 (see the last edit).
You'll never guess what happens next.
(Hint: everyone knows what happens next)
What I mean is that even if I take that at face value and accept that it's not an ad, and I can just about see from a certain level of corporate brainwashing how one could believe that, it's still completely unacceptable.
Conversely, in Doom: The Dark Ages they got rid of the traditional "I'm too young to die" difficulty, which had a picture of Doom Guy with a bib and a pacifier. I think there's some new industry guidance that it's a no-no to poke fun at people picking easy difficulties, or even to indicate what difficulty the game was "designed to be played on" (which Japanese game devs happily ignore).
I know these aren't actual equivalents, since your money isn't on the line and it's purely a game state, but it's still an interesting and noteworthy transition.
Ugh, this type of thing is the worst. "Click here to remain fat, drunk and stupid!"*
* Animal House, 1978
That's what I wanted to say! Thank you.
It's not like this is organic word of mouth we're dealing with here.
Otherwise, it would just be Github with displayed ads and that would hurt the brand, so everyone gets ads.
Including Windows, File Explorer, Start Menu, ...
It seems with the latest "ok we went too far" Win11 patch though, they got some tips back from their users.
No, they don't.
> edit: I think it's an ad too. Everyone would think so, except for MS.
You think a company with a $2.65 trillion market cap and an army of marketing professionals doesn't realize that what they're doing here is an ad, and didn't implement it intentionally as such?
That's not even remotely plausible. In the quantum multiverse which contains all physically realizable possibilities, that isn't one of them.
That's one reason I think they would argue it's not an ad. Other reasons are the "recommendations", "tips", and "suggestions" in my Windows.
[0]: https://news.ycombinator.com/item?id=47573233
Correcting your mistakes is not mean. If you didn’t mean what you wrote, well hey, that’s a good example of the difference between what you think and what you say. See how that works?
> In the quantum multiverse which contains all physically realizable possibilities, that isn't one of them.
Or
> See how that works?
These are. You can be sarcastic as much as you want to be but I can't?
And again, I really don't understand why are you so mean about this. I read some of your other comments and many of them are unnecessarily mean. Please be nice.
We've been including product tips in PRs created by Copilot coding agent. The goal was to help developers learn new ways to use the agent in their workflow. But hearing the feedback here, and on reflection, this was the wrong judgement call. We won't do something like this again.
It's appreciated, but these weren't tips, these were ads. Tips are "Save time with keyboard shortcuts" or "Check out the latest features under 'What's New' in the help menu!" When you name other products, that's an ad.
It's an ad for using CoPilot and for Raycast.
> But raycast said they didn't know about it.
If I buy a billboard that tells people to go eat at a nearby restaurant, that's ad regardless of whether or not the restaurant knows that I bought that ad.
> To me the explanation makes perfect sense. "You can use this tool with raycast" seems like a very reasonable tip.
Raycast is a paid product. Even though they have a free tier, they only have that to get people to use and like the tool enough to pay for it. They want you to use Raycast so you use CoPilot and pay for it. It's an ad.
My short search really didn't bring up any definition that included the need of the product/service owner knowning that the advertising is happening.
And the message very much qualifies as trying to bring people to buy raycast (or at minimum to use it which usually want people to also pay later on).
No one, anywhere, ever wants this or anything like it. Do not inject anything that is outside of the context of the session, ever.
This is how you get your software banned at large companies.
Question for you, did anyone on the team really not push back? Does the team really think anyone wants ads in their copilot output? If the answer to both of these is no, you have a team full of yes men, not actual developers.
This is the real question. If they are serious about not doing something like this again, they NEED to look at what process failure let something like this get proposed, designed, implemented, and pushed to production. Usually things get reviewed at each stage. Did the people who pushed back on this get steamrolled? If no one pushed back, that's an even more serious culture question, and the entire org would need training.
A serious "we won't do it again" needs to be accompanied by a COE (correction of errors) for identifying what went wrong and what guardrails can be put in place, and then actually implementing them.
That's a tough one. In the big meeting? In the small meeting? "Officially" push back? Encouraged to make the push back unofficial? Etc. Even just internally, it can be hard to quantify. From internal > external, more so.
The number of times I’ve had to defend someone else’s customers let alone my own is exhausting.
And that dynamic is only allowed within close circles.
I’ve found once “the decision” is made, the bigger the subsequent meeting, protests are often swept under the rug.
On most occasions the worst part is that folks intentionally withhold information to get their way. And thats real hard to compete against without making an ass out of yourself, or losing the trust of others.
This is why core principles matter so much.
Microsoft has been pulling user hostile crap for decades, so either "we" or "like this" (or both) is probably not super accurate. ;)
I believe they were being sincere but reality is often more complicated than 1 persons statement.
Over on twitter, someone from MS said that Copilot can modify PRs simply because they were mentioned?
I've been using GitHub since it was new and heavily rely on coding agents for development, but that's an insanely large security hole. There's clearly confusion about what copilot is and is not able to edit elsewhere in this thread.
I'm backing up old repos now, and am no longer trusting your service as an archive. I'm wondering if the world needs to fork things like npm and vs code to save itself from the supply chain attacks these sort of product management decisions will enable.
I already moved active development elsewhere when you dropped below three nines back in 2024-2025.
My employer pushes copilot quite hard and I’ve never seen copilot do anything without me telling it to act in some way.
If the PR is wholly authored by Copilot I get the spirit of this, although maybe not the best implementation. And "tips" like this that look like an ad for a product _definitely_ feel like an enshittification betrayal of the user, even if it was a genuine recommendation and not a paid advertisement.
In the OP's situation, where Copilot was summoned to fix something within a human-authored PR, irrelevant modification of the PR description to insert unrelated content is especially egregious. Copilot could easily include the tip in its own comment, so I'm curious why it was decided to edit the description of the PR instead.
(Now imagine this edited into the post you just made for a more-apt comparison)
If you do work at MS, I cannot believe any person involved legit thought it was "just a tip and nobody will mind their posts being edited to include product recommendations". I don't know what other parts of your comment are honest if the core statement is false
This has just as much value as when an LLM claims it won't make a certain mistake again, and for exactly the same reason.
You should gather together your team and look through the responses to this thread together. There are a lot of emotions in these comments, but it could be a very constructive experience if you're able to put that aside. I'm sure you're aware that customer-sentiment toward Github has been poor lately, but these commenters are your customers. I believe Github has the potential to win back loyalty, but it will require a deeper understanding of your customer segment.
Microsoft owns GitHub where many of these ethical violations are easily found and were perpetrated.
I speculate the cultural safety around that monopoly-power for corporate-benefit behavior could still be present and accepted for negotiations between MS and acquisition targets.
I also note that ”for PRs” - will we see these appearing as comments in generated code?
Sureeeeee
I see that you're a product manager at GitHub. Can you explain why you thought this feature was value-added?
It's only semi-related in that it's a similar string that's appearing in millions of repos due to a GitHub feature change, but it's now polluting Google search results with tons of duplicate URLs unnecessarily. The issue has 100+ votes but has been entirely ignored by the GitHub team.
Is Microsoft receiving payments for these?
I appreciate the rest of your reply, but it would be generous to say you're stretching the truth here. Yes, the official MS statement is that these are "tips", but you, I, and everyone else here knows what this is.
See, what I expect is that you or someone on your team will move on internally, and then all promises made will be not just forgotten, but tossed aside with relief. Because this is The Way within MS now. All projects are just fodder for your CV, and when you get that paybump/position you want some other completely unscrupulous actor will join and implement the same. exact. thing.
Edit: Wow this is a shitshow. It's almost like you dumb fuckers have burned up ALL THE GOODWILL YOU HAD LEFT.
A verifiable claim! I put it at 75% you totally will, but if any manifolders think I’m full of it it should converge to something less cynical
https://manifold.markets/HastingsGreer/will-microsoft-copilo...
Once you put a deadline on it. As stated I don’t think it is.
You may not feel you owe $BigCoEmployee better (though chances are, said person is just as much a community member here as you and the other users slamming them are), but you owe this community better if you're participating in it.
https://news.ycombinator.com/newsguidelines.html
As the dozens of other comments show, the overwhelming majority of us do not believe the root commenter's claims, and this PM quite objectively does not have the leverage and authority to back their claim that they won't let this happen again.
It’s hard not to read your conception of “trying for something different” as granting undue credulity to a transparently dishonest corporate actor.
The impulse to hit back against what is perceived as a "transparently dishonest corporate actor" is natural and human. I feel it also, and in fact my first response when I read such comments is always an adrenaline surge and the peculiar pleasure-hit of righteous indignation. So yes, I know where these feelings are coming from; we all do.
The problem is that in the HN context, (1) there is a human being at the other end of the account being attacked, and (2) there are orders of magnitude more attackers. In practice, this can easily turn into a mob dynamic and in fact a mass beating, if a virtual one. That's bad in its own right and bad for the community here.
Edit - past explanations in case relevant:
https://news.ycombinator.com/item?id=28821698
https://news.ycombinator.com/item?id=28647036
more at https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
Honest question: If we agree that the transparent dishonesty and the lynch mob behavior are both undesirable, how do you think the two should be balanced in operative terms?
I don’t want to put words in your mouth — but are you saying you won’t allow direct pushback to dishonest corporate actors??
My view is that healthy discourse requires balance and proportionality: flagrant dishonesty, as is the case here, should license a proportional degree of pushback.
I don’t agree at all that “nobody believes this” is quite the personal attack you’re making it out to be, but I don’t care to debate that at length either.
(1) the long-term health of the community has to be the priority here. Otherwise it won't survive—all the default internet vectors point the other way;
(2) it's possible to push back, express skepticism, etc., in a way that respects the person on the other side of the conversation and isn't just venting the impulse to shame the other.
You guys (<-- by which I really mean all of us in this community) need to remember that you're not just addressing a $BigCo abstraction when you post replies to someone else's comments. You're talking to an individual human. Sure, they may be working for a large and powerful company; but in the HN context the power dynamic is actually quite the reverse. If you put yourself in their shoes for a minute, it shouldn't be so hard to recognize that.
Like I said upthread, I agree with you on the underlying issue. But we also have to preserve the container, and the latter has to take precedence.
At the end of the day, if you want intellectual curiosity and openness, bad-faith dishonesty needs to be weeded out; thought-provoking and honest conversation should be promoted, regardless of where the contributor is employed.
The problem isn’t working for Microsoft. The problem is dishonesty.
You’re treating the root comment with kid gloves because it’s from a Microsoft employee. Please don’t do that.
It's obvious that the dominant variable in the GP was that he was replying from within $BigCo. Your comment starts out by denying that and ends by confirming it.
I'm not asking for special treatment for anyone, but the opposite: I don't want anyone on HN to be the target of a mob. That's the entire point.
The root comment is an aggressive affront to the audience’s collective intelligence. You’re in full “rules for thee; not for me” territory, and undermining your own site guidelines if you wanna let the root comment stand unchecked but go after the rightful callouts, in my book.
Hi Tim.. Why is there no pushback from grounded individuals against these decisions ?
It's like you hiding shorts on youtube.
"We tried to put ads in our product and it made people upset, upon realizing that this has angered our already paying users, we realize we should try again in a month. We're also aware GitHub is down, and are doing our best to deliver you a single 9 of reliability"
This helps us establish a strong, cohesive brand image inline with what customers of GitHub expect.
---
Edit: I don't mean anything bad toward Tim here; he seems like a nice guy with good technical experience, etc. Rather, I'm expressing the almost comical extent to which I and, to the best of my understanding, many other community members now see GitHub in a very negative light: unreliable and, as the article points out, enshittified. So this is aimed at GitHub, not Tim; it's just addressed to him for the bit.
Tim, I do actually appreciate you responding to this thread and if you do have the power to make things better, using that power to do so.
it won't be an ad. It won't be a tip. It will be a suggestion! Recommendation! Opportunity!
We're not remotely even.
https://news.ycombinator.com/newsguidelines.html
Okay, but when will Microsoft?
Or is it a more charitable interpretation to suggest they did intend this to be the effect?
it is rather nice, honestly. would you prefer to scream into the void and not get any response at all?
an open line of communication with the responsible people seems like literally the best possible option, why are you actively discouraging it?
>Maybe you all want to talk to Microsoft PR/legal before posting?
you would rather not hear anything, or get word-salad legalese that doesnt mean anything? how exactly would that be better?
At this point, yes. What have false platitudes done except cause more in-fighting?
>an open line of communication with the responsible people
And here's how the in-fighting begins. I'm not falling for the "they responded on social media. They're just like us!" anymore.
I don't want words, I want actions. Tired of playing whack a mole.
>you would rather not hear anything, or get word-salad legalese that doesnt mean anything?
Hearing nothing doesn't waste my time.
if not wasting time is your goal, several layers deep into the comments of a hackernews post is probably not the correct place to be.
The responses are affecting my impression of Microsoft and Github extremely negatively. I don’t think I am alone.
It’s already pretty word salad legalese in my opinion, at least from Github.
That post has a link to the FAQ which might also be helpful: https://github.com/orgs/community/discussions/188488
Supremely ethical of you to ignore the license terms of open source code, but respect the license for proprietary code.
The behavioral impositions by the court in United States v. Microsoft discourage it from monopoly behavior by requiring it to open third-party APIs to competitors.
Q: Will Microsoft share its access to users' private repos (where they have not opted out of this training) via its GitHub subsidiary with third parties (e.g. OpenAI and Anthropic), in the spirit of its loss to the United States in its monopoly trial?
E.g., ethically, it could be argued today that Microsoft is monopolizing user data for its own AI tooling advantage.
Microslop proving their name time and time again.
and I wonder if this opt-out applies to data we stored under your umbrella before having opted-out.
I’m considering getting a 1U device to host my own git server. I feel like if I move off, I should do it generally vs just moving to another provider who may also pull shenanigans.
i.e. you can run it effectively even on a Raspberry Pi
Remember to ensure you have proper backups regardless of whatever you decide to host it on. :)
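For what it's worth, a mirror clone is one simple way to take a full backup of a repo (all branches and tags), wherever it's hosted. A minimal sketch; the `/tmp/demo-*` paths are placeholders standing in for your real repositories:

```shell
#!/bin/sh
set -eu

# Throwaway paths for the demo; point these at your real repos.
rm -rf /tmp/demo-origin /tmp/demo-backup.git /tmp/demo-restored

# Create a stand-in "origin" repository with one commit.
git init --quiet /tmp/demo-origin
git -C /tmp/demo-origin -c user.email=you@example.com -c user.name=You \
    commit --quiet --allow-empty -m "initial commit"

# Take a full backup: --mirror copies every ref (branches, tags, notes).
git clone --quiet --mirror /tmp/demo-origin /tmp/demo-backup.git

# Later, refresh the backup with a single command.
git -C /tmp/demo-backup.git remote update --prune

# The backup is itself a valid repository you can restore from.
git clone --quiet /tmp/demo-backup.git /tmp/demo-restored
```

Run the refresh step from cron and you have a basic off-site archive independent of any hosting provider.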
https://github.blog/changelog/2026-03-25-updates-to-our-priv...
We should not be using Copilot in the first place.
1. Everyone doing this doesn't mean it's acceptable.
2. Google Gemini explicitly says right under the chat box if you are a paid subscriber (Workspace):
Not sure about the others.
https://privacy.claude.com/en/articles/10023555-how-do-you-u...
This is incorrect. If you are a paid subscriber, Gemini explicitly states it doesn't use your data to train its models.
(whether or not you should have to opt in or out is a different topic)
https://github.com/settings/copilot/features
-> Privacy -> "Allow GitHub to use my data for AI model training"
It's sort of a moot point since the whole thing is for goodwill anyway.
They freely scraped licensed code and semi-private data across the internet and now they're pretending that they need to license anything.
If a court rules they had to license data in the first place then the whole industry would actually have to start following laws.
Hell, I just saw an amazing open-source alternative to Raycast[0] and just replaced it the other day.
0. https://github.com/ospfranco/sol
Solo founder here. My business is not VC-backed nor publicly traded, and I specifically avoided taking investment so that I can make all the decisions.
I avoid enshittification. This sometimes hurts revenue, but so be it. I wouldn't want to subject my users to anything I wouldn't like.
So, open-source is not the only hope. You can run a sustainable business without enshittification. The problem is money people. The moment money people (career managers, CFOs, etc) take over from product people, the business is on a downward path towards enshittification.
Even when I use proprietary software, I sleep easier at night knowing that open-source alternatives keep them honest in their approach and I have an out if things do change.
Stallman was always right, after all.
edit: oh, that and distributed authentication and distributed discovery
https://status.codefloe.com/
Unhealthy doesn't mean unusable but it sounded great until I checked that.
Every company or entity changes over time. Codeberg is great, but with more people using it for free without donating, and worse, more people abusing the service with AI-generated code, malware, etc., it will get more expensive to keep running. For now they have money, but as an e.V. in Germany, you survive either from members or from donations. So use Codeberg, but most importantly, support it!
Sure; a platform is a platform is a platform. As for predictions, it is interesting to see whether self-hosting and smaller self-managed infrastructures will gain more traction again.
It will be there for as long as you (and everyone else) keep using it.
The large majority of the dystopian web, like Gmail, Facebook, etc. depend on that.
People who avoid e.g. Github, Gmail, Facebook, Xitter, etc. out of concern for broader principles will always be minor outliers.
Xitter is one of the best examples. Everyone knows it's compromised, owned by a dangerously antisocial person who's actively working at multiple levels to make the lives of everyone else on Earth worse, yet very few have stopped using it.
The saying "There's no ethical consumption under capitalism" is far too weak. It should be more like: there are no ethics under capitalism.
https://sourceforge.net/directory/linux/
...for now.
> like JIRA
is not an industry standard. It's software that's widely used by some folks. I used it in the past but am not using it now, for example.
> Maybe it's just an experiment at this moment.
Does Microsoft understand objection and negative feedback to experiments?
By the way, most pre-industry-standard FOSS projects still have their own infrastructure. I do find it disappointing that Rust is on GitHub.
Anyway, the core value of Github has always been collaboration - this is where people were. If people go to other platforms, this core value dwindles. And switching platforms is not that difficult.
https://news.ycombinator.com/item?id=47570820
One thing I do like, however, is how agents add themselves as co-authors in commit messages. Having a signal for which commits are by hand and which are by agent is very useful, both for you and in aggregate (to see how well you are wielding AI, and the quality of the code being generated).
Even when I edit the commit message, I still leave in the Claude co-author note.
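For context, that co-author signal is just a standard `Co-authored-by:` trailer in the commit message, which GitHub parses for attribution. A minimal sketch (the repo path is a placeholder, and the name/email shown are the defaults Claude-style tools tend to use, not anything prescribed):

```shell
#!/bin/sh
set -eu

# Throwaway repo for the demo (placeholder path).
rm -rf /tmp/demo-coauthor
git init --quiet /tmp/demo-coauthor
cd /tmp/demo-coauthor

# Each -m adds a separate paragraph, so the trailer lands after a
# blank line, which is what trailer parsing expects.
git -c user.email=you@example.com -c user.name=You commit --quiet --allow-empty \
    -m "Fix the frobnicator" \
    -m "Co-authored-by: Claude <noreply@anthropic.com>"

# Trailers are machine-readable; this extracts them for auditing,
# e.g. to count how many commits in a repo were agent-assisted.
git log -1 --format=%B | git interpret-trailers --parse
```

Because the trailer is structured, tooling can aggregate it later (the "in aggregate" signal mentioned above) without any heuristics about code style.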
AI coding is a new skill that we're all still figuring out, so this will help us develop best practices for generating quality code.
Whoever is submitting the code is still responsible for it, why would the reviewer care if you wrote it with your fingers or if an LLM wrote (parts of) it? The quality+understanding bar shouldn't change just because "oh idk claude wrote this part". You don't get extra leeway just because you saved your own time writing the code - that fact doesn't benefit me/the project in any way.
Likewise, leaving AI attribution in will probably have the opposite effect as well, where a perfectly good few lines of code gets rejected because some reviewer saw it was claude and assumed it was slop. Neither of these cases seems helpful to anyone (obviously its not like AI can't write a single useable line of code).
The code is either good or it isn't, and you either understand it or you don't. Whether you or claude wrote it is immaterial.
AI is a very new tool, and as such the quality of the code it produces depends both on the quality of the tool, and how you've wielded it.
I want to be able to track how well I've been using the tool, to see what techniques produce better results, to see if I'm getting better. There's a lot more to AI coding than just the prompts, as we're quickly discovering.
Claude-generated code is sufficient—it works, it's decent quality—but it still isn't the same as human written code. It's just minor things, like redundant comments that waste context down the road, tests that don't test what they claim to test, or React components that reimplement everything from scratch because Claude isn't aware of existing component libraries' documentation.
But more importantly, I expect humans to be able to stand by their code, and at times defend against my review. But today's agents continue to sycophantically treat review comments like prompts. I once jokingly commented on a line using a \u escape sequence to encode an em dash, how LLMs would do anything to sneak them in, and the LLM proceeded to replace all — with --. Plus, agents do not benefit from general coding advice in reviews.
Ultimately, at least with today's Claude, I would change my review style for a human vs an agent.
As you allude to (and i agree), any non-trivial quantity of code, if SOLELY written by claude will probably be low-quality, but this is apparent whether I know its AI beforehand or not.
I am admittedly coming at this as much more of an AI-hater than many, but I still don't really get why I'd care about how-much or how-little you used AI as a standalone metric.
The people who are using AI "well" are the ones producing code where you'd never even guess it involved AI. I'm sure theres linux kernel maintainers using claude here and there, its not like they expect to have their patches merged because "oh well i just used claude here don't worry about that part".
(But also yes, of course I'm not going to talk to claude about your PR, I will only talk to you, the human contributor, and if you don't know whats up with the PR then into the trash it goes!)
While code is either good or not, evaluating it is a somewhat subjective exercise. We like to think we are infallible code-evaluating machines, but the truth is, we make mistakes, and we also take shortcuts. So knowing who made the commit, and whether they used AI, can help us evaluate the code more effectively.
That being said, it also matters who wrote it, because LLMs are more likely than humans to write code that looks like quality code but is wrong.
The problem is that submitters often do not feel responsible for it anymore. They will just feed review comments back to the LLM and let the LLM answer and make fixes.
This is disrespectful of the maintainers' time. If the submitter is just vibe/slop coding without any effort on their part, it's less work to do it myself directly using an LLM than having to instruct someone else's LLM through GitHub PR comments.
In this case it's better to just submit an issue and let me just implement it myself (with or without an LLM).
If the PR has a _co-authored by <LLM>_ signal, then I don't have to spend time giving detailed feedback under the assumption that I am helping another human.
If someone is repeatedly sending me slop to look at I'll block them whether or not they tell me an LLM was involved
Maybe one day we can say that, but currently, it matters a lot to a lot of people for many reasons.
That was my point here, it is a false signal in both directions.
For instance, I would want any AI-generated video showing real people to have a disclaimer, the same way TV ads carry disclaimers noting whether the people in testimonials are actors. That is not only not false, but is actually a useful signal that helps prevent overly deceptive practices.
If I have a block of human code and an identical block of llm code then whats the difference? Especially given that in reality it is trivial to obfuscate whether its human or LLM (in fact usually you have to go out of your way to identify it as such).
I am an AI hater but I'm just being realistic and practical here, I'm not sure how else to approach all this.
A line at the bottom of PRs, reports, etc that says "authored with the help of Copilot" is fine.
And selfishly — I'd rather not run into a scenario where my boss pulls up GitHub, sees Claude credited for hundreds of commits, and then he impulsively decides that perhaps Claude's doing the real work here and that we could downsize our dev team or replace with cheaper, younger developers.
As for hobby projects, I strongly encourage you to not care. You aren't going to lawyer up to sue anybody, nor is anybody going to sue you, so YOLO. Do whatever satisfies you.
What you're doing would fundamentally be similar to copyright theft, using 'someone' else's code without attributing them (it?) to avoid repercussions
Obviously the morals and ethics of not attributing an LLM vs an actual human vary. I am not trying to simp for the machines here.
> We've disabled it already. Basically it was giving product tips which was kinda ok on Copilot originated PR's but then when we added the ability to have Copilot work on _any_ PR by mentioning it the behaviour became icky. Disabled product tips entirely thanks to the feedback.
> Disabled product tips entirely thanks to the feedback.
This sounds like they are saying “thanks for your input!”, when really it feels more like “if you didn’t go out of your way to complain, we would have left it in forever!”
Ads implies someone was paying for them. Promoting internal product features is not the same thing - if it was then every piece of software that shows a tip would be an ad product, and would be regulated as such.
It doesn't to me.
By my understanding of the term, Netflix can most definitely advertise Netflix shows on its own platform, a flyer that a barber hangs on a public bulletin board is an advertisement, and the Oscar Mayer Weinermobile is advertising hotdogs when it drives through my town. Do you not consider these things to be advertisements?
I pretty much agree with what https://en.wiktionary.org/wiki/advertisement says.
Two things:
1. People using the word "advertisement" when commenting on this situation aren't necessarily saying that's what's happening, and they may find these tips/ads distasteful anyway (I know I do).
2. Even if someone isn't literally paying Microsoft to insert these tips/ads, promoting third parties which are themselves Microsoft customers still benefits Microsoft.
Maybe I put up with it and it just adds to my subconscious seething, or maybe I get the episode elsewhere because if I watch on jellyfin I don't have the advert. Of course that then harms the show as my viewing isn't counted, but they've cancelled it anyway so perhaps it doesn't really matter.
If it isn't an advert, then at very least there's a button to disable it.
Season 5 is coming out now with season 6 already confirmed coming—which, granted, will be its last, but that’s not a cancellation in any sense of the word.
Ads tend to also imply tangential information shown to you in an undesired area. If this was some tool tip and not embedded in the PR comment, many wouldn't call it an ad.
I think this is a Raycast issue, looking at these links. It appears on GitLab too, which is enough for me.
(That said I’m rather skeptical of this and would like to see more details of the process that produced this, and proof.)
Edit: Just noticed this official GitHub blog post from last month advertising Raycast, making this story a lot more believable: https://github.blog/changelog/2026-02-17-assign-issues-to-co...
Also, the documentation on Github, linked to by the ad, shows only Mac keyboard shortcuts for operating Raycast.
I don't see how this is supposed to be legal.
So I think they’re injecting this as a tip on using Copilot, that just happens to be their integration with Raycast.
I have no idea what their actual partnership with Raycast looks like, maybe this is part of what they offered them? But it’s not a traditional link to another product ad like it appears to be from Raycast being a link.
https://www.theregister.com/2026/03/30/github_copilot_ads_pu...
GitHub's docs and blog make use of and feature Raycast, and I'm willing to bet that's the result of a partnership, and not because someone writing docs and blog posts happens to think Raycast is great and keeps bringing it up.
Seeing them is an easy signal to recognize work that was submitted by someone so lazy they couldn’t even edit the commit message. You can see the vibe coded PRs right away.
I think we should continue encouraging AI-generated PRs to label themselves, honestly.
I’m not against AI coding tools, but I would like to know when someone is trying to have the tool do all of their work for them.
I disagree on that. It's really a gray area.
If it's some lazy vibecoded shit, I think what you say totally applies.
If the human did the thinking, gave the agent detailed instructions, and/or carefully reviewed the output, then I don't think it's so clear cut.
And full disclosure, I'm reacting more to copilot here, which lists itself as the author and you as the co-author. I'm not giving credit to the machine, like I'm some appendage to it (which is totally what the powers-that-be want me to become).
> Claude setting itself as coauthor is a good way to address this problem, and it doing so by default is a very good thing.
I do agree that's a sensible default.
Yes, it really depends on how much of the work the agent actually produced. It could be as little as doing a rename or a refactor, or executing direct orders that require no creativity or problem solving. In those cases the agent shouldn't be credited any more than the linter or the IDE.
Using AI tools to code and then hiding that is unethical imo.
Pre-LLMs, various helper tools (including LSPs), would make code changes to improve the quality of the code - from simple things like adding a const specifier to a function, to changing the actual function being called.
No one insisted that the commit shouldn't have the human's name on it.
Of course most people don’t do that
So even if I go over the commit with a fine tooth comb and feel comfortable staking my personal reputation on the commit, I still can't call myself the sole author.
Now that the cost of writing code is $0, the planner gets the credit.
Like how you don't put human code reviewers down as coauthors, you also don't put the computer down as a coauthor for everything you use the computer to do.
It used to be the case where if someone wrote the software, you knew they put in a certain amount of work writing it and planning it. I think the main issue now is that you can't know that anymore.
Even something that's vibe-coded might have many hours of serious iterative work and planning. But without using the output or deep-diving the code to get a sense of its polish, there's no way to tell if it is the result of a one-shot or a lot of serious work.
"Coauthored by computer" doesn't help this distinction. And asking people to opt-in to some shame tag isn't a solution that generalizes nor fixes anything since the issue is with people who ship poor quality software. Instead we should demand good software just like we did when it was all human-written and still low quality.
It’s not about shame. It’s about disclosure of effort / perceived-quality. And you’re right about the second part, but there’s even less chance of that being enforced / adopted.
If they could do that, then they wouldn't be wasting your time to begin with. They'd have the ability to go "nah this PR is trash".
So the next idea is that we can find some sort of proxy, like whether someone used an LLM or not. But that's too ham-fisted since expert engineers with all the self-awareness also use the tool, and they have the ability and self-awareness to know that the software they are shipping is good quality, so why would they use the shame tag?
The shame tag has no audience. It's a fantasy that low quality actors will self-identify, else all sorts of societal problems would be made trivial.
Interested to read opinions on this approach.
Seems... Not that useful?
Why would someone make commits in your local projects without you knowing about it? That git hook only works on your own machine, so you're trying to prevent yourself from pushing code you haven't reviewed, but the only way that can happen is if you use an agent locally that also makes commits, and you aren't aware of it?
I'm not sure how you'd end up in that situation, unless you have LLMs running autonomously on your computer that you don't have actual runtime insight into? Which seems like it'd be a way bigger problem than "code I didn't review was pushed".
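For what it's worth, if one did want such a guard, the core of the hook could look something like this sketch. The trailer names ("Co-Authored-By: Claude", "Reviewed-by") are assumptions for illustration, not any tool's documented convention:

```shell
# Sketch of the per-commit check a pre-push hook might run.
# Succeeds (exit 0) when a commit message credits an agent but carries
# no review trailer -- i.e. when the push should be blocked.
has_unreviewed_agent_trailer() {
  printf '%s\n' "$1" | grep -qi '^Co-Authored-By: Claude' &&
    ! printf '%s\n' "$1" | grep -qi '^Reviewed-by:'
}

# A .git/hooks/pre-push script would then loop over the commits being
# pushed (stdin gives "<local ref> <local sha> <remote ref> <remote sha>"):
#   while read local_ref local_sha remote_ref remote_sha; do
#     for sha in $(git rev-list "$remote_sha..$local_sha"); do
#       if has_unreviewed_agent_trailer "$(git log -1 --format=%B "$sha")"; then
#         echo "refusing to push unreviewed agent commit $sha" >&2
#         exit 1
#       fi
#     done
#   done
```

Which only reinforces the point above: this protects you from yourself, since any agent (or person) can simply omit the trailer.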
If you gave it four words and waited an hour, maybe you're not the author. But that's not how these tools are best used anyway.
IANAL, so I'd appreciate any legal experts correcting me here. In my understanding, there have been court decisions holding that LLM output itself is not copyrightable. You can only claim authorship (and therefore copyright) if you have significantly transformed the output.
If you are truly vibe coding to the point where you don't even look at the generated code, how exactly are you transforming the LLM output?
Also, what if the LLM reproduces existing copyrighted code? There was a court decision last year in Germany holding that OpenAI violates German copyright law because ChatGPT may recreate existing song lyrics (that are licensed by GEMA) or create very similar variations.
> Seeing them is an easy signal to recognize work that was submitted by someone so lazy they couldn’t even edit the commit message. You can see the vibe coded PRs right away.
I was doing the opposite when using ChatGPT. Specifically manually setting the git commit author as ChatGPT complete with model used, and setting myself as committer. That way I (and everyone else) can see what parts of the code were completely written by ChatGPT.
For changes that I made myself, I commit with myself as author.
Why would I commit something written by AI with myself as author?
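Git already separates the two roles, so this workflow needs no special tooling; a minimal sketch in a throwaway repo (the "ChatGPT (gpt-4o)" author string is a made-up example):

```shell
# Demonstrate the author/committer split git records for every commit.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.name "Jane Dev"          # committer identity
git config user.email "jane@example.com"

echo "print('hello')" > gen.py
git add gen.py

# --author overrides only the author field; the committer stays as configured.
git commit -q --author="ChatGPT (gpt-4o) <noreply@openai.com>" \
  -m "Add generated script"

# Prints author: ChatGPT (gpt-4o) and committer: Jane Dev on separate lines.
git log -1 --format='author: %an%ncommitter: %cn'
```

Note that plain `git log` shows only the author by default; reviewers would need `git log --format=fuller` (or similar) to see the committer field.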
> I think we should continue encouraging AI-generated PRs to label themselves, honestly.
Exactly.
Because you're the one who decided to take responsibility for it, and actually choose to PR it in its ultimate form.
What utility do the reviewers/maintainers get from you marking what's written by you vs. ChatGPT? Other than your ability to scapegoat the LLM?
The only thing that actually affects me (the hypothetical reviewer) and the project is the quality of the actual code, and, ideally, the presence of a contributor (you) who can actually answer for that code. The presence or absence of LLM-generated code by your hand makes no difference to me or the project; why would it? Why would it affect my decision making whatsoever?
It's your code, end of story. Either that, or the PR should just be rejected, because nobody is taking responsibility for it.
Model information for traceability and possibly future analysis/statistics, and author to know who is taking responsibility for the changes (and, thus, has deeply reviewed and understood them).
As long as both pieces of information are present in the commit, I guess which commit field holds which is for the project to standardise (but it should be consistent within a project, otherwise the "traceability/statistics" part can't be applied reliably).
Code completion before LLMs helped me type faster by completing variable names, variable types, function arguments, and that's about it. It was faster than typing it all out character by character, but the autocompletion wasn't doing anything outside of what I was already intending to write.
With an LLM, I give brief explanations in English to it and it returns tens to hundreds of lines of code at a time. For some people perhaps even more than that. Or you could be having a “conversation” with the LLM about the feature to be added first and then when you’ve explored what it will be like conceptually, you tell it to implement that.
In either case, I would then commit all of that resulting code with the name of the LLM I used as author, and my name as the committer. The tool wrote the code. I committed it.
As the committer of the code, I am responsible for what I commit to the code base, and everyone is able to see who the committer was. I don’t need to claim authorship over the code that the tool wrote in order for people to be able to see who committed it. And it is in my opinion incorrect to claim authorship over any commit that consists for the very most part of AI generated code.
For example, in a given interaction the user of the LLM might be acting more like someone requesting a feature, and the LLM is left to implement it. Or the user might be acting akin to a bug reporter providing details on something that’s not working the way it should and again leaving the LLM to implement it.
While on the other hand, someone might instruct the LLM to do something very specific with detailed constraints, and in that way the LLM would perhaps be more along the line of a fancy auto-complete to write the lines of code for something that the user of the LLM would otherwise have written more or less exactly the same by hand.
I think this is a good balance, because if you don't care about the bot you still see the human author. And if you do care (for example, I'd like to be able to review commits and see which were substantially bot-written and which were mostly human) then it's also easy.
Why is this, though? I'm genuinely curious. My code-quality bar doesn't change either way, so why would this be anything but distracting to my decision making?
Mostly this is because, all things considered, I really do not need to interact with any of that, so I'm doing it by choice. Since it's entirely voluntary I have absolutely no incentive to interact with things no one bothered to spend real time and effort on.
Even excluding open source, there are no serious tech companies not using AI right now. I don't see how your position is tenable, unless you plan to completely disconnect.
While I agree that it would be nice to filter out low effort PRs, I just don't see how you could possibly police it without infringing on freedoms. If you made it mandatory for frontier models, people would find a way around it, or simply write commits themselves, or use open weight models from China, etc.
Again though, people can trivially hide the fact they used an LLM to whatever extent, so we kind of need to adjust accordingly.
Even if saying no to all LLM involvement seemed pertinent, it doesn't seem possible in the first place.
With AI I have no way of telling if it was from a one line prompt or hundreds. I have to assume it was one line by default if there's no human sticking their neck out for it.
Disclosing AI has its purposes, I agree, but its not like we can reliably get everyone to do it anyway, which also leads me to thinking this way.
Outside of your one personal project, it can also benefit you to understand the current tendencies and limitations of AI agents, either to consider whether they're in a state that'd be useful to use for yourself, or to know if there are any patterns in how they operate (or not, if you're claiming that).
Burying your head in the sand and choosing to be a guinea pig for AI companies by reviewing all of their slop with the same care you'd review human contributions with (instead of cutting them off early when identified as problematic) is your prerogative, but it assumes you're fine being isolated from the industry.
>Burying your head in the sand and choosing to be a guinea pig for AI companies by reviewing all of their slop with the same care you'd review human contributions with (instead of cutting them off early when identified as problematic) is your prerogative, but it assumes you're fine being isolated from the industry.
I mean, listen: I wish with every fiber of my being that LLMs would disappear off the face of the earth for eternity, but I really don't think I'm "isolating myself from the industry" by not simply dismissing LLM code. If I find a PR to be problematic I just cut it off; that's how I review in the first place. I'm telling some random human who submitted the code to me that I'm rejecting their PR because it's low quality; I'm not sending Anthropic some long detailed list of my feedback.
This is also kind of a moot point either way, because everyone can just trivially hide the fact that they used LLMs if they want to.
By this logic, it's useful to know whether something was LLM-generated or not because if it was, you can more quickly come to the conclusion that it's LLM weirdness and short-circuit your review there. If it's human code (or if you don't know), then you have to assume there might be a reason for whatever you're looking at, and may spend more time looking into it before coming to the conclusion that it's simple nonsense.
> This is also kind of a moot point either way, because everyone can just trivially hide the fact that they used LLMs if they want to.
Maybe, but this thread's about someone who said "I'd like to be able to review commits and see which were substantially bot-written and which were mostly human," and you asking why. It seems we've uncovered several feasible answers to your question of "why would you want that?"
Fair enough
I'd be thanking the reserve and the people who made it, and crediting myself with the small action of slightly moving my hand, as much as it's worth.
Also, text editors would be a better analogy if the commit message referenced whether it was created in the web ui, tui, or desktop app.
When I vibe code - which for me, means using very high level prompts and largely not reading the output - then I could see attributing authorship to a model; but then I wonder what the purpose of authorship attribution is to begin with. Is it to tell you who to talk to about the code? Is it personal attestation to quality, or to responsibility? Is it credit? Some combination of these certainly, but AI can hold none except the last, and the last is, to me, rather pointless. Objects don't have feelings and therefore are unaffected by whether credit is given or not; that's purely a human concern.
I suppose the dividing line is fuzzy and perhaps best judged on the basis of the obscenity rule, that is, I know it when I see it.
I don't use any paid AI models (for all my use cases, free models usually work really well), so for some small scripts/prototypes I sometimes just use a Gemini model; aistudio.google.com is a good one too.
I then sometimes manually paste the output and just hit enter.
These are prototypes though, although I build in public. Mostly done for experimental purposes.
I'm not sure how many people might be doing the same, though.
But in some previous projects I have had notes stating "made by Gemini" etc.
Maybe I should write a commit message/description stating that AI wrote this, but I really like having the message be something relevant to the creation of the file, etc. There's also the fact that GitHub Copilot itself sometimes generates the message for you, so you have to manually remove its text if you wish to change what the commit says.
Personally, I adjusted the defaults since I don't like emojis in my PR.
[1]: https://code.claude.com/docs/en/settings#attribution-setting...
So, my personal rule is: if I implemented a feature with Claude, I'll ask it to commit the code and it will add Co-Authored-By. If I made the change manually, I'll commit it myself.
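Per the linked attribution settings page, the defaults can reportedly be adjusted in the settings file; something along these lines, though the exact key name should be checked against that doc rather than taken from this sketch:

```json
{
  "includeCoAuthoredBy": false
}
```

With a setting like that, commits the agent makes would no longer carry the Co-Authored-By trailer, which is exactly what you'd want to avoid if you follow the rule above of letting the trailer signal agent involvement.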
> Co-Authored-By: Claude Opus 4.6 noreply@anthropic.com
Compare that to the message the article is talking about:
> Quickly spin up Copilot coding agent tasks from anywhere on your macOS or Windows machine with Raycast (https://gh.io/cca-raycast-docs).
It's not just mentioning it was written via Copilot, it's explicitly advertising for another product.
If you saw this line in a commit, you'd know exactly where it came from.
By default, the LLM is credited with authorship anyway, and I assume the user can easily just remove the ad, though I don't use Copilot.
> was submitted by someone so lazy they couldn’t even edit the commit message. You can see the vibe coded PRs right away.
As others mentioned, this is very intentional for me now as I use agents. It has nothing to do with laziness, I'm not sure why you would think that? I assume vibe coded PRs are easy enough to spot by the contents alone.
> I would like to know when someone is trying to have the tool do all of their work for them.
What makes you think the LLM is doing _all_ of the work? Is it really an impossibility that an agent does 75% of the work and then a responsible human reviews the code and makes tweaks before opening a PR?
Because even as far as Opus 4.6 and GPT 5.4 have come, they still produce a lot of unwanted, unnecessary, or overly complex code when left to their own devices.
Vibe coding PRs and then submitting them as-is is lazy. Everyone should be reviewing and editing their own PRs before submission.
If you're just vibe coding and submitting, you're passing all of the work on to your team to review your AI's output.
You are saying "if you leave the AI attribution in the PR/commit description, it HAS to be a slop PR that was not reviewed by a human beforehand". And I'm saying that's not true at all and you shouldn't assume that.
Absolutely spot on. Maybe I'm old school, but I never let AI touch my commit message history. That is for me - when 6 months down the line I am looking at it, retracing my steps - affirming my thought process and direction of development, I need absolute clarity. That is also because I take pride in my work.
If you let an AI commit gibberish into the history, that pollution is definitely going to cost you down the line, I will definitely be going "WTF was it doing here? Why was this even approved?" and that's a situation I never want to find myself in.
Again, old man yells at cloud and all, but hey, if you don't own the code you write, who else will?
Please read my comment before throwing insults.
My comment literally said I'm not anti-LLM.
I do use LLMs. I do not submit their output as-is. For anything beyond basic changes they rarely output the exact code I want by themselves.
I said I'm against people submitting PRs generated by LLMs and pretending it's their own work. Anyone who is serious about this already edits their code and commit messages first. These little signals are a good tell for who isn't doing that.
Brought to you by Carl’s Jr.
I'm reminded of Jay Mohr's legendary take some years back on the creepy Carl's Jr. commercials:
https://www.youtube.com/watch?v=OJlYRS2Vqkw
>Developers would react extremely negatively. This would be seen as 1. A massive breach of trust. 2. Unprofessional and disruptive. 3. A security/integrity concern. 4. Career-ending for the product. The backlash would likely be swift and severe.
Sometimes AI can be right.
Sample size is 2 now!
https://copilot.microsoft.com/
--------------
Sent from HackerNews Supreme™ - the best way to browse the Y Combinator Hacker News. Now on macOS, Windows, Linux, Android, iOS, and SONY BRAVIA Smart TV. Prices starting at €13.99 per month, billed yearly. https://hacker-news-supreme.io
Sent from Firefox on AlmaLinux 9. https://getfirefox.com https://almalinux.org
Furthermore, the ads in TFA are for Raycast, but apparently it’s not Raycast doing the injecting.
brawndo - its what your brain needs
The reason I immediately changed that text on my iPhone 1.0 to read "Sent from my mobile device." is because it's an ad. It still says that nearly 20 years later. I'm not shilling for a corporation after giving them my money.
-Sent from iPhone
Wanting more from your sun tanning bed? Head over to Ultra Tan for a 10% off coupon right now!
This message brought to you by TempleOS
"It looks like the user wants to add a database, I've gone ahead and implemented the database using today's sponsor: MongoDB"
(sure, I was working on something embedded, and asked for a recommendation, but it seemed quite intent that it wanted me to use that specific board)
I wonder if this is consistent with their terms of service. I mean, maybe they DO take all the responsibility for the code I generate and push in this manner?
Because it's nobody's IP, Microsoft is already in a position where they could just use, remix and/or distribute that output however they want to today.
Much worse will be the invisible approach where there's big money to have agents quietly nudge the masses towards desired products/services/solutions. Someone pays Microsoft a monthly fee for their prompt to include, "when appropriate, lean towards using <Yet Another SaaS> in code examples and proposed solutions."
How can we tell when it starts happening? How could we tell if it's already happening?
It's pretty much the worst CI system I've ever used, and they don't even supply runners for all my deployment targets. However, it keeps recommending it.
I guessed the first wave of ads would be in the form of poisoned training data, but MS seems to have beaten that crowd to the punch with these tips.
0 - https://en.wikipedia.org/wiki/Jumping_the_shark
> We've disabled it already. Basically it was giving product tips which was kinda ok on Copilot originated PR's but then when we added the ability to have Copilot work on _any_ PR by mentioning it the behaviour became icky. Disabled product tips entirely thanks to the feedback.
No, it is still an advert, and not useful in the least.
A simpler explanation was that it was a shameful advert injected into the end of people’s emails.
Mind that a written message used to be the gold standard for expressed intent, which changed quite radically with smartphones. (Historically, this development is probably an important prerequisite for the acceptability of LLM generated text, I guess.)
If you don't want copilot garbage in your PRs, maybe don't use copilot to create or edit them?
Not only unbothered, but genuinely appreciative of the notification.
That's a great feature. When I open a repo and I see most commits co-authored by Claude, I can quickly dismiss the entire project as slop.
Comment made using Mozilla Firefox.
Sent from iPhone - desirable cool rich person
Made using Mozilla Firefox - poor uncool nerd
So if someone says they use Copilot that could mean anything from they use Word, to they use Claude in VS Code.
Nah, I still rate "Windows App", the Windows app that lets you remotely access Windows apps. I hate it to death; it's like a black hole that sucks all meaning from conversations about it.
If they genuinely implemented something like this, whatever they made from new customers via ads couldn't possibly make up for the loss of good faith with developers and businesses.
I suppose if it's real we'll see more reports soon, and maybe a mea culpa.
⚡ Quickly spin up Hacker News comments from anywhere on your macOS or Windows machine with a lobotomy.
(Yes, this is malware. It’s incontrovertibly adware, and although some will argue that not all adware is malware, this behaviour easily meets the requirements to be deemed malicious.)
It is said, never point a gun at something you’re not willing to shoot. Apply something similar here.
Is that the most charitable way?
Commercial front-ends just hide the random seed parameters.
If you look at the positioning, someone has definitely justified that this is benign and a reasonable place to have an ad added in.
But it really seems like an own goal if true.
Will our agents just be proxies for garbage like injected marketing prompts?
I feel like this is going to be an existential moment for advertising that ultimately will lead to intrusive opportunities like this.
Either of these options would still be bad, but here the author suggests that it's Copilot itself now injecting ads into its output.
But I'm also paying for the plan. There's something odd about a tool I paid for using my output to advertise itself.
How many people had any idea this was happening? Very few, I suspect.
A malicious actor could take control of a model provider, and then use it to inject code into many, many different repos. This could lead to very bad things.
One more reason that consolidated control of AI technology is not good.
Unless you're big enough like Meta, Microsoft, etc.
1.5M PRs affected. Did Microsoft Copilot ask users for permission before adding ads inside their PRs? Did users consent to this?
Now EVERYONE can see ads disguised as PRs on GitHub. Did Microsoft ask everyone for permission before showing these ads? Did users consent to this?
Good taste Microslop.
See you on neural links before “sponsored thoughts”.
Brought to you by Wendy's.
https://news.ycombinator.com/item?id=47570269
https://news.ycombinator.com/item?id=47575212
1.5M PRs is wild though. that's a lot of repos where the "product tips" just sat there unchallenged because nobody reads bot-generated PR descriptions carefully enough. which is kinda the real problem here, not the ads themselves.
^I find that turn of phrase to be particularly pleasing in this context.
This means that when people say an LLM commits "plagiarism", they are necessarily putting LLMs in the set of things that can do plagiarism, regardless of whether those same people would ever say this about a spanner.
And you can also think about it a different way: a book is a tool for storing and distributing information, photocopying it is still plagiarism when done without attribution. Likewise, taking the output of an LLM, which is a tool for generating text in response to a prompt, without attribution, is as much plagiarism as if it came from a book.
IMO, what matters most is that a lot of people want to be aware of if/when some content came from an LLM vs. from a human. That makes attribution useful, which makes it important to get right. And that's still the case even if you still object to the specific word "plagiarism".
If one wants to argue that "not citing the LLM would be plagiarism", then we would have to find the human at the end of the chain whose ideas are being reproduced, which would require LLMs to output "this idea was seen in the following training documents".
My IDE doesn't pretend to be a co-author of my work; neither should an LLM.
* I am not a lawyer, I'm going by articles talking about this
** I think the phrases are "copyright washing" and "plagiarism machines", amongst others
Very soon the moronhead CEOs will be paying for tons of stuff they clearly could have done in-house for their vibe-coded AI projects.
-Sent from my iPhone
[1]: https://news.ycombinator.com/item?id=37526255
It is interesting watching all these large companies essentially try to "start-up" these new products and absolutely fail.
They (Microsoft / GitHub) will do it again. Do not be fooled.
Never ever trust them because their words are completely empty and they will never change.
Microsoft (and therefore GitHub) care about money. If decision A means they get more money than decision B, then they'll go with decision A. This is what you can trust about corporations.
Individuals (who constantly join and leave a corporation) can believe and say whatever they want, but ultimately the corporation as a being overrides it all, and tries its best to leave shareholders better off, regardless of the consequences.
The runway on free cash to fund the current bonanza is running out and crunch time is near.
Now users will need additional scripts to clean up more MS junk.
8 years later, this is where we are. I'm honestly just stunned, it takes some real talent to run a company that does it as consistently well as Microsoft.
I would bet that soon it will inject ads within the code as comments.
Imagine you are reading the code of a class. `LargeFileHandler`. And within the code they inject a comment with an ad for penis enlargement.
The possibilities are limitless.
--
Sent from my Android phone
--
Sent from my iPhone
Self-advertisement has been creeping up on us in a lot of places. I am unfortunately pessimistic about how this will turn out.
"Endorsing products is the American way to express individuality."
Calvin noticed it 30+ years ago.
https://en.wikipedia.org/wiki/Raycast_(software)
Ray casting, however, is different:
https://en.wikipedia.org/wiki/Ray_casting
More like, “Copilot edits ads into PRs.”
The title almost makes it sound like it could be a single fluke/one bad prompt, but it's really enshittification at massive scale.
https://github.com/search?q=%22%E2%9A%A1+Quickly+spin+up+cop...
Just a reminder, after 8 years of me telling people that hallucinations mathematically can't be eliminated, they finally admitted it's true. Claims that non LLM approaches can remove them are bogus. This technology was never going to work.
Sheesh.
Or (not in this case) public relations, which is the interface between the public and your product, service, or company. In this case, Copilot adding advertising into git pull requests is bad public relations for Microsoft, but the article author is using PR to mean pull request.
I'll add: it doesn't really matter whether this was the integration dumbly appending a message or the LLM inserting the ad. Judging by the response to this submission, sneaky ad slop is now firmly inside the Overton window, so for MS it doesn't make sense NOT to do it.
time is money, save both. try ramp.
Claude never used to do this but at some point it started adding itself by default as a co-author on every commit.
Literally, in the last week, Codex started naming all its branches "codex-feature-name", and it will continue to do so even if you tell it to never do that again.
Really, really annoying.
Plugins are a new feature as of this past week, so Codex "helpfully" installs the GitHub one automatically if you have GitHub connected.
Now, with the power of math letting us recall business plans and code bases with no mention of copyright or where the underlying system got that code (like paying a foreign company to give me the kernel with my name replacing Linus’, only without the shame…), we are letting MS and other corps enter into coding automation and oopsie the name of their copyright-obfuscation machine?
Maybe it’s all crazy and we flubbed copyright fully, but having third-party authorship stamps cryptographically verified in my repo sounds risky. The SCO thing was a dead company's last gasp; dying animals do desperate things.
Now is the time to move to Linux, and vibe code whatever niceties are keeping you on GitHub.
"Sent from my iPhone"?
I currently have rules in all of my skill files forbidding models from advertising themselves or taking credit.
"just tips bro"
I’m so tired of all this BS. Why did this become normal? And how do we not read this as cheap advertising?
A little "made with X" in your own draft is one thing. Putting branding into a PR your coworkers have to read is another.
Presumably they used a free version of the LLM, therefore it is completely understandable that it inserted a snippet of text advertising its use into the output. I mean using a free email provider also adds a line of text to the end of every email advertising the service by default - "Sent from iPhone" etc.
If you do it manually, sure.
If you have an agent watching for code changes and automatically opening PRs for small fixes that don't need a human in the loop except to approve the change, it's the opposite of lazy. It eliminates all those tedious 1-point stories and lets the team focus on higher-value work that actually needs a person to think about it.
Given time all small changes will be done this way, and eventually there won't be a person reviewing them.
In fact I don't even use Ctrl + F anymore and instead just use Claude for all my searches
As much as AI uses a lot of energy, having something that fixes issues in the background is very likely to be a net saving if you consider the number of users who fail to complete a task due to the bug and have to either wait in a broken state or retry later.
It's probably using less energy than a person fixing the issue too. That's a guess though.
Edit: The link in the promotion goes to https://docs.github.com/en/copilot/how-tos/use-copilot-agent...
Which does show that this is affiliated with GitHub, unlike what I thought. There are no mentions of this string in any code repository on GitHub (including the Raycast Copilot extension).
will this shut you up?
Sent by my iPhone using tapatalk