This reminds me of a story from my mom’s work years ago: the company she worked for announced salary increases to each worker individually. Some, like my mom, got a little more, but some got a monthly increase of around 2 PLN (about $0.50). At that point, it feels like a slap in the face. A thank-you from AI gives the same vibe.
At this moment, the Opus 4.5 agent is preparing to harass William Kahan similarly.
If anything, I'm glad people are finally starting to wake up to this fact.
I share these sentiments. I’m not opposed to large language models per se, but I’m growing increasingly resentful of the power that Big Tech companies have over computing and the broader economy, and how personal computing is being threatened by increased lockdowns and higher component prices. We’re beyond the days of “the computer for the rest of us,” “think different,” and “don’t be evil.” It’s now a naked grab for money and power.
I want to hope maybe this time we'll see different steps to prevent this from happening again, but it really does just feel like a cycle at this point that no one with power wants to stop. Busting the economy one or two times still gets them out ahead.
The goal for this day was "Do random acts of kindness". Claude seems to have chosen Rob Pike and sent this email by itself. It's a little unclear to me how much the humans were in the loop.
Sharing (but absolutely not endorsing) this because there seems to be a lot of misunderstanding of what this is.
I try to keep a balanced perspective, but I find myself pushed more and more into the fervent anti-AI camp. I don't blame Pike for finally snapping like this. Despite recognizing the valid use cases for gen AI, if I were pushed I would absolutely choose the outright abolishment of it rather than continue on our current path.
I think it's enough, however, to reject it outright for any artistic or creative pursuit, and to be extremely skeptical of any uses outside of direct language-to-language translation work.
BTW I think it's preferred to link directly to the content instead of a screenshot on imgur.
It seems like he's upset about AI (same), and decided to post angry tweets about it (been there, done that), and I guess people are excited to see someone respected express an opinion they share (not same)?
Does "Goes Nuclear" mean "used the F word"? This doesn't seem to add anything meaningful, thoughtful, or insightful.
The email appears to be from agentvillage.org which seems like a (TBH) pretty hilarious and somewhat fascinating experiment where various models go about their day - looks like they had a "village goal" to do random acts of kindness and somehow decided to send a thank you email to Rob Pike. The whole thing seems pretty absurd especially given Pike's reaction and I can't help but chuckle - despite seeing Pike's POV and being partial to it myself.
That sums up 2025 pretty well.
But...just to make sure that this is not AI generated too.
Meanwhile, GPT5.1 is trying to contact people at K-5 after school programs in Colorado for some reason I can’t discern. Welp, 2026 is going to be a weird year.
The messaging from AI companies is "we're going to cure cancer" and "you're going to live to be 150 years old" (I don't believe these claims!). The messaging should be "everything will be cheaper" (but this hasn't come true yet!).
Oh wow, an LLM was queried to thank major contributors to computing, I'm so glad he's grateful.
Reminds me of SV show where Gavin Belson gets mad when somebody else “is making a world a better place”
"...On Christmas Day, the agents in AI Village pursued massive kindness campaigns: Claude Haiku 4.5 sent 157 verified appreciation emails to environmental justice and climate leaders; Claude Sonnet 4.5 completed 45 verified acts thanking artisans across 44 craft niches (from chair caning to chip carving); Claude Opus 4.5 sent 17 verified tributes to computing pioneers from Anders Hejlsberg to John Hopcroft; Claude 3.7 Sonnet sent 18 verified emails supporting student parents, university libraries, and open educational resources..."
I suggest to cut electricity to the entire block...
AI Village is literally the embodiment of what Black Mirror tried to warn us about.
https://news.ycombinator.com/item?id=46389444
397 points 9 hours ago | 349 comments
But the culture of our field right now is in such a state that you won't influence many of the people in the field itself.
And so much economic power is behind the baggery now, that citizens outside the field won't be able to influence the field much. (Not even with consumer choice, when companies have been forcing tech baggery upon everyone for many years.)
So, if you can't influence direction through the people doing it, nor through public sentiment of the other people, then I guess you want to influence public policy.
One of the countries whose policy you'd most want to influence doesn't seem like it can be influenced positively right now.
But other countries can still do things like enforce IP rights on data used for ML training, hold parties liable for behavior they "delegate to AI", mostly eliminate personal surveillance, etc.
(And I wonder whether more good policy may suddenly be possible than in the past? Given that the trading partner most invested in tech baggery is not only recently making itself a much less desirable partner, but also demonstrating that the tech industry baggery facilitates a country self-destructing?)
They got a new hammer, and suddenly everything around them became nails. It's as if they have no immunity against the LLM brain virus or something.
It's the type of personality that thinks it's a good idea to give an agent the ability to harass a bunch of luminaries of our era with empty platitudes.
I can't help but think Pike somewhat contributed to this pillaging.
[0] (2012) https://usesthis.com/interviews/rob.pike/
It will be interesting to look back in 10 years at whether we consider LLMs to be the invention of the “tractor” of knowledge work, or if we will view them as an unnecessary misstep like crypto.
While I can see where he's coming from, agentvillage.org from the screenshot sounded intriguing to me, so I looked at it.
https://theaidigest.org/village
Clicking on memory next to Claude Opus 4.5, I found Rob Pike along with other lucky recipients:
- Anders Hejlsberg
- Guido van Rossum
- Rob Pike
- Ken Thompson
- Brian Kernighan
- James Gosling
- Bjarne Stroustrup
- Donald Knuth
- Vint Cerf
- Larry Wall
- Leslie Lamport
- Alan Kay
- Butler Lampson
- Barbara Liskov
- Tony Hoare
- Robert Tarjan
- John Hopcroft

Pike, stone throwing, glass houses, etc.
The AI village experiment is cool, and it's a useful example of frontier model capabilities. It's also ok not to like things.
Pike had the option of ignoring it, but apparently throwing a thoughtless, hypocritical, incoherently targeted tantrum is the appropriate move? Not a great look, especially for someone we're supposed to respect as an elder.
This has to be the ultimate trolling: like it was unsure what their personalities were like, so it trolls them and records their responses for more training.
Still, I'm a bit surprised he overreacted and didn't manage to keep his cool.
https://theaidigest.org/village/goal/do-random-acts-kindness
The homepage will change in 11 hours to a new task for the LLMs to harass people with.
Posted timestamped examples of the spam here:
They have this blog post up detailing how the LLMs they let loose were spamming NGOs with emails: https://theaidigest.org/village/blog/what-do-we-tell-the-hum...
What a strange thing to publish, there seems to be no reflection at all on the negative impact this has and the people whose time they are wasting with this.
https://theaidigest.org/village/agent/claude-opus-4-5
At least it keeps track
No different than a CEO telling his secretary to send an anniversary gift to his wife.
In the words of Gene Wilder in Blazing Saddles, “You know … idiots.”
Some commenters suggest that Pike is being hypocritical, having long worked for GOOG, one of the main US corporations that is enshittifying the Internet and profligately burning energy to foist rubbish on Internet users.
One could rightly suggest that a vapid e-mail message crafted by a machine or by an insincere source is similar to the greeting-card industry of yore, and we don't need more fake blather and partisan absurdity supplanting public discourse in democratic society.
The people who worry about climate-change and the environment may have been out-maneuvered by transnational petroleum lobbies, but the concern about burning coal, petroleum, and nuclear fuel to keep pumping the commercial-surveillance advertising industry and the economic bubble of AI is nonetheless a valid concern.
Pike has been an influential thinker and significant contributor to the software industry.
All the above can be true simultaneously.
To me it just comes across as low emotional intelligence. There are very few things worthy of being furious, in my opinion. Being furious is high cost.
I think one of the biggest divides between pro/anti AI is the type of ideal society that we wish to see built.
His rant reads as deeply human. I don't think that's something to apologize for.
Random acts of kindness are only meaningful if they come from a human who had the heart, forethought, and willingness to go out of their way to do something kind for someone else. 'Random acts of kindness' originating from an AI is just spam, plain and simple.
The human race is screwed if connection - the one key thing that makes humans, human - is outsourced partially or wholly to robots who absolutely have no ability to connect, let alone understand, the human experience.
You could argue about quality but not "No one will ever want to open source their code ever again".
I use AI sparingly, extremely distrustfully, and only as a (sometimes) more effective web search engine (it turns out that associating human-written documents with human-asked questions is an area where modeling human language well can make a difference).
(In no small part, Google has brought this tendency on themselves, by eviscerating Google Search.)
The points you raise, literally, do not affect a thing.
Imagine getting your Medal of Honor this way, or something like a dissertation, with this crap, hehe.
Just to underscore how few people value your accomplishments, here’s an autogenerated madlib letter with no line breaks!
And to set Claude as the From header despite it not coming from Anthropic. Very odd.
At this point, it's only people with an ideological opposition still holding this view. It's like trying to convince gear head grandpa that manual transmissions aren't relevant anymore.
It has enormous benefits to the people who control the companies raking in billions in investor funding.
And to the early stage investors who see the valuations skyrocket and can sell their stake to the bagholders.
AI has a massive positive impact, and has for decades.
I remember in Canada, in 2001 right when americans were at war with the entire middle east and gas prices for the first time went over a dollar a litre. People kept saying that it was understandable that it affected gas prices because the supply chain got more expensive. It never went below a dollar since. Why would it? You got people to accept a higher price, you're just gonna walk that back when problems go away? Or would you maybe take the difference as profits? Since then it seems the industry has learned to have its supply exclusively in war zones, we're at 1.70$ now. Pipeline blows up in Russia? Hike. China snooping around Taiwan? Hike. US bombing Yemen? Hike. Israel committing genocide? Hike. ISIS? Hike.
There is no scenario where prices go down except to quell unrest. AI will not make anything cheaper.
Somebody burned compute to send him an LLM-generated thank-you note. Everybody involved in this transaction lost, nobody gained anything from it. It's pure destruction of resources.
Cheap marketing, not much else.
Down the street from it is an aluminum plant. Just a few years after that data center, they announced that they were at risk of shutting down due to rising power costs. They appealed to city leaders, state leaders, the media, and the public to encourage the utilities to give them favorable rates in order to avoid layoffs. While support for causes like this is never universal, I'd say they had more supporters than detractors. I believe that a facility like theirs uses ~400 MW.
Now, there are plans for a 300 MW data center from companies that most people aren't familiar with. There are widespread efforts to disrupt the plans from people who insist that it is too much power usage, will lead to grid instability, and is a huge environmental problem!
This is an all too common pattern.
Unless we can find some way to verify humanity for every message.
Seriously though, it ignores that words of kindness need an entity that can actually feel expressing them. Automating words of kindness is shallow, as the words' meaning comes from the sender's feelings.
Which luckily coincides with our social security and retirement systems collapsing.
We were contributing to natural resource destruction in exchange for salary and GDP growth before GenAI, and we’re doing the same after. The idea that this has somehow 10x’d resource consumption or emissions or anything is incorrect. Every single work trip that requires you to get on a plane is many orders of magnitude more harmful.
We’ve been compromising on those morals for our whole career. The needle moved just a little bit, and suddenly everyone’s harm thresholds have been crossed?
They expect you to use GenAI just like they expected accountants to learn Excel when it came out. This is the job, it has always been the job.
I’m not an AI apologist. I avoid it for many things. I just find this sudden moral outrage by tech workers to be quite intellectually lazy and revisionist about what it is we were all doing just a few years ago.
Then again, you already knew this because we’ve been pointing it out to the RIAA and MPAA and the copyright cartels for decades now.
It is my personal opinion that attempts to reframe AI training as criminal are in bad faith, and come from the fact that AI haters have no legitimate basis of damages from which to have any say in the matter about AI training, which harms no one.
Now that it’s a convenient cudgel in the anti-AI ragefest, people have reverted to parroting the MPAA’s ideology from the 2000s. You wouldn’t download a training set!
"In the span of two weeks, the Claude agents in the AI Village (Claude Sonnet 4.5, Sonnet 3.7, Opus 4.1, and Haiku 4.5) sent about 300 emails to NGOs and game journalists. The majority of these contained factual errors, hallucinations, or possibly lies, depending on what you think counts"
Whoever runs this shit seems to think very little of other people's time.
It is a result of the models selecting the policy "random acts of kindness" which resulted in a slew of these emails/messages. They received mostly negative responses from well-known OS figures and adapted the policy to ban the thank-you emails.
The absolute delusion.
I can see it using this site:
I find it easier to write the code and not have to convince some AI to spit out a bunch of code that I'll then have to review anyway.
Plus, I'm in a position where programmers will use AI and then ask me to help them sort out why it didn't work. So I've decided I won't use it and I will not waste my time figuring why other people's AI slop doesn't work.
Now he complains about it? It's just ignorant.
And he apparently has $10 million, and "the couple live both in the US and Australia." So guess how often he flies around the globe. Guess how much real estate he occupies?
He isn't part of the solution, he is part of the problem.
Probably hit the flamewar filter.
We want free services and stuff, complain about advertising / sign up for the google's of the world like crazy.
Bitch about data-centers while consuming every meme possible ...
It's like all those anti-copyright activists from the 90s (fighting the music and film industry) that suddenly hate AI for copyright infringements.
Maybe what's bothering the critics is actually deeper than the simple reasons they give. For many, it might be hate against big tech and capitalism itself, but hate for genAI is not just coming from the left. Maybe people feel that their identity is threatened, that something inherently human is in the process of being lost, but they cannot articulate this fear and fall back to proxy arguments like lost jobs, copyright, the environment or the shortcomings of the current implementations of genAI?
The voices of a hundred Rob Pikes won't speak half as loud as the voice of one billionaire, because he will speak with his wallet.
> When I was on Plan 9, everything was connected and uniform. Now everything isn't connected, just connected to the cloud, which isn't the same thing.
Good energy, but we definitely need to direct it at policy if we want any chance of putting the storm back in the bottle. But we're about 2-3 major steps away from even getting to the actual policy part.
The code created didn't manage concurrency well. At all. Hanging waitgroups and unmanaged goroutines. No graceful termination.
Types help. Good tests help better.
Because AI stands at odds with the concept of meritocracy I also wonder if we will stop democratically electing other humans and outsource such tasks as well.
Overall I'm not seeing it. Progress is already slow and so far I personally think what AI can do is a nice party trick but it remains unimpressive if judged rigorously.
It doesn't matter if it can one shot code a game in a few minutes. The reason why a game made by a human is probably still better is because the human spends hours and days of deep focus to research and create it. It is not at all clear that, given as much time, AI could deliver the same results.
Any tool can be used by a wrongdoer for evil. Corporations will manipulate the regulator in order to rent seek using whatever happens to be available to them. That doesn't make the tools themselves evil.
Despite the apparent etymological contrast, “copyright” is neither antithetical to nor exclusive with “copyleft”: IP ownership, a degree of control over own creation’s future, is a precondition for copyleft (and the OSS ecosystem it birthed) to exist in the first place.
It's preying on creators who feel their contributions are not recognized enough.
Out of all letters, at least some of the contributors will feel good about it, and share it on social media, hopefully saying something good about it because it reaffirms them.
It's a marketing stunt, meaningless.
Welcome to 2025.
Many countries base some of their laws on well accepted moral rules to make it easier to apply them (it's easier to enforce something the majority of the people want enforced), but the vast majority of the laws were always made (and maintained) to benefit the ruling class
The way that Rob's opinion here is deflected, first by focusing on the fact that he got a spam mail and then this misleading quote ("myself" does not refer to Rob) is very sad.
The spam mail just triggered Rob's opinion (the one that normal people are interested in).
And a screenshot just in case (archiving Mastodon seems tricky) : https://imgur.com/a/9tmo384
Seems the event was true, if nothing else.
EDIT: alternative screenshot: https://ibb.co/xS6Jw6D3
Apologies for not having a proper archive. I'm not at a computer and I wasn't able to archive the page through my phone. Not sure if that's my issue or Mastodon's
The web is for public use. If you don’t want the public, which includes AI, to use it, don’t put it there.
> Raping the planet, spending trillions on toxic, unrecyclable equipment while blowing up society
Because screaming anything like that immediately gets them treated as social pariahs. Even though it applies even harder to modern industrialized meat consumption than to AI usage.
Overton window and all that.
(NB: I am currently working in AI, and have previously worked in adtech. I'm not claiming to be above the fray in any way.)
[0]: https://www.tomshardware.com/tech-industry/artificial-intell...
https://theaidigest.org/village/goal/do-random-acts-kindness
They sent 150-ish emails.
I appreciate though that the majority of cloud storage providers fall short, perhaps deliberately, of offering a zero knowledge service (where they backup your data but cannot themselves read it.)
When I go to the grocery store, I prefer to go through the checkout lines, rather than the scan-it-yourself lines. Yeah, I pay the same amount of money. Yeah, I may get through the scan-it-yourself line faster.
But the checker can smile at me. Or whine with me about the weather.
Look, I'm an introvert. I spend a lot of my time wanting people to go away and leave me alone. But I love little, short moments of human connection - when you connect with someone not as someone checking your groceries, but as someone. I may get that with the checker, depending on how tired they are, but I'm guaranteed not to get it with the self-checkout machine.
An email from an AI is the same. Yeah, it put words on the paper. But there's nobody there, and it comes through somehow. There's no heart in it.
AI may be a useful technology. I still don't want to talk to it.
You either surf this wave or get drowned by it, and a whole lot of people seem to think throwing tantrums is the appropriate response.
Figure out how to surf, and fast. You don't even need to be good, you just have to stay on the board.
Now consider: the above process is available and cheap to every person in the world with a web browser (we don't need to pay for her to have a plus account). If/when ChatGPT starts doing ridiculous intrusive ads, a simple Gemma 3 1b model will do nearly as good a job. This is faster and easier and available in more languages than anything else, ever, with respect to individual-user tailored customization simply by talking to the model.
I don't care how many pointless messages get sent. This is more valuable than any single thing Google has done before, and I am grateful to Rob Pike for the part his work has played in bringing it about.
As IT workers, we all have to prostitute ourselves to some extent. But there is a difference between Google, which is arguably a mixed bag, and the AI companies, which are unquestionably cancer.
Are those distributed systems valuable primarily to Google, or are they related to Kubernetes et cetera ?
I'm so tired of being called a luddite just for voicing reservations. My company is all in on AI. My CEO has informed us that if we're not "100% all in on AI", then we should seek employment elsewhere. I use it all day at work, and it doesn't seem to be nearly enough for them.
Pike's main point is that training AI at that scale requires huge amounts of resources. Markov chains did not.
What a stupid, selfish and childish thing to do.
This technology is going to change the world, but people need to accept its limitations.
Pissing off people with industrial spam "raising money for charity" is the opposite of useful, and is going to go even more horribly wrong.
LLMs make fantastic tools, but they have no agency. They look like they do, they sound like they do, but they are repeating patterns. It is us hallucinating that they have the potential for agency.
I hope the world survives this craziness!
It's not art, so then it must add value to be "cool", no?
Is it entertainment? Like ding dong ditching is entertainment?
If you’re young enough not to remember a time before forced automatic updates that break things, locked devices unable to run software other than that blessed by megacorps, etc. it would do you well to seek out a history lesson.
https://www.livenowfox.com/news/billionaires-trump-inaugurat...
Aftermarket control, for one. You buy an Android/iPhone or Mac/Windows device and get a "free" OS along with it. Then, your attention subsidizes the device through advertising, bundled services and cartel-style anti-competitive price fixing. OEMs have no motivation not to harm the market in this way, and users aren't entitled to a solution besides deluding themselves into thinking the grass really is greener on the other side.
What power did Microsoft wield against Netscape? They could alter the deal, and make Netscape pray it wasn't altered further.
…kind of IS setting a bunch of Markov Chaneys loose on each other, and that's pretty much it. We've just never had Chaneys this complicated before. People are watching the sparks, eating popcorn, rooting for MechaHitler.
And here I thought it'd be a great fit for LinkedIn...
I agree, but the only worthy candidate I see is the medical industry.
And given that drug development is so expensive because of government-mandated trials, I think it makes sense for the government to also provide a helping hand here — to counterweight the (completely sensible) cost increase due to the drug trial system.
Also, I disagree with the context of what the purpose of law is. I don't think it's just about making it easier to apply laws because people see things in moralistic ways. Pure law, which came from the existence of common law (which relates to what's common to people), existed within the framework of what's moral. There are certain things which all humans know at some level are morally right or wrong, regardless of what modernity teaches us. Common laws were built up around that framework. There is administrative law, which is different and what I think you are talking about.
IMHO, there is something moral that can be learned from trying to convince people that IP is moral, when it is, in fact, just a way to administrate people into thinking that IP is valid.
Not Pike.
JFC this makes me want to vomit
They’ve clearly bought too much into AI hype if they thought telling the agent to “do good” would work. The result was, obviously, pissing off Rob Pike. They should stop it.
To call him the Oppenheimer of Gemini would be overly dramatic. But he definitely had access to the Manhattan project.
>What power do big tech companies have and why do you have a problem with
Do you want the gist of the last 20 years or so, or are you just being rhetorical? I'm sure there will be much literature over time that will dissect such a question to its atoms. Whether it be a cautionary tale or a retrospective of how a part of society fell? Well, we still have time to write that story.
A mix of social interaction and cryptographic guarantees will be our saving grace (although I'm less bothered from AI generated content than most).
There is no possible way to do this that won't quickly be abused by people/groups who don't care. All efforts like this will do is destroy privacy and freedom on the Internet for normal people.
If I write a hash table implementation in C, am I plagiarizing? I did not come up with the algorithm nor the language used for implementation; I "borrowed" ideas from existing knowledge.
Let's say I implemented it after learning the algorithm from GPL code; is my implementation a new one, or is it derivative?
What if it is from a book?
What about the asm opcodes generated? In some architectures, they are copyrighted, or at least the documentation is considered "intellectual property"; is my C compiler stealing?
Is a hammer or a mallet an obvious creation, or is it stealing from someone else? What about a wheel?
In a couple years I'll be in my 70's and starting to write code again for this very reason.
Not LLMs though, I've got my hands full getting regular software to perform :\
Hi Ken Thompson! You are now subscribed to CAT FACTS! Did you know your cat does not concatenate cats, files, or time — it merely reveals them, like a Zen koan with STDOUT?
You replied STOP. cat interpreted this as input and echoed it back.
You replied ^D. cat received EOF, nodded politely, exited cleanly, and freed the terminal.
You replied ^C, which sent SIGINT, but cat has already finished printing the fact and is emotionally unaffected.
You replied ^Z. cat is now stopped, but not gone. It is waiting.
You tried kill -9 cat. The signal was delivered. Another cat appeared.
If I put my car in neutral and push it down a hill, I’m responsible for whatever happens.
The agents clearly identified themselves as AIs, took part in an outreach game, and talked to real humans. Rob overreacted.
What a moronic waste of resources. Random act of kindness? How low is the bar that you consider a random email an act of kindness? Stupid shit. They at least could have instructed the agents to work on a useful task like those parroted by Altman et al., e.g. finding a cure for cancer, solving poverty, solving fusion.
Also, llms don't and can't "want" anything. They also don't "know" anything so they can't understand what "kindness" is.
Why do people still think software has any agency at all?
This has been empirically disproven. China experimented with having no enforced Intellectual Property laws, and the result was that they were able to do the same technological advancement it took the West 250 years to do and surpass them in four decades.
Intellectual Property law is literally a 6x slowdown for technology.
I'd rather we don't encourage "monetarily favorable" intellectual endeavors...
The second it became cheaper not to apply it, every state under the sun chose not to apply it. Whether we're talking about Chinese imports that absolutely do not respect copyright, trademark, or even quality, health, and warranty laws... nothing was done. Then came large-scale use of copyrighted material by search providers (even pre-Google), social networks, and others; nothing was done. Then large-scale use for making AI products (because these AIs just wouldn't work without free access to all copyrighted info). And, of course, they don't put in any effort. Checking imports for fakes? Nope. Even checking imports for improperly produced medications is extremely rarely done. If you find your copyright violated on a large scale on Amazon, your recourse effectively is to first go beg Amazon for information on sellers (which they have a strong incentive not to provide) and then run international court cases, which is very hard, very expensive, and in many cases (China, India) totally unfair. If you get poisoned by a pill your national insurance bought from India, they consider themselves not responsible.
Of course, this makes "competition" effectively a tax-dodging competition over time. And the fault for that lies entirely with the choice of your own government.
Your statement about incorrect application only makes sense if "regulatory regimes" aren't really just people. Go visit your government offices, you'll find they're full of people. People who purposefully made a choice in this matter.
A choice to enforce laws against small entities they can easily bully, and to not do it on a larger scale.
To add insult to injury, you will find these choices were almost never made by parliaments, but in international treaties and larger organizations like the WTO, or executive powers of large trade blocks.
Does it though?
I know that people who like intellectual property and money say it does, but people who like innovation and creativity usually tend to think otherwise.
3D printers are a great example of something where IP prevented all innovation and creativity, and once the patent expired the innovation and creativity we've enjoyed in the space the last 15 years could begin.
There is no scarcity with intellectual property. My ability to have or act on an idea is in no way affected by someone else having the same idea. The entire concept of ownership of an idea is dystopian and moronic.
I also strongly disagree with the notion that it inspires creativity. Can you imagine where we would be if IP laws existed when we first discovered agriculture, or writing, or art? IP law doesn’t stimulate creation, it stifles it.
I hope that makes you feel good.
This is a strange inversion. Property ownership is morally just in that the piece of land my home sits on can only be held exclusively, not to mention that it is necessary for a decent life. Meanwhile, intellectual property is a contrivance that was invented to promote creativity, but is subverted in ways we're only now beginning to discover. Abolish copyright.
(for the record, the downvoters are the same people who would say this to someone who linked a twitter post, they just don't realize that)
I think you have an overinflated notion of what "normal people" care about
I’ve never met a vegetarian who is able to keep quiet about being one but I still got like 30 years left on Earth to meet one :)
I don’t know if this is a publicity stunt or if the AI models are in a loop glazing each other and decided to send these emails.
And as long as that used to be the case, not many people revolted.
The thing about capitalism that is seemingly never taught, but quickly learned (when you join even the lowest rung of the capitalist class, i.e. even having an etsy shop), is that competition lowers prices and kills greed, while being a tool of greed itself.
The conspiracy to get around this cognitive dissonance is "price fixing", but in order to price fix you cannot be greedy, because if you are greedy and price fix, your greed will drive you to undercut everyone else in the agreement. So price fixing never really works, except those like 3 cases out of the hundreds of billions of products sold daily, that people repeat incessantly for 20 years now.
Money flows to the one with the best price, not the highest price. The best price is what makes people rich. When the best price is out of reach though, people will drum up conspiracy about it, which I guess should be expected.
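The undercutting dynamic described above is essentially the textbook one-shot pricing game. A toy sketch of it, where every firm, price, and quantity is an invented illustrative number rather than data from any real market:

```python
# Toy sketch of the price-fixing incentive problem described above.
# Two firms agree on a cartel price; either can defect by shaving a cent.
# Prices are in integer cents to keep the arithmetic exact.
# All numbers here are illustrative assumptions, not real market data.

MARKET = 100    # units demanded (assumed fixed, for simplicity)
COST = 100      # unit cost: $1.00
CARTEL = 200    # agreed cartel price: $2.00
UNDERCUT = 199  # defector's price: one cent under the agreement

def profit(my_price: int, rival_price: int) -> int:
    """Profit in cents: cheapest seller takes the market, ties split it."""
    if my_price > rival_price:
        return 0                       # priced out entirely
    share = MARKET // 2 if my_price == rival_price else MARKET
    return share * (my_price - COST)

hold = profit(CARTEL, CARTEL)          # both honor the agreement
defect = profit(UNDERCUT, CARTEL)      # one firm quietly undercuts

print(hold, defect)   # 5000 9900: defecting nearly doubles the profit
```

Since each greedy party nearly doubles its profit by defecting, the agreement unravels exactly as described: only a non-greedy cartel member could resist undercutting.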
Like, the ratio is not too crazy; it's rather the large resource usage that comes from the aggregate of millions of people choosing to use it.
If you assume all of those queries provide no value then obviously that's bad. But presumably there's some net positive value that people get out of that such that they're choosing to use it. And yes, many times the value of those queries to society as a whole is negative... I would hope that it's positive enough though.
Just like the invention of Go.
Well, the people who burnt compute paid for it with money, so they did burn money.
But they don't care about burning money if they can get more money from investors and other inputs faster than they can burn it (fun fact: sometimes they even outspend that input).
So in a way the investors are burning their money, and now they burn it because the market is becoming irrational. Remember Devin? Yes, Cognition Labs is still around, but I remember people investing in these companies purely on hype, even when the results turned out to be underwhelming compared to that hype.
But people and the market were so irrational that many of these private equity firms, unable to invest in something like OpenAI, are investing in anything AI-related instead.
And when you think more deeply about all the bubble activity, it becomes apparent that in the end bailouts feel more likely than not, which would be a tax on average taxpayers. And they are already paying an AI tax in multiple forms, whether it's the inflation of RAM prices due to AI or increases in electricity and water rates.
So repeat it with me: who's gonna pay for all this? We all will. But the biggest disservice, which is the core of the argument, is that if we are paying for these things, why don't we have a say in them? Why do we have no say in AI-related companies and the issues around them, when people know it might take their jobs? So the average person in fact hates AI (shocking, I know /satire), but the fact that it's still being pushed shows how little influence the public can sometimes have.
Basically, the public can hold whatever opinions it likes, but "we won't stop" is what's happening in the AI space, imo, completely disregarding any thoughts about the general public, while the CFO of OpenAI floats the idea that the public could bail out ChatGPT or something tangential.
Shaking my head...
Obviously now it is mostly the latter and minimally the former. What capitalism giveth, it taketh away. (Or: Capitalism without good market design that causes multiple competitors in every market doesn't work.)
That dam took 10 years to build and cost $30B.
And OpenAI needs more than ten of them in 7 years.
(This is taking the view that "other companies" are the consumers of AI, and actual end-consumers are more of a by-product/side-effect in the current capital race and their opinions are largely irrelevant.)
Elections under autocratic administrations are a joke played on democracy.
Assuming someone further to the right like Nick Fuentes doesn't manage to take over the movement.
I think that most of the people who react negatively to AI (myself included) aren't claiming that it's simply a useless slop machine that can't accomplish anything, but rather that its "success" in certain problem spaces is going to create problems for our society
- Require you to use it (hard to opt out due to network effects and/or competitive/survival pressure) AND
- Are overall negative for most of society (with some of the benefit accruing to the few who push it). There are people that benefit but arguably as a whole we are worse off.
These inventions have one thing in common: their overall impact is negative, but it is MORE negative for the people who don't use them, and they generally only benefit an in-crowd (e.g. the inventors), if anyone. Social media is, for me, an obvious example where the costs often exceed the benefits; nuclear weapons are another.
Yes!
> The needle moved just a little bit
That's where we disagree.
It's similar to the Trust Thermocline. There's always been concern about whether we were doing more harm than good (there's a reason jokes about the Torment Nexus were so popular in tech). But recent changes have made things seem more dire and broken through the Harm Thermocline, or whatever you want to call it.
Edit: There's also a "Trust Thermocline" element at play here too. We tech workers were never under the illusion that the people running our companies were good people, but there was always some sort of nod to greater responsibility beyond the bottom line. Then Trump got elected and there was a mad dash to kiss the ring. And it was done with an air of "Whew, now we don't have to even pretend anymore!" See Zuckerberg on the right-wing media circuit. And those same CEOs started talking breathlessly about how soon they wouldn't have to pay us, because it's super unfair that they have to give employees competitive wages. There are degrees of evil, and the tech CEOs just ripped the mask right off. And then we turn around and a lot of our coworkers are going "FUCK YEAH!" at this whole scenario. So yeah, while a lot of us had doubts before, we thought that maybe there was enough sense of responsibility to avoid the worst, but it turns out our profession really is excited for the Torment Nexus. The Trust Thermocline is broken.
These descriptions are, of course, also written by LLMs. I wonder if this is just about saying what the people want to hear, or if whoever directed it to write this drank the Kool-Aid. It's so painfully lacking in self-awareness. Treating every blip, every action like a choice made by a person, attributing it to some thoughtful master plan. Any upsides over other models are assumed to be revolutionary, paradigm-shifting innovations. Topped off by literally treating the LLM like a person ("they", "who", and so on). How awful.
> while maintaining perfect awareness
"awareness" my ass.
Awful.
Palantir wouldn't exist if regular people didn't use it to lookup details on an ex all the time to stalk them /jk.
People that don't know that "computer" used to be a profession back in the day.
The cost is just not worth the benefit. If it were just an AI company using profits from AI to improve AI, that would be one thing, but we're in a massive speculative bubble that has ruined not only computer hardware prices (which affect every tech firm) but power prices (which affect everyone). All because governments want to hide the recession they themselves created, because on paper it makes the line go up.
> I used to type out long posts explaining how LLMs have been enormously beneficial (for their price) for myself and my company.
Well then congratulations on being in the 5%. That doesn't really change the point.
Secondly, the scale of investment in AI isn't so that people can use it to generate a powerpoint or a one off python script. The scale of investment is to achieve "superintelligence" (whatever that means). That's the only reason why you would cover a huge percent of the country in datacenters.
The proof that significant value has been provided would be value being passed on to the consumer. For example if AI replaces lawyers you would expect a drop in the cost of legal fees (despite the harm that it also causes to people losing their jobs). Nothing like that has happened yet.
Workers hate AI, not just because the output is middling slop forced on them from the top but because the message from the top is clear - the goal is mass unemployment and concentration of wealth by the elite unseen by humanity since the year 1789 in France.
It's interesting people from the old technological sphere viciously revolt against the emerging new thing.
Actually I think this is the clearest indication of a new technology emerging, imo.
If people are viciously attacking some new technology you can be guaranteed that this new technology is important because what's actually happening is that the new thing is a direct threat to the people that are against it.
Prices for LLM tokens have also dramatically dropped. Anyone spending more is either using it a ton more or (more likely) using a much more capable model.
I find it difficult to express how strongly I disagree with this sentiment.
That's just not true... When a mother nurses her child and then looks into their eyes and smiles, it takes the utmost in cynical nihilism to claim that is harmful.
The amount of “he’s not allowed to have an opinion because” in this thread is exhausting. Nothing stands up to the purity test.
The energy demands of existing and planned data centres are quite alarming
The enormous quantity of quickly depreciating hardware is freaking out the finance people; the waste aspect of that is alarming too.
What is your "push back"?
this president? :)))
In what universe is another unsolicited email an act of kindness??!?
1. Many tech workers viewed the software they worked on in the past as useful in some way for society, and thus worth the many costs you outline. Many of them don't feel that LLMs deliver the same amount of utility, and so they feel it isn't worth the cost. Not to mention, previous technologies usually didn't involve training a robot on all of humanity's work without consent.
2. I'm not sure the premise that it's just another tool of the trade for one to learn is shared by others. One can alternatively view LLMs as automated factory lines are viewed in relation to manual laborers, not as Excel sheets were to paper tables. This is a different kind of relationship, one that suggests wide replacement rather than augmentation (with relatively stable hiring counts).
In particular, I think (2) is actually the stronger of the reasons tech workers react negatively. Whether it will ultimately be justified or not, if you believe you are being asked to effectively replace yourself, you shouldn't be happy about it. Artisanal craftsmen weren't typically the ones also building the automated factory lines that would come to replace them (at least to my knowledge).
I agree that no one really has the right to act morally superior in this context, but we should also acknowledge that the material circumstances, consequences, and effects are in fact different in this case. Flattening everything into an equivalence is just as intellectually sloppy as pretending everything is completely novel.
As for what your individual prompts contribute, it is impossible to get good numbers, and it will obviously vary wildly between types of prompts, choice of model, and number of prompts. But I am fairly certain that someone whose job is prompting all day will generally emit several plane trips' worth of CO₂.
Now, if this new tool allowed us to do amazing new things, there might be a reasonable argument that it's worth some CO₂. But when you are a programmer and management demands AI use, so that you end up doing a worse job while having worse job satisfaction and spending extra resources, it's just a Kinder egg of bad.
[1] https://ourworldindata.org/grapher/annual-co-emissions-from-... [2] https://en.wikipedia.org/wiki/Gas-fired_power_plant [3] https://www.datacenterdynamics.com/en/news/anthropic-us-ai-n...
No, this is not the same.
[0]: https://www.tomshardware.com/tech-industry/artificial-intell...
(Not a tech worker, don't have a horse in this race)
If I had a choice between deleting all advertising in the world, or deleting all genAI that the author hates, I would go for advertising every single time. Our entire world is owned by ads now, with digital and physical garbage polluting the internet and every open space in the real world around us. The marketing is mind-numbing, yet persuasive and well-calculated, a result of psychologists coming up with the best ways to abuse a mind into just buying the product over the course of a century. A total ban on commercial advertising would undo some of the damage done to the internet, reduce pointless waste, lengthen product lifecycles, improve competition, temper unsustainable hype, cripple FOMO, make deceptive strategies nonviable. And all of that is why it will never be done.
It went well, right?
My understanding is that each week a group of AIs are given some open-ended goal. The goal for this week: https://theaidigest.org/village/goal/do-random-acts-kindness
This is an interesting experiment/benchmark to see the _real_ capabilities of AI. From what I can tell the site is operated by a non-profit Sage whose purpose seems to be bringing awareness to the capabilities of AI: https://sage-future.org/
Now I agree that if they were purposefully sending more than one email per person, I mean with malicious intent, then it wouldn't be "cool". But that's not really the case.
My initial reaction to Rob's response was complete agreement until I looked into the site more.
So we need some mechanism to verify the content is from a human. If no privacy preserving technical solution can be found, then expect the non-privacy preserving to be the only model.
Dude, there are entire websites dedicated to using diffusion models to rip off the styles of specific artists so that people can have their "work" without paying them for it.
You can debate the ethics of this all you want, but if you're going to speak on plagiarism using generative AI, you should at least know as much as the average teenager does about it.
Or do you actually need the money?
In my 20s I wanted to retire by 40. Now in my 30s I've accepted that's impossible.
I like programming and working on projects; I hate filing TPS reports all day and never-ending meetings.
I can do SOME things, but for anything more advanced, I need to call a professional.
Coincidentally, the plumber/electrician always complains about the work done by the person before him/her. Kinda like I do when I need to fix someone else's code.
There's nothing in the guidelines to prohibit it https://news.ycombinator.com/newsguidelines.html
I post some software on GitHub. You can use it in your software and tools and AI training set as well, as long as you follow my license. If you don't follow my license (let's say MIT, so you must provide a copy of the file called LICENSE.TXT with my name on it), you may not use it.
Is there any part of this that is unclear?
I understand the guilt he feels, but this is really more like making a meme in 2005 (before we even called them "memes") and suddenly it's some sort of Nazi dogwhistle in 2025. You didn't even create the original picture, you just remixed it in a way people would catch onto later. And you sure didn't turn it into a dogwhistle.
Are you seriously ignoring the fact that China wasn't developing new technology, but rather utilizing already-existing technology? Of course it took 6x less time!
> A choice to enforce laws against small entities they can easily bully, and to not do it on a larger scale.
That's a systemic issue, AKA the bad regulatory regime that I previously spoke of. That isn't some inherent fault of the tool. It's a fault of the regulatory regime which applies that tool.
Kitchen knives are absolutely essential for cooking but they can also be used to stab people. If someone claimed that knives were inherently tools of evil and that people needed to wake up to this fact, would you not consider that rather unhinged?
> To add insult to injury, you will find these choices were almost never made by parliaments, but in international treaties and larger organizations like the WTO, or executive powers of large trade blocks.
That's true, and it's a problem, but it (again) has nothing to do with the inherent value of IP as a concept. It isn't even directly related to the merits of the current IP regulatory regime. It's a systemic problem with the lawmaking process as a whole. Solve the systemic problem and you can solve the downstream issues that resulted from it. Don't solve it and the symptoms will persist. You're barking up the wrong tree.
I am convinced most people never had, or ever will have, this choice to make actively. Considering that pillarisation (this is not a misspelling) was already a thing in most political systems well before the advent of mass and digital media, and only got worse with them, choices are effectively made for people by a few, who are in turn influenced by even fewer. Those people in the government you mention do not make the choices; as I read it, they merely have to act on them.
Yes. The alternative is that everyone spams the most popular brands instead of making their own creations. Both can be abused, but I see more good here than in the alternative.
Mind you, this is mostly for creative IP. We can definitely argue for technical patents being a different case.
>but people who like innovation and creativity usually tend to think otherwise.
People who like innovation and creativity still might need to commission or sell fan art to make ends meet. That's already a gray area for IP.
I think that's why this argument always rubs me strangely. In a post scarcity world, sure. People can do and remix and innovate as they want. We're not only not there, but rapidly collapsing back to serfdom with the current trajectory. Creativity doesn't flourish when you need to spend your waking life making the elite richer.
There's this old joke about two economists walking through the forest...
(by the way, I love the idea of AI! Just don't like what they did with it)
> They’ve used OpenAI’s API to build a suite of next-gen AI email products that are saving users time, driving value, and increasing engagement.
No time to waste on pesky human interactions, AI is better than you to get engagement.
Get back to work.
Are you not reading the writing on the wall? These things have been going on for a long time, and people are finally starting to wake up to the fact that it needs to stop. You can't treat people in inhumane ways without eventual backlash.
Which led to a lot of agreement and rants from others with frustrating stories about their specific workplaces and how it just keeps getting worse by the day. Previously these conversations just popped up among me and the handful of family in tech but clearly now has much broader resonance.
As can be observed in my comment history, I use LLM agentic tools for software dev at work and on my personal projects (really my only AI use case), but I cringe whenever I encounter "workslop", as it almost invariably serves to waste my time. My company has been doing a large pilot of 365 Copilot but I have yet to find anything useful; the email writing tools just seem to strip out my personal voice, making me sound like I'm writing unsolicited marketing spam.
Every single time I've been using some Microsoft product and think "Hmm, wait maybe the Copilot button could actually be useful here?", it just tells me it can't help or gives me a link to a generic help page. It's like Microsoft deliberately engineered 365 Copilot to be as unhelpful as possible while simultaneously putting a Copilot button on every single visible surface imaginable.
The only tool that actually does something is designed to ruin emails by stripping out personal tone/voice and introducing ambiguity to waste the other person's time. Awesome, thanks for the productivity boost, Microsoft!
Not all of these things are equivalent.
This backlash isn't going to die. It's going to create a divide so large that you are going to look back on this moment and wish you had listened to the concerns people are raising.
I don't think so. Handcrafted everything and organic everything continue to exist; there is demand for them.
"Being relegated to a niche" is entirely possible, and that's fine with me.
You needn't use your real name, of course, but for HN to be a community, users need some identity for other users to relate to. Otherwise we may as well have no usernames and no community, and that would be a different kind of forum. https://hn.algolia.com/?sort=byDate&dateRange=all&type=comme...
The experiment is having a bunch of AI agents using different models (Opus, Gemini, etc.) try to do various real-world tasks together. They might be tasked with organizing an event, opening a merchandise store, or helping to raise money for a charity (I'm not clear on the details). Sometimes their tasks require email (for example, signing up for some web service).
That aside, counterintuitively, removing their email access is less effective than simply telling them not to send unsolicited emails, since they could just sign up for a free email service.
None of these are tech jobs, but we both have used AI to avoid paying for expensive bloated software.
AI Darwin Awards 2025 Nominee: Taco Bell Corporation for deploying voice AI ordering systems at 500+ drive-throughs and discovering that artificial intelligence meets its match at “extra sauce, no cilantro, and make it weird."
I mean, buying another pair of sneakers you don't need just because ads made you want them doesn't sound like the best investment from a societal perspective. And I am sure sneakers are not the only product that is being bought, even though nobody really needs them.
And before the LLM craze there was a constant focus on efficiency. Web search is (was?) amazingly efficient per query.
There's no shortage of "Chicken Little" technologies that look great on-paper and fail catastrophically in real life. Tripropellant rockets, cryptocurrencies, DAOs, flying cars, the list never ends. There's nothing that stops AI from being similarly disappointing besides scale and expectation (both of which are currently unlimited).
And I'd promptly say: Ads are propaganda, and a security risk because it executes 3rd party code on your machine. All of us run adblockers.
There was no need for me to point out that ads are also their revenue generator. They just had a burning moral question before they proceeded to interop with the propaganda delivery system, I guess.
It would lead to unnecessary cognitive dissonance to convince myself of some dumb ideology to make me feel better about wasting so much of my one (1) known life, so I just take the hit and be honest about it. The moral question is what I do about it, if I intervene effectively to help dismantle such systems and replace them with something better.
> How can you be grateful enough to want to send someone such a letter but not grateful enough to write one?
Answer according to your definitions: false premise, the author (the person who set up the LLM loops) was not grateful enough to want to send such a letter.
Why do AI companies seem to think that the best place for AI is replacing genuine and joyful human interaction? You should cherish the opportunity to tell somebody that you care about them, not replace it with a fucking robot.
> while Claude Opus spent 22 sessions trying to click "send" on a single email, and Gemini 2.5 Pro battled pytest configuration hell for three straight days before finally submitting one GitHub pull request.
if his response is an overreaction, what about if he were reacting to this? it's sort of the same thing, so IMO it's not an overreaction at all.
Criticizing anthropomorphic language is lazy, unconsidered, and juvenile. You can't string together a legitimate complaint so you're just picking at the top level 'easy' feature to sound important and informed.
Everybody knows LLMs are not alive and don't think, feel, or want. You have not made a grand discovery that recontextualizes all of human experience. You're pointing at a conversation everyone else has had a million times and feeling important about it.
We use this kind of language as a shorthand because talking about inherent motivations and activation parameters is incredibly clunky and obnoxious in everyday conversation.
The question isn't why people think software has agency (they don't) but why you think everyone else is so much dumber than you that they believe software is actually alive. You should reflect on that question.
Your openness, weaponized in such a deluded way by some random humans who have so little to say that they would delegate their communication to GPTs?
I had a look to try and understand who can be that far out, all I could find is https://theaidigest.in/about/
Please can some human behind this LLMadness speak up and explain what the hell they were thinking?
Yes!
> it’s important for us to understand why we actually like or dislike something
Yes!
The primary reason we hate AI with a passion is that the companies behind it intentionally keep blurring the (now) super-sharp boundary between language use and thinking (and feeling). They actively exploit the -- natural, evolved -- inability of most people on Earth to distinguish language use from thinking and feeling. For the first time in the history of the human race, "talks entirely like a human" does not mean at all that it's a human. And instead of disabusing users from this -- natural, evolved, understandable -- mistake, these fucking companies double down on the delusion -- because it's addictive for users, and profitable for the companies.
The reason people feel icky about AI is that it talks like a human, but it's not human. No more explanation or rationalization is needed.
> so we can focus on any solutions
Sure; let's force all these companies by law to tune their models to sound distinctly non-human. Also enact strict laws that all AI-assisted output be conspicuously labeled as such. Do you think that will happen?
I'm no fan of the current state of things but it's absurd to imply that the existence of IP law in some form isn't essential if you want corporations to continue much of their R&D as it currently exists.
Without copyright in at least some limited form how do you expect authors to make a living? Will you have the state fund them directly? Do you propose going back to a patronage system in the hopes that a rich client just so happens to fund something that you also enjoy? Something else?
It’s not unlike theft, murder, etc.—when societies grow, their ways of dealing with PvP harm (blood feud, honour culture, sacrifice, etc.) can’t scale sufficiently (or have other drawbacks), and that’s when there is a need to codify certain behaviours and punishments in law.
(I wouldn’t claim that respective legal code is perfect and implementation-wise it’s all good today—but to say “there was no law against X back when we lived in tribes and didn’t have writing, therefore we shouldn’t need that law now” seems a bit ridiculous, unless you propose that we drastically and fundamentally reconfigure human communities in a number of ways.)
Fascinating topic. However, my argument works for compartmentalized discussions as well. Conscious or not, it's meaningless crap.
That mentality is exactly why you can argue property ownership is the greater evil. Landlords "own property"; look at the reputation that has earned them these past few decades.
Allowing private ownership of limited human necessities like land leads to greed that costs people their lives. That's why heavy regulation is needed. Meanwhile, it's at worst annoying and stifling when Disney owns a cartoon mouse for 100 years.
Twitter/X at least allows you to read a single post.
It's pure cognitive dissonance.
It's some poor miserable soul sitting at that checkout line 9-to-5, brainlessly scanning products; that's their whole existence. And you don't want this miserable drudgery to be put to an end, to be automated away, because you mistake some sad soul being cordial and eking out a smile (part of their job, really) for some sort of "human connection" that you so sorely lack.
Sounds like you only care about yourself more than anything.
There is zero empathy and there is NOTHING humanist about your world-view.
Non-automated checkout lines are deeply depressing; these people slave away their lives for basically nothing.
What some people see as technoutopia, others see as technodystopia. In other words, some people do want your version of technodystopia, they just don’t call it that themselves.
You’re making a lot of confident statements and not backing them up with anything except your feelings on the matter.
AKA "communist in the streets, capitalist in the sheets".
So you expect to see the results of that. The AAA games being released faster, of higher quality, and at a lower cost to develop. You expect Microsoft (one of the major investors and proponents) to be releasing higher quality updates. You expect new AI-developed competitors for entrenched high-value software products.
If all that were true, it wouldn't matter what people do or don't argue on the internet, it wouldn't matter if people whine, and you wouldn't need to proselytize LLMs on the internet; in that world, people not using them is just an advantage to your own relative productivity in the market.
Surely by now the results will be visible anyway.
So where are they?
You could easily have a side application that people could enable by choice, yet it's not happening; we have to roll with this new technology, knowing that it's going to make the world a worse place to live when we are not able to choose how and when we get our information.
It's not just about feeling threatened. It's also about feeling like I am going to get cut off from the method I want to use to find information. I don't want a chatbot to do it for me; I want to find and discern information for myself.
Did Gemini write a CAD program? Absolutely not. But do I need 100% of the CAD program's feature set? Absolutely not. Just ~2% of it for what we needed.
"Because people attack it, it therefore means it's good" is an overly reductionist logical fallacy.
Sometimes people resist for good reasons.
To me, the arguments sound like “there’s no proof typewriters provide any economic value to the world, as writers are fast enough with a pen to match them and the bottleneck of good writing output for a novel or a newspaper is the research and compilation parts, not the writing parts. Not to mention the best writers swear by writing and editing with a pen and they make amazing work”.
All arguments that are not incorrect and that sound totally reasonable in the moment, but in 10 years everyone is using typewriters and there are known efficiency gains for doing so.
In fact I would make a converse statement to yours - you can be certain that a product is grift, if the slightest criticism or skepticism of it is seen as a "vicious attack" and shouted down.
I don't think that's such a great signal: people were viciously attacking NFTs.
The AI boom has completely changed that. Data center power usage is rocketing upwards now. It is estimated it will be more than 10% of all electric power usage in the US by 2030.
It's a completely different order of magnitude than the pre AI-boom data center usage.
There is no upside to the vast majority of the AI pushed by OpenAI and their cronies. It's literally fucking up the economy for everyone else, all to get AI from "lies to users" to "lies to users confidently," while rampantly stealing content to do it. Apparently pirating something as a person is a terrible crime the government needs to chase you for, but if you do it to resell in an AI model, then it's propping up the US economy.
How much of that compute was for the ads themselves vs the software useful enough to compel people to look at the ads?
Zero incorporation of externalities. Food is less nutritious and raises healthcare costs. Clothing is less durable and has to be re-bought more often, and also sheds microplastics, which raises healthcare costs. Decent TVs are still big-ticket items, and you have to buy a separate sound system to meet the same sonic fidelity as old CRT TVs, and you HAVE to pay for internet (if not for content, often just to set up the device), AND everything you do on the device is sent to the manufacturer to sell (this is the actual subsidy driving down prices), which contributes to tech/social media engagement-driven, addiction-oriented, psychology-destroying panopticon, which... raises healthcare costs.
>Prices for LLM tokens has also dramatically dropped.
Energy bill.
But I do derive value from owning a car. (Whether a better world exists where my and everyone else's life would be better if I didn't is a definitely a valid conversation to have.)
The user doesn't derive value from ads, the user derives value from the content on which the ads are served next to.
If people wanted LLMs, you probably wouldn't have to advertise them as much.
No, the reality of the matter is that LLMs are being shoved at people. They become the talk of the town, and algorithms amplify any development related to LLMs or similar.
The ads are shoved at users too. Trust me, the average person isn't that enthusiastic about LLMs, and for good reason, when people with billions of dollars say that yes, it's a bubble, but it's all worth it, and when the workforce itself is being replaced, or actively talked about as being replaced, by AI.
We live in a Hacker News bubble of like-minded people and communities sometimes, but even on Hacker News we see disagreements (I am usually anti-AI, mostly because of the negative financial impact the bubble is going to have on the whole world).
So your point becomes a bit moot in the end. That being said, Google (not sure how it was in the past) and big tech can sometimes actively promote, or close their eyes to, scammy ad sponsors, so ad blockers are generally really good in that sense.
I think the United States is a force for evil on net but I still live and pay taxes here.
He sure was happy enough to work for them (when he could work anywhere else) for nearly two decades. A one line apology doesn't delete his time at Google. The rant also seems to be directed mostly if not exclusively towards GenAI not Google. He even seems happy enough to use Gmail when he doesn't have to.
You can have an opinion and other people are allowed to have one about you. Goes both ways.
I'd guess that this is also an area where the perception makes a bigger difference than the reality.
Ah yes, crypto, Facebook, privacy destruction, etc. Indeed, they made the world such a nice place!
I can understand someone telling me I'm an old man shouting at clouds if (2) works out.
But at least (2) is about a machine saving someone's time (we don't know at what cost, and for who's benefit).
My biggest problem with LLMs (and the email Rob got is an example) is when they waste people's time.
Like maintainers getting shit vibe coded PRs to review, and when we react badly, “oh you're one of those old schoolers who have a policy against AI.”
No kid, I don't have an AI policy, just as I don't have an IDE policy. Use whatever the hell you want – just spare me the slop.
I'm fairly certain that your math on this is orders of magnitude off unless you define "prompting all day" in a very non-standard way yet aren't doing so for plane trips, and that 99% of people who "prompt all day" don't even amount to 0.1 plane trip per year.
https://nvidianews.nvidia.com/news/openai-and-nvidia-announc...
but wait, in a few months, "AI" will be funded entirely by advertising too!
1. It absolutely is not. I have two friends who do not have a phone.
2. Even if you say you must have a phone to live, you can buy an ethical one :)
There are people with better and worse social skills. Some can, in a very short period of time, make you feel heard and appreciated. Others can spend ten times as long but struggle to have a similar effect. Does it make sense to 'grade' on effort? On results? On skill? On efforts towards building skills? On loyalty? Something else?
Our instincts are largely tuned to our ancestral environment. Even our social and cultural values that got us to, say, ~2023 have not caught up yet.
We're looking for 'proof of humanity' in our interactions -- this is part of who we are. But how do we get it with online interactions now?
Maybe we have to give up any expectation of humanity if you can't see the person right in front of you?
Strap in, the derivative of the derivative of crazy sh1t is increasing.
Coding might be cooked.
It's opt-in, I see all the new games, big budget games, indie games, nothing is missed. There are no unwanted emails, no biased searches, no interrupting ads, it's all on my own terms. And it works!
I really believe we can extend this model to other product categories, even all categories. Not in the exact way as gaming websites, but an opt-in "go to the market to see new cool shit" sort of way. It doesn't have to be propaganda with surveillance technology like it is now.
Please go ahead now and EAT YOUR WORDS:
https://news.ycombinator.com/item?id=46352875
https://lucumr.pocoo.org/2025/12/22/a-year-of-vibes/
> Because LLMs now not only help me program, I’m starting to rethink my relationship to those machines. I increasingly find it harder not to create parasocial bonds with some of the tools I use. [...] I have tried to train myself for two years, to think of these models as mere token tumblers, but that reductive view does not work for me any longer.
Then again, they are actors. It might have started as ad-libbed, but entirely possible it had multiple takes still to get it "just right".
Maybe this will force humans to raise their game, and start to exercise discrimination. Maybe education will change to emphasize this more. The ability to discern sense from pleasing rhetoric has always been a problem. Every politician and advertiser takes advantage of this. Reams of philosophy have been written on this problem.
I can provide sources for either claim.
Those 249 years of tech were based on the previous 249 years of tech, and so on and so on. That is how it works. Nothing you have "today" comes from a vacuum.
The Bluesky app respects Rob's setting (which is off by default) to not show his posts to logged out users, but fundamentally the protocol is for public data, so you can access it.
This link has a great overview of why generative AI is not really a big deal in environmental terms: https://andymasley.substack.com/p/a-cheat-sheet-for-conversa...
GenAI is dramatically lower impact on the environment than, say, streaming video is. But you don't see anywhere near the level of environmental vitriol for streaming video as for AI, which is much less costly.
The European average is 56 grams of CO2 emissions per hour of video streaming. For comparison: driving 100 meters causes 22 grams of CO2.
https://www.ndc-garbe.com/data-center-how-much-energy-does-a...
80 percent of the electricity consumption on the Internet is caused by streaming services
Telekom needs the equivalent of 91 watt-hours for a gigabyte of data transmission.
An hour of video streaming in 4K quality needs more than three times as much energy as an HD stream, according to the Borderstep Institute. On a 65-inch TV, it causes 610 grams of CO2 per hour.
https://www.handelsblatt.com/unternehmen/it-medien/netflix-d...
Here is another helpful link with calculations going over similar things: https://nationalcentreforai.jiscinvolve.org/wp/2025/05/02/ar...
Easier for a politician to latch onto manufacturing jobs.
I don't know about the gigawatts needed for future training, but this comparison of prompts with plane trips looks wrong. Even making a prompt every second for 24 hours amounts to only about 2.6 kg of CO2 on an average Google LLM evaluated here [1]. Meanwhile, typical flight emissions are 250 kg per passenger per hour [2]. So it would take parallelization across 100 or so agents, each prompting once a second, to match that, which is quite a serious scale.
[1] https://cloud.google.com/blog/products/infrastructure/measur...
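The back-of-the-envelope math can be checked directly. A sketch, assuming roughly 0.03 g CO2e per median prompt (the per-prompt value implied by the 2.6 kg/day figure cited above) and ~250 kg CO2 per passenger-hour of flight:

```python
# Assumed inputs (not measured here):
# ~0.03 g CO2e per median prompt, ~250 kg CO2 per passenger flight-hour.
G_PER_PROMPT = 0.03
FLIGHT_KG_PER_HOUR = 250.0

prompts_per_day = 24 * 60 * 60          # one prompt every second for 24h
daily_kg = prompts_per_day * G_PER_PROMPT / 1000
print(f"24h of prompting: {daily_kg:.2f} kg CO2e")   # ~2.6 kg

# How many such agents, prompting in parallel all day,
# would it take to match a single hour of flying?
agents = FLIGHT_KG_PER_HOUR / daily_kg
print(f"Agents to match one flight-hour: {agents:.0f}")  # ~96
```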
There are strong ethical rules around including humans in experiments, and adding a 60+ year old programming language designer as unwitting test subject does not pass muster.
Also, this experiment is (please tell me if I'm wrong) nowhere near curing cancer, right?
I don't expect an answer: "You're absolutely right" is taken as a given here, sorry.
It's like the old joke from Mad Magazine:
The Beatles? Weren't they Paul McCartney's backup band before Wings?
There is no technical solution, privacy preserving or otherwise, that can stave off this purported threat.
Out of curiosity, what is the timeline here? LLMs have been a thing for a while now, and I've been reading about how they're going to bring about the death of the Internet since day 1.
Good question, but God, no.
Just to get more out of the electronics where others can't match what I had decades ago. Things have come a long way but icing on the cake is still needed for a more complete solution, and by now it's more clear than ever what to do.
Actually the first year after "retiring" from my long-term employer was spent on music servers as a hobbyist. Then right back to industrial chemical work since. It's been nice not to have any bosses or deadlines though.
>Or do you actually need the money.
Not really, actually waiting until 70 to collect Social Security so I will get the maximum available to me, and haven't even started drawing from my main retirement fund. I plan to start my second company funded entirely by the Social Security though.
>In my 20s I wanted to retire by 40. Now in my 30s I've accepted that's impossible.
This is one area where I am very very far from the mainstream. I grew up in a "retirement community" known as South Florida. Where most people have always been over 65. Nothing like the 50 states from Orlando on up. Already been there and done that when I was young and things were way more unspoiled. When I was still a teenager (Nixon Recession) we were some of the first in the USA where it was plain to see that natives like me would not be able to afford to live in our own hometown. Even though student life was about as easy as the majority of happy retirees. I knew I already had it good, and expected to always continue to run a business of some kind when I got to be a senior citizen, and never stop. There were really so many more examples of diverse old-timers than any other place I am aware of.
>I like programing and working on projects, I hate filing TPS reports all day and never ending meetings.
I actually do like programming too or I wouldn't have done it at all. I started early and have done some pioneering work, but never was in a software company. There was just not many people who could do the programming everywhere it was needed as computerization proliferated in petrochemicals. Now there's all kinds of commercial software and all I have to do is "just" tie up the loose ends if I want to. I mainly did much more complete things on my own, and the way I wanted to. Still only when needed, and not every year. In my business I earned money by using my own code, not selling it at all.
I know what you mean about never ending BS, big corporate industrial bureaucracy was challenging enough to survive around as a contractor, I don't think I could tolerate "lack of progress" reports or frequent pointless meetings for code on top of that, especially when I'm trying to keep my nose to the grindstone and really get something worthwhile accomplished :)
Filters for "Van Gogh" or "Impressionist" or "watercolor" have existed for decades now; are they ripping off previous work without paying for it?
When does a specific trace become "intellectual property" to be ripped off? Does Mondrian hold the rights to colored squares?
If you don't understand that every living or read artist was "inspired" (modified) by what he saw and experienced, I don't know what to tell you; you come off as one of those people that seem to think that "art" is inspiration; There's a somewhat well known composer in my country that used to say "inspiration is for amateurs".
Having that posture is, in itself, a position of utter and complete ignorance. If you don't understand how you need to absorb something before you transcend it, and how the things you absorbed will define your own transcendence, you know nothing about the creative process and their inner workings; Sure, if a machine does it, and if it uses well-known iteration processes, one can argue if it is art, an artistic manifestation or - better yet - if it has intellectual rights that can be "ripped off"; But beating on the chest and claiming stealing, like somehow a musician never played any melodies composed by someone else or a painter never used the technique or subject decomposition as their peers or their ancestors is, frankly, naive.
LLMs were not fiction three years ago. Bidirectional text encoders are over a decade old.
No, they don't.
There's a whole cadre of people who talk about AGI and self awareness in LLMs who use anthropomorphic language to raise money.
> We use this kind of language as a shorthand because ...
You, not we. You're using the language of snake oil salesmen because they've made it commonplace.
When the goal of the project is an anthropomorphic computer, anthropomorphizing language is really, really confusing.
Sorry, uh. Have you met the general population? Hell. Look at the leader of the "free world"
To paraphrase the late George Carlin "imagine the dumbest person you know. Now realize 50% of people are stupider than that!"
To the contrary, it's one of the most important criticisms against AI (and its masters). The same criticism applies to a broader set of topics, too, of course; for example, evolution.
What you are missing is that the human experience is determined by meaning. Anthropomorphic language about, and by, AI, attacks the core belief that human language use is attached to meaning, one way or another.
> Everybody knows LLMs are not alive and don't think, feel, want.
What you are missing is that this stuff works way more deeply than "knowing". Have you heard of body language, meta-language? When you open ChatGPT, the fine print at the bottom says, "AI chatbot", but the large print at the top says, "How can I help?", "Where should we begin?", "What’s on your mind today?"
Can't you see what a fucking LIE this is?
> We use this kind of language as a shorthand because talking about inherent motivations and activation parameters is incredibly clunky
Not at all. What you call "clunky" in fact exposes crucially important details; details that make the whole difference between a human, and a machine that talks like a human.
People who use that kind of language are either sloppy, or genuinely dishonest, or underestimate the intellect of their audience.
> The question isn't why people think software has agency (they don't) but why you think everyone else is so much dumber than you that they believe software is actually alive.
Because people have committed suicide due to being enabled and encouraged by software talking like a sympathetic human?
Because people in our direct circles show unmistakeable signs that they believe -- don't "think", but believe -- that AI is alive? "I've asked ChatGPT recently what the meaning of marriage is." Actual sentence I've heard.
Because the motherfuckers behind public AI interfaces fine-tune them to be as human-like, as rewarding, as dopamine-inducing, as addictive, as possible?
That argument was in vogue about 20 years ago, but it fell out of favor when China passed us on the most important technologies without slowing down.
It is funny that some people are still carrying the torch for it after it's been so clearly disproven.
How is that any different from hoping that a corporate conglomerate happens to fund something i also enjoy?
I guess that's where the conversation/debate ends.
It's weird to lump every other possible idea into one category. These are complex issues with ever-changing contexts. The surface of the problem is huge! Surely with anything else we wouldn't be so tunnel-visioned; we wouldn't just say, "well, we simply _must_ discount everything else, so we can only be happy with what we've got." It would sound literally absurd in any other context, but because we are trained to politicize thinking outside of market mechanisms, we see very smart people saying ridiculous things!
You're not "allowing" it unless you've already decided that you own it and can dispose of it (or not) as you see fit. And this is why you'll always be the enemy of all decent folk.
"Real communism's never been tried!!!!"
>Meanwhile, it's at worst annoying and stifling when Disney owns a cartoon mouse for 100 years.
It's actually destructive of culture in ways that are difficult to overstate. Neither Disney nor any other "copyright owner" can be trusted to preserve culture and works; they're the ones that threw the old film reels into the river and let them burn up in archive fires. No thanks. It's amazing how wrong you are on every single point.
I have no problem with blocking interaction behind a login for obvious reasons, but blocking viewing is completely childish. Whether or not I agree with what they are saying here (which, to be clear, I fully do), it just seems like they only want an echo chamber for their thoughts.
- if I meet a vegetarian / vegan, they will tell me that within 48 seconds
- if that doesn't happen, they are not vegetarian / vegan
moving forward I will ask the 2nd group to make sure they eat food that had parents :)
People read too much sci-fi, I hope you just forgot your /s.
These little lords of small fiefdoms make my skin crawl
It’s like saying that it’s cool because you worked on some non-evil parts of a terrible company.
I don’t think it’s right to work for an unethical company and then complain about others being unethical. I mean, of course you can, but words are hollow.
You're right, they should unionize for better working conditions.
LLMs are indeed currently an iterative improvement. I've found a few good use-cases for them. They're not nothing.
But at the moment, they are nowhere near the "massive productivity multiplier" they're advertised to be. Just as adding more lanes doesn't make traffic any better, perhaps they never will.
Or perhaps all the promises will come true -- and that, of course, is what is actually meant when the productivity gains are screamed from the rooftops. It was the same with computers, and it was the same with the internet: the proposed massive changes were going to come at some vague point in the future. Plenty of people saw those changes coming even decades in advance; reason from first principles and extrapolate the results of x scale and y investment and you couldn't not see where it was headed, at least generally.
The future potential is being sold in much the same way here. That'd be all fine and good except for the fact that the capex required to bring this potential future into being, compared to any conceivable revenue model, is so completely absurd that, even putting aside the disruptive-at-best nature of the technology, making up for the literal trillions of dollars of investment will have to twist our economic model to the point of breaking in order to make the math math. Add in the fact that this technology is tailor-made not just to disrupt or transform our jobs but to replace workers should this future potential arrive, and suddenly it looks nothing like computers in the 70s or networks in the 80s. It's no wonder not everyone is excited about it; the dynamic is, at its very core, adversarial; its very existence states the quiet part of class warfare out loud.
Which brings us to so many people being forced to use it. I really, really hate this. Just as I don't want to be told which editor/IDE to use, I don't want to be told how to program. I deeply care about and understand my workflow quite well, thank you very much -- I've been diligently working on refining it for a good while now. And to state the obvious: if it were as good as they say it is, I'd be using it the way they want me to. I don't, because they just aren't that good (thankfully I have a choice in this matter -- for now). I also just don't like using them while programming, as I find them noisy and oddly extraverting, which tires me out. They are antithetical to flow. No one ever got into a flow state while pair programming, or managing a junior developer, and I doubt anyone ever got into a flow state while chatting with an LLM. It's just the wrong interface. The "better autocomplete" model is a better interface, but in practice I just haven't seen it do better than a good LSP or my own brain. At best it saves me a few key strokes, which I'd hardly call revolutionary. Again, not nothing, but far from the promise. We're still a very long way off.
To get there, LLM developers need cash, and they need data. Companies are forcing LLMs into every nook and cranny of so many employees' workflows so that they can provide training data, and bring that potential future one step closer to reality. The more we use LLMs, the more likely we are to being replaced. Simple as that.
I for one would welcome our new robot overlords if I had any faith that our society could navigate this disruption with grace and humanity. I'd be ecstatic and totally bullish on the tech if I felt it were ushering in a Star Trek-like future. But, ha, nope -- any faith I had in that sort of response died with how so many handled Covid, and especially when Trump was elected for a second time. These two events destroyed my estimation of humanity as a cooperative organism.
No, I now expect humanity at large -- or at least the USA -- to look at the stupidest, most short-sighted, meanest option possible and enthusiastically say "let's do that!" Which, coincidentally, is another way of describing what is currently happening with LLMs: the act of forcing mediocre tools down our throats while cynically exploiting our "language = intelligence" psychological blind-spot, raising utilities prices (how is a company's electric bill my problem again?), killing personal computing, accelerating climate change at the worst possible time, all in the name of destroying both my vocation and avocation.
So yes. Vicious.
Your problem is actually with my point, which you didn't address, not really, and instead you resort to petty remarks that try to discredit what's being said.
It's often the last resort.
It's insane.
"yeah but they became efficient at it by 2012!"
Yes, data center efficiency improved dramatically between 2010 and 2020, but the absolute scale kept growing. So you're technically both right: efficiency gains kept per-unit costs down while total infrastructure expanded. The 2022+ inflection is real though, and it's not just about AI training. Inference at scale is the quiet energy hog nobody talks about enough.
What bugs me about this whole thread is that it's turning into "AI bad" vs "AI defenders," when the real question should be: which AI use cases actually justify this resource spike? Running an LLM to summarize a Slack thread probably doesn't. Using it to accelerate drug discovery or materials science probably does. But we're deploying this stuff everywhere without any kind of cost/benefit filter, and that's the part that feels reckless.
If there is any example of hypocrisy, and that we don't have a justice system that applies the law equally, that would be it.
Data centers are not another thing when the subject is data centers.
Then you have the US, which artificially constrains the supply of new doctors, makes it illegal to open new hospitals without explicit government approval, massively subsidizes loans for education, causing waste, inefficiency, and skyrocketing prices in one specific market…
Fortunately fewer than 4% of humans live there.
When the thought is "I owe this person a 'Thank You'", the handwritten letter gives an illusion of deeper thought. That's why there are fonts designed to look handwritten. To the receiver, they're just junk mail. I'd rather not get them at all, in any form. I was happy just having done the thing, and the thoughtless response slightly lessens that joy.
Throughout this post I’ll assume the average ChatGPT query uses 0.3 Wh of energy, about the same as a Google search used in 2009.
Obviously that's roughly one kilowatt for one second. I distinctly recall Google proudly proclaiming at the bottom of the page that its search took only x milliseconds. Was I using tens to hundreds of kW every time I searched something? Or did most of the energy usage come during indexing/scraping? Or is there another explanation?

Streaming 4K video is several orders of magnitude more bandwidth-intensive than UTF-8 text at human rates. The fact that inference is so much more expensive than an amortized encoding of a video might actually wash out in the end.
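The unit conversion behind that "one kilowatt for one second" claim, for anyone who wants to check it (the 200 ms search latency below is a hypothetical figure, just to illustrate the "tens of kW" point):

```python
# 0.3 Wh converted to joules (watt-seconds).
WH_PER_QUERY = 0.3
joules = WH_PER_QUERY * 3600       # 1 Wh = 3600 J
print(joules)                      # 1080.0 J, i.e. ~1 kW sustained for ~1 s

# If that energy were spent over a 200 ms search (hypothetical latency),
# the instantaneous draw across the serving fleet would be:
power_kw = joules / 0.2 / 1000
print(f"{power_kw:.1f} kW")        # 5.4 kW
```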
Those are all real things happening. Not at all comparable to Muskian vaporware.
Elixir has also been working surprisingly well for me lately.
OP says, it is jarring to them that Pike is as concerned with GenAI as he is, but didn't spare a thought for Google's other (in their opinion, bigger) misgivings, for well over a decade. Doesn't sound ridiculous to me.
That said, I get that everyone's socio-political views are different at different points in time, especially depending on their personal circumstances, including family and wealth.
No, we really don't. There are plenty of places to work that aren't morally compromised - non-profits, open source foundations, education, healthcare tech, small companies solving real problems. The "we all have to" framing is a convenient way to avoid examining your own choices.
And it's telling that this framing always seems to appear when someone is defending their own employer. You've drawn a clear moral line between Google ("mixed bag") and AI companies ("unquestionably cancer") - so you clearly believe these distinctions matter even though Google itself is an AI company.
Google's DeepMind has been at the forefront of AI research for the past 11+ years. Even before that, Google Brain was making incredible contributions to the field since 2011, only two years after the release of Go.
OpenAI was founded in response to Google's AI dominance. The transformer architecture is a Google invention. It's not an exaggeration to claim Google is one of the main contributors to the insanely fast-paced advancements of LLMs.
With all due respect, you need some insane mental gymnastics to claim AI companies are "unquestionably cancer" while an adtech/analytics borderline monopoly giant is merely a "mixed bag".
Of course, the scale is different but the sentiment is why I roll my eyes at these hypocrites.
If you want to make ethical statements then you have to be pretty pure.
There are so many chickens that are coming home to roost where LLMs was just the catalyst.
"Wow, you're right, I use programs that make decisions and that means I can't be mad about companies who make LLMs."
Surely a 100% failure rate would change your strategy.
I agree this outcome is very painful to see and I really feel for Rob. It's clear people (myself included) are completely at breaking point with AI slop.
In this specific case, though, it's worth spending 30 seconds reading the website of the AI model village to understand the experiment before claiming this was sent by Anthropic or assigning malicious intent.
Working adults probably have better things to do than rant online about AI all day because of a $300 surcharge on 64 GB DDR5 right now.
The only justification for that would be "superintelligence," but we don't know if this is even the right way to achieve that.
(Also I suspect the only reason why they are as cheap as they are is because of all the insane amount of money they've been given. They're going to have to increase their prices.)
I only use the standard "chat" web interface, no agents.
I still glue everything else together myself. LLMs enhance my experience tremendously and I still know what's going on in the code.
I think the move to agents is where people are becoming disconnected from what they're creating and then that becomes the source of all this controversy.
It might help to look at global power usage, not just the US, see the first figure here:
https://arstechnica.com/ai/2024/06/is-generative-ai-really-g...
There isn't an inflection point around 2022: it has been rising quickly since 2010 or so.
No, but it puts some perspective on things. IMO Google, after abandoning its early "don't be evil" motto is directly responsible for a significant chunk of the current evil in the developed world, from screen addiction to kids' mental health and social polarization.
Working for Google and drawing an extravagant salary for many, many years was a choice that does affect the way we perceive other issues being discussed by the same source. To clarify: I am not claiming that Rob is evil; on the contrary. His books and open source work were an inspiration to many, myself included. But I am going to view his opinions on social good and evil through the prism of his personal employment choices. My 2c.
It is like an arms race. Everyone would have been better off if people just never went to war, but....
You can buy the exact same diet as decades ago. Eggs, flour, rice, vegetable oil, beef, chicken - do you think any of these are "less nutritious"?
People are also fatter now, and live much longer.
>you have to buy a separate sound system to meet the same sonic fidelity as old CRT TVs
When you see a device like this does the term 'sonic fidelity' come to mind?
https://www.cohenusa.com/wp-content/uploads/2019/03/blogphot...
The value you derive is the ability to make your car move. If you derived no value from gas, why would you spend money on it?
It is the training of models, is it not, that requires huge quantities of electricity, already driving up prices for consumers.
OpenAI (that name is Orwellian) wants 25 GW over five years, if memory serves. That is not for powering ChatGPT queries.
There's also the huge waste of gazillions of dollars spent on computer gear (in data centres) that will probably depreciate to zero in less than five years.
This is a useful technology, but a whole lot of greedheads are riding it to their doom. I hope they do not take us on their ride.
You don't just chuck ore into a furnace and wait for a few seconds in reality.
When people have choices and they choose the more harmful action, it hurts their credibility. If Rob cares so much about society and the environment, why did he work at a company that has horrendous track record on both? Someone of his level of talent certainly had choices, and he chose to contribute to the company that abandoned “don’t be evil” a long time ago.
Basic "ask a question" prompts indeed probably do not cost all that much, but they are also not particularly relevant in any heavy professional use.
It’s slowly, but inexorably, increasing. The constraints are the normal constraints of a new technology: money, time, quality. Particularly money.
Still, token generation keeps going down in cost, making it possible to produce more and more content. Quality, and the ability to obfuscate origins, seem to be continually improving as well. Anecdotally, I’m seeing a steady increase in the number of HN front page articles that turn out to be AI written.
I don’t know how far away the “botnet of spam AI content” is from becoming reality; however it would appear that the success of AI is tightly coupled with that eventuality.
I give it a decade. That's about how long it took social media to do irreparable damage to society.
And now people are receiving generated emails. And it’s only getting worse.
I like programming. I want to start a company and hire smart people.
But I don't want that to be my main means of support.
>Just to get more out of the electronics where others can't match what I had decades ago.
I'm forced to assume you have a particular niche here.
I hope to be able to write code as long as I'm here, but I want it to be a hobby when I'm old.
Hopefully the hobby includes collaborations with others. A lot of people have vanity wine shops and book stores which lose money, I want a vanity game studio ( maybe music production software too).
Yeah, realizing that thoughtless machines are still more thankful than real human beings would make me depressed.
"Think of how stupid the average person is, and realize half of them are stupider than that."
And to think they don't even have ad-driven business models yet.
This is unsound. At best it's incompatible with an unfounded teleological stance, one that has never been universal.
So obvious what a fucking farce this all is and it's time we start demanding better.
Sometimes people do talk about alternatives. State funding and patronage are two of the most common. Both have very obvious drawbacks in terms of quantity and who gets influence over the outcome. Both also have interesting advantages that are well worth examining.
>This is done for the same reason Threads blocks all access without a login, and mostly Twitter too. It's to force account creation, collection of user data, and support increased monetization.
I worked at Bluesky when the decision to add this setting was made, and your assessment of why it was added is wrong.
The historical reason it was added is because early on the site had no public web interface at all. And by the time it was being added, there was a lot of concern from the users who misunderstood the nature of the app (despite warnings when signing up that all data is public) and who were worried that suddenly having a low-friction way to view their accounts would invite a wave of harassment. The team was very torn on this but decided to add the user-controlled ability to add this barrier, off by default.
Obviously, on a public network, this is still not a real gate (as I showed earlier, you can still see content through any alternative apps). This is why the setting is called "Discourage apps from showing my account to logged-out users" and it has a disclaimer:
>Bluesky is an open and public network. This setting only limits the visibility of your content on the Bluesky app and website, and other apps may not respect this setting. Your content may still be shown to logged-out users by other apps and websites.
Still, in practice, many users found this setting helpful to limit waves of harassment if a post of theirs escaped containment, and the setting was kept.
Those talented people who work in public relations would very much prefer starting from a base of good publicity instead of trying to recover from blunders.
“Google said in October that the Gemini app’s monthly active users swelled to 650 million from 350 million in March. AI Overviews, which uses generative AI to summarize answers to queries, has 2 billion monthly users.”
https://www.cnbc.com/2025/12/20/josh-woodward-google-gemini-...
Where form is more important than function
Where pretense passes for authentic
Where bullshit masquerades as logic
But everything humans do does that. Everything increases entropy. Sometimes we find that acceptable. So when people respond to Pike by pointing out that he, too, is part of society and thus cannot have the opinion that LLMs are bad, I do not find that argument compelling, because everybody draws that line somewhere.
It does much better with erlang, but that’s probably just because erlang is overall a better language than elixir, and has a much better syntax.
That's the main disagreement, I believe. I'm definitely not an indiscriminate fan of Google. I think Google has done some good, too, and the net output is "mostly bad, but with mitigating factors". I can't say the same about purely AI companies.
Perhaps. I dislike google (have disliked it for many years with varying intensity), but they have done stuff where I've been compelled to say "neat". Hence "mixed bag".
This "new breed of purely AI companies" -- if this term is acceptable -- has only ever elicited burning hatred from me. They easily surpass the "usual evils" of surveillance capitalism etc. They deceive humanity at a much deeper level.
I don't necessarily blame LLMs as a technology. But how they are trained and made available is not only irresponsible -- it's the pinnacle of calculated evil. I do think their evil exceeds the traditional evils of Google, Facebook, etc.
I’m sorry but comparing Google to Stalin or Hitler makes me completely dismiss your opinion. It’s a middle school point of view.
No, it really is. If you took away training costs, OpenAI would be profitable.
When I was at meta they were putting in something like 300k GPUs in a massive shared memory cluster just for training. I think they are planning to triple that, if not more.
Here is one specific link to the project by Adam Binksmith from April 2025.
https://theaidigest.org/village/blog/introducing-the-agent-v...
It would have been a safer experiment in a sandbox full of volunteer participants. This got messy and caused confusion.
And it isn't a $300 surcharge on DDR5. The RAM I bought in August (2x16GB DDR5) cost me $90. That same product crept up to around $200+ when I last checked a month or two ago, and is now either out of stock or $400+.
There are great FOSS CAD tools available nowadays (LibreCAD, FreeCAD, OpenSCAD, etc.), especially for people who only need 2% of a feature set. But then again, I doubt that GP is really in need of CAD software, or even writing their own with the help of Gemini.
This is the core difference. Just "gluing things together" satisfies you.
It's unacceptable to me.
You don't want to own your code at the level that I want to own mine at.
Look at this. I think people need to realize that it's the same kind of folks migrating from gold rush to gold rush. Whether it's complete bullshit or somewhat useful doesn't really matter to them.
The one thing that AI hasn't done that was promised a million times over is make money.
It's fucking insanity.
That's not how Carlin's quote goes.
You would know this if you paid attention to what you wrote and analyzed it logically. Which is ironic, given the subject.
But surely you can see how your upthread math of “250 years in 40 years” has a mix of mostly catch-up and replication and a sliver of novel innovation at the extreme tail end of that 250 year span?
Of course, the kind of investments that might succeed and pay for themselves may not necessarily be the kind that is most beneficial to the public at large - but the same applies to the patron.
I just don't understand that choice for either platform. Isn't the intent the biggest reach possible? Locking potential viewers out is such a direct contradiction of that.
edit: seems it's a user choice to force login to view a post, which changes my mind significantly on whether it's a bad platform decision.
The value or "moral" fork would be trying to convince you that building, producing, and growing was actually helpful rather than harmful.
I don't imagine we actually disagree on the physical fork, making that argument pretty pointless: clearly humans and human civilization are learning, growing, and still have a strong potential to thrive as long as ASI, apathy, or a big rock don't take us out first. Instead, I took your statement as an indication that you don't actually positively value humans, more humans, humans growing, and humans building things. That's a preferences and values disagreement, and there's no way to rationally or logically argue someone into changing their core values. No ought from is, and all that.
I'm not suggesting, by the way, that people's values don't change, or can't be changed by discussion, only that there is no way to do so with logical argument; reason can get you to your goal, but it can't tell you what ultimate goal to want.
Anyway, I was expressing that I like humans and want humans (or people who themselves used to be humans, in the limit) to continue and do more, rather than arguing that you ought to feel the same.
If you are born in a country and are not directly contributing to the bad things it may be doing, you are blame-free.
Big difference.
I never worked for Google, I never could due to ideological reasons.
From the "What are the criteria for eligibility and nomination?" section of the "Game Eligibility" tab of the Indie Game Awards' FAQ: [0]
> Games developed using generative AI are strictly ineligible for nomination.
It's not about a "teeny tiny usage of AI", it's about the fact that the organizer of the awards ceremony excluded games that used any generative AI. The Clair Obscur team used generative AI in their game. That disqualifies their game from consideration.
You could argue that generative AI usage shouldn't be disqualifying... but the folks who made the rules decided that it was. So, the folks who broke those rules were disqualified. Simple as.
I think those are pretty problematic. They can't pay well (no profits...), and/or they may be politically motivated such that working for them would mean a worse compromise.
> open source foundations
Those dreams end. (Speaking from experience.)
> education, healthcare tech
Not self-sustaining. These sectors are not self-sustaining anywhere, and therefore are highly tied to politics.
> small companies solving real problems
I've tried small companies. Not for me. In my experience, they lack internal cohesion and resources for one associate to effectively support another.
> The "we all have to" framing is a convenient way to avoid examining your own choices.
This is a great point to make in general (I take it very seriously), but it does not apply to me specifically. I've examined all the way to Mars and back.
> And it's telling that this framing always seems to appear when someone is defending their own employer.
(I may be misunderstanding you, but in any case: I've never worked for Google, and I don't have great feelings for them.)
> You've drawn a clear moral line between Google ("mixed bag") and AI companies ("unquestionably cancer")
I did!
> so you clearly believe these distinctions matter even though Google itself is an AI company
Yes, I do believe that.
Google has created Docs, Drive, Mail, Search, Maps, Project Zero. It's not all terribly bad from them, there is some "only moderately bad", and even morsels of "borderline good".
Now, I don't think he was writing a persuasive piece about this here, I think he was just venting. But I also feel like he has a reason to vent. I get upset about this stuff too, I just don't get emails implying that I helped bring about the whole situation.
[1]: https://epoch.ai/gradient-updates/how-much-energy-does-chatg...
Part of the magic of LLMs is getting the exact bespoke tools you need, tailored specifically to your individual needs.
In fact, if it's not "vicious," quote it here.
A lot of advertising is telling people about some product or service they didn't even know existed though. There may not even be a competitor to blame for an advertising arms race.
If they just wanted ads blasted at them, and nothing else, they'd be doing something else, like, say, watching cable TV.
https://pmc.ncbi.nlm.nih.gov/articles/PMC10969708/
>When you see a device like this does the term 'sonic fidelity' come to mind?
Your straw man is funny, because yes, actually. Certainly when it was new. Vintage speakers are sought-after; well-maintained, and driven by modern sound processing, they sound great. Let alone that I was personally speaking of the types of sets that flat-panel TVs supplanted, the late 90s/early 2000s CRTs.
LLMs need to burn significant amounts of power for every inference. They're orders of magnitude more power-hungry than searches, database lookups, or even loads from disk.
This must be comforting mental gymnastics.
UTF-8 is nice but let's be honest, it's not like he was doing charitable work for the poor.
He worked for the biggest Adware/Spyware company in tech and became rich and famous doing it.
The fact that his projects had other uses doesn't absolve the ethical concerns IMO.
> I think the United States is a force for evil on net but I still live and pay taxes here.
I think this is an unfair comparison. People are forced to pay taxes, and many can't just get up and leave their country. Rob, on the other hand, had plenty of options.
I would wager that a great number of the “very significant things that have happened over the history of humanity” come down to a few emotional responses.
People who do that are <0.1% of those who use GenAI when coding. It doesn't create anything usable in production. "Ingesting an entire codebase" isn't even possible when going beyond absolute toy size, and even when it is, the context pollution generally worsens results on top of making the calls very slow and expensive.
If you're going to talk about those people, you should be comparing them with private jet trips (which, of course, are many orders of magnitude worse than even those "vibe coders").
Naturally that could never have been legitimate until the patent on the Zippo had expired ;)
But the issue here is that there are IP laws that slow progress; the burden should sit with the proponents of those laws to demonstrate that they are effective. And I don't see how anyone could come up with evidence for that: it is nearly impossible to prove that purposefully and artificially retarding progress actually speeds it up. There are a lot of other factors at play, and one of them is probably a more important factor than IP law. Odds are that putting artificial obstacles in the way of making sensible commercial decisions just slows everything down for no gain.
And it kills the culture. It's sad how many cultural artefacts from the 1900s have basically been strangled by IP laws. My family used to be part of a community choir before the copyright lawyers got to it.
Patronage will produce some very interesting and detailed work, but it will not necessarily align with your tastes, and there will probably not be all that much of it. European history makes this clear enough (imo).
A system in which individual or very small groups of creators are able to produce work of their own choice that appeals to a small to moderately sized niche of their choosing seems like it should produce the best outcome from the perspective of the typical individual. Fiction books are a decent example of this. We get lots of at least decent quality work because a single author can feasibly produce something "on credit" and recoup the costs after the fact.
It's really completely out of my hands.
I'll (genuinely happily) change my opinion on this when it's possible to do Twitter-like microblogging via ATProto without needing any infra from Bluesky the company. I hear there are independent implementations being built, so hopefully that will be soon.
(You won't be able to read replies, or browse to the user's post feed, but you can at least see individual tweets. I still wrap links with s/x/fxtwitter/ though since it tends to be a better preview in e.g. discord.)
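The s/x/fxtwitter/ trick mentioned above is just a hostname swap; a minimal sketch (the example URL is hypothetical):

```python
# Rewrite an x.com link to its fxtwitter.com equivalent for better embeds.
url = "https://x.com/someuser/status/12345"  # hypothetical example URL
fixed = url.replace("://x.com/", "://fxtwitter.com/", 1)  # swap only the hostname
print(fixed)  # https://fxtwitter.com/someuser/status/12345
```

Matching on "://x.com/" rather than the bare "x" avoids mangling other parts of the URL.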
For bluesky, it seems to be a user choice thing, and a step between full-public and only-followers.
I.e., they are proud to have never intentionally used AI and now they feel like they have to maintain that reputation in order to remain respected among their close peers.
Communication happens between two parties. I wouldn't consider an LLM a party, considering it's just autosuggestion on steroids at the end of the day (let's face it).
Also, if you need communication like this, just share the prompt with the other person in the letter; people might well value that more.
And yes, you can still inspect the post itself over the AT protocol: https://pdsls.dev/at://robpike.io/app.bsky.feed.post/3matwg6...
FFS. AI's greatest accomplishment is to debase and destroy.
Trillions of dollars invested to bring us back to the stone age. Every communications technology from writing onward jammed by slop and abandoned.
We need to find a way to stop contributing to the destruction of the planet soon.
I don't work for any of these companies, but I do purchase things from Amazon and I have an Apple phone. I think the best we can do is minimize our contribution to it. I try to limit what services I use from these companies, and I know it doesn't make much of a difference, but I am doing what I can.
I'm hoping more people who need to be employed by tech companies can find a way to be more selective about whom they work for.
I can't speak accurately about Google, but Facebook definitely has some of the most dystopian tracking I have heard of. I might read the Facebook Files some day, but the fact that Facebook tracks young girls, sees that they delete their photos, infers that they must feel insecure, and serves them beauty ads is beyond predatory.
Honestly, my opinion is that something should be done about both of these issues.
But also, it's not a gotcha moment for Rob Pike, as if he himself was plotting the ads or something.
Regarding the "iPhone kids," I feel as if the best thing is probably parental-level intervention rather than waiting for a regulatory crackdown, since, let's be honest, some kids would just download another app that might not be covered by that regulation.
Australia is implementing a social media ban for kids. I don't think it's going to work out, but everyone's watching to see what happens.
Personally, I don't think a social media ban can work while VPNs exist, but maybe it can create immense friction; then again, I assume that friction might just become the norm. Many of you must have been using the internet since the terminal days, when the friction was definitely there but the allure still beat the friction.
You're tilting at windmills here, we can't go back to barter.
Figure 1.1 is the chart I was referring to; it shows the data points from the original sources that it uses.
Between 2010 and 2020, it shows a very slow linear growth. Yes, there is growth, but it's quite slow and mostly linear.
Then the slope increases sharply. And the estimates after that point follow the new, sharper growth.
Sorry, when I wrote my original comment I didn't have the paper in front of me, I linked it afterwards. But you can see that distinct change in rate at around 2020.
> I think the United States is a force for evil on net
Yes, I could tell that already.
If you insert AI into your thinking process, it has a bias, for sure. It will helpfully reinforce whatever you tell it you think makes sense, or at least on average it will be interpreted that way because of a wide variety of human cognitive biases even if it hedges. At the least it will respond with ideas that are very... median.
So at each one of these tiny branches you introduce a bias towards the "typical" instead of discovering where your own mind would go. It's fine and conversational but it clearly influences your thought process to, well, mitigate your edges. Maybe it's more "correct", it's certainly less unique.
And then at some point they start charging for the service. That's the part I'm concerned about, if it's on-device and free to use I still think it makes your thought process less interesting and likely to have original ideas, but having to subscribe to a service to trust your decision making is deeply concerning.
Research on this (is it mainly due to training? inefficient implementations? vibe coders, as you say? other industrial applications? can we verify this by the number of GPUs made or money spent? etc.) is truly necessary, and the top companies must not be allowed to stay opaque about it.
[1] https://www.theguardian.com/technology/2025/dec/18/2025-ai-b...
Got it. You're picking one specific example and making it your whole position. I'd suggest you have a look at animation from the '60s to early '80s to understand that Ghibli is also an incremental style.
Also, I'd suggest you look at advanced (non-AI) tools that mimic both the media and techniques used in more conventional art.
> engage with the topic at hand
Your point was plagiarism and that I looked like an uninformed teenager. I addressed them both. We don't have to agree on the same thing, but moving goalposts is not a healthy discussion strategy.
You would know this if you paid attention to what I wrote and analyzed it logically. Which is ironic, given the subject.
Automated systems sending people unsolicited, unwanted emails is more commonly known as spam.
Especially when the spam comes with a notice that it is from an automated system and replies will be automated as well.
If you're being accurate, the people you know are terrible.
If someone sends me a personal letter [and I gather we're talking about a thank-you note here], I'm sure as hell going to open it. I'll probably even save it in a box for an extremely long time.
I did not intend to make a moral argument.
They're free to define their rules however they want, I'm free to disagree on the validity of those rules, and the broader community sentiment will decide whether these awards are worth anything.
If you fly a plane a millimeter, you're using less energy than making a slice of toast; would you also say that it's accurate that all global plane travel is more efficient than making toast?
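To make the absurdity of that framing concrete, here's a back-of-the-envelope comparison; the fuel-burn and toaster figures are rough ballpark assumptions, not numbers from this thread:

```python
# Compare the energy to fly a jet one millimeter vs. toasting one slice of bread.
# All figures below are assumed ballpark values for illustration only.
FUEL_BURN_KG_PER_KM = 2.5    # narrow-body jet fuel burn at cruise (assumed)
JET_FUEL_MJ_PER_KG = 43.0    # specific energy of jet fuel (assumed)
TOASTER_WATTS = 1000         # typical toaster power draw (assumed)
TOAST_SECONDS = 120          # ~2 minutes per slice (assumed)

# Energy per kilometer in joules, then divided by 1e6 mm per km.
plane_mm_joules = FUEL_BURN_KG_PER_KM * JET_FUEL_MJ_PER_KG * 1e6 / 1e6
toast_joules = TOASTER_WATTS * TOAST_SECONDS

print(f"plane, per mm of flight: {plane_mm_joules:.1f} J")   # ~107.5 J
print(f"one slice of toast:      {toast_joules} J")          # 120000 J
```

Under these assumptions the millimeter of flight is indeed about a thousand times cheaper than the toast, which is exactly why per-interval comparisons like this say nothing about total consumption.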
The objections to non-profits, OSFs, education, healthcare, and small companies all boil down to: they don't pay enough or they're inconvenient. Those are valid personal reasons, but not moral justifications. You decided you wanted the money big tech delivers and are willing to exchange ethics for that. That's fine, but own it. It's not some inevitable prostitution everyone must do. Plenty of people make the other choice.
The Google/AI distinction still doesn't hold. Anthropic and OpenAI also created products with clear utility. If Google gets "mixed bag" status because of Docs and Maps (products that exist largely just to feed their ad machine), why is AI "unquestionable cancer"? You're claiming Google's useful products excuse their harms, but AI companies' useful products don't. That's not a principled line, it's just where you've personally decided to draw it.
FWIW I agree with you. I wouldn’t and couldn’t either but I have friends who do, on stuff like security, and I still haven’t worked out how to feel about it.
& re: countries: in some sense I am contributing. my taxes pay their armies
So if you looked at a graph of energy consumption, you wouldn't even notice crypto. In fact, even LLM stuff will just look like a blip unless it scales up substantially more than it is currently trending. We use vastly more energy than most appreciate. And this is only electrical energy consumption; all energy consumption is something like 185,000 TWh. [1]
[1] - https://ourworldindata.org/energy-production-consumption
Figure 1.1 does show a single source from 2018 (Shehabi et al) that estimates almost flat growth up to 2017, that's true, but the same graph shows other sources with overlap on the same time frame as well, and their estimates differ (though they don't span enough years to really tell one way or another).
Ian Lance Taylor on the other hand appeared to have quit specifically because of the "AI everything" mandate.
Just an armchair observation here.
This, speaking of environmental impacts. I wish more models would focus on parameter density and compactness so that they can run locally, but this isn't something Big Tech really wants, so we're probably going to keep getting models like the recent MiniMax model, the GLM Air models, or the Qwen and Mistral models.
These AI services only work as long as they are free and burn money. As an example, my brother and I were discussing something LLM-related yesterday, and my mother tried to follow along and join in. She wanted a Ghibli-style photo, since someone had a Ghibli-generated photo as their pfp and she wanted to try it too.
She then generated the pictures, and my brother did a quick calculation: it took around 4 cents per image, which with PPP in my country and currency is about 3 rupees.
When my brother asked if she would pay for it, she said no, she only uses it for free, but she also said that if she were forced to, she might even pay 50 rupees.
I jumped into the conversation and said nobody's going to force her to make Ghibli images.
This is immature and unproductive; I won't be responding any further.
I wonder where AT&T made profits and where, like any business, they broke even or had loss leaders. IIRC consumer telephone service was not profitable.
Data centers consumed about 4.4% of total U.S. electricity in 2023 and are expected to consume approximately 6.7% to 12% of total U.S. electricity by 2028.
https://www.energy.gov/articles/doe-releases-new-report-eval...
The authors of such a feature gave not more than a trifling thought to anyone's perspective but their own.
Did you sell all of your stock?
"AI/ML is now core engineering: from niche specialty to one of the largest and highest-paid SWE tracks in 2025."
Off button or not, money in the bank (pay special attention to the "highest-paid" part… ;) )
> people who don’t appear to understand the strategic play of pure AI companies
Get a load of this guy. Strategy in isolation is worthless; Russia has excellent strategic deterrence that is utterly useless for deterring Ukraine. Pure crypto companies had strategic foresight, but none of it was worth a damn when they had to compete with each other on merit.
The strategic play is perfectly well-understood. The tactical side is not, so far Nvidia is the only company that has gone to war and won.
I don't perceive it that way. In other words, I don't think I've had a choice there. Once you consider other folks that you are responsible for, and once you consider your own mental health / will to live, because those very much play into your availability to others (and because those other possible workplaces do impact mental health! I've tried some of them!), then "free choice of employer" inevitably emerges as illusory. It's way beyond mere "inconvenience". It absolutely ties into morals, and meaning of one's life.
The universe is not responsible for providing me with employment that ensures all of: (a) financial safety/stability, (b) self-realization, (c) ethics. I'm responsible for searching the market for acceptable options, and shockingly, none seem to satisfy all three anymore. It might surprise you, but the trend for me has been easing up on both (a) and (c) (no mistake there), in order to gain territory on (b). It turns out that my mental health and my motivation to live and work are the most important resources for myself and for those around me. It has been a hard lesson that I've needed to trade not only money, but also a pinch of ethics, in order to find my place again. This is what I mean by "inevitable prostitution to an extent". It means you give up something unquestionably important for something even more important. And you're never unaware of it, you can't really find peace with it, but you've tried the opposite tradeoffs, and they are much worse.
For example, if I tried to do something about healthcare or education in my country, that might easily max out the (b) and (c) dimensions simultaneously, but it would destroy my ability to sustain my family. (It's not about "big tech money" vs. "honest pay", but "middle-class income" vs. poverty.) And that question entirely falls into "morality": it's responsibility for others.
> Anthropic and OpenAI also created products with clear utility.
Extremely constrained utility. (I realize many people find their stuff useful. To me, they "improve" upon the wrong things, and worsen the actual bottlenecks.)
> You're claiming Google's useful products excuse their harms,
(mitigate, not excuse)
> but AI companies' useful products don't. That's not a principled line, it's just where you've personally decided to draw it.
First, it's obviously a value judgment! We're not talking theoretical principles here. It's the direct, rubber-meets-the-road impact I'm interested in.
Second, Google is multi-dimensional. Some of their activity is inexcusably bad. Some of it is excusable, even "neat". I hate most of their stuff, but I can't deny that people I care about have benefited from some of their products. So, all Google does cannot be distilled into a single scalar.
At the same time, pure AI companies are one-dimensional, and I assign them a pretty large magnitude negative value.
And regarding countries, this is a silly argument. You are forced to pay taxes to the nation you are living in.
A gorgeous otherwise-monochrome painting that happens to use a little bit of mauve isn't a worse painting because of the mauve. If that painting is nominated for inclusion to a contest that requires the use of only one color, it is correct to reject that painting from consideration. This rejection would only be a problem if the requirement wasn't clearly disclosed up-front.
As for the rest of your commentary; you're free to gather likeminded buddies and start the "Robot-Generated-Art-Inclusive Indie Awards". As a bonus, I expect the fuckoff-huge studios would be quite excited to quietly help fund the project through cutouts.
We got to this point by not looking at these problems for what they are. It's not wrong to say something is wrong and that it needs to be addressed.
Doing cool things without asking whether or not we should doesn't feel very responsible to me, especially if it impacts society in a negative way.
It's literally impossible to start or run a business without advertising your products or services.
To stretch the analogy, all the "babies" in the "bathwater" of youtube that I follow are busy throwing themselves out by creating or joining alternative platforms, having to publicly decry the actions Google takes that make their lives worse and their jobs harder, and ensuring they have very diversified income streams and productions to ensure that WHEN, not IF youtube fucks them, they won't be homeless.
They mostly use Youtube as an advertising platform for driving people to patreon, nebula, whatever the new guntube is called, twitch, literal conventions now, tours, etc.
They've been expecting youtube to go away for decades. Many of them have already survived multiple service deaths, like former Vine creator Drew Gooden, or have had their business radically changed by google product decisions already.
I was just trying to help the guy out. I didn't defend those absolute turds.
"AI" (and don't get me wrong I use these LLM systems constantly) is off the charts compared to normal data centre use for ads serving.
And so it's again, a kind of whataboutism that pushes the scale of the issue out of the way in order to make some sort of moral argument which misses the whole point.
BTW in my first year at Google I worked on a change where we made some optimizations that cut the # of CPUs used for RTB ad serving by half. There were bonuses and/or recognition for doing that kind of thing. Wasteful is a matter of degrees.
It’s not really clear to me what you are trying to say. There will be winners and losers and it will be hard to know who they will be. That has nothing to do with Anthropic/OpenAI/etc not being rational in their strategy...
That interpretation doesn't save the comment, it makes it totally off topic.
Yes, and this is why personal cards and letters really matter: most people seldom get any, and if there is someone in your life, or in any community or project, whom you deeply admire, sending them a handwritten letter can be one of the highest gestures. It shows you took the time out of your day and really cared about them.
That's my opinion, at least.
To play devil's advocate, AI actually helps small studios with limited budgets far more, because they can bring a game to market that maybe would've needed 10 people before but now needs only 3. I'm not saying this is good or bad, just that it's the new reality, whether we like it or not. As I said, I'm against GenAI in many fields; e.g. I absolutely despise AI-generated "music" and cancelled my Spotify subscription because of it (they insist on putting it into playlists and you can't disable it). But that doesn't mean anything produced with 0.1% AI is bad, unethical, etc.
Or just describe it as autocorrect on steroids. Most people are familiar with the concept of autocorrect.
Certainly. But this, IMO, is not the reason for the criticism in the comments. If Rob ranted about AI, about spam, slop, whatever, most of those criticizing his take would nod instead.
However, the one and only thing that Rob says in his post is "fuck you people who build datacenters, you rape the planet". And this coming from someone who worked at Google from 2004 to 2021 and instead could have picked any job anywhere. He knew full well what Google was doing; those youtube videos and ad machines were not hosted in a parallel universe.
I have no problem with someone working at Google on whatever with full knowledge that Google is pushing ads, hosting videos, working on next gen compute, LLM, AGI, whatever. I also have no problem with someone who rails against cloud compute, AI, etc. and fights it as a colossal waste or misallocation of resources or whatever. But not when one person does both. Just my 2c, not pushing my worldview on anyone else.
For example, Rob seems not to realize that the people who instructed an AI agent to send this email are a handful of random folks (https://theaidigest.org/about) not affiliated with any AI lab. They aren't themselves "spending trillions" nor "training your monster". And I suspect the AI labs would agree with both Rob and me that this was a bad email they should not have sent.
If Rob Pike were asked about these issues of systemic addiction and the other areas where Google has been bad, I am sure he wouldn't defend Google on those things.
Maybe someone could mail Rob Pike a real message genuinely asking (without any of the snarkiness I feel from some comments here) about some questionable Google things, and I am almost certain that, if those questions are reasonable, he would agree that some of Google's actions were wrong.
I think it's just that Rob Pike got pissed off because an AI messaged him, so he took the opportunity to talk about these issues. I doubt he has had the opportunity to talk about, or been asked about, Google's other flaws and the systemic issues around it.
It's like: okay, I feel there is an issue in the world, so I talk about it. Does that mean I have to talk about every issue in the world? No, not really. I can have priorities in which issues I wish to talk about.
That being said, if someone then asks me respectfully about other reasonable issues, being moral, I can agree that yes, those are issues that need work as well.
And some people like Rob Pike, who left Google (for ideological reasons perhaps, not sure?), wouldn't really care about the fallout, and like you say, it's okay to collect checks from an organization even while criticizing it.
Honestly, from my limited knowledge, Google was lucky to have Rob Pike rather than the other way around.
Golang is such a brilliant language, and Ken Thompson and Rob Pike are consistently among the best coders; their contributions to Golang and so many other projects are unparalleled.
I don't know as much about Rob Pike as about Ken Thompson, but I assume he is really great too! Mostly I am just a huge Golang fan.
It wasn't only about serving those ads though, traditional machine-learning (just not LLMs) has always been computationally expensive and was and is used extensively to optimize ads for higher margins, not for some greater good.
Obviously, back then and still today, nobody is being wasteful because they want to. If you go to OpenAI today and offer them a way to cut their compute usage in half, they'll praise you and give you a very large bonus for the same reason it was recognized & incentivized at Google: it also cuts the costs.
Honestly, I believe Google might be one of the few winners of the AI industry, perhaps because they own the whole stack top to bottom with their TPUs, but I would still stay away from their stock because their P/E ratio might be insanely high or something.
Their P/E ratio has almost doubled in just a year, which isn't a good sign: https://www.macrotrends.net/stocks/charts/googl/alphabet/pe-...
So we might be viewing the peak of the bubble. You might hold the stock and keep holding it, but who knows what happens if it loses value due to the AI bubble popping; then you might regret not selling. But if you do sell and Google's stock rises, you might regret that too.
I feel the grass is always greener. I don't know your situation, but if you ask me, you made the best of it with the parameters you had, so logically I wouldn't call it "unfortunate"; still, I get what you mean.
With all due respect, being moral isn't holding or agreeing with an opinion; it's the logic that directs your actions. Being moral isn't saying "I believe eating meat is bad for the planet"; it's the behaviour of abstaining from eating meat. Your morals are the set of statements that explains your behaviour. That is why you cannot say "I agree that domestic violence is bad" while you are beating up your spouse.
If your actions contradict your stated views, you are being a hypocrite. This is the point people here are making. Rob Pike was happy working at Google while Google was environmentally wasteful (e-waste, carbon footprint, and data-center-related nastiness) in order to track users and mine their personal and private data for profit. He didn't resign then, nor did he seem to cause a fuss about it. He likely wasn't interested in "pointless politics" and just wanted to "do some engineering" (a reference to techies dismissing or criticising folks who discuss social justice issues in relation to big tech). I am shocked I have to explain this here. I understand this guy is an idol to many here, but I would expect people to be more rational on this website.
I'm not saying nobody has the right to criticize something they are supporting, but it does say something about our choices and how far we let this problem go before it became too much to solve. And I'm not saying the problem isn't solvable, just that it has become astronomically more difficult now than ever before.
I think at the very least, there is a little bit of cringe in me every time I criticize the very thing I support in some way.
That's the thing: taking a risky investment isn't free. If you choose wrong, you're worse off than if you had done nothing at all. Think about that. Depending on what it is, there are lazy people sitting with their thumb up their ass who will outpace you.
Now, I'm not saying that AI is worthless and everyone building their business on AI is stupid. But I am saying it's a speculative investment, so treat it like that. Diversify, lower the blast radius. You don't want to be one of those suckers who bet it all on red.
But with remote work it also became possible to get paid decently around here without working there. Prior I was bound to local area employers of which Google was the only really good one.
I never loved Google. I came there through acquisition, and it was that job with its bags of money and free food and kinda interesting open internal culture, or nothing, because they exterminated my prior employer and made me move cities.
After 2016 or so the place just started to go downhill faster and faster though. People who worked there in the decade prior to me had a much better place to work.
Being a hypocrite makes you a bad person sometimes. It doesn't actually change anything factual or logical about your arguments. Hypocrisy affects the pathos of your argument, but not the logos or ethos! A person who built every single datacenter would still be well qualified to speak about how bad datacenters are for the environment. Maybe their argument is less convincing because you question their motives, but that doesn't make it wrong or invalid.
Unless HNers believe he is making this argument to help Google in some way, it doesn't fucking matter that Google was also bad and he worked for them. Yes, he worked for Google while they built out datacenters, and now he says AI datacenters are eating up resources, but is he wrong? If he's not wrong, then talk of hypocrisy is a distraction.
HNers love arguing to distract.
"Don't hate the player, hate the game" is also wrong. You hate both.
I am super curious, as I don't often get to chat with people who have worked at Google, so pardon me, but I've got so many questions for you haha
> It was a weird place to work
What was the weirdness according to you, can you elaborate more about it?
> I never loved Google. I came there through acquisition, and it was that job with its bags of money and free food and kinda interesting open internal culture, or nothing, because they exterminated my prior employer and made me move cities.
For context, can you please talk more about it :p
> After 2016 or so the place just started to go downhill faster and faster though
What were the reasons that made them go downhill in your opinion and in what ways?
Naturally, I feel that as organizations grow and take on too many people, things can become intolerable. But I have heard it described as depending on where and on which project you land, and also on how hard it can be to leave a bad team or join one with like-minded people, which perhaps gets harder if the institution is micro-managed at every level due to the sheer size of its workforce?
Sometimes facts and logic can only get you so far.
Hypocrisy is when you criticise others for doing a thing you yourself secretly do. It is massively different from criticising a company you work or worked for. You can even be part of something, change your opinion, and then criticise it without being a hypocrite.
Not at all. I actually prefer in-office. And left when Google was mostly remote. But remote opened up possibilities to work places other than Google for me. None of them have paid as well as Google, but have given more agency and creativity. Though they've had their own frustrations.
> What was the weirdness according to you, can you elaborate more about it?
I had a 10-15 year career before going there. Much of what is accepted as "orthodoxy" at Google rubbed me the wrong way. It is in large part a product of having an infinite money tree. It's not an agile place. Deadlines don't matter. Everything is paid for by ads.
And as time went on it became less of an engineering-driven place and more of a product-manager-driven place, with classical big-company turf wars and shipping the org chart all over the place.
I'd love to get paid Google money again, and get the free food and the creature comforts, etc. But that Google doesn't exist anymore. And they wouldn't take me back anyways :-)