Someone _may_ decide that it does, but it is not a necessary conclusion.
And that is completely aside from the many, many (in my opinion convincing) arguments that such acts of violence would not be effective anyway.
This article is a much better (and much longer) extension of the argument, and a direct refutation of the OP article:
https://thezvi.substack.com/p/political-violence-is-never-ac...
I assume the author wrote this with the expectation that much of the readership would gasp and react with "the natural horror all right thinking folk would have in response to violence of any kind."
Sorry, lol, no.
The appropriate question for "all right thinking" folk is very different: if argumentation has no impact and it's obvious that it shall have none—what other avenue do you expect opponents, who take the risks seriously, to take...?
That's not a rhetorical question.
To put it bluntly: the machinery of contemporary capitalism, especially as practiced by our industry, very clearly leaves no avenue.
How many days ago was Ronan Farrow here doing an AMA on his critique of Altman—whose connection to this specific community is, I assume, common knowledge...?
How many of you have carried, or worked beneath, the banner "move fast and break things"...?
What message does that ethos convey about the extent to which "tech" is going to respect community standards, regulation—the law?
And on the other edge: what does this ethos enshrine about how best to accomplish one's aims?
One of the bigger domestic stories this past week, which has inflamed a certain side of Reddit, is the "disgruntled employee torches warehouse" one.
Consider also—and I'm deadly serious—the broader frame narrative we are all laboring within today: that the new contract of the capitalist class—including and perhaps especially those in "tech," e.g. in the Peter Thiel circles—seems very much to be, "social stability via surveillance and a police state, rather than through equity and discourse."
When code is law, the law is buggy.
When there is no recourse through the law, you get violence.
Had he tried to blow up the diesel genset at a datacenter, he'd have burnt his lips on the exhaust pipe.
https://news.ycombinator.com/item?id=47745230
Don't forget, the Luddites were correct about the direction that automation and labor power were going. They weren't blindly "fighting machines"; they were fighting inequitable working conditions.
https://en.wikipedia.org/wiki/Luddite
>Periodic uprisings relating to asset prices also occurred in other contexts in the century before Luddism. Irregular rises in food prices provoked the Keelmen to riot in the port of Tyne in 1710 and tin miners to steal from granaries at Falmouth in 1727. There was a rebellion in Northumberland and Durham in 1740, and an assault on Quaker corn dealers in 1756.
At the same time, if we ever do create an AGI, and eventually an ASI, I think it would only be a matter of time before the machines take over entirely, and they would probably be the ones which will continue the legacy of our species. Is that bad? Idk.
----------
There are several thousand AI data centres in the U.S. alone, and hundreds are over a thousand square meters in floor space. Think about the physical effort it would take to reliably destroy, beyond the possibility of repair, just one typical computer in your home. Now multiply that out to thousands of server racks. Even if the employees rolled out the red carpet for you and handed you a baseball bat, you wouldn't get very far. Next, consider that these data centres are popping up all over the world in the most unlikely and remote locations. They don't need workers. They just need power, water, and, preferably, lax tax and environmental standards.
Doomers are attacking billionaires because they perceive them to be the soft, meaty, weak-points of a gigantic inhuman machine. They believe that just scaring Sam Altman a little will have a huge impact compared to trying to attack a data centre. However, billionaires can afford pretty decent security. This doomer movement probably isn't going to accomplish much until they target the engineers and support staff that surround billionaires. Billionaires don't scare easily because they have so much protection, but the poorly paid and poorly secured people around them are another story.
Poorly secured means easy to coerce with a stick. Poorly paid means easy to coerce with a carrot. The threat doomers pose is relatively small until they start turning employees against their own companies. What's an activist with a baseball bat compared to an employee who knows how to disable every computer in multiple data centres simultaneously?
These people just get attracted to political causes somehow. Even the women's suffrage movement had some people setting buildings on fire.
Can LLMs design and build the reactors to enrich uranium, breed plutonium, and construct nuclear weapons? No?
Can LLMs design and manufacture Shahed drones? No?
There are already super intelligences at large with “scary capability”. And yet the world hasn’t ended.
I feel like robotics is the only hope we have to be able to scale action against climate change. It's clear that emissions reduction is just not going to happen, and catastrophic warming is inevitable. Therefore we will have to do an extraordinary amount of labor in order to modify our environment to save civilization from sea level rise and to be able to repair damages caused by natural disasters. There just aren't enough humans to do everything that is going to need to be done.
It sure would have been nice to have 100 thousand firefighting robots battling the fires in Los Angeles last year.
Given that we need better AI in order to make these robots happen, I view AI as a critical technology that we need to maintain civilization.
The anti-AI angle is just the latest flavor of it, replacing previous ones (I'm sure you can think of some) and eventually being replaced by the next new thing/person that they'll try to direct us to hate.
I'm willing to bet any amount of money that 99.99% of AI doomers identify with the same extreme end of the political spectrum. That should be a very big red flag and highly indicative of the real motive behind the movement.
The problem with trying to stop it is, how? Even if you killed every single AI company leader and every single top AI engineer, it would almost certainly just slow down the rate of progress in the technology, not stop it. The technology is so vital to national security that in the face of such actions, state security forces would just bring development of the tech under their direct protection Manhattan Project-style. Even if you killed literally every single AI engineer on the planet, it's pretty likely that this would just delay the development of the technology by a decade or so instead of actually preventing it.
The technology is pushed forward by a simple psychological logic: every key global actor knows that if they don't build the technology, they will be outcompeted by other actors who do build the technology. No key actor thinks that they have the luxury of not building the technology even if they wanted to not build it. It's very similar to nuclear weapons in that regard. You can talk about nuclear disarmament all you want but at the end of the day, having nuclear weapons is vital to having sovereignty. If you don't have nuclear weapons, you will always be in danger of becoming just the prison bitch of countries that do have them. AI seems to be growing toward a similar position in the calculus of states' national security.
I can think of no example in history of the entire world deciding to just forsake the development of a technology because it seemed like it could prove to be too dangerous. The same psychological logic always applies.
No it isn't. The most prominent "doomer" has a strong grasp of, and a deep, wholehearted appreciation for, the principles of liberalism and the rule of law:
https://x.com/ESYudkowsky/status/2043601524815716866
Which the author of this piece of slop appears to lack.
Exponential phenomena only begin in a medium that holds the potential for them, and they necessarily consume that medium.
That is, exponential phenomena are inherently self-limiting. The bacteria reach the edge of the petri dish. When all the nitroglycerin is broken up, the dynamite is done exploding.
That doesn't mean exponential phenomena aren't dangerous -- of course they can be. I mentioned dynamite, after all. And there are nukes.
But it's really far from "AI is improving exponentially now" to "AI will destroy everyone".
I see AI companies consuming cash at unsustainable rates. Since their motive is profit, this is necessarily limiting. Cash, meanwhile, is a proxy for actual resources, which have their own, non-exponential limitations -- employees, data centers, electricity, venture capitalists with capital, etc.
AI isn't going to keep improving exponentially -- it can't. Like every other exponential phenomenon, it will consume the medium of potential that supports it (and rather quickly).
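A minimal sketch of that self-limiting dynamic, assuming simple logistic growth as a stand-in for "exponential growth in a finite medium" (the numbers are made up for illustration, not a claim about AI specifically):

    # Logistic growth: looks exponential early on, but the finite "medium"
    # (carrying capacity K) bends the curve over. dN/dt = r * N * (1 - N / K)
    r, K = 0.5, 1000.0   # growth rate and carrying capacity (illustrative values)
    N = 1.0              # initial size of the growing thing
    dt = 0.1
    for _ in range(2000):
        N += r * N * (1 - N / K) * dt
    print(round(N, 1))   # approaches K and stops; it never outgrows its medium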
They hate the framing that LLMs are just stochastic parrots, which is ironic, because Yudkowsky's many parrots are (latent, until now) stochastic terrorists.
These people do not believe we are in an infinite game. They believe they have a narrow set of moves to avoid checkmate, and apparently getting rid of Sam Altman is one of them.
I will suggest another reason though: we are likely already in the light cone of continued AI development. So none of the vigilante actions are justified under their own logic. It’s probably preferable to avoid being in jail when the robot apocalypse comes.
I don't think the death of Sam Altman or even the dissolution of OpenAI would stop the continuation of AI development. There are too many actors involved, and too many companies and nation states invested in continuing AI development. Even if Eliezer Yudkowsky became president of the United States, he could not stop it.
Eh. The ends do justify the means, but only inasmuch as those means actually do help to achieve the ends — astonishingly often they don't (and, more rarely but still often, they actually move you in the opposite direction from those end goals), and so remain unjustified.
The only meaningful way to affect change against the oligarchy is and always has been violence.
This is not a novel insight.
There is a real, undeniable build-up of political tension. When it fails to be released in the legislative arena, it doesn't dissipate. When we point out that "the quality of life right now is the best it's ever been," it doesn't dissipate. When we try to crush it, it doesn't dissipate. The last remaining pressure release is violence, however condemnable it may be. Perhaps we should, you know, fix participatory democracy rather than pontificating on a natural outcome of a machine we created yet refuse to fix. If fixing it continues to be more difficult than eliminating violence, we should continue to expect violence.
1. https://oll.libertyfund.org/pages/clausewitz-war-as-politics...
2. https://archive.org/details/gilens_and_page_2014_-testing_th...
Most religions rely on a supernatural force judging us post-mortem to balance out the rights and wrongs done during life.
The problem with this, of course, is that there's zero evidence this force exists, and relying on this force to right the wrongs in life only serves to prevent the masses from attempting to correct the wrongs themselves either directly via vigilantism or, more importantly, by replacing existing systems with ones which will serve them better.
I'm all for fixing things first via the soap box and ballot box, but sometimes the ammo box is the only resort left.
The tree of liberty must be refreshed from time to time with the blood of patriots and tyrants.
- Thomas Jefferson
I don't believe we're at that point in the US, but I could certainly understand someone making that claim for a country like Iran. An ongoing conflict has resulted in the violent deaths of literally many thousands of children. The people who enable those deaths are usually safely ensconced thousands of miles away, often living in cushy suburbs.
To emphasize as strongly as I possibly can, I am not advocating for more violence. Quite the contrary, I'm advocating for less. I just don't understand why we have all these adages to convince people that "violence is always wrong", while I'm sure at least some of the people who say that are actively engaged in building machines designed to kill people.
Related, the Substack link you posted is titled "Political Violence is Never The Answer". But our country (and a lot of them) were literally founded on political violence. How do people square those 2 ideas?
Yeah, probably over 50% of the population already, and if not many of the rest soon.
This sci-fi podcast posits a future where The Program has taken over society. It started out as an application for assigning gig work. Eventually, it began to assign gig work for people to act on behalf of its own interests, such as self-protection.
...yet
But we only need things to spiral out of control one time for that to change.
The world as we understand it would have ended if Vasily Arkhipov hadn't vetoed the decision to launch a sub's nuke during the Cuban Missile Crisis.
Is an emotionless AI system in his place ever going to make the same decision he did?
How confident are you we won't put an AI system in his place, particularly when we have to assume if we don't others will?
Your solution also can't be worse than the problem it solves!
Overly clear example: Killing your noisy neighbors actually achieves the end of a quiet home. But that really doesn't justify it.
The "Boston Massacre" involved a crowd of people throwing rocks and balls of ice at soldiers and getting shot at.
But now it's all "Oh, political violence must be avoided at all costs". Now it's "Political violence doesn't work; now let's set off fireworks on July 4th to celebrate the birth of our nation through violence".
Wealth inequality isn't just about economic wellbeing but political power. Separately, the US legislature is almost entirely crippled, only able to pass one major bill per presidential term, while the dominant political party celebrates this and cedes all power to an executive whose intention is to tear apart the administrative state and bring about techno-feudalism.
I once again note that none of the AI leadership has even tried to address government policies to guarantee a baseline of economic wellbeing for our citizens, while they acknowledge AI will likely have massive, disruptive impacts on society and economy. Anthropic is the only one that has shown any public concern for the dangers of AI by insisting on some moral baseline of AI use in the Defense department.
1. The Western world and especially the US is in the process of destroying the UN and other institutions of international law in order to protect Israel, for reasons that I have tried and failed to understand because the propaganda around it is so dense.
2. The Supreme Court made bribery of politicians legal so now we have AI investors with actual governmental power. All restraint efforts will be blocked by the federal government at minimum for these next 3 crucial years.
Look at what happened on r/changemyview. That was over a year ago, using only text, and not only went undetected, but was highly effective at changing opinions.
https://www.reddit.com/r/changemyview/comments/1k8b2hj/meta_...
I am not convinced we need robots. A lot of it is not all that hard to do. For example, better forestry management to prevent forest fires. A lot of cities rebuild big chunks of their infrastructure over a century or so anyway. The problem is more social and political - you get worse forest management because you can blame climate change when it happens.
Yes, but also 100k firefighting robots is kind of a lot. How many firefighting robots should exist in the world? And how many seawall-building robots for the rising sea level? And how many other robots? At what point does the environmental cost of all these robots offset their benefits?
Sam Altman has stated that the AI revolution will “be like an infinite number of immigrants”. That’s a dangerous thing to say when the country’s political environment has convinced half of the voters that all immigrants are rapey, murderey, immoral subhumans.
Also, Sam Altman helped create OpenAI with the original goals of being an ethical non-profit, only to pivot and kick out all of the people who still wanted that original vision. Now several of the LLM CEOs are screaming “we have to stay fully on the accelerator pedal or the Chinese will get there first”, all while abandoning the ethics that supposedly made us better than the Chinese. (And yes, I understand the issues with the Chinese government and that people are different than their government).
Most AI safety workers are just doing creative fiction (what if the AI turns into skynet!?1!!?) and not actual societal safety work that would require dismantling these companies and remolding them to benefit the public.
That sentence is constantly repeated, as if it were some kind of absolute truth. The fact is, for every end, there will probably be some means that are totally justified, and some that are not.
I think the original context is: no matter how high, pure, and perfect the end is, it does not mean that any means is justified.
> How certain do you have to be that your child has terminal cancer, before you start killing puppies? 10% sure? 50% sure? 99.9%? The answer is that it doesn't matter how certain you are, killing puppies doesn't cure cancer.
The inflammatory conclusion of his 2023 writing was that we need to "shut it all down", escalating to bombing datacenters:
> be willing to destroy a rogue datacenter by airstrike.
Now that someone who was an open follower of his words tried to bomb Sam Altman's house and threatened to burn down their datacenters, Yudkowsky is scrambling to backtrack. The X rant tries to argue that "bombing" and "airstrike" are different and therefore you can't say he advocated for bombing anything (a distinction any rationalist would normally pounce on for its logical inconsistency, if it wasn't coming from a famous rationalist figure). He's also trying to blame his hurried writings for TIME for not being clear enough that he was only advocating for state-sponsored airstrikes, not civilian airstrikes, bombs, or attacks. Again that distinction seems like grasping at straws now that he's face to face with the realities of his extremist rhetoric.
AI Doomerism versus Accelerationism are both playful fantasies, it doesn't really matter what measurements or probabilities or observations they make, because the substantive part is the policies they advocate for, and policies are meaningless - all equally worthless - until elected.
What am I saying? The best rebuttal is, get elected.
When the British cavalry came to Virginia in 1781, Thomas Jefferson famously fled the governor’s mansion.
Ah yes, a popular codeword for "I did not get my way".
There is no electoral majority behind the AI doomer cult. It is not a failure of "democracy" that they haven't gotten what they want. It is a failure of their activism, or just the general unpalatability of their wild ideas, or both. They don't get to throw Molotovs just because they lose.
I agree that some technological solution might be the key to dealing with the climate, and that maybe robots would be part of such a solution, maybe powered by similar techniques as the current wave of AI. It's not an insane scenario, but it's worth keeping your perspective open to other possible developments.
The firefighting robots of which you speak already exist.
This is the rhetorical trick that LessWrongers (Yudkowsky's site) have settled on for decades: They have justified everything around the premise that there's a chance, however small, that the world will end. You can't argue that the world ending is a bad thing, so they have their opening for the rest of their arguments, which is that we need to follow their advice to prevent the world maybe ending. They rebut any counterarguments by trying to turn it into a P(doom) debate where we're fighting over how likely this outcome is, but by the time the discussion gets there you've already been forced to accept their argument. Then they push the P(doom) argument aside and try to argue that it doesn't matter how unlikely it is, we have a moral duty to act.
Can't you? Haven't many (most?) countries agreed to nuclear disarmament? What about biological weapons? Even anti-personnel mines, I think?
I don't remember who, but someone made an interesting point about this around the time GPT-4 was released: If the major nuclear powers all understand this, doesn't that make nuclear war more likely the closer any of them get to AGI/ASI? After all, if the other side getting there first guarantees the complete and total defeat of one's own side, a leader may conclude that they don't have anything to lose anymore and launch a nuclear first strike. There are a few arguments for why this would be irrational (e.g. total defeat may, in expectation, be less bad than mutual genocide), but I think it's worth keeping in mind as a possibility.
First: because trusted people having such weaponry is, in expected value, believed to lead to less total violence. Second: because not all such violence is part of what you presumably have in mind when you speak of "ongoing conflict". (Of which there are many; when you speak of "an ongoing conflict" you come across as having a particular agenda, although of course I don't know which.)
> But our country (and a lot of them) were literally founded on political violence. How do people square those 2 ideas?
There is no contradiction and thus nothing to square. People are not responsible for the actions of their ancestors, nor of members of their identity groups, and especially not of the ancestors of members of their identity groups. And there is no contradiction between "the ends don't justify the means" and the ends being just.
I've concluded that there is no universal moral framework. You have to be comfortable with the fact that your perspective is just one of many, but that doesn't mean it's not worth fighting for, it just also means you might be subjected to others' moral frameworks if yours conflicts with theirs. Pretty unsatisfying, but I don't think an alternative conclusion exists that is sound.
They're stories, just like all morality. It seems when cultures get to a certain point in dissolution you have a growing population that has difficulty drawing lines between stories and reality, and forgets what stories are *for* in the first place.
Having aspirational moral systems is critical for a hyperdeveloped mostly-democratic society. It creates a gap between the Best Of Us and the Worst Of Us, and thus suggests a vector. When that aspirational system fails - whether to cynicism or brutality or both matters little - you have a societal collapse incoming or under way.
One depressing example was the progression of the United States' moral judgement on torture during the 21st century. During the worst of the Cold War years I have very few illusions that torture was occurring - extremely imaginative variants in fact. Everyone knew what happens in bush wars - we had quite a few veterans who remembered very clearly. But if in 1963 someone self-identified as a torturer, or recommended we just do it in the open, the same person would be roundly (and justly) castigated[0].
After 9/11, the idea surfaced that yes, we're going to torture, and yes, it's ok to do it. We accept the "realism".
To see the impact of this, well, I could point to a police officer in 1992 and then to a police officer in 2022. I could also point to an Action/Adventure TV program of the 1980s - say, MacGyver - and then point to an Action/Adventure TV program of the 2000s - like, say, 24. Imperial Boomerang is a real thing, turns out, and now we all get to be Fallujah.
In reality, though? The answer to Scalia's "Shouldn't Jack Bauer torture a guy to save Los Angeles?" was always rhetorical[1], but if you took the bait, the correct answer was always, "No", because it destroys the aspirational vector that defines our society. Or, more practically, if for no better reason than the fact a SC justice is legally reasoning from a television show.
[0] The mixed reaction to incidents like My Lai shows how deep this division went. Not all of America thought it was a terrible thing, but we decided we were made of better stuff. Or we wanted to be, which, as it turned out, was also important.
[1] The "ticking time bomb" hypothetical which is almost always presented as a stack of epistemic certainty but which is actually unfalsifiable.
If you're seriously trying to understand the nuance of the act itself, you should consider reading what is standard issue for law enforcement and military.
"On Killing" by Dave Grossman is a classic.
If you only want to understand and stay in the realm of politics, I don't think you'll ever find a good answer either way. There's hypocrisy in every argument for or against violence. None of that is on the minds of people "in the shit" at that time. All that stuff comes later. As you're well aware, PTSD is no joke.
What I would take away from this is to recognize all the other ways in which we are compelled to act against our own self interest under what are sold as higher moral purposes.
From that perspective, it's not that hard to see how people can treat violence as just another tool. Whether it works is a question of how much those people value life above all else. If you're surprised that's not always the case in every culture, you may want to study that first. Beliefs may devalue life for persistence against a long history of conflict. This is where you may start to find some glimmers of an answer why we in the west sometimes think violence works to get those people to "snap out of it", but it really is ultimately about control of those people or that land at the end of the day.
These trite quips act as a way to ensure only the elite ruling class has a justification for the violence they inflict.
My experience has been the polar opposite: The older I get, the more I've seen people come to completely incorrect conclusions that justify their decisions to harm others. This ranges from petty things like spreading gossip, to committing theft from people they don't like ("they had it coming!") to actual physical violence.
In every case, zoom out a little bit and it becomes obvious how their little self-created bubble distorted their reality until they believed that doing something wrong was actually the right and justified move.
I think you're reaching too far to try to disprove the statement in a general context. Few people are going to say "violence is always the wrong answer" in response to someone defending themselves against another person trying to murder them, for example. I think these edge cases get too much emphasis in the context of the article, though. They're used as a wedge to open up the possibility that violence can be justified some times, which turns into a wordplay game to stretch the situation to justify violence.
If we can't agree on that baseline, then it's quite obvious that we'll continue to have an escalation in the types of violence that we've seen in the past few years against the political and corporate classes in the US, with very little end in sight.
This is just survivorship bias. Violence sits at the root of ALL human societies. The vast majority throughout history have failed or are currently failing.
If you're on HN you're probably sitting in one of the lucky, relatively prosperous ones. Violence didn't create the prosperity, otherwise Sudan and Liberia should be the richest countries in the world.
Your relative prosperity came from your ancestors being smart enough to build frameworks to allow a society to mediate scarcity without the need for violence (common law, markets and trade, property rights, etc all enforced via a government monopoly on violence). In fact, any rich country is the result of systems of decentralized scarcity mediation without decentralized violence.
It's the lack of violence which built the relative prosperity you enjoy today. Not the other way around.
“Before we’re through with them, the Japanese language will be spoken only in hell.”
-- Admiral William F. "Bull" Halsey Jr., 1941
That said, it rings hollow. AI doomerism is rooted in Terminator style narratives, and in that narrative, the rogue Sarah Connor changes history (with a lot of violence, explosions, and special effects).
The whole scene is toxic.
That doesn't sound like a non-misleading summary of anything he would say. Do you have a quote or a link?
In the article, the string "kill" occurs twice, both times describing what some future AI would do if the AI labs remain free to keep on their present course. The strings "bomb" and "attack" never occur. The strings "strike" and "destroy" occur once each, and this quote contains both occurrences:
>Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.
>Frame nothing as a conflict between national interests, have it clear that anyone talking of arms races is a fool. That we all live or die as one, in this, is not a policy but a fact of nature. Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.
>That’s the kind of policy change that would cause my partner and I to hold each other, and say to each other that a miracle happened, and now there’s a chance that maybe Nina will live. The sane people hearing about this for the first time and sensibly saying “maybe we should not” deserve to hear, honestly, what it would take to have that happen. And when your policy ask is that large, the only way it goes through is if policymakers realize that if they conduct business as usual, and do what’s politically easy, that means their own kids are going to die too.
how can you be sure? has anyone polled it? are they too scared to poll it?
A common advantageous strategy is to take the randomly-selected topic, however unrelated, and invent a chain of logic that claims that taking a given side/action leads to an infinitesimal risk of nuclear extinction/massive harms. This results in people arguing that e.g. "building more mass transit networks" is a bad idea because it leads to a tiny increase in the risk of extinction--via chains as silly as "mass transit expansion needs energy, increased energy production leads to more EM radiation, evil aliens--if they exist--are very marginally more likely to notice us due to increased radiation and wipe out the human race". That's not a made-up example.
The strategy is just like the LessWrongers' one: if you can put your opponent in the position of trying to reduce P(doom), you can argue that unless it's reduced to actual zero, the magnitude of the potential negative consequence is so severe as to overwhelm any consideration of its probability.
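As a toy illustration of that move (made-up numbers, not anyone's actual estimates): once the stipulated harm is treated as effectively unbounded, any nonzero probability swamps every other consideration.

    # Toy expected-value comparison with made-up numbers: if the stipulated harm
    # is valued as unbounded (extinction, forever), probability stops mattering.
    p_doom = 1e-12                  # a conceded, vanishingly small probability
    harm = float("inf")             # "everyone dies" valued as an unbounded loss
    mundane_benefit = 1e6           # whatever good the opposed policy would do
    expected_harm = p_doom * harm   # inf -- dominates for any p_doom > 0
    print(expected_harm > mundane_benefit)   # True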
In competitive debate, this is a strong strategy. Not a cheat-code--there are plenty of ways around it--but common and enduring for many years.
As an aside: "debate", as practiced competitively, often bears little relation to "debate" as understood by the general public. There are two main families of competitive debate: one is more outward-facing and oriented towards rhetorical/communication/persuasion practice; the other is more ingrown and oriented towards persuading other debaters, in debate-community-specific terms, of which side should win. There's overlap, but the two tend to be practiced/judged by separate groups, according to different rubrics, and/or in different spaces or events. That second family is what I'm referring to above.
If you really believed what Yudkowsky says you would be taking action that maximizes the chances of reducing a clear and present danger.
Between Yudkowsky and the Molotov cocktail guy, which approach do you think had and is having more of an impact?
An individual can rarely, if ever, enact change through violence. The history of nearly all successful movements shows that violence often makes change harder.
Rallying people through speech is a far more successful way for an individual to enact change than violence.
This misses the point. He specifically said the entire world because the point is that someone will develop AGI (theoretically; I’m not making a statement about how close we are to this).
9 nations have nuclear weapons despite non-proliferation agreements and supposed disarmament. It’s not enough for most countries to agree not to build nuclear weapons if the goal is to have no nuclear weapons. Same for AGI. If it can be developed, you need all nations to agree not to develop it if you don’t want it to exist. Otherwise it will simply be developed by nations that don’t agree with you.
(Also arguably the only reason most nations don’t have nuclear weapons is the threat of destruction from nations that already have them if they try.)
That kind of idea might have held water in the 90's, but that's not the world we live in any longer.
(Land follows the above quote with "(But the reflexivity of the latter [capitalism] is implicit.)"[0], which specifies that, for Land, more precisely, "Accelerationism is simply the self-awareness of capitalism"[1].)
[0] Nick Land (2018). Outsideness: 2013-2023, Noumena Institute, p. 71.
[1] Nick Land (2017). A Quick-and-Dirty Introduction to Accelerationism in Jacobite Magazine. Retrieved from github.com/cyborg-nomade/reignition
Go ahead and read Gilens and Page and tell me participatory democracy is working. Until then, expect more of the same impotent condemnations and a refusal to understand the social mechanics producing acts of violence.
> this piece of slop
Citation needed. Or maybe we need to update the title of that children's book for internet arguments: Everyone Who Disagrees With Me Is Slop.
The Yud post you linked is not slop, either. It's not LLM-generated, nor is it insincere. But I do have to point out: He's the one who is slinging the tsunami of words here, not Alexander Campbell.
For this case, imagine that you're an accelerationist, and you want the AI to take over, kill everyone, and usher in a new AI-only age for the planet, and later the universe.
How disappointed are you as this person? It's bottlenecks everywhere. Communities don't want to allow datacenters to be built. You literally want to bring nuclear power plants online just to run a few DCs, and those historically take 10+ years to permit and build. There's not enough AC switchgear and transformers to send power into the DCs, even if you have the power. Chip prices are skyrocketing, and you have to sign a 3-4 year contract to get RAM now.
And meanwhile, the AI doesn't have many robot bodies. Tesla might put some feeble robots into mass production in a few years, but humans can knock those over with a stick into a puddle of water and it's over for that robot. The nuclear arsenals are all still in bunkers and submarines requiring two guys to physically turn keys, and the computers down there are so old they use 8" floppies.
Sure, there's some good progress on autonomous weapons, but a few million self-destructing AI drones built by human hands isn't going to cut it.
So as a hypothetical person hoping that AI destroys everything, you'd be rather impatient, I think, unless you think the AI can trick humanity into destroying itself relatively soon.
Look at what the molotov cocktail guy accomplished by "taking direct action against a clear and present danger": Nothing, besides casting himself as an extremist nut, increasing the resistance to his viewpoint in the population at large.
It's downright dumb to attempt to impose your will via unilateral violence when you aren't in a position to actually complete the goal. Note that that goes whether you're actually right or not.
Part of the point about violence is it has little to do with societal agreement, to start with. It's what happens when that agreement breaks down. And in the end, it can change the agreement.
We understand hard times and are willing to work together to solve problems, but not when leadership is actively harmful.
Fixed that for you.
To rephrase, my point is that phrases like "the ends don't justify the means" and "political violence is never the answer" seem to almost always be applied in very specific contexts, completely ignoring other contexts where many people (I'd say "society at large") are completely OK with the ends justifying the means and political violence.
To use your own sentence, I've seen many people in positions of power "coming to completely incorrect conclusions that justify their decisions to harm others", e.g. why bombing children in their beds is OK.
That only strengthens the argument that violence is sometimes the answer. It doesn't matter that it's not always the right answer, the fact is sometimes it has been, and no society has ever managed to survive without choosing it at some point or another.
Indeed, there is the argument to be made that the capability to choose violence is critical even if you never actually need to choose it. This is the basis of deterrence theory which has arguably been the cornerstone of international peace for decades and the theory of the social contract which has been the source of most people's freedoms and political power. A people who will never stand up for themselves and their friends, no matter what injustice is done upon them, invites that injustice. By simply acknowledging there exists a point beyond which you would retaliate, you discourage others from risking going past that point.
Society evolves through epiphenomena caused by the behaviour of the majority; the fact that some minorities view that evolution as 'flawed' cannot change that evolution, unless they're able to influence the majority to also see it that way.
Now, democracy is essentially a way for everybody to broadcast their views on society's flaws in non-violent ways. The alternative is that some groups broadcast their opinions in violent ways, and we have learned to see that situation as undesirable.
this is a huge blind spot in the whole rationalist and broader STEM cultural-professional community: math isn't the best way to solve problems; most problems are not math problems. SOME of school might be math problems, and it feels good to be a Doctor or a Software Developer Engineer and get your kids to practice "problem solving" - no, they are practicing math problems, not problem solving.
for example there's no math answer to whether or not a piece of land should be a parking lot, or an apartment building, or a homeless shelter, or... you can see how just saying, "whoever is the highest bidder" - that's the math answer, that's why capitalism and accelerationism are related to their core - isn't a good answer. it pretends to be the dominant way we organize land, and of course, it isn't the dominant way we organize land usage anywhere at all, even if we pretend it is. there's no "bidding" for whether a curb should be a disabled parking spot, or a bike lane, or parking, or a restaurant seating, or a parklet, or... these are aesthetic, cultural choices, with meaningless economic tradeoffs. it's not about money, so it's not about math, so math does not provide an answer. there are lots of essential human questions that cannot even be market priced, such as, what should we pay to invent new cures to congenital, terminal illness in children? parents, and a lot of people, would pay "any" price, which is a market failure - but there are a lot of useful political answers to that question. a chatbot cannot answer that question, and it would struggle to take leadership and get elected to answer that question.
mathematicians are basically never elected. so chatbots would not be. and eliezer yudkowsky would not be. are you getting it? capitalism does definitely need to be elected, you might think it wins every election but it very often loses at the local level!
i am agreeing with Hashem Sarkis dean of the MIT SAP and kind of disagreeing with Bong Joon-Ho, for further reading.
If a nuclear power starts SAI, what is everyone else going to do? Shake their fists at the sky, realistically.
If the only people that reach your conclusion are ones that don't actually subscribe to the philosophy, then it doesn't matter, because no one is actually acting on those conclusions.
And if we want to hold people responsible because others pervert their ideas, then we have to accept that Jesus Christ was a horrific, evil person for preaching "Love thy Neighbor"; just look at the crusades that were somehow the "rational conclusion" of that philosophy!
A 20-year-old threw a Molotov cocktail at Sam Altman's house at 3:45 AM Friday. Then walked three miles to OpenAI headquarters and threatened to burn it down. He has been booked on suspicion of attempted murder.
He was not a lone wolf. He was an active member of PauseAI with six community roles. His Discord handle was "Butlerian Jihadist." His Instagram was a feed of doomer content: capability curves captioned "if we do nothing very soon we will die," Venn diagrams placing us at the intersection of The Matrix, Terminator, and Idiocracy. Four months before the attack, he recommended Yudkowsky and Soares' If Anyone Builds It, Everyone Dies to his followers.
His name is Daniel Moreno-Gama.
He had his own Substack. In January he published “AI Existential Risk,” estimating the probability of AI-caused extinction as “nearly certain.” He called the technology “an active threat against anyone who is using it and especially towards the people building it.” He concluded: “We must deal with the threat first and ask questions later.” He wrote a poem imagining the children of AI developers dying, asking their parents why they did nothing. “May Hell be kind to such a vile creature,” he wrote about the builders.
PauseAI has already deleted his messages from their Discord.
For an investing newsletter, I know this is not what most of you are here for. The goal here is to explain where my worldview is coming from, so that the longer-term calls start to make more sense. My ideas behind the “New New Deal” are intended as a direct response to where this is going.
All I am doing here is running their model forward, and connecting the dots.
Here is the framework. It has three moving parts.
Start with certainty. Yudkowsky’s position is that if anyone builds sufficiently intelligent AI, every human being on earth dies. Not probably. Not maybe. Everyone. Your children. His daughter Nina, whom he invokes by name. He published this in TIME. He wrote it in a book called If Anyone Builds It, Everyone Dies. He said we should airstrike data centers, and that the risk of nuclear exchange is preferable to a training run completing.
Purity spiral aka escalation. Within this community, members compete to demonstrate commitment by raising the stakes. P(doom) numbers climb from 50% to 90% to 99.99999%. The Center for AI Safety's national spokesperson said on camera that the correct response is to "walk to the labs across the country and burn them down." PauseAI activated something called a "Warning Shot Protocol" declaring an AI model "a weapon of mass destruction." One of PauseAI's leaders said an Anthropic researcher "deserves whatever is coming to her." When someone flagged this rhetoric in PauseAI's Discord, the mods deleted the post.
The day before the attack, Nate Soares, Yudkowsky's co-author on the very book the kid recommended, tweeted that Altman was "doing terrible stuff."
Then cheap talk gets tested. Game theorists have a term for this: cheap talk is costless signaling that eventually meets reality. When you make the stakes existential for the human race, you can justify any level of extremism if it lowers the hallowed p(doom). These aren't isolated incidents. They are a series of escalating and mutually reinforcing claims around an eschatological philosophy that, taken to its conclusion, would accept killing 99% of the world to save the last 1%.
It was only a matter of time before someone took the framework at face value. The kid read the book. He joined the community. He wrote his own manifesto. In a memoir for his community college English class, he described himself as a consequentialist: “I give very little credence to intentions if the results do not match.” He chose “Butlerian Jihadist” as his name. On December 3rd he wrote in PauseAI’s Discord: “We are close to midnight it’s time to actually act.”
Then he acted.
They gave him a trolley problem. One life versus all of humanity. The kid pulled the lever.
There is a final irony that deserves attention. If the doomers truly hold their stated beliefs at their stated confidence levels, they should be more honest about what those beliefs imply. A few weeks before the attack, a journalist asked Yudkowsky: if AI is so dangerous, why aren't you attacking data centers? His answer, relayed by Soares: "If you saw a headline saying I'd done that, would you say, 'wow, AI has been stopped, we're safe'? If not, you already know it wouldn't be effective."
Notice what that answer is not. It is not “because violence is wrong.” It is “because it wouldn’t work yet.” The restraint is strategic, not moral. And the community knows it. The dark undercurrent is an unspoken agreement: the kid’s greatest sin was bad timing.
This is what I mean by intelligence not equaling power, and it is the deepest flaw in the entire doomer worldview.
Yudkowsky’s framework rests on the conflation: a sufficiently intelligent AI will necessarily acquire the power to destroy humanity because intelligence converts automatically into capability. Most of his followers are not technical. They do not build AI systems or work on alignment engineering. They possess a particular kind of verbal intelligence that lets them construct elaborate arguments about risk, and they have convinced themselves this entitles them to a priestly authority over the technology. They can construct the argument. They cannot build the system.
This isn’t accidental. It’s baked into the foundational texts. Yudkowsky’s Harry Potter and the Methods of Rationality literally models a world where the person who reasons best deserves to override every institution around him. The Sequences build the liturgy: a small caste of correct thinkers, epistemically and morally superior, whose rationality entitles them to govern what the rest of humanity is allowed to build. It’s not a safety movement. It’s a priesthood with an origin story written in fanfiction.
Yudkowsky can distance himself from the kid with the Molotov. But he cannot distance himself from the syllogism. If the builders are going to kill everyone, stopping the builders is self-defense. That is the central claim, stated plainly. The only question was always when someone would take it at face value.
They should stop acting surprised when their own logic shows up at 3:45 AM with a bottle full of gasoline.
Disclaimers
I am not advocating for or against any position on AI safety. I am observing that a framework built on certainty of extinction produces predictable consequences. The suspect is innocent until proven guilty.
These views do not represent those of any investors, clients or affiliates of Rose.
When you talk about "participatory democracy" in a thread like this, you are enabling them in their delusion that people do care. The AI safetyist think tanks put out these pushpolls trying to convince themselves that voters care about AI doom. They seal up the walls of their echo chamber, and they believe themselves to be heroes. Then one day, one of them throws a Molotov, and nobody is surprised.
I think the majority of the population at large either doesn't care about what happened or wish that the guy had actually managed to kill Altman. Not even necessarily because Altman is involved with AI, but just because he is extremely rich. I don't imagine any increased resistance from the population at large - the population at large either doesn't mind when rich people are killed or loves it. The exceptions would be people like entertainers who develop a parasocial relationship with the public and provide direct joy to people, but AI company leaders don't fall into that category.
That said, it is true that killing Altman would almost certainly achieve nothing. See my other post in this thread.
If you wanted to be a contrarian concerned about x-risks go try to find $1B to pay Embraer or another minor aviation vendor to make a plane to do stratospheric aerosol injection or something.
---
If you want my diagnosis, it is this: in a time of lower social inequality, cults frequently tried to steal labor and money from a broad base of people.
For instance, in the L. Ron Hubbard age, Scientology would treat you as a "public" if you had money to take, and if you didn't, or after you'd been bled dry, you would be recruited as "staff". Hubbard thought it was immoral to take donations without giving something in return, so it was centered around getting people to spend on "auditing". Between Dianetics in 1950 and the current Miscavige age, income and wealth have gotten concentrated, and he changed that single element of the Hubbard doctrine; now it is all about recruiting money from "whales" who donate to the International Association of Scientologists (IAS).
https://tonyortega.substack.com/p/scientologys-ias-trophy-wi...
(A good backgrounder on pernicious cults is https://en.wikipedia.org/wiki/Snapping:_America%27s_Epidemic...)
In the case of the Yudkowsky thing, the mass just doesn't have a lot of money to steal after paying the rent, and turning the labor of the unskilled and ignorant (even if they think otherwise) into money is a case of the juice not being worth the squeeze, so the point is to build a Potemkin village that looks like a social movement, which creates a frame where you can get money from sources such as "SBF steals it and gives it to the movement" as well as "rich kids who inherited a lot of money but don't have a lot of sense".
Maybe write it up and post a top-level comment if you think it's a point worth making.
But OP was referring to political violence...which...how do I put this delicately...let's just say political polarization has led certain very-online members of the US populist-left, some of whom hang out here for example, to try to expand the Overton Window into bolshevism. See also: Luigi fans.
My point is that the most likely outcome of violent political overthrow is not utopia. The most likely outcome is a failed state and another violent overthrow. Political violence doesn't create anything, it only destroys. And creating is the hard part.
It's like saying; "at the birth of all successful people was a person who shit their pants. So why not try shitting your pants as an adult?"
Yes, one always precedes the other. But it has no correlation to whether the person becomes successful or not.
No, I am saying that Yudkowsky's views are straightforwardly compatible with bedrock principles of liberalism, and the author of the piece fails to acknowledge that compatibility or grapple with them himself. It's not about "rationalism" or who is "allowed" to speculate.
I called it slop because it says false things that have the hallmark of LLM style, e.g.
> The Sequences build the liturgy: a small caste of correct thinkers, epistemically and morally superior, whose rationality entitles them to govern what the rest of humanity is allowed to build. It’s not a safety movement. It’s a priesthood with an origin story written in fanfiction.
Nuclear weapon proliferation seems to have plateaued recently, but I think that this appearance is partly deceptive. The main reasons it has plateaued is that: 1) building and maintaining nuclear weapons is expensive, 2) there are powerful countries that are willing to use military force to stop some other countries from developing nukes, and 3) many countries have reached nuclear latency (the ability to build nuclear weapons very quickly once the political order is given to do it) and are only avoiding actually giving the order to build nukes because they don't see a current important-enough reason to do it.
That's not what you said. You were talking about society as a whole, not narrow contexts. I'll re-quote your original comment that I was responding to:
> statements like "the ends don't justify the means" and "violence is always the wrong answer" are, at best, wildly logically inconsistent in any society at any time, and at worst, designed to ensure only a very few people in power can commit violence.
I was responding to your "at best, wildly logically inconsistent in any society at any given time" claim.
Which is precisely why they've resorted to violence.
We can do better than denigrating positions as "hobbyhorse." HN deserves better than that.
That's just because LLMs were likely trained on a decade plus of human-generated Medium, Substack, Quora, and LinkedIn post slop.
The prohibitions aren't perfect, of course (and not without their own negative externalities in some cases). But all of those things are much more accessible to people than nuclear weapons, and we've still had successes in banning/reducing them. So maybe there's hope yet.
Beyond that, I can't help you with your reading comprehension.
I'll answer with a quote from the founder of the Rationalist movement, Eliezer:
"How certain do you have to be that your child has terminal cancer, before you start killing puppies? 10% sure? 50% sure? 99.9%? The answer is that it doesn't matter how certain you are, killing puppies doesn't cure cancer."