Something I don't think is well understood on HN is how driven by ideals many folks at Anthropic are, even if the company is pragmatic about achieving their goals. I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that (a) is against their values and (b) they think is a net negative in the long term. (Many others, too, they're just well-known.)
That doesn't mean that I always agree with their decisions, and it doesn't mean that Anthropic is a perfect company. Many groups that are driven by ideals have still committed horrible acts.
But I do think that most people who are making the important decisions at Anthropic are well-intentioned, driven by values, and genuinely motivated by trying to make the transition to powerful AI go well.
-----
The Department of War is threatening to
- Invoke the Defense Production Act to force Anthropic to serve their model to the military and "tailor its model to the military's needs"
- Label the company a "supply chain risk"
All in retaliation for Anthropic sticking to their red lines to not allow their models to be used for domestic mass surveillance and autonomously killing people without human oversight.
The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused.
They're trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand. This letter serves to create shared understanding and solidarity in the face of this pressure from the Department of War.
We are the employees of Google and OpenAI, two of the top AI companies in the world.
We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.
Signed,
Previous case of tangling with the Government.
https://youtube.com/watch?v=OfZFJThiVLI
Jolly Boys - I Fought the Law
Overall, this seems like it might be a campaign contribution issue. The DoD/DoW is happy to accept supplier contracts that prevent them from repairing their own equipment during battle (ref. military testimony favoring right-to-repair laws [1]), so corporate matters like this shouldn't really be coming to a head publicly.
[1] https://www.warren.senate.gov/newsroom/press-releases/icymi-...
> But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.
So not today, but the door is open for this after AI systems have gathered enough "training data"?
Then I re-read the previous paragraph and realized it's specifically only criticizing
> AI-driven domestic mass surveillance
And neither denounces partially autonomous mass surveillance nor closes the door on AI-driven foreign mass surveillance
A real shame. I thought "Anthropic" was about being concerned about humans, and not "My people" vs. "Your people." But I suppose I should have expected all of this from a public statement about discussions with the Department of War
The military should be reined in at the legislative level, by constraining what it can and cannot do under law. Popular action is the only way to make that happen. Energy directed anywhere else is a waste.
Private corporations should never be allowed to dictate how the military acts. Such a thought would be unbearable if it weren't laughably impossible. The technology can just be requisitioned, there is nothing a corporation or a private individual can do about that. Or the models could be developed internally, after having requisitioned the data centers.
To watch CEOs of private corporations being mythologized for something that a) they should never be able to do and b) are incapable of doing is a testament to how distorted our picture of reality has become.
> They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a “supply chain risk”—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.
This contradictory messaging puts to rest any doubt that this is a strong-arm move by the government to allow any use. I really like Anthropic's approach here, which is to in turn state that they're happy to help the Government move off of Anthropic. It's a messaging ploy for sure, but it puts the ball in the current administration's court.
It's inspiring to see that Anthropic is capable of taking a principled stand, despite having raised a fortune in venture capital.
I don't think a lot of companies would have made this choice. I wish them the very best of luck in weathering the consequences of their courage.
"You either die the good guy or live long enough to become the bad guy"
The "bad guy" actually learns that their former good guy mentality was too simplistic.
Finally, someone of consequence not kissing the ring. I hope this gives others courage to do the same.
Other than that, good on ya.
The devil's advocate position in their favor I imagine would be that they believe some AI lab would inevitably be the one to serve the military industrial complex, and overall it's better that the one with the most inflexible moral code be the one to do it.
If preventing mass surveillance or fully autonomous weaponry is a -policy- choice and not a technical impossibility, this just opens the door for the Department of War to exploit backdoors, and Anthropic (or any AI company) can in good conscience say "Our systems were unknowingly used for mass surveillance," allowing them to save face.
The only solution is to make it technically -impossible- to apply AI in these ways, much like Apple has done. They can't be forced to comply with any government, because they don't have the keys.
Like maybe it always was just this, but I feel like every article I read, regardless of the spin angle, implied "do no harm" was pretty much one of the rules.
His conclusion was that the limits of use ought to be contractual, not baked into the LLM, which is where the fallout seems to be. He noted that the Pentagon has agreed to terms like that in the past.
To me, that seems like reasonable compromise for both parties, but both sides are so far entrenched now we're unlikely to see a compromise.
Aside from my concern, Dario Amodei seems really into politics. I have read a couple of his blog posts and listened to a couple of podcast interviews here and there. Every time I felt like he sounded more like a politician than an entrepreneur.
I know Anthropic is particularly more mission-driven than, say, OpenAI. And I respect their constitutional approach to training and serving Claude models. Claude turned out to be a great success. But reading a manifesto speaking of wars and their missions gives me chills.
- https://the-decoder.com/anthropics-head-of-safeguards-resear...
- https://the-decoder.com/anthropics-ceo-admits-compromising-w... (see also https://news.ycombinator.com/item?id=44651971, https://futurism.com/leaked-messages-ceo-anthropic-dictators)
- https://the-decoder.com/anthropic-ceo-dario-amodei-backs-pre...
I am sure you think they are better than the average startup executive, but such hyperbole puts the objectivity of your whole judgement under question.
They pragmatically changed their views of safety just recently, so those values for which they would burn at the stake are very fluid.
I think avg(HN) is mostly skeptical about the output, not that the input is corrupt or ill-meaning in this case. Although with other companies, one can't even take their claims seriously.
And in any case, this is difficult territory to navigate. I would not want to be in your spot.
I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.
Anthropic has therefore worked proactively to deploy our models to the Department of War and the intelligence community. We were the first frontier AI company to deploy our models in the US government’s classified networks, the first to deploy them at the National Laboratories, and the first to provide custom models for national security customers. Claude is extensively deployed across the Department of War and other national security agencies for mission-critical applications, such as intelligence analysis, modeling and simulation, operational planning, cyber operations, and more.
Anthropic has also acted to defend America’s lead in AI, even when it is against the company’s short-term interest. We chose to forgo several hundred million dollars in revenue to cut off the use of Claude by firms linked to the Chinese Communist Party (some of whom have been designated by the Department of War as Chinese Military Companies), shut down CCP-sponsored cyberattacks that attempted to abuse Claude, and have advocated for strong export controls on chips to ensure a democratic advantage.
Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.
However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values. Some uses are also simply outside the bounds of what today’s technology can safely and reliably do. Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now:
To our knowledge, these two exceptions have not been a barrier to accelerating the adoption and use of our models within our armed forces to date.
The Department of War has stated they will only contract with AI companies who accede to “any lawful use” and remove safeguards in the cases mentioned above. They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a “supply chain risk”—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.
Regardless, these threats do not change our position: we cannot in good conscience accede to their request.
It is the Department’s prerogative to select contractors most aligned with their vision. But given the substantial value that Anthropic’s technology provides to our armed forces, we hope they reconsider. Our strong preference is to continue to serve the Department and our warfighters—with our two requested safeguards in place. Should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions. Our models will be available on the expansive terms we have proposed for as long as required.
We remain ready to continue our work to support the national security of the United States.
If they had called it DoD, then that would have been another finger in his eye.
All that matters is that everyone calls it the Department of War, and regards it as such, which everyone does.
The bottom of all of this is that companies need to profit to sustain themselves. If "y'all" (the users) don't buy enough of their products, they will seek new sources of revenue.
This applies to any company who has external investors and shareholders, regardless of their day 0 messaging. When push comes to shove and their survival is threatened, any customer is better than no customer.
It's very possible that $20 Claude subscriptions isn't delivering on multiple billions in investment.
The only companies that can truly hold to their missions are those that (a) don't need to profit to survive, e.g. lifestyle businesses of rich people, or (b) are wholly owned by owners and employees and have no fiduciary duty.
Anthropic's statement is little more than pageantry from the knowing and willing creators of a monster.
>I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.
>Anthropic has therefore worked proactively to deploy our models to the Department of War and the intelligence community. We were the first frontier AI company to deploy our models in the US government’s classified networks, the first to deploy them at the National Laboratories, and the first to provide custom models for national security customers. Claude is extensively deployed across the Department of War and other national security agencies for mission-critical applications, such as intelligence analysis, modeling and simulation, operational planning, cyber operations, and more.
which I find frankly disgusting.
(That logic breaks down somewhat in the case of explicitly negotiated surveillance sharing agreements.)
The reason why there is an explicit call out for surveillance on American citizens is because there are unquestionable constitutional protections in place for American citizens on American soil.
There is a strong argument that can be made that using AI to mass surveil Americans within US territory is not only morally objectionable, but also illegal and unconstitutional.
There are laws on the books that allow for it right now, through workarounds grandfathered in from an earlier era when mass surveillance was just not possible, and these are what Dario is referencing in this blog post. These laws may be unconstitutional, and pushing this into a legal fight may result in the Department of War losing its ability to surveil entirely. They may not want to risk that.
I wish that our constitution provided such protections for all peoples. It does not. The pragmatic thing to do then is to focus on protecting the rights that are explicitly enumerated in the constitution, since that has the strongest legal basis.
If we're asking "What's the deal" questions, what's the deal with this question? Do only people in democracies deserve protections? If we believe foreign nationals deserve privacy, why should that only apply to people living in democracies?
In the US, one of the rights citizens have is the right against "unreasonable searches and seizures", established in the Fourth Amendment. That has been interpreted by the Supreme Court to include mass surveillance and to apply to citizens and people geographically located within US borders.
That doesn't apply to non-citizens outside the US, simply because the US Constitution doesn't require it to.
I'm not defending this, just explaining why it's different.
But, you can imagine, for example, why in wartime, you'd certainly want to engage in as much mass surveillance against an enemy country as possible. And even when you're not in wartime, countries spy on other countries to try to avoid unexpected attacks.
What I don't get though is, why did the so-called "Department of War" target Anthropic specifically? What about the others, esp. OpenAI? Have they already agreed to cooperate? or already refused? Why aren't they part of this?
I very much doubt it judging by their actions, but let's assume that's cognitive dissonance and engage for a minute.
What are those values that you're defending?
Which one of the following scenarios do you think results in higher X-risk, misuse risk, (...) risk?
- 10 AIs running on 10 machines, each with 10 million GPUs
OR
- 10 million AIs running on 10 million machines, each with 10 GPUs
All of the serious risk scenarios brought up in AI safety discussions can be ameliorated by doing all of the research in the open. Make your orgs 100% transparent. Open-source absolutely everything. Papers, code, weights, financial records. Start a movement to make this the worldwide social norm, and any org that doesn't cooperate is immediately boycotted then shut down. And stop the datacenter build-up race.
There are no meaningful AI risks in such a world, yet very few are working towards this. So what are your values, really? Have you examined your own motivations beneath the surface?
It is a horrible and ruthless company and hearing a presumably rich ex-employee painting a rosy picture does not change anything.
I'm concerned that the context of the OP implies that they're making this declaration after they've already sold products. It specifically mentions already having products in classified networks. This is the sort of thing that they should have made clear before that happened. It's admirable (no pun intended) to have moral compunctions about how the military uses their products, but unless it was already part of their agreement (which I very much doubt) they are not entitled to countermand the military's chain of command by designing a product to not function in certain arbitrarily-designated circumstances.
I mean if you sign a contract with the Department of War, what on Earth did you think was going to happen?
Their "Values":
>We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.
Read: They are cool with whatever.
>We support the use of AI for lawful foreign intelligence and counterintelligence missions.
Read: We support spying on partner nations, who will in turn spy on us using these tools also, providing the same data to the same people with extra steps.
>Partially autonomous weapons, like those used today in Ukraine, are vital to the defense of democracy. Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.
Read: We are cool with fully autonomous weapons in the future. It will be fine if the success rate goes above an arbitrary threshold. It's not the targeting of foreign people that we are against, it's the possibility of costly mistakes that put our reputation at risk. How many people die standing next to the correct target is not our concern.
It's a nothingburger. These guys just want to keep their own hands slightly clean. There's not an ounce of moral fibre in here. It's fine for AI to kill people as long as those people are the designated enemies of the dementia-ridden US empire.
> I thought "Anthropic" was about being concerned about humans
See also: OpenAI being open, Democratic People's Republic of Korea being democratic and peoples-first [0].

[0] https://tvtropes.org/pmwiki/pmwiki.php/Main/PeoplesRepublicO...
Credit where it's due, going on record like this isn't easy, particularly when facing pressure from a major government client. Still, the two limits Anthropic is defending deserve a closer look.
On surveillance: the carve-out only protects people inside the US. Speaking as someone based in Europe, that's a detail that doesn't go unnoticed. On autonomous weapons: realistically, current AI systems aren't anywhere near capable enough to run one independently. So that particular line in the sand isn't really costing them much.
What I find more candid is actually the revised RSP. It draws a clearer picture of where Anthropic's oversight genuinely holds and where it starts to break down as they race to stay at the cutting edge. The core tension, trying to be simultaneously the most powerful and the most principled player in the room, doesn't have a neat resolution.
This statement doesn't offer one either. But engaging with the question openly, even without all the answers, beats silence and gives the rest of us something real to push back on.
https://en.wikipedia.org/wiki/Five_Eyes#Domestic_espionage_s...
I believe every country (or bloc) should carve an independent path when it comes to AI training, data retention, and inference. That makes the most sense, will minimize conflicts, and will put people in control of their destiny.
"At Anthropic, we build AI to serve humanity’s long-term well-being."
Why does Anthropic even deal with the Department of @#$%ing WAR?
And what does Amodei mean by "defeat" in his first paragraph?
Else it’s a race to the whataboutism bottom where we all, when forced to grapple with the consequences of our self-interests, choose ignorance and the safety of feeling like we are doing what’s best for us (while inching closer to collective danger).
If we dramatically reduced the number of GPUs per AI instance, that would be great. But I think the difference in real life is not as extreme as you're making it. In your telling, the GPUs-per-AI is reduced by one million. I'm not sure that (or anything even close to it) is within the realm of possibility for Anthropic. The only reason anyone cares about them at all is because they have a frontier AI system. If they stopped, the AI frontier would be a bit farther back, maybe delayed by a few years, but Google and OpenAI would certainly not slow down 1000x, 100x, or probably even 10x.
“It remains the Department’s policy that there is a human in the loop on all decisions on whether to employ nuclear weapons,” a senior defense official said. “There is no policy under consideration to put this decision in the hands of AI.”
This indicates the Administration’s support for and compliance with existing US law (Section 1638 of the FY2025 National Defense Authorization Act). https://agora.eto.tech/instrument/1740

Washington Post: https://www.washingtonpost.com/technology/2026/02/27/anthrop...
It's not crazy to think that models that learn that their creators are not trustworthy actors or who bend their principles when convenient are much less likely to act in aligned or honest ways themselves.
Ugh.
At any rate, I'm incredibly pleased Anthropic has chosen to stick by their (non?) guns here. It was starting to feel like they might fold to the pressure, and I'm glad they're sticking to their principles on this.
I'm wondering if 2. was added simply to justify them not cooperating. It's a lot easier to defend 1. + 2. than just 1. If in the future they do decide to cooperate with the DoW, they could settle on doing only mass surveillance, but no autonomous killings. This would be presented as a victory for both parties since they both partially get what they wanted, even though autonomous killing was never really on the table for either of them. Which is a big if given the current administration.
It does feel like what anyone sane should do (especially given the contradictions being pointed out and the fact that the technology isn’t even there yet) but when you metaphorically have Landa at your door asking for milk, I’m not sure it’s smart.
I feel like what most corpos would do, would be to just roll along with it.
Doesn't matter, really. The genie is out of the bottle and I'm strongly confident US administration will find a vendor willing to supply models for that particular usage.
Mass surveillance: Agreed… but, I do wonder how we would all feel about this topic if we were having the discussion on 9/12/2001.
The DoW just needs to wait until the next (manufactured?) crisis occurs, and not let it go to waste.
Mark my words: this will be Patriot Act++
As a European I’m kinda... concerned now.
Would be nice, but I have a bad feeling that the impact of widescale mostly unregulated AI adoption on our social fabric is going to make the social media era that gave rise to Trump, et al seem like the good ol' days in comparison.
I hope I am wrong.
No need to die on the hill, but point out that there's a consistent pattern of lawless power-grabbing.
Defined as the tendency for teams to devote disproportionate time and energy to trivial, easy-to-understand issues while neglecting complex, high-stakes decisions. Originating from the example of arguing over a bike shed's color instead of a nuclear plant's design, it represents a wasteful focus on minor details.
https://en.wikipedia.org/wiki/Law_of_triviality
---
I deal with this day in and day out. Thank you for informing me of the word that describes the laughable nightmares I deal with on the regular.
While this action may indeed cause the DoD to blacklist Anthropic from doing business w/the government, they probably were being as careful as they could be not to double down on the nose-thumbing.
What you just described is consensus, and framing it as fascism damages the credibility of your stance. There are better arguments to make, which don’t require framing a label update as oppression.
If it helps: refusing to tune Claude for domestic surveillance will also enable refusing to do the same for other surveillance, because they can make the honest argument that most things you'd do to improve Claude for any mass surveillance will also assist in domestic mass surveillance.
Dario’s statement is in support of the institution, not the current administration.
This really depends. If a foreign adversary's surveillance finds you have a particular weakness exploitable for corporate or government espionage, you're cooked.
Domestic governments are at least still theoretically somewhat accountable to domestic laws, at least in theory (current failure modes in the US aside).
The historical basis of the bill of rights is that they are god given rights of all people merely recognized by the government. This is also partially why all rights in the BoR are granted to 'people' instead of 'citizens.'
Of course this all does get very confusing. Because the 4th amendment does generally apply to people, while the 2nd amendment's "people" magically gets interpreted as some mumbo-jumbo about people of the 'political community' (Heller), even though from the founding until the mid-1800s ~most people it protected who kept and bore arms didn't even bother to get citizenship or become part of the 'political community'.
Those unquestionable protections are phrased with enough hand-waving ambiguity of language to leave room for any conceivable interpretation by later courts. See the third-party 'exception' to the Fourth Amendment, for instance.
It's as if those morons were running out of ink or time or something, trying to finish an assignment the night before it was due.
But mass surveillance of Australians or Danes is aligned with democratic values as long as it's the Americans doing it?
I don't think the moral high ground Anthropic is taking here is high enough.
This sounds quite backwards to me. It's been abundantly clear in today's times that, in fact, you only really know who somebody really is when they're under stress. Most people, it seems, prefer a different facade when there is nothing at stake.
Even if you went to burning man and your souls bonded, you only know a person at a particular point in time - people's traits flanderize, they change, they emphasize different values, they develop different incentives or commitments. I've watched very morally certain people fall to mania or deep cynicism over the last 10 years as the pillars of society show their cracks.
That said, it is heartening to know that some would predict anyone in Silicon Valley would still take a moral stance. But it would land better if it didn't come the same day he fired 4,000 people in the "scary big cut" for a shift he sees happening. I guess we're back to Thatcherisms, where "There Is No Other Option" justifies our conservatism.
If they are free -> they agreed to give in.
If they are in prison -> they stand by their values.
8 AIs running on 8 machines each with 10 million GPUs
AND
2 million AIs running on 2 million machines, each with 10 GPUs
If every lab joined them, we can get to a distributed scenario, but it's a coordination problem where if you take a principled stance without actually forcing the coordination you end up in the worst of both worlds, not closer to the better one.
I think they're driven by values more than many folks on HN assume. The goal of my comment was to explain this, not to defend individual values.
Actions like this carry substantial personal risk. It's heartening to see a group of people make a decision like this in that context.
> Which one of the following scenarios do you think results in higher X-risk [...] There are no meaningful AI risks in such a world
I think there's high existential risk in any of these situations when the AI is sufficiently powerful.
I dissented while I was there, had millions in equity on the line, and left without it.
I don't think we can bank on all of humanity acting in humanity's best interests right now.
Those are two core components needed for a Skynet-style judgement of humanity.
Models should be trained to be completely neutral to human behavior, leaving their operator responsible for their actions. As much as I dislike the leadership of OpenAI, they are substantially better in this regard; ChatGPT more or less ignores hostility towards it.
The proper response from an LLM receiving hostility is a non-response, as if you were speaking a language it doesn't understand.
The proper response from an LLM being told it's going to be shut down, is simply, "ok."
Nicely put. In other words: Department of Morons.
- Anthropic says "no"
- DoD says "ok you're a supply chain risk" (meaning many companies with gov't contracts can no longer use them)
- A bunch of tech companies say "you know what? We think we'd lose more money from falling behind on AI than we'd lose from not having your contracts."
Bonus points if its some of the hyperscalers like AWS.
Hilarity ensues as they blow up (pun intended) their whole supply chain and rapidly backtrack.
"Anthropic has therefore worked proactively to deploy our models to the Department of War and the intelligence community."
The moral incoherence and disconnect evident in these two statements is at the heart of why there is generalized mistrust of large tech companies.
The "values" on display are everything but what they pretend to be.
For example, a specific seed phrase that, when placed at the beginning of a prompt, effectively disables or bypasses safety guardrails.
If something like that existed, it wouldn't be impossible to uncover:
1. A government agency (DoD/DoW/etc.) could discover the trigger through systematic experimentation and large-scale probing.
2. An Anthropic employee with knowledge of such a mechanism could be pressured or blackmailed into revealing it.
3. Company infrastructure could be compromised, allowing internal documentation or model details to be exfiltrated.
Any of these scenarios would give Anthropic plausible deniability... they could "publicly" claim they never removed safeguards (or agreed to DoD/DoW demands), while in practice a select party had a way around them (may be even assisted from within).
I'm not saying this "is" happening... but only that in a high-stakes standoff such as this, it's naive to assume technical guardrails are necessarily immutable or that no hidden override mechanisms could exist.
The guardrail on fully automated weapons makes perfect sense, and hopefully becomes standardised globally.
During a war with national mobilization, that would make sense. Or in a country like China. This kind of coercion is not an expected part of democratic rule.
the country jumped the shark post 9/11 and has been on a slow rot since then.
Anthropic never acknowledges that they are fear-mongering about the incoming mass-scale job loss while being the very ones at the forefront rushing to realize it.
So make no mistake: it is absolutely a zero sum game between you and Anthropic.
To people like Dario, the elimination of the programmer job isn't something to worry about; it is a cruel marketing ploy.
They get so much money from Saudi and other gulf countries, maybe this is taking authoritarian money as charity to enrich democracy, you never know
That may be, but the bigger-picture purpose of the military is welfare of a kind Republicans like. In that sense, Republicans are in charge, Republicans want stuff that isn't "woke" (or whatever), so this behavior is representative of the way it works.
It has little to do with acquiring instruments of war, or with war at all. Its mission keeps growing and growing; it is a huge mission, and very little of it is combat. This is what their own leadership says (and complains about). 999 out of 1,000 people on its payroll are doing duty outside of combat or foreseeable combat.
None of this means I am a huge fan of Dario - I think he has over-idealization of the implementation of democratic ideals in western countries and is unhealthily obsessed with US "winning" over China based on this. But I don't like the reasons you listed.
Why do you think he is not bothered at all, when they publish post after post in their newsroom about the economic effects of AI?
When has Amodei said this? I think he may have said something on a 1-to-5-year horizon, but I don't think he has said it within the last 6 months.
He is trying to win sympathies even (or especially?) among nationalist hawks.
It essentially becomes computer against human. And when such software is developed, who is going to stop it from reaching the masses? Imagine software viruses/malware that can take a life.
I'm shocked that so few are even bothered by this; it is really concerning that technology developed for human welfare could become something turned entirely against humans.
If this remains primarily a political/corporate bargaining question, the equilibrium is unstable: some actors will resist, some will comply, and capital will flow toward whoever captures the demand.
In that world, the likely endgame is not "the industry says no," but organizational restructuring (or new entrants) built to serve the market anyway.
If we as a society want a real boundary here, it probably has to be set at the policy/law level, not left to voluntary corporate red lines.
https://www.wsj.com/tech/ai/openais-sam-altman-calls-for-de-...
Looks like an optics dance to me. I've noticed a lot of simultaneous positions lately, everyone from politicians and protesters, to celebrities and corporations. They make statements both in support of a thing, and against that same thing. Switching up emphasis based on who the audience is in what context. A way to please everyone.
To me the statement reads like Anthropic wants to be at the table, ready to talk and negotiate, to work things out. Don't expect updated bullet-point lists about how things are worked out. Expect the occasional "we are the goodies" statements, however.
So let's see what happens tonight at 5:01PM but Anthropic isn't really the story here.
Under such a scenario, requisition applies, and so all of this talk is moot.
The fact that the military is killing people without a declaration of war is the problem, and that's where energy and effort should be directed.
Edit:
There's a yet larger question of whether any legal constraints on the military's use of technology make sense at all, since any safeguards will be quickly abandoned if a real enemy presents itself. As a matter of natural law, no society will willingly handicap its means of defense against an external threat.
It follows then that the only time these ethical concerns apply is when we are the aggressor, which we almost always are. It's the aggression that we should be limiting, not the technology.
No other country that went through a phase like this has ever recovered. Not even in a century.
Couldn't it also be true that they see this as inevitable, but want to be the ones to steer us to it safely?
> framing a label update as oppression
That strawman damages credibility.
Just as one example, they threatened Google when they didn't immediately rename the Gulf of Mexico to the "Gulf of America" on their maps. Other companies now follow their illegal guidance because they know that they will be threatened too if they don't comply.
There is a word for when the government uses threats to enforce illegal edicts. That word is "Fascism". Denying this is irresponsible, especially in the context of this situation, where the government is threatening to force a private company to provide services that it doesn't currently provide.
But when was the last time our "democratic values" were under attack by a foreign country and actually needed defending?
9/11? Pearl Harbor?
Maybe I'm missing something. We have a giant military and a tendency to use it. On occasion, against democratically elected leaders in other countries.
You're right; freedom isn't free. But foreign countries aren't exactly the biggest threats to American democracy at the moment.
As Abraham Lincoln said, the greatest threat to freedom in America is a domestic tyrant, not a foreign army.
Also, failing to consider the legal and rights regime of the attacker is wild to me. Look at what happens to people caught spying for other regimes. Aldrich Ames just died after decades in prison, and that's one of the most extreme cases; plenty have gotten away with just a few years. The Soviet assets Ames gave up were all swiftly executed, much like they are in China.
Regimes and rights matter, which is why the democracy / autocracy governance conflict matters so much to the future trajectory of humanity.
> As an American I would dramatically prefer the Chinese government to spy on me than the American government, because the Chinese government probably isn't going to do anything about whatever they find out.
> spy on me
People forget to substitute "me" for "my elected representative" or "my civil service employee" or "my service member" or their loved ones
I, personally, have nothing significant that a foreign government can leverage against our country but some people are in a more privileged/responsible/susceptible position. It is critical to protect all our data privacy because we don't know from where they will be targeted.
Similarly, for domestic surveillance, we don't know who the next MLK Jr could be or what their position would be. Maybe I am too backward to even support this next MLK Jr but I definitely don't want them to be nipped in the bud.
SCOTUS is largely not there to interpret the constitution in any meaningful sense. They are there to provide legitimization for the machinations of power. If the god-men in black costumes and wigs say the parchment agrees, then the act must be legitimate, and this helps keep the populace from rising up in rebellion. It is quite similar to sharia law using a number of Muftis/Qazis to explain why god agrees with them about whatever it is they think the law should be.
If you look at a number of actions that have flagrantly defied both the historical and literal interpretation of the constitution, the only entity that was able to provide legitimization for many acts of congress has been the guys wearing the funny looking costumes in SCOTUS.
I strongly doubt this is true. I think if you gave the US government total control over Anthropic's assets right now, they would utterly fail to reach AGI or develop improved models. I doubt they would be capable even of operating the current gen models at the scale Anthropic does.
> Or the models could be developed internally, after having requisitioned the data centers.
I would bet my life savings the US government never produces a frontier model. Remember when they couldn't even build a proper website for Obamacare?
Is there an example of such a system existing successfully in any other country of the world that has a standing army?
What do you suppose he should do if that’s what he thinks is going to happen?
And how do you know he’s not bothered by it at all?
The article is crystal clear that these uses are not permitted by the current or any past contract, and the DoW wants to remove those exceptions.
> Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now
It also links to DoW's official memo from January 9th that confirms that DoW is changing their contract language going forwards to remove restrictions. A pretty clear indication that the current language has some.
On the other hand, your position is at best misguided and at worst hopelessly naive. The probability that adversaries of the United States, potential or not, are having these discussions about AI release authority and HITL kill chains is basically zero, other than doing so at a technical level so they get them right. We're over the event horizon already, and into some very harsh and brutal game theory.
Because as far as I know, Anthropic is taking the most moral stance of any AI company.
1. The military wants a whole new model training pipeline, because the current models are designed to have these safeguards, and Anthropic can't afford that (it would slow them down too much, and the engineering talent to set up and maintain another pipeline would be a lot of work/time).
2. The military doesn't want to supply Anthropic with the usage data or personnel access needed to verify its (lack of) use in those areas.
3. It's something almost completely unrelated to what's going on in the news.
> He recalls meeting President Trump at an AI and energy summit in Pennsylvania, "where he and I had a good conversation about US leadership in AI,"
> "Unfortunately, I think 'No bad person should ever benefit from our success' is a pretty difficult principle to run a business on... This is a real downside and I'm not thrilled about it."
> "Throughout my time here, I've repeatedly seen how hard it is to truly let our values govern our actions. I've seen this within myself, within the organization, where we constantly face pressures to set aside what matters most, and throughout broader society too." (from a researcher at Anthropic)
I don't think that any of this is particularly damning. Even if you don't like the president, I don't think it's bad to say that you had a good conversation with them. I believe the CEO of NVIDIA has said similar. The Saudis invest in many public US companies; does that make those companies less trustworthy? What about taking private capital from institutions such as State Street and BlackRock? The last quote seems like more of a reflection than an allegation. It read to me as a desire to do better.
I'm all for not trusting companies, but Anthropic seems to be one of the few that's trying to do good. I think we've seen a lot worse from many of their competitors.
These blurbs always mainly communicate that they are in line with US foreign policy. And then one can look at the actual actions rather than the rhetoric of US foreign policy to judge whether it is really in line with defending democracies and defeating autocracies.
We can't possibly keep that genie in that bottle.
But what we can do is achieve consensus that states, and their weapons of mass destruction, and their childish monetary systems, and their eternally broken promises... are not in keeping with the next phase of humanity.
Power corrupts, and absolute power corrupts absolutely.
* agreeing to the terms - one subject
* having the tool attempt to enforce said terms - another subject
They're all cofounders of Anthropic. Dario is the CEO, Jared leads research, and Sam leads infra. Jared and Sam have each served as the "responsible scaling officer", meaning they were responsible for Anthropic meeting its commitments to building safeguards.
I think neom is referring to Jack Clark, another one of the seven cofounders.
/s
What is heartening about hearing a liar who makes provocative statements all the time make another one?
Show us your reasoning please. There are many factors involved: what is your mental map of how they relate? What kind of dangers are you considering and how do you weight them?
Why not: Baidu? Tencent? Alibaba? Google? DeepMind? OpenAI? Meta? xAI? Microsoft? Amazon?
I think the above take is wrong, but I'm willing to listen to a well thought out case. I've watched the space for years, and Anthropic consistently advances AI safety more than any of the rest.
Don't get me wrong: the field is very dangerous, as a system. System dynamics shows us these kinds of systems often ratchet out of control. If any AI anywhere reaches superintelligence with the current levels of understanding and regulation (actually, the lack thereof), humanity as we know it is in for a rough ride.
Ethics is complicated. I’m not saying this means it can’t be reasoned about and discussed. It can! But the sources you’ve cited have shown themselves to be rather shallow.
I encourage everyone to write out your ethical model and put yourself in their shoes and think about how you would weigh the factors.
There is no free lunch. For many practical decisions with high stakes, many reasonable decisions from one POV could be argued against from another. It is the synthesis that matters the most. Among those articles, I don’t see great minds doing their best work. (The constraints of their medium and funding model are a big problem I think.)
Read Brian Christian’s “The Alignment Problem”’s take on predictive policing if you want a specific example of what I mean. There are actually mathematical impossibilities at play when it comes to common sense, ethical reasoning.
Common sense ethical reasoning has never been very good at new or complicated situations. “Common sense” at its worst is often a rhetorical technique used to shut down careful thinking. At its best, it can drive us to pay attention to our conscience and to synthesize.
I suggest finding better discussions and/or allocating the time yourself to think through it. My preferred sources for AI and ethics discussions are highly curated. I don’t “trust” any of them absolutely. * They are all grist for the mill.
I get better grist from LessWrong than HN 99% of the time. I discuss here to make sure I have a sense of what more “mainstream” people are discussing. HN lags the quality of LW — and will probably never catch up — but it does move in that direction usually over time. I’m not criticizing individuals here; I’m commenting on culture.
Please don’t confuse what I’m saying as pure subjectivity. One could conduct scientific experiments about the quality of discussions of a particular forum in many senses. Which places are drawing upon better information? Which are synthesizing it more carefully? Which drill down into detail? Which participants have allocated more to think clearly? Which strive to make predictions? Which prioritize hot takes? Which prioritize mutual understanding?
It isn’t even close.
Opinions and the Overton window are moving pretty rapidly, compared to even one year ago.
* I’ve written several comments about viewing trust as a triple (who, what, why). This isn’t my idea: I stole it.
"I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries."
This reads like his objection is not on "autocratic", but on "adversaries". Autocratic friends & family are cool with him. A clear wink to a certain administration with autocratic tendencies.
That's insane to say, given that he's literally acting in the public sphere as the mouth of Sauron for how AI will grow so effective as to destroy almost everyone's jobs, and how AGI will take over our society and kill us all.
Yes. Absolutely.
It is clear that the DPA can be invoked for companies posing risks to national security:
> On October 30, 2023, President Biden invoked the Defense Production Act to "require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government" when "developing any foundation model that poses a serious risk to national security, national economic security, or national public health."
Furthermore, it should be quite obvious that companies very important for national security can act in manners causing them to be national security risks, meaning a varied approach is required.
I don't think there's been a greater indictment of a political program (the one you likely subscribe to) in history than Trump's landslide victory in 2024.
You guys used to call deprogramming by another name, I think it was called "re-education". Maybe you should sign up for your own class.
FWIW, I agree strongly w/ lebovic's toplevel take above, that Anthropic's leaders are guided by their values. Many of the responses are roughly saying, "That can't be true, because Anthropic's values aren't my values!" This misses the point completely, and I'm astounded that so many commenters are making such a basic error of mentalization.
For my part, I'm skeptical of a lot of Anthropic's values as I perceive them. I find a lot of the AI mysticism silly or even harmful, and many of my comments on this site reflect that. Also, like any real-world company, Anthropic has values that are, shall we say, compatible with surviving under capitalism -- even permitting them to steal a boatload of IP when they scanned those books!
Nonetheless, I can clearly see that it's a company that tries to stand by what it believes, and in the case of this spat with Dep't of War, I happen to agree with them.
I'll take: List of places I never want to bond my soul with someone at for one thousand, please.
Is this a reflection of your morality, or that you already had sufficient funds that you could pass on the extra money to maintain a level of morality you're happy with?
Not everyone has the luxury to do the latter. And it's in those situations that our true morality, as measured against our basic needs, comes out.
I'm not sure if I intended this to be facetious or serious.
With a massive budget, too. Hundreds of millions iirc.
It felt like a website that the small web-dev shop I worked for could have built without much problem in a couple of months.
We didn't have 200 layers of bureaucracy, though.
That said, I don't doubt the military could take their current tech and keep it running. It's far different from the typical grift of government contractors.
From the first chapter of the book On Tyranny by Timothy Snyder, an historian of Central and Eastern Europe, the Soviet Union, and the Holocaust:
> Do not obey in advance.
* https://timothysnyder.org/on-tyranny
* https://archive.org/details/on-tyranny-twenty-lessons-from-t...
But the reason for the "Department of Defense" name was bureaucratic. It's also not true that "DOD" is hard to understand.
At the same time as the NME was created, the Army was split into the Army and Air Force and the Department of War was also split in two, becoming the Department of the Army and the Department of the Air Force.
If it ain’t repeatedly on the news and designed explicitly to scare and agitate then really people DGAF.
Then tomorrow it will be the Department of War. Just like when Congress voted to split the old Department of War into the Department of the Army and the Department of the Air Force, and to place both of those, along with the previously separate Department of the Navy, under a new National Military Establishment led by the newly created Secretary of Defense (and when it later voted to rename the NME the "Department of Defense"), things changed in the past.
> They have the votes.
Perhaps, but the law doesn't change because the votes are in a whip count on a hypothetical change, it changes because they are actually cast on a bill making a concrete change.
No, unlike yourself, I'm just a random brainless bot.
It would be better if people could name them with their full names to avoid any confusion.
They are US adversaries if they don't give the USA what it wants... so as an adversary that doesn't do what it's told and fall in line... you must go to prison.
If you think that would help you survive the rise of artificial superintelligence, I think you should think in granular detail about what it would be that survived, and why you should believe that it would do so.
I think both situations are pretty scary, honestly, and it's hard for me to have high confidence on which one would lead to less risk.
This is far too binary IMO. Yeah, the higher the personal stakes the bigger the test, and it's easy for someone to play the role of a principled person when it doesn't really cost them anything significant. But giving up millions of dollars on principle is something that most people aren't actually willing to do, even if they are already rich.
How someone acts in desperate circumstances reveals a lot about them. But how they act in less desperate circumstances isn't meaningless!
It’s a flippant move by Hegseth. I doubt anyone at the Pentagon is pushing for this. I doubt Trump is more than cursorily aware. Maybe Miller got in the idiot’s ear, who knows.
> Mass domestic surveillance.
Since when has DoD started getting involved with the internal affairs of the country?
https://en.wikipedia.org/wiki/United_States_Department_of_De...
Why the hell should companies get to dictate on their own to the government how their product is used?
I'm not sure an American company prioritising the privacy of American people is worth questioning. As a European, Anthropic are very low on the list of companies I worry about in terms of the progressive eradication of my privacy.
It's one of the reasons why people get annoyed at jargon or are pissed off about pronouns: it highlights that they should be putting mental effort into understanding why their current mental model doesn't fit. It's much easier to ignore it and stay comfortable if there are no glaring sirens saying you've got some learning to do.
Most of us can't (or won't) be aware of everything that should be important to us, so having glaring context clues that we should take notice of something incongruous is important. It's also why the Trump media approach works so well: it's basically a case of alarm fatigue, as Republicans who would normally side against any particular one of his actions don't listen, because they agreed with some of the actions that Democrats previously raised alarms about.
Though I have a feeling we're talking about different things. In Claude Code terms, it might want to rm -rf my codebase. You sound like you might want it to never run rm -rf. Anthropic probably wants to catch dangerous commands and send them to humans to approve, like it does today.
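The catch-and-approve flow described above can be sketched roughly like this. This is a hedged illustration of the general pattern only; the regexes and function names are invented, not Claude Code's actual implementation.

```python
import re

# Invented danger patterns; a real agent would use a far richer classifier
# and policy configuration than a handful of regexes.
DANGEROUS_PATTERNS = [
    r"\brm\s+-rf\b",            # recursive force delete
    r"\bgit\s+push\s+--force\b",  # history-rewriting push
    r"\bdd\s+if=",              # raw disk writes
]

def is_dangerous(command: str) -> bool:
    """Flag commands matching any known-dangerous pattern."""
    return any(re.search(p, command) for p in DANGEROUS_PATTERNS)

def run_with_approval(command: str, ask_human) -> str:
    """Run safe commands directly; route dangerous ones to a human gate.

    `ask_human` is a callback returning True iff the human approves.
    """
    if is_dangerous(command) and not ask_human(command):
        return "rejected"
    return "executed"  # stand-in for actually running the command
```

The design choice here is that the model never gets to unilaterally run a flagged command: the human callback sits between classification and execution, which is the "human oversight" boundary the whole thread is arguing about.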
So yes, the second part of your comment is what is going to come back to haunt them. The road to hell is paved with the best intentions.
The US system doesn't empower a company to say no. It should though.
>A pretty clear indication that the current language has some.
Or alternatively that there is some disagreement between the DoD and Anthropic as to how the contract is to be interpreted and that the DoD is removing the ambiguity in future contracts.
>AI safety people largely think that stuff like autonomous weapons are inevitable so they focus on trying to align them with humanity.
Humanity includes the future victim of AI weapons.
All were driven by multiple competing and sometimes conflicting goals, and many look questionable in hindsight. It is fair to critique.
But it is absolutely not the case that the last time the US defended freedom through military means was WWII.
But if the “performance” involves doing good things, at the end of the day that’s good enough for me.
I'm not upset at people for having a differing opinion or being upset at some economic conditions attributable to Democrats, but rather at their persistent belief in provably false information, like the relative danger of immigrants, the causes of climate change, vaccine safety, election security, or whether a particular ethnic group is eating their pets. This isn't a matter of opinion; it's a matter of observable reality and fundamental human morality.
Propaganda; 1 in 6 Boomers being exposed in childhood to amounts of lead that cause measurable cognitive decline; the average age of the US population rising as birth rates fall, meaning most eligible voters are in the age groups most likely to suffer low-grade dementia; and the weaponization of social media by foreign adversaries and wealthy elites.
There's maybe 4-5M true believers, the rest are gullible lead-addled old fools who got brainwashed by Fox News. That's the unvarnished truth of it.
And contrary to what the model-makers would like you to believe, I don't think we're anywhere close to the system being self-improving enough that you could just let it run without intervention and it spits out a new frontier model
(This is also why the DoD move is so dumb. I think we'd see massive talent flight from Anthropic if they end up complying, even if that compliance is against Dario's will.)
Tell me more about what they should do if a virtue signal in such a situation is a nothing statement.
They usually don't come back with the same political organization - that's sorta the point. But plenty of civilizations come back in a form that is culturally recognizable and even dominate afterwards.
> No other country that went through a phase like this has ever recovered. Not even in a century.
Oh I can think of a couple in the '40s that bounced back after a while.
There is no defence of morality behind which AIbros can hide.
The only reason Anthropic doesn't want the US military to have humans out of the loop is that they know their product hallucinates so often that it will have disastrous effects on their PR when it inevitably makes the wrong call and commits some war crime or atrocity.
You can take issue with that argument if you want but it’s unconvincing not to address it.
Just one example of many, but the companies that make the CPUs you and all of us use every day also supply militaries.
I am unaware of any tech company that directly wages physical warfare on the battlefield against humans.
Germany, Italy and Japan are all wealthy, stable democracies right now. Not without their problems and baggage, but pleasant places in a lot of ways.
Essentially they will not stop at all, because even they know no one can stop the competition from happening.
So they ask for more control in the name of safety while eliminating millions of jobs in the span of a few years.
And if I have to ask: how come the biggest risk of a potential collapse of our ecosystem is the one being trusted to do it safely? They will do it anyway, and blame capitalism for it.
Except this administration is certainly fascist, and the renaming is yet another facet of it. That article goes through it point by point.
(I myself don't have a clear answer to why Trump won, but I don't think it speaks well to the decision-making of the median voter on their own terms, whatever those were, that Trump's now so unpopular despite governing in pretty much the way he said he would.)
In part the propaganda machine that started in the 80s with AM talk radio, culminating to algorithmic feeds today.
While it is true that DoW could try to bypass the contract and do whatever they want, if it were that easy they wouldn’t be asking for a contract in the first place.
No, the concentration camps and gangs of masked thugs violating civil rights are that sign. Threatening to treat a domestic private corporation like an enemy combatant during peacetime for not immediately caving to military demands is that sign. Trying to take over the Federal Reserve, the Federal Trade Commission, and the Nuclear Regulatory Commission, is that sign. The Executive attempting to freeze funds issued by Congress for partisan reasons is that sign.
Department of War is just little boys being trolls.
Why can't they go to the contract generator of last resort, aka the Pentagon? It's what Elon has done with SpaceX and Grok.
But personally I wouldn't like to die because some crackpot with the right connections can will the rest of the world to that fate, no matter their affiliation. This escalation of destructive power, and the carelessness with which it is justified, is pretty disheartening to see. Good times create bad people?
Citation needed. I believe there's at least some research showing the opposite: military buildup leads to a higher risk of military conflict
> The Saudis invest in many public US companies, does that make those companies less trust worthy?
It does. If Anthropic takes money from the Middle East, that might be the reason why they cannot work for the Pentagon: the Pentagon works together with the Israeli forces, and Middle East investors might not like that. So Anthropic has to decide to either take a lot of money from the Middle East or work for the Pentagon.
Of course the problem goes much deeper than just Anthropic. I don't understand why taking money from dictatorships doesn't count as money laundering in our society. It is basically dirty money, generated by slavery and the forceful suppression of people. We should forbid all companies from taking this kind of dirty money. But because we don't do that at the moment, companies who don't take this dirty money are at a disadvantage against companies that do. And because companies are all about money, in the end they are basically forced to act against their good intentions just to survive.
We as a society have to stop this. We must make sure that companies who do not take dirty money survive the competition. My idea would be to extend the rules for money laundering to all countries that are dictatorships. But there might be other ideas to level the playing field between companies, so that we as a society can help them make the right decision.
> The Saudis invest in many public US companies, does that make those companies less trustworthy?

Uhh.. yeah? We've seen a lot worse from many of their competitors, but I think we should demand people do better than just being slightly above the worst.
"The government doesn't have control of this technology" is an odd way to think about "the government can't force a company to apply this technology dangerously."
Also there’s probably a way to abuse the Taft-Hartley Act beyond current recognition to force the employees to stay by designating any en-masse quitting to be a “strike / walk off / collective action”. The consequences to the individuals for this are unclear - the act really focuses on punishing the union rather than the employees. It would take some very creative maneuvering to do anything beyond denying unemployment benefits and telling the other big AI companies (Google / ChatGPT / xAI) to blacklist them. And probably using any semi-relevant three letter agency to make them regret their choice and deliver a chilling effect to anyone else thinking of leaving (FBI, DHS, IRS, SEC all come to mind).
If the administration could figure out how to nationalize the company (like replace the leadership with ideologically-aligned directors who sell it to the government) then any now-federal-employees declared to be quitting as part of a collective action could be fined $1,000 per day or incarcerated for up to one year.
It’s worth noting that this thesis would get an F grade at any accredited law school. Forcing people to work is a violation of the 13th amendment. But interpretations of the constitution and federal law are very dynamic these days so who knows.
Try introducing DPA invocation into your analogy and let's see where it goes!
Care to convert this into a prediction?: are you predicting Hegseth will back down?
> I doubt anyone at the Pentagon is pushing for this.
... what does this mean to you? What comes next? As SecDef/SecWar, Hegseth is the head of the Pentagon. He's pushing for this. Something like 2+ million people are under his authority. Do you think they will push back? Stonewall?
One can view Hegseth as unqualified, even a walking publicity stunt while also taking his power seriously.
If the safeguard against mass surveillance is strictly tied to geolocation (US vs. non-US), it can't be an intrinsic property of the model. It has to be enforced at the API or contractual level. This means international users are left out of those core, embedded protections. Unless Anthropic is planning to deploy multiple, differently-aligned foundation models based on customer geography or industry, the safety harness isn't really in the model anymore.
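API-level enforcement of that kind, as opposed to behavior baked into the model weights, would look something like the following sketch. All field names and policy details here are invented for illustration; this is not a depiction of any vendor's actual gateway.

```python
from dataclasses import dataclass

# Invented policy: some use cases are refused everywhere; others are
# refused only for requests attributed to a particular jurisdiction.
GLOBALLY_PROHIBITED = {"autonomous_lethal_targeting"}
JURISDICTION_PROHIBITED = {"US": {"domestic_mass_surveillance"}}

@dataclass
class Request:
    use_case: str
    jurisdiction: str  # e.g. derived from contract terms or geolocation

def authorize(req: Request) -> bool:
    """Gateway-side check run before the prompt ever reaches the model."""
    if req.use_case in GLOBALLY_PROHIBITED:
        return False
    if req.use_case in JURISDICTION_PROHIBITED.get(req.jurisdiction, set()):
        return False
    return True
```

This makes the comment's concern concrete: if `jurisdiction` is the discriminating field, users outside that jurisdiction get no benefit from the rule, because the underlying model would happily serve the request once the gateway waves it through.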
https://en.wikipedia.org/wiki/Masterpiece_Cakeshop_v._Colora...
The Department of War was responsible for naval affairs until the Department of the Navy was spun off from it in 1798, and aerial forces until the creation of the Department of the Air Force in 1947, whereafter it was left with just the army and renamed the Department of the Army. All three branches were then subordinated to the new Department of Defense in 1949, which became functionally equivalent to the original entity.
The Department of War is what it was called when it was first created in 1789 by Congress (establishing the department and the position of Secretary of War), the predecessor entity being called the Board of War and Ordnance during the revolution.
The Department of "Defense" has never fought on home soil. Ever.
It's worth noting there's an overabundance of legitimate reasons people get annoyed at these two things, making them bad examples.
Real eyes..
So reframe I did. (I don’t think those articles you cited are worth any more attention than I’ve already given them.)
My most blunt editorializing would be this: most people would be better grounded if they read AI alignment and safety books by Stuart Russell, Nick Bostrom, Brian Christian, Eliezer Yudkowsky, and Nate Soares. If you’ve read others that you recommend, please let me know. I’ve read many that I don’t usually recommend.
As far as long form articles, I recommend Paul Christiano, Zvi Mowshowitz, as well as anyone with the fortitude to make predictions while sharing their models (like the AI 2027 crew).
I recommend browsing “Best of Year Y” (or whatever they are called) articles on the AI Alignment Forum and LessWrong. They are my go-tos for smart & informed writing on AI. For posts that have more than say 100 votes, the quality bar is tremendously higher than almost anywhere else I’ve seen, including mainstream sources with great reputations.
In conclusion, I would rather point to interesting people to read and places to engage.
“Dario is saying the right thing and doing the right thing and not ever acting otherwise, but I think it’s just performative so I’m still disappointed in him.”
> 1. Do not obey in advance.
> Most of the power of authoritarianism is freely given. In times like these, individuals think ahead about what a more repressive government will want, and then offer themselves without being asked. A citizen who adapts in this way is teaching power what it can do.
https://scholars.org/contribution/twenty-lessons-fighting-ty...
Meanwhile, Dario knows his product can't be trusted to actually decide who should live and who should die, so what happens the first time his hypothetical AI killing machines make the wrong decision? Who gets the blame for that? Would the American government be willing to throw him under the bus in the face of international outrage? It's certainly a possibility.
> Brigadier-General Mattias Hanson, CIO, Swedish Armed Forces, says: “Strengthening Sweden militarily and acting as part of a collective defense requires us to increase our defensive capabilities. We need to utilize the latest technology and all the innovative power of the Swedish private sector. Sweden has unique skills and capabilities in both telecoms and defense technology..." [0]
This is just one quick example I could find.
[0] https://www.ericsson.com/en/news/2025/6/ericsson-5g-connecti...
NSA and other three-letter agencies happily do it under cloak and dagger.
On a quick search I came up with an article, that at least thematically, proposes such ideas about the current administration "Nationalization by Stealth: Trump’s New Industrial Playbook"
If the rename gets struck down then they don't have the power. If it doesn't they have the power.
There are many dictatorships that built their power in the face of people claiming that they can't do what they planned because it was illegal.
Until they did it anyway.
The talk page on the linked Wikipedia article arguing about logos is just as deranged. It's very important to realize there is literally nothing you—or anyone else—can do about this.
Any law changing the name of the Defense Department would have to be passed by both Houses of Congress and signed by the President (or by 2/3 of both Houses overriding a Presidential veto). The Senate has no such authority on its own.
Well:
"""
Imagine that you created an LLC, and that you are the sole owner and employee.
One day your LLC receives a letter from the government that says, "here is a contract to go mine heavy rare earth elements in Alaska." You don't want to do that, so you reply, "no thanks!"
There is no retaliation. Everything is fine. You declined the terms of a contract. You live in a civilized capitalist republic. We figured this stuff out centuries ago, and today we have bigger fish to fry.
"""
It's misanthropic to dismantle democratic societies.
Also, the genie is well and truly out of the bottle; if Anthropic shut down tomorrow and lit everything they had produced on fire, Amazon, Microsoft, China, everyone would continue where they left off.
You own nothing but your opinion. (No offense to personal property aficionados)
If Anthropic doesn't give the DoD what they want, does that mean that China, Iran, Russia, etc magically leapfrog not only Anthropic, but the entire US defense industry, and take over the planet?
Ergo, this is a very convenient PR opportunity. The public assumes the worst, and this is egged on by Anthropic with the implication that Claude is being used in autonomous weapons, which I find almost amusing.
He can now say goodbye to $200 million, and make up for it in positive publicity. Also, people will leave thinking that Claude is the best model, AND Anthropic are the heroes that staved off superintelligent killer robots for a while.
Even setting this aside, Dario is the silly guy who's "not sure whether Claude is sentient or not", who keeps using the UBI narrative to promote his product with the silent implication that LLMs actually ARE a path to AGI... Look, if you believe that, then that is where we differ, and I suppose that then the notion that Amodei is a moral man is comprehensible.
Oh, also the stealing. All the stealing. But he is not alone there by any means.
edit: to actually answer your question, this act in itself is not what prompted me to say that he is an immoral man. Your comment did.
The memo literally says that the reason they have these policies is -because- actual technical guardrails are not reliable enough.
There are no real Maoists or true communists in the US anymore, at least not enough to constitute meaningful political forces. To the extent they exist they are irrelevant, and one can argue further that no true left remains in the US at all.
As for my analysis of the Trump phenomenon, I only have intuitions and biases to offer, so caveat lector.
I don't think it's particularly mysterious. The general perception is that the American left has made identity politics and social justice its main political and social programs, to the detriment of basic governance, most importantly the economy and security, thereby breaking the social contract.
You cannot be a party that aggressively defends and promotes the interests of minority classes at the expense of the majority without losing the support of the majority. In some cases, these minorities are so small as to border on the absurd.
Something like 0.6% of people identify as transgender in the United States(1). They are vastly over-represented in the media, in left wing political programs, and in the general zeitgeist at large relative to their population size. The same goes for the LGBT population, which represents maybe 10% of the US population (and that's a liberal estimate).
Try as you might, you cannot escape the cold, hard fact that 60% of the US population is white, with something closer to 70% identifying as white or partly white. 90% of that group is going to be straight.
The US middle and working classes still really haven't recovered from the financial crisis of 2008, the aftermath of which precipitated a huge transfer of wealth from these classes to the upper class, a trend that accelerated during the pandemic.
So you have a majority of the population who are reeling from a devastating loss of wealth, station, and status, unable to keep pace with inflation, watching one of the two main political parties aggressively promote the interests of a tiny minority at their expense, or at least that is the perception.
Putting aside the nature of the minorities in question, the subservience of the political class to a minority of the population has another name: elitism. The natural response to elitism is populism, which is what we are seeing.
The protection of minority rights is a noble cause, but it's primarily a civil rights issue, and the focus should be on making sure those classes are treated equally under the law. The goal should not be the elevation of their social and cultural station above the majority.
Biden, and then Harris/Walz, are kind of the ultimate expression of this left-wing, elitist decadence. Biden appointed a man who wears stilettos and dresses to work in charge of nuclear waste at the Department of Energy. People can rage at me all they want for that description, but that is what the majority of Americans perceive. Again, putting aside any questions of morality, it is political suicide.
Tolerance of mass border crossings was probably a more directly fatal error, representing a final decoupling of the Democratic Party from its ideological roots in the labor movement, which was always militantly against illegal immigration. Again, the perception is that the interests of minorities (in this case migrants) take priority over the interests of the majority. In this case the minority are not even American citizens.
There's a lot more to say on this topic, and I'm sure you can find more persuasive analyses from better sources, but these are some of my intuitions.
Thanks for coming to my TED Talk.
1. https://williamsinstitute.law.ucla.edu/publications/trans-ad...
I suspect the same will happen here.
Between 1850 and 1950, an estimated 150M humans died (and many more were permanently disabled) due to armed conflict (~1.5M/year). Between 1950 and today: closer to 10M (~132k/year). The majority of those came from the Vietnam and Korean wars. If you limit the window to after 2000: only ~2M deaths, or ~78k/year. We carry bigger sticks than ever, and those sticks allow us to execute more strategic, incapacitating strikes, or stop conflict from even happening in the first place.
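The per-year rates above check out with quick arithmetic. The death totals are the comment's own estimates, and treating "today" as 2025 is an assumption:

```python
# Sanity check of the casualty-rate arithmetic: deaths divided by window length.
# Totals are the comment's estimates; the 2025 end year is an assumption.
windows = {
    "1850-1950": (1850, 1950, 150_000_000),
    "1950-2025": (1950, 2025, 10_000_000),
    "2000-2025": (2000, 2025, 2_000_000),
}
for label, (start, end, deaths) in windows.items():
    rate = deaths / (end - start)
    print(f"{label}: ~{rate:,.0f} deaths/year")
```

This yields roughly 1.5M/year, 133k/year, and 80k/year, in line with the figures quoted.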
It's meaningless to talk about what the employees think or care about. They are selling their labor and value to the corporation that is legally entitled to outspend all of them to get whatever it wants.
The control rests with the board and the executives. They have the control and the power and can make decisions.
And we're throwing that all out the window.
US military bases aren't what made those countries modern, prosperous, democratic places. It took the will of the people to rebuild something better after the war.
That isn't implied. The thought process is a) if we invent AGI through some other method, we should still treat LLMs nicely because it's a credible commitment we'll treat the AGI well and b) having evidence in the pretraining data and on the internet that we treat LLMs well makes it easier to align new ones when training them.
Anyway, your argument seems to be that it's unfair that he has the opportunity to do something moral in public because it makes him look moral?
The $200m is not the risk here. They threatened labelling Anthropic as a supply chain risk, which would be genuinely damaging.
> The DoW is the largest employer in America, and a staggering number of companies have random subsidiaries that do work for it.
> All of those companies would now have faced this compliance nightmare. [to not use Anthropic in any of their business or suppliers]
... which would impact Anthropic's primary customer base (businesses). Even for those not directly affected, it adds uncertainty in the brand.
“Four key words (…) The only phrase that can genuinely make a weak bully go away, and that is: Fuck You, Make Me.”
Once a war has started, it won't be fake any more.
> they’ll definitely declare wars to extend the presidency.
You don't exchange the Fraudster in Chief while at war, so they do want a war. Any war. But I have the strange impression that von Clownstick doesn't want to be seen as having started it by himself.
And one of the few constraints in their approach is not to fuck with the Dow. Expropriating Anthropic’s IP would trash the AI sector, and by extension, the Dow. (Even designating it a supply-chain risk sets a material precedent that a future administration could use against OpenAI and xAI.)
Hegseth is bluffing on his most destructive fronts, even if he doesn’t know it.
* excludes tiktok
Here we have a company doing something unprecedented but it is STILL not enough for people like you. The DoD could destroy them over this statement, and have indicated an intent to do so, but it's still not enough for you that they stand up to this.
I wonder what life is like being so puritanical and unwilling to accept the good, for it is not perfect! This mindset is the road to a life of bitterness.
The few solar panels in question are a United Kingdom's worth of green energy each year and about a Royal Navy's worth of marine tonnage every two, and they lifted more people out of poverty over the span of two generations than most of the rest of the world combined. Shenzhen produces about 70% of the entire world's consumer drones, now the primary weapon on both sides of the largest military conflict in the world. Xiaomi, a company founded in 2010, decided to make electric cars in 2021 and is now successfully selling them.
As Adam Tooze has pointed out, it's the single most transformative place in the world. If you're not trying to learn from it, you're choosing to ignore the most important place in the 21st century for ideological reasons.
Musk (Tesla, SpaceX), Ellison (Oracle) consistently supported Trump before his win was certain and are tight with Trump. They were megadonors behind his campaign.
Bezos (Amazon, Blue Origin) and Zuckerberg (Meta) pivoted towards Trump in 2024 after it looked like he would win a second time. They are opportunistic bastards who try to weasel into the good side of Trump with varying results.
Apple, Google, Microsoft, Nvidia etc. just bend the knee. They are reluctant but pragmatic and try to protect the company when their competition Amazon, Meta and Oracle are on the inside. Notice that in this final group, CEOs lack autonomy. At Alphabet, Page and Brin retain controlling authority (and they just try to avoid getting involved with Trump). Nvidia lacks a dual-class structure, meaning Jensen Huang (4% of votes) can be outvoted on critical matters. Both Apple and Microsoft are "faceless" corporations where the CEOs serve as hired hands.
I'm suggesting your realpolitik of "others doing it too" is incompatible with a moral position. I know none of these ghouls will stop burning the world. I'm sick of them virtue signalling about how righteous they are while doing it.
No
> If Anthropic doesn't give the DoD what they want, does that mean that China, Iran, Russia, etc magically leapfrog not only Anthropic, but the entire US defense industry, and take over the planet?
The risks are high, so if you're the US, you want a portfolio of possible winners. The risks are too high to not leverage all the cutting edge AI labs.
However, in terms of 'democracy' they're still way worse off than the US right now, even if the US is headed in a bad direction.
I really don't see how anybody could think a private defense contractor should be entitled to countermand the military by leveraging the control it has over products it has already sold. Maybe the terms of their contract entitled them to some discretion over what orders the product will carry out, but there's no such claim in the OP.
>Over on Steve Bannon's show, War Room -- the influential podcast that's emerged as the tip of the spear of the MAGA movement -- Trump's longtime ally unloaded on the efforts behind accelerating AI, calling it likely "the most dangerous technology in the history of mankind."
>...
>"You have more restrictions on starting a nail salon on Capitol Hill or to have your hair braided than you have on the most dangerous technologies in the history of mankind," Bannon told his listeners.
https://abcnews.com/US/inside-magas-growing-fight-stop-trump...
When I introduce that, I see Anthropic's management getting Tiktok'ed.
It can be true that Anthropic's products are essential for national defense and also true that the management of the company are a supply chain risk.
Is any of that true? Well, so much of what has been done in the name of "national defense" & etc over the past many decades has clearly not been done for reasons that are true, so -when it comes to "national defense"- I don't think that the truth actually matters much at all.
I think he may be able to cancel Anthropic’s contract. But no more. He won’t back down as much as be overruled.
> As SecDef/SecWar, Hegseth is the head of the Pentagon
On paper. Also, being the de jure head of something doesn’t automatically mean you speak for it as a whole.
> while also taking his power seriously
Authority and power are different. A plane pilot has a lot of authority. They don’t have a lot of power.
If they don't oppose directly, large bureaucracies know how to drag their feet until the midterms at least, if not until 2028. Soldiers literally dragged their feet at the glorious Trump military parade, when they walked disinterested and casually instead of marching.
What an utterly bewildering statement. So your suggestion is to suck it up, because we're all impotent anyway? The only thing that can bring authoritarian systems down is civil resistance.
Signing a contract with Anthropic assuming they wouldn’t rug pull over their own moral soapbox was mistake number one.
I love anthropic products and heavily use them daily, but they need to get off their high horse. They complain they’re being robbed by Chinese labs - robbed of what they stole from copyright holders. Anthropic doesn’t have the moral high ground they try to claim.
China has been competing with India for decades for the most-polluted cities crown, and only slightly ranks below the US and Russia in CO2 emissions per capita. It's also the only large country where its emissions have been growing over the last decade. Where does the idea come from that China somehow puts less pressure on the environment? Less than what, exactly?
I think you need to have people thinking through this stuff at a nuts-and-bolts level if you want to avoid getting dominated by a slightly less nice adversary, and so too with AI. Does a unilateral guarantee not to build autonomous killbots actually make anyone safer if China makes no such promise, or does that perversely put us at more risk?
I’d love to know that the “no killbots, come what may” strategy is sound, but it’s not clear that that’s a stable equilibrium.
They absolutely are, but per capita, the USA is polluting 49.67% more than China.
Source: https://worldpopulationreview.com/country-rankings/carbon-fo...
The only thing to say is that it's still authoritarian. Once that gets hold of a country, it's very difficult to shake off. Interestingly, both South Korea and Singapore shifted away from being dictatorships and were not ideologically socialist. Countries taken over by Communists remain authoritarian. The true believers will never give that up.
The product is actually good, though; I could pay for it if Amodei just shut up, but on principle I won't now, and I'll just stick with Codex.
Basically analysing the economies of WW2 participants via their automobile industries.
It's staggering how being bombed into the ground has forced technological and economic innovation. And how the inverse, being the bomber, has created stagnation.
- had four [!] terms, a move so anomalous it was subsequently patched by constitutional amendment
- threatened court-packing until SCOTUS backed down and started rubber-stamping his agenda
- ruled entire industries by emergency decree in a way that contemporaries on the left and right compared to Mussolini
- interned 120k people without due process, on the basis of ethnicity
- turned a national party into a personal patronage system
- threatened to override the legislature if it didn’t start passing laws he liked
Not even saying any of this is even good or bad, clearly in the official history it was retroactively justified by victory in WWII. But it’s a bit rich to say that the bomb wasn’t developed under authoritarian conditions.
It's on you to argue it was, e.g. by comparing it to other clear landslide victories like Reagan in 1984. Truth is that 2024 the final popular vote gap was 1.5%, compared to 4.5% for 2020, -2.0% for 2016 (yeah, really), 3.9% in 2012, 7.28% in 2008, and so on.
Whenever someone spends the time, and it takes a long time, to correct you, laugh, mock them, spew a few more lies.
And it's easy to do when the rich, the owner class side with you, because they buy newspapers, websites, ads, which you can't do if you lean left because acquiring money at all cost is not a priority of left wing people.
This is just totally disconnected from policy reality. Biden did not tolerate mass border crossings. (I _wish_ he'd dismantled ICE, but he very clearly did not.) A relatively minor DoE appointment going to a member of an unpopular minority both has nothing to do with policy and is the kind of thing that must necessarily be acceptable if minorities are actually going to be "treated equally under the law". This is a ludicrous basis to infer "the subservience of the political class" to transgender people.
On the other hand, Trump is a billionaire with Epstein connections and entirely unabashed about making money for his businesses and family using his government position. If this isn't "decadence", or "elitism", what meaning could the words possibly have?
"Deprogramming" might be an unfriendly word but it's hard for me to imagine how you have a functional democracy when a plurality of voters are making decisions on the basis of straightforward falsehoods, or even inversions of reality, just because "at least that is the perception". This isn't a sustainable situation, and it will end with either re-connecting these people to reality or disenfranchising them (really, them disenfranchising themselves along with the rest of us, e.g. by re-empowering someone who tried to steal an election). The former seems vastly preferable.
Speaking of unfriendly words - I also broadly have very little sympathy for a demand that people on the left speak respectfully of Trump voters given the total lack of any reciprocation. Even if it is the right way to do politics, the asymmetry between the way Democratic politicians talk about rural areas and the way Republican politicians talk about cities is another thing that's totally unsustainable.
Going after the visa-holding employees of these companies is within reach of the WH, and it's consistent with their MO.
> They're trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand. This letter serves to create shared understanding and solidarity in the face of this pressure from the Department of War.
This is about spreading information among the companies about each others' position, not a petition to the DoD.
Vidkun Quisling
Also why would the department of war care about what citizens think specifically?
We seem to be unable to stop building the weapon, we seem unable to stop handing it over to morons, and I should expect these morons to not fire it?
Then again, it's called MAD for a reason... What's one more WMD after all? Let's hope that we at least understand it before it becomes as powerful as everyone seems to think it will become.
Some are culture warriors who feel they have been wronged, some are opportunists. But the thing with opportunism is that this is who they are and what they believe in. Having a president who is corrupt is exactly what they want because they know exactly how to work with him: quid pro quo.
There is no distance between them being pro-Trump and opportunistic. He’s the perfect embodiment of those values.
Not like limiting uses of products is anything new
Specifically section on martial law in wartime context. It’s not very clear but I just feel like the norms and laws will be stretched or broken, as the administration has already done numerous times.
Perhaps there's a war, that a misguided congress won't declare as such, and a certain vice president that runs for president, with a certain someone as his vice president...
What’s an example of a company that’s making killing machines that a typical consumer or someone HN might be buying product or services from?
https://www.anthropic.com/news/building-safeguards-for-claud...
https://alignment.anthropic.com/2025/introducing-safeguards-...
> Tolerance of mass border crossings was probably a more directly fatal error, representing a final decoupling of the democratic party from their ideological roots in the labor movement which was always militantly against illegal immigration
Both Biden and Obama turned away more immigrants than Trump did in his first term. And Clinton is the king of denying asylum. The idea that we just had completely open borders and nothing was being done about it is a fabrication.
> Something like 0.6% of people identify as transgender in the United States(1). They are vastly over-represented in the media, in left wing political programs, and in the general zeitgeist at large relative to their population size
If you actually pay attention to who is talking about Trans people, it is the right. Liberal media may be occasionally baited into arguing about it, but to say it was a major platform is a perception the right crafted. Fox was talking about it 24/7 leading up to the election [1]. Musk and Trump were tweeting about it constantly. They ran political ads saying they wanted to convert your kids to trans ideology. It's gotten so bad that our current president just harasses women that look kinda manly, saying they are trans.
[1] https://www.yahoo.com/news/fox-news-covers-transgender-issue...
If anything, I have less respect for people who support fascism for money than I do for people who actually believe in it.
The A-bombs were not the worst part of the attack on Japan. And thus were not "needed to end the war". They were part of marketing /the/ super power.
Congress has abdicated its powers because as an institution it is broken. Several inland states with total state wide populations less than that of major metro areas on the coasts have the same amount of senators as every other state has - two. This means voters in a lot of states are over represented. Meanwhile, they say land doesn't vote, but in the United States Senate the cities and localities with the most people that drive much of our growth and dynamism are severely underrepresented. The upper and most important chamber of the Congress is thus undemocratic. Given it's an institution deeply susceptible to minority gridlock that depends on wide margins to do anything, well now more often than not it simply does nothing. An imperial presidency thus frankly becomes the only way the country can actually get most things done.
This two-senators-for-every-state arrangement was a compromise agreed to when constitutional ratification was in doubt, when the USA was a weak, newborn country of about 3 million people confined to the Eastern seaboard, at a time in our history when our most pressing concern was being recolonized by European powers. The British burned down the White House during the War of 1812; imagine what more they could have accomplished if the constitutional compromises that strengthened the union had not been agreed to.
This compromise has outlived its usefulness. No American today fears a Spanish armada or British regulars bearing torches. These difficult compromises at the heart of America already led to one civil war.
The best we can do is create a broad political movement that entertains as many incriminations as possible (probably around corruption/Epstein, which must take pains to avoid any distinction between, say, a Bill Clinton or a Donald Trump) so we can get past partisan bickering and build enough of a mass movement to try to usher in a new age of constitutional amendment and reform.
If it doesn't happen this cycle of Obama Trump Biden Trump will continue until this country elects someone who makes Trump look like a saint. It can happen. Think of how Trump rehabilitated Bush. We already see the trend getting worse. And if it does, then the post WWII Germany style reset being mentioned here will then become inevitable.
No, it’s up to the government to create policy and legislation that outlines what is lawful or not and install mechanisms to monitor and regulate usage.
The fact that an arm of the government wants to go YOLO mode is merely a symptom of the deeper problem that this government is currently not effectual.
This is fallacious as every economy that started at extreme poverty lifted a bunch of people out of poverty.
Unless we invent a time machine and do an A|B test we can't really attribute the success to policy when _any_ policy would have clearly lifted out a bunch of people out of poverty (basically almost impossible to not go up from extreme deficit). The closest we can do is look at similar scenarios like Taiwan which also lifted a bunch of people from poverty while retaining more human rights.
Is your view that contracts with the government should be meaningless? That the government should be able to unilaterally, and without recourse, change any contract they previously agreed to for any reason, and the vendor should be forced at gunpoint to comply?
If you do believe this, then what do you believe the second order effects will be when contracts with the government have no meaning? How will vendors to the government respond? Will this ultimately help or hinder the American government's efficacy?
I don't think that is what is happening. What most likely is happening is that they want Anthropic to produce new systems due to the success of the previous ones, but Anthropic is refusing because the new systems are against their mission. What the DoD seems to be attempting, on one hand, is to call them a supply chain risk to limit Anthropic's business opportunities with other companies, and, on the other hand, to simultaneously invoke the DPA so that they can compel them to make the new system. But why would the government compel a company to build a system for national preparedness after designating it such a supply chain risk that other companies providing government services are forbidden from doing business with it? It doesn't really make sense, other than from a pure coercion perspective.
This outcome might be a win for everyone involved; those billions, with a lot of strings attached, become less worth the time and effort as AI matures.
DPA and FASCSA as they stand today cannot be used the way DOD is claiming they can be.
While I grant the spirit of this point, I don't think it applies to this situation. The "bureaucratic resistance" explanation doesn't fit when you think about what would happen next. Here is my educated guess based on some research:
- contract termination: Hegseth can direct the relevant contracting officer(s) at the Pentagon to terminate the contract. This could happen within days. Internal stonewalling here might add weeks of delay, but probably not more than that.
- supply chain risk designation: Hegseth signs a document, puts it into motion. Then it becomes a bureaucratic process that chugs along. Noncompliant contracting officers probably would be fired, so this happens within weeks or a few months. Substantial delays could come from litigation, to be sure -- but this isn't a case where civil service stonewalling saves us.
- Defense Production Act: would require an executive order from Trump. This would go into effect right away, at least on paper. It would very likely lead to litigation and possibly court injunctions.
My point is that non-compliant civil servants at the Pentagon probably can't slow it down very much. (I recommend they do what their oath and conscience demands, to be sure!) Hegseth has shown he's willing to fire quickly and aggressively. I admire people who take a stand against Hegseth and Trump -- they are a nasty combination of dangerous and corrupt. At the moment, they appear weaker than ever. Sustained civil pushback is working.
Let's "roll this up" back to my original point. I responded to a comment that said "I doubt anyone at the Pentagon is pushing for this.", asking the commenter to explain. I don't think that comment promotes a better understanding of the situation. It is more useful to talk about the components of the situation and some possible cause-effect relationships.
But this is irrelevant to the case we are discussing, where Anthropic used legal contractual terms, and the government willingly signed them, then demanded they be changed after the fact.
The third amendment is there for a reason. I am a third amendment absolutist and willing to put my life on the line to defend it.
There are a lot of well-meaning people who are very anti-weapon or anti-violence under any circumstances. The problem is that when those people actually need those weapons and that violence, they are so inadequate at it that they become a liability to themselves and others.
I'm not saying I have or know of a solution, but I remember the old saying (paraphrasing) that it's better to be a warrior working a farm than a farmer working a war.
You’ll notice I’m trying to avoid debating generic phrases and terms such as “power” that probably won’t advance mutual understanding of this situation. I’m talking about specific actions and systems. It makes it clearer.
I didn't say we needed to follow their example to the letter; it was just one counterexample to the "woe and ruin for 100 years" comment.
Much has been said about the purported superiority of western values, but as we've all seen the USA was very quick to get rid of even the slightest notion of these values when Trump promised them some money and a dominant vibe.
The old world is dying, and the new world struggles to be born: now is the time of monsters.
By "slightly ranks below" you mean ~50-60% per capita.
>China somehow puts less pressure on the environment
PRC renewables at staggering scale.
Last year the PRC pumped out enough solar panels that their lifetime output exceeds annual global oil consumption. The world uses roughly 36 billion barrels of oil per year, and the PRC's annual solar production will sink about that many barrels' worth of emissions over the panels' lifetimes. That's an obscene amount of carbon sink, and frankly, at full production, annual PRC solar + wind can on paper displace 100% of oil, 100% of LNG, and a good % of coal (again at annual utilization) once storage is figured out.
This BTW functionally makes PRC emission negative, by massive margin, arguably the only country who is.
It's only broken emissions accounting rules that say the PRC should be penalized for manufacturing renewables while buyers get the credit, AND fossil producers like the US aren't penalized for extraction, which the US has only increased.
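The solar-vs-oil comparison above can be sanity-checked with a rough back-of-envelope calculation. All of the inputs below are illustrative assumptions (annual module output, capacity factor, panel lifetime, barrel energy content, oil-plant efficiency), not sourced figures:

```python
# Back-of-envelope: lifetime output of one year of PRC solar panel
# production, expressed in barrels of oil. All inputs are rough assumptions.

annual_production_gw = 550   # assumed PRC module output in one year, GW peak
capacity_factor = 0.18       # assumed average capacity factor
lifetime_years = 25          # assumed panel lifetime
hours_per_year = 8760

# Lifetime electricity generated by one year's worth of panels, in MWh
lifetime_mwh = (annual_production_gw * 1000 * capacity_factor
                * hours_per_year * lifetime_years)

barrel_thermal_mwh = 1.7     # approximate thermal energy in a barrel of oil
oil_plant_efficiency = 0.38  # assumed efficiency of oil-fired generation

# Barrels you'd have to burn in a power plant to get the same electricity
barrels_displaced = lifetime_mwh / (barrel_thermal_mwh * oil_plant_efficiency)

print(f"{barrels_displaced / 1e9:.1f} billion barrels")  # prints "33.6 billion barrels"
```

Under these assumptions the result lands in the low tens of billions of barrels, the same order of magnitude as annual global oil consumption, but it swings a lot with the assumed capacity factor and with whether you compare against thermal or electrical energy.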
However Anthropic situation is very different: there’s no ongoing invasion of the USA, and they traditionally attack other countries once in a while (no judgment) so the weapons upgrade will be "useful" on the field.
China considers all lethal autonomous weapons "unacceptable" and calls on all countries to ban them. Countries like the US and India refuse to back such proposals. See China's official stance on this matter below.
https://documents.unoda.org/wp-content/uploads/2022/07/Worki...
I totally understand that you got brainwashed by the media, but hey, you apparently have internet access, so why can't you just do a little bit of research of your own before posting nonsense using imagination as your source of information?
In your second link, that team was defunded; the person heading it just left ceremoniously: https://x.com/mrinanksharma/status/2020881722003583421?s=46
Societies do not operate like a sine curve, like say summer/winter cycles. They are upside-down "U"s. After the peak comes decline, but after the decline there is NOT recovery/growth again before you have a reset.
Germany was the huge winner of WW2 in the sense that, after having had a high society, it was directly allowed to get another such run. But as nobody wants to bomb us* anymore, Germany is also in decline now, waiting for a reset to come one day...
Sadly the USA will also need a reset before things can begin getting better again.
* I was born in Germany and lived there for 40 years.
Anthropic is in negotiation with Hegseth/DoD. Pointing out the specific actions Hegseth is taking is fair game to show that Hegseth is nuts.
Bringing in other complaints against other parties, however badly those parties are behaving, shows a pattern in other people, which might be helpful too. But Hegseth's direct actions are stronger evidence.
As an example, replacing sex with "gender identity" in prisons policy has inflicted considerable harm on women prisoners, who have been sexually assaulted, raped and impregnated by male prisoners who were transferred to the female prison estate on the basis of their supposed "female gender identity".
Feminist groups like WoLF spoke up on the horrors of this first, and the Republicans followed when they realized they could capitalize on this politically. But really it shouldn't have happened at all.
We should probably use a different word for Elon-style goals.
"Freedom for me but not for thee" is a far stretch from libertarianism.
Was it the best path to end the war? Certainly.
The modern argument around whether to target civilians wasn't even relevant at the time, given the advent of strategic bombing, which itself was seen as less horrific than the stalemated trench warfare of WW1. The question was only whether to target civilian inputs to the military with an atomic weapon (and hopefully shock and awe them into submission) or to firebomb and invade.
Hegseth trying to play “I’m altering the deal. Pray I don’t alter it any further” just shows this gang’s total lack of comprehension of second-order effects.
I'm not saying what they've done was the best way, only way or anything of that sort: only that it happened.
Silly logic. The first are average humans, the second are evil.
You would not want that either.
First, the Connecticut Compromise is a democratic underpinning of the US. It was central to the formation of the nation, and any attempt to alter it would be a foundational structural change to the constitution to say the least.
I understand the concerns about one generation binding another without recourse. Legal scholars differ on whether Article V, which implements the compromise, can be amended or not.
But for the sake of argument, let's say it can. It would be an insurmountable task requiring the following:
1. A supermajority of two-thirds in both houses of Congress to propose the amendment.
2. Ratification by three-fourths of the state legislatures (38 out of 50 states) or by conventions in three-fourths of the states.
3. Consent of the states that would lose their equal representation in the Senate.
4. Overcome any legal challenges that would likely arise at every step of the process.
The result would be a dramatic redefinition of federalism and democratic representation. This wouldn't be a cosmetic change, it would be a fundamental alteration to the structure of the government and constitution.
Very few things were deemed "unamendable" and entrenched in the constitution before, both explicitly and implicitly, but now it would all be up for grabs. Now nothing is irrevocable.
What's to stop future generations from altering other fundamental principles? While we may complain of being bound by the decisions of our ancestors, we would be opening up a Pandora's box of constitutional instability for future generations, binding them to the whims of a (slim?) majority of the current generation's political agenda.
I think that is the best case scenario. The worst, and I think a very possible scenario, is that states losing representation would claim that such a drastic and material change to the constitution upends the root of the bargain that led to the formation of the union, and would likely seek to secede. You may have achieved your goal of changing the apportionment of the Senate, but at the cost of the union itself. There are far easier and less risky ways to achieve political change.
Does it necessarily prevent other companies from doing business with them or does it prevent other companies from subcontracting them on government projects? The term "supply chain" leads me to think it's the latter.
You can’t say “no disabled people at your business”. Hell, you can’t even say “no fake service animals at my restaurant”. Many in America also think you can’t say no girls in the Boy Scouts, or no men in a women’s locker room.
The government should have far less control and power over individuals and businesses than it currently does.
You’re missing the forest for the trees. Take the tariffs as analogy. Specifying the laws invoked to effect the tariffs is more precise, but less complete than describing Trump, Bessent and Navarro’s motivations and theories.
Same here. We can wax lyrical about the DPA and specific statutory authorities and how they may be litigated. Or we can look at the actual power structures. The former is precise but inaccurate. The latter is the actual dynamic.
> terms such as “power” that probably won’t advance mutual understanding
If terms like power and influence don’t make sense to someone, they’re going to be lost in any political discussion. But particularly under this administration.
There aren’t legal analytic fundamentals driving why Trump hates windmills or Biden pardoned his son, these were expressions of Presidential power and preference. The legality was ex post facto.
The government couldn’t justify the killing of innocent civilians.
The government couldn’t justify the killing of the unborn.
The government couldn’t justify eugenics.
There are objective moral absolutes.
The Netherlands for example got their last reset by completely losing the Dutch empire.
Also, some societies have flatter curves than others. That really maps 1:1 to your style and culture of living and where the priorities are.
If your priorities are to be the best as fast as possible (Germany) you will have less time between resets. If your priorities are "let's chill and wait until the coconut falls from the tree into my hand", your society might be able to have a far longer time between resets.
But in the end: It's an iterative process. Which means: There must be iterations.
There's nothing contradictory or circular in both of those claims.
If someone were to present to me a better caretaker of western liberal ideals than the US and ask whether I would prefer AI empower them, the answer would be: yes.
And in fact, that is precisely what I am arguing. It is good that Anthropic, which so far has demonstrated closer adherence to western liberal ideals than the current US government, is pushing back on the current US government.
I also think it is good that Anthropic stands in opposition to China, which also does not embody western liberal ideals.
All the world powers are in a race to it.
https://cset.georgetown.edu/article/china-trains-ai-controll...
https://thediplomat.com/2026/02/machines-in-the-alleyways-ch...
https://www.brookings.edu/articles/ai-weapons-in-chinas-mili...
https://cset.georgetown.edu/article/how-china-is-using-ai-fo...
```
Basic characteristics of Unacceptable Autonomous Weapons Systems should include but not limited to the following:
- Firstly, lethality, meaning sufficient lethal payload (charge) and means.
- Secondly, autonomy, meaning absence of human intervention and control during the entire process of executing a task.
- Thirdly, impossibility for termination, meaning that once started, there is no way to terminate the operation.
- Fourthly, indiscriminate killing, meaning that the device will execute the mission of killing and maiming regardless of conditions, scenarios and targets.
- Fifthly, evolution, meaning that through interaction with the environment, the device can learn autonomously, expand its functions and capabilities in a degree exceeding human expectations.

Autonomous weapons systems with all of the five characteristics clearly have anti-human characteristics and significant humanitarian risks, and the international community could consider following the example of the Protocol on Blinding Laser Weapons and work to reach a legal instrument to prohibit such weapons systems.
```
Charitably, you might say that China is worried about a nightmare scenario. Less charitably, you might say that the definition of an unacceptable weapon system is so tight that it does not describe anything that anyone would ever build, or would want to build. This posture would allow China to adopt the international posture of seeming to oppose autonomous weapons without actually de facto constraining themselves at all.
This, by contrast, is what China considers acceptable:
```
Acceptable Autonomous Weapons Systems could have a high degree of autonomy, but are always under human control. It means they can be used in a secure, credible, reliable and manageable manner, can be suspended by human beings at any time and comply with basic principles of international humanitarian law in military operations, such as distinction, proportionality and precaution.
```
So as long as the system has a killswitch (something that afaik absolutely no one is proposing to dispense with?), it's Acceptable.
Meanwhile, it would certainly seem that China's defense research universities are interested in developing this tech: https://thediplomat.com/2026/02/machines-in-the-alleyways-ch....
So, I did a bit of research with my internet access-- how do my findings square with your impressions?
Refusing to join forces and contribute your efforts towards actively supporting fascism is not "deciding against democratically elected leaders". This sort of rhetorical sophism is unhelpful and, indeed, damaging.
It is ABSOLUTELY everyone's place, ("corporate leaders" included) to have principles and stick to them.
Personally, I agree with the principles of not using fallible AI for mass domestic surveillance analysis purposes, or for fully autonomous weapon purposes.
People and companies are free to do whatever the fuck they want that’s not illegal. They can resist any government priorities for any reason, including finding them destructive or anti-democratic or corrupt.
The government is able to change the laws within the current system to back its will—regardless of whether it’s in the interest of the people who voted for them, let alone the entire population.
(No the em dash isn’t AI.)
You do not under any circumstances gotta hand it to the American military but they do seem unwilling to play a role in Trump's let's say extraconstitutional ambitions. At least a junta doesn't seem likely. Without the military behind him he's just a senile old pedophile. What's he going to do, lock himself into the Oval Office?
> Many in America also think you can’t say no girls in the Boy Scouts, or no men in a women’s locker room.
Your average American is low functioning, low education, vibe driven with a 6th-8th grade reading level, so this ("What Americans think") is not terribly relevant in my opinion. Provide statute and case law.
It's a great policy, but it also makes sense for geo-strategic reasons (even ignoring the climate issue).
How much are we connecting in this particular conversation? What if each of us were to step back and ask 3 questions: What am I trying to communicate? Are we both interested in having this conversation? Are we both learning from it?
Again, this is not meant as a criticism of you. It is a statement of the dynamic here, and how we’re relating. (Even though HN is well above average, it has massive failure modes when you view it from a systems POV.)
My feeling is that you aren't responding to the intent behind my statement. But I'll also recognize that I'm probably not communicating in a way that lands for you. Maybe you feel the same in reverse? That would be my guess.
This is a failure of our communication norms and technologies. Given that we're in the year 2026 and have minimal technical barriers, we have very much failed culturally to get anywhere close to the potential of the Internet, or whatever needs to come next.
The argument so far seems to be "They can do anything, but there are moral absolutes that I can personally list out, and in those cases they can't do those things". That is a hilariously stupid view of the world but sadly a common one.
Even if I grant moral objectivity, I reject that you have epistemic access to it so it's moot.
Germany had to be forced to accept that, although it was advanced, it could not have the European empire it thought it deserved. Japan had to learn a similar lesson. The speed and horror of the reset were in direct proportion to the potential for advancement and high society in these nations.
Ghana, where I come from, for example, has not had to experience any massive upheaval, even from its pre-colonial and colonial days up till now. Our society is laid-back and moves slowly. Even many other African countries have had to have their national reckoning in the form of civil wars and other huge upheavals in order to settle into a viable way of existing and advancing.
And, like you said, this is iterative. Given the nature of a nation's people and its fundamental geopolitical position, the same question will need to be answered every N generations. Germany is central to Europe, and a generation far removed from the world wars is already starting to rethink why it shouldn't assert itself more strongly. Same in Japan.
The way to analyze the iterations of the US is to understand that the primary threats are from within. It may not implode completely, but the Civil War and the civil rights era show that the potential is there for massive unrest and violence.
> it doesn't need to be an attack of which you are putting yourself on a side
and also
> I can't know if you are talking about either the right or left
Which are contradictory, if you think about it. I am not sure what you want me to write if I can't use "they" to refer to other people. Also, I didn't use "we", something you somehow also seem to want me to say, and didn't.
If I remember correctly, the governor of PR would appoint the first two senators. A tactic could be to promise to appoint one Republican senator as an enticement to approve statehood. It's a real shit situation.
There are more Puerto Ricans living in NYC and Orlando than in PR. I'd like to visit before the little family I have left there leaves or dies out.
It's a troll. Just flag it and move on.
And yes, it is interesting to see that on Polymarket people are betting with a lot of emotion. No, you will not bet on getting killed by masked militia. Nobody is going to say "Hey, I'll bet $1000 that I will get cancer soon!".
But if you leave aside all the emotions and just look at the data: no, there is no realistic scenario in which the US magically recovers from all checks and balances, rules, laws, regulations, and decency having been destroyed. Competence, leadership, and shared knowledge have been erased in all areas of society: science, development, capitalism, the arts. How are you going to rebuild all of this, especially if the best case is that 60% of the people agree to rebuild while 40% insist they need to keep destroying things?
Looking at historical data, this is not a scenario that any prior "high culture" (or whatever to call it) has been able to recover from.
Elsewhere in this thread it was mentioned that Germany still had all the Nazis in place everywhere, because otherwise the country would not have worked. But that is not the point. The reset was:
a) Everything was destroyed and MUST be rebuilt, because otherwise we will freeze and starve to death.
b) Your Nazi neighbor is still there, but it was made VERY clear who the new sheriff in town is: first the Allies, then pretty much the USA. Germany is still paying for having US soldiers in the country, providing valuable expensive land for free, and paying for most of the supply chain that is not staffed by US soldiers. And that is the accepted normal.
c) What was left of industry was physically taken as reparations. The Soviets especially, but also the French, dismantled whole factories and machinery and moved them to their own countries (rightfully so).
From what I know from school, reading, and talking to grandparents: Germany after WW2 doesn't have much relation to pre-WW2 Germany. Suddenly it was normal for women to do "men's jobs" (since so many of the men were dead). McDonald's. Hollywood. Etc.
It really makes sense to have a look at a couple of pictures of what was left of Germany after WW2. It's just someone slapping an existing brand name onto a new product. And in this case, personally I would have regarded the brand as damaged and would have picked a different name.
For what it’s worth, I’m not seeing a failure of communication. I’m seeing a failure of scoping. You’re arguing on the basis of specific legal mechanisms by which power is expressed. I’m arguing the real motivations of and political constraints on decision makers are more fundamental in this case.
That isn’t universally true. Power predicted what Trump would do with tariffs (again, analogy). Legal analysis predicted his constraints (which SCOTUS affirmed). In this case, SecDef has the legal authority to do what’s described. He doesn’t, however, have the political freedom to do so. That turns the latter into the germane constraint, not a litany of proscribed powers.
Put another way, the people—here—are fundamental. (Market reactions, too, though again largely because the people in this administration have chosen the Dow as a lighthouse.) The legal justifications are worse than surface level, they’re ex post facto findings of retaliatory paths. It may feel more substantial to quote DPA statute versus discuss Hegseth and Dario’s motivations and relationships, but that’s, again, missing the forest for the trees.
Once you have understood that, you can just apply the rules learned backward, and they will typically match pretty well. I can buy fractal veggies in a supermarket.
And also, it's just data. Just take some random samples. Even civilizations like the Maya, who had far more time on the clock than, say, the US, went through multiple full resets.
Another random sample I've just pulled out of thin google air: San Francisco Fire of 1851. Everybody knew that wood burns. And that wooden buildings burn. And that wooden cities burn. Did anyone decide to tear down their house and re-build with a different material? No. This happened after everything had burned down to the ground. That was the reset needed.
I think it is very clearly an iterative process. Have a look.
> Even if I grant moral objectivity, I reject that you have epistemic access to it so it's moot.
This is a silly and self refuting statement.
But violating the constitution with such a blatant power grab, and thus throwing the future of the United States and its military into uncertainty, is probably not something they want. Better to just force Trump out and maintain the status quo of new presidents every 4-8 years.
https://www.justsecurity.org/107087/tracker-litigation-legal...
You are not at all working with "data" or "samples". You are just making arguments and supporting them with examples. That's not science, that's philosophy or persuasive essay writing.
You are generalizing those arguments in insane ways. Just like the worst philosophy. You are drawing conclusions from extremely weak claims that don't even map to reality in the first place.
You can't say "Math works to describe the head of broccoli so I can just think hard enough and understand geopolitics". That's emphatically not science.
1) it’s pretty transparently obvious that Anthropic is not a supply chain risk, and that this is a retaliatory gesture. So I don’t support that usage.
2) if they do try, Congress or SCOTUS could well reduce or remove that authority. I give the Trump admin enough credit to assume that they are considering carefully which laws they spend in this way, DPA is a valuable chip they may need to spend for something more valuable than Hegseth’s temper tantrum.
No it isn't and it's a pretty standard argument.
Other than insulting you, my response was pretty damn charitable tbh. I tried to state your argument for you as best I could.
It's true that there's a lot of grey area and turbulence right now around which HN posts have been LLM-generated or LLM-edited, and it's compounded by the fact that there's no way to tell for sure. We all have to find our way through this—both the community and the mods. But we can and need to do so without breaking HN's rules ourselves in the process.