- OpenAI is ok with use of their AI for autonomous weapons, as long as there is "human responsibility"
- Anthropic is not ok with use of their AI for autonomous weapons
So there’s the difference, and the erasure of a red line. OpenAI is fine with autonomous weapon systems. Requiring "human responsibility" isn't saying much: there are already military courts, rules of engagement, and international rules of war.
I hope everyone goes and works for Anthropic and OpenAI collapses.
Markets are going to be interesting on Monday. This plus a war. Urgh.
weasels gonna weasel
This is a red line for me. It's clear OpenAI has zero values and will give Hegseth whatever he wants in exchange for $$$.
https://www.nytimes.com/2026/02/27/technology/openai-reaches...
Right. Pete "FAFO" Hegseth is a model of intelligence, moderation, and respect for due process. Nothing to see here.
But what's the most charitable / objective interpretation of this?
For example - https://x.com/UnderSecretaryF/status/2027594072811098230
Does it suggest that determining "lawful use" and addressing Dario's concerns falls to the government, not the AI provider?
Other folks have claimed that Anthropic planned to burn the contentious redlines into Claude's constitution.
Update: I have cancelled my subscriptions until OpenAI clarifies the situation. From an alignment perspective Anthropic's stand seems like the correct long-term approach. And at least some AI researchers appear to agree.
> The axios article doesn’t have much detail and this is DoW’s decision, not mine. But if the contract defines the guardrails with reference to legal constraints (e.g. mass surveillance in contravention of specific authorities) rather than based on the purely subjective conditions included in Anthropic’s TOS, then yes. This, btw, was a compromise offered to—and rejected by—Anthropic.
https://x.com/UnderSecretaryF/status/2027566426970530135
> For the avoidance of doubt, the OpenAI - @DeptofWar contract flows from the touchstone of “all lawful use” that DoW has rightfully insisted upon & xAI agreed to. But as Sam explained, it references certain existing legal authorities and includes certain mutually agreed upon safety mechanisms. This, again, is a compromise that Anthropic was offered, and rejected.
> Even if the substantive issues are the same there is a huge difference between (1) memorializing specific safety concerns by reference to particular legal and policy authorities, which are products of our constitutional and political system, and (2) insisting upon a set of prudential constraints subject to the interpretation of a private company and CEO. As we have been saying, the question is fundamental—who decides these weighty questions? Approach (1), accepted by OAI, references laws and thus appropriately vests those questions in our democratic system. Approach (2) unacceptably vests those questions in a single unaccountable CEO who would usurp sovereign control of our most sensitive systems.
> It is a great day for both America’s national security and AI leadership that two of our leading labs, OAI and xAI have reached the patriotic and correct answer here
ChatGPT maker OpenAI has the same redlines as Anthropic when it comes to working with the Pentagon, an OpenAI spokesperson confirmed to CNN.
https://edition.cnn.com/2026/02/27/tech/openai-has-same-redl...
If there's anything this admin doesn't like, it's being postured against or called out by literally anyone, especially in public.
An algorithm, an ML model trained to predict next tokens to write meaningful text, is going to KILL actual humans by itself.
So killing people is legal,
Killing people by a random process is legal,
A randomized algorithm deciding on who to kill is legal,
And some of you think you are legally protected because they used the word “domestic”?
Yesterday and the day before sentiment seemed to be focused on “Anthropic selling out”, then that shifted to “Anthropic holds true to its principles in a David vs Goliath” and “the industry will rally around one another for the greater good.” But suddenly we’re seeing a new narrative of “Evil OpenAI swoops in to make a deal with the devil.”
Reminds me of that weekend when Sam Altman lost control of OpenAI.
Anthropic probably made the mistake of questioning the military's activities related to Claude after the Venezuela mission and wanted reassurance that the model wouldn't be used for the redlined purposes. The military didn't like this and told them "we aren't using your models unless you agree not to question us," and then the back and forth started.
In the end, we will probably have both OpenAI and Anthropic providing AI to the military, and that's a good thing. I don't think they will keep the supply-chain-risk designation on Anthropic for more than a week.
https://www.theguardian.com/world/2026/feb/21/tumbler-ridge-...
So they're using Anthropic's own words to cover a power play, or pulling relationships to see if they could get Anthropic to balk.
They will deploy this on a domestic scale and claim to use it to locate non-domestic threats. I can’t believe anyone is falling for this.
and we know we can trust openAI because they were founded on "open" and "safe" AI (up until they realized how much money there was to be made, at which point their only value changed to "make money")
Mad respect to Sam; now I believe OpenAI has a better chance to win the race.
So it wasn't about those principles making them a supply chain risk? They're just trying to punish Anthropic for being the first ones to stand firm on those principles?
The little respect I had left for Sam is now wiped. Makes me sick.
Growing up I always thought AI would be this beautiful tool, this thing that opens the gates to a new society where work becomes optional in a way. But I failed to think about human greed.
I remember following OpenAI way back when it was a nonprofit explaining how uncontrolled AI could be highly detrimental. Now Sam has not only taken that nonprofit and made it for-profit; it seems he's also making the most evil decisions he can for a buck.
Cancel your subscription, tell your friends to. And vote to heavily tax these companies and their leaders.
Ended up renewing my Claude sub today instead. Principled stances matter and I no longer trust OpenAI to be trustworthy custodians of my AI History.
I linked to https://notdivided.org/ as the reasoning why.
Was shocking back then to think how far we’ve come.
(Person of Interest, for those who haven't seen it; I watched it a decade ago and it's actually quite surprising how on point it ended up being)
Why? It is in the admin's interest to absolutely destroy Anthropic. Make them an example.
In my mind the only people left are those who are there for the stocks.
But they did.
"Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement."
Well, some may voluntarily leave, some will perhaps be actively poached by Anthropic, and some, I suppose, will stay in their jobs because leaving isn't an easy decision to make.
Have we been watching the same Trump admin for the last year? That sounds exactly like something the government would do: pointlessly throw a fit and end up signing a worse deal after blowing all its political capital.
It doesn't even matter if OpenAI is offered the same terms that Anthropic refused. It's absurd to accept them and do business with the Pentagon in that situation.
If you take the government at its word, it's killing Anthropic because Anthropic wanted to assert the ability to draw _some_ sort of redline. If OpenAI's position is "well sucks to be them", there's nothing stopping Hegseth from doing the same to OpenAI.
It doesn't matter at all if OpenAI gets the deal at the same redline Anthropic was trying to assert. If at the end of this the government has succeeded in cutting Anthropic off from the economy, what's next for OpenAI? What happens next time when OpenAI tries to assert some sort of redline?
What's the point of any talk of "AI Safety" if you sign on to a regime where Hegseth (of all people) can just demand the keys and you hand them right over?
The two things Anthropic refused to do are mass surveillance and autonomous weapons, so why do _you_ think OpenAI refused the same things and still did not get placed on the exact same list?
It's fine to say "I'm not going to resign. I didn't even sign that letter," but thinking that OpenAI can get away with not developing autonomous weapons or mass surveillance is naive at the very best.
I don't mean this in any way rude, and I apologize if it comes across as such, but believing it won't be used in exactly this way is just naive. History has taught us this lesson again and again and again.
Sadly it would be very difficult for Anthropic to relocate to another country with their IP, models, and infrastructure.
(Guess I need to build everything I intended this year in a weekend.)
So by that measure the US govt can go get some Israeli software to surveil their domestic populace!
Homo sapiens deserve to become extinct.
> It is a great day for both America’s national security and AI leadership that two of our leading labs, OAI and xAI have reached the patriotic and correct answer here
He's an administration official openly cheerleading his team. This should be characterized as the insider perspective/spin, not a neutral analysis of the relevant facts. Nothing in the quoted text comes anywhere close to justifying the retaliatory actions.
Anyone thinking they have any virtue is naive.
Anthropic said that mass surveillance was per se prohibited even if the government self-certified that it was lawful.
https://www.binance.com/en/square/post/35909013656801
I'm sure more will drop in the coming months.
* no military use
* no lethal use
* no use in support of law enforcement
* no use in support of immigration enforcement
* no use in mass surveillance
* no use in domestic mass surveillance (but mass surveillance of foreigners is OK)
* no use in domestic surveillance
* no use in surveillance
* require independent audits
* require court oversight
* require company to monitor use
* require company to monitor use and divulge it to employees
* some other form of human rights monitoring or auditing
* some other form of restriction on theaters/conflicts/targets
* company will permit some of these uses (not purport to forbid them by license, contract, or ToS) but not customize software to facilitate them
* company can unilaterally block inappropriate uses
* company can publicly disclose uses it thinks are inappropriate
* some other form of remedy
* government literally has to explain why some uses are necessary or appropriate to reassure people developing capabilities, and they have some kind of ongoing bargaining power to push back
It feels normal to me that a lot of people would want some of those things, but kind of unlikely that they would readily agree on exactly which ones.
I even think there's a different intuition about the baseline. One version is "nobody works on weapons except for people who specifically make a decision to work for an arms company because they have decided that's OK according to their moral views" (working on weapons is an abnormal, deliberate decision). Another version is "every company might sell every technology as part of a weapons system or military application, and a few people then object because they've decided that's not OK according to their moral views" (refusing to work on weapons is an abnormal, deliberate decision).

I imagine a fair number of people in computing fields effectively thought that the norm or default for their industry was the latter, because of the perception that there are "special" military contractors where people get security clearances and navigate military procurement processes, and most companies are not like that, so you were not working on any form of weapon unless you intentionally chose to do so. But, having just been to the Computer History Museum earlier this week, I also see that a lot of Silicon Valley companies have actually been making weapons systems for as long as there has been a Silicon Valley.
And people wonder how we got here.
But I suspect the public sentiment will eventually turn against him. When society turns its pitchforks on big tech, he'll be the poster boy: a 21st-century John D. Rockefeller.
Him, Musk, Bezos, and Zuck.
I have two qualms with this deal.
First, Sam's tweet [0] reads as if this deal does not disallow autonomous weapons, but rather requires "human responsibility" for them. I don't think this is much of an assurance at all - obviously at some level a human must be responsible, but this is vague enough that I worry the responsible human could be very far out of the loop.
Second, Jeremy Lewin's tweet [1] indicates that the definitions of these guardrails are now maintained by DoW, not OpenAI. I'm currently unclear on those definitions and the process for changing them. But I worry that e.g. "mass surveillance" may be defined too narrowly for that limitation to be compatible with democratic values, or that DoW could unilaterally make it that narrow in the future. Evidently Anthropic insisted on defining these limits itself, and that was a sticking point.
Of course, it's possible that OpenAI leadership thoughtfully considered both of these points and that there are reasonable explanations for each of them. That's not clear from anything I've seen so far, but things are moving quickly so that may change in the coming days.
[0] https://x.com/sama/status/2027578652477821175
[1] https://x.com/UnderSecretaryF/status/2027594072811098230
I don't want to overanalyze things but I also noticed his statement didn't say "our agreement specifically says chatgpt will never be used for fully autonomous weapons or domestic mass surveillance." It said something that kind of gestured towards that, but it didn't quite come out and say it. It says "The DoW agrees with these principles, and we put them in our agreement." Could the principles have been outlined in a nonbinding preamble, or been a statement of the DoW's current intentions rather than binding their future behavior? You should be very suspicious when a corporate person says something vague that somewhat implies what you want to hear - if they could have told you explicitly what you wanted to hear, they would have.
But anyway, it doesn't matter. You said you don't think it should be used for autonomous weapons. I'd be willing to bet you 10:1 that you'll never find altman saying anything like "our agreement specifically says chatgpt will never be used for fully autonomous weapons", now or any point in the future.
You, and your colleagues, should resign.
My bet is that what the DoW wants is pretty clearly tied to mass surveillance and kill-bots. Altman is a snake.
And they are crossing the picket line, which honestly I was sure they would do, though I did expect it to take a bit longer.
This is too transparent even for sama.
You could recoup your investment in a year by collecting tolls. Expedited financing available on good credit!
Anyone who chooses to stay shouldn't have signed the letter. What's the point of doing it if you're not going to follow through? If you sign the letter and don't leave after the demands aren't met, you're a liar and a coward and are actively harming every signatory of every future letter.
Both are based in Europe but Proton Lumo has the better privacy promises.
Would be interested in others' experiences with those alternatives for question/answering-type research (not for coding, for which there exist other, better alternatives like Gemini and Claude).
Income and revenue sources always, inevitably, and without fail, determine behavior.
It's absurd, and doubly so if OAI's deal includes the same or even similar redlines to what Anthropic had.
1. We've seen government lawyers write memos explaining why such-and-such obviously illegal act is legal (see: torture memo). Until challenged, this is basically law.
2. We've seen government change the law to make whatever they want legal (see: Patriot Act)
3. We've seen courts just interpret laws to make things legal
A contractor doesn't realistically have the power to push back against any of these avenues if they agree to allow anything legal.
(At the risk of triggering Godwin's Law, remember that for the most part the Holocaust was entirely legal; the Nazis established the necessary authorization. Just to illustrate that when it comes to certain government crimes, the law alone is an insufficient shield.)
So the question is: do you trust the government to effectively govern its own use of AI? or do you trust Anthropic's enforcement of its TOS?
If his characterization of the agreement is correct, which I will not believe and you should not believe until a trustworthy news outlet publishes the text, I suppose this would convince me that Hegseth does not literally plan to build a Terminator for democracy-ending purposes. There's a lot of inexcusable stuff here regardless, but perhaps merely boycotting OpenAI and the US military would be a sufficient response if this all checks out.
A few months down the line, OpenAI will quietly decide that their next model is safe enough for autonomous weapons, and remove their safeguard layer. The mass surveillance enablement might be an indirect deal through Palantir.
Realistically, you need at least ~1M subscribers to cancel to make this painful.
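Back-of-envelope, assuming the standard $20/month consumer tier (an assumption on my part; API and enterprise revenue ignored):

```python
# Rough sketch of the annual revenue impact of mass cancellations.
# The $20/month price is an assumption (standard consumer tier);
# API and enterprise revenue are not counted.
cancellations = 1_000_000
monthly_price_usd = 20
annual_impact = cancellations * monthly_price_usd * 12
print(f"${annual_impact:,}/year")  # -> $240,000,000/year
```

Noticeable, but probably not existential on its own.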
But I suspect this will get drowned out in the face of other news.
Even Disney couldn't ignore the mass cancellations after dropping Kimmel, and Disney+ barely turns a profit.
This seems strictly better than what Anthropic had. Anthropic has ruined their relationship with the US govt, giving OAI a good negotiating hand.
The OAI folks are good at making deals; just look at all the complex funding arrangements they have.
But as innovation slows globally, it is implementation, ethics, and ideology that will once again be the dominant metrics of progress, so there's a new window emerging to push for this social/moral change in technology once again.
So it's still critically important that we actively work towards finding a meaningful, socially contagious differentiator other than "ethical technologist" even if it's difficult- look at what OpenAI gets away with under that flimsy banner.
Are he and his peers Hitler, or are they the naive oligarchs who think they can keep populist leaders and their constituencies under their thumb, only to be outmaneuvered by the people the masses think have their back?
I know many folks who think their political leaders have their best interests at heart (rightly or wrongly). I know nobody who thinks tech leaders do. At best, they want to be them.
But tbh I just switched to Anthropic, they need all the support they can get. Claude is great for question/answer.
It seems like you chose to immediately disbelieve it.
> until a trustworthy news outlet publishes the text
If you've found one of these, let me know. I'm still looking...
Never discount the possibility of Hegseth being petty and doing the OpenAI deal with the same terms to imply to the world that Anthropic is being unreasonable because another company signed a deal with him.
Probably also got assurances about a bailout when OpenAI collapses.
Or perhaps, maybe, just a little maybe, DoW is getting absolutely excited about mass surveillance and kill-bots?
I didn't have much of an opinion of Altman before but now I think he's a grifting douche.
He allegedly raped his own sister. No charges have been brought against him.
> If you've found one of these, let me know. I'm still looking...
I do not assume, and I would recommend that you do not assume, that there is such a thing as a text of the contract. It's much easier to lie about contents of documents that don't actually exist yet. Then you can craft the text in response to public feedback, writing it down in early March and telling people that it's totally a copy of what was agreed to on February 27.
As a corollary, you should be skeptical of any purported text that is not widely published soon. If there is indeed such a contract, and it says what Altman claims, he will desperately want to ensure that his employees have read a "leak" of the text by Monday morning.
As Trump himself likes to say, "Promises made, promises kept."
I think what you are missing is their annual comp with two commas in it.
OpenAI can deploy safety systems of their own making.
From the military perspective this is preferable because they just use the tool: if it works, it works, and if it doesn't, they'll use another one. With the Anthropic model, the military needs a legal opinion before they can use the tool, or they might misuse it by accident.
This is also preferable if you think the government is untrustworthy. An untrustworthy government may not obey the contract, but they will have a hard time subverting safety systems that OpenAI builds or trains into the model.
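To make that distinction concrete, here's a minimal sketch of what a provider-side guardrail could look like; everything in it (the category names, the toy classifier, the `run_model` stub) is a hypothetical illustration, not anything OpenAI has disclosed:

```python
# Hypothetical provider-side guardrail: the check runs in the provider's own
# serving stack on every request, so a customer (government or otherwise)
# can't switch it off by renegotiating contract language.
BLOCKED_CATEGORIES = {"autonomous_targeting", "domestic_mass_surveillance"}

def classify(prompt: str) -> set[str]:
    """Toy stand-in for a real policy classifier (in practice, a trained model)."""
    hits = set()
    if "select and engage targets" in prompt.lower():
        hits.add("autonomous_targeting")
    return hits

def run_model(prompt: str) -> str:
    """Stub for the actual LLM call."""
    return f"(model output for {prompt!r})"

def serve(prompt: str) -> str:
    # The refusal happens in the serving path, before the model ever runs.
    if classify(prompt) & BLOCKED_CATEGORIES:
        return "Refused by provider policy."
    return run_model(prompt)

print(serve("summarize this logistics report"))          # normal output
print(serve("Select and engage targets autonomously"))   # refused
```

The contract-based approach, by contrast, lives entirely in legal text, so enforcement depends on the counterparty honoring it.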
1. Department of War broadly uses Anthropic for general purposes
2. Minority interests in the Department of War would like to apply it to mass surveillance and/or autonomous weapons
3. Anthropic disagrees and it escalates
4. Anthropic goes public criticizing the whole Department of War
5. Trump sees a political reason to make an example of Anthropic and bans them
6. The entirety of the Department of War now has no AI for anything
7. Department of War makes agreement with another organization
If there was only a minority interest at the Department of War in developing mass surveillance / autonomous weapons, or it was seen as an unproven use case of unknown value compared to the more proven value from the rest of their organizational use, it would make sense that they'd be 1) in practice willing to compromise on this, 2) now unable to do so with Anthropic specifically because of the political kerfuffle.
I imagine they'd rather not compromise, but if none of the AI companies are going to offer it to them, then there's only so much you can do as a short-term strategy.
While I don't live in the US, I could imagine the US government arguing that third party doctrine[0] means that aggregation and bulk-analysis of say; phone record metadata is "lawful use" in that it isn't /technically/ unlawful, although it would be unethical.
Another avenue might also be purchasing data from ad brokers for mass-analysis with LLMs which was written about in Byron Tau's Means of Control[1]
[0] https://en.wikipedia.org/wiki/Third-party_doctrine
[1] https://www.penguinrandomhouse.com/books/706321/means-of-con...
To be fair, Anthropic didn't say that either. Merely that autonomous weapons without a HITL aren't currently within Claude's capabilities; it isn't a moral stance so much as a pragmatic one. (The domestic surveillance point, on the other hand, is an ethical stance.)
> A defense official said the Pentagon’s technology chief whittled the debate down to a life-and-death nuclear scenario at a meeting last month: If an intercontinental ballistic missile was launched at the United States, could the military use Anthropic’s Claude AI system to help shoot it down?
> It’s the kind of situation where technological might and speed could be critical to detection and counterstrike, with the time to make a decision measured in minutes and seconds. Anthropic chief executive Dario Amodei’s answer rankled the Pentagon, according to the official, who characterized the CEO’s reply as: You could call us and we’d work it out.
> An Anthropic spokesperson denied Amodei gave that response, calling the account “patently false,” and saying the company has agreed to allow Claude to be used for missile defense. But officials have cited this and another incident involving Claude’s use in the capture of Venezuelan leader Nicolás Maduro as flashpoints in a spiraling standoff between the company and the Pentagon in recent days. The meeting was previously reported by Semafor.
I have a hunch that Anthropic interpreted this question to be on the dimension of authority, when the Pentagon was very likely asking about capability, and they then followed up to clarify that for missile defense they would, I guess, allow an exception. I get the (at times overwhelming) skepticism that people have about these tools and this administration but this is not a reasonable position to hold, even if Anthropic held it accidentally because they initially misunderstood what they were being asked.
https://web.archive.org/web/20260227182412/https://www.washi...
You learned this where?
Is this going to end up being interpreted as "well, the president signed off on the operation; see, there's a human in the loop!"?
Whether Anthropic’s clear mistreatment means that all other companies should refrain from doing business with the US government isn’t as clear to me. I can see arguments on both sides and I acknowledge it’s probably impossible to eliminate all possible bias within myself.
One thing I hope we can agree on is that it would be good if the contract (or its relevant portions) is made public so that people can judge for themselves, without having to speculate about who’s being honest and who’s lying.
Their rationale was pragmatic. But they specifically said that they didn't agree to let the DoD create fully autonomous weapons using their technology. I'll bet 10:1 you won't ever hear Sam Altman say that. He doesn't even imply it today.
Missile detection and the decision to make a (nuclear) counterstrike are two different things to me, but apparently the Department of War wants both, so it seems it's not "just" about missile detection.
It's like the one honest thing they've done
You should have said this.
> https://x.com/UnderSecretaryF/status/2027594072811098230
Thank you.
> who decides these weighty questions? Approach (1), accepted by OAI, references laws and thus appropriately vests those questions in our democratic system. Approach (2) unacceptably vests those questions in a single unaccountable CEO who would usurp sovereign control of our most sensitive systems.
Amodei is the type of person who thinks he can tell the US government what they can and can’t do.
And the US government should have precisely none of that, regardless of whether they’re red or blue.
No. Altman said human responsibility. Anthropic said human in the loop.
> And Sam’s wording all but confirms that OpenAI’s agreement defers to DoD policies and laws (which a defense contract cannot prescribe), and effectively only pays lip service to the two exclusions.
All but confirmed was not confirmed.
You see, Obama droned more combatants than anyone else before or after him, but he always followed a legal paper trail and went by the book (except perhaps in some cases; search for Anwar al-Awlaki).
One can argue whether the rules and laws (secret courts and proceedings, asymmetries in court processes that severely compress civil liberties… to the point they might violate other constitutional rights) are legitimate, but he operated within the limits of the law.
You folks just blurt "me ne frego" ("I don't care," the Fascist motto) like a random Mussolini and think you're being patriotic.
SMH
> And the US government should have precisely none of that, regardless of whether they’re red or blue.
This is a pretty hot take. "You can't break the law and kill people or do mass surveillance with our technology"? Fuck that; the government should break whatever laws and kill whoever they please.
I hope you A: aren't a U.S. citizen, and B: don't vote.
If I'm selling widgets to the government and come to find out they are using those widgets unconstitutionally to violate my neighbors' rights, you can be damn sure I'm going to stop selling the gov my widgets. Amodei said that Anthropic was willing to step away if they and the government couldn't come to terms; instead of acting like adults and letting them, the government decided to double down on being the dumbest people in the room, acting like toddlers and throwing a massive fit about the whole thing.
To your second comment, it was clear enough to me to be the most plausible reading of the situation by far.
We state what we think the situation is all the time, without explicitly writing “I think the situation is…”.