Companies that subscribed will find themselves without an important tool because the president went on a rant, and they might wonder whether it’s safe to depend on other American companies.
I’m sure nothing good can come of strong-arming some of the brightest scientists and engineers the U.S. has. Such a waste of talent, trying to make them bend to the government’s wishes… instead of actually fostering innovation in the very competitive AI industry.
Now the DoD, which is by far the largest budgetary expense for the taxpayer, wants us to believe they don't have better AI than current industry? That is a double-edged admission: either they are exposing themselves again as poor economic decision makers, or they are admitting they spend money on routine BS with zero frontier war-fighting capability.
Either way, it is beyond time to reform the Military and remove the majority of its leadership as incompetent stewards and strategists. That doesn't even include the massive security vulnerabilities in our supply chains given military needs in various countries. (Taiwan and Thailand)
If government procurement rules intended for national security risks can be abused as a way to punish Anthropic for a perceived lack of loyalty, why not any other company that displeases the administration, like Apple or Amazon?
This marks an important turning point for the US.
This is a trap. Two, I guess, but let's take the first one:
Domestic mass surveillance. Domestic.
Remember the eyes agreements: https://www.perplexity.ai/search/are-the-eyes-agreements-abo...
Expanding:
> These pacts enable member countries to share signals intelligence (SIGINT), including surveillance data gathered globally. Disclosures, notably from Edward Snowden in 2013, revealed that allies intentionally collect data on each other's citizens - bypassing domestic restrictions like the US ban on NSA spying on Americans - then exchange it.
Banning domestic mass surveillance is irrelevant.
The Eyes agreements allow the participating countries to share data with each other. Every country spies on every other country, and each tells the others what it has gathered.
This renders laws preventing the state from spying on its own citizens irrelevant; such laws serve only as evidence of mass manipulation.
You can argue that the government refusing to do any business with company A is overreach, I suppose, but I imagine that the next logical escalation in this rhetorical slapfight is going to be the government saying "we cannot guarantee that any particular use will not include some version of X, and therefore we have to prevent working with this supplier"...which I sort of see?
Just to take the metaphor to absurdity, imagine that a maker of canned tomatoes decided to declare that their product cannot be used to "support a war on terror". Regardless of your feelings on wars on terror and/or canned tomatoes, the government would be entirely rational to avoid using that supplier.
Statement on the comments from Secretary of War Pete Hegseth - https://news.ycombinator.com/item?id=47188697 - Feb 2026 (31 comments)
I am directing the Department of War to designate Anthropic a supply-chain risk - https://news.ycombinator.com/item?id=47186677 - Feb 2026 (872 comments)
President Trump bans Anthropic from use in government systems - https://news.ycombinator.com/item?id=47186031 - Feb 2026 (111 comments)
Google workers seek 'red lines' on military A.I., echoing Anthropic - https://news.ycombinator.com/item?id=47175931 - Feb 2026 (132 comments)
Statement from Dario Amodei on our discussions with the Department of War - https://news.ycombinator.com/item?id=47173121 - Feb 2026 (1527 comments)
The Pentagon Feuding with an AI Company Is a Bad Sign - https://news.ycombinator.com/item?id=47168165 - Feb 2026 (33 comments)
The Pentagon threatens Anthropic - https://news.ycombinator.com/item?id=47154983 - Feb 2026 (125 comments)
US Military leaders meet with Anthropic to argue against Claude safeguards - https://news.ycombinator.com/item?id=47145551 - Feb 2026 (99 comments)
Hegseth gives Anthropic until Friday to back down on AI safeguards - https://news.ycombinator.com/item?id=47142587 - Feb 2026 (128 comments)
Please realize that there's likely a group chat out there somewhere where all of these concerns have already been raised and considered. The best thing you can do is ask how you, as an outsider, can help support these organizers.
> We are aware of two mistakes in our efforts to verify the signatures in the form so far. One person who was not an employee of OpenAI or Google found a bug in our verification system and signed falsely under the name "You guys are letting China Win". This was noticed and fixed in under 10 minutes, and the verification system was improved to prevent mistakes like this from happening again. We also had two people submit twice in a way that our automatic de-duplication didn't catch. We do periodic checks for this. Because of anonymity considerations, all signatures are manually reviewed by one fallible human. We do our best to make sure we catch and correct any mistakes, but we are not perfect and will probably make mistakes. We will log those mistakes here as we find them.
I've been disappointed to see many businesses and institutions obeying in advance recently. I hope this moment wakes up the tech community and beyond.
This issue is about more than the government blacklisting a company for government procurement purposes.
From what I understand, the government is floating the idea of compelling Anthropic — and, by extension, its employees — to do as the DoD pleases.
If the employees’ resistance is strong enough, there’s no way this will serve the government’s interests.
Sure, if you immediately stopped government spending today, we'd have negative growth today. But that's not because other things aren't growing; it's because you just removed part of the base that existed last year. That would be true of pretty much any economy ever, or of anything that's growing when you remove a chunk of its base.
And yes I absolutely believe the government does not have better generative AI than Anthropic or its competitors.
This is the case for every government/nation in the world. The difference between communism and capitalism is that the Politburo in capitalism allows the natural selection of elites based on their performance in an open economy. At least that was the case until 2011.
Even if there was a desire for autonomous weapons (beyond what Anduril is already developing), I would think it would go through a standard defense procurement procedure, and the AI would be one of many components that a contractor would then try to build. It would have nothing to do with the existing contract between Anthropic and the Dept of War.
What, then, is this really about?
All of this should remain a bridge too far, forever.
EDIT: It is one level of bad when someone hacks a database containing personal healthcare data on most Americans, as happened not long ago. A few years back, the OPM hack gave attackers all they needed to know about then-current and former government employees, service members, and their families. Wait until a state-sponsored actor finds their way into the surveillance and targeting software and uses that back door to eliminate key adversarial personnel, or to hold them hostage with threats against the things they value most, so that the adversary builds a collection of moles who sell out everything in a vain attempt to keep themselves safe.
Of course we already know what happens when an adversary employs these techniques and that is why we are where we are right now.
Also, another warning to anonymous users: it's a little naive to trust the "Google Forms" verification option more than the email one, given that both employers probably monitor anything you do on your devices, even just loading the form. And in Google's case, they could obviously see what forms you submitted on the servers, too. If you won't use the email link, you might as well use the alternative verification option.
Anyway - I'm not claiming it's likely that the website creator is malicious, but surely it's not beyond question? The website authors don't even seem to be providing others with the verification that they are themselves asking for.
P.S. I fully realize that raising these concerns might itself make fewer people sign the form, which may be unfortunate, but it seems worth a mention.
I can't help but notice that Grok/X is not part of this initiative, though. I realize that frontier models are really coming from Anthropic, OpenAI, and Google, but it feels like someone is going to give in to these demands.
It's incredible how quickly we've devolved into full-blown sci-fi dystopia.
It's time to open-source everything. Papers, code, weights, financial records. Do all of your research in the open. Run 100% transparent labs so that there's nothing to take from you. Level the playing field for good and bad actors alike, otherwise the bad actors will get their hands on it while everyone else is left behind. Start a movement to make fully transparent AI labs the worldwide norm, and any org that doesn't cooperate is immediately boycotted.
Stop comparing AI capabilities to nuclear weapons. A nuke cannot protect against or reverse the damage of another nuke. AI capabilities are not like nukes. General intelligence should not be in the hands of a few. Give it to everyone and the good will prevail.
Build a world where millions of AGIs run on millions of gaming PCs, where each AI is aligned with an individual human, not a corporation or government (which are Machiavellian out of necessity). This is humanity's best chance at survival.
"Title I authorizes the President to identify specific goods as 'critical and strategic' and to require private businesses to accept and prioritize contracts for these materials."
If you invented a new kind of power source, and the government determined that it could be used to efficiently kill enemies, the government could force you to provide the product to them under the DPA. Why should AI companies get an exemption to that?
If you're an employee and actually believe in this you need to commit to something, like resigning.
Prediction: in time, OpenAI will be declared such to privatise profits but socialise losses
And where would they emigrate? Russia? China? UAE? :-)
not even top 3
Hope is neat, but are the signatories willing to quit their jobs over this? Kind of a hollow threat if not.
The other two definitely never would in a million years.
>All of this should remain a bridge too far, forever.
Hopefully the Singularity will be graceful, killing off everybody simultaneously
#PaperclipMaximizer #HimFirst
Head(s) will of course agree with the administration. And employees will likely be making themselves a target if they sign this letter. Everyone from said company signing anonymously is not a good look at all.
Speculation of course; let's see what really happens.
Some form of US AI lab nationalization is possible, but it hasn't happened yet. We'll see. Nationalization can take different forms, not to mention various arrangements well short of it.
I interpret the comment above as a normative claim (what should happen). It implies the nationalization threat forces the decision by the AI labs. No. I will grant it influences, in the sense that AI labs have to account for it.
It often starts as collective action in response to a blatant disregard for the values of the workers
Any collective action should be encouraged
This is a massive body slam. This means that Nvidia, every server vendor, IBM, AWS, Azure, Microsoft and everybody else has to certify that they don't do business directly or indirectly using Anthropic products.
The EU (which is not the same as Europe), is also looking a bit sharper on AI regulation at the moment (for now… not perfect but sharper etc etc).
At least you are not paying taxes for the things you don't agree with. It's indeed a strange time we are living in.
Although it would be nice to have some high-level signees there, I think we shouldn’t minimize the role of lay employees in this matter. Without having someone knowledgeable enough to build and operate them, AI models are worthless to the C-suite.
In the current political climate, this is the kind of thing that will get you "investigated" and charged with crimes.
And the government has already threatened that it will commandeer these companies whether they like it or not.
If someone in charge wants to make a difference, there might be more effective things to do than to speak out in this instance.
Only if you're naive. I guess most here are.
Governments are paranoid, particularly about losing control and influence over their subjects. This is expected behaviour.
There are already several comments here showing xAI's involvement. Please save clutter and read before posting.
Nukes are actually a great example of something also gated by resources. Just having the knowledge/plans isn't good enough.
What is why?
You never actually say that part, unless it's "It will eventually be taken from you by force" which doesn't seem applicable to this situation or this site?
If they actually wanted to do something they wouldn’t have sat back and funded Republican political campaigns because they were pissed about the head of the ftc under Biden.
But they didn’t. They gave millions to this guy, and now they’re feigning ignorance or change or whatever this is.
It’s meaningless. Utterly meaningless.
Get what you pay for, I suppose.
This is literally the mechanism by which the DoD does what you're suggesting.
Generally speaking, the DoD has to do procurement via competitive bidding. They can't just arbitrarily exclude vendors from a bid, and playing a game of "mother may I use Anthropic?" for every potential government contract is hugely inefficient (and possibly illegal). So they have a pre-defined mechanism to exclude vendors for pre-defined reasons.
Everyone is fixated on the name of the rule (and to be fair: the administration is emphasizing that name for irritating rhetorical reasons), but if they called it the "DoD vendor exclusion list", it would be more accurate.
Setting aside the spectacular metastasis of a lawless kakistocracy that is literally rewriting the facts on record...
Anthropic's leadership has wisely attempted to make it clear that its product is not fit for the US DoD's purpose/objective, which is automated killing at scale.
It would be (is) grossly, historically negligent to operate weapons with LLMs. Anthropic built systems for a thuggocracy that only understands bribery, blackmail, and force.
Not only in the US, but everywhere else there is a government.
Anthropic is trying to make that a corporate prerogative, which is why it's causing such a stir.
The obvious solution is to use AI to build and operate them. If AI is as intelligent as the hype claims it shouldn't be an issue. It's not as if the goal wasn't to get rid of workers anyway. Why not start now?
The question isn’t if some would attempt these behaviors, but rather if we and our democratic structures empower those people or fail to constrain them.
OK, maybe someone will build a bioweapon that does that for real. :P
We shouldn't expect these people to consider how the logic breaks down one step ahead when it never made sense in the first place.
https://www.opensecrets.org/orgs/alphabet-inc/recipients?id=...
The corporation gave millions _after_ Trump had already won. If your criticism is that, then that does not apply to the people signing.
>Designating Anthropic as a supply chain risk would be an unprecedented action—one historically reserved for US adversaries, never before publicly applied to an American company.
Some very brief googling also confirmed this for me too.
>Everyone is fixated on the name of the rule (and to be fair: the administration is emphasizing that name for irritating rhetorical reasons), but if they called it the "DoD vendor exclusion list", it would be more accurate.
This statement misses the point. The political punishment of disallowing all US agencies and gov contractors from using Anthropic for _any_ purpose, not just domestic spying, IS the retaliation, and is the very thing that's concerning. Calling it a "DoD vendor exclusion list" or whatever other placating term doesn't change the action.
You are working on ads, slurping up data and trapping people into rage baits and dramas with an economy centered around marketing and influencer types.
I don't think these tech elites should decide arbitrarily by signing some fake elitist pledge.
The USA has a democratic way of resolving these things. It should not be in the hands of a few. The executive branch is a side effect of elections and should hold the line against these tech elites.
I don't agree with the essence of these nonsense pledges either: they are actively undermining the US while living and breathing here thanks to the most advanced military and defense systems on earth.
Why are these tech elites not including things like "we won't slurp up ad data" or "we will not work on dark patterns"? Because it's easy to come up with BS pledges and seem "holier than thou".
It is a bit infuriating because this resulted in the mess we are in. The income disparity between the tech elites (the entire tech industry) and the rest of the country is so huge that I don't think empty posturing and pledges and moral superiority matters.
I do not want to be associated with these elitist people who, as a group, are extremely educated, talented, and impactful, but in one very, very tiny piece in the grand scheme of things. That doesn't automatically make you the controller of the entire world's decisions.
I only say this because this is not new behavior for the administration; it's been reported here on HN, and in less biased and less political ways, but it ends up suppressed. I'm just confused about what changed.
Edit: just to be clear, this shouldn't be flagged, and posts dealing with rights in the past shouldn't have been flagged either, because rights should be the preeminent concern of anyone in tech.
It’s too little too late. Don’t be evil is not a value anyone is even pretending to uphold.
I’d rather some of these very smart people start to develop countermeasures.
Prisoner's Dilemma in Action!
Maybe it can get reused after this stuff is over.
Funding the majority of HIV prevention in Africa.
The list is long, but you knew that.
Not to mention the UK is arguably further down the mass-surveillance pipeline than the US. They’ve always had more aggressive domestic intelligence surveillance laws, which was made clear during the Snowden years; they’ve had Flock-style cameras forever; and they have an anti-encryption law pitched seemingly yearly.
I’d imagine most top engineers would rather try to push back on the US executive branch overreach than move. At least for the time being.
At the end of the day it’s a matter of incentives, and good knowledge work can’t simply be forced out of people that are unwilling to cooperate.
The thinking seems to be that you can't have every defense contractor coming in with their own, separate set of red lines that they can adjudicate themselves and enforce unilaterally. Imagine if every missile, ship, plane, gun, and defense software builder had their own set of moral red lines and their own remote kill switch for different parts of your defense infrastructure. Palmer would prefer that the President wield these powers through his Constitutional role as commander-in-chief.
What is "it" in your comment?
The refusal to sign a contract with Anthropic, or their designation as a supply chain risk?
But don’t want to play ball when we’re on the cusp of war & immigration crises
Going to learn about who runs the country the hard way:
Defense Production Act
Costs a few hundred thousand per server, it's a huge expense if you want it at your home but a rounding error for most organizations.
Was it successful? The jury is still out.
Intelligence itself is not dangerous unless only a few orgs control it and it's aligned to those orgs' values rather than human values. The safety narrative is just "intelligence for me, but not for thee" in disguise.
We live in a free society. AI should be democratized like any other technology.
> (b) Prohibition. (1) Unless an applicable waiver has been issued by the issuing official, Contractors shall not provide or use as part of the performance of the contract any covered article, or any products or services produced or provided by a source, if the covered article or the source is prohibited by an applicable FASCSA orders as follows:
I'd even go as far as to say that if this is indeed a publicity campaign, it is the most successful one I've seen in years. Many detractors of the existence of LLMs are suddenly leaping to Anthropic's defence.
Also, if AI exists, AI will be used for war. The AI company employees are kidding themselves if they think otherwise, and yet they are still building it (as opposed to resigning and working on something else), because in the end, money is the only true God in this world.
What I have known is that since its very inception, Google has been doing massive amounts of business with the war department. What makes this particular contract different? I really am trying to understand why these sentiments now.
Can Lockheed's drones autonomously blow up hippies' houses for protesting wars? Can a weapons system patch out support for features the contractor is no longer interested in supporting? Can all the intel gathered by these products be automatically forwarded to the contractor to be sold off to third-parties? Will rifles spontaneously refuse to fire when they incorrectly judge an enemy combatant to be a civilian?
I think Silly Valley has been allowed to get away with too much for too long when it comes to abusing their customers. That only works with end-users because most people aren't going to spend $10k on a lawyer to argue with microslop over all these idiotic mandatory updates. If they want to suckle off the military industrial complex's teat they can't be allowed to behave like this. Otherwise they can just not sign onto contracts to develop weapons systems for something that calls itself "the department of war" like normal conscientious objectors do.
https://news.ycombinator.com/item?id=47188473#47188709
They are very much not a part of the initiative. Their involvement is and will be non-existent. Unless of course, you want their lay staff to make some noise?
I think that's a key difference as well.
And how would a treaty like that be enforced? Every country has legitimate uses for GPUs, to make a rendering farm or simulations or do anything else involving matrix operations.
All of the technology involved, in more or less the configuration needed to make your own ChatGPT, is dual use.
On your second point, see my response to oceanplexian below: https://news.ycombinator.com/item?id=47189385
Thing is, they very much want access to Anthropic's models. They're top quality. So they definitely want Anthropic to bid, AND to give them unrestricted access.
"Misinformation" does not mean "facts I don't like".
> No one who wants to work with the US government would be able to have Claude on their critical path.
Yes. That is what the rule means. Or at least "the department of war". It's not clear to me that this applies to the whole government.
Idealists who “genuinely”[1] want to change the world “for the better”[1] will just move on to the next Interesting Problem if it ends up making the world worse.
Governments should not be permitted to introduce regulations against companies of this kind if the regulations can be enforced selectively and with regulator discretion, as the GDPR and antitrust definitely are. The free-speech implications are staggering.
No. Hope is not a strategy. Too many of the techno-optimist future narratives are used to coat over the increasingly screaming cognitive dissonance as we watch what keeps us civil, what keeps us from each other's throats, decline, smothered by the rise of the broligarchy.
What's happening here is not about AI. It's a loyalty test, administered to every major actor in the economy, the more influential, the more ruthless and earlier.
Your core values, in exchange for taxpayer money access and loyalty to the Don, an offer few can refuse.
And the choice will come for everyone. It's a distillation attack to filter the loyal:
- DEI for grants
- Your officers' oath to not kill civilians, traded by word of your leader for a continued career
- AI safety for non-blacklisting
- Your immigrant employees' locations, for us not harassing your offices in person
- Your trans neighbour shipped to a reeducation camp and gender reassignment, for the safety of your family
Becoming complicit is the ultimate loyalty
So stop hope. Stop asking. Demand, Force, Resist.
> Do not go gentle into that long night,
> The righteous should burn and rave at close of day;
> Rage, rage against the dying of the light
So there you have it
That's apparently about 6k books' worth of data.
There are people today who could create such a pathogen, but not many. Widespread access to powerful AI risks lowering the bar enough that we get overlap between "people who want to kill us all" and "people able to kill us all".
This is not a gotcha argument, this is what I work full time on preventing: https://naobservatory.org The world must be in a position to detect attacks early enough that they won't succeed, and we're not there yet.
Yes, this is the part where I acknowledge that it might be overreach in my original comment, but it's not nearly as extreme or obvious as the debate rhetoric is implying. There are various exclusion rules. This particular rule was (speculating here!) probably chosen because a) the evocative name (sigh), and b) because it allows broader exclusion, in that "supply chain risks" are something you wouldn't want allowed in at any level of procurement, for obvious reasons.
Calling canned tomatoes a supply chain risk would be pretty absurd (unless, I don't know...they were found to be farmed by North Korea or something), but I can certainly see an argument for software, and in particular, generative AI products. I bet some people here would be celebrating if Microsoft were labeled a supply chain risk due to a long history of bugs, for example.
I wonder if this is how some non-trivial minority of Americans think, or whether it was just worded like that to try to appeal to the "most radical patriots".
Every nation has some bias, but I think Americans have power poisoning from being the dominant power for so long. They think they are entitled to do anything and believe they are the good guys of history. Well...
Here is an interesting thing to think about: which country spies on Americans the most, and how? Are there New Zealand commandos sneaking around the shores tapping cables? Moles working at AT&T for the Canadian government? What happens if one of those individuals gets caught? Are they quietly allowed to leave, and if they commit any crimes, do the charges get erased magically? If that doesn't happen, there's the danger that they'll grab our spies in their countries in turn. Or they just blatantly pass around lists of who works for whom so they don't interfere with each other, as that would preclude getting the data back through the loop to the NSA.
There is of course another loophole, and that is private entities collecting data. The Constitution doesn't say anything about that, so the government figures it's fair game if it just pays a company to collect the data and then queries it later. They didn't collect it, so it's not "spying".
The agreement at the heart of Five Eyes is to not surveil the other member nations; this must be up there among the most persistently misunderstood facts among techies (probably why AI spits it out).
Be the first to sign this letter.
Current and former employees of Google and OpenAI are invited to sign. You may sign anonymously. All signatures are verified before being published.
Have you thought about broadening the requests to be more comprehensive?
The goal of this letter is to find common ground. The signatories likely have a diverse set of views. The current situation with the DoW is so clear-cut that it can bring together a very broad coalition. Signing this letter doesn't mean you think it's the only thing that needs to be done, just that you agree with the bottom line.
Who is behind this?
This letter was organized by a few citizens who are concerned about the potential misuse of AI against Americans. We are not affiliated with any political party, advocacy group, or organization. We are not affiliated with any AI company and are not paid.
Who can sign?
Current and former employees of Google and OpenAI are invited to sign. We verify every signature to ensure authenticity. You may sign anonymously.
How is my data handled?
If you sign anonymously, your personal information (name, email) is automatically and permanently deleted from our database within 24 hours of verification. After deletion, only your anonymous public listing remains (e.g. "Anonymous, verified current employee at [Company]"). Only one organizer has access to review anonymous signatures during that 24-hour window. No one else can see your identity.
If you sign publicly, we store your name and affiliation to display on the letter. Email addresses used for verification are never published or shared.
What if I accidentally fill out the form twice?
Don't worry. We de-duplicate non-anonymous signatures automatically, and anonymous signatures within 24 hours (before personal data is deleted). For anonymous signatories beyond 24 hours, we cannot verify there are no duplicates, though there is one human who manually reads all signatures and will try hard to notice and correct any abuse of the system.
I signed anonymously but now want to put my name on it. How can I fix that?
Sign again using the "Alternative verification" method. In the verification details, mention that you previously signed anonymously and would like to switch to a named signature. We'll update your entry and make sure you're not double-counted.
How do you verify signatures?
Every signature is verified before it appears on the letter. If you sign using the Google Form or email verification options, we confirm that you have access to a @google.com or @openai.com email address. If you use alternative verification, an organizer manually reviews your proof of employment. No signature is published without verification.
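For the curious, the email-domain check described above amounts to something like the following sketch. This is hypothetical (the site's actual verification code is not shown here), but it illustrates the basic rule: only `@google.com` and `@openai.com` addresses pass.

```python
# Hypothetical sketch of the domain check described above; the site's
# real implementation may differ.
ALLOWED_DOMAINS = {"google.com", "openai.com"}

def is_verifiable_address(email: str) -> bool:
    """Return True if the address belongs to one of the allowed
    corporate domains."""
    email = email.strip().lower()
    # Require exactly one "@" so malformed inputs like "a@b@google.com"
    # are rejected rather than matched on their trailing domain.
    if email.count("@") != 1:
        return False
    _, domain = email.split("@")
    return domain in ALLOWED_DOMAINS
```

Addresses that fail this check would fall through to the manual "alternative verification" path.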
Have there been any mistakes in signature verification for this letter?
We are aware of two mistakes in our efforts to verify the signatures in the form so far. One person who was not an employee of OpenAI or Google found a bug in our verification system and signed falsely under the name "You guys are letting China Win". This was noticed and fixed in under 10 minutes, and the verification system was improved to prevent mistakes like this from happening again. We also had two people submit twice in a way that our automatic de-duplication didn't catch. We do periodic checks for this. Because of anonymity considerations, all signatures are manually reviewed by one fallible human. We do our best to make sure we catch and correct any mistakes, but we are not perfect and will probably make mistakes. We will log those mistakes here as we find them.
What infrastructure does this site use and is it secure?
This site is hosted on Fly.io, a US-based infrastructure provider. The database is SQLite, stored on an encrypted persistent volume. Verification emails are sent via Resend. Google Forms is used as one verification option because it allows email confirmation without sending anything to your inbox. The site itself is a simple open-source Flask application. No analytics or tracking scripts are used. DNS and SSL are managed through Cloudflare.
You know, there are plenty of examples where people in positions of power chose different paths of escalation. It doesn't always need to be linear tit for tat. Sometimes you need to step back, look at the larger picture, and decide whether the escalation is worth the risk for all of humanity.
There is a video about game theory [0] that describes this problem very clearly. You have better outcomes when you make decisions outside the direct course of escalation.
Please don't talk in absolutes about these things; you have an opinion. I accept that, but it's not as black and white as you think.
Your worldview is outdated. There are obviously risks to signing this. Get your head out of the sand.
So they're saying Anthropic is lying or what? Because Sam Altman is saying that the DoD agrees with no mass surveillance and no autonomous drone killing. And if not, how is safety their priority?
https://news.ycombinator.com/newsguidelines.html
Would this "open letter" be covered on TV news?
I’m not gonna dispute the UK being further down some parts of the road.
Not sure what you’d count as top engineers, but I know enough who have been asking about and moving to the UK/EU that it’s been a noticeable reversal of the historic trends. Also, a major slowdown in these kinds of people in the UK/EU wanting to move to the US.
When you only allow gov and big tech access to powerful AI, you create a much more dangerous and unstable world.
Centralizing power is dangerous and leads to power struggles and instability.
Which is why people are talking about this -- it's about ideology now.
You may personally be motivated solely by money. Not everybody is you.
And the US can’t realistically stop our well-funded homegrown AI Hardware startups from manufacturing with TSMC. This is part of why there’s funding from the EU to develop Sovereign AI capabilities, currently focused on designing our own hardware. We’re nothing like as far behind as you might expect in terms of tech, just in terms of scale.
Also, while US export restrictions might make things awkward for a short while, it wouldn’t stop European innovation. The chips still flow, our own hardware companies would scale faster due to demand increase, and there’s the adage about adversity being the parent of all innovation (or however it goes).
The fabs aren't, and that is no small thing. The tech stack is there though.
It's pretty tiresome that the HN audience keeps assuming Europe doesn't have "tech" because it doesn't have Facebook. Where do you think all the wealth comes from? Europe is all over everyone's R&D and supply chain.
If I sell red widgets that I make by hand to the government, I won't be allowed to use Anthropic to help me write my web-site.
Substantively, individual employees of these firms may have little or no actual impact on this. But AI is ubiquitous enough and disruptive enough that being professionally connected with it at a time of great geopolitical instability has the potential to be a very very bad look later.
[1]: https://x.com/UnderSecretaryF/status/2027594072811098230
That said, here are some American examples - it is being covered by CBS: https://www.cbsnews.com/news/anthropic-ceo-dario-amodei-full...
And a local affiliate: https://www.cbs8.com/article/news/nation-world/trump-order-a...
And ABC: https://abcnews.com/US/wireStory/anthropic-refuses-bend-pent...
And a local affiliate: https://abc7news.com/post/anthropic-refuses-bend-pentagon-ai...
NBC: https://www.nbcnews.com/tech/tech-news/trump-bans-anthropic-...
Fox: https://www.foxnews.com/politics/tech-company-refuses-pentag...
Searching "anthropic letter tv news coverage" in Google, the News tab has tons of other mainstream news sources, worldwide, covering this story.
So yes. This and many "technical" stories that appear on HN would be covered by "tv news."
And no, working remotely for US companies doesn't count.
People have this intuitive sense that there's some kind of authority of truth or justice, an available recourse that we could've and should've used.
But that sense is incorrect.
What we actually have are the political and justice systems that Trump and his adherents have, so far, quite successfully subverted.
I think war is bad and generally a stupid thing to do, but my point is that if they were negotiating terms with the department at all, it's really a given they'd be OK with the stuff you took issue with.
I thought the US was a country of immigrants (or was before it started hunting them)?
The tools will be used however the government wants them to be used. The government makes the laws and wages the wars, and the corporation will follow the law whether it wants to or not.
So either you are willing to work on a tool that is not under your control, or you are not.
Ideology is easy to throw around in internet comments, but working on the cutting-edge stuff next to the brightest minds in the space will always be a major personal draw. Just look at the Manhattan Project; I doubt the primary draw for all of those academics was getting to work on a bomb. It was the science, the huge funding, and the company of their peers.
See what happened to Russian Baikal production on TSMC
US tech companies were previously forced into compliance with PRISM or threatened with destruction (see: escalating fines to infinity against Yahoo, forcing their eventual compliance).
You know what's new? This administration is doing out in the open what used to go on quietly.
It's interesting to see that nothing happens despite this. Now he started another war to distract from his involvement in the huge Epstein network. Also, by the way, quite interesting to see how many people were involved here; there is no way Ghislaine could solo-organise all of that yet she is the only one in prison. That makes objectively no sense.
That is, the money doesn't care so long as it's still profitable. When the recession comes a Democrat will be allowed back in to fix things.
See Liz Truss.
Anne Sacoolas (the woman who mowed down a British teenager with her car, but escaped because she had diplomatic immunity) turned out to be a senior CIA spy.
There are only good/bad people for moments in time. Some are good for longer than others.
But I get it, anti-American sentiment is very popular right now.
Snowden, as a very rare exception, did show clearly that the government agencies are quite capable of not providing anything to cite.
As an Australian, I wouldn't trust it at all. The US government has already asked the Australian government for highly expanded information on Australian citizens, and that's above the table.
Stop believing what these people are telling you. They have an awful track record, and the people making the statements now are even worse than the previous people.
or would the government just buy the stocks on the market?
Snowden wasn’t showing the world the NSA surveillance systems against them; he was trying to show that the US was illegally spying on its own citizens by leveraging the five-eyes countries to collect and aggregate the data on their behalf.
This also isn’t hypothetical. I know top-talent engineers and researchers that have moved out of the USA in the last 12 months due to the political climate (which goes beyond just the AI topics).
And you might want to read a few books on the Manhattan project and the people involved before you use that analogy. I don’t think it’s particularly strong.
Or because of the revoked processor design licenses from the British company Arm (which is still UK headquartered… despite being NASDAQ listed and largely owned by Japanese firm SoftBank)?
Or perhaps you think the US could stop us using the 12nm fabs being built by TSMC on European soil? Or could stop us manufacturing RISC-V-based chips (Swiss-headquartered technology)?
The US is weak in digital-logic silicon fabrication and it knows it. That’s why it’s been so panicked about Intel and been trying to get TSMC to build fabs on US soil. They’re pouring tens of billions of dollars into trying to claw back ownership and control of it, but it’s not like Europe or China or others are standing still on it either.
While the government is accustomed to complying with software licensing rules, indeed it is not accustomed to being limited in warfare, so the two have now come into an interesting conflict.
Being built as in not operating yet?
A 12nm GPU is what, Nvidia 1080/2060 level? Those top researchers mentioned would love to train on that. Also, how many GPUs would be made annually?
Also, what about CPUs? Are you going to use RISC-V? With what toolchain?
China could pull it off in a few years, yeah.
The EU? Nah. It started thinking about sovereignty too late compared to China.
Are they working remotely for US companies? In Canada that’s very much still the case everywhere you look
> Even the big American companies have been opening offices in places like London to hire the top talent at high salaries.
I assumed this discussion was about rejecting working for US companies who would be susceptible to the executive branch’s bullying, not whether you can make a US-tier salary off American companies while not living in America. If you’re doing that, you might as well live in America among the other talent and maximize your opportunities.
> You know what's new? This administration is doing out in the open what used to go on quietly.
So this administration has got bold and the behaviour has become overt.
e: Americans seem to be surprised to learn that their democracy is indeed classified as a flawed democracy for more than a decade by The Economist due to decades of backsliding (the more rapid regression lately is not yet accounted for, but I wouldn't be surprised if the outcome of the 2026 elections results in a hybrid regime assessment in 2027).
They were going to do him for conspiracy to defraud the United States and conspiracy to obstruct an official proceeding, re. the 2020 stuff before he got reelected.
Americans do the same, hence the whole world got Trump. 95% of the world isn't the US, so such logic is even easier for almost all of mankind: is the US a force for good or evil? Different places would give you different answers, and most Americans would not like the actual spread these days.
https://en.wikipedia.org/wiki/Anne_Sacoolas#Diplomatic_issue...
History will put Trumpers and Confederates at the same level of despicability.
You step up and start shooting at the heartless monsters running the first (US armed forces) and second (ICE) most well-funded militaries in the world. Go ahead. We’ll be right there behind you.
(Yeah, I’m burning some hn karma for this, I imagine.)
Yeah dude, that's the point.
There were a lot of things Snowden revealed, but most assuredly it was also about spying on US citizens: the NSA directly wiretapping people, even in cases when all communication was domestic; the NSA bypassing security via routers diverted during shipping to Google, Facebook, and others, with backdoors installed, thus compromising their infrastructure.
Back to the Five Eyes: there is a difference in scope and scale between a foreign country spying on your citizens and you doing it yourself. The scope is entirely different, as are the scale and the capability.
It does matter whether it is the Five Eyes doing it or whether it is domestic.
Now, does this stance matter overall? I don't know. It's a nice moral stance, I think. Is it functionally realistic?
I just don't know.
When these things are done right, you won't hear about it.
In other words, we might have killed Osama Bin Laden, but he won. The U.S. truly is a "shadow of its former self."
I think the solution is also obvious for the United States — higher taxes and lower government spending. We need to do both. However, you can't get elected if you promise both those things.
The current US administration's relationships with corporations is more seeking to maximise how much bribe money it can extract from them, whilst undermining them with counterproductive policies no matter how big the tax breaks are.
A recent report shows the approval numbers: for all Americans it's at 36%; for white Americans, it's at 45%.
The US government has lots of corporatism, but this isn't an example of that.
And that whenever a mass shooting happens in the US, Americans reassure themselves that gun violence is a price worth paying for the Second Amendment. And there is a run on pawn shops and gun stores because mass shootings are the best form of advertising America's billion dollar gun lobby has.
And that Americans will wax poetic about watering the Tree of Liberty with the Blood of Tyrants and Patriots any time gun control comes up, because they believe their Second Amendment is an absolute vouchsafe against tyranny and because of that, they and they alone are the only truly free country.
And they were willing to rise up in Portland.
And they were willing to rise up during COVID.
And they were willing to rise up on Jan 6th.
And they're willing to shoot up schools and black churches and gay nightclubs and mosques so often it no longer makes the news.
But now, with blatant and undeniable tyranny in their face and shooting them dead in the streets... nothing.
Not that violence would necessarily be productive (although historically speaking no social or political progress happens without it)... but it's weird that the most violent society in human history, born of genocide and bathed in blood, with more guns than people and gun violence enshrined as its second most important and fundamental virtue, the land of "give me liberty or give me death" is all of a sudden the most timid.
Like goddamn throw a Molotov cocktail or something.
But nope, only words, words and more words.
is that so?
Even 36% is sky high for what he did.
https://www.nytimes.com/interactive/polls/donald-trump-appro...
https://www.reuters.com/graphics/TRUMP-POLLS-AUTOMATED/APPRO...
https://www.economist.com/interactive/trump-approval-tracker
Your ignorance of reality does not define reality.
"We mustn't consider dealing with problem x because it wasn't considered important by our founding fathers"
"China are catching up, so we need to cower behind a tariff wall rather than risk losing an open competition"
"Other countries with similar legal systems have successfully reformed their supreme courts, but there's nothing we can learn from them"
"We shouldn't constrain rogue leaders because of, er, something to do with King George III"
...and now "we can't push back against the regime, because they'll shoot us if we do".
It's so weird - a huge shift in such a short period of time. As an outsider who wishes America well, it's really sad to see.
You're making the mistake of assuming an attribute of a culture cannot be accurate unless it's 100% accurate about every member.
I think it's perfectly valid to call Americans on the carpet when they won't live up to their stated principles, if only because of how obnoxious they've been about their own sense of exceptionalism, and how their guns supposedly serve as an absolute vouchsafe against tyranny.
History is going to note that the only times Americans attempted a revolution against their government was first in defense of slavery and second in defense of fascism, and that isn't a good look. Replying with #notallamericans doesn't help.
edit: OK partial mea culpa as the US had anti-slavery revolts[0], but the two events that will stand out for their lasting impact and scope are the Civil War and Jan. 6th. The Revolutionary War doesn't count because they were British at the time.
[0]https://en.wikipedia.org/wiki/Slave_rebellion_and_resistance...
As for getting shot: while the chance of getting shot in the US for opposing the government is much higher than in similar circumstances somewhere like the UK (which is far from perfect, but rarely actually shoots people), it's also much, much lower than in Iran or China or Saudi Arabia.
Pushing back against the US government is a lot safer than taking part in something like the 2022 protests that ousted the Sri Lankan government, and lots of normally apolitical people took part in that (which was why it succeeded).
If you are in law enforcement, do not follow clearly unlawful orders. The president is not your boss. This is a functioning democracy.
If you are a librarian, do not hide otherwise lawful books that the current administration dislikes.
If you are in logistics, do not collect obviously unconstitutional taxes. Make sure to challenge them in courts first.
If you are in a university, stick to what is true and scientifically sound. Do not hide inconvenient truths.
If you are a baker, do not refuse to make a rainbow colored cake just because you are worried what the people wearing metaphorically brown shirts might say.
The list goes on and on and on. This has been well documented throughout history. Fascism needs a seed to thrive, and that seed is people complying in advance. Not with actual laws, but with the idea of what direction the law will take, just because it's easier for them. People not helping other people because immigration is not in vogue right now and who knows what the neighbors might say.
Don't dismiss words: they are the necessary link between (individual) thoughts and collective deeds.
PS: Trump also got there with words: speeches, slogans, imprecations.