This is an edition of The Atlantic Daily, a newsletter that guides you through the biggest stories of the day, helps you discover new ideas, and recommends the best in culture. Sign up for it here.
Earlier this week, Secretary of Defense Pete Hegseth sat down with Dario Amodei, the CEO of the leading AI firm Anthropic, for a conversation about ethics. The Pentagon had been using the company's flagship product, Claude, for months as part of a $200 million contract (the AI had even reportedly played a role in the January mission to capture Venezuelan President Nicolás Maduro), but Hegseth wasn't satisfied. There were certain things Claude just wouldn't do.
That's because Anthropic had built certain restrictions into it. The Pentagon's version of Claude couldn't be used to facilitate the mass surveillance of Americans, nor could it be used in fully autonomous weaponry: situations in which computers, rather than humans, make the final decision about whom to kill. According to a source familiar with this week's meeting, Hegseth made clear that if Anthropic didn't eliminate these two guardrails by Friday afternoon, two things could happen: The Department of Defense could use the Defense Production Act, a Cold War–era law, to essentially commandeer a more permissive iteration of the AI, or it could label Anthropic a "supply-chain risk," meaning that anyone doing business with the U.S. military would be forbidden from associating with the company. (This penalty is typically reserved for foreign corporations such as China's Huawei and ZTE.)
This evening, Anthropic said in a public statement that it "cannot in good conscience accede" to the Pentagon's request. What happens next could mark a crucial moment for the company, and for the American government's approach to AI regulation more broadly. In refusing to bow to an administration that has been intent on bullying private companies into submission, Amodei and his team are taking a bold stand on ethical grounds, and risking a censure that could erode Anthropic's long-term viability.
During the first year of Donald Trump's second term, the White House took a more relaxed attitude toward AI regulation; an AI Action Plan from July stresses that the administration will "continue to reject radical climate dogma and bureaucratic red tape" in order to encourage innovation. Hegseth is now, in effect, threatening to partially nationalize one of the biggest AI players in the private sector, and to force the company to go against its own principles. "This is the most aggressive AI regulatory move I've ever seen, by any government anywhere in the world," Dean Ball, who helped write some of the Trump administration's AI policies, told me.
The Pentagon has already reportedly been reaching out to other defense contractors to see if they're connected to Anthropic, a sign that officials are preparing to designate the company a supply-chain risk. Now that Anthropic has defied Hegseth, the contract is likely in jeopardy. The firm doesn't really need the $200 million (it reportedly pulls in $14 billion a year, and it said it raised $30 billion in venture capital just weeks ago), but being blacklisted could affect its ability to scale up in the future. ("We are not walking away from negotiations," an Anthropic spokesperson told The Atlantic in a statement. "We continue to engage in good faith with the Department on a way forward." The Pentagon told CBS on Tuesday that "this has nothing to do with mass surveillance and autonomous weapons being used," and that "the Pentagon has only given out lawful orders.")
As AI companies around the world jockey for dominance, Anthropic has distinguished itself by emphasizing safety. OpenAI's ChatGPT has been criticized for playing up some users' delusions, leading to cases of "AI psychosis," and just last month, xAI's Grok was spinning up nearly nude images of almost anyone without consent. (xAI has said it is restricting Grok from producing these kinds of images, and OpenAI has said it is working to make ChatGPT better support people in distress.) Meanwhile, Anthropic's consumer-facing chatbot doesn't generate images at all. By refusing to cave to government pressure, it may have just averted another crisis: a major public backlash from users, some of whom see the company as a more principled player in the AI wars. Anthropic recently faced some pushback over changing its policies; Time reported on Tuesday that, in a seemingly unrelated move, the company dropped a core safety pledge concerning its broader approach to AI development.
Weeks before Hegseth issued his ultimatum, Amodei opined on his website about the dangers involved with precisely the two guardrails the Pentagon is targeting. "In some cases," he wrote, "large-scale surveillance with powerful AI, mass propaganda with powerful AI, and certain kinds of offensive uses of fully autonomous weapons should be considered crimes against humanity."
The Trump administration doesn't seem to know what it wants from AI. On one hand, it's deeply suspicious of certain kinds of models. The White House's designated AI czar, David Sacks, has criticized Anthropic for "running a sophisticated regulatory capture strategy based on fear-mongering," essentially accusing the firm of pushing for unnecessary, innovation-squashing limitations and jeopardizing the future of American tech. The administration has also criticized AI bots for sometimes spitting out "woke" replies. On the other hand, Claude is apparently useful enough that it's on the cusp of being commandeered by the federal government.
Ball told me that the Department of Defense may have a point, and that there's an argument to be made about reining in Silicon Valley's control over the government's use of new technologies. Although the concentration of power among the technocratic elite is certainly troubling, Hegseth's proposed punishments for Anthropic are misguided and plainly contradictory. The Defense Production Act does allow the government to intervene in domestic industries in the interest of national security (the Biden administration invoked it in a 2023 executive order on AI regulation). But is Claude so important for U.S. national security that the government needs to compel Anthropic to create an untethered new version? Or is it so dangerous that it needs to be shunned, not just by the Pentagon but by any business connected to the military? A third, even-more-bewildering option is also on the table: Hegseth could decide to simultaneously commission a modified Claude and sanction the company that stewards it.
All of this ignores a much simpler solution: Hegseth could just start a partnership with a different firm. It's a good time for his department to be in business with tech, as the mood of Silicon Valley has lately become far more Pentagon-friendly. Palantir's Alex Karp has touted that his software is used "to scare our enemies and, on occasion, kill them"; the technologist and entrepreneur Palmer Luckey is already building autonomous weaponry for the government; and Andreessen Horowitz's American Dynamism funds are helping funnel the country's top young minds into defense tech. But rather than look elsewhere, Hegseth is threatening to crush Anthropic, implying that if he can't control Claude, no one can.
As the defense secretary looks to make an example of the company, he's taking a cue from Trump, who has used legal and extralegal pressure to effectively force other private businesses, particularly big law firms, banks, and universities, into submission. These acts of coercion have the potential to reshape American capitalism: We're beginning to see a market where winners and losers are decided less by the quality of their products and more by their seeming fealty to the White House. How that will affect the success of businesses and the economy is uncertain.
The Pentagon created this ultimatum precisely because it understands Anthropic's world-altering potential. The administration just can't decide if it's an asset, a liability, or both.
Related:
Here are three new stories from The Atlantic:
Today's News
- A Columbia University student detained this morning by federal immigration agents has been released. The arresting officers reportedly misrepresented themselves as looking for a missing child in order to gain access to the student's residential building.
- Hillary Clinton told the House Oversight Committee that she has no new information about Jeffrey Epstein and maintained that she had no knowledge of his crimes; she criticized congressional Republicans' handling of the probe as partisan. Bill Clinton is scheduled to give his deposition tomorrow.
- Cuban forces killed four people and wounded six after firing on a Florida-registered speedboat that Cuban authorities say entered the country's waters yesterday and opened fire on a patrol vessel. Cuba claims that the U.S.-based passengers were armed and planning a "terrorist" infiltration.
Dispatches
Explore all of our newsletters here.
More From The Atlantic
Evening Read

This Looks Like an Insider Bet on Aliens
By Ross Andersen
On Monday night, someone placed a peculiar bet on the prediction market Kalshi. At 7:45 p.m. eastern time, a single trader put down nearly $100,000 on the claim that, by the end of December, the Trump administration will confirm that alien life or technology exists elsewhere in our universe. According to The Atlantic's analysis of Kalshi's trading data, about 35 minutes after this bet was executed, it was followed by another that was almost twice as large (possibly from the same person). These were market-moving events: For one brief stretch, the market seemed to think that there was at least a one-in-three chance that the U.S. government will announce the existence of aliens this year. Perhaps this was just an overexcited UFO diehard with a hunch and money to burn. Or maybe, as some observers quickly noted, it was a trader with inside knowledge.
Culture Break

Explore. When did literature get less dirty? A puritan streak is manifesting in realist novels as a marked absence of straight sex, Lily Meyer writes.
Read. Casey Schwartz on two new books that demonstrate how Martha Gellhorn, Janet Flanner, and other female reporters took journalism in directions that men couldn't.
Play our daily crossword.
Rafaela Jinich contributed to this newsletter.
When you buy a book using a link in this newsletter, we receive a commission. Thank you for supporting The Atlantic.
