Earlier this week, Defense Secretary Pete Hegseth held a high-stakes meeting with Anthropic CEO Dario Amodei and, according to several news reports, delivered an ultimatum: either Anthropic drops the safety guardrails built into its AI model, Claude, or it faces potentially punishing consequences—including invoking the Defense Production Act to effectively seize Claude, or banning Anthropic outright by declaring the company a “supply chain risk.”
At issue are Anthropic’s terms of service for Claude, which prohibit the model from being used to develop or deploy lethal autonomous weapons systems—so-called “killer robots” that can identify and strike targets without meaningful human oversight. The Pentagon wants a free hand to use Claude in developing such systems; Anthropic wants to prevent Claude from being used that way.
The outcome of this dispute is highly consequential—potentially even for the future of humanity. Swarms of drones and other military hardware could operate autonomously, coordinating among themselves to select and kill targets without human intervention. The Pentagon worries that if it doesn’t develop these systems, China will. Anthropic considers them an ethically abhorrent line it does not want to cross.
Joining me to discuss the details of this clash between a leading AI company and the Pentagon is Anna Hehir, head of Military AI Governance at the Future of Life Institute. We kick off with a discussion of how AI systems are already integrated into the U.S. military, before turning to a longer conversation about the vast implications of whether Anthropic complies with the Pentagon’s ultimatum. We also discuss how this incident illustrates the need for international agreements on lethal autonomous weapons systems, including a potential treaty now being hashed out at the United Nations.
Support our work with your paid subscription!