
The centuries-long relationship between the scientific community and the defense enterprise is facing a reckoning over the deployment of AI in autonomous weapons for defense purposes. Legal norms have historically evolved to address emerging applications of novel technology, and the government has never ceased relying on technological advancement to pursue national security interests.
Anthropic first established its partnership with the U.S. Department of Defense in July 2025 through a $200 million agreement under which it contracted to prototype its AI capabilities to “advance U.S. national security.” Among leading artificial intelligence startups, Anthropic has been regarded as one of the more vocal champions of responsible AI deployment. It is a self-proclaimed public benefit corporation devoted to risk mitigation, and it has expressed an aim of mobilizing AI development consistent with global democratic values.
So upon contracting with the Pentagon to integrate its tools, Anthropic firmly resisted the use of its systems for mass public surveillance or the development of autonomous weapons. Anthropic’s CEO, Dario Amodei, has cautioned against using AI for these purposes, writing in his personal capacity that deploying AI for domestic mass surveillance would be “entirely illegitimate.”
Since then, however, Pentagon officials and Defense Secretary Pete Hegseth have expressed dissatisfaction with Anthropic’s stipulated restrictions and have sought, through renewed negotiations, to expand military use to “all lawful purposes.” In a Jan. 9 memo, Secretary Hegseth called for AI companies to remove restrictions on their technology.
These negotiations are alarming for their coercive nature, their renewal seemingly prompted by intimidation. After the Pentagon designated Anthropic a “supply chain risk,” the company stated its intent to challenge the unprecedented decision in court. These tensions capture an ongoing conflict over what legal norms should guide the ethical integration of AI into defense applications, and who is responsible for establishing crucial boundaries. The negotiations show that startups must either collaborate amicably with the government on ethical guidelines for AI use in defense systems or risk losing their government contracts. Amodei’s most recent attempt to revive talks and reach a new deal, despite the dramatic collapse of earlier discussions, reflects the reality of this corporate struggle.
Anthropic’s faltering relationship with the government reveals how effectively intimidation tactics distort the decision-making landscape. Critically, the final outcome of these negotiations will form a foundation guiding future defense-contract talks between similarly situated parties. Anthropic’s refusal to make concessions has the potential to set a significant precedent for how Big Tech and AI startups negotiate safety guardrails when deploying their technology for defense purposes. This will shape how tech leaders, legal scholars, and Congress advance future negotiations and regulations governing the ethical use of AI across the military-industrial complex.
Indeed, the government’s switch to favor a deal with OpenAI for its classified systems is risky not only for its potentially unbounded and non-transparent state-led applications. It also challenges an already weak legal regime governing ethical AI use in warfare. Rationalizing a broad grant of discretionary power over this technology with appeals to increased safety for Americans, or to “good and legitimate purposes,” comes at the cost of disregarding critical ethical considerations. Democratic principles that prohibit unethical applications may be traded away for the more rapid development and deployment of sophisticated tools that enable those very practices.
Will the government’s proposed use of AI in defense for “all lawful purposes” be consistent with established norms of international and human rights law? The immediate backlash against OpenAI’s Pentagon deal among its own AI researchers signifies a persistent hesitation to accept this broad use classification.
To date, Anthropic’s “Claude” is the most integrated chatbot in the Defense Department’s AI pilot program and the sole chatbot on classified systems. As the impending legal battle and renegotiations between Anthropic and the government unfold, perhaps it would be wise for elected officials to heed Mr. Amodei’s suggestion that Congress enact civil-liberties-focused legislation mandating more robust guardrails against “AI-powered abuses.” Absent such targeted legislative efforts, we are left to rely on high-pressure negotiations like those between Anthropic and the Pentagon to establish the legal norms and standards that tech companies agree upon to bind future integrations of AI into defense operations.
