Can Anthropic Tell the Pentagon No?
- Billy Lau

The ongoing dispute between the Pentagon and Anthropic, an artificial intelligence company, has sparked debate over the ethical limits of military AI.
In 2025, Anthropic secured a Pentagon contract with a ceiling of $200 million. As of February 2026, it was the only AI firm with models deployed on the Pentagon’s classified networks.
Anthropic has adopted a safety-focused approach to AI development. However, the company’s commitment to preventing its technology from being used for autonomous weapons or mass surveillance puts it at odds with the Pentagon, which wants broader access to its AI capabilities for military use.
The Pentagon’s insistence on using commercial AI models such as Claude for “all lawful purposes” has raised questions about corporate governance and national security. That demand conflicts with Anthropic’s usage safeguards: military officials threatened to designate the company a “supply chain risk,” and the Pentagon later formally applied that designation.
Pentagon officials have said they do not want to be constrained by corporate policies, especially in scenarios requiring rapid military action. “You can’t lead tactical (operations) by exception,” a Pentagon official said, underscoring the need for autonomous decision-making in national security situations.
For the Pentagon, the central issue is operational sovereignty: depending on a commercial AI supplier that might restrict certain applications during a crisis or conflict creates a strategic vulnerability. In a January AI strategy memo, Pete Hegseth directed the department to seek models free from usage-policy constraints, allowing them to be used for “any lawful purpose,” from planning campaigns to kill-chain execution. Officials told CNBC that the department had no intention of using AI for fully autonomous weapons or the mass surveillance of Americans, but they refused to write Anthropic’s safeguards into contracts.
The Pentagon also frames AI as simply another category of military technology. Just as it does not accept restrictions from a missile manufacturer on how a weapon system may be used in combat, officials argue, AI systems should be governed by U.S. law and military directives, not by the internal policies of a Silicon Valley company.
Anthropic has told officials it will continue to support U.S. national security work, including intelligence and military analysis, but will not agree to two specific uses: mass surveillance of Americans and fully autonomous weapons that select and kill targets without human intervention. CEO Dario Amodei told The Verge over text that these limits are not radical, since they closely mirror current Pentagon regulations, such as Directive 3000.09, which requires “appropriate levels of human judgment over the use of force.” He wrote, “These threats do not change our position.”
From Anthropic’s perspective, refusing to support fully autonomous lethal systems simply reinforces a principle the Department of War has already formally adopted. Similarly, the prohibition on domestic mass surveillance reflects long-standing legal and constitutional protections governing intelligence operations in the U.S.
The consequences extend beyond one company, especially for contractors such as Palantir that have reportedly built Anthropic’s models into sensitive defense tools. A Pentagon “supply chain risk” label pressures those contractors to find replacements. According to reports, rival labs such as OpenAI, Google, and xAI have agreed to more expansive military-use terms. This raises a question for engineers and investors: will insisting on ethical red lines simply hand more government business to less cautious competitors?
The dispute is further complicated by the lack of legal frameworks governing military use of AI. Analysts have pointed out that ethical standards remain underdeveloped, leaving private companies to bear the burden of ethical governance. The Pentagon’s broad reading of “all lawful purposes” implies that almost any military use of AI could be allowed in the absence of laws prohibiting it, putting companies in the difficult position of balancing their ethical obligations against government demands.