The debate over where AI ethics ends and national security begins took center stage this week, with OpenAI positioning itself as the company capable of satisfying both. But as Anthropic’s experience shows, bridging that gap is far easier to promise than to deliver.
Anthropic’s fall from government favor was swift and brutal. After months of trying to negotiate military contracts that ruled out use of its models for autonomous weapons and mass surveillance, the company found itself publicly attacked by the president and effectively barred from all federal contracts. The administration’s message was unmistakable: restrictions on government use of AI would not be tolerated.
OpenAI CEO Sam Altman moved quickly to offer an alternative, announcing a Pentagon contract that he said includes written protections against the same practices Anthropic fought over. His public statements and internal memo both framed mass surveillance and autonomous weapons as red lines that OpenAI will not cross.
For now, those claims rest largely on trust. Hundreds of OpenAI’s own employees joined a public letter supporting Anthropic and warning that the government was exploiting commercial competition to dismantle industry-wide ethical norms. Their skepticism about whether any AI company can maintain ethical limits under government pressure is well-founded.
Anthropic, for its part, did not waver. The company stated plainly that it had negotiated in good faith toward a workable agreement, that its two restrictions had never harmed a government mission, and that no political punishment would alter its stance. Whether that principled position or OpenAI’s more pragmatic approach ultimately serves the public better is a question the industry will wrestle with for years.
