On February 27, 2026, Pete Hegseth, the US Secretary of Defense, labeled Anthropic, a San Francisco-based AI firm, as a “supply chain risk to national security.” This designation, previously applied to foreign companies like Huawei, was unprecedented for an American firm, particularly one founded by former OpenAI researchers. Anthropic's refusal to allow its AI technology to be used for mass domestic surveillance or autonomous lethal weapons led to this drastic measure.
Hours after Anthropic's blacklisting, OpenAI CEO Sam Altman announced a new agreement with the Pentagon, assuring that his company's AI models would be available for all lawful purposes, contrasting sharply with Anthropic’s stance. That same day, Caitlin Kalinowski, OpenAI’s hardware executive, resigned, citing the lack of deliberation regarding the ethical implications of surveillance and lethal autonomy.
The Dynamics of the Conflict
The circumstances surrounding the conflict between Anthropic, OpenAI, and the Pentagon reveal a larger narrative about governance and the deployment of transformative technologies. Anthropic had secured a $200 million Pentagon contract in July 2025, which included explicit restrictions against using its AI for domestic surveillance and fully autonomous weapons—safeguards in line with both international humanitarian law and US constitutional rights.
However, the Pentagon sought “unrestricted access to AI for all lawful purposes,” and when Anthropic refused to comply, a deadline was set that ultimately led to its blacklisting. President Trump criticized Anthropic’s leadership, branding them “leftwing nut jobs,” and ordered federal agencies to stop using the company’s technology.
In a subsequent ruling, Judge Rita Lin characterized the government’s actions as “classic First Amendment retaliation” and issued a preliminary injunction against the ban, which a federal appeals court later stayed, siding with the government. Despite the ongoing legal battle, Anthropic remains barred from Pentagon contracts but is actively engaging with other agencies and launching new initiatives.
OpenAI's Position
OpenAI’s involvement in this saga raises questions about moral integrity. Altman has claimed that OpenAI shares Anthropic’s core principles, which include opposing domestic mass surveillance and autonomous weapons. Yet, the key difference lies in OpenAI’s willingness to sign an agreement with the Pentagon, while Anthropic did not. The specifics of OpenAI’s contract remain undisclosed, and Pentagon officials assert that existing laws already prevent the abuses Anthropic was concerned about.
This situation underscores a troubling reality: the US government has shown it will use its procurement power to strip enforceable safety restrictions from AI systems, effectively punishing companies that hold to ethical commitments while rewarding those that comply with governmental demands. Altman has acknowledged that the Pentagon deal was made hastily, and it triggered significant backlash, including a surge in ChatGPT uninstalls shortly after the announcement.
Implications for Europe
The situation in the United States serves as a cautionary tale for Europe, which has dedicated years to establishing a regulatory framework for AI based on democratic principles. The EU’s AI Act, now being phased in, legally binds technology companies to ethical standards, prohibiting practices like real-time biometric surveillance and social scoring.
The Anthropic case illustrates the consequences of a governance model that rejects such legal safeguards. The Trump administration’s revocation of AI safety measures and suppression of state-level legislation starkly contrasts with European efforts to ensure that advanced technologies operate within the bounds of law rather than corporate goodwill.
As the EU negotiates its “Digital Omnibus” package, which may weaken parts of the AI Act, it faces pressures to enhance competitiveness against less regulated competitors. The narrative that deregulation confers a competitive advantage is challenged by the US experience, which highlights the risks of prioritizing short-term gains over long-term safety.
Despite the ban, federal agencies continue to explore Anthropic’s technology, suggesting that the US government values the very protections it publicly dismissed. The distinction between safety principles written into contracts and those announced in press releases is crucial, and it underscores the necessity of enforceable regulation.
In conclusion, both Europe and the US are at a crossroads regarding AI governance, and the decisions made in the coming months will have lasting implications for the industry's future. As the AI Act approaches its implementation deadline, the Anthropic saga serves as a critical reminder of the importance of embedding ethical considerations into technology deployment.
Source: TNW | Opinion News