A federal appeals court refused to halt the Trump administration’s efforts to blacklist Anthropic yesterday, denying the company’s emergency motion for a stay. But the court granted the US-based AI firm’s request to expedite the case and will hold oral arguments on May 19.
The ruling by the US Court of Appeals for the District of Columbia Circuit was issued by a panel of three judges appointed by Republicans, including Trump appointees Gregory Katsas and Neomi Rao. Katsas previously served as deputy counsel to the president during Trump’s first term, while Rao served in the Trump administration’s Office of Management and Budget. The decision is a setback for Anthropic, but this case is only one of two the company filed against the Trump administration, and the AI firm has had more success in the other.
Anthropic says it exercised its First Amendment rights by refusing to let Claude AI models be used for autonomous warfare and mass surveillance of Americans, and that Trump and Defense Secretary Pete Hegseth blacklisted it in retaliation. Trump directed all federal agencies to stop using Anthropic technology, and Hegseth labeled Anthropic a “Supply-Chain Risk to National Security,” prohibiting military contractors from doing business with Anthropic.
The DC Circuit ruling acknowledged “that Anthropic will likely suffer some degree of irreparable harm absent a stay,” which the court said appears to be “primarily financial in nature… Anthropic also claims ongoing harms from retaliation for its constitutionally protected speech,” but the firm “does not show that its speech has been chilled during the pendency of this litigation,” the ruling said.
Anthropic separately sued the Trump administration in US District Court for the Northern District of California, where a federal judge granted Anthropic’s motion for a preliminary injunction in March. The District Court judge handling the case in California, Biden appointee Rita Lin, described the Anthropic blacklisting as retaliation that violates the First Amendment. The Trump administration is appealing that ruling to the US Court of Appeals for the 9th Circuit.
Trump admin hails “victory for military readiness”
Yesterday’s DC Circuit ruling did not address the merits of the case, which the court said is a difficult one to decide:
Anthropic’s petition raises novel and difficult questions, including what counts as a supply-chain risk under section 4713 and what qualifies as an urgent national-security interest justifying the use of truncated statutory procedures. In addition, we must consider whether Anthropic’s petition targets a “covered procurement action” reviewable at this time under the governing judicial-review scheme, 41 U.S.C. § 1327(b). The parties vigorously contest many of these issues, and we have found no judicial precedent shedding much light on the questions presented. But we do not broach the merits at this time, for Anthropic has not shown that the balance of equities cuts in its favor.
Acting Attorney General Todd Blanche called the ruling “a resounding victory for military readiness.” Blanche said the “military needs full access to Anthropic’s models if its technology is integrated into our sensitive systems. Military authority and operational control belong to the Commander-in-Chief and Department of War, not a tech company.”
Anthropic said in a statement provided to Ars that it is “grateful the court recognized these issues need to be resolved quickly” and that the firm “remain[s] confident the courts will ultimately agree that these supply chain designations were unlawful. While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI.”
Anthropic’s relationship with government “deteriorated”
The DC Circuit appeals court said that granting a stay to Anthropic would have negative consequences for the Department of Defense, which now calls itself the Department of War. A stay “would force the United States military to prolong its dealings with an unwanted vendor of critical AI services in the middle of a significant ongoing military conflict,” the court said. “As the Department explains, Anthropic has now conclusively barred uses that the Department recently deemed essential.”
The department’s “relationship with Anthropic has deteriorated to the extent that Anthropic’s CEO has publicly described the Department’s statements regarding the controversy as ‘completely false’ and ‘just straight up lies,’” the court said. “Under these circumstances, requiring the Department to prolong its use of Anthropic’s AI technology, whether directly or through contractors, strikes us as a substantial judicial imposition on military operations. And, of course, we do not lightly override the Department’s judgments on matters involving national security.”
While the court said the balance of equities favors the government in determining whether to issue a stay, it acknowledged that Anthropic raised substantial questions that should be addressed quickly.
“In our view, the equitable balance here cuts in favor of the government,” the court said. “On one side is a relatively contained risk of financial harm to a single private company. On the other side is judicial management of how, and through whom, the Department of War secures vital AI technology during an active military conflict. For that reason, we deny Anthropic’s motion for a stay pending review on the merits. Nonetheless, because Anthropic raises substantial challenges to the determination and will likely suffer some irreparable harm during the pendency of this litigation, we agree with Anthropic that substantial expedition is warranted.”
The Computer & Communications Industry Association (CCIA), a trade group that filed briefs in both cases, said that tech companies are concerned about the “Pentagon’s means of blacklisting Anthropic without following typical procurement procedures,” and that the appeals court “denial will prolong ambiguities regarding whether political considerations can drive federal procurement.”
“Designating a company as a supply chain risk is a tool normally reserved for foreign adversaries, and should be used with discretion and proper procedure,” CCIA CEO Matt Schruers said. “It is risky to US innovation and competition to allow the government to unfairly discourage doing business with a US AI company as it competes with foreign AI companies.”