US Judge Raises Concern Over Pentagon's Move Against Anthropic

(MENAFN) A US federal judge on Tuesday signaled deep skepticism over the Pentagon's decision to classify AI firm Anthropic as a "supply chain risk," raising pointed questions about whether the designation was genuinely rooted in national security or amounted to government overreach, according to media reports.

At a hearing in San Francisco, US District Judge Rita Lin described the government's conduct as "troubling," suggesting federal authorities may have exceeded the bounds of legitimate security action. The case centers on Anthropic's refusal to permit its AI model, Claude, to be deployed for mass surveillance of American citizens or integrated into fully autonomous weapons systems.

The US government has insisted it must retain authority to employ such technology for "all lawful purposes." When negotiations between the two sides collapsed, the Pentagon moved to curtail Anthropic's role in military-related work — a decision the company swiftly challenged in court.

A Lawsuit Rooted in AI Ethics and Retaliation Claims
Anthropic subsequently filed suit, contending the designation was unconstitutional and represented direct retaliation for the company's principled stance on AI safeguards. The firm is seeking to nullify both the security label and a sweeping directive ordering federal agencies to cease use of its technology.

During Tuesday's proceedings, a Justice Department attorney acknowledged that the Defense Department does not hold clear legal authority to terminate contracts with firms based solely on their separate commercial ties to Anthropic.

Judge Lin pressed the government on whether its rationale — including the specter of potential "future sabotage" — was backed by concrete evidence, or whether the designation reflected a policy disagreement rather than a credible threat.

Anthropic's legal team firmly rejected assertions that the company could manipulate or interfere with its software post-deployment, arguing the government's actions had already inflicted measurable business harm by sowing uncertainty among the company's partners and clients.

The Pentagon, for its part, maintained that determinations over the lawful application of AI technology belong with elected and appointed government officials — not private corporations — while stressing that current policy already bars mass surveillance and autonomous lethal systems operating without human oversight.

Judge Lin indicated a ruling is expected within days.