Euro News Source
Tech

Court rejects Anthropic’s appeal to pause supply chain risk label given by US government

By News Room | April 16, 2026

In a significant legal setback for the artificial intelligence sector, Anthropic, a leading American AI company, has failed to secure an immediate shield against a controversial government designation. The U.S. Court of Appeals for the D.C. Circuit rejected the company’s request to pause its labeling as a “supply chain risk” by the federal government. The designation, a novel and severe mark applied for the first time to a domestic U.S. company, carries substantial operational and financial consequences: it effectively blocks Pentagon contractors from using Anthropic’s AI models, including its advanced chatbot Claude, on Department of Defense projects. The ruling underscores the escalating tension between tech firms and government regulators, particularly over the deployment of cutting-edge AI in sensitive national security domains.

The origins of this conflict trace back to actions taken by the Trump administration earlier this year. In February, the administration imposed the supply chain risk label and ordered federal agencies to cease using Anthropic’s Claude AI assistant. This decisive move followed the company’s refusal to grant the military unrestricted access to its AI model. Anthropic’s ethical guidelines, which include firm corporate “red lines,” reportedly prohibit the use of its technology for certain applications, such as lethal autonomous weapons systems operating without human oversight and mass surveillance programs targeting American citizens. The government’s designation appears to be a direct response to this principled stand, framing Anthropic’s restrictions as a potential liability for military reliability and operational security during critical moments.

The stakes of this legal and ethical battle are profoundly high, given Anthropic’s already deep integration into U.S. national security infrastructure. Prior to the dispute, in 2025, the company secured a substantial $200 million contract with the Pentagon to embed its technology within military systems. Subsequently, Claude had been deployed across classified government networks, within national nuclear laboratories, and was actively performing intelligence analysis for the Department of Defense. The supply chain risk designation threatens to unravel this extensive partnership, causing significant financial harm and stifling a key technological pipeline for the government. In its court filings, the Department of Defense expressed concern that Anthropic might “attempt to disable its technology or preemptively alter the behaviour of its model” during a “warfighting operation” if it felt its ethical boundaries were being violated.

Anthropic has mounted a vigorous legal defense against the administration’s actions. The company filed two separate lawsuits, in San Francisco and Washington, D.C., accusing the Trump administration of an “unlawful campaign of retaliation.” Anthropic had already won a victory in the San Francisco court, which forced the administration to remove the label in that jurisdiction; the recent D.C. Circuit ruling, however, cuts against that success. The appellate panel declined to intervene, stating that “the precise amount of Anthropic’s financial harm is not clear,” and saw no immediate need to block the administration’s actions. The court has scheduled further proceedings for May to hear more evidence, leaving the final outcome pending.

Despite this temporary setback, Anthropic remains determined and optimistic about its legal position. In a statement to the Associated Press following the ruling, the company expressed gratitude that the court recognized the urgency of resolving these issues and maintained confidence that courts will ultimately find the designations unlawful. This ongoing litigation highlights a fundamental clash between corporate ethical governance and state security imperatives. It raises critical questions about how a nation balances its need for advanced, reliable AI tools in defense with the moral frameworks and operational control insisted upon by the private companies that develop them.

The case of Anthropic versus the U.S. government is more than a corporate legal dispute; it is a landmark exploration of boundaries in the age of intelligent machines. As AI systems become more powerful and integral to national defense, the rules governing their use must be clarified. This conflict probes whether companies can enforce ethical restraints on military applications of their products and what recourse the government has when those restraints are perceived as threats to operational readiness. The final rulings, expected later this year, will not only shape Anthropic’s future but also set a precedent for how America manages the complex intersection of technological innovation, corporate ethics, and national security in the decades to come.


