Euro News Source
Tech

Court rejects Anthropic’s appeal to pause supply chain risk label given by US government

By News Room | April 16, 2026

In a significant legal setback for the artificial intelligence sector, Anthropic, a leading American AI company, has failed to secure an immediate shield against a controversial government designation. The U.S. Court of Appeals for the D.C. Circuit rejected the company’s request to pause its labeling as a “supply chain risk” by the federal government. This designation, applied for the first time to a domestic U.S. company, carries substantial operational and financial consequences. It effectively blocks contractors working with the Pentagon from using Anthropic’s AI models, including its advanced chatbot Claude, on Department of Defense projects. The ruling underscores the escalating tensions between innovative tech firms and governmental regulatory bodies, particularly concerning the ethical and operational deployment of cutting-edge AI in sensitive national security domains.

The origins of this conflict trace back to actions taken by the Trump administration earlier this year. In February, the administration imposed the supply chain risk label and ordered federal agencies to cease using Anthropic’s Claude AI assistant. This decisive move followed the company’s refusal to grant the military unrestricted access to its AI model. Anthropic’s ethical guidelines, which include firm corporate “red lines,” reportedly prohibit the use of its technology for certain applications, such as lethal autonomous weapons systems operating without human oversight and mass surveillance programs targeting American citizens. The government’s designation appears to be a direct response to this principled stand, framing Anthropic’s restrictions as a potential liability for military reliability and operational security during critical moments.

The stakes of this legal and ethical battle are profoundly high, given Anthropic’s already deep integration into U.S. national security infrastructure. Prior to the dispute, in 2025, the company secured a substantial $200 million contract with the Pentagon to embed its technology within military systems. Subsequently, Claude had been deployed across classified government networks, within national nuclear laboratories, and was actively performing intelligence analysis for the Department of Defense. The supply chain risk designation threatens to unravel this extensive partnership, causing significant financial harm and stifling a key technological pipeline for the government. In its court filings, the Department of Defense expressed concern that Anthropic might “attempt to disable its technology or preemptively alter the behaviour of its model” during a “warfighting operation” if it felt its ethical boundaries were being violated.

Anthropic has mounted a vigorous legal defense against the administration’s actions. The company filed two separate lawsuits, in San Francisco and Washington, D.C., accusing the Trump administration of engaging in an “unlawful campaign of retaliation.” Anthropic had already won a victory in the San Francisco court, which forced the administration to remove the label in that jurisdiction. The recent D.C. Circuit ruling, however, offsets that success. The appellate panel declined to intervene, stating that “the precise amount of Anthropic’s financial harm is not clear,” and thus saw no immediate need to revoke the administration’s actions. The court has scheduled further proceedings to hear more evidence in May, leaving the final outcome pending.

Despite this temporary setback, Anthropic remains determined and optimistic about its legal position. In a statement to the Associated Press following the ruling, the company expressed gratitude that the court recognized the urgency of resolving these issues and maintained confidence that courts will ultimately find the designations unlawful. This ongoing litigation highlights a fundamental clash between corporate ethical governance and state security imperatives. It raises critical questions about how a nation balances its need for advanced, reliable AI tools in defense with the moral frameworks and operational control insisted upon by the private companies that develop them.

The case of Anthropic versus the U.S. government is more than a corporate legal dispute; it is a landmark exploration of boundaries in the age of intelligent machines. As AI systems become more powerful and integral to national defense, the rules governing their use must be clarified. This conflict probes whether companies can enforce ethical restraints on military applications of their products and what recourse the government has when those restraints are perceived as threats to operational readiness. The final rulings, expected later this year, will not only shape Anthropic’s future but also set a precedent for how America manages the complex intersection of technological innovation, corporate ethics, and national security in the decades to come.
