Euro News Source
Tech
Google, Microsoft and xAI agree to US government AI testing programme

By News Room | May 8, 2026

A Shift in AI Governance: The U.S. Institutes Pre-Release Testing for Advanced AI

In a significant move for technology governance, the United States government has announced a new initiative to assess advanced artificial intelligence tools before they become publicly available. The decision, announced in May 2026, marks a pivotal step in acknowledging the power and potential perils of next-generation AI. Google, Microsoft and Elon Musk's xAI have formally agreed to allow the U.S. Department of Commerce, through its Center for AI Standards and Innovation (CAISI), to examine their upcoming models. The arrangement centres on collaborative testing, research and the establishment of best practices to ensure these powerful systems are developed and deployed responsibly. This pre-release scrutiny represents a shift from post-launch reaction to proactive risk management, aiming to build public trust and safeguard national interests amid rapid technological advancement.

The scope of CAISI’s evaluations is intentionally broad and security-focused, targeting what it terms “demonstrable risks.” These include grave threats in cybersecurity, biosecurity, and the potential misuse of AI for developing chemical weapons. As CAISI Director Chris Fall stated, “Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications.” This sentiment was echoed by Microsoft, which publicly endorsed the evaluations as a critical tool to stay ahead of threats like AI-powered cyberattacks targeting its Copilot system. The initiative essentially creates a formalized checkpoint where the immense capabilities of frontier AI models are stress-tested against worst-case scenarios before they reach the broader market, blending industry innovation with governmental oversight in the public interest.

This development is notable because it signals a subtle but clear evolution in the regulatory approach of the Trump administration. President Trump has historically championed a deregulatory stance, warning that excessive oversight could stifle American innovation and cede ground to competitors such as China. His AI National Policy Framework, released earlier in 2026, emphasised removing barriers and accelerating AI deployment across the economy, explicitly opposing the creation of new, overarching federal AI regulatory bodies. The CAISI partnership therefore represents a pragmatic compromise: rather than establishing a new regulator, it leverages an existing agency and domain-specific experts to conduct evaluations, aligning with the administration's preference for streamlined governance while addressing undeniable security imperatives.

CAISI is not starting from scratch: it has already conducted approximately 40 evaluations on various models, including some cutting-edge systems that have never been publicly released. The announcement also involves the renegotiation of existing agreements with two other AI leaders, OpenAI and Anthropic, initially signed under the previous Biden administration. While the specifics of the revised terms were not disclosed, this continuity suggests a bipartisan recognition of the core need for safety assessments. OpenAI's actions underscore that commitment: the company's chief global affairs officer, Chris Lehane, revealed that OpenAI gave the government early access to ChatGPT-5.5 for national security testing ahead of its public debut, demonstrating proactive cooperation.

A key area of collaboration highlighted by OpenAI involves specialised, high-stakes AI applications. The company is working with CAISI to test "GPT-5.5-Cyber," a model designed specifically to bolster cyber defence capabilities. Currently available only to a limited group of initial users, the tool exemplifies the dual-use nature of advanced AI: it can serve as a shield for national infrastructure or, in the wrong hands, become a potent weapon. OpenAI emphasised its role in crafting a "responsible deployment strategy" for such models, including a detailed playbook for their controlled distribution within government agencies. This focused work on cybersecurity models illustrates the nuanced, domain-by-domain approach to risk management that the CAISI process aims to institutionalise.

In conclusion, the U.S. government’s new pre-release testing framework for advanced AI, facilitated through CAISI, establishes a landmark public-private partnership for the age of artificial intelligence. It balances the drive for innovation with the imperative for security, adapting the nation’s regulatory philosophy to meet the unique challenges posed by frontier technologies. By mandating independent evaluations for risks ranging from cyberattacks to chemical weapons proliferation, the initiative seeks to ensure that America’s AI leadership is exercised with caution and responsibility. As companies from Google to OpenAI engage in this process, the collaborative effort reflects a collective understanding that guiding the safe development of transformative AI is not just a corporate or political issue, but a fundamental societal priority.

2026 © Euro News Source. All Rights Reserved.