In a significant move to adapt its landmark Artificial Intelligence Act for a rapidly evolving technological landscape, the European Union has reached a provisional agreement on a package of simplifications. Dubbed the “Digital Omnibus,” this deal, negotiated between EU member states and the European Parliament, aims to reduce regulatory confusion and foster innovation, particularly among European startups, while maintaining core protections for citizens. The omnibus was proposed just five months ago as a direct response to widespread concern from businesses that the complex, overlapping layers of EU regulation were stifling competition. The central promise of the agreement is to provide clearer, more navigable rules without diluting the safety standards that form the heart of the AI Act.
A cornerstone of the simplification effort is the introduction of more manageable timelines for compliance, especially for systems deemed “high-risk.” Under the original AI Act, such systems, which include AI used in critical infrastructure, education, employment, and border control, faced stringent and immediate obligations. The new agreement grants these high-risk applications an additional year, pushing the compliance deadline to December 2027. For AI embedded in specific products like toys or elevators, companies have until August 2028 to adapt. This breathing room is intended to prevent legal and commercial chaos, giving developers and integrators a realistic runway to adjust their systems. Furthermore, the package creates simpler, tailored rules for small and medium-sized enterprises and establishes an “EU-level sandbox,” a controlled testing environment where innovators can trial their AI products before full-scale deployment, mitigating early-stage regulatory risk.
The rationale behind these adjustments, as explained by lawmakers like Arba Kokalari, the European Parliament’s rapporteur on the file, is to eliminate paralyzing uncertainty. A major industry complaint was the potential for “double regulation,” where a single AI system might simultaneously fall under the horizontal AI Act and older, sector-specific laws. This created a compliance maze that was particularly daunting for smaller players. “We are not weakening any safety rules; we are clarifying the rules for companies in Europe,” Kokalari stated. The omnibus seeks to streamline these requirements, ensuring that companies face a single, appropriate set of obligations for their AI components rather than redundant demands from overlapping legal frameworks.
Alongside these procedural simplifications, the agreement also introduces a potent new ban targeting one of the internet’s most disturbing misuses of AI: so-called “nudification apps.” This provision explicitly prohibits AI systems designed to generate non-consensual sexually explicit content, including deepfake images, videos, and audio. The ban squarely targets applications that digitally strip clothing from photos of real people, a violation that has devastating consequences for victims, disproportionately women and children. The move follows high-profile scandals involving AI chatbots being used to create such abusive material. Lawmaker Michael McNamara clarified that the rules cover any depiction where a person’s “intimate parts” are exposed without consent, drawing a clear ethical line in the digital sand.
Operationally, companies developing or hosting such technologies will have until December 2 of this year to align with the new rules. A key compliance measure will be the mandatory watermarking of AI-generated content, a technical step aimed at helping the public identify synthetic media. It is important to note, as McNamara outlined, that the ban applies specifically to content depicting identifiable human beings, not wholly synthetic AI characters. This distinction aims to balance the prohibition of personal harm with the preservation of creative and fictional expression. For lawmakers like Kokalari, this clause sends an unambiguous message: “We wanted to have clarity on what we think about [nudification apps] in Europe and that we are not accepting of it.”
This provisional deal represents a calibrated evolution of the EU’s approach to governing artificial intelligence. By offering pragmatic timelines and clearer guidance, it seeks to position Europe as a competitor in the global AI race, hoping to nurture homegrown champions. Simultaneously, by swiftly outlawing tools of digital sexual abuse, it reaffirms the bloc’s commitment to being a global standard-setter for ethical technology. The package now awaits formal ratification by the full European Parliament and the Council, representing member states. If approved, it will mark a pivotal shift from the AI Act’s initial passage to its practical implementation, testing Europe’s ability to be both an innovator and a guardian in the age of intelligent machines.