A sprawling digital underworld, operating with chilling efficiency across Southern Europe, has been exposed in a new investigation. A European non-profit, AI Forensics, uncovered a network of nearly 25,000 individuals using the encrypted messaging service Telegram to systematically trade nonconsensual sexual material and child sexual abuse imagery. Their six-week study, which analyzed 2.8 million messages across 16 groups, reveals an “ecosystem of abuse at scale,” primarily fueled by young heterosexual men. Disturbingly, much of the nonconsensual content targets women who are the perpetrators’ own partners, former partners, or acquaintances, indicating a profound betrayal of trust. The most extreme and illegal content escalates to depictions of children in horrific scenarios of incest and rape. The report arrives amid growing EU legislative action, following a recent European Parliament vote to ban AI systems used to “nudify” individuals without consent, tools of the very kind advertised within these networks.
The mechanics of this abuse are both methodical and monetized. Perpetrators often source their initial material from private exchanges on mainstream platforms like Instagram and WhatsApp. Once obtained, this intimate content is weaponized and commercialized: operators charge one-time fees of up to €50 for access to vast archives, or monthly subscriptions of around €5. The victims, predominantly women, are frequently identified by name, tagged, and can be located through shared profile links, compounding the trauma with threats to their real-world safety. Nor does the content remain confined to Telegram; it spills over onto platforms like TikTok and Instagram, while Reddit serves as a “recruitment gateway,” disseminating links to the paid Telegram channels. In total, the network reached an estimated 52,000 people (27,000 in Italy and 25,000 in Spain), demonstrating that this scourge is a borderless, pan-European crisis.
Despite the horrific nature of the content, the report casts serious doubt on Telegram’s commitment to curbing it. AI Forensics notes that while Telegram did shut down some of the observed groups, they were often reconstituted under the same names within hours. This rapid regeneration points to what the researchers label “insufficient” moderation practices. They argue that Telegram must implement more robust reporting mechanisms and actively enforce its own policies, particularly scrutinizing its Premium subscription model which provides a direct monetization avenue for this abusive content. The non-profit’s central recommendation is for the European Commission to formally designate Telegram as a Very Large Online Platform (VLOP) under the EU’s Digital Services Act (DSA), a move that would mandate far greater transparency and accountability.
Being classified as a VLOP would subject Telegram to stringent obligations, including detailed risk assessments for systemic harms like the distribution of child sexual abuse material, and forced transparency about how its algorithms and systems function. Telegram has previously claimed it falls below the DSA’s VLOP threshold of 45 million monthly users in the EU, stating it has “significantly fewer.” However, the scale and coordination of the abuse network suggest a massive, active user base. AI Forensics also calls for the upcoming EU AI Act to include stronger provisions for the removal of such illegal imagery. In essence, they advocate for a full regulatory toolbox to compel the platform to take meaningful, sustained action.
In response, Telegram defended its practices to Euronews Next, arguing that its hands-off approach is a virtue. The platform pointed to the fact that these groups must recruit via other platforms as evidence that its “lack of content promotion algorithms and proactively moderated search features do not allow them to spread on our platform.” It reiterated that child sexual content and non-consensual material are prohibited, and that it uses both AI and human moderators to counter abuse, with user reporting options and penalties ranging from feature restrictions to permanent bans. Telegram also asserted that it does not profit from Premium features attached to removed content and claimed its moderation is “more effective” than that of many already-designated VLOPs.
This distressing report illuminates a stark and urgent conflict: the clash between a platform’s ethos of privacy and minimal intervention, and the devastating real-world consequences of its weaponization. While Telegram positions its limited algorithms as a safeguard against viral spread, the investigation reveals a highly organized, manual network thriving in the shadows those same policies create. The victims pay the price; in the EU, one in three women has experienced sexual violence. Ultimately, the findings underscore that the proliferation of nonconsensual and abusive material is not an isolated issue on a single app but a “structural problem” demanding a coordinated European response: stronger regulation, consistent platform enforcement, and continued societal efforts to challenge the attitudes that allow such exploitation to flourish.