In a recent briefing, Nick Clegg, President of Global Affairs at Meta, discussed the impact of artificial intelligence (AI) during a year marked by elections across the globe. He emphasized that, despite the potential risks associated with AI-generated content, Meta did not observe significant incidents affecting its platforms during the recent Romanian elections. Clegg noted that the company maintained close communication with Romanian government authorities, including the Ministry of Interior and the Cybersecurity Agency, to ensure transparency and security, and concluded that there was no evidence of major issues arising on its platforms during that period.
Clegg’s comments come amid scrutiny over election integrity and the influence of social media platforms. In Romania, the National Audiovisual Council has raised concerns about TikTok’s role in the presidential elections, particularly regarding the performance of independent candidate Calin Georgescu, who secured 22.95% of the vote alongside a notably strong presence on TikTok. In light of this situation, the European Commission has opened a formal inquiry into TikTok’s practices during the election, seeking clarity on the platform’s influence and its operational guidelines in politically sensitive contexts.
Furthermore, the European Commission organized an online roundtable with key stakeholders, including TikTok, Google, and Meta, to discuss challenges in content moderation and the potential for misinformation. In response to the allegations, TikTok has stated that it found no evidence of covert influence operations or foreign interference on its platform during the recent election cycle. The episode underscores ongoing concerns about transparency and accountability, particularly for lesser-regulated platforms and their impact on public opinion in electoral processes.
Clegg also addressed AI’s role in shaping electoral outcomes, arguing that during the 2024 elections in major democracies such as India and Indonesia, Meta’s existing strategies were effective in mitigating the risks posed by generative AI content. He claimed that during these electoral periods, AI-related misinformation accounted for less than 1% of all fact-checked misinformation on Meta’s platforms. Clegg also highlighted proactive measures, including the rejection of nearly 590,000 requests to use Meta’s Image AI to generate politically charged images around the time of the U.S. presidential election.
Despite these policies, Clegg acknowledged that managing content on social media involves inherent trade-offs, particularly with regard to freedom of expression. He lamented that innocuous content is occasionally moderated in error, leaving users unfairly penalized. Clegg expressed Meta’s commitment to refining its approach in the months ahead to balance user safety and the integrity of information against the protection of free speech on its platforms.
In conclusion, as elections worldwide increasingly intersect with the digital landscape, the role of AI and social media platforms remains a double-edged sword. While companies like Meta seek to safeguard the electoral process from misinformation and foreign influence, the challenges of upholding freedom of expression and avoiding overreach in content moderation persist. As scrutiny intensifies and platforms are held accountable, the interaction between technology and democratic processes will continue to evolve, raising important questions about the future of communication, governance, and public trust.