Posted by ccld201
9 December 2025

Two days before Romanians were to vote in the 2024 presidential run-off, the Constitutional Court annulled the first-round results, citing concerns about the fairness and integrity of the process.[1] Behind this lay the unprecedented claim that AI-generated disinformation, allegedly orchestrated by Russia, had compromised the integrity of the Romanian vote.
Declassified intelligence revealed that nearly 800 dormant TikTok accounts linked to Russian operators were reactivated to promote far-right candidate Călin Georgescu, while 25,000 new accounts spread fabricated content in the weeks leading up to the election. Videos of false endorsements and manipulated interviews circulated widely, tilting online discourse and eroding trust in the electoral process.[2]
This crisis exemplifies how generative AI has transformed disinformation into a scalable, adaptive, and near-indistinguishable force. Democracies now face a dual dilemma: how to protect electoral integrity without sliding into digital authoritarianism, and how to regulate social media without extinguishing free speech.
AI-generated Disinformation and Elections
Disinformation has long been part of political warfare, but generative AI has amplified its reach. What once required substantial coordination can now be produced by individuals using publicly available tools.[3] Deepfakes, cloned voices, and algorithmically targeted propaganda spread faster than journalists or regulators can verify them.
This acceleration of AI-generated deepfakes undermines democracy in three ways. Firstly, it erodes citizens’ capacity to make informed decisions as disinformation and misinformation flood the digital space. Secondly, it intensifies polarisation by reinforcing echo chambers and discouraging openness to alternative viewpoints. Thirdly, it diminishes trust in institutions, corroding the legitimacy of democratic governance.
In 2023, the CIGI-Ipsos survey found that 40% of respondents had lost trust in the media, and 22% in government, because of online disinformation. Beyond this, such manipulation can spill into digital harassment, hate speech, and political violence. States often respond with surveillance or censorship justified as counter-disinformation, but these measures can suppress legitimate news, undermining democracy and replicating the very authoritarianism they seek to prevent.
Social Media and the Architecture of Manipulation
Social media remains the central infrastructure of digital manipulation. Major platforms fail to enforce consistent standards, particularly in vulnerable or semi-autonomous regions where institutional checks may already be weak.[4] This brings us to the Iraqi Kurdistan elections of 2024, where deepfake audio of a phone call in which senior politicians allegedly discussed rigging the election was broadcast by party-aligned media and went viral just days before the vote, further intensifying an already highly polarised political environment.
Two users reported the content as misinformation, and Meta closed both reports without review. Only following an appeal did Meta consult external experts, who deemed the audio likely manipulated and labelled some identical posts, but inexplicably left the original unmarked. In response, the Oversight Board (an independent body created by Meta to review its content moderation decisions) overturned Meta’s decision to leave the post unflagged, requiring it to be labelled as ‘likely digitally created or altered’. The Board criticised Meta’s ‘incoherent and unjustifiable’ failure to apply labels consistently across identical instances of the same manipulated media, noting that the company’s matching systems can only identify static images, not audio or video. It further condemned the absence of manipulated media labels in Sorani Kurdish, despite the language being supported on Facebook.[5]
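Read as a pipeline, the Board’s findings describe two independent gates that a post must pass before a label is ever applied: the media type must be one the matching systems can identify, and the label text must exist in the post’s language. The sketch below is a purely hypothetical illustration of that logic, not Meta’s actual system; the names (Post, should_label, LABEL_TRANSLATIONS, match_score), the set of supported languages, and the similarity threshold are all assumptions made for clarity.

```python
# Hypothetical sketch of a manipulated-media labelling rule.
# All names, thresholds, and language sets are illustrative assumptions,
# not Meta's actual code or policy.

from dataclasses import dataclass

# Languages for which the 'likely digitally created or altered' label
# string is assumed to exist. Note the gap: Sorani Kurdish ("ckb") is
# supported on the platform but missing here.
LABEL_TRANSLATIONS = {"en", "es", "ar"}  # assumed subset

@dataclass
class Post:
    media_type: str     # "image", "audio", or "video"
    language: str       # ISO 639 code of the post's language
    match_score: float  # similarity to known manipulated media (0..1)

def should_label(post: Post) -> bool:
    """Apply a manipulated-media label only when every gate passes."""
    # Gate 1: matching covers static images only, so identical audio or
    # video escapes automatic labelling (the Board's first criticism).
    if post.media_type != "image":
        return False
    # Gate 2: no label without a translation, so a supported language
    # lacking label strings is silently skipped (the second criticism).
    if post.language not in LABEL_TRANSLATIONS:
        return False
    # Gate 3: an assumed similarity threshold against known media.
    return post.match_score >= 0.9

# The viral Kurdish audio fails Gate 1; even recast as an image
# it would still fail Gate 2.
print(should_label(Post("audio", "ckb", 0.97)))  # False
print(should_label(Post("image", "ckb", 0.97)))  # False
print(should_label(Post("image", "en", 0.97)))   # True
```

The point of the sketch is that the two gates fail independently: closing the media-type gap would still leave Sorani Kurdish posts unlabelled, which is why the Board treated the missing translations as a distinct failure rather than a side effect.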
This case is especially significant because Kurdistan, as a semi-autonomous region, represents a democratic hybrid. While holding regular elections and maintaining some degree of institutional pluralism, it operates within a fragile constitutional and geopolitical context. Its experience exposes a structural imbalance in global content governance: the algorithmic and linguistic priorities of large platforms overwhelmingly reflect the needs of the Global North. In information ecosystems under stress, weak media literacy, limited institutional oversight, and transnational platform dominance converge to magnify the effects of disinformation. As Bradshaw and Howard observe, ‘although there is nothing necessarily new about propaganda, the affordances of social networking technologies – algorithms, automation, and big data – change the scale, scope, and precision of how information is transmitted in the digital age’.[6] This is particularly relevant as increasingly accessible and advanced AI tools make it easier to create and spread AI-generated disinformation. Meta’s failure to label manipulated audio in a language supported by its own platform reveals not merely a technical lapse but a deeper epistemic bias in how digital safeguards are distributed globally.
The Kurdish election case exemplifies the difficulty of flagging AI-generated content across different regions of the world. It demonstrates that the world’s most powerful communication platforms reproduce patterns of neglect in linguistically diverse and politically fragile regions.
The UK’s Fragile Defence
Following allegations of Russian interference in the 2019 general election, the European Court of Human Rights in Bradshaw and Others v The United Kingdom[7] upheld the government’s response, citing legislative reforms such as the Elections Act 2022,[8] the National Security Act 2023,[9] and the Online Safety Act 2023.[10] Yet these frameworks remain ill-suited to the velocity of AI-generated manipulation. In the UK’s 2024 general election, concerns about AI-generated disinformation were widespread, but its direct electoral impact was limited. Nonetheless, with the resurgence of anti-immigrant rhetoric and rising racial hostility, particularly among far-right political movements, the spread of such disinformation now appears inevitable.
For instance, during the demonstrations led by Tommy Robinson in London in September 2025, AI-generated synthetic content circulated widely on social media, fuelling tension and misinformation.[11] This follows a familiar pattern: politicians such as Nigel Farage deployed misleading claims during the Brexit campaign, including the false assertion that EU membership cost Britain £55 million per day. AI technologies are likely to accelerate such tactics, amplifying both their reach and their persuasive power.
The measures acknowledged in Bradshaw and Others v The United Kingdom[12] remain insufficient today. The National Security Online Information Team (NSOIT) has faced increasing criticism for its lack of transparency and potential for abuse. Critics, including MPs across parties and civil society groups, argue that the unit has flagged legitimate criticism of government policies as ‘disinformation’, blurring the line between safeguarding democracy and constraining dissent.[13] The team, formerly known as the Counter Disinformation Unit (CDU), operates with limited accountability, as seen in the government’s refusal to release operational details, ultimately undermining public trust and calling into question the UK’s capacity to respond to AI-driven disinformation in line with Article 10 of the ECHR.[14]
Thus, while Bradshaw and Others[15] recognises the state’s obligation to protect electoral integrity, the response it endorses risks overreach and democratic regression, highlighting the difficulty of balancing security and speech in the age of AI-mediated politics.
Towards a Democratic Response
A democratic response to AI-driven disinformation must be grounded in digital literacy, algorithmic transparency, and institutional accountability.[16]
Kunal Dhirani is a final-year LLB Law student at the University of Exeter with a keen interest in AI and LawTech.
Zainab Zafar is a freelance journalist from Pakistan and a University of Exeter Law School graduate.
The illustration is by Jorge Franganillo and is available on Unsplash.
[1] Veronica Anghel, ‘Why Romania Just Cancelled Its Presidential Election’ (Journal of Democracy, December 2024) <https://www.journalofdemocracy.org/online-exclusive/why-romania-just-canceled-its-presidential-election/>.
[2] Hendrik Mildebrath and Bente Daale, ‘TikTok and EU Regulation: Legal Challenges and Cross-Jurisdictional Insights’ (European Parliamentary Research Service, 2025) <https://www.europarl.europa.eu/RegData/etudes/BRIE/2025/775837/EPRS_BRI(2025)775837_EN.pdf>.
[3] Toby S James and Holly Ann Garnett, ‘Electoral Integrity Resilience: Protecting Elections during Global Risks, Crises, and Emergencies’ [2025] Democratization 5.
[4] Giulia Alesse, ‘The Weaponization of Disinformation in Modern Geopolitics’ (Atlas Institute for International Affairs, 13 June 2025) <https://atlasinstitute.org/the-weaponization-of-disinformation-in-modern-geopolitics/>.
[5] ‘Alleged Audio Call to Rig Elections in Iraqi Kurdistan’ (Oversight Board, 29 August 2025) <https://www.oversightboard.com/decision/fb-bu05syro/> accessed 4 October 2025.
[6] Samantha Bradshaw and Philip N Howard, ‘The Global Disinformation Order: 2019 Global Inventory of Organised Social Media Manipulation’ (2019) <https://digitalcommons.unl.edu/scholcom/207>.
[7] Bradshaw and Others v United Kingdom App no 15653/22 (ECtHR, 22 July 2025)
[8] Elections Act 2022
[9] National Security Act 2023
[10] Online Safety Act 2023
[11] Tom Cheshire, ‘Why Tommy Robinson Rally Was Different to Any Other’ (Sky News, 13 September 2025) <https://news.sky.com/story/why-tommy-robinson-rally-was-different-to-any-other-13430517>.
[12] Bradshaw and Others v United Kingdom App no 15653/22 (ECtHR, 22 July 2025)
[13] Big Brother Watch, ‘Briefing Note for Parliamentarians on Disinformation and the Government’s National Security Online Information Team’ (November 2024) <https://bigbrotherwatch.org.uk/wp-content/uploads/2024/11/BigBrotherWatch-Briefing-on-the-National-Security-Online-Information-Team.pdf> accessed 4 October 2025.
[14] European Convention on Human Rights, art 10
[15] Bradshaw and Others v United Kingdom App no 15653/22 (ECtHR, 22 July 2025)
[16] Alexander Romanishyn, Olena Malytska and Vitaliy Goncharuk, ‘AI-Driven Disinformation: Policy Recommendations for Democratic Resilience’ (2025) 8 Frontiers in Artificial Intelligence.