Dignity & Democracy
  Democratic Crisis: The New Reality of AI-Generated Politics, by Kunal Dhirani and Zainab Zafar

    Posted by ccld201

    9 December 2025

    Two days before Romanians were to vote in the 2024 presidential run-off, the Constitutional Court annulled the first-round results, citing concerns around fairness and integrity.[1] Behind this lay the unprecedented claim that AI-generated disinformation, allegedly orchestrated by Russia, had compromised the integrity of the Romanian vote.

    Declassified intelligence revealed that nearly 800 dormant TikTok accounts linked to Russian operators were reactivated to promote far-right candidate Călin Georgescu, while 25,000 new accounts spread fabricated content in the weeks leading up to the election. Videos of false endorsements and manipulated interviews circulated widely, tilting online discourse and eroding trust in the electoral process[2].

    This crisis exemplifies how generative AI has transformed disinformation into a scalable, adaptive, and near-indistinguishable force. Democracies now face a dual dilemma: how to protect electoral integrity without sliding into digital authoritarianism, and how to regulate social media without extinguishing free speech.

    AI-generated Disinformation and Elections

    Disinformation has long been part of political warfare, but generative AI has amplified its reach. What once required substantial coordination can now be produced by individuals using publicly available tools[3]. Deepfakes, cloned voices, and algorithmically targeted propaganda spread faster than journalists or regulators can verify them.

    This acceleration of AI-generated deepfakes undermines democracy in three ways. Firstly, it erodes citizens’ capacity to make informed decisions as disinformation and misinformation flood the digital space. Secondly, it intensifies polarisation by reinforcing echo chambers and discouraging openness to alternative viewpoints. Thirdly, it diminishes trust in institutions, corroding the legitimacy of democratic governance.

    In 2023, the IPSOS-CIGI Survey found that 40% of respondents had lost trust in the media and 22% in government due to online disinformation. Beyond this, such manipulation can spill into digital harassment, hate speech, and political violence. States often respond with surveillance or censorship justified as counter-disinformation, but these measures can suppress legitimate news, thus undermining democracy and replicating the very authoritarianism they seek to prevent.

    Social Media and the Architecture of Manipulation

    Social media remains the central infrastructure of digital manipulation. Major platforms fail to enforce consistent standards, particularly in vulnerable or semi-autonomous regions where institutional checks may already be weak[4]. This brings us to the Kurdish elections of 2024, where a deepfake audio call between senior politicians, purportedly discussing election rigging, was broadcast by party-aligned media and went viral just days before the vote, further intensifying an already highly polarised political environment.

    Two users reported the content as misinformation, and Meta closed both reports without review. Only following an appeal did Meta consult external experts, who deemed the audio likely manipulated and labelled some identical posts, but inexplicably left the original unmarked. In response, the Oversight Board (an independent body created by Meta to review its content moderation decisions) overturned Meta’s decision to leave the post unflagged, requiring it to be labelled as “likely digitally created or altered.” The Board criticised Meta’s “incoherent and unjustifiable” failure to apply labels consistently across identical instances of the same manipulated media, noting that the company’s systems can only identify static images, not audio or video. It further condemned the absence of manipulated media labels in Sorani Kurdish, despite the language being supported on Facebook[5].

    This case is especially significant because Kurdistan, as a semi-autonomous region, represents a democratic hybrid. While holding regular elections and maintaining a degree of institutional pluralism, it operates within a fragile constitutional and geopolitical context. Its experience exposes a structural imbalance in global content governance: the algorithmic and linguistic priorities of large platforms overwhelmingly reflect the needs of the Global North. In such information ecosystems under stress, weak media literacy, limited institutional oversight, and transnational platform dominance converge to magnify the effects of disinformation. As Bradshaw and Howard observe, “although there is nothing necessarily new about propaganda, the affordances of social networking technologies—algorithms, automation, and big data—change the scale, scope, and precision of how information is transmitted in the digital age” [6]. This is particularly relevant as increasingly accessible and advanced AI tools make it easier to create and spread AI-generated disinformation. Meta’s failure to label manipulated audio in a language supported by its own platform reveals not merely a technical lapse but a deeper epistemic bias in how digital safeguards are distributed globally.

    The case of the Kurdish elections exemplifies the difficulties of flagging AI-generated content in different regions of the world. It demonstrates that the world’s most powerful communication platforms reproduce patterns of neglect in linguistically diverse and politically fragile regions.

    The UK’s Fragile Defence

    Following allegations of Russian interference in the 2019 general election, the European Court of Human Rights in Bradshaw and Others v The United Kingdom[7] upheld the government’s response, citing legislative reforms such as the Elections Act 2022[8], the National Security Act 2023[9], and the Online Safety Act 2023[10]. Yet these frameworks remain ill-suited to the velocity of AI-generated manipulation. In the 2024 general election, while concerns about AI-generated disinformation were widespread, its direct electoral impact was limited. Nonetheless, with the resurgence of anti-immigrant rhetoric and rising racial hostility, particularly among far-right political movements, the spread of disinformation now appears inevitable.

    For instance, during the demonstrations led by Tommy Robinson in London in September 2025, AI-generated synthetic content circulated widely on social media, fuelling tension and misinformation.[11] This follows a familiar pattern seen in politicians such as Nigel Farage, who made misleading claims during the Brexit campaign, including the false claim that EU membership costs Britain £55 million per day. AI technologies are likely to accelerate such tactics, amplifying their reach and persuasive power.

    The measures acknowledged in Bradshaw and Others v The United Kingdom[12] remain insufficient today. The National Security Online Information Team (NSOIT) has faced increasing criticism for its lack of transparency and potential for abuse. Critics, including MPs across parties and civil society groups, argue that the unit has flagged legitimate criticism of government policies as “disinformation”, blurring the line between safeguarding democracy and constraining dissent.[13] The unit, formerly the Counter Disinformation Unit (CDU), is subject to limited accountability, as seen in the government’s refusal to release operational details, ultimately undermining public trust and calling into question the UK’s capacity to respond to AI-driven disinformation in line with Article 10 of the ECHR[14].

    Thus, while Bradshaw and Others[15] recognises the state’s obligation to protect electoral integrity, the approach it endorses risks overreach and democratic regression, highlighting the difficulty of balancing security and speech in the age of AI-mediated politics.

    Towards a Democratic Response

    Tackling AI-driven disinformation requires more than automated moderation or ad-hoc legislation. Democracies must pursue a coherent rights-based approach. This means pursuing international regulatory cooperation to establish clear standards for transparency and accountability in digital political communication. Social media platforms must be legally required to identify and label AI-generated political content, ensuring that manipulation cannot masquerade as legitimate discourse. This must be complemented by independent oversight mechanisms, which can help mediate the tension between freedom of expression and electoral integrity, ensuring that the fight against disinformation does not become a pretext for state censorship.

    A democratic response to AI-driven disinformation must be embedded in digital literacy, algorithmic transparency, and institutional accountability[16].

    Kunal Dhirani is a final-year LLB Law student at the University of Exeter with a keen interest in AI and LawTech.

    Zainab Zafar is a freelance journalist from Pakistan and a University of Exeter Law School graduate.

    The illustration is by Jorge Franganillo and is available on Unsplash.


    [1] Veronica Anghel, ‘Why Romania Just Cancelled Its Presidential Election’ (Journal of Democracy, December 2024) <https://www.journalofdemocracy.org/online-exclusive/why-romania-just-canceled-its-presidential-election/>.

    [2] Hendrik Mildebrath and Bente Daale, ‘TikTok and EU Regulation: Legal Challenges and Cross-Jurisdictional Insights’ (2025) <https://www.europarl.europa.eu/RegData/etudes/BRIE/2025/775837/EPRS_BRI(2025)775837_EN.pdf>.

    [3] Toby S James and Holly Ann Garnett, ‘Electoral Integrity Resilience: Protecting Elections during Global Risks, Crises, and Emergencies’ [2025] Democratization 5.

    [4] Giulia Alesse, ‘The Weaponization of Disinformation in Modern Geopolitics’ (Atlas Institute for International Affairs, 13 June 2025) <https://atlasinstitute.org/the-weaponization-of-disinformation-in-modern-geopolitics/>.

    [5] ‘Alleged Audio Call to Rig Elections in Iraqi Kurdistan’ (Oversight Board, 29 August 2025) <https://www.oversightboard.com/decision/fb-bu05syro/> accessed 4 October 2025.

    [6] Samantha Bradshaw and Philip N Howard, ‘The Global Disinformation Order: 2019 Global Inventory of Organised Social Media Manipulation’ (2019) <https://digitalcommons.unl.edu/scholcom/207> <https://www.researchgate.net/publication/355335659_The_Global_Disinformation_Order_2019_Global_Inventory_of_Organised_Social_Media_Manipulation>.

    [7] Bradshaw and Others v United Kingdom App no 15653/22 (ECtHR, 22 July 2025)

    [8] Elections Act 2022

    [9] National Security Act 2023

    [10] Online Safety Act 2023

    [11] Tom Cheshire, ‘Why Tommy Robinson Rally Was Different to Any Other’ (Sky News, 13 September 2025) <https://news.sky.com/story/why-tommy-robinson-rally-was-different-to-any-other-13430517>.

    [12]  Bradshaw and Others v United Kingdom App no 15653/22 (ECtHR, 22 July 2025)

    [13] Big Brother Watch, ‘Briefing Note for Parliamentarians on Disinformation and the Government’s National Security Online Information Team’ (November 2024) <https://bigbrotherwatch.org.uk/wp-content/uploads/2024/11/BigBrotherWatch-Briefing-on-the-National-Security-Online-Information-Team.pdf> accessed 4 October 2025.

    [14] European Convention on Human Rights, art 10

    [15] Bradshaw and Others v United Kingdom App no 15653/22 (ECtHR, 22 July 2025)

    [16] Alexander Romanishyn, Olena Malytska and Vitaliy Goncharuk, ‘AI-Driven Disinformation: Policy Recommendations for Democratic Resilience’ (2025) 8 Frontiers in Artificial Intelligence.
