Dignity & Democracy
  AI tools and disinformation during elections, by Ricardo Vasquez Dazarola

    Posted by ccld201

    15 October 2024

    Disinformation, which refers to false or misleading content that is spread with the intention to deceive or secure economic or political gain and which may cause public harm [1], has long been a weapon in political campaigns. However, the advent of AI has dramatically increased both the efficiency and scale of these operations, opening up new and unprecedented avenues for manipulating users and distorting public opinion [2]. AI-powered technologies, such as deep fakes, enable malicious actors to generate more realistic, precisely targeted, and large-scale disinformation, particularly during elections. The strategic use of these tools aims to undermine public trust in democratic institutions, heighten voter confusion, suppress political participation, and polarise societies [3]. This technological shift has significantly transformed the disinformation landscape, making it more sophisticated, widespread, and difficult to detect than ever before.

    Deep fakes and automation of content generation

    One of the key ways AI is utilised in disinformation operations is through the automation of content generation. One form of such AI-generated content, popularly known as deep fakes, consists of realistic manipulated or synthetic videos, images or audio recordings, produced using AI techniques, especially machine learning and its subset, deep learning, in which people appear to say or do something they have never said or done in reality [4]. One of the most widely circulated examples of deep fake imagery is the hyper-realistic fabricated photo depicting former president Donald Trump being arrested by the police following his indictment, which was viewed millions of times [5].

    When deep fakes are disseminated in conjunction with disinformation operations, they can produce a broad range of harms, including psychological (e.g., defamation) and societal (e.g., damage to democracy). For example, a deep fake video depicting a political candidate could be used to undermine the electorate’s ability to differentiate between genuine and fabricated material when making crucial decisions about democratic processes [6]. Consequently, deep fakes play a crucial role in disinformation operations because they can lower people’s suspicion that they are encountering inauthentic material, thus increasing the chances that it will be believed and subsequently shared.

    Micro-targeting disinformation

    In addition to content generation, AI plays a crucial role in targeting disinformation. Specifically, political micro-targeting is a strategy that uses data analytics, psychometrics and pattern recognition to collect and analyse enormous volumes of data about potential voters (e.g., gender, education, online behaviour) in order to send highly personalised political messages. Personalised communication makes recipients more responsive to such messages and more likely to act upon them. For instance, political micro-targeting enables disinformation to be aimed precisely at vulnerable or undecided segments of the population, those most likely to consume, believe, and spread it, using highly individualised, emotive narratives [7]. This allows malicious actors to exploit individuals’ vulnerabilities by pushing false messages that are more likely to impress and persuade those persons to change their voting behaviour or opinions on specific issues without their knowledge [8]. Implementing political micro-targeting in disinformation operations can therefore be instrumental in covertly interfering with democratic elections, undermining political communication and electoral fairness, and eroding trust in institutions [9].
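    To make the targeting mechanism concrete, the following sketch is purely illustrative: it uses invented voter profiles and placeholder messages rather than any real data or campaign system, and simply shows how basic profile attributes can be mapped to individually tailored political messages, the same mechanism that disinformation actors exploit.

```python
# Illustrative sketch only: invented voter profiles and placeholder messages,
# not a real campaign system. It shows how simple profile attributes can be
# used to select a personalised message for each recipient.

VOTERS = [
    {"id": 1, "age": 67, "interests": {"pensions"}, "undecided": True},
    {"id": 2, "age": 24, "interests": {"housing", "climate"}, "undecided": True},
    {"id": 3, "age": 41, "interests": {"taxes"}, "undecided": False},
]

# Hypothetical message variants keyed by the issue they are meant to exploit.
MESSAGES = {
    "pensions": "Tailored message playing on pension fears",
    "housing": "Tailored message playing on housing costs",
    "default": "Generic negative message about the candidate",
}

def pick_message(voter):
    """Return the message variant most likely to resonate with this profile."""
    for issue, text in MESSAGES.items():
        if issue in voter["interests"]:
            return text
    return MESSAGES["default"]

# Only undecided voters are targeted, mirroring how micro-targeting focuses
# on the segments considered most likely to be swayed.
for voter in VOTERS:
    if voter["undecided"]:
        print(voter["id"], "->", pick_message(voter))
```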

    Proliferation of bots mimicking human conduct on social media

    Another major challenge posed by AI-driven disinformation is the proliferation of bots and automated social media accounts. Bots, short for software robots, are fully or semi-automated computer programs, increasingly empowered by AI, that are designed for communication and the imitation of human online behaviour [10]. Their functions range from basic activities such as liking, sharing content, and following accounts to more complex tasks like generating posts, engaging in conversations, and interacting with users in real time. What makes bots particularly dangerous in the context of disinformation is their scalability [11]. By mimicking human conduct, they can massively amplify disinformation’s volume and visibility on a scale far beyond human operators’ capacity [12]. As a result, by posting a thousand times a day, bots can ensure that certain narratives or hashtags gain traction, trend across social media, and shape public discourse. Indeed, the sheer volume of content generated by bots can overwhelm users, creating the impression that certain viewpoints are more widespread or credible than they actually are. This dynamic is particularly dangerous when bots are programmed to overwhelm and silence minorities, discouraging them from expressing their opinions for fear of harassment, defamation, or social ostracism, thereby negatively affecting their right to freedom of expression.
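    One rough way to grasp this difference in scale is to compare an account’s posting volume with what a human could plausibly produce. The sketch below is a simplified, assumed heuristic, with invented activity logs and an arbitrary daily threshold; it is not an actual platform detection system.

```python
# Simplified, assumed heuristic: flag accounts whose posting volume far
# exceeds plausible human activity. The activity logs are invented.
from collections import Counter
from datetime import datetime

# (account, timestamp) pairs; a real system would ingest millions of these.
POSTS = [
    ("@citizen_anna", datetime(2024, 6, 1, 9, 15)),
    ("@citizen_anna", datetime(2024, 6, 1, 18, 40)),
] + [("@amplifier_77", datetime(2024, 6, 1, 0, i % 60)) for i in range(1000)]

HUMAN_DAILY_LIMIT = 200  # assumed threshold; real platforms use richer signals

daily_counts = Counter(account for account, _ in POSTS)
for account, count in daily_counts.items():
    label = "bot-like" if count > HUMAN_DAILY_LIMIT else "plausibly human"
    print(f"{account}: {count} posts/day -> {label}")
```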

    Content recommendation algorithms

    Lastly, social media content recommendation algorithms can significantly amplify the reach of disinformation. At their most basic level, content recommendation algorithms can be understood as systems that, powered by machine learning techniques, actively predict and suggest the kinds of content users are most likely to read, watch or share. The algorithm makes these recommendations based on individual interactions, preferences, and tendencies [13]. Since most social media platforms operate on profit-driven models that maximise user attention to boost advertising revenues, content recommendation algorithms are often designed to increase user engagement by promoting content that sparks strong emotional reactions, including controversial or sensational material [14]. Given the frequent overlap between disinformation and sensational content, these algorithms can inadvertently, or even deliberately, amplify the spread of false information, enabling it to proliferate far beyond the circles in which it would otherwise circulate [15]. For example, during elections, algorithms may prioritise promoting a disinformation conspiracy theory about a politician over its debunking simply because users are likelier to click on or interact with the conspiracy, spreading it to new users [16]. As the UN Special Rapporteur highlighted, content recommendation algorithms are widely acknowledged to direct users towards ‘extremist’ publications and conspiracy theories, thereby undermining fundamental rights such as the right to form an opinion and freedom of expression [17].
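    The engagement-driven logic described above can be reduced to a very small ranking rule: order items by predicted engagement, regardless of accuracy. The sketch below is a deliberately simplified illustration with invented items and scores; production recommender systems are far more complex.

```python
# Deliberately simplified illustration: rank items purely by predicted
# engagement, ignoring accuracy. Items and scores are invented.

items = [
    {"title": "Shocking conspiracy about Candidate X",
     "predicted_engagement": 0.91, "accurate": False},
    {"title": "Fact-check: the Candidate X claim is false",
     "predicted_engagement": 0.12, "accurate": True},
    {"title": "Local debate schedule announced",
     "predicted_engagement": 0.30, "accurate": True},
]

# Sorting by engagement alone puts the sensational false story first,
# which is the amplification dynamic discussed in the text.
feed = sorted(items, key=lambda it: it["predicted_engagement"], reverse=True)
for rank, item in enumerate(feed, start=1):
    print(rank, item["title"])
```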

    While AI dramatically enhances the capabilities of disinformation actors, it also presents opportunities for defending against such operations. AI-powered tools can be deployed to detect and counter disinformation by analysing patterns in text, images, and video content that may indicate manipulation. Notably, machine learning models trained to recognise deep fake videos can flag suspicious content for further analysis. Additionally, AI can assist in real-time social media monitoring, identifying coordinated disinformation campaigns and curbing the spread of false information before it goes viral.
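    As a rough illustration of this defensive use of AI, the sketch below assumes a pre-trained manipulation classifier, represented here by a placeholder scoring function, and simply routes high-scoring items to human review; it is not a working deep fake detector.

```python
# Rough illustration only: a placeholder scoring function stands in for a
# pre-trained manipulation classifier; real deep fake detection is far harder.

REVIEW_THRESHOLD = 0.8  # assumed confidence cut-off for escalating to reviewers

def manipulation_score(item: dict) -> float:
    """Placeholder for a trained model's probability that content is manipulated."""
    return item["model_score"]  # in practice this would come from model inference

INCOMING = [
    {"id": "vid-001", "model_score": 0.95},  # likely manipulated
    {"id": "img-002", "model_score": 0.10},  # likely authentic
]

for item in INCOMING:
    if manipulation_score(item) >= REVIEW_THRESHOLD:
        print(item["id"], "flagged for human review")
    else:
        print(item["id"], "published without flag")
```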

    In conclusion, AI plays a dual role in the context of disinformation during elections. On the one hand, AI technologies can be misused to supercharge disinformation’s speed, volume, reach, and efficacy; when deployed in large-scale disinformation campaigns, they could profoundly affect society, especially by eroding trust in democratic processes. On the other hand, AI also holds the potential to combat disinformation through advanced detection and mitigation techniques.

    Ricardo Vasquez Dazarola is a Chilean lawyer who graduated from the Law School of Universidad Adolfo Ibáñez, Chile (LL.B.) and Universiteit Leiden, the Netherlands (Advanced LL.M. – Law and Digital Technologies). He is currently a PhD fellow at the Centre for European Comparative and Constitutional Legal Studies. His research examines the new regulatory framework emerging at the EU level to address the artificial amplification of disinformation on social media during elections. In addition, he teaches the “Human Rights, Democracy and Digital Technology” master’s course at UCPH. Previously, he worked as a lawyer for the law firm Arias, Gompertz and as an external advisor to the Senate of the Republic of Chile on (i) digital platforms and (ii) artificial intelligence regulation.

    This paper was presented at the workshop on Democracy and Representation Challenges, University of Exeter and the Observatory for Representation, 3-4 October 2024.


    [1] European Commission, Communication – On the European democracy action plan, December 2020, at 18.

    [2] N. Bontridder & Y. Poullet, The Role of Artificial Intelligence in Disinformation, Data & Policy 3, November 2021, at 5-6.

    [3] C. Tenove, Protecting Democracy from Disinformation: Normative Threats and Policy Responses, The International Journal of Press/Politics, July 2020, 25(3), 517-537, at 525.

    [4] M. van Huijstee, P. van Boheemen & Others (European Parliament), Tackling deepfakes in European policy, July 2021, at I.

    [5] The Washington Post – Deepfake Trump arrest photos show disruptive power of AI. Available at https://www.washingtonpost.com/politics/2023/03/22/trump-arrest-deepfakes/

    [6] B. van der Sloot, Regulating the Synthetic Society: Generative AI, Legal Questions and Societal Challenges, Oxford: Hart Publishing, February 2024, at (i).

    [7] A. Arsenault, Microtargeting, Automation, and Forgery: Disinformation in the Age of Artificial Intelligence, University of Ottawa, March 2020, at 40.

    [8] F. Zuiderveen Borgesius, J. Möller & Others, Online Political Microtargeting: Promises and Threats for Democracy, Utrecht Law Review, 14(1), 82-96, February 2018, at 82.

    [9] T. Dobber, D. Trilling & Others, Effects of an issue-based microtargeting campaign: A small-scale field experiment in a multi-party setting, The Information Society, 39:1, 35-44, November 2022, at 4.

    [10] N. Bontridder & Y. Poullet, The Role of Artificial Intelligence in Disinformation, Data & Policy 3, November 2021, at 5.

    [11] M. Brkan, Artificial Intelligence and Democracy: The Impact of Disinformation, Social Bots and Political Targeting, Delphi – Interdisciplinary Review of Emerging Technologies, 2(2), 66-71, August 2019, at 67.

    [12] V. Boehme-Neßler, Digitising Democracy: On Reinventing Democracy in the Digital Era – A Legal, Political and Psychological Perspective, Springer, January 2020, at 49-50.

    [13] M. Buiten, Combating disinformation and ensuring diversity on online platforms: Goals and limits of EU platform regulation, January 2022, at 11. See also The Washington Post – Here’s how Facebook’s algorithm works. Available at https://www.washingtonpost.com/technology/interactive/2021/how-facebook-algorithm-works/  

    [14] Id., at 4.

    [15] F. Saurwein & C. Spencer-Smith, Automated Trouble: The Role of Algorithmic Selection in Harms on Social Media Platforms, Media and Communication, 9(4), 222-233, June 2021, at 225-227.

    [16] J. van Hoboken, N. Appelman & Others, The legal framework on the dissemination of disinformation through Internet services and the regulation of political advertising, December 2019, at 21.

    [17] United Nations – General Assembly, Disinformation and Freedom of Expression – Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, April 2021, at 14.
