Artificial Intelligence (AI) has become deeply integrated into many aspects of our lives. Initially, concerns about AI focused on job displacement, but today the discussion has shifted. The rise of generative AI (Gen AI) has introduced significant concerns, particularly regarding its influence on democratic processes around the world, such as elections.
A recent report by the Centre for the Study of Democratic Institutions (CSDI), a Canadian think tank, explores the potential risks generative AI poses to the integrity of elections. Titled “Harmful Hallucinations: Generative AI and Elections,” the report examines how AI can influence election outcomes by generating deceptive content, such as deepfakes, and by amplifying misinformation. Authored by experts including Chris Tenove, Nishtha Gupta, and Netheena Mathews, it weighs the dangers of Gen AI against its possible benefits.
The Age of “Deepfake Elections”
With multiple major elections approaching in 2024, including those in the United States, India, and the European Union, experts have labeled this period the “age of deepfake elections.” The CSDI report highlights how generative AI has become a critical tool in election campaigns across these regions. While Gen AI technology isn’t new, its growing accessibility and rapid advances have made it easier than ever to create misleading content, ranging from AI-generated misinformation to manipulated media and deepfakes.
“Generative AI technologies lower the cost of producing deceptive content and amplify existing threats to democracy,” said Chris Tenove, Assistant Director of CSDI. Rather than creating entirely new problems, AI exacerbates existing ones, making them more pervasive.
Key Risks Posed by Generative AI
The report categorizes the threats posed by Gen AI into three major areas: deception, harassment, and the pollution of information environments. Each of these risks is already a challenge to democratic processes, but Gen AI has magnified their scale and impact.
Deception
One of the most concerning uses of generative AI is its ability to deceive voters. Gen AI can create convincing deepfakes: audio, video, or images that appear real but are entirely fabricated. A striking example from earlier this year involved a deepfake of U.S. President Joe Biden that reached voters in New Hampshire. Robocalls circulated an AI-generated imitation of Biden’s voice falsely urging voters to skip the primary and save their votes for the general election, a tactic aimed at suppressing voter turnout.
In India, in the run-up to the 2024 General Elections, AI-generated videos surfaced featuring Bollywood actors criticizing Prime Minister Narendra Modi and endorsing his opponents. These deepfakes were widely shared before being identified as fabricated, misinforming thousands of voters.
Harassment
Generative AI is also a powerful tool for amplifying targeted harassment, particularly against political candidates. The report highlights an incident during the UK elections in which more than 400 doctored images of women from various political parties were posted to a website hosting deepfake pornography.
In India, AI content creators have reportedly received requests to produce explicit deepfakes or superimposed images of politicians. This misuse of technology raises significant ethical concerns about the boundaries of AI in political discourse. Mathews, one of the report’s authors, emphasized the emotional and psychological toll such harassment can take, even on individuals who are not active political figures.
Polluting the Information Environment
Perhaps the most widespread impact of generative AI is its capacity to flood the information ecosystem with inaccurate and misleading content. AI systems can inadvertently produce false information, adding to an already complex digital landscape. The CSDI report points to the 2024 European Union elections, where Microsoft’s Copilot AI generated incorrect election-related information in one-third of its responses.
The sheer volume of AI-generated misinformation makes it increasingly difficult for people to distinguish truth from falsehood. In some cases, genuine information is dismissed as AI-generated, while people implicated by authentic evidence can deny it by claiming it is a deepfake.
Balancing the Threats with Opportunities
Despite these risks, the CSDI report also highlights the potential benefits of generative AI in elections. In India, for example, the Bhashini AI platform, developed under the National Language Translation Mission, enabled Prime Minister Modi to communicate with citizens in multiple languages, ensuring a broader reach.
AI tools can also play a positive role in other ways: moderating online debates to foster constructive political discussion, summarizing complex policy documents, and providing real-time translation of political speeches, making information more accessible to diverse populations.
Addressing the Challenges Ahead
With elections scheduled in the U.S., India, Brazil, and other countries in 2024, the risks posed by generative AI will continue to grow. As Chris Tenove explained, “There’s a lot of hype around generative AI and doomsaying about its impacts. We wanted to assess the real evidence on the harmful uses of Gen AI and identify solutions.”
The CSDI report calls for proactive measures to mitigate the dangers posed by generative AI, including regulation, greater public awareness, and the development of tools to detect and counter misinformation. While AI holds transformative potential, ensuring that it strengthens democratic institutions rather than undermining them will be critical.
In a world where AI’s influence on elections is becoming unavoidable, the challenge lies in leveraging its benefits while protecting the core values of democratic processes.