2024 was a landmark year for democracy, as many nations headed to the polls. As the ‘super election’ year drew to a close, reflection on it identified a major threat to democracy: the use of generative artificial intelligence to sway electoral processes, opening significant room for democratic backsliding.
Illustration by The Geostrata
With the widespread popularity of AI applications and large language models, ordinary people's conversations with AI systems, and their consumption of content generated by them, have increased manifold. This trend intensifies as AI-generated content is continuously produced, shared, and consumed across platforms.
Personalised recommendations and widespread content generation have augmented ongoing engagement with AI materials.
With this, the information ecosystem has undergone a multidimensional transformation: complex changes in its environment have redefined not only how information flows but also the purposes it serves.
The capacity AI wields to generate deepfakes and synthetic media, launch cyberattacks, and spread misinformation of all kinds is mammoth; virtually every electoral democracy is grappling with the problems it poses. Despite its potential to advance democratic practice, the problems generative AI produces now disproportionately outweigh that promise.
State policymakers at large are falling short of equipping their machinery to respond to these unique problems with precision. With AI's complexities outpacing attempts at regulation, crafting effective laws is a gamble: without sufficient insight into the technology's long-term impact, legislation risks missing the mark and producing unintended consequences. This article explores the many problems AI systems have sown in fertile democracies, alongside suggestions for maintaining electoral integrity.
POLITICAL DEEPFAKES: MANUFACTURING ‘REALITIES’
In the post-truth age, one of the most pressing risks AI poses is the spread of misinformation. AI-generated deepfakes and false information from chatbots are widely circulated in attempts to draw voters into desired spheres of influence.
What makes navigation difficult is the burgeoning capacity and penetration of AI tools in recent years, in the absence of well-laid-out laws to govern their use.
Deepfakes took social media by storm ahead of the 2024 elections in many nations. The technology now has the remarkable capacity to clone a person's voice from minimal training data.
Synthetic media technology can now be viewed as a powerful threat to nuclear stability. As great-power rivalry intensifies against the backdrop of a cloudy global political atmosphere and a concerning leadership deficit, deepfakes could misrepresent an adversary's intent and create dangerous misperceptions. For instance, a deepfake of a leader announcing war against an adversary, or merely signalling the mobilisation of nuclear weaponry, could escalate a power rivalry.
Popular figures like Donald Trump have fueled the amplification of political deepfakes. In August 2024, Trump circulated AI-generated images of pop musician Taylor Swift appearing to endorse the Republican ticket and of Kamala Harris in Communist garb.
Later, in September, a video falsely depicting Kamala Harris in a hit-and-run stormed across social media, gaining millions of views. Strategic AI efforts were also applied to influence votes in U.S. swing states, raising concerns over the manipulation of public opinion and the potential for further political polarisation.
In Belarus, the country’s opposition created an AI-generated candidate in an attempt to reach a larger number of Belarusian voters, arguably deepening political insecurity ahead of the elections.
In another instance, Chinese-backed actors were identified deploying generative AI to meddle in Taiwan's elections in early 2024, in a pivotal campaign against the Beijing-sceptic Lai Ching-te.
With internet-literacy efforts still at a nascent stage in the Global South, identifying AI-generated content proves a daunting challenge for many within these nations. The 2024 Lok Sabha elections were no exception and were heavily influenced by disinformation campaigns curated with AI-based technologies.
In India, political parties are known to have funnelled massive sums into AI-generated content that could favour them at the polls. From deceased politicians endorsing their party’s candidates to misleading deepfakes of public figures with politically charged undertones, this election cycle witnessed it all.
Political nostalgia, which has consistently been at play in India, was advanced in 2024, with the circulation of ‘psychologically manipulative’ deepfakes to bolster parties’ election tactics.
As a case in point, former Tamil Nadu Chief Minister Karunanidhi, who passed away in 2018, appeared via a deepfake wearing his signature yellow scarf and sunglasses.
In the video, he highlighted his party’s sincere efforts to accelerate development in the state and urged voters to continue their support. The video whipped up an emotional storm among Tamil voters while igniting debates across the nation about its ethics.
While there have been many notable instances of political deepfakes around the world, they are yet to make a widespread, measurable impact on voter behaviour. Nevertheless, deepfakes are understood to be increasingly influencing public opinion.
HARNESSING AI FOR DEMOCRATIC PROGRESS
The expansion of artificial intelligence capabilities is not dangerous in itself; the method of its use is the primary matter of concern. Consensual, transparent use of AI has significantly helped resource-deficient campaigns, especially those launched by smaller political parties.
AI tools have also multiplied voter-engagement rates through micro-targeted messaging, eased language translation to bridge communication gaps, and seamlessly facilitated party-to-people communication, accentuating the spirit of participatory democracy. Post-election analysis, too, has been carried out with greater efficiency using artificial intelligence.
CHILLING EFFECT ON POLITICAL COMMUNICATION: THE WAY FORWARD
Deepfakes can also be identified using the same technologies that produce them. Machine-learning classifiers, digital watermarking, and blockchain-based provenance records could principally aid detection.
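To illustrate one of these approaches, the sketch below implements a toy provenance check: publishers register cryptographic fingerprints of authentic media, and any file that fails to match a registered original is treated as potentially synthetic. The `ProvenanceRegistry` class and the sample byte strings are hypothetical illustrations, not an existing system; a production deployment would anchor the ledger in tamper-resistant storage (the blockchain role mentioned above) and pair it with watermark detection.

```python
import hashlib

class ProvenanceRegistry:
    """Hypothetical registry of fingerprints for authentic media files."""

    def __init__(self):
        self._known_hashes = set()

    def register(self, media_bytes: bytes) -> str:
        """Record the SHA-256 fingerprint of an authentic media file."""
        digest = hashlib.sha256(media_bytes).hexdigest()
        self._known_hashes.add(digest)
        return digest

    def verify(self, media_bytes: bytes) -> bool:
        """True only if the file exactly matches a registered original."""
        return hashlib.sha256(media_bytes).hexdigest() in self._known_hashes

registry = ProvenanceRegistry()
original = b"official campaign video bytes"  # stand-in for real file contents
registry.register(original)

print(registry.verify(original))                  # True: authentic copy
print(registry.verify(b"deepfaked video bytes"))  # False: unregistered file
```

Note the limitation: hash matching confirms only exact copies, so even a one-byte edit breaks the match. This is why robust watermarking and statistical detectors complement provenance ledgers rather than being replaced by them.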
Social media platforms play the most crucial role in the regulation of misinformation, as circulation is carried out within these applications.
Popular platforms should evolve their technologies to mandatorily flag generative-AI content and take it down when it poses a risk of deceiving the user community.
Therefore, an effective way to counter deepfake technology is to enforce accountability on these platforms. Social media companies need objective parameters for vetting AI content and should ideally be able to distinguish permitted from banned uses. The ‘Code of Practice on Disinformation’ under the Digital Services Act is a groundbreaking mechanism evolved by the European Union to detect, analyse and expose disinformation.
It is a framework under which online platforms report periodically and systematically on their interventions against disinformation. Moving forward, all democracies would do well to prioritise such platform-intervention mechanisms in countering disinformation.
If operationalised efficiently, such mechanisms could prove productive in sustaining the integrity of the democratic spirit. To realise this, state and non-state actors must join hands to translate the vision into reality. Another obstacle on the road to implementing AI-governance laws is the generational divide among decision-makers.
While experienced senior decision-makers may lack the technical know-how and a comprehensive understanding of the technology and the threats it births, younger decision-makers may lack expertise in crafting legislation in tandem with convention and precedent.
Collaborative governance that bridges the generational gap is the need of the hour. A structured mechanism such as interdisciplinary advisory councils and joint drafting committees must be instituted to harmonise diverse perspectives. This would ensure the formulation of policies which are both forward-looking as well as rooted in legal rigour.
In addition, as for voters, awareness programmes aimed at helping them evaluate media content must be initiated, alongside encouragement of a healthy scepticism that induces critical thinking, prompting voters to scrutinise information sourced from the virtual realm, especially in the run-up to election season.
Democratic governments need to buckle up to wage a war against two powerful weapons: distortion and destruction.
The misuse of AI tools to influence democratic landscapes has proven to generate a multifaceted outcome, ranging from electoral manipulation to political intrusion and instability.
Effective regulation, awareness and counter-technologies are increasingly indispensable to safeguarding global democratic integrity. Reports indicate that 2024 electoral outcomes were less affected by the circulation of deceptive AI content than feared. However, with the technology constantly evolving, the future of democracy could be in jeopardy if we turn a blind eye to these mounting threats.
BY NAKSHATRA H M
TEAM GEOSTRATA