The coming months, and indeed all of 2024, will be full of elections in many democratic countries. The ability to detect Russian, Chinese or Iranian attempts to interfere in electoral processes in other countries today can help prepare for such threats, warns American cryptographer Bruce Schneier.
Around the world, the electoral process is increasingly at risk of interference from foreign players, who will now use artificial intelligence (AI) for this purpose, says Bruce Schneier, an American cryptographer, IT security expert and lecturer at the Harvard Kennedy School. He presented his observations in an article entitled “AI disinformation poses a threat to elections – learning to detect Russian, Chinese and Iranian interference in other countries could help the U.S. prepare for 2024,” published on The Conversation website on September 29, 2023. Below we publish an extensive translation:
When Russia launched a series of disinformation campaigns on social media in 2016 aimed at interfering in the US presidential election, it marked the beginning of a new era for governments seeking to influence other countries’ electoral processes. In the seven years since, many governments – most notably China and Iran – have used social media to influence elections abroad, both in the United States and elsewhere in the world. There is no reason to expect that things will be different in 2023 and 2024.
However, a new factor has emerged: generative artificial intelligence (generative AI) and large language models (LLMs). They make it possible to quickly and easily create endless amounts of text on any topic, in any tone and from any perspective. As a security specialist, I believe this is a tool perfectly suited to propaganda in the Internet age.
We are dealing with something completely new. ChatGPT was presented in November 2022. The more powerful GPT-4 arrived in March 2023. Other artificial intelligences for creating text and images date from roughly the same period. It is not yet known how these technologies will change disinformation, how effective they will be and what effects they will have. But we’ll find out soon.
The 2024 election marathon around the world
The election season will soon be in full swing in most democratic countries. By the end of 2024, 71 percent of people living in democratic countries will vote in national elections. Among them are Argentina and Poland (where the elections took place on October 15 – editor’s note); elections in Taiwan will be held in January 2024, in Indonesia in February, and in India in April. European and Mexican elections are scheduled for June 2024. The United States will elect a new president in November 2024.
Nine African democracies, including South Africa, will also hold elections next year. Australia and Great Britain have not set dates yet, but elections there will probably take place in 2024. It is worth adding that in Poland alone, three votes will be held within nine months: parliamentary elections (October 15, 2023), local government elections (scheduled for spring 2024) and elections to the European Parliament (June 6–9, 2024).
Many of these elections also matter to external countries that have conducted influence operations on social media in the past. China is particularly interested in Taiwan, Indonesia, India and many African countries. Russia is interested in Great Britain, Poland, Germany and the EU in general. Everyone, however, is watching the United States closely.
And we are only talking about the biggest players here. In every U.S. national election since 2016, another country has appeared trying to influence the outcome. First it was just Russia, then Russia and China, and most recently Iran joined them. As influencing foreign elections has become cheaper, more countries can get involved. Tools like ChatGPT significantly reduce the cost of producing and distributing propaganda, bringing this capability within the budget of many more countries.
A few months ago, I attended a conference with representatives of all the cybersecurity agencies in the US. They discussed their expectations regarding interference in the 2024 elections. They expected the traditional players – Russia, China and Iran – and a significant new one: “domestic actors”. This is a direct result of the falling cost of disinformation.
Of course, running a disinformation campaign involves more than just generating content. The hardest part is distribution. A propagandist needs a number of fake accounts to post on, as well as other accounts to boost that content into the mainstream, where it can go viral. Companies like Meta have become much better at identifying such accounts and removing them more efficiently. In August 2023, Meta announced that it had deleted 7,704 Facebook accounts, 954 Facebook pages, 15 Facebook groups and 15 Instagram accounts associated with a Chinese influence campaign. It also identified hundreds of other accounts on TikTok, the X platform (formerly Twitter) and two blogging sites – LiveJournal and Blogspot. However, this was a campaign that began four years ago, meaning it was disinformation from before artificial intelligence became widely available.
Disinformation is an arms race. Both attackers and defenders are getting better, but the world of social media is also changing. Four years ago, Twitter was a direct channel for reaching the media, and propaganda on that platform was a way to tilt the political narrative. A Columbia Journalism Review study found that most major news outlets used Russian tweets as a source of partisan opinion. The Twitter that virtually every news publisher read, and to which everyone posted, no longer exists.
Many propaganda channels have moved from Facebook to messaging apps such as Telegram and WhatsApp, making them harder to identify and remove. TikTok is a newer platform, controlled by China, and better suited to short, provocative videos – the kind that are much easier to create with artificial intelligence. There are also generative AIs available today that are connected to tools, which will further facilitate content distribution. Generative AI tools also enable new production and distribution techniques, for example the large-scale dissemination of low-quality propaganda.
Let’s imagine a personal social media account created and operated by artificial intelligence. It posts about a fake everyday life, joins interest groups and comments on others’ posts; in general, it behaves like a regular user. Every now and then, not very often, it writes – or spreads – something political. These persona bots, as computer scientist Latanya Sweeney calls them, have a negligible impact on their own. But replicated by the thousands or millions, they would have a much greater one.
Disinformation on artificial intelligence steroids
This is just one scenario. Military officials responsible for election interference in Russia, China and elsewhere are probably already tasking their best people with developing others. And their tactics will likely be much more sophisticated than in 2016.
Countries like Russia and China have a history of testing both cyberattacks and information operations on smaller countries before launching them on a large scale. When this happens, it’s important to be able to identify these tactics.
Countering new disinformation campaigns requires the ability to recognize them, which in turn requires searching and cataloging them now. In the world of IT security, researchers realize that sharing attack methods and their effectiveness is the only way to build strong defense systems. This approach works well for information campaigns: the more researchers analyze the techniques used in other countries, the better they will be able to defend their own.
Disinformation campaigns in the era of artificial intelligence are likely to be more sophisticated than in 2016. I believe the United States must make an effort to find and identify AI-produced propaganda beyond its borders – for example in Taiwan, where a presidential candidate claims he was defamed with a fake audio recording. If it does not, it will not see the threat when it arrives at home. Unfortunately, disinformation researchers are being attacked and harassed.
Examples from the United States show how these tools are already being used by both internal and external forces. In April this year, just after Joe Biden announced that he would run for re-election in 2024, the Republican National Committee (RNC) released a video created entirely with artificial intelligence. It depicted a dystopian vision of the future if Biden were re-elected US president in 2024.
In June, Republican presidential candidate Ron DeSantis used AI-generated photos of former President Donald Trump hugging Anthony Fauci, the former White House medical adviser, in his campaign. In early September this year, Microsoft analysts warned of Chinese agents who used AI-generated images to impersonate American voters online. In this way, they sought to spread disinformation and provoke debate on politically divisive issues as the US elections approach.
Perhaps everything will turn out well. In the era of generative AI, several important democratic elections have already taken place without significant disinformation threats: the primary elections in Argentina, the first round of elections in Ecuador, and national elections in Thailand, Turkey, Spain and Greece. But the sooner we know what to expect, the better we will be able to cope with what comes next.
Konkret24, The Conversation
Main photo source: Shutterstock