The UK general election is under close scrutiny amid warnings that rapid advances in cyber technologies, especially AI, combined with rising tensions between major powers, could jeopardize the integrity of 2024's landmark votes.
In April, Agnes Callamard, the head of Amnesty International, warned, “These rogue and unregulated technological advancements pose a significant threat to all of us. They can be used to discriminate, spread misinformation, and create divisions.”
The UK election on July 4, four months before the United States votes, will serve as a test case for election security, according to Bruce Snell, a cyber-security strategist at the US firm Qwiet AI, which uses AI to prevent cyber-attacks.
Although AI has been the focus of attention, traditional cyber-attacks continue to pose a major threat.
Ram Elboim, the head of the cyber-security firm Sygnia and a former senior operative at Israel’s 8200 cyber and intelligence unit, emphasized, “It involves misinformation, party disruptions, data leaks, and targeting specific individuals.”
State actors, particularly China and Russia, are expected to be the primary threats, with the UK already issuing warnings about potential interference.
“Their objectives could be to endorse specific candidates or agendas,” explained Elboim. “They could also aim to create internal instability or chaos that would influence public sentiment.”
The UK has an edge over the United States due to the short gap between announcing and holding the election, which limits attackers’ time to strategize and carry out their plans, Elboim noted. Additionally, the UK’s non-automated voting system makes it less vulnerable to attacks on election infrastructure.
– Deepfakes –
The hacking of institutions remains a concern: the UK has already attributed an attack on the Electoral Commission to China.
Elboim pointed out, “Disrupting a party, their computers, or a related third party could have a significant impact, even without affecting the main voting system directly.”
Individuals are also at risk of being targeted, with any compromising information obtained potentially used to blackmail candidates.
However, attackers are more likely to leak information to shape public opinion or hijack accounts to spread misinformation.
Former Conservative party leader Iain Duncan Smith alleged that Chinese state actors impersonated him online, sending fake emails to global politicians.
The use of AI to produce and distribute misinformation remains a major unknown factor in this year’s elections, Snell emphasized, especially with the rise of “deepfakes” – fabricated videos, images, or audio.
Snell highlighted the alarming potential for fakery, including software that can replicate someone’s voice from a short sample.
Wes Streeting, Labour’s health spokesman, said he had fallen victim to a deepfake audio clip in which he appeared to insult a colleague.
– Bot farms –
Snell recommended authorities focus on raising awareness to address this issue effectively.
Despite filters built into many AI applications to block depictions of real people, other software can still generate fake pictures and videos.
Snell noted that while AI is sophisticated, its safeguards can be easily tricked into producing images of real people.
AI is also used to create “bots” that automatically flood social media with comments to sway public opinion.
Snell said AI’s growing sophistication makes bot farms harder to detect, as they can mimic a wide range of communication styles.
While some software can effectively verify whether videos and pictures were produced using AI, Snell believes it is not yet widely deployed to combat the problem.
He emphasized that the AI industry and social media companies must take responsibility for addressing misinformation in this rapidly evolving landscape where lawmakers are struggling to keep up.
jwp/phz/ach/smw