Deepfakes / AI-generated woman's face.

"Deepfakes are realistic-looking content created without consent," explain Goodstein and Thomson. "They can employ voices, videos, images to create online content to deceive people."

(Photo: FotografieLink via iStock/Getty Images Plus)

10 Actions Every Campaign Can Take to Make a Difference Against Deepfakes

Regulatory and legislative solutions are being proposed to deal with deepfakes, but campaigns can take real action now.

Nefarious audio and video content made to trick voters presents a clear and present danger to free and fair elections. Deepfake technology has advanced rapidly as computer processing has become faster and cheaper and audio and video editing software has become universally available.

These advances have already been deployed in the U.S. primary elections: a fake robocall using a spoofed voice of Joe Biden was made to confuse voters about which day the election was held, all to suppress turnout among a targeted group of voters. With the UK General Election having been called and the US presidential election coming into view, we need to be more aware than ever of the dangers of deepfakes.

Political campaigns cannot stop the advancement of technology; instead, they should embrace the new reality of modern campaigning and recognize that dirty tricks have evolved with it. Nonetheless, there are steps that every political campaign (and concerned citizen) can take to minimize the chances of being thrown off course or duped by a deepfake.

Deepfakes are the new reality, and their impact could bring major political harm and undermine democracy.

Deepfakes are realistic-looking content created without consent. They can employ voices, videos, and images to create online content that deceives people. They can cause significant harm to individuals, being used to blackmail, harass, commit fraud, exact revenge, and more. As AI advances, so does the quality of deepfakes.

It is not that the technology is inherently bad. Some businesses have recently experimented with using AI to send personalized messages to their staff.

Similarly, some politicians have used the technology for light-hearted purposes such as creating online games involving the candidates. But we have already seen examples of deepfakes being used in elections to try to trick voters. Joe Biden, Keir Starmer and Sadiq Khan have already been victims of deepfakes, and examples from around the world, from Indonesia to France, are increasing in frequency.

It is not just about video but audio as well. What could be better than a slightly poor-quality ‘illicitly recorded’ phone conversation or comments from an event where a candidate says something outrageous? The more amateur the sound quality, the more damage it may do.

And with elections across the world, not least the US, EU, and UK, taking place this year, there is a focus on what can be done about the danger.

According to a new survey from the BCS, The Chartered Institute for IT in the UK, the influence of deepfakes on the UK General Election is a concern for most tech experts. 65% of IT professionals polled said they feared AI-generated fakes would affect the result.

But they also think the parties themselves will be involved: “92% of technologists said political parties should agree to publicize when and how they are using AI in their campaigns.” This suggests that they do not entirely trust politicians either…

According to another survey, 70% of UK MPs fear deepfakes.

Regulatory and legislative solutions are being proposed to deal with deepfakes, but campaigns can take real action now.

1) Avoid the void – problems arise when there is a space to fill. The more content a campaign puts out, and the wider the range of topics it covers, the less space there is for a deepfake to fill.

2) Deal with controversy – rather than failing to have a position on a difficult issue of the day, a campaign needs to tackle it. Again, this prevents a deepfake from being able to exploit an issue where there are firm views but political silence.

3) Consistency of approach – moving around too much on an issue opens space for deepfakes to exploit. The more an announcement looks out of the ordinary, away from the usual, the easier it will be to expose and challenge a deepfake.

4) Establish a dedicated rapid-response unit – every campaign should have a team responsible for monitoring and correcting false and misleading information, whether it comes from an AI-generated deepfake or simply a misquoted statement. The more that responsibility is vague or unattributed, the less coherent and speedy the necessary response will be.

5) Call it out as soon as possible – the dedicated unit needs access to the latest detection software and should be staffed by experts. Critically, a deepfake needs to be challenged as soon as possible to prevent it from gaining traction. Media relations advice will sometimes say not to publicize or give airtime to an opponent’s argument, as doing so only raises its profile. But deepfakes are different: they need to be warned against.

6) Cross-candidate / party consensus – as much as possible there should be a common approach to deepfakes. All candidates and campaigns have an interest in tackling them: the more some believe they stand to gain from distributing deepfakes, the more likely deepfakes are to have an impact.

7) All candidates should take responsibility – dealing with deepfakes should not be seen as solely the responsibility of a central campaign team. Every candidate is at risk, so there needs to be a local as well as a national focus.

8) Inform the media – journalists are aware of their responsibilities when dealing with deepfakes and will welcome information when examples are found.

9) Work with social media channels – campaigns should set up discussions with them in advance so that action can be immediate if examples are found. Establishing working protocols will help with speed.

10) Control the search results – candidates must control their search results as well as the narrative around fakes and rumors. Remember, today’s deepfakes and smear campaigns are known for dropping false and misleading information late in the campaign cycle, often close to election day. A team needs to think about whether it can get a credible newspaper to run an article on the facts. How fast can the campaign issue a statement and post it onto its website? Will enough voters even see the response? Campaigns need to be prepared to run search ads to direct curious citizens, as well as contextual ads keyed to terms around the deepfake, to warn people to “be aware.”
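The monitoring work described in point 4 can begin with something as simple as a keyword alert over public news feeds. Below is a minimal, hypothetical sketch in Python (standard library only); the feed URL, feed contents, and keyword list are illustrative placeholders, not a real campaign's watch list:

```python
# Minimal sketch of a keyword-alert monitor for a rapid-response unit.
# The keywords below are illustrative placeholders for whatever terms
# a campaign's monitoring team actually cares about.
import urllib.request
import xml.etree.ElementTree as ET

KEYWORDS = {"deepfake", "leaked audio", "spoofed"}

def flag_items(feed_xml: str, keywords: set[str]) -> list[str]:
    """Return titles of RSS items whose title or description mentions a keyword."""
    root = ET.fromstring(feed_xml)
    flagged = []
    for item in root.iter("item"):  # RSS 2.0 <item> elements
        title = item.findtext("title") or ""
        desc = item.findtext("description") or ""
        text = f"{title} {desc}".lower()
        if any(k in text for k in keywords):
            flagged.append(title)
    return flagged

# In production the unit would poll real feeds on a schedule, e.g.:
# feed_xml = urllib.request.urlopen("https://example.com/news.rss").read().decode()
# alerts = flag_items(feed_xml, KEYWORDS)
```

In practice, a rapid-response unit would poll many feeds and social channels continuously and route flagged items to a human for verification before any public response; automated matching only narrows what the team has to read.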

The Internet has taught us to run toward the fire instead of away from it. These attacks are salacious enough to generate news stories about the tactics themselves, and they spread organically through old-fashioned word of mouth. People will be searching the gossiped rumors on their phones to learn more, so you must think about what the search results look like. Search engine optimization (SEO) is important for knocking down misleading information. Is there a credible outlet that will be covering the campaign’s late-breaking rumors? If not, you may need to create your own site, modeled on fact-checking services such as Snopes, FactCheck.org, or PolitiFact.

When these types of services did not exist to quickly dismiss the rumors and misleading propaganda in Ukraine, young students created their own website, StopFake.org, to serve as a transparent hub and debunk the flurry of rumors and misinformation with credible, hyperlinked, sourced facts. The concept is not new, either. In 2008, Barack Obama’s campaign created FighttheSmears.com, a website that addressed all of the rumors. The campaign controlled its narrative with this fact-based, credible website, which Google and Yahoo indexed at the top of the search results.

Unfortunately, the spreading of lies has become more advanced with technology that morphs candidates' voices and facial expressions. Deepfakes are the new reality, and their impact could bring major political harm and undermine democracy. Campaigns must not bury their heads in the sand or pretend this technology does not exist. We are all responsible for acting against false and misleading content from nefarious operators trying to sow chaos and discontent, cause confusion, or suppress voters. If that responsibility is not embraced, we will all suffer the consequences.

Our work is licensed under Creative Commons (CC BY-NC-ND 3.0). Feel free to republish and share widely.