Amid Fears of AI Disrupting the 2024 Elections, More Focus on Credible Sources
Verifying that news comes from humans rather than bots, and reaching voters where they are with reliable information, are among the strategies pursued to mitigate disinformation risks amplified by AI
By John P. Desmond, Editor, AI in Business
Fears that AI will disrupt 2024 elections in the US are focusing more attention on where people get their news, in the hopes of guiding voters towards credible sources of election information.
Social media platforms such as TikTok and Instagram, for example, are used as a source of news by 42 percent and 44 percent of younger voters, respectively, according to a recent account from Brookings. But these platforms have been underutilized by candidate campaign organizations, the authors report.
Research from the Social Science Research Council, for example, found that in the 2020 general election, only two percent of counties nationwide had Instagram or TikTok accounts.
Middle-aged and older voters might be better reached through Facebook, which is used most heavily by those ages 30 to 49. The Seminole County Supervisor of Elections Office in Florida used Facebook effectively in the 2020 elections, employing a voting alert feature across the state's 67 counties. Follow-up studies from the US Election Assistance Commission showed that voters living in the Florida counties where election information was shared via Facebook were more likely to register to vote.
Voter education campaigns are seen as an effective way for election officials to establish a dialogue with voters and the public to help voters recognize attempts to dupe them. “Planned and strategic voter education campaigns can mitigate the effects of mis- and disinformation generated by AI tools,” stated the authors, led by Norman Eisen, senior fellow in governance studies with Brookings.
Several states have tried voter education campaigns. Missouri released eight videos on Facebook, Twitter (now X), Instagram and TikTok. Ohio ran a campaign that took voters behind the scenes of an election office, showing recruitment, training, accuracy testing and auditing, as a way of increasing voter confidence in the election.
Another strategy is to train election staff to simulate potential scenarios. The Arizona Secretary of State held multiple training simulations with staff and county officials using manufactured deepfakes, to help them recognize and respond to deepfake misinformation campaigns.
Suggestions from Eric Schmidt
Platform providers were advised on how to protect elections from the risks of AI fakes and misinformation in a recent account in MIT Technology Review authored by Eric Schmidt. Schmidt was CEO of Google from 2001 to 2011 and is currently cofounder of Schmidt Futures, a philanthropic initiative. Among his recommendations:
Verify that a news source is human, as opposed to a bot;
“Know every source. Knowing the provenance of the content and the time it entered the network can improve trust and safety.” (A bit rich coming from a guy credited with being an architect of the surveillance economy.)
Filter advertisers to come up with a safe list of those who comply with laws and industry standards, such as adding disclaimers if synthetic content is used. Meta announced recently it will require political ads to disclose whether they used AI.
“With a concerted effort from companies, regulators, and Congress, we can adopt these proposals in the coming year, in time to make a difference,” Schmidt stated, adding a caveat, “The existing incentive structures will make misinformation hard to eliminate.”
Fake audio and video made with AI-powered tools make the threat more challenging. The US Congress is working on bills to better guard privacy and regulate AI, but the prospects of those measures safeguarding the election in November are not promising. Instead, experts are working to help voters distinguish real, credible sources of election information from content that is fake, manipulative or just plain lies.
“The ability to deceive from AI has put the problem of mis- and disinformation on steroids,” stated Lisa Gilbert, executive vice-president of Public Citizen, a group encouraging federal and state regulation of the use of AI in politics, according to a recent account in The Guardian.
Some states have instituted laws governing the use of AI in political advertising, and in many more states, bills are in progress to require measures such as clear disclosures and disclaimers so that voters understand if content is AI-generated. The Federal Election Commission is formulating rules, but none are yet in place.
“Hopefully we will be able to get something in place in time, so it’s not kind of a wild west,” Gilbert stated. “But it’s closing in on that point, and we need to move really fast.”
Low-Cost Generative AI Tools Supercharging Disinformation Efforts
The threat of election disinformation is being supercharged by AI in 2024, with free and low-cost generative AI tools available from Google, OpenAI and others, enabling nearly anyone to create “deepfake” video and audio with minimal effort and technical skill.
Deepfakes generated with AI have been tied to elections in Europe and Asia for months, serving as a warning for this year. “You don’t need to look far to see some people ... being clearly confused as to whether something is real or not,” stated Henry Ajder, an expert in generative AI based in Cambridge, England, in a recent account from the Associated Press. Ajder is the founder of the consulting firm Latent Space Advisory.
Ajder and other experts fear a surge in AI deepfakes could erode the public’s trust in what they see and hear. Tracking the source of AI deepfakes is difficult; governments and companies are not yet capable of stopping the surge or moving fast enough to prevent it, experts say.
As the technology improves, “definitive answers about a lot of the fake content are going to be hard to come by,” Ajder stated.
FBI Director Christopher Wray has warned that the growing threat of generative AI used for election disinformation makes it easy for “foreign adversaries to engage in malign influence.”
For example, in Moldova, an Eastern European country bordering Ukraine, the pro-Western President Maia Sandu has been a target. One AI deepfake that circulated shortly before an election depicted her endorsing a Russia-friendly party and announcing plans to resign.
Moldova officials believe the Russian government is behind the activity. With presidential elections this year, the deepfakes aim “to erode trust in our electoral process, candidates and institutions — but also to erode trust between people,” stated Olga Rosca, an adviser to Sandu. The Russian government declined to comment to the AP for the story.
Authorities Scrambling to Put Guardrails in Place
Authorities worldwide are scrambling to put guardrails in place. The European Union requires social media platforms to reduce the risk of spreading disinformation, for example by mandating the labeling of AI deepfakes starting next year. The rest of the world is further behind.
The world’s largest tech companies recently signed a voluntary pact to prevent AI tools from disrupting elections. Meta, owner of Facebook and Instagram, has said it will start labeling deepfakes on its platforms.
AI chatbots are capable of generating false and misleading information that could disenfranchise voters. “A world in which everything is suspect — and so everyone gets to choose what they believe — is also a world that’s really challenging for a flourishing democracy,” stated Lisa Reppell, a researcher at the International Foundation for Electoral Systems in Arlington, Virginia, in the AP account. She is a social media and disinformation specialist, and a digital democracy advisor.
Read the source articles and information from Brookings, in MIT Technology Review, from The Guardian and from the Associated Press.
To subscribe, write to me at jd@jpdcontentservices.com; I will add you to the list manually. It’s free!
(Ed. Note: I am experiencing an issue with the Substack free subscription button at the moment; it’s swapping out the free subscription email field for a Pledge Your Support button, which is asking for credit card information. I would not do that; my preference is to win support for the newsletter from advertising. I wrote to Substack support about it. Thank you for your patience.)