
AI and the 2024 US Elections: Guardrails Needed
Deepfake video and audio seen as potentially making it difficult for voters to discern the truth; lawmakers and tech leaders say they seek protections; details to come
By John P. Desmond, Editor, AI in Business

Alarm bells are being sounded about how AI will affect the next US presidential election cycle, as Congress and policymakers consider what safeguards can be put in place.
About half of Americans expect misinformation spread by AI to have an impact on who wins the 2024 election, and one-third report being less trusting of the results because of AI, according to a poll released this week by Axios and Morning Consult, a business intelligence company.
The results varied by political affiliation. Supporters of former President Trump were nearly twice as likely as backers of President Biden to say AI would decrease their trust in election results (47 percent to 27 percent).
Experience with AI was found to vary by political affiliation as well. Self-identified liberals (21 percent) were nearly twice as likely as moderates (11 percent) or conservatives (12 percent) to report having used generative AI for work or education. That may be tied in part to age: 35 percent of Gen Z (born between 1997 and 2012) reported using it, compared with just 3 percent of baby boomers (born between 1946 and 1964).
AI may one day reach the point where it can create a candidate for office who is not a real person. “There is growing interest in the possibility of an AI or technologically assisted, otherwise-not-traditionally-eligible candidate winning an election,” write Bruce Schneier and Nathan E. Sanders in a recent account in MIT Technology Review. Schneier is a longtime security professional who now describes himself as a public interest technologist; Sanders is a data scientist focused on creating open technology to help vulnerable communities.
“It is certainly possible for an AI candidate to mimic the media presence of a politician,” the authors suggest, though they consider that scenario implausible for now. “The age of non-human elected officials is far off,” they state, while pointing to some likelier near-term scenarios.
These include a legislature or its agents accepting testimony or comments generated by AI, and AI-generated political messaging outscoring campaign consultants’ recommendations in poll testing.
“Like everyone else, political campaigners and pollsters will turn to AI to help with their jobs,” the authors state.
Further out, the authors speculate about the potential of AI to define a platform for a political party that would win over many voters.
“One could imagine a generative AI with skills at the level of or beyond today’s leading technologies could formulate a set of policy positions targeted to build support among people of a specific demographic, or even an effective consensus platform capable of attracting broad-based support,” the authors suggest.
Brennan Center Sees Potential of AI to Twist Election Outcomes
The potential of powerful AI tools available today to twist the outcome of elections has never been greater. “Next year will bring the first national campaign season in which widely accessible AI tools allow users to synthesize audio in anyone’s voice, generate photo-realistic images of anybody doing nearly anything, and power social media bot accounts with near human-level conversational abilities — and do so on a vast scale and with a reduced or negligible investment of money and time,” stated Mekela Panditharatne, counsel for the Brennan Center’s Democracy Program, in a recent account posted at the Brennan Center for Justice.
The proliferation of disinformation is a top concern. “Elections are particularly vulnerable to AI-driven disinformation. Generative AI tools are most effective when producing content that bears some resemblance to the content in their training databases,” Panditharatne stated.
Safeguards are needed. The author suggests involving the Cybersecurity and Infrastructure Security Agency, the Federal Election Commission, the Defense Advanced Research Projects Agency, and the new AI Institute for Agent-based Cyber Threat Intelligence and Operation at the University of California, Santa Barbara.
“AI developers and social media companies must play a role in mitigating threats to democracy,” the author suggests. For AI developers, potential actions include filters for election falsehoods and limitations that make it more difficult to launch wide-scale disinformation campaigns.
For social media companies, the author recommends policies that reduce harm from AI-generated content while preserving legitimate discourse; use of verified sources of election information (such as the National Association of Secretaries of State); and the ability to identify and remove coordinated bots and to label deepfakes meant to influence elections.
For Congress and state legislatures, the author urges quick action to regulate AI. “Lawmakers cannot afford to dawdle or allow themselves to become mired in partisan bickering,” the author suggests.
Lawmakers should also require algorithmic impact assessments for AI systems deployed in governance settings, and require “paid for” disclaimers and other disclosures for a wider range of online ads than the law currently mandates.
Disinformation Prominent in the 2016 Presidential Campaign
Disinformation was a prominent feature of the 2016 US presidential campaign, punctuated especially by a Russian-backed troll farm that, between 2015 and 2017, purchased 3,500 ads containing fake claims designed to disrupt the American political system. In addition, the House Intelligence Committee found there were at least “80,000 pieces of organic” social media content produced by Russian interests, according to a recent account by David Gewirtz in ZDNet. Gewirtz is an author, entrepreneur and consultant on White House IT security.
Disinformation also surfaced in a 2019 video purportedly showing House Speaker Nancy Pelosi slurring her words as if drunk. “The video, shared on Facebook more than 45,000 times and hosting more than 23,000 comments, was a fake,” Gewirtz stated; the video and audio of a speech she gave at an event had been edited.
Now with generative AI in the toolset, concern is high about its potential to disrupt elections by distorting the facts. For example, a campaign ad for Florida Governor Ron DeSantis, a Republican candidate for president, used an AI-generated voice to imitate former President Trump in a way that made it appear he was attacking Republicans, Gewirtz stated.
“While AI could definitely prove to be a force multiplier for national elections, I expect it to be used heavily in campaigns where lower budgets require more creative use of resources,” Gewirtz stated, adding, “Just because a campaign is using generative AI doesn't mean it's weaponizing it.”
Autonomous AI Agents Seen as Potential Threat
The greatest concern centers on the use of generative AI for deepfake videos and forgeries. Lawrence Pingree, vice president of emerging technologies and trends at Gartner, told ZDNet, “The biggest new danger is that generative AI large language models can now be instrumented with automation,” such as AutoGPT or BabyAGI, autonomous AI agents that can be used to run goal-oriented misinformation campaigns. “This makes it much easier for both political parties or nation states to carry out influence operations,” Pingree stated.
Moreover, “LLMs also can be paired with deepfake technologies to run campaigns that can even potentially do audio and video. Taken to social media, these types of tools can create a very difficult digital environment for voters who want to know the truth,” Pingree stated.
Political consultants and election professionals are sounding the alarm. The American Association of Political Consultants recently condemned the use of deepfake content in political campaigns as a violation of its ethics code, according to an account in The New York Times.
“People are going to be tempted to push the envelope and see where they can take things,” stated Larry Huynh, incoming president of the association. “As with any tool, there can be bad uses and bad actions using them to lie to voters, to mislead voters, to create a belief in something that doesn’t exist.”
Tech bigwigs including Elon Musk, Mark Zuckerberg and Sundar Pichai met with lawmakers this week in a closed session on Capitol Hill to discuss AI regulation.
According to an account from Reuters, Musk stated afterwards, "It's important for us to have a referee," to "ensure that companies take actions that are safe and in the interest of the general public."
Who appointed Musk as the arbiter of fairness was not mentioned.
The more than 60 senators who took part expressed universal agreement on the need for government regulation of AI, especially to safeguard against deepfakes such as bogus videos.
Senate Majority Leader Chuck Schumer emphasized the importance of putting regulations in place before the 2024 US presidential election, especially around deepfake videos. "A lot of things that have to be done, but that one has a quicker timetable maybe than some of the others," he stated.
Other attendees included Nvidia CEO Jensen Huang, Microsoft CEO Satya Nadella, IBM CEO Arvind Krishna, former Microsoft CEO Bill Gates and AFL-CIO labor federation President Liz Shuler.
Read the source articles and information from Axios, MIT Technology Review, the Brennan Center for Justice, ZDNet, The New York Times and Reuters.
(Write to the editor here; tell him what you would like to read about in AI in Business.)