Outlook on AI from Presidential Candidates, and From the States
Harris and Trump expected to continue policies each has pursued, while states are taking a more active role in AI regulation
By John P. Desmond, Editor, AI in Business

AI policy in the next presidential administration is likely to follow a path emphasizing safety, security and privacy as well as innovation and competition.
Mapping where the 2024 US presidential candidates stand on AI policy, based on their histories and recent statements, against a backdrop of AI regulation efforts emerging from a number of US states paints a picture of where AI policy is likely to go in the next administration.
President Joe Biden’s most visible action on AI came in October 2023, when he signed an Executive Order on AI that addressed AI safety and security, securing privacy, promoting innovation and competition, and confronting the potential for increased use of AI to worsen discrimination, bias and job displacement.
“If elected, Kamala Harris is expected to continue to advance the policy objectives captured by the Executive Order,” according to a summary prepared by Winston & Strawn LLP. The firm identified the key policy objectives as balancing the risks and potential associated with AI, and working with private companies to mitigate AI’s risks.
Former President Donald Trump’s most visible action on AI was signing an Executive Order in February 2019 entitled “American AI Initiative,” which called on federal agencies to start shifting funding toward AI projects. Winston & Strawn was not able to find an official statement on AI on the Trump campaign site.
In December 2023, Trump said at a campaign gathering in Iowa that if elected, he would “cancel Biden’s AI executive order and ban the use of AI to censor the speech of Americans.”
How Parties Align With Policy
Another report on where the candidates stand on AI, prepared by Capitol Technology University, addressed some specific issues. On data privacy, “Democrats have generally advocated for stronger data protection laws and user control, while Republicans have been more likely to prioritize business interests and national security concerns.” The authors noted Joe Biden’s continued urging of Congress to pass data privacy legislation.
On social media and misinformation, Democrats have pushed for tighter content moderation and antitrust measures, while “Republicans have focused on algorithmic transparency–but not interference–to protect free speech,” the authors stated. Donald Trump has advocated for less content moderation and more free speech online; in 2022 he launched Truth Social, an alternative social media platform he positioned as more “open and free” than existing platforms.
States Actively Pursuing AI Regulations
Many US states are actively pursuing regulation of AI. In Texas, Gov. Greg Abbott created an AI Council charged with studying how state agencies are using AI and assessing whether the state needs an AI code of ethics, according to a recent account from KERANews via the Texas Tribune.
Some 100 of Texas’s 145 state agencies are using AI in some form, according to Amanda Crawford, CIO of the Texas Department of Information Resources, in testimony before lawmakers at a recent hearing. Many of the agencies testified that AI has helped them save time and money. Edward Serna, executive director of the Texas Workforce Commission, for example, said a chatbot the agency created in 2020 has helped answer 23 million questions.
Tina McLeod, information officer in the Attorney General’s Office, said workers there have saved at least an hour a week using a tool that searches through lengthy child support cases.
Some speakers testified to the risks posed by AI. Grace Gedye, a policy analyst for Consumer Reports, testified that private companies have used biased AI models to make critical housing and hiring decisions that can hurt consumers. She suggested that lawmakers mandate that companies relying on AI for decision-making conduct audits and disclose to consumers how they are being evaluated. She noted that New York City enacted such a law, but that few employers there are actually conducting audits.
European AI Act Serves as a Model
Lawmakers repeatedly asked those testifying whether they knew of any templates from other states or countries for how to craft AI policy.
Generally, the European AI Act, approved by the European Union in March of this year, is serving as a model. The Act requires companies to put an AI ethical risk and responsibility program in place as an enterprise-wide endeavor, involving the board of directors, C-suite, compliance professionals and team managers, according to a recent summary from Thomson Reuters Institute.
Saskia Vermeer-de Jongh, a partner with HVG Law LLP, stated that the Act “is clear that building trust in AI starts with ensuring human oversight.” She added, “The EU AI Act’s detailed legislation provides a level of clarity and certainty for companies across sectors in developing and deploying AI systems.” The EU AI Act sets up a tiered, risk-based rubric to determine the level of oversight needed for an AI system. The risk levels are: unacceptable, high, limited and minimal risk. Systems developed for the military or for scientific research are exempt. The timeline for compliance with the Act is phased, and the Act is enforceable, with penalties keyed to company revenue.
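To make the tiered structure concrete, below is a minimal, purely illustrative Python sketch of how an organization might triage its own AI systems against the Act’s risk tiers. The tier names follow the Act; the keyword matching and function names are hypothetical assumptions for illustration, not anything prescribed by the Act or the firms cited here.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers under the EU AI Act's tiered, risk-based rubric."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # heaviest oversight obligations
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # largely unregulated

def classify(use_case: str) -> RiskTier:
    """Toy triage of a use-case description into a risk tier.

    The real Act defines these categories in legal text; the
    keyword matching here is purely illustrative.
    """
    prohibited = {"social scoring", "manipulative"}
    high_risk = {"hiring", "credit", "housing", "medical"}
    transparency = {"chatbot", "deepfake"}
    text = use_case.lower()
    if any(k in text for k in prohibited):
        return RiskTier.UNACCEPTABLE
    if any(k in text for k in high_risk):
        return RiskTier.HIGH
    if any(k in text for k in transparency):
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify("chatbot for unemployment questions"))  # RiskTier.LIMITED
```

In practice, classification under the Act turns on detailed legal definitions, such as the high-risk use cases enumerated in its annexes, not keyword matching.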
Different regions of Europe are adopting “distinctly different strategies on AI policy,” she stated, but all are consistent on the AI Act’s core principles requiring AI systems to respect human rights, sustainability, transparency and risk management. While suggesting that comprehensive AI legislation is not expected in the US for some time, Vermeer-de Jongh stated, “There is general consensus growing around the need to limit bias, strengthen data privacy and mitigate the impact of AI on the US workforce.”
The California State Assembly in late August passed a bill that would enact what many see as the strictest regulations in the nation on AI companies. Gov. Gavin Newsom has until September 30 to sign or veto it, according to a recent account in the Washington Post.
The bill’s author, Democratic state Sen. Scott Wiener, said he designed the bill to apply to companies training large and expensive AI models, not smaller startups seeking to compete with Big Tech companies. The law would require companies selling major AI systems to address certain “catastrophic” risks and to conduct safety tests. If the companies fail to conduct the tests and the technology is used to harm people, they could be sued by the attorney general of California. The measure passed the Assembly with a vote of 41-15 and went to the state Senate, which was expected to send it to the governor.
The battle over the bill is contentious. AI researchers associated with the effective altruism movement are advocating for stricter limits on AI development. AI researchers on the other side say strict government limits could stifle AI innovation and potentially allow other nations to surpass the US in technical prowess.
Hard lobbying is going on for each side. Dan Hendrycks, director and cofounder of the Center for AI Safety, a think tank that has received funding from Dustin Moskovitz, a leader in the effective altruism movement, consulted on an early version of the bill and testified for it before the California legislature. Also supporting the bill are well-known AI researchers Geoffrey Hinton and Yoshua Bengio. Even Elon Musk expressed support for the bill.
Industry lobbyists have launched a website that generates letters calling on state legislators to vote against the measure. Representatives from technology trade groups, including the Software Alliance, have been lobbying Gov. Newsom during meetings in his office.
“If it reaches his desk, we will be sending a veto letter,” stated Todd O’Boyle, tech policy lead at Chamber of Progress, a group that receives funding from Google, Apple and Amazon.
National leaders and AI experts have also weighed in on the bill, with far-from-predictable positions. Andrew Ng, a computer scientist and tech entrepreneur whose ventures include cofounding Coursera, stated, “This proposed law makes a fundamental mistake of regulating AI technology instead of AI applications.”
The range of positions indicates that AI regulation is tricky political territory in the US.
Read the source articles and information from Winston & Strawn LLP, from KERANews via the Texas Tribune, from Thomson Reuters Institute and from the Washington Post.