Prognosticating AI Policy Under the Second Trump Administration
Consensus on a regulation slowdown and on a national security focus; Biden Administration worked to “future-proof” much of its AI policy work, with many provisions having bipartisan support
By John P. Desmond, Editor, AI in Business

A review of prognostications about AI in the second Trump Administration shows a consensus that the march toward government regulation will slow, and that big tech’s bromance with the government will deepen.
Elon Musk, however, is a big X factor. He has landed on all sides of the issue at one time or another, making it difficult to discern his overall direction.
Brookings repeated predictions that the new Trump Administration will repeal the AI Executive Order signed by President Joe Biden in October 2023. The order outlined a regulation-heavy approach to AI from multiple agencies. In one instance, the order directed the Department of Commerce to define requirements for reporting large AI models and computing clusters to the government.
The Office of Management and Budget is expected to back off requirements around the procurement of AI, for example by adopting a less risk-centered policy or reducing the consideration of climate impacts. The new administration is also likely to have the Department of Commerce and the Federal Trade Commission take a more hands-off approach to AI regulation, Brookings predicts.
Conversely, the new administration is likely to get tougher with AI export control, especially to make it more difficult for China to acquire the most advanced AI semiconductor chips. With recent reports of growing military and intelligence activity around AI in China, the incoming administration may increase collaboration with American AI companies, in part by adopting a less risk-averse approach than is reflected in the National Security Memorandum on AI adopted by the Biden Administration in October 2024.
Trump Has Promised to Repeal Biden Executive Order on AI
An account in Investopedia was more blunt, quoting directly from the President-elect’s official platform, which states about AI: "We will repeal Joe Biden's dangerous Executive Order that hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology. In its place, Republicans support AI Development rooted in Free Speech and Human Flourishing."
President Biden’s October 2023 executive order on AI included a principle that the AI initiative "must be consistent with my Administration's dedication to advancing equity and civil rights." In addition, the order created the US Artificial Intelligence Safety Institute (AISI), an organization meant to evaluate the safety and security of advanced AI models. Dozens of tech companies last month urged Congressional leaders to enact legislation that would formalize and fund the institute before the end of the year, the Investopedia account stated.
Some tech investors are happy with the Trump takeover of Washington. Among them is venture capitalist Marc Andreessen, cofounder and general partner of Andreessen Horowitz. In December 2023, he called the regulation of AI "the foundation of a new totalitarianism." In a "Techno-Optimist Manifesto" published in October 2023, Andreessen railed against "trust and safety" and "tech ethics" as two prongs of a "mass demoralization campaign" that he said threatened to stifle economic growth and lead to society’s decline, according to the Investopedia account.
Elon Musk, jibed on Saturday Night Live for hanging out so much at Mar-a-Lago, has a puzzling history on AI safety. He was a cofounder of OpenAI, launched as a nonprofit with the goal of building “safe and beneficial artificial general intelligence for the benefit of humanity.” But he split with the company when it moved in a more commercial direction under Sam Altman. That split seems to indicate concern about the dangers AI poses to human safety.
Then, as the head of Tesla, Musk unleashed self-driving cars on a market where the software is at times trusted on its own to get the driver to the destination safely. Self-driving cars are an impressive technical accomplishment; the jury is out on how safe they are. Releasing them to the public seems to indicate little concern for the risk they pose to the driving public. So sometimes Musk is concerned about AI safety, and sometimes he’s not. (Good luck to us.) Musk did back the California state bill, SB 1047, that would have increased AI regulation, including requirements that large AI models undergo safety testing; Gov. Gavin Newsom vetoed it, sending the effort back to the drawing board.
One player sees the Trump Administration as leading a boost in AI, with the giants Microsoft, Google, Amazon and Palantir among those that stand to benefit. Palo Alto Networks CEO Nikesh Arora told CNBC shortly after the election that it was "more than likely" that a Trump administration would accelerate investment in AI, Investopedia reported.
Agencies Seen Becoming More ‘Hands-Off’ on AI Regulation
A consensus is emerging that regulation of AI is unlikely to increase under the new Trump Administration. Very likely, “Agencies will be instructed to take a slightly more hands-off approach to AI regulation and also consider alternative approaches besides regulation,” as was the case under the first Trump administration’s own AI memorandum, stated Adam Thierer, a senior fellow at R Street Institute, a public policy think tank based in Washington, DC, in a recent account in FedScoop.
More of a challenge for the new Trump team will be how to play a “cut-and-paste game of saying what you do and do not like as you do the repeal and replace of the executive order,” Thierer stated. Aspects of AI policy with bipartisan support may stay in place, including cybersecurity guidelines, certain recommendations on national security and some guidance on government efficiency.
One observer expects the Trump Administration to put national security in the forefront of AI policy, as the Biden Administration has done, with guardrails likely to differ. “The way we get there — I think that’s where you will see differences,” stated Divyansh Kaushik, a VP at Beacon Global Strategies, a DC-based national security advisory firm, in the FedScoop account.
The Department of Commerce’s AISI and the National AI Research Resource (NAIRR) pilot, being run by the National Science Foundation, have had bipartisan support.
The AISI, housed in the National Institute of Standards and Technology, has focused on evaluation, testing and development guidelines for AI models. The NAIRR is aimed at providing researchers the tools needed to advance AI research. Whether Congress decides to advance bipartisan legislation to firm up the continuation of these efforts could affect where the Trump Administration lands on them.
Beacon’s Kaushik sees it as unlikely this legislation will make it through this Congress. R Street’s Thierer sees the Trump Administration as likely to pull back some authority of the AISI to the White House and away from the Department of Commerce.
The NAIRR is the product of recommendations from a task force created under the National AI Initiative Act, which was signed into law by Trump in his first administration. It is often described as a shared research infrastructure.
Musk Introduces an “Unpredictable Dynamic” in Forecasting AI Regulation
The impact of Elon Musk on AI policy under Trump is a guessing game right now. Writing in Tech Policy Press, AI scientist Dr. Haileleal Tibebu stated, “To what extent will Musk's guidance on AI shape policy change? What direction is Musk likely to advocate for—will his influence lead to policy cuts, or will safety, and ethical standards hold priority?” Tibebu is on the faculty of the University of Houston, and is the founder of AfricanVoiceAI.org, an organization that promotes an African perspective on global AI discourse. He conducts advanced research in responsible AI.
He added, “This dual stance—advocating both for regulatory freedom to accelerate technology and for caution in specific high-risk areas—adds an unpredictable dynamic to President-elect Trump’s deregulation agenda, raising questions about how AI policy may ultimately take shape under this administration.”
Whether the US will cede leadership of global AI safety practices remains to be seen; if it does, that could open the door for other major powers to take the lead. The Biden Administration has been engaged in efforts around the world to set ethical AI standards, partnering with international organizations including the United Nations. “If the Trump administration steps back from these global efforts, other major players—such as China—could take the lead in shaping international AI standards, potentially resulting in a fragmented global regulatory landscape,” Dr. Tibebu stated.
In the coming weeks and months, the Trump Administration’s actions on AI will shape the landscape of AI ethics, accountability and global cooperation.
Read the source articles and information from Brookings, Investopedia, FedScoop and Tech Policy Press.