When Not Knowing Enough About Your AI May Be Illegal
The revocation of Joe Biden’s Executive Order on AI makes the outlook foggy, but the direction toward more scrutiny of AI systems had been clear. [Ed. Note: This post delayed from publication on Jan. 31.]
By John P. Desmond, Editor, AI in Business

Is it ever illegal not to use AI? For compliance reasons in certain industries, it is illegal not to know enough about how AI is being used and the business practices surrounding it.
This started out as an account of when it might be illegal not to use AI, a look at what was coming in AI compliance. But the Trump Administration has since taken office and revoked Joe Biden’s Executive Order on AI, reasoning that it “hampered” the industry’s ability to compete. Now, federal regulation of AI is up in the air.
Trump revoked Biden’s Executive Order on AI on January 20. The next day, he announced the “Stargate” joint venture to invest $500 billion in AI infrastructure, which so far translates mainly to new data centers. The backers include SoftBank, OpenAI and Oracle.
Given the uncertainty, one has to question the value of an update on AI compliance trends at this time, so this is more an account of where things were headed before the change in direction, if that is what it is; that may not be clear yet. We will assume that whatever compliance requirements are in place in the US and elsewhere still need to be observed.
The proper use of AI is, or was, a matter of risk management. “AI compliance is a process that involves making sure that AI-powered systems are in compliance with all applicable laws and regulations,” stated Vikas Chaubey, a certified information systems auditor working independently in India, in a recent post on LinkedIn.
This includes checking that companies and individuals are not using AI systems to break any laws or regulations, ensuring that the data used to train the AI system is collected and used legally and ethically, and verifying that the AI-powered system is not manipulating or deceiving people. That sounds like a heavy lift.
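As a rough illustration of how such checks might be tracked in practice, here is a minimal Python sketch; the field names and the is_compliant helper are hypothetical, invented for this example rather than drawn from Chaubey’s post or any particular framework.

```python
from dataclasses import dataclass, fields

@dataclass
class AIComplianceChecklist:
    """Toy record of the checks described above, one flag per check.
    Hypothetical field names, not from any official framework."""
    no_unlawful_use: bool        # system is not used to break laws or regulations
    lawful_training_data: bool   # training data collected and used legally and ethically
    no_manipulation: bool        # system does not manipulate or deceive people

    def is_compliant(self) -> bool:
        # Every check must pass; any single failure means non-compliance.
        return all(getattr(self, f.name) for f in fields(self))

# Example: a system with an unresolved training-data question is not compliant.
review = AIComplianceChecklist(no_unlawful_use=True,
                               lawful_training_data=False,
                               no_manipulation=True)
print(review.is_compliant())  # False
```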
Also, “AI compliance helps to protect organizations from potential legal and financial risks,” Chaubey stated. If an AI system is found to be non-compliant, the organization could be subject to fines, penalties or other legal action.
In the EU, the AI Act puts a regulatory framework in place that calls for heavy fines against violators. A company that implements a prohibited AI system can be fined up to 35 million Euro or seven percent of annual worldwide turnover, whichever is higher. Separately, the Dutch Data Protection Authority issued a 30.5 million Euro fine against Clearview AI in 2024 for illegal data collection practices related to its facial recognition technology, according to a press release from the European Data Protection Board.
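The penalty formula itself is simple arithmetic: the higher of a fixed cap or a share of turnover. Here is a minimal sketch in Python, assuming the AI Act’s figures for prohibited practices; the function name and the example turnover are hypothetical.

```python
def ai_act_max_fine_eur(annual_worldwide_turnover_eur: float) -> float:
    """Maximum fine for a prohibited AI practice under the EU AI Act:
    the higher of a fixed cap or a share of worldwide annual turnover."""
    FIXED_CAP_EUR = 35_000_000   # fixed ceiling for prohibited practices
    TURNOVER_SHARE = 0.07        # seven percent of annual worldwide turnover
    return max(FIXED_CAP_EUR, TURNOVER_SHARE * annual_worldwide_turnover_eur)

# A hypothetical company with 2 billion Euro in annual turnover:
# 7% of 2,000,000,000 is 140,000,000, which exceeds the 35 million floor.
print(ai_act_max_fine_eur(2_000_000_000))  # 140000000.0
```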
Guard Against AI ‘Worst Practices’
Experienced CPAs, not usually in the vanguard of new technology, are starting to see the writing on the wall about AI. “Artificial intelligence (AI) is not a philosophical discussion, business dream, or science fiction for CPAs—whether they are risk managers, auditors, or financial executives,” stated CPA Joel Lanz, writing recently in The CPA Journal. He is a lecturer at SUNY-Old Westbury and an adjunct professor at the NYU Stern School of Business in New York.
The writer took the approach of identifying these “worst practices” around providing AI-related services:
Failing to follow the governance practices and policies the company has in place;
Not obtaining a core understanding of AI;
Neglecting financial statement implications;
Neglecting to consider a recognized AI risk management framework;
Ignoring industry-specific AI risks and challenges;
Forgetting the vendor risks involved.
To come up to speed quickly on AI, the author suggested reading the United Kingdom’s National Cyber Security Centre article, “AI and Cyber Security: What You Need to Know” (https://tinyurl.com/4f4xt7d8), “which is reliable and to the point, emphasizing key risk considerations for organizations at a high level.”
Here’s a nugget: “When one considers that AI is still in its infancy, the number of consultants proclaiming themselves AI experts is quite alarming.”
CPA Lanz also suggests that CPAs peruse the National Institute of Standards and Technology’s (NIST) Artificial Intelligence Risk Management Framework (NIST AI 100-1, https://tinyurl.com/mwfzndb5), which “is quickly becoming a critical professional reference in understanding the practices expected to manage AI risk.”
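For orientation, NIST AI 100-1 organizes AI risk management around four core functions: Govern, Map, Measure and Manage. The short Python sketch below is only an informal summary for readers skimming the framework; the one-line descriptions are paraphrases, not NIST’s official text.

```python
# Informal summary of the NIST AI RMF core functions (NIST AI 100-1).
# The one-line descriptions are paraphrases for orientation, not official text.
NIST_AI_RMF_FUNCTIONS = {
    "Govern":  "Cultivate a risk-aware culture, policies and accountability",
    "Map":     "Establish context and identify risks for a given AI system",
    "Measure": "Analyze, assess and track the identified AI risks",
    "Manage":  "Prioritize and act on risks based on projected impact",
}

for name, summary in NIST_AI_RMF_FUNCTIONS.items():
    print(f"{name}: {summary}")
```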
He suggests, “CPAs should leverage the contents in facilitating discussions and challenging line executives on identifying and managing risks.”
To gain a real-world understanding of the risk-management challenges, the author recommends perusing the US Treasury Department’s March 2024 report on the state of AI-related threats in financial services (https://home.treasury.gov/news/press-releases/jy2212). He notes that the report’s findings are based on 42 in-depth interviews conducted in late 2023 with executives from the financial services sector, IT firms, data providers and companies offering anti-fraud services. “Among its key findings is a list of best practices used in the industry to manage AI risk,” the author stated.
To help manage vendor risk, the author advised, “execute on the fundamentals.”
Department of Justice Churn Means Uncertainty
The US Department of Justice is experiencing major change right now. It had recently weighed in on the considerations related to a corporation’s use of AI and the compliance programs intended to mitigate the associated risks. The DOJ’s Evaluation of Corporate Compliance Programs (ECCP) [Still available last time we checked. The Editors] provides corporations with a list of AI-specific questions to guide the development and ongoing assessment of compliance programs, according to an account from JDSupra.
Lawyers writing there in November 2024 said to expect “regulators to prioritize AI compliance reviews in 2025.” And, “We expect 2025 to see a surge in federal and state regulators sending requests for information, conducting compliance sweeps, and bringing enforcement actions targeted at specific industries where the risks posed to consumers may be higher.”
Now that the Trump Administration is in place, it is far from clear whether any of that will happen. On day one, Trump rescinded the Biden Executive Order on AI, which aimed at reducing the risks AI posed to consumers, workers and national security. The new Trump Executive Order on AI “revokes the Biden AI Executive Order, which hampered the private sector’s ability to innovate in AI by imposing government control over AI development and deployment,” according to a press release from the White House.
In an account describing the rapid-fire developments starting on January 20, lawyers with A&O Shearman stated, “The AI Executive Order included a wide range of federal regulatory principles and priorities regarding AI and directed a number of federal agencies to promulgate technical standards and guidelines.”
On January 23, another executive order was issued, “Removing Barriers to American Leadership in Artificial Intelligence,” described by the A&O Shearman lawyers as establishing “US AI policy to sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness and national security.” The order calls for a large committee to report back in 180 days with an “action plan” for the president. “This action plan will likely define a great deal of federal policy concerning AI for the foreseeable future,” stated the lawyers, among them Helen Christakos, a privacy and data security partner and one of the authors.
States Likely To Assume Greater AI Regulation Role
So for up to the next six months, details about how the federal government will propose to regulate AI may be scarce. The lawyers hardly went out on a limb to state, “We believe that the Trump administration will likely take a more hands-off approach to regulating AI,” with some exceptions likely.
The Shearman lawyers added, “Similar to the way in which US privacy laws developed, we believe that, in the absence of federal regulation, the opportunity to create AI-specific regulation will pivot back to the US states.”
Before the inauguration, Trump in December announced the appointment of David Sacks as his “AI and Crypto Czar.” Sacks is a venture capitalist, an early PayPal executive and a friend of Elon Musk. Head of the firm Craft Ventures, Sacks has “called for a more permissive policy on both cryptocurrency and artificial intelligence,” according to an account from The New York Times. Resonating with the more conservative parts of Silicon Valley, Sacks hosted a fundraiser for Trump in San Francisco in June, and he delivered a speech at the Republican National Convention in July.
The day after the Stargate joint venture announcement, Elon Musk posted on his social platform X, referring to OpenAI, Oracle and SoftBank, that “They don’t actually have the money,” according to an account on CNBC. Dueling oligarchs engaged, with Sam Altman of OpenAI firing back that Musk’s claim about the venture’s funding was “wrong.” He then chided Musk, “I realize what is great for the country isn’t always what’s optimal for your companies, but in your new role I hope you’ll mostly put [American flag emoji] first.”
Dueling oligarchs and a regulatory wait state: that is where US AI policy stands at the moment.
Read the source article and information on LinkedIn, in The CPA Journal, from JDSupra, from a White House press release, from The New York Times and from CNBC.