
AI Regulation Heats Up in the US and the EU
The EU and US have major differences in their approach to the regulation of AI; the EU-US Trade and Technology Council is a platform for working together
By John P. Desmond, Editor, AI in Business

The US needs to step up action on the regulation of AI, the European Union’s top justice official told US regulators on a recent visit to Washington.
Days later, the US Federal Trade Commission announced that it will probe how OpenAI gathered data for ChatGPT, to see whether any of the agency’s rules were violated.
These and other related developments show that, amid pressure from constituents, the US is getting more serious about the regulation of AI.
It’s been four years on the job for Didier Reynders, the European commissioner for justice, who is responsible for drafting and enforcing laws that apply across the European Union’s 27 member nations. In that time, he has seen the EU’s privacy law, the GDPR, go into effect, giving Europeans new rights over their personal data. In the US, he stated in a recent account in Wired, many proposals have been discussed, but he sees no “real follow up.”
As it did on privacy, the EU is leading the way on AI regulation with the European Parliament’s recent approval of the AI Act, which seeks to ensure that AI systems used in the EU are safe, transparent, traceable, nondiscriminatory and environmentally friendly.
Now, with the arrival of ChatGPT in November 2022 and the questions it has raised about how such systems are assembled and how they should be used, Reynders is urging US regulators to do more, suggesting that the EU AI Act can serve as a model.
“If you have a common approach in the US and EU, we have the capacity to put in place an international standard,” Reynders stated. The GDPR took years to gain acceptance on other continents, with the EU going it alone at the outset. “With real action on the US side, together, it will be easier,” he said.
FTC Queries OpenAI
The FTC last week opened a probe into OpenAI, sending the company a 20-page document asking for records related to how it evaluated risks associated with its AI models, according to an account in The Washington Post. The agency has issued warnings that privacy and consumer protection laws apply to AI.
The FTC asked OpenAI to provide details of all complaints it has received about its products making “false, misleading, disparaging or harmful” statements about people. It also asked about the company’s data security practices, to see how they align with consumer protection laws.
At a hearing last week in Washington, FTC Chair Lina Khan said that misuse of people’s private information in AI training could be a form of fraud or deception. “We’re focused on, ‘Is there substantial injury to people?’ Injury can look like all sorts of things,” Khan stated at the hearing.
Samuel Levine, the director of the FTC’s Bureau of Consumer Protection, said in a speech at Harvard Law School this spring that the agency would be “nimble” in getting ahead of emerging threats. “The FTC welcomes innovation, but being innovative is not a license to be reckless,” Levine stated, according to the Post report. “We are prepared to use all our tools, including enforcement, to challenge harmful practices in this area.”
At an April news conference with Biden administration officials on the risks of AI discrimination, Khan stated, “There is no AI exemption to the laws on the books.” She drew pushback from the industry, including from Adam Kovacevich, founder and CEO of the Chamber of Progress, an industry group. He commented that while the FTC has oversight of data security and misrepresentation, it is not clear the agency has the authority to “police defamation or the contents of ChatGPT’s results.”
EU Moves Forward With AI Act
While the haggling goes on in the US, Europe is moving ahead with its AI Act. The act passed the European Parliament on June 16 by an overwhelming majority. The Parliament’s president, Roberta Metsola, described the act as “legislation that will no doubt be setting the global standard for years to come,” according to an account in MIT Technology Review.
Negotiations among the interested parties in the European system will take some time, possibly years. In the meantime, the major implications of the act’s passage cited by the Tech Review authors include:
A ban on emotion recognition AI software. The authors suggest a political fight will come over this.
A ban on real-time biometrics and predictive policing in public spaces. A major legislative battle over this is expected. Police groups are not in favor of the ban on biometric tech, and France, for example, is considering increasing its use of facial recognition software.
A ban on social scoring. The use of data to generalize about people’s social behavior would be outlawed. This is complicated: while social scoring is often associated with China and other authoritarian governments, generalizing from behavioral data is arguably also what happens in routine practices such as evaluating creditworthiness and screening job candidates.
New restrictions for generative AI. The Act would require makers of large language models, such as OpenAI’s ChatGPT, to disclose any copyrighted material used in their training sets. The draft bill would also require that AI-generated content be labeled as such. The tech industry is expected to lobby hard against these provisions.
New restrictions on the recommendation engines of social media. Assigning such recommender systems to a “high risk” category, an escalation from other proposals, would mean more scrutiny of how they work and more liability for tech companies for the impact of user-generated content.
Margrethe Vestager, executive vice president of the European Commission, described the risks of AI at a press conference, emphasizing concerns about the future of trusted information, vulnerability to social manipulation by bad actors and mass surveillance.
“If we end up in a situation where we believe nothing, then we have undermined our society completely,” Vestager stated in the MIT account.
EU-US Trade and Technology Council Sees Early Success
As a way to try to align US and EU objectives around the regulation of AI, the EU-US Trade and Technology Council was formed in Brussels in 2021.
The council “has demonstrated early success working on AI, especially on a project to develop a common understanding of metrics and methodologies for trustworthy AI,” stated a recent Brookings report written by Alex Engler, a fellow in governance studies there. However, he sees significant differences in the two approaches.
“The EU and US strategies share a conceptual alignment on a risk-based approach, agree on key principles of trustworthy AI, and endorse an important role for international standards,” Engler stated. “However, the specifics of these AI risk management regimes have more differences than similarities.” Specifically in the area of socioeconomic processes and online platforms, “the EU and US are on a path to significant misalignment,” Engler wrote.
In one example, under Article 22 of the GDPR, which protects individuals against purely automated decisions, a Dutch court in 2021 ordered Uber to rehire drivers who had been fired as a result of negative ratings derived from algorithms. Observers said it was the first time a court had ordered the overturning of an automated decision to dismiss workers from employment.
This divergence is rooted in the EU AI Act’s broad grant of regulatory authority over many types of systems, while US federal agencies are “largely constrained to adapting existing US law to AI systems,” the Brookings report states. As a result, “the EU continues to roll out comprehensive platform governance while US policy developments remain obstructed.”
Read the source articles and information in Wired, in The Washington Post, in MIT Technology Review and from Brookings.
(Write to the editor here; tell him what you want to read about in AI in Business.)