ChatGPT Spurring AI Regulation Efforts Worldwide
AI regulators seek a framework for policing ChatGPT and similar generative AI models, as European authorities go back to the drawing board to tune the AI Act
By John P. Desmond, Editor, AI in Business
After Italy issued a ban on ChatGPT on March 31, speculation ensued as to whether other European countries would follow.
Europe is far ahead of the US when it comes to AI regulation, having proposed the Artificial Intelligence Act in 2021, which sought to designate some uses of AI as “high-risk” and would impose stricter requirements for transparency, safety and human oversight. ChatGPT is upending these plans, sending AI regulators in Europe back to the drawing board.
The Italian National Authority for Personal Data Protection said that ChatGPT had "suffered a data breach on March 20 concerning users' conversations and payment information of subscribers to the paid service," according to an account in Euronews. The ChatGPT ban began immediately.
The Italian data regulator was critical of OpenAI for not providing notice to users whose data had been collected. The regulator also challenged "the lack of a legal basis justifying the collection and mass storage of personal data with the aim of 'training' the algorithms that run the platform".
The regulator also criticized the absence of any means of age verification, which would expose “minors to responses absolutely not in accordance with their level of development.” OpenAI was asked to respond within 20 days, “or risk a fine of up to 4 percent of its annual worldwide turnover,” the regulator stated.
Since its launch in November 2022, ChatGPT has been used by hundreds of millions of people, making it the fastest-growing consumer application in history, according to a UBS study.
After Italy’s decision to restrict access, the European Consumer Organization (BEUC) asked all authorities to investigate similar technology. "Consumers are not ready for this technology. They don't realize how manipulative, how deceptive it can be. They don't realize that the information they get is maybe wrong," stated Ursula Pachl, Deputy Director of the BEUC.
She called the early experience with ChatGPT "kind of a wake-up call for the European Union because even though European institutions have been working on an AI Act, it will not be applicable for another four years. And we have seen how fast these sorts of systems are developing," Pachl stated.
Regulators in Ireland, France, the UK and Germany all expressed concerns similar to those of Italy. OpenAI published a blog post on April 5 outlining its approach to AI safety, stating that it works to remove personal information from training data when feasible, trains its models to reject requests for individuals' personal information, and acts on requests to delete personal information.
European Commission Executive Vice President Margrethe Vestager was not pleased. "No matter which tech we use, we have to continue to advance our freedoms and protect our rights," she stated in a Twitter post. "That’s why we don’t regulate AI technologies, we regulate the uses of AI. Let’s not throw away in a few years what has taken decades to build."
OpenAI CEO Sam Altman stated on Twitter that the company would defer to “the Italian government” on the issue. “Italy is one of my favorite countries and I look forward to visiting again soon,” he stated.
Canada Also Concerned About ChatGPT Eroding Privacy
Meanwhile, regulators in Canada also got involved. On April 4, Canada’s privacy commissioner announced an investigation of OpenAI, citing concerns about privacy. "You might say, 'Oh, maybe it feels a bit heavy-handed,'" stated Katrina Ingram, founder of Edmonton-based consulting company Ethically Aligned AI, according to an account from the CBC (Canadian Broadcasting Corp.). "On the other hand, a company decided that it was just going to drop this technology onto the world and let everybody deal with the consequences. So that doesn't feel very responsible as well.”
OpenAI has said it used a variety of data sources to build ChatGPT’s models, including licensed content and content publicly available on the internet. "We didn't consent to any of this," Ingram stated. "But as a byproduct of living in a digital age, we are entangled in this."
Another Canadian observer said more information is needed about how ChatGPT was built. "One of the challenges right now is that I think we may not know enough about what's going on under the hood. An investigation can help to clarify that," stated Teresa Scassa, Canada Research Chair in Information Law and Policy and a law professor at the University of Ottawa.
She noted a precedent in Canada for cases of internet data harvesting violating privacy law. The American technology firm Clearview AI was found in a 2021 case to have violated Canadian privacy laws by collecting photos of Canadians without their knowledge or consent.
"We can have whatever law we want in Canada, but we're ultimately dealing with a technology that's coming from another country and that may be operating by different norms," stated Scassa.
AI Poses Challenges for Regulators Worldwide
Regulators in Canada, like those in Europe and the US Congress, are struggling to ground their efforts to regulate AI. Scassa suggested that governments need “to put in place something that will structure our response so that we set legal parameters that will help us govern AI." She added, "I certainly think that this is a very pressing issue that until we have those frameworks in place, it will be very difficult to respond to and to shape AI."
Not all countries are considering AI regulations right now. India is not. In a statement issued on April 4, India’s Ministry of Electronics and Information Technology explicitly stated that the government “is not considering bringing a law or regulating the growth of artificial intelligence in the country,” according to an account in Gizmodo. The ministry referred to AI as an “enabler of the digital economy” which it sees as playing a strategic role for the country moving forward.
The Indian government has committed a fraction of the spending of the US and China on AI research, and has not been generating significant AI startup activity, according to the account.
European AI Act Back to the Drawing Board
ChatGPT has upended work being done by the European Commission on the Artificial Intelligence Act, an attempt to define a framework for AI regulation. European regulators have been tuning how the Act would work since its introduction in 2021. The large language model powering ChatGPT, a form of generative AI, may not fit neatly into that framework, so the regulators are working to modify it.
“They are very powerful engines and algorithms that can do quite a number of things and which themselves are not yet allocated to a purpose," stated Dragoș Tudorache, a Romanian lawmaker, speaking of the new LLM technology. Together with Italian lawmaker Brando Benifei, he is working to shepherd the AI Act through the European Parliament, according to an account in Politico.
The EU’s draft AI Act regulations separate systems into three risk categories: “unacceptable-risk” AI systems, including manipulative systems that cause harm and all forms of social scoring systems; “high-risk” AI systems, including those evaluating consumer creditworthiness or assisting with recruiting; and “limited-risk” AI systems such as chatbots. (See The EU’s Digital Markets Act Sets the Pace for Big Tech Regulation, March 31, 2022, AI in Business.)
Seeking to prevent ChatGPT from turning out disinformation on a wide scale, Benifei and Tudorache in February proposed that AI systems generating complex texts without human oversight should be part of the “high-risk” list.
The direction of the European regulators is being closely watched by big tech companies. Natasha Crampton, Chief Responsible AI Officer for Microsoft, urged the EU regulators to maintain a “focus on high-risk use cases” and enable the low-risk cases to continue to be available to Europeans, according to the Politico account.
At the same time, Politico reports that the transparency activist group Corporate Europe Observatory has found that industry actors including Microsoft and Google have been lobbying EU policymakers to exclude general-purpose AI such as ChatGPT from the regulations likely to be imposed on high-risk AI systems.
Read the source articles and information in Euronews, Gizmodo and Politico.
(Write to the editor here.)