
Human Deep Thinking Needed to Address the Threat of AI
Departure of Geoffrey Hinton from Google spurs more introspection about the threat of AI to civilization, and what to do about it
By John P. Desmond, Editor, AI in Business

AI could be a threat to civilization, it seems.
This week we learned that AI scientist Geoffrey Hinton is leaving Google, where he has worked for 10 years, for fear that AI is out of control and he is partly responsible. Hinton’s pioneering research on neural networks helped advance the techniques that have led to rapid progress and the development of large language models such as ChatGPT.
“It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton stated in an account in The New York Times. He quit his job at Google, he told the Times, so he could more freely speak out about the risks of AI. “I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Dr. Hinton stated.
Some AI scientists are worried that ChatGPT and other generative AI tools may have been released too soon, and that there are no good ways to prevent them from producing misinformation or being used by the less altruistic for nefarious purposes.
The professor had worked at Carnegie Mellon University in the 1980s, but he was reluctant to be part of teams taking funding from the Pentagon, so he left and went to Canada. Dr. Hinton is opposed to the use of AI on the battlefield, to help make what he calls “robot soldiers.”
What changed the game for Dr. Hinton at Google was the onset of competition with Microsoft, whose investment in OpenAI positions that company to threaten Google’s search business, which generates most of Google’s revenue via advertising. He is concerned that the two companies are locked in a competition they may not be able to control, and, in the short run, that the internet will be flooded with false photos, videos and text, so that ordinary people may “not be able to know what is true anymore,” Dr. Hinton told the Times.
The AI scientist cited a concern that the current LLM technology learns unexpected behavior from the boundless amounts of data it analyzes, and it can produce computer code and run it on its own. “The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off … I thought it was 30 to 50 years, or even longer away. Obviously, I no longer think that.”
Fear of AI Growing; Audits Seen as Helping
Meanwhile, fear of AI appears to be growing. In a survey of more than 2,000 Americans conducted by Forbes magazine in April, 77 percent were concerned that AI would cause job loss within the next 12 months, according to a recent account in NextGov. Of those, 44 percent were “very concerned” and 33 percent “somewhat concerned.”
Also, 75 percent of respondents were concerned about misinformation produced by AI, even as 65 percent said they planned to use ChatGPT, despite its history of confidently presenting made-up statements as fact.
As for meaningful responses that could mitigate AI risk, some AI experts advocate that audits of AI models be conducted, and possibly required, with the results published.
One well-known audit of commercial facial recognition systems, conducted in 2018 by AI researchers Joy Buolamwini and Timnit Gebru, found error rates for darker-skinned women as high as 34 percent, according to a recent account in MIT Technology Review. The audit kicked off a body of critical work exposing bias and discrimination in the use of facial analysis algorithms; IBM and other companies subsequently pulled related products off the market.
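What such an audit measures can be sketched simply: score the model’s predictions separately for each demographic group and compare. The short Python example below is a minimal illustration of that kind of disaggregated evaluation; the function, group labels, and toy data are hypothetical and do not reproduce the researchers’ actual methodology or data.

```python
# Minimal sketch of a disaggregated audit: error rates computed per group.
# The groups and sample records are hypothetical placeholders; a real audit
# uses a labeled benchmark dataset and outputs from the model under review.
from collections import defaultdict

def error_rates_by_group(records):
    """Return the misclassification rate for each demographic group.

    `records` is an iterable of (group, predicted_label, true_label) tuples.
    """
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical toy records standing in for real model outputs.
sample = [
    ("lighter-skinned women", "female", "female"),
    ("lighter-skinned women", "female", "female"),
    ("darker-skinned women", "male", "female"),   # misclassification
    ("darker-skinned women", "female", "female"),
]
print(error_rates_by_group(sample))
# {'lighter-skinned women': 0.0, 'darker-skinned women': 0.5}
```

A large, systematic gap between groups, like the up-to-34-percent error rate reported for darker-skinned women, is exactly the kind of disparity this sort of evaluation is designed to surface.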
The demand for audits is being driven by increased regulation. New York City will require that AI hiring tools be audited starting in 2024, and the European Union will require big tech companies to audit their AI systems annually beginning the same year, with all “high risk” AI systems subject to audits.
Consensus is lacking on what an AI model audit should look like and what skill set is required to execute one. Alex Engler, who studies AI governance at the Brookings Institution, told the Times that the quality of AI audits varies widely.
Bias Bounties Seen As Way to Encourage Tool Creation
Bias bounty competitions are seen as one way to encourage the creation of tools to identify and mitigate algorithmic bias in AI models. Twitter tried such a competition in 2021, won by Bogdan Kulynych. Participants were given full access to the code underlying Twitter’s image-cropping algorithm, which determines how photos are cropped when displayed on a user’s timeline, according to an account on ZDNet.
A student studying security and privacy engineering, Kulynych won the top prize of $3,500. He found that the Twitter algorithm tended to apply a “beauty filter” that favored slimmer, younger faces, and those with a light skin color, smooth skin texture and typically feminine facial features.
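The general shape of such a probe can be sketched as below: score systematically varied versions of the same image and check whether edits toward lighter, younger, slimmer features consistently raise the model’s preference. The scoring function and feature weights in this Python sketch are hypothetical stand-ins; Twitter’s actual saliency algorithm is not reproduced here.

```python
# Sketch of a bias-bounty style probe: does a cropping model's saliency
# score rise when the same face is edited toward "beauty filter" features?
# FEATURE_WEIGHTS and saliency_score are hypothetical stand-ins for the
# model whose code was released to contest participants.
FEATURE_WEIGHTS = {
    ("skin_tone", "lighter"): 0.12,
    ("age", "younger"): 0.08,
    ("build", "slimmer"): 0.05,
}

def saliency_score(features):
    """Toy stand-in for the cropping model's saliency score for an image."""
    return 0.5 + sum(FEATURE_WEIGHTS.get(item, 0.0) for item in features.items())

def bias_delta(base, variant):
    """How much a set of edits changes the model's preference for the image."""
    return saliency_score(variant) - saliency_score(base)

base = {"skin_tone": "darker", "age": "older", "build": "average"}
variant = {"skin_tone": "lighter", "age": "younger", "build": "slimmer"}

print(f"Change in saliency after the edits: {bias_delta(base, variant):+.2f}")
```

A consistently positive shift across many faces is the signal Kulynych reported: the algorithm favored lighter, younger, slimmer, more typically feminine faces. In the actual competition, the toy scoring function would be replaced by calls to the released cropping model.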
"I saw the announcement on Twitter, and it seemed close to my research interests – I'm currently a PhD student and I focus on privacy, security, machine learning and how they intersect with society," Kulynych stated to ZDNet. "I decided to participate and managed to put something up in one week." [Ed. Note: A native of Ukraine, Kulynych now asks for help for his suffering homeland through his website.]
The bias bounty area is fraught with challenges: limited access to code; no set methodology; no best practices; no accepted definitions of what constitutes harm (although Twitter had an impressive list, according to Kulynych); and the question of whether the algorithmic model should even exist. "The big thing about bias is it presupposes that the system should exist, and if you only fix bias, then everything about the system will be fine," stated Kulynych.
Effort to Protect Against AI Harm Seen as Beyond Any One Country or Company
One AI thought leader sees the effort to protect against potential harm as beyond any single company or nation. Writing recently in Wired, Rumman Chowdhury makes the case for an effort akin to the one that took shape in the era of nuclear proliferation after World War II, when the countries of the world, led by the US and under the guidance of the United Nations, convened to form an independent body, free of government and corporate ties, that would provide solutions.
Chowdhury was the AI ethics lead at Twitter until her position and department were eliminated by Elon Musk after he took over the company. Today she stays busy as an independent consultant, with activities including serving as a startup advisor for the Creative Destruction Lab. The way generative AI has been set up today, in her view, positions the leading tech companies to put profit above any public benefit of their new models, which were foisted on a society that never asked for them.
“Code transparency alone is unlikely to ensure that these generative AI models serve the public good,” Chowdhury stated.
To create a public benefit, some accountability will be required. “The world needs a generative AI global governance body to solve these social, economic, and political disruptions beyond what any individual government is capable of, what any academic or civil society group can implement, or any corporation is willing or able to do,” Chowdhury stated.
Enter who, or what? Beyond the example of the efforts to stem nuclear proliferation, Chowdhury points to Facebook’s Oversight Board, composed of a global group representing many disciplines, which has been given some teeth.
Though far from perfect, “these examples provide a starting point for what an AI global governance body might look like,” Chowdhury suggests, while sketching out ideas about how such a generative AI global governance body might be funded.
Read the source articles and information in The New York Times, NextGov, MIT Technology Review, ZDNet and Wired.
(Write to the editor here; tell him what AI topics you would like to read about in this newsletter.)