
Elon Musk, an OpenAI Cofounder, Now Leads Call for an AI Development Pause
Microsoft’s investment in OpenAI followed Musk’s decision to no longer fund the nonprofit; today the founder of Tesla calls AI a “risk” needing regulation
By John P. Desmond, Editor, AI in Business

What Sam Altman and Elon Musk have in common is that, within the last several months, each has unleashed powerful AI software on the world that has had widespread impact.
Sam Altman’s OpenAI released ChatGPT and then GPT-4 in short order, causing upheaval in a wide range of professions encompassing writing, the creative arts, computer programming and academia.
Elon Musk’s Tesla began testing Full Self-Driving (FSD) features years ago, then released them more widely with version 11 in November. A recent account in The Washington Post described lawsuits against the company alleging over-promising on FSD, as well as inquiries from the SEC and the US Department of Justice into claims Musk has made about FSD.
Then in late March, in an open letter issued by the Future of Life Institute (FLI), Musk and a group of other AI experts and industry executives called for a six-month pause in developing systems more powerful than OpenAI’s GPT-4. The letter cited risks to society, without mentioning FSD. Sam Altman was not among the signers.
Two statements from the letter: “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
And, “AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems.”
Not All Praise the ‘Pause Letter’
Reaction to the letter was swift, and not all of it was praise. The letter initially included signatures from people who had not actually signed it, including Meta’s chief AI scientist Yann LeCun, who tweeted that he did not support it, according to an account in The Guardian.
FLI is primarily funded by the Musk Foundation; some observers have criticized FLI for prioritizing “apocalyptic scenarios” over immediate concerns such as racist or sexist biases being programmed into AI systems, The Guardian reported.
The “pause letter” cited 12 pieces of research, including “On the Dangers of Stochastic Parrots,” a paper coauthored by Margaret Mitchell, former head of ethical AI research at Google and now chief ethics scientist at Hugging Face. Mitchell was critical of the letter, telling Reuters that “it was unclear what counted as ‘more powerful’ than GPT-4.”
Mitchell further stated, “By treating a lot of questionable ideas as a given, the letter asserts a set of priorities and a narrative on AI that benefits the supporters of FLI. Ignoring active harms right now is a privilege that some of us don’t have.” Mitchell’s coauthors Timnit Gebru and Emily M. Bender also criticized the letter on Twitter.
Artificial intelligence expert Eliezer Yudkowsky argued in an op-ed in Time magazine that a six-month pause is not long enough and that development of powerful AI systems should be shut down. A decision theorist at the Machine Intelligence Research Institute, Yudkowsky wrote, “It took more than 60 years between when the notion of artificial intelligence was first proposed and studied, and for us to reach today’s capabilities. Solving safety of superhuman intelligence — not perfect safety, safety in the sense of ‘not killing literally everyone’ — could very reasonably take at least half that long.”
Google CEO Sundar Pichai would not commit to the pause. Commenting for the Hard Fork podcast of the New York Times, Pichai stated, “I think in the actual specifics of it, it’s not fully clear to me how you would do something like that today.” Asked if he would email his engineers to ask them to pause their AI work, he stated, “But if others aren’t doing that, so what does that mean? To me, at least, there is no way to do this effectively without getting governments involved. So I think there’s a lot more thought that needs to go into it.”
Pichai did agree with the need for some regulation of AI. “AI is too important an area not to regulate,” he stated, adding, “It’s also too important an area not to regulate well.”
Musk, a Founder of OpenAI, Broke With It in 2018
Musk and OpenAI have some history. The two were among a group in Silicon Valley that worried in the mid-2010s that AI had the potential to destroy the world, a movement whose members were known as rationalists or effective altruists, according to a recent account in The New York Times. In a surprise move, Altman was named president of Y Combinator, the Silicon Valley startup accelerator and seed investor, in 2014. He expanded its investments and began working on projects outside the firm as well, including OpenAI, which he cofounded as a nonprofit in 2015 with a group that included Musk.
In 2019, Altman stepped down as president of Y Combinator to concentrate on OpenAI, then a company with fewer than 100 employees.
Musk broke with OpenAI in 2018, believing that the company had fallen too far behind Google and proposing to fix that by taking control and running it himself, according to a recent account in Semafor, a news website. Altman and OpenAI’s other founders rejected the proposal, and Musk then walked away from the company. He had contributed $100 million to OpenAI at that point and had promised up to $1 billion, but he made no further investments. That left OpenAI unable to pay for developing its AI models on powerful computers.
In March 2019, OpenAI announced it was creating a for-profit entity to raise money to fund development. Within six months, OpenAI took a $1 billion investment from Microsoft, which enabled the company to fund the development that resulted in ChatGPT.
In January of this year, OpenAI took another $10 billion investment from Microsoft. In early February, Musk tweeted, “OpenAI was created as an open source (which is why I named it ‘Open’ AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft.”
While other big tech companies had been developing large language models, a form of generative AI, OpenAI was the first to release its model to the market. Many believe the release came too early, according to the New York Times account, since ChatGPT gets many facts wrong, makes some things up and can be used to spread disinformation. The government of Italy last week imposed a temporary ban on ChatGPT, citing privacy concerns and worries over minors being exposed to explicit material, the Times reported.
Musk in February warned that AI is a risk to the future of civilization. “One of the biggest risks to the future of civilization is AI,” Musk stated to attendees at the World Government Summit in Dubai, United Arab Emirates, shortly after mentioning the development of ChatGPT, according to an account from CNBC. “It’s both positive or negative and has great, great promise, great capability,” Musk stated, adding, “With that comes great danger.”
AI has no rules or regulations to keep its development under control, according to Musk. “We need to regulate AI safety, frankly,” Musk stated. “It is, I think, actually a bigger risk to society than cars or planes or medicine.” Regulation “may slow down AI a little bit, but I think that that might also be a good thing,” Musk stated.
He made no mention of Tesla’s FSD, whose release has exposed all drivers on the road to some degree of risk.
Read the source articles and information in The Washington Post, the open letter issued by the Future of Life Institute, The Guardian, Time magazine, the Hard Fork podcast of The New York Times, The New York Times, Semafor and CNBC.
(Write to the editor here.)