OpenAI’s Board Now Home to Former NSA Director
Company racing to further commercialize its technology, amid persistent questions about whether that technology can be safeguarded to prevent harm
By John P. Desmond, Editor, AI in Business
Following the evolution of OpenAI’s board of directors arguably provides some insight into the direction of Big AI: the effort by the biggest AI players to find business models that work for them, while navigating an evolving regulatory climate and a growing sense that things have been too easy for them for too long.
The latest development is the appointment of an ex-head of the US National Security Agency to OpenAI’s board. Paul M. Nakasone, a retired US Army general, was nominated to lead the NSA by former President Donald Trump; he led the agency from 2018 until February 2024.
OpenAI said Nakasone will join its Safety and Security Committee, formed in May and led by CEO Sam Altman, “as a first priority,” according to an account in The Verge. The company also stated that the former NSA head will “also contribute to OpenAI’s efforts to better understand how AI can be used to strengthen cybersecurity by quickly detecting and responding to cybersecurity threats.”
OpenAI issued a statement from board chair Bret Taylor: “Artificial intelligence has the potential to have huge positive impacts on people’s lives, but it can only meet this potential if these innovations are securely built and deployed. General Nakasone’s unparalleled experience in areas like cybersecurity will help guide OpenAI in achieving its mission of ensuring artificial general intelligence benefits all of humanity.”
Churn on OpenAI’s board began in earnest in November 2023, when the board fired CEO Sam Altman on a Zoom call. The essential conflict appeared to be between advancing AI commercially and slowing down to do more work on safeguarding it.
“One interpretation is that Sam Altman might have been more interested in progressing raw capabilities and determining ways to commercialize / monetize the technology, while the board may have been more cautious and interested in ensuring guardrail technology remained a top priority,” stated an account from The Motley Fool.
After five days, Altman was reinstated as CEO, and three of the four board members who had voted to fire him stepped down. OpenAI released findings from an investigation by the law firm WilmerHale into the ouster; the firm stated that the board fired Altman because of a “breakdown in the relationship and loss of trust between the prior board and Mr. Altman,” according to a March account in The Washington Post.
When Altman returned, the independent directors who stepped down included Helen Toner, of Georgetown University’s Center for Security and Emerging Technology, and Tasha McCauley, a tech entrepreneur. After leaving the board, the two issued a joint statement that said in part, “We hope the new board does its job in governing OpenAI and holding it accountable to the mission. As we told the investigators, deception, manipulation, and resistance to thorough oversight should be unacceptable,” according to the Post account.
Departure of Ilya Sutskever Seen As Safety Setback
Safety-minded employees at OpenAI saw last month’s resignations of two of the company’s senior AI researchers, Ilya Sutskever and Jan Leike, as a setback. Dr. Sutskever, as a member of OpenAI’s board, had voted to fire Altman. He had also helped create the Superalignment team inside OpenAI to explore ways to prevent future versions of its technology from doing harm. (See Is the AI Safety Movement, Said to be Dead, Coming Back?, June 7, 2024 AI in Business.)
OpenAI’s board of directors now includes Nakasone, Altman, Adam D’Angelo, Larry Summers, Bret Taylor, Dr. Sue Desmond-Hellmann, Nicole Seligman, and Fidji Simo. Microsoft’s Dee Templeton also has a non-voting observer seat.
Reports surfaced in mid-June that Altman is considering making OpenAI a for-profit company that its nonprofit board would not be able to control, according to an account in PYMNTS. The company is considering becoming a for-profit benefit corporation, a structure that commits a company to creating a positive impact on society and the environment. Examples include Anthropic, the generative AI company; xAI, the Elon Musk venture focused on AI research; and King Arthur Baking, the flour company.
The report suggested the move could set the stage for an initial public offering for OpenAI, enabling Altman to take a stake in the company.
Anticipating that more government regulation of AI is on the way, OpenAI has announced a plan to expand its lobbying department from the current 35 people to 50; last year, the department had three, the PYMNTS report noted.
Helen Toner Speaks Out, Gets Reaction From New Board
Helen Toner has recently spoken more about what led her, as a member of OpenAI’s board, to support the firing of Altman. An Australian researcher, she is director of strategy at Georgetown’s Center for Security and Emerging Technology. Her background includes involvement with the effective altruism movement, which focuses on using resources well for charitable impact, and support for the ethical development of AI. She has worked at GiveWell and Open Philanthropy. She was appointed to OpenAI’s board in late 2021.
In the fall of 2023, she wrote a paper critical of OpenAI on ethics issues related to the launch of ChatGPT, including labor conditions for data annotators. Reports were that Altman was not happy.
Two members of OpenAI’s current board, Bret Taylor and Larry Summers, wrote a piece in The Economist, published on May 30, responding to “a warning from former members” of OpenAI’s board.
Gary Marcus, AI critic and cognitive scientist at New York University, made the case in a recent post on his Marcus on AI Substack that the piece was a veiled attack on Toner. Marcus found this disturbing. “The misleading new oped has made me lose confidence in the new board,” Marcus stated.
He makes the case that because OpenAI is still a 501(c)(3) nonprofit, “still getting tax-exempt status on the basis of their promise to deploy AI safely,” it is incumbent on the company to be transparent about its efforts to do so. Marcus notes, “There is a connection between safety and candor, and an obvious one at that: if Altman can’t be trusted, safety could become a serious issue.”
What probably makes the most sense, in Marcus’s view, is for OpenAI to dissolve the nonprofit. “OpenAI does not behave like a nonprofit and has not for a long time,” Marcus stated.
Read the source articles and information in The Verge, The Motley Fool, The Washington Post, PYMNTS, The Economist and Marcus on AI.