
A tale of two camps: Sam Altman’s departure from (and return to) OpenAI

By Sara Goudarzi | November 20, 2023

OpenAI co-founder and CEO Sam Altman speaks at TechCrunch in 2019. Credit: TechCrunch, CC BY 2.0, via Wikimedia Commons.

It’s been a roller coaster of a few days in the tech world. Last Friday, the board of OpenAI—a leader in generative artificial intelligence technology—fired the organization’s co-founder and possibly most recognizable face, Sam Altman. Although there were rumors that Altman, who had been OpenAI’s CEO since 2019, might be rehired, by Monday those rumors appeared to be laid to rest as Emmett Shear, formerly of Twitch, took on the role of interim CEO—the second person to do so since Friday. By the start of the work week, Altman, along with OpenAI co-founder Greg Brockman, who quit in protest, had already landed at Microsoft to lead a new AI team there. Then, on Tuesday, Altman was reinstated as OpenAI’s CEO, and new board members replaced those who had voted to oust him.

It’s unclear exactly what led to Altman’s sudden removal, but a blog post on OpenAI’s website points to a lack of candor: “Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.”

Altman became a household name after the release of ChatGPT last November, a time when, according to an Atlantic article, an already-forming rift between two camps with different ideas about how to run OpenAI became starkly evident. “Altman’s dismissal by OpenAI’s board on Friday was the culmination of a power struggle between the company’s two ideological extremes—one group born from Silicon Valley techno-optimism, energized by rapid commercialization; the other steeped in fears that AI represents an existential risk to humanity and must be controlled with extreme caution,” Karen Hao and Charlie Warzel write in the piece.

These sentiments were echoed in an article in The New York Times, which reported that another OpenAI co-founder and board member, Ilya Sutskever, “was said to be growing alarmed that the company’s technology could pose a significant risk, and that Mr. Altman was not paying close enough attention to the potential harms.” OpenAI has an unusual governance structure. It was founded as a nonprofit with a mission to make sure that its artificial intelligence never posed a threat to humanity, but it was later restructured to create a for-profit arm that took in billions of dollars in investments from Microsoft and others. Even so, the nonprofit board controlled the company and retained the mandate for safe AI that benefited humanity.


“There’s a notable change in the board’s experience,” according to a CNBC article on the company’s new governance. “The previous board included academics and researchers, but OpenAI’s new directors have extensive backgrounds in business and technology.” The new board includes Bret Taylor, a current Shopify board member and former co-CEO of Salesforce; Larry Summers, former U.S. Treasury Secretary and Harvard University president; and Adam D’Angelo, CEO of Quora, who was already on OpenAI’s board and will continue to hold a seat. Sutskever, along with two others, was removed from OpenAI’s governing body.

Just this past May, Altman, Brockman, and Sutskever had released a blog post advocating for governance around superintelligence: hypothetical AI systems that would surpass human capabilities and intelligence. The idea is not currently grounded in research.

On Monday, Sutskever tweeted, “I deeply regret my participation in the board’s actions. I never intended to harm OpenAI. I love everything we’ve built together and I will do everything I can to reunite the company.” He, along with more than 500 other OpenAI employees, threatened to leave the company and join Altman in his new venture at Microsoft unless the board resigned.

On Friday, shares in Microsoft (OpenAI’s largest investor) fell 1.7 percent after Altman’s firing. The software giant’s shares rose 2.1 percent, to an all-time high, on Monday after it announced it had hired Altman.

Given the speed with which things have been moving, it’s still unclear who will land where in this latest game of musical chairs in tech. What is relatively clear, however, is that instead of hysteria over individuals, everyone would be better served by a focus on the responsible research and deployment of artificial intelligence technology, ensuring minimal harm to workers, the environment, and society.


“A lot of people who went crazy over the weekend for OpenAI’s governance debacle fail to realize that the recent progress in AI is neither made by one company nor by one person. Instead of wasting hours following the OpenAI shenanigans blow by blow, go rewatch the Imitation Game, to get inspired by the true hero scientists of our field!” Nasrin Mostafazadeh, an AI scientist and co-founder of the AI startup Verneek, posted on X.



1 Comment
Fireminer
1 year ago

I’m sorry, but the whole thing sounds like a farce. The corporate drama is one thing, but OpenAI’s shift from being a non-profit to a for-profit, as well as the rhetoric thrown at politicians in Washington and London, tells me that the people behind the operation are just blowing hot air. It’s crypto all over again: hyping a ‘product’ with a lot of hypothetical uses but none practical, waiting for money from the ignoramuses, and then making off like bandits.