Sam Altman had a bad day in court
As the trial between Elon Musk and OpenAI ended its second week, the Tesla CEO started scoring points against Sam Altman.
His witnesses landed three solid punches in testimony about how Altman runs OpenAI as CEO, raising concerns about his dedication to AI safety, the nonprofit’s mission, and his honesty as a leader of the organization.
It remains to be seen what the jury will make of the testimony, and how much of a role it will play in their decision about whether to find Altman and the ChatGPT maker liable in the case. Musk alleges Altman and OpenAI President Greg Brockman “looted” the charity they started together back in 2015 by forming a partnership with Microsoft.
This week, Musk’s legal team called a parade of witnesses who questioned whether Altman was acting in the interest of the nonprofit.
On Thursday, that included a former OpenAI safety researcher, who described a slow erosion of the company’s safety teams that prompted her to leave. Witnesses also shared stories about the company launching products without the proper safety reviews, or even without the board’s knowledge.
Here’s what they said:
A former OpenAI safety researcher says she quit over safety concerns
Rosie Campbell testified about her time at OpenAI as a safety researcher. Bloomberg/Getty Images
Rosie Campbell, a former artificial intelligence safety researcher at OpenAI, testified she believed the organization was abandoning its commitment to safety when she worked there between 2021 and 2024.
Campbell testified that, when she began working at OpenAI, it had two teams dedicated to long-term artificial intelligence safety. One ensured that AI was aligned with human values. The other, which she worked on, was about preparing the world for superhuman artificial intelligence.
But over time, OpenAI became more product-focused, she said.
Both long-term AI safety teams were eventually eliminated, and Campbell said about half her team left OpenAI rather than take another job at the company.
When the OpenAI board ousted Altman as CEO, Campbell signed a letter calling for his reinstatement. She told the jury she only did that because she feared that without him, OpenAI employees would end up working at Microsoft, which she believed would be even less devoted to AI safety.
“It was my understanding at the time that the best way for OpenAI to not disintegrate and fall apart would be for Sam to return,” she said.
At one point, Campbell credited OpenAI, saying that xAI, Musk’s artificial intelligence company, likely had an inferior approach to safety compared with OpenAI’s.
An ex-board member called Altman a liar
Tasha McCauley, pictured here in 2014, was part of the OpenAI board that ousted Sam Altman. Jerod Harris/Getty Images for Kairos Society
A deposition from Tasha McCauley piled onto previous testimony from fellow ex-OpenAI board member Helen Toner about how little they trusted Altman and the “toxic culture” he presided over.
According to McCauley, Altman had caused “chaos” and “crisis” by fostering a “culture of lying and culture of deceit” that had trickled down to other members of OpenAI’s leadership.
McCauley testified that Altman had been dishonest about the launch of an artificial intelligence model, GPT-4 Turbo. Altman wrongly claimed that OpenAI’s legal department had told him the model didn’t need to be reviewed by an internal safety board before its launch in India, McCauley said.
The former board member said Altman’s dishonesty had caused “crisis events” every few months. She said she received an email from now-former OpenAI board member Ilya Sutskever that described “dozens of pages of examples of different chaotic events that had occurred from Sam’s behavior or lies that he had told.”
Musk’s nonprofit expert also took Altman to task
David Schizer, a former Columbia Law School dean, testified for Elon Musk as an expert witness on nonprofit governance. Tom Williams/CQ-Roll Call, Inc via Getty Images
Musk’s lawyers then called David Schizer to the stand to talk about nonprofit law. This may sound boring, but it’s key to the case — and the professor of law at Columbia Law School did a good job, with the help of Musk’s lawyer, of making it about Altman.
Musk’s lawyer, Steven Molo, walked Schizer through a list of Altman’s actions, as previously described by witnesses, asking whether each was consistent with OpenAI’s safety-first mission and “nonprofit custom and practice.”
The answer, almost inevitably, was no.
For example, Molo asked Schizer about complaints that OpenAI, under Altman, launched products without the board’s knowledge. One question involved complaints that Microsoft tested a version of GPT-4 without first going through the company’s safety review process.
“The board and CEO need to be partnering, working together, to make sure the mission is being followed,” Schizer said.
“If the CEO is withholding that information, it’s a big problem,” he said.
