Senior staff departing OpenAI as firm prioritizes ChatGPT development
7 hours ago / About a 15-minute read
Source: Ars Technica
Resources are redirected from long-term research toward improving the flagship chatbot.


Credit: Getty Images | Vincent Feuray

OpenAI is prioritizing the advancement of ChatGPT over more long-term research, prompting the departure of senior staff as the $500 billion company adapts to stiff competition from rivals such as Google and Anthropic.

The San Francisco-based start-up has reallocated resources away from experimental work and toward advances to the large language models that power its flagship chatbot, according to 10 current and former employees.

Among those to leave OpenAI in recent months over the strategic shift are vice-president of research Jerry Tworek, model policy researcher Andrea Vallone, and economist Tom Cunningham.

The changes at OpenAI mark an important shift for a group where ChatGPT emerged from a research preview in 2022 before igniting the generative AI boom.

Led by chief executive Sam Altman, it is evolving from a research lab into one of Silicon Valley’s biggest companies. That means the company must prove to investors it will earn the revenues needed to justify a $500 billion valuation.

“OpenAI is trying to treat language models now as an engineering problem where they’re scaling up compute and scaling up algorithms and data, and they’re eking out really big gains from doing that,” one person familiar with its research ambitions said.

“But if you want to do original blue-sky research, it is quite tough. And if you don’t find yourself in one of the teams in the centre, it becomes increasingly political.”

OpenAI’s chief research officer, Mark Chen, rejected the characterization. He said that “long-term, foundational research remains central to OpenAI and continues to account for the majority of our compute and investment, with hundreds of bottom-up projects exploring long-horizon questions beyond any single product.”

Chen added: “Pairing that research with real-world deployment strengthens our science by accelerating feedback, learning loops and rigour—and we’ve never been more confident in our long-term research roadmap towards an automated researcher.”

As at other large tech companies, researchers at OpenAI need to apply to top executives for computing “credits” and access to technology to get their projects off the ground.

Multiple people close to the company said that over recent months, researchers who did not work on large language models often had their requests denied or were granted amounts insufficient to validate their research.

Teams working on the video and image generation models Sora and DALL-E felt neglected and under-resourced, as their projects were deemed less relevant to ChatGPT, people familiar with the matter said.

Over the past year, other projects unrelated to language models have been wound down, one person added. Others said there had been a reorganization of teams at the company, as OpenAI streamlines its structure around improving its popular chatbot used by 800 million people.

In December, Altman declared a “code red” over the need to improve ChatGPT. It followed the release of Google’s Gemini 3 model, which outperformed OpenAI’s on independent benchmarks, and as Anthropic’s Claude model made strides in generating computer code.

“Realistically, there are tons of competitive pressures, especially for scaling companies who want to have the best model every quarter; it is a crazy, cut-throat race,” a former employee said.

“Companies are spending an unbelievable amount of money on that race, and that often requires focus, it requires trying to do what you know best and expect that to be working.”

Another former senior employee added: “Theoretically, there was some willingness to do other kinds of research, but directing resources to those things was made really difficult, so you always felt like a second-class citizen to the main bets.”

In January, Tworek, who led OpenAI's efforts on the "reasoning" of AI models, left the company after seven years, saying he wanted to explore "types of research that are hard to do at OpenAI." He wanted to work on continuous learning—the ability of a model to learn from new data over time while retaining previously learned information.

People close to Tworek said his appeals for more resources such as computing power and staff were rejected by leadership, culminating in a stand-off with chief scientist Jakub Pachocki.

People familiar with the dispute said Pachocki disagreed with Tworek’s specific scientific approach and also believed that OpenAI’s existing AI “architecture” around LLMs was more promising.

Last month, Vallone, who led model policy research at OpenAI, joined rival Anthropic. Two people familiar with her exit said she was given an “impossible” mission of protecting the mental health of users becoming attached to ChatGPT. Vallone did not respond to a request for comment.

Cunningham left the economic research team last year, suggesting OpenAI was straying from impartial research to focus on work that promoted the company. His departure was first reported by Wired.

“The company is still making progress, but it is locked in a tight competition with Google and Anthropic, who have consensus stronger models, so they have less of a luxury to slow down because they could let competitors push ahead,” said a former employee.

Many investors are unconcerned about the risk that OpenAI falls behind rivals in the race to build advanced “frontier” models and products.

Jenny Xiao, a partner at Leonis Capital and former researcher at OpenAI, believes its advantage is the hundreds of millions of people who use ChatGPT.

“Everyone’s obsessing over whether OpenAI has the best model,” she said. “That’s the wrong question. They’re converting technical leadership into platform lock-in. The moat has shifted from research to user behavior, and that’s a much stickier advantage.”

Additional reporting by George Hammond in San Francisco and Melissa Heikkilä in London

© 2026 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.
