Leader of OpenAI's Mental Health Safety Division Joins Anthropic's Alignment Team
2026-01-16

Over the past year, one of the main controversies surrounding OpenAI has centered on how its chatbots respond when users show signs of mental health struggles. Now Andrea Vallone, who led safety research in this area, has left the company for a position at Anthropic. She follows several other senior OpenAI departures, including Lilian Weng, Vice President of Safety, and Jan Leike, who led the Superalignment team.

AI chatbots that provide mental health support carry real risks: data breaches, inadequate privacy safeguards, and the possibility of amplifying users' negative emotions or nudging them toward harmful decisions. Although OpenAI has taken steps to improve, such as adjusting response strategies and rolling out new alert features, significant challenges remain.