After an 18-month investigation, The New Yorker published a 70-page internal memo by OpenAI chief scientist Ilya Sutskever, along with more than 200 pages of confidential notes from Anthropic co-founder Dario Amodei. Both documents described a recurring pattern of 'consistent dishonesty' on Sam Altman's part.
In November 2023, the OpenAI board removed Altman, saying he had been 'not consistently candid in his communications.' He was reinstated only five days later, an episode employees dubbed 'The Blip.' Although a subsequent investigation cleared Altman of wrongdoing, no full report was ever made public, fueling skepticism among outside observers.
Meanwhile, OpenAI's commitment to safety has steadily eroded. The Superalignment team has been disbanded, and the company's safety-centric culture is increasingly being eclipsed by a focus on product development.
Geopolitically, Altman has launched a series of partnerships with Middle Eastern governments, raising security concerns. At the same time, OpenAI is restructuring from a non-profit into a for-profit corporation, and Altman's statements about his own equity have remained notably ambiguous.
Altman is also contending with smear campaigns orchestrated by competitors, even as AI's rapid advance poses tangible risks, from potential military applications to the legal disputes that follow. Notably, Altman showed similar behavioral tendencies during his tenure at Y Combinator: he may not be a technical genius, but he excels at persuasion.
