A recent Financial Times report reveals that OpenAI has sharply reduced the time and resources devoted to safety testing its AI models, prompting widespread public concern. According to the report, in-house staff and third-party teams now have only days to assess the latest large language models, compared with the several months that earlier evaluation cycles allowed. The compressed schedule reflects intense pressure on OpenAI to release new models quickly and preserve its competitive edge. The shift has raised questions about whether the safeguards in place are adequate to ensure the safety and reliability of its technology.
