On February 26 (local time), OpenAI made a series of commitments to the Canadian government aimed at strengthening its safety protocols. According to Ann O'Leary, OpenAI's head of global policy, the company had already begun a series of policy reforms several months earlier, including consulting mental health professionals, behavioral experts, and law enforcement agencies to identify scenarios in which chatbot interactions could pose a credible threat.

O'Leary emphasized that under the newly instituted safety measures, once an account is flagged for violating guidelines, OpenAI will promptly report it to the relevant law enforcement authorities to reduce potential risks. OpenAI has also pledged to establish direct, streamlined communication channels with Canadian law enforcement: if the company has reason to believe a ChatGPT user may be planning real-world violence, it will immediately alert the police so they can intervene before harm occurs.
