On Thursday, OpenAI released a set of safety guidelines for teenagers' use of artificial intelligence. The blueprint, first shared with Axios, is meant to push the public and policymakers to prioritize teen safety and to establish norms for how minors use AI. It makes five recommendations:

1. Accurately identify teenage users and offer age-appropriate ways of interacting.
2. Prevent AI from generating content involving suicide, self-harm, or graphic violence, and prohibit any encouragement of dangerous behavior.
3. Automatically activate a minor-protection mode when a user's age cannot be determined.
4. Give families parental-control tools to monitor and manage AI use.
5. Continuously expand protective features as new research emerges.

The initiative arrives as several states weigh AI safety legislation, the U.S. Senate advances a bill that would bar minors from using chatbots, and OpenAI faces outside scrutiny over lawsuits involving teenage suicides.
