On Tuesday (local time), OpenAI CEO Sam Altman announced in a blog post that the company is working to balance safety, freedom, and privacy for minors. To that end, OpenAI is building an 'age-prediction system' to estimate users' ages. When a user's age is uncertain, the system will default to treating the user as under 18, and in certain situations or countries users may be required to present identification documents.
The company also plans to introduce special rules for minor users that bar them from certain types of conversations. If a user under 18 is identified as having suicidal tendencies, the company will try to contact their parents; if the parents cannot be reached, it will notify the relevant authorities.
On the same day, the U.S. Senate Subcommittee on Crime and Terrorism held a hearing examining the potential dangers of AI chatbots, attended by grieving parents whose children had taken their own lives.