Teens are not the only ones facing age verification on ChatGPT: adult users may soon be subject to it as well, based on the latest remarks from OpenAI CEO and co-founder Sam Altman.
Following a lawsuit that blamed the company for a teenager's suicide, OpenAI is adding more measures to ensure safety on the platform. These changes include an automated age-verification system as well as upcoming parental controls.
Altman shared a new post detailing the further steps his company will take to improve safety and security on the platform. In the future, ChatGPT may require adults to submit to ID-based age verification as well.
Altman said there will be cases, or select countries, where OpenAI will also ask adults to verify their age by submitting valid forms of identification.
The CEO did not go into detail about how the verification process would work for adults, but he emphasized that the measure is necessary to improve safety on the platform.
"In some cases or countries, we may also ask for an ID; we know this is a privacy compromise for adults, but believe it is a worthy tradeoff," said Altman.
This follows OpenAI's recently announced plans to develop a new age-estimation technology that will support its verification program for users between the ages of 13 and 18, the only minors allowed to create an account and use ChatGPT.
In its quest to provide a safer ChatGPT experience for all ages, OpenAI is also doubling down on age verification for teenage users, who will get a restricted version of the AI chatbot if they are under the age of majority.
"We will apply different rules to teens using our services," Altman said. "For example, ChatGPT will be trained not to do the above-mentioned flirtatious talk if asked or engage in discussions about suicide or self-harm even in a creative writing setting."
Altman also elaborated on teenage self-harm, a topic on which OpenAI is now in the hot seat in court, saying that if ChatGPT detects suicidal ideation, the company will immediately contact the parents whose accounts are linked to the child's.
If no parent or guardian is available, the company will then contact the authorities to intervene.
Parental controls are also expected to arrive next month, according to OpenAI's previous announcement.