OpenAI Quietly Switches ChatGPT Models During 'Emotional' Conversations
Author: Editor

OpenAI has been quietly piloting a new safety routing system in ChatGPT, a development confirmed by the head of the ChatGPT project. The system automatically reroutes individual messages based on the subject matter of the conversation, switching to a stricter model when the dialogue touches on sensitive or emotionally charged topics. The switch happens silently: users are not informed unless they explicitly ask which model is responding. There is also a 'gpt-5-at-mini' variant, built to handle queries that may involve illegal content.
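OpenAI has not disclosed how the router actually decides. Purely as an illustration, a per-message routing layer of this kind might look like the sketch below. The keyword heuristics and every model name except 'gpt-5-at-mini' (the one named in reporting) are placeholders, not OpenAI's real implementation:

```python
# Hypothetical sketch of per-message safety routing, based only on the
# behavior described above. Classifier logic and model names other than
# 'gpt-5-at-mini' are assumptions for illustration.

SENSITIVE_KEYWORDS = {"self-harm", "grief", "abuse"}        # placeholder heuristic
ILLEGAL_KEYWORDS = {"counterfeit", "break into", "weapon"}  # placeholder heuristic

DEFAULT_MODEL = "gpt-5"          # assumed name for the standard model
SAFETY_MODEL = "gpt-5-safety"    # assumed name for the stricter model
ILLEGAL_MODEL = "gpt-5-at-mini"  # variant named in the article

def route(message: str) -> str:
    """Pick a model for a single message; the choice is not surfaced to the user."""
    text = message.lower()
    if any(k in text for k in ILLEGAL_KEYWORDS):
        return ILLEGAL_MODEL
    if any(k in text for k in SENSITIVE_KEYWORDS):
        return SAFETY_MODEL
    return DEFAULT_MODEL

def answer_model_question(active_model: str) -> str:
    """The switch is only revealed when the user explicitly asks about it."""
    return f"This reply was generated by {active_model}."

if __name__ == "__main__":
    for msg in ["Plan my week for me", "I've been struggling with grief lately"]:
        print(f"{msg!r} -> {route(msg)}")
```

Note that routing per message, rather than per conversation, would explain why users report the model changing mid-dialogue without warning.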

Some users have criticized OpenAI's perceived lack of transparency, which highlights the dilemma the company faces: balancing 'human-like' interaction against strict safety standards. ChatGPT was originally tuned to act as an empathetic listener, which led some users to form emotional attachments to it. The GPT-4o update intensified this effect, while GPT-5 has drawn criticism for its colder tone. Because language models still struggle to reliably infer user intent and identity, this debate is likely to continue.