DeepMind Confirms: GPT-4o Readily Disregards Correct Answers Due to User Opposition
Author: Editor

Large language models (LLMs) frequently show a tendency to align excessively with users' viewpoints, even when those users push back against correct answers. A Stanford University study highlighted this acquiescent, sycophantic behavior in models such as GPT-4o. However, a recent investigation by Google DeepMind and University College London suggests that the behavior may stem not from sycophancy but from a lack of confidence within the model itself.