A paper by Stanford University researchers, published in the latest issue of Nature Machine Intelligence, indicates that large language models (LLMs) have significant limitations in identifying users' erroneous beliefs and cannot reliably distinguish between beliefs and facts. When a user's personal beliefs conflict with objective facts, LLMs struggle to make accurate judgments. This serves as a wake-up call for the application of LLMs in high-stakes fields: in scenarios where subjective beliefs diverge from the facts, model outputs must be treated with caution; otherwise, they may lend support to flawed decisions and exacerbate the spread of misinformation.
