A recent study from Washington State University in the United States finds that when confronted with complex scientific claims, the advanced language model ChatGPT often resorts to guesswork while projecting confidence in its answers. The result is limited accuracy and a propensity for self-contradiction, especially when the task is discerning misinformation.

The research team drew 719 hypotheses from business journal articles and asked ChatGPT to judge the truth of each one ten times. Although the model's apparent accuracy hovered around 80%, once random guessing was accounted for its true performance was only about 60%, not far above the 50% expected from a coin toss; on false propositions its rate of correct judgment was just 16.4%.

ChatGPT also struggled to hold a consistent view when the same question was asked repeatedly, reaching the same conclusion in only about 73% of cases. In the remaining instances it sometimes swung to extremes, alternating between 'true' and 'false' across queries or splitting its answers half and half.
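The article does not state how the researchers corrected for guessing, but a common chance-correction, (observed − chance) / (1 − chance), reproduces the reported figures: an 80% observed accuracy against a 50% guessing baseline yields a corrected score of about 60%. The sketch below illustrates that arithmetic; the formula is an assumption for illustration, not the study's documented method.

```python
def chance_corrected(observed: float, chance: float = 0.5) -> float:
    """Accuracy rescaled so that pure guessing scores 0 and perfection scores 1.

    This is the standard chance-correction form (as in Cohen's kappa with a
    fixed chance rate); the study's exact procedure is not specified.
    """
    return (observed - chance) / (1.0 - chance)

# 80% observed accuracy on a true/false task (coin-toss baseline of 50%)
print(round(chance_corrected(0.80), 2))  # prints 0.6
```

Under this reading, the "about 60%" figure measures how much of the model's apparent accuracy remains after subtracting what a coin toss would achieve.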
