Research from OpenAI suggests that a major driver of hallucinations in language models is the training and evaluation frameworks themselves: current setups incentivize models to guess confidently rather than admit uncertainty.
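The incentive problem can be made concrete with a toy expected-score calculation. The sketch below is an illustration of this reasoning, not code from the OpenAI paper: under accuracy-only grading, any answer with a nonzero chance of being right scores better in expectation than abstaining, while a grading scheme that penalizes wrong answers makes "I don't know" the rational choice below a confidence threshold. The function name and penalty value are hypothetical.

```python
# Toy model (illustrative, not from the paper) of grading incentives:
# compare the expected score of guessing vs. abstaining under two rules.

def expected_score(p_correct: float, guess: bool,
                   wrong_penalty: float = 0.0) -> float:
    """Expected score for one question.

    p_correct:     model's probability that its best guess is right
    guess:         True to answer, False to abstain (abstaining scores 0)
    wrong_penalty: points deducted for a wrong answer
                   (0.0 reproduces plain accuracy-only grading)
    """
    if not guess:
        return 0.0
    return p_correct * 1.0 - (1.0 - p_correct) * wrong_penalty

for p in (0.9, 0.5, 0.1):
    acc = expected_score(p, guess=True)                     # accuracy-only
    pen = expected_score(p, guess=True, wrong_penalty=1.0)  # penalized
    print(f"p={p:.1f}  accuracy-only guess: {acc:+.2f}  "
          f"penalized guess: {pen:+.2f}  abstain: +0.00")
```

Under accuracy-only grading the guess column is always positive, so abstaining is never optimal and a model trained against such a metric learns to speculate; with a one-point penalty for wrong answers, guessing pays only when the model's confidence exceeds 0.5.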