TechCrunch reports that OpenAI's newly released o3 and o4-mini models hallucinate more than their predecessors, producing content that sounds plausible but is factually wrong. The regression is notable because newer models have typically hallucinated less, and OpenAI has acknowledged that more research is needed to understand why it is happening.
