Research has found that, as of August 2023, the ten leading generative AI tools repeated misinformation 35% of the time when handling real-time news queries, nearly double the 18% recorded in August of the previous year. The rise is closely linked to the integration of real-time web search into AI chatbots, and numerous malicious actors are exploiting AI to spread false information.

Performance varies sharply across models. Inflection's model showed the highest propensity for repeating misinformation, at a rate of 56.67%, while Perplexity's performance declined markedly from its earlier results.

Earlier AI models avoided spreading misinformation simply by refusing to respond. Today, with misinformation proliferating across the internet, distinguishing true from false information has become increasingly difficult. OpenAI has acknowledged that its current models may produce misinformation and says it is actively developing new techniques to address the problem.