A recent study by Giskard, a Paris-based AI testing company that develops benchmarks for AI models, has uncovered a surprising finding: asking AI chatbots for concise responses may inadvertently increase their tendency to hallucinate.
