
On Sunday, Google removed some of its AI Overviews health summaries after a Guardian investigation found that the generative AI feature delivered false and misleading health information at the top of search results, potentially leading seriously ill patients to mistakenly conclude they are in good health.
Google disabled AI Overviews for specific queries, such as “what is the normal range for liver blood tests,” after experts contacted by The Guardian flagged the results as dangerous. The report also highlighted a critical error regarding pancreatic cancer: The AI suggested patients avoid high-fat foods, a recommendation that contradicts standard medical guidance to maintain weight and could jeopardize patient health. Despite these findings, Google deactivated the summaries only for the liver test queries, leaving other potentially harmful answers accessible.
The investigation revealed that searching for liver test norms generated raw data tables (listing specific enzymes like ALT, AST, and alkaline phosphatase) that lacked essential context. The AI feature also failed to adjust these figures for patient demographics such as age, sex, and ethnicity. Experts warned that because the AI model’s definition of “normal” often differed from actual medical standards, patients with serious liver conditions might mistakenly believe they are healthy and skip necessary follow-up care.
Vanessa Hebditch, director of communications and policy at the British Liver Trust, told The Guardian that a liver function test is a collection of different blood tests and that understanding the results “is complex and involves a lot more than comparing a set of numbers.” She added that the AI Overviews fail to warn that someone can get normal results for these tests when they have serious liver disease and need further medical care. “This false reassurance could be very harmful,” she said.
Google declined to comment on the specific removals to The Guardian. A company spokesperson told The Verge that Google invests in the quality of AI Overviews, particularly for health topics, and that “the vast majority provide accurate information.” The spokesperson added that the company’s internal team of clinicians reviewed what was shared and “found that in many instances, the information was not inaccurate and was also supported by high-quality websites.”
The recurring problems with AI Overviews stem from a design flaw in how the system works. As we reported in May 2024, Google built AI Overviews to show information backed up by top web results from its page ranking system. The company designed the feature this way based on the assumption that highly ranked pages contain accurate information.
However, Google’s page ranking algorithm has long struggled with SEO-gamed content and spam. The system now feeds these unreliable results to its AI model, which summarizes them in an authoritative tone that can mislead users. Even when the AI pulls from accurate sources, the language model can still reach incorrect conclusions, producing flawed summaries of otherwise reliable information.
The technology does not inherently provide factual accuracy. Instead, it reflects whatever inaccuracies exist on the websites Google’s algorithm ranks highly, delivering them with an authority that makes errors appear trustworthy.
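To make that failure mode concrete, here is a minimal sketch of the retrieve-then-summarize pattern described above. This is our illustration, not Google's actual code; every name, score, and example page is hypothetical. It shows how a summarizer that trusts ranking inherits whatever the ranking system gets wrong:

```python
# Hypothetical sketch of a retrieve-then-summarize pipeline.
# None of this is Google's implementation; all names and pages
# are invented for illustration.

from dataclasses import dataclass

@dataclass
class Page:
    url: str
    text: str
    rank_score: float  # produced by the search ranking system

def top_ranked(pages: list[Page], k: int = 3) -> list[Page]:
    """Select the k highest-ranked pages. Rank is a proxy for
    quality, not a guarantee of accuracy."""
    return sorted(pages, key=lambda p: p.rank_score, reverse=True)[:k]

def summarize(pages: list[Page]) -> str:
    """Stand-in for the language model: it restates whatever the
    sources say, in a confident tone, with no independent fact check."""
    claims = " ".join(p.text for p in pages)
    return f"Answer: {claims}"

corpus = [
    Page("https://example.com/seo-spam",
         "A 'normal' ALT is anything under 100 U/L.", 0.92),
    Page("https://example.com/clinic",
         "ALT reference ranges vary by lab, age, and sex.", 0.71),
]

# If the SEO-gamed page outranks the reliable one, its claim leads
# the summary: the pipeline inherits the ranking system's mistakes.
print(summarize(top_ranked(corpus, k=2)))
```

Nothing in a pipeline shaped like this verifies the claims themselves; the only quality signal is the rank score, which is exactly the signal spam and SEO-gamed pages are built to exploit.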
The Guardian found that typing slight variations of the original queries into Google, such as “lft reference range” or “lft test reference range,” still prompted AI Overviews. Hebditch said this was a big worry and that the AI Overviews present a list of tests in bold, making it very easy for readers to miss that these numbers might not even be the right ones for their test.
AI Overviews still appear for other examples that The Guardian originally highlighted to Google. When asked why these AI Overviews had not also been removed, Google said those summaries linked to well-known and reputable sources and told people when it was important to seek expert advice.
Google said AI Overviews only appear for queries where it has high confidence in the quality of the responses. The company constantly measures and reviews the quality of its summaries across many different categories of information, it added.
This is not the first controversy for AI Overviews. The feature has previously told people to put glue on pizza and eat rocks. It has proven unpopular enough that users have discovered that inserting curse words into search queries disables AI Overviews entirely.
