Google puts users at risk by downplaying health disclaimers under AI Overviews

Photograph: Lionel Bonaventure/AFP/Getty Images

By Andrew Gregory

Google is putting people at risk of harm by downplaying safety warnings that its AI-generated medical advice may be wrong.

When answering queries about sensitive topics such as health, the company says its AI Overviews, which appear above search results, prompt users to seek professional help, rather than relying solely on its summaries. “AI Overviews will inform people when it’s important to seek out expert advice or to verify the information presented,” Google has said.

But the Guardian found the company does not include any such disclaimers when users are first presented with medical advice.

AI experts and patient advocates presented with the Guardian’s findings said they were concerned. Disclaimers serve a vital purpose, they said, and should appear prominently when users are first provided with medical advice.

“The absence of disclaimers when users are initially served medical information creates several critical dangers,” said Pat Pataranutaporn, an assistant professor, technologist and researcher at the Massachusetts Institute of Technology (MIT) and a world-renowned expert in AI and human-computer interaction.

“First, even the most advanced AI models today still hallucinate misinformation or exhibit sycophantic behaviour, prioritising user satisfaction over accuracy. In healthcare contexts, this can be genuinely dangerous.

“Second, the issue isn’t just about AI limitations – it’s about the human side of the equation. Users may not provide all necessary context or may ask the wrong questions by misobserving their symptoms.

“Disclaimers serve as a crucial intervention point. They disrupt this automatic trust and prompt users to engage more critically with the information they receive.”
