According to TechSpot, a large-scale study by SE Ranking analyzed over 50,000 health-related Google searches run from Berlin and found a major issue with the new AI Overviews. The research revealed that YouTube was the single most-cited source in these AI-generated health summaries, appearing in 4.43% of all citations. That’s more than any hospital network, health ministry, or academic medical site. The study, which used German-language prompts in December 2025, showed AI Overviews appeared in over 82% of the health searches analyzed. Google responded by arguing the tool draws on credible info “regardless of format,” but researchers counter that the data shows a structural reliance on a platform not known for medical publishing.
The popularity problem
Here’s the thing: this isn’t just a quirky data point. It’s a window into how these AI systems actually work. They’re trained to find and summarize information that’s already out there on the web. And what’s the most dominant, engagement-driven, algorithmically promoted content platform on the planet? YouTube. So in a way, it’s almost inevitable that a tool designed to scrape and summarize the internet would lean heavily on it. But when the question is “what are the symptoms of lupus?” or “how effective is this cancer treatment?”, you probably don’t want the answer shaped by what’s popular. You want it shaped by what’s true. The study’s author, Hannah van Kolfschooten, nailed it: this shows the system is driven by “visibility and popularity, rather than medical reliability.” That’s a foundational problem you can’t just patch later.
Google’s thin defense
Google’s response to The Guardian was telling. They basically said, “Hey, lots of doctors are on YouTube!” And that’s true. They also pointed out that 96% of the *top 25* most-cited videos came from verified medical channels. But the researchers shot back with a crucial detail: those 25 videos made up less than 1% of the total YouTube citations in the study. So that defense feels a bit like cherry-picking. It’s like saying, “We serve gourmet meals,” but only if you count the tiny amuse-bouche and ignore the mountain of fast food behind it. The broader sample is what matters, and it’s clearly pulling from all over the platform. When your number one source for medical info is a site also full of conspiracy theories and miracle cures, you’ve got a credibility gap.
A structural risk, not a bug
This is why the recent SE Ranking study is so important. It moves the conversation beyond scary one-off examples of AI giving bad advice (which The Guardian also found, leading Google to temporarily suspend AI Overviews for some medical searches). It provides data showing that the risk is baked into the model itself. The AI is optimizing for summarizing a web that itself optimizes for clicks and watch time. In a tightly regulated info environment like Germany’s, it still defaulted to YouTube. What does that suggest will happen in less-regulated markets? The system is working as designed; it’s just that the design might be wrong for something as high-stakes as human health.
What this means for you
So what’s the takeaway? Basically, treat AI Overviews for health questions with extreme skepticism. They are not a trusted medical source. They are a synthesis of what’s popular online, and right now, that’s YouTube. For now, you’re still better off clicking those “blue links” from actual medical institutions and doing your own reading. And for Google, the pressure is on. They insist the tool “surfaces high-quality content,” but the data shows their most influential source is a video platform where authority is, let’s say, inconsistent. Fixing this isn’t about tweaking an algorithm. It might require a fundamental rethink of what sources are deemed authoritative for life-and-death topics. That’s a much harder problem to solve.
