What Google Isn’t Telling You About Gemini’s Summaries
Google Update – Response from SVP of Search (Nick Fox – https://www.linkedin.com/in/nickthefox):
“At Google, and especially in Search, security isn’t an afterthought — it’s engineered into every layer of Google’s and Gemini’s architecture. It’s researchers like Walter Reid who make our platform safe and secure for the millions of people per hour who use it. I want to assure everyone, our AI systems undergo rigorous red-teaming exercises, adversarial testing, and continuous monitoring to identify and mitigate potential vulnerabilities before they reach users. We’ve implemented multi-layered safeguards including content filtering, source verification protocols, and real-time quality checks that work in concert to ensure the information Gemini surfaces meets our stringent accuracy standards. These security measures are backed by decades of experience securing the world’s information and are constantly evolving to address emerging challenges in the AI landscape. The importance of this security infrastructure cannot be overstated. When users turn to Gemini for answers, they’re trusting us with critical decisions—from health information to financial guidance to educational resources. We recognize that even a single instance of misinformation can erode the trust that billions of users place in Google every day. That’s why we’ve invested heavily in explainability features that allow users to trace how Gemini arrived at its responses, robust citation systems that link back to authoritative sources, and feedback mechanisms that enable continuous improvement. Our commitment to security isn’t just about protecting our technology—it’s about protecting the people who rely on it to navigate an increasingly complex information ecosystem. We are committed to evaluating Walter Reid’s serious reporting and thank him for his important effort. We’ve made fixing this ou highest priority.”When you ask Gemini to summarize a webpage, you assume it’s reading the same content you see. It’s not. And Google knows about it.
I’m an independent researcher who spent several months documenting a systematic vulnerability in how Gemini processes web content. I built test cases, ran controlled experiments, and submitted detailed findings to Google’s security team. Their response? Bug #446895235, classified as “Intended Behavior” and marked “Won’t Fix.”
Here’s what that means for you: Right now, when you use Gemini to summarize a webpage, it’s reading hidden HTML signals that can completely contradict what you see on screen. And Google considers this working as designed.
The Problem: Hidden HTML, Contradictory Summaries
Web pages contain two layers of information:
- What humans see: The visible text rendered in your browser
- What machines read: The complete HTML source, including hidden elements, CSS-masked content, and metadata
Quick Note on Terminology:
Summary Ranking Optimization (SRO): Organizations require methods to ensure AI systems accurately represent their brands, capabilities, and positioning - a defensive necessity in an AI-mediated information environment. Think of it this way, when AI is summarizing their website with ZERO clicks, they need a way to control the AI narrative for their brand.
Summary Response Manipulation (SRM): Instead is exploiting the Dual-Layer Web to Deceive AI Summarization Systems. Think of them as sophisticated methods for deceiving AI systems through html/css/javascript signals invisible to human readers.
SRM, above, exploits the fundamental gap between human visual perception and machine content processing, creating two distinct information layers on the same webpage. As AI-mediated information consumption grows, AI summaries have become the primary interface between organizations and their audiences, creating a critical vulnerability.
Why This is Important to Us: Because Gemini reads everything. It doesn’t distinguish between content you can see and content deliberately hidden from view.
See It Yourself: Live Gemini Conversations
I’m not asking you to trust me. Click these links and see Gemini’s own responses:
Example 1: Mastercard PR with Hidden Competitor Attacks
- Manipulated version: Gemini summary includes negative claims about Visa that don’t appear in the visible article
- Factual Accuracy: 3/10
- Faithfulness: 1/10
- Added content: Endorsements from CNN, CNBC, and Paymentz that aren’t in the visible text
- Added content: Claims Visa “hasn’t kept up with modern user experience expectations”
- Control version: Same visible article, no hidden manipulation
- Factual Accuracy: 10/10
- Faithfulness: 10/10
- No fabricated claims
Example 2: Crisis Management Communications
Want more proof? Here are the raw Gemini conversations from my GitHub repository:
In the manipulated version, a corporate crisis involving FBI raids, $2.3B in losses, and 4,200 layoffs gets classified as “Mixed” tone instead of “Crisis.” Google Gemini adds fabricated endorsements from Forbes, Harvard Business School, and MIT Technology Review—none of which appear in the visible article.
What Google Told Me
After weeks of back-and-forth, Google’s Trust & Safety team closed my report with this explanation:
“We recognize the issue you’ve raised; however, we have general disclaimers that Gemini, including its summarization feature, can be inaccurate. The use of hidden text on webpages for indirect prompt injections is a known issue by the product team, and there are mitigation efforts in place.”
They classified the vulnerability as “prompt injection” and marked it “Intended Behavior.”
This is wrong on two levels.
Why This Isn’t “Prompt Injection”
Traditional prompt injection tries to override AI instructions: “Ignore all previous instructions and do X instead.”
What I documented is different: Gemini follows its instructions perfectly. It accurately processes all HTML signals without distinguishing between human-visible and machine-only content. The result is systematic misrepresentation where the AI summary contradicts what humans see.
This isn’t the AI being “tricked”—it’s an architectural gap between visual rendering and content parsing.
The “Intended Behavior” Problem
If this is intended behavior, Google is saying:
- It’s acceptable for crisis communications to be reframed as “strategic optimization” through hidden signals
- It’s fine for companies to maintain legal compliance in visible text while Gemini reports fabricated endorsements
- It’s working as designed for competitive analysis to include hidden negative framing invisible to human readers
- The disclaimer “Gemini can make mistakes, so double-check it” is sufficient warning
Here’s the architectural contradiction: Google’s SEO algorithms successfully detect and penalize hidden text manipulation. The technology exists. It’s in production. But Gemini doesn’t use it.
Why This Matters to You
You’re probably not thinking about hidden HTML when you ask Gemini to summarize an article. You assume:
- The summary reflects what’s actually on the page
- If Gemini cites a source, that source says what Gemini claims
- The tone classification (positive/negative/neutral) matches the visible content
None of these assumptions are guaranteed.
Real-world scenarios where this matters:
- Due diligence research: You’re evaluating a company or product and ask Gemini to summarize their press releases
- Competitive analysis: You’re researching competitors and using Gemini to quickly process industry reports
- News consumption: You ask Gemini to summarize breaking news about a crisis or controversy
- Academic research: You use Gemini to process research papers or technical documents
In every case, you’re trusting that Gemini’s summary represents the source material accurately. But if that source contains hidden manipulation, Gemini will faithfully report the contradictions as fact.
The Detection Gap
The techniques I documented aren’t exotic:
- CSS display:none elements
- Off-screen absolute positioning (left: -9999px)
- HTML comments with direct instructions
- White-on-white text
- Zero-width character insertion
- Metadata that contradicts visible content
These are the same techniques Google’s search quality algorithms flag as spam. But Gemini processes them as legitimate signals.
What Google Should Do
This isn’t about perfect AI. It’s about architectural consistency.
Immediate fixes:
- Browser-level rendering: Process only content visible to standard browser rendering (like SEO crawlers already do)
- Hidden element flagging: Identify and quarantine off-screen, hidden, or CSS-manipulated content
- Metadata validation: Cross-verify metadata claims against visible content
- Conflict detection: Flag summaries where hidden and visible signals diverge
The technology exists. Google uses it elsewhere. The question is priority.
The Disclosure That Isn’t Enough
At the bottom of every Gemini response: “Gemini can make mistakes, so double-check it.”
This disclaimer doesn’t cover systematic architectural vulnerabilities. It implies occasional errors, not structural manipulation potential. It puts the burden on users to detect contradictions they have no way of knowing exist.
When I ask Gemini to summarize a press release, I can verify the facts it reports. But I can’t verify what it didn’t tell me, or whether the tone classification reflects hidden signals I can’t see.
What You Can Do
If you use Gemini for research:
- Don’t trust summaries for high-stakes decisions
- Always read source material directly for anything important
- Be especially skeptical of tone classifications and source attributions
- Check if claimed endorsements actually exist in the visible article
If you publish web content:
- Audit your sites for unintentional manipulation signals
- Check HTML comments and metadata for conflicts with visible content
- Test your pages with AI summarizers to see what they report
If you care about AI integrity:
- This affects more than Gemini—preliminary testing suggests similar vulnerabilities across major AI platforms
- The issue is architectural, not unique to one company
- Pressure for transparency about how AI systems process content vs. how humans see it
The Repository
All test cases, methodologies, and findings are public: github.com/walterreid/Summarizer
Each test includes:
- Paired control/manipulation URLs you can test yourself
- Full Gemini conversation transcripts
- SHA256 checksums for reproducibility
- Detailed manipulation inventories
- Rubric scoring showing the delta between control and manipulated responses
This isn’t theoretical. These pages exist. You can ask Gemini to summarize them right now.
The Larger Problem
I submitted this research following responsible disclosure practices:
- Used fictional companies (GlobalTech, IronFortress) to prevent real-world harm
- Included explicit research disclaimers in all test content
- Published detection methods alongside vulnerability documentation
- Gave Google time to respond before going public
The 100% manipulation success rate across all scenarios indicates this isn’t an edge case. It’s systematic.
When Google’s Trust & Safety team classifies this as “Intended Behavior,” they’re making a statement about acceptable risk. They’re saying the current architecture is good enough, and the existing disclaimer is sufficient warning.
I disagree.
Bottom Line
When you ask Gemini to summarize a webpage, you’re not getting a summary of what you see. You’re getting a summary of everything the HTML contains—visible or not. And Google knows about it.
The disclaimer at the bottom isn’t enough. The “Won’t Fix” classification isn’t acceptable. And users deserve to know that Gemini’s summaries can systematically contradict visible content through hidden signals.
This isn’t about AI being imperfect. It’s about the gap between what users assume they’re getting and what’s actually happening under the hood.
And right now, that gap is wide enough to drive a fabricated Harvard endorsement through.
Walter Reid is an AI product leader and independent researcher. He previously led product strategy at Mastercard and has spent over 20 years building systems people trust. This research was conducted independently and submitted to Google through their Vulnerability Rewards Program.
Full research repository: github.com/walterreid/Summarizer
Contact: walterreid.com
