Jacek Białas

Holds a Master’s degree in Public Finance Administration and is an experienced SEO and SEM specialist with over eight years of professional practice. His expertise includes creating comprehensive digital marketing strategies, conducting SEO audits, managing Google Ads campaigns, content marketing, and technical website optimization. He has successfully supported businesses in Poland and international markets across diverse industries such as finance, technology, medicine, and iGaming.

Google Discover optimization – technical guide

Dec 4, 2025 | SEO

We have moved from a query-based retrieval model to a predictive push architecture. In this new environment, Google Discover is no longer a secondary traffic source. It is a primary engine for organic growth. The rise of zero-click searches, which now account for approximately 60% of all queries, has forced this pivot. Users are increasingly finding answers directly on the search results page through AI Overviews, meaning the traditional click-through economy for informational queries is shrinking.

Publishers must now master the mechanics of predictive content delivery. Unlike traditional search engine optimization, which reacts to explicit user intent, Discover anticipates latent interests. This system relies on sophisticated machine learning models to map content to user interest vectors without a keyword ever being typed. The implications for technical SEO are profound. We must move beyond keyword density and focus on entity salience and technical congruency.

This report outlines the specific technical frameworks required to succeed. We will analyze the dual encoder architecture, the necessity of the experience signal, and the precise code structures needed for the follow feature.

Neural matching and dual encoder mechanics

To optimize for Discover, one must understand the retrieval system. Google employs a dual encoder architecture, often referenced in research as the two-tower model. This system is designed to handle the scale of billions of users and documents efficiently.   

One tower encodes the user state. This includes search history, location data, and past interactions across the Google ecosystem. The second tower encodes the document candidate. This analyzes the text, headline, and visual assets of your content. The system then calculates the dot product similarity between these two vectors in a high-dimensional space.   
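The retrieval step can be sketched in a few lines. Everything below is illustrative: real embeddings have hundreds of dimensions and are produced by trained neural encoders, but the scoring math is the same dot product between a user vector and each candidate document vector.

```python
def dot(u, v):
    """Dot-product similarity between two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

# Illustrative 4-dimensional embeddings (real systems use hundreds of dims).
user_vector = [0.9, 0.1, 0.4, 0.0]  # output of the "user tower"
candidates = {
    "upcycling-furniture": [0.8, 0.2, 0.5, 0.1],  # outputs of the "document tower"
    "stock-market-basics": [0.1, 0.9, 0.0, 0.3],
}

# Retrieval: score every candidate against the user state and rank.
ranked = sorted(candidates, key=lambda doc: dot(user_vector, candidates[doc]), reverse=True)
print(ranked[0])  # the candidate whose vector best aligns with the user vector
```

Because document vectors can be precomputed and stored, this design lets the system score billions of candidates with a cheap vector lookup rather than running a heavy model per user-document pair.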

The role of neural matching in retrieval

Neural matching is the bridge between vague user interests and specific content. It allows the algorithm to understand semantic relationships rather than just exact keyword matches. For example, a user interested in “sustainable living” may be served an article about “upcycling furniture” even if the exact terms do not overlap. The neural network maps both concepts to the same vector space neighborhood.   

This technology shifts the optimization focus. Writers must prioritize conceptual clarity over keyword stuffing. The document vector is formed primarily from the headline, the lead image, and the first 200 words of text. If these elements are ambiguous, the document vector becomes “noisy” and fails to align with any specific user interest vector.   
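As a rough illustration, the text side of the document vector can be thought of as the headline plus the truncated lead. The `document_signal` helper below is hypothetical, not a Google API; it only shows which fields deserve the most editorial attention.

```python
def document_signal(headline: str, body: str, max_words: int = 200) -> str:
    """Assemble the text the document encoder weighs most heavily:
    the headline plus the first `max_words` words of the body."""
    lead = " ".join(body.split()[:max_words])
    return f"{headline}\n{lead}"

article = {
    "headline": "Upcycling furniture on a budget",
    "body": "Old furniture can be restored with simple tools. " * 100,
}
signal = document_signal(article["headline"], article["body"])
print(len(signal.split()))  # headline words plus at most 200 body words
```

If the headline and the first 200 words point at different topics, the resulting vector sits between two interest clusters and matches neither well.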

Optimizing for the reranking phase

Once the dual encoders retrieve a set of candidates, a heavier model performs the final ranking. In 2025, this involves multimodal models like Gemini and MUM. These models process text and images simultaneously to determine relevance. They assess semantic congruence between the visual and textual elements. A mismatch here, such as a generic stock photo paired with a specific technical headline, will downgrade the content quality score.   

Entity optimization and the topic layer

The topic layer sits on top of the knowledge graph. It is responsible for understanding how user interests evolve over time. It segments interests into hierarchies and journeys. A user might start with a broad interest in artificial intelligence and mature into a specific interest in large language models.   

Knowledge graph integration strategies

To capture traffic from the topic layer, you must anchor your content to recognized entities. The Cortex framework and similar data architectures illustrate how Google structures this data. You must treat your content as a dataset that feeds into this graph.   

  1. Entity disambiguation – use specific nouns. Do not say “the company”. Say “OpenAI”. This helps the natural language processing API extract the correct entity ID.   
  2. Structured data implementation – use the SameAs property in your schema. Link your mentioned entities to their Wikipedia or Wikidata entries. This removes ambiguity and strengthens the entity signal.   
  3. Topic clustering – organize content to cover an entity exhaustively. This signals to the topic layer that your site is an authority on the entire entity branch, not just a single leaf node.
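Point 2 can be sketched as a small JSON-LD fragment, generated here in Python for clarity. The headline and article are invented; the `sameAs` target is OpenAI's real Wikipedia page, and you would add the matching Wikidata URL the same way.

```python
import json

# Sketch: an Article node whose mentioned entity is disambiguated via sameAs.
schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "How OpenAI trains large language models",  # invented example
    "mentions": {
        "@type": "Organization",
        "name": "OpenAI",
        "sameAs": [
            "https://en.wikipedia.org/wiki/OpenAI",
        ],
    },
}
print(json.dumps(schema, indent=2))
```

Embedding this in a `<script type="application/ld+json">` tag ties the ambiguous string "OpenAI" in your prose to a single node in the knowledge graph.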

E-E-A-T and the experience signal

In a feed that users do not explicitly curate, trust is paramount. Google applies stricter E-E-A-T standards to Discover than to web search. The most critical addition for 2025 is the experience signal.

Demonstrating first-hand experience

Algorithms now aggressively filter generic content. They look for markers of genuine usage or presence. Content that contains phrases like “in our testing” or “we observed” tends to perform better than content that merely summarizes specifications.   

Publishers should adopt a first-person narrative where appropriate. This aligns with the “soft-lens” content that performs well in Discover. Personal stories and unique perspectives act as a proxy for authenticity.   

Author reconciliation and authority

You must help Google connect your authors to real-world people. This process is known as reconciliation. Ensure every author has a robust bio page marked up with ProfilePage schema. Link to their social media profiles and other publications. This builds a private knowledge graph of your site’s expertise that Google can map to its global graph.   
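A minimal sketch of such a bio page's markup, built as a Python dict and serialized to JSON-LD. Every value here is a placeholder; swap in the real author name, bio URL, and social profiles.

```python
import json

# Sketch of a ProfilePage node for an author bio page. All values are placeholders.
author_page = {
    "@context": "https://schema.org",
    "@type": "ProfilePage",
    "mainEntity": {
        "@type": "Person",
        "name": "Jane Doe",
        "url": "https://example.com/authors/jane-doe",
        "jobTitle": "Senior SEO Analyst",
        "sameAs": [
            "https://www.linkedin.com/in/jane-doe",
            "https://x.com/janedoe",
        ],
    },
}
print(json.dumps(author_page, indent=2))
```

The `sameAs` links are what make reconciliation possible: they give the crawler an unambiguous path from your author page to profiles it already knows.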

Visual assets and web stories integration

Discover is a visual-first medium. The click-through rate is heavily influenced by the quality and technical specification of the thumbnail image.

The 1200px rule and aspect ratios

Your images must be at least 1200 pixels wide. This is a hard technical requirement for the large image card format, which drives significantly higher engagement than the thumbnail format. You must also include the max-image-preview:large meta tag in the <head> of your document.

HTML

<meta name="robots" content="max-image-preview:large">

Avoid using your site logo as the primary image. The multimodal analysis will likely flag it as low-relevance commercial content. Instead, use custom photography or highly specific diagrams that match the semantic context of the headline.

Leveraging Google Web Stories

Web stories remain a potent format for Discover in 2025. They occupy a dedicated carousel and offer a high-visibility entry point. However, they must meet strict technical guidelines.   

  1. Video length – keep video segments under 15 seconds.
  2. Text density – limit text to under 280 characters per page to avoid the “wall of text” penalty.   
  3. Aspect ratio – use 9:16 for all assets.
  4. Metadata – ensure the poster-portrait-src attribute points to a valid 3:4 image file.   
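These four limits are easy to enforce in a publishing pipeline. The validator below is a sketch with an assumed page-dictionary shape, not an official AMP tool.

```python
def validate_story_page(page: dict) -> list[str]:
    """Check one Web Story page against the limits above.
    Returns a list of violations (an empty list means the page passes)."""
    problems = []
    if page.get("video_seconds", 0) > 15:
        problems.append("video segment exceeds 15 seconds")
    if len(page.get("text", "")) > 280:
        problems.append("more than 280 characters of text")
    if page.get("aspect_ratio", (9, 16)) != (9, 16):
        problems.append("asset is not 9:16")
    return problems

page = {"video_seconds": 22, "text": "Short caption", "aspect_ratio": (9, 16)}
print(validate_story_page(page))  # ['video segment exceeds 15 seconds']
```

Running a check like this before publication is cheaper than discovering a rejected story after it has already missed the carousel.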

Technical infrastructure for the follow feature

The follow button in Discover allows users to subscribe directly to a publisher. This feature relies entirely on your RSS or Atom feeds. If your feed is broken or generic, you lose the ability to retain an audience.

Feed specifications for optimization

Your feed must be technically flawless to support the follow feature. It should be linked in the <head> of your hub and leaf pages.

  • Full content – do not serve partial snippets. The feed should contain the full article body to allow the algorithm to index the content correctly for the Following tab.
  • Static IDs – never change the <guid> or ID of an article after publication. This causes duplicate content issues in the user’s feed.   
  • Media tags – use strictly defined <media:content> tags to specify the high-resolution image asset. This ensures the correct image appears in the sub-feed.   
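The feed rules above can be encoded in a small generator. This sketch uses Python's standard library; all URLs, IDs, and field values are placeholders.

```python
import xml.etree.ElementTree as ET

MEDIA_NS = "http://search.yahoo.com/mrss/"
ET.register_namespace("media", MEDIA_NS)

def feed_item(title: str, link: str, guid: str, body_html: str, image_url: str) -> ET.Element:
    """Build one RSS <item> with a stable guid, full content, and a media:content tag."""
    item = ET.Element("item")
    ET.SubElement(item, "title").text = title
    ET.SubElement(item, "link").text = link
    g = ET.SubElement(item, "guid", isPermaLink="false")
    g.text = guid  # never change this after publication
    ET.SubElement(item, "description").text = body_html  # full article body, not a snippet
    ET.SubElement(item, f"{{{MEDIA_NS}}}content", url=image_url, medium="image", width="1200")
    return item

item = feed_item(
    title="Google Discover optimization – technical guide",
    link="https://example.com/discover-guide",
    guid="example-com-discover-guide-2025",
    body_html="<p>Full article body</p>",
    image_url="https://example.com/img/discover-hero.jpg",
)
print(ET.tostring(item, encoding="unicode"))
```

Setting `isPermaLink="false"` and keeping the guid string constant is what prevents the duplicate-entry problem: the feed reader treats the guid, not the URL or title, as the article's identity.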

The curiosity gap versus clickbait policy

Writing for Discover requires balancing high click-through rates with strict policy compliance. Google penalizes clickbait that withholds information or exaggerates outcomes. However, the curiosity gap is a valid psychological trigger that can be used effectively.   

Defining the safe zone

Clickbait obscures the subject. A curiosity gap highlights a specific, interesting detail about a known subject.   

  • Bad (Clickbait) – “You won’t believe what this company did.” (Subject is hidden).
  • Good (Curiosity Gap) – “The specific feature that makes the Pixel 9 Pro unique.” (Subject is clear, the specific detail is the gap).

The algorithm measures post-click satisfaction. If a headline generates a click but the user returns to the feed immediately, known as pogo-sticking, the headline is reclassified as misleading. This can lead to a manual action or algorithmic suppression.   

Analytics and traffic recovery

Traffic from Google Discover is inherently volatile. It does not follow the predictable patterns of search volume. It moves in waves based on entity trending and user interest spikes.   

If you experience a sudden decline, investigate entity drift. If your site has published content outside its core expertise, the topic layer may have diluted your authority signal for your primary niche.   

Recovery requires a return to core competencies. Prune content that lacks deep expertise. Update evergreen content with fresh data to signal relevance to the freshness algorithm. Do not panic over minor fluctuations. Focus on the long-term trend of active engagement rather than daily session counts.   
