What Google is rewarding going into 2026, and how to avoid “consensus content”

Why Google’s quality signals matter going into 2026
Google’s Head of Search, Elizabeth “Liz” Reid, has been unusually direct about what Google is trying to reward as AI Overviews expand. She frames the shift as behavioral and as a quality filter that downweights low-value “consensus content,” meaning pages that repeat top results without adding evidence or a real point of view. Source: Search Engine Land
Behavior is the leading indicator. People, especially younger users, are moving to short-form video, forums, podcasts, and creator-led posts for many questions.
Google has expanded its idea of “spam” beyond obvious junk to include low-value pages that “don’t add very much,” even if they are cleanly written or AI-assisted.
That matters because AI Overviews already give users a usable summary. If our page is just another safe rewrite, it becomes the definition of consensus content; it gives the user no reason to click.
What Google means by “low-grade” content now
Historically, “spam” meant hacked sites, keyword stuffing, malware, and obvious junk. Reid says Google is now treating low-value, consensus content as part of the same problem, and is actively upweighting pages that show real craft, expertise, and perspective.
From Reid’s comments, low-grade (consensus) content tends to look like:
- Pages that do not add very much – they repeat what everybody already knows (classic consensus content).
- Content that could have been produced by a generic prompt, with no original angle, proof, or tradeoffs.
- Surface-level pages that feel like filler after an AI Overview, because you do not learn anything new.
- Rewrites that copy the same headings and tips across the SERP, with no new examples, data, or opinions.
This is also where our own E-E-A-T needs to show up. In our client work, the fastest way we’ve improved content quality is by replacing generic benefit copy with evidence layers that competitors cannot copy without the same business reality, product usage, or support patterns.
For example, on a B2B SaaS feature page, we added real workflow screenshots, common support objections, and a short SME walkthrough explaining how teams use the feature in practice. That shift turned the page from a summary into an original point of view.
In 2026, we are using three levers to stay out of that low-grade bucket:
- Use first-party evidence as the default way to add depth (real data, product usage patterns, support trends, experiments).
- Put user-facing subject matter experts (SMEs) at the center of E-E-A-T.
- Write in a first-person, accountable voice instead of faceless “brand speak.”
When people say “AI slop,” they mean output that is technically correct but empty: it mirrors the summary, avoids specifics, and never shows the reader how the source knows it is true.
Reid also points to click behavior inside AI Overviews. Google wants fewer “bounce clicks” and more satisfied visits, so it surfaces pages that go deeper, show a human perspective, and actually teach something beyond the overview.

User-facing SMEs: how Google reads E-E-A-T in 2026
Reid has said Google is trying to upweight content from someone who brought their perspective or expertise and put real time and craft into the work.
Practically, that means our E-E-A-T signals need to show:
- A real person with relevant experience is behind the content.
- That person has a visible track record and digital footprint, not just a generic byline.
- The page reads like a lived experience, not desk research.
Anonymous “Team” articles and generic staff bylines are a risk because they hide the expert. If we want to be eligible for richer AI citations, the expert should be visible.
What a user-facing SME should look like in practice:
- Named authors with clear titles that map to the topic.
- Author pages that explain credentials and experience, and link to relevant talks, podcasts, third-party coverage, and social profiles.
- Author and Person schema on priority content so search engines can connect the person, their expertise, and the topics they consistently cover.
- A consistent pattern across channels – the same SMEs repeatedly show up for the same topic areas.
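For the Author and Person schema mentioned above, here is a minimal sketch of what the JSON-LD could look like. Every name, title, and URL below is a placeholder, not a real profile; the snippet is built in Python simply so the structure is easy to generate and validate before embedding it on an author page.

```python
import json

# Hypothetical author data; every name, title, and URL is a placeholder.
author_schema = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Jane Doe",
    "jobTitle": "Head of Demand Generation",
    "url": "https://example.com/authors/jane-doe",
    # sameAs links connect the person to their wider digital footprint:
    # talks, podcasts, third-party coverage, and social profiles.
    "sameAs": [
        "https://www.linkedin.com/in/jane-doe",
        "https://example.com/podcast/episode-placeholder",
    ],
    # Topics the SME consistently covers, so search engines can
    # associate the person with a subject area over time.
    "knowsAbout": ["B2B SaaS marketing", "SEO", "content strategy"],
}

# Serialize to the JSON-LD string you would place inside a
# <script type="application/ld+json"> tag on the author page.
print(json.dumps(author_schema, indent=2))
```

The same `Person` object can also be referenced from an article’s `author` property, which is how the byline, author page, and topic history get tied together.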
How we capture SME value without asking them to write from scratch:
- Run short, recorded interviews or focused Q&A sessions tied to product, customer, or support trends.
- Lamark strategists and content strategists structure and polish, but keep the SME’s language, examples, and opinions intact.
- Publish under the SME’s name after a fast accuracy check: draft, SME fact-check, then final approval.
This is how we avoid consensus content. We build pages around specifics, stories, and evidence, not generic lists. That matches what Reid describes as “richer and deeper” content that users actually click.
First-person voice: why “I” and “we” matter more
Reid says people want content from a human perspective, and they click when they see the unique thing a creator brings. A first-person voice is not a gimmick; it is a simple way to show accountability and lived experience.
First-person language is a practical signal that there is a human behind the page:
- “I did this” or “we tested this” implies accountability and real experience.
- It forces concrete detail, because you have to reference a real event, test, customer pattern, or decision.
- It matches how people talk in forums, videos, and creator posts, not just landing pages.
When AI gives the summary, a detached third-person tone can read like generic filler. We do not need to turn every page into a personal story, but we do need more first-hand detail and clearer ownership.
This does not mean every page has to read like a personal journal. It does mean:
- Let SMEs use first-person language in examples and lessons learned.
- Frame pages around what we saw, what worked, and what did not, not just a checklist of tips.
- Let founders and practitioners sign POVs instead of hiding every voice behind the logo.
What we are doing next in 2026
Going into 2026, we treat this as a content-filtering problem, not a writing-tool problem. If a page feels generic, recycled, or authorless, it is increasingly unlikely to earn the post-summary click. Crucially, even though AI Overviews reduce the overall number of clicks from search results, the quality of the clicks that remain has improved significantly. Knowing that, here is what we are focusing on:
- Audit priority pages for consensus risk, specifically pages with no proof, no examples, and no visible author or SME.
- Add an evidence layer to every high-intent page, such as screenshots, real workflows, support themes, or data that supports the claim.
- Attach a real SME to content that requires expertise, then reinforce that with an author page and consistent byline use.
- Rewrite key sections in an accountable voice, improving clarity by including what we saw, what we tested, and what we recommend based on experience.
- Structure the post-AI click page by including the deeper layers that the summary cannot provide: pitfalls, trade-offs, examples, and implementation steps.
- Measure success using organic non-branded clicks, engagement signals, and conversion paths from organic traffic (request demo, booked demo, qualified lead, where tracking allows).
If a page is interchangeable, it is at risk. Our goal is pages that only this brand can publish because they carry first-party evidence, visible expertise, and a real point of view.