Episode 216 · 1h 3m

216: THE MACHINE LAYER - BUILDING TRUST IN THE AGE OF AI SEARCH WITH DUANE FORRESTER

FEATURING
Duane Forrester, Unbound Answers

Author of 'The Machine Layer' and search industry veteran who built Bing Webmaster Tools and co-launched Schema.org.

The 30-year playbook for search optimization is breaking down. Checklists, keyword research, and technical SEO still matter, but they're no longer enough. AI systems don't care about your brand story or carefully crafted narratives. They want facts they can cite without risking their credibility, and they're evaluating your content in chunks, not pages.

Duane Forrester, who co-launched Schema.org and built Bing Webmaster Tools, has watched every major shift in search. His verdict on this one is unequivocal: trust has become the algorithm. LLMs develop what he calls machine comfort bias, naturally favoring sources that consistently prove reliable because verifying trust costs fewer computational resources than guessing. The websites that understand this will get cited. Everyone else will wonder where their traffic went.

Topics: Machine Comfort Bias · Chunk-Level Content Optimization · Citation Readiness · Schema.org as Trust Infrastructure · The Multidisciplinary SEO Role · Latent Choice Signals

KEY TAKEAWAYS

  • Put your most important facts, figures, and bullet points at the top of pages. LLMs suffer from 'lost in the middle' syndrome and extract information more reliably from the beginning and end of content.
  • Consistency builds machine trust over time. If your structured data, author markup, and content quality remain reliable over six months to a year, LLMs develop a comfort bias toward citing you.
  • Stop thinking about rankings and start thinking about being THE canonical source. If you haven't added net new information to an LLM's training data, you won't get cited.
  • Each LLM platform has different weights and temperatures, meaning content may need to be optimized per platform rather than using a universal approach.
  • LLMs will guess to save tokens unless you provide explicit information. Give them everything they need so they don't have to make decisions that could introduce errors.

SHOW NOTES

The End of Checklist SEO

Twenty years of industry history taught SEOs that success came from keyword research, gap analysis, technical optimization, and schema deployment. That mental model is now actively harmful. AI discovery systems evaluate trustworthiness across multiple dimensions before deciding whether to cite a source, and traditional ranking factors represent only a fraction of what matters.

The shift requires abandoning departmental silos that separate SEO from branding, conversion, UX, and paid media. These systems synthesize information across all these dimensions to determine citation worthiness. A technically perfect website with weak brand signals or inconsistent messaging won't earn the machine trust required for visibility.

How LLMs Actually Process Your Content

Chunking isn't an SEO buzzword. It's a fundamental machine learning construct describing how systems break content into 100-300 word blocks to capture discrete ideas. A chunk might be a complete paragraph or cut off mid-sentence, whatever captures a single concept in its totality.

The critical insight comes from research on the "lost in the middle" phenomenon. LLMs extract information more reliably from the beginning and end of long-form content, with middle sections proving less dependable. The practical response: put a TLDR at the top of every page with key facts, figures, and bullet points. This serves both human scanners and AI systems simultaneously.

Does this mean reformatting entire pages into 300-word blocks? Absolutely not. That approach confuses traditional search engines and creates terrible user experiences. The goal is interspersing chunked, fact-dense sections within natural prose.
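To make the chunking idea concrete, here is a minimal Python sketch that groups paragraphs into blocks of roughly 100-300 words. The word thresholds and the paragraph-splitting heuristic are assumptions for illustration only; real retrieval pipelines segment content in their own platform-specific ways.

```python
# Minimal sketch of paragraph-aware chunking, assuming a simple
# word-count heuristic. The ~300-word ceiling mirrors the range
# described in the episode; production systems will differ.

def chunk_content(text: str, max_words: int = 300) -> list[str]:
    """Group paragraphs into chunks that each capture a discrete idea."""
    chunks, current, count = [], [], 0
    for para in (p.strip() for p in text.split("\n\n") if p.strip()):
        words = len(para.split())
        # Close the current chunk once adding this paragraph would overshoot.
        if current and count + words > max_words:
            chunks.append("\n\n".join(current))
            current, count = [], 0
        current.append(para)
        count += words
    if current:
        chunks.append("\n\n".join(current))
    return chunks


if __name__ == "__main__":
    sample = "First paragraph with the key facts.\n\nSecond paragraph expanding on them."
    for i, chunk in enumerate(chunk_content(sample), 1):
        print(f"Chunk {i}: {len(chunk.split())} words")
```

Note how fact-dense sections that fit cleanly inside a single chunk survive extraction intact, which is the practical argument for interspersing them within natural prose rather than reformatting whole pages.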

The Economics of Machine Trust

LLMs want to save computational resources. When given a choice between verifying information across multiple sources or trusting a consistently reliable one, they'll lean toward the trusted source because it costs fewer tokens.

This creates machine comfort bias. Websites that consistently deploy structured data correctly, mark up authors properly, and maintain quality over time become default citation sources. The system isn't making a conscious choice. It's following the path of least computational resistance toward sources that have never given it reason to doubt.
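As a concrete illustration of the consistent structured data and author markup described above, here is a minimal Python sketch that emits a schema.org Article with a Person author as JSON-LD. The headline, dates, names, and URLs are hypothetical placeholders; the point is publishing the same vocabulary and the same author identity on every page, month after month.

```python
import json

# Minimal sketch of Article + author markup using schema.org vocabulary.
# All values below are hypothetical placeholders for illustration.
article_jsonld = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "How LLMs Evaluate Content in Chunks",
    "datePublished": "2024-06-01",
    "dateModified": "2024-06-01",
    "author": {
        "@type": "Person",
        "name": "Duane Forrester",
        "url": "https://example.com/authors/duane-forrester",
    },
    "publisher": {
        "@type": "Organization",
        "name": "Example Publisher",
    },
}

# Embed the JSON-LD in the page head so crawlers and LLM pipelines can parse it.
script_tag = (
    '<script type="application/ld+json">\n'
    + json.dumps(article_jsonld, indent=2)
    + "\n</script>"
)
print(script_tag)
```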

Beyond EEAT

Most SEOs understand EEAT as a framework for ranking higher in Google. That mental model misses the deeper implication. Would the information you publish hold up if someone quoted you in a conversation with a stranger? Would you be setting them up for success or embarrassment?

LLMs need multiple vectors of support for every statement: measurements, efficacy data, statistics, expert attribution. They're not checking whether you mentioned expertise on your about page. They're verifying whether your claims can withstand scrutiny when repeated to millions of users. The platforms themselves face reputational risk from bad citations, making their verification standards necessarily high.

The Canonical Source Imperative

Rankings matter less than becoming THE recognized authority on specific topics. If content merely restates what already exists in training data, there's no reason for an LLM to cite it. The citation goes to whoever originally established that knowledge.

This demands a fundamental shift in content strategy. The question isn't whether content ranks well. It's whether content expands what these systems know about a topic. Net new information, original research, unique data, proprietary insights: these create citation opportunities. Everything else competes for scraps.


