10 Generative Engine Optimization Experts Worth Following in 2026


Generative engine optimization has attracted a lot of commentators and very few practitioners. Scroll LinkedIn for five minutes and you will see the same ten tips recycled across a thousand posts: add FAQ schema, write concise answers, use entity-rich language. None of that tells you what is actually happening inside the systems you are trying to rank in.
The people below are different. Each of them runs research, ships tools, or advises teams that are getting cited by ChatGPT, Claude, Perplexity, Gemini, and Google AI Overviews right now. They disagree with each other, and the disagreement is the point. Read them together and you will get something closer to a real picture of a field that is still being invented week by week.
How we picked this list
We chose people who meet three tests. They publish original research or frameworks, not recycled checklists. They work with real brands on real AI-search problems, not just theory. And their ideas hold up under scrutiny from other practitioners on this list. No vendors pitching their own tool as the universal answer. No LinkedIn influencers who discovered GEO six months ago. Just the voices that have moved the field forward.
Lily Ray, Amsive and Algorythmic
Lily Ray is VP of SEO and AI Search at Amsive, and founder of the consulting practice Algorythmic. She is the person most likely to read a new AI Overviews patent and publish a breakdown the same day. Her public work is a running audit of what AI engines actually cite, versus what agencies claim they reward.
Her core argument is one many will not want to hear: most GEO tactics are "verbatim recommendations that SEO teams have been making for years." Schema, clear headings, authoritative content. Repackaged, not reinvented. Where she gets interesting is her extension of E-E-A-T into AI search. The same expertise, experience, authoritativeness, and trust signals that matter to Google show up disproportionately in the URLs LLMs cite. GEO does not replace SEO; it amplifies the pages that already earned the right to rank.
Follow her on Search Engine Land and at Amsive Insights.
Kevin Indig, Growth Memo
Kevin Indig writes Growth Memo, which has become the closest thing the field has to a standing research journal. His "State of AI Search Optimization 2026" report is the most cited single piece of analysis in the space, and for good reason: it collapses scattered citation studies into usable numbers.
A few of the findings that keep getting quoted, all from his own research or studies he has curated: 44.2% of LLM citations come from the first 30% of a page. Question-mark headings are cited roughly twice as often as statement headings. Pages with 10 to 15 H2 sections in the 5,000 to 7,500 word range correlate with higher citation rates, and pages above 20,000 characters get 4.3x more citations than shorter ones. None of these are laws. They are the starting hypotheses most teams should be running their own experiments against.
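Those numbers are easy to turn into a repeatable audit. Below is a minimal sketch that scores a page against them; it assumes you have already extracted the H2 headings and body text with your own parser, and the thresholds are Kevin's reported correlations, not rules.
```python
# Minimal sketch: audit a page against the citation heuristics above.
# Thresholds are the reported correlations from Growth Memo's 2026 data;
# treat each check as a hypothesis to test, not a rule.

def citation_heuristics(h2_headings: list[str], body_text: str) -> dict:
    words = len(body_text.split())
    questions = sum(1 for h in h2_headings if h.strip().endswith("?"))
    return {
        # 10-15 H2s on a 5,000-7,500 word page correlated with more citations
        "h2_count_in_range": 10 <= len(h2_headings) <= 15,
        "word_count_in_range": 5_000 <= words <= 7_500,
        # pages above 20,000 characters drew ~4.3x more citations
        "long_form": len(body_text) > 20_000,
        # question-mark headings were cited roughly twice as often
        "question_heading_share": questions / max(len(h2_headings), 1),
    }
```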
Kevin's other useful contribution is skepticism. His piece "The Alpha is not LLM monitoring" pushes back on the idea that buying yet another dashboard gets you anywhere. The alpha is in the content and the infrastructure behind it. The dashboard just tells you if it worked.
Subscribe at Growth Memo.
Aleyda Solis, Orainti
Aleyda Solis runs Orainti, a boutique consultancy that quietly handles some of the hardest multi-market SEO problems in the industry. She is also the person who turned AI search optimization into a public curriculum. Her free roadmaps at LearningSEO.io and LearningAIsearch.com are where most serious practitioners started.
Her distinctive angle is crawlability. Before you optimize a single sentence for LLM citation, the bots that feed those LLMs have to be able to fetch and parse your pages. AI crawlers behave differently from Googlebot. They hit roadblocks that traditional SEO audits miss: aggressive rate limiting, JavaScript-heavy rendering, Cloudflare rules that block Perplexity and ChatGPT's crawlers by default. Aleyda's AI Search Content Optimization Checklist is the most practical operational playbook we have seen, and it starts with infrastructure rather than with content.
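That infrastructure check is easy to start yourself. The sketch below uses Python's standard robotparser to ask whether the major AI crawlers may fetch a given path; the user-agent strings are the ones the vendors publish at the time of writing, and this only catches robots.txt rules, not CDN or firewall blocks, which need a log-level audit.
```python
# Minimal sketch: does robots.txt let the major AI crawlers in?
# User-agent strings change; verify against each vendor's docs.
from urllib import robotparser

AI_CRAWLERS = ["GPTBot", "OAI-SearchBot", "ChatGPT-User",
               "PerplexityBot", "ClaudeBot", "Google-Extended", "CCBot"]

def ai_crawler_access(site: str, path: str = "/") -> dict[str, bool]:
    rp = robotparser.RobotFileParser()
    rp.set_url(f"{site.rstrip('/')}/robots.txt")
    rp.read()  # fetch and parse the live robots.txt
    return {ua: rp.can_fetch(ua, f"{site.rstrip('/')}{path}")
            for ua in AI_CRAWLERS}

print(ai_crawler_access("https://example.com"))
```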
She also publishes the SEOFOMO and AI Marketers newsletters, which remain the best weekly filter for what actually changed in search this week.
Mike King, iPullRank
Mike King founded iPullRank and wrote "The AI Search Manual," an openly published book-length treatment of how generative engines retrieve, rank, and cite content. If you want one long read to understand GEO as a technical discipline, his manual is it.
His framing is worth naming directly: Relevance Engineering. GEO is not a content exercise with schema sprinkled on top. It is an engineering discipline that combines embeddings, vector retrieval, information retrieval theory, UX, and content strategy into one system. The target audience is machines. The test is whether those machines ingest, synthesize, and cite your content accurately. That reframe forces teams to stop treating AI search as a marketing problem and start treating it as a data problem.
Mike also writes about embeddings with a clarity that is rare in this field. If you have ever wondered why your pages feel relevant to a keyword but never surface in AI answers, start with his work on semantic similarity and query fan-out.
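To see why fan-out matters, consider a rough sketch of the retrieval step. It uses the open-source sentence-transformers library as a stand-in for whatever proprietary embedding model a given engine actually runs, and the sub-queries are hypothetical expansions an engine might derive from one question.
```python
# Rough sketch of query fan-out: one question becomes several sub-queries,
# and passages are retrieved by embedding similarity against all of them.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in embedding model

fan_out = [  # hypothetical expansions of "what is GEO?"
    "what is generative engine optimization",
    "how do LLMs choose which pages to cite",
    "generative engine optimization vs traditional SEO",
]
passages = ["...your page, split into passages..."]  # your extracted chunks

scores = util.cos_sim(model.encode(fan_out, convert_to_tensor=True),
                      model.encode(passages, convert_to_tensor=True))

# A page that matches the head query but none of the expansions is exactly
# the "feels relevant but never surfaces" failure described above.
for query, row in zip(fan_out, scores):
    print(f"{row.max().item():.3f}  best passage match for: {query}")
```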
The manual lives at iPullRank.
Jason Barnard, Kalicube
Jason Barnard is the most credentialed person on this list, and also the least loud. He coined the phrase "Answer Engine Optimization" back in 2018, years before ChatGPT existed. His company Kalicube has spent a decade helping brands and personalities control how they appear in Google Knowledge Panels, and that discipline turned out to be early practice for the exact problem AI search creates.
His "algorithmic trinity" frame is useful. Every AI assistant is built on three pillars: a language model for synthesis, a knowledge graph for facts, and a search index for freshness. Optimizing for only one of the three leaves citations on the table. Most content teams are optimizing for synthesis by writing answer-shaped prose. Few are working on the knowledge graph layer, where entity relationships live. That is where Jason has spent his career.
If ChatGPT misrepresents your brand (wrong founding year, wrong founder, wrong product category), the fix is almost always at the entity layer.
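A minimal version of that fix is publishing the contested facts as explicit schema.org markup, so the knowledge-graph layer has something unambiguous to ingest. Every value in the sketch below is a placeholder.
```python
# Minimal sketch: assert the contested facts as Organization markup.
# All values here are placeholders for your own entity data.
import json

org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Acme Analytics",                    # hypothetical brand
    "foundingDate": "2017",                      # the fact engines get wrong
    "founder": {"@type": "Person", "name": "Jane Doe"},
    "sameAs": [  # tie the entity to its other authoritative homes
        "https://www.wikidata.org/wiki/Q000000",
        "https://www.linkedin.com/company/acme-analytics",
    ],
}

print(f'<script type="application/ld+json">{json.dumps(org, indent=2)}</script>')
```
Jason's work at Kalicube is where we point teams with that specific problem.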
Rand Fishkin, SparkToro
Rand Fishkin is the skeptic on the list, and generative engine optimization needs its skeptics. He co-founded Moz, then walked away to build SparkToro around a single thesis: attribution is dying, clicks are dying, and most marketing teams are optimizing for the wrong thing.
His most useful recent argument is this: in a zero-click world, traffic is a terrible goal. If AI tools continue doubling annually, they will rival traditional search in raw usage within six to ten years. Long before that, the percentage of searches that end without a click to any website will pass 60%. That changes the target. You are no longer optimizing for visits. You are optimizing for the moment your brand gets mentioned inside someone else's interface, and for whether that mention creates demand you can capture later through direct, branded, or dark-social channels.
Rand is right that most GEO discussions skip over this measurement problem. If you cannot measure it, you cannot manage it, and the tools to measure AI-driven brand lift are still embryonic.
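The embryonic version of that measurement is referrer counting, and it takes a few lines. The domains below are the current assistant referrers; treat the result as a floor, because many assistants strip referrers and a mention that never gets clicked is invisible here.
```python
# Minimal sketch: count visits referred by AI assistants in an access log.
# This undercounts badly; it is a floor, not a measure of brand lift.
AI_REFERRERS = ("chatgpt.com", "chat.openai.com", "perplexity.ai",
                "gemini.google.com", "copilot.microsoft.com")

def count_ai_referrals(log_lines: list[str]) -> dict[str, int]:
    counts = dict.fromkeys(AI_REFERRERS, 0)
    for line in log_lines:  # assumes combined-format access log lines
        for domain in AI_REFERRERS:
            if domain in line:
                counts[domain] += 1
    return counts
```
Read him at the SparkToro blog.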
Bernard Huang, Clearscope
Bernard Huang founded Clearscope, which many content teams first knew as a keyword-density scoring tool and now know as an AI-search content platform. Bernard himself is a quieter voice than some on this list, and that is part of what makes his frameworks worth reading. He ships them and moves on.
Two ideas of his have held up. First: commodity prompting produces commodity output, and commodity output does not rank anywhere, including in AI answers. If ten writers prompt ChatGPT with the same brief, the result is ten interchangeable articles, and LLMs will cite none of them. The only escape is content that adds something the training data does not already contain. Original data. Specific customer stories. Real expertise.
Second: the validation layer. When AI models are unsure, they run a web search to fact-check themselves before answering. That validation layer is a separate optimization target. Pages that get fetched during that recheck step are disproportionately represented in citations. Structuring content to surface there (concise, recent, source-attributed) is a lever most teams have not tried.
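A crude self-audit against those three properties might look like the sketch below. The thresholds are our own illustrative assumptions, not numbers from Bernard's research.
```python
# Minimal sketch: is this page shaped for the validation-layer fetch?
# Thresholds below are illustrative assumptions, not published figures.
from datetime import date, timedelta

def validation_layer_check(first_paragraph: str, last_modified: date,
                           outbound_source_links: int) -> dict[str, bool]:
    return {
        "concise_answer_up_front": len(first_paragraph.split()) <= 80,
        "recently_updated": date.today() - last_modified <= timedelta(days=180),
        "source_attributed": outbound_source_links >= 2,
    }
```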
Clearscope's webinar library is where most of his thinking is archived.
Britney Muller, Orange Labs
Britney Muller was Senior SEO Scientist at Moz before most people had heard of machine learning, then went to Hugging Face, then founded Orange Labs. That trajectory matters. She has been mixing ML and marketing for longer than almost anyone on this list, and she reads both literatures fluently.
Her contribution to GEO is less about tactics and more about intellectual honesty. When she talks about LLMs, she talks about training data composition, bias, and failure modes. She assesses tools rather than marketing them. If you want to understand why an AI answer engine keeps citing competitor X and ignoring you, she is the person asking the right upstream question: what is in the training data, what was crawled, what was favored, and what structural properties of your content make you legible to the model.
She also runs the Actionable AI course, which is one of the few educational resources aimed at marketers that does not hand-wave about how models work. Her site is britneymuller.com.
Andrea Volpini, WordLift
Andrea Volpini is CEO of WordLift and one of the earliest people to argue that knowledge graphs would be the substrate AI search runs on. Two years ago that sounded academic. It no longer does. Every major assistant now grounds its answers in some form of structured knowledge, and the brands that publish their own machine-readable graphs get picked up first.
WordLift's research on Recursive Language Models over Knowledge Graphs shows why. Using a 150-question benchmark, they found that multi-hop traversal of a knowledge graph improves both evidence quality and citation behavior versus retrieval-augmented generation alone. In plain terms: if your content is wired together by entities and relationships, not just links, LLMs can follow threads through your site and cite you more accurately. If it is a pile of unrelated blog posts, they cannot.
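The difference is easy to see in miniature. The sketch below uses networkx as a stand-in for a real knowledge graph, with invented entities: flat retrieval sees only the direct neighbors of a matched entity, while traversal can follow the chain down to the underlying evidence.
```python
# Toy illustration of multi-hop traversal over an entity graph.
# Entities and relations are invented; networkx stands in for a real KG.
import networkx as nx

kg = nx.DiGraph()
kg.add_edge("Acme CRM", "pipeline forecasting", relation="hasFeature")
kg.add_edge("pipeline forecasting", "forecast accuracy study", relation="citesEvidence")
kg.add_edge("forecast accuracy study", "2025 benchmark dataset", relation="basedOn")

one_hop = list(kg.successors("Acme CRM"))                # flat retrieval stops here
multi_hop = list(nx.dfs_preorder_nodes(kg, "Acme CRM"))  # traversal reaches the evidence

print("one hop:  ", one_hop)
print("multi hop:", multi_hop)
```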
Andrea is the person to read on entity strategy, schema at scale, and knowledge-graph-powered content pipelines. His work is at the WordLift blog.
Chris Long, Go Fish Digital and Nectiv
Chris Long is VP of Marketing at Go Fish Digital and co-founder of Nectiv, a B2B and SaaS-focused AEO and GEO agency. He is the practitioner on this list most willing to run public experiments and publish the results, including the failures.
Two examples stand out. His team at Go Fish Digital ran a case study showing that deliberate editorial changes to a handful of pages shifted what ChatGPT Search recommended, with before-and-after evidence. That is rare. Most AI-citation claims are correlational; his were closer to controlled. Separately, he has shipped a string of practical tools, including an AI Overview Scorecard and an AEO/GEO Content Optimizer, that use Google's own embedding model to score how semantically aligned a page is with a target query.
If you want to understand what actually moves the needle in AI answers at the page level, and you want to see the code and the methodology behind the claims, Chris's work is the model to copy. Find him at Go Fish Digital.
How to read this list
Nine of these people would disagree with the tenth on any given Tuesday. Lily Ray thinks GEO is mostly E-E-A-T done well. Mike King thinks it is an engineering discipline. Rand Fishkin thinks the whole click-optimization frame is already obsolete. Andrea Volpini thinks none of it matters until your knowledge graph is in order.
They are all partially right. Generative engine optimization is early enough that no single framework is complete, and anyone claiming otherwise is selling you something. The useful move is to build your own reading list from these voices, run your own experiments on your own content, and measure whether ChatGPT, Perplexity, Claude, and Google AI Overviews cite you more this quarter than last.
At Salespeak we are obsessed with what comes downstream of this. Once an AI agent cites you, what happens when the buyer lands on your site? Most companies have spent a year optimizing to be mentioned in AI answers and zero minutes thinking about whether their front door can answer the follow-up questions those mentions generate. That is the gap we work on, and it is where the traffic from every expert above eventually has to convert.
Start with one or two of the people on this list. Read them for a month. Run one experiment. Then come back and read the rest.