Stop Optimizing for Mentions. Start Answering Well.

The question isn't whether AI mentions you. It's whether AI gets you right.
There's an entire industry forming around "AI optimization." Get mentioned in ChatGPT. Show up in Perplexity. Rank in AI Overviews. The pitch is simple: if AI doesn't mention you, you don't exist.
I get the appeal. But as a CTO watching how LLMs actually work under the hood, I think the industry is optimizing for the wrong metric. Being mentioned is table stakes. The real question is: when AI talks about your product, does it get the answer right?
Perplexity's own team is saying this
A recent Wall Street Journal piece on the future of SEO in an AI world included a quote from Dwyer at Perplexity that stopped me cold. He said AI search pulls in far more information about the person asking the question, making results highly customized and therefore difficult to predict or measure.
His advice to companies? Direct optimization efforts at improving your products and the quality of the information you share, not on technical tweaks aimed at moving an elusive needle. His exact words: "Marketers have the option to follow fake numbers or to focus on building great things. Time will tell which of those strategies is better."
Read that again. A spokesperson for a major AI search engine is telling you that gaming mentions is a dead end. The winning strategy is making sure the information you put out there is so good that the AI can't get you wrong.
Why "being mentioned" is a vanity metric
Here's what the mention-chasers miss: LLMs don't rank results like Google did. There's no position one. There's no page one. There's a synthesized answer that pulls from whatever the model thinks is most relevant to that specific user's context.
So when a buyer asks Claude "What's the best AI sales agent for a mid-market SaaS company?", the model isn't checking who optimized their content best. It's assembling an answer from whatever structured, trustworthy information it can find. If your product information is scattered across stale blog posts and outdated G2 reviews, the model will either skip you or hallucinate something wrong.
Being mentioned with wrong information is worse than not being mentioned at all. A confident, incorrect answer about your pricing or capabilities actively sends buyers to competitors.
The shift: from visibility to answer quality
The mental model needs to flip. Instead of asking "How do we get AI to mention us?", the question should be: "If an AI agent tried to answer every question a buyer could ask about our product, how well would it do?"
Try it right now. Open ChatGPT, Claude, or Perplexity. Ask about your product's pricing. Ask about your integrations. Ask about your compliance certifications. Ask how you compare to your top competitor.
The answers will probably shock you. Not because AI ignores you, but because it confidently gets you wrong. Wrong pricing tiers. Features you deprecated two years ago. Comparisons based on outdated data.
That's the gap. And no amount of "mention optimization" fixes it, because the problem isn't visibility. It's answer quality.
What answer quality actually requires
Improving answer quality isn't a content marketing exercise. It's an infrastructure problem. Here's what we've learned building for this:
Structured, machine-readable information. LLMs don't read your website like a human does. They need structured data: Schema.org markup, FAQ architectures, clear product taxonomies. The more structured your information, the less the model has to guess. We wrote about this in our piece on what AEO actually means in 2026.
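As a sketch of what that structured layer looks like in practice, here is a minimal Schema.org Product record rendered as JSON-LD. The product name, description, and price are hypothetical placeholders; the @context, @type, and Offer fields follow the Schema.org vocabulary.

```python
import json

def product_jsonld(name, description, price, currency="USD"):
    """Render a minimal Schema.org Product as JSON-LD -- a block an
    LLM-facing crawler can parse without guessing."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "Product",
        "name": name,
        "description": description,
        "offers": {
            "@type": "Offer",
            "price": str(price),          # Schema.org prices are strings
            "priceCurrency": currency,
        },
    }, indent=2)

# Hypothetical product for illustration only.
snippet = product_jsonld(
    "ExampleAgent Pro",
    "AI sales agent for mid-market SaaS.",
    79,
)
print(snippet)
```

On a real site this snippet would be embedded in the page head inside a `<script type="application/ld+json">` tag, where crawlers expect to find it.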
First-party endpoints, not scraped pages. The next level beyond structured content is giving AI agents a direct line to your product truth. Not a web page to scrape, but a machine-readable endpoint that returns verified, real-time answers. This is what the Agentic Web specification enables: a /.well-known/mcp endpoint where any AI agent can query your product directly.
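The shape of such an endpoint can be sketched in a few lines. This is only an illustration of the idea, not the Agentic Web specification itself: the response fields, product name, and values below are all hypothetical, and only the /.well-known/mcp path comes from the text above.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical "product truth" record an agent-facing endpoint might serve.
PRODUCT_TRUTH = {
    "product": "ExampleAgent Pro",               # placeholder name
    "pricing": {"tier": "Pro", "monthly_usd": 79},
    "integrations": ["Salesforce", "HubSpot"],
    "updated_at": "2026-01-15T00:00:00Z",
}

def well_known_response(path):
    """Return (status, JSON body) for a request path; only the
    well-known agent endpoint is served."""
    if path == "/.well-known/mcp":
        return 200, json.dumps(PRODUCT_TRUTH)
    return 404, json.dumps({"error": "not found"})

class AgentEndpoint(BaseHTTPRequestHandler):
    def do_GET(self):
        status, body = well_known_response(self.path)
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

# To serve: HTTPServer(("", 8080), AgentEndpoint).serve_forever()
```

The point of the design is that an agent gets a verified JSON answer from you directly, instead of reconstructing one from scraped marketing pages.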
Real-time accuracy. Training data is months old. If your pricing changed last quarter, the model doesn't know. If you launched a new integration last week, it doesn't exist in AI's world. The only way to solve this is to provide live endpoints that AI can query in real time instead of relying on what it memorized from your blog six months ago.
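One way to make "live" explicit to a querying agent is to stamp every answer with freshness metadata, so the agent can prefer it over whatever it memorized in training. A minimal sketch, assuming hypothetical payload fields; the Cache-Control header is standard HTTP caching semantics:

```python
import json
from datetime import datetime, timezone

def live_answer(payload):
    """Wrap a live answer with freshness metadata: a retrieval timestamp
    in the body and a short max-age so agents re-query rather than cache."""
    body = dict(payload, retrieved_at=datetime.now(timezone.utc).isoformat())
    headers = {
        "Content-Type": "application/json",
        "Cache-Control": "max-age=60",   # stale after one minute
    }
    return headers, json.dumps(body)

# Hypothetical pricing payload for illustration.
headers, body = live_answer({"pricing": {"monthly_usd": 79}})
```

An agent reading `retrieved_at` and a 60-second max-age knows it is looking at current data, not a six-month-old blog snapshot.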
Progressive disclosure. Not every question deserves the same depth of answer. Anonymous queries get overview information. Qualified buyers get specifics. This is what we built into the Intelligent Front Door concept: treating every AI interaction as a qualified conversation, not a one-size-fits-all data dump.
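The tiering above can be sketched as a simple dispatch on what you know about the caller. The tiers, fields, and prices here are hypothetical placeholders, not a real pricing schema:

```python
# Progressive disclosure sketch: the same question gets more detail as the
# caller becomes more qualified.
PRICING = {
    "overview": {"starts_at_usd": 79},
    "detail":   {"starts_at_usd": 79,
                 "tiers": {"Pro": 79, "Scale": 249}},
    "full":     {"starts_at_usd": 79,
                 "tiers": {"Pro": 79, "Scale": 249},
                 "volume_discounts": "custom above 50 seats"},
}

def answer_pricing(caller):
    """Pick a disclosure level from what we know about the caller."""
    if caller.get("verified_buyer"):
        return PRICING["full"]       # qualified buyers get specifics
    if caller.get("identified"):
        return PRICING["detail"]
    return PRICING["overview"]       # anonymous agents get the overview only
```

The same principle generalizes past pricing: every answer surface decides how much to reveal based on who, or what, is asking.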
The infrastructure layer most companies are missing
Here's the thing Perplexity's Dwyer is hinting at that most marketers haven't internalized yet: AI search is personalized in ways that make traditional optimization nearly impossible. The same query from two different users produces different results. You can't A/B test your way into a consistent AI mention because there is no consistent result to optimize for.
What you can control is the quality and structure of the information you make available. If your product data is clean, structured, real-time, and directly queryable, it doesn't matter how the model personalizes the result. Your information is correct regardless of who's asking.
This is an infrastructure problem, not a marketing problem. And it's why we've been building the stack we have: LLM Optimizer for edge-level response quality, Agentic Web endpoints for direct agent-to-company communication, and structured data layers that give models the ground truth they need.
The companies that will win this
The next wave of B2B winners won't be the ones who figured out how to game AI mentions. They'll be the ones who made it impossible for AI to get them wrong.
That means investing in answer quality over mention quantity. It means treating AI agents as a first-class audience for your product information. It means building the infrastructure so that when a buyer asks any AI assistant about your product, the answer is accurate, current, and complete.
Dwyer is right. You can follow fake numbers, or you can build great things. As a CTO, I know which bet I'm making. The companies that focus on answer quality now will own the AI-mediated buying conversation for years to come. The ones still chasing mentions will wonder why the leads stopped converting.
Stop optimizing for mentions. Start answering well.
