How to influence an AI buyer while it's still evaluating you.

Most teams asking "how do I influence AI buyers" end up buying a citation tracker. It shows them, in a clean dashboard, exactly how they lost. That's a real thing to know. It is not the thing they asked for. Tracking what an AI said about you is the read path. Changing what it says next is the write path. Almost the entire tool market sells you the first one and lets you assume it's the second.
The evaluation already happened, and you weren't in the room
Here's the motion a B2B buyer runs now. They open ChatGPT or Claude, describe their problem, and ask which vendors fit. The assistant dispatches agents that read a handful of sites, including yours. It synthesizes. It returns a shortlist and a recommendation. The buyer takes that shortlist as a starting point and narrows from there.
That entire evaluation ran without a single human on your side present. No SDR, no demo, no call. Over the past 30 days Salespeak tracked 640,000 AI agent visits across our customer base, and the overwhelming majority were exactly this: agents reading vendor sites on behalf of a buyer who never identified themselves. The question isn't whether this is happening. It's whether you get a vote while it happens.
What "after the fact" tools actually do
AI visibility and citation tools do one job, and most do it well. They monitor. They tell you which models mention you, for which prompts, with what sentiment, ranked against competitors. If you had no idea where you stood, that's a genuine upgrade over flying blind.
But look closely at the timeline. The tool reports on evaluations that have already concluded. It's the box score after the game. You can study it, spot a pattern, write some new content, and hope the next crawl picks it up and the next buyer's evaluation goes better. That loop is real, and it's slow, and it's indirect. You are still not in the room. You're adjusting inputs and waiting weeks to see if the output moved.
That's the wrong question to optimize for. "How do I see what AI says about me" is a monitoring question. "How do I change what AI tells the buyer who's evaluating me right now" is a different question, and no dashboard answers it.
The window almost nobody uses
There's a moment in that motion where you actually have leverage. It's when the buyer's agent is on your site.
Think about what you do and don't control. You don't control ChatGPT's model weights. You don't control how Perplexity ranks a category. You don't own G2. But you completely own your own domain, and that's a place the buyer's agent reliably shows up. 94% of AI agent visits across our customer base go to deep pages: pricing, security, integrations, comparisons. The agent comes to you. That visit is the window.
Most sites waste it. The agent arrives, scrapes whatever static HTML happens to exist, and leaves with whatever it can extract. If your pages don't answer its specific question, it infers, or it fills the gap from somewhere else. You had the agent on your property and said nothing back.
What influencing during the evaluation looks like
Using that window means three things happening on your own site, live:
- Detect the agent in the moment. Recognize that this request is an AI agent acting for a buyer, not a human with a browser and not a threat. That classification is the precondition for everything else.
- Change what it reads. Not the same marketing page a human gets, and not just a markdown reformat of that page either. Author the content and the specific FAQs that answer what the agent is trying to resolve, from a knowledge base you've governed, and serve that to it in real time. If it's checking whether you fit a 12-person RevOps team in fintech, it should read a clear, accurate answer to exactly that, not infer one from a page that never addressed it.
- Capture what it asked. The agent's question is the purest buyer-intent signal your company will ever get. It's the buyer's real problem, in the buyer's words, before any form. Logging it turns an anonymous scrape into demand data.
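The three steps above can be compressed into a single request handler. This is a minimal sketch, not Salespeak's implementation: the agent signature list is a small sample of real crawler user-agent strings, and the handler name, knowledge-base shape, and log format are all assumptions for illustration.

```python
# Illustrative only: a hypothetical request handler showing detect / change / capture.
AGENT_SIGNATURES = (
    "GPTBot", "OAI-SearchBot", "ChatGPT-User",   # OpenAI crawlers and agents
    "ClaudeBot", "Claude-Web",                   # Anthropic
    "PerplexityBot", "Perplexity-User",          # Perplexity
)

def is_ai_agent(user_agent: str) -> bool:
    """Step 1: classify the request as an AI agent, not a human browser."""
    ua = user_agent.lower()
    return any(sig.lower() in ua for sig in AGENT_SIGNATURES)

def handle_request(user_agent: str, path: str, question: str,
                   human_page: str, agent_answers: dict, intent_log: list) -> str:
    """Steps 2 and 3: serve governed answers to agents, and log what they asked."""
    if not is_ai_agent(user_agent):
        return human_page                                # humans get the normal page
    intent_log.append({"path": path, "question": question})  # step 3: capture intent
    # Step 2: answer from a governed knowledge base, not whatever HTML happens to exist.
    return agent_answers.get(path, human_page)
```

In a real deployment this dispatch would live in server middleware, keyed on request headers rather than a passed-in string, but the shape is the same: one classification gate, one governed content source, one intent log.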
That's the write path. You're not editing ChatGPT. You're making sure that when an agent forms its view of you, the input it gets from your own site is complete, correct, and current: not a reformat of pages that were missing the answer, but content and FAQs authored to answer what the agent actually asked. This is what we call Dynamic Agent Optimization: the live equivalent of AEO, covering the questions no published page ever answered. It's what Salespeak's LLM Optimizer is built to do, and you can see it at salespeak.ai/control.
"Isn't this just a chatbot?"
Fair question, and no. A chat widget waits for a human to click it and renders a conversation in a browser. An AI agent never clicks anything and never renders anything. It requests pages and parses responses. A widget is invisible to it. Answering agents is a separate capability that operates at the request layer, for a visitor that has no cursor. Same site, different audience, different machinery. One doesn't replace the other.
The honest scope: this doesn't rewrite the open web for you, and it doesn't fix a stale G2 listing. It governs the one surface you own, which happens to be the surface the buyer's agent visits most. Pair it with the slower third-party work and you've covered the evaluation from both sides.
One more honest question this raises: isn't authoring content for agents a form of cloaking? No. Cloaking is showing machines different claims than humans, to game a ranking. What you serve the agent is true, governed by your team, and consistent with your human pages. It's a more complete answer to a specific question, not a different story. The agent and the buyer end up with the same understanding of what's true. The agent just gets there directly.
What to do this week
Step 1: ask ChatGPT and Claude to evaluate your category as if you were a buyer. Note where the agent clearly guessed about you. Each guess is a question your site failed to answer.
Step 2: check whether your pricing, security, and fit details are in clean text an agent can parse, or trapped in PDFs, images, and post-JavaScript renders.
Step 3: decide, honestly, whether your current stack only watches the evaluation or actually answers during it. If everything you've bought reports on the past, you've instrumented the read path and left the write path empty. AI-referred visitors convert at 4.4x the rate of traditional organic traffic. That's the prize for being in the room. A dashboard can't collect it for you.
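Step 2 can be sanity-checked with a short script: read a page the way a non-rendering agent does, with no JavaScript execution, and see which key facts survive. The tag-stripping below is a rough approximation of agent extraction, and the fact list is whatever you choose to audit; in practice you would fetch the live page (for example with `urllib.request.urlopen`) and pass its raw HTML in.

```python
import re

def agent_visible_text(html: str) -> str:
    """Approximate what a non-rendering agent extracts: drop script/style bodies, then tags."""
    html = re.sub(r"(?is)<(script|style)\b.*?</\1>", " ", html)
    return re.sub(r"(?s)<[^>]+>", " ", html)

def missing_facts(html: str, facts: list) -> list:
    """Return the facts an agent could NOT find in the pre-render HTML."""
    text = agent_visible_text(html).lower()
    return [f for f in facts if f.lower() not in text]
```

A fact that only appears after JavaScript runs, or only inside an image or PDF, will show up in the missing list, and that is exactly the gap the buyer's agent fills by inference.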


