How to fix what ChatGPT says about your company.

Omer Gotlieb
6 min read
May 16, 2026

You asked ChatGPT about your own company and it got something wrong. Old pricing. A market you left two years ago. A competitor described as the leader in a category you built. The frustrating part isn't the error. It's that there's no settings page. You can't log in and correct it. Here's what you can actually do, in the order that works.

Why there's no edit button

ChatGPT isn't a profile you own. When someone asks about your company, the model assembles an answer from whatever it can reach: your website as it was crawled or as it's fetched live, third-party pages that mention you, and patterns baked into training. You don't control the output directly. You control the inputs.

So "fix what ChatGPT says" is really four separate jobs on four separate inputs. Naomi Lurie, Head of Product Marketing at Faros AI, described the starting point well: "Before Salespeak, we felt powerless. We didn't know what LLMs were seeing or how to impact it." Powerless is the right word for the moment before you know the inputs. It stops being the right word once you do.

Step 1: Find out exactly what's wrong

Vague frustration won't help you. You need a list. Open ChatGPT, Perplexity, Claude, and Gemini, and run the questions a real buyer runs:

  • What does [your company] do?
  • How much does [your company] cost?
  • Who are the best alternatives to [a competitor in your category]?
  • Is [your company] a good fit for [your ICP]?
  • [Your company] vs [your top competitor].

Write down every error and tag it by type: a stale fact, a missing fact, or a competitor positioned ahead of you. The tag tells you which of the next three steps to run. Most teams find the problem isn't one thing. It's a stale fact and a missing fact and a ranking issue, each with a different cause.
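If you want the audit to be repeatable rather than a one-off afternoon, a short script helps. Here's a minimal sketch using the OpenAI Python SDK; the company and competitor names are placeholders, and API answers won't always match what the consumer ChatGPT app says (the app can also browse the live web), so keep spot-checking the apps by hand:

```python
# Minimal sketch of the step 1 audit against one model via the OpenAI API.
# COMPANY and COMPETITOR are placeholders; swap in your own names.
from openai import OpenAI

COMPANY = "Acme Analytics"    # placeholder
COMPETITOR = "Rival Corp"     # placeholder

QUESTIONS = [
    f"What does {COMPANY} do?",
    f"How much does {COMPANY} cost?",
    f"Who are the best alternatives to {COMPETITOR}?",
    f"Is {COMPANY} a good fit for a mid-market RevOps team?",
    f"{COMPANY} vs {COMPETITOR}.",
]

client = OpenAI()  # reads OPENAI_API_KEY from the environment

for q in QUESTIONS:
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": q}],
    )
    print(f"Q: {q}\nA: {resp.choices[0].message.content}\n{'-' * 60}")
    # Tag each answer by hand: stale fact, missing fact, or ranking issue.
```

The tagging stays manual on purpose. The script gets you the raw answers fast; deciding what's wrong with them is the part that needs your judgment.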

Step 2: Fix your own pages first

Your website is the source the model treats as most authoritative about you, and the only one you fully own. So it's the highest-leverage fix and the one to do first. The errors that trace back here are almost always structural:

  • Facts trapped where an agent can't read them. Pricing inside a PDF. Compliance badges as images with no text. Key claims rendered only after JavaScript runs. The agent can't extract what it can't parse.
  • No page answers the question at all. If nothing on your site states your pricing model in plain text, the model guesses, and a guess is the error you're reading.
  • Two pages contradict each other. An old landing page says one thing, a new one says another. The agent can't tell which is current and may surface the wrong one.

The fix: put every fact a buyer agent needs into clean, current, plain-text pages, and kill the contradictions. This is what being agent-ready means in practice. It's unglamorous content-ops work, and it moves the needle more than anything else on this list.
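The trapped-facts failure is the easiest one to test for yourself. A plain HTTP fetch doesn't execute JavaScript, so it sees roughly what a simple crawler sees: anything rendered only after JavaScript runs, or locked inside a PDF or an image, won't be in the response. Here's a rough sketch; the URLs and fact strings are placeholders for your own pages and claims:

```python
# Check whether key facts survive in raw HTML, fetched without running
# JavaScript. If a fact is missing here, a simple crawler can't read it.
import requests

PAGES_AND_FACTS = {
    "https://example.com/pricing": ["$49/month", "annual billing"],
    "https://example.com/security": ["SOC 2 Type II"],
}

for url, facts in PAGES_AND_FACTS.items():
    html = requests.get(url, timeout=10).text
    for fact in facts:
        status = "OK" if fact in html else "MISSING from raw HTML"
        print(f"{url}: '{fact}' -> {status}")
```

A substring match is crude, but it catches the common cases: the pricing that only exists in a PDF, the compliance badge that's only an image, the claim that only appears after a client-side render.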

Step 3: Fix the third-party record

Models lean heavily on sources they consider independent: G2, Capterra, Wikipedia, Reddit, review aggregators, old marketplace listings, analyst pages. When one of those carries a stale fact, the model often trusts it over your own site precisely because it looks neutral.

This is slower and you don't fully control it, but the moves are concrete. Update your G2 and Capterra profiles and the data fields inside them. Get outdated marketplace and directory listings corrected. If a Wikipedia entry exists, make sure it's accurate and well-sourced. Where your category is debated on Reddit or in communities, show up as yourself. You can't rewrite the open web, but you can stop feeding it the wrong inputs.

Step 4: Answer the questions no page covers

Here's the gap steps 2 and 3 leave. A buyer agent asks something oddly specific: "Does this work for a 12-person RevOps team in fintech with a HubSpot stack?" No page anywhere answers that exact question. The agent does what it always does with a gap. It infers, or it reaches for whatever adjacent source it can find. That's where a lot of the quiet misrepresentation happens, and no amount of static-page editing closes it, because you can't pre-write every question.

The fix for this one is different in kind. It's a live content layer on your own site. When a buyer agent arrives, Salespeak's LLM Optimizer authors the content and the specific FAQs that answer what the agent is actually checking, from a knowledge base your team governs, and serves that to the agent in real time. The agent reads a more complete and more accurate source than your static pages carried on their own, and everything it reads is true and consistent with what a human sees. You can see how it works at salespeak.ai/control. That's the difference between hoping an agent finds the right page and making sure it reads the right answer while it's evaluating you.

The "it recommends my competitor" problem

This one feels worse than a wrong fact, and it usually has a simpler cause than it seems. When ChatGPT names a competitor ahead of you, it's rarely a verdict on product quality. It's a verdict on legibility. The competitor has clearer pages, stronger third-party presence, and content that answers the comparison question directly. The model can build a confident answer about them and a thin one about you, so it leads with them.

Run steps 2 through 4 and the ranking tends to move on its own, because you've given the model enough clean material to describe you with the same confidence. If you want the direct treatment, write the comparison page yourself, honestly, so the agent has a real source for "[you] vs [competitor]" instead of inferring one.

How long it takes, and how to measure it

Step 1 is an afternoon. Step 2 is a few weeks of content-ops work. Step 3 runs in the background for a month or two. Step 4 is live the day you turn it on. Don't expect the model's answer to flip overnight. AI search has a strong recency bias, which works in your favor once fresh, correct pages exist for it to pull.

Measure it the way you'd measure anything else: re-run the step 1 question set every two weeks and score it. Track which models cite you and for which queries. If you can't see whether the answer is changing, you can't tell the board it's working, and for most marketing leaders right now, being able to show that is the entire point.
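The scorecard can be as simple as a dated CSV you append to after each run and chart over time. Here's a minimal sketch; the file name and example values are placeholders, and the scoring is left manual on purpose, because judging whether an answer is right still needs a human:

```python
# Minimal sketch of a biweekly scorecard: one row per scored answer,
# dated, so you can chart accuracy and citations over time.
import csv
from datetime import date

RESULTS_FILE = "ai_answer_scorecard.csv"  # placeholder path

def log_result(model: str, question: str, answer: str,
               correct: bool, cited: bool) -> None:
    """Append one manually scored answer to the running scorecard."""
    with open(RESULTS_FILE, "a", newline="") as f:
        csv.writer(f).writerow(
            [date.today().isoformat(), model, question, answer, correct, cited]
        )

# Example usage after reviewing one answer by hand:
log_result(
    model="gpt-4o",
    question="How much does Acme Analytics cost?",   # placeholder
    answer="Starts at $49/month, billed annually.",
    correct=True,   # matches current pricing
    cited=True,     # the model cited our pricing page
)
```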
