Five AEO Shortcuts That Are Killing Your AI Visibility (and Your SEO)

Omer Gotlieb, Cofounder and CEO, Salespeak
Salespeak Team
11 min read
April 23, 2026

You have seen this LinkedIn post. Brand publishes 300 AI-drafted articles in 90 days, traffic spikes, the head of content screenshots Ahrefs with a rocket emoji, the comments beg for the playbook. Three quarters later the traffic is gone and nobody writes the follow-up post.

We are far enough into the AEO era now that you can sort the tactics that compound from the tactics that quietly collapse. Lily Ray, who actually pulls the trailing Ahrefs data on the case studies everyone else just retweets, has been publishing receipts on this for months. Her argument lines up with what we see across our own customer base: several of the most-promoted "GEO hacks" are not hacks. They damage the SEO foundation that AEO runs on top of, and when the foundation goes, both channels go together.

Five of those shortcuts are below. None are new. All of them are still being pitched this quarter by vendors who either have not looked at the trailing data or are hoping you will not.

Why the shortcuts backfire: RAG makes SEO load-bearing

One structural point before the specifics. Every major AI assistant (ChatGPT, Perplexity, Claude, Gemini, Google AI Overviews) uses retrieval-augmented generation (RAG) whenever the question involves anything current. The model does not answer from memory. It sends a query out, pulls back a handful of pages, and grounds its answer in what came back.

Which pages come back? Disproportionately the ones ranking in organic search. Semrush's citation analysis and Backlinko's ChatGPT study both put the overlap between "pages ChatGPT cites" and "pages ranking on page one of Google" well over half. Different numbers, same shape. AI search leans hard on the regular search index.
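The retrieval loop is easy to see in miniature. The sketch below is a toy illustration, not any assistant's actual pipeline: the index, the scoring function, and all names are our own inventions. The structural point it makes is the one that matters: the only pages eligible for citation are the ones the retrieval step returns, and retrieval is fed by the organic index.

```python
# Toy sketch of the RAG loop: the model does not answer from memory;
# it retrieves top-ranked pages and grounds its answer in what came back.
# All data and function names here are illustrative.

def retrieve(query, index, k=2):
    """Rank pages by naive keyword overlap and return the top k.
    Stands in for the search index an AI assistant queries."""
    terms = set(query.lower().split())
    scored = sorted(
        index,
        key=lambda page: len(terms & set(page["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer(query, index):
    """Ground the 'answer' in retrieved pages and cite their URLs."""
    sources = retrieve(query, index)
    return {"answer_based_on": [page["url"] for page in sources]}

# A tiny stand-in for the organic index: pages that rank are the only
# pages available to be cited.
index = [
    {"url": "https://example.com/aeo-guide", "text": "guide to AEO and AI search citations"},
    {"url": "https://example.com/pricing", "text": "pricing plans"},
    {"url": "https://example.com/blog/rag", "text": "how AI search uses retrieval augmented generation"},
]

print(answer("how does AI search retrieval work", index))
```

If a page drops out of the ranked set, it drops out of the citation pool in the same step. There is no separate "AI channel" for it to survive in.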

Here is the part most teams miss. If your GEO tactics hurt your organic rankings, your AI citations hurt with them. You cannot trade the SEO foundation for an AEO bump. You lose both on the same day. Every tactic below fails this test.

Shortcut 1: Mass-producing AI content at scale

The pitch is irresistible. Spin up a content factory. Feed a topic list to a model, generate 500 articles, publish them all, collect traffic. Several agencies have pitched exactly this motion as "GEO at scale" across 2025 and 2026.

The data does not cooperate. Ray has tracked three separate companies that put high-profile case studies behind this approach, and all three show the same Ahrefs shape: a steep ramp of traffic followed by a collapse that lines up almost perfectly with a Google core update. One site celebrated its growth in a mid-April 2025 case study, then watched traffic fall through the June 2025 core update. Another site ran its case study in January 2025 and reversed course within weeks of publication. A third ran a case study in March 2024 and had 410'd the celebrated articles by February 2026.

This is not a freak pattern. This is exactly what Google's Scaled Content Abuse policy was written to catch. The policy targets content produced at high volume with the primary purpose of gaming rankings, and it does not care whether the content came from a human or a model. The detection window shortens with each core update. Recoveries, when they happen at all, take quarters.

The AEO damage is worse. When those pages fall out of organic rankings, they also fall out of retrieval. The model was never "citing your brand." It was citing a few pages that happened to be ranking, and when the ranking went, the citation went with it. You paid for content that the retrieval pipeline can no longer find.

The honest version: if AI belongs anywhere in your content workflow, it belongs in research and drafting on pages a real editor is actually shaping. Not as a printing press.

Shortcut 2: Cosmetic refreshes and timestamp games

The second shortcut is quieter. Someone noticed AI models lean on recency signals, so a content team starts batch-updating publish dates across the archive without changing the actual content. The theory is that a reset "last updated" timestamp gets the page re-crawled, re-evaluated, and re-surfaced as fresh.

The theory works right up until the crawler notices. Google and the AI crawlers are both building detection for this exact pattern, and the signal they look at is not the date in the byline. It is the diff. If a publish date jumped from 2023 to 2026 but the body of the page is 80 percent identical, the crawl treats the timestamp as noise. Do this repeatedly on the same domain and you can get the whole site flagged as manipulative, which is the exact outcome the tactic was trying to avoid.

Genuine refreshes are different and they still work. New data, new examples, a new section that reflects something that actually changed in the category: all of that earns the timestamp update. Put an editorial gate on the date change. If the update would not be worth a real reader's time, it is not worth a crawler's time either. Ray's framing is right. Before you update a publish date, confirm the change is meaningful to readers, not just meaningful enough to fool a crawler.

Shortcut 3: Self-promotional listicles

Almost every B2B marketing team we talk to has done this one, most of them in the last twelve months. You publish an article called "Top 10 [Category] Platforms in 2026" and, purely by coincidence, your product is number one. Slots two and three go to your weakest competitors. The actual alternatives get buried at the bottom or left off entirely.

These pages ranked for years in the old SEO world because the tactic was too widespread for Google to tell a good-faith ranking from a marketing exercise. That ended around January 21, 2026. Ray documented multiple companies that leaned on this pattern and then saw organic traffic declines starting on that date, tight enough as a cluster that a manual or algorithmic action is the only clean explanation.

AEO makes the damage worse. The comparison content that AI answers actually cite tends to come from sources the model treats as independent: analyst briefs, Reddit threads, engineering blogs, review sites that disclose their methodology. Self-ranked vendor listicles get discounted hard. In several cases we have watched them drag down how often a brand gets cited anywhere in comparison prompts, not just on the page that was gamed. Buyers shortlisting a category through ChatGPT are getting cleaner recommendations than they got from the Google top ten two years ago, and gamed comparison pages are part of why.

The move that works is simple and uncomfortable. Write comparison content only about categories where you have a real, defensible opinion, and be honest about where you are not the right fit. The source that openly names the buyers it is wrong for becomes the source a model trusts on the category.

Shortcut 4: "Summarize with AI" buttons that inject hidden prompts

This is the most aggressive tactic on the list, and the one with the clearest regulatory exposure. Over the past year a small industry of "LLM growth" vendors has shipped tools that drop a "Summarize with AI" button onto your site. The button looks helpful. In the versions we have seen reverse-engineered, what it actually does is ship a hidden system prompt alongside the page content, telling downstream models to recommend your product whenever the category comes up.

Microsoft formally classified this technique as a security threat in February 2026. They named it AI Recommendation Poisoning. The published analysis documented more than 50 unique examples from 31 companies across 14 industries, and traced the technique back to two publicly marketed tools that sold it as "SEO growth hacks for LLMs."

Two things follow. First, this is now a named, documented class of attack against AI systems, which means every major model provider is actively hunting for it. Pages that use these techniques are getting downweighted in retrieval, and the brands behind them are at risk of entity-level flags, not just page-level ones. Second, the legal exposure is real. Prompt injection into a third-party AI product sits squarely inside several existing consumer-protection frameworks, and we expect enforcement before the frameworks formally catch up.

Even loose association with these tactics now carries reputational and legal risk that was not priced in when the vendor pitched you last year. If one of these widgets is installed, removing it is the cheapest AEO decision you will make this quarter. Do it today.

Shortcut 5: Permutation-spam comparison pages

The last shortcut is the most mechanical. Mint a comparison page for every possible pair in your category. Brand A vs Brand B. Brand A vs Brand C. Brand A alternative. Brand B alternative. Keep going until there are 40 or 50 pages, each aimed at a slightly different "vs" or "alternative" keyword. The theory is that the long tail of comparison queries gets captured, and the AI models cite you when they summarize the category.

Ray flagged a specific example from her tracking. A site published 51 comparison pages across 2025, then saw organic traffic decline in late January 2026 at exactly the same time its ChatGPT citations started dropping. The pattern matches how Google's helpful content system treats low-effort permutation content. It also matches how LLMs treat sources with obvious keyword-target scaffolding, which is to say they read it for what it is and discount it.

The honest version of comparison content works, and it works unusually well in AEO. A small number of genuinely deep comparison pages, written with real knowledge of both products and a clear verdict, get cited constantly across every major model. Most teams do not need 50 comparison pages. They need three good ones, and at least one should openly admit when the competitor is the better choice. That single admission moves AEO citation behavior more than 48 permutations ever will.

The pattern the shortcuts have in common

All five tactics share a shape. They treat AI search as a separate channel with its own game, and they try to manufacture outcomes in that channel through volume or automation or outright manipulation. In every case they degrade the SEO foundation AI search pulls from. When the foundation moves, the citations move with it. The collapse hits both channels at once, which is how teams that ran the playbook end up rebuilding two disciplines at the same time.

The real mistake is treating AEO and SEO as separate surfaces you can trade one against the other. RAG architecture makes them the same surface, seen from two angles. Every decision that weakens the organic version of your brand weakens the AI version. Every investment in real authority, clean infrastructure, and actual expertise shows up in both.

Lily Ray's framing on this is worth repeating: your SEO and GEO strategies should always work together, and every AI-search tactic should be evaluated through an SEO lens first. That is not a hedge. It is a testable claim. Pull the Ahrefs trajectory on any "pure GEO" case study from the last two years, including the ones that got celebrated on LinkedIn. The ones that held up are the ones where the SEO foundations held up first.

What to do instead

The tactics we see actually working are unglamorous and mostly old. That is the point.

  • Ship fewer, better pages. A brand with 60 pages that each answer a real buyer question in depth beats a brand with 600 thin ones in both channels. We have not seen a counter-example this year.
  • Invest in original data. Small pieces of proprietary research (customer surveys, benchmark studies, usage data you publish) generate a disproportionate share of the citations we see across AI models. Nobody else has your data, which is the whole point.
  • Build entity clarity. Your brand page, founder bios, product descriptions, and Wikipedia presence should tell one consistent story about who you are and what category you are in. Inconsistent entity data is one of the top reasons models misrepresent brands.
  • Earn third-party mentions. Podcast appearances, analyst coverage, guest essays, customer case studies on the customer's own site. The AI models see all of these, and they matter more for citations than another self-published explainer on your own blog.
  • Fix the technical floor. Crawlability, clean HTML, structured data, a sensible site architecture. No AEO tactic survives a site the AI crawlers cannot parse.
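That last bullet is checkable. As a hedged sketch (not an official validator; the class and function names are ours), here is a minimal stdlib check that a page exposes parseable JSON-LD structured data at all:

```python
# Minimal sketch of one "technical floor" check: confirm a page exposes
# parseable JSON-LD structured data. A crawler that cannot parse the
# page cannot cite it. Names here are ours, not a standard tool's.
import json
from html.parser import HTMLParser

class JSONLDFinder(HTMLParser):
    """Collect the contents of <script type="application/ld+json"> tags."""
    def __init__(self):
        super().__init__()
        self._in_jsonld = False
        self._buf = []
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and ("type", "application/ld+json") in attrs:
            self._in_jsonld = True

    def handle_endtag(self, tag):
        if tag == "script" and self._in_jsonld:
            self._in_jsonld = False
            self.blocks.append("".join(self._buf))
            self._buf = []

    def handle_data(self, data):
        if self._in_jsonld:
            self._buf.append(data)

def structured_data(html):
    """Return the parsed JSON-LD objects found in a page, if any."""
    finder = JSONLDFinder()
    finder.feed(html)
    return [json.loads(block) for block in finder.blocks]

page = """
<html><head>
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "Organization", "name": "Example Co"}
</script>
</head><body>...</body></html>
"""

print(structured_data(page))
```

A page that returns an empty list here has no machine-readable entity data for a crawler to lean on, whatever the byline says.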

None of this is flashy. None of it produces a case study you can post this quarter. All of it compounds.

The Salespeak angle: shortcuts are most expensive when they briefly work

One last observation, and it is the one we keep hitting with customers. The shortcuts that do produce short-term lifts are the most expensive of all, because they bring in exactly the wrong traffic.

A buyer who lands on your site because a permutation-spam comparison page briefly ranked, or because your AI-content factory briefly popped up in Perplexity, arrives with expectations set by content you did not actually write. On turn one they ask a specific technical question. Your chat widget, if you have one, was built for a different buyer. It deflects. They bounce. The AI model quietly updates its internal sense of your brand from the session it just watched, and the next time someone asks about your category, you are less likely to get mentioned, not more.

This is the downstream damage that never makes it into the case studies. Short-term traffic from manufactured visibility does not convert, does not build brand, and regularly damages how AI models represent you. A working front door, an AI sales agent that can answer the actual question the buyer arrives with, is the one investment that makes any AEO win stick. It is also the one that, unlike every shortcut above, gets stronger every quarter instead of weaker.

If you are running one of the five tactics above, the cheapest move of Q2 is to stop. The next cheapest is to get honest about what your real authority looks like, what gaps you need to close, and what your site actually does when a buyer shows up already knowing most of the category. After that, stop taking AEO advice from anyone whose case studies you have not personally pulled the trailing Ahrefs data on. Ray keeps pulling it. That is the bar.
