AI Search & SEO — What Marketers Should Really Be Asking

Posted by Sean on June 30, 2025

When someone claims, “I searched this in ChatGPT and we didn’t show up,” it’s tempting to shrug it off. But in the messy world of AI-powered search, those offhand comments can spark legitimate concerns — and they need a structured response.

🔍 Why Asking the Right Questions Matters

AI chat tools like ChatGPT, Perplexity, Gemini, and Copilot are gaining visibility fast, yet they still account for only around 3% of total web search traffic. By contrast, Google holds over 90% of the global market (and a similar share in the UK), with Bing at just 3–4%. That means marketing should still prioritise traditional SEO. AI chat tools are interesting and worth watching, but they are nowhere near replacing Google or Bing in volume.

Still, someone asking “why don’t we appear in ChatGPT?” deserves a decent response. The same tool and the same phrasing can produce very different results depending on a few key variables (the sketch after this list shows a couple of them in action):

  • Prompt wording
  • Follow-up queries
  • Model version and training cut‑off
  • Any stored memory or past chat sessions
  • Plugins, retrieval tools, or browsing enabled
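
To see how much those variables matter in practice, here’s a minimal sketch. It assumes the official OpenAI Python SDK and an API key in the environment; the prompt and model names are illustrative, not a recommendation:

    # Same prompt, different model version and sampling settings.
    # Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
    from openai import OpenAI

    client = OpenAI()
    prompt = "What are the best agencies for technical SEO in the UK?"

    for model in ("gpt-4o", "gpt-4o-mini"):    # two model versions
        for temperature in (0.2, 1.0):         # low vs high randomness
            response = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": prompt}],
                temperature=temperature,
            )
            answer = response.choices[0].message.content
            print(f"--- {model} @ temperature={temperature} ---")
            print(answer[:300])                # first 300 chars, for comparison

Run it a few times and each combination will usually produce a different list of names. That variability is exactly why a single screenshot from a colleague proves very little on its own.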

So instead of guessing, we ask the right questions.


🧠 Questions to Ask When Auditing an AI Search Claim

Here’s a quick checklist to help surface the right context before we jump to conclusions (there’s a small logging sketch after the list if you want to record the answers somewhere consistent):

  1. Which AI tool did they use?
    ChatGPT, Claude, Perplexity, Gemini, Harvey… If they mention self‑hosted tools (e.g. Ollama, DeepSeek) or say things like “13b model,” it’s best to pass it to someone technical.

  2. What model version was used?
    GPT‑4, GPT‑4o, Claude 3.5, Gemini 1.5, etc.

  3. What was the initial prompt?
    The first thing they typed or said. Get a copy if possible.

  4. Were there follow-up questions?
    AI is conversational — one follow-up can change the whole thread.

  5. Was there any stored memory or chat context?
    If they’re not sure, they can ask the AI:

    “What context or memory are you using for this answer?”

  6. Were there any system settings or preferences active?
    Things like tone of voice, safe-search filters, or restricted sources.

  7. Was it a fine‑tuned or company-trained model?
    If yes — that’s a different game entirely. Flag it to a technical contact.

  8. Can they share the full transcript or a chat link?
    Most mainstream tools (ChatGPT included) support shareable chat links.
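
If it helps to capture those answers consistently, here’s a minimal sketch of an audit record. The field names are my own, mapped to the eight questions above; they’re not any standard schema:

    # Lightweight audit record for an AI-visibility claim.
    # Field names are illustrative; adapt to whatever your team already logs.
    from dataclasses import dataclass, field

    @dataclass
    class AISearchAudit:
        tool: str                                 # 1. e.g. "ChatGPT", "Perplexity"
        model_version: str                        # 2. e.g. "GPT-4o", "Claude 3.5"
        initial_prompt: str                       # 3. the exact first prompt
        follow_ups: list[str] = field(default_factory=list)  # 4. later questions
        memory_or_context: str = "unknown"        # 5. what the AI said it used
        settings_notes: str = ""                  # 6. tone, filters, sources
        fine_tuned: bool = False                  # 7. flag for a technical contact
        transcript_link: str = ""                 # 8. shared chat URL, if any

    audit = AISearchAudit(
        tool="ChatGPT",
        model_version="GPT-4o",
        initial_prompt="Who are the top UK accountancy firms?",
        follow_ups=["What about firms in Manchester?"],
    )
    print(audit)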

⚙️ Model Size ≠ Output Quality

Marketers sometimes hear terms like “7b,” “13b,” or “70b” and assume bigger = better. That’s not quite right.

Those numbers are parameter counts in billions: a “7b” model has roughly seven billion parameters, the internal “knobs” the model adjusts during training to represent language. A 70b model has several times more of them than a 13b one, but that alone doesn’t guarantee better answers.

Think of it like engine size: more power doesn’t always mean a better ride — it depends on the training, tuning, and context.
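
If you want a concrete feel for what those parameter counts imply, here’s a back-of-the-envelope sketch. It assumes 16-bit weights (2 bytes per parameter) and ignores runtime overheads, so treat the numbers as rough:

    # Rough memory needed just to hold a model's weights in 16-bit precision.
    # Real deployments vary (quantisation, caches, etc.), so this is indicative.
    BYTES_PER_PARAM_FP16 = 2

    for label, params in [("7b", 7e9), ("13b", 13e9), ("70b", 70e9)]:
        gigabytes = params * BYTES_PER_PARAM_FP16 / 1e9
        print(f"{label:>4}: ~{gigabytes:.0f} GB for weights alone")

    # Prints roughly: 7b ~14 GB, 13b ~26 GB, 70b ~140 GB.

In other words, the parameter count mostly tells you what the model costs to run, not how good its answers are; a well-tuned 7b model trained on relevant data can beat a generic 70b one at a specific task.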


✅ Why This All Matters

  • Google has begun surfacing AI activity in Search Console, but only for AI Overview impressions, and the data is folded into overall search performance rather than broken out. It’s early days, and far from comprehensive.
  • There are now a few unofficial tools that let you peek into Gemini or ChatGPT behaviour — but they’re often fragile, manually operated, and not scalable.
  • Crucially: AI models don’t retrieve results like search engines. They generate responses based on training data, prompts, and model behaviour, not a ranked list of pages (the toy sketch below makes the difference concrete).
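
Here’s that toy contrast. Both functions are deliberately simplistic stand-ins, nothing like real engine or model internals, but they show the structural difference between ranking stored pages and sampling new text:

    # Toy contrast: retrieval returns a ranked list of existing pages;
    # generation samples fresh text from learned probabilities.
    import random

    PAGES = {
        "example.com/seo-guide": "technical seo guide for marketers",
        "example.com/ai-search": "how ai search tools answer questions",
        "example.com/pricing":   "agency pricing and services",
    }

    def rank_pages(query: str) -> list[str]:
        """Search-engine style: score stored pages, return a ranked list."""
        words = query.lower().split()
        scored = [(sum(w in text for w in words), url) for url, text in PAGES.items()]
        return [url for score, url in sorted(scored, reverse=True) if score > 0]

    def generate_answer() -> str:
        """LLM style: build an answer word by word from (toy) probabilities."""
        next_words = {
            "the": ["best", "top"],
            "best": ["tools", "agencies"],
            "top": ["tools", "agencies"],
            "tools": ["include..."],
            "agencies": ["include..."],
        }
        word, answer = "the", ["the"]
        while word in next_words:
            word = random.choice(next_words[word])  # sampling is the variability
            answer.append(word)
        return " ".join(answer)

    print(rank_pages("ai search guide"))  # same ranked URLs every run
    print(generate_answer())              # changes from run to run

The ranked list is reproducible; the generated answer is not. That asymmetry is why “where do we rank in ChatGPT?” has no stable answer in the way “where do we rank in Google?” does.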

📌 Bottom Line for Marketers

  • Traditional SEO still matters most — it powers both human and AI understanding.
  • Treat AI visibility checks as qualitative research, not a KPI.
  • If someone flags something from an AI chat, ask for the right context before jumping in.
  • And if they say “7b” like it’s gospel — smile, and tell them to come talk to you.