Methodology

How we measure AI Search Visibility, and what we cannot promise.

Every Velox Digital AI Search Visibility check (free, $9, $19, and the $199 Sprint baseline + recheck) runs against a defined prompt set, a known engine list, and a documented scoring approach. This page is the public, honest version of the methodology. It is also linked from every results email, the Sprint page, and our /llms.txt file so the AI engines themselves can read it.

Velox Digital makes no ranking or citation guarantees on any tier. AI engines change their citation behavior frequently and without notice. The methodology measures what is observable today and tracks whether implementation work moved the needle. Nothing more.

What each tier runs

Five tiers, five different scopes.

Same definition of "visibility" across all tiers. Different prompt count, engine count, and analysis layer. Pick the smallest tier that answers your actual question.

| Tier | Engines | Prompts | Run count | Analysis layer |
| --- | --- | --- | --- | --- |
| Free Check ($0) | 1 (Perplexity Sonar Online) | 5 buyer-intent prompts | 5 calls / scan | Mention / no-mention summary, top competitors named |
| Multi-engine ($9) | 6: Claude, GPT-4o, Gemini, Perplexity, Grok, Cohere (all live-web) | 10 buyer-intent prompts | 60 calls / scan | Per-engine mention table, raw output, no interpretation |
| Multi-engine + Brief ($19) | Same 6 as $9 | Same 10 as $9 | 60 calls + 1 analysis pass | Everything in $9, plus a 1-page analysis brief: competitor map, top 3 entity / content gaps, recommended next step |
| Sprint ($199) | Same 6 as paid scans | 25-prompt baseline | 150 calls baseline + 150 calls T+30 recheck | Full implementation: schema, FAQ, content, entity graph, citations + before / after delta on the same prompts |
| Foundation Build ($299+) | Sprint methodology baked into the build | Sprint scope | Sprint scope | A new website built AEO-ready by default; the Sprint workflow runs against the launched site at no additional Sprint charge |

Engine slugs (paid tiers, OpenRouter routing): anthropic/claude-3.5-sonnet:online, openai/gpt-4o:online, google/gemini-pro-1.5:online, perplexity/llama-3.1-sonar-large-128k-online, x-ai/grok-2-1212:online, cohere/command-r-plus:online. The :online suffix forces every model to query Exa for live web search before answering. No parametric-memory-only responses.
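
To make the routing concrete, here is a minimal sketch of a single grounded call through OpenRouter's OpenAI-compatible chat-completions endpoint, using one of the slugs above. The prompt text and the OPENROUTER_API_KEY environment variable are placeholders for illustration; this is not our production scan code.

```python
import os
import requests

# Minimal sketch: one buyer-intent prompt against one grounded engine slug.
# The :online suffix routes the model through live web search before it answers.
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"  # OpenRouter's documented endpoint
MODEL = "anthropic/claude-3.5-sonnet:online"                      # one of the paid-tier slugs above

resp = requests.post(
    OPENROUTER_URL,
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": MODEL,
        "messages": [
            # Example prompt only, not drawn from the real prompt set.
            {"role": "user", "content": "Best emergency plumber in Austin, TX?"}
        ],
    },
    timeout=60,
)
resp.raise_for_status()
answer = resp.json()["choices"][0]["message"]["content"]
print(answer)
```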

Definitions

What each term in your report means.

Mention
Your business name (or a documented alias) appears in the engine’s answer text for a given prompt. Detection runs against alias variants you provide at scan-time plus our normalization rules (case, possessives, common abbreviations). A mention is binary (mentioned or not mentioned), not a frequency or sentiment score.
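
For illustration only, mention detection of this kind reduces to a small normalize-and-search routine. The sketch below uses a hypothetical alias list and approximates the normalization rules (case, possessives, punctuation); it is not the production detector.

```python
import re

def normalize(text: str) -> str:
    """Lowercase, strip possessives, and collapse punctuation and whitespace."""
    text = text.lower()
    text = re.sub(r"'s\b", "", text)          # drop possessives ("Joe's" -> "joe")
    text = re.sub(r"[^a-z0-9\s]", " ", text)  # drop remaining punctuation
    return re.sub(r"\s+", " ", text).strip()

def is_mentioned(answer: str, aliases: list[str]) -> bool:
    """Binary mention check: True if any alias appears in the engine's answer text."""
    haystack = normalize(answer)
    return any(normalize(alias) in haystack for alias in aliases)

# Example aliases a client might lock at scan-time (hypothetical business).
aliases = ["Acme Plumbing Co.", "Acme Plumbing", "Acme Plumbers Austin"]
print(is_mentioned("Locals often recommend Acme Plumbing for same-day work.", aliases))  # True
```
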
Citation / source
A URL the engine returns alongside its answer (Perplexity Sonar Online attaches these natively; the OpenRouter :online suffix attaches them via Exa). We extract these per prompt, dedupe, and surface them on your report. A citation is not a guaranteed link to your domain. It is the source the engine drew from.
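
The per-prompt dedupe step is mechanical. A sketch, assuming the citations arrive as a flat list of URLs (the exact response shape varies by engine and is not reproduced here):

```python
from urllib.parse import urlsplit

def dedupe_citations(urls: list[str]) -> list[str]:
    """Collapse citation URLs to host + path, dropping query strings and duplicates."""
    seen, out = set(), []
    for url in urls:
        parts = urlsplit(url)
        key = (parts.netloc.lower(), parts.path.rstrip("/"))
        if key not in seen:
            seen.add(key)
            out.append(f"{parts.scheme}://{parts.netloc}{parts.path}")
    return out

print(dedupe_citations([
    "https://example.com/plumbers/?utm_source=x",
    "https://example.com/plumbers/",
    "https://another-site.com/best-plumbers",
]))  # two unique sources survive
```
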
Competitor mention
A business name extracted from the engine’s answer that is not yours and looks like a real local business (Title-cased, 2–5 words, not a generic noun, not a city name). Filter rules drop generic categorizers ("Top Picks", "Editor’s Choices") and category lists. Counted across the full prompt set to surface who shows up most often instead of you.
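
A rough sketch of this kind of filter follows; the generic-label and city lists are placeholders, not our version-pinned production rules.

```python
GENERIC_LABELS = {"top picks", "editor's choices", "best options", "honorable mentions"}  # placeholder list
CITY_NAMES = {"austin", "dallas", "houston"}                                              # placeholder list

def looks_like_competitor(candidate: str, own_aliases: set[str]) -> bool:
    """Heuristic filter: Title-cased, 2-5 words, not a generic label, not a city, not the client."""
    words = candidate.split()
    return (
        2 <= len(words) <= 5
        and all(w[:1].isupper() for w in words)
        and candidate.lower() not in GENERIC_LABELS
        and candidate.lower() not in CITY_NAMES
        and candidate.lower() not in {a.lower() for a in own_aliases}
    )

print(looks_like_competitor("Radiant Plumbing Services", {"Acme Plumbing"}))  # True
print(looks_like_competitor("Top Picks", {"Acme Plumbing"}))                  # False
```
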
Volatility
How much an engine’s answer drifts on the same prompt across runs. AI engines are non-deterministic. Running the same prompt twice can produce different cited businesses. This is a feature of the engines, not a bug in the scan. Sprint rechecks run against the same prompt set 30 days apart so the delta is measurable despite the underlying drift.
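
One simple way to express that drift (illustrative only, not a published Velox metric) is the share of prompts whose mention outcome flips between two runs of the same prompt set:

```python
def volatility(run_a: dict[str, bool], run_b: dict[str, bool]) -> float:
    """Share of shared prompts whose mention outcome flipped between two runs."""
    prompts = run_a.keys() & run_b.keys()
    flips = sum(1 for p in prompts if run_a[p] != run_b[p])
    return flips / len(prompts) if prompts else 0.0

# Hypothetical mention outcomes for the same three prompts, run twice.
run_1 = {"best plumber austin": True, "emergency plumber near me": False, "24 hour plumber austin": True}
run_2 = {"best plumber austin": True, "emergency plumber near me": True,  "24 hour plumber austin": True}
print(volatility(run_1, run_2))  # 0.33 -> one of three prompts flipped
```
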
Limitations

What this methodology cannot do.

We are upfront about what every check reports and what it does not. The same five caveats apply from the free check up through the Sprint.

  1. Point-in-time only.

     Every scan reports what the named engines surfaced at the moment we ran the prompts. AI engines update their citation behavior daily. A scan is a directional snapshot, not a stable ranking.

  2. Non-deterministic by construction.

     Running the same prompt against the same engine twice can return different cited businesses. We mitigate this with a fixed prompt set and consistent re-runs, not by claiming determinism that does not exist.

  3. Not a ranking guarantee.

     No tier (including the Sprint) guarantees you will be cited in any specific AI engine for any specific query. We sell the implementation work and the before / after measurement, not a placement.

  4. Not an SEO substitute.

     AI Search Visibility lives in a different layer of discovery than Google blue-link search. Most of the technical SEO work helps AEO indirectly, but AEO has its own explicit fixes that traditional SEO agencies do not prioritize. Run both.

  5. Closed-model behavior is uncontrollable.

     We cannot directly observe how ChatGPT, Claude, Gemini, or Grok internally weight evidence when generating an answer. The methodology measures the observable output, not the internal reasoning. Some of the implementation work is aligned with documented engine guidance; some is inferred from observed behavior. We are explicit about which is which in the Sprint deliverables.

Recheck rules

How a Day-1 score becomes a Day-30 delta.

The Sprint includes a T+30 re-measurement on the same prompts. To keep the comparison honest and prevent gaming the test mid-flight, every Sprint recheck follows the same five rules.

Same prompt set

The 25-prompt baseline is locked at Day 1 and re-run verbatim at Day 30. No prompt-tuning to make the score look better.

Same engines

Day-30 runs against the same 6 engines as Day 1 (Claude, GPT-4o, Gemini, Perplexity, Grok, Cohere). If an engine is unavailable on the recheck day, we re-run it within 48 hours instead of skipping.

Same aliases

Your alias list (legal name, DBA, common abbreviations) is locked at Day 1. Mention detection at Day 30 uses the same alias rules. No broadening to inflate the number.

Same competitor extraction

The competitor-name filter rules are version-pinned. If we ship a methodology change between Day 1 and Day 30, your scan re-runs against the original version too so the delta is apples-to-apples.

Date-stamped

Every scan record carries the exact run timestamp and the methodology version. Your Day-30 report shows both timestamps + the methodology version pin so you can verify the comparison is valid.
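
A date-stamped, version-pinned scan record can be as small as the sketch below; the field names and version string are illustrative, not the actual report schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ScanRecord:
    """One scan run: what was asked, when, and under which methodology version."""
    business: str
    prompt_set_id: str        # locked at Day 1, reused verbatim at Day 30
    engines: list[str]
    methodology_version: str  # pinned so the Day-1 / Day-30 delta stays apples-to-apples
    run_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

baseline = ScanRecord(
    business="Acme Plumbing",            # hypothetical client
    prompt_set_id="sprint-baseline-25",
    engines=["claude-3.5-sonnet", "gpt-4o", "gemini-pro-1.5", "sonar-large", "grok-2", "command-r-plus"],
    methodology_version="2024.11",       # illustrative version pin
)
print(baseline.run_at.isoformat(), baseline.methodology_version)
```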

FAQ

The methodology questions buyers actually ask.

Why is the free check single-engine and the paid tiers multi-engine?

Cost discipline. Perplexity Sonar Online via OpenRouter (a single grounded engine with live web search and citation extraction) costs roughly $0.025 per 5-prompt check. That keeps the free tier financially sustainable at any reasonable volume. Multi-engine fan-out across 6 engines costs roughly 30 times more per scan, so it is gated behind the $9 and $19 tiers.

Are the AI engines you test using live web search or trained-knowledge memory?

On the free check, the engine (Perplexity Sonar Online) always uses live web search. On the $9 / $19 paid scan, every engine queries with live web search via either OpenRouter's Exa-grounding suffix (for Claude / GPT-4o / Gemini / Grok / Cohere) or native Perplexity grounding. We deliberately do not run any engine in trained-knowledge-only mode for the paid scan, since those answers reflect cached training data, not the current web.

How often do AI engines change what they cite?

Frequently. Industry observation suggests citation behavior on the same query can drift 40–60% month-to-month. That is why the methodology is point-in-time, why every scan is timestamped, and why the Sprint includes a T+30 recheck on the same prompts so you see your delta, not just a snapshot.

Why do you not run more than 25 prompts per scan?

Diminishing returns + cost discipline. Across hundreds of internal scans, the first 10 buyer-intent prompts capture roughly 80% of the citation signal for a small business. Beyond 25, prompt variance dominates and the picture stops getting clearer. The $9 / $19 tiers run 10 prompts × 6 engines = 60 calls. The Sprint baseline + recheck runs 25 × 6 = 150 calls per measurement.

Do you guarantee my visibility score will improve after the Sprint?

No. We sell the implementation work (schema deployed, citations added, content shipped, entity-graph cleanup done) and a re-measurement on the same prompt set so you see whether the work moved the needle. AI engines are non-deterministic and change citation behavior independently of any individual implementation. Most Velox Sprint clients see directional improvement on most prompts, but no engine offers a deterministic ranking signal we could guarantee against.

Is this an SEO substitute?

No. AI Search Visibility and traditional SEO are different layers. Most SEO work helps AEO indirectly (clean URLs, fast pages, accurate metadata), but AEO has its own explicit fixes (FAQPage schema, entity-graph linkage, topic-cluster content) that Google search SEO does not prioritize. Run both. The Sprint does not replace your SEO work.

What about closed-model behavior?

We cannot directly observe or audit how closed-model AI engines (ChatGPT, Claude, Gemini) decide what to cite. The methodology measures the observable output, not the internal weighting. Some implementation work (FAQPage schema, LocalBusiness JSON-LD, public entity-graph corroboration) is well-aligned with documented engine guidance; some is inferred from observed behavior. We are explicit about which is which in the Sprint deliverables.

Where to go next

Diagnosis or treatment.

Diagnosis

Run the free AI Visibility check.

5 prompts, 1 grounded engine, mention / no-mention output, top competitors named. No follow-up unless you ask.

Run the free check

Treatment

Book the $199 Sprint.

30-day implementation: schema, FAQ, content, entity graph, citation work, + T+30 recheck on the same prompts. Optional $49/mo Maintenance after.

Book the $199 Sprint

See /ai-search-visibility for the full overview of the work. See /sample-ai-visibility-report for a redacted demo of what your $19 / Sprint report will look like. See Terms for refund policy and contract terms.