SEO gives you a rank tracker. AEO gives you... nothing native. No Perplexity position monitor. No ChatGPT citation dashboard. No platform-provided signal that tells you whether last month's work moved the needle.
This is the most common frustration for B2B teams that start doing AEO work: how do you know if it's actually working?
The answer is five signals — each measuring a different part of the causal chain from "we fixed things" to "buyers are finding us in AI answers." Here's the framework.
The Core Problem: No AEO Leaderboard
Unlike Google, where a rank tracker shows you position changes for target keywords, AI search platforms don't expose a native ranking or visibility signal. Perplexity doesn't have a Search Console. ChatGPT doesn't have a Webmaster Tools. Claude doesn't expose API data about citation frequency.
This doesn't mean AEO is unmeasurable — it means measurement requires a deliberate framework rather than a plug-in tool.
Signal 1: Citation Rate (Primary Metric)
What it measures: The percentage of probe queries where your brand URL is cited in AI responses.
How to track it: Run a consistent set of 20–30 queries monthly on Perplexity, Claude, and ChatGPT Browse. Record how many return a citation to your domain. Divide by total queries. This is your citation rate per platform.
What success looks like:
- Month 1 after fixing technical issues: citation rate should begin moving above your baseline (even a 2–5 percentage point increase is meaningful movement)
- Month 3: citation rate should be 5–10 points above your pre-optimization baseline
- Month 6: citation rate should be clearly trending upward, with specific query categories showing strong performance
What failure looks like: Flat or declining citation rate despite technical fixes. This means technical changes alone aren't enough — content or corroboration gaps need attention.
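A spreadsheet is enough to track this, but the calculation is also trivial to script. A minimal sketch, assuming you log one row per probe query; the `probe_results` structure and the platform labels are illustrative, not from any particular tool:

```python
from collections import defaultdict

def citation_rate(probe_results):
    """probe_results: list of (platform, query, cited) tuples,
    where cited is True if the AI answer linked our domain."""
    totals = defaultdict(int)
    hits = defaultdict(int)
    for platform, _query, cited in probe_results:
        totals[platform] += 1
        if cited:
            hits[platform] += 1
    # Citation rate per platform, as a percentage of probe queries run there
    return {p: round(100 * hits[p] / totals[p], 1) for p in totals}

# Hypothetical month of probes: 3 on Perplexity (1 cited), 1 on ChatGPT (cited)
results = [
    ("perplexity", "best aeo tools", True),
    ("perplexity", "what is aeo", False),
    ("perplexity", "aeo audit checklist", False),
    ("chatgpt", "best aeo tools", True),
]
print(citation_rate(results))  # {'perplexity': 33.3, 'chatgpt': 100.0}
```

Storing one such result set per month gives you the month-over-month comparison directly.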
tryansly.com runs 31 automated citation probes and tracks citation rate historically, making month-over-month comparison straightforward.
Signal 2: AEO Score Improvement
What it measures: Your technical AEO readiness across 7 signal categories — a leading indicator for future citation performance.
How to track it: Run a full AEO audit at baseline and monthly thereafter. Track your overall score and per-category scores. Rising scores in llms.txt, AI crawler access, and schema markup are the most direct leading indicators of future citation rate improvement.
What success looks like:
- After Month 1 fixes: 15–25 point improvement in overall AEO score
- Schema markup category moving from below 50 to above 70
- AI crawler access category reaching 100 (all major bots allowed)
- llms.txt category score showing complete and valid file
What failure looks like: Score not improving despite claimed fixes. This usually means the fix wasn't implemented correctly — a fresh audit shows the current technical state without relying on memory.
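Comparing audits is easier if you diff per-category scores rather than eyeball two reports. A sketch, assuming you record each monthly audit as a category-to-score mapping; the category names below are illustrative:

```python
def score_deltas(baseline, current):
    """Month-over-month change per AEO category."""
    return {cat: current[cat] - baseline[cat] for cat in baseline}

# Hypothetical baseline vs. post-fix audit scores
baseline = {"schema_markup": 45, "crawler_access": 60, "llms_txt": 0}
current  = {"schema_markup": 72, "crawler_access": 100, "llms_txt": 90}

deltas = score_deltas(baseline, current)
# Categories that didn't move flag fixes that may not have shipped correctly
flat = [cat for cat, d in deltas.items() if d <= 0]
print(deltas, flat)
```

A non-empty `flat` list is the "claimed fix wasn't implemented" warning sign in data form.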
Use the free AEO audit guide to run an initial baseline, then re-run monthly.
Signal 3: AI Referral Traffic in GA4
What it measures: Sessions arriving at your site from AI platforms — a lagging indicator of citation performance.
How to track it: In GA4, filter traffic sources for:
- perplexity.ai as a referral source
- chatgpt.com as a referral source
- Direct traffic (some AI citations appear as direct due to app-level navigation)
- Google AI Overviews (appears as organic Google search in GA4 but can be filtered with UTM parameters or Search Console)
What success looks like: Measurable growth in Perplexity and ChatGPT referral sessions after 3–6 months of AEO work. Start with absolute numbers rather than percentages — even 20 Perplexity referral sessions per month is a meaningful early signal for a B2B site.
Caveat: Not all AI citations produce clicks. Many users read the AI answer without clicking through. AI referral traffic represents the subset of citations that produce sessions — it will always undercount your actual citation rate. This is why citation rate is a better leading indicator than GA4 traffic.
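If you export a GA4 traffic acquisition report as (session source, sessions) rows, a few lines can isolate the AI referral subset. A sketch under that assumption; the referrer domains are the two named above, plus claude.ai as an assumption about how Claude referrals might appear:

```python
AI_REFERRERS = {
    "perplexity.ai": "Perplexity",
    "chatgpt.com": "ChatGPT",
    "claude.ai": "Claude",  # assumption: not confirmed as a common GA4 source
}

def ai_referral_sessions(rows):
    """rows: (session_source, sessions) pairs from a GA4 report export.
    Returns sessions attributed to known AI platforms."""
    counts = {}
    for source, sessions in rows:
        platform = AI_REFERRERS.get(source.lower())
        if platform:
            counts[platform] = counts.get(platform, 0) + sessions
    return counts

# Hypothetical export rows: AI referrals are small next to organic, by design
rows = [("perplexity.ai", 18), ("google", 1200), ("chatgpt.com", 9)]
print(ai_referral_sessions(rows))  # {'Perplexity': 18, 'ChatGPT': 9}
```

Because of the click-through caveat above, treat these numbers as a floor on citation activity, not a measure of it.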
Signal 4: Google Search Console AI Impressions
What it measures: How often your pages appear in Google AI Overview results — a proxy signal for AI engine readiness.
How to track it: In Google Search Console, filter by Search Type to find AI Overview impressions (available in 2026 GSC with AI mode filtering). Growing impressions in Google AI Overviews indicate that AI retrieval systems generally are recognizing your content structure as citable.
What success looks like: AI Overview impressions increasing month-over-month as schema markup, content structure, and freshness signals improve.
Why it matters for AEO broadly: Google AI Overviews and Perplexity/Claude share some underlying citation signals (content structure, schema markup, extractability). Improvement in Google AI Overview impressions often correlates with improvement in other AI platform citation rates.
Signal 5: Direct Visibility Testing
What it measures: Qualitative observation of your brand's presence in real AI answers — the human-readable complement to quantitative probe testing.
How to track it: Monthly, run 5–10 of your most important buyer queries directly in Perplexity and ChatGPT. Note:
- Is your brand cited more frequently than last month?
- Are you appearing in query categories where you previously didn't?
- What are competitors saying in the answers where you don't appear?
This doesn't replace quantitative probe testing, but it provides qualitative context — you can read the actual AI responses and understand the narrative being presented to buyers, not just whether a URL appears.
Measurement Cadence
| Frequency | Action | Signal |
|---|---|---|
| Weekly | Check server logs for AI bot activity | Crawler access verification |
| Monthly | Full citation probe run (20–30 queries) | Citation rate per platform |
| Monthly | Full AEO audit re-run | AEO score per category |
| Monthly | Direct visibility test (5–10 key queries) | Qualitative citation quality |
| Quarterly | GA4 AI referral traffic review | Lagging traffic validation |
| Quarterly | Full probe set refresh | Keep queries current with buyer behavior |
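The weekly server-log check in the table above can be a few lines of scripting. A sketch, assuming combined-format access logs where each crawler's name appears in the user-agent string:

```python
from collections import Counter

AI_BOTS = ("GPTBot", "PerplexityBot", "ClaudeBot")

def bot_hits(log_lines):
    """Count requests per AI crawler across raw access-log lines."""
    counts = Counter()
    for line in log_lines:
        for bot in AI_BOTS:
            if bot in line:
                counts[bot] += 1
    return counts

# Two hypothetical log lines, one GPTBot request and one PerplexityBot request
log = [
    '1.2.3.4 - - [10/Jan/2026] "GET /pricing HTTP/1.1" 200 "-" "Mozilla/5.0 GPTBot/1.0"',
    '5.6.7.8 - - [10/Jan/2026] "GET / HTTP/1.1" 200 "-" "PerplexityBot/1.0"',
]
print(bot_hits(log))
```

A bot with zero hits for a week is an early warning that a robots.txt or firewall change has cut off crawler access.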
Warning Signs: When AEO Isn't Working
If you've been doing AEO work for 2+ months and see none of the following signals improving, something is blocking progress:
- AEO score not improving: Check whether fixes were actually implemented (common: schema added to one page but not key pages, robots.txt change didn't save correctly)
- Citation rate at zero across all platforms: Likely a persistent crawler access block — recheck all three bots (GPTBot, PerplexityBot, ClaudeBot)
- Perplexity citation rate low despite strong AEO score: Content gap — you don't have content directly answering the query types you're probing
- Claude citation rate low despite strong content: Authority gap — no primary source content; all your content is aggregator-style rather than original analysis
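For the zero-citation case, the crawler-access recheck can be automated with Python's standard-library robots.txt parser. A sketch that tests all three bots against a robots.txt body (fetching the file is left out; pass its contents as a string):

```python
from urllib.robotparser import RobotFileParser

AI_BOTS = ("GPTBot", "PerplexityBot", "ClaudeBot")

def check_bot_access(robots_txt, url_path="/"):
    """Report whether each AI crawler may fetch url_path under this robots.txt."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return {bot: rp.can_fetch(bot, url_path) for bot in AI_BOTS}

# Hypothetical robots.txt that silently blocks GPTBot while allowing everyone else
robots = """User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""
print(check_bot_access(robots))
# {'GPTBot': False, 'PerplexityBot': True, 'ClaudeBot': True}
```

Any `False` here for a page you want cited is the persistent access block described above.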
For the full AEO monitoring framework, see the AEO monitoring and tracking guide. For the citation rate metric explained in detail, see What Is Brand Citation Rate. For tools that automate this measurement, see the best AEO tools 2026 guide.