Traditional share of voice measures how loudly your brand speaks relative to competitors — ad spend, search impressions, media mentions. AI search share of voice measures something different: how often AI platforms choose to cite your brand when buyers ask questions about your category.
This is not a marketing awareness metric. It's a retrieval signal. And in 2026, it's increasingly where B2B buying decisions begin.
The Definition
AI search share of voice (AI SOV) is the percentage of citation events your brand captures across a defined probe set, relative to all brand citations on those same queries.
Formula:
AI SOV = (Your brand citations / Total citations across all brands) × 100
Example: You run 30 probe queries on Perplexity and observe 40 total citation events across all brands. Your brand appears in 8 of those citation events. Your Perplexity AI SOV = 8/40 = 20%.
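The formula and worked example above can be sketched in a few lines of Python (a minimal illustration, not any particular tool's implementation):

```python
def ai_sov(brand_citations: int, total_citations: int) -> float:
    """AI SOV = your citation events / all citation events on the probe set, as a percent."""
    if total_citations == 0:
        return 0.0  # no citation events observed yet
    return brand_citations / total_citations * 100

# Worked example from the text: 8 of 40 citation events on Perplexity.
print(ai_sov(8, 40))  # 20.0
```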
This competitive framing is what makes AI SOV more useful than raw citation rate alone. A 20% citation rate on its own tells you something. A 20% citation rate while your main competitor has a 45% citation rate tells you that you're losing the AI-research phase of your sales funnel.
How AI SOV Differs from Traditional SOV
| Dimension | Traditional Share of Voice | AI Search Share of Voice |
|---|---|---|
| Based on | Ad spend, rankings, impressions | Citation frequency in AI responses |
| Can be bought | Yes (paid search, display) | No — purely earned |
| Measurement unit | % of category mentions/impressions | % of AI citation events |
| Competitor benchmark | Uses same keywords/channels | Uses same probe query set |
| Timeline | Real-time or weekly | Monthly (signal moves slowly) |
The key difference: AI SOV cannot be purchased. It is determined by technical AI readiness (crawler access, schema, llms.txt), content authority, and third-party corroboration signals. This makes it a more authentic measure of brand authority — and a harder one to fake.
Why AI SOV Matters More Than Citation Rate Alone
Consider two scenarios:
Brand A: 15% citation rate on Perplexity, 25% AI SOV in their category
Brand B: 15% citation rate on Perplexity, 8% AI SOV in their category
Both brands are cited on 15% of their probe queries. But Brand A is winning its category — competitors are cited less often. Brand B is being significantly outpaced — competitors collectively hold the remaining 92% of AI citations on the same queries.

AI SOV contextualizes citation rate. It answers the question: "Are we winning or losing AI-assisted buyer research in our market?"
How to Calculate AI SOV
Step 1: Define your probe set
Select 20–30 queries that represent buyer research in your category. These should be category-level and comparison queries — not branded queries. The probe set should be the same one you use across all competitors.
Step 2: Run probes and record all citations
Run each query on your target platform. Record every brand domain cited in each response — not just yours. This requires either:
- Manual testing (recording all cited domains by hand) — feasible for 30 queries per month
- Automated citation probing (tools that record all citations, not just brand-specific ones)
Step 3: Aggregate and calculate
Build a table: queries × brands. For each query, mark which brands were cited. Sum columns to get per-brand citation counts. Divide each brand's count by the total citation events to get AI SOV per brand.
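The aggregation in Step 3 can be sketched like this, using a hypothetical probe log where each entry pairs a query with every brand domain cited in the response (the queries and domains are placeholders, not real data):

```python
from collections import Counter

# Hypothetical probe log: (query, [all brand domains cited in the response]).
probe_log = [
    ("best crm for startups", ["brand_a.com", "brand_b.com"]),
    ("crm comparison 2026", ["brand_a.com", "brand_b.com", "brand_c.com"]),
    ("top sales tools", ["brand_b.com"]),
]

# Sum citation events per brand across all probes.
citations = Counter(brand for _, brands in probe_log for brand in brands)
total_events = sum(citations.values())

# Divide each brand's count by total citation events to get per-brand AI SOV.
sov = {brand: round(count / total_events * 100, 1) for brand, count in citations.items()}
print(sov)  # e.g. {'brand_a.com': 33.3, 'brand_b.com': 50.0, 'brand_c.com': 16.7}
```

The same structure scales to a full 20–30 query probe set; only the log grows.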
Step 4: Track monthly
AI SOV changes slowly. Monthly snapshots on the same probe set show meaningful trends.
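Monthly tracking reduces to comparing snapshots taken on the same probe set. A minimal sketch, with invented snapshot values for illustration:

```python
# Hypothetical monthly AI SOV snapshots for one brand on one platform (percent).
snapshots = {"2026-01": 12.0, "2026-02": 14.5, "2026-03": 18.0}

months = sorted(snapshots)
for prev, curr in zip(months, months[1:]):
    delta = snapshots[curr] - snapshots[prev]
    print(f"{curr}: {snapshots[curr]:.1f}% ({delta:+.1f} pts vs {prev})")
```

Month-over-month deltas of a few points on a stable probe set are the meaningful signal here; single-query fluctuations are noise.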
tryansly.com automates the full competitor citation tracking stack — it records which brands are cited across all 31 probes on Perplexity, Claude, and ChatGPT, calculates AI SOV per brand and per platform, and tracks competitive position over time.
Benchmarks by Market Position
In B2B SaaS categories analyzed across tryansly's audit database:
| AI SOV | Competitive Status |
|---|---|
| 30%+ | Dominant — category-defining presence in AI-assisted research |
| 20–30% | Strong — competitive on most high-intent buyer queries |
| 10–20% | Baseline — present but consistently outpaced by one or more competitors |
| Below 10% | At risk — buyers are researching your category and not finding you |
These benchmarks shift with category size, because the even-split baseline does. In a three-competitor market, an equal split gives each brand roughly 33%, so 30% is close to baseline; in a ten-competitor market, the even split is 10%, so 15% AI SOV may represent the leading position.
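The even-split baseline behind that caveat is simple arithmetic — worth computing explicitly when you interpret your own number (an illustrative helper, not a standard metric definition):

```python
def baseline_sov(num_competitors: int) -> float:
    """Even-split baseline: the AI SOV each brand would hold if citations split equally."""
    return 100 / num_competitors

print(round(baseline_sov(3), 1))   # 33.3 — a three-competitor market
print(round(baseline_sov(10), 1))  # 10.0 — a ten-competitor market
```

Comparing your measured AI SOV against this baseline tells you whether you are over- or under-indexed for your category's size.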
How to Improve AI SOV
AI SOV responds to the same interventions that improve citation rate — but the competitive dimension adds a layer of prioritization.
High-leverage moves:
- Fix crawler access blockers — if your AI SOV is 0%, this is almost certainly the cause
- Publish content on query types where competitors are winning but you aren't — citation gap analysis identifies these
- Build third-party corroboration on the specific claims your competitors are winning on — match or exceed their corroboration breadth
- Improve content structure for query types where AI SOV is low — FAQ schema, direct answers, H2 headings matching buyer questions
For the underlying citation rate improvement playbook, see How to Increase Your Brand's Citation Rate in Perplexity and Claude. For identifying exactly which queries competitors are winning, see the competitor citation gap analysis guide.
Platform-Specific AI SOV
AI SOV should be calculated and tracked separately by platform, because the competitive landscape differs across ChatGPT, Perplexity, and Claude.
A brand might have 30% AI SOV on Perplexity (strong consensus signals, active third-party coverage) and 5% AI SOV on Claude (no primary source content, no recognized authority signals) — even though these are the same brands competing in the same category.
Platform-specific AI SOV analysis tells you which platform is the highest-priority gap to close and which specific tactics will move that platform's metric.
For AI search visibility tools that calculate platform-specific AI SOV automatically, see the AI search visibility tools guide. For the broader definition of what AI search visibility encompasses, see What Is AI Search Visibility.