Green code matrix on screen representing technical scanning and automated AEO audit checks
AEO · 8 min read

How Does an AEO Checker Work? The Technical Guide to AI Readiness Testing

An AEO checker is not a single test — it's a layered system of static analysis and live citation probing. Understanding the mechanics tells you which results to trust and which gaps to prioritize.

ansly Team·April 4, 2026

Most marketers use AEO checkers as black boxes — enter URL, get score, fix the red items. That's a reasonable starting point, but understanding what's actually happening under the hood lets you interpret results more accurately, prioritize fixes more intelligently, and recognize what the tool can and can't tell you.

This guide covers the technical architecture of AEO checkers: what they test, how they test it, how scores are derived, and where the methodology has real limitations.

What an AEO Checker Tests: The 7 Signal Categories

A complete AEO checker evaluates seven categories of signals that influence AI citation behavior. Most tools cover one or two. The best AEO checkers cover all seven.

Category 1: llms.txt (23% weight in tryansly's model)

llms.txt is a file placed at yourdomain.com/llms.txt that provides AI systems with a structured overview of your site's most important content, brand entities, and key pages.

An AEO checker validates:

  • Whether llms.txt exists at the expected path
  • Whether the file follows correct syntax (Markdown format, # Brand Name header, ## Section structure)
  • Whether key pages and entities are included
  • Whether the content accurately represents your brand's scope

A missing or malformed llms.txt is typically the single largest score improvement available — worth more than any other single fix. For the full implementation guide, see the llms.txt complete guide.
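As an illustration of the syntax checks involved (not tryansly's actual validator), here is a minimal Python sketch. The three rules it enforces are assumptions drawn from the list above: a leading `# Brand Name` header, at least one `## Section`, and markdown links to key pages.

```python
import re

def validate_llms_txt(text: str) -> list[str]:
    """Return a list of issues found in an llms.txt document (empty list = passes)."""
    issues = []
    lines = [ln.rstrip() for ln in text.splitlines() if ln.strip()]
    if not lines:
        return ["file is empty"]
    # The first non-blank line should be a level-1 header naming the brand.
    if not re.match(r"^# \S", lines[0]):
        issues.append("missing '# Brand Name' header on the first line")
    # At least one '## Section' grouping key pages is expected.
    if not any(re.match(r"^## \S", ln) for ln in lines):
        issues.append("no '## Section' headings found")
    # Sections should list key pages as markdown links.
    if not re.search(r"\[[^\]]+\]\(https?://[^)]+\)", text):
        issues.append("no markdown links to key pages found")
    return issues

sample = """# ansly
> AI-readiness platform for websites.

## Key pages
- [Audit](https://tryansly.com/audit): run a full AEO audit
"""
print(validate_llms_txt(sample))  # → []
```

A real checker layers entity-completeness and accuracy checks on top of this; syntax validation is only the first gate.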

Category 2: Schema Markup (20% weight)

Schema.org structured data tells AI retrieval systems what your content is about and what type of content it represents. An AEO checker validates:

  • FAQPage schema: Presence, correct question/answer pair structure, JSON-LD vs. Microdata format
  • HowTo schema: Step-by-step content with correctly structured steps
  • Article schema: author, datePublished, headline fields
  • Product schema: For e-commerce and software product pages
  • Organization and BreadcrumbList: Site-level schema signals

The checker fetches page HTML, extracts JSON-LD and Microdata, and validates against the Schema.org specification. It flags invalid properties, missing required fields, and incorrectly nested structures.
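The extract-and-validate step can be sketched with Python's standard-library HTML parser. This is a simplified illustration: it pulls JSON-LD blocks and checks them against a small hypothetical required-field table for Article, not the full Schema.org specification.

```python
import json
from html.parser import HTMLParser

class JSONLDExtractor(HTMLParser):
    """Collect the parsed contents of <script type="application/ld+json"> blocks."""
    def __init__(self):
        super().__init__()
        self._in_jsonld = False
        self.blocks = []
    def handle_starttag(self, tag, attrs):
        if tag == "script" and dict(attrs).get("type") == "application/ld+json":
            self._in_jsonld = True
    def handle_endtag(self, tag):
        if tag == "script":
            self._in_jsonld = False
    def handle_data(self, data):
        if self._in_jsonld and data.strip():
            self.blocks.append(json.loads(data))

# Hypothetical required-field table; a real checker validates many more types.
REQUIRED = {"Article": {"headline", "author", "datePublished"}}

def check_schema(html: str) -> dict:
    """Map each found @type to its list of missing required fields."""
    parser = JSONLDExtractor()
    parser.feed(html)
    report = {}
    for block in parser.blocks:
        t = block.get("@type")
        report[t] = sorted(REQUIRED.get(t, set()) - block.keys())
    return report

page = '<script type="application/ld+json">{"@type": "Article", "headline": "How AEO checkers work"}</script>'
print(check_schema(page))  # → {'Article': ['author', 'datePublished']}
```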

Category 3: AI Crawler Access (18% weight)

Each major AI platform operates its own web crawler:

  • OpenAI → GPTBot
  • Anthropic → ClaudeBot
  • Perplexity → PerplexityBot
  • Google → Google-Extended (for AI training)

An AEO checker fetches yourdomain.com/robots.txt and parses every User-agent / Disallow rule. It checks whether each AI bot is explicitly allowed, explicitly blocked, or affected by a blanket User-agent: * rule. Any block is flagged as critical — a blocked bot means zero citations from that platform regardless of content quality.
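The parsing step maps directly onto Python's standard-library robotparser. The robots.txt below is a hypothetical example that blocks GPTBot site-wide while the blanket rule only restricts /admin/:

```python
from urllib.robotparser import RobotFileParser

AI_BOTS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended"]

robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Disallow: /admin/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# GPTBot is explicitly blocked everywhere; the other bots fall under the
# blanket rule and can still fetch ordinary content pages.
for bot in AI_BOTS:
    allowed = rp.can_fetch(bot, "https://example.com/blog/post")
    print(f"{bot}: {'allowed' if allowed else 'BLOCKED (critical)'}")
```

Note how a blanket `User-agent: *` rule still applies to AI bots that have no explicit entry, which is why a checker must evaluate each bot against the full rule set rather than just searching for its name.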

For the full robots.txt configuration guide, see GPTBot and AI crawlers robots.txt audit.

Category 4: Content Extractability (15% weight)

AI retrieval systems need to extract clean, readable text from your pages. An AEO checker evaluates:

  • JavaScript rendering: Is content present in server-rendered HTML, or only after JS executes?
  • Heading structure: Are H1/H2/H3 headings present and descriptive?
  • Content density: Is the content-to-code ratio sufficient for reliable extraction?
  • Reading accessibility: Is the content in plain paragraphs, or locked behind accordions, tabs, or modals?

This is typically evaluated with a plain HTTP fetch that does not execute JavaScript, followed by text extraction and structural analysis.
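A rough sketch of that analysis, using only the standard-library HTML parser: it tallies headings and computes a crude content-to-code ratio. The pass/fail thresholds a real checker applies are not shown here.

```python
from html.parser import HTMLParser

class ExtractabilityCheck(HTMLParser):
    """Tally headings and visible text from server-rendered HTML (no JS executed)."""
    SKIP = {"script", "style"}  # invisible to readers and to text extraction

    def __init__(self):
        super().__init__()
        self.headings = {"h1": 0, "h2": 0, "h3": 0}
        self.text_chars = 0
        self._skipping = 0

    def handle_starttag(self, tag, attrs):
        if tag in self.headings:
            self.headings[tag] += 1
        if tag in self.SKIP:
            self._skipping += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skipping:
            self._skipping -= 1

    def handle_data(self, data):
        if not self._skipping:
            self.text_chars += len(data.strip())

def extractability_report(html: str) -> dict:
    c = ExtractabilityCheck()
    c.feed(html)
    return {
        "headings": c.headings,
        "content_to_code_ratio": round(c.text_chars / max(len(html), 1), 2),
    }

html = "<html><body><h1>AEO guide</h1><p>" + "Readable prose. " * 40 + "</p></body></html>"
print(extractability_report(html))
```

A page whose content only appears after JavaScript runs would score near zero here, which is exactly the signal this category is designed to surface.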

Category 5: Entity Authority (12% weight)

AI models are entity-aware — they know about brands, products, people, and organizations as structured entities, not just strings of text. An AEO checker evaluates signals that establish your brand as a recognized entity:

  • Whether your brand has a Knowledge Panel in Google Search
  • Whether your brand appears in structured databases (Wikidata, Wikipedia)
  • Whether entity-defining content (clear brand/product/category definitions) exists on your site
  • Whether Organization and Product schema correctly defines your brand entity

For a deep dive on entity authority signals, see entity authority and knowledge graphs for AI search.
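One of these lookups can be sketched against the public Wikidata API. The `wbsearchentities` action and its parameters are real; the exact-label matching rule is a simplifying assumption, and the sample response below is fabricated for illustration.

```python
import json
from urllib.parse import urlencode

WIKIDATA_API = "https://www.wikidata.org/w/api.php"

def wikidata_search_url(brand: str) -> str:
    """Build a wbsearchentities query URL for a brand name."""
    params = {
        "action": "wbsearchentities",
        "search": brand,
        "language": "en",
        "format": "json",
    }
    return f"{WIKIDATA_API}?{urlencode(params)}"

def has_wikidata_entity(api_response: str, brand: str) -> bool:
    """True if any returned entity label matches the brand exactly (case-insensitive)."""
    results = json.loads(api_response).get("search", [])
    return any(r.get("label", "").lower() == brand.lower() for r in results)

sample = '{"search": [{"id": "Q95", "label": "Google"}]}'
print(wikidata_search_url("Google"))
print(has_wikidata_entity(sample, "Google"))  # → True
```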

Category 6: Citation Probe Performance (8% weight)

This is where static analysis ends and live testing begins. Citation probes submit real queries to AI platforms and record whether your brand URL appears as a cited source.

tryansly.com runs 31 structured probes across Perplexity, Claude, and ChatGPT — covering category discovery queries, comparison queries, problem-solution queries, and buying-intent queries relevant to your domain. For each probe:

  • Perplexity: response citations checked for your domain
  • Claude: response citations checked for your domain
  • ChatGPT Browse: response citations checked for your domain

The result is a citation rate per platform, a total citation count, and identification of which competitor domains appear on probes where you don't.
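Once the platform responses are collected, scoring reduces to set membership over cited URLs. A sketch, with hypothetical probe data standing in for real platform responses:

```python
from urllib.parse import urlparse

def citation_rate(probe_results: dict[str, list[list[str]]], domain: str) -> dict[str, float]:
    """Per-platform citation rate: the fraction of probes whose cited URLs include `domain`."""
    rates = {}
    for platform, probes in probe_results.items():
        hits = sum(
            any(urlparse(url).netloc.endswith(domain) for url in cited)
            for cited in probes  # each probe yields a list of cited URLs
        )
        rates[platform] = round(hits / len(probes), 2)
    return rates

# Hypothetical results: two probes per platform, each with its citation list.
results = {
    "perplexity": [["https://tryansly.com/blog/aeo"], ["https://competitor.io/post"]],
    "claude": [["https://tryansly.com/audit"], ["https://tryansly.com/pricing"]],
}
print(citation_rate(results, "tryansly.com"))  # → {'perplexity': 0.5, 'claude': 1.0}
```

The competitor-gap analysis is the same computation run over every other domain that appears in the citation lists.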

Category 7: Technical Performance (4% weight)

Core Web Vitals — Largest Contentful Paint (LCP), Cumulative Layout Shift (CLS), Interaction to Next Paint (INP) — and overall PageSpeed score affect crawl depth and retrieval reliability. Slow pages are less thoroughly crawled; high CLS can interfere with text extraction.

This is typically evaluated via PageSpeed Insights API, which provides real-world field data.
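A sketch of pulling field-data Core Web Vitals from a PageSpeed Insights response. The endpoint and metric keys follow the public v5 API; the sample response is fabricated for illustration, and a real checker would handle pages with no field data.

```python
import json
from urllib.parse import urlencode

PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

def psi_url(page: str, api_key: str) -> str:
    """Build the PageSpeed Insights v5 request URL for a page."""
    return f"{PSI_ENDPOINT}?{urlencode({'url': page, 'key': api_key})}"

def field_vitals(psi_json: str) -> dict:
    """Pull Core Web Vitals field percentiles from a PSI response."""
    metrics = json.loads(psi_json)["loadingExperience"]["metrics"]
    return {
        "lcp_ms": metrics["LARGEST_CONTENTFUL_PAINT_MS"]["percentile"],
        # PSI reports CLS scaled by 100 as an integer percentile.
        "cls": metrics["CUMULATIVE_LAYOUT_SHIFT_SCORE"]["percentile"] / 100,
        "inp_ms": metrics["INTERACTION_TO_NEXT_PAINT"]["percentile"],
    }

sample = json.dumps({"loadingExperience": {"metrics": {
    "LARGEST_CONTENTFUL_PAINT_MS": {"percentile": 2100},
    "CUMULATIVE_LAYOUT_SHIFT_SCORE": {"percentile": 5},
    "INTERACTION_TO_NEXT_PAINT": {"percentile": 180},
}}})
print(field_vitals(sample))  # → {'lcp_ms': 2100, 'cls': 0.05, 'inp_ms': 180}
```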

Static Analysis vs Live Citation Probes: The Critical Distinction

The most important distinction in AEO checker methodology is between static analysis and live citation probes:

Static analysis is like a pre-flight checklist. It tests whether the conditions that should enable citations are in place — bots are allowed, schema is valid, content is extractable. A clean static analysis means citations are possible.

Live citation probes are the actual test flight. They submit real queries to real AI platforms and observe whether citations actually occur. A strong probe performance means citations are happening now, regardless of what the static analysis shows.

A tool with only static analysis can give you false confidence: "everything looks good" — but you might still have zero actual citations. A tool with only probe testing can give you results without diagnostic guidance: "you have 5% citation rate" — but no indication of why or what to fix.

The best AEO checkers — like tryansly.com — run both and correlate them. When static scores are high but citation rates are low, the issue is likely content quality or third-party corroboration gaps. When static scores are low, the technical fixes are the priority.

How Scoring Works

AEO scores are weighted aggregates of category scores, which are themselves averages of individual check results. In tryansly's model:

| Category | Weight | Example checks |
| --- | --- | --- |
| llms.txt | 23% | File exists, valid syntax, entity completeness |
| Schema markup | 20% | FAQPage on key pages, valid JSON-LD, required fields |
| AI crawler access | 18% | GPTBot, PerplexityBot, ClaudeBot all allowed |
| Content extractability | 15% | Server-rendered content, heading structure, content density |
| Entity authority | 12% | Knowledge Panel, Wikipedia, Organization schema |
| Citation probes | 8% | Citation rate across 31 live probes |
| Technical performance | 4% | LCP, CLS, PageSpeed score |

Each individual check returns pass/fail or a graduated score. Category scores are averaged from their checks. The overall score is the weighted average of category scores.

The score is designed to prioritize: a brand with a broken llms.txt and blocked bots should see those issues dominate the score, because fixing them produces the largest citation rate improvement.
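The aggregation described above can be written down directly. A sketch using the published weights, with hypothetical check results in which a missing llms.txt drags down the overall score:

```python
WEIGHTS = {
    "llms_txt": 0.23, "schema": 0.20, "crawler_access": 0.18,
    "extractability": 0.15, "entity_authority": 0.12,
    "citation_probes": 0.08, "technical": 0.04,
}

def category_score(checks: list[float]) -> float:
    """Average of individual check results (each 0.0-1.0, pass/fail or graduated)."""
    return sum(checks) / len(checks)

def overall_score(category_checks: dict[str, list[float]]) -> int:
    """Weighted average of category scores, scaled to 0-100."""
    total = sum(WEIGHTS[cat] * category_score(checks)
                for cat, checks in category_checks.items())
    return round(total * 100)

# Hypothetical check results: a missing llms.txt zeroes out the
# highest-weighted category and caps the overall score.
checks = {
    "llms_txt": [0.0, 0.0],
    "schema": [1.0, 1.0, 0.5],
    "crawler_access": [1.0, 1.0, 0.0],
    "extractability": [1.0, 0.75],
    "entity_authority": [0.5],
    "citation_probes": [0.2],
    "technical": [1.0],
}
print(overall_score(checks))  # → 53
```

Because the weights sum to 1.0, a site that passes every check scores exactly 100; the llms.txt category alone caps the score at 77 when the file is missing.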

What AEO Checkers Can't Measure

No AEO checker can tell you:

  • Whether your brand appears in AI model training data (base knowledge, not retrieval)
  • How AI models describe your brand in generated text (sentiment and framing)
  • Your citation rate on queries outside the probe set
  • Your competitors' exact AEO scores (only your own relative performance)
  • Future citation behavior after algorithm updates

These limitations don't reduce the tool's value — they define where to use it. For trained knowledge representation, GEO strategies are more relevant. For current retrieval citation performance, AEO checker results are the signal to act on.

For a comparison of the available AEO checking tools, see the best AEO tools 2026 guide. To run a full audit right now, visit tryansly.com — 47 checks across 7 categories plus 31 live citation probes, no login required.


Frequently Asked Questions

What does an AEO checker actually test?

A complete AEO checker tests across seven signal categories: llms.txt (presence, syntax, completeness), schema markup (FAQPage, HowTo, Article, Product), AI crawler access (GPTBot, PerplexityBot, ClaudeBot in robots.txt), content extractability (HTML structure, headings, JS rendering), entity authority (brand entity presence, knowledge graph signals), citation probe performance (live AI queries testing actual citation rate), and technical performance (LCP, CLS, INP, page speed).

What is the difference between static AEO analysis and live citation probes?

Static analysis fetches your HTML and tests signals that could theoretically allow citations — crawler access, schema, content structure. Live citation probes test whether you're actually being cited right now by submitting real queries to ChatGPT, Perplexity, and Claude and recording whether your URL appears. Static analysis tells you if you could be cited; citation probes tell you if you are. The best AEO checkers do both.

How is the AEO score calculated?

AEO scores are weighted aggregates across signal categories. In tryansly's model: llms.txt (23%), schema markup (20%), AI crawler access (18%), content extractability (15%), entity authority (12%), citation probe performance (8%), and technical performance (4%). Category scores are averaged from individual check results, then weighted to produce an overall AEO score from 0–100.

What are the limitations of AEO checkers?

AEO checkers cannot measure: AI model training data (whether your brand is in ChatGPT's base knowledge), brand sentiment in AI responses (how the AI describes your brand, not just whether it cites you), query categories not covered by the probe set, or the relative authority of competitors. They measure your site's technical readiness and current citation performance — not the full scope of AI search presence.

How often should I run an AEO audit?

Run a full AEO audit monthly, and after any significant site change (CMS migration, robots.txt update, URL restructuring). A monthly cadence roughly matches the content freshness windows AI retrieval systems appear to favor, so you'll catch issues before they compound into significant citation rate drops.

Related Articles

B2B marketing team collaborating around a table discussing AEO vs GEO strategy priorities
AEO · 9 min read

AEO vs GEO for B2B Marketers in 2026: Which One Actually Moves the Needle?

AEO and GEO are not synonyms. For B2B marketers with limited time, understanding which discipline delivers measurable results faster — and in which order — is the core resource allocation decision of 2026.

ansly Team·Apr 4, 2026
B2B marketing team reviewing AI search results and citation data on a conference room display
AEO · 10 min read

How B2B SaaS Companies Get Cited in AI Answers in 2026: The Complete Playbook

B2B buyers now use ChatGPT and Perplexity to research software vendors before they ever visit a website. The companies appearing in those AI answers are winning deals at the consideration stage. Here's how to be one of them.

ansly Team·Apr 4, 2026
Digital network connections representing cross-platform brand citation strategy across ChatGPT, Claude and Perplexity
AEO · 10 min read

How to Get Your Brand Cited by ChatGPT, Claude, and Perplexity: The Cross-Platform Playbook

ChatGPT, Claude, and Perplexity have different citation architectures. A strategy built for one won't automatically work on the others. Here's the unified playbook that covers all three simultaneously.

ansly Team·Apr 4, 2026
ansly

AI-readiness platform for websites. Check your visibility in ChatGPT, Claude, and Perplexity.

@tryansly

Product

  • Audit
  • Pricing
  • Blog

Company

  • About
  • Privacy Policy
  • Terms of Service
  • Contact Us
© 2026 ansly. All rights reserved.
Privacy · Terms · Contact