A free AEO audit is the fastest way to know where your site stands with AI engines. Enter your URL at tryansly.com, wait 30 seconds, and you'll have a score across 47 checks in 7 categories — plus a prioritized fix list.
This guide explains what those 7 categories actually measure, how to read the results, and exactly what to do first.
How to Run Your First AEO Audit
- Go to tryansly.com
- Enter your full domain URL (include https://)
- Click "Audit" — no login or account required
- Wait approximately 30 seconds while the tool runs all 47 checks
- Review your score breakdown by category
The results page shows your overall score (0–100), a score for each of the 7 categories, and a list of individual checks with pass/fail/warning status and fix instructions for each failure.
Understanding the 7 AEO Categories
Category 1: llms.txt (23% of your score)
What it checks: Whether your site has a properly formatted llms.txt file at https://yourdomain.com/llms.txt.
The audit validates 7 things: file exists, valid H1 heading, blockquote purpose statement present, multiple H2 sections, valid Markdown links, all links resolve without errors, and file size under 100KB.
Why it weighs 23% — the most of any category: llms.txt is the primary document AI agents use to understand your site's structure and purpose. Without it, AI agents must infer your site's topic and content hierarchy from page content alone — which introduces ambiguity that hurts citation probability. A missing or malformed llms.txt is the single most common reason sites with good content still don't appear in AI answers.
If you fail this category: Read the complete llms.txt guide and implement a valid file before fixing anything else. This is the highest-ROI single action available.
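As a sketch of what passes all 7 checks, a minimal llms.txt might look like the following (the company name, section names, and URLs are placeholders — substitute your own):

```markdown
# Example Company

> Example Company makes project-management software for small teams. This file summarizes our key pages for AI agents.

## Product
- [Features](https://example.com/features): Overview of core features
- [Pricing](https://example.com/pricing): Plans and pricing details

## Resources
- [Blog](https://example.com/blog): Guides and product updates
- [Docs](https://example.com/docs): Technical documentation
```

Note how this hits each validation point: one H1, a blockquote purpose statement, multiple H2 sections, and standard Markdown links — and a file this size is nowhere near the 100KB limit.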
Category 2: Schema & Structured Data (20% of your score)
What it checks: Whether your pages include the structured data schemas AI engines use to understand content type, content structure, and entity information.
Specific checks include: FAQPage schema presence and correctness, Article/BlogPosting schema on editorial content with valid datePublished and dateModified, Organization schema with sameAs links on your homepage, and absence of schema validation errors.
Why it matters: AI engines parse JSON-LD directly from page source during retrieval. FAQPage schema in particular gives AI engines pre-validated question-answer pairs — the exact format they need to construct cited answers. Sites with valid FAQPage schema are cited 2.5x more often than equivalent sites without it.
If you fail this category: Start with FAQPage schema implementation — 15 minutes and the highest citation lift per hour of any fix.
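A minimal FAQPage block, embedded in a `<script type="application/ld+json">` tag on the page, follows this shape (the questions and answers below are illustrative placeholders):

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What is AEO?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Answer Engine Optimization (AEO) is the practice of structuring a site so AI engines can find, parse, and cite its content."
      }
    },
    {
      "@type": "Question",
      "name": "How long does an AEO audit take?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "A free audit runs 47 automated checks in about 30 seconds."
      }
    }
  ]
}
```

The question text in each `name` field should match the visible question on the page, and each `text` field should contain the complete answer — this is the pre-validated Q&A pair format described above.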
Category 3: AI Crawler Access (18% of your score)
What it checks: Whether GPTBot (ChatGPT), ClaudeBot (Claude/Anthropic), PerplexityBot (Perplexity), Google-Extended (Google AI), and CCBot (Common Crawl) are permitted by your robots.txt.
Why it matters: This is the silent AI visibility killer. A single blanket rule (User-agent: * followed by Disallow: /) blocks all five crawlers at once. When any of these crawlers is blocked, your content is invisible to that platform — zero citations, regardless of how good your content is.
If you fail this category: This is a 15-minute fix. Read the AI crawler robots.txt guide for the exact syntax to add.
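One way to grant access is a single robots.txt group naming all five crawlers — a sketch only; merge it with your existing rules and keep any disallows for paths you genuinely want private:

```text
# Allow the five AI crawlers the audit checks
User-agent: GPTBot
User-agent: ClaudeBot
User-agent: PerplexityBot
User-agent: Google-Extended
User-agent: CCBot
Allow: /
```

Stacking multiple User-agent lines over one rule set is valid robots.txt syntax, so this group overrides a blanket Disallow for exactly these five bots.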
Category 4: Content Extractability (15% of your score)
What it checks: Whether AI engines can actually parse useful, citable content from your pages. Checks include: semantic HTML structure, heading hierarchy, paragraph length, FAQ section presence, direct Q&A formatting, and absence of content blocked behind JavaScript-only rendering.
Why it matters: Content that exists in your HTML but is unstructured or buried is functionally invisible to AI retrieval. A page with great information organized as a wall of prose will be passed over in favor of a structurally inferior page that leads each section with a direct answer.
If you fail this category: Audit your five most important pages for Q&A structure. Rewrite H2/H3 headings as questions. Move the answer to the first sentence of each section. Add an FAQ section with 4–6 questions. These changes don't require new content — just restructuring what you have.
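The restructuring pattern above can be sketched in HTML like this (all copy is placeholder text): a question as the heading, the direct answer in the first sentence, supporting detail after.

```html
<section>
  <h2>How long does an AEO audit take?</h2>
  <!-- Lead with the direct answer; supporting detail comes after -->
  <p>About 30 seconds. The audit runs 47 automated checks and
     returns a scored report with fix instructions for each failure.</p>
</section>
```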
Category 5: AI Agent Readiness (12% of your score)
What it checks: Technical signals that affect AI agent behavior specifically: sitemap presence and lastmod accuracy, canonical tag correctness, accidental noindex meta tags on pages you want cited, server-side rendering confirmation, and page load characteristics.
Why it matters: AI agents operate like automated researchers with strict time budgets. Pages that redirect excessively, load slowly, or signal "don't index me" via meta tags are deprioritized or skipped entirely. This category catches the technical debt that silently erodes AI visibility.
If you fail this category: Check your sitemap lastmod values — if they're all identical or all set to your last deploy date rather than actual content modification dates, that's a quick fix with meaningful impact.
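For reference, a sitemap entry with an accurate lastmod looks like this (the URL and date are placeholders) — the date should reflect when the page content actually changed, not your last deploy:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/pricing</loc>
    <!-- Date the content last changed, not the last deploy -->
    <lastmod>2026-01-15</lastmod>
  </url>
</urlset>
```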
Category 6: Citation Probe Performance (8% of your score)
What it checks: Live probes to ChatGPT, Perplexity, and Claude asking questions where your domain should appear as a cited source. The tool runs 31 automated probes and records whether your domain is cited in AI-generated answers.
Why it matters: This is the only category that directly measures your output — not your inputs. A site can pass every technical check and still not be cited. Citation probe performance is the ground truth that all the other categories predict.
If you score low here: The fix is not one thing — it's the compound result of improving all six categories above. But the most direct lever is building third-party mention coverage: G2 reviews, press mentions, community discussions. Perplexity in particular is heavily influenced by consensus signals across independent sources.
Category 7: Performance (4% of your score)
What it checks: Core Web Vitals — Largest Contentful Paint (LCP), Total Blocking Time (TBT), and Cumulative Layout Shift (CLS) — along with Time to First Byte (TTFB) and page size.
Why it matters: The smallest category by weight, but failures here are binary: pages that time out before an AI crawler sees the main content don't get indexed. Fix critical performance failures before optimizing everything else.
How to Read Your Score and Prioritize Fixes
Your audit results will show an overall score and individual category scores. Here's how to triage:
Step 1: Check AI Crawler Access first. If any AI crawler is blocked, fix that before anything else. A blocked crawler makes every other optimization irrelevant for that platform.
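You can also self-check this step locally with Python's standard urllib.robotparser. This sketch (the crawler list and example URL are assumptions based on the five bots named earlier) reports which of them a given robots.txt blocks:

```python
from urllib import robotparser

# The five AI crawlers the audit checks
AI_CRAWLERS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended", "CCBot"]

def blocked_crawlers(robots_txt: str, url: str = "https://example.com/") -> list[str]:
    """Return the AI crawlers that robots_txt prevents from fetching url."""
    parser = robotparser.RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return [bot for bot in AI_CRAWLERS if not parser.can_fetch(bot, url)]

# A blanket disallow blocks all five at once
print(blocked_crawlers("User-agent: *\nDisallow: /"))
# → ['GPTBot', 'ClaudeBot', 'PerplexityBot', 'Google-Extended', 'CCBot']
```

In practice you would fetch your live file (e.g. with urllib.request) and pass its text to blocked_crawlers; an empty list means all five crawlers can reach the page.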
Step 2: Look at your llms.txt score. It's the largest single category. If you're failing it entirely (no file), implementing a valid llms.txt typically produces the largest single-action score improvement available.
Step 3: Address Schema failures. Specifically: add FAQPage schema to your most important pages (product page, pricing page, about page, and top 3 blog posts). This is the fastest path to improving Citation Probe Performance, which lags schema implementation by 4–8 weeks.
Step 4: Review Content Extractability warnings. These are usually content restructuring tasks — changing how you write rather than what you write. Lower effort than new content creation, higher impact than most technical fixes.
Step 5: Fix AI Agent Readiness issues. Usually sitemap or canonical problems — addressable in a single technical sprint.
Step 6: Re-run the audit after each sprint. The audit is free and takes 30 seconds. Run it after every set of fixes to confirm the changes registered and to see what your next priority should be.
What a Good AEO Score Looks Like at Each Stage
| Score Range | What It Means | Typical Profile |
|---|---|---|
| 0–30 | Critical gaps | Missing llms.txt, AI crawlers blocked, no schema, JS-only rendering |
| 31–50 | Poor baseline | Some schema, crawlers unblocked, but llms.txt missing or malformed |
| 51–70 | Fair — improving | llms.txt present, basic schema, content needs restructuring |
| 71–85 | Good — optimizing | Full schema coverage, strong crawl access, citation rate improving |
| 86–100 | Excellent | All categories passing, active citation monitoring, consistent AI mentions |
Most first-time audits land in the 30–50 range. That's not a failure — it's a baseline with a clear roadmap. The gap between 40 and 70 is closable in 2–4 focused sprints.
Related Reading
- The AEO Audit Checklist 2026: 10 Fixes Ranked by Impact — The same fixes from this guide, ranked by ROI.
- llms.txt: The Complete Guide — How to fix the #1 scoring category.
- FAQPage Schema: The 15-Minute Fix — The highest-impact schema implementation.
- Is GPTBot Blocked From Your Website? — How to fix AI crawler access failures.
- The Best AEO Tools in 2026 (Compared) — Other tools that complement a free AEO audit.