A good AEO audit requires checking 47 distinct signals across 7 categories. If you prefer working through a checklist manually — or need to share the process with a team member who does not have access to the automated tool — this Notion template gives you everything in one place.
Duplicate the free AEO Audit Checklist Notion Template →
Or if you want to skip the manual work: run all 47 checks automatically at tryansly.com in 30 seconds, no login required.
What the Template Includes
The template mirrors the exact 7 categories scored by the AEO audit tool, in priority order. Each section is a toggle with individual checkboxes so you can work through one category at a time.
| Category | Weight | Checks |
|---|---|---|
| AI Crawler Access | 18% | 5 checks |
| llms.txt | 23% | 6 checks |
| Schema & Structured Data | 20% | 6 checks |
| Content Extractability | 15% | 5 checks |
| AI Agent Readiness | 12% | 5 checks |
| Citation Probe Performance | 8% | 5 checks |
| Performance | 4% | 5 checks |
The categories are arranged by priority for manual auditing — crawler access and llms.txt first, since failing either one makes every other fix irrelevant.
How to Use the Template
- Duplicate it — Click the Duplicate button on the Notion page to add it to your workspace
- Enter your domain at the top — The template has a field for your site URL so each check is in context
- Work through categories in order — Start with AI Crawler Access, then llms.txt. Check off each item after verifying it
- Add notes in each toggle — Notion lets you add sub-bullets under each checkbox for observations, links to fixes, or notes for colleagues
- Run the automated audit after each sprint — Use tryansly.com to confirm your fixes registered and to see your updated score
The Full 47-Check Breakdown
Category 1: AI Crawler Access (18%)
This is the silent killer. A single robots.txt rule can block all AI crawlers simultaneously, making every other optimization irrelevant.
- GPTBot is allowed in robots.txt
- PerplexityBot is allowed in robots.txt
- ClaudeBot is allowed in robots.txt
- Google-Extended is allowed in robots.txt
- CCBot (Common Crawl) is allowed in robots.txt
How to verify: Check your robots.txt at https://yourdomain.com/robots.txt and search for each crawler name. If you see Disallow: / under a User-agent: * rule, all crawlers are blocked. See the AI crawler robots.txt guide for the exact fix.
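If you'd rather script the lookup than eyeball the file, here is a minimal sketch using Python's standard-library robots.txt parser. The rules shown are placeholder content — swap in the text of your live robots.txt:

```python
from urllib import robotparser

# Example robots.txt content -- replace with the live file from
# https://yourdomain.com/robots.txt
ROBOTS_TXT = """\
User-agent: GPTBot
Allow: /

User-agent: *
Disallow: /admin/
"""

AI_CRAWLERS = ["GPTBot", "PerplexityBot", "ClaudeBot", "Google-Extended", "CCBot"]

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# Test a representative public URL against each crawler's effective rules
for bot in AI_CRAWLERS:
    allowed = rp.can_fetch(bot, "https://example.com/blog/post")
    print(f"{bot}: {'allowed' if allowed else 'BLOCKED'}")
```

Crawlers without a dedicated `User-agent` block fall through to the `*` rules, which is exactly how a single `Disallow: /` under `User-agent: *` blocks every AI crawler at once.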
Category 2: llms.txt (23% — highest weight)
llms.txt is the primary document AI agents use to understand your site's purpose and structure. Missing or malformed files are the #1 reason well-optimized sites still get low citation rates.
- /llms.txt file exists and is publicly accessible
- File has a valid H1 heading
- File has a blockquote with a purpose statement
- File has multiple H2 sections
- All linked URLs in the file resolve without errors
- File size is under 100KB
How to verify: Open https://yourdomain.com/llms.txt in a browser. If it 404s, you have no file. If it loads, check for the required structure. The complete llms.txt guide has a copy-paste template.
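The structural checks in this category can be sketched as a small validator. This assumes you've already fetched the file contents into a string; the sample llms.txt below is hypothetical:

```python
def check_llms_txt(text: str) -> dict:
    """Run the structural llms.txt checks on raw file content."""
    lines = text.splitlines()
    return {
        "has_h1": any(l.startswith("# ") for l in lines),
        "has_blockquote": any(l.startswith("> ") for l in lines),
        "h2_count": sum(l.startswith("## ") for l in lines),
        "under_100kb": len(text.encode("utf-8")) < 100 * 1024,
    }

# Hypothetical llms.txt content for a fictional site
SAMPLE = """# Acme Analytics
> Acme helps SaaS teams track product usage.

## Docs
- [Quickstart](https://example.com/docs/quickstart)

## Blog
- [AEO basics](https://example.com/blog/aeo)
"""

print(check_llms_txt(SAMPLE))
```

Note this covers the format checks only; verifying that every linked URL resolves still requires fetching each link.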
Category 3: Schema & Structured Data (20%)
AI engines parse JSON-LD directly from page source. Without the right schemas, AI models must guess what your content is about — and guessing hurts citation rates.
- FAQPage schema present on top 10 pages
- FAQPage schema passes Google Rich Results validation
- Article/BlogPosting schema on all editorial content
- Article schema includes datePublished and dateModified
- Organization schema present on homepage
- Organization schema includes sameAs links (LinkedIn, Twitter/X, G2, Crunchbase)
How to verify: Use Google's Rich Results Test on each page. The FAQPage schema guide has copy-paste JSON-LD for every common use case.
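For reference, here is the shape of a minimal FAQPage JSON-LD block, built as a Python dict so it can be generated per page. The question and answer text are placeholders:

```python
import json

# Minimal FAQPage JSON-LD -- question/answer text is placeholder content
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is Answer Engine Optimization?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "AEO is the practice of structuring content so AI engines can parse and cite it.",
            },
        }
    ],
}

# Embed the output in the page head as:
# <script type="application/ld+json"> ... </script>
print(json.dumps(faq_schema, indent=2))
```

Each additional Q&A pair is another entry in `mainEntity`; Article and Organization schema follow the same pattern with their own `@type` and properties.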
Category 4: Content Extractability (15%)
Great content that is structurally unextractable is effectively invisible to AI retrieval. This category checks whether AI engines can parse useful, citable information from your pages.
- Pages use semantic HTML heading hierarchy (H1 → H2 → H3)
- H2/H3 headings are written as questions, not topics
- Each section leads with a direct answer in the first sentence
- At least one FAQ section with 4–6 Q&A pairs on key pages
- Main content is not hidden behind JavaScript-only rendering
How to verify: View source on your most important pages. If your content only appears in the browser (not in the HTML source), it is JS-only rendered. Restructure headings as questions manually — it requires no technical changes.
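The heading-hierarchy check can be sketched with a regex over the raw HTML source. Because this reads static HTML only (no JavaScript execution), an empty result on a page that clearly shows headings in the browser is itself a signal the content is JS-only rendered:

```python
import re

def heading_outline(html: str) -> list:
    """Extract (level, text) pairs for h1-h3 tags from raw HTML source."""
    return [
        (int(level), re.sub(r"<[^>]+>", "", body).strip())
        for level, body in re.findall(r"<h([1-3])[^>]*>(.*?)</h\1>", html, re.S | re.I)
    ]

# Hypothetical page source
page = "<h1>AEO Guide</h1><h2>What is AEO?</h2><h3>Why does it matter?</h3>"
print(heading_outline(page))
```

A healthy outline starts with a single H1 and steps down one level at a time, with H2/H3 text phrased as questions.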
Category 5: AI Agent Readiness (12%)
AI agents operate like automated researchers with strict time budgets. Technical signals that confuse or slow them down reduce your citation probability.
- sitemap.xml exists and is submitted to search engines
- lastmod dates in sitemap reflect actual content changes (not all identical)
- Canonical tags are correctly set on all pages
- No key pages are accidentally marked noindex
- Pages are server-side rendered (not dependent on client-side JS)
How to verify: Check your sitemap at https://yourdomain.com/sitemap.xml. If every lastmod is the same date, it is likely being set to your last deployment date rather than actual content modification dates.
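The identical-lastmod check is easy to automate with the standard-library XML parser. The sitemap content below is a placeholder — substitute your live file:

```python
import xml.etree.ElementTree as ET

# Sample sitemap content -- replace with the live file from
# https://yourdomain.com/sitemap.xml
SITEMAP = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/</loc><lastmod>2025-01-10</lastmod></url>
  <url><loc>https://example.com/blog</loc><lastmod>2025-01-10</lastmod></url>
</urlset>"""

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
root = ET.fromstring(SITEMAP)
dates = [e.text for e in root.findall(".//sm:lastmod", NS)]

if len(dates) > 1 and len(set(dates)) == 1:
    print("warning: every lastmod is identical -- likely set at deploy time")
```

The namespace lookup matters: sitemap elements live under the sitemaps.org namespace, so a plain `.//lastmod` search returns nothing.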
Category 6: Citation Probe Performance (8%)
This is the only category that measures output directly — whether your site actually appears in AI-generated answers. It lags all other improvements by 4–8 weeks.
- Domain cited in ChatGPT answers for core product queries
- Domain cited in Perplexity answers for core product queries
- Domain cited in Claude answers for core product queries
- G2 or Capterra profile exists with reviews
- Crunchbase or LinkedIn company profile is complete
How to verify: Ask ChatGPT, Perplexity, and Claude the top questions your customers search for. See whether your domain appears in citations. The automated audit at tryansly.com runs 31 of these probes simultaneously.
Category 7: Performance (4%)
The smallest category by weight, but critical failures here mean AI crawlers never fully load your content — a binary visibility problem.
- Largest Contentful Paint (LCP) under 2.5 seconds
- Total Blocking Time (TBT) under 300ms
- Cumulative Layout Shift (CLS) under 0.1
- Time to First Byte (TTFB) under 800ms
- No pages larger than 3MB
How to verify: Use PageSpeed Insights for any page. Fix critical failures first; fine-tuning marginally passing scores is a low-ROI use of time.
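The thresholds above can be encoded as a tiny pass/fail filter, useful for triaging a batch of PageSpeed results. The measured numbers here are hypothetical:

```python
# Core Web Vitals thresholds from the checklist (units encoded in the key names)
THRESHOLDS = {"lcp_s": 2.5, "tbt_ms": 300, "cls": 0.1, "ttfb_ms": 800}

def failing_metrics(measured: dict) -> list:
    """Return the metrics that exceed their checklist threshold."""
    return [name for name, limit in THRESHOLDS.items() if measured.get(name, 0) > limit]

# Hypothetical numbers as PageSpeed Insights might report them
print(failing_metrics({"lcp_s": 3.1, "tbt_ms": 120, "cls": 0.05, "ttfb_ms": 900}))
# ['lcp_s', 'ttfb_ms']
```

Anything this returns is a critical failure worth fixing; metrics that pass, even narrowly, can wait.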
Manual vs. Automated: Which to Use
| Situation | Use |
|---|---|
| First-time team audit, documenting your process | Notion template |
| Onboarding a client or new team member | Notion template |
| Getting a score fast (30 seconds) | tryansly.com |
| Monthly progress tracking | tryansly.com |
| Verifying a fix registered correctly | tryansly.com |
| Running citation probes across 3 AI platforms | tryansly.com |
The Notion template is a documentation and collaboration tool. The automated audit is your measurement instrument. Use both.
Duplicate the Template
Get the free AEO Audit Checklist Notion Template →
After duplicating it, run a baseline automated audit at tryansly.com to see which of the 47 checks you are already passing — so you can skip those rows in the template and focus only on the gaps.
Related Reading
- The AEO Audit Checklist 2026: 10 Fixes Ranked by Impact - The same checks ranked by ROI, with implementation details for the top two fixes.
- Free AEO Audit: What It Checks and How to Read Your Score - Detailed guide to each of the 7 AEO categories and how to interpret your score.
- llms.txt: The Complete Guide - How to implement the highest-weight category in the checklist.
- FAQPage Schema for AI Search - The 15-minute fix that delivers the most citation lift per hour.
- Is GPTBot Blocked From Your Website? - How to fix AI crawler access, the most common silent failure in audits.