Reference
Answer Engine Optimization Glossary (2026)
Every term used in AEO, GEO, and AI visibility — defined, with examples and cross-references. Bookmark this; the vocabulary is moving fast.
A
AEO
Answer Engine Optimization
The discipline of optimizing content to be cited by answer engines (Perplexity, ChatGPT, Google AI Overviews, Microsoft Copilot, Claude, Gemini). SEO ranks links; AEO earns citations inside the AI's answer.
Related: GEO, ASEO, Answer engine
Agent readiness
How parseable your site is for AI agents
A category of AEO signals measuring whether AI crawlers can fetch, parse, and understand your site. Includes llms.txt, llms-full.txt, llm.json, structured data, and clean static HTML.
AI crawler
Bot operated by an AI company that fetches public web content
Examples: GPTBot (OpenAI), ClaudeBot (Anthropic), PerplexityBot, Google-Extended, Bingbot, Applebot-Extended, Meta-ExternalAgent, Bytespider. Some fetch for training data, some for real-time queries.
Related: GPTBot, ClaudeBot, PerplexityBot
AIES / AI Exposure Score
0-100 score measuring your AI visibility readiness
Composite score across 25+ signals in seven categories: AI Crawl Access, Content Quality, Product Clarity, Structured Data, Agent Readiness, Trust Signals, and EEAT. Score >85 = consistently cited; <50 = invisible to AI.
Answer engine
AI system that generates direct answers with citations
Examples: Perplexity, ChatGPT (with web search), Google AI Overviews, Google AI Mode, Microsoft Copilot, Google Gemini. Distinct from chatbots that don't cite sources.
ASEO
Answer Search Engine Optimization
Synonym for AEO. Less common in 2026 — most practitioners use AEO.
B
Brand citation share
% of category prompts where AI cites your brand
Headline metric for enterprise AI visibility programs. Tracked over time vs competitors. Above 60% = category leadership.
Related: Share of voice, Visibility score
C
ChatGPT-User
OpenAI's real-time browsing crawler
Fires when a ChatGPT user asks a question that requires fetching live web content. Different from GPTBot (training) and OAI-SearchBot (ChatGPT Search index). Block this and you lose live citation visibility.
Related: GPTBot, OAI-SearchBot
Citation gap
Sources cited by AI for competitors but not for you
When a third-party site (Reddit thread, blog, G2 listing) is cited by ChatGPT for a category prompt, and your brand isn't mentioned in that same answer, that's a citation gap. The most actionable signal in AEO — pitch the source or build content that displaces it.
AI cites amplitude.com/compare in 5 product-recommendation answers. You're not in that page → citation gap → pitch them or write a competing comparison.
Citation gap analysis
Surfacing citation gaps systematically across tracked prompts
Aggregating citation gaps across many prompts to identify the highest-priority sources to influence. Offered as a dedicated feature in some AEO platforms.
ClaudeBot
Anthropic's training crawler for Claude
Crawls public content for Claude's training corpus. Anthropic also runs Claude-Web for real-time fetches.
Related: AI crawler, Claude-Web
Cross-platform coverage
Number of AI engines where your brand is cited at all
Out of ~7 major AI engines, how many cite you for category prompts? An important secondary metric — high coverage means resilient visibility (no single engine dominates your traffic).
E
EEAT
Experience, Expertise, Authoritativeness, Trustworthiness
Google's quality framework, but equally important for AI visibility. AI engines weigh trust signals — testimonials, team pages, press mentions, customer counts — when deciding what to recommend.
Related: Trust signals
F
FAQ schema / FAQPage JSON-LD
Structured data marking up Q&A pairs
JSON-LD schema type that tells AI engines your page contains question-and-answer pairs. The single highest-ROI structured data type for AEO — Q&A maps 1:1 to how AI structures answers.
{ "@type": "FAQPage", "mainEntity": [{ "@type": "Question", ... }] }
Related: JSON-LD, Schema.org
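A minimal complete FAQPage payload can be built and embedded programmatically. The sketch below (in Python, with a hypothetical question and answer) serializes a Schema.org FAQPage object into the script tag AI engines parse:

```python
import json

# A minimal FAQPage payload following the Schema.org FAQPage type.
# The question and answer text are hypothetical placeholders.
faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is AEO?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Answer Engine Optimization: optimizing content "
                        "to be cited by AI answer engines.",
            },
        }
    ],
}

# Embed the serialized JSON inside a <script type="application/ld+json"> tag.
snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(faq, indent=2)
    + "\n</script>"
)
print(snippet)
```

Each additional Q&A pair on the page becomes another Question object appended to the mainEntity list.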
G
GEO
Generative Engine Optimization
Broader than AEO — includes optimization for any generative AI output, not just answer engines that cite sources. AEO is a subset of GEO. Most tactics overlap.
Related: AEO
Google-Extended
Google's robots.txt token to opt out of Gemini training
Not a separate crawler — a token that lets sites disallow training-data use for Gemini and Vertex AI without affecting standard Google Search indexing.
Related: AI crawler
GPTBot
OpenAI's training crawler
Fetches content to add to OpenAI's training corpus. Identifies as 'GPTBot/1.2'. Block via robots.txt to opt out of training data without affecting ChatGPT-User or OAI-SearchBot.
Related: AI crawler, ChatGPT-User
J
JSON-LD
JavaScript Object Notation for Linked Data
Format for embedding structured data in web pages, recommended by Schema.org. AI engines parse JSON-LD to understand page entities — products, organizations, articles, FAQs.
Related: Schema.org, FAQ schema
L
llm.json
Machine-readable structured product profile for AI
JSON variant of llms.txt — same intent (tell AI what your product is) but structured for programmatic consumption. Used by AI agents that need typed product data.
Related: llms.txt
llms-full.txt
Extended version of llms.txt with full product documentation
Comprehensive (typically 500-2000 word) document with detailed feature descriptions, all pricing tiers, integrations, technical details. AI fetches this for deep-research queries.
Related: llms.txt
llms.txt
Markdown brief at /llms.txt that tells AI what your product does
Plain-text Markdown file at the root of your domain. Lists product name, one-line description, key features, pricing, target audience. Sites with valid llms.txt get cited 3× more often.
# YourProduct
> One-sentence description

## Key Features
- ...
Related: llms-full.txt, llm.json
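A slightly fuller sketch of the shape an llms.txt can take — every product name, feature, and price below is hypothetical, included only to show the sections in use:

```markdown
# ExampleCRM
> Lightweight CRM for early-stage startups.

## Key Features
- Pipeline tracking with custom stages
- Email sync and sequencing

## Pricing
- Free tier; paid plans from $15/user/month

## Target Audience
- Founders and small sales teams
```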
M
Mention rate
% of tracked prompts where your brand appears in AI answers
Per-prompt visibility metric. If 10 prompts are tracked and you're cited in 6, mention rate = 60%. Headline metric in most AEO dashboards.
Related: Visibility score
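The arithmetic is straightforward; a minimal sketch, with a made-up set of tracked prompts and citation results:

```python
# Mention rate: fraction of tracked prompts whose AI answer cites the brand.
# The prompt list and boolean citation results below are hypothetical.
results = {
    "best CRM for startups": True,
    "Stripe alternatives": False,
    "ExampleCRM vs BigCRM": True,
    "top sales tools 2026": True,
    "affordable CRM": False,
}

mention_rate = sum(results.values()) / len(results)
print(f"Mention rate: {mention_rate:.0%}")  # cited in 3 of 5 prompts -> 60%
```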
O
OAI-SearchBot
OpenAI's crawler for ChatGPT Search index
Builds the dedicated index ChatGPT Search uses. Distinct from GPTBot (training) and ChatGPT-User (real-time).
P
PerplexityBot
Perplexity AI's real-time crawler
Fetches public content when a user asks Perplexity a question. Identifies as 'PerplexityBot/1.0'. Blocking it removes you from Perplexity entirely.
Related: AI crawler
Prominence tier
Position rank when your brand is cited (1 = first / primary)
Tier 1 = primary recommendation, Tier 2 = secondary mention, Tier 3 = listed alongside others, Tier 4 = absent. Better tiers (closer to Tier 1) drive more click-through.
Related: Citation position
R
Real-time crawler
AI bot that fetches content during a live user query
Examples: ChatGPT-User, Claude-Web, PerplexityBot, Meta-ExternalFetcher. Doesn't add to training data — only fetches in response to a specific user question. Block these and AI can't cite your live content.
Related: Training crawler
S
Schema.org
Vocabulary for structured data on the web
Cooperative project from Google, Microsoft, Yahoo, and Yandex defining types like SoftwareApplication, Organization, FAQPage, Product. AI engines use Schema.org markup as a primary signal.
Sentiment classification
Whether AI describes your brand positively, neutrally, or negatively
Even when cited, AI's tone matters. 'Best-in-class' (positive) drives more clicks than 'limited features' (negative). Track sentiment per provider over time.
Share of voice (AI)
Your portion of AI mentions in your category over time
When AI answers category-level prompts, what % of brand mentions are yours vs competitors? Tracked as a trend line — moving share of voice up = winning the category.
Related: Brand citation share
T
Tracked prompt
Buyer-style query you monitor across AI engines
Specific prompt your tool runs on a schedule (e.g., daily/weekly) across multiple AI engines to track whether your brand is cited. Examples: 'best CRM for startups,' 'Stripe alternatives,' 'X vs Y'.
Training crawler
AI bot that fetches content to add to LLM training data
Examples: GPTBot, ClaudeBot, Google-Extended, Applebot-Extended, Meta-ExternalAgent, Bytespider. Block these to opt out of training without losing live citation visibility (allow real-time crawlers).
Related: Real-time crawler
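The block-training, allow-real-time split is expressed in robots.txt. A sketch using the user-agent tokens named in this glossary (verify each token against the vendor's current documentation before deploying):

```
# Opt out of training-data crawls...
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# ...while leaving real-time (citation) fetchers unblocked.
User-agent: ChatGPT-User
Allow: /

User-agent: PerplexityBot
Allow: /
```

Note that robots.txt is advisory: well-behaved crawlers honor it, but it is not an access control.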
Trust signals
On-page evidence AI uses to gauge credibility
Testimonials with named customers, user counts ('trusted by 5,000+ teams'), customer logos, press mentions, security/privacy pages, About pages with team bios, review platform listings (G2, Trustpilot).
V
Visibility score
Composite metric of AI mention quality across providers
Common formula: presence rate × 80% + cross-model consistency × 20%. Above 50% = healthy; below 25% = invisible. Different from AIES (which measures readiness) — visibility score measures actual outcomes.
Related: Mention rate, AIES
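The common formula above is easy to plug into code; both input measurements below are hypothetical:

```python
# Visibility score = presence rate * 0.8 + cross-model consistency * 0.2,
# per the common formula cited for this metric. Inputs are hypothetical.
presence_rate = 0.55            # cited in 55% of tracked prompt runs
cross_model_consistency = 0.40  # fraction of engines that consistently cite you

visibility_score = presence_rate * 0.8 + cross_model_consistency * 0.2
print(f"Visibility score: {visibility_score:.0%}")  # 0.44 + 0.08 = 52% (healthy)
```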
Put the vocabulary to work
Run a free AI visibility audit and see your AIES, citation gaps, and per-engine sentiment in 60 seconds.