AI Exposure Score — Signal Reference
Every signal in the AI Exposure Score with weight, category, and pass criteria. Compact technical companion to the full methodology page.
Overview
The AI Exposure Score is a 0-100 composite measuring how visible and recommendable a site is to AI answer engines. It is computed across 25+ automated checks grouped into 6 weighted categories totalling 100 points.
The 6 categories
AI Crawl Access
20 points. Can AI crawlers reach and fetch your site? Robots.txt, sitemap, render path, response codes.
Content Quality
20 points. Is your content readable and citable by LLMs? Page text, headings, length, freshness, readability.
Product Clarity
15 points. Can AI clearly describe what your product does, who it's for, and how to access it? Hero copy, value prop, pricing visibility.
Structured Data & Meta
20 points. JSON-LD schema types, OpenGraph, Twitter cards, canonical URLs, descriptive titles and meta descriptions.
Agent Readiness
10 points. llms.txt, llms-full.txt, llm.json — files specifically formatted for AI agents.
Trust & Social Proof
15 points. Testimonials, customer counts, press mentions, team page, EEAT signals.
Total: 100 points across 6 categories.
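The AI Crawl Access category starts with whether AI bots can fetch the site at all. As an illustration only (the audit does not prescribe a specific policy), a robots.txt that admits the major AI crawlers and advertises the sitemap might look like this — GPTBot, ClaudeBot, and PerplexityBot are the publicly documented user-agent tokens for OpenAI, Anthropic, and Perplexity; the domain is a placeholder:

```text
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

Sitemap: https://example.com/sitemap.xml
```

Blocking these user-agents (or returning non-200 responses to them) zeroes out the highest-weighted category regardless of how strong the rest of the site is.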
Signal reference
Every individual check, the category it rolls up to, and its maximum point value.
Total reference points across all signals: 100. Categories cap their summed points at the category weight.
Grade bands
- A: Cited across multiple AI engines for category prompts. Exemplary AEO posture.
- B: Cited on most engines for branded queries; gaps on category prompts.
- C: Visible to AI but missing 2-3 high-impact signals. Most sites land here on first audit.
- D: AI engines partially understand the product but rarely cite it. Significant gaps in crawl access or structured data.
- F: AI engines either can't reach the site or have insufficient first-party signal to cite. Top priority for AEO investment.
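A sketch of the band mapping in TypeScript. The A/B/C cutoffs follow the 85 / 70 / 50 thresholds given in the FAQ below; the D/F boundary (30 here) is an assumption, since the document does not state it:

```typescript
type Grade = "A" | "B" | "C" | "D" | "F";

// Map a 0-100 total score to a grade band.
// Cutoffs 85/70/50 come from the FAQ; 30 is an illustrative guess.
function bandFor(totalScore: number): Grade {
  if (totalScore > 85) return "A";
  if (totalScore >= 70) return "B";
  if (totalScore >= 50) return "C";
  if (totalScore >= 30) return "D"; // assumed D/F boundary
  return "F";
}

console.log(bandFor(90)); // "A"
console.log(bandFor(55)); // "C"
```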
Scoring formula
Each signal is graded — most are not pass/fail, they earn partial credit based on quality. The category score is the sum of its signals capped at the category maximum. The total score is the sum of category scores.
signalScore   = check(site) → 0..signal.max
categoryScore = min(category.maxPoints, Σ signalScore(s) for s ∈ category)
totalScore    = Σ categoryScore(c) for c ∈ categories   // 0..100
grade         = bandFor(totalScore)                     // A | B | C | D | F
Example: if all 5 SoftwareApplication-tier signals earn full marks (4 + 4 + 4 + 4 + 4 = 20), the Structured Data & Meta category contributes the full 20 points. If the signals sum to 22, the category still caps at 20.
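The capping behaviour above can be sketched directly from the formula. This is an illustrative implementation, not the product's actual source; the type and function names are invented for the example:

```typescript
type Category = { maxPoints: number; signalScores: number[] };

// categoryScore = min(category.maxPoints, Σ signalScore)
function categoryScore(c: Category): number {
  const sum = c.signalScores.reduce((a, b) => a + b, 0);
  return Math.min(c.maxPoints, sum);
}

// totalScore = Σ categoryScore, bounded 0..100 by the category weights
function totalScore(categories: Category[]): number {
  return categories.reduce((a, c) => a + categoryScore(c), 0);
}

// Five 4-point signals sum to exactly the 20-point cap...
console.log(categoryScore({ maxPoints: 20, signalScores: [4, 4, 4, 4, 4] })); // 20
// ...and a 22-point sum still caps at 20.
console.log(categoryScore({ maxPoints: 20, signalScores: [4, 4, 4, 4, 6] })); // 20
```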
Highest-leverage fixes
Ranked by points-per-minute. Run these in order on a fresh audit and most sites move from F-tier to C-tier in under 90 minutes of work.
Score freshness & re-scan
AI Exposure Scores are point-in-time. The signals that move fastest:
- Crawl access changes the moment you edit robots.txt. Re-scan within minutes.
- JSON-LD is detected on the next crawl — re-scan after deploy.
- llms.txt and llms-full.txt are also detected immediately after they go live.
- Content quality signals follow your CMS — the scan reflects current state.
Re-scan as often as you ship. Free plan allows unlimited re-scans by URL; Starter and above attach scans to a project for trend tracking.
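For the Agent Readiness files, the llms.txt proposal specifies a plain-markdown file served at the site root (/llms.txt): an H1 with the site name, a blockquote summary, then H2 sections of annotated links. A minimal sketch — product name, URLs, and descriptions are all placeholders:

```markdown
# Example Product

> One-sentence summary of what the product does and who it serves.

## Docs

- [Quickstart](https://example.com/docs/quickstart): Install and first run
- [Pricing](https://example.com/pricing): Plans, limits, and trial details
```

llms-full.txt follows the same proposal but inlines the full page content rather than linking to it.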
FAQ
What is the AI Exposure Score?
The AI Exposure Score is a 0-100 score measuring how visible and recommendable your site is to AI assistants like ChatGPT, Perplexity, Claude, and Gemini. It is computed across 25+ signals in 6 categories: AI Crawl Access (20 pts), Content Quality (20 pts), Product Clarity (15 pts), Structured Data & Meta (20 pts), Agent Readiness (10 pts), and Trust & Social Proof (15 pts).
What is a good AI Exposure Score?
Above 85 means the site is consistently cited by AI engines. 70-85 means decent visibility with clear gaps. 50-70 is the typical starting point — you have a real product but AI engines aren't sure how to describe or recommend it. Below 50 means AI engines either can't read your site or don't have enough first-party signal to cite you.
How is the score calculated?
Each signal has a maximum point value. We run automated checks on your site (crawl access, robots.txt rules, page content, JSON-LD presence, llms.txt existence, meta tags, etc.), then sum the points earned. Points are awarded for either passing or earning partial credit — many signals are graded, not pass/fail.
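As an example of one such check, the JSON-LD presence signal looks for schema.org markup in the page. A minimal SoftwareApplication block might look like this — all values are illustrative, and the exact types the audit rewards are not specified here:

```json
{
  "@context": "https://schema.org",
  "@type": "SoftwareApplication",
  "name": "Example App",
  "applicationCategory": "DeveloperApplication",
  "operatingSystem": "Web",
  "offers": {
    "@type": "Offer",
    "price": "0",
    "priceCurrency": "USD"
  }
}
```

This is embedded in the page head inside a `<script type="application/ld+json">` tag, where crawlers pick it up without executing any JavaScript.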
Is the score deterministic — will the same site always get the same score?
Yes for static signals (robots.txt, JSON-LD, llms.txt presence). For content-based signals that use AI evaluation (e.g. clarity of product description), small variance is possible but typically <2 points. Re-running an audit on the same URL within an hour gives the same score in 95%+ of cases.
Why are some categories weighted more than others?
Weights reflect how much each category moves real AI citation rate. AI Crawl Access and Structured Data & Meta are 20 points each because if AI bots can't fetch or parse your site, nothing else matters. Agent Readiness is 10 because llms.txt is high-leverage but most sites don't have one yet — it's a smaller pool by design.
Get your AI Exposure Score
Run the full audit on your domain. Free, no signup. Returns the 0-100 score plus a breakdown by category and signal.