Methodology

How the AI Exposure Score Is Calculated

The AI Exposure Score (0–100) measures how visible, understandable, and recommendable your product is to AI assistants like ChatGPT, Claude, Perplexity, and Gemini. It is calculated across six categories and more than 20 individual signals.

Score range

0–39: Low — AI cannot reliably recommend you
40–59: Fair — partial visibility, key gaps
60–74: Good — visible, some signals missing
75–89: Strong — well-optimized for AI discovery
90–100: Excellent — highly citable by AI systems

Category weights

AI Crawl Access: 20 pts
Content Quality: 20 pts
Product Clarity: 15 pts
Structured Data & Meta: 20 pts
Agent Readiness: 10 pts
Trust & Social Proof: 15 pts
Total: 100 pts

AI Crawl Access (20 points)

AI crawlers need permission to index your site and a structured way to understand it. Blocking AI crawlers in robots.txt means ChatGPT, Claude, and Perplexity cannot index your content. llms.txt files act like a product brief for LLMs — giving them exactly what they need to describe you accurately.

Signals checked (5)

  • sitemap.xml present and parseable (+5 pts)
  • sitemap.xml contains 3+ URLs (+2 pts)
  • robots.txt allows AI crawlers (GPTBot, ClaudeBot, PerplexityBot) (+5 pts)
  • llms.txt file exists at /llms.txt (+5 pts)
  • llms-full.txt file exists at /llms-full.txt (+3 pts)
Tip: The fastest wins in this category are adding a sitemap.xml and allowing AI crawlers in robots.txt. Both take under 5 minutes to fix.
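As a sketch, a robots.txt that explicitly allows the major AI crawlers could look like the following (the domain in the Sitemap line is a placeholder):

```text
# robots.txt defaults to allow, but explicit entries make intent
# clear and survive a blanket Disallow added elsewhere.
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

# Pointing crawlers at your sitemap is optional but helps them
# discover your full URL list.
Sitemap: https://example.com/sitemap.xml
```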

Content Quality (20 points)

AI systems need text they can actually read. JavaScript-heavy sites with minimal crawlable HTML score poorly here because LLMs typically parse the raw source, not rendered JavaScript. High text-to-HTML ratios signal that your pages have real content rather than empty shells.

Signals checked (4)

  • Homepage has 500+ words of crawlable text (+5 pts)
  • Text-to-HTML ratio above 15% (+5 pts)
  • 3+ pages reachable from homepage (+5 pts)
  • Navigation links to key sections (+5 pts)
Tip: If your site is built with a JS framework, ensure your key marketing content is server-rendered so it appears in the HTML source.
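As an illustrative sketch (not the scanner's actual implementation), the text-to-HTML ratio can be estimated by stripping tags with the standard library and comparing the visible-text length to the raw-source length:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text, ignoring <script> and <style> contents."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self.skip = 0  # depth inside script/style elements

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self.skip:
            self.skip -= 1

    def handle_data(self, data):
        if not self.skip:
            self.parts.append(data)

def text_to_html_ratio(html: str) -> float:
    """Ratio of visible text length to total HTML length (0.0 to 1.0)."""
    parser = TextExtractor()
    parser.feed(html)
    text = " ".join("".join(parser.parts).split())
    return len(text) / max(len(html), 1)

page = ("<html><head><script>var x=1;</script></head>"
        "<body><h1>Hello world</h1></body></html>")
ratio = text_to_html_ratio(page)  # only "Hello world" counts as text
```

A page that is mostly markup and bundled JavaScript scores a low ratio here, which matches the intuition behind the 15% threshold above.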

Product Clarity (15 points)

AI assistants need to understand what your product does before they can recommend it. A missing or vague H1, no features page, and no pricing information all reduce confidence. When an AI is asked 'what is X?', it needs a clear, unambiguous answer from your site's content.

Signals checked (3)

  • Clear H1 headline present (+5 pts)
  • Feature keywords or features page found (+5 pts)
  • Pricing page or pricing information found (+5 pts)
Tip: Your H1 should state what the product is and who it's for in one sentence. 'The AI visibility audit for founders and developers' outperforms 'Welcome to our platform' every time.
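Following the tip above, an illustrative homepage header (the supporting copy is hypothetical) might look like:

```html
<!-- One sentence: what the product is and who it's for -->
<h1>The AI visibility audit for founders and developers</h1>
<p>Scan your site and see exactly what ChatGPT, Claude, and Perplexity can read.</p>
```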

Structured Data & Meta (20 points)

Structured data (schema.org JSON-LD) is the highest-fidelity signal you can give AI systems. A SoftwareApplication schema tells LLMs your product name, category, description, pricing, and URL in a machine-readable format they can extract and reference directly. OpenGraph tags improve how your product appears in AI-generated summaries.

Signals checked (5)

  • OpenGraph tags present (og:title, og:description, og:image) (+5 pts)
  • Page title is set and descriptive (+3 pts)
  • Meta description is present (150–160 characters) (+3 pts)
  • JSON-LD structured data (SoftwareApplication, Product, or Organization) (+7 pts)
  • Canonical URL set (+2 pts)
Tip: Adding a SoftwareApplication JSON-LD block is a single-file change that can raise your score by 5–7 points immediately. AIExposureTool generates this file for you.
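A minimal SoftwareApplication block, with placeholder values you would replace with your own product details, might look like:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "SoftwareApplication",
  "name": "ExampleApp",
  "applicationCategory": "DeveloperApplication",
  "description": "One-sentence description of what the product does and for whom.",
  "url": "https://example.com",
  "offers": {
    "@type": "Offer",
    "price": "0",
    "priceCurrency": "USD"
  }
}
</script>
```

This single block covers the name, category, description, pricing, and URL signals described above in one machine-readable place.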

Agent Readiness (10 points)

Agent readiness measures whether your site is prepared for autonomous AI agents — tools like Claude, ChatGPT with browsing, and Perplexity that browse your site on behalf of users. llms.txt was proposed by Jeremy Howard (fast.ai) as an AI-native equivalent of robots.txt — a plain-text briefing document designed specifically for LLMs.

Signals checked (2)

  • llms.txt file exists and is readable (+5 pts)
  • llms-full.txt file exists and is readable (+5 pts)
Tip: llms.txt should include: product name and tagline, what the product does, who it's for, key use cases, pricing summary, and links to important pages. AIExposureTool generates both llms.txt and llms-full.txt for you based on your site content.
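Based on the checklist in the tip, a minimal llms.txt skeleton (placeholder names and URLs throughout) could look like this, following the Markdown structure of the llms.txt proposal:

```markdown
# ExampleApp

> One-line tagline: what the product does and who it's for.

ExampleApp scans websites and reports how visible they are to AI
assistants. It is aimed at founders and developers who want their
product to be accurately described and recommended by LLMs.

## Key pages

- [Features](https://example.com/features): what the product does
- [Pricing](https://example.com/pricing): free and paid tiers
- [Docs](https://example.com/docs): setup and API reference
```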

Trust & Social Proof (15 points)

AI assistants assess trust before recommending products. Testimonials, social proof, and quantifiable metrics signal that real users have validated the product. This aligns with Google's E-E-A-T framework (Experience, Expertise, Authoritativeness, Trustworthiness), which also influences how AI systems evaluate content quality.

Signals checked (3)

  • Testimonials present on homepage (+5 pts)
  • Customer logos or trust marks detected (+5 pts)
  • Quantifiable metrics (user counts, company counts, scan counts) (+5 pts)
Tip: Specific numbers outperform vague claims. '1,000+ sites scanned' is more credible to AI systems than 'thousands of users'.

Scoring Model and Aggregation

The AI Exposure Score is a weighted additive model. Each signal is checked independently; points are awarded for passing signals and withheld for failing ones. Partial credit is not awarded — signals are binary pass/fail where possible, with a small number of graduated signals (e.g., word count checked at multiple thresholds).

The total possible score is 100 points. The final score represents the percentage of points earned — a score of 72 means the site earned 72 of 100 possible points across all checked signals.
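As a sketch of the weighted additive model described above (the signal names, weights, and checks here are illustrative, not the tool's exact internals):

```python
# Each signal: (name, points awarded if passed, check over scraped site data).
# Signals are binary pass/fail; points are summed, never partially awarded.
SIGNALS = [
    ("sitemap_present",  5, lambda site: site.get("sitemap", False)),
    ("robots_allows_ai", 5, lambda site: site.get("ai_allowed", False)),
    ("llms_txt",         5, lambda site: site.get("llms_txt", False)),
    ("clear_h1",         5, lambda site: bool(site.get("h1", "").strip())),
]

def exposure_score(site: dict) -> int:
    """Weighted additive score: sum of points for every passing signal."""
    return sum(points for _, points, check in SIGNALS if check(site))

site = {"sitemap": True, "ai_allowed": True, "h1": "The AI visibility audit"}
score = exposure_score(site)  # 15 of the 20 points defined above
```

Because every check is independent, fixing one signal never lowers another, which is why the improvements in the ranked list below can be tackled in any order.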

Scans are run against the live version of your site at the time of the audit. Dynamic JavaScript content is not rendered; the score reflects what AI crawlers and LLMs see when they fetch your site's HTML source directly — the same view that ChatGPT, Claude, and Perplexity use when indexing content.

All scans are passive and read-only. No authentication credentials are required or used. The scanner does not modify your site in any way.

How to Improve Your Score

The highest-ROI improvements, ranked by points per hour of implementation effort:

  1. Allow AI crawlers in robots.txt (up to 5 pts, ~5 min)

     Add User-agent entries for GPTBot, ClaudeBot, PerplexityBot, OAI-SearchBot, and Bytespider.

  2. Add a sitemap.xml with 3+ URLs (up to 7 pts, ~10 min)

     Most frameworks generate this automatically. Verify it's accessible at /sitemap.xml.

  3. Add JSON-LD SoftwareApplication schema (up to 7 pts, ~15 min)

     One <script type="application/ld+json"> block in your page head. AIExposureTool generates this for you.

  4. Create a /llms.txt file (up to 10 pts, ~20 min)

     Plain text file with product name, description, target users, use cases, and key URLs. AIExposureTool generates this for you.

  5. Write a clear H1 headline (up to 5 pts, ~5 min)

     State what your product does and who it's for in one sentence. Avoid generic phrases like 'Welcome'.

  6. Add OpenGraph tags (up to 5 pts, ~10 min)

     og:title, og:description, og:image. Most meta tag libraries handle this automatically.

  7. Add testimonials and quantifiable metrics (up to 10 pts, ~30 min)

     Real testimonials with names and specific outcomes. Include at least one metric (user count, scans, companies).
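The OpenGraph tags from step 6, with placeholder values you would replace with your own copy and image URL, might look like:

```html
<meta property="og:title" content="ExampleApp: the AI visibility audit" />
<meta property="og:description" content="Scan your site and see what AI assistants can read." />
<meta property="og:image" content="https://example.com/og-image.png" />
```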

See your AI Exposure Score

Scan your site for free and see exactly which signals you're passing, which you're missing, and what to fix first.
