Live Case Study · Updated April 16, 2026 · Living document

From 78 to 91 in One Afternoon: A Documented AI Visibility Case Study

This is a real, documented case study. We ate our own dog food: we used our own tool to audit AIExposureTool's AI visibility, identified 7 specific failing checks worth 29 points, and fixed all of them in under 4 hours. The score went from 78 to 91. This page documents every fix, the exact time it took, and the point impact, so you know exactly what works.

Latest: April 16, 2026 · 78 → 91

Today we fixed 7 failing checks in a single afternoon and jumped 13 points. The exact fixes and times are documented below. This is proof the process works — on ourselves, with real data.

56 → 91 · Starting → current

+13 pts · Today's fix session

< 4 hrs · Time to implement

7 · Checks fixed today

The Starting Point: Score 56, Zero AI Mentions

On March 15, 2026, we ran our own tool on aiexposuretool.com. The result was humbling: 56 out of 100. For a tool that helps others improve their AI visibility, we had significant gaps ourselves.

The audit found:

  • Our robots.txt did not explicitly allow AI crawlers
  • We had no llms.txt file — AI crawlers had no structured summary of our product
  • No JSON-LD structured data — no SoftwareApplication or FAQPage schema
  • No FAQ section or schema markup
  • No comparison pages
  • Zero social proof visible to AI crawlers

We had 870 Google Search impressions growing steadily. But when we asked ChatGPT, Perplexity, Gemini, and Copilot about AI visibility tools — we did not appear in any answer.

Today's Session: 78 → 91 in Under 4 Hours

On April 16, 2026 — Day 32 — the score was sitting at 78. The audit identified 7 specific failing checks worth 29 points combined: no testimonials in crawlable HTML, no quantifiable metrics, no customer mentions, no integrations page, no case studies, low text-to-HTML ratio, and no answer-first content structure in key sections.

We fixed all 7 in a single afternoon. Here is what actually happened:

  • Shipped a SocialProof section on the homepage with 3 real customer testimonials (names, roles, companies in <blockquote> HTML), 4 quantifiable metrics (“1,000+ sites scanned”, “25+ signals”, “7 platforms”, “60s”), and a trust line naming the platforms AI users care about.
  • Created /integrations — a dedicated page listing all 7 AI platforms we monitor and 7 infrastructure services we integrate with. This closes the “integrations clarity” check.
  • Created /about — dedicated EEAT page with AboutPage JSON-LD, product story, and transparent methodology references.
  • Created /case-study (this page) — living document that doubles as EEAT content and social proof.
  • Upgraded the FAQ — added FAQPage JSON-LD schema (the #1 most-cited structured data type by AI), made answers always render in HTML via sr-only when collapsed so AI crawlers extract them without needing to execute clicks, and added two new Q&As about what the product is and what it costs.
  • Added answer-first intros to ProblemSection, HowItWorks, WhatYouGet, and WhoItsFor. Every section now leads with “AIExposureTool is/does/checks...” so AI extracts the entity facts from the first sentence.
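The collapsed-but-crawlable FAQ pattern mentioned above looks roughly like this. Class and attribute names are illustrative, not the exact markup we shipped; `sr-only` is the common utility class that hides content visually while keeping it in the DOM:

```html
<div class="faq-item">
  <button aria-expanded="false">What is AIExposureTool?</button>
  <!-- Hidden visually when collapsed, but always present in the HTML,
       so crawlers that never execute JavaScript still see the answer. -->
  <div class="sr-only">
    AIExposureTool scans a website and scores how visible it is to AI assistants.
  </div>
</div>
```

The key property is that expanding the answer only toggles visibility; the text itself never leaves the served HTML.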

The result: 78 → 91 on a rescan. +13 points. Total implementation time: under 4 hours. Every fix is documented below with exact time and point impact.

Score Timeline: Every Change Tracked

56 · Mar 15 · Day 1

First scan. No llms.txt, no JSON-LD, no FAQ schema. AI crawlers not explicitly allowed. Starting from scratch.

66 · Mar 15 · Day 1 · +10 pts

Added robots.txt rules for GPTBot, ClaudeBot, PerplexityBot, Google-Extended. +10 points in 5 minutes.

86 · Mar 15 · Day 1 · +20 pts

Deployed llms.txt + JSON-LD structured data. +20 points. From 56 to 86 in one afternoon.

79 · Mar 20 · Day 5 · -7 pts

Homepage redesign broke content structure. React components replaced crawlable text. Score dropped.

89 · Mar 21 · Day 6 · +10 pts

Fixed content, added FAQ schema, created comparison pages. Recovery + improvement.

91 · Mar 23 · Day 8 · +2 pts

Peak score. Added social profile links, review platform presence (Product Hunt), comparison content. All AI crawlers allowed.

84 · Apr 1 · Day 17 · -7 pts

Major feature launch — new React components increased JS ratio, dropped text-to-HTML ratio to 1%. Score dipped.

78 · Apr 9 · Day 25 · -6 pts

More features shipped. Trust signals section removed during redesign. Score dropped further.

78 · Apr 16 (AM) · Day 32

Starting point for today's fix session. 870 Google impressions, 0 confirmed AI mentions. Identified 7 failing checks worth 29 points.

91 · Apr 16 (PM) · Day 32 · +13 pts

Back to peak in one afternoon. +13 points in under 4 hours by fixing 7 specific checks: testimonials, metrics, trust line, integrations page, case study, FAQPage schema, dense answer-first content on all sections.

What We Fixed (and the Impact)

Fix · Time · Impact

Allowed AI crawlers in robots.txt

Added GPTBot, ClaudeBot, PerplexityBot, Google-Extended, meta-externalagent. Went from blocked to fully accessible.

5 min · +10 pts
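The added rules are a few lines of plain text. A sketch matching the crawlers named above (each token is the publicly documented user-agent for that crawler; Google-Extended is Google's AI-training control token rather than a separate bot):

```txt
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended
Allow: /

User-agent: meta-externalagent
Allow: /
```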

Deployed llms.txt

Auto-generated from our audit, then edited for accuracy. 3,211 characters covering product name, features, pricing, audience.

10 min · +8 pts
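The real file is 3,211 characters; here is a trimmed sketch of its shape. Section names and wording are illustrative — llms.txt has no rigid spec beyond a Markdown H1 and a blockquote summary:

```markdown
# AIExposureTool

> Scans a website in 60 seconds and scores how visible it is to AI assistants.

## Features
- AI Exposure Score with a prioritized fix roadmap
- Auto-generated llms.txt, JSON-LD, and robots.txt suggestions
- Daily mention monitoring across 7 AI platforms

## Pricing
- Free scan; see the pricing page for paid plans
```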

Added JSON-LD structured data

SoftwareApplication schema with real pricing, feature list, aggregate rating. Organization schema with sameAs links.

15 min · +12 pts
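A minimal sketch of the SoftwareApplication block. Values here are illustrative; the deployed version also carries the feature list, aggregate rating, and an Organization block with sameAs links:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "SoftwareApplication",
  "name": "AIExposureTool",
  "url": "https://aiexposuretool.com",
  "applicationCategory": "BusinessApplication",
  "offers": { "@type": "Offer", "price": "0", "priceCurrency": "USD" }
}
</script>
```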

Created FAQ schema

10 questions covering pricing, features, competitors, free plan. FAQPage JSON-LD in the homepage head.

20 min · +5 pts
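FAQPage JSON-LD mirrors the visible Q&As one-to-one. A one-question sketch (the question and answer text here are illustrative):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "Is there a free plan?",
      "acceptedAnswer": { "@type": "Answer", "text": "Yes, the first scan is free." }
    }
  ]
}
</script>
```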

Built comparison pages

Created /compare/otterly, /compare/peec-ai, /compare/evertune with feature tables and honest pros/cons.

2 hrs · +3 pts

Added social profile links

Twitter/X, LinkedIn, GitHub, Product Hunt in footer and Organization schema sameAs.

5 min · +3 pts

Listed on Product Hunt

Free launch listing. Gets indexed by Google and cited by AI as third-party validation.

30 min · +3 pts

Added testimonials in crawlable HTML

Apr 16 — 3 real customer testimonials with names, roles, and companies inside <blockquote> tags. AI needs social proof it can parse, not images or JS.

30 min · +5 pts

Added quantifiable metrics

Apr 16 — '1,000+ sites scanned', '25+ signals', '7 platforms', '60s' in plain text headings. Specific numbers are cited by AI more reliably than vague claims.

10 min · +5 pts

Added customer mention trust line

Apr 16 — 'Trusted by founders, agencies, and growth teams building with Cursor, Claude, and Lovable' in crawlable HTML.

15 min · +5 pts

Created /integrations page

Apr 16 — Full integrations page listing all 7 AI platforms and 7 infrastructure services with descriptions. Closes the 'no integrations clarity' check.

30 min · +3 pts

Created /case-study page (this page)

Apr 16 — Living case study documenting the journey. EEAT signal + meta proof that the process works.

1 hr · +2 pts

Created /about page with EEAT content

Apr 16 — Dedicated About page with AboutPage JSON-LD, product story, methodology transparency, and ~800 words of crawlable EEAT content.

45 min · included

Added FAQPage schema to homepage FAQ

Apr 16 — FAQ now renders with FAQPage JSON-LD schema and answers always present in HTML (sr-only when collapsed) so AI crawlers extract them without JavaScript.

15 min · included in +5 pts

Added answer-first intros to all sections

Apr 16 — Added dense fact-first paragraphs to Problem, HowItWorks, WhatYouGet, WhoItsFor. Every section leads with 'AIExposureTool is/does/checks...' so AI extracts entity facts from the first sentence.

30 min · +4 pts

Total time for all fixes: under 4 hours. Total impact: 56 → 91 (+35 points).

Current State: What is Working, What is Not

Passing (8 checks)

  • AI Crawl Access: 27/27 — all major AI crawlers explicitly allowed
  • Structured Data: 33/33 — SoftwareApplication, Organization, and FAQPage JSON-LD deployed
  • llms.txt, llm.json, and llms-full.txt all deployed at site root
  • Sitemap present, robots.txt allows all AI crawlers, no WAF blocking
  • Product Clarity: 13/15 — clear H1, features described, pricing page crawlable
  • Trust & Social Proof: fixed — testimonials with real names, quantifiable metrics (1,000+ sites scanned), customer mention line in crawlable HTML
  • EEAT: comparison content at /compare, /about page, /case-study page, /integrations page, social profiles linked, Product Hunt listing
  • Answer-first content: dense intro paragraphs on Problem, HowItWorks, WhatYouGet, WhoItsFor sections

Failing (2 checks)

  • Text-to-HTML ratio still on the low side (React-heavy homepage) — targeting further improvement
  • Customer logos (image logos of named customers) — pending real customer permissions
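A low text-to-HTML ratio is easy to measure yourself. Below is a minimal stdlib-only sketch; the exact formula and thresholds our audit uses are not published, so treat this as our own approximation, not the tool's scoring code:

```python
from html.parser import HTMLParser


class TextExtractor(HTMLParser):
    """Collects visible text, skipping the contents of script/style tags."""

    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip:
            self.chunks.append(data.strip())


def text_to_html_ratio(html: str) -> float:
    """Visible text length divided by raw HTML length (0.0 to 1.0)."""
    parser = TextExtractor()
    parser.feed(html)
    text = " ".join(c for c in parser.chunks if c)
    return len(text) / max(len(html), 1)


page = "<html><head><script>var x=1;</script></head><body><p>Hello world</p></body></html>"
print(round(text_to_html_ratio(page), 2))  # 0.13
```

A JS-heavy page like the 1% one in our timeline would score near 0.01 here; content-dense pages land far higher, which is why server-rendering more text moves the check.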

The irony

Until today's session, we were an AI visibility tool with perfect technical scores: every crawler allowed, every structured data type deployed, every discovery file present. Yet we scored 0/15 on Trust and Social Proof, because we had no visible testimonials, no customer logos, and no quantifiable metrics. AI will not confidently recommend a product that nobody else visibly validates. Closing that gap is what today's fixes were about.

What We Are Fixing Now

This case study is a living document. Here is what we are shipping today and this week:

  • Rescan weekly to catch score regressions · 60 sec each · Expected: monitoring · Ongoing
  • Start real AI mention tracking (add 5 buyer prompts) · 5 min · Expected: visibility data · This week
  • Raise the text-to-HTML ratio with more SSR content · 2 hrs · Expected: +2-3 pts · This month
  • Collect permissioned customer logos · ongoing · Expected: +5 pts · When customers agree
  • Publish a /case-study update when the first AI mention happens · 30 min · Expected: social proof · When it happens

Key Learnings So Far

  1. llms.txt + JSON-LD = instant 20+ point jump. We went from 56 to 86 in one afternoon. These two files are the highest-ROI change any site can make.
  2. Redesigns kill scores. Every homepage redesign dropped our score because new React components reduced text-to-HTML ratio. Ship content alongside code changes.
  3. Trust signals are the hardest to earn. Technical fixes take hours. Social proof takes weeks or months of real usage and customer relationships.
  4. Score fluctuates and that is normal. We have been between 78 and 91. Continuous monitoring catches regressions that otherwise go unnoticed.
  5. Google impressions do not equal AI mentions. We have 870 Google impressions but zero confirmed AI mentions. SEO and AI visibility are different games with different signals.

How This Helps You

Everything we did to improve our score is exactly what AIExposureTool does for you:

  • Scan your site — get your AI Exposure Score and see every blocker, just like we did on day 1
  • Get auto-generated fix files — llms.txt, JSON-LD, FAQ schema, robots.txt suggestions
  • Follow the fix roadmap — prioritized by impact, with estimated time and point gains
  • Track whether AI mentions you — daily monitoring across 7 AI platforms shows if the fixes are working
  • See competitors — know who AI recommends instead of you and why

We will keep updating this case study as we fix more issues and track AI mention progress. Bookmark this page or scan your own site to start your journey.

Start your own AI visibility journey

Scan your site in 60 seconds. See your score, every blocker, and get auto-generated fix files — just like we did.