Understanding Audit Scores

Learn how to read your GEO audit scores, from the AI Visibility Index down to individual sub-scores and benchmarks.


Your audit report breaks AI visibility into layers of scores. Each layer measures a different aspect of your page's readiness for AI citation. This page explains what every score means, how it is calculated, and what targets to aim for.

For a high-level overview of the scoring framework, see Metrics Explained.

AI Visibility Index

The AI Visibility Index (AIVI) is your headline metric. It combines optimization readiness with actual AI platform performance into a single number.

  • Formula: (GEO Readiness × 0.6) + (Share of Voice × 0.4)
  • Range: 0–100

The 60/40 weighting reflects that optimization is what you control directly. Share of Voice — how often AI platforms mention you — tends to lag behind the improvements you make.
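The weighting is a plain linear blend, so it translates directly into code. A minimal sketch (the function name and the assumption that both inputs are on the 0–100 scale are illustrative, not Vizzybl's API):

```python
def ai_visibility_index(geo_readiness: float, share_of_voice: float) -> float:
    """Blend optimization readiness (60%) with Share of Voice (40%).

    Both inputs are assumed to be 0-100, matching the scales described above.
    """
    return (geo_readiness * 0.6) + (share_of_voice * 0.4)
```

A page with a GEO Readiness of 80 but a Share of Voice of only 50 lands at 68, since the readiness side carries more weight by design.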

GEO Readiness

GEO Readiness measures how well your content is structured for AI citation. It is a dynamic average of your pillar scores.

When you run a standard audit, three pillars contribute: Technical, Content, and Authority. When you enable media analysis, Media becomes the fourth pillar and the average adjusts automatically.
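The "dynamic average" behaves like an optional fourth term; a sketch of that logic (function and parameter names are illustrative):

```python
from typing import Optional

def geo_readiness(technical: float, content: float, authority: float,
                  media: Optional[float] = None) -> float:
    """Average the pillar scores; Media joins the average only when
    media analysis was enabled for the audit."""
    pillars = [technical, content, authority]
    if media is not None:
        pillars.append(media)
    return sum(pillars) / len(pillars)
```

Note that adding a strong Media pillar can raise the average, while a weak one will pull it down, so enabling media analysis changes the denominator, not just the inputs.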

Technical pillar

The Technical pillar evaluates whether AI engines can access and parse your content.

Semantic HTML

Semantic HTML measures how well your page uses structured markup. Three sub-scores contribute to the total.

| Sub-Score | Max Points | What it measures |
| --- | --- | --- |
| Heading Hierarchy | 40 | Valid outline with no skipped levels and a single H1 |
| Semantic Containers | 30 | Presence of main, article, nav, aside, header, footer |
| Clean Scope | 30 | Main content ratio vs. noisy sidebar and footer elements |
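The Heading Hierarchy rules ("no skipped levels, a single H1") can be checked with the standard library alone. A sketch under those two rules only; Vizzybl's actual scorer may apply additional checks:

```python
from html.parser import HTMLParser

class HeadingCollector(HTMLParser):
    """Record heading levels (1-6) in document order."""
    def __init__(self):
        super().__init__()
        self.levels = []

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3", "h4", "h5", "h6"):
            self.levels.append(int(tag[1]))

def heading_hierarchy_ok(html: str) -> bool:
    """True when the page has exactly one <h1> and never skips a level."""
    collector = HeadingCollector()
    collector.feed(html)
    levels = collector.levels
    if levels.count(1) != 1:
        return False
    # A step down (h3 -> h2) is fine; a skip up (h2 -> h4) is not.
    return all(b - a <= 1 for a, b in zip(levels, levels[1:]))
```

An h1 followed directly by an h3, for example, fails the skipped-level rule even though both tags are individually valid.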

Schema.org validation

Vizzybl checks for structured data types that AI engines use to extract facts: FAQPage, Product, HowTo, and Article. Pages with valid schema markup are flagged as rich-result eligible.
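Checking for those types yourself is mostly a matter of parsing JSON-LD script blocks. A sketch that looks only at JSON-LD (Vizzybl's validator is more thorough, and this ignores microdata and RDFa entirely):

```python
import json
from html.parser import HTMLParser

# The four types named above as citation-relevant.
CITATION_TYPES = {"FAQPage", "Product", "HowTo", "Article"}

class JsonLdCollector(HTMLParser):
    """Grab the contents of <script type="application/ld+json"> blocks."""
    def __init__(self):
        super().__init__()
        self._in_jsonld = False
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and ("type", "application/ld+json") in attrs:
            self._in_jsonld = True

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_jsonld = False

    def handle_data(self, data):
        if self._in_jsonld:
            self.blocks.append(data)

def detected_schema_types(html: str) -> set:
    """Return the citation-relevant @type values found in JSON-LD."""
    collector = JsonLdCollector()
    collector.feed(html)
    found = set()
    for block in collector.blocks:
        try:
            doc = json.loads(block)
        except ValueError:
            continue  # skip malformed JSON-LD rather than failing
        items = doc if isinstance(doc, list) else [doc]
        for item in items:
            if isinstance(item, dict) and item.get("@type") in CITATION_TYPES:
                found.add(item["@type"])
    return found
```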

AI bot access

Your robots.txt configuration determines which AI crawlers can index your pages. Vizzybl checks whether major AI bots are allowed or blocked, and whether an llms.txt file is present. Blocked bots reduce your Technical score.
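You can run the same kind of check against your own robots.txt with the standard library. A sketch; the bot list here is an assumption about which user agents Vizzybl tests, and the llms.txt check would be a separate fetch:

```python
from urllib.robotparser import RobotFileParser

# User-agent strings of well-known AI crawlers; the exact set Vizzybl
# audits is an assumption on our part.
AI_BOTS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended"]

def blocked_ai_bots(robots_txt: str, url: str = "https://example.com/") -> list:
    """Return the AI crawlers that this robots.txt disallows for the URL."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return [bot for bot in AI_BOTS if not parser.can_fetch(bot, url)]
```

For example, a file with `User-agent: GPTBot` / `Disallow: /` followed by `User-agent: *` / `Allow: /` blocks only GPTBot, and would show up as a single blocked bot in the result.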

Content pillar

The Content pillar measures the quality and structure of your text for AI extraction. Five metrics contribute: the three below, plus Freshness and Transparency, which are detailed in their own sections.

  • Nugget density — The ratio of concise, quotable facts to total content. Pages with more statistics, data points, and verifiable claims score higher.
  • Structure score — How well your content is organized into logical, extractable sections.
  • Readability — Based on Flesch Reading Ease and Gunning Fog Index. Clear, well-structured prose scores higher.

Freshness

Freshness has the strongest correlation (r=0.68) with AI discoverability in the GEO-16 framework. Three sub-scores contribute.

| Sub-Score | Max Points | What it measures |
| --- | --- | --- |
| Date Presence | 30 | Schema.org dates, meta tag dates, and visible dates on the page |
| Recency | 40 | How recently the content was last updated |
| Human Visibility | 30 | Whether dates are visible to readers, not hidden in metadata |

Content updated within the last 30 days earns the maximum recency score. Content older than two years earns the minimum.
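Only the two endpoints of the Recency curve are documented above; a sketch of a tiered mapping, where the middle tiers are illustrative assumptions, not Vizzybl's actual breakpoints:

```python
def recency_score(days_since_update: int) -> int:
    """Map content age to the 0-40 Recency sub-score.

    Only the endpoints come from the docs (<= 30 days earns the max,
    older than two years earns the min); the middle tiers are assumed.
    """
    if days_since_update <= 30:
        return 40
    if days_since_update <= 365:
        return 25   # assumed mid tier
    if days_since_update <= 730:
        return 10   # assumed mid tier
    return 0
```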

Transparency

Transparency measures trust signals that AI models look for when deciding whether to cite a source.

| Sub-Score | Max Points | What it measures |
| --- | --- | --- |
| Author Attribution | 30 | Named author, schema markup, credentials, and linked profiles |
| Editorial Disclosure | 30 | Disclaimers, editor's notes, and sponsored content notices |
| Provenance | 40 | Evidence-based claims, source attributions, and methodology sections |

Authority pillar

The Authority pillar measures your brand's recognition and trustworthiness.

Knowledge Graph presence

Vizzybl queries the Google Knowledge Graph API to determine whether your brand is a recognized entity. The result score maps to trust levels.

| Result Score | Trust Level |
| --- | --- |
| Above 700 | High — Strong entity recognition |
| 400–700 | Moderate — Partial recognition |
| Below 400 | Low — Weak or no entity presence |
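The banding is a simple threshold lookup; a sketch, with boundary handling at exactly 400 and 700 following the table's wording:

```python
def kg_trust_level(result_score: float) -> str:
    """Map a Knowledge Graph result score to its trust band."""
    if result_score > 700:
        return "High"
    if result_score >= 400:   # the 400-700 band is inclusive of both edges
        return "Moderate"
    return "Low"
```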

Description alignment

Vizzybl compares your brand description with how the Knowledge Graph describes you. The result is classified as match, partial_match, mismatch, or no_kg_description. Closer alignment signals stronger brand consistency to AI engines.

Media pillar

The Media pillar is optional and appears when you enable media analysis during an audit. It evaluates how well your visual content supports AI extraction.

Image analysis

Each image is scored across four dimensions.

| Dimension | Max Points | What it measures |
| --- | --- | --- |
| Contextual Metadata | 25 | Alt text quality, filename descriptiveness, caption alignment |
| Visual Comprehension | 30 | Entity recognition, OCR readability, topic alignment |
| Technical Discoverability | 25 | ImageObject schema, file format, size optimization |
| Originality | 20 | Custom vs. stock indicators, branding, metadata richness |

Video analysis

Videos are evaluated for timestamp and chapter presence, transcript availability, VideoObject schema completeness, and answer-first structure.

Citation Potential Score

The Citation Potential Score (CPS) predicts how likely AI engines are to cite your page. It combines four weighted factors.

| Component | Weight | What it measures |
| --- | --- | --- |
| Fact Density | 40% | Statistics, percentages, data points, and verifiable claims |
| Novelty | 35% | How unique your content is compared to existing search results |
| Link Authority | 15% | Quality of outbound links to authoritative domains |
| Data Formatting | 10% | Presence of tables, lists, and structured data |

Formula: (0.40 × factDensity) + (0.35 × noveltyScore) + (0.15 × linkAuthority) + (0.10 × dataFormatting)
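The formula maps directly to code. A sketch, assuming each component is already normalized to a 0–100 scale before weighting:

```python
def citation_potential(fact_density: float, novelty: float,
                       link_authority: float, data_formatting: float) -> float:
    """Weighted blend of the four CPS components (each assumed 0-100)."""
    return (0.40 * fact_density
            + 0.35 * novelty
            + 0.15 * link_authority
            + 0.10 * data_formatting)
```

Because Fact Density and Novelty together carry 75% of the weight, a page full of unique, verifiable claims can score well even with modest outbound linking and formatting.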

Score benchmarks

Use these benchmarks to interpret any score in your report.

| Score | Rating | What to do |
| --- | --- | --- |
| 80–100 | Excellent | Maintain and monitor for regressions |
| 60–79 | Good | Address specific gaps in your weakest pillar |
| 40–59 | Needs work | Prioritize your lowest-scoring pillar for improvement |
| 0–39 | Critical | Fundamental optimization needed across multiple areas |
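Since every score in the report shares this 0–100 scale, the bands reduce to one lookup; a sketch:

```python
def score_rating(score: float) -> str:
    """Translate any 0-100 report score into its benchmark band."""
    if score >= 80:
        return "Excellent"
    if score >= 60:
        return "Good"
    if score >= 40:
        return "Needs work"
    return "Critical"
```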

Next steps