Learn how to read your GEO audit scores, from the AI Visibility Index down to individual sub-scores and benchmarks.
Your audit report breaks AI visibility into layers of scores. Each layer measures a different aspect of your page's readiness for AI citation. This page explains what every score means, how it is calculated, and what targets to aim for.
For a high-level overview of the scoring framework, see Metrics Explained.
The AI Visibility Index (AIVI) is your headline metric. It combines optimization readiness with actual AI platform performance into a single number.
Formula: (GEO Readiness × 0.6) + (Share of Voice × 0.4)

The 60/40 weighting reflects that optimization is what you control directly. Share of Voice (how often AI platforms mention you) tends to lag behind the improvements you make.
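Expressed as code, the calculation is a one-liner. A minimal TypeScript sketch, assuming both inputs are 0–100 scores (the function name and signature are illustrative, not part of Vizzybl's API):

```typescript
// The AIVI formula above; the 0.6/0.4 weights are from the docs,
// the function name is hypothetical.
function aiVisibilityIndex(geoReadiness: number, shareOfVoice: number): number {
  return geoReadiness * 0.6 + shareOfVoice * 0.4;
}

// Example: strong optimization (85) but a lagging Share of Voice (40)
// yields an AIVI of 67.
console.log(aiVisibilityIndex(85, 40)); // 67
```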
GEO Readiness measures how well your content is structured for AI citation. It is a dynamic average of your pillar scores.
When you run a standard audit, three pillars contribute: Technical, Content, and Authority. When you enable media analysis, Media becomes the fourth pillar and the average adjusts automatically.
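Here is a minimal sketch of that dynamic average. The equal weighting of pillars and the type names are assumptions; the docs state only that Media joins the average when enabled:

```typescript
// Hypothetical sketch: GEO Readiness as an unweighted average of the
// pillars present in the audit. Equal weighting is an assumption.
interface PillarScores {
  technical: number;
  content: number;
  authority: number;
  media?: number; // present only when media analysis is enabled
}

function geoReadiness(p: PillarScores): number {
  const pillars = [p.technical, p.content, p.authority];
  if (p.media !== undefined) pillars.push(p.media); // fourth pillar joins automatically
  return pillars.reduce((sum, s) => sum + s, 0) / pillars.length;
}
```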
The Technical pillar evaluates whether AI engines can access and parse your content.
Semantic HTML measures how well your page uses structured markup. Three sub-scores contribute to the total.
| Sub-Score | Max Points | What it measures |
|---|---|---|
| Heading Hierarchy | 40 | Valid outline with no skipped levels and a single H1 |
| Semantic Containers | 30 | Presence of `main`, `article`, `nav`, `aside`, `header`, `footer` |
| Clean Scope | 30 | Main content ratio vs. noisy sidebar and footer elements |
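To make the Heading Hierarchy check concrete, here is a sketch of the rule it describes: exactly one H1 and no skipped levels. The logic is illustrative, not Vizzybl's own validator:

```typescript
// Illustrative check for the Heading Hierarchy sub-score: one H1 and
// no skipped levels (e.g. an H2 followed directly by an H4 fails).
function validHeadingOutline(levels: number[]): boolean {
  const h1Count = levels.filter((l) => l === 1).length;
  if (h1Count !== 1) return false;
  for (let i = 1; i < levels.length; i++) {
    // A heading may go deeper by at most one level at a time.
    if (levels[i] > levels[i - 1] + 1) return false;
  }
  return true;
}

validHeadingOutline([1, 2, 3, 2, 2]); // true
validHeadingOutline([1, 2, 4]);       // false: H3 was skipped
```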
Vizzybl checks for structured data types that AI engines use to extract facts: `FAQPage`, `Product`, `HowTo`, and `Article`. Pages with valid schema markup are flagged as rich-result eligible.
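A sketch of what such a check involves, scanning a page's JSON-LD blocks for those four types. The parsing and the eligibility rule are simplified assumptions, not Vizzybl's implementation:

```typescript
// Scan JSON-LD blocks for the schema types AI engines extract facts from.
const RICH_RESULT_TYPES = new Set(["FAQPage", "Product", "HowTo", "Article"]);

function richResultEligible(jsonLdBlocks: string[]): boolean {
  return jsonLdBlocks.some((block) => {
    try {
      const data = JSON.parse(block);
      // "@type" may be a single string or an array of strings.
      const types: string[] = Array.isArray(data["@type"])
        ? data["@type"]
        : [data["@type"]];
      return types.some((t) => RICH_RESULT_TYPES.has(t));
    } catch {
      return false; // invalid JSON-LD never qualifies
    }
  });
}
```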
Your `robots.txt` configuration determines which AI crawlers can index your pages. Vizzybl checks whether major AI bots are allowed or blocked, and whether an `llms.txt` file is present. Blocked bots reduce your Technical score.
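The sketch below shows what a simplified version of that check could look like. The bot list names real AI crawler user-agents, but the exact set Vizzybl checks and the parsing details are assumptions:

```typescript
// Simplified robots.txt check: which AI user-agents are fully disallowed?
// Ignores Allow overrides, wildcards, and path-specific rules.
const AI_BOTS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended"];

function blockedAiBots(robotsTxt: string): string[] {
  const blocked = new Set<string>();
  let agents: string[] = [];
  let inRules = false;
  for (const raw of robotsTxt.split("\n")) {
    const match = raw.trim().match(/^(user-agent|disallow)\s*:\s*(.*)$/i);
    if (!match) continue;
    const [, field, value] = match;
    if (field.toLowerCase() === "user-agent") {
      if (inRules) agents = []; // a new record group starts
      inRules = false;
      agents.push(value.trim());
    } else {
      inRules = true;
      if (value.trim() === "/") {
        for (const a of agents) if (AI_BOTS.includes(a)) blocked.add(a);
      }
    }
  }
  return [...blocked];
}

blockedAiBots("User-agent: GPTBot\nDisallow: /"); // ["GPTBot"]
```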
The Content pillar measures the quality and structure of your text for AI extraction. Five metrics contribute.
Freshness has the strongest correlation (r=0.68) with AI discoverability in the GEO-16 framework. Three sub-scores contribute.
| Sub-Score | Max Points | What it measures |
|---|---|---|
| Date Presence | 30 | Schema.org dates, meta tag dates, and visible dates on the page |
| Recency | 40 | How recently the content was last updated |
| Human Visibility | 30 | Whether dates are visible to readers, not hidden in metadata |
Content updated within the last 30 days earns the maximum recency score. Content older than two years earns the minimum.
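Those two endpoints suggest a scoring curve like the following sketch. The 30-day and two-year thresholds come from the text above; the linear falloff between them, and the minimum of zero, are assumptions:

```typescript
// Sketch of the Recency sub-score (max 40 points). The 30-day and
// two-year endpoints are documented; the linear decay between them
// is an assumption, not Vizzybl's published curve.
function recencyScore(daysSinceUpdate: number): number {
  const FRESH_DAYS = 30;      // full points at or under 30 days
  const STALE_DAYS = 365 * 2; // minimum beyond two years
  const MAX_POINTS = 40;
  if (daysSinceUpdate <= FRESH_DAYS) return MAX_POINTS;
  if (daysSinceUpdate >= STALE_DAYS) return 0;
  const span = STALE_DAYS - FRESH_DAYS;
  return Math.round(MAX_POINTS * (1 - (daysSinceUpdate - FRESH_DAYS) / span));
}
```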
Transparency measures trust signals that AI models look for when deciding whether to cite a source.
| Sub-Score | Max Points | What it measures |
|---|---|---|
| Author Attribution | 30 | Named author, schema markup, credentials, and linked profiles |
| Editorial Disclosure | 30 | Disclaimers, editor's notes, and sponsored content notices |
| Provenance | 40 | Evidence-based claims, source attributions, and methodology sections |
The Authority pillar measures your brand's recognition and trustworthiness.
Vizzybl queries the Google Knowledge Graph API to determine whether your brand is a recognized entity. The result score maps to trust levels.
| Result Score | Trust Level |
|---|---|
| Above 700 | High — Strong entity recognition |
| 400–700 | Moderate — Partial recognition |
| Below 400 | Low — Weak or no entity presence |
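That table translates directly into a lookup. A minimal sketch, treating the band edges as written, so 400 and 700 fall in the Moderate band:

```typescript
// Direct transcription of the trust-level table above.
type TrustLevel = "high" | "moderate" | "low";

function trustLevel(resultScore: number): TrustLevel {
  if (resultScore > 700) return "high";      // strong entity recognition
  if (resultScore >= 400) return "moderate"; // partial recognition
  return "low";                              // weak or no entity presence
}
```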
Vizzybl compares your brand description with how the Knowledge Graph describes you. The result is classified as `match`, `partial_match`, `mismatch`, or `no_kg_description`. Closer alignment signals stronger brand consistency to AI engines.
The Media pillar is optional and appears when you enable media analysis during an audit. It evaluates how well your visual content supports AI extraction.
Each image is scored across four dimensions.
| Dimension | Max Points | What it measures |
|---|---|---|
| Contextual Metadata | 25 | Alt text quality, filename descriptiveness, caption alignment |
| Visual Comprehension | 30 | Entity recognition, OCR readability, topic alignment |
| Technical Discoverability | 25 | ImageObject schema, file format, size optimization |
| Originality | 20 | Custom vs. stock indicators, branding, metadata richness |
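The four maximums sum to 100, so the per-image total is a straight sum. A sketch with illustrative field names:

```typescript
// Per-image total from the four dimensions above. Field names are
// hypothetical; the maximums (25 + 30 + 25 + 20 = 100) are from the table.
interface ImageScore {
  contextualMetadata: number;       // 0-25
  visualComprehension: number;      // 0-30
  technicalDiscoverability: number; // 0-25
  originality: number;              // 0-20
}

function imageTotal(s: ImageScore): number {
  return (
    s.contextualMetadata +
    s.visualComprehension +
    s.technicalDiscoverability +
    s.originality
  );
}
```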
Videos are evaluated for timestamp and chapter presence, transcript availability, VideoObject schema completeness, and answer-first structure.
The Citation Potential Score (CPS) predicts how likely AI engines are to cite your page. It combines four weighted factors.
| Component | Weight | What it measures |
|---|---|---|
| Fact Density | 40% | Statistics, percentages, data points, and verifiable claims |
| Novelty | 35% | How unique your content is compared to existing search results |
| Link Authority | 15% | Quality of outbound links to authoritative domains |
| Data Formatting | 10% | Presence of tables, lists, and structured data |
Formula: (0.40 × factDensity) + (0.35 × noveltyScore) + (0.15 × linkAuthority) + (0.10 × dataFormatting)
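The same formula in TypeScript, assuming each component is a 0–100 score. The parameter names mirror the formula; the function itself is illustrative:

```typescript
// The CPS formula above, expressed directly; weights are from the table.
function citationPotentialScore(c: {
  factDensity: number;
  noveltyScore: number;
  linkAuthority: number;
  dataFormatting: number;
}): number {
  return (
    0.4 * c.factDensity +
    0.35 * c.noveltyScore +
    0.15 * c.linkAuthority +
    0.1 * c.dataFormatting
  );
}
```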
Use these benchmarks to interpret any score in your report.
| Score | Rating | What to do |
|---|---|---|
| 80–100 | Excellent | Maintain and monitor for regressions |
| 60–79 | Good | Address specific gaps in your weakest pillar |
| 40–59 | Needs work | Prioritize your lowest-scoring pillar for improvement |
| 0–39 | Critical | Fundamental optimization needed across multiple areas |
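If you consume audit results programmatically, the bands translate directly into a lookup. A sketch mirroring the table's rating labels:

```typescript
// The benchmark bands from the table; edges are inclusive as written.
function rating(score: number): string {
  if (score >= 80) return "Excellent";
  if (score >= 60) return "Good";
  if (score >= 40) return "Needs work";
  return "Critical";
}
```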