seo-audit
Use this skill when performing a comprehensive SEO audit - technical audit, on-page audit, content audit, off-page audit, and AEO/GEO readiness assessment. Provides a structured scorecard with 30-40 checks rated PASS/FAIL/WARN across all SEO categories, prioritized recommendations, and links to specialized skills for deep fixes. This is the master audit skill that orchestrates all other SEO skills.
seo-audit
seo-audit is a production-ready AI agent skill for claude-code, gemini-cli, openai-codex, and mcp. It performs a comprehensive SEO audit - technical, on-page, content, off-page, and AEO/GEO readiness - and produces a structured scorecard with 30-40 PASS/FAIL/WARN checks, prioritized recommendations, and links to specialized skills for deep fixes. This is the master audit skill that orchestrates all other SEO skills.
Quick Facts
| Field | Value |
|---|---|
| Category | marketing |
| Version | 0.1.0 |
| Platforms | claude-code, gemini-cli, openai-codex, mcp |
| License | MIT |
How to Install
- Make sure you have Node.js installed on your machine.
- Run the following command in your terminal:
npx skills add AbsolutelySkilled/AbsolutelySkilled --skill seo-audit
- The seo-audit skill is now available in your AI coding agent (Claude Code, Gemini CLI, OpenAI Codex, etc.).
Overview
The SEO audit skill provides a systematic methodology for evaluating a website's search optimization across all dimensions: technical infrastructure, on-page elements, content quality, off-page signals, and AI search readiness. It produces a structured scorecard with PASS/FAIL/WARN ratings and prioritized recommendations so every finding is actionable, not just observed. This is the orchestration layer that connects all 13 specialized SEO skills - use it to diagnose, then hand off to the right skill for each fix.
Tags
seo seo-audit audit scorecard technical-audit site-audit
Platforms
- claude-code
- gemini-cli
- openai-codex
- mcp
Frequently Asked Questions
What is seo-audit?
seo-audit is the master audit skill that orchestrates all other SEO skills. It runs a comprehensive audit - technical, on-page, content, off-page, and AEO/GEO readiness - and produces a structured scorecard of 30-40 PASS/FAIL/WARN checks with prioritized recommendations and links to specialized skills for deep fixes.
How do I install seo-audit?
Run npx skills add AbsolutelySkilled/AbsolutelySkilled --skill seo-audit in your terminal. The skill will be immediately available in your AI coding agent.
What AI agents support seo-audit?
This skill works with claude-code, gemini-cli, openai-codex, mcp. Install it once and use it across any supported AI coding agent.
Maintainers
Generated from AbsolutelySkilled
SKILL.md
SEO Audit
The SEO audit skill provides a systematic methodology for evaluating a website's search optimization across all dimensions: technical infrastructure, on-page elements, content quality, off-page signals, and AI search readiness. It produces a structured scorecard with PASS/FAIL/WARN ratings and prioritized recommendations so every finding is actionable, not just observed. This is the orchestration layer that connects all 13 specialized SEO skills - use it to diagnose, then hand off to the right skill for each fix.
When to use this skill
Trigger this skill when the task involves:
- Running a full SEO audit across a website or specific section
- Performing a technical-only audit (crawlability, indexing, performance)
- Running a content audit (thin pages, cannibalization, freshness, intent)
- Pre-launch SEO check before a new site or major redesign goes live
- Periodic SEO health check (monthly, quarterly)
- Comparing your site's SEO posture against a competitor
- Producing a structured SEO report for a client or stakeholder
Do NOT trigger this skill for:
- Implementing specific fixes - once audit findings are known, load the specialized skill (e.g., `core-web-vitals`, `schema-markup`, `link-building`) for the actual fix work
- Keyword research phase before content planning - use the `keyword-research` skill instead
Key principles
- Audit systematically - never skip a category because you assume it is fine. The most damaging SEO issues are often in the areas least recently checked.
- Prioritize by impact x effort - not all issues are equal. A missing sitemap outranks a missing alt text. Score severity honestly.
- PASS/FAIL/WARN scoring removes subjectivity - every check has a defined threshold. WARN means partially compliant or approaching a threshold, not "kind of okay".
- An audit without actionable recommendations is useless - every FAIL and every WARN must link to a specific next action, tool, or specialized skill.
- Re-audit quarterly to catch regressions - deployments, CMS updates, and content changes silently break SEO. Set a recurring audit cadence.
Core concepts
The 5 audit categories
| Category | Checks | Covers |
|---|---|---|
| Technical SEO | 10 | Crawlability, indexing, performance, rendering |
| On-Page SEO | 8 | Titles, metas, headings, images, internal links, OG |
| Content SEO | 7 | Thin pages, cannibalization, clusters, E-E-A-T, freshness |
| Off-Page SEO | 5 | Backlinks, toxic links, anchor text, local, citations |
| AEO & GEO Readiness | 5 | Featured snippets, schema, AI extraction, entities, LLMs.txt |
Scoring methodology
| Status | Meaning |
|---|---|
| PASS | Meets best practice - no action required |
| WARN | Partially compliant, approaching a threshold, or inconsistently applied |
| FAIL | Missing, broken, or clearly below best practice - fix required |
| N/A | Check does not apply to this site type (mark explicitly, never skip silently) |
Priority matrix
| Priority | Definition | Timeframe |
|---|---|---|
| Critical | Directly blocks indexing or causes major ranking loss | Fix immediately (this week) |
| High | Measurable ranking/traffic impact, moderate effort | Fix this sprint |
| Medium | Best practice gap, gradual compounding effect | Fix this quarter |
| Low | Nice to have, marginal gain | Backlog |
The audit-fix-verify cycle
- Audit - run all 35 checks, record status and evidence
- Prioritize - assign Critical/High/Medium/Low to each FAIL and WARN
- Assign - link each finding to a specialized skill or tool for the fix
- Verify - re-run the specific check after the fix is deployed (not a full re-audit)
- Re-audit - full audit quarterly to catch new issues
Common tasks
Run a full SEO audit
Present the complete scorecard to the user. Fill in Status and Details as you analyze the site. Every row must have a status - never leave a row blank.
Section 1: Technical SEO
| # | Check | Status | Details |
|---|---|---|---|
| T1 | Robots.txt configured correctly | | |
| T2 | XML sitemap valid and submitted | | |
| T3 | Canonical URLs set on all pages | | |
| T4 | No redirect chains (max 1 hop) | | |
| T5 | HTTPS everywhere, no mixed content | | |
| T6 | Mobile-friendly (responsive or adaptive) | | |
| T7 | Core Web Vitals pass (LCP, CLS, INP) | | |
| T8 | No orphan pages | | |
| T9 | Clean URL structure (no parameters, lowercase) | | |
| T10 | Rendering strategy appropriate (SSR/SSG/CSR) | | |
Section 2: On-Page SEO
| # | Check | Status | Details |
|---|---|---|---|
| O1 | Unique title tags (50-60 chars) | | |
| O2 | Meta descriptions present (120-160 chars) | | |
| O3 | Single H1 per page with target keyword | | |
| O4 | Heading hierarchy correct (H1 > H2 > H3) | | |
| O5 | Image alt text present on all images | | |
| O6 | Internal linking structure is logical | | |
| O7 | Open Graph tags complete (og:title, og:description, og:image) | | |
| O8 | Semantic HTML used (article, nav, main, aside) | | |
Section 3: Content SEO
| # | Check | Status | Details |
|---|---|---|---|
| C1 | No thin content pages (< 300 words on indexable pages) | | |
| C2 | No keyword cannibalization (one URL per keyword cluster) | | |
| C3 | Topic clusters defined with pillar + supporting pages | | |
| C4 | E-E-A-T signals present (author, credentials, date, sources) | | |
| C5 | Content freshness maintained (no stale pages > 18 months) | | |
| C6 | No duplicate content (internal or cross-domain) | | |
| C7 | Search intent alignment (informational/commercial/transactional match) | | |
Section 4: Off-Page SEO
| # | Check | Status | Details |
|---|---|---|---|
| L1 | Backlink profile is healthy (quality > quantity) | | |
| L2 | No toxic or spammy links pointing to the site | | |
| L3 | Anchor text diversity (branded, generic, partial-match mix) | | |
| L4 | Local SEO configured (if applicable: GMB, NAP consistency) | | |
| L5 | Brand mentions and citations exist on authoritative sources | | |
Section 5: AEO & GEO Readiness
| # | Check | Status | Details |
|---|---|---|---|
| A1 | Content optimized for featured snippets (definitions, lists, tables) | | |
| A2 | FAQ and HowTo schema implemented where applicable | | |
| A3 | Content structured for AI extraction (clear Q&A, headers, summaries) | | |
| A4 | Entity authority signals present (linked data, Wikidata, consistent mentions) | | |
| A5 | LLMs.txt present (if applicable to the site's AI discoverability goals) | | |
Scorecard summary:
| Category | PASS | WARN | FAIL | N/A | Score |
|---|---|---|---|---|---|
| Technical SEO (10) | | | | | /10 |
| On-Page SEO (8) | | | | | /8 |
| Content SEO (7) | | | | | /7 |
| Off-Page SEO (5) | | | | | /5 |
| AEO & GEO (5) | | | | | /5 |
| Total (35) | | | | | /35 |
Score interpretation:
- 32-35 PASS: Excellent - monitor and maintain
- 26-31 PASS: Good - address WARNs before they become FAILs
- 18-25 PASS: Needs work - prioritize Critical and High items
- Under 18 PASS: Significant SEO debt - consider a full remediation project
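The scoring and interpretation above can be sketched as a small helper. This is illustrative, not part of the skill - `CATEGORIES`, `summarize`, and the check-ID prefixes are assumed names:

```python
# Check-ID prefix -> number of checks in that category (from the scorecard)
CATEGORIES = {"T": 10, "O": 8, "C": 7, "L": 5, "A": 5}

def summarize(results: dict[str, str]) -> dict[str, str]:
    """Score a completed audit. `results` maps check IDs (T1, O3, ...)
    to PASS/WARN/FAIL/N/A; the score counts PASS checks out of 35."""
    passes = sum(1 for status in results.values() if status == "PASS")
    total = sum(CATEGORIES.values())  # 35
    if passes >= 32:
        verdict = "Excellent - monitor and maintain"
    elif passes >= 26:
        verdict = "Good - address WARNs before they become FAILs"
    elif passes >= 18:
        verdict = "Needs work - prioritize Critical and High items"
    else:
        verdict = "Significant SEO debt - consider a full remediation project"
    return {"score": f"{passes}/{total}", "verdict": verdict}
```

A partially completed scorecard still scores honestly: unanswered checks simply don't count as PASS.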
Prioritize audit findings
After completing the scorecard, categorize every FAIL and WARN into the priority matrix:
Critical - fix immediately:
- Checks that block indexing: robots.txt disallowing key pages, no sitemap, canonical loops
- Checks that cause major visibility loss: HTTPS failures, redirect chains on key pages
- Rendering issues causing entire page sections to be invisible to Googlebot
High - fix this sprint:
- Missing or duplicate title tags across more than 10% of pages
- Core Web Vitals failing on high-traffic templates
- Keyword cannibalization on money pages
- Toxic backlinks that could trigger a manual penalty
Medium - fix this quarter:
- Missing alt text, inconsistent heading hierarchy
- Thin content on secondary pages
- Missing OG tags, incomplete schema markup
- Stale content on secondary pages
Low - backlog:
- LLMs.txt optimization
- Minor anchor text imbalances
- Brand mention acquisition on marginal sources
Present findings as a prioritized table:
| Priority | Check ID | Finding | Recommended action | Skill |
|---|---|---|---|---|
| Critical | T1 | Robots.txt disallowing /products/ | Update robots.txt to allow crawl | technical-seo |
| High | T7 | LCP > 4s on mobile (template-wide) | Optimize hero images, reduce TTFB | core-web-vitals |
| ... |
Generate audit report
Use the full report template in `references/audit-report-template.md`. The report has four sections:
- Executive summary - overall score, top 3 findings, business impact statement
- Scorecard - the full 35-check table with status and evidence
- Detailed findings - one entry per FAIL/WARN with: what was found, why it matters, how to fix it, and which specialized skill to use
- Action plan - week 1, month 1, and quarter 1 timeline with assigned owners
Load references/audit-report-template.md to get the full report template with
placeholder text and formatting.
Run a quick SEO health check
For a rapid 10-check assessment (15-20 minutes, not a full audit):
| # | Check | Status |
|---|---|---|
| Q1 | Site accessible and indexable (not blocked by robots.txt) | |
| Q2 | HTTPS and no mixed content | |
| Q3 | Sitemap present and submitted to GSC | |
| Q4 | No redirect chains on homepage and top 5 pages | |
| Q5 | Title tags unique and under 60 chars on top pages | |
| Q6 | Mobile-friendly (Google Mobile-Friendly Test pass) | |
| Q7 | Core Web Vitals: at least PASS on mobile for key templates | |
| Q8 | No obvious cannibalization on primary keywords | |
| Q9 | Backlink profile: no manual actions in GSC | |
| Q10 | Structured data: at least one schema type implemented | |
A quick health check surfaces showstopper issues only. Any FAIL here is Critical priority. For a complete picture, run the full 35-check audit.
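The redirect-chain rule (Q4 here, T4 in the full audit) can be checked offline against crawl data. A sketch, assuming the `{source: target}` redirect map comes from your crawler's export:

```python
def redirect_chain(start: str, redirects: dict[str, str], limit: int = 10) -> list[str]:
    """Walk a crawl-derived {source: target} redirect map from `start`.
    A healthy URL yields a chain of length <= 2 (at most one hop)."""
    chain = [start]
    seen = {start}
    while chain[-1] in redirects and len(chain) < limit:
        nxt = redirects[chain[-1]]
        if nxt in seen:  # redirect loop detected - stop and report it
            chain.append(nxt)
            break
        seen.add(nxt)
        chain.append(nxt)
    return chain
```

Any chain longer than two entries is a FAIL for the page it starts from; a chain that revisits a URL is a redirect loop and Critical priority.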
Gotchas
GSC data lags by 2-3 days - Google Search Console performance data has a processing delay. If you run an audit on Monday, the most recent data in GSC is from Thursday or Friday. Do not interpret the most recent 2-3 days as a traffic drop - it is always incomplete.
Robots.txt check T1 can return a false PASS if crawled with the wrong user-agent - A robots.txt that disallows Googlebot but allows all other agents will pass generic crawl tests. Always test with `User-agent: Googlebot` specifically, or use the robots.txt tester in Google Search Console directly.
Canonical to a redirecting URL creates a canonical-redirect chain - If the canonical tag points to a URL that itself 301s to another URL, Google may follow the redirect rather than the canonical, or may honor neither. Canonicals must point to the final destination URL, not an intermediate redirect.
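Python's standard-library robots parser makes the user-agent pitfall easy to demonstrate. The robots.txt content here is hypothetical:

```python
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Allow: /

User-agent: Googlebot
Disallow: /products/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# A generic crawl test passes, but Googlebot is blocked - check T1 must
# be evaluated with the Googlebot user-agent specifically.
print(rp.can_fetch("*", "https://example.com/products/widget"))          # True
print(rp.can_fetch("Googlebot", "https://example.com/products/widget"))  # False
```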
Core Web Vitals field data vs lab data diverge - PageSpeed Insights lab data (Lighthouse) and the CrUX field data (real user measurements) frequently disagree. A page can show LCP < 2.5s in lab but fail in field data due to geography, network conditions, or device mix. Always treat field data as the ground truth for audit scoring.
Hreflang errors silently break international rankings - Invalid hreflang attributes (missing return links, wrong locale codes, self-referencing inconsistencies) are not reported as errors in GSC - they are silently ignored. International SEO issues must be audited with a dedicated hreflang validator tool.
Anti-patterns / common mistakes
| Anti-pattern | Problem | Fix |
|---|---|---|
| Cherry-picking only easy wins | Leaves Critical issues unresolved while the site hemorrhages traffic | Always start with the priority matrix - easy is not the same as important |
| Auditing without GSC and Analytics access | You are guessing at traffic impact and missing crawl error data | Get read access before starting; a blind audit is decoration |
| Treating all FAIL items as equal priority | Team burns time on alt text while a canonical loop causes deindexing | Use the priority matrix on every engagement, no exceptions |
| Auditing once and never re-checking | Deployments break SEO silently; regressions are invisible without cadence | Schedule quarterly audits; automate continuous checks where possible |
| Reporting findings without recommended fixes | Stakeholders don't know what to do; audit report sits unread | Every finding must link to a specific action and a responsible party |
| Running only a technical audit and calling it done | Content and off-page issues often cause more ranking loss than technical issues | Always cover all 5 categories even at a surface level |
| Confusing WARN with acceptable | WARNs compound - five WARNs in the same category indicate a systemic issue | Treat three or more WARNs in a category the same as a FAIL |
References
For detailed audit checklists with per-check verification methods and fix guidance, load:
- `references/audit-checklist-technical.md` - Detailed technical SEO checks with how-to-verify steps, tool recommendations, and PASS/FAIL/WARN criteria
- `references/audit-checklist-content.md` - On-page and content audit methodology with verification methods for all 15 checks
- `references/audit-checklist-offpage.md` - Off-page and AEO/GEO audit methodology for all 10 checks
- `references/audit-report-template.md` - Full audit report template with executive summary, scorecard, findings, and action plan sections
Only load a reference file when actively working on that audit category - they are detailed and will consume context if loaded all at once.
For deep fixes on specific categories, load the specialized skill:
- `keyword-research` - Search intent analysis and keyword opportunity mapping
- `schema-markup` - Structured data implementation (JSON-LD, FAQ, HowTo, Product)
- `core-web-vitals` - LCP, CLS, INP optimization and performance budgets
- `technical-seo` - Crawlability, indexing, rendering, redirect management
- `content-seo` - Topic clusters, E-E-A-T, cannibalization resolution
- `link-building` - Backlink acquisition strategy and toxic link removal
- `local-seo` - Google Business Profile, NAP consistency, local citations
- `international-seo` - Hreflang, multi-language/region SEO architecture
- `ecommerce-seo` - Product pages, faceted navigation, category SEO
- `programmatic-seo` - Page generation at scale, template optimization
- `aeo-optimization` - Featured snippets, voice search, AI answer optimization
- `geo-optimization` - Generative engine optimization for ChatGPT, Perplexity, Gemini
- `on-site-seo` - Framework-specific on-page fixes (Next.js, Nuxt, WordPress, etc.)
References
audit-checklist-content.md
On-Page and Content SEO Audit Checklist
Detailed verification steps, PASS/FAIL/WARN criteria, and fix guidance for all 15 on-page and content SEO checks (O1-O8 and C1-C7). For each check: what to look for, how to verify, what status to assign, and how to fix it.
On-Page SEO Checks (O1-O8)
O1 - Unique Title Tags (50-60 Characters)
What to check:
- Every page has a `<title>` tag
- All title tags are unique across the site (no duplicates)
- Length is between 50-60 characters (shorter may underuse keyword opportunity; longer gets truncated in SERPs)
- Target keyword is present, ideally near the start
- Brand name is appended at the end: `Target Keyword - Page Description | Brand`
How to verify:
- Screaming Frog: Page Titles tab > filter by Duplicate, Missing, or Over/Under length
- GSC > Search Appearance > check for "Missing title tags" messages
PASS/FAIL/WARN criteria:
- PASS: All pages have unique titles in the 50-60 char range with primary keyword
- WARN: Some pages over 60 chars (truncation risk), a few missing titles, or keyword near end
- FAIL: More than 10% of pages missing title tags, widespread duplicates, or titles under 30 chars
Common fixes:
- Set unique title templates per page type: `{Product Name} - {Category} | Brand`
- For programmatic pages: use dynamic title generation based on content fields
- Load the `on-site-seo` skill for framework-specific `<head>` management patterns
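The O1 thresholds can be applied in bulk to a crawl export. A sketch - `audit_titles` and its input shape are illustrative; in practice the url-to-title map would come from a Screaming Frog or similar export:

```python
from collections import Counter

def audit_titles(titles: dict[str, str]) -> dict[str, list[str]]:
    """Flag O1 violations in a {url: title} map using the 50-60 char rule."""
    findings: dict[str, list[str]] = {
        "missing": [], "duplicate": [], "too_long": [], "too_short": []
    }
    counts = Counter(t for t in titles.values() if t)
    for url, title in titles.items():
        if not title:
            findings["missing"].append(url)
            continue
        if counts[title] > 1:
            findings["duplicate"].append(url)
        if len(title) > 60:       # truncation risk in SERPs
            findings["too_long"].append(url)
        elif len(title) < 50:     # underused keyword opportunity
            findings["too_short"].append(url)
    return findings
```

Map the output back to status: widespread `missing` or `duplicate` is FAIL; scattered `too_long`/`too_short` is WARN.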
O2 - Meta Descriptions Present (120-160 Characters)
What to check:
- Every indexable page has a unique meta description
- Length is 120-160 characters (shorter misses persuasion opportunity; longer gets truncated)
- Contains a call to action or value proposition - not just a keyword list
- Does not duplicate the title tag content
How to verify:
- Screaming Frog: Meta Description tab > filter Duplicate, Missing, Over/Under length
- Note: Google may rewrite meta descriptions. Source-level audit is required; GSC does not show your set descriptions.
PASS/FAIL/WARN criteria:
- PASS: All pages have unique descriptions in the 120-160 char range with a value proposition
- WARN: Some pages are missing descriptions, or descriptions are under 100 chars
- FAIL: More than 25% of pages missing meta descriptions, or site-wide duplicates
Common fixes:
- Write templates by page type: for product pages use `{product benefit} - {feature} - Shop now.`
- For blog posts: use the first 150 chars of the article intro as a fallback
- Avoid auto-generating descriptions that are identical across a template type
O3 - Single H1 Per Page With Target Keyword
What to check:
- Every indexable page has exactly one H1 tag
- H1 contains the primary target keyword (naturally, not stuffed)
- H1 is different from the `<title>` tag (similar is acceptable - identical is not)
- H1 is the first visible heading on the page
How to verify:
- Screaming Frog: H1 tab > filter for Missing H1, Multiple H1, or Duplicate H1
PASS/FAIL/WARN criteria:
- PASS: Single unique H1 on every page, contains keyword, is first heading
- WARN: Some pages have H1 identical to title tag, or a few pages with multiple H1s
- FAIL: Pages missing H1, or H1 used purely for styling (e.g., a logo in an H1 tag), or H1 has no keyword relevance
Common fixes:
- Separate the visual heading (CSS `font-size`) from the semantic heading (`<h1>`)
- For CMSs: ensure the page title field maps to H1 in the template
- Do not use H1 for the site name/logo in the header - use an `<a>` tag with an appropriate aria-label
O4 - Heading Hierarchy Correct (H1 > H2 > H3)
What to check:
- Headings follow a logical nested structure: H1 > H2 > H3, no levels are skipped
- H2s represent major sections; H3s represent subsections of H2s
- No H3 appears before any H2 on the page
- No heading tags used purely for visual styling (bold text formatted as H2)
How to verify:
- Screaming Frog: Headings tab to view all heading structures per page
- Browser extension HeadingsMap (Chrome/Firefox) shows heading tree visually
PASS/FAIL/WARN criteria:
- PASS: Logical heading hierarchy throughout; no skipped levels; H2s and H3s present
- WARN: Occasional skipped level (H1 > H3 without H2) on secondary pages
- FAIL: No H2s on key long-form pages, or heading tags used purely for visual decoration
Common fixes:
- Restructure content so major sections are H2 and subsections are H3
- Use CSS classes to control heading visual styles independently of semantic level
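Heading-hierarchy violations (and the O3 single-H1 rule) can be spot-checked with a short script. A regex-based sketch - a production audit would use a real HTML parser, since regexes miss headings in comments or scripts:

```python
import re

def heading_issues(html: str) -> list[str]:
    """Flag O3/O4 violations: wrong H1 count and skipped heading levels."""
    levels = [int(m.group(1)) for m in re.finditer(r"<h([1-6])\b", html, re.I)]
    issues = []
    if levels.count(1) != 1:
        issues.append(f"expected exactly one <h1>, found {levels.count(1)}")
    for prev, cur in zip(levels, levels[1:]):
        if cur > prev + 1:  # e.g. h1 followed directly by h3
            issues.append(f"skipped level: h{prev} followed by h{cur}")
    return issues
```

An empty list maps to PASS for this check; skipped levels on secondary pages map to WARN, a missing H1 to FAIL.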
O5 - Image Alt Text Present on All Images
What to check:
- All `<img>` elements have an `alt` attribute
- Alt text is descriptive and relevant to the image content (not stuffed with keywords)
- Decorative images use `alt=""` (an empty string) - not a missing attribute
- Product images include the product name and key attribute in alt text
How to verify:
- Screaming Frog: Images tab > filter for Missing Alt Text or Empty Alt Text
PASS/FAIL/WARN criteria:
- PASS: All content images have meaningful alt text; decorative images use `alt=""`
- FAIL: More than 20% of content images missing alt text, especially hero images and product images on key pages
Common fixes:
- For CMS content: enforce alt text as a required field in the media uploader
- For programmatic image generation: use product name + key attributes as alt template
- Run a content editor audit to add missing alt text to existing images in bulk
O6 - Internal Linking Structure Is Logical
What to check:
- Key pages receive internal links from multiple relevant pages (link equity distribution)
- Anchor text is descriptive and keyword-relevant (not "click here" or "read more")
- Pillar pages link to cluster pages; cluster pages link back to pillar pages
- No broken internal links (linking to 404 or redirected URLs)
- Deep pages are accessible within 3-4 clicks from the homepage
How to verify:
- Screaming Frog: Inlinks tab on key pages; filter for pages with 0-2 inlinks
- Ahrefs Site Audit or Semrush: internal link distribution reports
PASS/FAIL/WARN criteria:
- PASS: Key pages have 5+ internal links from relevant pages with descriptive anchors; no broken links; pillar-cluster linking structure implemented
- WARN: Some important pages with fewer than 3 internal links; occasional generic anchor text
- FAIL: Key pages with zero or one internal link; widespread broken internal links; no topic cluster linking strategy
Common fixes:
- Add contextual internal links in blog post body copy to relevant pillar and product pages
- Update anchor text from generic ("learn more") to descriptive ("see our guide to SEO audits")
- Load the `content-seo` skill for topic cluster and internal linking architecture patterns
O7 - Open Graph Tags Complete
What to check:
- `og:title` present and accurate (can differ from `<title>` for social context)
- `og:description` present (120-200 chars for social)
- `og:image` present with dimensions at least 1200x630px
- `og:url` matches the canonical URL
- `og:type` set appropriately (`website` for the homepage, `article` for blog posts)
- Twitter Card tags present (`twitter:card`, `twitter:title`, `twitter:image`)
How to verify:
- Facebook Sharing Debugger: developers.facebook.com/tools/debug/
PASS/FAIL/WARN criteria:
- PASS: All five core OG tags present on all indexable pages; image meets size requirements
- WARN: OG tags missing on some page types (e.g., tag or author pages), or image too small
- FAIL: No OG tags site-wide, or `og:image` missing on blog/product pages that are frequently shared
Common fixes:
- Next.js: use `metadata.openGraph` in `generateMetadata` or `layout.tsx`
- Ensure the OG image is uploaded at 1200x630px minimum; use a dynamic OG image generator (Vercel OG, Cloudinary) for programmatic content
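Checking O7 programmatically is straightforward with the standard-library HTML parser. The helper names are illustrative; the required-tag set mirrors the five core OG tags listed above:

```python
from html.parser import HTMLParser

REQUIRED_OG = {"og:title", "og:description", "og:image", "og:url", "og:type"}

class OGCollector(HTMLParser):
    """Collect og:* and twitter:* meta tags from a page's HTML."""
    def __init__(self):
        super().__init__()
        self.tags: dict[str, str] = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        a = dict(attrs)
        key = a.get("property") or a.get("name")
        if key and key.startswith(("og:", "twitter:")):
            self.tags[key] = a.get("content", "")

def missing_og(html: str) -> set[str]:
    """Return the core OG tags absent from the page (empty set = PASS)."""
    parser = OGCollector()
    parser.feed(html)
    return REQUIRED_OG - parser.tags.keys()
```

Run it per template rather than per page: OG tags are almost always a template-level gap, not a page-level one.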
O8 - Semantic HTML Used
What to check:
- `<article>` wraps blog posts and standalone content pieces
- `<nav>` wraps navigation menus
- `<main>` wraps primary page content (only one per page)
- `<aside>` wraps related or sidebar content
- `<header>` and `<footer>` used for site-level structure
- Not using `<div>` for everything when semantic elements exist
How to verify:
- View source and inspect the document structure for landmark element usage
- Accessibility testing: axe DevTools or WAVE will flag landmark region issues
PASS/FAIL/WARN criteria:
- PASS: All major landmark elements used correctly; article/main/nav/aside applied appropriately
- WARN: `<main>` or `<article>` missing on content pages; `<div>` used where a semantic element fits
- FAIL: Entire layout built with non-semantic divs; no structural HTML elements present
Common fixes:
- Replace `<div class="main-content">` with `<main>`
- Replace `<div class="blog-post">` with `<article>`
- Wrap site navigation in `<nav aria-label="Main navigation">`
Content SEO Checks (C1-C7)
C1 - No Thin Content Pages
What to check:
- No indexable pages with fewer than 300 words of meaningful body text
- Pages with low word count that serve a real purpose (contact page, privacy policy) should be noindexed if they offer no search value
- Thin pages that dilute crawl budget and domain authority
How to verify:
- Screaming Frog: custom extraction for word count, or use the Word Count column
- Ahrefs Site Audit: Content Quality report for thin pages
- GSC: check pages with impressions but very low CTR - often thin pages ranking poorly
PASS/FAIL/WARN criteria:
- PASS: All indexable content pages above 500 words; thin utility pages are noindexed
- WARN: Some landing pages in the 300-500 word range; a few utility pages unnecessarily indexed
- FAIL: Blog posts or product pages under 300 words indexed at scale; empty category pages indexed
Common fixes:
- Expand thin content with relevant supporting information, examples, or FAQs
- Add `<meta name="robots" content="noindex, follow">` to utility pages with no search value
- Consolidate multiple thin pages on the same topic into one comprehensive page
C2 - No Keyword Cannibalization
What to check:
- Only one URL targets each primary keyword cluster
- Two or more pages are not competing for the same keyword (splitting ranking signals)
- Homepage and a blog post are not both targeting the same branded keyword
- Product page and a category page are not both optimized for the same product keyword
How to verify:
- Google: `site:domain.com "target keyword"` to find competing pages
- Semrush: Keyword Cannibalization report
PASS/FAIL/WARN criteria:
- PASS: One clearly designated URL per primary keyword; other pages supporting without competing
- WARN: Two pages occasionally ranking for similar (not identical) terms; minor overlap
- FAIL: Two or more pages directly targeting the same keyword with similar title, H1, and content
Common fixes:
- Designate one canonical URL for the keyword; redirect or consolidate competing pages
- Differentiate the cannibalized page to target a different keyword cluster
- Add internal links from the weaker page to the canonical page to consolidate signals
- Load the `content-seo` skill for cannibalization resolution strategy
C3 - Topic Clusters Defined
What to check:
- Pillar pages exist for core topics (long-form, comprehensive, 2000+ words)
- Cluster pages (supporting articles) cover subtopics and link to the pillar
- Pillar pages link to all relevant cluster pages
- Cluster pages link back to the pillar page with relevant anchor text
- No isolated content exists outside any cluster structure
How to verify:
- Map existing content to topics; pillar pages should show high inlink counts from cluster pages
- Ahrefs or Semrush: check which pages rank for topic variations - clusters should reinforce one another
PASS/FAIL/WARN criteria:
- PASS: Clear pillar-cluster structure for each core topic; bidirectional internal linking in place
- WARN: Pillar pages exist but cluster pages don't consistently link back; some topics lack clusters
- FAIL: No pillar pages; all content exists as isolated articles with no topic cluster structure
Common fixes:
- Identify top 3-5 core topics; create or designate a pillar page for each
- Audit existing cluster pages for missing links back to pillar; add contextual links in body copy
- Load the `content-seo` skill for full topic cluster architecture methodology
C4 - E-E-A-T Signals Present
What to check:
- Experience: first-hand experience evident in the content (personal examples, case studies)
- Expertise: author bio with credentials, professional background, or relevant experience
- Authoritativeness: links to and from authoritative sources in the niche
- Trustworthiness: clear publication and last-updated dates; citing sources; no misleading claims
How to verify:
- Manually review a sample of key content pages for author attribution, bio, and dates
- Check for About page, team page, and individual author pages
- Look for outbound links to authoritative sources (studies, official sources)
PASS/FAIL/WARN criteria:
- PASS: Author bios with credentials on all content pages; dates visible; sources cited; About page present
- WARN: Author attribution present but minimal bio; dates not shown or approximate
- FAIL: No author attribution on YMYL (Your Money Your Life) content; no About page; no sourcing; or content presents unverified claims without citation
Common fixes:
- Add author bio blocks with name, credentials, and LinkedIn or portfolio link to all articles
- Display visible publication and "last updated" dates on all content
- Create an About page and team page if absent
- Load the `content-seo` skill for full E-E-A-T implementation strategy
C5 - Content Freshness Maintained
What to check:
- No key pages with a "last updated" date more than 18 months ago (unless evergreen accuracy confirmed)
- High-traffic pages (top 20 by impressions) have been reviewed and updated in the past year
- Outdated statistics, product names, or pricing have been corrected
- Seasonal or time-sensitive content has date-appropriate messaging
How to verify:
- Export top 50 pages by impressions from GSC; check content update dates
- CMS admin: sort all published posts by last modified date; flag everything over 18 months
PASS/FAIL/WARN criteria:
- PASS: All top-traffic pages reviewed in the past 12 months; no visibly outdated statistics or dates
- WARN: Some secondary pages showing stale dates (12-24 months); minor outdated content
- FAIL: High-traffic pages with 2+ year old dates; outdated year references (e.g., "best tools of 2021"); product pages referencing discontinued features
Common fixes:
- Set a quarterly content review calendar; prioritize top 20 pages by impressions
- Update `dateModified` in Article schema when content is substantively refreshed
- Remove or redirect content that is too outdated to refresh cost-effectively
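When a refresh is substantive, the `dateModified` signal in the page's Article schema should move with it. A minimal sketch of emitting that JSON-LD from Python; the headline, dates, and author name are placeholder values:

```python
import json
from datetime import date

def article_schema(headline, published, modified, author_name):
    """Build a minimal Article JSON-LD dict with explicit date signals.
    Bump dateModified only on substantive refreshes, not cosmetic edits."""
    return {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "datePublished": published.isoformat(),
        "dateModified": modified.isoformat(),
        "author": {"@type": "Person", "name": author_name},
    }

# Placeholder article metadata for illustration
schema = article_schema(
    "How to Audit Core Web Vitals",
    published=date(2023, 4, 12),
    modified=date(2025, 1, 30),
    author_name="Jane Doe",
)
# Embed the result inside <script type="application/ld+json"> in the page <head>
print(json.dumps(schema, indent=2))
```

The same helper can be wired into a CMS save hook so the date only changes on editorial updates.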
C6 - No Duplicate Content
What to check:
- No internal duplicate pages (same content on multiple URLs: with/without trailing slash, with/without `www`, HTTP vs HTTPS)
- No near-duplicate pages (product variants, color/size pages with near-identical copy)
- No content scraped or syndicated from other sites without canonical attribution
- Paginated pages don't duplicate the content of page 1
How to verify:
- Siteliner: `siteliner.com` scans for internal duplicate and near-duplicate content
- Screaming Frog: Content tab > filter Near Duplicates
PASS/FAIL/WARN criteria:
- PASS: No significant internal duplication; all URL variants canonicalize correctly
- WARN: Some near-duplicate product variant pages; minor pagination duplication
- FAIL: Multiple URLs serving identical content without canonicals; large-scale internal duplication
Common fixes:
- Add canonical tags: all URL variants (`?sort=price`, `/page/1`, `/index.html`) point to the clean canonical URL
- For product variants: create unique differentiating copy for each variant page, or canonicalize all variants to the main product page
- Configure 301 redirects: `www` > non-`www`, HTTP > HTTPS, trailing slash consistency
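The variant-collapsing logic behind those canonical and redirect fixes can be sketched as a small URL normalizer. The tracking-parameter list here is a hypothetical example; a real implementation would mirror your server's redirect rules:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Hypothetical set of parameters that never change page content
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "sort", "ref"}

def canonicalize(url: str) -> str:
    """Collapse common duplicate-URL variants (scheme, www, tracking params,
    trailing slash) onto one clean form - the same target a canonical tag
    or 301 redirect should point at."""
    parts = urlsplit(url)
    host = parts.netloc.lower().removeprefix("www.")
    # Drop tracking/sort parameters; keep anything that changes content
    query = urlencode([(k, v) for k, v in parse_qsl(parts.query)
                       if k not in TRACKING_PARAMS])
    path = parts.path.rstrip("/") or "/"
    return urlunsplit(("https", host, path, query, ""))

# Two duplicate variants resolve to the same canonical URL
assert canonicalize("http://www.example.com/shoes/?utm_source=x") == \
       canonicalize("https://example.com/shoes")
```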
C7 - Search Intent Alignment
What to check:
- Informational keywords target blog posts, guides, or FAQ pages (not product/category pages)
- Commercial investigation keywords target comparison pages, review articles, or category pages
- Transactional keywords target product pages, pricing pages, or signup/checkout flows
- Navigational keywords target brand or specific product pages
- Content format matches intent: step-by-step guides for "how to" queries, lists for "best X" queries
How to verify:
- Search the target keyword in Google and review the SERP: what format and type are the top 10 results?
- If the top results are all listicles and your page is a product page, that's an intent mismatch
- Ahrefs: SERP overview for a keyword shows the content type Google is rewarding
PASS/FAIL/WARN criteria:
- PASS: Content type and format match what Google is already ranking for the target keyword
- WARN: Content is directionally correct (informational) but format is wrong (guide vs. listicle)
- FAIL: Product page targeting an informational query; blog post targeting a transactional query; content format entirely misaligned with what Google is rewarding
Common fixes:
- Remap misaligned URLs: convert a product page that targets an informational keyword into a guide, or create a new informational page and let the product page target transactional variants
- Reformat content: convert a long-form article into a numbered list if top results are all listicles
- Load the `keyword-research` skill for intent classification methodology
audit-checklist-offpage.md
Off-Page and AEO/GEO Audit Checklist
Detailed verification steps, PASS/FAIL/WARN criteria, and fix guidance for all 10 off-page and AEO/GEO readiness checks (L1-L5 and A1-A5). For each check: what to look for, how to verify, what status to assign, and how to fix it.
Off-Page SEO Checks (L1-L5)
L1 - Backlink Profile Is Healthy
What to check:
- Total referring domains (quality matters more than count)
- Domain Rating (DR) or Domain Authority (DA) trend - growing, stable, or declining
- Diversity: backlinks come from multiple unique domains and industries, not just one source
- Topical relevance: linking sites are in or adjacent to your niche
- Editorial vs. paid vs. directory: editorial backlinks (someone linked because the content is good) are the most valuable
How to verify:
- Ahrefs: Site Explorer > Backlink Profile > Referring Domains (trending over time)
- Semrush: Backlink Analytics > Referring Domains with Authority Score distribution
- Moz Link Explorer: Domain Authority and Link profile overview
- GSC: Links report shows Google's confirmed view of your top linked pages and linking sites
PASS/FAIL/WARN criteria:
- PASS: Growing or stable referring domain count, majority from relevant and authoritative sites, diverse link sources, no manual penalty in GSC
- WARN: Flat or slightly declining referring domain count, many links from low-DR directories, concentrated from a single vertical or link type
- FAIL: Declining referring domain count with no recovery, majority of links from low-quality or irrelevant sites, manual action reported in GSC
Common fixes:
- Invest in editorial link acquisition through original research, data studies, or expert content
- Build relationships with niche-relevant sites for guest posts or co-marketing
- Load the `link-building` skill for full backlink acquisition strategy
L2 - No Toxic or Spammy Links
What to check:
- No links from known link farms, PBNs (Private Blog Networks), or paid link schemes
- No sudden spike of unnatural links (link bomb or negative SEO attack)
- No manual action notification in Google Search Console for unnatural links
- Disavow file is current and accurate if previously filed
How to verify:
- Ahrefs: Referring Domains > filter by low DR (under 5) with high link volume
- Semrush: Backlink Audit Tool > Toxic Score identifies risky links
- GSC: Manual Actions > check for any unnatural links manual action
- Review disavow file if one exists in GSC: Search Console > Links > Disavow
PASS/FAIL/WARN criteria:
- PASS: No manual actions, no significant toxic link clusters, disavow file up to date if applicable
- WARN: Small percentage of low-quality links but no manual action; disavow file not reviewed recently
- FAIL: Manual action received; large volume of toxic links identified; evidence of negative SEO attack with no disavow response
Common fixes:
- Attempt link removal requests to the linking domain first (contact webmaster)
- Submit or update the disavow file in GSC for links that cannot be removed
- Monitor with automated alerts for sudden link spikes (Ahrefs Alerts > New Backlinks)
- Load the `link-building` skill for toxic link removal and disavow strategy
L3 - Anchor Text Diversity
What to check:
- Anchor text distribution is natural: mix of branded (most common), naked URLs, generic ("click here"), and partial/exact-match keyword anchors
- Exact-match keyword anchors should not exceed 10-15% of total anchor text
- No single anchor text phrase dominates the profile (sign of manipulative link building)
- Branded anchors (company name, domain name) should represent 30-50% of anchors
How to verify:
- Ahrefs: Site Explorer > Anchors (shows all anchor text with frequency)
- Semrush: Backlink Analytics > Anchors tab
- Export anchor distribution; calculate percentage of each type
PASS/FAIL/WARN criteria:
- PASS: Branded anchors are most common; exact-match keyword anchors under 15%; healthy mix of generic, naked URL, and partial-match anchors
- WARN: Exact-match keyword anchors at 15-25%; limited branded anchor presence
- FAIL: Exact-match keyword anchors dominate (over 30%); clearly unnatural anchor pattern that suggests link scheme participation
Common fixes:
- When actively building links: vary anchor text intentionally; use brand name or target URL more often than exact-match keyword
- If over-optimized anchors exist historically: dilute by acquiring new links with brand/generic anchors
- Load the `link-building` skill for anchor text strategy guidelines
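The exact-match thresholds above can be applied mechanically to an anchor-text export. A sketch, assuming a hypothetical anchor-frequency dict exported from a backlink tool:

```python
def rate_anchor_profile(anchors: dict[str, int], exact_match: set[str]) -> str:
    """Rate anchor diversity using the thresholds from this check:
    exact-match keyword anchors under 15% = PASS, over 30% = FAIL,
    anything in between = WARN."""
    total = sum(anchors.values())
    exact = sum(count for text, count in anchors.items() if text in exact_match)
    share = 100 * exact / total
    if share < 15:
        return "PASS"
    if share > 30:
        return "FAIL"
    return "WARN"

# Hypothetical anchor export: branded anchors dominate, one exact-match phrase
profile = {"Acme Analytics": 60, "acme.com": 20, "click here": 10,
           "best seo tool": 10}
print(rate_anchor_profile(profile, exact_match={"best seo tool"}))  # → PASS (10% exact-match)
```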
L4 - Local SEO Configured (If Applicable)
What to check:
- Only check this if the business has a physical location(s) or serves customers in specific geographic areas. Mark N/A for fully remote or global digital businesses.
- Google Business Profile (GBP) is claimed, verified, and fully completed
- NAP (Name, Address, Phone) is consistent across GBP, website, and all directory listings
- Reviews: business has recent reviews and owner is responding to them
- Local schema markup (`LocalBusiness`, `PostalAddress`) present on the site
- Embedded Google Map on the contact page
How to verify:
- Search `business name + city` in Google: the GBP panel should appear on the right
- Moz Local or BrightLocal: NAP consistency audit across major directories
- Schema validator: `validator.schema.org` to check LocalBusiness schema
PASS/FAIL/WARN criteria:
- PASS: GBP claimed and fully filled (categories, hours, photos, description); NAP consistent; local schema present; recent reviews with responses
- WARN: GBP exists but incomplete (missing hours, no photos); occasional NAP discrepancies in minor directories
- FAIL: GBP not claimed; significant NAP inconsistencies; no local schema; no reviews in 6+ months
Common fixes:
- Claim and fully complete GBP profile: add all categories, photos, products/services, hours
- Run a NAP audit with Moz Local or BrightLocal; fix inconsistencies in top directories
- Load the `local-seo` skill for full local SEO optimization methodology
L5 - Brand Mentions and Citations
What to check:
- Brand name is mentioned on authoritative external websites, even without a link
- Brand is mentioned on industry publications, news sites, or relevant directories
- Brand appears in lists and roundups in the niche (e.g., "Top 10 tools for X")
- Unlinked brand mentions exist that could be converted to backlinks
- NAP citations in general business directories (Yelp, Crunchbase, LinkedIn company page)
How to verify:
- Google Alerts: set up for brand name to track new mentions
- Ahrefs: Content Explorer > search the brand name with `highlight_unlinked` to find unlinked mentions
- Semrush: Brand Monitoring tool for mention tracking
- Manual search: `"brand name" -site:yourdomain.com` in Google
PASS/FAIL/WARN criteria:
- PASS: Regular new brand mentions on authoritative sites; unlinked mentions tracked and being converted; presence in major industry directories and listings
- WARN: Few brand mentions outside of owned content; not present in key industry lists
- FAIL: No external brand mentions; brand essentially unknown to the external web
Common fixes:
- Reach out to sites with unlinked mentions and ask them to add a link
- Submit to industry-relevant directories and lists: Crunchbase, G2, Product Hunt, etc.
- Issue press releases or original research to earn media coverage and citations
- Load the `link-building` skill for digital PR and brand mention acquisition tactics
AEO & GEO Readiness Checks (A1-A5)
A1 - Content Optimized for Featured Snippets
What to check:
- Definition questions answered with a concise 40-60 word paragraph directly after the question
- "How to" content uses numbered steps formatted as an `<ol>` list
- Comparison or "best" content uses HTML tables
- "What is" and definitional queries have a bold or clearly structured first-answer paragraph
- Target question appears in the H2 or H3 directly above the answer
How to verify:
- Search your target keyword in Google and check if you hold the featured snippet (position 0)
- If a competitor holds it, analyze their format: paragraph, list, or table
- Ahrefs: SERP Features report shows which of your keywords trigger featured snippets
PASS/FAIL/WARN criteria:
- PASS: Key question-based keywords have a formatted direct-answer section; site holds featured snippets on some target keywords
- WARN: Question content exists but is buried in body paragraphs without direct-answer formatting; no featured snippet wins yet
- FAIL: No structured Q&A formatting on informational content; all answers buried in paragraphs
Common fixes:
- Reformat answers: add a question as H2/H3, then answer in 40-60 word paragraph immediately below
- Convert "how to" body paragraphs into `<ol>` numbered lists
- Load the `aeo-optimization` skill for featured snippet capture methodology
A2 - FAQ and HowTo Schema Implemented
What to check:
- FAQ schema (`FAQPage` + `Question` + `acceptedAnswer`) present on pages with Q&A content
- HowTo schema (`HowTo` + `HowToStep`) present on tutorial and step-by-step guide pages
- Schema is valid JSON-LD (not Microdata), placed in `<script type="application/ld+json">`
- Schema accurately reflects the visible content (Google rejects schema that doesn't match content)
- No over-implementation: FAQ schema only on pages that actually have FAQ sections
How to verify:
- Google Rich Results Test: `search.google.com/test/rich-results` for page-level validation
- Schema.org Validator: `validator.schema.org` for JSON-LD syntax checking
- GSC: Enhancements > FAQ / HowTo > check for errors and valid items
PASS/FAIL/WARN criteria:
- PASS: FAQ schema on FAQ pages, HowTo schema on tutorial pages, all valid in Rich Results Test, no GSC enhancement errors
- WARN: Schema present but some errors in GSC; or schema missing on some eligible page types
- FAIL: No FAQ or HowTo schema implemented despite site having significant Q&A and tutorial content; or schema present but invalid (causing GSC errors)
Common fixes:
- Add FAQ JSON-LD block to blog posts with FAQ sections - use a standard template
- For HowTo: ensure each `HowToStep` has a `name` and `text` property
- Load the `schema-markup` skill for full structured data implementation patterns
A3 - Content Structured for AI Extraction
What to check:
- Key information is presented in clearly labeled, extractable formats (not buried in prose)
- Definitions and explanations are in the first 100 words of a section, not at the end
- Content uses clear question-as-heading + answer-below structure throughout
- Tables and lists are used for comparative or enumerated information
- No critical information is locked inside images or PDFs without text equivalents
- Summary sections or TL;DR blocks at the top of long content
How to verify:
- Manually read the page as if you are an AI trying to answer "what is X" - can you find a clear, citable answer in the first few sentences?
- Check if the content appears in ChatGPT, Perplexity, or Google AI Overviews responses for target queries (manual spot-check)
PASS/FAIL/WARN criteria:
- PASS: Key answers are in the opening sentences of sections; information is in extractable text formats; content appears in AI answer boxes for target queries
- WARN: Information exists but requires reading full paragraphs to extract; some critical content is image-based
- FAIL: All key information buried in long prose paragraphs; heavy use of images for informational content; no Q&A structure
Common fixes:
- Add a "Quick Answer" or "TL;DR" block at the top of long articles
- Restructure paragraphs so the answer comes first, then the elaboration (inverted pyramid)
- Load the `geo-optimization` skill for full AI search optimization methodology
A4 - Entity Authority Signals Present
What to check:
- Brand entity is consistent across the web: same name, description, and category everywhere
- Wikipedia or Wikidata entry exists (for established brands)
- Knowledge Panel appears in Google for branded searches
- Content consistently references and is referenced by authoritative entities in the niche
- Structured data uses the `sameAs` property to link to authoritative entity sources (LinkedIn, Crunchbase, Wikipedia, official social profiles)
How to verify:
- Search the brand name in Google: does a Knowledge Panel appear on the right?
- Check Wikidata: `wikidata.org/wiki/Special:Search` for the brand name
- Inspect `Organization` or `Person` schema for `sameAs` properties
PASS/FAIL/WARN criteria:
- PASS: Knowledge Panel present for brand; `sameAs` in Organization schema pointing to LinkedIn, Crunchbase, and social profiles; consistent entity description across the web
- WARN: No Knowledge Panel; Organization schema missing `sameAs` links; inconsistent brand descriptions across platforms
- FAIL: No Organization schema; no entity presence on any authoritative third-party platform; brand name completely absent from the knowledge graph
Common fixes:
- Add `Organization` schema with a `sameAs` array linking to all authoritative profiles
- Ensure GBP, LinkedIn company page, Crunchbase, and social profiles all use a consistent brand name and description
- Load the `schema-markup` skill for `Organization` and entity schema implementation
A5 - LLMs.txt Present (If Applicable)
What to check:
- `https://domain.com/llms.txt` exists and is accessible (returns 200)
- `llms.txt` accurately describes what the site is, who it serves, and its key sections
- File follows the `llms.txt` specification: title, summary, and section links
- `llms-full.txt` optionally present for extended context
- Relevant pages are included in `llms.txt` for AI crawlers to prioritize
How to verify:
- Fetch `https://domain.com/llms.txt` directly in the browser
- Check that the file format matches the specification at `llmstxt.org`
- Verify the linked pages in the file are accessible and content-rich
PASS/FAIL/WARN criteria:
- PASS: `llms.txt` exists, is well-formatted, and includes key pages and an accurate site description
- WARN: `llms.txt` exists but is outdated or incomplete (missing key sections)
- FAIL: `llms.txt` missing entirely (FAIL only applies to sites where AI discoverability is a stated goal; mark N/A for sites that have no AI-search strategy)
- N/A: Site has no AI search optimization goals or is purely local/private
Common fixes:
- Create `llms.txt` at the domain root following the `llmstxt.org` specification
- Include: site title, summary paragraph, and links to the most important content sections
- Update quarterly when site structure or major content sections change
- Load the `geo-optimization` skill for full `llms.txt` and GEO optimization methodology
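A minimal generator for that file, following the `llmstxt.org` shape (an H1 title, a blockquote summary, then H2 sections of markdown links). The site name and URLs here are hypothetical:

```python
def build_llms_txt(title: str, summary: str, sections: dict) -> str:
    """Render a minimal llms.txt: H1 title, blockquote summary,
    then one H2 section per group of markdown links."""
    lines = [f"# {title}", "", f"> {summary}", ""]
    for section, links in sections.items():
        lines.append(f"## {section}")
        lines += [f"- [{name}]({url})" for name, url in links]
        lines.append("")
    return "\n".join(lines)

text = build_llms_txt(
    "Acme Docs",                                    # hypothetical site
    "Developer documentation for the Acme API.",
    {"Guides": [("Quickstart", "https://acme.example/quickstart")]},
)
print(text)  # write this to /llms.txt at the domain root
```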
audit-checklist-technical.md
Technical SEO Audit Checklist
Detailed verification steps, PASS/FAIL/WARN criteria, and fix guidance for all 10 technical SEO checks. For each check: what to look for, how to verify, what status to assign, and which tool or action fixes it.
T1 - Robots.txt Configured Correctly
What to check:
- File exists at `https://domain.com/robots.txt`
- No critical paths are disallowed (product pages, category pages, key content)
- Sitemap location is declared in the file
- No syntax errors (incorrect `User-agent`, typos in paths)
How to verify:
- Fetch the robots.txt directly in the browser
- Use Google Search Console > Settings > robots.txt Tester to check specific URLs
- Use Screaming Frog: Configuration > Robots.txt to simulate Googlebot crawl
PASS/FAIL/WARN criteria:
- PASS: File exists, sitemap declared, no critical pages disallowed, valid syntax
- WARN: File exists but sitemap not declared, or some low-traffic paths inadvertently blocked
- FAIL: File missing entirely, `/` is disallowed, critical templates are blocked, or robots.txt returns a non-200 status code
Common fixes:
- Add `Sitemap: https://domain.com/sitemap.xml` as the last line
- Replace `Disallow: /` with specific path-level disallows for admin/staging routes
- For WordPress: Settings > Reading > uncheck "Discourage search engines"
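Python's standard library can simulate these robots.txt checks without a live crawl; the rules and URLs below are illustrative:

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt: admin blocked, everything else crawlable
ROBOTS = """\
User-agent: *
Disallow: /admin/
Sitemap: https://example.com/sitemap.xml
"""

# Parse the file contents directly instead of fetching it
rp = RobotFileParser()
rp.parse(ROBOTS.splitlines())

# Critical templates must stay crawlable; admin routes should not be
assert rp.can_fetch("Googlebot", "https://example.com/products/widget")
assert not rp.can_fetch("Googlebot", "https://example.com/admin/login")
print(rp.site_maps())  # → ['https://example.com/sitemap.xml']
```

Feeding the production file's contents through the same parser is a quick regression test before deploying robots.txt changes.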
T2 - XML Sitemap Valid and Submitted
What to check:
- Sitemap exists at `https://domain.com/sitemap.xml` (or a sitemap index)
- All important URLs are included; no noindex URLs are listed
- No URLs returning 404 or 301 are in the sitemap
- Sitemap is submitted to Google Search Console and Bing Webmaster Tools
- `lastmod` values are present and accurate (not all the same date)
How to verify:
- Fetch the sitemap URL directly; validate XML with W3C XML Validator or Screaming Frog
- GSC > Sitemaps > check submitted sitemaps and last read date
- Screaming Frog > Sitemaps > Sitemap XML Export: compare to crawl data
PASS/FAIL/WARN criteria:
- PASS: Sitemap valid, submitted to GSC, no 404/redirect URLs inside, lastmod present
- WARN: Sitemap exists but not submitted to GSC, or contains a few redirect URLs
- FAIL: No sitemap, sitemap returns 404, XML is malformed, or more than 10% of URLs in sitemap return errors
Common fixes:
- Generate sitemap with Yoast SEO (WordPress), next-sitemap (Next.js), or Astro sitemap plugin
- Submit via GSC: Index > Sitemaps > Add new sitemap URL
- Remove noindex pages and 404s from sitemap; update programmatically via CMS hooks
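Auditing `lastmod` coverage can be scripted against the raw sitemap XML. A sketch using the standard-library parser on an inline sample:

```python
import xml.etree.ElementTree as ET

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

# Inline sample sitemap: one entry is missing lastmod
SITEMAP = """<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/</loc><lastmod>2025-01-10</lastmod></url>
  <url><loc>https://example.com/pricing</loc></url>
</urlset>"""

root = ET.fromstring(SITEMAP)
missing_lastmod = [
    url.findtext("sm:loc", namespaces=NS)
    for url in root.findall("sm:url", NS)
    if url.find("sm:lastmod", NS) is None
]
print(missing_lastmod)  # → ['https://example.com/pricing']
```

The same loop extended with an HTTP status check per `<loc>` would also catch the 404/redirect URLs this check flags.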
T3 - Canonical URLs Set on All Pages
What to check:
- Every indexable page has a `<link rel="canonical" href="...">` in `<head>`
- Canonical URLs are self-referencing on canonical pages (not pointing elsewhere)
- Paginated pages either canonicalize to page 1 or have unique canonicals (not all to page 1)
- URL parameter variants (filters, sort, tracking) canonicalize to the clean URL
- No canonical chains (canonical pointing to a page that itself has a different canonical)
How to verify:
- Screaming Frog: crawl site, Internal > filter by "Canonical" column
- Check the `<head>` source on a sample of pages (View Source > Ctrl+F "canonical")
- Search Console: the URL Inspection tool shows the canonical Google selected vs. the one declared
PASS/FAIL/WARN criteria:
- PASS: All indexable pages have a self-referencing canonical; parameter variants canonicalize correctly
- WARN: Canonical is missing on some page types, or inconsistent on pagination
- FAIL: No canonicals site-wide, canonical chains present, or canonical points to wrong URL (e.g., staging URL, HTTP instead of HTTPS)
Common fixes:
- Next.js: use `metadata.alternates.canonical` in `layout.tsx` or `generateMetadata`
- WordPress + Yoast: canonicals are auto-generated; check for plugin conflicts
- For URL parameter variants: use `rel="canonical"` in `<head>` pointing to the parameter-free URL
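Chain detection reduces to following each declared canonical one hop further. A sketch over a hypothetical page-to-canonical map (in practice this would be exported from a crawler):

```python
def canonical_issues(canonicals: dict[str, str]) -> list[str]:
    """Flag canonical chains: a page whose canonical target itself
    declares a different canonical. Every canonical should resolve
    in a single step."""
    issues = []
    for page, target in canonicals.items():
        final = canonicals.get(target, target)
        if final != target:
            issues.append(f"{page} -> {target} -> {final} (chain)")
    return issues

# Hypothetical crawl export: URL -> declared canonical
pages = {
    "https://example.com/a?sort=price": "https://example.com/a",
    "https://example.com/a": "https://example.com/a",          # self-referencing: good
    "https://example.com/old": "https://example.com/interim",  # chain source
    "https://example.com/interim": "https://example.com/new",
    "https://example.com/new": "https://example.com/new",
}
print(canonical_issues(pages))
```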
T4 - No Redirect Chains (Max 1 Hop)
What to check:
- No URL redirects through more than 1 intermediate URL before reaching the final destination
- Internal links point to final destination URLs, not to URLs that redirect
- Old campaign/vanity URLs redirected directly to final pages
- HTTP > HTTPS redirect is a single 301 (not 301 > 301)
How to verify:
- Screaming Frog: Bulk Export > Response Codes > Redirects; look for chains flagged in the tool
- HTTPStatus.io or Redirect Path Chrome extension for spot-checking specific URLs
- Screaming Frog: Configuration > Spider > Follow Internal Redirects ON to find chain sources
PASS/FAIL/WARN criteria:
- PASS: All redirects are single-hop 301s; no chains detected
- WARN: A few redirect chains exist on low-traffic or old URLs; core pages are clean
- FAIL: Redirect chains on high-traffic pages, the homepage, or key landing pages; any 302 where a 301 should be used permanently
Common fixes:
- Update all internal links to point to the final destination URL
- Consolidate redirect chains by pointing the original URL directly to the final destination in the server config or `.htaccess`
- HTTP > HTTPS: configure the server to do a single 301 redirect at the server level (not via CMS)
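Counting hops over an exported redirect map makes chains easy to flag. A sketch with an illustrative redirect table:

```python
def redirect_hops(redirects: dict[str, str], url: str, limit: int = 10) -> int:
    """Count how many redirects a URL passes through before resolving.
    More than 1 hop is a chain that should be collapsed to a single 301;
    `limit` guards against redirect loops."""
    hops = 0
    while url in redirects and hops < limit:
        url = redirects[url]
        hops += 1
    return hops

# Hypothetical server redirect table (old URL -> redirect target)
redirect_map = {
    "http://example.com/": "https://example.com/",
    "https://example.com/old-page": "https://example.com/mid-page",
    "https://example.com/mid-page": "https://example.com/new-page",
}
assert redirect_hops(redirect_map, "http://example.com/") == 1           # single hop: fine
assert redirect_hops(redirect_map, "https://example.com/old-page") == 2  # chain: fix
```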
T5 - HTTPS Everywhere, No Mixed Content
What to check:
- Site loads exclusively on HTTPS; HTTP redirects to HTTPS (301)
- No mixed content warnings in browser console (HTTP resources loaded on HTTPS page)
- SSL certificate is valid and not expiring within 30 days
- HSTS header present (`Strict-Transport-Security`)
- `www` and non-`www` resolve consistently (not both serving content)
How to verify:
- Open browser DevTools > Console and Network tabs; look for mixed content warnings
- SSL Labs: `ssllabs.com/ssltest/` for full certificate and HSTS assessment
- Check response headers: `curl -I https://domain.com` to verify the HSTS header
PASS/FAIL/WARN criteria:
- PASS: All resources HTTPS, valid cert, HSTS set, consistent www/non-www resolution
- WARN: Mixed content on a few pages, cert expiring within 60 days, or no HSTS header
- FAIL: Site serving on HTTP with no redirect, mixed content warnings on key pages, expired or invalid SSL certificate
Common fixes:
- Update all hardcoded `http://` asset URLs in content and code to `https://`
- Renew the SSL certificate; use Let's Encrypt with auto-renewal for cost-free certificates
- Set the HSTS header: `Strict-Transport-Security: max-age=31536000; includeSubDomains`
- Configure a 301 redirect from `www` to non-`www` (or vice versa) at the DNS/server level
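Mixed-content candidates can be pre-screened by scanning page HTML for `http://` asset references. This is a rough sketch (not every `href` is an active resource, so browser DevTools remains the authoritative check):

```python
from html.parser import HTMLParser

class MixedContentScanner(HTMLParser):
    """Collect http:// URLs referenced from src/href attributes - on an
    HTTPS page these are mixed-content candidates worth reviewing."""
    ASSET_ATTRS = {"src", "href"}

    def __init__(self):
        super().__init__()
        self.insecure: list[str] = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in self.ASSET_ATTRS and value and value.startswith("http://"):
                self.insecure.append(value)

# Illustrative page fragment: one insecure image, one secure script
scanner = MixedContentScanner()
scanner.feed('<img src="http://cdn.example.com/hero.jpg">'
             '<script src="https://cdn.example.com/app.js"></script>')
print(scanner.insecure)  # → ['http://cdn.example.com/hero.jpg']
```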
T6 - Mobile-Friendly (Responsive or Adaptive)
What to check:
- Site passes Google's Mobile-Friendly Test
- Viewport meta tag present: `<meta name="viewport" content="width=device-width, initial-scale=1">`
- No horizontal scrolling on mobile
- Touch targets (buttons, links) are at least 48x48px with adequate spacing
- Text readable without zooming (16px minimum body font size)
- No intrusive interstitials that block content on mobile
How to verify:
- Google Search Console: Mobile Usability report for site-wide issues
- Google Mobile-Friendly Test: `search.google.com/test/mobile-friendly` (retired by Google in late 2023; fall back to a Lighthouse mobile audit if unavailable)
- DevTools: Toggle Device Toolbar (Ctrl+Shift+M) and test at 375px and 768px
PASS/FAIL/WARN criteria:
- PASS: Passes Google's Mobile-Friendly Test, no GSC mobile usability errors, good touch targets
- WARN: Passes the test but some minor layout issues (small tap targets, slightly clipped content)
- FAIL: Fails Mobile-Friendly Test, viewport meta missing, site not usable on mobile screens
Common fixes:
- Add `<meta name="viewport" content="width=device-width, initial-scale=1">` if missing
- Use CSS `min-height: 48px; min-width: 48px` for all interactive elements
- Replace fixed pixel widths with relative units (`%`, `vw`, `rem`) in CSS
T7 - Core Web Vitals Pass (LCP, CLS, INP)
What to check:
- LCP (Largest Contentful Paint): under 2.5s is PASS, 2.5-4s is WARN, over 4s is FAIL
- CLS (Cumulative Layout Shift): under 0.1 is PASS, 0.1-0.25 is WARN, over 0.25 is FAIL
- INP (Interaction to Next Paint): under 200ms is PASS, 200-500ms is WARN, over 500ms is FAIL
- Audit both mobile and desktop; Google uses mobile for ranking signals
- Check field data (real user data), not just lab data; they can differ significantly
How to verify:
- Google Search Console: Core Web Vitals report (field data from CrUX)
- PageSpeed Insights: `pagespeed.web.dev` for both lab and field data per URL
- Chrome DevTools: Performance panel and Lighthouse tab for lab measurements
- `web-vitals` JS library for RUM (Real User Monitoring) measurement
PASS/FAIL/WARN criteria:
- PASS: All three metrics pass on both mobile and desktop in field data (CrUX)
- WARN: One or two metrics in the "Needs Improvement" range; or passing in lab but failing in field
- FAIL: Any metric failing on mobile in field data; or template-wide failures affecting many URLs
Common fixes:
- LCP: preload the hero image, use `fetchpriority="high"` on the LCP image, optimize image format (WebP/AVIF), reduce TTFB with a CDN and server-side caching
- CLS: always specify `width` and `height` attributes on images; use `aspect-ratio` CSS; avoid injecting content above the fold after load
- INP: reduce main-thread blocking, split long tasks, use web workers for heavy computation
T8 - No Orphan Pages
What to check:
- Every indexable page is reachable via internal links from at least one other page
- No pages exist only in the sitemap or accessed only via direct URL
- Orphans often appear after CMS migrations, content pruning, or faceted navigation removal
- Paginated pages are linked to from the page before them
How to verify:
- Screaming Frog: crawl site from homepage, then cross-reference crawl output against sitemap URLs
- Pages in sitemap but not found in crawl = orphan candidates
- Sitebulb or Ahrefs Site Audit: dedicated orphan pages report
PASS/FAIL/WARN criteria:
- PASS: All sitemap URLs are also found in crawl; no pages with zero internal links
- WARN: A small number of low-traffic or archive pages are orphaned
- FAIL: Key pages (product pages, landing pages, blog posts) have zero internal links
Common fixes:
- Add links to orphaned pages from relevant hub pages, category listings, or navigation
- For paginated series: ensure page N links to page N+1 and N-1
- Add a dynamic "related posts" or "related products" module to ensure new content gets linked
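The sitemap-vs-crawl comparison behind this check is a set difference. A sketch with hypothetical URL sets:

```python
# URLs found by crawling internal links from the homepage (hypothetical)
crawled = {"https://example.com/", "https://example.com/blog/post-1"}

# URLs listed in the XML sitemap (hypothetical)
in_sitemap = {"https://example.com/", "https://example.com/blog/post-1",
              "https://example.com/landing/offer"}

# In the sitemap but never reached by the crawl = orphan candidates
orphans = in_sitemap - crawled
print(sorted(orphans))  # → ['https://example.com/landing/offer']
```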
T9 - Clean URL Structure
What to check:
- URLs use lowercase letters and hyphens (not underscores or spaces)
- No tracking parameters (`?utm_source`) in canonical or crawlable URLs
- URL depth: no more than 3-4 levels deep for key content (`/blog/category/post-slug/`)
- No session IDs, user-specific tokens, or auto-generated numeric identifiers in indexed URLs
- URL slug contains the target keyword (where practical)
How to verify:
- Screaming Frog: URL column; filter for uppercase, underscores, or very long URLs
- Export all indexed URLs from GSC and check for parameter pollution
PASS/FAIL/WARN criteria:
- PASS: All URLs lowercase, hyphens only, no parameters on indexed pages, max 4 levels deep
- WARN: Some legacy URLs with underscores, a few deep hierarchy paths, minor parameter exposure
- FAIL: URLs with session IDs indexed, large-scale parameter pollution in index, URLs using non-ASCII characters without encoding
Common fixes:
- 301 redirect old underscore URLs to hyphen equivalents
- Note: GSC's legacy URL Parameters tool (Settings > URL Parameters) has been retired; parameter handling now relies on canonicals, robots rules, and consistent internal linking
- Implement canonical tags on all parameterized URL variants
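These URL rules can be linted programmatically. A sketch checking lowercase hyphenated slugs, parameter pollution, and path depth against the criteria above:

```python
import re
from urllib.parse import urlsplit

# Lowercase alphanumeric segments separated by single hyphens
SLUG_RE = re.compile(r"^[a-z0-9]+(?:-[a-z0-9]+)*$")

def url_problems(url: str) -> list[str]:
    """Return a list of clean-URL violations: query parameters,
    more than 4 path levels, or slugs that aren't lowercase-hyphenated."""
    parts = urlsplit(url)
    problems = []
    if parts.query:
        problems.append("has query parameters")
    segments = [s for s in parts.path.split("/") if s]
    if len(segments) > 4:
        problems.append("deeper than 4 levels")
    problems += [f"bad slug: {s}" for s in segments if not SLUG_RE.match(s)]
    return problems

assert url_problems("https://example.com/blog/seo/audit-guide") == []
print(url_problems("https://example.com/Blog/My_Post?utm_source=x"))
```

Run it over a GSC export of indexed URLs to get a prioritized fix list.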
T10 - Rendering Strategy Appropriate
What to check:
- For client-side rendered (CSR) apps: key content is not hidden behind JavaScript that Googlebot can't execute reliably
- For server-side rendered (SSR) or static (SSG) apps: confirm full HTML is in the initial response, not populated by client-side JS
- Dynamic rendering (serving static HTML to bots, JS to browsers) is documented and working if implemented
- `<noscript>` tags provide meaningful fallback content where applicable
How to verify:
- View source (`Ctrl+U`) vs. Inspect Element: the source should show content, not an empty `<div id="app">`
- Google Search Console > URL Inspection > View Crawled Page > HTML and Screenshot tabs confirm what Googlebot actually sees
- Fetch as Google via URL Inspection: compare rendered and non-rendered HTML
PASS/FAIL/WARN criteria:
- PASS: Content fully present in initial HTML response OR Google's URL Inspection confirms full rendering with no content gaps
- WARN: Some secondary content or pagination loaded via JS but primary content is in HTML
- FAIL: Primary content (headings, body text, links) only visible after JavaScript execution; Google's crawl snapshot shows blank or skeleton page
Common fixes:
- Next.js: use `generateStaticParams` (SSG) or `export default async function Page()` with server components to ensure full HTML is served
- Vue/React SPAs: migrate key pages to SSR or pre-rendering; use Next.js, Nuxt, or Astro
- Load the `on-site-seo` skill for framework-specific rendering fix patterns
audit-report-template.md
SEO Audit Report Template
A complete, fill-in-the-blank audit report template. Replace all [BRACKETED] placeholders
with actual findings. Do not delete sections - fill every section or explicitly mark as N/A.
SEO Audit Report: [SITE NAME]
Prepared by: [Auditor Name / Team]
Date: [YYYY-MM-DD]
Site URL: [https://domain.com]
Audit type: [Full audit / Technical only / Content only / Pre-launch]
Period covered: [Date range of data used, e.g., GSC data Jan - Mar 2025]
1. Executive Summary
Overall Score
| Metric | Value |
|---|---|
| Total checks | 35 |
| PASS | [N] |
| WARN | [N] |
| FAIL | [N] |
| N/A | [N] |
| Overall score | [PASS count]/[35 - N/A count] |
Score interpretation
[Copy the appropriate line from the scoring guide:]
- 32-35 PASS: Excellent - monitor and maintain
- 26-31 PASS: Good - address WARNs before they become FAILs
- 18-25 PASS: Needs work - prioritize Critical and High items
- Under 18 PASS: Significant SEO debt - consider a full remediation project
Top 3 findings
These are the most impactful issues discovered in this audit:
[FINDING 1 - highest priority] - [One sentence describing the issue and its impact.] Fix: [Brief description of the fix and which specialized skill to use.]
[FINDING 2] - [One sentence describing the issue and its impact.] Fix: [Brief description of the fix and which specialized skill to use.]
[FINDING 3] - [One sentence describing the issue and its impact.] Fix: [Brief description of the fix and which specialized skill to use.]
Business impact statement
[2-4 sentences summarizing the expected organic traffic and revenue impact of the current SEO issues. Be specific where data is available: "Based on GSC data, the 3 Critical issues are estimated to be causing X% crawl inefficiency and affecting Y pages."]
2. Full Scorecard
Section 1: Technical SEO
| # | Check | Status | Evidence / Details |
|---|---|---|---|
| T1 | Robots.txt configured correctly | [PASS/FAIL/WARN/N/A] | [What was found] |
| T2 | XML sitemap valid and submitted | | |
| T3 | Canonical URLs set on all pages | | |
| T4 | No redirect chains (max 1 hop) | | |
| T5 | HTTPS everywhere, no mixed content | | |
| T6 | Mobile-friendly | | |
| T7 | Core Web Vitals pass | | |
| T8 | No orphan pages | | |
| T9 | Clean URL structure | | |
| T10 | Rendering strategy appropriate | | |
| | Section score | [N] PASS / [N] WARN / [N] FAIL | |
Section 2: On-Page SEO
| # | Check | Status | Evidence / Details |
|---|---|---|---|
| O1 | Unique title tags (50-60 chars) | | |
| O2 | Meta descriptions present (120-160 chars) | | |
| O3 | Single H1 per page with target keyword | | |
| O4 | Heading hierarchy correct | | |
| O5 | Image alt text present | | |
| O6 | Internal linking structure logical | | |
| O7 | Open Graph tags complete | | |
| O8 | Semantic HTML used | | |
| | Section score | [N] PASS / [N] WARN / [N] FAIL | |
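Checks O1 and O2 lend themselves to a scripted pass over a crawl export; a minimal sketch of the length checks (function names and thresholds for the WARN band are assumptions, not from the scorecard):

```python
def check_title(title):
    """Rate a title tag against the 50-60 character target (check O1)."""
    n = len(title)
    if 50 <= n <= 60:
        return "PASS"
    if 40 <= n <= 70:  # near the target band: flag rather than fail
        return "WARN"
    return "FAIL"

def check_meta(description):
    """Rate a meta description against the 120-160 character target (check O2)."""
    if not description:
        return "FAIL"  # missing entirely
    return "PASS" if 120 <= len(description) <= 160 else "WARN"

# Hypothetical crawl rows: (url, title, meta description)
pages = [
    ("/pricing", "Pricing Plans for Small Teams | ExampleCo Platform", "x" * 140),
    ("/blog/post", "Post", ""),
]
for url, title, meta in pages:
    print(url, check_title(title), check_meta(meta))
```

In practice the rows would come from a Screaming Frog export rather than a hardcoded list.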
Section 3: Content SEO
| # | Check | Status | Evidence / Details |
|---|---|---|---|
| C1 | No thin content pages | | |
| C2 | No keyword cannibalization | | |
| C3 | Topic clusters defined | | |
| C4 | E-E-A-T signals present | | |
| C5 | Content freshness maintained | | |
| C6 | No duplicate content | | |
| C7 | Search intent alignment | | |
| | Section score | [N] PASS / [N] WARN / [N] FAIL | |
Section 4: Off-Page SEO
| # | Check | Status | Evidence / Details |
|---|---|---|---|
| L1 | Backlink profile healthy | | |
| L2 | No toxic links | | |
| L3 | Anchor text diversity | | |
| L4 | Local SEO configured | | |
| L5 | Brand mentions and citations | | |
| | Section score | [N] PASS / [N] WARN / [N] FAIL | |
Section 5: AEO & GEO Readiness
| # | Check | Status | Evidence / Details |
|---|---|---|---|
| A1 | Featured snippet optimization | | |
| A2 | FAQ / HowTo schema implemented | | |
| A3 | Content structured for AI extraction | | |
| A4 | Entity authority signals present | | |
| A5 | LLMs.txt present | | |
| | Section score | [N] PASS / [N] WARN / [N] FAIL | |
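For check A2, valid FAQ markup follows the schema.org FAQPage type; a minimal JSON-LD sketch (question and answer text are placeholders, not taken from any real page):

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "[Question text exactly as it appears on the page]",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "[Answer text, matching the visible content]"
    }
  }]
}
```

Embed it in a `<script type="application/ld+json">` tag and validate it with the Google Rich Results Test listed in the appendix.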
3. Detailed Findings
For each FAIL and WARN, provide a detailed entry. Group by priority level.
Critical Priority (Fix Immediately)
[Check ID] - [Check Name]
Status: FAIL
What was found:
[Specific description of the issue. Include URLs, numbers, or screenshots where relevant.
E.g., "Robots.txt at https://domain.com/robots.txt contains Disallow: /products/ which
blocks Googlebot from crawling the entire product catalog (approximately 1,200 URLs)."]
Why it matters: [Business impact in plain language. E.g., "Google cannot crawl or index any product pages, meaning they receive zero organic traffic from product keyword searches."]
How to fix it:
[Specific, actionable steps. E.g., "Remove the Disallow: /products/ rule from robots.txt.
Replace with more targeted blocks for admin routes: Disallow: /admin/ and Disallow: /wp-login.
After updating, use GSC URL Inspection to request re-crawl of key product pages."]
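As an illustration of that fix, the repaired robots.txt would read something like the following (domain and paths are the hypothetical ones used in this example entry, not a recommendation for any real site):

```
User-agent: *
Disallow: /admin/
Disallow: /wp-login

Sitemap: https://domain.com/sitemap.xml
```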
Specialized skill: technical-seo
Estimated effort: [Low / Medium / High]
Estimated impact: [Low / Medium / High / Critical]
[Repeat for each Critical finding]
High Priority (Fix This Sprint)
[Check ID] - [Check Name]
Status: [FAIL / WARN]
What was found: [...]
Why it matters: [...]
How to fix it: [...]
Specialized skill: [skill-name]
Estimated effort: [Low / Medium / High]
Estimated impact: [Low / Medium / High / Critical]
[Repeat for each High priority finding]
Medium Priority (Fix This Quarter)
[Use the same entry format as above, but abbreviated descriptions are acceptable]
Low Priority (Backlog)
[Use a table format for efficiency:]
| Check ID | Check Name | Status | Brief description | Skill |
|---|---|---|---|---|
| [O5] | Image alt text | WARN | [~50 images missing alt text on blog pages] | on-site-seo |
| ... |
4. Action Plan
Week 1 - Critical Issues
| Task | Owner | Check ID | Skill |
|---|---|---|---|
| [Fix robots.txt to allow /products/] | [Developer] | T1 | technical-seo |
| [Submit sitemap to GSC] | [SEO/Dev] | T2 | technical-seo |
| [Resolve canonical loop on /blog/ pagination] | [Developer] | T3 | technical-seo |
Month 1 - High Priority Issues
| Task | Owner | Check ID | Skill | Target date |
|---|---|---|---|---|
| [Fix Core Web Vitals on product template] | [Performance] | T7 | core-web-vitals | [Date] |
| [Fix duplicate title tags on category pages] | [SEO] | O1 | on-site-seo | [Date] |
| [Resolve cannibalization on [keyword]] | [Content] | C2 | content-seo | [Date] |
Quarter 1 - Medium Priority Issues
| Task | Owner | Check ID | Skill | Target date |
|---|---|---|---|---|
| [Add author bios to all blog posts] | [Content] | C4 | content-seo | [Date] |
| [Implement FAQ schema on /faq/ page] | [Developer] | A2 | schema-markup | [Date] |
| [Build topic cluster for [primary topic]] | [Content] | C3 | content-seo | [Date] |
Backlog - Low Priority
[List remaining low-priority tasks or link to the issue tracker where they are filed]
5. Appendix
Tools Used in This Audit
| Tool | Purpose | Notes |
|---|---|---|
| Google Search Console | Crawl errors, index status, Core Web Vitals (field data), links | Free; requires site ownership verification |
| Screaming Frog SEO Spider | Full site crawl, title/meta/heading analysis, redirect detection | Free up to 500 URLs; paid license for larger sites |
| PageSpeed Insights | Core Web Vitals lab + field data per URL | Free; powered by Lighthouse + CrUX |
| Ahrefs Site Explorer | Backlink profile, referring domains, anchor text, toxic links | Paid subscription required |
| Semrush Site Audit | Comprehensive technical and content audit | Paid subscription; has a limited free tier |
| Google Rich Results Test | Schema markup validation | Free |
| SSL Labs | HTTPS certificate and configuration audit | Free |
| Siteliner | Internal duplicate content detection | Free up to 250 pages |
| BrightLocal | NAP consistency and local citation audit | Paid; free trial available |
| Lighthouse (mobile audit) | Mobile usability validation | Free; Google retired the standalone Mobile-Friendly Test in December 2023 |
Re-Audit Schedule
| Audit type | Frequency | Next due date |
|---|---|---|
| Full 35-check audit | Quarterly | [Date + 90 days] |
| Quick 10-check health check | Monthly | [Date + 30 days] |
| Core Web Vitals spot check | Monthly | [Date + 30 days] |
| Backlink profile review | Monthly | [Date + 30 days] |
GSC Access Required
Before the next audit, ensure the following GSC access is available:
- Performance report (last 3 months, all pages)
- Coverage report (indexed vs. not indexed breakdown)
- Core Web Vitals report (field data)
- Manual Actions report
- Sitemaps report (submitted sitemaps and last read dates)
- Links report (top linked pages and top linking sites)
Frequently Asked Questions
What is seo-audit?
Use this skill when performing a comprehensive SEO audit - technical audit, on-page audit, content audit, off-page audit, and AEO/GEO readiness assessment. Provides a structured scorecard with 30-40 checks rated PASS/FAIL/WARN across all SEO categories, prioritized recommendations, and links to specialized skills for deep fixes. This is the master audit skill that orchestrates all other SEO skills.
How do I install seo-audit?
Run `npx skills add AbsolutelySkilled/AbsolutelySkilled --skill seo-audit` in your terminal. The skill will be immediately available in your AI coding agent.
What AI agents support seo-audit?
seo-audit works with claude-code, gemini-cli, openai-codex, mcp. Install it once and use it across any supported AI coding agent.