keyword-research
Use this skill when performing keyword research, search intent analysis, keyword clustering, SERP analysis, competitor keyword gaps, long-tail keyword discovery, or evaluating keywords for snippet opportunity, AI Overview presence, and tri-surface keyword reports. Covers organic (SEO), answer engine (AEO snippets/PAA), and AI citation (GEO AI Overviews/ChatGPT Search/Perplexity) surfaces.
keyword-research is a production-ready AI agent skill for claude-code, gemini-cli, openai-codex, and mcp. It covers keyword research, search intent analysis, keyword clustering, SERP analysis, competitor keyword gaps, long-tail keyword discovery, and evaluating keywords for snippet opportunity, AI Overview presence, and tri-surface keyword reports.
Quick Facts
| Field | Value |
|---|---|
| Category | marketing |
| Version | 1.0.0 |
| Platforms | claude-code, gemini-cli, openai-codex, mcp |
| License | MIT |
How to Install
- Make sure you have Node.js installed on your machine.
- Run the following command in your terminal:
npx skills add AbsolutelySkilled/AbsolutelySkilled --skill keyword-research
- The keyword-research skill is now available in your AI coding agent (Claude Code, Gemini CLI, OpenAI Codex, etc.).
Overview
Quick start (5 steps): Seed keywords > Expand with tools/autocomplete > Classify intent for each keyword > Tri-score (organic + AEO + GEO) > Cluster by topic with surface annotations. See the common tasks below for details.
Keyword research is the foundation of all organic and AI search strategy. It is the process of discovering what words and phrases people type into search engines and AI assistants, understanding why they search (intent), and evaluating which of three surfaces - organic results, answer engine features, or AI-generated citations - offers the best opportunity for each keyword.
In 2026, keyword research must account for three surfaces simultaneously:
- Organic blue links (SEO) - traditional rankings on Google, Bing, etc.
- Answer engine features (AEO) - featured snippets, People Also Ask, voice results
- AI-generated citations (GEO) - Google AI Overviews, ChatGPT Search, Perplexity
This skill covers the full research workflow - from seed topic to prioritized, tri-surface-scored keyword report. It tells you WHAT to target and WHERE the opportunity is. For HOW to optimize content once you have chosen your targets, use the companion skills aeo-optimization (snippet/PAA formatting) and geo-optimization (AI citation optimization).
Tags
seo keywords search-intent serp-analysis content-strategy competitor-analysis aeo geo ai-search tri-surface citation-score
Platforms
- claude-code
- gemini-cli
- openai-codex
- mcp
Related Skills
Pair keyword-research with these complementary skills:
- aeo-optimization - formatting content to win snippets and PAA once targets are chosen
- geo-optimization - making content more citable by AI engines once targets are chosen
Frequently Asked Questions
What is keyword-research?
Use this skill when performing keyword research, search intent analysis, keyword clustering, SERP analysis, competitor keyword gaps, long-tail keyword discovery, or evaluating keywords for snippet opportunity, AI Overview presence, and tri-surface keyword reports. Covers organic (SEO), answer engine (AEO snippets/PAA), and AI citation (GEO AI Overviews/ChatGPT Search/Perplexity) surfaces.
How do I install keyword-research?
Run npx skills add AbsolutelySkilled/AbsolutelySkilled --skill keyword-research in your terminal. The skill will be immediately available in your AI coding agent.
What AI agents support keyword-research?
This skill works with claude-code, gemini-cli, openai-codex, mcp. Install it once and use it across any supported AI coding agent.
Maintainers
Generated from AbsolutelySkilled
SKILL.md
Keyword Research
Quick start (5 steps): Seed keywords > Expand with tools/autocomplete > Classify intent for each keyword > Tri-score (organic + AEO + GEO) > Cluster by topic with surface annotations. See the common tasks below for details.
Keyword research is the foundation of all organic and AI search strategy. It is the process of discovering what words and phrases people type into search engines and AI assistants, understanding why they search (intent), and evaluating which of three surfaces - organic results, answer engine features, or AI-generated citations - offers the best opportunity for each keyword.
In 2026, keyword research must account for three surfaces simultaneously:
- Organic blue links (SEO) - traditional rankings on Google, Bing, etc.
- Answer engine features (AEO) - featured snippets, People Also Ask, voice results
- AI-generated citations (GEO) - Google AI Overviews, ChatGPT Search, Perplexity
This skill covers the full research workflow - from seed topic to prioritized,
tri-surface-scored keyword report. It tells you WHAT to target and WHERE the
opportunity is. For HOW to optimize content once you have chosen your targets,
use the companion skills aeo-optimization (snippet/PAA formatting) and
geo-optimization (AI citation optimization).
When to use this skill
Trigger this skill when the user:
- Wants to find keywords for a new website, product page, or blog
- Asks to analyze search intent for a keyword list
- Needs to group keywords into topic clusters or content pillars
- Wants to discover competitor keyword gaps or ranking opportunities
- Asks to find long-tail variations of a seed keyword
- Needs to prioritize a list of keywords by opportunity or difficulty
- Wants to understand what SERP features appear for a target keyword
- Asks to detect keyword cannibalization across existing pages
- Wants to evaluate keywords for snippet opportunity or featured snippet potential
- Asks to assess AI Overview presence for target keywords
- Needs a tri-surface keyword report scoring organic, AEO, and GEO opportunity
- Wants to understand which keywords AI search engines answer vs. defer
Do NOT trigger this skill for:
- Paid search (PPC/Google Ads) bid strategy - ad-specific match types, Quality Scores, and CPC optimization are a different domain
- Brand naming or tagline development - that is copywriting, not search research
- Formatting content to win snippets or PAA - that is aeo-optimization
- Making content more citable by AI engines - that is geo-optimization
Key principles
Search intent is more important than volume - A keyword with 500 monthly searches and clear transactional intent will drive more revenue than a 50,000-search keyword that is purely informational. Always qualify intent before qualifying volume.
Cluster keywords by topic, not individual pages - One page should own a cluster of semantically related terms. Building one page per keyword creates duplication, splits authority, and fragments the user experience.
The SERP + AI Overview is the source of truth - No tool tells you more about what Google wants to rank than the current top 10 results and whether an AI Overview fires. Content type, length, format, featured snippet presence, and AI Overview citations all reveal the implicit standard for a keyword.
Long-tail keywords convert better - Longer, more specific queries have lower volume but higher purchase intent and lower competition. A content strategy built on long-tail clusters outperforms chasing high-volume head terms in most niches.
Competitor gaps reveal the fastest wins - Finding keywords where competitors rank in positions 4-15 (or not at all) is faster than trying to beat them on keywords where they dominate. Gaps are the entry points.
Every keyword has three surfaces to evaluate - A keyword that looks mediocre for organic ranking may have excellent snippet opportunity (AEO) or strong AI citation potential (GEO). Evaluating all three surfaces prevents blind spots and reveals non-obvious wins that single-surface research misses.
Research and optimization are separate phases - This skill identifies WHAT to target and WHERE the opportunity is. Optimization (HOW to format for snippets, HOW to boost AI citations) is a downstream activity handled by aeo-optimization and geo-optimization. Do not mix phases - complete research before starting optimization.
Core concepts
Search intent taxonomy classifies every keyword into one of four categories based
on what the searcher is trying to accomplish. Informational intent ("how does X work",
"what is Y") signals content and education needs. Navigational intent ("brand name",
"site login") signals the user knows where they want to go. Transactional intent
("buy X online", "X pricing", "X discount code") signals readiness to act.
Commercial investigation ("best X", "X vs Y", "X review") sits between informational
and transactional - the user is evaluating options before deciding. See
references/search-intent-mapping.md for detailed classification guidance including
intent-to-surface mapping.
Keyword difficulty (KD) is a 0-100 score estimating how hard it is to rank on page one for a keyword, based primarily on the backlink authority of the current top-ranking pages. High difficulty does not mean impossible - it means you need more authority, better content, or a more specific angle to win. Treat KD as a relative filter, not an absolute gate.
Search volume vs. traffic potential are related but different. Search volume is the average monthly searches for one keyword. Traffic potential is the estimated traffic the top-ranking page receives for the entire cluster of keywords it ranks for. A keyword with 200 monthly searches may have traffic potential of 2,000 if the ranking page captures dozens of related terms. Always evaluate traffic potential over raw volume.
Keyword cannibalization occurs when two or more pages on the same site compete for the same keyword, splitting ranking signals and confusing Google about which page to surface. Symptoms include ranking oscillation, positions that drop when publishing new content, and two pages from the same domain appearing for the same query. Resolve by merging, redirecting, or clearly differentiating the pages.
Tri-surface keyword scoring evaluates every keyword across three surfaces:
organic opportunity (0-10), AEO opportunity (0-10), and GEO opportunity (0-10).
The composite score (0-30) reveals total opportunity, and the priority surface
tells you where to focus optimization efforts. See references/tri-surface-scoring.md
for the full scoring rubrics and worked examples.
SERP feature landscape - SERP features (featured snippets, PAA, video carousels, image packs, shopping results, AI Overviews) are research signals, not just ranking features. Their presence or absence tells you which surfaces are active for a keyword. A keyword with a featured snippet and an AI Overview has tri-surface potential. A keyword with only shopping results is organic-only. Map features during research to inform scoring.
AI search query patterns - AI search engines (Google AI Overviews, ChatGPT Search,
Perplexity) do not answer every query. They tend to fire on informational, comparison,
and multi-step queries while skipping navigational, transactional, and simple factual
queries. Understanding these patterns helps predict GEO opportunity during research.
See references/search-intent-mapping.md for the full AI Overview intent pattern table.
Common tasks
1. Map search intent for a keyword list
For each keyword, classify it using the four-type taxonomy. Apply this decision order:
- Check modifiers first - Words like "buy", "order", "coupon", "discount" signal transactional. Words like "best", "top", "review", "vs", "alternative" signal commercial investigation. Words like "how", "what", "why", "guide", "tutorial" signal informational. Brand name only = navigational.
- When modifiers are absent, check the SERP - Look at the top 3 results. Are they product pages, comparison articles, definitions, or brand homepages? The content type Google rewards reveals the intent.
- Assign a primary intent and note a secondary if relevant - Many keywords blend types. "Best project management software" is primarily commercial investigation with transactional secondary (the user may click through to pricing).
Output format: a table with columns
keyword | intent | confidence | content_type | snippet_type | ai_overview_present.
The last two columns feed into AEO and GEO scoring. Record snippet type as paragraph/list/table/none and AI Overview presence as yes/sometimes/no.
See references/search-intent-mapping.md for the full classification guide.
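The modifier-first check in step 1 can be sketched as a small lookup. This is a minimal illustration, not the skill's actual implementation; the modifier word lists are deliberately short and would need expanding for real use.

```python
# Illustrative modifier lists -- not exhaustive.
TRANSACTIONAL = {"buy", "order", "coupon", "discount", "pricing"}
COMMERCIAL = {"best", "top", "review", "vs", "alternative"}
INFORMATIONAL = {"how", "what", "why", "guide", "tutorial"}

def classify_intent(keyword: str) -> str:
    """Return a primary intent label based on modifier words alone.

    Falls back to 'unknown' when no modifier matches -- in that case
    the SERP check (step 2 above) decides the intent.
    """
    words = set(keyword.lower().split())
    if words & TRANSACTIONAL:
        return "transactional"
    if words & COMMERCIAL:
        return "commercial"
    if words & INFORMATIONAL:
        return "informational"
    return "unknown"  # no modifier signal: check the SERP manually

print(classify_intent("buy crm software"))       # transactional
print(classify_intent("best crm for startups"))  # commercial
print(classify_intent("what is a crm"))          # informational
```

Keywords that come back "unknown" are exactly the ones where the SERP, not the wording, reveals intent.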
2. Build a keyword cluster from a seed topic
Start with one seed keyword and expand outward:
- Generate variants - Use a keyword tool to pull: questions (People Also Ask), autocomplete suggestions, related searches, and lexical variants. For the seed "project management software", variants include "best project management tools", "project management app for teams", "free project management software", etc.
- Group by SERP overlap - Keywords that return the same top-ranking URLs belong in the same cluster. If "project management software" and "task management tool" return 6 of the same top-10 results, one page can rank for both.
- Identify the primary keyword - The one with the highest traffic potential becomes the primary term (used in title, H1, URL). All others are secondary terms woven into subheadings and body copy.
- Name the cluster - Give it a descriptive label: "project management software - top-of-funnel commercial". This label drives content brief decisions.
- Annotate with dominant surface - Score the cluster's keywords using tri-surface scoring and assign a surface label: [ORG], [AEO], [GEO], or [AEO+GEO]. This tells the content team which optimization approach to take after writing.
See references/keyword-clustering.md for semantic, SERP-based, modifier-based,
and surface-aware clustering methods.
3. Tri-surface scoring
This is the signature task of modern keyword research. For each keyword (or cluster), produce scores across all three surfaces.
Process:
- Gather raw data - For each keyword, collect: search volume, traffic potential,
KD, SERP features present, featured snippet format and holder, PAA count, and
AI Overview presence (check Google, optionally ChatGPT Search and Perplexity).
See references/tool-specific-workflows.md for tool-by-tool data collection steps.
- Score organic (0-10) - Based on traffic potential, KD inversion, intent-business alignment, and content gap existence.
- Score AEO (0-10) - Based on snippet presence, format match, PAA count, voice search likelihood, and current holder strength. Score 0 for navigational and most transactional keywords where snippets don't fire.
- Score GEO (0-10) - Based on AI Overview trigger, citation density, query type match, entity relevance, and content uniqueness. Score 0 when no AI Overview fires.
- Calculate composite - Sum the three scores (0-30). Apply business-goal weighting if needed.
- Assign priority surface - Highest-scoring surface becomes the priority. If two are within 2 points and both above 5, mark as dual-surface opportunity.
See references/tri-surface-scoring.md for the complete rubrics, weighting tables,
and three worked examples.
Output format:
keyword | intent | organic_score | aeo_score | geo_score | composite | priority_surface
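The composite and priority-surface rules above can be sketched as a small data class. The class name and the dual-surface label encoding (e.g. "organic+aeo") are illustrative assumptions; the scoring rubrics themselves live in references/tri-surface-scoring.md.

```python
from dataclasses import dataclass

@dataclass
class TriScore:
    keyword: str
    organic: int  # 0-10
    aeo: int      # 0-10
    geo: int      # 0-10

    @property
    def composite(self) -> int:
        # Composite opportunity is the simple sum (0-30).
        return self.organic + self.aeo + self.geo

    def priority_surface(self) -> str:
        """Highest-scoring surface wins; mark dual-surface when the top
        two are within 2 points and both above 5."""
        scores = {"organic": self.organic, "aeo": self.aeo, "geo": self.geo}
        ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
        (top, top_v), (second, second_v) = ranked[0], ranked[1]
        if top_v - second_v <= 2 and top_v > 5 and second_v > 5:
            return f"{top}+{second}"
        return top

row = TriScore("best crm software", organic=7, aeo=6, geo=2)
print(row.composite, row.priority_surface())  # 15 organic+aeo
```

A navigational keyword scored TriScore("brand login", 6, 0, 0) would correctly come out as organic-only.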
4. Identify competitor keyword gaps (surface-aware)
A keyword gap is a keyword where a competitor ranks in the top 20 but you do not.
Framework for surface-aware gap analysis:
- Pull keyword rankings for 3-5 competitors using a tool (Ahrefs, Semrush). Export keywords where competitor is in positions 1-20.
- Filter out keywords where your site already ranks positions 1-5 (already winning).
- Filter for keywords matching your target intent.
- Sort by traffic potential descending.
- New: Surface gap analysis - For the top 50 gap keywords, check whether they trigger featured snippets and AI Overviews. A keyword gap where the competitor ranks organically but doesn't hold the snippet or isn't cited by AI is a multi-surface gap - you can potentially win the snippet or AI citation even if the organic position takes time to capture.
- Cross-reference with your content inventory. Existing pages that can be optimized are quick wins; missing pages are content creation opportunities.
See references/tool-specific-workflows.md for tool-specific gap analysis steps.
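Steps 1-4 of the gap framework reduce to a set difference plus a sort. The row format below (keyword, position, traffic_potential) is an assumed stand-in for a tool export, not any tool's real schema.

```python
def keyword_gaps(competitor_rows, our_rows):
    """Return competitor keywords (positions 1-20) where we do not
    already rank in positions 1-5, sorted by traffic potential."""
    winning = {r["keyword"] for r in our_rows if r["position"] <= 5}
    gaps = [
        r for r in competitor_rows
        if r["position"] <= 20 and r["keyword"] not in winning
    ]
    return sorted(gaps, key=lambda r: r["traffic_potential"], reverse=True)

competitors = [
    {"keyword": "crm migration checklist", "position": 4, "traffic_potential": 900},
    {"keyword": "crm pricing", "position": 2, "traffic_potential": 4000},
]
ours = [{"keyword": "crm pricing", "position": 3}]

print([g["keyword"] for g in keyword_gaps(competitors, ours)])
# ['crm migration checklist']
```

The surface-gap step (5) would then annotate each surviving row with snippet and AI Overview status before prioritizing.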
5. Find long-tail variations with surface annotations
Long-tail keywords (typically 3+ words, lower volume, higher specificity) are easier to rank for and often signal stronger intent. To find them:
- Question modifiers: "how to", "what is", "why does", "when should"
- Qualifier modifiers: "for small business", "for beginners", "without X", "with Y"
- Comparison modifiers: "vs", "alternative to", "better than", "instead of"
- Location modifiers: city, region, "near me", "in [country]"
- Feature modifiers: "free", "open source", "enterprise", "API", "integration"
Surface annotations for long-tail keywords:
- Question-format long-tails ("how to X for Y") score high on AEO (snippet targets)
- Comparison long-tails ("X vs Y for Z") score high on GEO (AI engines love comparisons)
- Feature/qualifier long-tails ("X with API for enterprise") are usually organic-only
- Location long-tails are almost always organic-only (local pack, not snippets or AI)
Target long-tail keywords with dedicated FAQ sections, comparison pages, or use-case landing pages. Annotate each with its likely priority surface.
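The modifier families and their surface annotations can be combined into a simple generator. The patterns and surface labels below follow the annotation rules above but are illustrative, not a complete modifier inventory.

```python
# Each family: (templates, likely priority surface per the rules above).
MODIFIERS = {
    "question":   (["how to choose {kw}", "what is {kw}"], "AEO"),
    "comparison": (["{kw} vs alternatives", "alternative to {kw}"], "GEO"),
    "qualifier":  (["{kw} for small business", "{kw} for beginners"], "ORG"),
    "location":   (["{kw} near me"], "ORG"),
}

def expand_long_tail(seed: str):
    """Expand a seed keyword into long-tail variants annotated with
    their modifier family and likely priority surface."""
    out = []
    for family, (patterns, surface) in MODIFIERS.items():
        for p in patterns:
            out.append({
                "keyword": p.format(kw=seed),
                "family": family,
                "surface": surface,
            })
    return out

for v in expand_long_tail("crm software"):
    print(f'[{v["surface"]}] {v["keyword"]}')
```

Real expansion should of course be validated against autocomplete and PAA data rather than generated purely from templates.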
6. Produce a keyword research report
The keyword research report is the primary deliverable of this skill. It synthesizes all research into an actionable document.
Report structure:
Executive summary (1 paragraph) - Total keywords analyzed, top clusters, dominant surface opportunity, and 3 biggest findings.
Intent distribution - Pie chart or table showing the breakdown of informational, commercial, transactional, and navigational keywords in the dataset.
Surface opportunity map - Table showing how many keywords have their priority surface as organic, AEO, GEO, dual, or tri. This reveals whether the overall strategy should lean toward traditional SEO, snippet optimization, or AI citation.
Top keyword clusters - For each cluster: name, primary keyword, keyword count, average composite score, dominant surface, and recommended content type.
Quick wins - Keywords where you have the best ratio of opportunity to effort: striking-distance organic keywords (positions 5-15), snippet-eligible keywords with no current holder, and AI Overview queries citing weak sources you can displace.
Content recommendations - Map clusters to content calendar items: new pages, existing pages to optimize, FAQ additions, and comparison articles to create.
Full keyword data - Appendix table with all keywords and their scores. Use the spreadsheet template from
references/tool-specific-workflows.md.
7. Detect keyword cannibalization
Run a site search for the target keyword (site:yourdomain.com "keyword phrase") and
audit Google Search Console for pages sharing the same top query.
Diagnosis:
- Two pages ranking for the same query: Check which page Google prefers (higher avg. position). The preferred page keeps the keyword; the other page is re-optimized for a different term or redirected.
- Rankings oscillating week to week: Classic cannibalization signal. Consolidate the weaker page's content into the stronger one via a 301 redirect.
- New page tanked the ranking of an existing page: Re-differentiate the new page's focus term or merge it back into the original.
- Snippet cannibalization: If two of your pages alternate holding the featured snippet for the same query, Google may drop both. Consolidate to ensure one definitive page owns the snippet-eligible answer.
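The GSC audit above amounts to grouping export rows by query and flagging queries served by two or more pages. The (query, page, position) tuple format is an assumed export shape for illustration.

```python
from collections import defaultdict

def find_cannibalization(rows):
    """Given (query, page, avg_position) rows from a GSC export,
    return queries where 2+ pages on the same site compete."""
    pages_by_query = defaultdict(set)
    for query, page, _position in rows:
        pages_by_query[query].add(page)
    return {q: sorted(p) for q, p in pages_by_query.items() if len(p) >= 2}

rows = [
    ("crm pricing", "/pricing", 3.2),
    ("crm pricing", "/blog/crm-cost", 14.8),
    ("what is a crm", "/blog/what-is-crm", 5.1),
]
print(find_cannibalization(rows))
# {'crm pricing': ['/blog/crm-cost', '/pricing']}
```

Note this intentionally keeps per-page positions separate rather than averaging them, in line with the gotcha about GSC position averages hiding oscillation.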
Anti-patterns
| Mistake | Why it's wrong | What to do instead |
|---|---|---|
| Chasing volume over intent | A high-volume keyword that doesn't match your buyer's stage sends irrelevant traffic that bounces | Filter by intent first, then sort by volume within the right intent category |
| One page per keyword | Creates thin, near-duplicate pages that split link equity and rarely rank | Cluster semantically related keywords to one page; build depth |
| Ignoring the SERP | Targeting a keyword without checking what type of content currently ranks leads to mismatched format | Always check the top 10 before writing a brief; match dominant content type |
| Targeting KD 70+ with a new site | New domains lack the authority to rank on competitive terms | Start with KD < 30 to earn rankings, traffic, and links; build up to harder terms |
| Skipping competitor gap analysis | Building content only from brainstorming misses proven opportunities | Always run a gap report before finalizing your content calendar |
| Never updating keyword research | Search behavior evolves; queries from 2 years ago may have shifted in intent or volume | Audit top content annually; refresh keyword targets based on current SERP data |
| Ignoring snippet and AI Overview presence | Treating all keywords as organic-only misses AEO and GEO opportunities that may be easier to win than organic rankings | Record SERP features and AI Overview status for every keyword during research |
| Treating every keyword as organic-only | Defaulting to traditional SEO without checking if a keyword is better won through snippets or AI citations | Run tri-surface scoring for all priority keywords; assign a priority surface |
| Scoring GEO when no AI Overview fires | Assigning GEO opportunity to a keyword where AI engines don't generate an answer wastes effort | Always verify AI Overview presence manually; GEO = 0 if no AI answer fires |
| Mixing research and optimization | Trying to format content for snippets or AI citations before finishing keyword research leads to premature decisions | Complete the full research workflow (seed > expand > classify > score > cluster) before any optimization work |
Gotchas
Traffic potential and search volume diverge most on commercial keywords - A keyword with 300 monthly searches may be the primary term for a page that also ranks for 40 related variants, giving it 8,000 monthly visits. Conversely, a 10,000-volume head term may have traffic potential of only 6,000 because ranking page 1 earns only a small CTR share. Always pull traffic potential from your tool (Ahrefs "TP", Semrush "Traffic"), not raw volume.
SERP feature presence changes by location, device, and login state - An AI Overview you see in a logged-in US Chrome session may not fire in an incognito session or in another country. Always verify SERP features in incognito mode and, where relevant, from the target country using a VPN or tool like SERP API, before assigning AEO or GEO scores.
Keyword cannibalization diagnosis is wrong if you use position average - Google Search Console averages positions across all queries and dates. Two pages fighting for the same query may each show position 8 in the average, but the reality is one shows at 3 and the other at 15, alternating week by week. Filter GSC by specific queries and look for multiple pages appearing or for high impression/low click patterns on the same query across different pages.
Tri-surface scoring becomes meaningless if you score GEO for navigational queries - Navigational queries ("brand login", "product dashboard") almost never trigger AI Overviews. Assigning any GEO score above zero to navigational keywords inflates composite scores and misdirects content effort. GEO score must be zero unless you have verified an AI Overview fires for that query.
Keyword clusters built from tool "related keywords" lists ignore SERP overlap - Tool-suggested related keywords group by semantic similarity, not by whether Google actually returns the same URLs for both queries. Two semantically similar keywords may trigger completely different SERPs (different content types, different competition). Validate cluster membership by checking that 5+ of the top 10 results overlap between the keywords.
References
For detailed content on specific topics, read the relevant file from references/:
- references/search-intent-mapping.md - Deep dive into the four intent types, classification signals, intent-to-content-type matrix, intent-to-surface mapping for AEO and GEO, AI Overview intent patterns, and how to validate intent assumptions from SERP data. Load when classifying a keyword list or writing a content brief.
- references/keyword-clustering.md - Methods for clustering keywords (semantic, SERP-based, modifier-based, surface-aware), building pillar-and-spoke topic clusters, surface-aware cluster annotation, and tooling options. Load when building a cluster or planning a content architecture.
- references/tri-surface-scoring.md - Complete scoring rubrics for organic (0-10), AEO (0-10), and GEO (0-10) opportunity scores, composite scoring, business-goal weighting, three worked examples, and limitations. Load when scoring keywords or producing a tri-surface keyword research report.
- references/tool-specific-workflows.md - Step-by-step workflows for Ahrefs, Semrush, Google Search Console + free tools, and ChatGPT/Perplexity manual audits. Includes spreadsheet template with columns, formulas, and conditional formatting. Load when performing hands-on keyword research with specific tools.
Only load a references file if the current task requires deep detail on that topic.
References
keyword-clustering.md
Keyword Clustering
Keyword clustering is the process of grouping related keywords so that a single page can rank for all of them, rather than creating separate thin pages for each term. It is the bridge between raw keyword research and a content architecture. Without clustering, a site accumulates redundant, competing pages. With clustering, every page is built to own a topic - ranking for dozens or hundreds of related terms while building concentrated topical authority.
Why clustering matters
Search engines rank pages, not keywords. A well-optimized page targeting a primary keyword will naturally rank for dozens of semantically related variants. Building separate pages for "project management software" and "project management tools" and "project management app" splits the backlinks, internal links, and topical signals that should all feed into one authoritative page.
Benefits of proper clustering:
- Reduces the total number of pages needed (quality over quantity)
- Consolidates link equity to fewer, stronger pages
- Prevents cannibalization before it starts
- Maps clearly to a content calendar (one cluster = one content item)
- Creates a hierarchy that supports pillar-and-spoke internal linking
Clustering methods
Method 1: SERP-based clustering (most accurate)
Group keywords based on which URLs appear in their top search results. If two keywords return the same ranking URLs, Google considers them the same topic.
How to do it:
- For each keyword in your list, record the top 5-10 ranking URLs.
- Compare URL overlap across keywords. Use a clustering tool (Keyword Insights, Surfer SEO, KeyClusters) or do it manually for smaller lists.
- Keywords sharing 3+ of the same top-5 URLs belong in the same cluster.
- Keywords with 0-1 URL overlap belong in different clusters (or need their own page).
Threshold guidance:
- Same 3+ of top 5 URLs: same cluster (high confidence)
- Same 2 of top 5 URLs: likely same cluster (verify manually)
- Same 1 of top 5 URLs: borderline (check if topically related)
- 0 shared URLs: separate clusters
Advantages: Most accurate method because it reflects actual Google behavior, not just keyword similarity. Catches non-obvious clusters where synonyms rank together.
Disadvantages: Time-intensive at scale without tooling; requires live SERP data.
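The threshold table above can be applied programmatically once you have top-5 URLs per keyword. This is a minimal sketch assuming you have already collected SERP data into a dict; it is not a substitute for a clustering tool at scale.

```python
def same_cluster(serps, kw_a, kw_b, threshold=3):
    """True when two keywords share `threshold`+ of their top-5 URLs.

    `serps` maps keyword -> ordered list of top ranking URLs.
    threshold=3 matches the high-confidence rule above; lower values
    need manual verification.
    """
    overlap = len(set(serps[kw_a][:5]) & set(serps[kw_b][:5]))
    return overlap >= threshold

serps = {
    "project management software": ["a.com", "b.com", "c.com", "d.com", "e.com"],
    "task management tool":        ["a.com", "b.com", "c.com", "x.com", "y.com"],
    "gantt chart maker":           ["p.com", "q.com", "r.com", "s.com", "t.com"],
}

print(same_cluster(serps, "project management software", "task management tool"))  # True
print(same_cluster(serps, "project management software", "gantt chart maker"))     # False
```

This is the same check as the no-tool fallback later in this file (3+ matching results in the top 5), just automated.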
Method 2: Semantic/lexical clustering
Group keywords by shared words, stems, and synonyms using linguistic similarity. Faster than SERP-based clustering, and appropriate for early-stage research or lists where you cannot pull SERP data.
Approaches:
Exact match grouping - Keywords containing the same root phrase go together:
- "email marketing" + "email marketing tools" + "email marketing tips" = one cluster
Modifier stripping - Remove modifiers and group by the core concept:
- "best CRM for small business" + "top CRM tools" + "CRM software for startups" all strip to "CRM" -> same cluster
Synonym mapping - Identify synonyms and near-synonyms that mean the same thing:
- "project management software" / "task management tool" / "work management platform" are often synonymous; verify with SERP overlap
Advantages: Fast, works without tool access, good for brainstorming.
Disadvantages: Less accurate than SERP-based. Can over-cluster (merging keywords that Google treats as different) or under-cluster (splitting keywords that Google treats the same).
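Modifier stripping can be sketched as removing a stop list of modifier words and grouping by what remains. The stop list below is illustrative and intentionally aggressive (it mirrors the "all strip to CRM" example above); a real list would be tuned per niche.

```python
# Illustrative modifier stop list -- tune per niche.
MODIFIER_WORDS = {
    "best", "top", "free", "for", "small", "business",
    "tools", "software", "startups",
}

def core_concept(keyword: str) -> str:
    """Strip modifier words, leaving the core concept."""
    kept = [w for w in keyword.lower().split() if w not in MODIFIER_WORDS]
    return " ".join(kept)

def group_by_core(keywords):
    """Group keywords whose stripped cores match."""
    clusters = {}
    for kw in keywords:
        clusters.setdefault(core_concept(kw), []).append(kw)
    return clusters

kws = ["best crm for small business", "top crm tools", "crm software for startups"]
print(group_by_core(kws))
# {'crm': ['best crm for small business', 'top crm tools', 'crm software for startups']}
```

As the disadvantages note warns, groups produced this way should be verified with a SERP overlap spot-check before briefing content.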
Method 3: Modifier-based clustering
Group keywords by the type of modifier that qualifies the seed keyword. Useful for mapping the content types needed across a topic.
Modifier categories:
| Modifier type | Examples | Content type |
|---|---|---|
| Question modifiers | "how to X", "what is X", "why X" | How-to guide, explainer |
| Comparison modifiers | "X vs Y", "X alternative", "X compared to Y" | Comparison/versus page |
| Audience modifiers | "X for small business", "X for developers" | Use-case landing page |
| Feature modifiers | "X with API", "X free plan", "X enterprise" | Feature/plan pages |
| Location modifiers | "X in NYC", "X near me" | Local landing page |
| Stage modifiers | "X tutorial", "X examples", "X best practices" | Educational content |
Apply this method after semantic clustering to identify which clusters need sub-pages (a large cluster with many modifier types may split into a pillar + spoke pages).
Building pillar-and-spoke topic clusters
The pillar-and-spoke model is the standard architecture for topical authority.
Pillar page:
- Covers the broadest, highest-volume keyword in the cluster
- Provides a comprehensive overview of the topic
- Links out to each spoke page for deep dives
- Typically 2,000-5,000 words; comprehensive but not exhaustive
- Targets commercial investigation or informational intent at the category level
- Example: "Email Marketing: The Complete Guide"
Spoke pages:
- Each covers one sub-topic of the pillar in depth
- Targets a more specific, often long-tail keyword cluster
- Links back to the pillar page
- Typically 1,000-3,000 words; thorough on the specific sub-topic
- Examples: "Email Marketing Automation", "Email Marketing A/B Testing", "Email Marketing for E-commerce"
Hub page (optional):
- A topical hub is a navigation-oriented page listing pillar pages in a domain
- Useful for large sites covering multiple pillars under one umbrella topic
- Example: a "Marketing Resources" hub linking to pillars on Email, SEO, Social
Internal linking rules:
- Every spoke links to its pillar (reinforces pillar authority)
- Pillar links to all spokes (signals topical depth)
- Spokes can link to related spokes within the same pillar
- Never link from a pillar to a page on a different pillar without purpose
Cluster size guidelines
| Cluster size | What it means | Action |
|---|---|---|
| 1-3 keywords | Small, specific cluster | One targeted page; may be a spoke |
| 4-15 keywords | Standard cluster | One well-optimized page targeting all terms |
| 16-40 keywords | Large cluster | Consider splitting into pillar + 2-3 spokes |
| 40+ keywords | Very large cluster | Definitely needs pillar + spoke architecture |
Signs you are over-clustering (merged too much):
- Your page would need to cover 5+ distinct sub-topics to address all keywords
- The keywords span multiple intent types (informational + transactional in one cluster)
- The top SERP results for different keywords in the cluster are completely different pages
Signs you are under-clustering (split too much):
- You have two planned pages where the top SERP results are 70%+ the same URLs
- Your pages would be shorter than 600 words to cover the "separate" topics
- Your planned pages are semantic synonyms of each other
Manual clustering process (no tooling)
For lists under 100 keywords, manual clustering is practical:
- Export to a spreadsheet - One keyword per row, with search volume and intent.
- Sort by root word - Alphabetical sort often groups related keywords together.
- Create a "cluster" column - Assign a cluster name to each keyword.
- Merge by SERP spot-check - For any two clusters you are unsure about, search both keywords and compare the top 3 results. Same URLs? Merge the clusters.
- Name each cluster - Use the highest-volume keyword as the cluster name.
- Count keywords per cluster - Any cluster with 16+ keywords may need a pillar/spoke split.
- Assign content type - Based on intent and cluster size, assign: new page, existing page to optimize, FAQ addition, or pillar + spokes.
Tooling options
| Tool | Method | Best for |
|---|---|---|
| Keyword Insights | SERP-based + NLP | Large lists (1,000+); automated clustering |
| Surfer SEO | SERP-based | Clusters tied to content editor workflow |
| KeyClusters | SERP-based | Standalone clustering at low cost |
| Ahrefs / Semrush | Semantic (built-in grouping) | Quick grouping during research; less precise |
| Screaming Frog + custom script | Custom SERP scrape | Technical teams building their own workflows |
| Manual spreadsheet | Semantic + modifier | Lists under 100 keywords |
No-tool fallback: For any cluster where you are unsure, search both keywords in the same incognito browser session and compare the top 5 results. If 3 or more results match, the keywords belong to the same cluster.
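The 3-of-top-5 overlap rule is easy to script once you have copied the top result URLs into lists. `same_cluster` and its arguments are illustrative names for a manual-workflow helper, assuming you gather the URLs yourself:

```python
def same_cluster(results_a, results_b, top_n=5, min_overlap=3):
    """Two keywords belong to the same cluster if at least
    min_overlap of their top_n result URLs are shared."""
    overlap = set(results_a[:top_n]) & set(results_b[:top_n])
    return len(overlap) >= min_overlap
```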
Cluster validation checklist
Before finalizing a cluster and briefing a writer:
- Primary keyword identified (highest traffic potential in the cluster)
- All secondary keywords can be naturally addressed in one piece of content
- SERP-verified: top results for primary keyword match the content type you plan
- No existing page on your site already owns this cluster (check for cannibalization)
- Intent is consistent across all keywords in the cluster
- Cluster size is appropriate (4-15 keywords for a standard page; larger = pillar)
- Internal links planned: this page links to its pillar; pillar links back
A validated cluster becomes a one-to-one mapping to a content brief. One cluster = one content item = one calendar slot. This is the link between keyword research and content execution.
Surface-aware clustering
In tri-surface keyword research, clusters should be annotated with their dominant surface opportunity. This annotation tells content creators not just what to write about but which surface to optimize for first.
How to annotate clusters with a dominant surface
After building your clusters using any of the methods above, add a surface annotation step:
- Score each keyword in the cluster using the tri-surface scoring rubric from references/tri-surface-scoring.md. You need organic, AEO, and GEO scores per keyword.
- Average the scores across the cluster to get cluster-level surface scores:
- Cluster organic score = average of all keywords' organic scores
- Cluster AEO score = average of all keywords' AEO scores
- Cluster GEO score = average of all keywords' GEO scores
Assign the dominant surface based on the highest cluster-level score. If two scores are within 1 point, mark it as a dual-surface cluster.
Add the surface label to the cluster name:
- "email marketing automation [ORG]" - organic-dominant
- "how to improve email deliverability [AEO]" - AEO-dominant (snippet opportunity)
- "best email marketing tools for startups [GEO]" - GEO-dominant (AI citation opportunity)
- "email marketing vs SMS marketing [AEO+GEO]" - dual-surface
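The annotation steps above can be sketched in a few lines. The `(organic, aeo, geo)` tuples are assumed to come from your own scoring pass, and the 1-point tie margin implements the dual-surface rule:

```python
def annotate_cluster(name, keyword_scores, tie_margin=1.0):
    """keyword_scores: list of (organic, aeo, geo) tuples, one per keyword.
    Average each surface across the cluster, then label the cluster with
    its dominant surface; surfaces within tie_margin of the top score are
    merged into a dual (or tri) label."""
    n = len(keyword_scores)
    avgs = {
        "ORG": sum(s[0] for s in keyword_scores) / n,
        "AEO": sum(s[1] for s in keyword_scores) / n,
        "GEO": sum(s[2] for s in keyword_scores) / n,
    }
    top = max(avgs.values())
    # keep every surface within the tie margin, in a stable order
    label = "+".join(k for k in ("ORG", "AEO", "GEO") if top - avgs[k] <= tie_margin)
    return f"{name} [{label}]"
```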
Surface patterns by cluster type
Certain cluster shapes naturally align with specific surfaces:
| Cluster type | Dominant surface | Why |
|---|---|---|
| Question clusters ("how to X", "what is X", "why X") | AEO | Question queries trigger featured snippets and PAA boxes at high rates. Voice search also favors question-format content. |
| Comparison clusters ("X vs Y", "best X for Y", "X alternatives") | GEO | AI engines produce detailed comparison answers with multiple citations. These are the highest-value GEO clusters. |
| Data-rich clusters ("X statistics", "X benchmarks", "X report") | GEO | AI engines cite sources with original data and statistics. If your content has proprietary data, GEO opportunity is high. |
| Transactional clusters ("buy X", "X pricing", "X free trial") | Organic | Neither snippets nor AI Overviews fire for purchase-intent queries. Pure organic play. |
| Tutorial clusters ("X tutorial", "X step by step", "X for beginners") | AEO + Organic | Step-by-step content wins list snippets and ranks well organically. Some GEO opportunity if the tutorial is comprehensive. |
| Definition clusters ("what is X", "X definition", "X explained") | AEO | Paragraph snippets fire for definition queries. AI engines answer these from training data with fewer citations, so GEO is lower. |
Impact on content brief
The dominant surface annotation changes the content brief requirements:
- Organic-dominant clusters: standard SEO brief - focus on depth, internal linking, keyword placement, and matching the SERP content type
- AEO-dominant clusters: brief must include snippet-formatted answer blocks, PAA subheadings, concise answer paragraphs (40-60 words), and structured lists/tables. Reference the aeo-optimization skill for formatting details.
- GEO-dominant clusters: brief must include citable claims with supporting data, clear expert positioning, unique analysis or original research, and structured content that AI engines can extract. Reference the geo-optimization skill for citation optimization details.
- Dual/tri-surface clusters: layer the requirements - lead with the primary surface format, then address secondary surface needs in supporting sections
search-intent-mapping.md
Search Intent Mapping
Search intent (also called user intent or query intent) is the underlying goal a person has when they type a query into a search engine. Google's core mission is to match results to intent, which means intent is the single most important dimension of any keyword. A technically well-optimized page targeting the wrong intent will not rank because it does not satisfy what the searcher needs.
The four intent types
Informational
The searcher wants to learn, understand, or find an answer. No immediate action is implied. These are the highest volume queries on the web.
Signals:
- Question words: "how", "what", "why", "when", "who", "where"
- Explanatory phrases: "guide to", "tutorial", "explained", "definition of", "examples of"
- Research phrases: "statistics on", "history of", "overview of"
Examples:
- "how does compound interest work"
- "what is a product-led growth strategy"
- "why is my React app re-rendering"
- "JavaScript closures explained"
Content types Google rewards:
- Long-form guides and tutorials
- How-to articles with step-by-step structure
- Definitions and explainers
- FAQ pages
- Wikipedia-style reference articles
Conversion potential: Low to medium. Informational content builds awareness and trust, feeds remarketing audiences, and earns backlinks. It rarely converts directly but is essential for top-of-funnel and authority building.
Navigational
The searcher already knows where they want to go and is using the search engine as a shortcut to get there. They have a specific destination in mind.
Signals:
- Brand names: "Stripe dashboard", "Notion login", "Figma"
- Site-specific phrases: "GitHub repo for X", "Ahrefs keyword explorer"
- Destination phrases: "sign in", "login", "account", "download"
Examples:
- "Notion templates"
- "Vercel dashboard login"
- "Tailwind CSS docs"
- "react-query GitHub"
Content types Google rewards:
- Official brand homepages
- Login and account pages
- Official documentation sites
- Official download pages
Conversion potential: Very high for your own brand (the searcher is already a user). Very low for competitors' brand terms - do not waste resources trying to rank for "Salesforce login" unless you are Salesforce.
Key rule: Do not build content targeting competitors' pure brand navigational terms. Comparison pages ("X vs Salesforce") work; pure navigational terms do not.
Transactional
The searcher is ready to take a specific action, often a purchase, signup, or download. Intent to convert is explicit.
Signals:
- Purchase intent: "buy", "order", "purchase", "shop for"
- Conversion intent: "sign up", "get started", "free trial", "download"
- Deal signals: "coupon", "discount", "promo code", "deal", "cheap"
- Subscription signals: "pricing", "plans", "subscription"
Examples:
- "buy mechanical keyboard online"
- "Notion pricing"
- "free project management software"
- "HubSpot free trial"
- "Figma discount code"
Content types Google rewards:
- Product pages with clear CTAs
- Pricing pages
- Free trial / signup landing pages
- Category/collection pages (e-commerce)
- Service pages with contact forms
Conversion potential: Highest of all four types. These searchers have money or action intent. Optimize for clarity, trust signals, and minimal friction.
Commercial Investigation
The searcher is evaluating options before committing. They know the category or problem but are researching which solution to choose. This is the "consideration" stage of the funnel.
Signals:
- Comparison: "vs", "versus", "or", "compared to", "difference between"
- Evaluation: "best", "top", "review", "reviews", "rated", "recommended"
- Alternatives: "alternative to", "instead of", "like X but"
- Use-case qualifying: "for small business", "for developers", "for teams"
Examples:
- "best CRM for startups"
- "Ahrefs vs Semrush"
- "Notion alternatives for project management"
- "Stripe reviews"
- "top email marketing tools 2024"
Content types Google rewards:
- "Best X" listicles and comparison guides
- Side-by-side comparison tables
- Review articles with pros/cons
- Use-case landing pages
- Buyer's guides
Conversion potential: High. The searcher is actively deciding. Content here is some of the most valuable in the funnel - it captures buyers at decision time and can include affiliate links, CTAs to trials, or soft conversion offers.
How to classify intent from SERP features
When modifier words are absent or ambiguous, the SERP itself is the ground truth. Analyze the top 3-5 results:
| SERP signal | Likely intent |
|---|---|
| All results are product/service pages | Transactional |
| All results are "best X" or comparison articles | Commercial investigation |
| All results are how-to guides, tutorials, or explainers | Informational |
| Top result is the brand's homepage or login page | Navigational |
| Mix of product pages and review articles | Transactional + commercial investigation |
| Mix of guides and product pages | Informational + transactional (blended) |
| Featured snippet with a definition | Informational |
| Featured snippet with a step list | Informational (task-oriented) |
| Shopping carousel at the top | Transactional (strong product signal) |
| Local pack (map + 3 businesses) | Transactional with local modifier |
SERP features as secondary signals:
- People Also Ask (PAA) - PAA questions are almost always informational sub-intents within the main query. They are subheading opportunities regardless of the primary intent.
- Video carousel - Strong signal that the audience prefers video. Consider a companion video or a YouTube-embedded section in your article.
- Image pack - Visual intent. Include strong imagery and optimize alt text.
- Knowledge panel - Entity/brand query. Content strategy here is brand PR, not SEO.
- Sitelinks - Navigational signal; the brand is dominant for this query.
Intent-to-content-type mapping matrix
| Intent | Primary format | Page type | CTA type |
|---|---|---|---|
| Informational | Long-form guide, how-to, explainer | Blog post, wiki, resource hub | Newsletter signup, content upgrade, internal links |
| Navigational | Brand page, login, docs | Homepage, login page, docs site | None (user already committed) |
| Transactional | Product/service page, category page | Product page, pricing page, signup landing | Buy now, start free trial, add to cart |
| Commercial investigation | Listicle, comparison, review | "Best X" article, versus page, buyer's guide | Soft CTA: "see pricing", "compare plans", affiliate link |
Blended intent pages - Some keywords require satisfying two intents in one page. "Email marketing pricing" is both transactional (show me plans) and commercial investigation (how does it compare?). In these cases, lead with the primary intent in the hero section and address the secondary intent in a supporting section below.
Validating intent assumptions
Assumptions made from keyword modifiers are sometimes wrong. Always validate against the live SERP before committing to a content format.
Common false assumptions:
- "Email marketing" sounds informational but the SERP is dominated by tool homepages - it has a strong navigational/commercial intent overlay.
- "Notion" sounds navigational (brand name) but also returns templates, tutorials, and feature comparison content - there is informational intent in the mix.
- "Best practices for X" sounds informational but for some categories Google favors product comparison pages because searchers want tool recommendations.
Validation checklist:
- Search the keyword in an incognito window (or use a rank tracker with SERP preview).
- Note the content type of results 1, 2, and 3 (article, product page, video, etc.).
- Note the average content length (use a word count tool on 3-5 top results).
- Check whether a featured snippet is present and what format it uses (paragraph, list, table).
- Note any special SERP features (PAA, video, shopping, local pack).
- Update your intent classification and content format before writing the brief.
Intent mismatch consequences:
- Writing a how-to guide for a keyword where Google ranks product pages = won't rank
- Writing a product page for a keyword where Google ranks comparison articles = won't rank
- Writing a 500-word definition for a keyword where the top results average 3,000 words = won't rank
Always let the SERP tell you the format. Your job is to produce the best version of what is already working, not to invent a new format.
Assigning intent to a full keyword list
When you have 50-500 keywords to classify, use this workflow:
Auto-classify by modifier scan - Write a simple rule: if the keyword contains any of [buy, order, purchase, pricing, discount, coupon, free trial] -> transactional. If it contains [best, vs, review, alternative, compared] -> commercial investigation. If it contains [how, what, why, when, guide, tutorial, explained] -> informational. If it contains [brand name, login, sign in, download] -> navigational.
Flag ambiguous keywords - Any keyword that fires zero rules or fires multiple conflicting rules needs manual SERP review. These are typically 10-20% of a list.
Batch-verify the auto-classified results - Spot-check 5-10 keywords from each intent category against the live SERP. If your auto-classification is wrong more than 20% of the time, refine your modifier lists.
Output a classification table with columns:
keyword | primary_intent | secondary_intent | content_type | priority
This classification table becomes the input to your content calendar and content brief creation process.
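The modifier-scan rule from this workflow can be sketched as a small classifier. The modifier lists mirror the ones in the text; extend the `navigational` list with your own brand names, and treat zero or multiple rule hits as a flag for manual SERP review:

```python
import re

MODIFIERS = {
    "transactional": ["buy", "order", "purchase", "pricing", "discount", "coupon", "free trial"],
    "commercial": ["best", "vs", "review", "alternative", "compared"],
    "informational": ["how", "what", "why", "when", "guide", "tutorial", "explained"],
    "navigational": ["login", "sign in", "download"],  # add your brand names here
}

def classify_intent(keyword):
    """Return (intent, needs_review). Keywords that fire zero rules or
    multiple conflicting rules are flagged for manual SERP review."""
    kw = keyword.lower()
    hits = [intent for intent, words in MODIFIERS.items()
            if any(re.search(rf"\b{re.escape(w)}\b", kw) for w in words)]
    if len(hits) == 1:
        return hits[0], False
    return (hits[0] if hits else "unknown"), True
```

Run it over the full list, then route every `needs_review=True` keyword (typically 10-20% of a list) to the manual SERP check.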
Intent signals for AEO and GEO surfaces
Traditional intent classification maps keywords to content types for organic search. In tri-surface keyword research, intent also determines which non-organic surfaces are viable for a keyword. Not every intent type works on every surface.
Intent-to-surface mapping
| Intent type | Organic | AEO (snippets/PAA/voice) | GEO (AI citations) | Notes |
|---|---|---|---|---|
| Informational | High | High | High | Best tri-surface opportunity. AI engines answer informational queries extensively and cite sources. Snippets fire frequently. |
| Commercial investigation | High | Medium | High | AI engines produce detailed comparison answers with citations. Snippets are less common (Google prefers showing multiple results) but PAA is active. |
| Transactional | High | Low | Low | Snippets rarely fire for "buy X" queries. AI Overviews usually don't appear for direct purchase intent - Google shows shopping results instead. |
| Navigational | Low (unless your brand) | None | None | AI engines redirect to the brand. No snippet or AI citation opportunity for third parties. |
How intent affects AEO scoring
High AEO intent signals:
- Question-format queries ("how to", "what is", "why does") - these directly trigger featured snippets and are natural voice search patterns
- Definition queries ("X definition", "X meaning", "what is X") - paragraph snippets
- Process queries ("how to X step by step") - list snippets
- Comparison queries with clear structure ("X vs Y") - table snippets
- "Best" queries with listicle format - list snippets
Low/zero AEO intent signals:
- Brand navigational queries - no snippet opportunity
- Direct purchase queries ("buy X", "order X") - shopping results, not snippets
- Login/account queries - Google shows the brand's page directly
- Price-only queries ("X pricing") - sometimes a snippet, but usually product page
How intent affects GEO scoring
High GEO intent signals:
- Multi-factor evaluation queries ("best X for Y") - AI engines love producing structured comparisons and cite multiple sources
- How-to queries with multiple steps - AI engines generate step-by-step answers and cite authoritative guides
- "Explain" queries ("how does X work") - AI engines synthesize explanations from multiple cited sources
- Emerging topic queries - AI engines pull from recent sources when training data is insufficient, increasing citation opportunity
Low/zero GEO intent signals:
- Simple factual queries ("population of France") - AI answers from training data without citations
- Navigational queries - AI engines redirect, don't generate answers
- Very recent event queries - AI engines may not have indexed recent sources
- Highly subjective queries ("is X worth it") - AI engines are cautious and may not generate an answer at all
AI Overview intent patterns
Google AI Overviews and other AI search engines do not fire equally across all query types. Understanding which patterns trigger AI answers helps you predict GEO opportunity during keyword research.
Query types that consistently trigger AI Overviews
| Query pattern | Example | AI Overview behavior |
|---|---|---|
| "How to" process queries | "how to set up a home network" | Detailed step-by-step answer with 3-6 citations |
| "What is" explainer queries | "what is retrieval augmented generation" | Definition + context with 2-4 citations |
| "Best X for Y" comparisons | "best project management tool for remote teams" | Structured comparison with 4-8 citations |
| "X vs Y" comparisons | "PostgreSQL vs MySQL for web apps" | Side-by-side analysis with 3-5 citations |
| Multi-factor decision queries | "should I use TypeScript or JavaScript" | Pros/cons analysis with 3-6 citations |
| Complex informational queries | "how does mRNA vaccine technology work" | Detailed explanation with 4-7 citations |
Query types that rarely trigger AI Overviews
| Query pattern | Example | Why AI Overview doesn't fire |
|---|---|---|
| Direct purchase intent | "buy iPhone 16 Pro" | Google serves shopping results instead |
| Brand navigational | "Netflix login" | Direct link is more useful |
| Simple factual | "height of Mount Everest" | Knowledge panel handles this |
| Very local queries | "pizza near me" | Local pack is more appropriate |
| Current news/events | "election results today" | Top stories/news box serves this |
| YMYL health/finance (sometimes) | "should I take aspirin daily" | Google is cautious with AI answers on health/finance topics |
Using AI Overview patterns in keyword research
During the research phase, use these patterns to quickly estimate GEO potential:
- If a keyword matches a "consistently triggers" pattern, start with GEO score 5+
- If a keyword matches a "rarely triggers" pattern, start with GEO score 0-2
- Always verify with a manual search - patterns have exceptions
- For borderline keywords, test on both Google and Perplexity - if either fires a detailed cited answer, there is GEO opportunity
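This triage can be sketched with simple regex patterns derived from the two tables above. The patterns and starting scores here are heuristics of my own construction to refine against live searches, not fixed rules:

```python
import re

# Patterns approximating the "consistently triggers" table
HIGH_GEO = [r"^how to\b", r"^what is\b", r"\bbest\b.*\bfor\b", r"\bvs\b",
            r"^should i\b", r"\bhow does\b"]
# Patterns approximating the "rarely triggers" table
LOW_GEO = [r"^buy\b", r"\blogin\b", r"\bnear me\b", r"\btoday\b"]

def estimate_geo_start(keyword):
    """Rough starting GEO score before manual verification: 5 for a
    consistently-triggering pattern, 1 for a rarely-triggering pattern,
    None when the keyword needs a manual check."""
    kw = keyword.lower()
    if any(re.search(p, kw) for p in LOW_GEO):
        return 1
    if any(re.search(p, kw) for p in HIGH_GEO):
        return 5
    return None
```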
tool-specific-workflows.md
Tool-Specific Keyword Research Workflows
This reference provides step-by-step workflows for performing tri-surface keyword research using popular SEO tools and free alternatives. Each workflow produces the data needed to score keywords across organic, AEO (answer engine), and GEO (AI search) surfaces.
Ahrefs workflow
Ahrefs is best for traffic potential estimates, keyword difficulty, and SERP feature analysis.
Step 1: Seed keyword expansion (Keywords Explorer)
- Enter your seed keyword(s) in Keywords Explorer
- Go to "Matching terms" for direct variants and "Related terms" for semantic expansions
- Apply filters:
- Volume: set minimum based on your niche (usually 50+ for B2B, 200+ for B2C)
- KD: filter to your site's realistic range (new sites: 0-30; established: 0-60)
- SERP features: check the "SERP features" column - note which keywords trigger featured snippets, PAA, and AI Overviews
- Export the filtered list (include columns: keyword, volume, KD, traffic potential, SERP features)
Step 2: Competitor gap analysis (Content Gap)
- Go to Content Gap tool
- Enter 3-5 competitor domains in "Show keywords that the following rank for"
- Enter your domain in "But the following target doesn't rank for"
- Filter results:
- Positions: 1-20 for competitors
- Intersect: at least 2 of the competitors rank (validates the keyword)
- Export and merge with your seed expansion list
- In the export, note the SERP features column for each keyword - this feeds AEO scoring
Step 3: SERP feature audit for AEO scoring
- For your top 50-100 priority keywords, click into each keyword's SERP Overview
- Record in your spreadsheet:
- Featured snippet present? (yes/no) and format (paragraph/list/table)
- Who holds the snippet? (domain + DR)
- PAA count (number of People Also Ask boxes)
- Video carousel present?
- These data points feed directly into the AEO Opportunity Score rubric
Step 4: AI Overview check for GEO scoring
Ahrefs has begun tracking AI Overview presence in SERP features (as of 2025). Check the "AI Overview" filter in Keywords Explorer. For keywords where Ahrefs shows AI Overview data:
- Note whether AI Overview is present
- For critical keywords, manually verify on Google (Ahrefs data may lag)
- Record in your spreadsheet for GEO scoring
Semrush workflow
Semrush excels at keyword clustering, intent classification, and SERP feature filtering.
Step 1: Seed expansion (Keyword Magic Tool)
- Enter seed keyword in Keyword Magic Tool
- Use the topic groups on the left sidebar to explore sub-topics
- Apply filters:
- Volume: minimum threshold for your niche
- KD%: realistic range for your domain
- Intent: Semrush auto-classifies intent (I, N, T, C) - use this as a starting point
- SERP features: filter by "Featured snippet", "People Also Ask", "AI Overview"
- Export with all columns including intent and SERP features
Step 2: Competitor gap (Keyword Gap)
- Go to Keyword Gap tool
- Enter your domain vs 3-4 competitors
- Select "Missing" tab (keywords competitors rank for but you don't)
- Apply intent filter to focus on your target intent types
- Sort by traffic potential or volume
- Export and merge with seed expansion data
Step 3: SERP feature analysis
- In the exported data, Semrush includes SERP feature indicators
- Filter for keywords with featured snippets - these are your AEO candidates
- Filter for keywords with "AI Overview" - these are your GEO candidates
- For keywords with both, you have dual-surface opportunity
- Cross-reference with the tri-surface scoring rubric
Step 4: Intent validation
Semrush's auto-intent is a good starting point but not always correct:
- Spot-check 10-15% of your list against live SERPs
- For any keyword where Semrush says "informational" but the SERP shows product pages, override the classification
- Pay special attention to keywords Semrush marks as "commercial investigation" - these often have the highest AEO and GEO opportunity
Google Search Console + free tools workflow
For teams without paid tool access, this workflow uses GSC and free resources.
Step 1: Mine existing rankings (Google Search Console)
- Go to GSC > Performance > Search results
- Set date range to last 3 months
- Export all queries with impressions > 10
- Sort by impressions descending - these are keywords Google already associates with your site
- Identify "striking distance" keywords: position 5-20 with decent impressions
- These are your fastest organic wins
- Add columns for intent classification (manual, using modifier rules from search-intent-mapping.md)
Step 2: Expand with free tools
Google Autocomplete:
- Type your seed keyword into Google search
- Note all autocomplete suggestions
- Add a letter after the seed ("seed keyword a", "seed keyword b"...) for more suggestions
- Record all unique suggestions
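The a-z expansion step can be generated programmatically. This sketch only builds the probe queries; you still paste each one into Google and record the suggestions by hand:

```python
import string

def autocomplete_probes(seed):
    """Build the query list for manual autocomplete mining: the bare
    seed plus 'seed a' through 'seed z' to surface deeper suggestions."""
    return [seed] + [f"{seed} {letter}" for letter in string.ascii_lowercase]
```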
People Also Ask mining:
- Search your seed keyword on Google
- Click on PAA questions to expand them (this loads more questions)
- Record all PAA questions - each is a potential keyword with high AEO value
- PAA questions are inherently snippet-eligible, so they score high on AEO
AlsoAsked.com:
- Enter your seed keyword
- Export the question tree (shows how PAA questions branch from each other)
- Use the question tree to build your keyword cluster hierarchy
Google Trends:
- Compare seed keyword variants to see which has higher/growing interest
- Check "Related queries" for rising terms (potential emerging opportunities)
- Use trend data to break ties between similar keywords
Step 3: Manual SERP audit for AEO/GEO scoring
Without paid tools, you must manually check SERPs for AEO and GEO signals:
- For your top 30-50 keywords, search each one in an incognito browser
- For each keyword, record:
- Featured snippet? Format? Holder domain?
- PAA count?
- AI Overview present? How many sources cited?
- Also test on ChatGPT Search (chat.openai.com with search enabled) and Perplexity (perplexity.ai):
- Does the query trigger a detailed answer?
- How many sources are cited?
- What types of sources are cited? (blogs, official docs, news, forums)
- Record all data in your scoring spreadsheet
ChatGPT Search and Perplexity manual audit
This audit is essential for GEO scoring and should be performed monthly for priority keywords.
ChatGPT Search audit
- Open ChatGPT with search enabled
- Enter your keyword as a search query
- Record:
- Does it generate a search-augmented response? (some queries don't trigger search)
- How many source citations appear?
- What domains are cited? (note domain authority/type)
- What content format do cited sources use? (list, how-to, data-rich, comparison)
- Is your site cited? If not, which competitor is?
Perplexity audit
- Open Perplexity.ai
- Enter the same keyword
- Record the same data points as ChatGPT Search
- Perplexity tends to cite more sources per answer - note the full citation list
- Compare which sources appear in both ChatGPT Search and Perplexity - consistent citations indicate strong GEO positioning
Monthly cadence
- Audit your top 20-30 priority keywords monthly
- Track citation changes over time (are you gaining or losing citations?)
- New keywords entering your pipeline should get a one-time audit before scoring
- Quarterly: re-audit your full keyword list (top 100) for GEO score updates
Spreadsheet scoring template
Use this column layout for your tri-surface keyword research spreadsheet:
Required columns
A: keyword
B: intent (informational / navigational / transactional / commercial)
C: search_volume (monthly)
D: traffic_potential (estimated total traffic for ranking page)
E: keyword_difficulty (0-100)
F: cluster_name
G: snippet_present (yes / no / n/a)
H: snippet_format (paragraph / list / table / n/a)
I: snippet_holder_domain
J: paa_count (0, 1-3, 4+)
K: ai_overview_present (yes / sometimes / no)
L: ai_overview_citation_count (number)
M: organic_score (0-10, calculated)
N: aeo_score (0-10, calculated)
O: geo_score (0-10, calculated)
P: composite_score (M + N + O)
Q: weighted_composite (with business goal weights applied)
R: priority_surface (organic / aeo / geo / dual / tri)
S: action (new page / optimize existing / FAQ / deprioritize)
Formulas (Google Sheets / Excel)
Organic Score (simplified):
=ROUND((MIN(D2/700,3)*3 + (3-MIN(ROUND(E2/23),3))*3 + IF(B2="transactional",2,IF(B2="commercial",2,IF(B2="informational",1,0.5)))*2) / 3, 0)
AEO Score (simplified):
=IF(B2="navigational",0, ROUND((IF(G2="yes",IF(I2="",2,1),IF(B2="informational",2,0))*2 + IF(G2="n/a",0,2)*2 + IF(J2="4+",2,IF(J2="1-3",1,0))*2) / 2, 0))
GEO Score (simplified):
=IF(K2="no",0, ROUND((IF(K2="yes",3,IF(K2="sometimes",1,0))*3 + MIN(L2,2)*2 + IF(B2="informational",2,IF(B2="commercial",2,1))*2) / 2, 0))
Note: these formulas are approximations of the full rubric. For the most accurate scores, evaluate each factor individually using the rubric in references/tri-surface-scoring.md.
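Columns P and Q of the template can also be computed outside the spreadsheet. The `weights` defaults below are placeholders; substitute whatever business-goal weights you have chosen:

```python
def composite_scores(org, aeo, geo, weights=(1.0, 1.0, 1.0)):
    """Column P (composite) is the plain sum of the three surface scores;
    column Q (weighted_composite) applies business-goal weights."""
    composite = org + aeo + geo
    weighted = org * weights[0] + aeo * weights[1] + geo * weights[2]
    return composite, weighted
```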
Conditional formatting
- Green fill (score >= 7): high opportunity on this surface
- Yellow fill (score 4-6): moderate opportunity
- Red fill (score 0-3): low or no opportunity
- Bold text in priority_surface column
- Sort by weighted_composite descending to see your best opportunities first
Pivot table suggestions
- Pivot by cluster_name to see aggregate opportunity per topic cluster
- Pivot by priority_surface to see the distribution of organic vs AEO vs GEO opportunities
- Pivot by intent to see which intent type has the most tri-surface opportunity
tri-surface-scoring.md
Tri-Surface Keyword Scoring
In 2026, every keyword can drive traffic from three distinct surfaces: traditional organic blue links (SEO), answer engine features like featured snippets, People Also Ask, and voice results (AEO), and AI-generated search results from Google AI Overviews, ChatGPT Search, and Perplexity (GEO). Scoring a keyword on all three surfaces reveals where the real opportunity lies - some keywords are best won organically, others through snippet capture, and others by earning AI citations. This reference defines the scoring rubrics for each surface and a composite scoring method.
Why tri-surface scoring matters
Traditional keyword research only evaluates organic ranking opportunity. You look at search volume, keyword difficulty, and intent alignment, then decide whether to pursue the keyword based on whether you can realistically rank on page one. This single-surface view was sufficient when organic blue links captured the vast majority of clicks on the results page. That is no longer the case.
Answer engines now serve direct answers for roughly 30-40% of informational queries. Featured snippets, People Also Ask boxes, and voice search results pull users away from clicking through to organic listings. If you rank #3 organically but someone else holds the featured snippet, your effective click-through rate drops significantly. Meanwhile, AI search engines - Google AI Overviews, ChatGPT Search, and Perplexity - are creating an entirely new traffic channel by citing sources in their generated answers. Being cited in an AI Overview is functionally a new kind of "ranking" that exists outside the traditional ten blue links.
A keyword might have low organic opportunity (KD 80+) but high AEO opportunity (no featured snippet holder, strong PAA presence) or high GEO opportunity (AI Overview fires consistently but cites weak, outdated sources you could displace). Evaluating all three surfaces prevents blind spots and reveals non-obvious wins. Without tri-surface scoring, you might skip a keyword because organic difficulty is too high, completely missing that it represents a wide-open opportunity on answer engine or AI-generated surfaces.
Surface 1: Organic Opportunity Score (0-10)
Score each keyword on its traditional organic ranking potential.
Rubric
| Factor | Weight | Scoring |
|---|---|---|
| Traffic potential | 3 | 0 = <100/mo, 1 = 100-500, 2 = 500-2K, 3 = 2K+ |
| KD inversion | 3 | 0 = KD 70+, 1 = KD 50-69, 2 = KD 30-49, 3 = KD <30 |
| Intent-business alignment | 2 | 0 = no match, 1 = awareness only, 2 = consideration/commercial |
| Content gap exists | 2 | 0 = top 3 results are strong, 1 = gaps in top 10, 2 = clear content gap |
Score = sum of (factor score * weight) / max possible * 10, rounded to nearest integer
Simplified: add up the weighted scores. Max raw = 3*3 + 3*3 + 2*2 + 2*2 = 26. Multiply by 10/26 and round to get 0-10.
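The formula can be sketched as a small Python helper (names like `surface_score` are illustrative, not part of any tool's API). Computing the weighted maximum from the rubric itself avoids hard-coding a divisor:

```python
def surface_score(factors, weights, max_scores):
    """Sum of (factor * weight), normalized to 0-10 and rounded.

    factors: the scores you assigned per the rubric
    weights: the rubric weights, in the same order
    max_scores: the maximum assignable score per factor
    """
    raw = sum(f * w for f, w in zip(factors, weights))
    max_raw = sum(m * w for m, w in zip(max_scores, weights))
    return round(raw * 10 / max_raw)

# Organic rubric: traffic (0-3, w3), KD inversion (0-3, w3),
# intent alignment (0-2, w2), content gap (0-2, w2)
organic = surface_score([2, 2, 1, 2], [3, 3, 2, 2], [3, 3, 2, 2])  # -> 7
```

The same helper applies to the AEO and GEO rubrics below by swapping in their weights and per-factor maximums.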
How to evaluate
- Traffic potential: use Ahrefs/Semrush traffic potential metric (not raw volume)
- KD inversion: lower difficulty = higher score (inverted because easy = more opportunity)
- Intent alignment: does this keyword's intent match a page type you offer or plan to create?
- Content gap: are the current top results thin, outdated, or mismatched to intent?
Surface 2: AEO Opportunity Score (0-10)
Score each keyword on its potential for winning answer engine features (featured snippets, PAA, voice results).
Rubric
| Factor | Weight | Scoring |
|---|---|---|
| Snippet presence | 2 | 0 = no snippet possible (navigational/transactional), 1 = snippet exists with strong holder, 2 = snippet exists with weak/no holder OR no snippet yet on eligible query |
| Format match | 2 | 0 = your content can't match the snippet format, 1 = partial match, 2 = you can produce the exact format (paragraph, list, table) |
| PAA count | 2 | 0 = no PAA, 1 = 1-3 PAA questions, 2 = 4+ PAA questions (more PAA = more entry points) |
| Voice search likelihood | 2 | 0 = unlikely voice query, 1 = possible, 2 = natural spoken question pattern |
| Current holder strength | 2 | 0 = held by a top-3 authority domain (Wikipedia, gov), 1 = held by a mid-authority site, 2 = no holder or held by low-authority site |
Score = sum of (factor score * weight) / max possible * 10, rounded to nearest integer
Max raw = 20. Divide by 2 to get 0-10.
How to evaluate
- Snippet presence: search the keyword and check if a featured snippet appears; note format
- Format match: can you create a concise paragraph answer (40-60 words), a numbered list, or a table that matches?
- PAA count: count People Also Ask boxes on the SERP
- Voice likelihood: question-format queries ("how do I...", "what is...") are voice-likely
- Holder strength: who currently holds the snippet? A DR 90 site is hard to displace; a DR 30 blog is vulnerable
Key insight
An AEO score of 0 is correct for navigational and most transactional keywords - snippets don't fire for "buy X" or "brand login". Do not force an AEO score onto keywords where answer features are irrelevant.
Surface 3: GEO Opportunity Score (0-10)
Score each keyword on its potential for earning citations in AI-generated search results.
Rubric
| Factor | Weight | Scoring |
|---|---|---|
| AI Overview trigger | 3 | 0 = no AI Overview fires for this query, 1 = AI Overview fires sometimes, 2 = AI Overview fires consistently, 3 = fires on Google + ChatGPT Search/Perplexity |
| Citation density | 2 | 0 = AI answer cites 0-1 sources, 1 = cites 2-3 sources, 2 = cites 4+ sources (more citations = more opportunity for you to be one) |
| Query type match | 2 | 0 = query type AI engines skip (navigational, very simple), 1 = AI engines summarize but rarely cite deeply, 2 = AI engines provide detailed cited answers (comparisons, how-tos, multi-step) |
| Entity/topical relevance | 1 | 0 = your site has no topical authority here, 1 = some, 2 = strong entity match |
| Content uniqueness | 2 | 0 = your content would be generic/duplicative, 1 = somewhat differentiated, 2 = unique data, original research, or proprietary insight |
Score = sum of (factor score * weight) / max possible * 10, rounded to nearest integer
Max raw = 3*3 + 2*2 + 2*2 + 2*1 + 2*2 = 23. Multiply by 10/23 and round to get 0-10.
How to evaluate
- AI Overview trigger: search the keyword on Google (logged out) and check if AI Overview appears; also test on ChatGPT Search or Perplexity
- Citation density: count how many source links the AI answer includes
- Query type match: comparison queries, multi-factor evaluations, and "how to" queries tend to trigger detailed AI answers with citations; simple factual queries get one-line answers with fewer citation opportunities
- Entity relevance: does your site have established authority on this topic? AI engines prefer citing known entities
- Content uniqueness: AI engines cite sources that add information beyond what's already in their training data - original research, proprietary data, unique frameworks, and first-party case studies get cited more
Critical rule: GEO = 0 when no AI Overview fires
If you search a keyword and no AI Overview, ChatGPT Search answer, or Perplexity answer appears, the GEO score is 0. Do not speculate about future AI coverage - score based on current observable behavior. Re-evaluate quarterly as AI search coverage expands.
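The zero-gate can be made explicit in code. A sketch (function name and argument order are illustrative; the weights and per-factor maximums follow the GEO rubric above):

```python
def geo_score(factors, weights=(3, 2, 2, 1, 2), max_scores=(3, 2, 2, 2, 2)):
    """GEO score; factors = (trigger, citations, query, entity, uniqueness)."""
    if factors[0] == 0:
        # Zero-gate: no AI answer observed today -> score 0, no speculation
        return 0
    raw = sum(f * w for f, w in zip(factors, weights))
    max_raw = sum(m * w for m, w in zip(max_scores, weights))
    return round(raw * 10 / max_raw)
```

Because the gate keys off the observed trigger factor, re-running the scorer after a quarterly re-check automatically picks up keywords where AI coverage has appeared.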
Composite Score and Priority Surface
Calculating the composite score
The composite score is the simple sum of the three surface scores:
Composite Score = Organic Score + AEO Score + GEO Score

Range: 0-30. Higher = more total opportunity across all surfaces.
Applying business-goal weighting
Not all surfaces matter equally to every business. Apply a multiplier based on your primary goal:
| Business goal | Organic weight | AEO weight | GEO weight |
|---|---|---|---|
| Maximum traffic volume | 1.5 | 1.0 | 0.8 |
| Brand authority / thought leadership | 1.0 | 0.8 | 1.5 |
| Lead generation / conversions | 1.2 | 1.2 | 0.8 |
| Awareness in AI-first audiences | 0.8 | 1.0 | 1.5 |
| Balanced / default | 1.0 | 1.0 | 1.0 |
Weighted composite = (Organic * Ow) + (AEO * Aw) + (GEO * Gw)
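As a sketch, the weighting table translates directly into a lookup plus one multiply-accumulate (the goal keys are shorthand I have chosen; the multipliers are the ones from the table):

```python
# Business-goal multipliers (organic, AEO, GEO) from the table above
GOAL_WEIGHTS = {
    "traffic":   (1.5, 1.0, 0.8),  # maximum traffic volume
    "authority": (1.0, 0.8, 1.5),  # brand authority / thought leadership
    "leads":     (1.2, 1.2, 0.8),  # lead generation / conversions
    "ai_first":  (0.8, 1.0, 1.5),  # awareness in AI-first audiences
    "balanced":  (1.0, 1.0, 1.0),  # default
}

def weighted_composite(organic, aeo, geo, goal="balanced"):
    """Weighted composite = Organic*Ow + AEO*Aw + GEO*Gw."""
    ow, aw, gw = GOAL_WEIGHTS[goal]
    return organic * ow + aeo * aw + geo * gw

# e.g. 7*1.5 + 9*1.0 + 8*0.8 = 25.9 for a traffic-first goal
score = weighted_composite(7, 9, 8, goal="traffic")
```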
Determining priority surface
After scoring, assign a priority surface to each keyword:
- If one surface score is 3+ points higher than both others: that surface is priority
- If two surfaces are within 2 points and both above 5: dual-surface opportunity
- If all three are within 2 points and above 4: full tri-surface opportunity
- If all scores are below 3: low opportunity keyword - deprioritize
The priority surface informs which optimization skill to invoke next:
- Organic priority -> standard SEO content optimization
- AEO priority -> invoke the aeo-optimization skill for snippet/PAA formatting
- GEO priority -> invoke the geo-optimization skill for citation optimization
- Dual/tri-surface -> optimize content for the primary surface first, then layer the secondary
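The four assignment rules can be encoded as an ordered classifier. This is one interpretation of the rules above: the tri-surface check runs before the dual check because it is the stricter condition, and cases the rules leave undefined fall back to the highest-scoring surface:

```python
def priority_surface(organic, aeo, geo):
    """Assign a priority surface from three 0-10 scores (one interpretation)."""
    scores = {"organic": organic, "aeo": aeo, "geo": geo}
    ranked = sorted(scores, key=scores.get, reverse=True)
    top, second, third = (scores[s] for s in ranked)
    if top - second >= 3:
        return ranked[0]            # one surface clearly leads
    if top - third <= 2 and third > 4:
        return "tri-surface"        # all three close and above 4
    if top - second <= 2 and second > 5:
        return f"dual ({ranked[0]}+{ranked[1]})"
    if top < 3:
        return "low-opportunity"    # deprioritize
    return ranked[0]                # fallback: highest scorer
```

Note that under these rules a keyword scoring 7/9/8 is classified tri-surface even though one surface is nominally highest; adjust the rule ordering if you prefer highest-score-wins.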
Worked examples
Example 1: Informational keyword - "how to improve email deliverability"
| Surface | Factor scores | Total |
|---|---|---|
| Organic | Traffic: 2, KD inv: 2, Intent: 1, Gap: 2 = 18/26 | 7 |
| AEO | Snippet: 2, Format: 2, PAA: 2, Voice: 2, Holder: 1 = 18/20 | 9 |
| GEO | Trigger: 2, Citations: 2, Query: 2, Entity: 1, Unique: 1 = 17/23 | 7 |
Composite: 23/30. Priority surface: AEO (highest score, clear snippet opportunity). Action: create a step-by-step guide formatted to win the featured snippet, and include PAA subheadings. The solid GEO score means also structuring for AI citation (clear claims, data points, authoritative tone).
Example 2: Commercial keyword - "best project management software for remote teams"
| Surface | Factor scores | Total |
|---|---|---|
| Organic | Traffic: 3, KD inv: 1, Intent: 2, Gap: 1 = 18/26 | 7 |
| AEO | Snippet: 1, Format: 1, PAA: 2, Voice: 0, Holder: 1 = 10/20 | 5 |
| GEO | Trigger: 3, Citations: 2, Query: 2, Entity: 1, Unique: 1 = 20/23 | 9 |
Composite: 21/30. Priority surface: GEO (AI engines love comparison queries and cite multiple sources). Action: create a comparison article with unique evaluation criteria and original analysis. Optimize for AI citation by including clear verdict statements, structured data, and original scoring methodology. Also target organic since score is decent.
Example 3: Transactional keyword - "buy standing desk adjustable"
| Surface | Factor scores | Total |
|---|---|---|
| Organic | Traffic: 2, KD inv: 1, Intent: 2, Gap: 1 = 15/26 | 6 |
| AEO | Snippet: 0, Format: 0, PAA: 0, Voice: 0, Holder: 0 = 0/20 | 0 |
| GEO | Trigger: 0, Citations: 0, Query: 0, Entity: 0, Unique: 0 = 0/23 | 0 |
Composite: 6/30. Priority surface: Organic (only viable surface). Action: pure product/category page optimization. AEO=0 because transactional queries don't trigger snippets. GEO=0 because AI Overviews don't fire for direct purchase queries. This is a traditional SEO play.
Limitations and caveats
AI Overview volatility - Google's AI Overviews are still evolving rapidly. A keyword that triggers an AI Overview today may not trigger one next month, and vice versa. GEO scores should be re-evaluated quarterly. Do not over-invest in GEO optimization for keywords with inconsistent AI Overview behavior.
Tool immaturity - As of 2026, no single tool provides reliable automated tri-surface scoring. The rubrics above require manual evaluation for the AEO and GEO components. Tools like Ahrefs and Semrush are beginning to add SERP feature and AI Overview tracking, but the data is not yet comprehensive enough to fully automate scoring.
Scores are relative, not absolute - A keyword scoring 8/10 on organic is not "objectively" easy to rank for. It means it scores high relative to the rubric's criteria. Always cross-reference with domain-specific factors (your site's authority, content production capacity, competitive landscape).
Research, not optimization - This scoring framework tells you WHERE the opportunity is. It does not tell you HOW to capture it. For execution:
- Organic optimization: standard content SEO best practices
- AEO capture: use the aeo-optimization skill
- GEO citation: use the geo-optimization skill
Sample size - When checking AI Overview triggers, test at least 3 times across different days and browsers. AI Overview appearance can vary by session, location, and Google's ongoing experiments.
Spreadsheet template columns
When building a tri-surface scoring spreadsheet, use these columns:
keyword | intent | volume | traffic_potential | kd | organic_score | aeo_score | geo_score | composite | weighted_composite | priority_surface | cluster | action

Conditional formatting suggestions:
- Green highlight: any surface score >= 7
- Yellow highlight: surface score 4-6
- Red highlight: surface score 0-3
- Bold: priority_surface column
- Sort by: weighted_composite descending
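The thresholds and sort order can be sketched in a few lines (the `highlight` helper and the keyword names are placeholders, not part of any spreadsheet tool's API):

```python
def highlight(score):
    """Suggested conditional-formatting color for a 0-10 surface score."""
    if score >= 7:
        return "green"   # strong opportunity
    if score >= 4:
        return "yellow"  # moderate opportunity
    return "red"         # weak opportunity (0-3)

# Sort rows by weighted_composite descending, as suggested above
rows = [
    {"keyword": "kw-a", "weighted_composite": 21.0},
    {"keyword": "kw-b", "weighted_composite": 25.9},
]
rows.sort(key=lambda r: r["weighted_composite"], reverse=True)
```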
Frequently Asked Questions
What is keyword-research?
Use this skill when performing keyword research, search intent analysis, keyword clustering, SERP analysis, competitor keyword gaps, long-tail keyword discovery, or evaluating keywords for snippet opportunity, AI Overview presence, and tri-surface keyword reports. Covers organic (SEO), answer engine (AEO snippets/PAA), and AI citation (GEO AI Overviews/ChatGPT Search/Perplexity) surfaces.
How do I install keyword-research?
Run npx skills add AbsolutelySkilled/AbsolutelySkilled --skill keyword-research in your terminal. The skill will be immediately available in your AI coding agent.
What AI agents support keyword-research?
keyword-research works with claude-code, gemini-cli, openai-codex, mcp. Install it once and use it across any supported AI coding agent.