lead-scoring
Use this skill when defining ideal customer profiles, building scoring models, identifying intent signals, or qualifying leads. Triggers on lead scoring, ICP definition, scoring models, intent signals, MQL, SQL, lead qualification, BANT, and any task requiring lead prioritization or qualification framework design.
lead-scoring
lead-scoring is a production-ready AI agent skill for claude-code, gemini-cli, and openai-codex. It covers defining ideal customer profiles, building scoring models, identifying intent signals, and qualifying leads.
Quick Facts
| Field | Value |
|---|---|
| Category | sales |
| Version | 0.1.0 |
| Platforms | claude-code, gemini-cli, openai-codex |
| License | MIT |
How to Install
- Make sure you have Node.js installed on your machine.
- Run the following command in your terminal:
npx skills add AbsolutelySkilled/AbsolutelySkilled --skill lead-scoring
- The lead-scoring skill is now available in your AI coding agent (Claude Code, Gemini CLI, OpenAI Codex, etc.).
Overview
Lead scoring is the discipline of quantifying how likely a prospect is to become a paying customer so sales teams spend time on the right people. A good scoring model combines profile fit (does the company match your ICP?) with behavioral intent (are they actively signaling purchase readiness?). This skill equips an agent to define ICPs, build point-based or predictive scoring models, weight intent signals, set MQL/SQL thresholds, implement score decay, and create a shared sales-marketing framework that drives consistent, measurable pipeline qualification.
Tags
lead-scoring icp qualification intent-signals mql sql
Platforms
- claude-code
- gemini-cli
- openai-codex
Frequently Asked Questions
What is lead-scoring?
lead-scoring is an AI agent skill for defining ideal customer profiles, building scoring models, identifying intent signals, and qualifying leads. It triggers on lead scoring, ICP definition, scoring models, intent signals, MQL, SQL, lead qualification, BANT, and any task requiring lead prioritization or qualification framework design.
How do I install lead-scoring?
Run npx skills add AbsolutelySkilled/AbsolutelySkilled --skill lead-scoring in your terminal. The skill will be immediately available in your AI coding agent.
What AI agents support lead-scoring?
This skill works with claude-code, gemini-cli, openai-codex. Install it once and use it across any supported AI coding agent.
Maintainers
Generated from AbsolutelySkilled
SKILL.md
Lead Scoring
Lead scoring is the discipline of quantifying how likely a prospect is to become a paying customer so sales teams spend time on the right people. A good scoring model combines profile fit (does the company match your ICP?) with behavioral intent (are they actively signaling purchase readiness?). This skill equips an agent to define ICPs, build point-based or predictive scoring models, weight intent signals, set MQL/SQL thresholds, implement score decay, and create a shared sales-marketing framework that drives consistent, measurable pipeline qualification.
When to use this skill
Trigger this skill when the user:
- Needs to define or refine an Ideal Customer Profile (ICP)
- Wants to build or overhaul a lead scoring model with point values
- Asks how to identify, classify, or weight intent signals (first-party or third-party)
- Needs to set MQL, SQL, or PQL thresholds and handoff criteria
- Wants to implement score decay for aging or disengaged leads
- Asks about BANT, MEDDIC, CHAMP, or any qualification framework
- Needs to validate whether a scoring model is actually predicting conversions
- Wants to align sales and marketing on lead definitions and SLA terms
Do NOT trigger this skill for:
- CRM technical implementation or integration wiring - use a CRM/engineering skill
- General demand generation strategy unrelated to lead qualification
Key principles
Fit + intent = score - Profile fit answers "should we ever sell to this company?" Intent answers "should we reach out right now?" Neither alone is sufficient. A perfect-fit company with zero intent is a nurture candidate. High intent from a poor-fit company wastes sales cycles. Weight both dimensions and require minimum thresholds on each, not just a combined total.
Decay scores over time - A lead who downloaded a whitepaper six months ago and has not engaged since is not still a hot prospect. Apply time-based decay to behavioral scores so inactivity reduces urgency. Fit scores (firmographic, technographic) typically do not decay; behavioral scores should decay 10-25% per month of inactivity.
Align sales and marketing on definitions - "Marketing Qualified Lead" means nothing if sales uses a different threshold to decide whether to work it. Define MQL, SQL, and PQL in a shared document, tie them to specific score thresholds, and measure SLA compliance. Misalignment here is the single largest source of pipeline leakage.
Start simple, iterate often - Begin with a manual point model covering 8-12 attributes. Get sales and marketing to validate it on historical closed-won data before layering in predictive ML. Complexity that has not been validated destroys trust faster than simplicity.
Validate with closed-won data - Build the model, then score your last 100 closed-won deals and 100 lost/no-decision deals. If the model does not clearly separate the two populations, the attribute weights are wrong. Recalibrate before deploying to live pipeline.
Core concepts
Demographic vs. behavioral scoring are the two axes of every lead score. Demographic (also called profile or fit) scoring assigns points based on static attributes: company size, industry, job title, tech stack, geography, funding stage. These attributes describe who the prospect is. Behavioral scoring assigns points based on actions: page visits, content downloads, email opens, webinar attendance, free trial sign-ups. These attributes describe what the prospect is doing right now. Most models maintain separate fit and behavioral sub-scores and require a minimum threshold on each before routing to sales.
MQL / SQL / PQL definitions are the thresholds that gate handoffs between teams. A Marketing Qualified Lead (MQL) has crossed a score threshold indicating marketing believes it warrants sales attention. A Sales Qualified Lead (SQL) is an MQL that sales has accepted as worthy of active pursuit, typically after a discovery call confirms fit, budget, and timeline. A Product Qualified Lead (PQL) is specific to PLG motions - it is a user (not just a lead) who has reached a product activation milestone that predicts conversion, such as inviting a second user, creating three projects, or integrating with a key tool.
Intent signals taxonomy classifies signals by source and strength. First-party signals come from your own properties (website visits, docs engagement, trial usage, email clicks) and are the highest-confidence because you own the data. Second-party signals come from partner ecosystems (co-marketing events, integration marketplace installs, referral partner activity). Third-party intent signals come from vendors like Bombora, G2, TechTarget, or 6sense - they aggregate content consumption across publisher networks to surface companies researching your category. Rank signals from strongest (pricing page visit, free trial start) to weakest (single blog visit, newsletter open).
Score decay is the mechanism that reduces a lead's behavioral score over time without fresh engagement. Without decay, a lead's score only ever increases, making old engagement permanently inflate priority. Implement decay as a scheduled job (daily or weekly) that multiplies behavioral sub-scores by a decay factor (e.g., 0.9 per week of inactivity). Reset the decay clock when a new qualifying action occurs. Fit scores are not decayed because firmographic attributes do not change frequently.
Common tasks
Define an Ideal Customer Profile (ICP)
An ICP is a description of the company type (not individual) most likely to buy, retain, and expand. Build it from closed-won analysis, not intuition.
Firmographic criteria:
Industry verticals: e.g., FinTech, HealthTech, B2B SaaS
Company size (employees): e.g., 50-500
ARR / Revenue range: e.g., $5M-$50M ARR
Geography: e.g., North America, EMEA
Funding stage: e.g., Series A - Series C
Technographic criteria:
Tech stack signals: e.g., uses Salesforce + Slack (integrates well)
Competitor usage: e.g., currently on legacy tool X (displacement motion)
Infrastructure: e.g., AWS/GCP (cloud-native, not on-prem only)
Negative ICP (disqualifiers): Explicitly list company types to reject: e.g., solo-founder, pre-revenue, regulated industries you cannot serve, or geographies you do not support. These should auto-fail leads regardless of behavioral score.
Pull your last 50 closed-won deals and cluster them by firmographic attributes. The cluster with the shortest sales cycle and highest NRR is your ICP. Do not define ICP by who you want to sell to - define it by who actually bought and stayed.
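This clustering step needn't be fancy. A plain-Python sketch might look like the following; the deal records and field names (`industry`, `employees`, `sales_cycle_days`, `nrr`) are hypothetical stand-ins for whatever your CRM exports:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical closed-won records; field names are illustrative.
deals = [
    {"industry": "FinTech", "employees": 120, "sales_cycle_days": 35, "nrr": 1.15},
    {"industry": "FinTech", "employees": 300, "sales_cycle_days": 42, "nrr": 1.10},
    {"industry": "Retail",  "employees": 80,  "sales_cycle_days": 90, "nrr": 0.95},
]

def size_tier(employees):
    # Bucket company size into coarse tiers so small differences don't split clusters.
    if employees < 50:
        return "<50"
    if employees <= 500:
        return "50-500"
    return ">500"

clusters = defaultdict(list)
for d in deals:
    clusters[(d["industry"], size_tier(d["employees"]))].append(d)

# Rank clusters: shortest average sales cycle first, highest average NRR as tiebreaker.
ranked = sorted(
    clusters.items(),
    key=lambda kv: (mean(d["sales_cycle_days"] for d in kv[1]),
                    -mean(d["nrr"] for d in kv[1])),
)
best_cluster = ranked[0][0]  # candidate ICP segment
```

With the sample data above, the FinTech 50-500 segment wins on both cycle length and NRR.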
Build a scoring model - point system template
A point-based model assigns values to attributes. Sum the points to produce a score from 0 to 100. Divide into a fit sub-score (0-50) and a behavioral sub-score (0-50).
Fit scoring template:
Attribute | Match | Points
-----------------------------|-------------------|-------
Industry match | Exact ICP | +15
| Adjacent | +8
| Outside ICP | 0
Company size | ICP range | +12
| One tier off | +6
Job title / seniority | Economic buyer | +10
| Champion / user | +7
| Unrelated | 0
Technographic signal | Key tech match | +8
Funding stage | ICP stage | +5
Geography | Target region | +0 (neutral)
| Excluded region | -20 (hard block)
Behavioral scoring template:
Action | Points | Decay
-----------------------------|--------|------
Pricing page visit | +20 | -3/week
Free trial sign-up | +25 | none (reset point)
Demo request | +30 | none (route immediately)
Webinar attendance | +10 | -2/week
Content download (gated) | +8 | -2/week
Email click (3 in 7 days) | +5 | -1/week
Blog visit (single) | +2 | -1/week
Unsubscribe | -15 | permanent
Competitor domain email | -10 | permanent
MQL threshold: Fit >= 25 AND Behavioral >= 20 (total >= 45)
SQL threshold: MQL accepted by sales after discovery (BANT/MEDDIC confirmed)
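The dual-threshold gate can be sketched as a small function. Note that it requires a minimum on each sub-score independently, not just the combined total; the example values in the comments come from the templates above:

```python
def is_mql(fit_score, behavioral_score,
           fit_min=25, behavioral_min=20, total_min=45):
    """Gate on BOTH sub-scores, not just the combined total."""
    return (fit_score >= fit_min
            and behavioral_score >= behavioral_min
            and fit_score + behavioral_score >= total_min)

# Perfect behavior at a wrong-fit company must NOT qualify.
assert not is_mql(fit_score=10, behavioral_score=50)

# Template example: exact industry (+15) + ICP size (+12) = 27 fit,
# pricing page visit (+20) = 20 behavioral -> MQL.
assert is_mql(fit_score=27, behavioral_score=20)
```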
Identify and weight intent signals
Group signals into tiers before assigning point values:
Tier 1 - Purchase intent (highest weight, route to sales immediately if fit >= 25):
- Demo or pricing request (first-party)
- Free trial activation (first-party)
- ROI calculator completion (first-party)
- Third-party intent surge (Bombora/G2) for your exact category
Tier 2 - Solution awareness (medium weight, enroll in fast-track nurture):
- Multiple product page visits in 7 days
- Case study or comparison guide download
- Webinar registration and attendance
- Integration marketplace browse or install
Tier 3 - Early research (low weight, standard nurture):
- Single blog post visit
- Newsletter subscription
- Podcast listen or video view
- Social media follow or engagement
Third-party intent signals should boost a score but never alone qualify a lead. They confirm category interest, not vendor selection. Combine with first-party engagement before routing to sales.
Set MQL and SQL thresholds
Thresholds must be agreed by both sales and marketing before launch.
Threshold-setting process:
- Score your last 100 closed-won deals with the proposed model
- Score your last 100 lost/no-decision deals
- Find the score that best separates the two populations (ROC curve / F1 score)
- Set the MQL threshold at the point with acceptable false-positive rate for sales
- Document the threshold in a shared definition document
- Review and recalibrate quarterly
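One way to run the separation step above is a simple threshold sweep that maximizes F1, sketched here in plain Python with synthetic scores (a real run would pull the 200 scored deals from your CRM):

```python
def best_threshold(won_scores, lost_scores):
    """Sweep candidate thresholds; return the one maximizing F1,
    treating won deals at/above the threshold as true positives."""
    candidates = sorted(set(won_scores) | set(lost_scores))
    best_f1, best_t = 0.0, None
    for t in candidates:
        tp = sum(s >= t for s in won_scores)
        fp = sum(s >= t for s in lost_scores)
        fn = len(won_scores) - tp
        if tp == 0:
            continue
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        f1 = 2 * precision * recall / (precision + recall)
        if f1 > best_f1:
            best_f1, best_t = f1, t
    return best_t

# Illustrative back-test scores (synthetic, not real data).
won = [72, 68, 55, 80, 61]
lost = [30, 42, 25, 38, 47]
threshold = best_threshold(won, lost)  # -> 55 with this sample
```

The same idea generalizes to a proper ROC analysis once you have enough historical deals to make the curve meaningful.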
Recommended SLA after MQL:
MQL created → Sales accepts or rejects within 24 business hours
SQL created → First meaningful outreach within 4 business hours
Demo completed → Follow-up sent within 2 business hours
If sales rejects more than 25% of MQLs, the threshold is too low or the fit criteria are wrong. Track MQL rejection reasons - they are the most actionable feedback for recalibrating the model.
Implement score decay
Score decay prevents stale behavioral scores from inflating lead priority.
Decay implementation:
# Weekly decay job (Python sketch; `leads_inactive_over` and `save`
# are placeholders for your CRM's query and update calls)
for lead in leads_inactive_over(days=7):
    lead.behavioral_score *= 0.85  # 15% weekly decay
    if lead.behavioral_score < 5:
        lead.behavioral_score = 0  # floor to avoid ghost scores
    lead.total_score = lead.fit_score + lead.behavioral_score
    save(lead)
Decay rate guidance:
Signal type | Decay rate | Rationale
---------------------|-----------------|----------------------------------
Single content click | -20%/week | Low-intent, fades fast
Webinar attendance | -10%/week | Higher effort, slower decay
Trial inactivity | -15%/week | Active usage is what matters
Demo no-show | -30% immediate | Strong disqualification signal
No engagement 90d | Reset to fit | Behavioral slate wiped clean
Validate scoring model against outcomes
Before going live, back-test the model against historical data.
Validation checklist:
- Score last 6 months of closed-won deals - average score should be >60
- Score same period of closed-lost deals - average score should be <40
- Calculate separation ratio: (avg won score - avg lost score) / std dev
- Run precision and recall: what % of deals above MQL threshold actually closed?
- Identify attributes that are over- or under-weighted by inspecting outliers
- Validate with sales: show them the top 20 scored leads, ask if they agree
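The separation-ratio and precision checks from the list above can be sketched as follows (scores are synthetic, for illustration only):

```python
from statistics import mean, pstdev

def separation_ratio(won_scores, lost_scores):
    """(avg won score - avg lost score) / pooled standard deviation."""
    pooled = pstdev(won_scores + lost_scores)
    return (mean(won_scores) - mean(lost_scores)) / pooled

def precision_at(threshold, won_scores, lost_scores):
    """Of all deals scored at/above the threshold, what fraction closed?"""
    tp = sum(s >= threshold for s in won_scores)
    fp = sum(s >= threshold for s in lost_scores)
    return tp / (tp + fp) if tp + fp else 0.0

# Synthetic back-test scores.
won = [70, 65, 75, 62, 68]
lost = [30, 35, 28, 44, 33]
ratio = separation_ratio(won, lost)      # well above 1 -> clear separation
precision = precision_at(45, won, lost)  # every lead above 45 actually won
```

A separation ratio comfortably above 1 means the two populations barely overlap; a ratio near 0 means the weights are not discriminating at all.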
Healthy model signals:
Metric | Target
-------------------------------|------------------
Avg closed-won score | > 65
Avg closed-lost score | < 35
MQL-to-SQL conversion | > 60%
SQL-to-opportunity conversion | > 40%
MQL rejection rate by sales | < 20%
Align sales and marketing on lead handoff SLA
Misalignment on definitions and handoff procedures is the most common reason lead scoring fails to improve pipeline. Build a shared definition document covering:
Shared definition document structure:
1. ICP definition (firmographic + technographic criteria)
2. Negative ICP / auto-disqualify criteria
3. Lead lifecycle stages: Raw → Engaged → MQL → SAL → SQL → Opportunity
4. Score thresholds for each stage transition
5. Handoff SLA: who does what and within how long
6. Rejection protocol: how sales rejects an MQL and what reason codes to use
7. Recycling protocol: how rejected/lost leads re-enter nurture
8. Review cadence: monthly score review, quarterly model recalibration
Anti-patterns
| Anti-pattern | Why it's wrong | What to do instead |
|---|---|---|
| Scoring only behavioral signals | A highly engaged person at a wrong-fit company wastes sales time | Require minimum fit sub-score before behavioral score can trigger MQL |
| Never decaying scores | Old engagement permanently inflates score; leads from 6 months ago stay "hot" | Apply weekly behavioral decay; reset to fit-only score after 90 days of inactivity |
| Setting thresholds without data | Arbitrary thresholds (e.g., "score > 50") produce MQL lists sales ignores | Back-test on closed-won vs. closed-lost before launching; set threshold at the empirical separation point |
| Treating all page visits equally | A pricing page visit is 10x stronger than a blog visit | Tier signals by purchase intent; assign points proportionally |
| Defining MQL without sales buy-in | Marketing routes leads sales won't work; both teams disengage | Co-define MQL criteria with sales leadership; make sales sign off on thresholds |
| Ignoring negative signals | Leads who unsubscribe or use competitor emails stay "qualified" | Apply score penalties or hard blocks for disqualifying actions |
| Building a complex ML model first | Black-box models are hard to debug and lose sales trust | Start with a transparent point model; add ML only after validating the manual model |
Gotchas
Combined score threshold hides bad fits - A lead with a perfect behavioral score (50/50) but a zero fit score (wrong industry, wrong company size) can still hit the MQL threshold if you only gate on the total. Always require a minimum sub-score on both fit AND behavioral independently, not just the combined total.
Score decay runs on stale data when jobs fail silently - Decay jobs that error without alerting leave behavioral scores frozen at their peak indefinitely. The symptom is sales complaining that the pipeline is full of "ghost" hot leads who never respond. Add failure alerting and a last-ran timestamp check to your decay job.
Technographic signals are often stale - Data from providers like Clearbit or ZoomInfo can be 6-18 months out of date. A company that "uses Salesforce" in the data may have migrated off it. Treat technographic data as a weak fit signal, not a strong one, and never make it the sole reason for a high fit score.
PQL thresholds need per-product calibration - A "product qualified lead" in a tool with a 5-minute time-to-value (e.g., Loom) needs a completely different activation milestone than one with a 30-day ramp (e.g., a data platform). Copying PQL definitions from other companies' case studies without calibrating against your own activation data produces high MQL-rejection rates.
Sales rejection reasons are unstructured by default - Most CRMs let reps type free text when rejecting an MQL. Without a fixed reason code dropdown ("wrong company size", "no budget", "competitor", "timing"), you cannot aggregate feedback to recalibrate the model. Enforce a picklist from day one.
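A minimal sketch of such a picklist, with illustrative reason codes (adapt the codes to your own CRM; the point is that free text raises an error instead of being stored):

```python
from enum import Enum

class MqlRejectReason(Enum):
    """Fixed reason codes so rejections can be aggregated later."""
    WRONG_COMPANY_SIZE = "wrong_company_size"
    NO_BUDGET = "no_budget"
    COMPETITOR = "competitor"
    TIMING = "timing"
    BAD_CONTACT_DATA = "bad_contact_data"

def record_rejection(reason: str) -> MqlRejectReason:
    # Enum lookup raises ValueError on anything outside the picklist,
    # forcing reps onto structured codes.
    return MqlRejectReason(reason)
```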
References
For detailed content on specific sub-domains, read the relevant file from references/:
references/scoring-models.md- Example scoring models for SaaS B2B, PLG, and enterprise motions with full attribute tables and threshold recommendations. Load when building or comparing scoring model templates for a specific GTM motion.
Only load a references file if the current task requires deep detail on that topic.
scoring-models.md
Scoring Models - Example Reference
Three complete scoring model templates for the most common B2B GTM motions. Each model uses a 0-100 scale split between a fit sub-score (0-50) and a behavioral sub-score (0-50). Thresholds and attribute weights should be validated against your own closed-won data before deploying to live pipeline.
Model 1: SaaS B2B (Sales-Led, SMB/Mid-Market)
Target motion: Inside sales team, ACV $10K-$100K, 14-60 day sales cycle. The model prioritizes company fit and buying-intent signals. A single demo request from a fit company should immediately route to SDR.
Fit Sub-Score (0-50)
| Attribute | Criteria | Points |
|---|---|---|
| Industry vertical | Exact ICP industry | +15 |
| Adjacent industry | +8 | |
| Outside ICP | 0 | |
| Company size (employees) | 50-500 (core ICP) | +12 |
| 501-2000 (upmarket stretch) | +7 | |
| 10-49 (downmarket) | +4 | |
| <10 or >2000 | 0 | |
| Job title / seniority | VP or C-level economic buyer | +10 |
| Director or Manager (champion) | +7 | |
| Individual contributor | +3 | |
| Unknown or irrelevant | 0 | |
| Technographic signal | Uses key integration partner tech | +8 |
| Uses complementary tool category | +4 | |
| Funding stage | Series A-C (actively investing) | +5 |
| Bootstrapped with revenue signals | +3 | |
| Pre-seed / pre-revenue | 0 | |
| Geography | Primary target region | 0 (no bonus needed) |
| Excluded region | -20 (hard disqualify) |
Fit score disqualifiers (set score to 0, remove from scoring):
- Competitor employee domain
- Industry on exclusion list (e.g., regulated vertical you cannot serve)
- Company size below minimum viable deal threshold
Behavioral Sub-Score (0-50)
| Action | Points | Decay Rate |
|---|---|---|
| Demo request (form submit) | +30 | No decay - route immediately |
| Pricing page visit | +20 | -3 pts/week inactive |
| Free trial sign-up | +25 | -2 pts/week inactive |
| ROI calculator completion | +18 | -2 pts/week inactive |
| Case study download (gated) | +10 | -2 pts/week inactive |
| Webinar registered + attended | +10 | -2 pts/week inactive |
| Webinar registered, no-show | +3 | -1 pt/week inactive |
| Product comparison page visit | +12 | -3 pts/week inactive |
| Email click (3+ in 7 days) | +8 | -1 pt/week inactive |
| Email click (1 in 7 days) | +3 | -1 pt/week inactive |
| Blog visit (multiple in session) | +4 | -1 pt/week inactive |
| Blog visit (single) | +2 | -1 pt/week inactive |
| Third-party intent surge (Bombora) | +8 | -2 pts/week if no first-party |
| Unsubscribe | -15 | Permanent |
| Competitor email domain | -10 | Permanent |
| Demo no-show (no reschedule) | -10 | Permanent until re-engagement |
Thresholds
MQL: Fit >= 25 AND Behavioral >= 20 (total >= 45)
SAL: Sales accepts MQL within 24 hours
SQL: Discovery call completed, BANT criteria confirmed
Score bands:
80-100 Hot - route to AE immediately
60-79 Warm - SDR outreach within 4 hours
45-59 MQL - SDR outreach within 24 hours
25-44 Nurture - marketing automation sequences
0-24 Cold - top-of-funnel content only
Model 2: PLG (Product-Led Growth)
Target motion: Free tier or trial converts to paid. Sales-assist layer for accounts showing expansion signals. ACV $2K-$30K, touch-less or low-touch.
In a PLG model, the Product Qualified Lead (PQL) replaces the traditional MQL. A PQL is a user or account that has reached a product activation milestone predicting conversion - not just a content consumer.
Fit Sub-Score (0-50)
Same firmographic structure as Model 1, but company size skews smaller:
| Attribute | Criteria | Points |
|---|---|---|
| Company size | 10-200 (PLG sweet spot) | +15 |
| 201-1000 (expansion candidate) | +10 | |
| 1-9 (solo/micro) | +5 | |
| >1000 (enterprise, different motion) | +3 | |
| Job title | Developer / technical user (builds with product) | +12 |
| Manager / operational buyer | +8 | |
| C-level (unusual for PLG self-serve) | +5 | |
| Industry | Exact ICP | +15 |
| Adjacent | +8 | |
| Technographic | GitHub active, AWS/GCP, modern stack | +8 |
PQL Behavioral Sub-Score (0-50)
PQL signals are product usage events, not marketing touch events:
| Product Action | Points | Signal Meaning |
|---|---|---|
| Invited second user to workspace | +25 | Collaboration intent = retention signal |
| Completed activation milestone | +20 | Reached "aha moment" - varies by product |
| Integrated with core work tool | +20 | Embedded in workflow - high switching cost |
| Used product 5+ days in first 14 | +15 | Habitual usage pattern forming |
| Created 3+ projects / workspaces | +12 | Expanding scope of use |
| Exported or shared output externally | +10 | Value realization, evangelism potential |
| Viewed upgrade/pricing page in-app | +18 | Active evaluation of paid plan |
| Reached usage limit (shown paywall) | +15 | Natural conversion trigger |
| Onboarding completed (100%) | +8 | Setup investment made |
| Logged in 0 days in first 7 days | -10 | Early churn risk |
| Deleted workspace or data | -20 | Strong disengagement signal |
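The usage-event scoring in the table, together with the activation-milestone cap described below, can be sketched as follows (event names are hypothetical shorthand for the table rows):

```python
# Point values copied from the PQL table above; keys are illustrative.
PQL_POINTS = {
    "invited_second_user": 25,
    "activation_milestone": 20,
    "core_integration": 20,
    "habitual_usage": 15,
    "viewed_pricing_in_app": 18,
    "hit_usage_limit": 15,
    "onboarding_complete": 8,
    "no_login_first_week": -10,
    "deleted_workspace": -20,
}

def pql_behavioral_score(events, milestone_hit):
    score = sum(PQL_POINTS.get(e, 0) for e in events)
    if not milestone_hit:
        score = min(score, 15)  # cap until the activation milestone is reached
    return max(0, min(50, score))  # clamp to the 0-50 sub-score range

# The same events score very differently before vs. after activation.
events = ["invited_second_user", "viewed_pricing_in_app"]
assert pql_behavioral_score(events, milestone_hit=False) == 15
assert pql_behavioral_score(events, milestone_hit=True) == 43
```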
Activation milestone (define per product):
Example for a project management tool:
Milestone = Created first project + added a collaborator + completed a task
Until milestone is hit, behavioral score is capped at 15 regardless of visits.
PQL Thresholds
PQL: Fit >= 20 AND Product behavioral >= 30 (total >= 50)
Sales-assist trigger: Account has 3+ users OR Fit >= 35 AND PQL score >= 50
Enterprise flag: Company size > 500 employees, auto-route to enterprise AE
Score-based action map:
50-100 PQL - Sales-assist outreach (in-product + email)
35-49 Active user - Trigger upgrade campaign sequence
20-34 Engaged - Onboarding nudges, feature discovery emails
0-19 At-risk - Re-engagement sequence or suppress
Model 3: Enterprise (Field Sales, ABM)
Target motion: Account-Based Marketing (ABM), enterprise deals $100K+ ACV, 6-18 month sales cycle, multiple stakeholders. Scoring operates at the account level (account score), not individual contact level.
In enterprise ABM, the unit of measurement is the buying committee. An account qualifies when enough of the buying committee is engaged, not just one contact.
Account Fit Score (0-50)
| Attribute | Criteria | Points |
|---|---|---|
| Company size (employees) | 1000-10000 (mid-enterprise) | +15 |
| 10000+ (large enterprise) | +12 | |
| 500-999 (upper mid-market) | +8 | |
| Annual revenue | $100M-$1B | +12 |
| $1B+ | +10 | |
| $50M-$99M | +6 | |
| Industry vertical | Tier 1 ICP industry | +15 |
| Tier 2 adjacent | +8 | |
| Technographic fit | Key enterprise tech stack signals | +8 |
| Strategic priority | On named target account list | +10 |
| Executive relationship | Existing exec contact/intro | +8 |
| Prior engagement | Past opportunity (lost or expired) | +5 |
Account Engagement Score (0-50)
Enterprise behavioral scoring tracks the buying committee collectively:
| Signal | Points | Notes |
|---|---|---|
| Economic buyer engaged (VP/C-level activity) | +20 | Strongest signal in enterprise |
| Champion identified (internal advocate) | +15 | Confirmed via sales discovery |
| 3+ contacts from same account active | +15 | Buying committee breadth |
| Executive attended event or briefing | +12 | High-effort, high-intent signal |
| RFP or security review initiated | +25 | Late-stage, legal motion started |
| Procurement team engaged | +20 | Budget allocation confirmed |
| Competitive evaluation confirmed | +10 | Active deal, not just research |
| Second-party intent: integration install | +12 | Technical validation begun |
| Third-party intent surge (enterprise category) | +8 | Category research confirmed |
| Champion changed roles or left | -15 | Re-qualification needed |
| Decision deferred to next fiscal year | -10 | Recycle to long-nurture |
| Legal/security block identified | -5 | Obstacle, not disqualifier |
Account Thresholds
Tier 1 (Hot): Fit >= 35 AND Engagement >= 30 - AE + SDR coordinated outreach
Tier 2 (Active): Fit >= 25 AND Engagement >= 15 - AE-led outreach, monthly touch
Tier 3 (Nurture): Fit >= 20 AND Engagement < 15 - Marketing-led ABM programs
Deprioritize: Fit < 20 - Remove from named account list
Buying committee coverage model:
Role | Required for SQL | Points if engaged
-------------------|------------------|------------------
Economic Buyer | Yes | +20
Champion | Yes | +15
Technical Evaluator| Yes | +10
Legal / Procurement| No (nice to have)| +8
End User | No | +5
An enterprise opportunity should not advance to SQL unless an economic buyer is engaged. Champion enthusiasm without economic buyer access is the most common reason enterprise deals stall at procurement.
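The economic-buyer gate above reduces to a simple set check (role names here are illustrative):

```python
# Roles the coverage model marks "Required for SQL".
REQUIRED_ROLES = {"economic_buyer", "champion", "technical_evaluator"}

def can_advance_to_sql(engaged_roles):
    """Advance only when every required buying-committee role is engaged."""
    return REQUIRED_ROLES <= set(engaged_roles)

# Champion enthusiasm alone is not enough.
assert not can_advance_to_sql({"champion", "end_user"})
assert can_advance_to_sql({"economic_buyer", "champion", "technical_evaluator"})
```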
Model Comparison Summary
| Dimension | SaaS B2B | PLG | Enterprise |
|---|---|---|---|
| Scoring unit | Contact | User/Account | Account |
| Primary fit signal | Firmographic | Firmographic + technographic | Strategic account list |
| Primary intent signal | Demo/pricing request | Activation milestone | Buying committee engagement |
| MQL/PQL trigger | Fit + behavioral threshold | Product usage threshold | Account engagement threshold |
| Score decay | Yes (behavioral weekly) | Yes (usage-based, 30 days) | Partial (engagement, 60 days) |
| Typical model refresh | Quarterly | Monthly | Semi-annually |
| Validate against | Closed-won vs. closed-lost | Trial-to-paid conversion | Won enterprise deals |
Recalibration Checklist
Run this checklist quarterly to keep models accurate:
[ ] Score last 90 days of closed-won deals - are scores above MQL threshold?
[ ] Score last 90 days of closed-lost - are scores below threshold?
[ ] Check MQL rejection rate - is it above 25%? (indicates threshold too low)
[ ] Check MQL-to-SQL conversion - is it below 50%? (recalibrate fit criteria)
[ ] Review new behavioral signals (new product features, new content assets)
[ ] Sync with sales on which leads felt right vs. wrong - collect qualitative signal
[ ] Update negative ICP list if new disqualifier patterns have emerged
[ ] Verify decay rates are not zeroing out legitimate re-engaged leads