customer-success-playbook
Use this skill when building health scores, predicting churn, identifying expansion signals, or running QBRs. Triggers on customer success, health scores, churn prediction, expansion signals, customer QBRs, onboarding playbooks, NRR optimization, and any task requiring customer success strategy or operations.
What is customer-success-playbook?
customer-success-playbook is a production-ready AI agent skill for claude-code, gemini-cli, and openai-codex. It covers building health scores, predicting churn, identifying expansion signals, and running QBRs.
Quick Facts
| Field | Value |
|---|---|
| Category | operations |
| Version | 0.1.0 |
| Platforms | claude-code, gemini-cli, openai-codex |
| License | MIT |
How to Install
- Make sure you have Node.js installed on your machine.
- Run the following command in your terminal:
npx skills add AbsolutelySkilled/AbsolutelySkilled --skill customer-success-playbook
- The customer-success-playbook skill is now available in your AI coding agent (Claude Code, Gemini CLI, OpenAI Codex, etc.).
Overview
Customer Success (CS) is the discipline of ensuring customers achieve their desired outcomes using your product - making churn prevention a byproduct of genuine value delivery rather than a reactive damage-control function. This skill covers the full CS operating model: health scoring, onboarding design, churn prediction, expansion identification, QBR execution, segmentation, and team performance measurement.
Tags
customer-success health-scores churn expansion nrr retention
Platforms
- claude-code
- gemini-cli
- openai-codex
Frequently Asked Questions
What is customer-success-playbook?
Use this skill when building health scores, predicting churn, identifying expansion signals, or running QBRs. Triggers on customer success, health scores, churn prediction, expansion signals, customer QBRs, onboarding playbooks, NRR optimization, and any task requiring customer success strategy or operations.
How do I install customer-success-playbook?
Run npx skills add AbsolutelySkilled/AbsolutelySkilled --skill customer-success-playbook in your terminal. The skill will be immediately available in your AI coding agent.
What AI agents support customer-success-playbook?
This skill works with claude-code, gemini-cli, openai-codex. Install it once and use it across any supported AI coding agent.
Maintainers
Generated from AbsolutelySkilled
SKILL.md
Customer Success Playbook
Customer Success (CS) is the discipline of ensuring customers achieve their desired outcomes using your product - making churn prevention a byproduct of genuine value delivery rather than a reactive damage-control function. This skill covers the full CS operating model: health scoring, onboarding design, churn prediction, expansion identification, QBR execution, segmentation, and team performance measurement.
When to use this skill
Trigger this skill when the user:
- Needs to build or improve a customer health scoring system
- Wants to design or audit an onboarding playbook
- Asks how to predict, detect, or prevent customer churn
- Needs to identify expansion and upsell opportunities
- Wants to run effective Quarterly Business Reviews (QBRs)
- Asks how to segment customers by value or risk tier
- Needs to define CS team KPIs or OKRs
- Is working on NRR (Net Revenue Retention) or GRR (Gross Revenue Retention) improvement
Do NOT trigger this skill for:
- Product roadmap prioritization driven purely by engineering constraints (use product-strategy skills)
- Sales prospecting, lead scoring, or new logo acquisition (pre-sale belongs to sales enablement)
Key principles
Proactive, not reactive - CS that only responds to support tickets is account management, not customer success. Intervene before customers feel pain. The best save is the one that never needed saving.
Health scores drive action - A health score that lives in a dashboard but never triggers a workflow is decoration. Every health band must have an associated motion: what the CSM does, when, and how. Score without action is noise.
Onboarding determines lifetime value - The first 30-90 days set the trajectory for the entire customer relationship. Customers who reach their first "aha moment" quickly retain at 2-3x the rate of those who struggle. Invest disproportionately in time-to-value.
Expansion is earned, not sold - Upsells from customers who haven't achieved their desired outcomes produce churn, not growth. Expansion should follow proven value, not quota pressure. The signal to expand is the customer asking for more, not the CSM pitching more.
Segment by value and risk - Not all customers deserve the same coverage model. High-ARR accounts need white-glove, human-led success. Low-ARR accounts need scalable, tech-touch programs. Mismatching coverage to tier burns CSM capacity on accounts that can't justify the cost and underserves accounts that need attention.
Core concepts
The CS lifecycle
Every customer moves through predictable phases, each with distinct success criteria:
| Phase | Duration | Goal | Key risk |
|---|---|---|---|
| Onboarding | Days 1-90 | First value realization | Slow time-to-value, scope creep |
| Adoption | Months 3-12 | Broad, deep usage across team | Shallow single-user adoption |
| Renewal | 90 days before renewal | Confirmed ROI, signed renewal | Surprise objections at renewal |
| Expansion | Post-renewal or milestone | Upsell based on proven value | Premature pitching |
| Advocacy | Ongoing | Reference, case study, promoter | Neglect after expansion |
Health score components
A well-designed health score is a weighted composite of leading indicators that predict renewal probability. Common dimensions and typical weights:
| Dimension | Signal examples | Typical weight |
|---|---|---|
| Product usage | DAU/WAU, feature adoption depth, seats used/licensed | 30-40% |
| Engagement | CSM touchpoint frequency, sponsor responsiveness | 20-25% |
| Outcomes | Goals achieved vs. committed, ROI metrics | 20-25% |
| Support | Ticket volume, CSAT, unresolved critical issues | 10-15% |
| Relationship | Executive sponsor status, champion stability, NPS | 10-15% |
See references/health-score-model.md for the detailed weighted model and threshold design.
Churn indicators
Leading indicators (intervene now):
- Login frequency drops >30% week-over-week for 2+ consecutive weeks
- Champion or executive sponsor changes roles or leaves
- Support ticket volume spikes with CSAT below 3/5
- Renewal conversation not started within 90-day window
- Customer misses two consecutive scheduled CSM touchpoints
Lagging indicators (harder to reverse):
- Customer requests data export
- Customer asks about contract termination clauses
- NPS drops to detractor (0-6) territory
Expansion signals
- Usage ceiling - Feature utilization approaching licensed limit (>80% of seats used)
- Adjacent pain - Customer raising problems the upsell product directly solves
- Organizational spread - Multiple departments asking for access beyond the pilot team
- Renewal enthusiasm - Customer signs renewal early or references product in internal materials
- Executive sponsorship shift - C-suite starts attending success calls
Common tasks
Build a customer health scoring system
Design a weighted, multi-dimensional model that produces a single score (0-100) and a color-coded band (Red / Yellow / Green) with automatic CSM action triggers.
Step 1 - Define dimensions and weights. Select 4-6 dimensions relevant to your business. Product usage should carry the highest weight (30-40%) because it is objective and hardest to fake.
Step 2 - Normalize each dimension to 0-100. Map each raw metric to a 0-100 sub-score using thresholds. Example for usage:
- <20% of seats active in last 30 days = 0
- 20-49% = 40, 50-74% = 70, 75-100% = 100
Step 3 - Apply weights and compute composite.
health_score = (usage * 0.35) + (engagement * 0.25) + (outcomes * 0.20) + (support * 0.10) + (relationship * 0.10)
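A minimal Python sketch of Steps 2-4 (the example sub-scores are made up; the band cut-offs follow the table in Step 4):

```python
# Minimal sketch: weighted composite health score with band mapping.
# Sub-scores are assumed to be pre-normalized to 0-100 (Step 2).
WEIGHTS = {
    "usage": 0.35,
    "engagement": 0.25,
    "outcomes": 0.20,
    "support": 0.10,
    "relationship": 0.10,
}

def health_score(sub_scores: dict) -> int:
    """Apply dimension weights and return a 0-100 composite."""
    return round(sum(sub_scores[dim] * w for dim, w in WEIGHTS.items()))

def band(score: int) -> str:
    """Map the composite score to the action bands in Step 4."""
    if score >= 75:
        return "Green"
    if score >= 50:
        return "Yellow"
    return "Red"

account = {"usage": 70, "engagement": 82, "outcomes": 60, "support": 100, "relationship": 50}
score = health_score(account)
print(score, band(score))  # -> 72 Yellow
```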
Step 4 - Define bands and mandatory actions.
| Band | Score range | CSM action |
|---|---|---|
| Green | 75-100 | Expansion motion, reference request |
| Yellow | 50-74 | Scheduled check-in within 7 days, risk assessment |
| Red | 0-49 | Executive escalation within 48 hours, save plan |
Step 5 - Build a feedback loop. Compare health score 90 days prior to renewal against actual renewal outcome. Tune weights until model achieves >75% predictive accuracy for churn.
See references/health-score-model.md for the full scoring template.
Design an onboarding playbook
Onboarding ends when the customer achieves their first committed outcome, not when technical setup is complete. Structure around milestones, not calendar dates.
Milestone 1 - Technical kickoff (Days 1-7)
- Stakeholder alignment: map goals to product capabilities
- Technical setup complete: integrations, SSO, data imports
- Success plan signed: 3-5 measurable goals with target dates
Milestone 2 - First value realization (Days 14-30)
- Core use case live with real data
- At least 3 active users beyond the champion
- Customer can demonstrate the product unaided
Milestone 3 - Team adoption (Days 30-60)
- 60% of licensed seats active
- Secondary use case identified or live
Milestone 4 - Outcome confirmation (Days 60-90)
- At least one success plan goal achieved or measurably progressing
- EBR scheduled with executive sponsor
- Expansion signals documented for account plan
Predict and prevent churn - early warning system
Tiered alert triggers:
| Alert level | Trigger criteria | Response |
|---|---|---|
| Watch | Health score drops from Green to Yellow | CSM schedules check-in within 7 days |
| Warning | Yellow for 21+ days, or any single dimension at 0 | CSM escalates, builds risk mitigation plan |
| Critical | Health score Red, OR champion departs, OR formal complaint | Executive engagement within 48 hours, save plan |
Save plan template:
- Root cause analysis - what drove the score down?
- Executive alignment - is there internal will to stay?
- Remediation actions - concrete steps with owners and dates
- Success criteria - what does "saved" look like at 30/60/90 days?
- Go/no-go checkpoint - if criteria not met, prepare graceful offboarding
Identify expansion opportunities - signals and timing
Qualification criteria before starting an expansion motion:
- Health score Green for at least 60 consecutive days
- At least one success plan goal achieved with documented ROI
- Champion actively engaged
- Renewal is not within 60 days
Expansion conversation framework:
- Anchor to achieved value - reference a specific metric
- Surface the adjacent pain - ask about problems the expanded product solves
- Quantify the gap - help the customer estimate the cost of not solving it
- Propose a pilot or phased expansion
- Involve the AE for formal commercial motion; CSM does not own the close
Run effective QBRs - agenda
A QBR is a strategic alignment meeting, not a product demo. Target audience is executive sponsors; goal is confirming strategic value and setting next-quarter direction.
QBR agenda (60 minutes):
| Time | Section | Owner |
|---|---|---|
| 0-5 min | Welcome and objectives | CSM |
| 5-20 min | Results: goals vs. actuals from last quarter | CSM + Customer champion |
| 20-30 min | Value realized: ROI story with business metrics | CSM |
| 30-40 min | Challenges and open risks (honest) | Both sides |
| 40-50 min | Goals and success criteria for next quarter | Customer executive |
| 50-60 min | Product roadmap alignment + asks from customer | CSM + AE |
Preparation checklist: Pull 90-day health score trend, document 2 quantified ROI data points, prepare 3 success plan status updates, know the renewal date, brief your exec sponsor, identify one expansion opportunity (if Green health).
Segment and tier CS coverage
| Tier | ARR range | Coverage model | CSM ratio | Touchpoint cadence |
|---|---|---|---|---|
| Enterprise | >$100K | Named CSM, white-glove, proactive | 1:10-20 | Bi-weekly syncs, quarterly EBRs |
| Mid-Market | $20K-$100K | Named CSM, pooled for scale | 1:40-80 | Monthly syncs, semi-annual EBRs |
| SMB / Long Tail | <$20K | Tech-touch: automated email, in-app, community | 1:200+ | Automated lifecycle sequences |
Measure CS team performance - metrics
Lagging metrics:
| Metric | Definition | Target benchmark |
|---|---|---|
| Gross Revenue Retention (GRR) | Revenue retained excluding expansion | >90% for SaaS |
| Net Revenue Retention (NRR) | Revenue retained including expansion, minus churn | >110% signals healthy growth |
| Logo Churn Rate | % of customers lost in a period | <5% annually |
| Renewal Rate | % of renewals closed on time | >95% |
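For reference, a minimal sketch of how GRR and NRR are typically computed from one cohort's ARR movements (the figures are illustrative):

```python
# Minimal sketch: GRR and NRR for one cohort over one period.
# All inputs are ARR amounts for the same starting cohort.
def gross_revenue_retention(starting_arr, churned_arr, contraction_arr):
    """GRR excludes expansion: only losses count against the base."""
    return (starting_arr - churned_arr - contraction_arr) / starting_arr

def net_revenue_retention(starting_arr, churned_arr, contraction_arr, expansion_arr):
    """NRR includes expansion from the same cohort."""
    return (starting_arr - churned_arr - contraction_arr + expansion_arr) / starting_arr

start, churn, contraction, expansion = 1_000_000, 50_000, 20_000, 180_000
print(f"GRR: {gross_revenue_retention(start, churn, contraction):.0%}")              # 93%
print(f"NRR: {net_revenue_retention(start, churn, contraction, expansion):.0%}")     # 111%
```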
Leading metrics:
| Metric | Why it matters |
|---|---|
| Time-to-First-Value | Predicts long-term retention |
| Health Score Distribution | Portfolio risk visibility |
| QBR Completion Rate | Measures strategic engagement |
| Expansion Pipeline Coverage | Expansion predictability |
Anti-patterns
| Anti-pattern | Why it fails | What to do instead |
|---|---|---|
| Health score theater | Score exists in Salesforce but drives zero workflow | Tie every health band to a mandatory CSM action with SLA |
| One-size-fits-all coverage | Named CSMs on $5K accounts burn capacity; $500K accounts get neglected | Segment by ARR; build tech-touch for the long tail |
| Renewal-only QBRs | Signals the relationship is purely transactional | Run QBRs on a calendar cadence regardless of renewal timing |
| Premature expansion | Pitching upsells before first outcome produces churn, not revenue | Gate expansion on Green health (60+ days) and one achieved goal |
| Champion dependency | Single champion leaves and account collapses | Map at least two stakeholders; involve exec sponsor from onboarding |
| Vanity NPS | Sending surveys without acting on detractors | Close the loop on every detractor within 5 business days |
Gotchas
Health score with no action trigger is decoration - A health score that lives in Salesforce and gets reviewed once a month during pipeline calls is not driving behavior. Every health band must have a mandatory CSM action with a defined SLA. Green without an expansion motion and Red without an escalation protocol are both failures of the system.
Champion departure is not always visible in usage data - Product usage can remain stable for 30-60 days after a champion leaves, because the remaining users keep using the product out of habit. The champion departure is a leading indicator that usage will decline. Monitor LinkedIn/CRM for job changes on key contacts, not just product telemetry.
Premature expansion pitches accelerate churn - Attempting to upsell a customer who has not yet achieved their primary success plan goals communicates that you care more about revenue than their outcomes. It damages trust, poisons renewal conversations, and produces contraction, not expansion. Gate expansion motions strictly on Green health for 60+ days and at least one documented ROI milestone.
QBR attendance without executive preparation - A QBR where the customer executive shows up cold (no agenda sent in advance, no pre-read, no briefing with the champion) quickly turns into a status update that could have been an email. Send the agenda and ROI data 5 business days in advance and pre-brief the champion on what you want the executive to walk away thinking.
Onboarding completion measured by setup, not value - Marking onboarding complete when technical setup is done (SSO configured, data imported) does not indicate the customer has achieved any business value. The real onboarding milestone is first value realization: a user has completed a meaningful workflow with real data and can demonstrate it unaided.
References
references/health-score-model.md - Detailed weighted health score model with dimension definitions, normalization tables, and threshold calibration guidance
Only load the reference file when the task requires designing or auditing a health scoring system in detail.
References
churn-prediction.md
Churn Prediction - Complete Reference
Beyond Health Scores: Velocity-Based Risk Modeling
A health score is a snapshot; churn prediction is a trajectory. The core insight is that the rate of change in health signals is often more predictive than the absolute level.
The Velocity Framework
For each health signal, track three values:
1. Current value (today's score)
2. Velocity (rate of change over 30 days)
3. Acceleration (change in velocity - is the decline speeding up?)
Risk Assessment Matrix:
Current Score | Velocity | Risk Level
--------------|---------------|------------------
High (70+) | Stable/Up | Low - monitor
High (70+) | Declining | Medium - investigate
High (70+) | Rapid decline | High - act now
Mid (40-69) | Improving | Medium - continue support
Mid (40-69) | Stable | Medium-High - proactive outreach
Mid (40-69) | Declining | High - escalate
Low (<40) | Improving | Medium - recovery in progress
Low (<40) | Stable | Critical - recovery stalled
Low (<40)     | Declining     | Emergency - exec escalation
Calculating Velocity
Signal Velocity = (Current_30day_avg - Previous_30day_avg) / Previous_30day_avg
Example:
DAU/MAU last 30 days: 0.35
DAU/MAU previous 30 days: 0.50
Velocity = (0.35 - 0.50) / 0.50 = -0.30 (-30% decline)
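The same calculation as a small Python sketch (the bucket cut-offs follow the thresholds listed next):

```python
# Minimal sketch: signal velocity and bucket per the thresholds below.
def velocity(current_30d_avg: float, previous_30d_avg: float) -> float:
    """Relative change between consecutive 30-day averages."""
    return (current_30d_avg - previous_30d_avg) / previous_30d_avg

def velocity_bucket(v: float) -> str:
    if v > 0.10:
        return "positive"      # green
    if v >= -0.10:
        return "stable"        # neutral
    if v >= -0.25:
        return "declining"     # yellow
    return "rapid decline"     # red

v = velocity(0.35, 0.50)
print(f"{v:+.0%}", velocity_bucket(v))  # -30% rapid decline
```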
Velocity Thresholds:
> +10%: Positive trend (green)
-10% to +10%: Stable (neutral)
-10% to -25%: Declining (yellow)
< -25%: Rapid decline (red)
Early Warning System Design
Signal Priority for Early Warning
Not all signals degrade at the same rate before churn. The typical degradation timeline for B2B SaaS (from earliest warning to cancellation):
Timeline Before Churn | Signal
--------------------------|------------------------------------------
6-12 months | Executive sponsor engagement drops
4-8 months | Feature adoption breadth narrows
3-6 months | Login frequency declines
2-4 months | CSM meeting cancellations increase
1-3 months | Support ticket sentiment turns negative
1-2 months | Renewal conversation stalls
2-4 weeks | Customer asks about data export/migration
1-2 weeks                 | Cancellation request submitted
The earlier signals are harder to detect but give you more time to recover. Invest in tracking executive engagement and feature adoption breadth - they are the 6-month early warning system most CS teams ignore.
Alert Design
Structure alerts in three tiers:
Tier 1 - Automated Watch (no human action required yet):
Trigger: Any single signal crosses yellow threshold
Action: Log to account timeline, include in weekly CSM digest
Example: "Login frequency dropped 15% MoM for Acme Corp"
Tier 2 - CSM Action Required:
Trigger: Two or more signals cross yellow, OR any signal crosses red
Action: Task assigned to CSM with 48-hour SLA to investigate
Example: "Acme Corp: Login decline (-25%) + Support sentiment negative"
Tier 3 - Escalation Required:
Trigger: Health score drops below 30, OR velocity multiplier hits 2.0x
Action: CS manager + CSM sync within 24 hours, exec sponsor assigned
Example: "CRITICAL: Acme Corp health score dropped from 62 to 28 in 30 days"Churn Cohort Analysis
Analyze churn patterns by cohort to identify systemic issues vs. individual account problems.
Building a Churn Cohort View
Group churned accounts by:
1. Signup cohort (when they became a customer)
- Are recent cohorts churning faster than older ones?
- If yes: onboarding or product-market fit problem
2. Segment (Enterprise / Mid-Market / SMB)
- Is churn concentrated in a specific segment?
- If yes: pricing, coverage model, or product fit issue for that segment
3. Churn reason category
- Product gaps, poor adoption, budget, competitor, bad fit, M&A
- Track distribution over time - is "competitor" growing as a reason?
4. Lifecycle stage at churn
- First 90 days: onboarding failure
- 90 days - 1 year: adoption/value failure
- 1+ years: relationship, evolving needs, or competitive displacement
5. Last known health score trajectory
- Sudden drop (event-driven): champion departure, outage, billing issue
- Gradual decline (trend-driven): fading engagement, slow disengagement
- Never healthy (fit-driven): should not have been sold in the first place
Interpreting Churn Cohort Data
Pattern | Root Cause | Action
---------------------------------|---------------------------|---------------------------
New cohorts churn faster | Onboarding degraded | Audit onboarding, time-to-value
Churn spikes in specific months | Renewal bunching | Stagger renewals, prep earlier
One segment churns 2x+ others | Product-market fit gap | Investigate segment-specific needs
"Competitor" reason trending up | Competitive pressure | Win/loss analysis, feature gap review
Most churn from "never healthy" | Sales/CS misalignment | Tighten ICP, improve handoff processPredictive Modeling Approaches
Simple: Rules-Based Scoring
Best for teams with <500 accounts or limited data science resources.
Churn Risk Score = weighted sum of risk factors
Risk Factors:
Health score < 50: +30 points
Health velocity negative: +20 points
Days to renewal < 90: +15 points
Executive sponsor departed: +25 points
2+ negative support tickets: +15 points
CSM meetings declined 2x: +10 points
No login in 14+ days: +20 points
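A minimal sketch of this additive scoring, assuming hypothetical account fields; the factor weights follow the list above and the bands match the risk levels listed next:

```python
# Minimal sketch: additive rules-based churn risk score.
def churn_risk_score(acct: dict) -> int:
    score = 0
    if acct["health_score"] < 50:            score += 30
    if acct["health_velocity"] < 0:          score += 20
    if acct["days_to_renewal"] < 90:         score += 15
    if acct["exec_sponsor_departed"]:        score += 25
    if acct["negative_tickets"] >= 2:        score += 15
    if acct["declined_csm_meetings"] >= 2:   score += 10
    if acct["days_since_login"] >= 14:       score += 20
    return min(score, 100)

def risk_level(score: int) -> str:
    if score <= 25: return "Low"
    if score <= 50: return "Moderate"
    if score <= 75: return "High"
    return "Critical"
```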
Risk Levels:
0-25: Low risk
26-50: Moderate risk
51-75: High risk
76-100: Critical risk
Intermediate: Logistic Regression
Best for teams with 500+ accounts and 50+ churn events to train on.
Approach:
1. Define the target: churned within 90 days (binary: yes/no)
2. Feature set: health score, velocity, engagement metrics, tenure,
contract value, segment, support ticket count/sentiment
3. Train on historical data (12-24 months)
4. Output: probability of churn within 90 days for each account
5. Set action thresholds: >70% probability = critical, >40% = high
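A minimal sketch of this approach using scikit-learn, assuming a historical export with the feature columns below and a binary churned_90d label (all column and file names are illustrative):

```python
# Minimal sketch: 90-day churn probability with logistic regression.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

FEATURES = ["health_score", "health_velocity", "tenure_months",
            "contract_value", "ticket_count", "ticket_sentiment"]

accounts = pd.read_csv("accounts_history.csv")  # hypothetical export
X_train, X_test, y_train, y_test = train_test_split(
    accounts[FEATURES], accounts["churned_90d"], test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

probs = model.predict_proba(X_test)[:, 1]   # churn probability per account
print("AUC:", roc_auc_score(y_test, probs))
flags = ["critical" if p > 0.7 else "high" if p > 0.4 else "normal" for p in probs]
```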
Advantages:
- Interpretable (you can explain WHY an account is high risk)
- Works with relatively small datasets
- Easy to update quarterly with new data
Disadvantages:
- Assumes linear relationships
- May miss complex interaction effects
Advanced: Survival Analysis
Best for teams that need to predict WHEN churn will happen, not just IF.
Approach:
1. Model time-to-churn as a survival function
2. Accounts that haven't churned are "censored" (still at risk)
3. Cox Proportional Hazards model identifies which factors accelerate
or decelerate time-to-churn
4. Output: hazard ratio for each factor and predicted survival curves
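A minimal sketch using the lifelines library's CoxPHFitter, assuming one row per account with an observed tenure, a churn/censoring flag, and covariate columns (all names are illustrative):

```python
# Minimal sketch: time-to-churn with a Cox Proportional Hazards model.
# `tenure_months` is the observed lifetime so far; `churned` is 1 if the
# account churned, 0 if it is still active (censored).
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("account_survival.csv")  # hypothetical export

cph = CoxPHFitter()
cph.fit(df, duration_col="tenure_months", event_col="churned")
cph.print_summary()  # hazard ratios per covariate (e.g. sponsor_departed)

# Predicted survival curves for accounts still at risk
at_risk = df[df["churned"] == 0]
survival = cph.predict_survival_function(at_risk)
```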
Example output:
"Executive sponsor departure increases churn hazard by 2.3x"
"Each additional integrated tool reduces churn hazard by 0.85x"
"This account has a 35% probability of churning within 6 months"Involuntary Churn Prevention
Involuntary churn (failed payments, expired cards) is often 20-40% of total churn for self-serve and SMB segments. It deserves its own playbook.
Dunning sequence (pre-failure):
Day -30: Alert customer that card expires soon
Day -14: Second reminder with one-click update link
Day -7: Final warning, highlight what they'll lose access to
Dunning sequence (post-failure):
Day 0: Payment failed - retry immediately
Day 1: Email: "Payment issue - update your card" (friendly tone)
Day 3: Retry payment + second email
Day 7: Retry + "Your account will be paused in 7 days" email
Day 10: Retry + final warning
Day 14: Account paused (not deleted), email with reactivation link
Day 30: Account cancelled, data retained for 90 days
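One way to make this sequence operational is to encode it as data that a daily billing job can consult; a minimal sketch with hypothetical action names:

```python
# Minimal sketch: post-failure dunning sequence encoded as data.
# A daily billing job looks up which actions are due for each failed payment.
DUNNING_STEPS = [
    (0,  ["retry_payment"]),
    (1,  ["email_update_card"]),
    (3,  ["retry_payment", "email_reminder"]),
    (7,  ["retry_payment", "email_pause_warning"]),
    (10, ["retry_payment", "email_final_warning"]),
    (14, ["pause_account", "email_reactivation_link"]),
    (30, ["cancel_account"]),  # data retained for 90 days
]

def actions_due(days_since_failure: int) -> list[str]:
    """Return the actions scheduled for this day, if any."""
    for day, actions in DUNNING_STEPS:
        if day == days_since_failure:
            return actions
    return []
```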
Best practices:
- Retry failed payments 4-6 times over 14 days (different times/days)
- Use card updater services (Visa Account Updater, Mastercard ABU)
- Send in-app notifications, not just email
- Make the update flow one click, no login required
- Track recovery rate by dunning step to optimize the sequence
Churn Prevention Metrics
Track these metrics to measure the effectiveness of your churn prediction and prevention programs:
Metric | Target | Measurement
------------------------------|----------------|-------------------------------------
Prediction accuracy (AUC)     | >0.75          | Area under ROC curve for 90-day churn predictions
False negative rate | <15% | Churned accounts that were scored Green
False positive rate | <30% | Red accounts that did not churn
Recovery rate | >40% | Red accounts saved within 90 days
Time to intervention | <48 hours | Time from alert to first CSM action
Churn reason accuracy | >80% | Validated churn reasons match prediction
Save offer acceptance rate    | 20-35%         | Accounts that accept a retention offer
expansion-playbooks.md
Expansion Playbooks - Complete Reference
Expansion Types
Understanding the different expansion motions helps target the right approach for each account.
Type | Definition | Typical Trigger
--------------|-----------------------------------------|---------------------------
Seat expansion| More users on existing plan | Team growth, new hires
Tier upgrade | Move to a higher plan/tier | Feature limits hit
Module add-on | Purchase additional product modules | New use case identified
Usage upgrade | Higher usage limits or volume tier | Approaching capacity
Cross-sell | New product from your portfolio | Adjacent need discovered
Professional  | Services, training, custom development  | Complex implementation need
Signal Detection Framework
Quantitative Signals (data-driven)
Signal | Detection Method | Readiness
--------------------------------|-------------------------------|----------
Seat utilization > 80% | License usage report | Hot
API calls approaching limit | Usage monitoring dashboard | Hot
New department onboarding | Distinct user groups forming | Warm
Feature requests for higher tier| Support/product feedback data | Warm
Usage growing 20%+ QoQ | Trend analysis | Warm
Integration count increasing | Product analytics | Warm
Admin creating new workspaces | Product event tracking | Hot
Export/reporting usage growing  | Feature analytics             | Warm
Qualitative Signals (relationship-driven)
Signal | Detection Method | Readiness
--------------------------------|-------------------------------|----------
Customer mentions new initiative| QBR, check-in calls | Hot
Executive champion advocates | NPS/CSAT comments, referrals | Hot
Customer asks about pricing | CSM conversations, chat logs | Hot
New stakeholders joining calls | Meeting attendee tracking | Warm
Customer shares positive ROI | QBR presentations, emails | Hot
Asks about roadmap features | CSM notes, support tickets | Warm
Participates in case study | Marketing program tracking | Hot
References you in their docs    | Web monitoring, CSM reports   | Warm
Expansion Timing Framework
When to plant seeds vs. when to propose
Phase 1 - Plant Seeds (ongoing, no ask):
- Share relevant case studies from similar customers
- Mention adjacent use cases during regular check-ins
- Include benchmark comparisons in QBRs showing what peers do
- Invite to product demos or webinars for premium features
Duration: Continuous, part of regular CS cadence
Phase 2 - Explore Interest (when 3+ warm signals detected):
- Ask discovery questions about their evolving needs
- "I've noticed your team has grown - how are you managing X?"
- "Other customers in your space have started using Y for Z"
- Map new stakeholders and their pain points
Duration: 2-4 weeks of exploration
Phase 3 - Propose Value (when 3+ hot signals detected):
- Build a business case with quantified ROI
- Connect specific expansion to their stated goals
- Introduce AE or solutions engineer for technical scoping
- Present pricing with anchor to value delivered
Duration: 2-6 weeks to close
Phase 4 - Close and Onboard:
- AE leads commercial negotiation
- CSM ensures smooth onboarding of new capabilities
- Set new success metrics for expanded scope
- Update health score model with new baseline
CSM-AE Handoff Protocol
The handoff between CSM and AE for expansion deals is a common failure point.
Handoff checklist
Before introducing AE, CSM must have:
[ ] Confirmed customer interest (not just CSM interpretation)
[ ] Identified the economic buyer or decision maker
[ ] Documented the business need in customer's own words
[ ] Mapped the evaluation process and timeline
[ ] Confirmed budget availability (at least directionally)
[ ] Prepared a brief for AE covering:
- Account history and relationship dynamics
- Current product usage and health score
- Expansion opportunity details
- Key stakeholders and their roles
- Potential objections and sensitivities
- Pricing history and contract terms
Handoff meeting structure:
1. CSM introduces AE as "specialist" (not "salesperson")
2. CSM frames the conversation around customer's goals
3. AE asks discovery questions (not a pitch)
4. CSM stays involved through the process
5. CSM owns onboarding of expanded product/seats after close
Common handoff failures
| Failure | Impact | Prevention |
|---|---|---|
| AE pitches before discovery | Customer feels sold to, not helped | AE must do discovery call before any proposal |
| CSM disappears after handoff | Customer feels abandoned | CSM attends all expansion meetings |
| AE discounts without CSM input | Undermines pricing integrity | Pricing decisions require CSM + AE alignment |
| No warm introduction | Customer confused by cold outreach from AE | CSM always makes the introduction in context |
| Handoff too early | Customer not ready, feels pressured | Use signal framework - minimum 3 hot signals |
Expansion Conversation Templates
Seed Planting (Phase 1)
"I was reviewing your usage this quarter and noticed your team has been
really active with [feature]. Some of our customers who use [feature]
heavily have also found [adjacent feature/tier] helpful for [specific
outcome]. Just wanted to flag it as something to keep in mind as your
needs evolve."Discovery (Phase 2)
"In our last QBR, you mentioned [business initiative]. I've been thinking
about how we might help with that. A few questions:
- What's the timeline for that initiative?
- Who else on your team is involved?
- Have you evaluated any solutions for that yet?
- Would it be helpful if I shared how [similar customer] approached this?"
Value Proposal (Phase 3)
"Based on our conversations, here's what I'm recommending:
Current state: [what they have today]
Desired state: [what they want to achieve]
Gap: [what's missing]
Proposed solution: [specific expansion - seats, tier, module]
Expected outcome: [quantified value - time saved, revenue generated, etc.]
Investment: [pricing, positioned against value]
I'd like to bring in [AE name], who specializes in [area], to walk
through the details and answer any technical questions. Would [date]
work for a 30-minute call?"Expansion Metrics
Track these metrics to measure expansion program effectiveness:
Metric | Target | Measurement
--------------------------------|----------------|-----------------------------------
Expansion revenue rate | >20% of total | Expansion ARR / Total new ARR
Net Revenue Retention (NRR)     | >115%          | Including expansion, contraction, churn
Expansion win rate | >40% | Proposals won / proposals made
Time from signal to proposal | <30 days | Average days from hot signal to AE intro
CSM-sourced pipeline | >30% of expand | Pipeline originated by CS team
Expansion cycle time | <45 days | Average days from proposal to close
Post-expansion health score     | Stable/up      | Health score 90 days after expansion
Cross-Sell vs. Upsell Strategy
Upsell (same product, higher tier):
- Lower friction, customer already knows the product
- Focus on feature limits and usage growth
- CSM can often handle without AE for small upgrades
- Typical timing: 6-12 months into contract
Cross-Sell (new product from portfolio):
- Higher friction, requires new evaluation and champion
- Focus on adjacent use cases and new stakeholder needs
- Almost always requires AE involvement
- Typical timing: 12+ months, after strong first product adoption
- Requires identifying new buyer/champion for the new product
Key principle: Upsell the current champion; cross-sell requires a new champion
in a different department or function.
health-score-model.md
Health Score Model
A customer health score is a weighted composite metric (0-100) that aggregates multiple behavioral and relational signals into a single risk/opportunity indicator. A well-designed model predicts renewal probability and triggers CSM actions before churn becomes irreversible.
Design principles
- Fewer inputs, higher signal. Start with 4-6 dimensions. Models with 20+ inputs dilute strong predictors and create noise. Add dimensions only when they demonstrably improve predictive accuracy.
- Automate data collection. A dimension you can't populate automatically will drift stale and corrupt the score. If a signal requires manual CSM entry, weight it lower or exclude it from the automated score.
- Every band triggers an action. If a score change doesn't trigger a defined CSM workflow, the score is decorative. Define mandatory actions per band before launch.
- Calibrate continuously. Initial weights are hypotheses. Validate against actual churn and renewal data every quarter and adjust.
Dimensions
1. Product Usage (default weight: 35%)
Measures whether the customer is actively and broadly using the product.
| Sub-signal | What to measure | Red flag threshold |
|---|---|---|
| Login frequency | Active users / licensed seats in last 30 days | <20% of seats active |
| Feature adoption breadth | % of licensed feature modules used in last 30 days | <30% of modules |
| Usage depth | Core action volume per active user per week | <3 core actions/user/week |
| Usage trend | 90-day slope of weekly active users | Declining >20% over 90 days |
Normalization example (login frequency):
| Seats active (30-day) | Sub-score |
|---|---|
| <20% | 0 |
| 20-34% | 20 |
| 35-49% | 40 |
| 50-64% | 60 |
| 65-79% | 80 |
| 80-100% | 100 |
Composite usage sub-score:
usage_score = (login_freq * 0.40) + (feature_breadth * 0.30) + (usage_depth * 0.20) + (trend * 0.10)
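A minimal sketch of the login-frequency normalization table and the usage composite above:

```python
# Minimal sketch: normalize login frequency per the table above and
# combine the usage sub-signals with the stated weights.
def login_frequency_subscore(pct_seats_active: float) -> int:
    """Map % of seats active in the last 30 days to a 0-100 sub-score."""
    bands = [(80, 100), (65, 80), (50, 60), (35, 40), (20, 20)]
    for threshold, score in bands:
        if pct_seats_active >= threshold:
            return score
    return 0

def usage_score(login_freq, feature_breadth, usage_depth, trend):
    """All inputs are 0-100 sub-scores."""
    return (login_freq * 0.40 + feature_breadth * 0.30
            + usage_depth * 0.20 + trend * 0.10)

print(login_frequency_subscore(58))  # 60
```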
2. Engagement (default weight: 25%)
Measures the quality and consistency of the relationship between the customer and the CS team.
| Sub-signal | What to measure | Red flag threshold |
|---|---|---|
| CSM touchpoint cadence | % of scheduled touchpoints completed in last 90 days | <60% completion |
| Executive sponsor accessibility | Days since last exec-level interaction | >60 days |
| Outreach response time | Average hours to respond to CSM outreach | >72 hours |
| EBR / QBR attendance | % of scheduled QBRs attended by exec sponsor | <50% |
Normalization example (touchpoint cadence):
| Scheduled touchpoints completed | Sub-score |
|---|---|
| <40% | 0 |
| 40-59% | 30 |
| 60-74% | 60 |
| 75-89% | 80 |
| 90-100% | 100 |
3. Outcomes (default weight: 20%)
Measures whether the customer is achieving the goals documented in the success plan. This is the hardest dimension to automate but the most predictive of genuine retention.
| Sub-signal | What to measure | Red flag threshold |
|---|---|---|
| Success plan milestone completion | % of milestones completed on schedule | <50% by quarter midpoint |
| Customer-reported ROI | NPS or CSAT score, weighted by recency | NPS <7 or CSAT <3.5/5 |
| Business outcome proxy | Customer-defined metric (e.g., tickets deflected, deals closed, time saved) | Negative or flat trend |
Scoring approach: Because outcome data is partially subjective, weight CSM-assessed milestone completion (60%) and objective NPS/CSAT (40%) to reduce gaming.
outcomes_score = (milestone_pct * 0.60) + (normalized_nps * 0.40)
4. Support (default weight: 10%)
Measures product quality experience and service burden signals.
| Sub-signal | What to measure | Red flag threshold |
|---|---|---|
| Open critical tickets | Count of P1/P2 tickets open >5 business days | Any |
| CSAT on closed tickets | Average CSAT score on resolved tickets in 30 days | <3.5/5 |
| Ticket volume trend | Month-over-month change in total ticket count | >50% spike |
Composite support sub-score:
support_score = (100 if no_open_critical else 0) * 0.40 + (normalized_csat * 0.40) + (volume_trend_score * 0.20)
Map ticket volume trend: declining = 100, flat = 70, increasing = 40, spike = 0.
5. Relationship (default weight: 10%)
Measures stakeholder stability and strategic alignment signals.
| Sub-signal | What to measure | Red flag threshold |
|---|---|---|
| Champion stability | Has the primary champion changed in last 90 days? | Yes = critical flag |
| Multi-threading depth | Number of distinct stakeholders engaged in last 60 days | <2 |
| Executive alignment | Exec sponsor confirmed and engaged | Not confirmed |
Champion departure is a special-case trigger: regardless of composite score, a champion departure should immediately flag the account for Watch status and CSM outreach within 24 hours.
Composite score formula
health_score =
(usage_score * 0.35) +
(engagement_score * 0.25) +
(outcomes_score * 0.20) +
(support_score * 0.10) +
(relationship_score * 0.10)
Round to nearest integer. The result is a value from 0 to 100.
Thresholds and action triggers
| Band | Score range | Meaning | Mandatory CSM action | SLA |
|---|---|---|---|---|
| Green | 75-100 | Healthy, on track | Monitor; look for expansion signals; request reference if >90 | Weekly automated review |
| Yellow | 50-74 | At risk; early warning | Schedule proactive check-in; document root cause hypothesis | CSM contact within 7 days |
| Red | 0-49 | Critical risk | Escalate to CS Manager; executive outreach; build save plan | Escalation within 48 hours |
Threshold override rules
These conditions force Red regardless of composite score:
- Any open P1 ticket unresolved for >5 business days
- Primary champion departed without replacement confirmed
- Customer has sent formal cancellation intent or legal notice
- Payment overdue >30 days
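A minimal sketch of applying these overrides on top of the composite band (the account flags are hypothetical):

```python
# Minimal sketch: force Red when any override condition is true,
# regardless of the composite score band.
def effective_band(composite_band: str, acct: dict) -> str:
    overrides = [
        acct.get("p1_ticket_open_over_5_days", False),
        acct.get("champion_departed_no_replacement", False),
        acct.get("formal_cancellation_notice", False),
        acct.get("payment_overdue_days", 0) > 30,
    ]
    return "Red" if any(overrides) else composite_band
```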
Calibration methodology
Step 1 - Collect baseline data
For each account that churned or renewed in the last 4 quarters, pull the health score from 90 days before the renewal date. This is your training set.
Step 2 - Measure predictive accuracy
Calculate the confusion matrix:
True Positive = Red score at T-90 AND customer churned
False Negative = Green/Yellow at T-90 AND customer churned (missed churns)
True Negative = Green score at T-90 AND customer renewed
False Positive = Red at T-90 AND customer renewed (false alarms)
Target: False Negative rate <25% (you should catch 75%+ of churns in the Red band).
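A minimal sketch of this step, given each account's band at T-90 and its actual outcome (field names are illustrative):

```python
# Minimal sketch: confusion matrix for the T-90 health band vs. the
# actual renewal outcome, plus the false-negative rate to watch.
from collections import Counter

def calibration_report(accounts):
    """accounts: iterable of dicts with `band_t90` and `churned` (bool)."""
    counts = Counter()
    for a in accounts:
        predicted_churn = a["band_t90"] == "Red"
        if predicted_churn and a["churned"]:
            counts["true_positive"] += 1
        elif not predicted_churn and a["churned"]:
            counts["false_negative"] += 1   # missed churns
        elif not predicted_churn and not a["churned"]:
            counts["true_negative"] += 1
        else:
            counts["false_positive"] += 1   # false alarms
    churned = counts["true_positive"] + counts["false_negative"]
    fn_rate = counts["false_negative"] / churned if churned else 0.0
    return counts, fn_rate  # target: fn_rate < 0.25
```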
Step 3 - Adjust weights
If False Negatives are high, the model is under-weighting the signals that churned accounts showed. Analyze what those accounts had in common at T-90 and increase the weight of those dimensions.
If False Positives are high, the model is over-penalizing for signals that don't actually predict churn. Reduce weights for those dimensions or adjust the normalization thresholds.
Step 4 - Validate, then recalibrate quarterly
Never run more than one quarter on uncalibrated weights. Set a calendar reminder to re-run the confusion matrix after every renewal cycle.
Common model pitfalls
| Pitfall | Consequence | Fix |
|---|---|---|
| Usage as the only dimension | High-usage customers who see no ROI will still churn | Always include an outcomes dimension |
| Static thresholds | What was "healthy" in Year 1 may be "at risk" in Year 3 as product evolves | Recalibrate thresholds annually at minimum |
| Ignoring velocity | A stable score at 55 is less dangerous than a score dropping from 80 to 60 | Add a velocity multiplier: declining * 1.5x, dropping fast * 2.0x |
| Over-indexing on NPS | NPS is a lagging, self-reported metric that churning customers often inflate | Weight NPS at <15% of any dimension; prioritize behavioral signals |
| Missing the long tail | SMB accounts with no CSM engagement produce silent churn | Implement automated threshold alerts routed to pooled queue |
Velocity scoring (optional enhancement)
Layer a velocity multiplier onto the composite score to capture directional momentum:
30-day delta = current_score - score_30_days_ago
Velocity bucket:
delta > +5 : improving -> multiplier 0.85 (reduces effective risk)
-5 to +5 : stable -> multiplier 1.00
-15 to -5 : declining -> multiplier 1.25
delta < -15 : rapid drop -> multiplier 1.50 (auto-escalate to Warning)
risk_adjusted_score = health_score / velocity_multiplier
Use risk_adjusted_score for triage prioritization but display health_score to customers and in reporting to avoid confusion.
health-score-models.md
Health Score Models - Complete Reference
Signal Selection Methodology
Start with the outcome you're predicting (churn vs. expansion vs. advocacy), then work backward to identify the signals that correlate with that outcome.
Step 1: Gather candidate signals
Audit every data source available to CS:
Product Analytics:
- DAU/MAU ratio (stickiness)
- Feature adoption breadth (# of paid features used / total available)
- Core action volume (the "aha moment" action, measured per user per week)
- Session duration and depth
- API call volume (for developer products)
- Integration count (connected third-party tools)
Engagement Data:
- CSM meeting attendance rate (attended / scheduled)
- Support ticket volume and sentiment
- Email/message response time from customer
- Event and webinar attendance
- Community forum participation
- Training/certification completion
Outcome Data:
- Progress toward stated success criteria
- Customer-reported satisfaction (NPS, CSAT, CES)
- ROI metrics (if measurable and tracked)
- Business outcome indicators (e.g., revenue influenced, time saved)
Contractual Data:
- Days to renewal
- Contract value trend (expanding, flat, contracting)
- Payment history (on-time %, overdue invoices)
- Multi-year vs. annual vs. monthly contract
- Number of products/modules purchased
Step 2: Validate against historical outcomes
For each candidate signal, run a correlation analysis against actual churn and expansion events from the past 12-24 months.
Validation approach:
1. Pull churned accounts from past 12 months
2. Pull expanded accounts from past 12 months
3. Pull stable (retained, no change) accounts as control group
4. For each signal, compare distributions across the three groups
5. Keep signals where churned accounts show statistically different
values from retained/expanded accounts
6. Discard signals that look the same across all groups
Step 3: Assign weights
Initial weight recommendations by business model:
B2B SaaS (Sales-Led):
Product Adoption: 35%
Engagement: 30% (CSM relationship matters heavily)
Outcomes: 20%
Contractual: 15%
B2B SaaS (Product-Led Growth):
Product Adoption: 45% (usage IS the relationship)
Engagement: 15%
Outcomes: 25%
Contractual: 15%
Enterprise / High ACV:
Product Adoption: 25%
Engagement: 35% (multi-threading, exec sponsors critical)
Outcomes: 25%
Contractual: 15%
These are starting points. After 2-3 quarters of data, use logistic regression or a simple decision tree on your actual churn data to derive empirically optimal weights.
Scoring Implementation
Normalize each signal to 0-100
Every raw signal must be normalized before applying weights.
Common normalization approaches:
Percentile-based (recommended for most signals):
Score = Percentile rank of this account among all active accounts
Example: If account's DAU/MAU is in the 75th percentile, score = 75
Threshold-based (good for binary or categorical signals):
Score = 100 if condition met, 50 if partially met, 0 if not met
Example: Executive sponsor identified and engaged = 100,
identified but unresponsive = 50, none identified = 0
Trend-based (for signals where direction matters):
Score = 50 (baseline) + (trend_direction * magnitude_adjustment)
Example: Login frequency up 20% MoM = 50 + 20 = 70
Login frequency down 30% MoM = 50 - 30 = 20
Composite score calculation
Health Score = SUM(signal_score_i * weight_i) for all signals
Example:
Product Adoption score: 72, weight: 0.40 -> 28.8
Engagement score: 85, weight: 0.25 -> 21.25
Outcomes score: 60, weight: 0.20 -> 12.0
Contractual score: 90, weight: 0.15 -> 13.5
-----------------------------------------------
Composite Health Score: 75.55 -> rounds to 76 (Green)
Threshold Calibration
Initial thresholds (start here, then calibrate)
Score Range | Label | Action
------------|--------|-------------------------------------------
85-100 | Strong | Expansion opportunity - connect with AE
70-84 | Good | Monitor, continue driving deeper adoption
50-69 | Fair | Proactive outreach, investigate weak signals
30-49 | Poor | Urgent intervention, CSM + manager review
0-29        | Crisis | Executive escalation, recovery playbook
Calibration process (run quarterly)
1. Pull all accounts that churned last quarter
2. Record their health score 90 days before churn
3. Calculate the distribution:
- What % of churned accounts were scored Green? (false negatives)
- What % of Green accounts actually churned? (false negative rate)
- What % of Red accounts did NOT churn? (false positives)
4. Adjust thresholds to minimize false negatives:
- If many churned accounts were Green, lower the Green threshold
- If many Red accounts are stable, raise the Red threshold
- Target: <10% of churned accounts should have been Green 90 days prior
5. Revalidate signal weights:
- Which signals had the strongest predictive power this quarter?
- Which signals added noise (no correlation with outcomes)?
- Adjust weights accordingly, maximum 10% shift per quarter
Do not change thresholds or weights more than once per quarter. Frequent changes make the score unreliable for CSMs who learn to calibrate their intuition against it. Stability builds trust.
Health Score by Customer Lifecycle Stage
Not all signals matter equally at every stage.
Onboarding (0-90 days):
Overweight: Time-to-first-value, onboarding milestone completion,
stakeholder identification, training attendance
Underweight: NPS (too early), expansion signals, renewal proximity
Adoption (90 days - 1 year):
Overweight: Feature adoption breadth, core action frequency,
integration depth, user growth within account
Underweight: Renewal proximity (still far away)
Maturity (1+ year):
Overweight: Outcome achievement, executive engagement, NPS trend,
expansion signals, renewal proximity
Underweight: Onboarding milestones (irrelevant)
Pre-Renewal (90 days before renewal):
Overweight: Renewal signals (budget approval, legal review started),
executive sentiment, competitive mentions in tickets
Underweight: Feature adoption (too late to change)
Common Health Score Pitfalls
| Pitfall | Impact | Fix |
|---|---|---|
| Using only product data | Misses relationship and outcome dimensions; scores look "healthy" right up until the customer leaves | Add engagement and outcome signals; validate against churn data |
| Not accounting for customer size | A 500-person company with 10 active users is not healthy; a 5-person company with 5 active users is | Normalize usage metrics by licensed seats or expected users |
| Static thresholds across segments | Enterprise and SMB accounts have fundamentally different usage patterns | Set segment-specific thresholds and weights |
| Ignoring seasonality | Some businesses are seasonal; Q4 dips don't always mean churn risk | Build seasonal baselines; compare to same-period last year |
| No manual override mechanism | Sometimes CSMs have context the data doesn't capture | Allow CSM overrides with required notes; track override accuracy over time |
qbr-templates.md
QBR Templates - Complete Reference
QBR Program Design
Cadence by Segment
Segment | Cadence | Format | Duration | Attendees
-------------|---------------|-----------------|-----------|------------------
Enterprise | Quarterly | In-person/video | 45-60 min | Exec + CSM + AE
Mid-Market | Semi-annual | Video call | 30-45 min | CSM + stakeholders
SMB | Annual | Video call | 20-30 min | CSM (pooled)
Self-Serve   | None (auto)   | Email report    | N/A       | Automated
QBR Readiness Checklist
Before scheduling the QBR, CSM must complete:
Data Preparation:
[ ] Pull usage analytics for the quarter (DAU/MAU, feature adoption, volume)
[ ] Calculate value delivered (ROI, time saved, outcomes achieved)
[ ] Prepare benchmark comparisons (anonymized peer data)
[ ] Review support ticket history and sentiment
[ ] Check health score trend for the quarter
[ ] Pull any NPS/CSAT responses from the period
Stakeholder Preparation:
[ ] Confirm attendees 2 weeks in advance
[ ] Identify if any stakeholder changes occurred
[ ] Pre-align with champion on any sensitive topics
[ ] Send pre-read agenda 1 week before
[ ] Gather customer's priorities for the quarter ahead
Internal Preparation:
[ ] Review previous QBR action items and status
[ ] Align with AE on any expansion/renewal topics
[ ] Prepare product roadmap highlights relevant to this customer
[ ] Brief exec sponsor (if attending) on account context
[ ] Rehearse key talking points and transitions
The Look Back / Look Around / Look Forward Framework
Look Back (33% of time)
Purpose: Demonstrate value delivered and build credibility.
Structure:
1. Recap goals set in the previous QBR
2. For each goal, show:
- Status: Achieved / On Track / Behind / Missed
- Evidence: data, metrics, or customer confirmation
- If behind/missed: explain why and what was done
3. Highlight unexpected wins (value delivered beyond stated goals)
4. Acknowledge any service issues and remediation steps taken
Slide template:
| Goal | Target | Actual | Status |
|-----------------------|---------------|---------------|-----------|
| Onboard 50 users | 50 users | 62 users | Exceeded |
| Reduce process time | 30% reduction | 22% reduction | On Track |
| Launch API integration | Q2 launch | Delayed to Q3 | Behind |
Key: Green = Exceeded/Achieved, Yellow = On Track, Red = Behind/Missed
Talking points:
- Lead with wins. Even if overall performance is mixed, start with what went well to establish positive momentum.
- For missed goals, own the gap honestly and focus on what was learned and what changes are being made.
- Use customer quotes or data from their own systems when possible - third-party validation is more credible than your dashboards.
Look Around (33% of time)
Purpose: Provide external context and position yourself as a strategic partner.
Structure:
1. Industry benchmarks - how does this customer compare to peers?
2. Best practices - what are top-performing customers doing differently?
3. Product updates relevant to their use cases
4. Market/industry trends that affect their business
Benchmark slide template:
| Metric | This Customer | Peer Median | Top Quartile |
|---------------------|---------------|-------------|--------------|
| User adoption rate | 72% | 65% | 85% |
| Feature utilization | 45% | 40% | 70% |
| Time to resolution | 4.2 hours | 6.1 hours | 2.8 hours |
| Support tickets/mo  | 12            | 18          | 8            |
Rules for benchmarking:
- Always anonymize peer data. Never reveal other customers' identities.
- Position benchmarks as aspirational, not critical. "Here's where you are vs. where the best performers are" not "you're below average."
- Only benchmark metrics where you have statistically meaningful data (minimum 20+ accounts in the comparison set).
- If the customer is already top-quartile, highlight it. Nothing builds confidence like knowing they're leading their peers.
Look Forward (33% of time)
Purpose: Co-create the next quarter's success plan and secure commitment.
Structure:
1. Ask about the customer's business priorities for next quarter
(do NOT assume - let them tell you)
2. Propose 2-3 measurable goals tied to their priorities
3. Define mutual commitments:
- What your team will deliver (training, features, support)
- What their team needs to do (adoption, feedback, stakeholder access)
4. Set milestones and check-in cadence
5. If appropriate: discuss renewal timeline or expansion opportunity
Success plan template:
| Goal | Metric | Target | Owner | Deadline |
|-----------------------------|-----------------|------------|-----------|----------|
| Expand to marketing team | Active users | +25 users | Customer | End Q3 |
| Launch automated workflows | Workflows built | 10 active | Joint | Mid Q3 |
| Achieve 40% time savings    | Process audit   | 40% faster | Customer  | End Q3   |
QBR Slide Deck Template
Slide-by-Slide Guide
Slide 1: Title Slide
- Customer logo + your logo
- "Quarterly Business Review - Q[X] [Year]"
- Date and attendees
- Confidential marking
Slide 2: Agenda
- Look Back: Q[X-1] in Review
- Look Around: Benchmarks & Best Practices
- Look Forward: Q[X] Success Plan
- Discussion & Next Steps
- Time allocation for each section
Slide 3: Partnership Summary
- Customer since: [date]
- Current plan/tier: [details]
- Key milestones in the relationship
- Current health score (if shared with customer)
- CSM and support contacts
Slide 4: Goals vs. Actuals (Look Back)
- Table format with RAG status per goal
- Include data visualization for key metrics
- Narrative for each goal: what happened and why
Slide 5: Usage & Adoption Highlights (Look Back)
- 2-3 charts showing usage trends
- Active users over time
- Feature adoption heatmap
- Key workflow completion rates
Slide 6: Value Delivered (Look Back)
- Quantified ROI if available
- Time saved, efficiency gained, revenue impacted
- Customer quotes or testimonials if available
- Before/after comparison
Slide 7: Benchmarks (Look Around)
- Comparison to anonymized peer group
- 3-4 metrics where comparison is meaningful
- Highlight areas of strength and opportunity
Slide 8: Best Practices & Product Updates (Look Around)
- 2-3 tips from top-performing customers
- Relevant product releases from the quarter
- Upcoming features on the roadmap (relevant ones only)
Slide 9: Proposed Q[X] Goals (Look Forward)
- 2-3 measurable goals
- Tied to customer's stated business priorities
- Include metrics, targets, and owners
Slide 10: Success Plan & Action Items (Look Forward)
- Detailed action items with owners and deadlines
- Both customer and CS team commitments
- Check-in schedule for the quarter
- Escalation path if goals are at risk
Slide 11 (optional): Renewal / Expansion Discussion
- Only include if timing is appropriate
- Frame as "how to get more value" not "how to spend more"
- Connect expansion to goals discussed in Look Forward
Executive QBR Variant
For C-level attendees, modify the standard QBR:
Changes from standard QBR:
- Cut to 30 minutes maximum (executives have less patience)
- Lead with business outcomes, not product usage
- One slide on ROI/value delivered (the only slide that matters to them)
- Strategic alignment: "Here's how our partnership connects to your
company's top 3 priorities this year"
- Skip feature-level details entirely
- End with one clear ask or decision point
Executive QBR Slide Structure (5 slides max):
1. Executive Summary (1 slide, 3-4 bullets)
2. Business Outcomes Delivered (1 slide, quantified)
3. Strategic Alignment (1 slide, connecting product to company priorities)
4. Next Quarter Plan (1 slide, 2-3 goals)
5. Discussion / Decision Point (1 slide, one clear ask)
Post-QBR Follow-Up
Timeline | Action
---------|----------------------------------------------------------
Day 0 | Send thank-you email with slide deck attached
Day 1 | Send QBR summary email with action items, owners, deadlines
Day 1 | Update CRM: log QBR, update health score, note next steps
Day 7 | Begin executing on your committed action items
Day 14 | First progress check-in on action items
Day 30 | Mid-quarter check: are we on track for QBR goals?
Day 60 | Pre-QBR data collection begins for next quarter
Day 75   | Schedule next QBR, send pre-read materials
QBR Summary Email Template
Subject: [Customer Name] - Q[X] QBR Summary & Action Items
Hi [Name],
Thank you for a productive QBR today. Here's a summary of what we covered
and our commitments for Q[X]:
**Key Highlights from Q[X-1]:**
- [Achievement 1 with metric]
- [Achievement 2 with metric]
- [Area for improvement and plan]
**Q[X] Goals:**
1. [Goal] - Target: [metric] - Owner: [name] - By: [date]
2. [Goal] - Target: [metric] - Owner: [name] - By: [date]
3. [Goal] - Target: [metric] - Owner: [name] - By: [date]
**Action Items:**
| Action | Owner | Due Date |
|--------|-------|----------|
| [item] | [who] | [when] |
Our next check-in is scheduled for [date]. Please let me know if anything
changes on your end or if you'd like to adjust our goals.
Best,
[CSM Name]
QBR Effectiveness Metrics
Metric | Target | Measurement
--------------------------------|-----------|----------------------------------
QBR completion rate | >85% | QBRs held / QBRs scheduled
Executive attendance rate | >50% | QBRs with exec present / total
Action item completion rate | >75% | Items completed by next QBR
Goal achievement rate | >60% | Goals achieved / goals set
Post-QBR health score change | Positive | Health score 30 days post-QBR
Post-QBR NPS/CSAT | >8 | Survey sent within 48 hours
Expansion sourced from QBRs | >25% | Expansion deals with QBR origin
Renewal rate for QBR accounts   | >90%      | Renewal rate for accounts with QBRs