support-analytics
Use this skill when measuring CSAT, NPS, resolution time, deflection rates, or analyzing support trends. Triggers on CSAT, NPS, resolution time, deflection rate, support metrics, trend analysis, support reporting, and any task requiring customer support data analysis or reporting.
support-analytics is a production-ready AI agent skill for claude-code, gemini-cli, and openai-codex. It covers measuring CSAT, NPS, resolution time, and deflection rates, and analyzing support trends.
Quick Facts
| Field | Value |
|---|---|
| Category | operations |
| Version | 0.1.0 |
| Platforms | claude-code, gemini-cli, openai-codex |
| License | MIT |
How to Install
- Make sure you have Node.js installed on your machine.
- Run the following command in your terminal:
npx skills add AbsolutelySkilled/AbsolutelySkilled --skill support-analytics
- The support-analytics skill is now available in your AI coding agent (Claude Code, Gemini CLI, OpenAI Codex, etc.).
Overview
Support analytics turns raw ticket data into operational intelligence. The goal is not to generate reports - it is to change behavior. Whether measuring how satisfied customers are after an interaction, how quickly issues are resolved, or how often customers find answers without contacting support, every metric should connect to a decision. This skill covers the full analytics lifecycle: what to measure, how to measure it, and how to act on what you find.
Tags
support-analytics csat nps resolution-time deflection metrics
Platforms
- claude-code
- gemini-cli
- openai-codex
Frequently Asked Questions
What is support-analytics?
Use this skill when measuring CSAT, NPS, resolution time, deflection rates, or analyzing support trends. Triggers on CSAT, NPS, resolution time, deflection rate, support metrics, trend analysis, support reporting, and any task requiring customer support data analysis or reporting.
How do I install support-analytics?
Run npx skills add AbsolutelySkilled/AbsolutelySkilled --skill support-analytics in your terminal. The skill will be immediately available in your AI coding agent.
What AI agents support support-analytics?
This skill works with claude-code, gemini-cli, openai-codex. Install it once and use it across any supported AI coding agent.
Maintainers
Generated from AbsolutelySkilled
SKILL.md
Support Analytics
Support analytics turns raw ticket data into operational intelligence. The goal is not to generate reports - it is to change behavior. Whether measuring how satisfied customers are after an interaction, how quickly issues are resolved, or how often customers find answers without contacting support, every metric should connect to a decision. This skill covers the full analytics lifecycle: what to measure, how to measure it, and how to act on what you find.
When to use this skill
Trigger this skill when the user:
- Wants to set up or improve a CSAT or NPS measurement program
- Needs to track, report on, or reduce resolution time or first-contact resolution
- Asks about deflection rate or self-service effectiveness
- Wants to analyze support ticket trends, topic clusters, or volume forecasting
- Needs to build a support dashboard for an executive, team lead, or agent
- Is creating a support metrics framework or KPI hierarchy
- Asks about survey design, response rate improvement, or score interpretation
- Needs to segment support data by channel, tier, topic, or agent
Do NOT trigger this skill for:
- Product analytics or funnel metrics (use analytics-engineering instead)
- Infrastructure monitoring, SLOs, or error rate tracking (use backend-engineering instead)
Key principles
Measure what matters, not what's easy - Ticket volume is easy to count but rarely actionable on its own. Focus on metrics that reveal customer experience and operational efficiency: CSAT, resolution time, and deflection rate expose the health of your support operation far more than raw volume does.
Benchmarks are starting points, not goals - Industry benchmarks give you a calibration point, not a finish line. A CSAT of 85% may be excellent for a complex enterprise product and unacceptable for a consumer app. Compare to your own historical trend first; compare to benchmarks second.
Trends matter more than snapshots - A single week's CSAT score means almost nothing. A 12-week trend that is declining 1 point per week means something is systematically wrong. Always show time-series data alongside point-in-time figures. Week-over-week and month-over-month comparisons prevent overreaction to normal variance.
Segment by channel, tier, and topic - Aggregate scores hide the story. A CSAT of 82% overall might mask a chat score of 91% and an email score of 68%. Segmenting by channel, customer tier, product area, and ticket topic reveals where to invest and what is working.
Close the loop - insights to action - An analytics program that produces dashboards no one acts on is a cost center. Every metric should have a DRI (directly responsible individual), a target, and a process for escalating when the target is missed. The cadence is: measure, review, decide, act, re-measure.
Core concepts
Satisfaction metrics
CSAT (Customer Satisfaction Score) - A post-interaction rating, typically 1-5 stars or a thumbs up/down, sent immediately after a ticket closes. Measures satisfaction with a specific support interaction, not the product overall. The score is the percentage of positive responses out of total responses received.
NPS (Net Promoter Score) - A relationship-level survey asking "How likely are you to recommend us to a colleague?" on a 0-10 scale. Promoters (9-10) minus Detractors (0-6) equals the NPS. Transactional NPS (tNPS) is sent after support interactions to capture loyalty impact from a specific resolution.
CES (Customer Effort Score) - Measures how easy it was to get help: "How much effort did you personally have to put forth to handle your request?" Low effort correlates with reduced churn more reliably than high satisfaction does.
Operational metrics
First Contact Resolution (FCR) - The percentage of tickets resolved on the first reply without the customer needing to follow up. High FCR is the single strongest predictor of high CSAT. Improving FCR reduces cost and improves satisfaction simultaneously.
Resolution Time - The elapsed time from ticket creation to resolution. Report as median (p50) and p90 to capture both typical experience and worst-case outliers. Segment by ticket priority, channel, and topic - a blanket average hides whether P1 bugs are being prioritized over billing questions.
Handle Time - Agent-active time spent on a ticket (not elapsed clock time). Useful for capacity planning and identifying where agents need tooling or training improvements.
Reopen Rate - Percentage of resolved tickets reopened by the customer. A high reopen rate indicates resolutions are incomplete or unclear, or that the underlying issue is recurring.
Self-service metrics
Deflection Rate - The percentage of potential support contacts handled by self-service (docs, chatbot, FAQ) without reaching a human. Calculated as deflections / (deflections + human contacts). Hard to measure precisely - proxy methods include doc views before ticket submission and chatbot resolution rates.
Article Effectiveness - For knowledge bases: the percentage of doc views that end without a support ticket being submitted. Track alongside search-with-no-results counts to identify content gaps.
Containment Rate - For chatbots and IVR: the percentage of sessions that reach a resolution without escalating to a human. A session can be contained but still leave the customer unsatisfied - always pair with a satisfaction signal.
Quality metrics
QA Score - Internal quality assurance review of ticket handling: tone, accuracy, policy adherence, completeness. Typically sampled (5-10% of tickets) and scored on a rubric. Correlates with CSAT but catches issues that surveys miss such as correct but cold responses.
Agent CSAT - CSAT segmented by individual agent. Useful for coaching, not for ranking. Agents on complex ticket queues will have lower scores than agents on simple billing questions - normalize by ticket type before comparing agents.
Common tasks
Set up a metrics framework - KPI hierarchy
Build a three-tier hierarchy: strategic, operational, and diagnostic.
| Tier | Audience | Cadence | Examples |
|---|---|---|---|
| Strategic | Leadership | Monthly / Quarterly | NPS, CSAT trend, cost-per-ticket, deflection rate |
| Operational | Support managers | Weekly | FCR, median resolution time, reopen rate, volume by channel |
| Diagnostic | Team leads, agents | Daily | Queue depth, SLA breach rate, handle time, QA score |
Start by identifying who reads each metric and what decision it drives. If no one owns the decision triggered by a metric, do not track it yet.
Steps:
- List current pain points from support team retrospectives
- Map each pain point to a metric category (satisfaction, operational, quality)
- Define the measurement method and data source for each metric
- Assign a DRI and a target for each metric
- Build the minimal dashboard needed to surface all three tiers
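The "no owner, no metric" rule from the steps above can be enforced with a small declarative registry. A hedged Python sketch - the tier names, metrics, DRIs, and targets below are illustrative placeholders, not prescribed values:

```python
# Minimal KPI registry - every entry must name a DRI and a target.
# Metric names, owners, and targets are illustrative placeholders.
KPI_HIERARCHY = [
    {"tier": "strategic",   "metric": "NPS",         "dri": "vp_support",      "target": ">= +40", "cadence": "monthly"},
    {"tier": "operational", "metric": "FCR",         "dri": "support_manager", "target": ">= 60%", "cadence": "weekly"},
    {"tier": "diagnostic",  "metric": "queue_depth", "dri": "team_lead",       "target": "<= 50",  "cadence": "daily"},
]

def validate(kpis):
    """Refuse to track any metric that lacks an owner or a target."""
    missing = [k["metric"] for k in kpis if not k.get("dri") or not k.get("target")]
    if missing:
        raise ValueError(f"No DRI or target for: {missing}")
    return True

validate(KPI_HIERARCHY)  # raises if any metric is ownerless
```

A metric that fails validation is not tracked yet - exactly the rule stated above.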
Measure and improve CSAT - survey design and analysis
Survey design checklist:
- Send within 1 hour of ticket close - response rate drops sharply after 24 hours
- Keep to 1-2 questions: the rating plus one optional free-text follow-up
- Use a consistent scale - do not mix 5-star with thumbs up/down across touchpoints
- Personalize the subject line with the agent's name and ticket topic
Calculation:
CSAT = (4-star + 5-star responses) / total responses * 100
Analysis steps:
- Segment by channel, agent, ticket category, and customer tier
- Tag all 1-2 star responses within 24 hours - look for patterns in verbatim feedback
- Build a weekly trend chart with 4-week moving average to smooth noise
- Create a detractor recovery workflow: manager outreach within 24 hours for any 1-star
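The calculation and segmentation steps above can be sketched in Python. The rating and channel field names are illustrative, not from any specific survey tool:

```python
from collections import defaultdict

def csat(ratings):
    """CSAT % = positive (4-5 star) responses / total responses * 100."""
    if not ratings:
        return None  # no responses: report "no data", never 0%
    positive = sum(1 for r in ratings if r >= 4)
    return round(positive / len(ratings) * 100, 1)

def csat_by_segment(rows, key):
    """Group survey rows (dicts) by a segment field and score each group."""
    groups = defaultdict(list)
    for row in rows:
        groups[row[key]].append(row["rating"])
    return {segment: csat(rs) for segment, rs in groups.items()}

rows = [
    {"channel": "chat",  "rating": 5},
    {"channel": "chat",  "rating": 4},
    {"channel": "email", "rating": 2},
    {"channel": "email", "rating": 5},
]
print(csat_by_segment(rows, "channel"))  # {'chat': 100.0, 'email': 50.0}
```

Segmenting before reporting is the point: the per-channel gap above is invisible in the blended score.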
Improving response rate:
- Subject line "How did [Agent Name] do?" outperforms generic phrasing
- Mobile-optimized survey - most customers open on phone
- Remove login requirement - anonymous responses get 2-3x higher response rate
Implement NPS program - collection and segmentation
Collection strategy:
- Send after significant support interactions (not every ticket)
- Trigger rules: send after complex tickets, P1 resolutions, or any escalation closed
- Suppress repeat surveys: do not survey the same customer more than once every 90 days
Calculation:
NPS = Promoters% - Detractors%
Example: 60% promoters, 15% detractors, 25% passives
NPS = 60 - 15 = 45
Segmentation framework:
| Segment | Score | Action |
|---|---|---|
| Promoters | 9-10 | Case studies, referral asks, community invites |
| Passives | 7-8 | Identify friction - most at risk of churn on next negative event |
| Detractors | 0-6 | Close-the-loop call within 48 hours; flag to CSM if enterprise tier |
Segment NPS by customer tier, product area, support channel, and account age. New customers tend to score differently than long-tenured accounts.
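As a minimal sketch, the NPS calculation with the example numbers above:

```python
def nps(scores):
    """NPS = % promoters (9-10) minus % detractors (0-6); range -100..+100."""
    if not scores:
        return None
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round((promoters - detractors) / len(scores) * 100)

# 60% promoters, 15% detractors, 25% passives -> NPS = 45
scores = [10] * 60 + [3] * 15 + [7] * 25
print(nps(scores))  # 45
```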
Track and optimize resolution time
Measurement setup:
- Track created_at to resolved_at in your ticketing system
- Report median (p50) and 90th percentile (p90) - averages mask outlier drag
- Exclude pending-customer time from elapsed calculation (clock pauses when waiting on customer)
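A small Python sketch of percentile reporting, assuming the resolution hours already exclude pending-customer time (sample values are illustrative):

```python
def percentile(values, p):
    """Linear-interpolation percentile; p in 0..100."""
    xs = sorted(values)
    if not xs:
        return None
    k = (len(xs) - 1) * p / 100
    lo, hi = int(k), min(int(k) + 1, len(xs) - 1)
    return xs[lo] + (xs[hi] - xs[lo]) * (k - lo)

# Resolution hours with pending-customer time already excluded (sample data)
resolution_hours = [1, 2, 2, 3, 4, 5, 8, 12, 30, 80]
print(round(percentile(resolution_hours, 50), 2))  # 4.5  (typical experience)
print(round(percentile(resolution_hours, 90), 2))  # 35.0 (worst-case tail)
# The mean of the same data is 14.7 - dragged up by the two outliers.
```

This is why the median and p90 together tell the story a blanket average hides.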
SLA framework:
| Priority | Target Resolution | Alert At |
|---|---|---|
| P1 - Service down | 4 hours | 2 hours |
| P2 - Major feature broken | 24 hours | 16 hours |
| P3 - Minor issue / workaround available | 72 hours | 48 hours |
| P4 - Question / enhancement | 7 days | 5 days |
Root cause analysis for high resolution time:
- Identify the top 10% slowest tickets in a period
- Tag reasons: awaiting escalation, waiting on engineering, reassigned, unclear ask
- Quantify each reason as a percentage of slow tickets
- Prioritize fixes by volume x impact - routing logic and escalation paths are typically top two
A declining resolution time with a rising reopen rate means agents are closing tickets prematurely. Always track both together.
Measure deflection rate - self-service effectiveness
Proxy measurement methods (direct deflection is rarely measurable):
- Doc-to-ticket ratio - Track customers who viewed a help article and then submitted a ticket within 30 minutes. Low ratio means effective docs.
- Chatbot containment - % of chatbot sessions that reach resolution without escalating to a human. Target 40-60% for most support types.
- Search abandonment - In your help center, track searches that end without a page view. High abandonment signals a content gap.
- Before/after experiment - Publish a new article on a common topic, compare ticket volume for that topic over the next 30 days vs prior 30 days.
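The core ratio behind these proxies can be sketched directly (the counts below are illustrative):

```python
def deflection_rate(deflected, human_contacts):
    """deflections / (deflections + human contacts), as a percentage."""
    total = deflected + human_contacts
    return round(deflected / total * 100, 1) if total else None

# e.g. 300 help-center sessions with no ticket within 24h, 700 human tickets
print(deflection_rate(300, 700))  # 30.0
```

Which events count as "deflected" is the hard part - the proxy methods above are ways of estimating that numerator.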
Improving deflection:
- Run monthly content gap analysis: top 20 ticket topics vs help center coverage
- Add article links to auto-acknowledgment emails for common categories
- Implement a post-submission deflection prompt: show matching articles after ticket submit
Analyze support trends - topic clustering and forecasting
Topic clustering workflow:
- Export ticket titles and first customer messages for a 30-90 day window
- Group tickets by existing tags first - identify gaps where >10% have no tag
- Use keyword frequency on untagged tickets to surface emerging topics
- Update your taxonomy - aim for 80%+ of tickets tagged to a specific topic
- Review top 10 topics weekly; track volume trend, CSAT, and resolution time per topic
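Step 3 above (keyword frequency on untagged tickets) needs nothing more than a counter. The stopword list and titles below are illustrative:

```python
import re
from collections import Counter

# Minimal stopword list - extend for your domain
STOPWORDS = {"the", "a", "an", "to", "is", "in", "on", "for", "my", "and", "of"}

def top_keywords(untagged_titles, n=10):
    """Surface candidate topics from untagged ticket titles by word frequency."""
    words = []
    for title in untagged_titles:
        words += [w for w in re.findall(r"[a-z']+", title.lower())
                  if w not in STOPWORDS and len(w) > 2]
    return Counter(words).most_common(n)

titles = ["Cannot reset password", "Password reset email never arrives",
          "Invoice shows wrong amount", "Reset link expired"]
print(top_keywords(titles, 2))  # [('reset', 3), ('password', 2)]
```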
Volume forecasting:
- Use 12 weeks of weekly ticket volume as baseline
- Apply seasonal adjustment for known events (product launches, billing cycles, holidays)
- 4-week trailing average with +20% buffer as capacity target
- Flag any week where volume exceeds forecast by >30% as an anomaly requiring investigation
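The trailing-average capacity target and the anomaly flag above can be sketched as:

```python
def capacity_target(weekly_volumes, buffer=0.20):
    """4-week trailing average of ticket volume plus a staffing buffer."""
    recent = weekly_volumes[-4:]
    return sum(recent) / len(recent) * (1 + buffer)

def is_anomaly(actual, forecast, threshold=0.30):
    """Flag weeks where volume exceeds forecast by more than the threshold."""
    return actual > forecast * (1 + threshold)

volumes = [480, 510, 495, 515]        # last 4 weeks of ticket volume (sample)
target = capacity_target(volumes)     # 500 average * 1.2 buffer = 600 tickets
print(is_anomaly(820, target))        # True: 820 > 600 * 1.3
```

Seasonal adjustment would divide each week by its seasonal index before averaging, per the formulas in metric-formulas.md.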
Trend signals to monitor:
- New topic appearing in top 10 that was not there last month - possible product regression
- CSAT drop on a specific topic without volume change - agent knowledge gap or policy confusion
- Resolution time increase on one channel only - tooling or routing issue
Build support dashboards - by audience
Executive dashboard (monthly business review):
| Panel | Metric | Visualization |
|---|---|---|
| Customer Sentiment | CSAT 12-month trend + NPS | Line chart with benchmark line |
| Efficiency | Cost per ticket, deflection rate | KPI card + trend sparkline |
| Volume | Total contacts by channel | Stacked bar, MoM comparison |
| Highlights | Top 3 topic drivers, worst-performing category | Table |
Manager dashboard (weekly ops review):
| Panel | Metric | Visualization |
|---|---|---|
| Volume | Tickets opened/closed, backlog | Area chart |
| Quality | CSAT by channel, reopen rate | Bar chart |
| Speed | Median + p90 resolution time vs SLA | Gauge + trend |
| Team | FCR by agent, QA scores | Table with conditional formatting |
Agent dashboard (daily view):
- Personal queue: open tickets, SLA risk, oldest unresolved
- Personal CSAT for last 30 days (not ranked against peers)
- Today's handle time vs personal average
Gotchas
CSAT surveys sent more than 24 hours after ticket close get response bias - Surveys sent days after resolution disproportionately capture customers who had extreme experiences (very positive or very negative) because neutral customers have moved on. Automate delivery within 1 hour of ticket close to get a representative sample.
FCR self-reporting by agents inflates the metric - If agents mark tickets as "resolved first contact" manually, they will mark optimistically. FCR should be measured by the ticketing system based on whether the customer reopened or submitted a new ticket on the same topic within 72 hours, not by agent judgment.
Chatbot containment rate hides frustrated escalation paths - If customers cannot find the escalation button, your containment rate looks great while your CSAT tanks. Always pair containment rate with a post-deflection CSAT signal (even a thumbs up/down) to distinguish genuinely resolved sessions from abandoned ones.
Normalizing agent CSAT by ticket type requires a large sample - Comparing agents with statistical significance requires at minimum 30 surveys per agent per segment. Trying to normalize by ticket type with small sample sizes produces rankings that are noise, not signal. Use QA score for coaching with small agent pools instead.
Volume forecasting without seasonality adjustments leads to understaffing - Applying a flat growth rate to weekly volume ignores known spikes (product launches, billing cycle dates, end-of-fiscal-year surges). Build a seasonal adjustment factor by comparing the same week across prior years before making staffing decisions.
Anti-patterns
| Anti-pattern | Why it's wrong | What to do instead |
|---|---|---|
| Tracking CSAT average without response rate | A 95% CSAT from 3% response rate is meaningless - response bias distorts the score | Always report response rate alongside CSAT; investigate if below 15% |
| Comparing agent CSAT without normalizing by ticket type | Agents on billing queues outscore agents on complex bug reports by default | Segment CSAT by ticket category before comparing agents; use for coaching only |
| Reporting resolution time as an average | Averages are pulled high by a small number of outliers, masking the typical experience | Use median (p50) as primary; add p90 to surface worst-case |
| Measuring deflection rate from chatbot containment alone | Bots can block escalation paths, yielding high containment and low satisfaction | Pair containment with post-deflection CSAT; 0 escalations + low satisfaction is a false positive |
| Building dashboards without a decision owner | Dashboards created without a defined reviewer become shelfware | Identify the decision each dashboard drives before building; assign a weekly reviewer |
| Chasing benchmark NPS without context | A software company and a logistics provider should not share the same NPS target | Set targets relative to your own historical trend and competitive cohort, not generic benchmarks |
References
For detailed content on specific topics, read the relevant file from references/:
- references/metrics-benchmarks.md - Industry benchmarks for CSAT, NPS, resolution time, and deflection rate by company size and vertical
Only load a references file if the current task requires deep detail on that topic.
References
benchmarks.md
Support Analytics - Industry Benchmarks
Benchmarks for customer support metrics segmented by company size, vertical, and channel. Use these as starting points for target-setting, not as absolute standards. Every benchmark should be validated against your own historical data.
CSAT benchmarks
By industry vertical
| Vertical | Median CSAT | Top quartile | Notes |
|---|---|---|---|
| SaaS / Software | 78% | 88%+ | Higher for B2B than B2C |
| E-commerce | 80% | 90%+ | Driven by shipping and returns experience |
| Financial services | 75% | 85%+ | Heavily regulated, slower resolution expected |
| Healthcare | 72% | 82%+ | Complex issues, privacy constraints |
| Telecommunications | 68% | 78%+ | Historically lowest due to billing complexity |
| Hospitality / Travel | 76% | 86%+ | High emotional stakes, seasonality |
By support channel
| Channel | Typical CSAT | Why |
|---|---|---|
| Live chat | 82-88% | Fast, convenient, in-context |
| Phone | 78-85% | Personal but wait times hurt scores |
| Email / ticket | 72-80% | Slow, lacks real-time back-and-forth |
| Social media | 65-75% | Public complaints, often escalated issues |
| Self-service (when successful) | 85-92% | Instant, no waiting, user feels empowered |
By company stage
| Stage | Typical CSAT | Context |
|---|---|---|
| Early startup (< 50 customers) | 90%+ | Founders answering tickets, high-touch |
| Growth (50-500 customers) | 78-85% | Scaling pains, first hires learning |
| Scale (500-5000 customers) | 75-82% | Process-driven, specialization emerging |
| Enterprise (5000+ customers) | 72-80% | Tiered support, more complex issues |
NPS benchmarks
By industry
| Vertical | Median NPS | Top quartile | World class |
|---|---|---|---|
| SaaS / Software | +30 | +50 | +70 |
| E-commerce | +40 | +60 | +75 |
| Financial services | +20 | +40 | +60 |
| Healthcare | +15 | +35 | +55 |
| Telecommunications | +5 | +25 | +45 |
NPS interpretation scale
| NPS range | Interpretation |
|---|---|
| -100 to 0 | Critical - systemic issues, high churn risk |
| 0 to +20 | Below average - significant improvement needed |
| +20 to +50 | Good - competitive but room for improvement |
| +50 to +70 | Excellent - strong loyalty and word-of-mouth |
| +70 to +100 | World class - rare, typically category leaders |
Resolution time benchmarks
First Response Time (FRT) by channel
| Channel | Good | Average | Poor |
|---|---|---|---|
| Live chat | < 30 seconds | 1-3 minutes | > 5 minutes |
| Phone | < 60 seconds | 2-5 minutes | > 10 minutes |
| Email / ticket | < 4 hours | 4-12 hours | > 24 hours |
| Social media | < 1 hour | 1-4 hours | > 8 hours |
Average Resolution Time by complexity
| Complexity | Good | Average | Poor |
|---|---|---|---|
| Simple (how-to, password reset) | < 2 hours | 2-8 hours | > 24 hours |
| Medium (config issue, bug report) | < 8 hours | 8-24 hours | > 48 hours |
| Complex (integration, data issue) | < 24 hours | 1-3 days | > 5 days |
| Critical (outage, data loss) | < 4 hours | 4-8 hours | > 12 hours |
Other efficiency benchmarks
| Metric | Good | Average | Needs attention |
|---|---|---|---|
| One-touch resolution rate | > 60% | 40-60% | < 40% |
| Reopen rate | < 5% | 5-10% | > 10% |
| Tickets per agent per day | 8-15 | 15-25 | > 25 (burnout risk) |
| Agent utilization | 60-75% | 75-85% | > 85% (no buffer) |
Deflection rate benchmarks
By self-service channel
| Channel | Good | Average | Poor |
|---|---|---|---|
| Knowledge base / help center | 30-45% | 15-30% | < 15% |
| AI chatbot | 25-40% | 10-25% | < 10% |
| Community forum | 15-25% | 5-15% | < 5% |
| In-app contextual help | 35-50% | 20-35% | < 20% |
Overall deflection by company maturity
| Maturity | Typical rate | What drives it |
|---|---|---|
| No self-service investment | 5-10% | Incidental (users Googling) |
| Basic help center | 15-25% | Static articles for top issues |
| Optimized self-service | 25-40% | Data-driven content, chatbot, in-app help |
| AI-augmented | 35-50%+ | ML-powered search, generative answers, proactive help |
Cost benchmarks
Cost per ticket by channel
| Channel | Low | Median | High |
|---|---|---|---|
| Phone | $6 | $10 | $16 |
| Email / ticket | $3 | $5 | $8 |
| Live chat | $2 | $4 | $6 |
| AI chatbot (automated) | $0.10 | $0.50 | $1.00 |
| Self-service (knowledge base) | $0.05 | $0.25 | $0.50 |
Support cost as percentage of revenue
| Company stage | Typical range | Context |
|---|---|---|
| Early stage | 15-25% | Small team, high per-customer cost |
| Growth | 8-15% | Scaling with process, some automation |
| Scale | 5-10% | Mature processes, deflection working |
| Enterprise | 3-7% | Tiered support, heavy self-service |
How to use these benchmarks
- Start with your own baseline - measure your current metrics for 4-8 weeks before comparing to benchmarks
- Find your closest peer group - match by vertical, company size, and support channel mix
- Set targets relative to your baseline - aim for 10-15% improvement per quarter, not an immediate jump to "top quartile"
- Benchmark internally first - compare teams, channels, and segments within your org before looking externally
- Update annually - industry benchmarks shift as tooling and customer expectations evolve
metric-formulas.md
Support Metrics - Formula Reference
Complete formula reference for all support analytics metrics, including statistical considerations for reliable measurement.
CSAT formulas
Basic CSAT score
CSAT % = (count of 4 and 5 responses) / (total responses) * 100
Weighted CSAT (when response volumes differ across segments)
Weighted CSAT = SUM(segment_csat * segment_ticket_volume) / total_ticket_volume
Use weighted CSAT when comparing across segments with very different volumes. A segment with 10 responses and 100% CSAT should not dominate reporting.
CSAT confidence interval
For a given sample size and CSAT score, the 95% confidence interval is:
margin_of_error = 1.96 * sqrt((csat * (1 - csat)) / n)
Example:
CSAT = 0.82 (82%), n = 200 responses
margin = 1.96 * sqrt(0.82 * 0.18 / 200) = 1.96 * 0.0272 = 0.053
95% CI: 76.7% to 87.3%
Minimum sample sizes for reliable CSAT:
| Desired margin of error | Required responses (at 80% CSAT) |
|---|---|
| +/- 10% | 62 |
| +/- 5% | 246 |
| +/- 3% | 683 |
| +/- 1% | 6,147 |
If your weekly response count falls below 62, report CSAT as directional only, not as a reliable metric.
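A quick Python check of the margin-of-error formula above, using the worked example's numbers:

```python
import math

def csat_margin(csat, n, z=1.96):
    """95% margin of error for a CSAT proportion (csat as a fraction 0..1)."""
    return z * math.sqrt(csat * (1 - csat) / n)

m = csat_margin(0.82, 200)
print(f"+/- {m * 100:.1f} points")  # +/- 5.3 points -> CI 76.7% to 87.3%
```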
Response rate and bias
Response Rate = total_responses / total_resolved_tickets * 100
Response rates below 10% introduce severe selection bias - dissatisfied customers are more likely to respond, skewing CSAT downward. Conversely, if only prompted users respond, CSAT may skew upward.
Corrective strategies:
- Target 15-25% response rate as the reliability threshold
- Compare respondent demographics to full ticket population
- Use stratified sampling if certain segments under-respond
NPS formulas
Standard NPS
NPS = (% Promoters) - (% Detractors)
Where:
Promoters: score 9 or 10
Passives: score 7 or 8
Detractors: score 0 through 6
Range: -100 to +100
NPS margin of error
NPS standard error = sqrt((p_promoter * (1 - p_promoter) + p_detractor * (1 - p_detractor)
+ 2 * p_promoter * p_detractor) / n)
95% CI = NPS +/- 1.96 * standard_error
Example:
n = 300, Promoters = 40%, Detractors = 20%, NPS = +20
SE = sqrt((0.40 * 0.60 + 0.20 * 0.80 + 2 * 0.40 * 0.20) / 300)
= sqrt((0.24 + 0.16 + 0.16) / 300) = sqrt(0.001867) = 0.0432
95% CI: +20 +/- 8.5 points -> range [+11.5, +28.5]
Minimum sample sizes for NPS:
| Desired margin of error | Required responses |
|---|---|
| +/- 10 points | ~100 |
| +/- 5 points | ~400 |
| +/- 3 points | ~1,100 |
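The standard-error formula above, sketched in Python with the worked example's inputs:

```python
import math

def nps_standard_error(p_promoter, p_detractor, n):
    """Standard error of NPS as a fraction (multiply by 100 for NPS points)."""
    var = (p_promoter * (1 - p_promoter)
           + p_detractor * (1 - p_detractor)
           + 2 * p_promoter * p_detractor)
    return math.sqrt(var / n)

se = nps_standard_error(0.40, 0.20, 300)
print(f"+/- {1.96 * se * 100:.1f} points")  # +/- 8.5 points
```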
NPS trend significance
To determine if an NPS change between two periods is statistically significant:
z = (NPS_2 - NPS_1) / sqrt(SE_1^2 + SE_2^2)
If |z| > 1.96, the change is significant at p < 0.05
Resolution time formulas
Percentile-based reporting
Always report resolution time as percentiles, not averages. Averages are skewed by outliers (one 30-day ticket destroys a weekly average).
Key percentiles:
p50 (median): Typical customer experience
p75: Start of the "long tail" - where most frustration lives
p90: Worst common experience
p95: Near-worst case, useful for SLA compliance
SQL:
SELECT
PERCENTILE_CONT(0.50) WITHIN GROUP (ORDER BY resolution_hours) AS p50,
PERCENTILE_CONT(0.75) WITHIN GROUP (ORDER BY resolution_hours) AS p75,
PERCENTILE_CONT(0.90) WITHIN GROUP (ORDER BY resolution_hours) AS p90,
PERCENTILE_CONT(0.95) WITHIN GROUP (ORDER BY resolution_hours) AS p95
FROM tickets
WHERE resolved_at IS NOT NULL
AND resolved_at >= NOW() - INTERVAL '28 days';
Business hours adjustment
Raw resolution time includes nights and weekends. For teams with defined business hours, calculate business-hours resolution time:
Business Hours Resolution = total_elapsed_time - non_business_hours_in_range
Non-business hours calculation:
For each calendar day in the range:
If weekday: subtract hours outside 9am-6pm (15h per day)
If weekend: subtract full 24h
If holiday: subtract full 24h
SLA compliance rate
SLA Compliance % = (tickets resolved within SLA target) / (total tickets) * 100
Report per priority tier:
P1 SLA compliance: tickets_resolved_under_4h / total_p1_tickets * 100
P2 SLA compliance: tickets_resolved_under_24h / total_p2_tickets * 100
...
Deflection rate formulas
Standard deflection rate
Deflection Rate = deflected / (deflected + not_deflected) * 100
Where:
deflected = self-service views with NO ticket created within 24h
not_deflected = self-service views WITH ticket created within 24h
Content effectiveness score
For individual help articles or chatbot flows:
Article Effectiveness = 1 - (tickets_created_after_view / total_article_views)
Rank articles by:
1. Total views (high traffic = high impact)
2. Effectiveness score (low score = needs improvement)
3. Impact = views * (1 - effectiveness) = potential tickets saved if improved
Deflection ROI
Monthly deflection savings =
deflected_interactions * (avg_ticket_cost - avg_self_service_cost)
Annual ROI of deflection improvement =
(new_deflection_rate - old_deflection_rate) * monthly_contacts * 12
* (avg_ticket_cost - avg_self_service_cost)
Trend analysis formulas
Trailing average (smoothing)
4-week trailing average:
avg_t = (value_t + value_t-1 + value_t-2 + value_t-3) / 4
Exponential moving average (more weight on recent data):
EMA_t = alpha * value_t + (1 - alpha) * EMA_t-1
Where alpha = 2 / (N + 1), N = number of periods
Week-over-week change detection
WoW change % = (this_week - last_week) / last_week * 100
Alert if:
|WoW change| > 2 * standard_deviation_of_weekly_changes
This catches unusual spikes or drops relative to normal variation.
Seasonality decomposition
For weekly data with known seasonal patterns:
seasonal_index_w = avg_value_for_week_w / overall_avg
Deseasonalized value:
adjusted_t = actual_t / seasonal_index_for_that_week
Use deseasonalized values for trend detection to avoid false alarms from
predictable seasonal spikes (e.g., post-holiday ticket surges).
Sampling guidelines
When ticket volume is too high for 100% survey coverage:
Stratified sampling approach:
1. Define strata: channel x priority x product_area
2. Calculate required sample per stratum (min 30 for statistical validity)
3. Sample proportionally or use minimum allocation per stratum
Simple random sampling minimum:
For 95% confidence, +/- 5% margin on a population of:
500 tickets/week: 217 surveys needed
1,000 tickets/week: 278 surveys needed
5,000 tickets/week: 357 surveys needed
10,000+ tickets/week: 370 surveys needed (diminishing returns)
metrics-benchmarks.md
Support Metrics - Industry Benchmarks
Benchmarks give you a calibration point, not a target. Always establish your own baseline over 4-8 weeks before comparing externally. Match your peer group by vertical, company size, and support channel mix before drawing conclusions.
CSAT benchmarks
By industry vertical
| Vertical | Median CSAT | Top quartile | Notes |
|---|---|---|---|
| SaaS / Software | 78% | 88%+ | Higher for B2B than B2C due to relationship depth |
| E-commerce | 80% | 90%+ | Driven primarily by shipping and returns experience |
| Financial services | 75% | 85%+ | Slower resolution expected; regulatory complexity |
| Healthcare | 72% | 82%+ | Complex issues, privacy constraints inflate effort |
| Telecommunications | 68% | 78%+ | Historically lowest - billing complexity and churn |
| Hospitality / Travel | 76% | 86%+ | High emotional stakes, strong seasonality effects |
By support channel
| Channel | Typical CSAT | Driver |
|---|---|---|
| Live chat | 82-88% | Fast, convenient, in-context responses |
| Phone | 78-85% | Personal but wait times hurt scores |
| Email / ticket | 72-80% | Slow, lacks real-time back-and-forth |
| Social media | 65-75% | Public complaints, often pre-escalated issues |
| Self-service (successful) | 85-92% | Instant resolution; user feels empowered |
By company stage
| Stage | Typical CSAT | Context |
|---|---|---|
| Early startup (< 50 customers) | 90%+ | Founders answering tickets, very high-touch |
| Growth (50-500 customers) | 78-85% | First hires learning; scaling pains visible |
| Scale (500-5,000 customers) | 75-82% | Process-driven; specialization emerging |
| Enterprise (5,000+ customers) | 72-80% | Tiered support; issues are more complex |
Response rate benchmarks
| Response rate | Interpretation |
|---|---|
| > 25% | Excellent - survey timing and delivery are healthy |
| 15-25% | Good - standard for most SaaS support programs |
| 10-15% | Acceptable - watch for response bias in low scorers |
| < 10% | Investigate - survey fatigue, wrong channel, or delivery issue |
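For reference, CSAT in these tables is conventionally the share of respondents picking the top two boxes on a 1-5 scale, and response rate is returns over sends. A minimal sketch (function names are illustrative):

```python
def csat(ratings, threshold=4):
    """Percent of ratings at or above the satisfaction threshold (top-two-box on a 1-5 scale)."""
    return round(100 * sum(r >= threshold for r in ratings) / len(ratings), 1)

def response_rate(surveys_returned, surveys_sent):
    """Percent of sent surveys that came back; below ~10% warrants investigation."""
    return round(100 * surveys_returned / surveys_sent, 1)
```

For example, `csat([5, 4, 3, 5, 2])` yields 60.0, and `response_rate(180, 1000)` yields 18.0, which lands in the "Good" band above.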
NPS benchmarks
By industry vertical
| Vertical | Median NPS | Top quartile | World class |
|---|---|---|---|
| SaaS / Software | +30 | +50 | +70 |
| E-commerce | +40 | +60 | +75 |
| Financial services | +20 | +40 | +60 |
| Healthcare | +15 | +35 | +55 |
| Telecommunications | +5 | +25 | +45 |
NPS interpretation scale
| NPS range | Interpretation |
|---|---|
| -100 to 0 | Critical - systemic issues, high churn risk |
| 0 to +20 | Below average - significant improvement needed |
| +20 to +50 | Good - competitive but meaningful room to improve |
| +50 to +70 | Excellent - strong loyalty and word-of-mouth |
| +70 to +100 | World class - rare, typically category leaders |
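To read these ranges against your own data: NPS is the percentage of promoters (9-10) minus the percentage of detractors (0-6) on the 0-10 question. A minimal sketch:

```python
def nps(scores):
    """NPS from 0-10 survey scores: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return round(100 * (promoters - detractors) / len(scores))
```

Note that passives (7-8) dilute the score without appearing in either term, which is why NPS can stay flat even as the distribution shifts.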
Transactional NPS (tNPS) vs relationship NPS
| Type | When to send | What it measures | Cadence |
|---|---|---|---|
| Relationship NPS | Periodic, no trigger event | Overall brand loyalty | Quarterly |
| Transactional NPS (tNPS) | After significant support interaction | Loyalty impact of specific resolution | Per qualifying interaction |
tNPS scores run 10-15 points higher than relationship NPS for the same company because the survey primes the customer to think about a specific resolution. Do not compare them directly.
Resolution time benchmarks
First Response Time (FRT) by channel
| Channel | Good | Average | Needs attention |
|---|---|---|---|
| Live chat | < 30 seconds | 1-3 minutes | > 5 minutes |
| Phone (hold time) | < 60 seconds | 2-5 minutes | > 10 minutes |
| Email / ticket | < 4 hours | 4-12 hours | > 24 hours |
| Social media | < 1 hour | 1-4 hours | > 8 hours |
Average Resolution Time by complexity tier
| Complexity | Good | Average | Needs attention |
|---|---|---|---|
| Simple (how-to, password reset) | < 2 hours | 2-8 hours | > 24 hours |
| Medium (config issue, bug report) | < 8 hours | 8-24 hours | > 48 hours |
| Complex (integration, data issue) | < 24 hours | 1-3 days | > 5 days |
| Critical (outage, data loss) | < 4 hours | 4-8 hours | > 12 hours |
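When comparing your own numbers to these bands, compute FRT per channel from raw timestamps rather than relying on a single blended average. A minimal sketch using the standard library (ticket records and field names are hypothetical):

```python
from collections import defaultdict
from datetime import datetime
from statistics import median

# hypothetical ticket export; field names are illustrative
tickets = [
    {"channel": "email", "created": "2024-01-05T09:00", "first_response": "2024-01-05T12:30"},
    {"channel": "email", "created": "2024-01-05T10:00", "first_response": "2024-01-06T08:00"},
    {"channel": "chat",  "created": "2024-01-05T09:00", "first_response": "2024-01-05T09:01"},
]

frt_hours = defaultdict(list)
for t in tickets:
    created = datetime.fromisoformat(t["created"])
    responded = datetime.fromisoformat(t["first_response"])
    frt_hours[t["channel"]].append((responded - created).total_seconds() / 3600)

for channel, hours in frt_hours.items():
    print(channel, round(median(hours), 2))
```

Median (or p90) is the right summary here: a few long-tail tickets will drag an arithmetic mean well past the benchmark bands.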
Efficiency metric benchmarks
| Metric | Good | Average | Needs attention |
|---|---|---|---|
| First Contact Resolution (FCR) | > 70% | 50-70% | < 50% |
| One-touch resolution rate | > 60% | 40-60% | < 40% |
| Reopen rate | < 5% | 5-10% | > 10% |
| Tickets per agent per day | 8-15 | 15-25 | > 25 (burnout risk) |
| Agent utilization | 60-75% | 75-85% | > 85% (no buffer for quality) |
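FCR and reopen rate fall out of the same ticket export: count tickets closed in a single touch, and tickets reopened after close. A minimal sketch (records and field names are hypothetical):

```python
# hypothetical ticket export; field names are illustrative
tickets = [
    {"touches": 1, "reopened": False},
    {"touches": 3, "reopened": True},
    {"touches": 1, "reopened": False},
    {"touches": 2, "reopened": False},
]

fcr = 100 * sum(t["touches"] == 1 for t in tickets) / len(tickets)      # first-contact resolution %
reopen_rate = 100 * sum(t["reopened"] for t in tickets) / len(tickets)  # reopened after close %
```

Track the two together: pushing agents toward higher FCR or more tickets per day without watching reopen rate usually just moves rework downstream.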
Deflection rate benchmarks
By self-service channel
| Channel | Good | Average | Needs attention |
|---|---|---|---|
| Knowledge base / help center | 30-45% | 15-30% | < 15% |
| AI chatbot | 25-40% | 10-25% | < 10% |
| Community forum | 15-25% | 5-15% | < 5% |
| In-app contextual help | 35-50% | 20-35% | < 20% |
Overall deflection by self-service maturity
| Maturity level | Typical rate | What drives it |
|---|---|---|
| No self-service investment | 5-10% | Incidental (users searching externally) |
| Basic help center | 15-25% | Static articles for top issues |
| Optimized self-service | 25-40% | Data-driven content, chatbot, in-app help |
| AI-augmented self-service | 35-55%+ | ML-powered search, generative answers |
Warning: deflection rates above 50% often indicate the product has UX or reliability issues that self-service is papering over. Investigate with CSAT and reopen rate before celebrating a high deflection number.
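Definitions of deflection vary by vendor; one common formulation is self-service resolutions as a share of total support demand, sketched below (the function name is illustrative):

```python
def deflection_rate(self_service_resolutions, tickets_created):
    """Share of total support demand absorbed by self-service (one common definition)."""
    total_demand = self_service_resolutions + tickets_created
    return round(100 * self_service_resolutions / total_demand, 1)
```

For example, `deflection_rate(300, 700)` yields 30.0. Whatever definition you pick, keep it fixed across periods; the trend matters more than the absolute number.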
Cost benchmarks
Cost per contact by channel
| Channel | Low | Median | High |
|---|---|---|---|
| Phone | $6 | $10 | $16 |
| Email / ticket | $3 | $5 | $8 |
| Live chat | $2 | $4 | $6 |
| AI chatbot (automated) | $0.10 | $0.50 | $1.00 |
| Self-service (knowledge base) | $0.05 | $0.25 | $0.50 |
Figures reflect US/Western Europe SaaS companies. Costs vary significantly by geography, labor market, and tooling stack.
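A blended cost per contact weighted by your actual channel mix is more useful than any single row above. A minimal sketch using the median figures from the table (volumes are hypothetical):

```python
# median $/contact from the table above; monthly volumes are hypothetical
cost_per_contact = {"phone": 10.0, "email": 5.0, "chat": 4.0, "chatbot": 0.50}
monthly_volume = {"phone": 200, "email": 1500, "chat": 800, "chatbot": 2500}

total_cost = sum(cost_per_contact[c] * v for c, v in monthly_volume.items())
blended = total_cost / sum(monthly_volume.values())
print(f"blended cost per contact: ${blended:.2f}")
```

This is also the model for sizing deflection ROI: shifting volume from the phone row to the chatbot row changes the blended figure far more than trimming handle time does.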
Support cost as a percentage of revenue
| Company stage | Typical range | Context |
|---|---|---|
| Early stage | 15-25% | Small team, high per-customer cost |
| Growth | 8-15% | Scaling with process; some automation in place |
| Scale | 5-10% | Mature processes; deflection working |
| Enterprise | 3-7% | Tiered support; heavy self-service investment |
How to use these benchmarks
- Establish your own baseline first - measure for 4-8 weeks before comparing
- Match your peer group - filter by vertical, company size, and channel mix
- Set incremental targets - aim for 10-15% improvement per quarter, not an immediate jump to top quartile
- Prioritize internal comparisons - channel vs channel, team vs team, and week vs week tell you more than external benchmarks do
- Revisit annually - benchmarks shift as tooling and customer expectations evolve
Frequently Asked Questions
What is support-analytics?
support-analytics is an AI agent skill for measuring CSAT, NPS, resolution time, and deflection rates, and for analyzing support trends. It triggers on CSAT, NPS, resolution time, deflection rate, support metrics, trend analysis, support reporting, and any task requiring customer support data analysis or reporting.
How do I install support-analytics?
Run npx skills add AbsolutelySkilled/AbsolutelySkilled --skill support-analytics in your terminal. The skill will be immediately available in your AI coding agent.
What AI agents support support-analytics?
support-analytics works with claude-code, gemini-cli, openai-codex. Install it once and use it across any supported AI coding agent.