pricing-strategy
Use this skill when designing pricing models, packaging products into tiers, building freemium funnels, implementing usage-based billing, structuring enterprise pricing, or running price tests. Triggers on pricing pages, monetization strategy, willingness-to-pay research, price sensitivity analysis, free-to-paid conversion, seat-based vs consumption pricing, and A/B testing prices.
pricing-strategy is a production-ready AI agent skill for claude-code, gemini-cli, and openai-codex. It helps with designing pricing models, packaging products into tiers, building freemium funnels, implementing usage-based billing, structuring enterprise pricing, and running price tests.
Quick Facts
| Field | Value |
|---|---|
| Category | product |
| Version | 0.1.0 |
| Platforms | claude-code, gemini-cli, openai-codex |
| License | MIT |
How to Install
- Make sure you have Node.js installed on your machine.
- Run the following command in your terminal:
npx skills add AbsolutelySkilled/AbsolutelySkilled --skill pricing-strategy
- The pricing-strategy skill is now available in your AI coding agent (Claude Code, Gemini CLI, OpenAI Codex, etc.).
Overview
A practical framework for designing, packaging, and testing software pricing. Pricing is the highest-leverage growth lever most teams ignore - a 1% improvement in pricing yields 2-4x the revenue impact of a 1% improvement in acquisition. This skill covers the full pricing lifecycle: choosing a model (freemium, usage-based, seat-based, flat), packaging features into tiers, building enterprise plans that close six-figure deals, and running price tests without torching customer trust. Agents can use this to draft pricing pages, evaluate model trade-offs, design packaging, and structure experiments.
Tags
pricing monetization packaging freemium saas growth
Platforms
- claude-code
- gemini-cli
- openai-codex
Frequently Asked Questions
What is pricing-strategy?
Use this skill when designing pricing models, packaging products into tiers, building freemium funnels, implementing usage-based billing, structuring enterprise pricing, or running price tests. Triggers on pricing pages, monetization strategy, willingness-to-pay research, price sensitivity analysis, free-to-paid conversion, seat-based vs consumption pricing, and A/B testing prices.
How do I install pricing-strategy?
Run npx skills add AbsolutelySkilled/AbsolutelySkilled --skill pricing-strategy in your terminal. The skill will be immediately available in your AI coding agent.
What AI agents support pricing-strategy?
This skill works with claude-code, gemini-cli, openai-codex. Install it once and use it across any supported AI coding agent.
Maintainers
Generated from AbsolutelySkilled
SKILL.md
Pricing Strategy
A practical framework for designing, packaging, and testing software pricing. Pricing is the highest-leverage growth lever most teams ignore - a 1% improvement in pricing yields 2-4x the revenue impact of a 1% improvement in acquisition. This skill covers the full pricing lifecycle: choosing a model (freemium, usage-based, seat-based, flat), packaging features into tiers, building enterprise plans that close six-figure deals, and running price tests without torching customer trust. Agents can use this to draft pricing pages, evaluate model trade-offs, design packaging, and structure experiments.
When to use this skill
Trigger this skill when the user:
- Is designing or redesigning a pricing page or pricing model
- Needs to decide between freemium, free trial, usage-based, or seat-based pricing
- Wants to package features into tiers (good/better/best)
- Is building an enterprise tier and needs to structure negotiation levers
- Wants to run a price test or willingness-to-pay survey
- Needs to set or change prices for a SaaS product
- Is evaluating free-to-paid conversion rates or upgrade triggers
- Asks about price anchoring, decoy pricing, or price discrimination strategies
Do NOT trigger this skill for:
- Billing system implementation details (use a payments or Stripe skill)
- General business strategy or go-to-market planning unrelated to pricing
Key principles
Value before price - Price is a function of perceived value, not cost. Before setting any number, articulate the measurable outcome customers get. If you cannot name the outcome, you cannot defend the price.
Packaging is strategy, pricing is tactics - Which features go in which tier matters more than the dollar amount on each tier. Get packaging wrong and no price point saves you. Get packaging right and you have room to adjust prices later.
One metric to rule them all - The best pricing models charge on a single value metric that scales with the customer's success. Seats work when every user gets equal value. API calls work when consumption correlates with revenue. Pick the metric where "more usage = more value for the customer."
Segment by willingness to pay, not by cost to serve - Tiers should separate customers who get different amounts of value, not customers who cost you different amounts. A startup and an enterprise both use the same servers, but the enterprise extracts 100x more value - price accordingly.
Test prices, not just features - Most teams A/B test buttons and headlines but never test prices. Price sensitivity data is the highest-signal input to pricing decisions and can be gathered safely with the right methodology.
Core concepts
Value metric is the unit you charge on - seats, API calls, messages sent, revenue processed, GB stored. The ideal value metric is easy to understand, scales with customer value, and is predictable enough for customers to budget. Slack charges per seat. Twilio charges per message. Stripe charges per transaction. Each aligns price with the value the customer receives.
Packaging is the act of grouping features into tiers. The standard model is good/better/best (three tiers). The middle tier should be the target - it is where 60-70% of customers should land. The top tier exists to make the middle tier look reasonable (anchoring) and to capture high-willingness-to-pay customers.
Price fences are the criteria that separate tiers. They must be objective and hard to game. Good fences: number of seats, API volume, data retention period, SLA level. Bad fences: company size (self-reported), "startup" vs "enterprise" (subjective).
Willingness to pay (WTP) is the maximum price a customer segment will accept. It varies dramatically by segment. You discover WTP through Van Westendorp surveys, Gabor-Granger analysis, or conjoint studies - never by asking "what would you pay?" directly.
Common tasks
Design a three-tier SaaS pricing page
Framework: Good / Better / Best
- Name tiers by persona, not size - "Starter / Team / Business" beats "Small / Medium / Large." Names signal who the tier is for.
- Anchor with the top tier - Show the most expensive tier first (left or top). It reframes the middle tier as reasonable.
- Highlight the target tier - Use a "Most Popular" badge on the middle tier. 60-70% of signups should land here.
- Limit to 3-4 tiers - More tiers create decision paralysis. If you need a fourth, make it "Enterprise - Contact Sales."
- Feature differentiation checklist:
- Free/Starter: core value proposition, hard usage cap, no integrations
- Team/Pro: collaboration features, higher limits, basic integrations
- Business/Enterprise: SSO, audit logs, SLA, dedicated support, custom contracts
Pricing page copy pattern:
[Tier name]
[One sentence: who this is for]
[$X / mo per seat]
[3-5 feature bullets, starting with the most differentiating]
[CTA button]
Choose between freemium and free trial
| Factor | Freemium | Free Trial |
|---|---|---|
| Best when | Product has viral/network effects, low marginal cost | Product value is obvious but needs time to discover |
| Conversion rate | 2-5% free to paid (typical) | 15-25% trial to paid (typical) |
| Risk | Freeloaders consume resources without converting | Short trial may not show full value |
| Examples | Slack, Dropbox, Figma | Salesforce, HubSpot, Netflix |
Decision rule: Use freemium when free users create value for paid users (network effects, content creation, referrals). Use free trial when the product's value requires sustained use to appreciate but does not benefit from a large free base.
Hybrid option: Free trial of the paid tier, then downgrade to a limited free tier. This shows users the full value, then lets them keep a foothold. Zoom does this well.
Implement usage-based pricing
When to use: The customer's value scales linearly with consumption, and usage is measurable and predictable. Good fits: API platforms, cloud infrastructure, messaging services, data pipelines.
Structure options:
- Pure pay-as-you-go - No commitment, pay per unit. Low barrier, but revenue is unpredictable. Best for developer tools (Twilio, AWS Lambda).
- Committed use + overage - Base commitment at a discount, then per-unit overage. Gives revenue predictability. Best for mid-market and enterprise (Snowflake).
- Tiered volume - Price per unit drops as volume increases. Incentivizes growth. Best when you want customers to consolidate spend (Stripe's volume discounts).
Implementation checklist:
- Pick one value metric (not two or three)
- Set a minimum monthly commitment (even $0 with a credit card on file)
- Provide a usage dashboard and spend alerts
- Offer committed-use discounts for annual contracts
- Bill in arrears with a clear invoice breakdown
Gotcha: Usage-based pricing makes revenue forecasting harder. Pair it with annual commitments or minimum spend agreements for enterprise customers.
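The "committed use + overage" structure (option 2 above) can be sketched as a bill calculation. The commitment size and rates here are hypothetical, chosen only to illustrate the mechanic:

```python
# Hypothetical committed-use + overage billing (illustrative numbers).
def monthly_bill(usage_units: int,
                 committed_units: int = 100_000,
                 committed_rate: float = 0.008,   # discounted rate on the commitment
                 overage_rate: float = 0.010) -> float:
    """The customer pays for the full commitment regardless of usage,
    plus a per-unit overage charge for consumption beyond it."""
    base = committed_units * committed_rate
    overage = max(0, usage_units - committed_units) * overage_rate
    return base + overage

print(monthly_bill(80_000))   # under commitment: pays the base, 800.0
print(monthly_bill(150_000))  # base 800 + 50,000 units of overage = 1300.0
```

This is what gives the vendor revenue predictability: the base is guaranteed, and only the overage floats with usage.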
Structure an enterprise tier
Enterprise pricing is not a number on a webpage - it is a negotiation framework.
Must-have enterprise features (price fences):
- SSO / SAML integration
- Audit logs and compliance certifications (SOC 2, HIPAA)
- Dedicated support (named CSM, SLA with uptime guarantee)
- Custom contracts and invoicing (NET 30/60/90)
- Data residency and security controls
- Admin controls, role-based access, and user provisioning (SCIM)
Pricing levers for negotiation:
- Seat count - volume discount at 100+, 500+, 1000+ thresholds
- Contract length - 10-20% discount for multi-year commits
- Payment terms - annual upfront is default; quarterly or monthly at a premium
- Usage tiers - committed volume at lower per-unit cost
- Professional services - onboarding, migration, custom integrations as add-ons
Pricing floor rule: Never discount more than 30% off list price. If the customer needs more than 30% off, restructure the deal (fewer seats, shorter term, fewer features) rather than deepening the discount. Deep discounts set bad renewal precedents.
Run a price test
Method 1: Van Westendorp Price Sensitivity Meter
Ask four questions to a sample of target customers:
- At what price would this be so cheap you would question quality? (Too Cheap)
- At what price is this a bargain - a great value? (Cheap)
- At what price is this getting expensive but you would still consider? (Expensive)
- At what price is this too expensive - you would never buy? (Too Expensive)
Plot the cumulative distributions. The intersection of "Too Cheap" and "Too Expensive" gives the Optimal Price Point. The acceptable price range runs from the "Too Cheap"/"Expensive" intersection (Point of Marginal Cheapness) up to the "Cheap"/"Too Expensive" intersection (Point of Marginal Expensiveness).
Method 2: A/B test with geographic or cohort splits
Never show different prices to the same market simultaneously - it destroys trust if discovered.
Safe approaches:
- Test in different geographic markets (e.g., US vs UK)
- Test on new signups only (grandfather existing customers)
- Test different packaging (features per tier) at the same price
- Use time-based splits (this month vs next month for new cohorts)
Method 3: Gabor-Granger for demand curve
Show a price and ask "would you buy at this price?" Vary the price across respondents. Plot price vs % who would buy. Find the revenue-maximizing point (price * conversion).
Golden rule: Never test prices on existing paying customers. Only test on new prospects or in new markets.
Set initial prices for a new product
Step 1 - Competitor anchoring: List 3-5 competitors and their pricing. You are not matching them - you are using them to understand the market's reference frame.
Step 2 - Value quantification: Calculate the economic value your product creates. If your tool saves 10 hours/month of a $100/hr employee's time, the value created is $1,000/month. Price at 10-20% of value created.
Step 3 - Segment analysis: Identify 2-3 customer segments by willingness to pay. Map features to segments. Price the top segment first, then work down.
Step 4 - Round and simplify: End prices in 9 for consumer ($49, $99, $199). Use round numbers for enterprise ($500, $1,000). Never use decimal prices for SaaS.
Step 5 - Launch high, discount down: It is dramatically easier to lower prices than raise them. Launch at the top of your acceptable range and adjust based on conversion data. A product that is "too expensive" still gets feedback; a product that is "too cheap" leaves money on the table silently.
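Step 2's value-quantification arithmetic, using the example numbers from the text (10 hours/month saved for a $100/hr employee, priced at 10-20% of value created):

```python
# Value-based price range, mirroring the Step 2 example in the text.
hours_saved_per_month = 10
hourly_rate = 100                                      # $/hr
value_created = hours_saved_per_month * hourly_rate    # $1,000/month

# Price at 10-20% of the economic value created.
price_range = (0.10 * value_created, 0.20 * value_created)
print(price_range)  # (100.0, 200.0) -> a $99-$199/mo list price lands in range
```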
Design upgrade triggers for freemium
Upgrade triggers are the moments when a free user hits a limit that motivates them to pay. Design these intentionally.
Effective triggers:
- Usage limits - "You have used 95% of your free storage" (Dropbox)
- Feature gates - "Upgrade to unlock advanced analytics" (Mixpanel)
- Collaboration gates - "Add more than 3 team members" (Notion)
- Time-based - "Your premium trial ends in 3 days" (LinkedIn)
- Export/integration gates - "Export to CSV requires Pro" (Airtable)
Design rules:
- Let users experience the core value loop before hitting the gate
- Show what they are missing (preview locked features, not just a lock icon)
- Trigger upgrade prompts at moments of high engagement, not frustration
- Make the upgrade path one click - pre-select the right tier based on their usage
Anti-patterns / common mistakes
| Mistake | Why it's wrong | What to do instead |
|---|---|---|
| Pricing based on cost-plus | Your costs are irrelevant to what customers will pay; you leave massive value on the table | Price based on value delivered, use competitor pricing as reference frame |
| Too many tiers (5+) | Decision paralysis reduces conversion; operational complexity increases | Stick to 3 tiers plus an enterprise "Contact Sales" option |
| Identical feature sets across tiers (only limits differ) | Customers see no qualitative difference; defaults to cheapest tier | Differentiate tiers by feature category (collaboration, security, support) not just quantity |
| Offering monthly-only pricing | Revenue is unpredictable; churn is higher on monthly plans | Default to annual billing with a monthly option at 20-30% premium |
| Discounting to close every deal | Trains the market to expect discounts; erodes pricing power over time | Discount only with a trade (longer term, case study, referral) - never for free |
| Changing prices on existing customers without notice | Destroys trust, spikes churn, generates negative press | Grandfather existing customers or give 90+ days notice with clear value justification |
| Hiding pricing entirely | Creates friction; self-serve buyers leave; only works for true enterprise sales | Show pricing for self-serve tiers; use "Contact Sales" only for enterprise |
Gotchas
Grandfathering existing customers breaks new pricing model adoption - Grandfathering legacy pricing protects current customers but creates a permanent two-tier customer base where your highest-engaged users are also your lowest-paying. Every new feature you ship subsidizes old pricing. If you need to change pricing, give existing customers a generous migration window (90-180 days) and an incentive to switch, but set a hard cutover date.
Usage-based pricing without spend alerts causes customer shock and churn - Customers who receive a bill 5x what they expected due to unanticipated usage spikes rarely renew, regardless of the value delivered. Every usage-based product must provide real-time usage dashboards and configurable spend alerts before GA. Pricing surprise is a top cause of B2B churn.
Free trial that defaults to paid after expiry without clear warning violates trust - Auto-converting a trial to a paid subscription when no credit card was required at signup, or failing to send a prominent reminder before billing begins, generates chargebacks, negative reviews, and potential regulatory exposure (ROSCA in the US). Always send email reminders at 7 days and 1 day before any trial-to-paid conversion.
Showing annual pricing as "per month" without making the billing frequency obvious is deceptive - "$49/month billed annually" shown as just "$49/mo" in a pricing table misleads customers into expecting monthly billing. Many buyers discover the $588 charge on their card and dispute it. Always show both the monthly equivalent AND the total annual amount prominently in the pricing UI.
Price testing on logged-in users who can compare notes destroys trust - Unlike geographic or cohort splits, showing different prices to users in the same market who know each other (B2B teams, developer communities) will be discovered and publicized. The reputational damage from a perceived price-fixing discovery exceeds any revenue optimization gain.
References
For detailed frameworks on specific pricing sub-domains, read the relevant file
from the references/ folder:
- references/pricing-models.md - deep comparison of all pricing model types (flat-rate, per-seat, usage-based, hybrid, reverse trial) with decision trees and real-world examples
- references/price-testing.md - detailed methodology for Van Westendorp, Gabor-Granger, conjoint analysis, and safe A/B testing approaches with sample survey templates
Only load a references file when the current task requires it - they are long and will consume context.
References
price-testing.md
Price Testing Methodology
Overview
Price testing answers the question: "What price maximizes revenue (or profit, or adoption) for a given customer segment?" There are three primary methodologies, each with different trade-offs on cost, accuracy, and risk.
| Method | Cost | Accuracy | Risk | Best for |
|---|---|---|---|---|
| Van Westendorp | Low | Moderate | None (survey) | Early-stage price discovery |
| Gabor-Granger | Low | Moderate-High | None (survey) | Demand curve estimation |
| Conjoint analysis | High | High | None (survey) | Multi-feature packaging + pricing |
| A/B test (live) | Medium | Highest | Moderate (trust risk) | Validating a specific price point |
Van Westendorp Price Sensitivity Meter
How it works
Survey respondents answer four questions about a product description:
- Too Cheap - "At what price would you consider this product to be so cheap that you would question its quality?"
- Cheap (Bargain) - "At what price would you consider this product a bargain - a great buy for the money?"
- Expensive - "At what price would you consider this product to be getting expensive - you would still consider it, but you would have to think about it?"
- Too Expensive - "At what price would you consider this product to be so expensive that you would never consider buying it?"
Analyzing results
Plot four cumulative distribution curves:
- Too Cheap (cumulative from high to low)
- Cheap (cumulative from high to low)
- Expensive (cumulative from low to high)
- Too Expensive (cumulative from low to high)
Key intersections:
- Point of Marginal Cheapness (PMC) = Too Cheap intersects Expensive
- Point of Marginal Expensiveness (PME) = Too Expensive intersects Cheap
- Optimal Price Point (OPP) = Too Cheap intersects Too Expensive
- Indifference Price Point (IDP) = Cheap intersects Expensive
The acceptable price range is PMC to PME. The optimal price point (OPP) is where the fewest people reject the price on either end.
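The four crossings can be located numerically. A minimal sketch with made-up respondent answers follows; the crossing finder is a coarse grid scan, not a full PSM toolkit, and a real analysis needs the sample sizes discussed below:

```python
# Hypothetical Van Westendorp answers, one tuple per respondent:
# (too_cheap, cheap, expensive, too_expensive). Illustrative data only.
answers = [
    (10, 25, 40, 60),
    (20, 35, 55, 80),
    (30, 45, 65, 90),
    (15, 30, 50, 70),
    (25, 40, 60, 85),
]
n = len(answers)

def pct_at_least(col, p):   # descending curves: Too Cheap, Cheap
    return sum(a[col] >= p for a in answers) / n

def pct_at_most(col, p):    # ascending curves: Expensive, Too Expensive
    return sum(a[col] <= p for a in answers) / n

def crossing(desc_col, asc_col, lo=0.0, hi=100.0, step=0.1):
    """First price at which the descending curve falls to or below the
    ascending curve - their intersection, on a coarse price grid."""
    p = lo
    while p <= hi:
        if pct_at_least(desc_col, p) <= pct_at_most(asc_col, p):
            return round(p, 1)
        p += step
    return None

pmc = crossing(0, 2)  # Too Cheap x Expensive      -> PMC
opp = crossing(0, 3)  # Too Cheap x Too Expensive  -> OPP
idp = crossing(1, 2)  # Cheap x Expensive          -> IDP
pme = crossing(1, 3)  # Cheap x Too Expensive      -> PME
print(pmc, opp, idp, pme)
```

With real survey data the curves are smoother and the crossings fall at distinct prices; the output here only demonstrates the mechanics.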
Sample size
Minimum 100 respondents per segment. 200+ is recommended for stable curves. Segment by company size, role, or use case - never aggregate all respondents into one curve.
Survey template
We are developing [product description - 2-3 sentences of what it does and
the problem it solves].
Thinking about the value this product provides, please answer the following:
1. At what monthly price would you consider this product to be priced so low
that you would question its quality? $____
2. At what monthly price would you consider this product to be a bargain -
a great buy for the money? $____
3. At what monthly price would you start to think this product is getting
expensive - you would still consider buying it, but you would have to
think about it? $____
4. At what monthly price would you consider this product so expensive that
you would never consider buying it, regardless of its quality? $____
Gabor-Granger Demand Curve
How it works
Show respondents a price and ask a binary purchase intent question. Vary the price across respondents (between-subjects) or sequentially for one respondent (within-subjects with ascending or descending price ladder).
Between-subjects approach (recommended):
- Divide respondents into 5-7 groups
- Each group sees a different price point
- Ask: "At $X/month, how likely are you to purchase?" (Definitely would / Probably would / Might or might not / Probably would not / Definitely would not)
- Count "Definitely + Probably would" as demand at that price
Within-subjects approach:
- Start at the highest price: "Would you buy at $199/mo?"
- If no, step down: "Would you buy at $149/mo?"
- Continue until yes, or until the lowest price
- Record the price where they convert
Building the demand curve
Plot: Price (x-axis) vs % who would buy (y-axis). This gives you a demand curve.
Revenue at each price = Price * % who would buy * Total addressable market.
The revenue-maximizing price is the point where (Price * Conversion %) is highest.
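A toy demand-curve calculation illustrating that rule; the price points and purchase-intent shares below are invented:

```python
# Hypothetical Gabor-Granger results: share of "definitely + probably
# would buy" respondents observed at each tested price point.
demand = {29: 0.62, 49: 0.48, 79: 0.30, 99: 0.21, 149: 0.11, 199: 0.05}

# Expected revenue per respondent at each price = price * conversion.
revenue = {price: price * share for price, share in demand.items()}
best_price = max(revenue, key=revenue.get)
print(best_price)  # the revenue-maximizing price, not the best-converting one
```

Note the winner is rarely the cheapest price: here $29 converts best, but a higher price with lower conversion yields more revenue per respondent.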
Sample size
Minimum 30 respondents per price point (between-subjects). For 6 price points, that is 180 respondents minimum.
Conjoint Analysis
When to use
Conjoint is the most powerful method but also the most complex and expensive. Use it when you need to simultaneously optimize:
- Which features go in which tier
- What price to charge for each tier
- How much each feature contributes to willingness to pay
How it works
Respondents see pairs (or sets) of product configurations and choose which they prefer. Each configuration varies on multiple attributes (features, price, support level, etc.). Statistical analysis decomposes choices into the relative value of each attribute.
Attribute design
Choose 4-6 attributes with 2-4 levels each:
Attribute 1: Price -> $29, $49, $99, $149
Attribute 2: Seats -> 5, 25, unlimited
Attribute 3: Storage -> 10GB, 100GB, 1TB
Attribute 4: Support -> Email only, Priority email, Dedicated CSM
Attribute 5: Integrations -> Basic (5), Advanced (20), Custom API
Output
Conjoint analysis produces:
- Part-worth utilities - The value contribution of each attribute level
- Relative importance - Which attributes drive purchase decisions most
- Willingness to pay - Dollar value of each feature
- Optimal bundles - The packaging that maximizes revenue or market share
Practical advice
- Use a conjoint analysis platform (Sawtooth, Conjointly, or SurveyMonkey's conjoint module) - do not build this from scratch
- Minimum 200 respondents for stable results
- Limit to 6 attributes maximum or respondent fatigue degrades data quality
- Pre-test with 10-15 respondents to verify attribute levels make sense
Live A/B testing prices
The trust problem
Showing different prices to different users in the same market at the same time is risky. If customers discover it (and they will), it destroys trust. The backlash can outweigh any insight gained.
Safe A/B testing approaches
1. Geographic splits
- Test Price A in one country, Price B in another
- Adjust for purchasing power parity
- Works well for global products with low cross-market communication
2. New cohort splits
- All existing customers keep current pricing
- New signups are randomly assigned to Price A or B
- Measure conversion rate, not existing customer reaction
- Run for 4-8 weeks to get statistical significance
3. Feature packaging tests (price held constant)
- Same price, different feature bundles
- "Would you rather have Plan A ($49/mo with X, Y) or Plan B ($49/mo with X, Z)?"
- Reveals feature value without price perception risk
4. Landing page tests
- Show different pricing pages to different traffic sources
- Measure click-through on "Start Free Trial" or "Buy Now"
- Does not require actual different prices at checkout
- Tests price perception, not willingness to pay
5. Time-based sequential tests
- Week 1-4: Price A for all new signups
- Week 5-8: Price B for all new signups
- Compare conversion rates between periods
- Risk: external factors (seasonality, marketing campaigns) can confound
Statistical requirements
- Minimum 1,000 visitors per variant for pricing page conversion tests
- Run for at least 2 full billing cycles to capture churn effects
- Measure not just signup conversion but also:
- 30-day retention
- Expansion revenue (upgrades)
- Net revenue per visitor (accounts for different conversion rates)
What to measure
| Metric | Why it matters |
|---|---|
| Signup conversion rate | Direct price sensitivity signal |
| Trial-to-paid conversion | Whether the price survives the evaluation period |
| ARPU (average revenue per user) | Higher price * lower conversion may still win |
| Net revenue per visitor | The composite metric that matters most |
| 90-day retention | Cheap prices attract low-intent users who churn |
| NPS / satisfaction score | Detect if pricing is causing resentment |
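The composite metric from the table can be sketched as a per-variant calculation. All traffic, conversion, and retention numbers below are made up for illustration:

```python
# Sketch: comparing two price variants on net revenue per visitor,
# discounted by 90-day retention so a cheap price that attracts
# churn-prone users does not look like a win. Hypothetical numbers.
def net_revenue_per_visitor(visitors, paid_customers, monthly_price,
                            retained_90d):
    return (paid_customers * monthly_price * retained_90d) / visitors

a = net_revenue_per_visitor(10_000, 180, 49, 0.80)  # Price A: $49/mo
b = net_revenue_per_visitor(10_000, 150, 69, 0.75)  # Price B: $69/mo
print(round(a, 4), round(b, 4))  # the higher price can still win per visitor
```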
Common price testing mistakes
| Mistake | Consequence | Fix |
|---|---|---|
| Testing on existing customers | Churn and trust erosion | Only test on new prospects |
| Testing too many prices at once | Insufficient sample size per variant | Test 2-3 prices max |
| Running for less than 4 weeks | Seasonal and weekly variation skews results | Minimum 4 weeks, ideally 8 |
| Measuring only conversion rate | A lower price converts better but may yield less revenue | Always calculate net revenue per visitor |
| Not segmenting results | Aggregate data hides segment differences | Analyze by company size, source, and use case |
| Announcing the test publicly | Customers wait for the lower price | Never announce pricing experiments |
pricing-models.md
Pricing Models Deep Dive
Model comparison matrix
| Model | How it works | Best for | Watch out for |
|---|---|---|---|
| Flat-rate | One price, one plan, all features | Simple products with homogeneous users | Leaves money on the table with power users; no upgrade path |
| Per-seat | Charge per user per month | Collaboration tools where every user gets equal value | Seat consolidation (users share logins); penalizes adoption |
| Usage-based | Charge per unit consumed | APIs, infrastructure, messaging platforms | Revenue unpredictability; customers may throttle usage to save money |
| Tiered (good/better/best) | 3-4 plans with increasing features and limits | Most SaaS products | Too many tiers cause paralysis; too few miss segments |
| Freemium | Free tier + paid tiers | Products with viral loops or network effects | Freeloaders consume resources; free tier cannibalizes paid |
| Free trial | Full access for limited time | Products with high activation energy | Short trials may not show value; long trials delay revenue |
| Reverse trial | Start on paid, auto-downgrade to free | Products where free users need to experience premium first | Churn spike at trial end if value is not proven |
| Hybrid (seat + usage) | Base per-seat fee + usage overage | Products with both collaboration and consumption value | Complex to understand; hard to predict bills |
| Revenue share | Take a percentage of customer's revenue | Marketplaces, payment processors | Revenue tied to customer success; volatile |
| Credit-based | Buy credits, spend on actions | AI/ML platforms, multi-product suites | Credit pricing is opaque; customers hoard or waste credits |
Decision tree for model selection
Step 1: Identify your value metric
Ask: "When the customer gets more value, what number goes up?"
- More users collaborating -> per-seat
- More API calls / messages / compute -> usage-based
- More revenue processed -> revenue share
- Hard to isolate a single metric -> tiered flat-rate
Step 2: Assess usage predictability
- Customer can predict monthly usage -> usage-based works
- Usage is spiky or unpredictable -> add committed tiers or caps
- Customer budgets annually -> offer annual commitment discounts
Step 3: Evaluate network effects
- Free users create value for paid users (content, network, data) -> freemium
- Free users do not create value for anyone but themselves -> free trial
- Product requires team adoption to show value -> reverse trial
Step 4: Consider sales motion
- Self-serve (credit card, no human) -> transparent pricing on website
- Sales-assisted (demo, then close) -> show base pricing, "Contact Sales" for enterprise
- Enterprise-only (all deals negotiated) -> no public pricing, focus on value selling
Per-seat pricing deep dive
When it works:
- Every seat gets roughly equal value (Slack, Figma, Notion)
- The product is inherently collaborative
- Adding users increases the product's value (network effect within the org)
When it fails:
- Some users are admins who log in once a month (penalized by per-seat)
- The product is used by one power user who generates value for many (analytics dashboards)
- Customers have seasonal workers or contractors (seat count fluctuates)
Variants:
- Per active user - Only charge for users who log in. Reduces friction for large orgs. Slack pioneered this.
- Tiered seats - Different seat types at different prices (viewer vs editor vs admin). Figma uses this.
- Platform fee + seats - Base platform fee plus per-seat cost. Covers fixed infrastructure costs. HubSpot uses this model.
Usage-based pricing deep dive
Choosing the value metric:
The metric must satisfy three criteria:
- Easy to understand - Customers can explain it to their CFO in one sentence
- Scales with value - More usage = more value for the customer
- Predictable - Customers can estimate their monthly bill before it arrives
Good metrics: API calls, messages sent, GB stored, compute hours, events tracked. Bad metrics: "AI tokens" (opaque), "credits" (arbitrary), "processing units" (vague).
Pricing tiers for usage-based:
Tier 1: 0 - 10,000 units -> $0.01 per unit
Tier 2: 10,001 - 100,000 -> $0.008 per unit
Tier 3: 100,001 - 1,000,000 -> $0.005 per unit
Tier 4: 1,000,000+ -> Custom pricing
Volume vs graduated pricing:
- Volume pricing - All units priced at the tier you land in. Simpler but creates cliff effects (going from 10,000 to 10,001 units suddenly reprices everything).
- Graduated pricing - Each unit priced at the tier where it falls. No cliff effects. More complex to explain. Stripe uses this.
Always use graduated pricing unless you have a strong reason not to. Cliff effects cause customer frustration and gaming behavior.
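The difference between the two schemes, using the illustrative tiers above, can be sketched as:

```python
# Volume vs graduated billing over the illustrative tiers above,
# expressed as (upper_bound, price_per_unit) pairs.
TIERS = [(10_000, 0.010), (100_000, 0.008), (1_000_000, 0.005)]

def volume_bill(units):
    """All units priced at the single tier the total lands in."""
    for bound, rate in TIERS:
        if units <= bound:
            return units * rate
    raise ValueError("custom pricing above 1M units")

def graduated_bill(units):
    """Each unit priced at the tier where it falls - no cliff effect."""
    total, prev_bound = 0.0, 0
    for bound, rate in TIERS:
        if units <= prev_bound:
            break
        total += (min(units, bound) - prev_bound) * rate
        prev_bound = bound
    return total

# The cliff: one extra unit under volume pricing reprices everything,
# so the bill *drops* from 100.00 to 80.01 at unit 10,001 - inviting
# customers to game the boundary. Graduated billing rises smoothly.
print(volume_bill(10_000), volume_bill(10_001))
print(graduated_bill(10_000), graduated_bill(10_001))
```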
Freemium design framework
The three laws of freemium:
The free tier must deliver real value - Not a crippled demo. Users must complete the core value loop on free. If free users are frustrated, they leave - they do not upgrade.
The free tier must have clear limits - Limits should be generous enough to prove value but restrictive enough that growing teams naturally outgrow them. Good limits: storage, seats, features. Bad limits: time-bombing, watermarks.
The upgrade path must be obvious - Users should encounter the paywall at a moment of success ("You have used all 5 projects - upgrade to create more") not failure ("Error: limit exceeded").
Freemium benchmarks:
| Metric | Healthy range |
|---|---|
| Free to paid conversion | 2-5% |
| Time to first paid conversion | 14-90 days |
| Free user to paid user ratio | 20:1 to 50:1 |
| Monthly active free users | Should grow month over month |
| Viral coefficient from free users | > 0.3 (each free user brings 0.3 new users) |
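The viral-coefficient benchmark implies a geometric multiplier on the free base. A simple sketch, assuming each cohort of free users brings in k new users (a deliberately simplified model, not a full growth simulation):

```python
# Geometric viral-growth sketch: each cohort invites k times its size.
# For k < 1 the series converges to initial / (1 - k).
def total_users_from_cohort(initial_signups, k, rounds=20):
    total, cohort = 0.0, float(initial_signups)
    for _ in range(rounds):
        total += cohort
        cohort *= k
    return total

# At the k = 0.3 benchmark, 1,000 signups grow to ~1,429 total users.
print(round(total_users_from_cohort(1000, 0.3), 1))
```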
Reverse trial model
A reverse trial starts every new user on the full paid experience for a limited period (typically 14 days), then automatically downgrades them to a free tier.
Why it works:
- Users experience the premium value before being asked to pay
- The loss aversion of downgrading is stronger than the aspiration of upgrading
- Eliminates the "I never tried the paid features" objection
When to use:
- Your paid features require setup/configuration time to show value
- The gap between free and paid is large and hard to preview
- You already have a freemium model with low conversion and want to boost it
Examples: Airtable, Loom, Zapier (variations of this approach)
Implementation notes:
- Clearly communicate the trial duration upfront (no surprises)
- Send reminders at day 7, day 12, and day 14
- At downgrade, show exactly which features they are losing with one-click re-upgrade
- Do not require a credit card for the reverse trial (that makes it a standard trial)