product-launch
Use this skill when planning go-to-market strategy, running beta programs, creating launch checklists, or managing rollout strategy. Triggers on product launch, go-to-market, GTM strategy, beta programs, launch checklist, rollout strategy, launch tiers, and any task requiring product release planning or execution.
product-launch is a production-ready AI agent skill for claude-code, gemini-cli, and openai-codex. It supports planning go-to-market strategy, running beta programs, creating launch checklists, and managing rollout strategy.
Quick Facts
| Field | Value |
|---|---|
| Category | product |
| Version | 0.1.0 |
| Platforms | claude-code, gemini-cli, openai-codex |
| License | MIT |
How to Install
- Make sure you have Node.js installed on your machine.
- Run the following command in your terminal:
npx skills add AbsolutelySkilled/AbsolutelySkilled --skill product-launch
- The product-launch skill is now available in your AI coding agent (Claude Code, Gemini CLI, OpenAI Codex, etc.).
Overview
A practical framework for planning, executing, and measuring product launches. Most launches fail not because of the product but because of poor coordination, rushed checklists, and no rollback plan. This skill covers the full launch lifecycle: scoping a go-to-market strategy, designing a beta program, building function-level launch checklists, planning tiered rollouts, writing internal and external communications, coordinating cross-functional teams, and measuring launch success. Agents can use this to draft GTM plans, run betas, sequence rollouts, and run launch retrospectives.
Tags
product-launch gtm beta launch-checklist rollout go-to-market
Platforms
- claude-code
- gemini-cli
- openai-codex
Frequently Asked Questions
What is product-launch?
Use this skill when planning go-to-market strategy, running beta programs, creating launch checklists, or managing rollout strategy. Triggers on product launch, go-to-market, GTM strategy, beta programs, launch checklist, rollout strategy, launch tiers, and any task requiring product release planning or execution.
How do I install product-launch?
Run npx skills add AbsolutelySkilled/AbsolutelySkilled --skill product-launch in your terminal. The skill will be immediately available in your AI coding agent.
What AI agents support product-launch?
This skill works with claude-code, gemini-cli, openai-codex. Install it once and use it across any supported AI coding agent.
Maintainers
Generated from AbsolutelySkilled
SKILL.md
Product Launch
A practical framework for planning, executing, and measuring product launches. Most launches fail not because of the product but because of poor coordination, rushed checklists, and no rollback plan. This skill covers the full launch lifecycle: scoping a go-to-market strategy, designing a beta program, building function-level launch checklists, planning tiered rollouts, writing internal and external communications, coordinating cross-functional teams, and measuring launch success. Agents can use this to draft GTM plans, run betas, sequence rollouts, and run launch retrospectives.
When to use this skill
Trigger this skill when the user:
- Is planning or executing a product launch or major feature release
- Needs to build a go-to-market strategy or launch plan
- Wants to design or run a beta program (closed, open, or pilot)
- Needs a launch checklist broken down by function (eng, marketing, support, legal)
- Is planning a tiered or phased rollout (dark launch, beta, GA)
- Needs to write launch communications - internal announcements, press releases, blog posts, or customer emails
- Wants to coordinate a cross-functional launch squad or assign RACI ownership
- Needs to define or review launch success metrics and run a launch retrospective
- Asks about launch tiers, rollout strategy, or go-to-market execution
Do NOT trigger this skill for:
- Detailed pricing model design (use the pricing-strategy skill instead)
- Feature flag implementation details or release engineering tooling (use a deployment or CI/CD skill)
Key principles
Launch is a process, not an event - A launch begins weeks before the public announcement and ends weeks after. The announcement date is one milestone in a longer arc that includes internal readiness, beta feedback, and post-launch iteration. Plan the full timeline, not just the day-of.
Tier launches by impact - Not every launch needs a press release and a company all-hands. Match the launch tier to the blast radius of the change. A bug fix ships silently. A new pricing model gets a full GTM program. Define launch tiers upfront and assign different checklists to each tier.
Internal launch before external - Every external launch should be preceded by an internal launch. Sales, support, and success teams must be able to answer customer questions before the announcement goes live. "Internal GA" is a real milestone; treat it as one.
Measure launch success - Define success metrics before launch, not after. Decide in advance what a successful week one looks like: activation rate, trial starts, pipeline influenced, NPS, support ticket volume. Agree on a green threshold. Measure against it. Run a retrospective within 30 days.
Have a rollback plan - For every launch, define the criteria that would trigger a rollback or pause and the exact steps to execute it. Rollback plans are written in calm; they are executed in chaos. Write them now.
Core concepts
Launch tiers classify releases by scope and customer impact. Tier 1 is a major launch (new product, new market, major pricing change) - full GTM, press, exec involvement. Tier 2 is a significant feature - blog post, in-app announcement, sales enablement. Tier 3 is a minor improvement - release notes, internal notification. Tier 4 is a patch or internal change - changelog entry only. Assign tier at kickoff; the tier drives which checklist items are required.
GTM components are the building blocks of a go-to-market plan: target segment (who is this for), positioning (why this over alternatives), pricing and packaging, distribution channel (self-serve, sales-led, partner-led), launch timing, and success metrics. All six must be aligned before any launch activity begins.
Beta program types serve different purposes. A closed beta gives access to a hand-picked cohort for deep feedback - high quality signal, slow velocity. An open beta removes the gate - broader signal, noisier data. A pilot is a time-limited full deployment with a single customer or segment - best for enterprise products or regulated industries. Choose the type based on what you need to learn and how fast.
Launch metrics fall into three buckets: awareness (reach, share of voice, press mentions), activation (trial starts, sign-ups, demo requests, product qualified leads), and retention (week-1 return rate, feature adoption, churn delta in the 30 days post-launch). Track all three; optimize for the bucket that is weakest.
Common tasks
Create a GTM strategy
A go-to-market strategy answers six questions. Answer them in this order:
- Who - Define the target segment precisely. Not "SMBs" but "engineering managers at Series A-C SaaS companies with 10-50 engineers."
- Problem - State the specific pain the product solves for that segment. Quantify it if possible ("spend 4 hours/week on X").
- Positioning - Write a positioning statement: "For [segment], [product] is the [category] that [key benefit] unlike [alternative] which [limitation]."
- Pricing and packaging - Which tier or plan is the entry point? What is the upgrade path? Is there a free trial or freemium component?
- Distribution - How does the segment discover and buy? Self-serve (product-led), sales-assisted (sales-led), or through a partner channel?
- Success metrics - What does a successful launch look like at 7, 30, and 90 days? Name specific numbers and who owns them.
Codify all six in a one-page GTM brief before any execution begins.
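One way to keep the brief honest is to represent it as a structured record and refuse to start execution while any section is empty. A minimal Python sketch; the field names and sample values are illustrative, not part of the skill:

```python
from dataclasses import dataclass, fields

@dataclass
class GTMBrief:
    """One-page GTM brief: all six answers must be filled before execution."""
    who: str           # target segment, stated precisely
    problem: str       # specific, quantified pain
    positioning: str   # "For [segment], [product] is the [category] that ..."
    pricing: str       # entry tier, upgrade path, trial/freemium component
    distribution: str  # self-serve, sales-led, or partner-led
    metrics: str       # named numbers at 7/30/90 days, with owners

def missing_sections(brief: GTMBrief) -> list[str]:
    """Return the names of any unanswered sections."""
    return [f.name for f in fields(brief) if not getattr(brief, f.name).strip()]

draft = GTMBrief(
    who="Engineering managers at Series A-C SaaS companies with 10-50 engineers",
    problem="Spend 4 hours/week on manual release coordination",
    positioning="",  # still unwritten -> execution should not begin
    pricing="Pro tier entry point, 14-day free trial",
    distribution="Self-serve (product-led)",
    metrics="",
)
print(missing_sections(draft))  # -> ['positioning', 'metrics']
```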
Design a beta program
Recruitment
- Define who qualifies: segment, use case, technical maturity, time commitment
- Aim for 15-50 participants for a closed beta (enough signal, manageable noise)
- Recruit from existing waitlist, active customers, or outbound to target accounts
- Set clear expectations: duration, feedback cadence, and what they get in return (early access, pricing discount, public recognition, direct product influence)
Feedback loops
- Weekly check-in cadence: short async survey (5-7 questions) plus optional 30-minute call slot
- Track usage data alongside qualitative feedback - usage tells you what they do, interviews tell you why
- Maintain a shared tracker of reported issues, feature requests, and blockers with status updates visible to beta participants
Graduation criteria
- Define graduation gates before the beta starts, not during
- Minimum criteria: core user journey completion rate above threshold, no critical bugs open, support volume sustainable at scale, NPS or CSAT baseline established
- Graduation is a decision meeting with eng, product, and CS present - not an automatic date flip
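Because the gates are defined up front, an agent can evaluate them mechanically and bring a pass/fail report into the decision meeting. A hedged sketch; the gate names and thresholds are examples, not prescribed values:

```python
import operator

# Illustrative graduation gates: (comparison, threshold) per metric.
GATES = {
    "core_journey_completion": (">=", 0.80),  # core user journey completion rate
    "critical_bugs_open":      ("==", 0),     # no critical bugs open
    "nps":                     (">=", 30),    # NPS baseline established
}
OPS = {">=": operator.ge, "==": operator.eq, "<=": operator.le}

def graduation_report(metrics: dict) -> dict:
    """Pass/fail per gate - input to the decision meeting, not a replacement for it."""
    return {name: OPS[op](metrics[name], threshold)
            for name, (op, threshold) in GATES.items()}

beta = {"core_journey_completion": 0.84, "critical_bugs_open": 2, "nps": 41}
print(graduation_report(beta))
# -> {'core_journey_completion': True, 'critical_bugs_open': False, 'nps': True}
```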
Build a launch checklist
Break the checklist by function and time horizon. See references/launch-checklist.md for the full copy-paste checklist.
Engineering - feature flags, performance benchmarks, error monitoring, rollback procedure, capacity plan, data migration tested
Product - positioning finalized, pricing approved, feature documentation complete, beta graduation signed off, launch tier confirmed
Marketing - landing page live (dark or staged), blog post drafted, social copy approved, email sequence queued, press brief sent (if Tier 1)
Sales - pitch deck updated, objection handling doc written, demo environment current, sales training completed, CRM fields updated for tracking
Customer Success / Support - help center articles published, support scripts written, escalation path defined, internal FAQ distributed, surge plan in place
Legal / Compliance - Terms of Service updated (if needed), privacy review completed, trademark cleared, any regulated market approvals obtained
Plan a tiered rollout
A tiered rollout reduces risk by exposing new functionality progressively.
Stage 1 - Dark launch (internal only)
- Feature is live in production but gated to internal users (flag or allowlist)
- Goal: validate infrastructure, monitor error rates, confirm logging is correct
- Exit criteria: zero critical errors over 48 hours of internal use
Stage 2 - Closed beta (1-5% of users or hand-picked cohort)
- Enable for a small, willing cohort that opted in
- Goal: gather qualitative feedback, surface UX friction, confirm core value
- Exit criteria: beta graduation threshold met
Stage 3 - Limited GA (10-25% traffic or specific segment)
- Ramp up via feature flag or regional rollout
- Goal: validate at scale, monitor support volume, watch activation metrics
- Exit criteria: activation rate at or above target, support ticket rate below cap
Stage 4 - General availability (100%)
- Full launch with marketing activation
- Monitor closely for 72 hours post-launch; have on-call coverage
- Rollback trigger: error rate spike above baseline, P0 bug, or activation rate below 50% of target after 48 hours
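All four stages can be driven by a single percentage gate, so a user who is enabled at one stage stays enabled at the next. A minimal sketch; hash-based bucketing and the 2x-baseline error threshold are assumptions, not prescriptions:

```python
import hashlib

# Stage -> external exposure percentage (numbers mirror the stages above).
STAGES = {"dark": 0, "closed_beta": 5, "limited_ga": 25, "ga": 100}

def in_rollout(user_id: str, stage: str, internal: set) -> bool:
    """Deterministic gate: hashing keeps each user in the same bucket as stages ramp."""
    if user_id in internal:
        return True  # internal users are enabled from the dark launch onward
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < STAGES[stage]

def should_roll_back(error_rate, baseline, activation, target):
    """Stage 4 triggers, with an assumed 2x-baseline error-rate threshold."""
    return error_rate > 2 * baseline or activation < 0.5 * target

# Internal users see the dark launch; external users are excluded at 0%.
assert in_rollout("oncall@example.com", "dark", internal={"oncall@example.com"})
assert not should_roll_back(error_rate=0.011, baseline=0.01, activation=80, target=100)
assert should_roll_back(error_rate=0.03, baseline=0.01, activation=80, target=100)
```

Because the bucket is derived from the user ID alone, ramping from 5% to 25% only adds users; no one who already had the feature loses it mid-rollout.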
Write launch communications
Internal announcement (pre-launch, T-5 days)
Structure: what is launching, who it is for, how it works (one paragraph), what changed from the previous version, key dates, and where to get help. Distribute to all-hands Slack, sales channel, and support channel simultaneously.
External blog post (launch day)
Structure: problem being solved (lead with customer pain), solution overview (show don't tell - screenshot or short video), customer quote or beta user story, availability and pricing, call to action. Keep under 800 words. Publish at 9am in the target market's timezone.
Customer email (launch day or T+1)
Subject line: lead with the benefit, not the feature name ("Save 3 hours a week on X" beats "Introducing Y"). Body: one paragraph on the problem, two sentences on the solution, one CTA button. No attachments. Mobile-optimized.
Press release (Tier 1 only)
Format: headline, dateline, lead paragraph (who/what/when/where/why), product details paragraph, customer or partner quote, boilerplate, contact info. Send to press contacts under embargo 48 hours before publication.
Coordinate cross-functional launch
Use a RACI to eliminate ambiguity on every launch deliverable.
| Deliverable | R (Does) | A (Owns) | C (Consulted) | I (Informed) |
|---|---|---|---|---|
| GTM brief | Product | Product | Marketing, Sales | Exec |
| Landing page | Marketing | Marketing | Product, Design | All |
| Blog post | Marketing | Marketing | Product | All |
| Sales enablement | Sales | Sales | Product, Marketing | CS |
| Help center articles | CS/Support | CS | Product | Support |
| Feature flags / rollout | Eng | Eng Lead | Product | All |
| Press outreach | Marketing | Marketing | Exec | Legal |
| Launch metrics dashboard | Data/Eng | Product | Analytics | All |
Run a launch readiness review 48 hours before go-live. All R and A owners confirm their deliverables are complete. Any open blocker halts the launch.
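The readiness-review rule is simple enough to encode: every deliverable is done or it is a blocker, and any blocker means no-go. A sketch with hypothetical deliverable names:

```python
def readiness_review(deliverables):
    """deliverables: list of (name, accountable_owner, done). Any open item blocks."""
    blockers = [(name, owner) for name, owner, done in deliverables if not done]
    return ("GO" if not blockers else "NO-GO"), blockers

status, blockers = readiness_review([
    ("GTM brief", "Product", True),
    ("Landing page", "Marketing", True),
    ("Rollback procedure", "Eng Lead", False),  # open item -> halts the launch
])
print(status, blockers)  # -> NO-GO [('Rollback procedure', 'Eng Lead')]
```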
Measure launch success
Pre-launch: define the scorecard
Before launch, fill in this template and get stakeholder agreement:
| Metric | Target (Day 7) | Target (Day 30) | Owner |
|---|---|---|---|
| Trial starts / sign-ups | X | X | Marketing |
| Activation rate (core action) | X% | X% | Product |
| Week-1 retention | X% | - | Product |
| NPS / CSAT | X | X | CS |
| Support ticket volume | < X/day | - | Support |
| Pipeline influenced | $X | $X | Sales |
Post-launch retrospective (Day 30)
- What did we target vs achieve on each metric?
- What went well in the launch process?
- What friction did the cross-functional team experience?
- What would we change in the checklist or process for next time?
- Are there follow-up product changes driven by launch feedback?
Document the retro output and add lessons to the team's launch playbook.
Anti-patterns / common mistakes
| Mistake | Why it fails | What to do instead |
|---|---|---|
| Setting the launch date before the product is ready | Creates pressure to ship incomplete work; leads to support surges and negative first impressions | Set dates from graduation criteria upward, not from a calendar downward |
| Skipping internal launch | Sales and support get blindsided; customers hear conflicting information | Ship internally at T-5; hold a readiness call with every customer-facing team |
| Launching without a rollback plan | When something breaks post-launch, teams scramble without clear ownership or steps | Write rollback criteria and procedure before the launch; test it in staging |
| Measuring only top-of-funnel metrics | Awareness numbers look good while activation and retention quietly fail | Define and track all three buckets: awareness, activation, retention |
| Big-bang rollout for a risky change | A bug at 100% exposure reaches all users simultaneously | Always ramp via feature flags or staged rollout; reserve 100% for confirmed stable state |
| Treating every release as a Tier 1 launch | Team exhaustion; diminishing attention; cry-wolf effect with press and customers | Define launch tiers at kickoff; reserve full GTM effort for true Tier 1 releases |
Gotchas
Launch date set before graduation criteria met - Teams often lock a public date first and then work backward, creating pressure to skip beta graduation gates. Push back on calendar-first planning; let graduation criteria drive the date.
Internal launch skipped as a "nice to have" - When timelines slip, internal readiness is the first thing cut. This means sales and support get blindsided on launch day. Treat T-5 internal launch as a hard dependency, not optional polish.
Rollback plan written as a vague intention - "We can roll back if needed" is not a rollback plan. An agent drafting a launch plan must produce specific rollback criteria (e.g., error rate > 2x baseline for 30 minutes) and named steps, not a note to "define later."
Launch tier not assigned at kickoff - Without a tier decision upfront, every stakeholder lobbies for full Tier 1 treatment. Assign tier at the first planning meeting and use it to automatically scope which checklist items are required.
Success metrics defined after launch - Post-hoc metric selection lets teams cherry-pick numbers that look good. The launch scorecard must be agreed and signed off before any external communications go out.
References
For a detailed, ready-to-use checklist broken down by function and launch phase, load the reference file:
references/launch-checklist.md - comprehensive launch checklist organized by function (Engineering, Product, Marketing, Sales, CS/Support, Legal) and time horizon (T-30 through T+30), suitable for copy-paste into any project management tool
Only load the reference file when actively building or running a launch - it is long and will consume context.
References
beta-programs.md
Beta Programs
A beta program is a structured feedback loop with real users. The goal is to validate assumptions, find critical issues, and build confidence before general availability. This reference covers program design, recruitment, feedback collection, and graduation.
Beta Program Structure
Phase Model
Most products benefit from a two-phase beta:
Closed Beta (4-6 weeks)
- 20-50 hand-picked users
- High-touch: weekly check-ins, dedicated Slack channel
- Goal: Find critical bugs, validate core workflow
- Exit criteria: Zero P0 bugs, core workflow completion rate > 80%
Open Beta (2-4 weeks)
- 200-1000 self-service sign-ups
- Low-touch: in-app feedback widget, community forum
- Goal: Stress-test at scale, validate onboarding, collect usage patterns
- Exit criteria: Error rate < 0.5%, onboarding completion > 60%, NPS > 30
Beta Goals Template
Define 3-5 specific questions the beta will answer:
1. Can users complete [core workflow] without assistance?
Metric: Unassisted completion rate > 80%
2. Is the onboarding flow intuitive?
Metric: Time-to-first-value < 10 minutes
3. Does the system handle real-world data patterns?
Metric: Zero data corruption or loss incidents
4. What are the top 3 feature requests?
Metric: Categorized feedback from 50+ users
5. Is the product stable under concurrent usage?
Metric: p99 latency < 500ms at 100 concurrent users
Recruitment
Criteria Matrix
Score potential beta users on fit:
| Criterion | Weight | Description |
|---|---|---|
| ICP match | High | Matches your target audience profile |
| Technical sophistication | Medium | Can provide detailed, actionable feedback |
| Engagement likelihood | High | Has responded to previous communications or is an active community member |
| Use case diversity | Medium | Represents a different workflow or industry from other participants |
| Reference potential | Low | Willing to be a public case study or reference |
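The criteria matrix above can be turned into a simple weighted score for ranking candidates. A sketch under stated assumptions: the High/Medium/Low-to-number mapping and the 0-5 rating scale are illustrative choices, not part of the skill:

```python
WEIGHTS = {"High": 3, "Medium": 2, "Low": 1}  # assumed numeric mapping

# Criterion -> weight, mirroring the matrix above.
CRITERIA = {
    "icp_match": "High",
    "technical_sophistication": "Medium",
    "engagement_likelihood": "High",
    "use_case_diversity": "Medium",
    "reference_potential": "Low",
}

def fit_score(ratings: dict) -> int:
    """ratings: criterion -> 0-5 rating from the recruiter; missing = 0."""
    return sum(WEIGHTS[w] * ratings.get(c, 0) for c, w in CRITERIA.items())

candidates = {
    "acme":   {"icp_match": 5, "engagement_likelihood": 4, "use_case_diversity": 3},
    "globex": {"icp_match": 2, "reference_potential": 5},
}
ranked = sorted(candidates, key=lambda c: fit_score(candidates[c]), reverse=True)
print(ranked)  # -> ['acme', 'globex']
```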
Recruitment Channels
Ranked by quality of participants:
- Existing power users - Already trust you, provide the best feedback
- Waitlist sign-ups - Self-selected interest, high engagement
- Community members - Active in your Discord/Slack/forum
- Social media followers - Broader reach, lower engagement rate
- Cold outreach - Targeted emails to ICP-matching companies
Recruitment Email Template
Subject: You're invited to beta test [Product] - [one-line benefit]
Hi [Name],
We're building [product] to solve [problem], and we'd love your feedback
before we launch publicly.
As a beta tester, you'll get:
- Early access to [product] starting [date]
- Direct line to our product team via a private Slack channel
- [Incentive: e.g., 3 months free, lifetime discount, swag]
What we ask in return:
- Use [product] for [specific workflow] at least [frequency]
- Complete a 5-minute feedback survey each week
- Report any bugs or issues you encounter
The beta runs from [start date] to [end date]. Interested?
[CTA button: Join the Beta]
Onboarding Beta Users
Day-1 Welcome Kit
Send immediately upon access:
- Welcome email with login credentials and getting-started link
- Quick-start guide (5 minutes to first value)
- Known limitations document - set expectations early
- Feedback channels - where to report bugs, request features, ask questions
- Beta agreement - expectations, NDA (if needed), timeline
Slack/Discord Channel Structure
#beta-announcements - Product team posts updates (read-only for users)
#beta-feedback - Users share feedback, vote on ideas
#beta-bugs - Bug reports with reproduction steps
#beta-general - Open discussion, questions, tips
Feedback Collection
Weekly Survey Template
Keep it short (under 5 minutes). Rotate questions to avoid fatigue.
Core questions (every week):
- How would you rate your experience this week? (1-5)
- Did you encounter any bugs or issues? (Yes/No + description)
- What is the ONE thing we should improve next?
Rotating questions (pick 2 per week):
- How easy was it to [specific feature]? (1-5)
- Compared to your current tool, how does [product] perform? (Much worse to Much better)
- Would you recommend [product] to a colleague? (0-10 NPS)
- What feature are you most excited about?
- What almost made you stop using [product]?
Feedback Categorization
Tag all feedback into these buckets:
| Category | Action | Priority |
|---|---|---|
| Bug - P0 (data loss, crash) | Fix immediately | Blocker for GA |
| Bug - P1 (broken workflow) | Fix before GA | High |
| Bug - P2 (cosmetic, edge case) | Fix if time allows | Medium |
| Feature request | Add to backlog, cluster by theme | Evaluate |
| UX friction | Investigate, may fix before GA | High |
| Positive feedback | Share with team, use in marketing | Low (but valuable) |
| Out of scope | Acknowledge, explain timeline | Low |
Usage Analytics to Track
Instrument these events during beta:
- account_created (timestamp, source)
- onboarding_started / onboarding_completed (timestamp, duration)
- core_action_performed (action_type, timestamp, success/failure)
- feature_used (feature_name, timestamp, duration)
- error_encountered (error_type, timestamp, user_action)
- session_started / session_ended (duration, pages_viewed)
feedback_submitted (type, sentiment_score)
Exit Criteria and Graduation
Go/No-Go Checklist
Before graduating from beta to GA, verify:
- All P0 bugs resolved
- All P1 bugs resolved or have documented workarounds
- Core workflow completion rate > target (e.g., 80%)
- Onboarding completion rate > target (e.g., 60%)
- Error rate < target (e.g., 0.5%)
- NPS > target (e.g., 30)
- Support documentation covers top 10 beta questions
- Performance meets SLA under expected GA load
- Beta users notified of GA timeline and any changes
Communicating Beta End
Subject: [Product] beta is graduating - here's what's next
Hi [Name],
Thank you for being part of our beta program! Your feedback directly shaped
[product] - here are the top 3 changes we made based on your input:
1. [Change based on feedback]
2. [Change based on feedback]
3. [Change based on feedback]
What happens next:
- [Product] launches publicly on [date]
- Your account transitions automatically - no action needed
- As a beta tester, you get [incentive: e.g., 6 months free, special badge]
Thank you for helping us build something better.
Beta Anti-patterns
- Too many users, too early - 500 users in closed beta produces overwhelming noise. Start with 20-50 and add more as you fix initial issues.
- No feedback structure - "Let us know what you think" produces nothing. Use specific, weekly surveys with concrete questions.
- Ignoring feedback - If beta users report issues that are not addressed or acknowledged, they disengage. Close the loop on every report.
- Indefinite beta - A beta without an end date signals lack of confidence. Set a fixed duration and stick to it (extend only with clear rationale).
- Beta as free tier - Some users join betas for free access with no intention of providing feedback. Screen for engagement likelihood.
gtm-strategy.md
Go-to-Market Strategy
A GTM strategy answers four questions: who is the audience, why should they care, how will they find out, and what does success look like. This reference provides a complete GTM template and channel playbooks.
GTM Plan Template
Use this structure for any product or feature launch. Adapt depth to launch size - a major product launch needs all sections; a small feature may only need sections 1-4.
1. Target Audience
Define the audience with specificity. Use the ICP (Ideal Customer Profile) format:
Company profile:
- Industry: [e.g., B2B SaaS]
- Size: [e.g., 50-500 employees]
- Stage: [e.g., Series A to Series C]
- Tech stack: [e.g., uses React, deploys on AWS]
Buyer persona:
- Title: [e.g., VP of Engineering]
- Pain: [e.g., CI/CD pipelines take 45 min, blocking developer productivity]
- Current solution: [e.g., Jenkins with custom scripts]
- Budget authority: [Yes/No, approx range]
User persona (if different from buyer):
- Title: [e.g., Senior Software Engineer]
- Daily workflow: [e.g., pushes 3-5 PRs/day, waits for CI]
- Success metric: [e.g., PR merge time under 30 minutes]
2. Positioning Statement
Use Geoffrey Moore's positioning template:
For [target audience]
who [statement of need or opportunity],
[product name] is a [product category]
that [key benefit / reason to believe].
Unlike [primary competitive alternative],
our product [primary differentiation].
Example:
For engineering teams at growing startups
who lose hours waiting for slow CI pipelines,
FastCI is a cloud-native CI/CD platform
that runs builds 10x faster using intelligent parallelization.
Unlike Jenkins or CircleCI,
FastCI requires zero configuration and scales automatically.
3. Messaging Matrix
Create a message for each persona at each awareness stage:
| Persona | Unaware | Problem-aware | Solution-aware | Product-aware |
|---|---|---|---|---|
| VP Engineering | "Your CI is costing you $X/month in lost developer time" | "Fast CI exists - 10x faster builds" | "Here's how FastCI compares to Jenkins" | "Start free, see results in 5 min" |
| Senior Engineer | "What if CI never blocked your PR?" | "Parallel test execution is the answer" | "FastCI auto-parallelizes with zero config" | "Install in 2 commands, keep your yaml" |
4. Channel Strategy
Rank channels by expected impact. Focus on 2-3 primary channels, not all of them.
For developer tools:
| Channel | Effort | Impact | Timeline |
|---|---|---|---|
| Product Hunt launch | Medium | High (day-1 spike) | Launch day |
| Hacker News / Show HN | Low | High (if it hits front page) | Launch day |
| Blog post (own site) | Medium | Medium (SEO long-tail) | Launch day |
| Twitter/X thread | Low | Medium | Launch day |
| Dev community posts (Reddit, Discord) | Low | Medium | Launch week |
| YouTube demo video | High | Medium-High | Launch week |
| Conference talk | High | High (but delayed) | Post-launch |
| Paid ads (Google, LinkedIn) | Medium | Varies | Post-launch |
For B2B SaaS:
| Channel | Effort | Impact | Timeline |
|---|---|---|---|
| Email to existing customers | Low | High | Launch day |
| Sales enablement (battle cards, demo script) | Medium | High | Pre-launch |
| Partner co-marketing | High | High | Launch week |
| Webinar / live demo | Medium | Medium-High | Launch week |
| Case study with beta customer | High | High (long-tail) | Post-launch |
| Analyst briefing (Gartner, Forrester) | High | High (enterprise) | Post-launch |
5. Pricing Strategy
If the launch involves pricing, document:
Model: [Free / Freemium / Paid / Usage-based / Hybrid]
Free tier: [What's included, limits]
Paid tiers:
- Starter: $X/mo - [features, limits]
- Pro: $X/mo - [features, limits]
- Enterprise: Custom - [features, limits]
Rationale: [Why this model fits the audience and competitive landscape]
Migration: [How existing users transition - grandfathered, grace period, etc.]
6. Timeline
Map key milestones to dates:
T-4 weeks: GTM plan finalized, stakeholders aligned
T-3 weeks: Content creation begins (blog, docs, emails)
T-2 weeks: Sales enablement materials ready, support briefed
T-1 week: Launch readiness review, all content approved
T-0 (Launch): Blog published, emails sent, social posted
T+1 week: Measure day-1 to day-7 metrics
T+2 weeks: Post-launch retrospective
T+4 weeks: First metrics review against targets
7. Success Metrics
Define 3-5 metrics with targets. Use a mix of leading and lagging indicators:
| Metric | Target | Measurement | Leading/Lagging |
|---|---|---|---|
| Sign-ups (day 1) | 500 | Analytics | Leading |
| Activation rate (day 7) | 30% | Analytics | Leading |
| Conversion to paid (day 30) | 5% | Billing system | Lagging |
| Support ticket volume | < 2x baseline | Helpdesk | Leading |
| NPS from new users (day 14) | > 40 | Survey | Lagging |
GTM Anti-patterns
- Launching everywhere at once - Spreading thin across 10 channels produces no signal. Pick 2-3, execute well, measure, then expand.
- Positioning by features - Users care about outcomes, not features. Lead with the problem solved, not the technology used.
- No competitive positioning - If you cannot articulate why you are different from the top 2 alternatives, neither can your users.
- Ignoring existing users - Your best launch channel is often your existing user base. They already trust you and will amplify the message.
- Success metrics after launch - If you define metrics after seeing the data, you will unconsciously pick metrics that look good. Define them before.
launch-checklist.md
Launch Checklist
A comprehensive launch checklist organized by function and time horizon. Copy this into your project management tool (Linear, Jira, Notion, etc.) and assign an owner and due date to every item. Every item is binary: done or not done. "Almost done" is not done.
How to use this checklist
- Assign a launch tier (T1-T4) at kickoff. Skip sections marked as "Tier 1 only" or "Tier 1-2 only" for lower-tier launches.
- Assign an owner to every checked item before the launch plan is shared.
- Set a launch readiness review 48 hours before go-live. Every item must be marked done or explicitly waived with a documented reason.
- Any open blocker at the readiness review halts the launch.
T-30 to T-14: Strategy and alignment
Product
- Launch tier assigned (T1, T2, T3, or T4)
- GTM brief written and circulated to all stakeholders
- Target segment and positioning statement finalized
- Pricing and packaging confirmed (if applicable)
- Success metrics defined with numeric targets for Day 7 and Day 30
- Launch scorecard template distributed to metric owners
- Beta graduation criteria documented (if beta phase applies)
Engineering
- Scope finalized and code-complete date agreed
- Feature flag strategy defined (on/off, percentage, cohort, or none)
- Rollout stages and hold points documented
- Rollback procedure written and owner assigned
- Database migration reversibility confirmed (if migrations apply)
- Dependent service owners notified of upcoming change
Legal / Compliance (Tier 1-2)
- Terms of Service changes identified
- Privacy review scheduled (if new data collection or processing)
- Trademark search completed for any new product name or brand
- Regulatory approvals identified for relevant markets (GDPR, HIPAA, etc.)
T-14 to T-7: Build and prepare
Engineering
- Feature code merged to main and deployed behind feature flag
- Internal dogfood / dark launch enabled for internal users
- Load testing completed at 2x expected peak traffic
- Error monitoring and alerting configured for new code paths
- Latency and throughput baseline recorded
- Capacity plan reviewed and any scaling provisioned
- Runbook or on-call guide updated with launch-specific context
- Database migrations tested on a production-equivalent dataset
Product
- In-app onboarding flow or tooltips implemented and QA'd
- Release notes written and reviewed
- Feature documentation or help center articles drafted
Marketing (Tier 1-2)
- Blog post / announcement drafted
- Landing page copy and design drafted
- Social media posts drafted for launch day and T+3
- Email campaign to existing users drafted
- Press list identified and brief prepared (Tier 1 only)
- Embargo date and press contacts confirmed (Tier 1 only)
Sales (Tier 1-2)
- Pitch deck updated with new feature or product
- Objection handling guide written for common questions
- Demo environment updated to show new functionality
- Sales training session scheduled
T-7 to T-2: Review and approve
Cross-functional
- Launch readiness review meeting scheduled (T-2 from go-live)
- RACI matrix shared with all owners
- Go/no-go criteria documented (what blockers halt the launch)
Engineering
- Rollback procedure tested end-to-end in staging
- On-call rotation confirmed for launch day and T+72
- Alert thresholds calibrated (not too noisy, not too quiet)
- Feature flag configuration reviewed and locked
Product
- Feature documentation published or scheduled to publish at launch
- In-app messaging tested across all supported browsers and devices
- Analytics instrumentation verified (events firing correctly)
Marketing (Tier 1-2)
- Blog post finalized and approved by stakeholders
- Landing page live in staging and reviewed
- Email campaign reviewed, tested on mobile, and scheduled
- Social media posts approved and scheduled
- Press brief sent to journalists under embargo (Tier 1 only)
Customer Success / Support
- Support team briefed on new feature or product (live or async)
- Help center articles published or staged for publish
- Known issues documented with workarounds
- Internal FAQ distributed to support agents
- Escalation path defined: who gets paged for P0s on launch day
- Support surge plan in place (extra coverage on launch day and T+1)
Sales (Tier 1-2)
- Sales training completed
- Demo environment reviewed by sales lead
- CRM fields or opportunity stages updated to track launch influence
Legal / Compliance (Tier 1-2)
- Terms of Service changes approved and publish date confirmed
- Privacy policy updated (if applicable)
- Compliance sign-off email received and filed
T-2: Launch readiness review
Run a 30-45 minute meeting. Walk through every open item:
- Status - Done, in progress, or blocked?
- Owner - Confirmed and available on launch day?
- Risk - What is the impact if this item is incomplete?
- Decision - Launch, delay, or waive with documented reason?
End with a formal go/no-go decision. Record the outcome and distribute it to all stakeholders. If the decision is go-with-conditions, state the conditions explicitly.
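The readiness-review walkthrough can be reduced to a simple aggregation rule: any blocked item forces no-go, and conditional items become the recorded conditions. A minimal sketch (the item names and status vocabulary are illustrative):

```python
def go_no_go(item_decisions):
    """Aggregate per-item outcomes from the readiness review.
    Each value is one of: 'done', 'waived', 'conditional', 'blocked'."""
    blocked = [item for item, v in item_decisions.items() if v == "blocked"]
    if blocked:
        return "no-go", blocked
    conditions = [item for item, v in item_decisions.items() if v == "conditional"]
    return "go", conditions

decision, notes = go_no_go({
    "analytics instrumentation verified": "done",
    "help center articles": "conditional",  # e.g. must publish by T+1
    "press brief": "waived",
})
```

If the decision is "go" with a non-empty conditions list, that is the go-with-conditions outcome and the list is what gets distributed to stakeholders.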
Launch day (T-0)
Pre-launch (T-0, morning)
- Final go/no-go check with eng, product, and marketing leads
- Feature flag enabled for Stage 1 (dark launch or closed beta) or rolled to target percentage if skipping beta
- Monitoring dashboard open and active on launch day
- On-call engineer confirmed available and pager tested
- Support team confirmed available and ready
- Rollback procedure printed or pinned in incident channel
Launch actions
- Blog post published (if Tier 1-2)
- Social media posts published or released from scheduled queue
- Email campaign sent (if applicable)
- Landing page set to public (if staged)
- In-app announcements or banners enabled
- Press embargo lifted and journalist follow-up sent (Tier 1 only)
- Internal announcement sent (all-hands Slack, company email)
Post-launch monitoring (T+0 to T+72)
- Error rate vs pre-launch baseline tracked hourly for first 4 hours
- p99 latency vs SLA monitored
- Activation metric tracked against Day 7 target (leading indicator)
- Support ticket volume tracked against surge plan threshold
- Feature flag percentage incremented per rollout plan (if phased)
- Rollback criteria checked at each percentage increment hold point
T+7 to T+30: Post-launch
Metrics review (T+7)
- Day 7 actuals vs targets recorded in launch scorecard
- Any metric below 50% of target triggers an investigation
- Rollout completed to 100% (if still ramping)
- Known issues resolved or triaged to backlog
Retrospective (T+30)
- Retrospective meeting scheduled with cross-functional leads
- Metrics vs targets reviewed (all six buckets)
- What went well documented
- What went wrong documented (process issues, not blame)
- Action items written with owners and due dates
- Launch checklist updated based on retrospective findings
- Lessons added to team's launch playbook
Rollback procedure template
Fill this in before every Tier 1-2 launch:
Rollback trigger criteria:
- Error rate exceeds ___% above pre-launch baseline for ___ minutes
- P0 or P1 bug reported affecting ___% of users
- Activation rate below ___% of target after ___ hours at 100%
- Any data loss or security incident
Rollback steps:
- Incident commander declared: [name]
- Feature flag disabled or percentage rolled back to 0%: [engineer name]
- Database migration reversed (if applicable): [engineer name]
- Internal incident Slack channel opened: #launch-incident-[feature]
- Customer-facing status page updated (if user-visible): [owner]
- External communications paused: [marketing owner]
- Post-incident review scheduled within 48 hours
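Once the blanks are filled in, the trigger criteria are mechanical to evaluate. A sketch of that evaluation; all threshold values below are made-up examples, not recommendations:

```python
def rollback_triggered(metrics, thresholds):
    """Evaluate the filled-in trigger criteria from the template.
    Threshold values are the blanks, chosen per launch."""
    if metrics.get("data_loss") or metrics.get("security_incident"):
        return True  # always an immediate trigger, no threshold needed
    delta = metrics["error_rate_pct"] - metrics["baseline_error_rate_pct"]
    if (delta > thresholds["error_rate_delta_pct"]
            and metrics["minutes_elevated"] >= thresholds["sustain_minutes"]):
        return True
    if metrics.get("p0_p1_affected_pct", 0) > thresholds["bug_affected_pct"]:
        return True
    return False

# Error rate 2.0 points above baseline, sustained for 15 minutes -> rollback
assert rollback_triggered(
    {"error_rate_pct": 2.4, "baseline_error_rate_pct": 0.4, "minutes_elevated": 15},
    {"error_rate_delta_pct": 1.0, "sustain_minutes": 10, "bug_affected_pct": 5},
)
```

Keeping the thresholds in data rather than prose makes the "no discussion needed" triggers genuinely automatic.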
rollout-strategy.md
Rollout Strategy
Rollout strategy defines how a feature moves from 0% to 100% of users. The right strategy depends on risk level, reversibility, infrastructure capabilities, and whether the launch has a marketing component requiring a specific date.
Rollout Strategy Matrix
Choose based on risk and launch type:
| Strategy | Risk Tolerance | Reversibility | Best For |
|---|---|---|---|
| Internal only | Minimal | Instant (flag off) | Dogfooding, early validation |
| Percentage rollout | Low-Medium | Fast (flag adjustment) | Most feature launches |
| Cohort-based | Medium | Moderate (per-cohort flags) | Pricing, billing, onboarding changes |
| Geographic | Medium-High | Moderate (per-region flags) | Infrastructure, latency-sensitive features |
| Time-based (canary) | Low | Fast (flag adjustment) | Backend/API changes, database migrations |
| Big-bang | High | Slow (requires hotfix) | Marketing-driven launches, major announcements |
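The matrix can be encoded as a rough decision helper. This is one possible reading of the table, a starting point rather than a rule; the input vocabulary is invented for the sketch:

```python
def pick_rollout_strategy(risk, reversible_fast, fixed_marketing_date):
    """Rough encoding of the matrix above -- adjust to your context."""
    if fixed_marketing_date:
        return "big-bang"            # a hard date forces everyone at once
    if risk == "minimal":
        return "internal only"       # dogfood first
    if not reversible_fast:
        return "cohort-based"        # per-cohort flags limit blast radius
    return "percentage rollout"      # default for most feature launches

assert pick_rollout_strategy("low", True, False) == "percentage rollout"
```

Geographic and canary strategies are deliberately omitted here; they are usually dictated by infrastructure shape rather than risk alone.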
Percentage Rollout Playbook
The most common strategy. Use feature flags to gradually increase the percentage of users who see the new feature.
Standard Rollout Schedule
Stage 1: Internal team (dogfood) - 1-2 weeks
Stage 2: 1% of external users - 24 hours hold
Stage 3: 5% of external users - 24 hours hold
Stage 4: 25% of external users - 48 hours hold
Stage 5: 50% of external users - 48 hours hold
Stage 6: 100% of external users (GA) - monitor for 2 weeks
Stage 7: Remove feature flag - after 2-4 weeks of stability
Hold Point Checks
At each stage, verify ALL of the following before advancing:
[ ] Error rate: < baseline + 0.1%
[ ] p99 latency: < SLA threshold (e.g., 500ms)
[ ] P0/P1 bugs: Zero open
[ ] Support tickets: < 2x baseline for affected feature area
[ ] Core business metrics: No degradation (conversion, revenue, engagement)
[ ] User feedback: No critical UX issues reported
If ANY check fails, do NOT advance. Investigate, fix, re-verify, then proceed.
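The hold-point gate above is an all-or-nothing conjunction. A minimal sketch of evaluating it before each percentage increase; metric names and example values are illustrative:

```python
def may_advance(metrics, baseline, sla_ms=500):
    """Evaluate every hold-point check; advance only if all pass."""
    checks = {
        "error_rate": metrics["error_rate_pct"] < baseline["error_rate_pct"] + 0.1,
        "p99_latency": metrics["p99_ms"] < sla_ms,
        "p0_p1_bugs": metrics["open_p0_p1"] == 0,
        "support_tickets": metrics["tickets"] < 2 * baseline["tickets"],
    }
    failed = [name for name, ok in checks.items() if not ok]
    return not failed, failed

ok, failed = may_advance(
    {"error_rate_pct": 0.15, "p99_ms": 430, "open_p0_p1": 0, "tickets": 12},
    {"error_rate_pct": 0.10, "tickets": 10},
)
```

Returning the list of failed checks (not just a boolean) matters in practice: the rollout update you post at each stage should name exactly which check held the launch.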
Accelerated Schedule (Low-Risk Features)
For low-risk, easily reversible changes (UI tweaks, copy changes, non-critical features):
Stage 1: Internal team - 2-3 days
Stage 2: 10% of users - 24 hours hold
Stage 3: 50% of users - 24 hours hold
Stage 4: 100% of users (GA) - monitor for 1 week
Conservative Schedule (High-Risk Features)
For database migrations, billing changes, auth changes, or platform-level features:
Stage 1: Internal team only - 2 weeks minimum
Stage 2: 1% of users (new accounts) - 1 week hold
Stage 3: 5% of users - 1 week hold
Stage 4: 10% of users - 1 week hold
Stage 5: 25% of users - 1 week hold
Stage 6: 50% of users - 1 week hold
Stage 7: 100% of users (GA) - monitor for 4 weeks
Feature Flag Patterns
Flag Types
| Type | Description | Example |
|---|---|---|
| Release flag | Controls rollout percentage, removed after GA | new_checkout_flow |
| Experiment flag | A/B test with measurement, removed after decision | pricing_page_variant_b |
| Ops flag | Kill switch for degraded performance, kept permanently | disable_recommendations |
| Permission flag | Unlocks paid/enterprise features, kept permanently | advanced_analytics |
Flag Naming Convention
<type>_<feature>_<optional_variant>
Examples:
release_new_dashboard
experiment_onboarding_v2
ops_disable_search_indexing
permission_sso_login
Flag Lifecycle
1. Create flag (disabled by default)
2. Develop feature behind flag
3. Test with flag on/off in staging
4. Roll out using percentage strategy
5. Reach 100% (GA)
6. Monitor for 2-4 weeks
7. Remove flag from code (make feature permanent)
8. Delete flag from flag management system
Step 7 is critical and often skipped. Flags left in code become technical debt. Track flag age and set alerts for flags older than 90 days that are at 100%.
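Tracking flag age can be a one-liner over your flag inventory. A sketch, assuming the naming convention from the previous section so that permanent ops and permission flags are excluded; the flag records and dates are invented:

```python
from datetime import date

def stale_release_flags(flags, today, max_age_days=90):
    """Release/experiment flags at 100% past the age limit.
    Ops and permission flags are kept permanently, so they are skipped."""
    return [
        f["name"] for f in flags
        if f["name"].startswith(("release_", "experiment_"))
        and f["percentage"] == 100
        and (today - f["reached_100_on"]).days > max_age_days
    ]

flags = [
    {"name": "release_new_dashboard", "percentage": 100,
     "reached_100_on": date(2024, 1, 10)},
    {"name": "ops_disable_recommendations", "percentage": 100,
     "reached_100_on": date(2023, 1, 1)},  # kill switch, kept on purpose
]
overdue = stale_release_flags(flags, today=date(2024, 6, 1))
```

Run this on a schedule and feed the result into the monthly cleanup described below; the 90-day alert and the 30-day cleanup window are both just `max_age_days` settings.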
Stale Flag Cleanup
Schedule monthly flag cleanup:
For each flag older than 30 days at 100%:
1. Verify feature is stable (no related incidents in 30 days)
2. Remove flag checks from code (replace with unconditional path)
3. Remove flag from flag management system
4. Delete dead code from the "off" path
5. Update tests to remove flag-based branching
Cohort-Based Rollout
Roll out to user segments instead of random percentages. Useful when different user types have different risk profiles.
Common Cohort Strategies
Strategy 1: Free before paid
1. Free tier users (lower risk, no revenue impact)
2. Starter plan users
3. Pro plan users
4. Enterprise users (highest risk, most revenue)
Strategy 2: New before existing
1. New sign-ups (no migration, no habit disruption)
2. Low-activity existing users
3. High-activity existing users
Strategy 3: Internal before external
1. Employees and contractors
2. Design partners / beta users
3. All external users
When to Use Cohort-Based
- Pricing or billing changes (free tier absorbs risk first)
- Onboarding flow changes (new users have no expectations)
- Breaking API changes (internal consumers migrate first)
- Data migration (small accounts are faster to rollback)
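Both percentage and cohort rollouts need a stable user-to-bucket assignment, so the same user does not flip in and out between stages. A common approach (not prescribed by this doc) is hashing the user ID with the feature name:

```python
import hashlib

def in_rollout(user_id: str, feature: str, percentage: int) -> bool:
    """Stable bucket in 0-99 per (feature, user); independent across features."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest[:8], 16) % 100 < percentage

# The assignment is monotonic: a user admitted at 5% stays in at 25%.
for uid in ("u1", "u2", "u3"):
    assert not in_rollout(uid, "new_checkout_flow", 0)
    assert in_rollout(uid, "new_checkout_flow", 100)
    if in_rollout(uid, "new_checkout_flow", 5):
        assert in_rollout(uid, "new_checkout_flow", 25)
```

Salting the hash with the feature name keeps buckets independent, so the same 1% of users is not the guinea pig for every launch. For cohort-based rollouts, replace the hash with an explicit segment lookup (plan tier, signup date, region) and gate per cohort instead.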
Geographic Rollout
Roll out region by region. Useful for infrastructure changes, compliance requirements, or when latency matters.
Standard Geographic Order
1. Development region (where your team is, fastest to debug)
2. Lowest-traffic region (minimizes blast radius)
3. Moderate-traffic regions (one at a time)
4. Highest-traffic region (last, after confidence is high)
Region-Specific Considerations
- Data residency - Ensure new feature complies with local data laws (GDPR, etc.)
- Latency - Test from the region, not just to the region
- Peak hours - Roll out during off-peak for each region
- Support coverage - Ensure support is available in the region's timezone
Rollback Procedures
Rollback Decision Criteria
Define these BEFORE launch. When any condition is met, execute rollback:
IMMEDIATE rollback (no discussion needed):
- Data loss or corruption
- Authentication/authorization bypass
- Error rate > 5x baseline
- Complete feature unavailability
ESCALATED rollback (discuss with on-call lead, decide in < 15 min):
- Error rate > 2x baseline for > 10 minutes
- p99 latency > 2x SLA for > 10 minutes
- > 10 support tickets about the same issue in 1 hour
- Revenue-impacting bug confirmed
Rollback Execution Steps
1. Set feature flag to 0% (or "off")
- This is the fastest rollback and should resolve most issues
- Verify error rate returns to baseline within 5 minutes
2. If flag rollback insufficient, deploy previous version
- Revert the most recent deployment
- Verify system health
3. If data migration was involved
- Execute reverse migration script (must be pre-tested)
- Verify data integrity with validation queries
- Notify affected users if data was temporarily inconsistent
4. Communicate
- Update status page (if public-facing)
- Notify support team
- Notify stakeholders via Slack/email
- If user-facing impact > 5 minutes, draft incident communication
Post-Rollback Actions
1. Confirm system is stable (monitor for 30 minutes)
2. Create incident report with timeline
3. Root cause analysis within 24 hours
4. Fix identified issues
5. Re-test in staging
6. Schedule re-rollout with stakeholder approval
Monitoring During Rollout
Key Metrics to Watch
| Metric | Tool | Alert Threshold |
|---|---|---|
| Error rate (5xx) | APM / Datadog / New Relic | > baseline + 0.5% |
| p99 latency | APM | > SLA threshold |
| CPU / Memory | Infrastructure monitoring | > 80% utilization |
| Support tickets | Helpdesk (Zendesk, Intercom) | > 2x hourly baseline |
| Business metrics | Analytics (Amplitude, Mixpanel) | > 10% degradation |
| User feedback | Social media, community | Qualitative monitoring |
Rollout Communication Template
Post in your team's launch channel at each stage:
[Rollout Update] <Feature Name>
Stage: X% -> Y%
Time: YYYY-MM-DD HH:MM UTC
Metrics since last stage:
- Error rate: X.XX% (baseline: X.XX%)
- p99 latency: XXXms (SLA: XXXms)
- Support tickets: X (baseline: X)
- Issues found: [None | List]
Decision: Advancing to Y% | Holding | Rolling back
Next check: YYYY-MM-DD HH:MM UTC
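The rollout update template can be filled in programmatically so every stage posts the same shape. A sketch; the field list mirrors the template above and the sample values are invented:

```python
def rollout_update(feature, from_pct, to_pct, error_pct, baseline_pct,
                   p99_ms, sla_ms, tickets, baseline_tickets,
                   decision, issues="None"):
    """Render the stage-update message posted in the launch channel."""
    return (
        f"[Rollout Update] {feature}\n"
        f"Stage: {from_pct}% -> {to_pct}%\n"
        "Metrics since last stage:\n"
        f"- Error rate: {error_pct:.2f}% (baseline: {baseline_pct:.2f}%)\n"
        f"- p99 latency: {p99_ms}ms (SLA: {sla_ms}ms)\n"
        f"- Support tickets: {tickets} (baseline: {baseline_tickets})\n"
        f"- Issues found: {issues}\n"
        f"Decision: {decision}"
    )

msg = rollout_update("New Dashboard", 5, 25, 0.12, 0.10, 410, 500, 8, 6,
                     decision="Advancing to 25%")
```

Posting a generated message at every hold point leaves an audit trail of exactly what the metrics looked like when each advance/hold/rollback decision was made.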