growth-hacking
Use this skill when designing viral loops, building referral programs, optimizing activation funnels, or improving retention. Triggers on growth loops, referral programs, activation funnels, retention strategies, viral coefficient, product-led growth, AARRR metrics, and any task requiring growth experimentation or optimization.
What is growth-hacking?
growth-hacking is a production-ready AI agent skill for claude-code, gemini-cli, openai-codex. It covers designing viral loops, building referral programs, optimizing activation funnels, and improving retention.
Quick Facts
| Field | Value |
|---|---|
| Category | marketing |
| Version | 0.1.0 |
| Platforms | claude-code, gemini-cli, openai-codex |
| License | MIT |
How to Install
- Make sure you have Node.js installed on your machine.
- Run the following command in your terminal:
npx skills add AbsolutelySkilled/AbsolutelySkilled --skill growth-hacking
- The growth-hacking skill is now available in your AI coding agent (Claude Code, Gemini CLI, OpenAI Codex, etc.).
Overview
Growth hacking is a discipline that combines product, data, and marketing to find the most efficient levers for sustainable user and revenue growth. Unlike traditional marketing, it is rooted in rapid experimentation, quantitative measurement, and closed-loop feedback between product behavior and acquisition channels.
The best growth practitioners treat retention as the foundation, activation as the multiplier, and virality as the compounding force. Hacks without retention are just churn machines. This skill gives an agent the frameworks, vocabulary, and tactical playbooks to design experiments, build growth systems, and reason about compounding growth.
Tags
growth viral-loops referral activation retention plg
Platforms
- claude-code
- gemini-cli
- openai-codex
Frequently Asked Questions
What is growth-hacking?
Use this skill when designing viral loops, building referral programs, optimizing activation funnels, or improving retention. Triggers on growth loops, referral programs, activation funnels, retention strategies, viral coefficient, product-led growth, AARRR metrics, and any task requiring growth experimentation or optimization.
How do I install growth-hacking?
Run npx skills add AbsolutelySkilled/AbsolutelySkilled --skill growth-hacking in your terminal. The skill will be immediately available in your AI coding agent.
What AI agents support growth-hacking?
This skill works with claude-code, gemini-cli, openai-codex. Install it once and use it across any supported AI coding agent.
Maintainers
Generated from AbsolutelySkilled
SKILL.md
Growth Hacking
Growth hacking is a discipline that combines product, data, and marketing to find the most efficient levers for sustainable user and revenue growth. Unlike traditional marketing, it is rooted in rapid experimentation, quantitative measurement, and closed-loop feedback between product behavior and acquisition channels.
The best growth practitioners treat retention as the foundation, activation as the multiplier, and virality as the compounding force. Hacks without retention are just churn machines. This skill gives an agent the frameworks, vocabulary, and tactical playbooks to design experiments, build growth systems, and reason about compounding growth.
When to use this skill
Trigger this skill when the user:
- Wants to design or audit a growth loop or viral loop
- Needs to build or improve a referral program
- Asks about optimizing an activation funnel or improving time-to-value
- Wants to reduce churn or improve retention using cohort analysis
- Asks about AARRR metrics, pirate metrics, or north star metric selection
- Needs to run growth experiments and prioritize them (ICE, PIE scoring)
- Is implementing product-led growth (PLG) or a freemium model
- Wants to find the "aha moment" and engineer onboarding toward it
Do NOT trigger this skill for:
- Pure paid advertising campaign execution (creative, ad spend optimization) - use a performance marketing skill instead
- Brand strategy and positioning work disconnected from product or funnel metrics
Key principles
Measure everything - Every growth decision must be anchored to data. Define metrics before running experiments. If you can't measure it, you can't improve it. Instrument events, track cohorts, and baseline before changing anything.
One metric that matters (OMTM) - Focus each growth phase on a single north star metric that best predicts long-term value. Optimizing many metrics at once diffuses effort and obscures causality.
Experiment velocity wins - Teams that run more experiments per week consistently outperform those that run fewer but "bigger" experiments. Lower the cost of an experiment, raise the volume. Most experiments fail - that's fine, fail fast.
Retention is the foundation - Acquiring users into a leaky bucket is burning money. Fix retention first. A product with 40% Day-30 retention can grow efficiently; one with 5% cannot be saved by acquisition spend.
Sustainable growth over hacks - Short-term hacks (spam, dark patterns, manufactured virality) destroy trust and churn users. Build growth systems that deliver genuine value at each step so growth compounds rather than collapses.
Core concepts
AARRR pirate metrics
Dave McClure's framework maps the full user lifecycle into five measurable stages:
| Stage | Question | Example metric |
|---|---|---|
| Acquisition | How do users find you? | CAC, channel attribution, organic vs paid split |
| Activation | Do users have a great first experience? | Day-1 activation rate, aha moment conversion |
| Retention | Do users come back? | Day-7/30/90 retention, churn rate, DAU/MAU |
| Referral | Do users tell others? | Viral coefficient (K), NPS, referral invite rate |
| Revenue | Do you make money? | MRR, LTV, LTV:CAC ratio, expansion revenue |
Always diagnose which stage is broken before prescribing a fix. See
references/growth-frameworks.md for the full AARRR diagnostic template.
Growth loops vs funnels
A funnel is linear and one-way: Acquire -> Activate -> Retain -> Monetize. Every user enters at the top and exits somewhere below. Funnels are necessary but not sufficient for compounding growth.
A growth loop is circular: the output of one cycle becomes the input of the next. Examples:
- Viral loop: User invites friend -> friend signs up -> friend invites more friends
- Content loop: User creates content -> content ranks in search -> new users find it -> create more content
- Sales-assisted loop: Lead signs up -> sales converts -> expansion revenue funds more sales
Loops compound; funnels don't. Design for loops. See references/growth-frameworks.md
for loop templates.
Viral coefficient (K-factor)
K = invites_sent_per_user * conversion_rate_of_invite
- K > 1: viral growth (each user brings more than one new user)
- K = 0.5-1: strong word of mouth, supplements other channels
- K < 0.3: product is not meaningfully viral; focus elsewhere
Improving K requires either increasing invites sent (motivation) or increasing invite conversion (landing page, offer, trust).
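A minimal calculation sketch of the formula above; the invite and signup counts are hypothetical:

```python
def viral_coefficient(invites_sent: int, active_users: int, invite_signups: int) -> float:
    """K = (invites per active user) * (invite -> signup conversion rate)."""
    if active_users == 0 or invites_sent == 0:
        return 0.0
    i = invites_sent / active_users      # average invites sent per active user
    c = invite_signups / invites_sent    # conversion rate of those invites
    return i * c

# Hypothetical 30-day period: 12,000 invites from 8,000 active users, 2,400 signups.
k = viral_coefficient(invites_sent=12_000, active_users=8_000, invite_signups=2_400)
print(f"K = {k:.2f}")  # 1.5 * 0.20 = 0.30 -> not meaningfully viral on its own
# For K < 1, each externally acquired user eventually brings about 1 / (1 - K) total users.
```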
Cohort analysis
Group users by the time period they first performed a key action (signup, first purchase, etc.) and track their behavior over subsequent periods. Cohort analysis isolates the effect of product changes from the noise of a changing user mix.
Key cohort views:
- Retention curve: % of cohort active at Day N - flat curve = good retention
- Revenue cohort: cumulative LTV by cohort - improving means product is getting better
- Activation cohort: % that hit aha moment within Day 1, 3, 7
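A sketch of the retention-curve view, assuming a pandas DataFrame of signups (user_id, signup_date) and one of activity events (user_id, event_date); the column names are illustrative, and it uses the "active on or after Day N" definition of retention:

```python
import pandas as pd

def retention_curve(signups: pd.DataFrame, events: pd.DataFrame, days=(1, 7, 30)) -> pd.DataFrame:
    """% of each weekly signup cohort still active on or after Day N."""
    df = events.merge(signups, on="user_id")
    df["day_n"] = (df["event_date"] - df["signup_date"]).dt.days
    df["cohort"] = df["signup_date"].dt.to_period("W")
    cohort_sizes = (signups.assign(cohort=signups["signup_date"].dt.to_period("W"))
                           .groupby("cohort")["user_id"].nunique())
    curve = {}
    for n in days:
        active = df[df["day_n"] >= n].groupby("cohort")["user_id"].nunique()
        curve[f"day_{n}"] = (active / cohort_sizes).fillna(0).round(3)
    return pd.DataFrame(curve)
```

Rows are cohorts, columns are Day-N retention; a curve that flattens rather than sliding toward zero is the "good retention" shape described above.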
North star metric
A single metric that best captures the value your product delivers to users AND correlates with long-term business health. It aligns the entire company on what matters.
| Company | North Star Metric |
|---|---|
| Slack | Messages sent per active team |
| Airbnb | Nights booked |
| Spotify | Time spent listening |
| HubSpot | Weekly active teams using 5+ features |
A good north star is measurable, leads revenue, reflects user value, and is actionable by the team. See references/growth-frameworks.md for the selection template.
Common tasks
Design a growth loop
- Map the current user journey end-to-end
- Identify the "output" of one user's experience that could become an "input" for another user (shared content, invites, referrals, SEO-indexed pages)
- Name the loop type: viral, content, paid, sales-assisted, or product-embedded
- Define the loop's single conversion rate to optimize (e.g., invite acceptance rate)
- Instrument every step, establish a baseline, then run experiments on the weakest link
Example - viral loop for a doc tool: Create doc -> Share with external collaborator -> Collaborator views -> Prompted to sign up -> Signs up and creates their own doc -> Loop restarts
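A sketch of step 5 applied to the doc-tool loop above; the step names and counts are hypothetical:

```python
# Users reaching each step of the loop in one period (hypothetical counts).
loop_steps = [
    ("created_doc", 10_000),
    ("shared_externally", 3_200),
    ("collaborator_viewed", 2_600),
    ("collaborator_signed_up", 520),
    ("collaborator_created_doc", 310),
]

# Conversion between adjacent steps; the weakest link is the experiment target.
conversions = [(f"{a} -> {b}", nb / na)
               for (a, na), (b, nb) in zip(loop_steps, loop_steps[1:])]
for name, rate in conversions:
    print(f"{name}: {rate:.1%}")
weakest = min(conversions, key=lambda c: c[1])
print(f"Weakest link: {weakest[0]} at {weakest[1]:.1%}")
```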
Build a referral program
A referral program amplifies natural word-of-mouth with structured incentives.
Design checklist:
- Define the trigger: when is the user most likely to refer? (post-aha moment, post-purchase)
- Choose reward structure: double-sided (sender + receiver both win) outperforms one-sided
- Set reward type: cash, credits, upgrade, or social recognition
- Make sharing frictionless: pre-written message, one-click send, email + link options
- Confirm referral loop is closed: referred user's experience must deliver the same aha moment that motivated the invite
- Track: referral invite rate, referral conversion rate, K-factor, referred-user LTV vs organic LTV
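A sketch of the tracking metrics from the last checklist item, computed from hypothetical period totals:

```python
def referral_metrics(active_users, invites_sent, referred_signups, referred_ltv, organic_ltv):
    """Referral-program health for one period."""
    invite_rate = invites_sent / active_users            # referral invite rate
    invite_conversion = referred_signups / invites_sent  # referral conversion rate
    return {
        "invite_rate": round(invite_rate, 2),
        "invite_conversion": round(invite_conversion, 2),
        "k_factor": round(invite_rate * invite_conversion, 2),
        "referred_vs_organic_ltv": round(referred_ltv / organic_ltv, 2),
    }

print(referral_metrics(active_users=5_000, invites_sent=4_000,
                       referred_signups=900, referred_ltv=180.0, organic_ltv=150.0))
```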
Reward tiers by product type:
- B2C consumer app: credits or cash (Uber, Airbnb model)
- B2B SaaS: seat upgrades, feature unlocks, or billing credits
- Marketplace: transaction credits valid on next purchase
Optimize activation funnel
Activation is the bridge between acquisition and retention. A user is "activated" when they experience the core value of the product for the first time (the aha moment).
Optimization process:
- Define your aha moment concretely (e.g., "creates first project with one collaborator")
- Map every step from signup to aha moment
- Measure drop-off at each step
- Prioritize the step with the largest absolute drop-off (not percentage) - see the measurement sketch after this list
- Run A/B tests: reduce friction (fewer fields, social login), add guidance (tooltips, progress bars), or add incentives (template library, example data)
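A minimal sketch of steps 3-4 above (measure drop-off, pick the largest absolute leak); the funnel steps and counts are hypothetical:

```python
# Hypothetical signup-to-aha funnel counts for one weekly cohort.
funnel = [
    ("signed_up", 4_000),
    ("verified_email", 3_400),
    ("created_project", 1_900),
    ("invited_collaborator", 700),   # the aha moment
]

drops = [(f"{a} -> {b}", na - nb, 1 - nb / na)
         for (a, na), (b, nb) in zip(funnel, funnel[1:])]
for name, lost, pct in drops:
    print(f"{name}: lost {lost} users ({pct:.0%})")
biggest = max(drops, key=lambda d: d[1])
print(f"Fix first: {biggest[0]} (largest absolute drop-off)")
```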
Common activation levers:
- Reduce time-to-value: pre-populate sample data so users see value before entering their own
- Remove setup friction: defer configuration until after first value is delivered
- Personalize onboarding: route users to different paths based on role or use case
- Add social proof at friction points: show "2,000 teams set this up in 3 minutes"
Improve retention with cohort analysis
- Pull cohort retention curves segmented by: acquisition channel, onboarding path, company size, or feature adoption
- Identify which cohort has the flattest retention curve (best retention)
- Find the behavioral difference between high-retention and low-retention cohorts (which features did they use? how fast did they reach aha moment?)
- Build that behavior into the default onboarding path for all new users
- Re-run cohorts 4-8 weeks later to confirm improvement
Retention benchmarks by product type:
| Product | Good Day-30 Retention |
|---|---|
| Consumer social | 25-40% |
| B2B SaaS | 40-70% |
| E-commerce | 10-25% |
| Mobile game | 10-20% |
Run growth experiments (ICE framework)
Score each experiment on three dimensions (1-10 each):
- Impact: How much will this move the target metric if it works?
- Confidence: How sure are you it will work, based on data or analogues?
- Ease: How fast and cheap is it to run this experiment?
ICE Score = (Impact + Confidence + Ease) / 3
Run the highest-scoring experiments first. Document hypothesis, metric, baseline,
result, and learning for every experiment regardless of outcome. See
references/growth-frameworks.md for the full ICE scoring template.
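A minimal sketch of ICE stack-ranking a backlog; the experiments and scores are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    name: str
    impact: int      # 1-10: expected movement of the target metric
    confidence: int  # 1-10: strength of supporting evidence
    ease: int        # 1-10: speed and cost of running it

    @property
    def ice(self) -> float:
        return (self.impact + self.confidence + self.ease) / 3

backlog = [
    Experiment("Pre-populate sample data at signup", impact=7, confidence=6, ease=8),
    Experiment("Double-sided referral reward", impact=8, confidence=5, ease=4),
    Experiment("Add social login", impact=4, confidence=7, ease=9),
]
for exp in sorted(backlog, key=lambda e: e.ice, reverse=True):
    print(f"{exp.ice:.1f}  {exp.name}")
```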
Design onboarding for the aha moment
The job of onboarding is to get users to the aha moment as fast as possible.
Onboarding design principles:
- Delay account setup (email verification, profile completion) until after first value
- Use empty state screens to show what the product looks like when it's working, not a blank canvas
- Guide the user through exactly one action that delivers immediate value
- End the first session with a "save your progress" hook that creates a reason to return
Aha moment discovery process:
- Pull data on users who churned in week 1 vs users who retained to week 4
- Find the feature/action that correlates most strongly with retention - see the ranking sketch after this list
- Find the time-to-that-action for retained users (e.g., "within 3 days")
- Make that action the explicit goal of onboarding
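A sketch of the ranking in steps 2-3, assuming a pandas DataFrame with one row per user, boolean columns for week-1 actions, and a retained_week4 flag; it uses a simple retention-rate lift rather than a formal correlation, and the column names are illustrative:

```python
import pandas as pd

def rank_aha_candidates(users: pd.DataFrame, action_cols: list[str]) -> pd.Series:
    """Retention lift of users who did each week-1 action vs those who didn't."""
    lifts = {}
    for col in action_cols:
        did = users.loc[users[col], "retained_week4"].mean()
        did_not = users.loc[~users[col], "retained_week4"].mean()
        lifts[col] = did - did_not
    return pd.Series(lifts).sort_values(ascending=False)

# Usage (hypothetical columns):
# rank_aha_candidates(users, ["created_project", "invited_teammate", "uploaded_file"])
```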
Implement product-led growth (PLG)
PLG makes the product itself the primary driver of acquisition, activation, and expansion.
PLG motion types:
- Freemium: Free tier acquires users; paid tier converts power users
- Free trial: Full access for a limited time; urgency converts
- Usage-based: Pay as you grow; low friction entry, aligned incentives
PLG implementation checklist:
- Identify the natural sharing or collaboration moments in the product
- Build a free tier that delivers genuine value (not a crippled demo)
- Define upgrade triggers: usage limits, collaboration features, or admin controls
- Instrument product qualified leads (PQLs): users showing intent signals (hitting limits, inviting many teammates, high usage frequency) - a scoring sketch follows this checklist
- Build sales-assist motion that surfaces PQLs to the sales team in real time
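A sketch of the PQL item above; the signals and thresholds are hypothetical and should be derived from your own conversion data:

```python
def is_pql(account: dict) -> bool:
    """Flag accounts showing upgrade intent so sales can follow up."""
    signals = [
        account["usage_pct_of_free_limit"] >= 0.8,   # approaching a usage ceiling
        account["teammates_invited"] >= 3,           # collaboration pull
        account["active_days_last_14"] >= 8,         # high usage frequency
    ]
    return sum(signals) >= 2  # any two signals together qualify the account

account = {"usage_pct_of_free_limit": 0.9, "teammates_invited": 4, "active_days_last_14": 6}
print(is_pql(account))  # True
```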
Anti-patterns
| Anti-pattern | Why it fails | What to do instead |
|---|---|---|
| Optimizing acquisition before fixing retention | You fill a leaky bucket - CAC rises, LTV falls | Achieve 30% Day-30 retention before scaling acquisition spend |
| Vanity metric focus | Total signups, downloads, or followers don't predict revenue or retention | Pick a north star metric that reflects active value delivery |
| Running too many experiments at once | Interactions between experiments contaminate results | Run one experiment per user surface at a time; isolate variables |
| Copying competitor tactics without understanding context | A tactic that works for Dropbox at scale fails for a 500-user startup | Understand why a tactic works before adopting it; validate with your own data |
| Dark patterns for short-term conversion | Fake urgency, hidden unsubscribe, forced virality - all damage trust and LTV | Every growth mechanic should deliver value to the user, not just extract it |
| Skipping cohort segmentation | Aggregate retention curves hide the signal in the noise | Always segment cohorts by acquisition source, onboarding path, and key feature adoption |
Gotchas
Optimizing activation before you understand what the aha moment actually is - Teams often build onboarding flows toward the wrong milestone. "Completed profile" or "uploaded first file" feels like activation, but if it doesn't correlate with Day-30 retention, you've optimized the wrong funnel step. Always validate the aha moment against retention cohort data before optimizing toward it.
Viral K-factor calculations ignore invite fatigue cycles - K-factor measured in week 1 post-launch will overestimate steady-state virality because early adopters are your most enthusiastic inviters. Measure K-factor across 90-day cohorts, not just the launch burst, to get a realistic picture of your viral loop's durability.
A/B test contamination from multiple simultaneous experiments - Running two experiments on the same user surface at the same time (e.g., two onboarding copy tests) means users may see combinations of variants, making it impossible to attribute results to a single change. One experiment per user surface, enforce isolation in your experimentation platform.
Referral programs that reward too early produce fraudulent referrals - Triggering referral rewards at signup (rather than at activation or first payment) creates an arbitrage opportunity where users refer fake accounts for the reward. Tie rewards to the same activation milestone that predicts real retention.
Freemium free tier that's too good prevents upgrades - If the free tier covers all core use cases, users have no natural reason to upgrade. The free tier must deliver genuine value at a scope that naturally hits a ceiling for power users - time, seats, usage volume, or collaboration features are common upgrade triggers. Define this ceiling before launching freemium, not after watching conversion rates disappoint.
References
For detailed templates and frameworks, load the relevant file from references/:
references/growth-frameworks.md - AARRR diagnostic template, ICE scoring sheet, north star selection guide, growth loop templates, viral coefficient calculator
Only load a references file if the current task requires deep detail on that topic.
References
growth-frameworks.md
Growth Frameworks Reference
Deep-dive templates and calculators for the core frameworks referenced in the growth-hacking skill. Load this file when a task requires detailed framework application rather than conceptual understanding.
AARRR Diagnostic Template
Use this template to systematically audit which stage of the funnel is broken before recommending a growth lever.
Step 1 - Instrument the funnel
Map every measurable event to its AARRR stage:
| Stage | Key events to track | Benchmark to beat |
|---|---|---|
| Acquisition | Visits, signups, CAC by channel | CAC < LTV / 3 |
| Activation | % completing onboarding, time-to-aha | >30% reach aha in 24h |
| Retention | Day-1 / Day-7 / Day-30 active users | D30 >25% consumer, >40% B2B |
| Referral | Invites sent per user, K-factor | K > 0.3 meaningful, K > 1 viral |
| Revenue | MRR, LTV, LTV:CAC ratio | LTV:CAC > 3:1 |
Step 2 - Identify the leakiest stage
Calculate the conversion rate between each adjacent stage:
Acquisition -> Activation conversion = activated_users / new_signups
Activation -> Retention conversion = retained_day7_users / activated_users
Retention -> Referral conversion = users_who_invited / retained_users
Retention -> Revenue conversion = paying_users / activated_users
The stage with the lowest conversion rate OR the largest absolute user drop is the bottleneck. Fix that stage first.
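A sketch of Step 2 on the linear acquisition-to-revenue path (referral is tracked separately); the stage counts are hypothetical:

```python
# Hypothetical period counts at each stage.
stages = [
    ("acquisition", 20_000),  # new signups
    ("activation", 6_000),    # reached the aha moment
    ("retention", 2_400),     # active at Day 7
    ("revenue", 360),         # paying users
]

for (a, na), (b, nb) in zip(stages, stages[1:]):
    print(f"{a} -> {b}: {nb / na:.1%} conversion, {na - nb:,} users lost")
# Lowest conversion here is retention -> revenue (15%); largest absolute drop is
# acquisition -> activation (14,000 users). Pick the bottleneck accordingly.
```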
Step 3 - Diagnose the bottleneck
If Activation is broken:
- Time-to-value is too long (reduce setup steps, add sample data)
- Aha moment is unclear to the user (add inline guidance, progress indicators)
- Onboarding asks for too much before delivering value (defer configuration)
If Retention is broken:
- Aha moment is not repeated in subsequent sessions (identify habit triggers)
- Product does not have a natural usage cadence (add notifications, streaks, digests)
- Users don't understand the full value of the product (feature discovery campaigns)
If Referral is broken:
- No sharing mechanics built into the product workflow
- Reward is not motivating enough or too hard to redeem
- Referral prompt fires at the wrong moment (too early, before aha)
If Revenue is broken:
- Upgrade trigger is unclear or not tied to the aha moment
- Pricing is misaligned with the user's value realization
- Free tier is too generous (no compelling reason to upgrade)
ICE Scoring Template
Use ICE to stack-rank a backlog of growth experiments before each sprint.
Scoring rubric
Impact (1-10): Estimated effect on the north star metric if the experiment succeeds.
| Score | Meaning |
|---|---|
| 9-10 | Could move the metric by >20% |
| 6-8 | Could move the metric by 5-20% |
| 3-5 | Could move the metric by 1-5% |
| 1-2 | Marginal effect (<1%) |
Confidence (1-10): How certain are you the experiment will produce a positive result?
| Score | Meaning |
|---|---|
| 9-10 | Strong data or validated analogues from similar products |
| 6-8 | Qualitative user research supports the hypothesis |
| 3-5 | Educated guess, no direct evidence |
| 1-2 | Gut feel only |
Ease (1-10): How quickly and cheaply can you run this experiment?
| Score | Meaning |
|---|---|
| 9-10 | Copy/config change, ship in < 1 day |
| 6-8 | Small engineering task, ship in 1-3 days |
| 3-5 | Medium feature work, 1-2 week sprint |
| 1-2 | Major engineering effort, > 2 weeks |
ICE scoring sheet
Experiment: _______________________________________________
Hypothesis: If we [change], then [metric] will [increase/decrease]
because [reason], as evidenced by [data or analogue].
Impact score: ___ / 10 Reason: ___________________
Confidence score: ___ / 10 Reason: ___________________
Ease score: ___ / 10 Reason: ___________________
ICE Score = (Impact + Confidence + Ease) / 3 = ___
Primary metric: ___________________
Secondary metric: ___________________
Baseline: ___________________
Minimum detectable effect: ___ %
Required sample size: ___
Experiment duration: ___ days
Post-experiment log
Result: [ ] Win [ ] Loss [ ] Inconclusive
Outcome: Primary metric moved from ___ to ___
Learning: ___________________________________________________
Next step: [ ] Ship it [ ] Iterate [ ] Kill it
North Star Metric Selection Guide
Criteria for a good north star metric
A north star metric must satisfy all five criteria:
- Leads revenue - Correlates strongly with long-term MRR or LTV (not a vanity metric)
- Reflects user value - Goes up when users get more value from the product
- Measurable - Can be instrumented precisely and tracked in real time
- Actionable - The team can run experiments that directly move it
- Downstream of acquisition - Not just "signups" (acquisition is an input to the north star, not the north star itself)
Selection process
Step 1 - List candidate metrics. Brainstorm 5-10 metrics that could reflect your product's core value delivery. Examples: messages sent, files created, reports generated, tasks completed, bookings made.
Step 2 - Run the "would you trade" test. For each candidate, ask: "Would you trade 10% more signups for 5% more [metric]?" If yes, the metric reflects deeper value than acquisition.
Step 3 - Run the retention correlation test. Pull cohorts of users who did vs did not hit the candidate metric in week 1. If the cohort that hit the metric shows materially better Day-30 retention, it is a strong north star candidate.
Step 4 - Decompose into inputs. A good north star decomposes into 3-5 input metrics the team can own. If you can't decompose it, it is too abstract.
North Star: [Weekly active teams using core feature]
    |
    +-- Input 1: New teams completing onboarding (Activation team)
    +-- Input 2: Existing teams returning weekly (Retention team)
    +-- Input 3: Teams expanding feature usage depth (Expansion team)
Common north star metrics by product type
| Product type | Example north star |
|---|---|
| Collaboration SaaS | Active collaborators per workspace per week |
| Marketplace | Successful transactions per week |
| Consumer social | Daily active users who posted or commented |
| Developer tool | Projects with > 1 deploy per week |
| E-learning | Lessons completed per active learner per week |
| Healthcare | Appointments booked and attended per month |
Growth Loop Templates
Template 1 - Viral invitation loop
[Existing user] --invites--> [New prospect]
       ^                          |
       |                       signs up
       |                          |
       +---- creates value <-- reaches aha moment
Key metric to optimize: Invite acceptance rate
Lever: Reward structure, invite copy, landing page relevance
Template 2 - Content / SEO loop
[User creates content] --> [Content indexed by search]
          ^                            |
          |                   [New user discovers]
          |                            |
  creates more content <-- signs up and engages
Key metric to optimize: Content creation rate per active user
Lever: Make content creation a natural output of using the product (not an extra step)
Template 3 - Paid acquisition loop
[Revenue] --> [Ad spend] --> [New user acquisition]
    ^                                  |
    |                              activation
    |                                  |
    +----------- LTV expansion <-- retention
Key metric to optimize: LTV:CAC ratio (must stay > 3:1 for the loop to be healthy)
Lever: Improve LTV via retention and expansion; reduce CAC via conversion rate optimization
Template 4 - Product-embedded sharing loop
[User produces artifact] --> [Artifact shared externally]
          ^                              |
          |                   [External viewer sees it]
          |                              |
    produces more <-- signs up to create their own
Examples: Loom video links, Figma share links, Notion pages, Typeform results
Key metric to optimize: Share rate (% of users who share an artifact externally)
Lever: Make sharing a natural part of the workflow; add "powered by" attribution to shared artifacts so viewers know where it came from
Viral Coefficient Calculator
K = i * c
Where:
i = average number of invites sent per active user per period
c = conversion rate of those invites to new signups (0.0 to 1.0)
Example:
i = 3 invites/user
c = 0.25 (25% of invitees sign up)
K = 3 * 0.25 = 0.75
Interpretation:
K >= 1.0 -> Viral growth: product grows without any external acquisition
K = 0.5 -> Strong WOM: every 2 users bring 1 more; good supplement to paid
K = 0.1 -> Weak WOM: negligible organic amplification
Improving K-factor
To raise K without degrading user experience:
Increase invite intent (raise i): Move the referral prompt to post-aha moment, not post-signup. Users who have experienced value are 3-5x more likely to share.
Increase invite conversion (raise c): Personalize the invite (sent from a real person's name), make the landing page reflect the context of the invite, offer a double-sided reward.
Reduce invite friction: One-click share, pre-written message, multiple channels (email, Slack, link copy). Each additional step in the sharing flow reduces i.