product-strategy
Use this skill when defining product vision, building roadmaps, prioritizing features, or choosing frameworks like RICE, ICE, or MoSCoW. Triggers on product vision, roadmapping, prioritization, RICE scoring, product strategy, feature prioritization, OKRs for product, and any task requiring product direction or planning decisions.
product-strategy is a production-ready AI agent skill for claude-code, gemini-cli, and openai-codex. It helps with defining product vision, building roadmaps, prioritizing features, and choosing frameworks like RICE, ICE, or MoSCoW.
Quick Facts
| Field | Value |
|---|---|
| Category | product |
| Version | 0.1.0 |
| Platforms | claude-code, gemini-cli, openai-codex |
| License | MIT |
How to Install
- Make sure you have Node.js installed on your machine.
- Run the following command in your terminal:
npx skills add AbsolutelySkilled/AbsolutelySkilled --skill product-strategy
- The product-strategy skill is now available in your AI coding agent (Claude Code, Gemini CLI, OpenAI Codex, etc.).
Overview
A practical framework for defining product vision, building roadmaps, and making prioritization decisions that compound over time. Product strategy is the connective tissue between a company's business goals and the day-to-day work of a product team. Without it, teams ship features that users don't value, roadmaps become wish lists, and product-market fit erodes without anyone noticing. This skill covers the full strategy lifecycle: crafting a vision, building outcome-based roadmaps, scoring and sequencing work with prioritization frameworks, setting OKRs, and communicating direction to stakeholders. Agents can use this to draft strategy documents, evaluate feature trade-offs, run scoring sessions, and structure planning artifacts.
Tags
product-strategy roadmap prioritization rice vision planning
Platforms
- claude-code
- gemini-cli
- openai-codex
Frequently Asked Questions
What is product-strategy?
Use this skill when defining product vision, building roadmaps, prioritizing features, or choosing frameworks like RICE, ICE, or MoSCoW. Triggers on product vision, roadmapping, prioritization, RICE scoring, product strategy, feature prioritization, OKRs for product, and any task requiring product direction or planning decisions.
How do I install product-strategy?
Run npx skills add AbsolutelySkilled/AbsolutelySkilled --skill product-strategy in your terminal. The skill will be immediately available in your AI coding agent.
What AI agents support product-strategy?
This skill works with claude-code, gemini-cli, openai-codex. Install it once and use it across any supported AI coding agent.
Maintainers
Generated from AbsolutelySkilled
SKILL.md
Product Strategy
A practical framework for defining product vision, building roadmaps, and making prioritization decisions that compound over time. Product strategy is the connective tissue between a company's business goals and the day-to-day work of a product team. Without it, teams ship features that users don't value, roadmaps become wish lists, and product-market fit erodes without anyone noticing. This skill covers the full strategy lifecycle: crafting a vision, building outcome-based roadmaps, scoring and sequencing work with prioritization frameworks, setting OKRs, and communicating direction to stakeholders. Agents can use this to draft strategy documents, evaluate feature trade-offs, run scoring sessions, and structure planning artifacts.
When to use this skill
Trigger this skill when the user:
- Needs to write or refine a product vision or strategy document
- Wants to build or restructure a product roadmap
- Is deciding which features to build next or how to sequence work
- Asks about RICE, ICE, MoSCoW, or Kano scoring
- Needs to write product OKRs or connect features to business outcomes
- Is preparing a roadmap presentation for stakeholders or executives
- Wants to make a build vs. buy vs. partner decision
- Is evaluating product-market fit signals or pivoting direction
- Asks about north star metrics or success metrics for a product area
Do NOT trigger this skill for:
- Sprint planning mechanics or ticket estimation (use an agile-scrum skill)
- Pricing model decisions (use a pricing-strategy skill)
Key principles
Strategy is about saying no - A roadmap that includes every request is not a strategy, it is a backlog. Every "yes" to one initiative is implicitly "no" to five others. The clearest signal of a weak strategy is an inability to decline requests from stakeholders with conviction and data.
Outcome-based roadmaps, not feature lists - Roadmaps organized by features (build search, add dark mode, create reports) measure output. Roadmaps organized by outcomes (reduce time-to-value by 40%, increase weekly active usage, improve onboarding completion) measure impact. Ship outcomes; features are the means, not the end.
Prioritize ruthlessly - Most teams have 10x more ideas than capacity. The job of a product leader is not to find ways to do everything - it is to find the 20% of work that delivers 80% of the impact and protect the team's focus on it.
Validate before building - The most expensive mistake in product is building something nobody wants. Every assumption in a roadmap should have a cheapest possible test: a landing page, a prototype, a sales call, a survey. Build only after validation reduces uncertainty to an acceptable level.
Align product to business goals - Product teams that operate in isolation from business metrics (revenue, retention, activation) eventually lose organizational trust and budget. Every major initiative should trace directly to a business outcome the company cares about. If you cannot draw the line, reconsider the initiative.
Core concepts
Vision / Strategy / Roadmap hierarchy
- Vision is the 3-5 year aspirational destination: "What does the world look like if we succeed?" It is qualitative, inspiring, and stable. It changes rarely.
- Strategy is the 12-18 month plan for how you get there: which customer segments, which problems, which bets. It is directional, not exhaustive.
- Roadmap is the quarterly execution plan: which outcomes to drive, which initiatives to fund, what ships when. It is concrete and frequently updated.
A common mistake is writing a roadmap without a strategy, or a strategy without a vision. The hierarchy must exist for prioritization decisions to be defensible.
Prioritization frameworks
RICE (Reach, Impact, Confidence, Effort) and ICE (Impact, Confidence, Ease) are
quantitative scoring models that convert gut-feel debates into structured comparisons.
MoSCoW (Must Have, Should Have, Could Have, Won't Have) is a categorization system
used most often for release scoping. Kano maps features to customer satisfaction curves
to distinguish must-haves from delighters. See references/prioritization-frameworks.md
for detailed scoring guides, formulas, and examples.
Product-market fit signals
Strong PMF is characterized by: >40% of users saying they would be "very disappointed" if the product disappeared (Sean Ellis test), high organic/word-of-mouth growth, strong retention curves that flatten rather than decay to zero, and sales cycles that shorten as you refine the pitch. Weak PMF shows as: feature-request-driven roadmaps, high churn despite onboarding improvements, and a sales team that cannot articulate who the product is for.
North star metric
A single metric that best captures the core value your product delivers to customers. It must be a leading indicator of long-term revenue (not revenue itself), it must be actionable by the product team, and it must be understandable by everyone in the company. Examples: Slack (messages sent per user per day), Airbnb (nights booked), Spotify (time spent listening). Choose one. Two north stars create two competing roadmaps.
Common tasks
Write a product vision statement
A strong vision answers: who are we serving, what problem do we solve, and what does the world look like when we win?
Template:
For [target customer], who [has this problem or need],
[Product name] is a [product category] that [key benefit / why it's valuable].
Unlike [primary alternative], our product [key differentiator].
Extended narrative vision (for internal strategy docs):
## Our Vision
In [timeframe], [company/product] will be [aspirational description of the future state].
[Target customers] will [be able to do / experience] [specific outcome] that was
previously impossible or painful.
We will know we have succeeded when [measurable signal]:
- [Signal 1]
- [Signal 2]
- [Signal 3]
Good vision statements are short (fits on one slide), memorable (team can recite it), and opinionated (excludes some customers and use cases intentionally).
Build an outcome-based roadmap
Step 1 - Identify themes from strategy
Map each strategic bet to a roadmap theme. A theme is a broad problem area, not a feature. Examples: "Reduce time-to-first-value," "Improve team collaboration," "Unlock enterprise segment."
Step 2 - Define outcomes per theme
For each theme, write one measurable outcome: the metric that would move if this theme is executed well. Outcome = metric + direction + magnitude + timeframe.
Example: "Increase 7-day activation rate from 42% to 60% by Q3."
Roadmap template:
| Theme | Outcome target | Key initiatives | Quarter | Status |
|---|---|---|---|---|
| Activation | 7-day activation 42% -> 60% | Onboarding redesign, empty state improvements | Q2 | In progress |
| Collaboration | Teams with 3+ active members +30% | Shared workspaces, @ mentions | Q3 | Planned |
| Enterprise | 10 enterprise logos signed | SSO, audit logs, admin dashboard | Q3-Q4 | Discovery |
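The outcome shape from Step 2 (metric + direction + magnitude + timeframe) can be modeled as a small data structure so roadmap rows stay consistent. This is a minimal sketch; the `Outcome` class and its field names are illustrative, not part of any library:

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """Outcome = metric + direction + magnitude + timeframe."""
    metric: str       # e.g. "7-day activation rate"
    baseline: float   # current value (percent)
    target: float     # desired value (percent)
    timeframe: str    # e.g. "Q3"

    def describe(self) -> str:
        direction = "Increase" if self.target > self.baseline else "Decrease"
        return (f"{direction} {self.metric} from {self.baseline:g}% "
                f"to {self.target:g}% by {self.timeframe}")

activation = Outcome("7-day activation rate", 42, 60, "Q3")
print(activation.describe())
# Increase 7-day activation rate from 42% to 60% by Q3
```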
Step 3 - Sequence by dependency and impact
Order themes by: does this unblock something else? If yes, pull it earlier. Then order remaining themes by expected impact on the north star metric.
Prioritize with RICE, ICE, or MoSCoW
Use RICE for quarterly planning with multiple competing initiatives. Use ICE for rapid triage of a long backlog. Use MoSCoW for scoping a specific release.
RICE scoring:
Score = (Reach x Impact x Confidence) / Effort
- Reach: how many users affected per quarter (number)
- Impact: 3 = massive, 2 = high, 1 = medium, 0.5 = low, 0.25 = minimal
- Confidence: 100% = high, 80% = medium, 50% = low
- Effort: person-months required
ICE scoring (faster, less precise):
Score = Impact x Confidence x Ease (each 1-10)
MoSCoW categorization:
- Must Have - release fails without this (legal, core function, blocking user flow)
- Should Have - important but not blocking; include if capacity allows
- Could Have - nice to have; cut first when scope is tight
- Won't Have - explicitly out of scope this cycle (park, do not delete)
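The RICE and ICE formulas above are simple enough to encode directly. A sketch of both scorers (the function names and input validation are illustrative, not a standard API):

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE = (Reach x Impact x Confidence) / Effort.

    reach: users affected per quarter; impact: 0.25-3 scale;
    confidence: 0.5, 0.8, or 1.0; effort: person-months.
    """
    if effort <= 0:
        raise ValueError("effort must be positive (person-months)")
    return (reach * impact * confidence) / effort

def ice_score(impact: int, confidence: int, ease: int) -> int:
    """ICE = Impact x Confidence x Ease, each scored 1-10."""
    for name, value in (("impact", impact), ("confidence", confidence), ("ease", ease)):
        if not 1 <= value <= 10:
            raise ValueError(f"{name} must be in 1..10, got {value}")
    return impact * confidence * ease

print(rice_score(reach=1200, impact=2, confidence=0.8, effort=3))  # 640.0
print(ice_score(impact=7, confidence=8, ease=9))                   # 504
```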
For detailed scoring examples and comparison tables, see
references/prioritization-frameworks.md.
Set product OKRs
Product OKRs translate strategy into measurable quarterly commitments.
Structure:
Objective: [Qualitative, inspiring, directional - no metric]
KR1: [Metric] from [baseline] to [target] by [date]
KR2: [Metric] from [baseline] to [target] by [date]
KR3: [Metric] from [baseline] to [target] by [date]
Rules for strong KRs:
- Measure outcomes, not outputs ("activation rate increases to 60%" not "ship new onboarding flow")
- Baseline must be known before the quarter starts - never set a KR on a metric you do not currently measure
- 70% attainment is success; 100% means the target was too conservative
- Max 3 KRs per objective; max 3 objectives per team per quarter
Common anti-pattern: Writing KRs as task lists ("Launch feature X," "Complete Y project"). These are milestones, not results. Rewrite as: "Feature X drives metric M to level N."
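The milestone-vs-result check above can be automated as a rough lint. This is a heuristic sketch, not a complete grammar check; `lint_kr`, the verb list, and the regex are all hypothetical:

```python
import re

# Heuristic: a strong KR reads "[metric] from [baseline] to [target] by [date]".
KR_PATTERN = re.compile(r"from\s+\S+\s+to\s+\S+\s+by\s+\S+", re.IGNORECASE)
OUTPUT_VERBS = ("ship", "launch", "complete", "build", "release")

def lint_kr(kr: str) -> list[str]:
    """Return a list of warnings for a key result string."""
    warnings = []
    if any(kr.lower().startswith(verb) for verb in OUTPUT_VERBS):
        warnings.append("reads like a deliverable, not a metric movement")
    if not KR_PATTERN.search(kr):
        warnings.append("missing 'from [baseline] to [target] by [date]' shape")
    return warnings

print(lint_kr("Ship the new onboarding flow"))                    # two warnings
print(lint_kr("Activation rate from 42% to 60% by June 30"))      # []
```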
Create a product strategy document
A one-page product strategy document that stakeholders can read in 5 minutes:
## Product Strategy - [Year / Half]
### Context
[1-2 sentences: company stage, market conditions, and what has changed since last cycle]
### Where we play
[Target customer segments and use cases we are optimizing for this period]
### Where we do not play
[Explicit exclusions - segments, use cases, or problems out of scope]
### Strategic bets
1. [Bet 1]: [Hypothesis] - if we do X, we expect Y outcome because Z
2. [Bet 2]: [Hypothesis]
3. [Bet 3]: [Hypothesis]
### Key metrics
- North star: [metric and current baseline]
- Supporting metrics: [2-3 metrics that feed the north star]
### Risks and assumptions
- [Assumption 1] - we will validate by [date] using [method]
- [Assumption 2] - we will validate by [date] using [method]
Make build vs. buy decisions
When evaluating whether to build, buy, or partner for a capability:
| Criterion | Build | Buy | Partner |
|---|---|---|---|
| Core differentiator? | Yes | No | No |
| Time to market critical? | No | Yes | Yes |
| Internal expertise exists? | Yes | No | Available externally |
| Long-term maintenance cost | High | Vendor dependent | Shared |
| Customization required? | Full control | Limited | Negotiable |
Decision heuristic:
- If the capability is a core differentiator AND you have the expertise: build
- If the capability is commodity AND a mature solution exists: buy
- If speed matters more than control AND a capable partner exists: partner
- Never build what the market commoditizes; never buy what creates lock-in on your core differentiator
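The heuristic above is ordered, which makes it easy to express as a decision function. A sketch under the stated assumptions; the function name and parameters are illustrative:

```python
def build_buy_partner(core_differentiator: bool, have_expertise: bool,
                      mature_vendor_exists: bool, speed_over_control: bool) -> str:
    """Apply the build/buy/partner heuristic, checking rules in order."""
    if core_differentiator and have_expertise:
        return "build"
    if not core_differentiator and mature_vendor_exists:
        return "buy"
    if speed_over_control:
        return "partner"
    return "revisit: no clear answer, gather more information"

print(build_buy_partner(True, True, False, False))   # build
print(build_buy_partner(False, False, True, True))   # buy
print(build_buy_partner(True, False, False, True))   # partner
```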
Communicate roadmap to stakeholders
Different audiences require different roadmap formats.
For executives: One-page view. Themes and outcomes only. No feature names. Answers: "What business problems are we solving and when will we see results?"
For engineering and design: Outcome-first with supporting initiatives. Includes known dependencies, risks, and confidence level. Answers: "What are we building and why does it matter?"
For customers: Public roadmap with near-term themes only. Commitments, not dates. Avoid feature-level specifics that constrain design. Answers: "Is this team moving in a direction I trust?"
For sales and customer success: Near-term deliverables with anticipated dates. Include enterprise-specific items. Answers: "What can I promise to prospects this quarter?"
Anti-patterns
| Anti-pattern | Why it fails | What to do instead |
|---|---|---|
| Roadmap as a feature wish list | Treats output as success; teams ship but metrics do not move | Reframe each initiative as an outcome with a target metric |
| Prioritizing by loudest stakeholder | Recency and seniority bias override user impact and data | Score every request with RICE or ICE before any commitment |
| Annual roadmap with no updates | Markets change; a frozen roadmap becomes fiction by Q3 | Review and reforecast roadmap quarterly; update stakeholders |
| Skipping discovery to ship faster | Builds the wrong thing faster; sunk cost forces bad decisions | Run a 1-2 week discovery sprint before committing to any major initiative |
| Copying competitor features | Optimizes for the competitor's strategy, not your users | Start with your own user research; competitor features are signals, not specs |
| Treating OKRs as a task list | Measures effort, not impact; creates busywork culture | Write KRs as metric movements, not deliverables; review weekly |
Gotchas
RICE scores treated as absolute truth - RICE produces a number, but the inputs (Reach, Confidence, Effort) are estimates. Teams often stop debating once the spreadsheet exists. Treat RICE as a structured conversation starter, not a decision oracle - challenge the inputs, not just the outputs.
Vision written for the all-hands, not the team - Inspirational vision statements that sound good in a company meeting often give zero guidance on what to build. A vision that can't help a PM decline a feature request has failed its job.
OKRs that are actually task lists - The most common mistake is writing KRs as deliverables ("ship search feature") rather than metric movements. When asked to write OKRs, explicitly check every KR: can you achieve it without the metric moving? If yes, rewrite it.
Roadmap shared at the feature level with executives - Executives reading feature-level roadmaps immediately start adding items. Share outcomes-only views with execs; reserve feature-level detail for engineering and design.
"Won't Have" items deleted instead of parked - MoSCoW's Won't Have bucket is a deliberate parking lot. Deleting items means they reappear as new requests next quarter. Always keep the Won't Have list visible and reference it when similar requests arrive.
References
For detailed scoring guides, formulas, and worked examples:
- references/prioritization-frameworks.md - RICE, ICE, MoSCoW, and Kano with step-by-step examples, comparison tables, and guidance on when to use each
Only load the references file when the task requires scoring or framework selection - it is detailed and will consume context.
References
prioritization-frameworks.md
Prioritization Frameworks
Detailed guides for RICE, ICE, MoSCoW, and Kano - the four frameworks that cover 95% of product prioritization situations. Each section includes the formula or method, a worked example, and guidance on when to use it.
RICE Scoring
RICE is a quantitative scoring model designed at Intercom to reduce prioritization debates by converting estimates into a comparable score.
Formula:
RICE Score = (Reach x Impact x Confidence) / Effort
Component definitions
Reach - How many users will this initiative affect in a given time period (typically one quarter)? Use real data: DAU, MAU, or the number of users who touch the relevant flow. Be specific about the denominator.
- Count users (or transactions, sessions) per quarter
- Example: 1,200 users per quarter hit the onboarding step this affects
Impact - How much will this move the needle for each user who is reached? Use a fixed scale to maintain comparability across items.
| Score | Meaning |
|---|---|
| 3 | Massive - drives significant metric movement |
| 2 | High - clear measurable improvement |
| 1 | Medium - noticeable improvement |
| 0.5 | Low - small improvement |
| 0.25 | Minimal - marginal at best |
Confidence - How confident are you in your Reach and Impact estimates? If you have user research and data to back the estimates, confidence is high. If the estimate is a gut feeling, it is low.
| Score | Meaning |
|---|---|
| 100% | High - data or strong research supports the estimate |
| 80% | Medium - some evidence, some assumption |
| 50% | Low - mostly assumption, little validation |
Effort - Total person-months required from all roles (product, design, engineering, QA). Use half-months if needed. Effort does not use a scale - it is a direct count.
- Example: 1 designer x 0.5 months + 2 engineers x 1 month = 2.5 person-months
Worked example
Three initiatives competing for Q3:
| Initiative | Reach | Impact | Confidence | Effort | RICE Score |
|---|---|---|---|---|---|
| Onboarding redesign | 1,200 | 2 | 80% | 3 | (1200 x 2 x 0.8) / 3 = 640 |
| Bulk CSV import | 500 | 3 | 80% | 2 | (500 x 3 x 0.8) / 2 = 600 |
| Dark mode | 2,000 | 0.5 | 100% | 1.5 | (2000 x 0.5 x 1.0) / 1.5 = 667 |
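The table above can be reproduced and rank-ordered with a few lines of Python (the initiative tuples mirror the table rows; the structure is illustrative):

```python
# (name, reach per quarter, impact 0.25-3, confidence 0-1, effort in person-months)
initiatives = [
    ("Onboarding redesign", 1200, 2, 0.8, 3),
    ("Bulk CSV import", 500, 3, 0.8, 2),
    ("Dark mode", 2000, 0.5, 1.0, 1.5),
]

ranked = sorted(
    ((name, reach * impact * conf / effort)
     for name, reach, impact, conf, effort in initiatives),
    key=lambda pair: pair[1], reverse=True,
)
for name, score in ranked:
    print(f"{name}: {round(score)}")
# Dark mode: 667
# Onboarding redesign: 640
# Bulk CSV import: 600
```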
Despite affecting fewer users, dark mode scores highest because the effort is low. However, RICE scores are inputs to a decision - not the decision itself. If dark mode does not contribute to the north star metric and onboarding does, deprioritize dark mode regardless of score.
When to use RICE
- Quarterly planning sessions with 5-15 competing initiatives
- When stakeholder opinions are competing and you need a neutral framework
- When you have enough data to make estimates meaningful (not for early discovery)
- When team size is large enough that informal prioritization breaks down
RICE pitfalls
- Gaming the scores - teams inflate Reach and Impact to get their favorite feature prioritized. Require that estimates cite a source (analytics pull, user research note).
- Ignoring strategic fit - a high RICE score does not mean the initiative aligns with the current strategy. Always filter by "does this serve this quarter's theme?" before scoring.
- Treating estimates as facts - RICE creates false precision. Use it to rank order, not to predict exact impact.
ICE Scoring
ICE is a faster, lighter-weight scoring model created by Sean Ellis (of GrowthHackers). It trades RICE's precision for speed. Use it for rapid triage of large backlogs.
Formula:
ICE Score = Impact x Confidence x Ease
Each dimension is scored 1-10.
Component definitions
Impact (1-10) - How much will this move the target metric if it works? 10 = game changing, 1 = negligible.
Confidence (1-10) - How confident are you that this will work as expected? 10 = proven by data or prior experiments, 1 = pure hypothesis.
Ease (1-10) - How easy is this to implement? 10 = can be done in a day, 1 = requires months of engineering effort. (Note: this is the inverse of Effort in RICE.)
Worked example
| Initiative | Impact | Confidence | Ease | ICE Score |
|---|---|---|---|---|
| Add email reminder for incomplete onboarding | 7 | 8 | 9 | 504 |
| Redesign dashboard home | 6 | 5 | 3 | 90 |
| Add Google SSO | 8 | 9 | 7 | 504 |
| Build native mobile app | 9 | 4 | 2 | 72 |
Email reminder and Google SSO tie on score - resolve ties by asking which one unblocks more other work, or which one the team has more context on.
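The tie-break rule ("which item unblocks more other work") can be encoded as a secondary sort key. A sketch; the `items_unblocked` field is a hypothetical addition, not part of standard ICE:

```python
# Each item: (name, impact, confidence, ease, items_unblocked)
backlog = [
    ("Email reminder", 7, 8, 9, 0),
    ("Dashboard redesign", 6, 5, 3, 0),
    ("Google SSO", 8, 9, 7, 2),   # hypothetically unblocks SCIM and audit logging
    ("Native mobile app", 9, 4, 2, 0),
]

# Primary key: ICE score; tie-break: how much other work the item unblocks.
ranked = sorted(
    backlog,
    key=lambda item: (item[1] * item[2] * item[3], item[4]),
    reverse=True,
)
print([name for name, *_ in ranked])
# ['Google SSO', 'Email reminder', 'Dashboard redesign', 'Native mobile app']
```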
When to use ICE vs. RICE
| Scenario | Use |
|---|---|
| Large backlog triage (20+ items) | ICE - faster to score |
| Quarterly planning with stakeholders | RICE - more defensible |
| Growth experiment queue | ICE - built for experiment prioritization |
| Cross-functional initiative trade-offs | RICE - shared language across roles |
| Solo PM, small team, low ceremony | ICE - lightweight enough to use weekly |
ICE pitfalls
- Ease overweights low-hanging fruit - high-ease items may not move strategic metrics. Balance ICE scores with a "does this serve the strategy?" filter.
- No separation of Reach from Impact - ICE cannot distinguish "affects 10 users massively" from "affects 10,000 users minimally." When reach matters, use RICE.
MoSCoW
MoSCoW is a categorization method, not a scoring model. It is best used for scoping a specific release or sprint, not for general backlog ordering.
The name is an acronym: Must Have, Should Have, Could Have, Won't Have.
Category definitions
Must Have - The release cannot ship without this. If it is missing, the release fails. Criteria:
- Legal or compliance requirement
- Core function without which the product does not work
- Agreed contract commitment to a customer
- Blocks another Must Have item
Test: "If we removed this, would we have to delay the release entirely?" If yes, it is a Must Have.
Should Have - Important, valuable, and expected - but the release can function without it in a degraded form. Include if capacity allows. Move to next release if needed without risk.
Could Have - Nice to have. Improves experience but not expected by users. Cut first when scope is tight. Sometimes called "wish list" items.
Won't Have (this time) - Explicitly out of scope for this release cycle. Not rejected permanently - just parked. Writing these down is critical: it prevents the same items from being re-raised in every planning meeting.
MoSCoW in practice
Run MoSCoW as a workshop: list all candidate items, then vote them into categories. Require that Must Haves account for no more than 60% of available capacity, leaving room for Should Haves and buffer for risk.
| Category | Target capacity allocation |
|---|---|
| Must Have | 60% |
| Should Have | 20% |
| Could Have | 10% |
| Buffer / risk | 10% |
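The capacity-allocation targets above can be enforced mechanically during the workshop. A minimal sketch; `check_scope` and the target dictionary are illustrative, not an established tool:

```python
# Target share of release capacity per MoSCoW category (from the table above).
CAPACITY_TARGETS = {"Must Have": 0.60, "Should Have": 0.20,
                    "Could Have": 0.10, "Buffer": 0.10}

def check_scope(allocations: dict[str, float]) -> list[str]:
    """Warn when a category exceeds its target share of capacity.

    allocations: category -> fraction of total capacity (should sum to ~1.0).
    """
    warnings = []
    for category, target in CAPACITY_TARGETS.items():
        used = allocations.get(category, 0.0)
        if used > target:
            warnings.append(f"{category} at {used:.0%} exceeds its {target:.0%} target")
    return warnings

print(check_scope({"Must Have": 0.75, "Should Have": 0.15,
                   "Could Have": 0.05, "Buffer": 0.05}))
# ['Must Have at 75% exceeds its 60% target']
```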
Worked example (mobile app v1.0 release scope)
Must Have:
- User authentication (login, logout, password reset)
- Core task creation and completion flow
- Push notification for due dates
- Offline read access
Should Have:
- Search across tasks
- File attachment support
- Dark mode
Could Have:
- Custom notification sounds
- Widget for home screen
- Keyboard shortcuts
Won't Have (this release):
- Team collaboration features
- Calendar integration
- AI task suggestions
When to use MoSCoW
- Scoping a specific release or sprint with a fixed deadline
- Aligning with stakeholders on what is and is not included
- Release planning for external commitments (customer demos, conference deadlines)
- When the question is "what ships now?" not "what do we build next year?"
MoSCoW pitfalls
- Everything becomes Must Have - stakeholders lobby to upgrade their item. Enforce the 60% rule strictly. If adding a Must Have means removing another, make the trade explicit.
- Won't Have items get forgotten - document them in a visible backlog location. They are candidates for the next cycle, not the trash.
Kano Model
The Kano model maps features to customer satisfaction curves to identify which features are expected (must-haves), which create linear satisfaction (performance features), and which delight beyond expectations (delighters / exciters).
The three core Kano categories
Basic needs (Must-be / Dissatisfiers) Features customers expect as table stakes. Their presence does not increase satisfaction, but their absence causes strong dissatisfaction. Customers do not mention these in research because they assume they exist.
Examples: password reset, HTTPS, mobile responsiveness, 99.9% uptime.
Rule: Invest enough to meet baseline expectations, then stop. Over-investing here does not move satisfaction - it just avoids disaster.
Performance needs (One-dimensional / Satisfiers) Features where more is better in a linear relationship. The more capability you provide, the more satisfied the customer. These show up clearly in customer surveys and competitive differentiation.
Examples: search quality, page load speed, report depth, storage limits.
Rule: Invest proportionally to how much customers value the metric. Benchmark against competitors. Diminishing returns exist here too, but they set in later than with Basic needs.
Excitement needs (Attractive / Delighters) Features customers did not expect and did not know to ask for, but react to with genuine delight. When absent, customers are not dissatisfied (they did not expect them). When present, they drive strong positive word-of-mouth and differentiation.
Examples: Spotify Wrapped, Notion's confetti on task completion, Figma's multiplayer cursors.
Rule: A few well-chosen delighters differentiate more than dozens of performance improvements. Invest selectively. Over time, delighters decay into performance features and eventually into basic needs (Kano decay).
Additional Kano categories
Indifferent - Features that neither satisfy nor dissatisfy regardless of presence. Stop building these. They consume capacity without moving user sentiment.
Reverse - Features that actually decrease satisfaction when present. Some users dislike features that others love (e.g., autoplay, gamification elements).
Running a Kano survey
For each candidate feature, ask two questions:
- Functional form: "If this feature were present, how would you feel?"
- Dysfunctional form: "If this feature were NOT present, how would you feel?"
Response options: Delighted / Expected it / Neutral / Could live with it / Dislike it
Plot responses on the Kano matrix to categorize each feature.
| Functional reaction | Dysfunctional reaction | Category |
|---|---|---|
| Delighted | Dislike | Excitement (Delighter) |
| Expected it | Dislike | Performance |
| Neutral | Dislike | Basic need |
| Neutral | Neutral | Indifferent |
| Dislike | Delighted | Reverse |
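The matrix above translates directly into a lookup. This sketch covers only the pairs listed; pairs not in the table (e.g. contradictory answers) fall back to "Questionable", a label borrowed from fuller Kano evaluation tables:

```python
# Simplified Kano lookup: (functional reaction, dysfunctional reaction) -> category.
KANO_MATRIX = {
    ("Delighted", "Dislike"): "Excitement",
    ("Expected it", "Dislike"): "Performance",
    ("Neutral", "Dislike"): "Basic need",
    ("Neutral", "Neutral"): "Indifferent",
    ("Dislike", "Delighted"): "Reverse",
}

def categorize(functional: str, dysfunctional: str) -> str:
    return KANO_MATRIX.get((functional, dysfunctional), "Questionable")

print(categorize("Delighted", "Dislike"))    # Excitement
print(categorize("Delighted", "Delighted"))  # Questionable
```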
Kano + RICE combined approach
Use Kano to categorize features, then use RICE to sequence within each category:
- Ship all unfulfilled Basic needs first (table stakes - no scoring needed)
- RICE-score Performance features to find the highest-leverage investments
- Select 1-2 Excitement features per release cycle to differentiate
When to use Kano
- Understanding which features to invest in vs. just maintain
- Pre-launch research to scope a new product's MVP correctly
- Post-launch when satisfaction scores are flat despite shipping features
- Competitive analysis: mapping competitor features to Kano categories reveals gaps
Framework selection guide
| Question | Best framework |
|---|---|
| "What do we build in Q3?" (quarterly planning) | RICE |
| "Which experiments should growth run this sprint?" | ICE |
| "What ships in the v2 release?" (release scoping) | MoSCoW |
| "Which features make customers happy vs. expected?" | Kano |
| "Quick triage of 30 backlog items" | ICE |
| "Stakeholder alignment on scope" | MoSCoW |
| "Cross-team initiative trade-offs" | RICE |
| "MVP feature selection for a new product" | Kano + MoSCoW |