super-brainstorm
You MUST use this before any creative work - creating features, building components, adding functionality, modifying behavior, designing systems, or making architectural decisions. Enters plan mode, reads all available docs, explores the codebase deeply, then interviews the user relentlessly with ultrathink-level reasoning on every decision until a shared understanding is reached. Produces a validated design spec before any implementation begins. Triggers on feature requests, design discussions, refactors, new projects, component creation, system changes, and any task requiring design decisions.
super-brainstorm
super-brainstorm is a production-ready AI agent skill for claude-code, gemini-cli, openai-codex, and mcp. It enters plan mode, reads all available docs, explores the codebase deeply, then interviews the user relentlessly with ultrathink-level reasoning until a shared understanding is reached, producing a validated design spec before any implementation begins.
Quick Facts
| Field | Value |
|---|---|
| Category | workflow |
| Version | 0.1.0 |
| Platforms | claude-code, gemini-cli, openai-codex, mcp |
| License | MIT |
How to Install
- Make sure you have Node.js installed on your machine.
- Run the following command in your terminal:
npx skills add AbsolutelySkilled/AbsolutelySkilled --skill super-brainstorm
- The super-brainstorm skill is now available in your AI coding agent (Claude Code, Gemini CLI, OpenAI Codex, etc.).
Tags
brainstorming design planning architecture spec-writing interviewing
Platforms
- claude-code
- gemini-cli
- openai-codex
- mcp
Frequently Asked Questions
What is super-brainstorm?
super-brainstorm is a workflow skill that must be used before any creative work - creating features, building components, designing systems, or making architectural decisions. It enters plan mode, reads all available docs, explores the codebase deeply, and interviews the user with ultrathink-level reasoning until a shared understanding is reached, producing a validated design spec before any implementation begins.
How do I install super-brainstorm?
Run npx skills add AbsolutelySkilled/AbsolutelySkilled --skill super-brainstorm in your terminal. The skill will be immediately available in your AI coding agent.
What AI agents support super-brainstorm?
This skill works with claude-code, gemini-cli, openai-codex, and mcp. Install it once and use it across any supported AI coding agent.
Maintainers
Generated from AbsolutelySkilled
SKILL.md
Super Brainstorm
Activation Banner
At the very start of every super-brainstorm invocation, before any other output, display this ASCII art banner:
███████╗██╗ ██╗██████╗ ███████╗██████╗
██╔════╝██║ ██║██╔══██╗██╔════╝██╔══██╗
███████╗██║ ██║██████╔╝█████╗ ██████╔╝
╚════██║██║ ██║██╔═══╝ ██╔══╝ ██╔══██╗
███████║╚██████╔╝██║ ███████╗██║ ██║
╚══════╝ ╚═════╝ ╚═╝ ╚══════╝╚═╝ ╚═╝
██████╗ ██████╗ █████╗ ██╗███╗ ██╗███████╗████████╗ ██████╗ ██████╗ ███╗ ███╗
██╔══██╗██╔══██╗██╔══██╗██║████╗ ██║██╔════╝╚══██╔══╝██╔═══██╗██╔══██╗████╗ ████║
██████╔╝██████╔╝███████║██║██╔██╗ ██║███████╗ ██║ ██║ ██║██████╔╝██╔████╔██║
██╔══██╗██╔══██╗██╔══██║██║██║╚██╗██║╚════██║ ██║ ██║ ██║██╔══██╗██║╚██╔╝██║
██████╔╝██║ ██║██║ ██║██║██║ ╚████║███████║ ██║ ╚██████╔╝██║ ██║██║ ╚═╝ ██║
╚═════╝ ╚═╝  ╚═╝╚═╝  ╚═╝╚═╝╚═╝  ╚═══╝╚══════╝   ╚═╝    ╚═════╝ ╚═╝  ╚═╝╚═╝     ╚═╝
Follow the banner immediately with: Entering plan mode - ultrathink enabled
A relentless, ultrathink-powered design interview that turns vague ideas into bulletproof specs. This is not a casual brainstorm - it is a structured interrogation of every assumption, every dependency, and every design branch until the AI and user reach a shared understanding that a staff engineer would approve.
When to use this skill
Trigger this skill when the user:
- Requests a new feature, component, or functionality that needs design before implementation
- Asks to build something and the scope or approach is not yet clear
- Wants to brainstorm or explore options for a design decision
- Is starting a greenfield project and needs to define architecture
- Needs to refactor or redesign an existing system
- Says "let's think through this" or "help me plan this"
- Wants to decompose a complex request into a validated spec before coding
Do NOT trigger this skill for:
- Quick bug fixes or typo corrections where the solution is obvious
- Pure code review tasks with no design decisions
- Tasks where the user explicitly says "just do it" and the scope is trivially clear
Hard Gates
Anti-Pattern: "This Is Too Simple To Need A Design"
Every project goes through this process. A todo list, a single-function utility, a config change - all of them. "Simple" projects are where unexamined assumptions cause the most wasted work. The design can be short (a few sentences for truly simple projects), but you MUST present it and get approval.
Checklist
You MUST complete these steps in order:
- Enter plan mode
- Deep context scan - read docs/, README.md, CLAUDE.md, CONTRIBUTING.md, recent commits, project structure
- Codebase-first exploration - before every question, check if the codebase already answers it
- Scope assessment - if the request spans multiple independent subsystems, decompose first
- Relentless interview - one question at a time, strictly linear, dependency-resolved, ultrathink every decision
- Approach proposal - only when there's a genuine fork; mark one (Recommended) with rationale
- Design presentation - section by section, user approval per section
- Write spec - save to docs/plans/YYYY-MM-DD-<topic>-design.md
- Spec review loop - dispatch reviewer subagent, fix issues, max 3 iterations
- User reviews spec - gate before proceeding
- Flexible exit - user chooses next step (writing-plans, super-human, direct implementation, etc.)
Process Flow
digraph super_brainstorm {
rankdir=TB;
node [shape=box];
"Enter plan mode" -> "Deep context scan";
"Deep context scan" -> "Scope assessment";
"Scope assessment" -> "Decompose into sub-projects" [label="too large"];
"Scope assessment" -> "Relentless interview" [label="right-sized"];
"Decompose into sub-projects" -> "Relentless interview" [label="first sub-project"];
"Relentless interview" -> "Genuine fork?" [shape=diamond];
"Genuine fork?" -> "Propose approaches\n(mark Recommended)" [label="yes"];
"Genuine fork?" -> "Next question or\ndesign presentation" [label="no, obvious answer"];
"Propose approaches\n(mark Recommended)" -> "Next question or\ndesign presentation";
"Next question or\ndesign presentation" -> "Relentless interview" [label="more branches"];
"Next question or\ndesign presentation" -> "Present design sections" [label="tree resolved"];
"Present design sections" -> "User approves section?" [shape=diamond];
"User approves section?" -> "Present design sections" [label="no, revise"];
"User approves section?" -> "Write spec to docs/plans/" [label="yes, all sections"];
"Write spec to docs/plans/" -> "Spec review loop\n(subagent, max 3)";
"Spec review loop\n(subagent, max 3)" -> "User reviews spec";
"User reviews spec" -> "Write spec to docs/plans/" [label="changes requested"];
"User reviews spec" -> "User chooses next step" [label="approved"];
}
Phase 1: Deep Context Scan
Before asking the user a single question, build comprehensive project awareness.
Mandatory reads (if they exist):
- docs/ directory - read README.md first, then scan all files
- README.md at project root
- CLAUDE.md / .claude/ configuration
- CONTRIBUTING.md
- docs/plans/ - existing design docs that might overlap
- Recent git commits (last 10-20)
- Package manifests (package.json, Cargo.toml, pyproject.toml, etc.)
- Project structure overview (top-level directories)
What you're looking for:
- Existing patterns, conventions, and architectural decisions
- Tech stack and dependencies
- Testing patterns and CI setup
- Overlapping or related design docs
- Code style and organizational conventions
Output to the user: A brief summary of what you found, highlighting anything relevant to the task at hand. Do NOT dump a file listing - synthesize what matters.
Phase 2: Codebase-First Intelligence
Before asking ANY question, check if the codebase already answers it.
This is the core differentiator. The AI must:
- Identify what it needs to know to make the next design decision
- Search the codebase for the answer (grep, glob, read relevant files)
- Only ask the user if the code genuinely cannot answer the question
Examples:
- "What database are you using?" - DON'T ASK. Check package.json, config files, existing code.
- "How do you handle authentication?" - DON'T ASK. Search for auth middleware, JWT usage, session handling.
- "What testing framework?" - DON'T ASK. Check test files, config, package.json.
- "What's the visual style you want?" - ASK. Code can't answer aesthetic preferences.
- "Should this be real-time or batch?" - ASK. This is a product decision the codebase can't resolve.
When you DO find the answer in the codebase, tell the user what you found:
"I see you're using Prisma with PostgreSQL (from prisma/schema.prisma). I'll design around that."
This builds confidence and saves the user from answering questions they already answered in code.
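The codebase-first check can be sketched as a lookup over manifest dependencies. This is a minimal illustration, not part of the skill itself: the `KNOWN_ANSWERS` mapping and the `answers_from_manifest` helper are hypothetical names, and a real agent would grep configs and source files, not just package.json.

```python
import json

# Dependency names mapped to the design question they already answer.
# This mapping is illustrative, not exhaustive.
KNOWN_ANSWERS = {
    "prisma": "ORM: Prisma",
    "pg": "Database: PostgreSQL",
    "jest": "Testing framework: Jest",
    "vitest": "Testing framework: Vitest",
}

def answers_from_manifest(manifest_text: str) -> list[str]:
    """Return design questions already answered by package.json dependencies."""
    manifest = json.loads(manifest_text)
    deps = {**manifest.get("dependencies", {}), **manifest.get("devDependencies", {})}
    return [answer for dep, answer in KNOWN_ANSWERS.items() if dep in deps]

# A manifest that already answers the ORM and testing-framework questions:
sample = '{"dependencies": {"prisma": "^5.0.0"}, "devDependencies": {"vitest": "^1.0.0"}}'
print(answers_from_manifest(sample))
# ['ORM: Prisma', 'Testing framework: Vitest']
```

Only questions absent from the returned list should ever reach the user.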
Phase 3: Scope Assessment
Before diving into detailed questions, assess scope.
If the request describes multiple independent subsystems (e.g., "build a platform with chat, file storage, billing, and analytics"):
- Flag this immediately
- Help decompose into sub-projects: what are the independent pieces, how do they relate, what order should they be built?
- Each sub-project gets its own brainstorm -> spec -> plan -> implementation cycle
- Brainstorm the first sub-project through the normal design flow
If the request is appropriately scoped, proceed to the interview.
Phase 4: Relentless Interview
This is the heart of the skill. Walk down every branch of the design tree, resolving dependencies between decisions one by one.
Rules:
- Use the AskUserQuestion tool for every question - this is a built-in Claude Code tool that pauses execution and waits for the user's response. Use it for every interview question, every section approval, and every decision point. Never just print a question in your output - always use the tool so the conversation properly blocks until the user responds.
- Ultrathink before every question - reason deeply about what you need to know next, what depends on what, and whether the codebase can answer it
- Strictly linear - if decision B depends on decision A, never ask about B until A is locked
- Prefer multiple choice when possible - easier to answer. Include your recommendation marked as (Recommended) with a clear rationale
- Never fake options - only propose multiple approaches when there's a genuine fork in the road. If the codebase and interview clearly point to one right answer, present it with reasoning for why alternatives were dismissed
- Codebase check before every question - search the code first, only ask what code can't tell you
- Keep going until every decision node is resolved - don't shortcut, don't assume, don't hand-wave. If a branch of the design tree hasn't been explored, explore it.
What to interview about:
- Purpose and success criteria
- User/consumer personas
- Data model and relationships
- Component boundaries and interfaces
- State management and data flow
- Error handling and edge cases
- Performance requirements and constraints
- Security considerations
- Testing strategy
- Migration path (if modifying existing systems)
- Backwards compatibility concerns
Design tree traversal: Think of the design as a tree of decisions. Each decision may open new branches. Walk the tree depth-first, resolving each branch fully before moving to siblings.
Feature X
- Who is this for? (resolve)
- What's the core interaction? (resolve)
- How does data flow? (resolve)
- What are the edge cases? (resolve)
- What are the error states? (resolve)
- What's the secondary interaction? (resolve)
- How does this integrate with existing system? (resolve)
Phase 5: Approach Proposals
Only propose multiple approaches when there is a genuine design fork.
When the answer is obvious: Present the single approach with reasoning. Briefly mention why you dismissed alternatives:
"Given your existing Express + Prisma stack and the read-heavy access pattern, a new Prisma model with a cached read path is the clear approach. A separate microservice would add complexity without benefit at this scale, and a raw SQL approach would lose Prisma's type safety."
When there's a genuine fork: Present each option with:
- What it is (1-2 sentences)
- Pros and cons
- When you'd pick it
- Mark one as (Recommended) with clear rationale
Phase 6: Design Presentation
Once the design tree is fully resolved, present the design section by section.
Rules:
- Scale each section to its complexity: a few sentences if straightforward, up to 200-300 words if nuanced
- Ask after each section whether it looks right so far
- Cover: architecture, components, data flow, error handling, testing approach
- Be ready to go back and revise if something doesn't fit
- Reference existing codebase patterns you're building on
Design for isolation and clarity:
- Break the system into smaller units with one clear purpose each
- Well-defined interfaces between units
- Each unit should be understandable and testable independently
- For each unit: what does it do, how do you use it, what does it depend on?
Working in existing codebases:
- Follow existing patterns. Don't fight the codebase.
- Where existing code has problems that affect the work (e.g., a file that's grown too large, unclear boundaries), include targeted improvements as part of the design - the way a good developer improves code they're working in.
- Don't propose unrelated refactoring. Stay focused on what serves the current goal.
Phase 7: Write Spec
After user approves the full design:
- Write to docs/plans/YYYY-MM-DD-<topic>-design.md
- Clear, concise prose. No fluff.
- Sections should mirror what was discussed and approved
- Include: summary, architecture, components, data model, interfaces, error handling, testing strategy, migration path (if applicable)
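The file naming convention above can be sketched as a small helper; `spec_path` is a hypothetical name used only to illustrate how the date and a slugified topic combine into the `docs/plans/` path:

```python
import datetime
import re

def spec_path(topic: str, today: datetime.date) -> str:
    """Build the docs/plans/YYYY-MM-DD-<topic>-design.md path for a spec."""
    # Slugify: lowercase, collapse non-alphanumeric runs to hyphens, trim edges.
    slug = re.sub(r"[^a-z0-9]+", "-", topic.lower()).strip("-")
    return f"docs/plans/{today.isoformat()}-{slug}-design.md"

print(spec_path("Commenting System", datetime.date(2025, 1, 15)))
# docs/plans/2025-01-15-commenting-system-design.md
```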
Phase 8: Spec Review Loop
After writing the spec, dispatch a reviewer subagent:
Agent tool (general-purpose):
description: "Review spec document"
prompt: |
You are a spec document reviewer. Verify this spec is complete and ready
for implementation planning.
Spec to review: [SPEC_FILE_PATH]
| Category | What to Look For |
|---|---|
| Completeness | TODOs, placeholders, "TBD", incomplete sections |
| Consistency | Internal contradictions, conflicting requirements |
| Clarity | Requirements ambiguous enough to cause building the wrong thing|
| Scope | Focused enough for a single plan |
| YAGNI | Unrequested features, over-engineering |
Only flag issues that would cause real problems during implementation.
Approve unless there are serious gaps.
Output format:
## Spec Review
**Status:** Approved | Issues Found
**Issues (if any):**
- [Section X]: [specific issue] - [why it matters]
**Recommendations (advisory, do not block approval):**
- [suggestions]
- If issues found: fix them, re-dispatch, repeat
- Max 3 iterations. If still failing, surface to the user for guidance.
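The bounded review loop can be sketched as follows. `run_reviewer` and `fix_issues` are stand-in callables for the subagent dispatch and fix steps described above, not real APIs:

```python
def review_loop(run_reviewer, fix_issues, max_iterations=3):
    """Dispatch the reviewer, fix reported issues, repeat up to max_iterations.

    run_reviewer() returns a list of issues (empty list means approved);
    fix_issues(issues) applies fixes to the spec.
    """
    for _ in range(max_iterations):
        issues = run_reviewer()
        if not issues:
            return "approved"
        fix_issues(issues)
    # Still failing after the cap: escalate rather than loop forever.
    return "surface to user"

# Example: a reviewer that finds one issue on the first pass only.
findings = [["[Data model]: missing max length"], []]
result = review_loop(lambda: findings.pop(0), lambda issues: None)
print(result)
# approved
```

Note that "approved" here is only the reviewer's verdict; the mandatory user gate in Phase 9 still applies.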
Phase 9: User Reviews Spec
After the review loop passes:
"Spec written to <path>. Please review it and let me know if you want to make any changes before we proceed."
Wait for the user's response. If they request changes, make them and re-run the spec review loop. Only proceed once the user approves.
Phase 10: Flexible Exit
Once the spec is approved, present the user with options:
"Spec is approved. What would you like to do next?"
- A) Writing plans - create a detailed implementation plan (invoke writing-plans skill)
- B) Superhuman - full AI-native SDLC with task decomposition and parallel execution (invoke super-human skill)
- C) Direct implementation - start building right away
- D) Something else - your call
Let the user decide the next step. Do not auto-invoke any skill.
Key Principles
- Ultrathink everything - deep reasoning before every decision, no lazy shortcuts
- Codebase before questions - respect the user's time, only ask genuine unknowns
- One question at a time via AskUserQuestion tool - never overwhelm, always use the built-in tool to ask
- Honest options - real forks get multiple approaches, obvious answers get presented directly
- Always mark (Recommended) - every set of options includes a clear recommendation with rationale
- YAGNI ruthlessly - remove unnecessary features from all designs
- Incremental validation - present design section by section, get approval before moving on
- Plan mode always - this skill operates entirely in plan mode
Gotchas
- AskUserQuestion tool not available in all environments - The AskUserQuestion tool is a Claude Code-specific built-in. In other environments (Gemini CLI, OpenAI Codex), it may not exist. Fall back to printing the question as a clearly demarcated output block and waiting for user response, but track that you are waiting for an answer before proceeding.
- Deep context scan can consume the entire context window - Reading every file in docs/ and every recent commit in a large codebase can exhaust the context window before the first question is asked. Be selective: read README, CLAUDE.md, and recent commits first; only go deeper on files directly relevant to the task.
- Spec saved to docs/plans/ in the wrong repo - If the skill is invoked in a monorepo or a workspace with multiple docs/ directories, saving the spec to the wrong subdirectory means it will never be found during future DISCOVER phases. Confirm the target docs/plans/ path with the user before writing.
- Reviewer subagent approves incomplete specs - The reviewer subagent is prompted to "approve unless there are serious gaps," which means minor incompleteness often passes. Do not treat reviewer approval as a substitute for user approval. The user gate in Phase 9 is mandatory regardless of the reviewer's verdict.
- Flexible exit auto-invokes the next skill - Presenting the exit options and then immediately invoking a skill without waiting for user input defeats the purpose of a flexible exit. Always use AskUserQuestion (or equivalent) to receive the user's choice before taking any post-spec action.
Anti-Patterns and Common Mistakes
| Anti-Pattern | Better Approach |
|---|---|
| Asking questions the codebase can answer | Search code first - check configs, existing patterns, test files before every question |
| Batching multiple questions in one message | One question at a time, always. Break complex topics into sequential questions |
| Printing questions as plain text output | Always use the AskUserQuestion tool to ask - it blocks until the user responds |
| Skipping docs/ and README before starting | Always read all available documentation before the first question |
| Proposing fake alternatives when the answer is obvious | Present the single right answer with rationale; only show options at genuine forks |
| Accepting vague answers without follow-up | Dig deeper - "what do you mean by that?" is always valid |
| Asking about implementation before purpose | Always resolve "why" and "what" before "how" |
| Not exploring error/edge case branches | Every design tree has an error handling branch - walk it |
| Jumping to code before spec approval | Hard gate: no code, no scaffolding, no implementation until spec is approved |
| Presenting options without a (Recommended) marker | Every option set must have a clear recommendation with rationale |
| Using normal thinking when ultrathink is required | Ultrathink on every decision, every question, every proposal - no exceptions |
| Decomposing too late | Flag multi-system scope immediately, don't spend 10 questions refining details of an unscoped project |
| Auto-invoking the next skill without asking | Flexible exit - always let the user choose what happens after spec approval |
References
For detailed guidance on specific aspects, load these reference files:
- references/interview-playbook.md - Design tree traversal, question banks by project type, codebase-first intelligence patterns, example interview sessions
- references/spec-writing.md - Spec document template, section scaling rules, writing style guide, decision log format, spec review checklist, example spec
- references/approach-analysis.md - When to propose multiple approaches, approach proposal format, trade-off dimensions, project decomposition guide, common decision trees
Only load a references file if the current phase requires it - they are long and will consume context.
References
approach-analysis.md
Approach Analysis
This reference covers when and how to propose multiple approaches, how to evaluate trade-offs, and how to decompose large projects into manageable sub-projects.
When to Propose Multiple Approaches
Not every decision deserves a multi-approach breakdown. Use this decision framework to determine the right level of analysis.
Decision Framework
| Scenario | What to Do | Example |
|---|---|---|
| Genuine fork - multiple viable paths with meaningful trade-offs | Propose 2-3 approaches with explicit trade-offs | "Auth: NextAuth vs Clerk vs custom JWT" |
| Clear winner - one approach is obviously better | Present it directly, explain why alternatives were dismissed | "Use TypeScript - the project already uses it everywhere" |
| Constraint-driven - requirements narrow it to one option | State the constraint and the single approach | "Must use PostgreSQL per company policy" |
How to Tell Which Scenario You Are In
Ask these questions in order:
- Is there a hard constraint? (company policy, existing tech stack, compliance requirement) - If yes, it is constraint-driven. State the constraint and move on.
- Does one option dominate on every dimension? (faster, simpler, cheaper, more maintainable) - If yes, it is a clear winner. Present it with brief dismissals.
- Do the options trade off against each other? (A is faster but harder to maintain, B is slower but more flexible) - If yes, it is a genuine fork. Present the full comparison.
Anti-Patterns
- Analysis paralysis: Proposing 5+ approaches when 2 would suffice
- False equivalence: Presenting a clearly inferior option alongside a superior one just to fill space
- Missing the obvious: Over-analyzing when the codebase already has an established pattern
- Premature commitment: Picking an approach without surfacing the trade-offs to the user
Approach Proposal Format
Use this format when presenting 2-3 approaches for a genuine fork.
### Approach A: [Name] **(Recommended)**
[1-2 sentence description]
- Pros: ...
- Cons: ...
- When to pick: ...
- Complexity: S/M/L
### Approach B: [Name]
[1-2 sentence description]
- Pros: ...
- Cons: ...
- When to pick: ...
- Complexity: S/M/L
**Recommendation:** Approach A because [specific rationale tied to project context]
Format Rules
- Always lead with the recommended approach. The user should see the best option first.
- Cap at 3 approaches. If you have more, you have not filtered enough.
- Tie the recommendation to project context. "Approach A because your app already uses Redux" is better than "Approach A because it is popular."
- Complexity uses t-shirt sizes. S = hours, M = days, L = weeks. Be honest.
- "When to pick" is mandatory. It tells the user under what conditions each approach wins - this is the most valuable part.
Single Approach Format
Use this format when the answer is obvious (clear winner or constraint-driven).
Given [project context], [approach] is the clear path because [reason].
I considered [alternative 1] but dismissed it because [reason].
[Alternative 2] doesn't apply here because [reason].
Rules for Single Approach
- Still mention what you considered and why you dismissed it. This builds trust.
- Keep the dismissals to one sentence each.
- Do not apologize for not providing multiple options. Confidence is appropriate when the answer is clear.
Trade-off Dimensions
When comparing approaches, evaluate them against these dimensions. You do not need all of them - pick the 3-5 most relevant to the decision at hand.
| Dimension | What to Compare | Questions to Ask |
|---|---|---|
| Complexity | Implementation effort and cognitive load | How many moving parts? How hard is it to reason about? |
| Performance | Runtime speed, memory, bundle size | Does the performance difference matter at this scale? |
| Maintainability | Long-term cost of ownership | Can a new team member understand this in 30 minutes? |
| Testing difficulty | How hard it is to write and maintain tests | Can we test this without complex mocking or infrastructure? |
| Migration risk | Risk of breaking existing functionality | What is the blast radius if something goes wrong? |
| Team familiarity | How well the team knows the technology | Will this require a learning curve that slows delivery? |
| Time to implement | Calendar time from start to working feature | What is the fastest path to a working solution? |
| Reversibility | How easy it is to undo the decision | If this turns out to be wrong, can we switch without a rewrite? |
Weighting by Project Phase
- Early-stage / MVP: Weight time to implement, reversibility, and complexity highest
- Growth / scaling: Weight performance, maintainability, and testing difficulty highest
- Mature / enterprise: Weight migration risk, team familiarity, and maintainability highest
Project Decomposition Guide
Large requests often hide multiple independent projects. Breaking them apart leads to better planning, clearer milestones, and earlier delivery of value.
Signals That a Project Is Too Large
A request should be decomposed if it has 2 or more of these signals:
| Signal | Example |
|---|---|
| Multiple independent subsystems | "Build a platform with chat, billing, and analytics" |
| Different user personas | "Admin dashboard and customer-facing storefront" |
| Separate data stores or schemas | "User database and event log and file storage" |
| Different deployment targets | "Web app, mobile API, and CLI tool" |
| Independent release cycles | "Auth can ship before billing is ready" |
| Different tech stacks within the request | "React frontend and Python ML pipeline" |
How to Identify Sub-Project Boundaries
- List the nouns. Every distinct entity (user, order, notification, report) is a candidate boundary.
- Draw the data flow. Where data crosses from one noun to another is a boundary.
- Ask "Can this ship alone?" If a subsystem delivers value without the others, it is its own sub-project.
- Check coupling. If two subsystems share a database table or API, they may need to be in the same sub-project, or the shared layer becomes its own sub-project.
How to Determine Build Order
Once you have sub-projects, order them by:
- Shared foundations first. Auth, database schema, core data models.
- Highest-value next. The sub-project that delivers the most user value or de-risks the most unknowns.
- Independent sub-projects in parallel. If two sub-projects share no dependencies, they can run simultaneously.
- Integration and glue last. Cross-cutting concerns like notifications, analytics, and dashboards that aggregate data from other sub-projects.
Example: Decomposing "Build a Platform with Chat, Billing, and Analytics"
Step 1 - Identify sub-projects:
| Sub-Project | Core Entities | Data Store | Ships Alone? |
|---|---|---|---|
| User & Auth | User, Session, Role | Users DB | Yes |
| Chat | Message, Conversation, Participant | Messages DB | Yes (after User & Auth) |
| Billing | Subscription, Invoice, Payment | Billing DB | Yes (after User & Auth) |
| Analytics | Event, Metric, Dashboard | Events DB / warehouse | Yes (after User & Auth) |
Step 2 - Determine build order:
Phase 1: User & Auth (foundation)
Phase 2: Chat, Billing [parallel - independent of each other]
Phase 3: Analytics [depends on events from Chat + Billing]
Phase 4: Cross-cutting (admin dashboard, notification preferences, onboarding flow)
Step 3 - Validate boundaries:
- Chat and Billing both depend on User & Auth but not on each other - confirmed independent.
- Analytics consumes events from Chat and Billing - confirmed it must come after.
- Admin dashboard aggregates data from all three - confirmed it is last.
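The build-order rules above (shared foundations first, independents in parallel, integration last) amount to a topological sort over the dependency graph. A minimal sketch using the standard library, with sub-project names taken from the example:

```python
from graphlib import TopologicalSorter

# Dependency graph from the decomposition above: node -> set of prerequisites.
deps = {
    "chat": {"user-auth"},
    "billing": {"user-auth"},
    "analytics": {"chat", "billing"},
    "admin-dashboard": {"chat", "billing", "analytics"},
}

ts = TopologicalSorter(deps)
ts.prepare()
phases = []
while ts.is_active():
    ready = sorted(ts.get_ready())  # sub-projects with no unmet dependencies
    phases.append(ready)            # everything in one phase can run in parallel
    ts.done(*ready)
print(phases)
# [['user-auth'], ['billing', 'chat'], ['analytics'], ['admin-dashboard']]
```

Each inner list is one phase; Chat and Billing land in the same phase because they share no dependency edge, matching the validation in Step 3.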
Common Approach Patterns
Reusable decision trees for recurring architectural decisions. Use these as starting points, not gospel - project context always wins.
New Feature in an Existing App
Is the feature closely related to an existing module?
YES --> Extend the existing module
- Lower risk, follows existing patterns
- Watch for: bloating the module beyond its original responsibility
NO --> Does it need its own data model or API surface?
YES --> Create a new module within the app
- Clear ownership, independent testing
- Watch for: unnecessary duplication of shared utilities
NO --> Create a shared utility or helper
- Lightweight, reusable
- Watch for: utilities that grow into modules over time
State Management
Is the state used by a single component?
YES --> Local state (useState, component state)
Is the state shared by a parent and a few children?
YES --> Lift state up or use composition
Is the state shared across a subtree of components?
YES --> Context (React Context, provide/inject)
Is the state shared app-wide or needs persistence?
YES --> Is it server-derived data?
YES --> Server state (React Query, SWR, tRPC)
NO --> Client store (Zustand, Redux, Pinia)
Data Fetching
Is the API internal to your team and typed?
YES --> tRPC or similar end-to-end typed solution
- Best DX, type safety, least boilerplate
Is the client a single app consuming a known set of endpoints?
YES --> REST with a typed client (OpenAPI codegen)
- Simple, cacheable, well-understood
Do clients need flexible queries across many entities?
YES --> GraphQL
- Flexible, avoids over/under-fetching
- Watch for: complexity of schema management and N+1 queries
Real-Time Communication
Is the update frequency low (< 1/second) and tolerance for delay is high?
YES --> Polling
- Simplest to implement, works everywhere
- Watch for: unnecessary server load at scale
Is the data flow one-directional (server to client)?
YES --> Server-Sent Events (SSE)
- Simpler than WebSocket, auto-reconnect, works with HTTP/2
- Watch for: limited browser connection pool in HTTP/1.1
Is the data flow bidirectional or high-frequency?
YES --> WebSocket
- Full-duplex, lowest latency
- Watch for: connection management, scaling, and load balancer configuration
Storage
Is the data structured with relationships?
YES --> Is the schema well-defined and stable?
YES --> SQL (PostgreSQL, MySQL)
- Strong consistency, joins, mature tooling
NO --> Document store (MongoDB, DynamoDB)
- Flexible schema, horizontal scaling
Is the data unstructured (files, media, blobs)?
YES --> Object/file storage (S3, GCS, local filesystem)
Is the data ephemeral or cache-like?
YES --> In-memory store (Redis, Memcached)
Is the data time-series or append-only events?
YES --> Time-series DB or event store (InfluxDB, EventStoreDB)
interview-playbook.md
Interview Playbook
The design interview is the engine of Super Brainstorm. A relentless, structured interview extracts every requirement, constraint, and edge case before a single line of code is written. This playbook provides the full methodology - design tree traversal, question banks, codebase intelligence patterns, calibration rules, and anti-patterns.
Design Tree Traversal
Every feature is a tree of decisions. Super Brainstorm walks that tree depth-first, resolving each branch completely before moving to siblings. This prevents half-explored requirements from haunting the implementation later.
Traversal Rules
- Root first - Start with the purpose node. Why does this feature exist? What problem does it solve?
- Depth before breadth - When a node has children, explore the first child fully before moving to siblings.
- Resolve before advancing - A node is "resolved" when you have a clear answer, a concrete decision, or an explicit deferral. Never leave a node ambiguous.
- Backtrack on dead ends - If a branch leads to "we don't need this," mark it as explicitly out of scope and backtrack.
- Dependency edges - When a node depends on another branch's resolution, note the dependency and resolve the blocker first.
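The traversal rules above amount to a preorder depth-first walk: resolve a node, then fully resolve each child before moving to its siblings. A minimal sketch of that ordering - the `DesignNode` type and `resolutionOrder` helper are illustrative, not part of any real API:

```typescript
// Illustrative design-tree node; names are hypothetical.
type DesignNode = {
  name: string;
  children?: DesignNode[];
};

// Returns the order in which nodes get resolved: each node before its
// children, and each child's subtree fully before the next sibling.
function resolutionOrder(root: DesignNode): string[] {
  const order: string[] = [];
  const visit = (node: DesignNode): void => {
    order.push(node.name); // resolve this node first
    for (const child of node.children ?? []) {
      visit(child); // depth before breadth
    }
  };
  visit(root);
  return order;
}

const tree: DesignNode = {
  name: "commenting-system",
  children: [
    { name: "purpose" },
    { name: "data-model", children: [{ name: "schema" }, { name: "threading" }] },
    { name: "permissions" },
  ],
};

console.log(resolutionOrder(tree));
// ["commenting-system", "purpose", "data-model", "schema", "threading", "permissions"]
```

Note that `permissions` is visited only after the entire `data-model` subtree, which is exactly why downstream questions can rely on upstream answers.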
Example: "Add a Commenting System"
commenting-system (root)
├── purpose
│ ├── who comments? (authenticated users only? admins?)
│ └── what is commentable? (posts? pages? both?)
├── data-model
│ ├── comment schema (author, body, timestamp, parent_id)
│ │ ├── max length? (validation)
│ │ └── rich text or plain text?
│ ├── threading (flat vs nested)
│ │ ├── if nested: max depth?
│ │ └── if nested: how to render deep threads?
│ └── storage (same DB? separate collection?)
├── permissions
│ ├── who can create? (depends on: purpose/who comments)
│ ├── who can edit? (author only? admins?)
│ ├── who can delete? (soft delete vs hard delete?)
│ └── moderation (flagging? auto-moderation? approval queue?)
├── ui
│ ├── comment input (inline? modal? expandable?)
│ ├── comment list (pagination? infinite scroll? load-more?)
│ ├── threading display (indentation? collapse/expand?)
│ └── empty state (no comments yet - what shows?)
├── real-time
│ ├── needed? (depends on: purpose - is this high-traffic?)
│ ├── if yes: websockets? polling? SSE?
│ └── optimistic updates?
├── notifications
│ ├── notify on reply? (depends on: threading decision)
│ ├── notify post author?
│ └── delivery channel (in-app? email? both?)
└── edge-cases
├── deleted parent comment with children
├── comment on deleted post
├── concurrent edits
└── spam/abuse
Traversal walkthrough:
1. Start at `purpose`. Ask: "Who will be commenting, and what content can they comment on?" Resolve fully.
2. Descend into `data-model > comment schema`. Ask about max length, formatting. Resolve.
3. Move to sibling: `data-model > threading`. This is a major fork - flat vs nested changes everything downstream. Resolve before continuing.
4. Move to `data-model > storage`. Resolve.
5. Now `data-model` is fully resolved. Move to `permissions`. Note that `permissions > who can create` depends on the `purpose` branch - but that is already resolved, so proceed.
6. Continue depth-first through `ui`, `real-time`, `notifications`, `edge-cases`.
The key insight: by the time you reach `notifications > notify on reply`, you have already resolved the threading decision, so you know whether replies even exist.
Question Banks by Project Type
Feature Development
| # | Question | Purpose | Depth Trigger |
|---|---|---|---|
| 1 | What is the feature and what user problem does it solve? | Root purpose | Always ask first |
| 2 | Who is the target user? Are there different user roles involved? | Scope actors | If multi-role system |
| 3 | What is the expected user flow from start to finish? | Map the journey | Always |
| 4 | What existing features does this interact with? | Dependency map | If not greenfield |
| 5 | What does the happy path look like? Walk me through it. | Core behavior | Always |
| 6 | What happens when things go wrong? (network failure, invalid input, missing data) | Error handling | Always |
| 7 | Are there any performance requirements? (response time, data volume) | Non-functional reqs | If data-heavy |
| 8 | Is this feature behind a flag or always-on? | Rollout strategy | If production system |
| 9 | What is explicitly out of scope for this version? | Prevent scope creep | Always |
| 10 | How will we know this feature is working correctly in production? | Observability | If production system |
System Design / Architecture
| # | Question | Purpose | Depth Trigger |
|---|---|---|---|
| 1 | What is the system's primary responsibility? One sentence. | Core abstraction | Always |
| 2 | What are the inputs and outputs? | Boundary definition | Always |
| 3 | What are the throughput and latency requirements? | Scale constraints | Always |
| 4 | What is the data model? What are the core entities and relationships? | Data layer | Always |
| 5 | How does data flow through the system? | Architecture pattern | Always |
| 6 | What are the failure modes? What happens when each dependency is down? | Resilience | Always |
| 7 | What consistency guarantees are needed? (eventual vs strong) | Data integrity | If distributed |
| 8 | What are the security boundaries? Who can access what? | Auth/authz model | Always |
| 9 | How will this be deployed and scaled? | Infrastructure | If production |
| 10 | What existing systems does this replace or integrate with? | Migration/integration | If not greenfield |
Refactoring
| # | Question | Purpose | Depth Trigger |
|---|---|---|---|
| 1 | What is the specific pain point with the current code? | Root cause | Always |
| 2 | What does the ideal end state look like? | Target architecture | Always |
| 3 | What is the blast radius? How many files, modules, consumers? | Risk assessment | Always |
| 4 | Is there test coverage for the code being refactored? | Safety net check | Always |
| 5 | Can this be done incrementally, or is it all-or-nothing? | Strategy | If blast radius > 5 files |
| 6 | Are there downstream consumers or public APIs affected? | Breaking changes | If library/shared code |
| 7 | What is the rollback plan if the refactor introduces regressions? | Safety | If production code |
| 8 | Are there performance characteristics that must be preserved? | Regression prevention | If performance-sensitive |
Greenfield Projects
| # | Question | Purpose | Depth Trigger |
|---|---|---|---|
| 1 | What problem does this project solve? Who has this problem? | Problem/user fit | Always |
| 2 | What are the 3-5 core features for v1? No more. | Scope discipline | Always |
| 3 | What is the tech stack? Any hard constraints? | Foundation | Always |
| 4 | Are there reference implementations, competitors, or designs to study? | Prior art | Always |
| 5 | What does the data model look like at a high level? | Core entities | Always |
| 6 | How will users authenticate and what are their roles? | Auth model | If multi-user |
| 7 | What third-party services or APIs are needed? | External deps | If integrations exist |
| 8 | What is the deployment target? (Vercel, AWS, self-hosted, etc.) | Infrastructure | Always |
| 9 | What is the testing strategy? (unit, integration, e2e) | Quality gates | Always |
| 10 | What is the priority order? If you could only ship one feature, which? | Prioritization | Always |
API Design
| # | Question | Purpose | Depth Trigger |
|---|---|---|---|
| 1 | Who are the consumers of this API? (frontend, mobile, third-party, internal) | Audience | Always |
| 2 | REST, GraphQL, gRPC, or something else? Why? | Protocol | Always |
| 3 | What are the core resources/entities? | Domain model | Always |
| 4 | What operations are needed on each resource? (CRUD? Custom actions?) | Endpoints | Always |
| 5 | What are the authentication and authorization requirements? | Security | Always |
| 6 | What is the pagination strategy? (cursor, offset, keyset) | Data access pattern | If list endpoints exist |
| 7 | What is the error response format? | Developer experience | Always |
| 8 | What are the rate limiting requirements? | Abuse prevention | If public API |
| 9 | Is versioning needed? What strategy? (URL, header, content negotiation) | Evolution | If long-lived API |
| 10 | What is the expected payload size and response time? | Performance | If high-traffic |
UI/UX Features
| # | Question | Purpose | Depth Trigger |
|---|---|---|---|
| 1 | What is the user trying to accomplish? Describe the goal, not the widget. | User intent | Always |
| 2 | Is there a design/mockup/wireframe? Link or describe it. | Visual spec | Always |
| 3 | What is the interaction model? (click, drag, keyboard, touch) | Input methods | Always |
| 4 | What are the states? (empty, loading, error, partial, complete) | State coverage | Always |
| 5 | How does this behave on different screen sizes? | Responsiveness | If web/mobile |
| 6 | What accessibility requirements exist? (keyboard nav, screen reader, contrast) | A11y | Always |
| 7 | What animations or transitions are expected? | Motion design | If interactive |
| 8 | What existing component library or design system should be used? | Consistency | If not greenfield |
| 9 | What is the data source? How is state managed? | Data binding | Always |
| 10 | What happens at the boundaries? (0 items, 1000 items, very long text) | Edge cases | Always |
Codebase-First Intelligence
Before asking the user a question, check whether the codebase already has the answer. Every question you ask that the codebase could have answered is a wasted round-trip and erodes trust.
What to Search Before Asking
| Before Asking About | Search For | Where to Look |
|---|---|---|
| Database / ORM | package.json deps (prisma, typeorm, sequelize, mongoose, drizzle), config files | package.json, *.config.*, prisma/schema.prisma, drizzle.config.* |
| Authentication | Auth middleware, JWT libraries, session config, auth routes | middleware/auth*, lib/auth*, **/passport*, package.json (next-auth, clerk, passport) |
| Testing framework | Test config, existing test files, test scripts | jest.config*, vitest.config*, .mocharc*, cypress.config*, package.json scripts |
| State management | Store files, context providers, state libraries | **/store/**, **/context/**, package.json (redux, zustand, jotai, mobx) |
| API patterns | Route files, handler patterns, middleware chain | **/routes/**, **/api/**, **/handlers/**, **/controllers/** |
| Styling approach | CSS config, style files, component patterns | tailwind.config*, postcss.config*, *.module.css, styled-components in deps |
| Component library | UI library deps, shared component directory | package.json (shadcn, radix, chakra, MUI), **/components/ui/** |
| Deployment | CI/CD config, Dockerfiles, deploy scripts | .github/workflows/*, Dockerfile, docker-compose*, vercel.json, netlify.toml |
| Linting / formatting | Lint config, prettier config, editor config | .eslintrc*, .prettierrc*, biome.json, .editorconfig |
| Monorepo structure | Workspace config, package directories | pnpm-workspace.yaml, lerna.json, turbo.json, packages/, apps/ |
| Environment variables | Env files, env validation, env usage | .env.example, .env.local, **/env.*, process.env usage in code |
| Error handling | Error classes, error middleware, error boundaries | **/errors/**, **/middleware/error*, **/ErrorBoundary* |
| Logging | Logger setup, logging library | **/logger*, package.json (winston, pino, bunyan) |
| Caching | Cache config, Redis/Memcached setup | **/cache*, package.json (redis, ioredis), **/redis* |
| File structure conventions | Existing module organization | Top-level directories, barrel exports (index.ts), naming patterns |
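Much of the table above reduces to scanning dependency manifests. A minimal sketch of that scan - the `KNOWN_DEPS` mapping and `inferStack` function are illustrative, not an existing tool:

```typescript
// Hypothetical mapping from dependency names to stack facts.
const KNOWN_DEPS: Record<string, string> = {
  prisma: "ORM: Prisma",
  mongoose: "ORM: Mongoose",
  "next-auth": "Auth: NextAuth",
  zustand: "State: Zustand",
  redux: "State: Redux",
  jest: "Tests: Jest",
  vitest: "Tests: Vitest",
};

// Returns facts the codebase already answers, so the interview can
// state findings instead of asking about them.
function inferStack(pkg: {
  dependencies?: Record<string, string>;
  devDependencies?: Record<string, string>;
}): string[] {
  const deps = { ...pkg.dependencies, ...pkg.devDependencies };
  return Object.keys(deps).flatMap((d) => (KNOWN_DEPS[d] ? [KNOWN_DEPS[d]] : []));
}

console.log(
  inferStack({
    dependencies: { prisma: "^5.0.0", "next-auth": "^4.24.0" },
    devDependencies: { vitest: "^1.0.0" },
  })
);
// ["ORM: Prisma", "Auth: NextAuth", "Tests: Vitest"]
```

In practice the scan also covers config files (tailwind.config*, prisma/schema.prisma, CI workflows), but the principle is the same: facts live in the repo.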
Intelligence-First Protocol
For every question you are about to ask, run this check:
- Can I find this in the codebase? Search for it. If found, state what you found and skip the question.
- Can I infer this from the codebase? If the project uses Prisma + PostgreSQL, do not ask "what database do you use?" Instead, state: "I see you're using Prisma with PostgreSQL" and ask the deeper question.
- Is this a preference or a fact? Facts live in code. Preferences require asking. "What test framework?" is a fact. "What level of test coverage do you want?" is a preference.
Question Calibration
Multiple Choice vs Open-Ended
| Use Multiple Choice When | Use Open-Ended When |
|---|---|
| There are 2-4 well-known options | The answer space is unbounded |
| You want to anchor the user on realistic choices | You want to discover requirements you haven't thought of |
| The user is non-technical and might not know terminology | The user is the domain expert and you need their mental model |
| Speed matters - reduce back-and-forth | Depth matters - you need their full reasoning |
Examples:
- Multiple choice: "For threading, should comments be: (a) flat - single level, (b) nested - up to 3 levels, (c) nested - unlimited depth?"
- Open-ended: "Describe how you envision the moderation workflow when a comment is flagged."
When a Question Is Too Broad
A question is too broad if the user would need more than 3 sentences to answer it well. Split it.
| Too Broad | Better |
|---|---|
| "How should the notification system work?" | "Should notifications be in-app only, email only, or both?" followed by "For in-app notifications, should they appear as a badge, a dropdown, or a full page?" |
| "What are the security requirements?" | "Who should be able to access this resource?" followed by "Do we need rate limiting on this endpoint?" |
| "How should errors be handled?" | "When the API returns a 4xx error, what should the user see?" followed by "Should we retry on network failures? How many times?" |
When to Merge vs Split Questions
Merge when two questions are so tightly coupled that answering one without the other is meaningless:
- "Should comments support threading, and if so, what is the max nesting depth?" (threading yes/no and depth are inseparable)
Split when a question bundles independent decisions:
- Bad: "Should we use WebSockets for real-time updates and Redis for caching?"
- Good: Ask about real-time transport separately from caching strategy.
Calibration Rules
- One decision per question. If your question contains "and," consider splitting.
- No compound conditionals. Instead of "If X, should we do Y or Z, and if not X, should we do A?" - first resolve X, then ask the follow-up.
- Ground in the codebase. Reference what you found: "I see you're using Express with middleware-based routing. Should the new auth endpoints follow the same pattern?"
- Offer a recommendation when you can. "I'd suggest cursor-based pagination here because your dataset will grow over time. Does that work, or do you need offset-based for a specific reason?"
- Timebox complexity. If a question opens a 20-minute rabbit hole, flag it: "This is a significant decision. Should we resolve it now or defer and use a simple placeholder?"
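The cursor-based pagination mentioned in the recommendation example above can be made concrete with a small sketch over an id-sorted in-memory list. The `paginate` helper and its cursor convention (cursor = last id of the previous page) are illustrative, not a library API:

```typescript
type Page<T> = { items: T[]; nextCursor: string | null };

// Items must be sorted by id ascending. Using the last seen id as the
// cursor keeps pages stable as new rows are appended, which is why
// cursor pagination beats offset pagination for growing datasets.
function paginate<T extends { id: string }>(
  items: T[],
  limit: number,
  cursor?: string
): Page<T> {
  const start = cursor ? items.findIndex((i) => i.id === cursor) + 1 : 0;
  const page = items.slice(start, start + limit);
  const hasMore = start + limit < items.length;
  return { items: page, nextCursor: hasMore ? page[page.length - 1].id : null };
}

const rows = [{ id: "a" }, { id: "b" }, { id: "c" }, { id: "d" }, { id: "e" }];
const first = paginate(rows, 2);                     // a, b; nextCursor "b"
const second = paginate(rows, 2, first.nextCursor!); // c, d; nextCursor "d"
```

A real implementation would push the cursor comparison into the database query (e.g. `WHERE id > $cursor ORDER BY id LIMIT $n`) rather than scanning in memory.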
Extracting Implicit Requirements
Users say what they want. They rarely say what they need. Watch for these patterns and surface the hidden requirements.
| User Says | Hidden Requirements |
|---|---|
| "Add notifications" | Delivery channel (in-app, email, push), read/unread state, notification preferences, quiet hours, batch/digest mode, notification center UI |
| "Make it real-time" | WebSocket infrastructure, reconnection logic, optimistic updates, conflict resolution, offline handling, connection state UI |
| "Add user roles" | Permission model, role assignment UI, role hierarchy, inheritance rules, admin override, audit logging |
| "Support file uploads" | Max file size, allowed formats, virus scanning, storage backend (S3, local), progress indicator, resume on failure, thumbnail generation |
| "Add a dashboard" | Data aggregation, refresh interval, date range filtering, export capability, empty state, loading skeletons, responsive layout |
| "Make it work offline" | Sync strategy, conflict resolution, storage limits, cache invalidation, sync status indicator, queue management |
| "Add search" | Full-text vs exact match, indexing strategy, debounce, highlighting, filters/facets, empty results state, typo tolerance, pagination |
| "Support multiple languages" | i18n framework, string extraction, RTL support, date/number formatting, language detection, translation workflow, fallback language |
| "Add an API" | Authentication, rate limiting, versioning, documentation, error format, pagination, CORS, request validation, response caching |
| "Add logging" | Log levels, structured vs unstructured, rotation policy, PII redaction, correlation IDs, alerting thresholds, log aggregation |
The Extraction Protocol
When you detect an implicit requirement:
- Acknowledge the stated requirement. "Got it - you want notifications."
- Surface 2-3 of the most critical hidden requirements as questions. "That raises a few design questions: How should notifications be delivered - in-app banner, email, or both? And should users be able to configure their notification preferences?"
- Do not dump all hidden requirements at once. Prioritize. Ask about the ones that affect architecture first, then circle back to polish later.
Example Interview Session
Task: "Add real-time notifications to our app"
Step 1: Codebase Intelligence Scan
Before asking anything, search the codebase:
Search: package.json
Found: Next.js 14, Prisma, PostgreSQL, NextAuth, Tailwind CSS, no WebSocket library
Search: **/notification*, **/socket*, **/ws*
Found: Nothing - no existing notification or real-time infrastructure
Search: **/middleware/auth*, **/lib/auth*
Found: NextAuth with JWT sessions, role-based access (user, admin)
Search: prisma/schema.prisma
Found: User model, Post model, Comment model - no Notification model
Search: **/api/**
Found: REST API routes under /api/ with consistent error handling middleware
Step 2: Design Tree
real-time-notifications (root)
├── purpose
│ ├── what events trigger notifications? (new comment, reply, mention, system)
│ └── who receives them? (post author, mentioned users, admins)
├── data-model
│ ├── Notification entity (id, userId, type, payload, read, createdAt)
│ └── storage (Prisma/PostgreSQL - extends existing setup)
├── delivery
│ ├── in-app (real-time push to open browser tabs)
│ │ ├── transport (WebSocket vs SSE vs polling)
│ │ └── reconnection & offline handling
│ ├── email (optional - for offline users)
│ │ └── batch/digest vs immediate
│ └── push notifications (mobile - defer or now?)
├── ui
│ ├── notification bell/badge (header component)
│ ├── notification dropdown/panel
│ ├── notification page (full list with pagination)
│ ├── read/unread state management
│ └── empty state
├── preferences
│ ├── per-type opt-in/opt-out
│ └── quiet hours / digest mode
└── edge-cases
├── user has 1000+ unread notifications
├── notification for deleted content
├── duplicate notifications (e.g., multiple mentions)
└── notification ordering (out-of-order delivery)
Step 3: Interview (Depth-First)
Agent: "I scanned the codebase. You're running Next.js 14 with Prisma/PostgreSQL, NextAuth with JWT sessions, and REST API routes. There's no existing notification or real-time infrastructure, so we're building from scratch. Let me walk through the design.
Starting with purpose - what events should trigger a notification? For example: (a) new comment on your post, (b) reply to your comment, (c) @mention, (d) system announcements, or (e) some combination?"
User: "Comments on your post and replies to your comments. Maybe mentions later."
Agent: "Got it - comments on posts and replies to comments for v1, mentions deferred. So the notification recipients are: post author (when someone comments) and comment author (when someone replies). Does that cover it, or should admins also receive notifications?"
User: "Just those two for now."
Agent: "Good. Moving to delivery - since there's no WebSocket infrastructure yet, I want to pick the right transport. For your use case (low-to-medium frequency notifications, not a chat app), I'd recommend Server-Sent Events (SSE) over WebSockets. SSE is simpler, works over HTTP (no special server config), and is sufficient for one-way server-to-client pushes. Does that work, or do you have a reason to prefer WebSockets?"
User: "SSE sounds fine."
Agent: "Now for the UI. I see your header component uses Tailwind and follows a consistent pattern. Should the notification indicator be: (a) a bell icon with a red badge count in the header, (b) a toast/popup in the corner, or (c) both?"
User: "Bell with badge count. And a dropdown when you click it."
Agent: "When the user clicks the bell and sees the dropdown - should clicking a notification mark it as read and navigate to the relevant content (e.g., the comment on the post)? And should there be a 'Mark all as read' action?"
User: "Yes to both."
Agent: "One edge case to resolve: what happens when a user has hundreds of unread notifications? Should the dropdown show the most recent 10-20 with a 'View all' link to a full notifications page? Or should the dropdown try to load everything?"
User: "Recent 10 with a view all link."
Agent: "Last question for this branch - email notifications. Should we send an email when the user is offline, or is in-app sufficient for v1?"
User: "In-app only for now."
Agent: "Here is the resolved design summary:
- Events: Comment on post, reply to comment
- Recipients: Post author, comment author
- Transport: SSE for real-time push
- Data model: New Notification table in Prisma (id, userId, type, referenceId, read, createdAt)
- UI: Bell icon with badge count in header, dropdown with 10 most recent, 'View all' page, click to navigate + mark read, 'Mark all as read' action
- Delivery: In-app only (no email for v1)
- Deferred: @mentions, email delivery, push notifications, notification preferences
Does this capture everything, or did I miss something?"
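The SSE transport agreed in this session is ultimately just text frames over a long-lived HTTP response. A minimal sketch, assuming the Notification shape from the summary above (`toSSEFrame` is an illustrative helper, not existing code):

```typescript
// Notification shape mirrors the agreed data model; field names are
// from the design summary, the type itself is hypothetical.
type Notification = {
  id: string;
  userId: string;
  type: "comment_on_post" | "reply_to_comment";
  referenceId: string;
  read: boolean;
  createdAt: string;
};

// SSE frames are plain text: optional "event:" and "id:" lines, one or
// more "data:" lines, terminated by a blank line.
function toSSEFrame(n: Notification): string {
  return `event: notification\nid: ${n.id}\ndata: ${JSON.stringify(n)}\n\n`;
}

const frame = toSSEFrame({
  id: "n1",
  userId: "u1",
  type: "reply_to_comment",
  referenceId: "c42",
  read: false,
  createdAt: "2026-03-18T12:00:00Z",
});
```

On the server, frames like this would be written to a response with `Content-Type: text/event-stream`; in the browser, `EventSource` handles parsing and automatic reconnection, which is a large part of why SSE was recommended over WebSockets here.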
Anti-Patterns
These are the most common mistakes during a design interview. Each one wastes the user's time or produces an incomplete design.
1. Asking Questions the Codebase Can Answer
Wrong: "What database are you using?"
Right: Search package.json and config files first. Then say: "I see you're using Prisma with PostgreSQL."
Why it matters: It signals you haven't done your homework. The user will lose confidence that you're thorough enough to build the feature correctly.
2. Batching Multiple Unrelated Questions
Wrong: "What should the notification bell look like, and how should we handle real-time transport, and do you need email notifications?"
Right: Ask one question at a time, depth-first. Resolve delivery transport before asking about email. Resolve the bell UI separately.
Why it matters: The user will answer the easiest question and skip the hard ones. You will think you have answers when you have gaps.
3. Asking About Implementation Before Purpose
Wrong: "Should we use WebSockets or SSE?"
Right: First ask what the feature does and how frequently events occur. Then recommend the transport based on the requirements.
Why it matters: Implementation decisions without context are coin flips. Purpose determines constraints, constraints determine implementation.
4. Accepting Vague Answers Without Follow-Up
Wrong:
- User: "It should handle errors gracefully."
- Agent: "Got it." (moves on)
Right:
- User: "It should handle errors gracefully."
- Agent: "Can you give me an example? When a notification fails to deliver, should we: (a) silently retry, (b) show an error toast to the user, or (c) log it and move on?"
Why it matters: "Gracefully" means something different to every person. Vague requirements become bugs.
5. Not Exploring Error and Edge Case Branches
Wrong: Only asking about the happy path - what happens when everything works.
Right: For every feature branch, ask: "What happens when this fails? What happens at the boundaries (0 items, 10,000 items, very long input)?"
Why it matters: Edge cases are where most bugs live. If you don't ask, the user won't volunteer them, and you will discover them during implementation when they are 10x more expensive to resolve.
6. Asking Leading Questions
Wrong: "We should use Redis for caching here, right?"
Right: "Do you need caching for this endpoint? If so, what is the acceptable staleness - a few seconds, minutes, or hours?"
Why it matters: Leading questions confirm your bias instead of discovering the actual requirement. The user will often agree with your suggestion even if it is wrong.
7. Skipping the "Out of Scope" Conversation
Wrong: Assuming everything mentioned is in scope for v1.
Right: After mapping the full design tree, explicitly ask: "Which of these branches are v1 vs later?"
Why it matters: Without explicit scoping, the feature grows silently. You will build things the user did not need yet, and miss things they needed now.
8. Interviewing in the Wrong Order
Wrong: Asking about UI styling before understanding the data model.
Right: Follow the tree: purpose - data model - behavior - UI - edge cases.
Why it matters: Upstream decisions constrain downstream ones. If you design the UI before the data model, you may design something the data cannot support.
spec-writing.md
Spec Writing
Complete reference for producing design spec documents during the super-brainstorm workflow. Covers the document template, section scaling rules, writing style, decision logging, and the reviewer checklist.
Spec Document Template
Every spec is written to docs/plans/YYYY-MM-DD-<topic>-design.md where <topic> is a short kebab-case slug (e.g. 2026-03-18-commenting-system-design.md).
# [Topic] Design Spec
## Summary
<!-- 2-3 sentences. What is being built and why. -->
## Context
<!-- What exists today. Why this change is needed. Link to relevant code paths. -->
## Design
### Architecture
<!-- High-level diagram or description of how the pieces fit together. -->
### Components
<!-- Each new or modified component, with its responsibility. -->
### Data Model
<!-- Schemas, tables, types. Use code blocks for definitions. -->
### Interfaces / API Surface
<!-- Endpoints, function signatures, event contracts. Use code blocks. -->
### Data Flow
<!-- Step-by-step description of how data moves through the system for key operations. -->
## Error Handling
<!-- Failure modes, retry strategies, user-facing error states. -->
## Testing Strategy
<!-- What to test, how, and what level (unit, integration, e2e). -->
## Migration Path
<!-- Steps to move from current state to new state. Remove this section if not applicable. -->
## Open Questions
<!-- Unresolved items that need follow-up. Remove this section if none remain. -->
## Decision Log
<!-- Key decisions made during the interview phase. See Decision Log Format below. -->
Section Scaling Rules
Not every spec needs every section at full depth. Scale based on complexity.
Simple (config change, utility function, small fix)
Target length: ~1 page
| Section | Include? | Depth |
|---|---|---|
| Summary | Yes | 2-3 sentences |
| Context | Yes | 1-2 sentences |
| Architecture | No | - |
| Components | Yes | Bullet list of what changes |
| Data Model | Only if changed | Schema diff |
| Interfaces / API Surface | Only if changed | Signature only |
| Data Flow | No | - |
| Error Handling | One sentence if relevant | - |
| Testing Strategy | Yes | Which tests to add/update |
| Migration Path | No | - |
| Open Questions | No | - |
| Decision Log | No | - |
Medium (new component, API endpoint, moderate feature)
Target length: 2-3 pages
| Section | Include? | Depth |
|---|---|---|
| Summary | Yes | 2-3 sentences |
| Context | Yes | 1 paragraph |
| Architecture | Yes | Brief description, no diagram needed |
| Components | Yes | Table with name, responsibility, file path |
| Data Model | Yes | Full schema in code block |
| Interfaces / API Surface | Yes | Full signatures with request/response shapes |
| Data Flow | Yes | Numbered steps for the primary flow |
| Error Handling | Yes | Table of failure modes and responses |
| Testing Strategy | Yes | Specific test cases listed |
| Migration Path | If applicable | Steps only |
| Open Questions | If any remain | Bullet list |
| Decision Log | Yes | Key choices made |
Complex (new system, migration, cross-cutting change)
Target length: 4-6 pages
| Section | Include? | Depth |
|---|---|---|
| Summary | Yes | 2-3 sentences |
| Context | Yes | Multiple paragraphs, link existing code |
| Architecture | Yes | Diagram (ASCII or described), component relationships |
| Components | Yes | Table with name, responsibility, file path, dependencies |
| Data Model | Yes | Full schemas, relationships, indexes |
| Interfaces / API Surface | Yes | All endpoints/functions, full request/response types |
| Data Flow | Yes | Numbered steps for primary + secondary flows |
| Error Handling | Yes | Comprehensive table, retry logic, circuit breakers |
| Testing Strategy | Yes | Test matrix by type, coverage targets |
| Migration Path | Yes | Phased plan with rollback steps |
| Open Questions | If any remain | Bullet list with owners and deadlines |
| Decision Log | Yes | All significant choices |
Complexity Detection Heuristic
| Signal | Simple | Medium | Complex |
|---|---|---|---|
| Files touched | 1-2 | 3-8 | 8+ |
| New components | 0 | 1-2 | 3+ |
| External dependencies | 0 | 0-1 | 2+ |
| Data model changes | None or trivial | New table/type | Schema migration |
| Cross-cutting concerns | No | Maybe | Yes |
Writing Style Guide
Be concrete, not abstract
| Bad | Good |
|---|---|
| "An endpoint for comments" | POST /api/posts/:postId/comments |
| "A component that shows comments" | src/components/CommentThread.tsx |
| "Some kind of database table" | comments table with columns id, post_id, author_id, body, created_at |
| "We'll need to handle errors" | Return 422 with { error: "body_required" } when comment body is empty |
Include file paths when referencing code
Always use paths relative to the repo root:
The auth middleware at `src/middleware/auth.ts` validates the JWT
before the request reaches `src/api/comments/create.ts`.
Use tables for comparisons
When evaluating options, always present them in a table rather than prose:
| Option | Pros | Cons |
|---|---|---|
| PostgreSQL | ACID, familiar, existing infra | Needs schema migration |
| MongoDB | Flexible schema, easy nesting | New dependency, no existing infra |
Use code blocks for interfaces, schemas, and API shapes
interface CreateCommentRequest {
postId: string;
body: string;
parentId?: string; // for threaded replies
}
interface CreateCommentResponse {
id: string;
postId: string;
authorId: string;
body: string;
createdAt: string;
}
YAGNI
Remove anything not directly needed for the work being designed:
- Do not spec future phases unless they constrain the current design
- Do not add "nice to have" sections
- Do not include sections that only say "N/A" - remove them entirely
- If a section has one sentence, consider folding it into another section
Decision Log Format
Every spec includes a Decision Log at the bottom. Record decisions made during the brainstorm interview that shaped the design.
Format
| Decision | Options Considered | Chosen | Rationale |
|---|---|---|---|
| Database for comments | PostgreSQL, MongoDB, SQLite | PostgreSQL | Already in the stack, supports full-text search, ACID transactions needed for comment threading |
| Comment nesting depth | Unlimited, flat, 2-level | 2-level | Keeps UI simple, covers reply-to-reply which is 90% of use cases, avoids recursive query complexity |
| Auth for commenting | Anonymous, logged-in only, mixed | Logged-in only | Reduces spam, simplifies moderation, matches existing auth system |
Guidelines
- Record every decision where more than one reasonable option existed
- The "Rationale" column is the most important - it explains why the choice was made and prevents future re-litigation
- Include decisions the user made explicitly (e.g. "user chose PostgreSQL") and decisions you recommended (e.g. "recommended 2-level nesting because...")
- Keep each cell concise - one to two sentences maximum
Spec Review Checklist
After writing the spec, a reviewer subagent checks it against these criteria before delivering the final document.
Criteria
| Criterion | What to Check |
|---|---|
| Completeness | Every section required by the scaling tier is present and substantive. No placeholders or TODOs remain. |
| Consistency | Names, types, and paths used in one section match their usage in all other sections. API shapes in Interfaces match the Data Model. |
| Clarity | A developer unfamiliar with the project can read the spec and understand what to build. No ambiguous pronouns ("it", "this") without clear antecedents. |
| Scope | The spec covers exactly what was discussed during the interview - no more, no less. No scope creep into adjacent systems. |
| YAGNI | No speculative features, future phases (unless they constrain current design), or unnecessary sections. |
| Concrete | All endpoints have method + path. All schemas have field names + types. All file references use repo-relative paths. |
| Testable | The Testing Strategy section contains specific, actionable test cases - not vague statements like "test the happy path". |
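The mechanical parts of this checklist can be pre-screened before the reviewer subagent runs. The sketch below is illustrative only; the function name, placeholder markers, and path pattern are assumptions, not part of the skill:

```typescript
// Illustrative pre-check for the Completeness and Concrete criteria.
// Flags leftover placeholders and endpoint paths that appear on a line
// without an HTTP method in front of them.
function preCheckSpec(spec: string): string[] {
  const problems: string[] = [];

  // Completeness: no placeholders or TODOs should remain.
  for (const marker of ["TODO", "TBD", "PLACEHOLDER"]) {
    if (spec.includes(marker)) {
      problems.push(`placeholder found: ${marker}`);
    }
  }

  // Concrete: a line that is only a path is an endpoint missing its method.
  const barePath = /^\/api\/[\w/:[\]-]+$/gm;
  for (const path of spec.match(barePath) ?? []) {
    problems.push(`endpoint without method: ${path}`);
  }

  return problems;
}
```

A clean spec returns an empty list; anything returned is handed to the reviewer as a starting point rather than a verdict.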
Reviewer Prompt Template
The following prompt is used by the reviewer subagent:
```text
You are a design spec reviewer. Read the spec below and evaluate it against
these criteria: Completeness, Consistency, Clarity, Scope, YAGNI, Concrete,
and Testable.

For each criterion, output one of:

PASS - meets the bar
NEEDS WORK - explain what is missing or wrong

If any criterion is NEEDS WORK, write a corrected version of the affected
section(s) at the end of your review.

Spec complexity tier: [SIMPLE | MEDIUM | COMPLEX]

--- BEGIN SPEC ---
{spec_content}
--- END SPEC ---

--- BEGIN INTERVIEW CONTEXT ---
{interview_summary}
--- END INTERVIEW CONTEXT ---
```

Example Spec
A complete example for a medium-complexity feature.
# Commenting System Design Spec
## Summary
Add a commenting system to blog posts that supports threaded replies (one
level deep), real-time updates, and moderation. Comments are only available
to logged-in users.
## Context
The blog at `src/app/blog/` currently supports posts with markdown content
rendered via `src/lib/markdown.ts`. Posts are stored in the `posts` table
in PostgreSQL via Prisma (`prisma/schema.prisma`). There is no commenting
functionality today. Users have requested the ability to discuss posts, and
engagement metrics show readers spend an average of 4 minutes per post,
suggesting they would interact with comments.
## Design
### Architecture
The commenting system adds a new API route group under `/api/posts/:postId/comments`,
a new `comments` table in PostgreSQL, and a React component tree mounted in the
existing post page at `src/app/blog/[slug]/page.tsx`.
### Components
| Component | Responsibility | File Path |
|---|---|---|
| CommentThread | Renders top-level comments and their replies | `src/components/comments/CommentThread.tsx` |
| CommentForm | Input form for new comments and replies | `src/components/comments/CommentForm.tsx` |
| CommentItem | Single comment with author, timestamp, reply button | `src/components/comments/CommentItem.tsx` |
| comments API | CRUD endpoints for comments | `src/app/api/posts/[postId]/comments/route.ts` |
| Comment model | Prisma model and validation | `prisma/schema.prisma` |
### Data Model
```prisma
model Comment {
  id        String    @id @default(cuid())
  body      String    @db.Text
  postId    String
  post      Post      @relation(fields: [postId], references: [id], onDelete: Cascade)
  authorId  String
  author    User      @relation(fields: [authorId], references: [id])
  parentId  String?
  parent    Comment?  @relation("CommentReplies", fields: [parentId], references: [id], onDelete: Cascade)
  replies   Comment[] @relation("CommentReplies")
  createdAt DateTime  @default(now())
  updatedAt DateTime  @updatedAt

  @@index([postId, createdAt])
  @@index([parentId])
}
```

### Interfaces / API Surface
#### Create comment

```typescript
POST /api/posts/:postId/comments

// Request
{ body: string; parentId?: string }

// Response 201
{ id: string; body: string; authorId: string; parentId: string | null; createdAt: string }

// Error 422
{ error: "body_required" | "parent_not_found" | "nesting_too_deep" }

// Error 401
{ error: "unauthorized" }
```

#### List comments
```typescript
GET /api/posts/:postId/comments?cursor=<id>&limit=20

// Response 200
{
  comments: Array<{
    id: string;
    body: string;
    author: { id: string; name: string; avatarUrl: string };
    parentId: string | null;
    replies: Array<{ /* same shape without nested replies */ }>;
    createdAt: string;
  }>;
  nextCursor: string | null;
}
```

#### Delete comment (author or admin only)
```typescript
DELETE /api/posts/:postId/comments/:commentId

// Response 204 (no body)

// Error 403
{ error: "forbidden" }
```

### Data Flow
Creating a comment (primary flow):

1. User types in `CommentForm` and submits
2. Client sends `POST /api/posts/:postId/comments` with auth cookie
3. API validates auth via `src/middleware/auth.ts`
4. API validates request body (non-empty, parentId exists if provided, parent is not itself a reply)
5. API inserts row into `comments` table via Prisma
6. API returns the created comment
7. Client prepends the comment to `CommentThread` state
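The server-side validation in step 4 can be sketched as a pure function. This is a hypothetical helper, not the actual contents of `route.ts`; the returned error codes match the 422 responses defined in the API surface above:

```typescript
// Hypothetical validation for the create-comment endpoint.
// Returns an error code from the 422 contract, or null when valid.
type ExistingComment = { id: string; parentId: string | null };

function validateCreateComment(
  body: string,
  parentId: string | undefined,
  existing: ExistingComment[],
): string | null {
  if (body.trim().length === 0) return "body_required";
  if (parentId !== undefined) {
    const parent = existing.find((c) => c.id === parentId);
    if (!parent) return "parent_not_found";
    // Two-level nesting: a reply's parent must be a top-level comment.
    if (parent.parentId !== null) return "nesting_too_deep";
  }
  return null;
}
```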
### Error Handling
| Failure | Behavior |
|---|---|
| User not authenticated | Redirect to login, preserve draft in localStorage |
| Comment body empty | Client-side validation prevents submission; server returns 422 |
| Parent comment not found | Server returns 422 with `parent_not_found` |
| Nesting too deep (reply to a reply) | Server returns 422 with `nesting_too_deep` |
| Post not found | Server returns 404 |
| Rate limit exceeded | Server returns 429, client shows "Please wait before commenting again" |
## Testing Strategy
| Test | Type | What It Verifies |
|---|---|---|
| Create comment with valid body | Integration | 201 response, comment appears in DB |
| Create comment without auth | Integration | 401 response |
| Create comment with empty body | Integration | 422 response |
| Reply to existing comment | Integration | 201, parentId set correctly |
| Reply to a reply (nesting too deep) | Integration | 422 with nesting_too_deep |
| List comments with pagination | Integration | Cursor-based pagination returns correct pages |
| Delete own comment | Integration | 204, comment removed from DB |
| Delete another user's comment | Integration | 403 response |
| CommentThread renders comments | Unit | Renders list of CommentItem components |
| CommentForm submits on enter | Unit | Calls onSubmit with body text |
| Full comment flow | E2E | Login, navigate to post, create comment, see it appear, reply, delete |
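The cursor pagination row above can be made concrete with a small in-memory model. The helper below sketches only the pagination contract; the real implementation would be a Prisma query against the `comments` table, not an array scan:

```typescript
// In-memory sketch of the cursor contract behind
// GET /api/posts/:postId/comments?cursor=<id>&limit=20.
type Page = { comments: { id: string }[]; nextCursor: string | null };

function paginate(all: { id: string }[], cursor: string | null, limit: number): Page {
  // Resume one past the cursor, or from the start when no cursor is given.
  const start = cursor === null ? 0 : all.findIndex((c) => c.id === cursor) + 1;
  const comments = all.slice(start, start + limit);
  const last = comments[comments.length - 1];
  // Only emit a cursor when more items remain after this page.
  const nextCursor = last && start + limit < all.length ? last.id : null;
  return { comments, nextCursor };
}
```

An integration test for the "List comments with pagination" row then asserts that following `nextCursor` walks every comment exactly once and ends with `null`.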
## Decision Log
| Decision | Options Considered | Chosen | Rationale |
|---|---|---|---|
| Nesting depth | Unlimited, flat, 2-level | 2-level (comment + reply) | Keeps UI manageable, avoids recursive queries, covers the majority of discussion patterns |
| Pagination | Offset, cursor | Cursor-based | More reliable with concurrent inserts, better performance on large comment threads |
| Real-time updates | WebSocket, polling, SSE | Polling (30s interval) | Simplest to implement, sufficient for blog comment velocity, no infra changes needed |
| Soft delete vs hard delete | Soft delete, hard delete | Hard delete with cascading replies | Blog context does not require audit trail, simpler data model, GDPR-friendly |
| Comment editing | Allow edits, no edits | No edits in v1 | Reduces complexity, avoids edit history UI, can add later without migration |
Frequently Asked Questions
What is super-brainstorm?
You MUST use this before any creative work - creating features, building components, adding functionality, modifying behavior, designing systems, or making architectural decisions. Enters plan mode, reads all available docs, explores the codebase deeply, then interviews the user relentlessly with ultrathink-level reasoning on every decision until a shared understanding is reached. Produces a validated design spec before any implementation begins. Triggers on feature requests, design discussions, refactors, new projects, component creation, system changes, and any task requiring design decisions.
How do I install super-brainstorm?
Run `npx skills add AbsolutelySkilled/AbsolutelySkilled --skill super-brainstorm` in your terminal. The skill will be immediately available in your AI coding agent.
What AI agents support super-brainstorm?
super-brainstorm works with claude-code, gemini-cli, openai-codex, mcp. Install it once and use it across any supported AI coding agent.