Absolute-Human: An AI-Native SDLC That Actually Ships - AbsolutelySkilled Blog
Agile was designed for humans. Standups, sprint planning, story points, retros - all of it exists because humans struggle with parallelism, lose context when switching tasks, and need meetings to stay aligned.
AI agents have none of these constraints. They can run ten tasks at once, maintain perfect context across all of them, and coordinate without a Jira board. So why are we still making them work like a human development team?
Absolute-Human is a development lifecycle built from the ground up for AI agents. No sprints. No story points. No standup. Just a 7-phase process that decomposes work into a dependency graph, executes independent tasks in parallel waves, enforces TDD at every step, and tracks everything on a persistent board.
The Problem with “Just Build It”
When you give an AI agent a complex task - “add a commenting system to the blog” - it does one of two things:
- Builds it linearly. Starts at the database, works up to the UI, finishes with tests. One thing at a time. Slow.
- Builds it all at once. Jumps between files, loses track of what depends on what, introduces subtle bugs where components don’t quite match.
Both approaches fail for the same reason: there is no plan. The agent does not know what tasks exist, what depends on what, or what can run in parallel. It is improvising.
Absolute-Human fixes this with structure. Before a single line of code is written, the entire task is decomposed into a dependency graph with explicit edges between sub-tasks. Then execution follows the graph - parallel where possible, serial where required.
The 7 Phases
Absolute-Human operates in seven phases. The first four are planning (no code changes). The last three are execution and verification.
INTAKE --> DECOMPOSE --> DISCOVER --> PLAN --> EXECUTE --> VERIFY --> CONVERGE
Phase 1: INTAKE
An interactive interview scaled to task complexity. Simple tasks get 3 questions. Complex tasks get 8-10. The goal is to extract the problem statement, success criteria, constraints, and scope before any work begins.
The agent also detects project conventions automatically - package manager, test runner, linting setup, directory structure, CI pipeline. This grounds every subsequent decision in the actual project, not assumptions.
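As a rough illustration, convention detection can be as simple as checking for well-known marker files at the project root. The marker files and mappings below are assumptions for the sketch, not the skill's actual detection logic:

```python
from pathlib import Path

# Hypothetical marker-file table: which files imply which conventions.
# The real skill's detection rules are not published; these are examples.
CONVENTION_MARKERS = {
    "package-lock.json": ("package_manager", "npm"),
    "pnpm-lock.yaml": ("package_manager", "pnpm"),
    "yarn.lock": ("package_manager", "yarn"),
    "pyproject.toml": ("package_manager", "pip/poetry"),
    "jest.config.js": ("test_runner", "jest"),
    "vitest.config.ts": ("test_runner", "vitest"),
    ".eslintrc.json": ("linter", "eslint"),
}

def detect_conventions(root: str) -> dict:
    """Scan the project root for marker files and record what they imply."""
    conventions = {}
    for marker, (key, value) in CONVENTION_MARKERS.items():
        if (Path(root) / marker).exists():
            # First marker found for a given key wins.
            conventions.setdefault(key, value)
    return conventions
```

A real implementation would also inspect file contents (scripts in package.json, CI config) rather than relying on file names alone.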
Phase 2: DECOMPOSE
This is where Absolute-Human diverges from traditional workflows. Instead of a flat task list, the agent builds a directed acyclic graph (DAG) of sub-tasks with explicit dependencies.
Each sub-task gets an ID, a type (code, test, docs, config), a complexity estimate (S or M - large tasks get split further), and a list of dependencies. Then the agent assigns tasks to waves based on their depth in the graph.
Here is what that looks like for “add a commenting system”:
- Wave 1 (parallel): Comment DB model, CommentList UI, CommentForm UI
- Wave 2 (serial): Comment CRUD API (needs the model)
- Wave 3 (parallel): Wire API to UI, notifications, API tests, UI tests, docs
Three tasks can start immediately. One must wait for the model. Five more can run in parallel once the API exists. That is the power of dependency-first decomposition - you find parallelism that linear planning misses.
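The wave assignment itself is just a topological leveling of the graph: a task's wave is one more than the deepest wave among its dependencies. A minimal sketch, assuming a cycle-free graph and using illustrative task IDs for the commenting-system example:

```python
# Hedged sketch of wave assignment: each task lands in the wave one past
# its deepest dependency. Assumes the input is a valid DAG.
def assign_waves(deps: dict[str, list[str]]) -> dict[str, int]:
    """deps maps task id -> list of task ids it depends on."""
    waves: dict[str, int] = {}

    def wave_of(task: str) -> int:
        if task not in waves:
            parents = deps.get(task, [])
            waves[task] = 1 + max((wave_of(p) for p in parents), default=0)
        return waves[task]

    for task in deps:
        wave_of(task)
    return waves

# Illustrative graph for the commenting-system example above.
graph = {
    "db-model": [],
    "comment-list-ui": [],
    "comment-form-ui": [],
    "crud-api": ["db-model"],
    "wire-api-ui": ["crud-api", "comment-list-ui", "comment-form-ui"],
    "notifications": ["crud-api"],
    "api-tests": ["crud-api"],
    "ui-tests": ["crud-api", "comment-form-ui"],
    "docs": ["crud-api"],
}
```

Grouping tasks by wave number reproduces the three waves: three tasks at depth 1, one at depth 2, five at depth 3.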
Phase 3: DISCOVER
Research phase. For each sub-task, the agent explores the codebase to find existing patterns, reusable utilities, and relevant conventions. If codedocs are available, it reads those first and only falls back to raw code exploration when docs are insufficient.
This prevents the classic AI failure of reimplementing something that already exists three directories away.
Phase 4: PLAN
Detailed execution plans for each sub-task. Exact file paths to create or modify. Test files to write. Acceptance criteria. Test cases for happy path, edge cases, and error handling.
After this phase, the board contains everything an agent needs to execute each task independently - which is exactly what happens next.
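To make the shape of a plan entry concrete, here is an illustrative record. The real board stores this as markdown, and the field names and example values here are assumptions, not the skill's documented schema:

```python
from dataclasses import dataclass, field

# Illustrative only: shows the kind of information a PLAN entry carries,
# not the board's actual format.
@dataclass
class TaskPlan:
    task_id: str
    files_to_create: list[str] = field(default_factory=list)
    files_to_modify: list[str] = field(default_factory=list)
    test_files: list[str] = field(default_factory=list)
    acceptance_criteria: list[str] = field(default_factory=list)
    test_cases: dict[str, list[str]] = field(default_factory=dict)  # happy / edge / error

plan = TaskPlan(
    task_id="crud-api",
    files_to_create=["src/api/comments.ts"],
    test_files=["src/api/comments.test.ts"],
    acceptance_criteria=["POST /comments creates a comment for an authenticated user"],
    test_cases={
        "happy": ["create, read, update, delete a comment"],
        "edge": ["empty comment body"],
        "error": ["unauthenticated request is rejected"],
    },
)
```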
Phase 5: EXECUTE
Wave-based parallel execution. For each wave, the agent spins up parallel sub-agents, one per task. Each agent gets a structured prompt derived from the board with full context, research notes, and the execution plan.
Every task follows TDD: write tests first (red), implement to make them pass (green), refactor (clean). No exceptions.
Between waves, the agent runs cross-task verification - checking for file conflicts between parallel agents, verifying that shared interfaces match, running the combined build and test suite.
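The file-conflict half of that check is straightforward to sketch: collect the files each parallel task touched and flag any file claimed by more than one. Task and file names below are illustrative:

```python
from collections import defaultdict

# Sketch of the wave-boundary conflict check: flag any file that two
# parallel tasks in the same wave both modified.
def find_file_conflicts(wave_tasks: dict[str, list[str]]) -> dict[str, list[str]]:
    """wave_tasks maps task id -> files it touched; returns file -> conflicting tasks."""
    touched = defaultdict(list)
    for task, files in wave_tasks.items():
        for f in files:
            touched[f].append(task)
    return {f: tasks for f, tasks in touched.items() if len(tasks) > 1}
```

Interface mismatches need more than path comparison (type checks, the combined build), but overlapping writes are the cheapest conflict to catch first.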
Phase 6: VERIFY
Per-task verification: tests pass, lint clean, types check, build succeeds. Then integration verification across the wave. Failed tasks get up to 2 retries with error context appended to the agent prompt. Tasks that still fail after retries get flagged for human attention.
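The retry policy can be sketched as a small loop, with `run_task` standing in for dispatching a sub-agent - an assumption for illustration, since the skill's real dispatch mechanism is not shown here:

```python
# Sketch of the retry policy described above: one initial attempt plus
# up to 2 retries, with the failure appended to the prompt each time.
MAX_RETRIES = 2

def execute_with_retries(task_id: str, base_prompt: str, run_task) -> dict:
    prompt = base_prompt
    for attempt in range(1 + MAX_RETRIES):
        ok, error = run_task(task_id, prompt)
        if ok:
            return {"task": task_id, "status": "done", "attempts": attempt + 1}
        # Feed the failure back so the next attempt can correct it.
        prompt = base_prompt + f"\n\nPrevious attempt failed:\n{error}"
    return {"task": task_id, "status": "needs-human", "attempts": 1 + MAX_RETRIES}
```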
Phase 7: CONVERGE
Final integration. Full test suite. Documentation updates. A change summary with files created, tests added, decisions made, and any deferred work. The board is marked complete and serves as a permanent audit trail.
Why Waves Matter
The wave model is the key insight. Consider a feature with 9 sub-tasks:
Linear execution: 9 steps, one after another. The agent spends most of its time doing one thing while seven other tasks sit idle.
Unstructured parallelism: All 9 at once. Some will fail because their dependencies are not ready. Others will conflict because they modify the same files.
Wave execution: 3 waves. Wave 1 runs 3 independent tasks in parallel. Wave 2 runs 1 task that depends on Wave 1. Wave 3 runs 5 tasks in parallel. Total wall-clock time is roughly 3 sequential steps instead of 9.
The graph makes this safe. Tasks only run when their dependencies are complete. Cross-task verification catches conflicts at wave boundaries. The board tracks everything.
The Persistent Board
All state lives in .absolute-human/board.md at the project root. This is not just a progress tracker - it is the single source of truth for the entire lifecycle.
The board contains:
- Intake summary and project conventions
- Task graph with wave assignments
- Per-task research notes, execution plans, and verification reports
- Status transitions with timestamps
- A rollback point (git commit hash) recorded before execution begins
The board survives across sessions. If you close your terminal mid-wave, the next Absolute-Human invocation detects the existing board, shows a status summary, and resumes from where it left off. No work is lost.
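Resume detection might look like the following sketch. The per-task status-line format ("- task-id: status") is an assumption for illustration; the actual board schema belongs to the skill:

```python
from pathlib import Path

# Sketch of session resume: if a board exists, recover task statuses from
# it instead of starting over. Status-line format here is assumed.
def resume_point(project_root: str):
    board = Path(project_root) / ".absolute-human" / "board.md"
    if not board.exists():
        return None  # fresh start: no prior lifecycle state
    statuses = {}
    for line in board.read_text().splitlines():
        if line.startswith("- ") and ": " in line:
            task, status = line[2:].split(": ", 1)
            statuses[task] = status
    pending = [t for t, s in statuses.items() if s != "done"]
    return {"statuses": statuses, "pending": pending}
```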
Pairing with Super Brainstorm
Absolute-Human is purpose-built to execute validated specs. But where do those specs come from?
Super Brainstorm is the companion skill that handles the design phase. It runs a structured interview with ultrathink reasoning, probing every design decision until both you and the AI are 100% confident. It produces a formal spec, then recommends handing off to Absolute-Human for execution.
The workflow looks like this:
- Super Brainstorm - design interview, confidence gates, validated spec
- Absolute-Human - task decomposition, parallel waves, TDD execution
Design and execution as separate concerns, each handled by a specialized skill. You can use either one independently, but together they cover the full lifecycle from vague idea to shipped code.
What Absolute-Human Is Not
It is not for everything. Single-file bug fixes, quick questions, code explanations - these do not need a 7-phase lifecycle. The overhead is not worth it.
Use Absolute-Human when the task touches 3+ files or components, requires planning before implementation, or would benefit from parallel execution. Features, refactors, greenfield projects, migrations - these are where the wave model pays for itself.
It is also not a replacement for human judgment. Absolute-Human asks for approval at key gates - after DECOMPOSE (the task graph) and after PLAN (the execution details). You review the plan before any code is written. The agent builds what you approved, not what it imagined.
Getting Started
Install the skill:
npx skills add AbsolutelySkilled/AbsolutelySkilled --skill absolute-human
Then invoke it on your next multi-step task:
/absolute-human Add a commenting system with threading, notifications, and moderation
For the full design-to-execution workflow, install both skills:
npx skills add AbsolutelySkilled/AbsolutelySkilled --skill absolute-brainstorm
npx skills add AbsolutelySkilled/AbsolutelySkilled --skill absolute-human
Start with /absolute-brainstorm to design, then hand off to /absolute-human to build.
For best results, pair with these companion skills:
- absolute-brainstorm - structured design interview that produces validated specs for Absolute-Human to execute
- test-strategy - deeper guidance on choosing between test types
- clean-code - ensures implementation follows Clean Code principles
- code-review-mastery - reviews completed work before convergence
Ship Faster by Planning Better
The paradox of Absolute-Human is that it ships faster by spending more time planning. The decomposition phase typically takes a few minutes, but it saves the thirty minutes of rework, conflict resolution, and debugging that come from unplanned parallel execution.
AI agents are fast at writing code. The bottleneck was never typing speed - it was knowing what to type. Absolute-Human solves that by making the plan explicit, the dependencies visible, and the execution structured.
Your agent stops improvising and starts engineering.