technical-writing
Use this skill when writing, reviewing, or structuring technical documentation for software projects. Triggers on API documentation, tutorials, architecture decision records (ADRs), runbooks, onboarding guides, README files, or any developer-facing prose. Covers documentation structure, writing style, audience analysis, and doc-as-code workflows for engineering teams.
technical-writing is a production-ready AI agent skill for claude-code, gemini-cli, openai-codex, and mcp that helps with writing, reviewing, and structuring technical documentation for software projects.
Quick Facts
| Field | Value |
|---|---|
| Category | writing |
| Version | 0.1.0 |
| Platforms | claude-code, gemini-cli, openai-codex, mcp |
| License | MIT |
How to Install
- Make sure you have Node.js installed on your machine.
- Run `npx skills add AbsolutelySkilled/AbsolutelySkilled --skill technical-writing` in your terminal.
- The technical-writing skill is now available in your AI coding agent (Claude Code, Gemini CLI, OpenAI Codex, etc.).
Overview
Technical writing for software teams is the practice of producing clear, accurate, and maintainable documentation that helps developers understand systems, use APIs, follow procedures, and make informed architectural decisions. Good technical docs reduce onboarding time, prevent production incidents, and eliminate tribal knowledge. This skill covers the five core document types every engineering organization needs: API docs, tutorials, architecture docs, ADRs, and runbooks.
Tags
technical-writing documentation api-docs adr runbooks tutorials
Platforms
- claude-code
- gemini-cli
- openai-codex
- mcp
Frequently Asked Questions
What is technical-writing?
Use this skill when writing, reviewing, or structuring technical documentation for software projects. Triggers on API documentation, tutorials, architecture decision records (ADRs), runbooks, onboarding guides, README files, or any developer-facing prose. Covers documentation structure, writing style, audience analysis, and doc-as-code workflows for engineering teams.
How do I install technical-writing?
Run `npx skills add AbsolutelySkilled/AbsolutelySkilled --skill technical-writing` in your terminal. The skill will be immediately available in your AI coding agent.
What AI agents support technical-writing?
This skill works with claude-code, gemini-cli, openai-codex, mcp. Install it once and use it across any supported AI coding agent.
Maintainers
Generated from AbsolutelySkilled
SKILL.md
# Technical Writing
Technical writing for software teams is the practice of producing clear, accurate, and maintainable documentation that helps developers understand systems, use APIs, follow procedures, and make informed architectural decisions. Good technical docs reduce onboarding time, prevent production incidents, and eliminate tribal knowledge. This skill covers the five core document types every engineering organization needs: API docs, tutorials, architecture docs, ADRs, and runbooks.
## When to use this skill
Trigger this skill when the user:
- Needs to write or improve API documentation (REST, GraphQL, gRPC)
- Wants to create a step-by-step tutorial or getting-started guide
- Asks to write an Architecture Decision Record (ADR)
- Needs to produce a runbook for an operational procedure
- Wants to document system architecture or design
- Asks to review existing documentation for clarity or completeness
- Needs a README, onboarding guide, or contributor guide
- Wants to establish documentation standards for a team
Do NOT trigger this skill for:
- Marketing copy, blog posts, or sales content (use content-marketing skill)
- Code comments and inline documentation only (use clean-code skill)
## Key principles
Write for the reader, not yourself - Identify who will read this doc and what they need to accomplish. A new hire reading a tutorial has different needs than an on-call engineer reading a runbook at 3 AM. Adjust depth, tone, and structure accordingly.
Optimize for scanning - Engineers rarely read docs linearly. Use headings, bullet lists, tables, and code blocks so readers can find what they need in under 30 seconds. Front-load the most important information in every section.
Show, then tell - Lead with a concrete example (code snippet, command, or screenshot), then explain what it does. Abstract explanations without examples force the reader to build a mental model from scratch.
Keep docs close to code - Documentation that lives in the repo (Markdown, OpenAPI specs, doc comments) stays current. Documentation in wikis or external tools drifts and dies. Treat docs as code: review them in PRs, lint them in CI.
One document, one purpose - A tutorial teaches. A reference answers. A runbook instructs. Never mix purposes in a single document - a tutorial that detours into reference tables loses the reader.
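The "docs as code" principle is enforceable in CI. As a minimal sketch (the file layout and link rules here are assumptions, not part of this skill), a build step can flag Markdown files whose relative links point at files that no longer exist:

```python
import re
from pathlib import Path

# Capture the link target from [text](target), ignoring any #anchor suffix.
LINK_RE = re.compile(r"\[[^\]]*\]\(([^)#]+)[^)]*\)")

def broken_links(doc_path: Path) -> list[str]:
    """Return relative link targets in a Markdown file that do not exist on disk."""
    broken = []
    for target in LINK_RE.findall(doc_path.read_text()):
        if target.startswith(("http://", "https://", "mailto:")):
            continue  # external links need a different (network) checker
        if not (doc_path.parent / target).exists():
            broken.append(target)
    return broken
```

Run it over `docs/**/*.md` in a CI step and fail the build on any non-empty result.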
## Core concepts
Technical documentation falls into five categories, each with a distinct audience, structure, and maintenance cadence:
API Documentation is the reference layer. It describes every endpoint, parameter,
response shape, and error code. The audience is developers integrating with your
system. API docs are high-frequency reads and must be exhaustively accurate. See
references/api-docs.md.
Tutorials are the learning layer. They walk a reader from zero to a working
outcome through ordered steps. The audience is new users. Tutorials must be
reproducible - every step should produce a predictable result. See
references/tutorials.md.
Architecture Documentation is the context layer. It explains how a system is
structured and why, using diagrams and prose. The audience is engineers joining the
team or making cross-cutting changes. See references/architecture-docs.md.
Architecture Decision Records (ADRs) are the history layer. Each ADR captures a
single decision - the context, options considered, and the chosen approach with
rationale. They are immutable once accepted. See references/adrs.md.
Runbooks are the action layer. They provide step-by-step instructions for
operational tasks - deployments, incident response, data migrations. The audience is
on-call engineers under pressure. See references/runbooks.md.
## Common tasks
### Write API endpoint documentation
For each endpoint, include these fields in order:
### POST /api/v1/users
Create a new user account.
**Authentication:** Bearer token (required)
**Request body:**
| Field | Type | Required | Description |
|----------|--------|----------|--------------------------|
| email | string | yes | Valid email address |
| name | string | yes | Display name (2-100 chars)|
| role | string | no | One of: admin, member |
**Response (201 Created):**
```json
{
"id": "usr_abc123",
"email": "dev@example.com",
"name": "Ada Lovelace",
"role": "member",
"created_at": "2025-01-15T10:30:00Z"
}
```

**Errors:**
| Status | Code | Description |
|---|---|---|
| 400 | invalid_email | Email format is invalid |
| 409 | email_exists | Account with email exists |
| 401 | unauthorized | Missing or expired token |
> Always include a realistic response example with plausible data, not placeholder
> values like "string" or "0".
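This structure can also be linted mechanically. A sketch (the section markers come from the template above; the placeholder check is an assumption about common lazy examples):

```python
import re

# Markers every endpoint entry should contain, per the template above.
REQUIRED_SECTIONS = ["**Authentication:**", "**Request body:**", "**Response", "**Errors:**"]

def missing_sections(endpoint_doc: str) -> list[str]:
    """Return required section markers absent from one endpoint's Markdown doc."""
    return [s for s in REQUIRED_SECTIONS if s not in endpoint_doc]

def has_placeholder_values(endpoint_doc: str) -> bool:
    """True if a JSON example still uses lazy placeholders like "string"."""
    return bool(re.search(r'":\s*"string"', endpoint_doc))
```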
### Write a step-by-step tutorial
Use this structure for every tutorial:
1. **Title** - "How to [accomplish specific goal]"
2. **Prerequisites** - What the reader needs before starting (tools, accounts, prior knowledge)
3. **Steps** - Numbered, each with one action and its expected outcome
4. **Verify** - How to confirm the tutorial worked
5. **Next steps** - Where to go from here
Each step should follow this pattern:
````markdown
## Step 3: Configure the database connection

Add your database URL to the environment file:

```bash
echo 'DATABASE_URL=postgres://localhost:5432/myapp' >> .env
```

You should see the variable when you run `cat .env`.
````
> Never assume the reader can infer a step. If you deleted a step and the tutorial
> would still work, the step is load-bearing for understanding, not execution - keep
> it but mark it as context.
### Write an Architecture Decision Record (ADR)
Use the Michael Nygard format:
```markdown
# ADR-007: Use PostgreSQL for the primary datastore
## Status
Accepted (2025-03-10)
## Context
The application needs a relational datastore that supports ACID transactions,
JSON columns for semi-structured data, and full-text search. The team has
production experience with PostgreSQL and MySQL.
## Decision
Use PostgreSQL 16 as the primary datastore.
## Consequences
- **Positive:** Native JSONB support eliminates the need for a separate
document store. Full-text search via tsvector avoids an Elasticsearch
dependency.
- **Negative:** Requires operational expertise for vacuum tuning and
connection pooling at scale. Team must learn PostgreSQL-specific features
(CTEs, window functions) that differ from MySQL.
- **Neutral:** Migration tooling (pgloader) is available if we need to move
  data from the existing MySQL instance.
```

ADRs are immutable. If a decision is reversed, write a new ADR that supersedes the original. Never edit an accepted ADR.
### Write a runbook
Structure every runbook for someone who is stressed, tired, and unfamiliar with the system:
# Runbook: Database failover to read replica
**Severity:** SEV-1 (data serving impacted)
**Owner:** Platform team
**Last tested:** 2025-02-20
**Estimated time:** 10-15 minutes
## Symptoms
- Application returns 500 errors on all database-backed endpoints
- Database primary shows `connection refused` or replication lag > 60s
## Prerequisites
- Access to AWS console (production account)
- `kubectl` configured for the production cluster
- Pager notification sent to #incidents channel
## Steps
1. Verify the primary is actually down:
   ```bash
   pg_isready -h primary.db.internal -p 5432
   ```
   Expected: "no response" or connection refused.
2. Promote the read replica:
   ```bash
   aws rds promote-read-replica --db-instance-identifier myapp-replica-1
   ```
   Wait for the instance status to change to "available" (3-5 minutes).
3. Update the application config:
   ```bash
   kubectl set env deployment/myapp DATABASE_URL=postgres://replica-1.db.internal:5432/myapp
   ```
4. Verify recovery:
   ```bash
   curl -s https://myapp.com/health | jq .database
   ```
   Expected: `"ok"`

## Rollback
If the promoted replica has issues, revert to the original primary once it recovers by reversing step 3 with the original DATABASE_URL.
> Every runbook step must include the exact command to run and the expected output.
> Never write "check the database" without specifying the exact check.
### Write architecture documentation
Use the C4 model approach - zoom in through layers:
1. **System context** - What is this system and how does it fit in the landscape?
2. **Container diagram** - What are the deployable units (services, databases, queues)?
3. **Component diagram** - What are the major modules inside a container?
4. **Code diagram** - Only for genuinely complex logic (optional)
For each layer, include a diagram (Mermaid, PlantUML, or ASCII) plus 2-3 paragraphs
of explanatory prose. See `references/architecture-docs.md` for templates.
### Review existing documentation
Apply this checklist when reviewing any doc:
- [ ] **Accuracy** - Does the doc match the current state of the system?
- [ ] **Completeness** - Are there gaps where a reader would get stuck?
- [ ] **Audience** - Is the language appropriate for the target reader?
- [ ] **Structure** - Can the reader find what they need in under 30 seconds?
- [ ] **Examples** - Does every abstract concept have a concrete example?
- [ ] **Freshness** - Is there a "last updated" date? Is it recent?
- [ ] **Actionability** - Can the reader do something after reading this?
---
## Anti-patterns / common mistakes
| Mistake | Why it's wrong | What to do instead |
|---|---|---|
| Wall of text | Engineers stop reading after the first paragraph without visual breaks | Use headings every 3-5 paragraphs, bullet lists for items, tables for structured data |
| Documenting internals as tutorials | Implementation details change frequently and confuse new users | Separate reference docs (internals) from tutorials (user journey) |
| Missing prerequisites | Reader gets stuck at step 3 because they don't have a required tool | List every prerequisite at the top, including versions |
| "Obvious" steps omitted | What's obvious to the author is not obvious to the reader | Write as if the reader has never seen the codebase before |
| Stale screenshots | Screenshots go stale faster than any other doc element | Prefer text-based examples (code blocks, CLI output) over screenshots |
| ADRs written after the fact | Retroactive ADRs lose the context and rejected alternatives | Write the ADR as part of the decision process, not after implementation |
| Runbooks without rollback | On-call engineer makes things worse because there is no undo path | Every runbook must include a rollback section |
---
## Gotchas
1. **Tutorials that work on the author's machine often fail for readers** - Missing environment prerequisites, OS-specific path differences, and version mismatches are the most common failure points. Test every tutorial on a clean environment with no prior setup before publishing.
2. **ADRs edited after acceptance lose their value** - The entire point of an ADR is preserving the reasoning at the time of the decision, including rejected alternatives. Editing an accepted ADR to "clean it up" erases the historical context. If the decision changed, write a new ADR that supersedes the original.
3. **OpenAPI specs and actual API behavior diverge silently** - Without automated contract testing, your API docs will drift from the implementation. Integrate OpenAPI validation in CI to catch discrepancies before they reach developers consuming the API.
4. **Runbooks tested only at creation become unreliable** - A runbook that has never been executed in a real or simulated incident will fail when needed. Schedule quarterly runbook dry runs and update the "Last tested" date. A runbook with no test date should be treated as untrusted.
5. **Audience mismatch between docs is harder to fix than missing docs** - A tutorial written for experienced engineers will block new hires; architecture docs written for junior engineers will be ignored by seniors. Define the audience explicitly at the top of every document and review it when onboarding new readers.
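For gotcha 3, a contract check in CI is the usual fix. A deliberately simplified sketch (a real project would parse the OpenAPI document and use a tool such as Schemathesis) comparing the status codes the docs promise against the codes actually observed in tests or traffic:

```python
def undocumented_responses(spec: dict, observed: dict) -> dict:
    """Map each endpoint to observed status codes its documentation omits.

    spec:     {"POST /users": {201, 400, 409}}        # codes the docs promise
    observed: {"POST /users": {201, 400, 409, 429}}   # codes seen in reality
    """
    drift = {}
    for endpoint, seen in observed.items():
        documented = spec.get(endpoint, set())
        extra = seen - documented
        if extra:
            drift[endpoint] = extra
    return drift
```

Fail the build when the result is non-empty, and the docs can no longer drift silently.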
---
## References
For detailed templates and examples on specific document types, read the relevant
file from `references/`:
- `references/api-docs.md` - OpenAPI patterns, REST vs GraphQL doc strategies, response examples
- `references/tutorials.md` - Tutorial structure, progressive disclosure, common pitfalls
- `references/architecture-docs.md` - C4 model templates, diagram tools, living doc strategies
- `references/adrs.md` - ADR templates (Nygard, MADR), lifecycle management, indexing
- `references/runbooks.md` - Runbook structure, severity levels, testing cadence, automation
Only load a references file if the current task requires deep detail on that topic.

References
adrs.md
Architecture Decision Records (ADRs)
What is an ADR?
An ADR is a short document that captures a single architectural decision along with its context and consequences. ADRs are the "commit history" of your architecture - they explain why the system looks the way it does.
When to write an ADR
Write an ADR when:
- Choosing a technology (database, language, framework, cloud service)
- Deciding on an architectural pattern (microservices vs monolith, sync vs async)
- Making a trade-off that future engineers will question
- Reversing or superseding a previous decision
Do NOT write an ADR for:
- Implementation details (use code comments or design docs)
- Bug fixes or routine changes
- Decisions that are easily reversible without architectural impact
The Nygard format (recommended)
Michael Nygard's original format is the industry standard:
# ADR-[NNN]: [Short title in imperative mood]
## Status
[Proposed | Accepted | Deprecated | Superseded by ADR-NNN]
## Context
[What is the issue that we're seeing that is motivating this decision?
Describe the forces at play: technical constraints, business requirements,
team capabilities, timeline pressure. Be factual and neutral.]
## Decision
[What is the change that we're proposing and/or doing?
State the decision clearly in 1-3 sentences. Use active voice.]
## Consequences
[What becomes easier or more difficult to do because of this change?
List positive, negative, and neutral consequences. Be honest about
trade-offs - this is the most valuable section.]

The MADR format (alternative)
Markdown Any Decision Records (MADR) adds more structure:
# [Short title]
## Context and problem statement
[Describe the context and the problem in 2-3 sentences.]
## Decision drivers
- [Driver 1: e.g., "Team has deep experience with PostgreSQL"]
- [Driver 2: e.g., "Need ACID transactions for financial data"]
- [Driver 3]
## Considered options
1. [Option A]
2. [Option B]
3. [Option C]
## Decision outcome
Chosen option: "[Option B]", because [justification].
### Positive consequences
- [Consequence 1]
- [Consequence 2]
### Negative consequences
- [Consequence 1]
- [Consequence 2]
## Pros and cons of the options
### [Option A]
- Good, because [argument]
- Bad, because [argument]
### [Option B]
...

ADR numbering and filing
Store ADRs in the repository:
docs/
adr/
README.md # Index of all ADRs with title and status
0001-use-postgresql.md
0002-adopt-event-sourcing.md
    0003-choose-react-for-frontend.md

The README.md index should be a simple table:
# Architecture Decision Records
| ADR | Title | Status | Date |
|-----|-------|--------|------|
| 001 | Use PostgreSQL for primary datastore | Accepted | 2025-01-15 |
| 002 | Adopt event sourcing for order processing | Accepted | 2025-02-01 |
| 003 | Choose React for the frontend | Superseded by 007 | 2025-02-10 |

ADR lifecycle rules
Immutability - Once an ADR is accepted, never edit its content. If circumstances change, write a new ADR that supersedes the old one.
Status transitions - An ADR moves through: Proposed -> Accepted -> (optionally) Deprecated or Superseded.
Superseding - When writing a new ADR that reverses a previous decision, update the old ADR's status to "Superseded by ADR-NNN" and reference the old ADR in the new one's context section.
Timing - Write the ADR during the decision process, not after implementation. The context and rejected alternatives are freshest at decision time.
Writing tips for ADRs
- Context section: Be specific about constraints. "We have 3 weeks" is better than "tight timeline." Include data when available.
- Decision section: State the decision in one clear sentence before elaborating. The reader should understand the choice in 5 seconds.
- Consequences section: Be honest about negatives. An ADR that lists only positives is not trustworthy. The negative consequences are what future engineers need most.
- Keep it short: An ADR should be 1-2 pages. If it's longer, the decision is probably too broad - split it into multiple ADRs.
Tools for ADR management
- adr-tools (CLI) - `adr new "Use PostgreSQL"` creates a numbered file from template
- Log4brains - Generates a searchable ADR website from Markdown files
- ADR Manager (VS Code extension) - Create and browse ADRs from the editor
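The index README can also be generated from the ADR files themselves rather than maintained by hand. A sketch assuming the `NNNN-title.md` naming and the Nygard `## Status` section shown above:

```python
import re
from pathlib import Path

# First non-blank line after "## Status" in a Nygard-format ADR.
STATUS_RE = re.compile(r"^## Status\s*\n\s*(.+)$", re.MULTILINE)

def adr_index(adr_dir: Path) -> str:
    """Build a Markdown index table from docs/adr/NNNN-title.md files."""
    rows = ["| ADR | Title | Status |", "|-----|-------|--------|"]
    for path in sorted(adr_dir.glob("[0-9]*.md")):
        number, _, slug = path.stem.partition("-")
        title = slug.replace("-", " ").capitalize()
        m = STATUS_RE.search(path.read_text())
        status = m.group(1).strip() if m else "Unknown"
        rows.append(f"| {int(number):03d} | {title} | {status} |")
    return "\n".join(rows)
```

Run it in CI and diff against the committed README.md to catch a stale index.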
api-docs.md
API Documentation
Principles
API docs are reference material. They must be exhaustive, accurate, and scannable. Every endpoint, parameter, response shape, and error code must be documented. The reader is a developer integrating with your API - they need facts, not narratives.
REST API documentation structure
For each endpoint, document in this order:
1. Method and path
`POST /api/v1/resources`

Use the full path including version prefix. Group endpoints by resource.
2. One-line description
A single sentence describing what the endpoint does. Use imperative mood: "Create a new user" not "Creates a new user" or "This endpoint creates a user."
3. Authentication
State the auth requirement explicitly:
- "Bearer token (required)"
- "API key via X-Api-Key header (required)"
- "No authentication required"
4. Path parameters
| Parameter | Type | Description |
|---|---|---|
| id | string (UUID) | The resource identifier |
5. Query parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| page | integer | 1 | Page number for pagination |
| limit | integer | 20 | Items per page (max 100) |
6. Request body
Show the JSON schema as a table, then provide a complete example:
```json
{
  "email": "ada@example.com",
  "name": "Ada Lovelace",
  "role": "member"
}
```

7. Response
Show status code, headers (if relevant), and a complete body example with realistic data. Never use placeholder values like "string" or "0".
```http
HTTP/1.1 201 Created
Content-Type: application/json

{
  "id": "usr_abc123",
  "email": "ada@example.com",
  "created_at": "2025-01-15T10:30:00Z"
}
```

8. Error responses
Document every error the endpoint can return:
| Status | Error code | Description | Resolution |
|---|---|---|---|
| 400 | invalid_email | Email format invalid | Check email format |
| 409 | email_exists | Email already registered | Use a different email |
| 429 | rate_limited | Too many requests | Retry after Retry-After header value |
OpenAPI / Swagger integration
When the project uses OpenAPI specs, generate docs from the spec rather than maintaining separate documentation. Tools:
- Redoc - Clean, three-panel layout from OpenAPI 3.x specs
- Swagger UI - Interactive "try it" panels
- Stoplight - Visual editor + hosted docs
Keep the OpenAPI spec as the single source of truth. Add x-codeSamples extensions
for language-specific examples.
GraphQL documentation
For GraphQL APIs, document:
- Schema overview - Types, queries, mutations, subscriptions
- Type definitions - Each type with field descriptions
- Query examples - Complete queries with variables and responses
- Error conventions - How errors are returned in the `errors` array
- Rate limiting - Query complexity limits, depth limits
Use tools like GraphQL Voyager for visual schema exploration and GraphiQL for interactive documentation.
Common mistakes in API docs
- Missing error codes - Document every error, not just the happy path
- Outdated examples - Automate example generation from tests when possible
- No pagination docs - Always document how pagination works (cursor vs offset)
- Inconsistent naming - Use the same field names in docs as in the actual API
- Missing rate limit info - Always document rate limits and how to handle 429s
Versioning documentation
When the API is versioned, document:
- Which version is current
- Which versions are deprecated (with sunset dates)
- Migration guides between versions
- Changelog of breaking changes per version
architecture-docs.md
Architecture Documentation
Purpose
Architecture docs answer two questions: "How is this system built?" and "Why was it built this way?" They are the context layer that enables engineers to make informed changes to a system they did not design.
The C4 model
Use Simon Brown's C4 model as the default structure. It provides four levels of zoom, each serving a different audience:
Level 1: System context diagram
Shows the system as a single box, surrounded by the users and external systems it interacts with. This is the 30-second overview.
```
[User] --> [Your System] --> [Payment Provider]
                         --> [Email Service]
                         --> [Analytics Platform]
```

Include: system name, actors, external dependencies, data flow direction. Exclude: internal components, technology choices.
Level 2: Container diagram
Shows the major deployable units inside the system - services, databases, message queues, CDNs. This is the "what gets deployed" view.
```
[Web App (React)] --> [API Gateway (Kong)] --> [User Service (Node.js)]
                                           --> [Order Service (Go)]
                                           --> [PostgreSQL]
                                           --> [Redis Cache]
                                           --> [RabbitMQ]
```

Include: container names, technology choices, communication protocols. Exclude: internal module structure.
Level 3: Component diagram
Shows the major structural building blocks inside a single container. Only create this for containers that are complex enough to warrant it.
Include: modules, services, repositories, controllers and their relationships. Exclude: individual classes or functions.
Level 4: Code diagram
UML class diagrams or similar. Only for genuinely complex algorithms or patterns. Most systems never need this level.
Diagram tools
| Tool | Format | Best for |
|---|---|---|
| Mermaid | Markdown-embeddable | Docs-as-code, GitHub rendering |
| PlantUML | Text-based | Detailed UML, sequence diagrams |
| Structurizr | C4-native DSL | Full C4 model with workspace |
| Excalidraw | Visual/hand-drawn style | Informal, whiteboard-feel diagrams |
| draw.io | Visual editor | Complex diagrams with manual layout |
Prefer text-based diagrams (Mermaid, PlantUML) for docs that live in Git. They diff cleanly, render in Markdown, and cannot go stale as easily as images.
Mermaid example
```mermaid
graph LR
Client[Web Client] --> Gateway[API Gateway]
Gateway --> UserSvc[User Service]
Gateway --> OrderSvc[Order Service]
UserSvc --> DB[(PostgreSQL)]
OrderSvc --> DB
OrderSvc --> Queue[RabbitMQ]
Queue --> NotifySvc[Notification Service]
```

Architecture doc template
# [System Name] Architecture
## Overview
[2-3 sentences: what this system does and its primary value proposition.]
## System context
[Level 1 diagram + 1 paragraph listing key actors and external dependencies.]
## Containers
[Level 2 diagram + a table describing each container:]
| Container | Technology | Purpose |
|-----------|-----------|---------|
| API Gateway | Kong | Route requests, rate limiting, auth |
| User Service | Node.js | User CRUD, authentication |
| PostgreSQL | v16 | Primary datastore |
## Key design decisions
[Link to relevant ADRs or summarize the 3-5 most important decisions:]
- **ADR-001:** Chose event-driven architecture for order processing
- **ADR-003:** PostgreSQL over MongoDB for ACID guarantees
## Data flow
[Sequence diagram or prose describing the primary request path.]
## Infrastructure
[Where it runs: cloud provider, regions, scaling approach.]
## Known limitations
[Current technical debt, scaling bottlenecks, planned improvements.]

Keeping architecture docs alive
Architecture docs rot faster than any other documentation type. Strategies to keep them current:
- Link ADRs to architecture docs - When an ADR changes the architecture, update the architecture doc in the same PR
- Auto-generate diagrams - Use tools that generate diagrams from code or infrastructure (e.g., Structurizr from code annotations)
- Quarterly review cadence - Schedule a calendar reminder to review architecture docs with the team
- Ownership - Assign an explicit owner to each architecture doc
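The first strategy can be checked mechanically. A sketch (the `ADR-NNN` reference style and `docs/adr/NNNN-title.md` layout are assumptions carried over from the ADR reference) that flags ADR citations in an architecture doc with no matching file:

```python
import re
from pathlib import Path

ADR_REF_RE = re.compile(r"ADR-(\d{3})")

def dangling_adr_refs(doc_text: str, adr_dir: Path) -> set[str]:
    """Return ADR numbers cited in a doc that have no file in the ADR directory."""
    existing = {
        p.stem.split("-")[0].lstrip("0").zfill(3)  # "0001-..." -> "001"
        for p in adr_dir.glob("[0-9]*.md")
    }
    cited = set(ADR_REF_RE.findall(doc_text))
    return cited - existing
```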
runbooks.md
Runbooks
Purpose
A runbook is a step-by-step procedure for an operational task. The reader is typically an on-call engineer who may be stressed, sleep-deprived, and unfamiliar with the system. Every word must earn its place.
Runbook template
# Runbook: [Action] [system/component]
**Severity:** [SEV-1 | SEV-2 | SEV-3 | Routine]
**Owner:** [Team name]
**Last tested:** [YYYY-MM-DD]
**Estimated time:** [X-Y minutes]
## Symptoms
- [Observable symptom 1 - what alerts fire, what errors appear]
- [Observable symptom 2]
- [Observable symptom 3]
## Prerequisites
- [ ] Access to [system/tool] with [permission level]
- [ ] [CLI tool] installed and configured
- [ ] Incident channel created in [Slack/Teams/etc.]
## Steps
### 1. [Verify the problem]
[Brief context - why this step matters.]
```bash
[exact command to run]
```

**Expected output:** [what you should see]
**If unexpected:** [what to do - usually "escalate to [team]"]
### 2. [Take corrective action]

```bash
[exact command]
```

**Expected output:** [what success looks like]
**Wait time:** [if the step takes time to propagate]

### 3. [Verify recovery]

```bash
[exact verification command]
```

**Expected output:** [what confirms the fix worked]
## Rollback
If the above steps make things worse, reverse them:
- [Exact rollback command for step 2]
- [Verify rollback succeeded]
- Escalate to [team/person]
## Post-incident
- Update the incident timeline
- Write a brief summary in the incident channel
- Schedule a post-mortem if SEV-1 or SEV-2
## Writing rules for runbooks
### Every step must have an exact command
Never write "restart the service" - write the exact command:
```bash
kubectl rollout restart deployment/user-service -n production
```

The on-call engineer should be able to copy-paste every command.
### Include expected output for every step
The reader needs to know if the step worked. Show what success looks like:
**Expected output:** `deployment.apps/user-service restarted`

**If you see "not found":** The deployment name may have changed. Run
`kubectl get deployments -n production` to find the current name.

### Include timing information
When a step takes time to propagate, say so explicitly:
Wait 2-3 minutes for the new pods to become ready. Monitor with:
```bash
kubectl get pods -n production -w | grep user-service
```

All pods should show `Running` and `1/1` ready.
### Always include a rollback section
Every runbook must answer: "What if this makes things worse?" Provide explicit
rollback steps, not just "undo the above."
### Include prerequisites as a checklist
Use checkboxes so the on-call engineer can verify they have everything before
starting:
```markdown
## Prerequisites
- [ ] SSH access to production bastion host
- [ ] AWS CLI configured with production credentials
- [ ] PagerDuty incident acknowledged
```

Severity levels
| Level | Definition | Response time | Example |
|---|---|---|---|
| SEV-1 | Service down, all users affected | Immediate | Database primary is unreachable |
| SEV-2 | Degraded service, many users affected | 15 minutes | API latency > 5s on 50% of requests |
| SEV-3 | Minor issue, few users affected | 1 hour | One webhook endpoint returning errors |
| Routine | Planned maintenance task | Scheduled | Monthly certificate rotation |
Testing runbooks
Runbooks that have never been tested do not work. Period.
Testing cadence
- SEV-1 runbooks: Test quarterly via gameday exercises
- SEV-2 runbooks: Test semi-annually
- Routine runbooks: Test on first use, then annually
- All runbooks: Review after every incident that used them
How to test
- Have someone who did NOT write the runbook follow it in a staging environment
- Note every place they got confused, had to ask a question, or deviated
- Update the runbook to address every friction point
- Record the test date in the "Last tested" field
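The "Last tested" field lends itself to automation. A sketch (the date format and cadence thresholds are taken from this page) that flags runbooks past their testing cadence, suitable for a scheduled CI job:

```python
import re
from datetime import date, timedelta

TESTED_RE = re.compile(r"\*\*Last tested:\*\*\s*(\d{4})-(\d{2})-(\d{2})")
MAX_AGE_DAYS = {"SEV-1": 90, "SEV-2": 180, "Routine": 365}  # mirrors the cadence above

def untested(runbook_text: str, severity: str, today: date) -> bool:
    """True if the runbook's "Last tested" date is missing or past its cadence."""
    m = TESTED_RE.search(runbook_text)
    if not m:
        return True  # no test date: treat the runbook as untrusted
    tested = date(int(m.group(1)), int(m.group(2)), int(m.group(3)))
    return today - tested > timedelta(days=MAX_AGE_DAYS.get(severity, 365))
```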
Automation ladder
Runbooks exist on a spectrum from fully manual to fully automated:
- Manual runbook - Human follows steps (this document type)
- Semi-automated - Script handles repetitive parts, human makes decisions
- Automated with approval - Script runs end-to-end, human approves
- Fully automated - Triggered by alert, no human needed
Every runbook should aspire to move up this ladder. If you're writing the same runbook steps for the third time, it's time to automate.
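A first step up the ladder is a script that shows each command and its expected output and asks before running it. A hypothetical sketch - the steps and commands below are illustrative placeholders, not real operations:

```python
import subprocess

# (description, command, expected substring in output) - illustrative values only.
STEPS = [
    ("Check service health", ["echo", "status: ok"], "ok"),
    ("List recent deploys", ["echo", "deploy-42"], "deploy-"),
]

def run_steps(steps, confirm=input):
    """Run each step after operator confirmation; stop on unexpected output."""
    for description, command, expected in steps:
        if confirm(f"{description} - run {' '.join(command)}? [y/N] ").lower() != "y":
            print("skipped")
            continue
        output = subprocess.run(command, capture_output=True, text=True).stdout
        if expected not in output:
            print(f"UNEXPECTED OUTPUT at '{description}': {output!r} - escalate")
            return False
    return True
```

The human still makes every decision; the script just removes copy-paste errors and checks each expected output automatically.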
tutorials.md
Tutorials
The tutorial contract
A tutorial makes a promise: "Follow these steps and you will have a working [thing]." Every element of the tutorial must serve that promise. If a paragraph does not advance the reader toward the outcome, cut it.
Structure template
# How to [accomplish specific goal]
[1-2 sentences: what the reader will build/achieve and why it matters.]
## Prerequisites
- [Tool] version [X] or later ([install link])
- [Account/service] with [specific access level]
- Familiarity with [concept] (see [link] if new)
## Step 1: [Action verb] + [object]
[1-2 sentences of context if needed.]
```bash
[exact command]
```

[Expected output or how to verify the step worked.]
## Step 2: ...

## Verify it works

[How to confirm the entire tutorial succeeded - a curl command, a browser check, a test run.]

## Next steps
- [Link to related tutorial]
- [Link to reference docs for deeper customization]
- [Link to production deployment guide]
## Writing rules for tutorials
### One action per step
Each numbered step should contain exactly one action. If a step requires the
reader to do two things, split it into two steps. This makes it easy to identify
where something went wrong.
**Bad:** "Install the CLI and configure your credentials"
**Good:** Step 3 - Install the CLI. Step 4 - Configure your credentials.
### Show the expected output
After every command or action, tell the reader what they should see. This builds
confidence and helps them diagnose problems.
````markdown
Run the migration:

```bash
npx prisma migrate dev --name init
```

You should see output like:

```
Applying migration `20250115_init`
Migration applied successfully.
```
````
### Progressive disclosure
Start with the simplest possible example, then layer complexity:
1. **Quickstart** - Minimal viable setup (5 minutes)
2. **Customization** - Configuration options
3. **Production** - Security, scaling, monitoring
Do not front-load the tutorial with configuration options. Get the reader to a
working state first, then teach customization.
### Prerequisites must be exhaustive
List every prerequisite including:
- Exact version numbers (not "recent version")
- Links to installation instructions
- Required accounts or API keys
- Prior knowledge (with links for those who lack it)
### Avoid branching paths
A tutorial should have one path. If different operating systems need different
commands, use tabs or callout boxes - not "if you're on Linux do X, if you're on
Mac do Y" scattered throughout the text.
## Testing tutorials
Tutorials must be tested. The author should follow their own tutorial from scratch
on a clean environment at least once before publishing. Ideally, automate tutorial
testing:
- Use a Dockerfile that starts from a clean base image
- Run the tutorial commands as a script
- Verify the expected outputs
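Those three bullets can be sketched as a tiny harness that replays tutorial commands and checks each expected output. The steps below are illustrative stand-ins; a real harness would run inside the clean container and draw its steps from the tutorial itself:

```python
import subprocess

def smoke_test(tutorial_steps):
    """Run each (command, expected_substring) pair; return the first failing command."""
    for command, expected in tutorial_steps:
        result = subprocess.run(command, shell=True, capture_output=True, text=True)
        if result.returncode != 0 or expected not in result.stdout:
            return command  # this step's doc needs fixing, or a prerequisite is missing
    return None  # every step behaved as documented

# Illustrative placeholder steps.
steps = [
    ("echo DATABASE_URL=postgres://localhost:5432/myapp", "DATABASE_URL"),
    ("printf 'migrated'", "migrated"),
]
```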
## Common tutorial mistakes
| Mistake | Fix |
|---------|-----|
| Assuming tool is installed | Add to prerequisites with install link |
| Skipping "boring" steps | Include every step, even `cd` into directories |
| No expected output shown | Add expected output after every command |
| Tutorial only works on author's machine | Test on a clean environment |
| Mixing tutorial with reference | Keep tutorials focused on one path; link to reference for options |
| Starting with theory | Start with "do this" and explain "why" after the reader has a working result |