code-review-mastery
Use this skill when the user asks to review their local git changes, staged or unstaged diffs, or wants a code review before committing. Triggers on "review my changes", "review staged", "review my diff", "check my code", "code review local changes", "review unstaged", "review before commit".
code-review-mastery is a production-ready AI agent skill for claude-code, gemini-cli, and openai-codex. Use it when the user asks to review their local git changes, staged or unstaged diffs, or wants a code review before committing.
Quick Facts
| Field | Value |
|---|---|
| Category | engineering |
| Version | 0.2.0 |
| Platforms | claude-code, gemini-cli, openai-codex |
| License | MIT |
How to Install
- Make sure you have Node.js installed on your machine.
- Run the following command in your terminal:
  `npx skills add AbsolutelySkilled/AbsolutelySkilled --skill code-review-mastery`
- The code-review-mastery skill is now available in your AI coding agent (Claude Code, Gemini CLI, OpenAI Codex, etc.).
Overview
This skill reviews your local git changes (staged or unstaged) with
project-aware analysis. It gathers project context - lint rules, conventions,
framework patterns - then produces structured [MAJOR] / [MINOR] findings
you can work through interactively.
Tags
code-review git-diff local-review automated-review quality
Platforms
- claude-code
- gemini-cli
- openai-codex
Frequently Asked Questions
What is code-review-mastery?
code-review-mastery is a skill that reviews your local git changes (staged or unstaged) with project-aware analysis before you commit. It triggers on phrases like "review my changes", "review staged", "review my diff", "check my code", "review unstaged", and "review before commit".
How do I install code-review-mastery?
Run npx skills add AbsolutelySkilled/AbsolutelySkilled --skill code-review-mastery in your terminal. The skill will be immediately available in your AI coding agent.
What AI agents support code-review-mastery?
This skill works with claude-code, gemini-cli, openai-codex. Install it once and use it across any supported AI coding agent.
Maintainers
Generated from AbsolutelySkilled
SKILL.md
Local Diff Code Review
This skill reviews your local git changes (staged or unstaged) with
project-aware analysis. It gathers project context - lint rules, conventions,
framework patterns - then produces structured [MAJOR] / [MINOR] findings
you can work through interactively.
When to use this skill
Trigger this skill when the user:
- Asks to review their local changes, staged changes, or unstaged changes
- Says "review my diff", "check my code", "code review before commit"
- Wants a quality check on what they're about to commit or push
- Asks "what's wrong with my changes" or "anything I should fix before committing"
Do NOT trigger this skill for:
- Reviewing remote PRs or GitHub links (use a PR review tool instead)
- Writing or refactoring code from scratch
- Architecture discussions not tied to a specific set of changes
- General code quality advice without a concrete diff to review
Key principles
Review the code, not the person - Findings are about the change, not the author. Frame issues as observations, not judgments.
Prioritize by impact - Security > Correctness > Performance > Design > Readability > Convention. Spend most analysis time at the top of this list.
Two-tier severity - Every finding is either [MAJOR] (must fix) or [MINOR] (consider fixing). No ambiguity, no middle ground.
Respect project conventions - Read configs and surrounding code before judging. What looks wrong in isolation may be the project's established pattern.
Present, don't preach - Structured findings with file locations and suggested fixes. Not essays about best practices.
[MAJOR] vs [MINOR] definitions
| Severity | Criteria | Examples |
|---|---|---|
| [MAJOR] | Must be fixed. Would block a PR in a professional code review. | Bugs, security vulnerabilities, data loss risks, missing error handling for critical paths, violations of explicit project rules (lint configs, CLAUDE.md), missing tests for new behavior |
| [MINOR] | Improves quality but code works without it. Reviewer would approve anyway. | Naming improvements, readability tweaks, minor performance gains, style inconsistencies, documentation gaps, implicit convention deviations |
Decision rule
Ask: "Would a staff engineer block a PR on this?"
- Yes - [MAJOR]
- No, but they'd leave a comment - [MINOR]
- No, they wouldn't mention it - don't report it
When in doubt, downgrade to [MINOR]. False positives at [MAJOR] erode
trust in the review.
The review workflow
Work through these four phases in order. Each phase feeds the next, so skipping one typically degrades review quality.
Phase 1: DETECT
Determine what changes exist and what to review.
- Run `git diff --stat` (unstaged) and `git diff --cached --stat` (staged)
- If both have changes, ask the user which set to review (or "both")
- If neither has changes, inform the user: "No local changes to review." Stop.
- Identify languages from file extensions in the diff
- Count files changed, insertions, and deletions for the report header
- If the diff exceeds 500 lines, warn the user and suggest focusing on [MAJOR] findings only to keep the review actionable
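The detection steps above can be sketched as a short shell sequence. This is a minimal sketch, demonstrated against a throwaway repository so it runs anywhere; the messages and demo file names are illustrative:

```shell
# Phase 1 sketch: detect staged vs unstaged changes in a throwaway repo.
set -eu
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m init

echo "console.log('hi')" > app.js
git add app.js                 # one staged change

staged_files=$(git diff --cached --stat | wc -l)
unstaged_files=$(git diff --stat | wc -l)

if [ "$staged_files" -eq 0 ] && [ "$unstaged_files" -eq 0 ]; then
  echo "No local changes to review."
elif [ "$staged_files" -gt 0 ] && [ "$unstaged_files" -gt 0 ]; then
  echo "Both staged and unstaged changes exist - ask which set to review."
else
  echo "Reviewing staged changes:"
  git diff --cached --shortstat   # header numbers for the report
fi
```

The same commands drive the conversation in the skill: the `--stat` output decides whether to ask the staged/unstaged question, and `--shortstat` feeds the report header.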
Phase 2: CONTEXT
Gather project context to calibrate the review. See
references/context-detection.md for the full detection guide.
- Read `CLAUDE.md`, `AGENT.md`, and `README.md` if they exist in the project root
- Read relevant lint and format configs (ESLint, Prettier, Ruff, tsconfig, etc.)
- Scan 2-3 existing files in the same directories as changed files to detect naming, import, and error handling conventions
- Note the framework and language from config files
- Store context mentally - do not output it to the user. Use it to calibrate severity and skip findings that linters already enforce.
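The config scan can be sketched as a loop over likely file names (the list here is illustrative, not exhaustive; a throwaway directory stands in for the project root):

```shell
# Phase 2 sketch: surface the config files that calibrate the review.
set -eu
proj=$(mktemp -d)
cd "$proj"
printf '{"compilerOptions": {"strict": true}}\n' > tsconfig.json   # pretend project
touch CLAUDE.md .prettierrc

# CLAUDE.md comes first in the list: it holds the project's authoritative rules.
found=""
for f in CLAUDE.md AGENT.md .eslintrc.json eslint.config.js .prettierrc tsconfig.json ruff.toml pyproject.toml; do
  if [ -e "$f" ]; then
    found="$found $f"
    echo "context file: $f"
  fi
done
```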
Phase 3: ANALYZE
Review the actual diff using the review pyramid (bottom-up).
- Get the full diff with `git diff` or `git diff --cached`
- For large diffs (>500 lines), process file-by-file with `git diff -- <file>`
- Walk through each file's changes with these passes:
- Security pass - injection, auth, data exposure, secrets
- Correctness pass - null safety, edge cases, async/await, off-by-one
- Performance pass - N+1, missing indexes, memory leaks, unbounded queries
- Design pass - coupling, SRP violations, abstraction levels
- Readability pass - naming, dead code, magic numbers, nesting depth
- Convention pass - check against detected project rules and patterns
- Testing pass - new behavior untested, skipped tests, flaky patterns
- For each finding: classify it as [MAJOR] or [MINOR], assign a category, and note the file and line number
See references/review-checklist.md for the detailed per-category checklist.
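The file-by-file strategy for large diffs can be sketched as follows (throwaway repo for demonstration; in practice you would drop the `head -5` truncation and review each full hunk):

```shell
# Phase 3 sketch: walk a large diff one file at a time instead of one blob.
set -eu
repo=$(mktemp -d); cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m init
printf 'a\n' > a.txt
printf 'b\n' > b.txt
git add a.txt b.txt

# One pass per changed file keeps each review chunk small.
git diff --cached --name-only | while IFS= read -r f; do
  echo "--- reviewing $f ---"
  git diff --cached -- "$f" | head -5   # truncated for the demo
done
```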
Phase 4: REPORT
Present the structured review and offer to fix.
- Output the review using the format specification below
- After presenting, ask: "Would you like me to fix any of these? Tell me which items or say 'fix all MAJOR' / 'fix all'."
The review pyramid
Allocate attention proportionally to impact. Start at the bottom:
```
[Convention]          <- least critical; check against project rules
[Readability]         <- naming, clarity, dead code
[Design]              <- structure, patterns, coupling
[Performance]         <- N+1, memory, blocking I/O
[Correctness]         <- bugs, edge cases, logic errors
[Security / Safety]   <- the most critical layer
```

A diff with a SQL injection vulnerability does not need a naming discussion - it needs the security fix flagged first.
Analysis passes
Condensed checklist per pass. See references/review-checklist.md for the
full version.
Security (all [MAJOR])
- Injection: SQL, HTML/XSS, command injection, path traversal
- Auth: missing auth middleware, IDOR, privilege escalation
- Data exposure: logging secrets/PII, over-broad API responses
- Secrets: API keys, tokens, or credentials in code
- CSRF: missing token validation on state-changing endpoints
Correctness (mostly [MAJOR])
- Null/undefined safety: unhandled null paths
- Edge cases: empty input, zero, negative, boundary values
- Async: missing await, unhandled promise rejections, race conditions
- Off-by-one: loop bounds, array indices, pagination
- Type safety: `==` vs `===`, implicit coercion, `any` casts
Performance ([MAJOR] if in hot path, [MINOR] otherwise)
- N+1 queries: database calls inside loops
- Missing indexes: new WHERE/ORDER BY columns without index
- Memory leaks: listeners/intervals without cleanup
- Unbounded queries: no LIMIT on large table queries
- Blocking I/O: synchronous operations in request handlers
Design ([MINOR] unless architectural)
- Tight coupling between unrelated modules
- Single Responsibility violations
- Mixed abstraction levels within a function
- Overly complex conditionals that should be extracted
Readability ([MINOR])
- Vague names: `data`, `temp`, `flag`, single letters outside tight loops
- Dead code: unreachable branches, unused variables, obsolete imports
- Magic numbers/strings not extracted to named constants
- Deep nesting: more than 3 levels of indentation
Convention ([MAJOR] if explicit rule, [MINOR] if implicit pattern)
- Violates configured lint rules (ESLint, Ruff, clippy, etc.)
- Deviates from naming convention in surrounding files
- Import style inconsistent with project pattern
- Breaks a rule stated in CLAUDE.md or AGENT.md
Testing ([MAJOR] for missing tests)
- New behavior without corresponding tests
- Tests that don't assert meaningful behavior
- Skipped tests without explanation
- Test names that don't describe the behavior being verified
Output format specification
Use this exact structure for the review output:
```
## Code Review: [staged|unstaged] changes

**Files changed**: N | **Insertions**: +X | **Deletions**: -Y

### [MAJOR] Issues (N)

- [ ] **file.ts:42** [Security] Description of the issue.
      Suggested fix or approach.
- [ ] **file.ts:87** [Correctness] Description of the issue.
      Suggested fix or approach.

### [MINOR] Suggestions (N)

- [ ] **file.ts:15** [Readability] Description of the suggestion.
      Suggested improvement.
- [ ] **file.ts:99** [Convention] Description of the deviation.
      Project convention reference.

### Summary

N major issues to resolve, M minor suggestions to consider.
Would you like me to fix any of these? Tell me which items or say "fix all MAJOR" / "fix all".
```

Rules for the output:
- Group all [MAJOR] findings first, then all [MINOR] findings
- Within each group, order by file path, then line number
- Each finding is a checkbox (`- [ ]`) so the user can track progress
- Each finding includes: file:line, category tag, one-line description, one-line suggested fix
- If there are zero [MAJOR] findings, say so explicitly: "No major issues found."
- If there are zero findings at all: "No issues found. Code looks good to commit."
- Always end with the offer to fix
Handling special cases
| Scenario | How to handle |
|---|---|
| Large diffs (>500 lines) | Warn the user. Process file-by-file. Focus on [MAJOR] only unless user requests full review. |
| Binary files | Skip with a note: "Skipping binary file: path/to/file" |
| Generated/lock files | Skip package-lock.json, yarn.lock, pnpm-lock.yaml, *.min.js, *.generated.*, and similar. Note skipped files. |
| No changes | Inform user "No local changes to review." and stop. |
| Mixed staged/unstaged | Ask user: "You have both staged and unstaged changes. Which would you like me to review? (staged / unstaged / both)" |
| Merge conflicts | Note conflict markers as [MAJOR] and suggest resolving before review. |
| Only deletions | Review for missing cleanup (dangling references, broken imports, orphaned tests). |
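Two of these cases can be handled directly with git plumbing: `:(exclude)` pathspecs drop lock/generated files from the diff, and `git diff --check` flags leftover conflict markers. A self-contained sketch (file names are illustrative):

```shell
# Special-case sketch: exclude noise files, then detect conflict markers.
set -eu
repo=$(mktemp -d); cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m init
printf '{}\n' > package-lock.json
printf 'code\n' > app.js
git add package-lock.json app.js

# Exclude pathspecs drop generated/lock files from the reviewed diff
reviewed=$(git diff --cached --name-only -- . ':(exclude)package-lock.json' ':(exclude)*.min.js')
echo "files to review: $reviewed"

# git diff --check exits nonzero when conflict markers are present
printf '<<<<<<< HEAD\nours\n=======\ntheirs\n>>>>>>> branch\n' > conflicted.txt
git add conflicted.txt
git diff --cached --check || echo "[MAJOR] conflict markers found - resolve before review"
```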
Anti-patterns
Avoid these mistakes when producing a review:
| Anti-pattern | Why it's wrong | What to do instead |
|---|---|---|
| Flagging what linters already catch | Wastes attention if CI enforces the rule | Check if a linter config exists and CI runs it; skip those findings |
| Ignoring CLAUDE.md / project conventions | Misses the project's actual standards | Always read project configs in Phase 2 before analyzing |
| Writing essay-length findings | Hard to action, loses signal in noise | One-line description + one-line suggested fix per finding |
| Marking style preferences as [MAJOR] | Erodes trust in severity classification | Only [MAJOR] for bugs, security, explicit rule violations, missing tests |
| Reviewing files not in the diff | Scope creep; confuses the user | Only analyze lines present in the diff output |
| Inventing project rules | Flagging violations of standards the project doesn't have | Only flag Convention [MAJOR] when you found an explicit config/rule |
| Skipping the offer to fix | Misses the interactive value of this skill | Always end with the fix offer |
Gotchas
Reviewing files not in the diff - It's easy to open related files for context and then accidentally include findings from those files in the review. Only report issues on lines that appear in the actual diff output - scope creep confuses authors and erodes trust.
Flagging what linters already enforce - If the project has ESLint, Prettier, or Ruff configured and CI runs them, reporting style violations in the review duplicates automated feedback. Check for linter configs in Phase 2 and skip findings that existing tooling will catch.
Severity inflation - Marking every finding [MAJOR] to signal thoroughness causes authors to lose trust in severity ratings and start ignoring the review. Apply the staff engineer test strictly: only block-worthy issues are [MAJOR]. When in doubt, downgrade to [MINOR].

Missing context before judging - A pattern that looks wrong in isolation (e.g., a `.catch(() => {})` that swallows errors) may be intentional and documented elsewhere. Phase 2 context gathering exists to prevent false positives. Read `CLAUDE.md`, surrounding files, and lint config before flagging anything as a violation.

Large diff, no focus strategy - Reviewing a 1,000-line diff end-to-end produces overwhelming output that authors can't action. For large diffs, warn the user and focus exclusively on [MAJOR] findings. Offer a second pass for [MINOR] items if wanted.
References
For detailed content on specific topics, read the relevant file from references/:
- `references/review-checklist.md` - Full per-category review checklist with detailed items for correctness, security, performance, readability, testing, documentation, and convention checks
- `references/context-detection.md` - Guide for gathering project context before reviewing: config file detection, framework heuristics, convention sampling, and language-specific focus areas

Load `references/review-checklist.md` when performing a thorough multi-pass review. Load `references/context-detection.md` when the project uses an unfamiliar framework or you need to identify conventions systematically.
References
context-detection.md
Project Context Detection
Before analyzing any diff, gather project context to calibrate your review. This avoids flagging patterns that are intentional project conventions and ensures you catch violations of explicit project rules.
Config file detection
Scan the project root for these files. Each reveals specific conventions:
| File pattern | What it reveals |
|---|---|
| package.json | Node.js project; dependencies, scripts, engine constraints |
| .eslintrc* / eslint.config.* | JavaScript/TypeScript lint rules - check for custom rules and overrides |
| .prettierrc* / .editorconfig | Formatting rules - indent style, line width, trailing commas |
| tsconfig.json | TypeScript strictness level (strict, noUncheckedIndexedAccess, exactOptionalPropertyTypes) |
| biome.json / biome.jsonc | Biome lint and format rules (replaces ESLint + Prettier in some projects) |
| ruff.toml / pyproject.toml [tool.ruff] | Python lint rules, line length, selected/ignored rules |
| .flake8 / setup.cfg [flake8] | Python legacy lint config |
| mypy.ini / pyproject.toml [tool.mypy] | Python type checking strictness |
| .golangci.yml / .golangci.yaml | Go lint rules and enabled linters |
| Cargo.toml + clippy.toml | Rust project; clippy lint configuration |
| .rubocop.yml | Ruby style and lint rules |
| CLAUDE.md / AGENT.md | Agent-specific project rules - these override general conventions |
| .github/CODEOWNERS | Ownership boundaries - helps identify sensitive areas |
| Makefile / justfile / Taskfile.yml | Build/task conventions |
| .env.example | Expected environment variables |
Priority order: CLAUDE.md > explicit lint/format configs > framework defaults > language defaults.
If CLAUDE.md or AGENT.md exists, read it first - it contains the project's
authoritative rules and any instruction there takes precedence over general
best practices.
Framework detection heuristics
Identify the framework to know which patterns are idiomatic vs problematic:
| Signal | Framework | Key review implications |
|---|---|---|
| next.config.* in root | Next.js | Check RSC boundaries, server/client component separation, metadata exports |
| app/ dir + layout.tsx | Next.js App Router | Verify "use client" directives, check for client-side data fetching in server components |
| pages/ dir + _app.tsx | Next.js Pages Router | Check getServerSideProps/getStaticProps usage |
| express in deps | Express.js | Check middleware ordering, error handler placement, async route handlers |
| fastify in deps | Fastify | Check schema validation, plugin encapsulation |
| django in deps | Django | Check ORM query efficiency, middleware ordering, CSRF |
| flask in deps | Flask | Check blueprint structure, app factory pattern, request context |
| fastapi in deps | FastAPI | Check Pydantic models, dependency injection, async endpoints |
| spring-boot in deps | Spring Boot | Check bean scoping, transaction boundaries, injection patterns |
| gin-gonic/gin in deps | Gin (Go) | Check middleware chain, context usage, error handling |
| react in deps (no Next) | React SPA | Check effect cleanup, memoization, state management patterns |
| vue in deps | Vue.js | Check reactivity patterns, composition API usage |
| @sveltejs/kit in deps | SvelteKit | Check load functions, form actions, reactive declarations |
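Dependency signals can be detected with a naive grep over package.json. A sketch - note that a real implementation should parse the JSON and distinguish dependencies from devDependencies, which this grep does not:

```shell
# Framework-detection sketch: grep package.json for known dependency names.
set -eu
proj=$(mktemp -d); cd "$proj"
printf '{"dependencies": {"express": "^4.19.0", "pg": "^8.11.0"}}\n' > package.json

signals=""
for dep in next express fastify react vue svelte; do
  if grep -q "\"$dep\"" package.json; then
    signals="$signals $dep"
    echo "framework signal: $dep"
  fi
done
```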
Convention sampling
For each directory containing changed files, read 2-3 existing files in the same directory to detect local patterns:
What to look for
- Naming conventions: camelCase vs snake_case vs PascalCase for files, variables, functions, classes
- Import style: relative vs absolute imports, barrel files, import ordering
- Error handling: try/catch patterns, Result types, error boundary placement
- Export style: named vs default exports, re-export patterns
- Comment style: JSDoc vs inline, when comments are used
- Test organization: co-located vs separate `__tests__` directory, naming convention for test files
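Convention sampling can be sketched as a loop over the directories touched by the diff (throwaway repo for demonstration; the snake_case vs camelCase contrast is a contrived example):

```shell
# Sampling sketch: peek at a few sibling files in each changed directory.
set -eu
repo=$(mktemp -d); cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m init
mkdir -p src/utils
printf 'x\n' > src/utils/parse_date.py
printf 'y\n' > src/utils/format_name.py
git add -A
git -c user.name=demo -c user.email=demo@example.com commit -qm files
printf 'z\n' > src/utils/newHelper.py    # changed file with a different naming style
git add src/utils/newHelper.py

git diff --cached --name-only | xargs -n1 dirname | sort -u | while IFS= read -r d; do
  echo "== sampling $d =="
  ls "$d" | head -3   # snake_case siblings suggest newHelper.py deviates: [MINOR]
done
```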
How to use detected conventions
- If the changed code deviates from surrounding file conventions: [MINOR] Convention finding
- If the changed code deviates from an explicit lint/config rule: [MAJOR] Convention finding
- If surrounding code is inconsistent (no clear convention): do not flag
Language-specific focus areas
Each language has characteristic pitfalls to watch for during review:
| Language | Key focus areas |
|---|---|
| TypeScript | Strict null checks, exhaustive switch/union handling, any escape hatches, proper generic constraints, as casts hiding type errors |
| JavaScript | == vs ===, missing await, prototype pollution, implicit type coercion, missing error handling in Promises |
| Python | Missing type hints on public APIs, bare except: clauses, mutable default arguments, context manager usage for resources, f-string injection |
| Go | Unchecked errors (_ = err), goroutine leaks, deferred close on nil, error wrapping with %w, context propagation |
| Rust | unwrap() / expect() in library code, unnecessary clone(), lifetime issues, missing Send/Sync bounds |
| Java | Null pointer risks, unclosed resources (use try-with-resources), checked exception handling, thread safety |
| Ruby | Method visibility, N+1 queries in ActiveRecord, missing strong parameters, symbol vs string keys |
| C# | IDisposable not disposed, async void (should be async Task), null reference in nullable context |
What to do with gathered context
- Store mentally - Do not output the context gathering step to the user
- Calibrate severity - A pattern that violates an explicit lint rule is [MAJOR]; a pattern that merely differs from surrounding code style is [MINOR]
- Skip what's automated - If a linter already enforces a rule and CI runs it, don't duplicate the finding
- Note gaps - If there are no lint configs and no CLAUDE.md, note that conventions are implicit and be more conservative with [MAJOR] Convention findings
review-checklist.md
Review Checklist
Use this checklist when analyzing local git diffs. Work top-to-bottom through
each section. Any "No" in Correctness or Security is a [MAJOR] finding. Any
"No" in Testing is [MAJOR]. Convention violations of explicit project rules
are [MAJOR]. Everything else is [MINOR].
Correctness
Core logic must be right. These are always [MAJOR] findings.
- The implementation matches the apparent intent of the changes
- All code paths are covered (happy path, empty/null input, error path)
- Off-by-one errors checked (loop bounds, array indices, pagination offsets)
- No silent failures: errors are surfaced, not swallowed
- Concurrent access is safe: shared state is protected if accessed from multiple threads/workers
- Integer overflow / type coercion edge cases considered (e.g., JavaScript `==` vs `===`)
- Async code awaited correctly: no fire-and-forget where the result matters
- No race conditions in state updates
- Return values and error types match the function's documented contract
Security
One security miss can outweigh a thousand correct lines. Always [MAJOR].
Input handling
- All user-supplied input is validated and sanitized before use
- SQL queries use parameterized statements or an ORM, never string concatenation
- HTML output is escaped to prevent XSS (or a templating engine handles it automatically)
- Shell commands do not use user input directly (`exec`, `spawn`, etc.)
- File paths constructed from user input are validated against a safe root
Authentication and authorization
- All new endpoints have authentication middleware applied
- Every resource access verifies the requesting user owns or has permission for that resource (no IDOR)
- Privilege checks happen server-side, never based on client-supplied role or flag
- Admin-only routes are protected at the route level, not just hidden in the UI
Data exposure
- API responses return only the fields the client needs (no full DB row serialization)
- Passwords, tokens, and secrets are never logged
- PII is not written to application logs
- Error responses do not expose stack traces, SQL, or internal paths to the client
Secrets and dependencies
- No API keys, tokens, or credentials committed to the repository
- Environment variables used for all secrets; `.env.example` updated if new vars added
- New dependencies do not have known critical CVEs (check with `npm audit` / `pip-audit` / equivalent)
- New dependencies are actively maintained and have an appropriate license
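A rough heuristic for the secrets check is to grep the added lines of the staged diff for credential-like assignments. This is only a sketch - the pattern is illustrative, and a dedicated scanner (gitleaks, trufflehog, etc.) is more reliable:

```shell
# Secrets-check sketch: scan added lines of the staged diff.
set -eu
repo=$(mktemp -d); cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m init
printf 'API_KEY = "sk-demo-not-a-real-key"\n' > config.py
git add config.py

# -U0 keeps only changed lines; ^\+ restricts to additions
hits=$(git diff --cached -U0 | grep -cE '^\+.*(API_KEY|SECRET|TOKEN|PASSWORD)' || true)
if [ "$hits" -gt 0 ]; then
  echo "[MAJOR] possible secret in staged diff ($hits line(s))"
fi
```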
CSRF and transport
- State-changing endpoints (POST/PUT/DELETE) validate CSRF tokens where applicable
- Sensitive endpoints enforce HTTPS; no mixed content
- CORS configuration is not overly permissive (no `*` in production for credentialed requests)
Performance
- No N+1 queries: database calls are not inside loops over collections
- New database queries on large tables have an appropriate index (check the migration)
- Queries have a LIMIT or pagination; no unbounded full-table scans in production paths
- No synchronous blocking I/O in a hot path (e.g., `fs.readFileSync` in a request handler)
- No unnecessary re-renders in frontend code: memoization applied where the component re-renders on every parent update
- Event listeners, intervals, and subscriptions have cleanup/teardown logic
- Large payloads are paginated or streamed; not loaded entirely into memory
- Caching is used appropriately for expensive, frequently-read, rarely-changing data
Readability
- Variable and function names are intention-revealing (no `data`, `temp`, `flag`, single letters outside tight loops)
- Functions do one thing and operate at one level of abstraction
- No function is longer than ~40 lines without a clear justification
- Complex logic has a comment explaining why, not what
- No commented-out code (use version control instead)
- No dead code (unreachable branches, unused variables, obsolete imports)
- Magic numbers and strings are extracted to named constants
- Nesting depth is kept shallow (max 3 levels of indentation; extract early returns or helper functions)
Testing
Untested code is untrusted code. Missing tests for new behavior are [MAJOR].
- New behavior has corresponding unit tests
- Tests cover the happy path and at least one failure/edge case
- Tests would catch a regression if the implementation changed (not just asserting the implementation returned something)
- Tests do not rely on ordering, timing, or shared mutable state between test cases
- Mocks and stubs are used for external services; tests do not hit real databases, APIs, or the filesystem
- Test names describe what behavior is being verified (`it('returns 401 when token is expired')`, not `it('works')`)
- No test is skipped (`.skip`, `xtest`, `xit`) without a linked issue explaining why
- Code coverage is not regressed (if the project tracks coverage thresholds)
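The skipped-test check can be approximated by grepping added diff lines for common JavaScript skip forms (pattern and file names are illustrative):

```shell
# Skipped-test sketch: flag skip markers introduced by the staged diff.
set -eu
repo=$(mktemp -d); cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m init
printf "it.skip('handles expiry', () => {})\n" > auth.test.js
git add auth.test.js

skipped=$(git diff --cached -U0 | grep -cE '^\+.*(\.skip\(|xit\(|xtest\()' || true)
if [ "$skipped" -gt 0 ]; then
  echo "[MAJOR] $skipped skipped test(s) added without a linked issue"
fi
```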
Documentation
- Public API methods have updated JSDoc / docstrings if their signature or behavior changed
- `README.md` is updated if the change affects setup, configuration, or usage
- `CHANGELOG.md` entry added for user-facing changes (if the project maintains one)
- Architecture Decision Record (ADR) created for significant design choices
- New environment variables are documented in `.env.example` and the setup guide
- Deprecations are marked with `@deprecated` and include a migration path
Convention
Checks against detected project rules. Violations of explicit rules (lint
configs, CLAUDE.md) are [MAJOR]. Deviations from implicit patterns are [MINOR].
- Code follows project lint rules (ESLint, Ruff, golangci-lint, etc.)
- Naming matches the convention in surrounding files (camelCase vs snake_case, etc.)
- Import style matches project pattern (relative vs absolute, ordering, barrel files)
- Error handling follows the project's established pattern
- Export style is consistent (named vs default exports)
- File organization matches project structure conventions
- CLAUDE.md / AGENT.md rules are followed (if present)
- Framework-idiomatic patterns are used (see `references/context-detection.md`)