ci-cd-pipelines
Use this skill when setting up CI/CD pipelines, configuring GitHub Actions, implementing deployment strategies, or automating build/test/deploy workflows. Triggers on GitHub Actions, CI pipeline, CD pipeline, deployment automation, blue-green deployment, canary release, rolling update, build matrix, artifacts, and any task requiring continuous integration or delivery setup.
ci-cd-pipelines is a production-ready AI agent skill for claude-code, gemini-cli, and openai-codex. It covers setting up CI/CD pipelines, configuring GitHub Actions, implementing deployment strategies, and automating build/test/deploy workflows.
Quick Facts
| Field | Value |
|---|---|
| Category | infra |
| Version | 0.1.0 |
| Platforms | claude-code, gemini-cli, openai-codex |
| License | MIT |
How to Install
- Make sure you have Node.js installed on your machine.
- Run the following command in your terminal:
npx skills add AbsolutelySkilled/AbsolutelySkilled --skill ci-cd-pipelines
- The ci-cd-pipelines skill is now available in your AI coding agent (Claude Code, Gemini CLI, OpenAI Codex, etc.).
Overview
A practitioner's guide to continuous integration and continuous delivery for production systems. This skill covers pipeline design, GitHub Actions workflows, deployment strategies, and the operational patterns that keep software shipping safely at speed. The emphasis is on when to apply each pattern and why it matters, not just the YAML syntax.
CI/CD is not a tool configuration problem - it is a software delivery discipline. The pipeline is the product team's contract with production: every commit that passes is a candidate release, and the pipeline enforces that contract automatically.
Tags
ci-cd github-actions deployment automation pipelines devops
Platforms
- claude-code
- gemini-cli
- openai-codex
Frequently Asked Questions
What is ci-cd-pipelines?
Use this skill when setting up CI/CD pipelines, configuring GitHub Actions, implementing deployment strategies, or automating build/test/deploy workflows. Triggers on GitHub Actions, CI pipeline, CD pipeline, deployment automation, blue-green deployment, canary release, rolling update, build matrix, artifacts, and any task requiring continuous integration or delivery setup.
How do I install ci-cd-pipelines?
Run npx skills add AbsolutelySkilled/AbsolutelySkilled --skill ci-cd-pipelines in your terminal. The skill will be immediately available in your AI coding agent.
What AI agents support ci-cd-pipelines?
This skill works with claude-code, gemini-cli, openai-codex. Install it once and use it across any supported AI coding agent.
Maintainers
Generated from AbsolutelySkilled
SKILL.md
CI/CD Pipelines
A practitioner's guide to continuous integration and continuous delivery for production systems. This skill covers pipeline design, GitHub Actions workflows, deployment strategies, and the operational patterns that keep software shipping safely at speed. The emphasis is on when to apply each pattern and why it matters, not just the YAML syntax.
CI/CD is not a tool configuration problem - it is a software delivery discipline. The pipeline is the product team's contract with production: every commit that passes is a candidate release, and the pipeline enforces that contract automatically.
When to use this skill
Trigger this skill when the user:
- Creates or modifies a GitHub Actions, GitLab CI, or Jenkins pipeline
- Implements PR checks, branch protection rules, or required status checks
- Sets up deployment environments (staging, production) with promotion gates
- Implements blue-green, canary, rolling, or recreate deployment strategies
- Configures caching for dependencies or build artifacts to speed up pipelines
- Sets up matrix builds to test across multiple Node versions or operating systems
- Automates secrets injection, environment promotion, or rollback procedures
- Diagnoses a slow pipeline and needs to find what to parallelize or cache
Do NOT trigger this skill for:
- Infrastructure provisioning from scratch (use a Terraform/Kubernetes skill instead)
- Application-level testing strategies unrelated to pipeline structure
Key principles
Fail fast - The pipeline should surface errors as early as possible. Run linting and type-checking before tests. Run unit tests before integration tests. A 30-second lint failure beats a 10-minute test run that tells you the same thing.
Cache aggressively - node_modules, Maven .m2, pip wheels, and Docker layer caches can turn a 12-minute pipeline into a 3-minute one. Cache by the lockfile hash so the cache busts exactly when dependencies change.
Keep pipelines under 10 minutes - Pipelines longer than 10 minutes cause developers to stop watching them, batch commits to avoid waiting, and skip running them locally. Parallelize jobs, split slow test suites, and move heavy analysis to scheduled runs.
Trunk-based development - Short-lived branches merged frequently (at least daily) are the prerequisite for effective CI. Long-lived branches turn CI into a lie - the code integrates in CI but not in reality.
Immutable artifacts - Build once, deploy everywhere. The same Docker image or archive that passed staging must be the thing that goes to production. Never rebuild from source at deploy time.
Core concepts
Pipeline stages run in order and each must pass before the next begins:
build -> test -> deploy:staging -> approve -> deploy:production
Triggers determine when a pipeline runs:
- push on any branch - run build and test
- pull_request - run the full check suite for the PR
- schedule (cron) - run security scans or long test suites nightly
- workflow_dispatch - manual trigger with optional inputs for on-demand deploys
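All four trigger types can coexist in a single workflow. A minimal sketch (the cron expression and the dispatch input names are illustrative, not prescribed by this skill):

```yaml
on:
  push:                 # any branch: build + test
  pull_request:         # full check suite for the PR
  schedule:
    - cron: "0 3 * * *"   # nightly run for security scans / long suites (03:00 UTC, illustrative)
  workflow_dispatch:      # manual trigger for on-demand deploys
    inputs:
      environment:
        description: "Target environment"
        required: true
        default: staging
```

Each trigger can then be distinguished inside jobs via `github.event_name` to decide which steps run.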
Environments are named targets (staging, production) with their own secrets, protection rules, and deployment history. GitHub Environments let you require manual approvals before promoting to production.
Secrets management - secrets live in GitHub Secrets or an external vault (Vault, AWS Secrets Manager). They are injected as environment variables at runtime. Never print them in logs. Rotate them on a schedule.
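As a sketch of runtime injection (the secret name and URL are hypothetical), a step can pull a secret into an environment variable and mask any value derived from it so it is redacted if anything echoes it to the log:

```yaml
steps:
  - name: Call deploy API
    env:
      DEPLOY_TOKEN: ${{ secrets.DEPLOY_TOKEN }}   # injected at runtime, never stored in the repo
    run: |
      # ::add-mask:: registers a derived value for log redaction
      SESSION=$(curl -fsS -H "Authorization: Bearer $DEPLOY_TOKEN" https://deploy.example.com/session)
      echo "::add-mask::$SESSION"
```

GitHub masks registered secrets automatically, but derived values (session tokens, signed URLs) need explicit masking.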
Artifact storage - build outputs (compiled code, Docker images, test reports) are stored in GitHub Artifacts or a registry (GHCR, ECR, Docker Hub). Artifacts have a retention window; images are tagged with the commit SHA.
Common tasks
Set up GitHub Actions for Node.js
A standard Node.js pipeline with lint, test, and build, using dependency caching:
# .github/workflows/ci.yml
name: CI
on:
  push:
    branches: [main]
  pull_request:
jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: npm   # caches ~/.npm by package-lock.json hash
      - run: npm ci    # clean install from lockfile
      - run: npm run lint
      - run: npm test -- --coverage
      - run: npm run build
      - uses: actions/upload-artifact@v4
        with:
          name: dist
          path: dist/
          retention-days: 7

Use npm ci instead of npm install in CI. It is faster, deterministic, and will fail if package-lock.json is out of sync with package.json.
Implement PR checks
Require the CI workflow to pass before merging. Configure in GitHub Settings > Branches > Branch protection rules:
- Enable "Require status checks to pass before merging"
- Add the job name (ci) as a required check
- Enable "Require branches to be up to date before merging"
# .github/workflows/pr-check.yml
name: PR Check
on:
  pull_request:
    types: [opened, synchronize, reopened]
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: npm
      - run: npm ci
      - run: npm run lint
  test:
    runs-on: ubuntu-latest
    needs: lint   # only run tests if lint passes
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: npm
      - run: npm ci
      - run: npm test
  typecheck:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: npm
      - run: npm ci
      - run: npm run typecheck

Set up deployment environments with approvals
Use GitHub Environments to gate production deploys behind a manual approval:
# .github/workflows/deploy.yml
name: Deploy
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write   # allow GITHUB_TOKEN to push to GHCR
    outputs:
      image-tag: ${{ steps.tag.outputs.tag }}
    steps:
      - uses: actions/checkout@v4
      - id: tag
        run: echo "tag=${{ github.sha }}" >> $GITHUB_OUTPUT
      - run: echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
      - run: docker build -t ghcr.io/org/myapp:${{ github.sha }} .
      - run: docker push ghcr.io/org/myapp:${{ github.sha }}
  deploy-staging:
    needs: build
    runs-on: ubuntu-latest
    environment: staging   # uses staging secrets + URL
    steps:
      - run: ./scripts/deploy.sh
        env:
          IMAGE_TAG: ${{ needs.build.outputs.image-tag }}
          DEPLOY_URL: ${{ vars.DEPLOY_URL }}
          API_KEY: ${{ secrets.DEPLOY_API_KEY }}
  deploy-production:
    needs: deploy-staging
    runs-on: ubuntu-latest
    environment: production   # requires manual approval in GitHub UI
    steps:
      - run: ./scripts/deploy.sh
        env:
          IMAGE_TAG: ${{ needs.build.outputs.image-tag }}
          DEPLOY_URL: ${{ vars.DEPLOY_URL }}
          API_KEY: ${{ secrets.DEPLOY_API_KEY }}

Configure environment protection rules in GitHub Settings > Environments > production > Required reviewers.
Implement blue-green deployment
Route traffic between two identical environments. Switch instantly; roll back by switching back:
deploy-blue-green:
  needs: build
  runs-on: ubuntu-latest
  environment: production
  env:
    IMAGE_TAG: ${{ needs.build.outputs.image-tag }}
  steps:
    - uses: actions/checkout@v4
    - name: Determine inactive slot
      id: slot
      run: |
        ACTIVE=$(curl -s https://api.example.com/active-slot)
        if [ "$ACTIVE" = "blue" ]; then
          echo "target=green" >> $GITHUB_OUTPUT
        else
          echo "target=blue" >> $GITHUB_OUTPUT
        fi
    - name: Deploy to inactive slot
      run: ./scripts/deploy-slot.sh ${{ steps.slot.outputs.target }} $IMAGE_TAG
    - name: Run smoke tests against inactive slot
      run: ./scripts/smoke-test.sh ${{ steps.slot.outputs.target }}
    - name: Switch traffic to new slot
      run: ./scripts/switch-slot.sh ${{ steps.slot.outputs.target }}
    - name: Verify production is healthy
      run: ./scripts/health-check.sh production
    - name: Roll back on failure
      if: failure()
      run: ./scripts/switch-slot.sh ${{ steps.slot.outputs.target == 'blue' && 'green' || 'blue' }}

See references/deployment-strategies.md for a detailed comparison of blue-green vs canary vs rolling vs recreate.
Implement canary release with rollback
Route a small percentage of traffic to the new version before full rollout:
deploy-canary:
  runs-on: ubuntu-latest
  environment: production
  steps:
    - uses: actions/checkout@v4
    - name: Deploy canary (10% traffic)
      run: ./scripts/deploy-canary.sh ${{ env.IMAGE_TAG }} 10
    - name: Monitor canary for 5 minutes
      run: |
        for i in $(seq 1 10); do
          sleep 30
          ERROR_RATE=$(./scripts/get-error-rate.sh canary)
          echo "Canary error rate: $ERROR_RATE%"
          if (( $(echo "$ERROR_RATE > 1.0" | bc -l) )); then
            echo "Error rate too high. Rolling back canary."
            ./scripts/rollback-canary.sh
            exit 1
          fi
        done
    - name: Promote canary to 100%
      run: ./scripts/promote-canary.sh ${{ env.IMAGE_TAG }}
    - name: Roll back on any failure
      if: failure()
      run: ./scripts/rollback-canary.sh

Cache dependencies and build artifacts
Cache node_modules by lockfile hash. Always restore-then-save so partial
installs don't get cached:
- name: Cache node_modules
  id: cache-node-modules
  uses: actions/cache@v4
  with:
    path: node_modules
    key: node-modules-${{ runner.os }}-${{ hashFiles('package-lock.json') }}
    restore-keys: |
      node-modules-${{ runner.os }}-
- name: Install dependencies
  if: steps.cache-node-modules.outputs.cache-hit != 'true'
  run: npm ci
- name: Cache Next.js build
  uses: actions/cache@v4
  with:
    path: |
      .next/cache
    key: nextjs-${{ runner.os }}-${{ hashFiles('package-lock.json') }}-${{ hashFiles('**/*.ts', '**/*.tsx') }}
    restore-keys: |
      nextjs-${{ runner.os }}-${{ hashFiles('package-lock.json') }}-
      nextjs-${{ runner.os }}-

Cache keys should go from most-specific to least-specific in restore-keys. A partial cache restore is almost always faster than a cold install.
Set up matrix builds
Test across multiple Node versions and operating systems in parallel:
test-matrix:
  runs-on: ${{ matrix.os }}
  strategy:
    fail-fast: false   # don't cancel other jobs if one fails
    matrix:
      node-version: [18, 20, 22]
      os: [ubuntu-latest, windows-latest, macos-latest]
      exclude:
        - os: windows-latest
          node-version: 18   # don't test EOL Node on Windows
  steps:
    - uses: actions/checkout@v4
    - uses: actions/setup-node@v4
      with:
        node-version: ${{ matrix.node-version }}
        cache: npm
    - run: npm ci
    - run: npm test

Set fail-fast: false when the matrix combinations are independent. Use fail-fast: true (the default) when any failure means the whole build is broken.
Error handling
| Failure | Likely cause | Fix |
|---|---|---|
| npm ci fails with lockfile mismatch | package.json updated without re-running npm install | Run npm install locally and commit the updated package-lock.json |
| Cache miss on every run | Cache key includes volatile data (timestamps, random) | Use only stable inputs in cache key - lockfile hash, OS, Node version |
| Secrets not available in fork PR | GitHub does not expose secrets to workflows triggered by fork PRs | Use pull_request_target with caution, or require manual approval for external PRs |
| Workflow hangs with no output | Long-running process with no stdout, or missing --ci flag on test runner | Add timeout-minutes to the job; pass --ci flag to jest/vitest |
| Deploy fails but staging passed | Environment-specific secrets or config missing in production environment | Verify all vars and secrets are configured in the production environment settings |
| Matrix job passes on one OS but fails another | Path separators, line endings, or OS-specific tools diverge | Use path.join() in code; add .gitattributes for line endings; pin tool versions |
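For the hanging-workflow failure mode, both fixes can be sketched in one job (the job name and test script are illustrative):

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    timeout-minutes: 15   # fail the job instead of hanging until the 6-hour default limit
    steps:
      - uses: actions/checkout@v4
      - run: npm test -- --ci   # --ci keeps jest from dropping into interactive watch mode
```

A job-level timeout turns a silent hang into a fast, visible failure with logs to diagnose.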
Gotchas
Secrets not available in fork PRs - GitHub Actions does not expose repository secrets to workflows triggered by pull requests from forks (security boundary). Using pull_request_target to work around this requires careful vetting because it runs in the base repo context with full secret access - a malicious PR can exfiltrate them. Default to requiring manual approval for external PRs instead.
Required status check name must match exactly - If the workflow or job name changes (even a typo, case difference, or rename), GitHub branch protection silently stops enforcing the old required check. Always verify the status check name in branch protection settings after any workflow rename.
Cache poisoning via restore-keys - A restore-keys partial hit restores an older cache, then the job installs on top. If the old cache has a broken or security-patched package that npm ci doesn't replace (because it wasn't in the lockfile update), the cached bad state persists. Use npm ci (not npm install) so it always installs exactly what the lockfile specifies.
Immutable artifact drift - Building from source at deploy time (instead of promoting the artifact that passed staging) is a subtle but critical violation of the immutable artifact principle. A second build from the same SHA can produce different outputs if external dependencies (base images, npm packages without lockfiles) changed between builds.
Matrix fail-fast: true masking failures - With fail-fast: true (the default), a single failing job cancels all others, which can hide that multiple matrix combinations are broken. Use fail-fast: false when you want a full picture of which combinations fail, then switch back to true for normal CI.
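If you do need pull_request_target for fork PRs, the common mitigation is to gate it behind an approval-protected environment. A hedged sketch - the environment name "fork-prs" is hypothetical and must be configured with required reviewers:

```yaml
# Runs in the base repo context (has secret access), so every run from a fork
# pauses at the hypothetical "fork-prs" environment until a maintainer approves.
name: External PR CI
on:
  pull_request_target:
jobs:
  test:
    if: github.event.pull_request.head.repo.fork
    runs-on: ubuntu-latest
    environment: fork-prs   # environment configured with required reviewers
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha }}   # explicitly check out the PR's code
      - run: npm ci && npm test
```

Even gated this way, review the diff before approving - approval releases the secrets to code you have not merged.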
References
For detailed implementation guidance on specific deployment strategies:
references/deployment-strategies.md - blue-green, canary, rolling, recreate, A/B, and shadow deployments with ASCII diagrams and decision framework
Only load the references file when choosing or implementing a specific deployment strategy - it is detailed and will consume context.
References
deployment-strategies.md
Deployment Strategies
A deployment strategy defines how a new version of software replaces the old one in production. The choice affects downtime, rollback speed, risk exposure, infrastructure cost, and operational complexity. This reference covers the six primary strategies with diagrams, trade-offs, and decision guidance.
Decision framework
Does the deploy require downtime?
  YES -> Can you schedule a maintenance window?
    YES -> Recreate (simplest)
    NO -> You need a zero-downtime strategy (see below)
Is instant rollback (< 30 seconds) required?
  YES -> Blue-Green
Is gradual traffic shifting acceptable?
  YES -> Can you segment users or requests?
    YES (by user/cohort) -> A/B or Shadow
    YES (by percentage) -> Canary
  NO -> Rolling
Do you need to test a new version with real traffic without affecting users?
  YES -> Shadow

1. Recreate
Stop the old version completely, then start the new version.
BEFORE:
[ v1 ] [ v1 ] [ v1 ]
Users -> Load Balancer -> v1 instances
DEPLOY:
[ -- ] [ -- ] [ -- ] <- downtime window
(all instances stopped, new ones starting)
AFTER:
[ v2 ] [ v2 ] [ v2 ]
Users -> Load Balancer -> v2 instances

When to use:
- Batch jobs, background workers, or jobs with no user-facing SLA
- Database migrations that are incompatible with the previous schema
- Non-production environments (dev, QA)
Trade-offs:
| Downtime | Yes - hard downtime while old stops and new starts |
| Rollback speed | Slow - must re-deploy v1 |
| Infrastructure cost | Low - no duplicate capacity needed |
| Complexity | Very low |
GitHub Actions pattern:
- name: Stop old version
  run: kubectl scale deployment myapp --replicas=0
- name: Deploy new version
  run: kubectl set image deployment/myapp myapp=$IMAGE_TAG
- name: Wait for rollout
  run: kubectl rollout status deployment/myapp

2. Rolling Update
Replace instances one at a time (or in small batches). The load balancer continues routing traffic to the healthy old instances while new ones come up.
STEP 1: [ v2 ] [ v1 ] [ v1 ] <- 1/3 new
STEP 2: [ v2 ] [ v2 ] [ v1 ] <- 2/3 new
STEP 3: [ v2 ] [ v2 ] [ v2 ] <- 3/3 new (done)
At all times: Users see a mix of v1 and v2 responses.

When to use:
- APIs that are backward compatible with the previous version
- Stateless services where any instance can serve any request
- When you have limited capacity and can't run double the instances
Trade-offs:
| Downtime | None (if health checks are configured correctly) |
| Rollback speed | Medium - must roll back instance by instance |
| Infrastructure cost | Low - brief capacity reduction during rollout |
| Complexity | Low - most orchestrators (Kubernetes, ECS) do this natively |
| Risk | Medium - users may hit both old and new version during rollout |
Warning: Both v1 and v2 serve traffic simultaneously. Your API must be backward compatible during the rollout window. If v2 changes a database schema in a breaking way, rolling updates will break v1 instances.
GitHub Actions / Kubernetes pattern:
- name: Apply rolling update
  run: |
    kubectl set image deployment/myapp myapp=$IMAGE_TAG
    kubectl rollout status deployment/myapp --timeout=5m
- name: Roll back on failure
  if: failure()
  run: kubectl rollout undo deployment/myapp

Kubernetes config snippet:
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # allow 1 extra instance during update
      maxUnavailable: 0    # never reduce below desired replica count

3. Blue-Green
Maintain two identical production environments (blue and green). Only one is live at a time. Deploy to the idle environment, test it, then switch traffic.
CURRENT STATE (blue is live):
Users -> Load Balancer -> [ blue: v1 ] [ blue: v1 ] [ blue: v1 ]
[ green: v1 ] [ green: v1 ] [ green: v1 ] <- idle
DEPLOY TO GREEN:
Users -> Load Balancer -> [ blue: v1 ] [ blue: v1 ] [ blue: v1 ]
[ green: v2 ] [ green: v2 ] [ green: v2 ] <- staging v2
SMOKE TEST PASSES, SWITCH TRAFFIC:
Users -> Load Balancer -> [ blue: v1 ] [ blue: v1 ] [ blue: v1 ] <- idle (rollback target)
-> [ green: v2 ] [ green: v2 ] [ green: v2 ] <- live
ROLLBACK (instant - just flip the switch):
Users -> Load Balancer -> [ blue: v1 ] [ blue: v1 ] [ blue: v1 ] <- live again

When to use:
- Services where instant rollback (under 30 seconds) is a hard requirement
- Deployments that are risky or involve significant changes
- E-commerce, payments, authentication - anywhere downtime costs money
Trade-offs:
| Downtime | None |
| Rollback speed | Instant - traffic switch is atomic |
| Infrastructure cost | High - 2x capacity at all times |
| Complexity | Medium - requires traffic switching mechanism |
| Database challenge | Hard - both environments may share a database, requiring backward-compatible schema |
Key consideration - databases: Blue-green is simple for stateless compute but complex for databases. If both environments share one database, schema changes must be backward compatible with both versions simultaneously. The expand-contract migration pattern handles this:
- Expand: add new column/table (v1 and v2 both work)
- Migrate: v2 writes to both old and new
- Contract: once v1 is gone, remove the old column
4. Canary Release
Route a small percentage of real traffic to the new version. Increase gradually if metrics look healthy. Roll back immediately if error rate or latency spikes.
INITIAL CANARY (10% traffic):
Users (90%) -> Load Balancer -> [ v1 ] [ v1 ] [ v1 ] [ v1 ] [ v1 ]
Users (10%) -> [ v2 ]
AFTER MONITORING (promote to 50%):
Users (50%) -> Load Balancer -> [ v1 ] [ v1 ] [ v1 ]
Users (50%) -> [ v2 ] [ v2 ] [ v2 ]
FULL ROLLOUT (100%):
Users (100%) -> [ v2 ] [ v2 ] [ v2 ] [ v2 ] [ v2 ]

When to use:
- High-traffic services where even a 1% error rate affects thousands of users
- Deployments with uncertain performance characteristics
- When you need real production data to validate before full rollout
Trade-offs:
| Downtime | None |
| Rollback speed | Fast - reduce canary weight to 0 |
| Infrastructure cost | Low during canary phase, zero extra after full rollout |
| Complexity | High - requires traffic splitting and monitoring integration |
| Observability requirement | High - need reliable per-version metrics |
Canary promotion checklist:
- Error rate on canary <= error rate on stable
- p99 latency on canary <= p99 latency on stable
- No new error types in logs
- Memory and CPU usage within expected range
- Business metrics (conversion, checkout success) unchanged
Automatic rollback trigger thresholds (example):
| Metric | Threshold |
|---|---|
| Error rate | > 1% (vs 0.1% baseline) |
| p99 latency | > 500ms (vs 120ms baseline) |
| 5xx rate | > 0.5% |
| Memory | > 90% of limit |
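The example thresholds above can be encoded as a small promotion gate. A sketch - metric values are passed in by hand here, where a real pipeline would fetch them from its metrics backend:

```shell
# check_canary: gate a canary promotion on the example thresholds above.
# Prints HEALTHY (exit 0) or a ROLLBACK reason (exit 1).
check_canary() {
  local err=$1 p99=$2 fivexx=$3 mem=$4
  # awk handles the floating-point comparisons; exit status 0 means "breached"
  awk -v v="$err"    'BEGIN { exit !(v > 1.0) }' && { echo "ROLLBACK: error rate ${err}% > 1%";       return 1; }
  awk -v v="$p99"    'BEGIN { exit !(v > 500) }' && { echo "ROLLBACK: p99 ${p99}ms > 500ms";          return 1; }
  awk -v v="$fivexx" 'BEGIN { exit !(v > 0.5) }' && { echo "ROLLBACK: 5xx rate ${fivexx}% > 0.5%";    return 1; }
  awk -v v="$mem"    'BEGIN { exit !(v > 90)  }' && { echo "ROLLBACK: memory ${mem}% > 90% of limit"; return 1; }
  echo "HEALTHY"
}

check_canary 0.3 240 0.1 70          # -> HEALTHY
check_canary 2.5 240 0.1 70 || true  # -> ROLLBACK: error rate 2.5% > 1%
```

Wired into the monitoring loop from the SKILL.md canary job, a nonzero exit triggers the rollback script.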
5. A/B Testing (Feature Flags)
Route specific user segments to different versions based on user attributes, not just percentages. Used to test product hypotheses, not just deployment risk.
Segment routing:
User (cohort A: beta users) -> [ v2 - new checkout flow ]
User (cohort B: everyone else) -> [ v1 - old checkout flow ]
Routing logic lives in:
- Load balancer rules (header-based)
- Feature flag service (LaunchDarkly, Unleash, Flipt)
- Application layer (check flag at runtime)

A/B vs Canary - key difference:
- Canary is about deployment safety - random % of traffic
- A/B is about product experimentation - specific user segments, tracked metrics
When to use:
- Testing product changes (new UI, new algorithm) before full rollout
- Personalization experiments
- When you need statistical significance over a defined cohort
Trade-offs:
| Downtime | None |
| Rollback speed | Instant - toggle the flag |
| Infrastructure cost | None extra (same instances, different code paths) |
| Complexity | High - requires feature flag infrastructure |
| Long-term risk | Flag debt - unremoved flags become tech debt |
Rule: Set a sunset date for every feature flag when you create it.
6. Shadow Deployment (Traffic Mirroring)
Send a copy of production traffic to the new version, but discard its responses. Users only see v1 responses. v2 processes requests in parallel for observation.
Request flow:
User -> Load Balancer -> [ v1 ] -> Response to user
\-> [ v2 ] -> Response discarded (observability only)
v2 receives: real production traffic, same volume and shape
v2 returns: responses that nobody sees
v2 records: latency, errors, resource usage, output diffs

When to use:
- Replacing a core algorithm or pricing engine where correctness is critical
- Testing a new service that must match an existing service's behavior exactly
- Pre-production load testing with real traffic patterns
Trade-offs:
| Downtime | None |
| Rollback speed | N/A - users were never on v2 |
| Infrastructure cost | Medium - v2 runs at full production load |
| Complexity | High - requires traffic mirroring infrastructure |
| Side effects risk | High - v2 must not write to production databases or send emails |
Critical: In shadow mode, v2 must not produce side effects. Disable:
- Database writes (connect to a shadow/read-only DB)
- Outbound emails or notifications
- Payment processing
- Third-party API calls with side effects
Strategy comparison matrix
| Strategy | Downtime | Rollback speed | Extra infra cost | Complexity | Best for |
|---|---|---|---|---|---|
| Recreate | Yes | Slow | None | Very low | Batch jobs, incompatible migrations |
| Rolling | No | Medium | Low (brief) | Low | Stateless, backward-compatible APIs |
| Blue-Green | No | Instant | High (2x) | Medium | High-stakes deploys, instant rollback |
| Canary | No | Fast | Low | High | High-traffic, uncertain perf |
| A/B | No | Instant | None | High | Product experiments, feature flags |
| Shadow | No | N/A | Medium | High | Algorithm replacement, behavior matching |
Combining strategies
Real systems often combine strategies:
- Blue-green + canary: deploy to green slot, then use canary to shift traffic 10% -> 50% -> 100% before decommissioning blue
- Feature flags + rolling: roll out binary with a flag off, then gradually enable the flag for user cohorts
- Shadow + canary: shadow first to verify correctness, then canary to validate performance under load, then full rollout
Database migration compatibility by strategy
| Strategy | Schema requirement |
|---|---|
| Recreate | Any schema change (downtime window covers migration) |
| Rolling | Schema must be backward compatible with v1 during rollout |
| Blue-Green | Schema must be compatible with both blue and green simultaneously |
| Canary | Schema must be compatible with both versions simultaneously |
| A/B | Schema must support both code paths simultaneously |
| Shadow | v2 should use a separate or read-only database |
The expand-contract pattern (also called parallel change) makes rolling, blue-green, and canary deploys safe with schema changes:
Phase 1 - Expand: Add new_column (nullable). Deploy v1 unchanged.
Phase 2 - Migrate: Deploy v2 that writes to both old_column and new_column.
Phase 3 - Backfill: Migrate historical data to new_column.
Phase 4 - Contract: Deploy v3 that reads only new_column. Remove old_column.

Each phase can be deployed and verified independently, with rollback possible at every step.