security-incident-response
Use this skill when responding to security incidents, conducting forensic analysis, containing breaches, or writing incident reports. Triggers on security incident, breach response, forensics, containment, eradication, recovery, incident report, IOC analysis, and any task requiring security incident management.
security-incident-response is a production-ready AI agent skill for claude-code, gemini-cli, and openai-codex. It covers responding to security incidents, conducting forensic analysis, containing breaches, and writing incident reports.
Quick Facts
| Field | Value |
|---|---|
| Category | engineering |
| Version | 0.1.0 |
| Platforms | claude-code, gemini-cli, openai-codex |
| License | MIT |
How to Install
- Make sure you have Node.js installed on your machine.
- Run the following command in your terminal:
npx skills add AbsolutelySkilled/AbsolutelySkilled --skill security-incident-response
- The security-incident-response skill is now available in your AI coding agent (Claude Code, Gemini CLI, OpenAI Codex, etc.).
Overview
A practitioner's framework for detecting, containing, and recovering from security incidents. This skill covers the full NIST incident response lifecycle - preparation through lessons learned - with emphasis on when to act, what to preserve, and how to communicate under pressure. Designed for engineers and security practitioners who need to respond with speed and precision when a breach is suspected or confirmed.
Tags
incident-response security forensics breach containment
Platforms
- claude-code
- gemini-cli
- openai-codex
Related Skills
Pair security-incident-response with these complementary skills:
Frequently Asked Questions
What is security-incident-response?
Use this skill when responding to security incidents, conducting forensic analysis, containing breaches, or writing incident reports. Triggers on security incident, breach response, forensics, containment, eradication, recovery, incident report, IOC analysis, and any task requiring security incident management.
How do I install security-incident-response?
Run npx skills add AbsolutelySkilled/AbsolutelySkilled --skill security-incident-response in your terminal. The skill will be immediately available in your AI coding agent.
What AI agents support security-incident-response?
This skill works with claude-code, gemini-cli, openai-codex. Install it once and use it across any supported AI coding agent.
Maintainers
Generated from AbsolutelySkilled
SKILL.md
Security Incident Response
A practitioner's framework for detecting, containing, and recovering from security incidents. This skill covers the full NIST incident response lifecycle - preparation through lessons learned - with emphasis on when to act, what to preserve, and how to communicate under pressure. Designed for engineers and security practitioners who need to respond with speed and precision when a breach is suspected or confirmed.
When to use this skill
Trigger this skill when the user:
- Suspects or confirms a security breach, intrusion, or unauthorized access
- Needs to classify incident severity and decide on escalation
- Is containing a threat (isolating systems, revoking credentials, blocking IPs)
- Needs to preserve forensic evidence or maintain chain of custody
- Is communicating an incident to stakeholders, executives, or regulators
- Is eradicating malware, backdoors, or persistent access from systems
- Is writing a security incident report or post-mortem
Do NOT trigger this skill for:
- Proactive security hardening or architectural review (use the backend-engineering security reference instead)
- Vulnerability disclosure or bug bounty triage that has not yet become an active incident
Key principles
Contain first, investigate second - Stopping the bleeding takes priority over understanding the wound. Isolate affected systems before collecting forensic evidence if the attacker still has active access. Evidence is recoverable; damage from continued access may not be.
Preserve evidence - Everything you do to an affected system changes it. Use read-only mounts, memory snapshots, and write blockers. Log every command you run. Courts and regulators require chain of custody.
Communicate early and often - A 30-second "we are investigating" message is better than silence for three hours. Stakeholders need to plan. Delayed notification erodes trust far more than the incident itself.
Document everything in real-time - Keep a live incident timeline. Record every action taken, every finding, every decision, and every person involved. Memory fades in 24 hours; your logs won't.
Never blame - Incidents are system failures, not individual failures. A post-mortem that names a person instead of fixing a process produces fear, not improvement. Apply the same principle as SRE blameless post-mortems.
Core concepts
NIST IR Phases
The NIST SP 800-61 framework defines six phases that form the backbone of any structured incident response program:
| Phase | Goal | Key outputs |
|---|---|---|
| Preparation | Build capability before incidents happen | Runbooks, contact lists, tooling, trained responders |
| Detection & Analysis | Identify that an incident is occurring and understand its scope | Severity classification, initial IOC list, affected asset inventory |
| Containment | Prevent the incident from spreading or causing more damage | Isolated systems, revoked credentials, blocked IPs/domains |
| Eradication | Remove the threat from all affected systems | Cleaned/reimaged hosts, patched vulnerabilities, removed persistence mechanisms |
| Recovery | Restore systems to normal operations safely | Verified clean systems returned to production, monitoring confirmed |
| Lessons Learned | Improve defenses and process based on what happened | Post-mortem report, process changes, new detections |
Phases are not always strictly sequential. Containment and eradication can overlap. Detection and analysis continues throughout the entire response.
Severity Classification
Assign severity at detection time. Reassess as facts emerge.
| Severity | Definition | Response SLA | Example |
|---|---|---|---|
| P1 - Critical | Active breach with ongoing data exfiltration or system compromise | Immediate, 24/7 response | Attacker has shell on production DB, ransomware spreading |
| P2 - High | Confirmed compromise but impact is contained or unclear | Response within 1 hour | Stolen API key used, single host compromised, credential stuffing succeeding |
| P3 - Medium | Suspicious activity with no confirmed compromise | Response within 4 hours | Anomalous login from new country, unusual outbound traffic spike |
| P4 - Low | Potential indicator, no evidence of compromise | Next business day | Single failed login attempt, phishing email reported but not clicked |
When in doubt, escalate to a higher severity. Downgrading is always easier than explaining why you under-responded.
Chain of Custody
Chain of custody is the documented, unbroken record of who collected, handled, and transferred evidence. Required for:
- Legal proceedings or law enforcement cooperation
- Regulatory compliance (HIPAA, PCI-DSS, GDPR)
- Insurance claims
- Internal disciplinary actions
Every piece of evidence needs: what it is, when it was collected, who collected it, where it has been stored, and who has accessed it since collection.
IOC Types
Indicators of Compromise (IOCs) are artifacts that indicate a system may have been compromised. Categories:
| Type | Examples | Durability |
|---|---|---|
| Atomic | IP addresses, domain names, email addresses, file hashes | Low - easy for the attacker to change |
| Computed | Network traffic patterns, YARA rules, behavioral signatures | Medium - harder to change |
| Behavioral | TTP patterns (MITRE ATT&CK techniques), lateral movement indicators | High - most durable signal |
Prefer behavioral IOCs for detection rules. Atomic IOCs burn quickly as attackers rotate infrastructure. Map findings to MITRE ATT&CK techniques when possible - it enables cross-team communication and threat intelligence sharing.
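Even so, a fast sweep for atomic IOCs is usually the first triage move. A minimal sketch (file paths and indicator values here are hypothetical, not part of the skill):

```shell
#!/usr/bin/env bash
# Minimal atomic-IOC sweep: flag log lines containing any known-bad
# indicator. IOC values and file names below are hypothetical examples.
ioc_sweep() {
  ioc_file="$1"; log_file="$2"
  # -F: fixed strings (hashes, IPs, domains); -f: read patterns from file
  grep -F -f "$ioc_file" "$log_file" || true
}

# Example usage with throwaway files:
printf '203.0.113.7\nevil.example.com\n' > /tmp/iocs.txt
printf 'conn from 198.51.100.1\nconn from 203.0.113.7\n' > /tmp/conn.log
ioc_sweep /tmp/iocs.txt /tmp/conn.log   # prints the matching line
```

Feed it EDR, proxy, or DNS logs in practice; once a match fires, pivot to the behavioral indicators around it.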
Common tasks
Detect and classify an incident
When an alert fires or suspicious activity is reported, your first job is triage.
Initial triage checklist:
- What triggered the alert or report? (alert, user report, third-party notification)
- What systems and data are potentially affected?
- Is the attacker likely still active (ongoing) or was this historical activity?
- Is PII, PHI, PCI, or other regulated data in scope?
- What is the business impact if this is confirmed?
Severity matrix (quick reference):
```
Is an attacker actively operating in your systems right now?
  YES -> P1. Activate incident response team immediately.
  NO  -> Is a confirmed compromise present (evidence of unauthorized access)?
    YES -> P2. Assemble response team within 1 hour.
    NO  -> Is there suspicious activity with credible threat indicators?
      YES -> P3. Assign responder, investigate within 4 hours.
      NO  -> P4. Log and monitor, review next business day.
```

Open an incident channel (e.g., Slack #inc-YYYY-MM-DD-shortname) and post the initial severity assessment within 15 minutes of detection.
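The decision tree can be encoded as a small shell helper so triage answers map deterministically to a severity (a sketch; the function name and answer format are illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the severity decision tree. Arguments are yes/no answers to
# the three triage questions, in order: active attacker, confirmed
# compromise, credible suspicious activity.
classify_severity() {
  active="$1"; confirmed="$2"; suspicious="$3"
  if [ "$active" = "yes" ]; then
    echo "P1: activate incident response team immediately"
  elif [ "$confirmed" = "yes" ]; then
    echo "P2: assemble response team within 1 hour"
  elif [ "$suspicious" = "yes" ]; then
    echo "P3: assign responder, investigate within 4 hours"
  else
    echo "P4: log and monitor, review next business day"
  fi
}

# Example: confirmed compromise, attacker no longer active
classify_severity no yes no
```

Reassess as facts emerge; re-running the same questions gives you an auditable reason for every up- or downgrade.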
Contain a breach
Containment is the most time-critical action. Execute in two stages:
Short-term containment (immediate - do not wait for full investigation):
- Isolate affected hosts from the network (network segment or pull the cable) without powering them off - RAM evidence is lost on shutdown
- Revoke or rotate all credentials that may have been exposed
- Block attacker-controlled IPs and domains at the firewall and DNS level
- Disable any compromised service accounts or API keys
- Preserve a snapshot (cloud VM snapshot or disk image) before remediation begins
Long-term containment (within hours):
- Move affected systems to an isolated network segment for forensic analysis
- Deploy additional monitoring on systems adjacent to the compromise
- Validate that backups for affected systems are clean and pre-date the intrusion
- Determine if the attacker has established persistence (scheduled tasks, cron jobs, SSH authorized_keys, new user accounts, implants)
- Coordinate with legal before communicating externally about the breach
Never reimage or restore a system before taking a forensic image. A wiped system is useless as evidence.
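Short-term containment is often pre-scripted as a dry run so the responder reviews exact commands before executing. A sketch (host names, IPs, and the snapshot call are placeholders, not real infrastructure):

```shell
#!/usr/bin/env bash
# Dry-run containment sketch: prints the commands a responder would review
# and run by hand. Host, IP, and volume IDs are placeholders.
contain_host() {
  host="$1"; attacker_ip="$2"
  cat <<EOF
# On $host (do NOT power off - RAM evidence is lost on shutdown):
iptables -I INPUT 1 -s $attacker_ip -j DROP
iptables -I OUTPUT 1 -d $attacker_ip -j DROP
# Snapshot before any remediation (cloud example, placeholder volume ID):
# aws ec2 create-snapshot --volume-id <volume-of-$host> --description "INC evidence"
EOF
}

contain_host prod-web-01 203.0.113.7
```

Printing instead of executing keeps the script safe to rehearse in tabletop exercises and leaves a copy-pasteable record for the incident timeline.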
Preserve forensic evidence
Forensic integrity requires that you capture volatile data before it disappears and that all evidence collection is documented.
Order of volatility (capture in this order):
1. CPU registers and cache (already lost if you can't attach a debugger live)
2. RAM / memory dump - use tools like `avml`, `WinPmem`, or cloud provider memory capture APIs
3. Network connections - `ss -tnp`, `netstat -ano`, ARP cache
4. Running processes - `ps auxf`, `lsof`, process tree with hashes
5. File system - timestamps (MAC times), recently modified files, new files
6. Disk image - bit-for-bit copy using `dd` with a write blocker, or cloud snapshot
Chain of custody log template:
```
Evidence ID:      [unique ID, e.g., INC-2024-001-E01]
Description:      [e.g., Memory dump from prod-web-01]
Collected by:     [name + role]
Collection time:  [ISO 8601 timestamp with timezone]
Collection tool:  [tool name + version + command run]
Hash (SHA-256):   [hash of the evidence file]
Storage location: [path or bucket with access controls]
Chain of access:  [who accessed it and when after collection]
```

Every command run on a live affected system must be logged with timestamp and operator name - these commands themselves modify the system and must be part of the record.
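Hashing and logging can be combined in one helper so no evidence file is collected without a custody entry. A sketch (the log path, field subset, and ID scheme are assumptions for illustration):

```shell
#!/usr/bin/env bash
# Sketch: hash an evidence file and append a chain-of-custody entry.
# The evidence ID scheme and log location are illustrative assumptions.
log_evidence() {
  evidence_id="$1"; file="$2"; collector="$3"; custody_log="$4"
  hash=$(sha256sum "$file" | awk '{print $1}')
  ts=$(date -u +%Y-%m-%dT%H:%M:%SZ)
  {
    echo "Evidence ID: $evidence_id"
    echo "Description: $file"
    echo "Collected by: $collector"
    echo "Collection time: $ts"
    echo "Hash (SHA-256): $hash"
  } >> "$custody_log"
  echo "$hash"
}

# Example with a throwaway file standing in for a memory dump:
echo "memdump" > /tmp/prod-web-01.mem
log_evidence INC-2024-001-E01 /tmp/prod-web-01.mem "A. Responder" /tmp/custody.log
```

Recording the SHA-256 at collection time is what lets you later prove the evidence file was never altered.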
Communicate during an incident
Timely, accurate communication prevents panic and enables stakeholders to take protective action. Follow a tiered communication model:
Internal responders (Slack incident channel, every 30-60 minutes):
Current status, what we know, what we're doing, next update in X minutes.
Executive / management stakeholder template:
```
Subject: [P1 ACTIVE / P2 CONTAINED] Security Incident - [date]
What happened: [1-2 sentences, plain language]
Current status: [Investigating / Contained / Eradicating / Recovering]
Business impact: [Systems affected, services degraded, data at risk]
What we are doing: [Top 3 actions in progress]
Next update: [Time]
Contact: [IR lead name + contact]
```

Customer / external notification (when required by law or policy):
- Consult legal before sending any external notification
- GDPR requires notification to supervisory authority within 72 hours of becoming aware of a breach
- State breach notification laws vary; legal must determine which apply
- Be factual and specific about what data was affected; avoid speculation
- Include what affected users should do to protect themselves
Never speculate in stakeholder communications. State only what is confirmed. Use "we are investigating" until you have facts.
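The executive template can be rendered from the live incident record so every update has the same shape under pressure. A sketch (function and field names are illustrative):

```shell
#!/usr/bin/env bash
# Sketch: render a stakeholder update from shell variables so updates
# stay consistent. All field values here are illustrative.
exec_update() {
  sev="$1"; status="$2"; summary="$3"; next="$4"
  cat <<EOF
Subject: [$sev] Security Incident - $(date -u +%Y-%m-%d)
What happened: $summary
Current status: $status
Next update: $next
EOF
}

exec_update "P2 CONTAINED" Contained \
  "Stolen API key used against prod API; key revoked." "16:00 UTC"
```

Templating also enforces the "confirmed facts only" rule: there is no free-form field for speculation.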
Eradicate the threat and recover
Eradication removes every trace of the attacker. Recovery restores normal operations.
Eradication checklist:
- All identified malware, webshells, backdoors, and implants removed
- Persistence mechanisms eliminated (cron, scheduled tasks, startup entries, SSH authorized_keys audited)
- All compromised credentials rotated (service accounts, API keys, user passwords, certificates)
- Vulnerability that enabled the initial access is patched or mitigated
- Affected systems reimaged or verified clean from a known-good state
- New IOC-based detection rules deployed to SIEM/EDR
Recovery checklist:
- Restored systems are patched and hardened before returning to production
- Enhanced monitoring is in place for all recovered systems (minimum 30 days)
- Backups validated as clean before restoring data
- Access controls reviewed and reduced to least privilege
- Stakeholders notified that service has been restored
Do not rush recovery. A compromised system returned to production prematurely is a worse outcome than extended downtime.
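One eradication check, unauthorized local accounts, can be scripted as a diff against a known-good baseline. A sketch (baseline capture and storage are assumptions; a real run would build the lists from `cut -d: -f1 /etc/passwd`):

```shell
#!/usr/bin/env bash
# Sketch: flag account names present now but absent from a pre-incident
# baseline. Baseline handling is an illustrative assumption.
new_accounts() {
  base_sorted=$(mktemp); cur_sorted=$(mktemp)
  sort "$1" > "$base_sorted"
  sort "$2" > "$cur_sorted"
  # comm -13: lines unique to the second (current) file
  comm -13 "$base_sorted" "$cur_sorted"
  rm -f "$base_sorted" "$cur_sorted"
}

# Example with throwaway files:
printf 'root\nwww-data\n' > /tmp/users.baseline
printf 'root\nwww-data\nbackdoor\n' > /tmp/users.now
new_accounts /tmp/users.baseline /tmp/users.now   # prints: backdoor
```

The same diff pattern applies to cron entries, scheduled tasks, and `authorized_keys` files when hunting persistence.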
Write an incident report
Every P1 and P2 incident requires a written report. P3 incidents warrant a brief write-up. Reports serve three purposes: accountability, improvement, and compliance.
Incident report template:
# Incident Report: [Short title]
**Incident ID:** INC-YYYY-MM-DD-NNN
**Severity:** P1 / P2 / P3
**Status:** Closed
**Date/Time Detected:** [ISO 8601]
**Date/Time Resolved:** [ISO 8601]
**Total Duration:** [HH:MM]
**Report Author:** [Name]
**Reviewed By:** [Names]
## Executive Summary
[2-3 sentences: what happened, what was affected, what was done]
## Timeline
| Time (UTC) | Event |
|---|---|
| HH:MM | [First indicator observed] |
| HH:MM | [Incident declared, responders engaged] |
| HH:MM | [Containment action taken] |
| HH:MM | [Root cause identified] |
| HH:MM | [Eradication complete] |
| HH:MM | [Systems restored to production] |
## Root Cause
[What vulnerability, misconfiguration, or human factor enabled this incident?]
## Impact
- Systems affected: [list]
- Data affected: [type, volume, sensitivity]
- Users affected: [count / segments]
- Business impact: [downtime, revenue, SLA breach]
## What Went Well
- [list]
## What Could Be Improved
- [list]
## Action Items
| Action | Owner | Due Date | Status |
|---|---|---|---|
| [Patch CVE-XXXX-XXXX] | [Name] | [Date] | Open |
## Evidence References
| Evidence ID | Description | Location |
|---|---|---|

Distribute the report within 5 business days of incident closure. For P1 incidents, hold a live lessons-learned meeting before the written report is finalized.
Conduct lessons learned and improve
The lessons learned phase is where incidents pay dividends. Skip it and you will respond to the same incident again.
Meeting structure (60-90 minutes for P1, 30 minutes for P2):
- Timeline review (15 min) - walk through the incident timeline factually
- What went well (10 min) - reinforce what worked
- What can improve (20 min) - identify gaps in detection, response, tools, or process
- Action items (15 min) - assign specific, time-bound improvements with owners
- Detection gap analysis (10 min) - what new detections would have caught this earlier?
Improvement categories to consider:
- Detection: new SIEM rules, EDR signatures, alerting thresholds
- Prevention: patches, hardening, access control changes
- Process: runbook updates, communication templates, escalation paths
- Training: tabletop exercises, awareness training for the attack vector used
Track action items in your ticketing system. Review completion at the next security review cycle. An unactioned post-mortem is a missed opportunity and a future liability.
Gotchas
Rotating credentials before isolating the system - If you rotate credentials while the attacker still has an active session, they may have already cached the token or established a persistent back-channel (e.g., reverse shell). Isolate the system from the network first, then rotate credentials.
GDPR 72-hour clock starts at awareness, not confirmation - The 72-hour notification requirement to the supervisory authority begins when you have reasonable belief a breach occurred - not when you have confirmed every detail. Filing "we are investigating" within 72 hours is required; waiting until the investigation is complete is not compliant.
"Contained" does not mean "eradicated" - Isolating a system stops active damage but the threat is still present. Attackers frequently pre-plant persistence (cron jobs, scheduled tasks, additional user accounts) before containment. Never return a system to production after containment alone - eradication must follow.
Slack incident channel membership leaks intel - Adding non-essential stakeholders to the incident Slack channel before legal review can expose information subject to privilege or create a discoverable record. Keep the responder channel tight until legal has been consulted on external communication scope.
Post-mortem action items without due dates are permanent backlogs - An action item with no owner and no deadline will be in the "backlog" forever. Every action item from a post-mortem needs a named owner, a specific due date, and a check-in at the next security review.
Anti-patterns / common mistakes
| Mistake | Why it's wrong | What to do instead |
|---|---|---|
| Rebooting or wiping a system immediately | Destroys volatile evidence (RAM, network state, running processes) that is critical for forensics | Isolate from network, take memory dump and disk image first, then remediate |
| Investigating without containment | Attacker retains access while you analyze, exfiltrating more data | Contain first (isolate, revoke creds), then investigate in parallel |
| Communicating speculation as fact | Creates false expectations, erodes trust when facts change | State only confirmed findings; use "we are investigating" for unknown scope |
| Skipping chain of custody documentation | Evidence becomes inadmissible in legal proceedings or insurance claims | Document every piece of evidence with collector, time, tool, and hash from collection |
| Declaring an incident closed too quickly | Attacker may have established persistence that survives remediation | Monitor recovered systems for 30+ days before considering the incident fully closed |
| Blaming individuals in post-mortems | Creates fear culture, people hide future incidents, root causes go unfixed | Focus on system and process failures; use blameless post-mortem framework |
References
For detailed playbooks on specific incident types, read:
`references/incident-playbooks.md` - step-by-step playbooks for ransomware, credential theft, data exfiltration, insider threat, and supply chain attacks
Only load the references file when the current incident type matches a playbook - it is detailed and will consume context.
References
incident-playbooks.md
Incident Playbooks
Step-by-step response playbooks for the five most common enterprise incident types. Each playbook follows the NIST IR phases and includes specific commands, decision points, and communication guidance.
How to use these playbooks
- Identify the incident type (or closest match)
- Start at "Immediate actions" - these are time-critical
- Work through Detection, Containment, Eradication, and Recovery in order
- Adapt to your environment - not every step applies to every organization
- Document every action taken in your incident timeline as you go
Playbook 1: Ransomware
Overview
Ransomware encrypts files and demands payment for decryption keys. Modern ransomware operators also exfiltrate data before encrypting (double extortion) and may use ransomware as a distraction for deeper persistence.
Common initial access vectors: phishing email attachments, RDP exposed to internet, unpatched public-facing services (Exchange, VPNs, Citrix), compromised MSP access.
Immediate actions (first 15 minutes)
- Isolate affected systems from the network immediately - pull ethernet or block at the switch/firewall. Do NOT power off.
- Alert your incident response team, management, and legal
- Determine scope: is encryption still spreading? Check neighboring systems for encrypted files or unusual disk activity.
- Disable file shares and mapped drives accessible from the affected network segment
- Block all outbound traffic from affected segment at the firewall (C2 callback)
Detection and analysis
Identify patient zero:
```powershell
# Windows: check file modification timestamps in affected directories
# Find recently modified files across a share
Get-ChildItem -Recurse -Path \\server\share | Sort-Object LastWriteTime -Descending | Select-Object -First 100

# Check the Windows Security log for file-access events (4663) around the
# first encryption activity (-FilterHashtable avoids scanning the whole log)
Get-WinEvent -FilterHashtable @{LogName='Security'; Id=4663} -MaxEvents 50 | Select-Object TimeCreated, Message
```

Confirm ransomware family:
- Collect a ransom note and sample encrypted file
- Submit the file hash to VirusTotal or use `id-ransomware.malwarehunterteam.com`
- Identify the family to determine if a free decryptor exists (check No More Ransom)
Assess exfiltration:
- Review DNS query logs and firewall logs for large outbound transfers in the 24-72 hours before encryption began
- Look for use of rclone, WinRAR, 7-Zip, or cloud storage clients (common exfil tools)
- Check for staging directories (common temp paths like `C:\ProgramData\`, `%TEMP%`)
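The exfil-tooling check can be a quick sweep over process or command-line logs. A sketch (the log path is a placeholder; in practice feed it EDR process events):

```shell
#!/usr/bin/env bash
# Sketch: grep a process/command log for common exfiltration tooling.
# The log path and format are placeholders for illustration.
exfil_tool_sweep() {
  grep -iE 'rclone|winrar|7z(\.exe)?|mega(cmd|sync)' "$1" || true
}

# Example with a throwaway log:
printf 'cmd: whoami\ncmd: rclone copy C:\\data remote:drop\n' > /tmp/proc.log
exfil_tool_sweep /tmp/proc.log   # prints the rclone line
```

A hit here is a lead, not proof: confirm against the firewall byte counts for the same window.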
Containment
- Isolate all confirmed and suspected compromised systems
- Disable or reset all privileged accounts used on affected systems
- Revoke all VPN, RDP, and remote access credentials pending investigation
- Block the C2 domains/IPs identified in ransomware sample at perimeter
- Take VM snapshots or disk images of affected systems before any remediation
Do not:
- Pay the ransom without legal and executive approval and law enforcement consultation
- Delete encrypted files until backups are verified clean
- Patch or update affected systems before forensic imaging
Eradication and recovery
- Identify and close the initial access vector (patch, disable RDP, MFA on VPN)
- Audit all accounts for unauthorized additions or privilege escalations
- Restore from the last known-clean backup that predates the intrusion
- Validate restored data integrity before returning systems to production
- Reimage compromised systems rather than cleaning in-place (ransomware operators often leave backdoors)
- Deploy EDR/AV with updated signatures before reconnecting to production network
Backup validation steps:
1. Identify backup timestamp before first indicators in logs
2. Restore to isolated environment first
3. Verify file integrity (compare hashes against known-good baseline if available)
4. Confirm no encrypted or corrupted files in restored backup
5. Validate application functionality in isolated environment before production restore

Communication guidance
- Engage legal before any external communication (double extortion means data may be published publicly)
- Law enforcement notification: FBI (US), NCSC (UK), BSI (DE) - do not pay ransom without consulting law enforcement
- Cyber insurance carrier must be notified early - many policies have specific requirements that can void coverage if not followed
Playbook 2: Credential Theft and Account Takeover
Overview
An attacker has obtained valid credentials (username/password, API keys, tokens, or session cookies) and is using them to access systems. Sources include phishing, credential stuffing, password spray, malware keyloggers, or database breach of a third-party service.
Indicators: login from unusual geography or IP, access outside normal hours, bulk data downloads, new MFA devices added, password resets not initiated by the user.
Immediate actions (first 15 minutes)
- Disable or force password reset for the compromised account(s)
- Terminate all active sessions for the compromised account
- If API keys or service tokens were compromised, revoke immediately and rotate
- Check for MFA bypass or new authenticator devices added by the attacker
- Alert the account owner to confirm they did not initiate the activity
Detection and analysis
Determine scope of access:
```shell
# AWS: review CloudTrail for actions taken by compromised credentials
aws cloudtrail lookup-events --lookup-attributes AttributeKey=Username,AttributeValue=<username> \
  --start-time <incident_start_iso8601> --end-time <now_iso8601>

# Review what services/resources the compromised credential could access
aws iam simulate-principal-policy --policy-source-arn arn:aws:iam::<account>:user/<username> \
  --action-names "s3:GetObject" "rds:DescribeDBInstances" "ec2:DescribeInstances"

# GCP: review audit logs
gcloud logging read "protoPayload.authenticationInfo.principalEmail=<email>" \
  --freshness=24h --format=json
```

Check for lateral movement:
- Were other accounts accessed or created using the compromised credential?
- Were there API calls to IAM (adding permissions, creating new users/keys)?
- Were any external data transfers initiated?
- Were new instances, functions, or services spun up (cryptomining is common)?
Determine credential source:
- Was this a phishing attack? Check email logs for suspicious links/attachments sent to the victim
- Credential stuffing? Check for parallel login attempts across multiple accounts from the same IP range
- Exposed credential? Check Have I Been Pwned API for the email address
- Internal leak? Review if the credential appears in recent code commits or logs
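The Have I Been Pwned check uses k-anonymity: only the first five hex characters of the credential's SHA-1 leave your network. A sketch of the client-side half (the actual API call is shown as a comment; the function name is illustrative):

```shell
#!/usr/bin/env bash
# Sketch: compute the SHA-1 prefix/suffix pair used by the HIBP Pwned
# Passwords range API. Only the 5-char prefix is ever sent to the service.
hibp_split() {
  sha=$(printf '%s' "$1" | sha1sum | awk '{print toupper($1)}')
  prefix=$(printf '%s' "$sha" | cut -c1-5)
  suffix=$(printf '%s' "$sha" | cut -c6-)
  echo "prefix=$prefix suffix=$suffix"
  # Then: curl -s "https://api.pwnedpasswords.com/range/$prefix"
  #       and search the response for $suffix to see breach counts.
}

hibp_split 'password'
```

Never send the full hash or the plaintext credential to any third-party service during an investigation.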
Containment
- Revoke and rotate all credentials for the compromised account
- Revoke credentials for any accounts the attacker may have pivoted to
- Remove any unauthorized MFA devices, SSH keys, or OAuth app authorizations
- Terminate and deauthorize any active sessions or tokens
- Enable conditional access policies (block risky sign-ins) if not already in place
Eradication and recovery
- Enforce MFA on all accounts that did not have it (particularly privileged accounts)
- Reset passwords for all users in the same organizational unit if credential stuffing is suspected
- Review and remove any new IAM roles, policies, or permissions added during the attack window
- Audit and clean up any resources created (instances, functions, storage buckets) by the attacker
- Enable anomalous login alerting (impossible travel, new device, new country)
- Implement or enforce passwordless/passkey for high-value accounts
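Key rotation for a compromised IAM user is worth pre-scripting. A dry-run sketch that prints the AWS CLI steps for review rather than executing them (the user name and key ID are placeholders):

```shell
#!/usr/bin/env bash
# Dry-run sketch: print the AWS CLI steps to rotate a compromised IAM
# user's access keys. User name and key ID are placeholders.
rotate_iam_keys() {
  user="$1"
  cat <<EOF
aws iam list-access-keys --user-name $user
aws iam update-access-key --user-name $user --access-key-id <old-key-id> --status Inactive
aws iam create-access-key --user-name $user
aws iam delete-access-key --user-name $user --access-key-id <old-key-id>
EOF
}

rotate_iam_keys svc-deploy
```

Deactivate before deleting: an `Inactive` key can be re-enabled if a dependent workload breaks, while deletion is irreversible.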
Playbook 3: Data Exfiltration
Overview
An attacker or malicious insider has transferred sensitive data outside of authorized systems. This may be the primary objective or a secondary action during a broader intrusion. Focus is on determining what was taken, stopping ongoing exfiltration, and fulfilling notification obligations.
Indicators: large outbound transfers, access to unusual data sets, use of personal cloud storage (Dropbox, Google Drive) from corporate systems, DLP alerts, DNS tunneling anomalies, bulk API reads.
Immediate actions (first 15 minutes)
- Block outbound traffic from the suspected exfiltration source at the firewall
- Preserve firewall, proxy, and DLP logs before they are rotated
- Identify the destination of the exfiltration (external IP, cloud service, domain)
- Determine if the exfiltration is still in progress
- Alert legal immediately - data exfiltration triggers notification obligations
Detection and analysis
Quantify what was exfiltrated:
```shell
# Estimate volume from firewall/proxy logs
# Look for large outbound sessions to unfamiliar destinations
grep "<source_ip>" /var/log/firewall.log | awk '{print $7, $8, $9}' | sort -k3 -rn | head -50

# S3: check for bulk GetObject operations
aws s3api list-objects --bucket <bucket> --query 'Contents[].{Key:Key,Size:Size}' | \
  jq 'sort_by(.Size) | reverse | .[:20]'

# Check CloudTrail for S3 data events
aws cloudtrail lookup-events --lookup-attributes AttributeKey=EventName,AttributeValue=GetObject \
  --start-time <start> --end-time <end>
```

Classify the data:
- What data classifications were in the exfiltrated dataset? (PII, PHI, PCI, IP)
- How many records? How many individuals are affected?
- Is the data subject to specific regulations? (GDPR, HIPAA, CCPA, PCI-DSS)
- Was encryption applied to the data at rest? Was the encryption key also exfiltrated?
Determine the exfiltration method:
| Method | Detection source | Evidence to collect |
|---|---|---|
| HTTPS upload to cloud storage | Proxy/firewall logs, DLP | URLs, byte counts, timestamps |
| Email attachment | Email gateway logs | Subject, recipients, attachment names/hashes |
| DNS tunneling | DNS query logs | Query volume, query entropy, unusual subdomain patterns |
| Physical media | USB device logs (Windows Event 4663) | Device ID, user, timestamp, files accessed |
| API bulk reads | Application and API gateway logs | Endpoint, query params, response size |
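DNS tunneling detection from the table often starts with label entropy: tunneled payloads produce long, high-entropy subdomains. A sketch using awk (the threshold mentioned is illustrative, not a standard):

```shell
#!/usr/bin/env bash
# Sketch: Shannon entropy (bits/char) of a DNS label. Tunneled payloads
# tend to score high; ~3.5 is a common illustrative alerting threshold.
label_entropy() {
  printf '%s' "$1" | awk '
    BEGIN { FS="" }                       # one character per field (GNU awk)
    { for (i = 1; i <= NF; i++) count[$i]++; n = NF }
    END {
      for (c in count) { p = count[c] / n; h -= p * log(p) / log(2) }
      printf "%.2f\n", h
    }'
}

label_entropy www                       # low entropy, ordinary label
label_entropy aGVsbG8td29ybGQtZXhmaWw   # high entropy, base64-looking label
```

Combine the score with query volume and label length before alerting; short legitimate labels can also score high.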
Containment
- Block the exfiltration destination (IP, domain, cloud service endpoint)
- Revoke access for the account used to exfiltrate
- Enable egress DLP rules for the affected data classification
- Coordinate with legal on regulatory notification timeline (GDPR: 72 hours)
- Preserve all logs for legal hold
Eradication and recovery
- Remove unauthorized access paths (misconfigured S3 bucket policies, exposed APIs)
- Implement data loss prevention controls for the vector used
- Rotate encryption keys for any encrypted data whose keys may have been exposed
- Review and tighten data access controls (least privilege, just-in-time access)
- Enable data access auditing on sensitive datasets if not already active
Playbook 4: Insider Threat
Overview
A current or former employee, contractor, or privileged user is intentionally or accidentally causing harm - through data theft, sabotage, unauthorized access, or sharing credentials. Insider threats are the hardest to detect and require careful coordination between security, HR, and legal.
Important: Do not confront the suspected insider directly. Do not alert them that they are under investigation. All actions must be coordinated with HR and legal.
Immediate actions (first 15 minutes)
- Immediately loop in HR and legal - do not proceed without them
- Do not alert or confront the suspect
- Preserve all access logs, emails, and activity records for the suspect's accounts
- Determine if they still have active access and what systems/data they can reach
- Assess whether an emergency access revocation is warranted (destructive risk)
Detection and analysis
Behavioral indicators to investigate:
- Access to data outside normal job function or data they have no business reason to access
- Bulk downloads or transfers to personal devices or external services
- Access attempts after hours or from unusual locations
- Searching for competitors, sensitive company information, or executive communications
- Recent performance issues, disciplinary actions, or announced resignation
Technical evidence collection:
1. Pull complete access logs for the suspect's accounts (SSO, VPN, cloud, app)
2. Capture email metadata (do not read content without legal authorization)
3. Collect DLP alerts and endpoint activity logs
4. Document all systems the suspect has valid credentials for
5. Preserve badge/physical access records if applicable
Legal constraints:
- In many jurisdictions, monitoring employee communications requires consent or a specific legal basis - consult legal before accessing email content, chat logs, or personal device data
- Evidence collected in violation of applicable law may be inadmissible and create liability
- Preserve chain of custody with particular care - insider cases often result in litigation
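A chain-of-custody manifest can be generated mechanically: one hash per evidence file, plus a note of who collected it and when. A minimal sketch assuming `sha256sum` is available; the directory layout is hypothetical:

```shell
# Hash every file under an evidence directory into a manifest, and record
# the collector and UTC timestamp in a sidecar file. Re-running
# `sha256sum -c <manifest>` later proves the evidence is unchanged.
evidence_manifest() {
  local evidence_dir="$1" manifest="$2"
  echo "# Collected: $(date -u +%Y-%m-%dT%H:%M:%SZ) by ${USER:-unknown}" > "${manifest}.meta"
  find "$evidence_dir" -type f -exec sha256sum {} + > "$manifest"
}
# Usage: evidence_manifest /evidence/case-1234 /evidence/case-1234.manifest
```

Keep the manifest itself outside the evidence directory, and store a copy with whoever holds custody, so the hash list cannot be silently regenerated after tampering.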
Containment
Containment timing is critical and must be coordinated with HR:
- For active destruction risk: revoke access immediately, notify HR simultaneously
- For ongoing exfiltration: legal and HR may want to monitor briefly to gather evidence before revocation - this is their decision, not security's
- Prepare a complete access revocation runbook before executing (all accounts, all systems, physical access, personal devices enrolled in MDM)
Access revocation checklist:
- SSO / identity provider account disabled
- VPN certificate revoked
- All cloud provider IAM accounts/keys revoked
- All application accounts disabled
- SSH keys removed from all systems
- MDM: remote wipe initiated for company-managed devices
- Physical access cards deactivated
- Shared account passwords rotated if suspect knew them
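For the SSH item on the checklist above, a sketch of removing a single key from an `authorized_keys` file by its comment (typically `user@host`). The file path and comment in the usage line are examples; matching on the comment assumes the suspect's keys carry identifiable comments:

```shell
# Remove a public key from authorized_keys, keeping a timestamped backup
# of the original file as evidence before modifying it.
revoke_ssh_key() {
  local authkeys="$1" key_comment="$2"
  cp -p "$authkeys" "${authkeys}.bak.$(date +%s)"   # preserve the original
  # grep -v exits nonzero when nothing remains, which is a valid outcome here
  grep -v -- "$key_comment" "$authkeys" > "${authkeys}.tmp" || true
  mv "${authkeys}.tmp" "$authkeys"
}
# Usage: revoke_ssh_key /home/alice/.ssh/authorized_keys "alice@laptop"
```

Run it on every host the suspect could reach; in practice you would drive this from inventory or configuration management rather than host by host.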
Eradication and recovery
- Audit for any backdoors, new accounts, or scheduled tasks created by the insider
- Review changes made to code, infrastructure configs, or data during the suspect's access window
- Recover or attempt to recover any deleted or modified data from backups
- Conduct a privilege audit across all systems to identify over-provisioned access (the insider case often reveals systemic access control failures)
- Implement user behavior analytics (UEBA) if not already deployed
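The backdoor/new-account audit above can be partially scripted. A minimal sketch for Linux hosts; the paths are parameterized so the same checks can run against a mounted forensic image as well as the live system (the defaults shown are the usual live paths):

```shell
# Quick persistence audit: human accounts with a login shell, system-wide
# cron entries, and systemd timers. This is a triage pass, not a full
# forensic review - compare the output against a known-good baseline.
insider_persistence_audit() {
  local passwd="${1:-/etc/passwd}" cron_dir="${2:-/etc/cron.d}"
  echo "== human accounts (UID >= 1000) with a login shell =="
  awk -F: '$3 >= 1000 && $7 !~ /(nologin|false)$/ {print $1, $3, $7}' "$passwd"
  echo "== system-wide cron entries =="
  ls -l "$cron_dir" 2>/dev/null || true
  echo "== systemd timers (live system only) =="
  systemctl list-timers --all 2>/dev/null | head -20
}
```

Also check per-user crontabs (`crontab -l -u <user>`) and shell startup files for anything added during the suspect's access window.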
Playbook 5: Supply Chain Attack
Overview
A trusted third-party software package, vendor, or service has been compromised and used as a vector to attack your systems. Examples include SolarWinds-style trojanized updates, compromised npm/PyPI packages, or a vendor whose credentials gave them access to your environment.
Indicators: unexpected network connections from trusted software, security alerts from vendor or public disclosure, anomalous behavior from a recently updated dependency, threat intelligence from a feed or ISAC.
Immediate actions (first 15 minutes)
- Identify all instances of the compromised software/package in your environment
- Check the vendor's security advisory for indicators of compromise (IOCs)
- Determine if those IOCs are present in your systems (run the provided hashes, domains, and file paths against your SIEM/EDR)
- Assess blast radius: what access did the compromised component have?
- If active compromise is confirmed, treat as a P1 and begin containment
Detection and analysis
Software bill of materials (SBOM) query:
# Find all instances of a specific package version across your repos
# npm: package-lock.json stores the version on a separate line from the
# package name, so locate affected repos first, then confirm the version
find /path/to/repos -name "package-lock.json" -exec grep -l '"node_modules/<package>"' {} \;
# (then: cd <repo> && npm ls <package> to see the resolved version)
# Python: requirements files pin with ==
find /path/to/repos -name "requirements*.txt" -exec grep -l "<package>==<version>" {} \;
# Docker images: use your image registry's vulnerability scanning
# AWS ECR, GitHub Container Registry, etc. provide package inventory APIs
Check for IOC presence:
# Search for known malicious file hashes (-xdev keeps the scan on the local
# filesystem; hashing everything is slow, so scope the path where you can)
find / -xdev -type f -exec sha256sum {} + 2>/dev/null | grep -F -f <ioc_hashes_file>
# Check DNS for known C2 domains in logs
grep -E "(malicious-domain1|malicious-domain2)" /var/log/dns-queries.log
# Check EDR for process executions matching known malicious binaries
# (tool-specific - use your EDR's threat hunting query interface)
Determine compromise window:
- When was the compromised version first deployed in your environment?
- What actions were taken by processes running the compromised software?
- Was any data accessible to the compromised component exfiltrated?
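If your deployments are driven from git, the lockfile's history answers the first question directly. A sketch using git's pickaxe search; `package-lock.json` and the version string in the usage line are example values:

```shell
# Find the commit (and its date) that introduced a compromised version:
# -S ("pickaxe") lists commits that added or removed the given string
# in the specified file.
compromise_window() {
  local lockfile="$1" bad_version="$2"
  git log -S "$bad_version" --format='%h %ad %s' --date=iso -- "$lockfile"
}
# Usage (inside the repo): compromise_window package-lock.json 1.2.3
```

The commit date bounds the earliest possible deployment; cross-reference with your CI/CD deploy logs to get the actual window per environment.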
Containment
- Remove or disable the compromised software/package version from all systems
- Block C2 domains and IPs from the vendor's IOC list at firewall and DNS
- If the compromised component had cloud provider access, rotate all credentials it used
- If a vendor's SSO or direct access was used as the attack vector, revoke their access immediately and notify the vendor
- Enable enhanced logging on all systems that ran the compromised software
Eradication and recovery
- Upgrade to a clean version verified by the vendor, or remove the dependency
- Verify the clean version with the vendor's provided hash or signature
- Audit all systems that ran the compromised version for persistence mechanisms
- Implement dependency pinning and integrity verification (lock files, SRI hashes, signed packages) if not already in place
- Add SBOM generation to your CI/CD pipeline for future visibility
- Subscribe to security advisories for all critical third-party dependencies
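For the integrity-verification item above, SRI hashes for vendored assets can be generated with nothing but `openssl`. A minimal sketch; the `vendor/lib.min.js` path in the usage line is an example:

```shell
# Compute a Subresource Integrity (SRI) hash for a file: the sha384 digest,
# base64-encoded, with the algorithm prefix the integrity attribute expects.
sri_hash() {
  printf 'sha384-%s\n' "$(openssl dgst -sha384 -binary "$1" | openssl base64 -A)"
}
# Usage: sri_hash vendor/lib.min.js
# -> paste the output into <script integrity="..."> or your build manifest
```

Regenerate and compare the hash in CI so a silently modified vendored file fails the build instead of shipping.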
Verification commands:
# npm: verify package integrity
npm audit
npm pack <package>@<version>        # writes <package>-<version>.tgz to the current directory
sha256sum <package>-<version>.tgz   # compare with the vendor-published hash
# Python: verify with pip hash checking
pip install --require-hashes -r requirements.txt
# Docker: pulling by digest fails if the content does not match
docker pull <image>@sha256:<digest>
Quick reference: Incident type to playbook
| What you're seeing | Playbook |
|---|---|
| Files being encrypted, ransom note | Ransomware (Playbook 1) |
| Login from unusual location, account used without owner's knowledge | Credential Theft (Playbook 2) |
| Large outbound data transfer, DLP alert, bulk API reads | Data Exfiltration (Playbook 3) |
| Employee suspected of data theft or sabotage | Insider Threat (Playbook 4) |
| Vendor security advisory, compromised npm/PyPI package | Supply Chain Attack (Playbook 5) |
For incidents that combine types (e.g., credential theft leading to exfiltration), run both playbooks and combine the containment and eradication steps.
Frequently Asked Questions
What is security-incident-response?
Use this skill when responding to security incidents, conducting forensic analysis, containing breaches, or writing incident reports. Triggers on security incident, breach response, forensics, containment, eradication, recovery, incident report, IOC analysis, and any task requiring security incident management.
How do I install security-incident-response?
Run npx skills add AbsolutelySkilled/AbsolutelySkilled --skill security-incident-response in your terminal. The skill will be immediately available in your AI coding agent.
What AI agents support security-incident-response?
security-incident-response works with claude-code, gemini-cli, openai-codex. Install it once and use it across any supported AI coding agent.