terraform-iac
Use this skill when writing Terraform configurations, managing infrastructure as code, creating reusable modules, handling state backends, or detecting drift. Triggers on Terraform, HCL, infrastructure as code, IaC, providers, modules, state management, terraform plan, terraform apply, drift detection, and any task requiring declarative infrastructure provisioning.
What is terraform-iac?
terraform-iac is a production-ready AI agent skill for claude-code, gemini-cli, and openai-codex. It covers writing Terraform configurations, managing infrastructure as code, creating reusable modules, handling state backends, and detecting drift.
Quick Facts
| Field | Value |
|---|---|
| Category | infra |
| Version | 0.1.0 |
| Platforms | claude-code, gemini-cli, openai-codex |
| License | MIT |
How to Install
- Make sure you have Node.js installed on your machine.
- Run the following command in your terminal:
npx skills add AbsolutelySkilled/AbsolutelySkilled --skill terraform-iac
- The terraform-iac skill is now available in your AI coding agent (Claude Code, Gemini CLI, OpenAI Codex, etc.).
Overview
Terraform is the de-facto standard for declarative infrastructure provisioning. This skill covers the complete lifecycle - project setup, module design, remote state management, multi-environment strategy, and keeping real infrastructure aligned with declared configuration. Designed for engineers who know basic Terraform and need opinionated guidance on structure, safety, and production practices.
Tags
terraform iac infrastructure hcl modules devops
Platforms
- claude-code
- gemini-cli
- openai-codex
Frequently Asked Questions
What is terraform-iac?
Use this skill when writing Terraform configurations, managing infrastructure as code, creating reusable modules, handling state backends, or detecting drift. Triggers on Terraform, HCL, infrastructure as code, IaC, providers, modules, state management, terraform plan, terraform apply, drift detection, and any task requiring declarative infrastructure provisioning.
How do I install terraform-iac?
Run npx skills add AbsolutelySkilled/AbsolutelySkilled --skill terraform-iac in your terminal. The skill will be immediately available in your AI coding agent.
What AI agents support terraform-iac?
This skill works with claude-code, gemini-cli, openai-codex. Install it once and use it across any supported AI coding agent.
Maintainers
Generated from AbsolutelySkilled
SKILL.md
Terraform Infrastructure as Code
Terraform is the de-facto standard for declarative infrastructure provisioning. This skill covers the complete lifecycle - project setup, module design, remote state management, multi-environment strategy, and keeping real infrastructure aligned with declared configuration. Designed for engineers who know basic Terraform and need opinionated guidance on structure, safety, and production practices.
When to use this skill
Trigger this skill when the user:
- Writes or reviews Terraform HCL for any cloud provider (AWS, GCP, Azure)
- Designs reusable Terraform modules or a module registry structure
- Sets up or migrates remote state backends (S3, GCS, Terraform Cloud)
- Manages multiple environments (dev/staging/prod) with Terraform
- Diagnoses drift between actual infrastructure and Terraform state
- Runs or interprets terraform plan, terraform apply, or terraform import
- Handles state operations: state mv, state rm, taint, untaint
Do NOT trigger this skill for:
- Kubernetes manifest authoring (use a kubernetes/helm skill instead)
- Application-level configuration management (Ansible, Chef, Puppet)
Key principles
Declarative over imperative - Describe the desired end state, not the steps to get there. If you find yourself writing null_resource with provisioners to run shell scripts, stop and ask whether the provider has a proper resource for this.
Modules for every reusable pattern - Any configuration block you copy between environments or projects is a module waiting to be written. Extract early; the cost of refactoring into a module grows with usage.
Remote state always - Local state is only acceptable for throwaway experiments. Production state lives in a versioned, locked backend (S3 + DynamoDB, GCS, or Terraform Cloud) from day one. State is your source of truth.
Plan before apply, in CI - terraform apply without a reviewed plan is the infrastructure equivalent of deploying untested code. Always run terraform plan -out=tfplan and review the diff before applying. Automate this in CI pipelines.
Least privilege for providers - The IAM role or service account Terraform uses must have only the permissions needed for that specific configuration. Never use AdministratorAccess or Owner roles for provider credentials.
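The plan-before-apply gate can be scripted in CI around terraform plan's -detailed-exitcode contract (exit 0 = no changes, 1 = error, 2 = changes present). A minimal sketch - the plan_gate helper is our own name, not a Terraform command:

```shell
# Interpret the -detailed-exitcode contract of `terraform plan`.
plan_gate() {
  case "$1" in
    0) echo "clean: nothing to apply" ;;
    2) echo "diff: hold plan artifact for human review before apply" ;;
    *) echo "error: plan itself failed" ;;
  esac
}

# In a real pipeline (not executed here):
#   terraform plan -out=tfplan -input=false -detailed-exitcode
#   plan_gate "$?"
plan_gate 2   # diff: hold plan artifact for human review before apply
```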
Core concepts
Providers - Plugins that translate HCL into API calls for a cloud or service.
Always pin provider versions in required_providers. Unpinned providers can break
your configuration whenever a new provider version is released.
Resources - The fundamental unit. Each resource block declares one
infrastructure object (aws_vpc, google_container_cluster, etc.).
Data sources - Read-only lookups of existing infrastructure not managed by
this configuration. Use data blocks to reference shared resources (AMIs,
existing VPCs, DNS zones) without importing them into state.
Modules - Containers for multiple resources that are used together. A module
is a directory with .tf files. Modules accept variable inputs and expose
output values to callers.
State - A JSON file that maps declared resources to real infrastructure
objects. Terraform uses state to calculate diffs. Never edit state manually -
use terraform state commands.
Workspaces - Named state instances within a single backend configuration. Useful for short-lived feature environments; not recommended for long-lived environment separation (use separate root modules instead).
Backends - Configuration for where and how state is stored and locked. Locking prevents concurrent applies from corrupting state.
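To make the state concept concrete, here is a sketch of the mapping a state file encodes. The JSON below is hypothetical and heavily trimmed - real version-4 state files carry many more fields per resource:

```python
import json

# Hypothetical, heavily trimmed state document (illustrative IDs).
state_json = """
{
  "version": 4,
  "resources": [
    {"mode": "managed", "type": "aws_vpc", "name": "main",
     "instances": [{"attributes": {"id": "vpc-0abc123"}}]},
    {"mode": "data", "type": "aws_ami", "name": "amazon_linux",
     "instances": [{"attributes": {"id": "ami-0def456"}}]}
  ]
}
"""
state = json.loads(state_json)

# Terraform diffs declared config against these address -> real-object mappings.
managed = {
    f'{r["type"]}.{r["name"]}': r["instances"][0]["attributes"]["id"]
    for r in state["resources"]
    if r["mode"] == "managed"  # data sources are read-only, not managed objects
}
print(managed)  # {'aws_vpc.main': 'vpc-0abc123'}
```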
Common tasks
Set up a project with S3 backend
Structure every Terraform project with these three foundational files before writing any resources.
versions.tf - Pin everything. Unpinned versions cause silent breakage.
terraform {
required_version = ">= 1.6.0, < 2.0.0"
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
}
backend "s3" {
bucket = "my-org-terraform-state"
key = "services/my-service/terraform.tfstate"
region = "us-east-1"
encrypt = true
dynamodb_table = "terraform-state-lock"
}
}
providers.tf - One provider block, no credentials hardcoded.
provider "aws" {
region = var.aws_region
default_tags {
tags = {
ManagedBy = "terraform"
Environment = var.environment
Service = var.service_name
}
}
}
variables.tf - Declare all inputs with descriptions and sensible defaults.
variable "aws_region" {
description = "AWS region to deploy into"
type = string
default = "us-east-1"
}
variable "environment" {
description = "Deployment environment (dev, staging, prod)"
type = string
validation {
condition = contains(["dev", "staging", "prod"], var.environment)
error_message = "environment must be one of: dev, staging, prod"
}
}
variable "service_name" {
description = "Name of the service owning this infrastructure"
type = string
}
Create the S3 bucket and DynamoDB table for the backend manually (or with a separate bootstrap Terraform config) before running terraform init. You cannot manage the state backend with the same configuration that uses it.
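A sketch of such a bootstrap configuration, run once from a separate directory (local state is acceptable for this one config; names mirror the backend block above, adjust for your org):

```hcl
# bootstrap/main.tf -- creates the backend storage; applied with local state
resource "aws_s3_bucket" "state" {
  bucket = "my-org-terraform-state"
}

resource "aws_s3_bucket_versioning" "state" {
  bucket = aws_s3_bucket.state.id
  versioning_configuration {
    status = "Enabled"
  }
}

# The lock table must use a string hash key named LockID
resource "aws_dynamodb_table" "lock" {
  name         = "terraform-state-lock"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}
```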
Write a reusable module
A module is a directory with main.tf, variables.tf, and outputs.tf.
Modules should express one cohesive infrastructure concern. All inputs are
declared with descriptions in variables.tf; all outputs expose only what
callers need in outputs.tf.
Calling a module from a root configuration:
module "vpc" {
source = "../../modules/vpc"
name = "my-service-${var.environment}"
availability_zones = ["us-east-1a", "us-east-1b", "us-east-1c"]
public_subnet_cidrs = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
private_subnet_cidrs = ["10.0.11.0/24", "10.0.12.0/24", "10.0.13.0/24"]
}
See references/module-patterns.md for complete module templates, versioning, and monorepo layout.
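Rather than hand-typing subnet CIDR lists like the ones above, Terraform's built-in cidrsubnet() function can derive them. A Python sketch of the same arithmetic (simplified - ignores IPv6 edge cases and error handling):

```python
import ipaddress

def cidrsubnet(prefix: str, newbits: int, netnum: int) -> str:
    """Rough Python analogue of Terraform's cidrsubnet(prefix, newbits, netnum)."""
    network = ipaddress.ip_network(prefix)
    # Extend the prefix by `newbits` bits, then take the netnum-th subnet.
    return str(list(network.subnets(prefixlen_diff=newbits))[netnum])

# The public/private subnet lists above could be derived as:
public  = [cidrsubnet("10.0.0.0/16", 8, n) for n in (1, 2, 3)]
private = [cidrsubnet("10.0.0.0/16", 8, n) for n in (11, 12, 13)]
print(public)   # ['10.0.1.0/24', '10.0.2.0/24', '10.0.3.0/24']
print(private)  # ['10.0.11.0/24', '10.0.12.0/24', '10.0.13.0/24']
```

In HCL, the equivalent is cidrsubnet("10.0.0.0/16", 8, 1), which yields "10.0.1.0/24".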
Manage environments with workspaces
Workspaces share a single backend and configuration. Use them for ephemeral
feature environments; prefer separate state files (separate key paths) for
permanent environments like staging and prod.
# Create and switch to a feature workspace
terraform workspace new feature-xyz
terraform workspace select feature-xyz
# Reference workspace name in configuration to vary resource names/sizes
resource "aws_instance" "app" {
instance_type = terraform.workspace == "prod" ? "t3.large" : "t3.micro"
tags = { Environment = terraform.workspace }
}
# Clean up the workspace when done
terraform workspace select default
terraform destroy
terraform workspace delete feature-xyz
For prod/staging: use separate backend key paths or separate AWS accounts with separate root modules. Workspaces share one backend configuration, so a misdirected state operation in one workspace can corrupt another environment's state.
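One way to realize separate key paths is a per-environment backend block in each root module (bucket and key names illustrative, mirroring the earlier backend example):

```hcl
# environments/staging/main.tf -- its own state file under a staging-specific key
terraform {
  backend "s3" {
    bucket         = "my-org-terraform-state"
    key            = "services/my-service/staging/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-state-lock"
  }
}

# environments/prod/main.tf is identical except for the key
# (and, ideally, a separate bucket in a separate AWS account):
#   key = "services/my-service/prod/terraform.tfstate"
```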
Import existing resources into state
When infrastructure was created outside Terraform and you need to manage it.
# Terraform 1.5+: use import blocks (preferred, reviewable in plan)
# Add this to your .tf file temporarily:
import {
to = aws_s3_bucket.my_bucket
id = "my-existing-bucket-name"
}
# Run plan to preview what will be generated
terraform plan -generate-config-out=generated.tf
# Review generated.tf, copy the resource block into your main config, remove
# the import block, then apply
terraform apply
For older Terraform versions (pre-1.5), use the CLI:
terraform import aws_s3_bucket.my_bucket my-existing-bucket-name
After importing, always run terraform plan to verify zero diff before continuing. A non-empty plan after import means your HCL does not match the real resource - fix the HCL, do not apply the diff blindly.
Handle state operations safely
State operations modify which resources Terraform tracks. Always take a state backup first.
# Backup state before any manual operation
terraform state pull > backup-$(date +%Y%m%d-%H%M%S).tfstate
# Rename a resource (e.g., after refactoring module structure)
terraform state mv aws_instance.old_name aws_instance.new_name
# Move a resource into a module
terraform state mv aws_s3_bucket.logs module.logging.aws_s3_bucket.logs
# Remove a resource from state without destroying it
# (when you want Terraform to stop managing it)
terraform state rm aws_instance.temporary
# Mark a resource for replacement on next apply
# (forces destroy + recreate even if config unchanged)
terraform taint aws_instance.app
# Terraform 0.15.2+ preferred syntax:
terraform apply -replace="aws_instance.app"
state rm does NOT destroy the real infrastructure. The resource simply becomes unmanaged. If you want it gone, destroy first, then remove from state.
Detect and fix drift
Drift occurs when real infrastructure diverges from Terraform state (e.g., manual console changes, external automation).
# Step 1: Refresh state against real infrastructure
# (terraform refresh is a deprecated alias; on Terraform 0.15.4+ prefer the
#  reviewable form: terraform plan -refresh-only)
terraform refresh
# Step 2: Run plan to see what Terraform would change to correct drift
terraform plan
# Step 3a: If drift is unintentional - apply to correct it
terraform apply
# Step 3b: If drift is intentional - update HCL to match reality,
# then verify plan shows no changes
terraform plan # should report: "No changes. Your infrastructure matches the configuration."
# For a targeted drift check on one resource:
terraform plan -target=aws_security_group.app
In CI, detect drift on a schedule:
# Run as a daily cron job - alert if exit code is 2 (changes detected)
terraform plan -detailed-exitcode
# Exit 0: no diff | Exit 1: error | Exit 2: diff detected
Use data sources and dynamic blocks
Data sources look up existing infrastructure without managing it:
# Look up the latest Amazon Linux 2 AMI - never hardcode AMI IDs
data "aws_ami" "amazon_linux" {
most_recent = true
owners = ["amazon"]
filter {
name = "name"
values = ["amzn2-ami-hvm-*-x86_64-gp2"]
}
}
resource "aws_instance" "app" {
ami = data.aws_ami.amazon_linux.id
instance_type = var.instance_type
}
# Reference an existing VPC not managed by this config
data "aws_vpc" "shared" {
tags = { Name = "shared-services-vpc" }
}
Dynamic blocks eliminate repetitive nested blocks:
variable "ingress_rules" {
type = list(object({
from_port = number
to_port = number
protocol = string
cidr_blocks = list(string)
}))
}
resource "aws_security_group" "app" {
name = "app-sg"
vpc_id = data.aws_vpc.shared.id
dynamic "ingress" {
for_each = var.ingress_rules
content {
from_port = ingress.value.from_port
to_port = ingress.value.to_port
protocol = ingress.value.protocol
cidr_blocks = ingress.value.cidr_blocks
}
}
}
Error handling
| Error | Root cause | Fix |
|---|---|---|
| Error acquiring the state lock | Another apply is running, or a previous run crashed without releasing the lock | Wait for the concurrent run; if stale: terraform force-unlock <LOCK_ID> (verify no concurrent apply first) |
| Error: inconsistent result after apply | Provider returned a different value than what was planned (often eventual consistency) | Add depends_on or increase retry logic; file a provider bug if persistent |
| Error: Resource already exists | Trying to create a resource that exists but is not in state | Use terraform import to bring it under management before applying |
| Error refreshing state: AccessDenied | Provider credentials lack read permissions on existing resources | Expand IAM policy to include Describe* / Get* / List* for affected services |
| Error: Cycle detected | Circular dependency between resources (A depends on B, B depends on A) | Break the cycle with depends_on or restructure - often caused by security group self-references |
| Plan shows replacement for unchanged resource | A computed attribute (e.g., an ARN or auto-generated field) changed externally | Run terraform refresh then re-plan; if persistent, check for provider version changes |
Gotchas
- You cannot manage the S3 backend bucket with the config that uses it - The backend must exist before terraform init runs. Bootstrap the state bucket and DynamoDB lock table with a separate configuration (or manually). Attempting to create both in the same root module causes a chicken-and-egg failure.
- terraform destroy in a workspace can also destroy shared resources - If your module references shared infrastructure (e.g., a VPC created in another root module), anything included via data sources will not be destroyed - but anything created by this config will. Audit what belongs to the workspace before destroying.
- Unpinned provider versions cause silent breakage on upgrades - Without version = "~> 5.0" in required_providers, a provider major version bump in the registry can change resource schemas and break existing configs on the next terraform init. Always pin providers; update versions deliberately.
- terraform state rm does not destroy the real resource - It only removes Terraform's tracking entry. The resource continues running and accumulating cost. If you want the resource gone, run terraform destroy -target=<resource> first, then remove from state if needed.
- Workspaces share a backend - a corrupted state affects all workspaces - Using workspaces with separate state keys in the same S3 bucket means a misconfigured state mv or force-unlock at the wrong key can corrupt a different environment's state. Prefer separate AWS accounts or separate state buckets for prod/staging separation.
References
For detailed patterns and implementation guidance, read the relevant file from
the references/ folder:
- references/module-patterns.md - module composition, factory pattern, versioning, monorepo layout
Only load a references file if the current task requires it - they are detailed and will consume context.
References
module-patterns.md
Terraform Module Design Patterns
Opinionated reference for building maintainable, shareable Terraform modules. These patterns apply whether you are building a team-internal module library or publishing to the public Terraform Registry.
Module taxonomy
Before choosing a pattern, classify the module:
| Type | Purpose | Example |
|---|---|---|
| Resource module | Wraps a single resource type with sane defaults | modules/s3-bucket |
| Composition module | Assembles multiple resource modules into a capability | modules/app-cluster (ECS + ALB + SG) |
| Root module | Entry point for terraform apply; calls composition modules | environments/prod/main.tf |
| Wrapper module | Thin shim over an upstream module to enforce org standards | modules/vpc wrapping terraform-aws-modules/vpc |
Root modules own state. Resource and composition modules own no state themselves - they are called by root modules.
1. Resource module pattern
The simplest and most reusable module type. One resource, hardened defaults, minimal surface area.
Rules:
- Every input must have a description.
- Provide sensible defaults for non-environment-specific values.
- Expose all attributes that callers might need as outputs.
- Use lifecycle blocks to protect against accidental deletion on critical resources.
# modules/s3-bucket/main.tf
resource "aws_s3_bucket" "this" {
bucket = var.bucket_name
}
resource "aws_s3_bucket_versioning" "this" {
bucket = aws_s3_bucket.this.id
versioning_configuration {
status = var.versioning_enabled ? "Enabled" : "Disabled"
}
}
resource "aws_s3_bucket_server_side_encryption_configuration" "this" {
bucket = aws_s3_bucket.this.id
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
resource "aws_s3_bucket_public_access_block" "this" {
bucket = aws_s3_bucket.this.id
block_public_acls = true
block_public_policy = true
ignore_public_acls = true
restrict_public_buckets = true
}
# modules/s3-bucket/variables.tf
variable "bucket_name" {
description = "Globally unique name for the S3 bucket"
type = string
}
variable "versioning_enabled" {
description = "Enable S3 versioning. Recommended for state and artifact buckets."
type = bool
default = true
}
# modules/s3-bucket/outputs.tf
output "bucket_id" {
description = "Name of the bucket (same as ID for S3)"
value = aws_s3_bucket.this.id
}
output "bucket_arn" {
description = "ARN of the bucket"
value = aws_s3_bucket.this.arn
}
2. Composition module pattern
Assembles multiple resource modules or resources into a single deployable capability. This is the right level for "a working ECS service" or "a complete RDS setup with parameter groups and subnet groups."
Rules:
- Accept high-level intent variables, not low-level resource IDs where possible.
- Use depends_on only when implicit dependencies are impossible.
- Do not accept a tags variable and merge it - let callers use provider default_tags.
# modules/ecs-service/main.tf
module "alb" {
source = "../alb"
name = "${var.service_name}-alb"
vpc_id = var.vpc_id
subnet_ids = var.public_subnet_ids
}
module "security_group" {
source = "../security-group"
name = "${var.service_name}-ecs-sg"
vpc_id = var.vpc_id
ingress_rules = [
{
from_port = var.container_port
to_port = var.container_port
protocol = "tcp"
cidr_blocks = [var.vpc_cidr]
}
]
}
resource "aws_ecs_service" "this" {
name = var.service_name
cluster = var.cluster_arn
task_definition = aws_ecs_task_definition.this.arn
desired_count = var.desired_count
load_balancer {
target_group_arn = module.alb.target_group_arn
container_name = var.service_name
container_port = var.container_port
}
network_configuration {
subnets = var.private_subnet_ids
security_groups = [module.security_group.security_group_id]
}
}
3. Wrapper module pattern
A thin wrapper over a well-known upstream module (Terraform Registry or community module) that enforces organizational standards - naming conventions, required tags, forbidden settings, and approved defaults.
When to use: Your org uses terraform-aws-modules/vpc/aws but every team
keeps forgetting to enable flow logs and VPN gateway. Write a wrapper once.
# modules/vpc/main.tf
# Wraps the community VPC module with org-required settings locked in
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "~> 5.0"
name = var.name
cidr = var.cidr_block
azs = var.availability_zones
private_subnets = var.private_subnet_cidrs
public_subnets = var.public_subnet_cidrs
# Org standards: always enabled
enable_nat_gateway = true
single_nat_gateway = var.environment != "prod"
enable_vpn_gateway = false
enable_flow_log = true
flow_log_destination_type = "s3"
flow_log_destination_arn = var.flow_log_bucket_arn
# Prevent callers from disabling DNS (required for ECS service discovery)
enable_dns_hostnames = true
enable_dns_support = true
}
# modules/vpc/variables.tf - only expose what callers should control
variable "name" { type = string }
variable "cidr_block" {
  type    = string
  default = "10.0.0.0/16"
}
variable "availability_zones" { type = list(string) }
variable "private_subnet_cidrs" { type = list(string) }
variable "public_subnet_cidrs" { type = list(string) }
variable "flow_log_bucket_arn" { type = string }
variable "environment" { type = string }
Rule: do not expose every upstream variable. Exposing everything defeats the purpose - callers could disable the org-required settings. Only expose variables where variation between callers is legitimate.
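A hypothetical caller then sees only the curated surface (service name and CIDR values illustrative):

```hcl
# environments/prod/main.tf -- callers can size and place the VPC,
# but cannot disable flow logs, NAT, or DNS (locked in the wrapper)
module "vpc" {
  source               = "../../modules/vpc"
  name                 = "payments-prod"
  cidr_block           = "10.20.0.0/16"
  availability_zones   = ["us-east-1a", "us-east-1b", "us-east-1c"]
  public_subnet_cidrs  = ["10.20.1.0/24", "10.20.2.0/24", "10.20.3.0/24"]
  private_subnet_cidrs = ["10.20.11.0/24", "10.20.12.0/24", "10.20.13.0/24"]
  flow_log_bucket_arn  = var.flow_log_bucket_arn
  environment          = "prod"
}
```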
4. Factory pattern
Generate multiple similar resources from a map of configurations. Prefer this
over count when resources have distinct identities - with count, removing
an element from the middle of a list shifts all indices and causes unwanted
replacements.
# Preferred: for_each with a map - each resource has a stable key
variable "environments" {
type = map(object({
instance_type = string
min_size = number
max_size = number
}))
default = {
dev = {
instance_type = "t3.micro"
min_size = 1
max_size = 2
}
staging = {
instance_type = "t3.small"
min_size = 1
max_size = 3
}
}
}
resource "aws_launch_template" "env" {
for_each = var.environments
name_prefix = "app-${each.key}-"
instance_type = each.value.instance_type
}
resource "aws_autoscaling_group" "env" {
for_each = var.environments
name = "app-${each.key}"
min_size = each.value.min_size
max_size = each.value.max_size
launch_template {
id = aws_launch_template.env[each.key].id
version = "$Latest"
}
}
Rule: use count only for resources where identity is purely ordinal
(e.g., three identical worker nodes where any can be replaced by any other).
Use for_each for anything with a meaningful name.
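The index-shift hazard that motivates this rule can be seen with plain data structures - a sketch in which the resource addresses are illustrative:

```python
# With count, addresses are positional: removing "b" shifts "c" from [2] to [1],
# so Terraform plans a destroy/recreate for a resource that did not change.
before = {f"aws_instance.app[{i}]": name for i, name in enumerate(["a", "b", "c"])}
after  = {f"aws_instance.app[{i}]": name for i, name in enumerate(["a", "c"])}
print(before["aws_instance.app[2]"])     # c
print(after.get("aws_instance.app[2]"))  # None -- and [1] now holds "c"

# With for_each, addresses are keyed: dropping "b" leaves "a" and "c" untouched.
fe_before = {f'aws_instance.app["{n}"]': n for n in ["a", "b", "c"]}
fe_after  = {f'aws_instance.app["{n}"]': n for n in ["a", "c"]}
print('aws_instance.app["c"]' in fe_before and 'aws_instance.app["c"]' in fe_after)  # True
```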
5. Module versioning
Local modules (monorepo)
Use relative paths. No versioning - all modules are always at HEAD.
module "vpc" {
source = "../../modules/vpc"
}
Monorepo directory layout:
infrastructure/
modules/
vpc/
s3-bucket/
ecs-service/
rds/
environments/
dev/
main.tf
terraform.tfvars
staging/
main.tf
terraform.tfvars
prod/
main.tf
terraform.tfvars
Git-sourced modules
Pin to a tag, not main. Tags are immutable; main drifts.
module "vpc" {
source = "git::https://github.com/my-org/terraform-modules.git//modules/vpc?ref=v1.4.2"
}
Terraform Registry modules
Pin with a pessimistic version constraint. With ~> 20.0, the constraint allows minor and
patch updates within major version 20 but blocks the next major version; ~> 20.0.0 would
allow patch updates only.
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "~> 20.0"
}
Never use an unpinned Registry module in production. Major and minor version bumps can include breaking changes.
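The ~> ("pessimistic") operator's rule - the rightmost component given in the constraint may float upward, everything to its left is locked - can be sketched in a few lines. This is an illustrative simplification that ignores pre-release versions:

```python
def pessimistic_allows(constraint: str, version: str) -> bool:
    """Sketch of Terraform's ~> constraint semantics (no pre-release handling)."""
    base = [int(x) for x in constraint.removeprefix("~>").strip().split(".")]
    ver = [int(x) for x in version.split(".")]
    locked = base[:-1]  # every component left of the rightmost given one
    return ver[: len(locked)] == locked and ver[len(locked)] >= base[-1]

print(pessimistic_allows("~> 20.0", "20.7.1"))   # True  -- minor/patch may float
print(pessimistic_allows("~> 20.0", "21.0.0"))   # False -- major is locked
print(pessimistic_allows("~> 5.0.0", "5.0.9"))   # True  -- patch only
print(pessimistic_allows("~> 5.0.0", "5.1.0"))   # False
```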
6. Module interface contract rules
A module's variables.tf and outputs.tf are its public API. Treat them with
the same discipline as a REST API.
| Rule | Rationale |
|---|---|
| Add description to every variable and output | Callers should not need to read main.tf to understand an input |
| Use validation blocks for constrained inputs | Catch errors at plan time, not after apply |
| Do not change a variable's type in a patch release | Breaking change - bump the major version |
| Do not remove an output | Downstream configs may depend on it |
| Use sensitive = true for secret outputs | Prevents values appearing in plan output and logs |
| Avoid outputting raw IDs when the ARN is more useful | ARNs are globally unique and more composable for IAM policies |
variable "environment" {
description = "Deployment environment. Controls instance sizing and HA settings."
type = string
validation {
condition = contains(["dev", "staging", "prod"], var.environment)
error_message = "Valid values: dev, staging, prod"
}
}
output "db_password" {
description = "RDS master password. Store in secrets manager immediately."
value = random_password.db.result
sensitive = true
}
Quick reference - when to use which pattern
| Scenario | Pattern |
|---|---|
| Wrap a single AWS resource with secure defaults | Resource module |
| Bundle several resources into one deployable unit | Composition module |
| Enforce org standards over a community module | Wrapper module |
| Create N similar resources from a config map | Factory (for_each) |
| Same 3 worker nodes, interchangeable identity | count |
| Shared modules within one repo | Local path source |
| Shared modules across repos, need versioning | Git ref or Terraform Registry |