Code vs. Configuration
The philosophy behind separating Terraform Logic from Environment Data—and why it saves teams months of engineering time.
In traditional Terraform implementations, it’s common to see values hardcoded into modules or duplicative .tfvars files scattered across directories. This approach doesn’t scale for teams managing multiple environments (dev, staging, prod) across multiple regions and services.
IaC Console enforces a strict separation between Units (Logic) and Dimensions (Configuration)—a pattern that professional DevOps teams would otherwise spend significant time designing and implementing themselves.
The Problem with Monolithic Terraform
When code and configuration are mixed, teams end up with:
- Code Duplication — Separate folders for dev/staging/prod with copied Terraform files
- Configuration Drift — Copy-pasted files diverge over time as engineers make changes
- No Single Source of Truth — Can’t answer “what’s the instance type in staging?” without checking files
- Time Waste — Engineers spend hours managing file structures instead of building infrastructure
- Security Risks — Secrets and configs scattered across repositories
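In practice, the duplication pattern looks something like this (a hypothetical directory layout for illustration, not taken from a real repository):

```text
environments/
├── dev/
│   ├── main.tf          # Copied from staging, then edited
│   └── terraform.tfvars # instance_type = "t3.micro"
├── staging/
│   ├── main.tf          # Diverged from dev months ago
│   └── terraform.tfvars
└── prod/
    ├── main.tf          # Which copy is the source of truth?
    └── terraform.tfvars
```

Every change must be applied three times, and nothing enforces that the three copies stay in sync.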
The IaC Console Solution: Units + Dimensions
This pattern is what teams would build as “internal IaC tooling” after hitting scale problems. IaC Console provides it out-of-the-box.
Key Concepts: Units vs. Modules
Units and Modules serve different purposes in IaCConsole:
Units (Deployment Units)
- What: A complete, deployable Terraform configuration that defines infrastructure for a specific purpose
- Purpose: The “what you deploy” (e.g., `vpc`, `microservice-app`, `database-cluster`)
- Location: `units/[org-name]/[unit-name]/`
- Requires: `unit_manifest.json` defining required dimensions
- Example: A VPC unit that creates a VPC, subnets, route tables, and a flow logs bucket
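For illustration only, a `unit_manifest.json` might declare the dimensions a unit requires along these lines (the schema shown here is an assumption, not the authoritative format):

```json
{
  "required_dimensions": ["env", "datacenter"]
}
```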
Shared Modules (Reusable Components)
- What: Generic, reusable Terraform code that can be used by multiple units
- Purpose: The “building blocks” that units can compose (e.g.,
s3-bucket,iam-role,security-group) - Location:
shared-modules/[module-name]/ - Auto-linked: IaCConsole CLI automatically links the
shared-modules/folder to every unit’s execution directory - Example: An S3 module that creates bucket with policies, encryption, and versioning
Real-World Example
shared-modules/
└── s3-bucket/ # Reusable S3 module
├── main.tf # S3 bucket, policy, ABAC
├── variables.tf # Accepts: bucket_name, custom_policy
└── outputs.tf
units/
└── myorg/
├── vpc/ # Unit: VPC deployment
│ ├── main.tf # Uses s3-bucket module for VPC flow logs
│ ├── unit_manifest.json
│ └── outputs.tf
└── microservice/ # Unit: Microservice deployment
├── main.tf # Also uses s3-bucket module if needed
├── unit_manifest.json
└── outputs.tf
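The `s3-bucket` module’s interface could be sketched as follows; the `bucket_name` and `custom_policy` inputs and the `bucket_arn` output come from this document, while the resource name and descriptions are assumptions:

```hcl
# shared-modules/s3-bucket/variables.tf
variable "bucket_name" {
  type        = string
  description = "Name of the S3 bucket to create"
}

variable "custom_policy" {
  type        = string
  description = "JSON-encoded bucket policy document"
  default     = null # hypothetical default; the real module may require it
}

# shared-modules/s3-bucket/outputs.tf
output "bucket_arn" {
  description = "ARN of the created bucket, consumed by units"
  value       = aws_s3_bucket.this.arn # assumes the bucket resource is named "this"
}
```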
VPC Unit using shared S3 module:
# units/myorg/vpc/main.tf
module "flow_logs_bucket" {
  source        = "./shared-modules/s3-bucket" # Auto-linked by CLI!
  bucket_name   = "vpc-flow-logs-${var.iacconsole_env_name}"
  custom_policy = jsonencode({
    # VPC Flow Logs policy
  })
}

resource "aws_vpc" "main" {
  cidr_block = var.iacconsole_datacenter_data.vpc_cidr
}

resource "aws_flow_log" "main" {
  vpc_id               = aws_vpc.main.id
  log_destination      = module.flow_logs_bucket.bucket_arn
  log_destination_type = "s3"
  traffic_type         = "ALL"
}
Microservice Unit using the same S3 module:
# units/myorg/microservice/main.tf
module "app_storage" {
  source        = "./shared-modules/s3-bucket" # Same module, different use!
  bucket_name   = "app-data-${var.iacconsole_env_name}"
  custom_policy = jsonencode({
    # Application access policy
  })
}
Why this matters: By creating reusable modules, you write the S3 configuration logic once and reuse it across all units. The IaCConsole CLI automatically links the shared-modules/ folder (configurable via shared_modules_path in .iacconsolerc) to every unit’s execution directory.
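For instance, if your modules live in a non-default location, you could point the CLI at them. This snippet assumes `.iacconsolerc` is JSON-formatted; only the `shared_modules_path` key is taken from this document, and the path value is made up:

```json
{
  "shared_modules_path": "./infra/shared-modules"
}
```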
Module Source Path:
- ✅ Always use: `source = "./shared-modules/module-name"`
- ✅ With subdirectories: `source = "./shared-modules/networking/vpc"`
- ❌ Never use: `source = "../../shared-modules/module-name"`
The CLI mounts/links the shared-modules folder, so the path is always relative to the unit’s execution directory, regardless of where the actual folder is on disk.
1. Units (The Code)
A Unit is a pure Terraform module. It references dimension data through auto-generated variables. It does not know about “prod” or “dev”.
# units/myorg/ec2-app/main.tf
# NO variable definitions needed - IaCConsole CLI generates them automatically!
resource "aws_instance" "app" {
  instance_type = var.iacconsole_env_data.instance_type

  tags = {
    Environment = var.iacconsole_env_name
  }
}
Important: The variable definitions (iacconsole_*_vars.tf.json) and values (iacconsole_*.auto.tfvars.json) are both automatically generated by the IaCConsole CLI. The values are populated from your dimension data in the CMDB. You never need to define these variables manually in your Terraform code.
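As a rough sketch of the mechanics (the exact generated shape is an assumption), the two files for an `env` dimension might look like:

```json
// iacconsole_env_vars.tf.json (generated variable definitions)
{
  "variable": {
    "iacconsole_env_name": { "type": "string" },
    "iacconsole_env_data": { "type": "any" }
  }
}

// iacconsole_env.auto.tfvars.json (generated values from the CMDB)
{
  "iacconsole_env_name": "prod",
  "iacconsole_env_data": {
    "instance_type": "t3.large",
    "region": "us-east-1"
  }
}
```

Because both files are generated per invocation, `var.iacconsole_env_data.instance_type` resolves to the right value for whichever dimension you target.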
2. Dimensions (The Config)
A Dimension is a slice of configuration data stored in the IaCConsole Inventory. It defines the values for a specific context.
// Dimension: env:prod
{
  "instance_type": "t3.large",
  "region": "us-east-1"
}

// Dimension: env:dev
{
  "instance_type": "t3.micro",
  "region": "us-west-2"
}
When you run:
iacconsole-cli exec -o myorg -u ec2-app -d env:prod -- plan
The CLI automatically:
- Fetches the `env:prod` dimension data
- Generates variable definitions (`iacconsole_env_vars.tf.json`)
- Populates values (`iacconsole_env.auto.tfvars.json`)
- Runs Terraform with all variables pre-configured
Why this matters: By decoupling these, you can deploy the exact same Unit to 100 different environments just by creating new Dimensions, without copying a single line of HCL code.
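Concretely, targeting a different environment is just a different `-d` flag; the `env:dev` invocation below assumes that dimension exists alongside `env:prod`:

```shell
# Same unit, two environments: only the dimension changes
iacconsole-cli exec -o myorg -u ec2-app -d env:dev  -- plan
iacconsole-cli exec -o myorg -u ec2-app -d env:prod -- plan
```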