Arun's Blog

Automated Infrastructure Management with Terraform Backend and S3 Bucket Integration

Terraform · S3 · Infrastructure as Code
TL;DR

Store Terraform state files remotely in S3 with DynamoDB locking for team collaboration. Create an S3 bucket with versioning enabled, set up a DynamoDB table for state locking, configure the backend in your Terraform code, and run terraform init to migrate your state.

Introduction

When building infrastructure as code (IaC) with Terraform, Terraform needs to keep a record of the infrastructure it has created. It does this with a state file, which is merely a straightforward JSON document that outlines the resources Terraform has built and any modifications made to them. By default this file is kept on the local machine where Terraform commands are executed, and we all know that it is not 'if' my local machine goes down, but 'when'. Checking the state file into git or another source code management system is sometimes suggested as a fix, but state can contain sensitive values and merge conflicts are painful; a remote backend is the better answer.
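As an illustration, a heavily trimmed state file might look roughly like this (the exact fields vary by Terraform version, and the resource shown here is hypothetical):

```json
{
  "version": 4,
  "terraform_version": "1.5.0",
  "serial": 7,
  "resources": [
    {
      "mode": "managed",
      "type": "aws_s3_bucket",
      "name": "example",
      "instances": [
        { "attributes": { "bucket": "my-example-bucket" } }
      ]
    }
  ]
}
```

The "serial" counter increases with each change, which is how Terraform detects stale copies of state.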

Prerequisites

  • Terraform installed
  • An AWS account
  • AWS CLI installed and configured
  • An IAM user or role with the necessary permissions

Create S3 Bucket with Versioning

Using an S3 bucket as a backend improves collaboration: multiple team members can work against the same state file without passing copies around or resolving merge conflicts in git.

Note

S3 bucket names must be globally unique across all AWS accounts. Choose a naming convention that includes your organization name or project identifier to avoid conflicts.
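Since an invalid name only fails at creation time, it can be handy to sanity-check a candidate name locally first. Here is a rough bash sketch of the core naming rules (lowercase letters, digits, dots, and hyphens; 3 to 63 characters; alphanumeric at both ends). It does not cover every rule, such as the ban on IP-address-style names:

```shell
#!/usr/bin/env bash

# Rough check of the common S3 bucket naming rules; not exhaustive.
is_valid_bucket_name() {
  local name="$1"
  # Length must be between 3 and 63 characters.
  (( ${#name} >= 3 && ${#name} <= 63 )) || return 1
  # Lowercase letters, digits, dots, hyphens; alphanumeric first and last.
  [[ "$name" =~ ^[a-z0-9]([a-z0-9.-]*[a-z0-9])?$ ]] || return 1
  return 0
}

is_valid_bucket_name "my-tf-backend" && echo "ok"
is_valid_bucket_name "myTFbackend" || echo "rejected: uppercase not allowed"
```

Note in particular that uppercase letters are rejected, which matters for the bucket name used in the rest of this post.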

Via the AWS CLI, you can create an S3 bucket (names must be all lowercase, so a name like myTFbackend would be rejected; this post uses my-tf-backend instead):

aws s3 mb s3://my-tf-backend

Once the bucket is created, enable versioning so that you can revert to a previous state file if needed:

aws s3api put-bucket-versioning --bucket my-tf-backend --versioning-configuration Status=Enabled
Important

Always enable versioning on your state bucket. This allows you to recover from accidental state file corruption or deletion by reverting to a previous version.
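If the state file is ever corrupted, versioning is what lets you roll back. A sketch of the recovery using the AWS CLI (bucket and key names match the examples in this post; VERSION_ID is a placeholder you would copy from the list output):

```shell
# List all versions of the state object to find a known-good one.
aws s3api list-object-versions \
  --bucket my-tf-backend \
  --prefix terraform.tfstate

# Copy that version back over the current object.
aws s3api copy-object \
  --bucket my-tf-backend \
  --copy-source "my-tf-backend/terraform.tfstate?versionId=VERSION_ID" \
  --key terraform.tfstate
```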

Create DynamoDB Table

To enable state locking, which blocks write operations to your state file while another write operation is running (for example, when two or more people attempt to modify the state file at the same time), we will use a DynamoDB table for locking and consistency:

aws dynamodb create-table \
  --table-name tfStateLock \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --provisioned-throughput ReadCapacityUnits=1,WriteCapacityUnits=1
Pro Tip

Consider using on-demand capacity mode for the DynamoDB table instead of provisioned throughput. This is more cost-effective for state locking since lock operations are infrequent: --billing-mode PAY_PER_REQUEST
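With on-demand mode, the create-table call drops the throughput flags entirely:

```shell
aws dynamodb create-table \
  --table-name tfStateLock \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST
```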

Configure Terraform Backend

Now that the stage is set, with an S3 bucket to hold the state file and a DynamoDB table to handle locks, we can point the Terraform configuration (e.g. main.tf) at these resources. Note that the bucket and table names must match exactly, with no stray whitespace:

terraform {
  backend "s3" {
    bucket         = "my-tf-backend"
    key            = "terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "tfStateLock"
  }
}
Note

The "key" parameter specifies the path within the bucket where the state file will be stored. Use a descriptive path like "project-name/environment/terraform.tfstate" for better organization when managing multiple projects.
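One common pattern for multiple environments is to leave the key out of the backend block and supply it at init time with -backend-config (the project and environment names below are illustrative):

```shell
# dev environment
terraform init -backend-config="key=myproject/dev/terraform.tfstate"

# prod environment (in a separate working directory, or after -reconfigure)
terraform init -backend-config="key=myproject/prod/terraform.tfstate"
```

This keeps one set of Terraform code while isolating each environment's state under its own path in the bucket.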

Initialize Terraform

Even though you may have already initialized your environment before switching the state backend to S3, you should re-initialize whenever the backend configuration changes; Terraform will detect the change and offer to migrate the existing state file to the S3 bucket:

terraform init

Troubleshooting

"Error acquiring state lock"
  Possible cause: A previous Terraform operation crashed, or another user holds the lock.
  Solution: Wait for the other operation to complete, or run terraform force-unlock LOCK_ID once you have confirmed no other operations are running.

"Access Denied" when initializing the backend
  Possible cause: IAM permissions are insufficient for S3 or DynamoDB.
  Solution: Ensure your IAM user/role has s3:GetObject, s3:PutObject, and s3:DeleteObject on the bucket, and dynamodb:GetItem, dynamodb:PutItem, and dynamodb:DeleteItem on the table.

"Bucket does not exist"
  Possible cause: The S3 bucket was not created, or the bucket name is wrong.
  Solution: Verify the bucket exists with aws s3 ls. Remember that bucket names must be globally unique.

State file corruption after migration
  Possible cause: The migration was interrupted, or there were network issues.
  Solution: Use S3 versioning to restore the previous version of the state file. Always back up state before major operations.

"Backend configuration changed"
  Possible cause: Backend settings were modified without re-initialization.
  Solution: Run terraform init -reconfigure to update the backend configuration, or terraform init -migrate-state to move existing state to the new backend.