Managing Your Terraform State File

I started experimenting with GitHub Actions in my Using GitHub to Manage Kubernetes post, and did some tinkering with Terraform in my How to Use Terraform to Deploy a Python Script to AWS Lambda post. As I used Terraform more, I realized how useful it would be to combine it with GitHub Actions. When I started to do this, I ran into the problem of how to manage Terraform state files, which turns out to be a rather common problem.

I’ve been managing all of my Kubernetes configurations using kubectl and then started migrating to GitHub Actions. While being able to manage configurations with simple commits to GitHub has been great, the setup for this is a little shaky. After looking around at the various options for storing the state file, I happened upon an article that explained how to set up AWS S3 and DynamoDB for storing the state and lock files.

After going through this example, I was left with an aws_resources.tf file that looked like this:

# IAM Requirements : S3
# - CreateBucket
# - PutBucketPublicAccessBlock
# - PutBucketEncryption
# - PutBucketVersioning
resource "aws_s3_bucket" "terraform_state" {
  bucket = "my-k8-tf-state"
 
  # Prevent accidental deletion of this S3 bucket
  lifecycle {
    prevent_destroy = true
  }
}

resource "aws_s3_bucket_versioning" "enabled" {
  bucket = aws_s3_bucket.terraform_state.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "default" {
  bucket = aws_s3_bucket.terraform_state.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

resource "aws_s3_bucket_public_access_block" "public_access" {
  bucket                  = aws_s3_bucket.terraform_state.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

# IAM Requirements : Dynamo
# - DescribeTable
# - CreateTable
resource "aws_dynamodb_table" "terraform_locks" {
  name         = "my-k8-tf-locks"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}

I believe the article mentions ensuring you have a role with predefined AWS policies assigned, but I dug a little deeper in the CloudTrail log to find the specific permissions required for the above. Note that each resource has its IAM requirements listed as comments. With my DynamoDB table and S3 bucket in place, I could now set up the backend.
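Those per-resource permissions can be collected into a single least-privilege IAM policy. Below is a sketch using the AWS CLI; the policy name terraform-backend-bootstrap is made up, and note that the IAM action matching the PutBucketEncryption CloudTrail event is, to my understanding, s3:PutEncryptionConfiguration. Swap in your own bucket and table names.

```shell
# Sketch: a bootstrap policy covering only the permissions the two
# resources above need. The policy name is hypothetical.
aws iam create-policy \
  --policy-name terraform-backend-bootstrap \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": [
          "s3:CreateBucket",
          "s3:PutBucketPublicAccessBlock",
          "s3:PutEncryptionConfiguration",
          "s3:PutBucketVersioning"
        ],
        "Resource": "arn:aws:s3:::my-k8-tf-state"
      },
      {
        "Effect": "Allow",
        "Action": [
          "dynamodb:CreateTable",
          "dynamodb:DescribeTable"
        ],
        "Resource": "arn:aws:dynamodb:*:*:table/my-k8-tf-locks"
      }
    ]
  }'
```

Scoping the Resource fields to the exact bucket and table ARNs keeps this policy from granting anything beyond the backend bootstrap.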

Creating a Terraform Backend

With the above AWS resources created, I then created a new backend.tf file that looked like the following:

provider "aws" {
  region = "us-east-2"
}

terraform {
  backend "s3" {
    # Replace this with your bucket name!
    bucket = "my-k8-tf-state"
    key    = "global/s3/terraform.tfstate"
    region = "us-east-2"

    # Replace this with your DynamoDB table name!
    dynamodb_table = "my-k8-tf-locks"
    encrypt        = true
  }
}

With the backend configured, I could now get rid of the local terraform.tfstate file and instead use the state information stored in AWS.
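The switch itself happens through terraform init: it detects that a backend was added and offers to copy the existing local state up to S3. On recent Terraform versions you can make this explicit with the -migrate-state flag:

```shell
# Re-initialize against the new "s3" backend; Terraform prompts to copy
# the existing local terraform.tfstate into the bucket.
terraform init -migrate-state

# Once the copy succeeds, the local state files are just leftovers
# and can be deleted.
rm -f terraform.tfstate terraform.tfstate.backup
```

One chicken-and-egg note: the bucket and table have to exist before the backend can use them, which is why they were created first (with local state) in the previous step.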

Checking Our Resources in AWS

After doing a simple terraform apply, I checked the S3 bucket, and sure enough, my state file was there.

When I looked at the DynamoDB table, I could also see that a lock entry was present.
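Both checks can be done from the AWS CLI as well as the console; a quick sketch, assuming the bucket, key, and table names from the backend block above:

```shell
# List the state object the backend wrote (the prefix matches the
# "key" value in backend.tf)
aws s3 ls s3://my-k8-tf-state/global/s3/

# Inspect the lock table; outside of an in-flight operation this
# typically holds a state-digest item rather than an active lock
aws dynamodb scan --table-name my-k8-tf-locks
```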

This all seemed too easy! Now that I have a way to manage my Terraform state file in a central place, I’m going to see if I can integrate this into my GitHub Actions in my next post!