Terraform Import Made Easy: Conquering Infrastructure Management Complexities

I'll admit I'm a bit of a rookie when it comes to Terraform and the whole infrastructure-as-code scene, but I'm learning. One of the most powerful tools I've found for converting existing infrastructure into Terraform is the terraform import command. After too many rounds of running an import and manually copying information out of my state file, I felt there had to be a way to simplify Terraform import.

I thought about writing a script to do the work, but that seemed like a challenge of its own. Then I noticed a new experimental feature for imports: the -generate-config-out flag on terraform plan. This CLI flag is listed as experimental, but it still worked nicely for me.

Terraform Import Blocks

Terraform 1.5.0 added import blocks to the language. I wasn't quite sure how to use them until I found an example like this:

import {
    to = <Terraform resource address>
    id = <resource identifier in the target system>
}

This block seemed simple enough to me, so I kept going.
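As a concrete illustration (the resource name and namespace here are hypothetical, not from my cluster), an import block for an existing Kubernetes namespace might look like this. For kubernetes_namespace, the ID is simply the namespace name:

```hcl
# Hypothetical example: adopt an existing "monitoring" namespace
# into a Terraform resource named kubernetes_namespace.monitoring.
# The ID format varies per resource type; check the provider docs.
import {
  to = kubernetes_namespace.monitoring
  id = "monitoring"
}
```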

Terraform Kubernetes Deployment Resource

As I mentioned previously in my Infrastructure as Code with Terraform and GitHub Actions: A Kubernetes Case Study post, I’m working towards managing more of my infrastructure with Terraform. With that post, I was able to interact with my Kubernetes cluster. It is now time to start moving resources over to Terraform.

I wanted to start with a super simple deployment of my ubuntu host:

apiVersion: apps/v1
kind: Deployment
metadata:
  generation: 1
  labels:
    apps: ubuntu
  name: ubuntu
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: ubuntu
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: ubuntu
      namespace: default
    spec:
      containers:
      - command:
        - /bin/sleep
        - 3651d
        env:
        - name: MY_SERVICE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.serviceAccountName
        image: registry.digitalocean.com/k8-registry/ubuntu-client:latest
        imagePullPolicy: Always
        name: ubuntu
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 30

This super simple deployment just spins up an Ubuntu pod in case I want a server to tinker with inside the cluster. It is deployed in the default namespace as ubuntu:

% kubectl get deploy
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
log-gen        1/1     1            1           572d
nano-web       1/1     1            1           248d
perf-testing   1/1     1            1           312d
twitterbot     1/1     1            1           349d
ubuntu         1/1     1            1           100d

If you have looked at the kubernetes_deployment resource in the Terraform provider, you'll see that even this super simple deployment takes quite a few arguments to define.
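To give a sense of the shape before generating anything, here is a hand-trimmed sketch of the same deployment in HCL, cut down to roughly the minimum nesting (the provider accepts many more optional arguments, as the generated file later in this post shows):

```hcl
# Hand-written sketch of the ubuntu deployment, trimmed to the
# essentials. Note how deeply nested even a minimal spec gets.
resource "kubernetes_deployment" "default_ubuntu" {
  metadata {
    name      = "ubuntu"
    namespace = "default"
  }

  spec {
    replicas = 1

    selector {
      match_labels = {
        app = "ubuntu"
      }
    }

    template {
      metadata {
        labels = {
          app = "ubuntu"
        }
      }

      spec {
        container {
          name    = "ubuntu"
          image   = "registry.digitalocean.com/k8-registry/ubuntu-client:latest"
          command = ["/bin/sleep", "3651d"]
        }
      }
    }
  }
}
```

Writing all of that by hand for every existing resource is exactly the tedium I wanted to avoid.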

Planning My Import for Deployment

I lost interest when I thought of having to do an import and then copy all of the attributes out of my Terraform state file. Then I read more about import blocks and the related -generate-config-out option. It turns out you can use the two together to generate a new resource definition for you.

I created my import block in a file named import.tf:

import {
   to = kubernetes_deployment.default_ubuntu
   id = "default/ubuntu"
}

In this import block, I’m telling Terraform to import my ubuntu deployment that is in the default namespace. This should be imported as a kubernetes_deployment resource called default_ubuntu.

To run the import, you actually run a plan while supplying the file you'd like -generate-config-out to write the imported resources to. In my case, I just told it to use generated_resources.tf. Below is the result of the command:

% terraform plan -var-file=vars.tfvars -generate-config-out=generated_resources.tf                               
kubernetes_deployment.default_ubuntu: Preparing import... [id=default/ubuntu]
kubernetes_namespace.example: Refreshing state... [id=my-first-namespace]
kubernetes_deployment.default_ubuntu: Refreshing state... [id=default/ubuntu]

Planning failed. Terraform encountered an error while generating this plan.

╷
│ Warning: Config generation is experimental
│ 
│ Generating configuration during import is currently experimental, and the generated configuration format may change in future versions.
╵
╷
│ Error: spec.0.template.0.spec.0.active_deadline_seconds must be greater than 0
│ 
│   with kubernetes_deployment.default_ubuntu,
│   on generated_resources.tf line 43:
│   (source code not available)
│ 
╵

The warning just reminds us that this feature is experimental. The error looks like an easy fix. The resulting generated_resources.tf looks a little something like this:

# __generated__ by Terraform
# Please review these resources and move them into your main configuration files.

# __generated__ by Terraform
resource "kubernetes_deployment" "default_ubuntu" {
  wait_for_rollout = null
  metadata {
    annotations   = {}
    generate_name = null
    labels = {
      apps = "ubuntu"
    }
    name      = "ubuntu"
    namespace = "default"
  }
  spec {
    min_ready_seconds         = 0
    paused                    = false
    progress_deadline_seconds = 600
    replicas                  = "1"
    revision_history_limit    = 10
    selector {
      match_labels = {
        app = "ubuntu"
      }
    }
    strategy {
      type = "RollingUpdate"
      rolling_update {
        max_surge       = "25%"
        max_unavailable = "25%"
      }
    }
    template {
      metadata {
        annotations   = {}
        generate_name = null
        labels = {
          app = "ubuntu"
        }
        name      = null
        namespace = "default"
      }
      spec {
        active_deadline_seconds          = 0
        automount_service_account_token  = false
        dns_policy                       = "ClusterFirst"
        enable_service_links             = false
        host_ipc                         = false
        host_network                     = false
        host_pid                         = false
        hostname                         = null
        node_name                        = null
        node_selector                    = {}
        priority_class_name              = null
        restart_policy                   = "Always"
        runtime_class_name               = null
        scheduler_name                   = "default-scheduler"
        service_account_name             = null
        share_process_namespace          = false
        subdomain                        = null
        termination_grace_period_seconds = 30
        container {
          args                       = []
          command                    = ["/bin/sleep", "3651d"]
          image                      = "registry.digitalocean.com/k8-registry/ubuntu-client:latest"
          image_pull_policy          = "Always"
          name                       = "ubuntu"
          stdin                      = false
          stdin_once                 = false
          termination_message_path   = "/dev/termination-log"
          termination_message_policy = "File"
          tty                        = false
          working_dir                = null
          env {
            name  = "MY_SERVICE_NAME"
            value = null
            value_from {
              field_ref {
                api_version = "v1"
                field_path  = "spec.serviceAccountName"
              }
            }
          }
          resources {
            limits   = {}
            requests = {}
          }
        }
      }
    }
  }
  timeouts {
    create = null
    delete = null
    update = null
  }
}

The error seemed simple enough to fix in the file: I just changed active_deadline_seconds from 0 to 1.
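For reference, this is the one-line change inside spec.template.spec of the generated file:

```hcl
# Before (rejected by the provider's "must be greater than 0" check):
#   active_deadline_seconds = 0
# After my edit:
active_deadline_seconds = 1
```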

Deploying my Imported Resource

I wanted to make sure this deployment actually works so I first deleted my deployment sitting in Kubernetes.

% kubectl delete deployment ubuntu                                                
deployment.apps "ubuntu" deleted

From there, I committed my changes and watched the GitHub Action do all of the work. The problem is that I got an error on the apply.

Plan: 1 to add, 0 to change, 0 to destroy.
kubernetes_deployment.default_ubuntu: Creating...

Error: Failed to create deployment: Deployment.apps "ubuntu" is invalid: spec.template.spec.activeDeadlineSeconds: Forbidden: activeDeadlineSeconds in ReplicaSet is not Supported

  with kubernetes_deployment.default_ubuntu,
  on deployments.tf line 1, in resource "kubernetes_deployment" "default_ubuntu":
   1: resource "kubernetes_deployment" "default_ubuntu" {

Error: Terraform exited with code 1.
Error: Process completed with exit code 1.

The import error makes a little more sense now: Kubernetes rejects activeDeadlineSeconds in this context entirely. I edited my generated_resources.tf file and just deleted this line instead of worrying about setting a value:

        active_deadline_seconds          = 1

I did another commit and push and watched the logs in my GitHub Action again.

Plan: 1 to add, 0 to change, 0 to destroy.
kubernetes_deployment.default_ubuntu: Creating...
kubernetes_deployment.default_ubuntu: Creation complete after 8s [id=default/ubuntu]

This looks good, so now let's check and see if it deployed:

% kubectl get pod             
NAME                            READY   STATUS    RESTARTS          AGE
log-gen-59f94c9d86-85tds        2/2     Running   120 (9h ago)      23d
nano-web-67d44d5bbd-9q8mt       2/2     Running   107 (3d16h ago)   23d
nginx-0                         2/2     Running   0                 23d
nginx-1                         2/2     Running   2 (10d ago)       23d
perf-testing-5f8cb5797d-8ls4x   1/1     Running   0                 23d
reverse-proxy-0                 1/1     Running   0                 23d
reverse-proxy-1                 1/1     Running   0                 23d
reverse-proxy-2                 1/1     Running   0                 3d15h
twitterbot-54d6c46744-nxlrw     2/2     Running   0                 23d
ubuntu-65b8d8bdcb-ghr8c         1/1     Running   0                 20s

The deployment is there, and the ubuntu pod is up with an age of just 20 seconds. My pod has redeployed!

This is How to Simplify Terraform Import

This is indeed the way forward! I'm going to run a few more imports to make sure it really was this simple and not a fluke. My next steps will be to import my other simple deployments. From there, I'll tackle some of my reverse proxies that have configmaps and more!
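If the pattern holds, those next imports should just be a matter of stacking more import blocks. A rough sketch of what that might look like (the resource names and IDs below are placeholders, not my verified values):

```hcl
# Hypothetical next imports -- addresses and IDs are placeholders.
# Another simple deployment, imported the same way as ubuntu:
import {
  to = kubernetes_deployment.log_gen
  id = "default/log-gen"
}

# A configmap for one of the reverse proxies; the kubernetes
# provider also uses the "namespace/name" ID format for these.
import {
  to = kubernetes_config_map.reverse_proxy
  id = "default/reverse-proxy-config"
}
```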

I’m super excited to NOT have to manually pick through a Terraform state file to create resource definitions in my Terraform.