Tfedit - A refactoring tool for Terraform


Features

Easily refactor Terraform configurations in a scalable way.

  • CLI-friendly: Read HCL from stdin, apply filters, and write the results to stdout, making it easy to pipe and combine with other commands.
  • Keep comments: You can update many existing Terraform configurations without losing comments.
  • Available operations:
    • filter awsv4upgrade: Upgrade configurations to AWS provider v4. Only the aws_s3_bucket refactor is supported.

Some of the rules for upgrading to AWS provider v4 have not been implemented yet. The current implementation status is as follows:

S3 Bucket Refactor

  • acceleration_status Argument
  • acl Argument
  • cors_rule Argument
  • grant Argument
  • lifecycle_rule Argument
  • logging Argument
  • object_lock_configuration Argument
  • policy Argument
  • replication_configuration Argument
  • request_payer Argument
  • server_side_encryption_configuration Argument
  • versioning Argument
  • website, website_domain, and website_endpoint Arguments

Although the initial goal of this project is to provide a way to bulk-refactor the aws_s3_bucket resource required by breaking changes in AWS provider v4, the project scope is not limited to specific use cases. It's by no means intended to be an upgrade tool for all your providers. Instead of covering everything you need, it provides reusable building blocks for Terraform refactoring and shows examples of how to compose them in real-world use cases.

As you know, Terraform refactoring often requires not only configuration changes but also Terraform state migrations. Doing those by hand is error-prone and not suitable for CI/CD. For declarative Terraform state migration, use tfmigrate.

If you are not ready for the upgrade, you can pin version constraints in your Terraform configurations with tfupdate.

Install

Source

If you have a Go 1.17+ development environment:

$ go install github.com/minamijoyo/tfedit@latest
$ tfedit version

Usage

$ tfedit --help
A refactoring tool for Terraform

Usage:
  tfedit [command]

Available Commands:
  completion  Generate the autocompletion script for the specified shell
  filter      Apply a built-in filter
  help        Help about any command
  version     Print version

Flags:
  -f, --file string   A path of input file (default "-")
  -h, --help          help for tfedit
  -u, --update        Update files in-place

Use "tfedit [command] --help" for more information about a command.
$ tfedit filter --help
Apply a built-in filter

Arguments:
  FILTER_TYPE    A type of filter.
                 Valid values are:
                 - awsv4upgrade
                   Upgrade configurations to AWS provider v4.
                   Only aws_s3_bucket refactor is supported.

Usage:
  tfedit filter <FILTER_TYPE> [flags]

Flags:
  -h, --help   help for filter

Global Flags:
  -f, --file string   A path of input file (default "-")
  -u, --update        Update files in-place

By default, the input is read from stdin and the output is written to stdout. You can also read a file with the -f flag and update the file in-place with the -u flag.

Example

Given the following file:

$ cat ./test-fixtures/awsv4upgrade/aws_s3_bucket.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.0.0"
    }
  }
}

provider "aws" {
  region = "ap-northeast-1"
}

resource "aws_s3_bucket" "example" {
  bucket = "minamijoyo-tf-aws-v4-test1"
  acl    = "private"

  logging {
    target_bucket = "minamijoyo-tf-aws-v4-test1-log"
    target_prefix = "log/"
  }
}
$ tfedit filter awsv4upgrade -f ./test-fixtures/awsv4upgrade/aws_s3_bucket.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.0.0"
    }
  }
}

provider "aws" {
  region = "ap-northeast-1"
}

resource "aws_s3_bucket" "example" {
  bucket = "minamijoyo-tf-aws-v4-test1"

}

resource "aws_s3_bucket_acl" "example" {
  bucket = aws_s3_bucket.example.id
  acl    = "private"
}

resource "aws_s3_bucket_logging" "example" {
  bucket = aws_s3_bucket.example.id

  target_bucket = "minamijoyo-tf-aws-v4-test1-log"
  target_prefix = "log/"
}

License

MIT

Comments
  • aws_s3_bucket_versioning with mfa_delete needs status = Enabled


    resource "aws_s3_bucket" "global_cloudtrail_logs" {
      bucket        = "cloudtrail-logs"
    
      versioning {
        mfa_delete = true
      }
    }
    

    Is translated to:

    resource "aws_s3_bucket" "global_cloudtrail_logs" {
      bucket = "cloudtrail-logs"
    }
    
    resource "aws_s3_bucket_versioning" "global_cloudtrail_logs" {
      bucket = aws_s3_bucket.global_cloudtrail_logs.id
    
      versioning_configuration {
        mfa_delete = "Enabled"
      }
    }
    

    This isn't quite correct; it should also have status:

    resource "aws_s3_bucket_versioning" "whim_global_cloudtrail_logs" {
      bucket = aws_s3_bucket.whim_global_cloudtrail_logs.id
    
      versioning_configuration {
        status     = "Enabled"
        mfa_delete = "Enabled"
      }
    }
    

    Relates to #40

  • mfa_delete is set incorrectly on aws_s3_bucket_versioning resources


    When upgrading an aws_s3_bucket resource that uses MFA delete, the aws_s3_bucket_versioning resource that gets created sets the mfa_delete value to true when it should be "Enabled".
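    The mapping described in this and the previous report can be sketched as a translation from v3 boolean arguments to the v4 string enums. This is a hypothetical Python illustration, not tfedit's actual (Go) implementation; the function name is made up:

```python
def to_versioning_configuration(versioning):
    """Translate a v3 `versioning` block (as a dict) into v4
    `versioning_configuration` arguments.

    Hypothetical helper for illustration; not tfedit's actual code.
    """
    config = {}
    # v3 `enabled` (bool) becomes v4 `status` ("Enabled" / "Suspended").
    if "enabled" in versioning:
        config["status"] = "Enabled" if versioning["enabled"] else "Suspended"
    # v3 `mfa_delete` (bool) becomes a string enum in v4, and MFA delete
    # only makes sense with versioning enabled, so set `status` as well.
    if versioning.get("mfa_delete"):
        config["mfa_delete"] = "Enabled"
        config["status"] = "Enabled"
    return config
```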

  • Custom provider not copied to new resources


    If you set a provider on an S3 resource, it's not copied to the child S3 resources:

    resource "aws_s3_bucket" "bucket" {
      provider = aws.ohio
      bucket   = "mybucket"
      acl      = "private"
    }
    

    After migration:

    resource "aws_s3_bucket" "bucket" {
      provider = aws.ohio
      bucket   = "mybucket"
    }
    
    resource "aws_s3_bucket_acl" "bucket" {
      bucket = aws_s3_bucket.bucket.id
      acl    = "private"
    }
    

    This will cause an error when you try to import:

    │ Error: error getting S3 bucket ACL (bucket,private): AuthorizationHeaderMalformed: The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'us-east-2'
    
  • aws_s3_bucket_lifecycle_configuration: empty filter and tags causes a drift on import


    Summary

    When aws_s3_bucket.lifecycle_rule.filter and tags were empty in AWS v3, importing aws_s3_bucket_lifecycle_configuration in AWS v4 with tfmigrate plan fails due to a detected drift.

    When both filter and tags were empty in v3, just removing the empty filter in v4 converges the drift. This looks like a problem similar to #29, but it is actually a different one.

    Version

    $ tfedit version
    0.0.3
    
    $ tfmigrate -v
    0.3.3
    
    $ terraform -v
    Terraform v1.2.1
    on linux_amd64
    + provider registry.terraform.io/hashicorp/aws v4.15.1
    

    Configuration

    AWS v3.74.3

    resource "aws_s3_bucket" "example" {
      bucket = "tfedit-test"
    
      lifecycle_rule {
        id      = "log"
        enabled = true
        prefix  = ""
        tags    = {}
    
        noncurrent_version_transition {
          days          = 30
          storage_class = "GLACIER"
        }
    
        noncurrent_version_expiration {
          days = 90
        }
      }
    }
    

    AWS v4.15.1

    resource "aws_s3_bucket" "example" {
      bucket = "tfedit-test"
    }
    
    resource "aws_s3_bucket_lifecycle_configuration" "example" {
      bucket = aws_s3_bucket.example.id
    
      rule {
        id = "log"
    
        noncurrent_version_transition {
          storage_class   = "GLACIER"
          noncurrent_days = 30
        }
    
        noncurrent_version_expiration {
          noncurrent_days = 90
        }
        status = "Enabled"
    
        filter {
    
          and {
            prefix = ""
            tags   = {}
          }
        }
      }
    }
    

    Expected behavior

    $ terraform plan -out=tmp.tfplan
    $ terraform show -json tmp.tfplan | tfedit migration fromplan -o=tfmigrate_fromplan.hcl
    $ cat tfmigrate_fromplan.hcl
    migration "state" "fromplan" {
      actions = [
        "import aws_s3_bucket_lifecycle_configuration.example tfedit-test",
      ]
    }
    
    $ tfmigrate plan tfmigrate_fromplan.hcl
    (snip.)
    YYYY/MM/DD hh:mm:ss [INFO] [migrator] state migrator plan success!
    

    Actual behavior

    $ tfmigrate plan tfmigrate_fromplan.hcl
    2022/05/27 09:47:02 [INFO] [runner] load migration file: tfmigrate_fromplan.hcl
    2022/05/27 09:47:02 [INFO] [migrator] start state migrator plan
    2022/05/27 09:47:02 [INFO] [migrator@.] terraform version: 1.2.1
    2022/05/27 09:47:02 [INFO] [migrator@.] initialize work dir
    2022/05/27 09:47:05 [INFO] [migrator@.] get the current remote state
    2022/05/27 09:47:06 [INFO] [migrator@.] override backend to local
    2022/05/27 09:47:06 [INFO] [executor@.] create an override file
    2022/05/27 09:47:06 [INFO] [migrator@.] creating local workspace folder in: terraform.tfstate.d/default
    2022/05/27 09:47:06 [INFO] [executor@.] switch backend to local
    2022/05/27 09:47:10 [INFO] [migrator@.] compute a new state
    2022/05/27 09:47:24 [INFO] [migrator@.] check diffs
    2022/05/27 09:47:39 [ERROR] [migrator@.] unexpected diffs
    2022/05/27 09:47:39 [INFO] [executor@.] remove the override file
    2022/05/27 09:47:39 [INFO] [executor@.] remove the workspace state folder
    2022/05/27 09:47:39 [INFO] [executor@.] switch back to remote
    terraform plan command returns unexpected diffs: failed to run command (exited 2): terraform plan -state=/tmp/tmp783177411 -out=/tmp/tfplan2179793936 -input=false -no-color -detailed-exitcode
    stdout:
    aws_s3_bucket.example: Refreshing state... [id=tfedit-test]
    aws_s3_bucket_lifecycle_configuration.example: Refreshing state... [id=tfedit-test]
    
    Terraform used the selected providers to generate the following execution
    plan. Resource actions are indicated with the following symbols:
      ~ update in-place
    
    Terraform will perform the following actions:
    
      # aws_s3_bucket_lifecycle_configuration.example will be updated in-place
      ~ resource "aws_s3_bucket_lifecycle_configuration" "example" {
            id     = "tfedit-test"
            # (1 unchanged attribute hidden)
    
          ~ rule {
                id     = "log"
                # (1 unchanged attribute hidden)
    
              ~ filter {
                  + and {}
                }
    
    
                # (2 unchanged blocks hidden)
            }
        }
    
    Plan: 0 to add, 1 to change, 0 to destroy.
    
    ─────────────────────────────────────────────────────────────────────────────
    
    Saved the plan to: /tmp/tfplan2179793936
    
    To perform exactly these actions, run the following command to apply:
        terraform apply "/tmp/tfplan2179793936"
    
    stderr:
    
  • aws_s3_bucket_website_configuration: An argument named "routing_rules" is not expected here.

    Summary

    In AWS v3, aws_s3_bucket.website.routing_rules is a string containing a JSON array of routing rules. In AWS v4, aws_s3_bucket_website_configuration.routing_rule is a block. We need to parse the JSON and build the corresponding block representation.

    https://registry.terraform.io/providers/hashicorp/aws/3.74.3/docs/resources/s3_bucket#website
    https://registry.terraform.io/providers/hashicorp/aws/4.14.0/docs/resources/s3_bucket_website_configuration
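    The required conversion can be sketched as parsing the JSON array and mapping each CamelCase key to the snake_case argument of the v4 block. A hypothetical Python sketch of the idea, not tfedit's actual code:

```python
import json

def routing_rules_to_blocks(routing_rules_json):
    """Parse a v3 `routing_rules` JSON string into data for v4 `routing_rule`
    blocks. Hypothetical sketch: converts each CamelCase JSON key (e.g.
    "KeyPrefixEquals") into the snake_case argument name of the v4 block.
    """
    def snake(name):
        return "".join("_" + c.lower() if c.isupper() else c for c in name).lstrip("_")

    blocks = []
    for rule in json.loads(routing_rules_json):
        # Each rule has "Condition" and/or "Redirect" sections, which map to
        # nested `condition` and `redirect` blocks in v4.
        blocks.append({snake(section): {snake(k): v for k, v in args.items()}
                       for section, args in rule.items()})
    return blocks
```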

    Version

    $ tfedit version
    0.0.3
    
    $ terraform -v
    Terraform v1.1.9
    on darwin_amd64
    + provider registry.terraform.io/hashicorp/aws v4.14.0
    

    Expected behavior

    tmp/iss30/main.tf

    before

    resource "aws_s3_bucket" "example" {
      bucket = "tfedit-test"
    
      website {
        index_document = "index.html"
        error_document = "error.html"
    
        routing_rules = <<EOF
    [{
        "Condition": {
            "KeyPrefixEquals": "docs/"
        },
        "Redirect": {
            "ReplaceKeyPrefixWith": "documents/"
        }
    }]
    EOF
      }
    }
    
    $ cat tmp/iss30/main.tf | tfedit filter awsv4upgrade
    

    after

    resource "aws_s3_bucket" "example" {
      bucket = "tfedit-test"
    }
    
    resource "aws_s3_bucket_website_configuration" "example" {
      bucket = aws_s3_bucket.example.id
    
      index_document {
        suffix = "index.html"
      }
    
      error_document {
        key = "error.html"
      }
    
      routing_rule {
        condition {
          key_prefix_equals = "docs/"
        }
        redirect {
          replace_key_prefix_with = "documents/"
        }
      }
    }
    

    Actual behavior

    $ cat tmp/iss30/main.tf | tfedit filter awsv4upgrade
    
    resource "aws_s3_bucket" "example" {
      bucket = "tfedit-test"
    }
    
    resource "aws_s3_bucket_website_configuration" "example" {
      bucket = aws_s3_bucket.example.id
    
      index_document {
        suffix = "index.html"
      }
    
      error_document {
        key = "error.html"
      }
    
    
      routing_rules = <<EOF
    [{
        "Condition": {
            "KeyPrefixEquals": "docs/"
        },
        "Redirect": {
            "ReplaceKeyPrefixWith": "documents/"
        }
    }]
    EOF
    }
    
  • Complete all primitive top-level block types


    Add the following block types:

    • variable
    • output
    • locals
    • module
    • terraform
    • moved

    These are just type definitions and no useful methods have been implemented yet, but they provide a good starting point.

  • Redesigning the interface as a library


    As we expanded support for other block types, we began to see a lot of code duplication. In addition, I found it difficult for the current implementation to handle multiple block types in a single block filter. I'd like to stop here and redesign the interface as a library. The design goals are as follows:

    • To be able to extend block types easily
    • To be able to compose general-purpose block filters orthogonally without having to implement them for each block type
    • To be able to write filters that can process across multiple block types
    • To be able to use derived types for filters of specific block types

    Despite the massive breaking changes to the library interface included in this PR, I believe the CLI's current behavior has not changed at all.

  • Rename s3_force_path_style to s3_use_path_style in provider aws block


    https://registry.terraform.io/providers/hashicorp/aws/latest/docs/guides/version-4-upgrade#s3_use_path_style

    This attribute is only needed when mocking for testing, so we currently depend on the hcledit CLI for acceptance testing. We can remove that dependency by adding a rewrite rule as a filter.

    This is also an experiment in extending the filter beyond the resource block as a refactoring library.

  • Fix invalid filter and tags for aws_s3_bucket_lifecycle_configuration


    Fixes #29, #35

    Non-empty tags should be wrapped in an and block.

    When both prefix and tags are empty but defined, they result in a migration plan diff, so remove them and emit an empty filter.
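    The decision rule above can be sketched as follows. This is a hypothetical Python illustration of the rule, not tfedit's actual code; the function name is made up:

```python
def build_lifecycle_filter(prefix, tags):
    """Decide the shape of the v4 `filter` block from v3 `prefix`/`tags`.

    Hypothetical sketch of the rule described above, not tfedit's code.
    """
    if tags:
        # Non-empty tags must be wrapped in an `and` block together with prefix.
        return {"and": {"prefix": prefix or "", "tags": tags}}
    if prefix:
        return {"prefix": prefix}
    # Both empty: drop them and emit an empty filter to avoid a plan diff
    # when importing the new resource.
    return {}
```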

  • Suppress creating a migration file when no action


    Fixes #33

    It is not only redundant but also causes an error when loaded by tfmigrate as an invalid migration file. It intentionally does not return an error, so irrelevant directories can be ignored when processing multiple directories.
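    The behavior can be sketched like this (a hypothetical Python illustration; tfedit's real implementation is in Go and the function name is made up):

```python
def write_migration_file(path, actions):
    """Write a tfmigrate state migration file, skipping creation entirely
    when there are no actions (hypothetical sketch of the behavior above).
    """
    if not actions:
        # An empty migration would be rejected by tfmigrate, so write nothing
        # and return without an error so callers can ignore this directory.
        return False
    lines = ['migration "state" "fromplan" {', "  actions = ["]
    lines += ['    "%s",' % a for a in actions]
    lines += ["  ]", "}", ""]
    with open(path, "w") as f:
        f.write("\n".join(lines))
    return True
```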

  • aws_s3_bucket_lifecycle_configuration: days_after_initiation = 0 causes a drift on import


    Summary

    When aws_s3_bucket.lifecycle_rule.abort_incomplete_multipart_upload_days was explicitly set to 0 in AWS v3, importing aws_s3_bucket_lifecycle_configuration.abort_incomplete_multipart_upload.days_after_initiation = 0 in AWS v4 with tfmigrate plan fails due to a detected drift.

    According to the implementation, setting the parameter is explicitly skipped for the zero value:
    https://github.com/hashicorp/terraform-provider-aws/blob/v3.74.3/internal/service/s3/bucket.go#L2266-L2271
    https://github.com/hashicorp/terraform-provider-aws/blob/v4.15.1/internal/service/s3control/bucket_lifecycle_configuration.go#L291-L293

    After applying tfmigrate in force mode, a subsequent terraform apply with AWS v4 converges the drift. I'm not sure whether or not this is a bug in the AWS provider.

    Version

    $ tfedit version
    0.0.3
    
    $ tfmigrate -v
    0.3.3
    
    $ terraform -v
    Terraform v1.2.1
    on linux_amd64
    + provider registry.terraform.io/hashicorp/aws v4.15.1
    

    Configuration

    AWS v3.74.3

    resource "aws_s3_bucket" "example" {
      bucket = "tfedit-test"
    
      lifecycle_rule {
        id                                     = "test"
        enabled                                = true
        abort_incomplete_multipart_upload_days = 0
      }
    }
    

    AWS v4.15.1

    resource "aws_s3_bucket" "example" {
      bucket = "tfedit-test"
    }
    
    resource "aws_s3_bucket_lifecycle_configuration" "example" {
      bucket = aws_s3_bucket.example.id
    
      rule {
        id     = "test"
        status = "Enabled"
    
        filter {
          prefix = ""
        }
    
        abort_incomplete_multipart_upload {
          days_after_initiation = 0
        }
      }
    }
    

    Expected behavior

    $ terraform plan -out=tmp.tfplan
    $ terraform show -json tmp.tfplan | tfedit migration fromplan -o=tfmigrate_fromplan.hcl
    $ cat tfmigrate_fromplan.hcl
    migration "state" "fromplan" {
      actions = [
        "import aws_s3_bucket_lifecycle_configuration.example tfedit-test",
      ]
    }
    
    $ tfmigrate plan tfmigrate_fromplan.hcl
    (snip.)
    YYYY/MM/DD hh:mm:ss [INFO] [migrator] state migrator plan success!
    

    Actual behavior

    $ tfmigrate plan tfmigrate_fromplan.hcl
    2022/05/27 09:03:59 [INFO] [runner] load migration file: tfmigrate_fromplan.hcl
    2022/05/27 09:03:59 [INFO] [migrator] start state migrator plan
    2022/05/27 09:03:59 [INFO] [migrator@.] terraform version: 1.2.1
    2022/05/27 09:03:59 [INFO] [migrator@.] initialize work dir
    2022/05/27 09:04:02 [INFO] [migrator@.] get the current remote state
    2022/05/27 09:04:03 [INFO] [migrator@.] override backend to local
    2022/05/27 09:04:03 [INFO] [executor@.] create an override file
    2022/05/27 09:04:03 [INFO] [migrator@.] creating local workspace folder in: terraform.tfstate.d/default
    2022/05/27 09:04:03 [INFO] [executor@.] switch backend to local
    2022/05/27 09:04:07 [INFO] [migrator@.] compute a new state
    2022/05/27 09:04:21 [INFO] [migrator@.] check diffs
    2022/05/27 09:04:36 [ERROR] [migrator@.] unexpected diffs
    2022/05/27 09:04:36 [INFO] [executor@.] remove the override file
    2022/05/27 09:04:36 [INFO] [executor@.] remove the workspace state folder
    2022/05/27 09:04:36 [INFO] [executor@.] switch back to remote
    terraform plan command returns unexpected diffs: failed to run command (exited 2): terraform plan -state=/tmp/tmp3105549665 -out=/tmp/tfplan2994504524 -input=false -no-color -detailed-exitcode
    stdout:
    aws_s3_bucket.example: Refreshing state... [id=tfedit-test]
    aws_s3_bucket_lifecycle_configuration.example: Refreshing state... [id=tfedit-test]
    
    Terraform used the selected providers to generate the following execution
    plan. Resource actions are indicated with the following symbols:
      ~ update in-place
    
    Terraform will perform the following actions:
    
      # aws_s3_bucket_lifecycle_configuration.example will be updated in-place
      ~ resource "aws_s3_bucket_lifecycle_configuration" "example" {
            id     = "tfedit-test"
            # (1 unchanged attribute hidden)
    
          ~ rule {
                id     = "test"
                # (1 unchanged attribute hidden)
    
              + abort_incomplete_multipart_upload {
                  + days_after_initiation = 0
                }
    
              - expiration {
                  - days                         = 0 -> null
                  - expired_object_delete_marker = false -> null
                }
    
                # (1 unchanged block hidden)
            }
        }
    
    Plan: 0 to add, 1 to change, 0 to destroy.
    
    ─────────────────────────────────────────────────────────────────────────────
    
    Saved the plan to: /tmp/tfplan2994504524
    
    To perform exactly these actions, run the following command to apply:
        terraform apply "/tmp/tfplan2994504524"
    
    stderr:
    
  • Preserve comments on noncurrent_version_expiration


    resource "aws_s3_bucket" "mybucket" {
      bucket = "mybucket"
    
      lifecycle_rule {
        id      = "cleanup"
        enabled = true
    
        expiration {
          days = 14 # mark as expired 14 days after creation
        }
    
        noncurrent_version_expiration {
          days = 14 # delete expired 14 days after they expired
        }
      }
    }
    

    loses the comments on noncurrent_version_expiration after filtering:

    resource "aws_s3_bucket" "mybucket" {
      bucket = "mybucket"
    }
    
    resource "aws_s3_bucket_lifecycle_configuration" "mybucket" {
      bucket = aws_s3_bucket.mybucket.id
    
      rule {
        id = "cleanup"
    
        expiration {
          days = 14 # mark as expired 14 days after creation
        }
    
        noncurrent_version_expiration {
          noncurrent_days = 14
        }
        status = "Enabled"
    
        filter {
          prefix = ""
        }
      }
    }
    
  • Create new resources directly below existing aws_s3_bucket resources


    Firstly, thank you so much for this tool. It has saved me a lot of time and energy.

    The only manual work I needed to do with tfedit was to move the generated resources from the bottom of the file to directly below their parent S3 bucket.

    It would be great if it was possible to create the new resources in the middle of the file next to the bucket, rather than appending to the bottom of the file.

    Thanks again!
