Terraform enables you to safely and predictably create, change, and improve infrastructure. It is an open source tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned.

Terraform

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.

The key features of Terraform are:

  • Infrastructure as Code: Infrastructure is described using a high-level configuration syntax. This allows a blueprint of your datacenter to be versioned and treated as you would any other code. Additionally, infrastructure can be shared and re-used.

  • Execution Plans: Terraform has a "planning" step where it generates an execution plan. The execution plan shows what Terraform will do when you call apply. This lets you avoid any surprises when Terraform manipulates infrastructure.

  • Resource Graph: Terraform builds a graph of all your resources, and parallelizes the creation and modification of any non-dependent resources. Because of this, Terraform builds infrastructure as efficiently as possible, and operators get insight into dependencies in their infrastructure.

  • Change Automation: Complex changesets can be applied to your infrastructure with minimal human interaction. With the previously mentioned execution plan and resource graph, you know exactly what Terraform will change and in what order, avoiding many possible human errors.
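The plan/apply workflow above can be sketched with a minimal configuration (all names and IDs here are illustrative, not taken from this document):

```hcl
# main.tf — minimal illustrative configuration; the provider,
# AMI ID, and instance type are placeholder values.
provider "aws" {
  region = "us-west-2"
}

resource "aws_instance" "example" {
  ami           = "ami-0abcdef1234567890" # placeholder AMI ID
  instance_type = "t2.micro"
}
```

Running terraform plan prints the execution plan for this configuration; terraform apply then performs exactly the planned actions.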

For more information, see the introduction section of the Terraform website.

Getting Started & Documentation

Documentation is available on the Terraform website.

If you're new to Terraform and want to get started creating infrastructure, please check out our Getting Started guides on HashiCorp's learning platform. There are also additional guides to continue your learning.

Show off your Terraform knowledge by passing a certification exam. Visit the certification page for information about exams and find study materials on HashiCorp's learning platform.

Developing Terraform

This repository contains only Terraform core, which includes the command line interface and the main graph engine. Providers are implemented as plugins, and Terraform can automatically download providers that are published on the Terraform Registry. HashiCorp develops some providers, and others are developed by other organizations. For more information, see Extending Terraform.
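As a sketch of how a configuration pins the providers it needs so that `terraform init` can download them from the Registry (Terraform 0.13+ syntax; the version constraint is illustrative):

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws" # fetched from the Terraform Registry
      version = "~> 4.0"        # illustrative constraint
    }
  }
}
```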

To learn more about compiling Terraform and contributing suggested changes, please refer to the contributing guide.

To learn more about how we handle bug reports, please read the bug triage guide.

License

Mozilla Public License v2.0

Owner
HashiCorp
Comments
  • Support use cases with conditional logic

    It's been important from the beginning that Terraform's configuration language is declarative, which has meant that the core team has intentionally avoided adding flow-control statements like conditionals and loops to the language.

    But in the real world, there are still plenty of perfectly reasonable scenarios that are difficult to express in the current version of Terraform without copious amounts of duplication because of the lack of conditionals. We'd like Terraform to support these use cases one way or another.

    I'm opening this issue to collect some real-world examples where, as a config author, it seems like an if statement would really make things easier.

    Using these examples, we'll play around with different ideas to improve the tools Terraform provides to the config author in these scenarios.

    So please feel free to chime in with some specific examples - ideally with blocks of Terraform configuration included. If you've got ideas for syntax or config language features that could form a solution, those are welcome here too.

    (No need to respond with just "+1" / :+1: on this thread, since it's an issue we're already aware is important.)
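    For context, one pattern that later became idiomatic for this class of use case is a conditional count, so a resource exists only when a flag is set (sketch in 0.12+ syntax; the variable and resource names are hypothetical):

    ```hcl
    variable "enable_monitoring" {
      type    = bool
      default = false
    }

    # Created only when enable_monitoring is true; a crude "if" for resources.
    resource "aws_cloudwatch_metric_alarm" "cpu" {
      count      = var.enable_monitoring ? 1 : 0
      alarm_name = "cpu-high"
      # ...
    }
    ```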

  • depends_on cannot be used in a module

    Hi there,

    Terraform Version

    0.8.0 rc1+

    Affected Resource(s)

    module

    Terraform Configuration Files

    module "legacy_site" {
      source = "../../../../../modules/site"
      name = "foo-site"
      health_check_target = "TCP:443"
      azs = "${var.azs}"
      instance_count = "${var.instance_count}"
      vpc = "apps"
      region = "${var.region}"
      environment = "${var.environment}"
      run_list = "hs_site_foo"
    
      #rds_complete = "${module.rds.db_instance_id}"
      #elasticache_cache_complete = "${module.elasticache_cache.elasticache_id}"
      #elasticache_sessions_complete = "${module.elasticache_sessions.elasticache_id}"
    
      depends_on = [
      "module.rds",
      "module.elasticache_sessions"
      ]
    
    }
    

    Debug Output

    Error loading Terraform: module root: module legacy_site: depends_on is not a valid parameter

    Expected Behavior

    I am trying to use the new depends_on instead of the outputs above, so that I create and provision my app only once I know the database and caches are built.

    Actual Behavior

    Nothing; Terraform errors out as above.

    Steps to Reproduce

    1. terraform apply

    References

    depends_on can reference modules. This allows a resource or output to depend on everything within a module. (#10076)
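    For reference, module-level depends_on did eventually land (Terraform 0.13+), so the intent above can be written roughly as:

    ```hcl
    module "legacy_site" {
      source = "../../../../../modules/site"
      # ... other arguments as above ...

      # Valid in Terraform 0.13+: bare module references, not quoted strings.
      depends_on = [
        module.rds,
        module.elasticache_sessions,
      ]
    }
    ```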

  • Depends_on for module

    Possible workarounds

    For module to module dependencies, this workaround by @phinze may help.
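    The workaround referred to there threads a module output through an attribute of the dependent resource, so the dependency is expressed implicitly through interpolation rather than depends_on (sketch; the output name is hypothetical):

    ```hcl
    resource "null_resource" "provisioning" {
      # Interpolating a module output creates the graph edge that
      # depends_on = ["module.name"] could not express at the time.
      triggers {
        records_ready = "${module.whoosh-dev-web1-records.record_fqdn}"
      }

      provisioner "local-exec" {
        command = "ansible-playbook -i provisioning/development provisioning/bootstrap-digitalocean.yml"
      }
    }
    ```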

    Original problem

    This issue was prompted by this question on Google Groups.

    Terraform version: Terraform v0.3.7

    I have two Terraform modules for creating a DigitalOcean VM and DNS records that are kept purposely modular so they can be reused by others in my organisation.

    I want to add a series of provisioners using local_exec after a VM has been created and DNS records made.

    Attempted solution

    I tried adding a provisioner directly to my terraform file (i.e. not in a resource) which gave an error.

    I then tried using the null_resource which worked but was executed at the wrong time as it didn't know to wait for the other modules to execute first.

    I then tried adding a depends_on attribute to the null resource using a reference to a module but this doesn't seem to be supported using this syntax:

    depends_on = ["module.module_name"]
    

    Expected result

    Either a way for a resource to depend on a module as a dependency, or a way to "inject" (for lack of a better word) some provisioners for a resource into a module without having to make a custom version of that module (I realise that might be a separate issue, but it would solve my original problem).

    Terraform config used

    # Terraform definition file - this file is used to describe the required infrastructure for this project.
    
    # Digital Ocean provider configuration
    
    provider "digitalocean" {
        token = "${var.digital_ocean_token}"
    }
    
    
    # Resources
    
    # 'whoosh-dev-web1' resource
    
    # VM
    
    module "whoosh-dev-web1-droplet" {
        source = "github.com/antarctica/terraform-module-digital-ocean-droplet?ref=v1.0.0"
        hostname = "whoosh-dev-web1"
        ssh_fingerprint = "${var.ssh_fingerprint}"
    }
    
    # DNS records (public, private and default [which is an APEX record and points to public])
    
    module "whoosh-dev-web1-records" {
        source = "github.com/antarctica/terraform-module-digital-ocean-records?ref=v0.1.1"
        hostname = "whoosh-dev-web1"
        machine_interface_ipv4_public = "${module.whoosh-dev-web1-droplet.ip_v4_address_public}"
        machine_interface_ipv4_private = "${module.whoosh-dev-web1-droplet.ip_v4_address_private}"
    }
    
    
    # Provisioning (using a fake resource as provisioners can't be first class objects)
    
    # Note: The "null_resource" is an undocumented feature and should not be relied upon.
    # See https://github.com/hashicorp/terraform/issues/580 for more information.
    
    resource "null_resource" "provisioning" {
    
        depends_on = ["module.whoosh-dev-web1-records"]
    
        # This replicates the provisioning steps performed by Vagrant
        provisioner "local-exec" {
            command = "ansible-playbook -i provisioning/development provisioning/bootstrap-digitalocean.yml"
        }
    }
    
  • AWS Provider Coverage

    View this spreadsheet for a near-real-time summary of AWS resource coverage. If there's a resource you would like to see coverage for, just add your GitHub username next to the resource. We will use the number of community upvotes in the spreadsheet to help prioritize our efforts.

    https://docs.google.com/spreadsheets/d/1yJKjLaTmkWcUS3T8TLwvXC6EBwNSpuQbIq0Y7OnMXhw/edit?usp=sharing

  • terraform get: can't use variable in module source parameter?

    I'm trying to avoid hard-coding module sources; the simplest approach would be:

    variable "foo_module_source" {
      default = "github.com/thisisme/terraform-foo-module"
    }
    
    module "foo" {
      source = "${var.foo_module_source}"
    }
    

    The result I get while attempting to run terraform get -update is

    Error loading Terraform: Error downloading modules: error downloading module 'file:///home/thisisme/terraform-env/${var.foo_module_source}': source path error: stat /home/thisisme/terraform-env/${var.foo_module_source}: no such file or directory
    
  • OpenStack Provider

    UPDATE: 2/11/2015

    To Do:

    • [x] FWaaS
    • [x] Security Groups Update Issue
    • [x] Volume detachment from volume resource
    • [ ] os-floating-ip/ neutron floating IP issue
    • [ ] Refactor Security Group Rules and LB Members to their own files

    This PR is to create an OpenStack Provider. It uses the Gophercloud v1.0 library and currently supports the following resources:

    Compute v2

    • Server
    • Key Pair
    • Security Group
    • Boot From Volume
    • Metadata
    • Resizing (on flavor_id change)

    Networking v2

    • Network
    • Subnet

    Load Balancer v1

    • Pool (with members)
    • Virtual IP
    • Monitor

    Block Storage v1

    • Volume

    Object Storage v1

    • Container

    The PR includes acceptance tests for all the above resources (tested against DevStack), as well as documentation. In addition, the resources are versioned and region-based. Hopefully, this PR includes enough resources to close #51

  • Using element with splat reference should scope dependency to selected resource

    I'm trying to setup a multi-node cluster with attached ebs volumes. An example below:

    resource "aws_instance" "nodes" {
        instance_type = "${var.model}"
        key_name = "${var.ec2_keypair}"
        ami = "${lookup(var.zk_amis, var.region)}"
        count = "${var.node_count}"
        vpc_security_group_ids = ["${aws_security_group.default.id}"]
        subnet_id = "${lookup(var.subnet_ids, element(keys(var.subnet_ids), count.index))}"
        associate_public_ip_address = true
        user_data = "${file("cloud_init")}"
        tags {
            Name = "${var.cluster_name}-${count.index}"
        }
    }
    
    resource "aws_ebs_volume" "node-ebs" {
        count = "${var.node_count}"
        availability_zone = "${element(keys(var.subnet_ids), count.index)}"
        size = 100
        tags {
            Name = "${var.cluster_name}-ebs-${count.index}"
        }
    }
    
    resource "aws_volume_attachment" "node-attach" {
        count = "${var.node_count}"
        device_name = "/dev/xvdh"
        volume_id = "${element(aws_ebs_volume.node-ebs.*.id, count.index)}"
        instance_id = "${element(aws_instance.nodes.*.id, count.index)}"
    }
    

    If a change happens to a single node (for instance if a single ec2 instance is terminated) ALL of the aws_volume_attachments are recreated.

    Clearly we would not want volume attachments to be removed in a production environment. Worse than that, in conjunction with #2957 you first must unmount these attachments before they can be recreated. This has the effect of making volume attachments only viable on brand new clusters.
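    In later Terraform versions (0.12+), referencing the instance by index rather than through element() over a full splat is the commonly recommended way to keep each attachment tied to its own instance (sketch):

    ```hcl
    resource "aws_volume_attachment" "node-attach" {
      count       = var.node_count
      device_name = "/dev/xvdh"

      # Indexing directly instead of element(...*..., count.index) scopes
      # each attachment to one volume/instance pair.
      volume_id   = aws_ebs_volume.node-ebs[count.index].id
      instance_id = aws_instance.nodes[count.index].id
    }
    ```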

  • A way to hide certain expected changes from the "refresh" report ("Objects have changed outside of Terraform")

    After upgrading to 0.15.4, Terraform reports changes that should be ignored. It is exactly as described in this comment: https://github.com/hashicorp/terraform/issues/28776#issuecomment-846547594

    Terraform Version

    Terraform v0.15.4
    on darwin_amd64
    + provider registry.terraform.io/hashicorp/aws v3.42.0
    + provider registry.terraform.io/hashicorp/template v2.2.0
    

    Terraform Configuration Files

    
    resource "aws_batch_compute_environment" "batch_compute" {
      lifecycle {
        ignore_changes = [compute_resources[0].desired_vcpus]
      }
    
    ...
    
      compute_resources {
    ...
      }
    }
    
    resource "aws_db_instance" "postgres_db" {
      ...
    
      lifecycle {
        prevent_destroy = true
        ignore_changes = [latest_restorable_time]
      }
    }
    

    Output

    Note: Objects have changed outside of Terraform
    
    Terraform detected the following changes made outside of Terraform since the last "terraform apply":
    
      # module.db.aws_db_instance.postgres_db has been changed
      ~ resource "aws_db_instance" "postgres_db" {
            id                                    = "db"
          ~ latest_restorable_time                = "2021-05-25T10:24:14Z" -> "2021-05-25T10:29:14Z"
            name                                  = "db"
            tags                                  = {
                "Name" = "DatabaseServer"
            }
            # (47 unchanged attributes hidden)
    
            # (1 unchanged block hidden)
        }
      # module.batch_processor_dot_backend.aws_batch_compute_environment.batch_compute has been changed
      ~ resource "aws_batch_compute_environment" "batch_compute" {
            id                       = "batch-compute"
            tags                     = {}
            # (9 unchanged attributes hidden)
    
          ~ compute_resources {
              ~ desired_vcpus      = 0 -> 2
                tags               = {}
                # (9 unchanged attributes hidden)
            }
        }
    

    Expected Behavior

    No changes should be reported, because they are listed in ignore_changes.

    Actual Behavior

    Changes are reported.

    Steps to Reproduce

    Change any resource outside of Terraform and see that terraform apply reports the change even when it should be ignored.

    Additional Context

    References

    • https://github.com/hashicorp/terraform/issues/28776
    • https://github.com/hashicorp/terraform/issues/28776#issuecomment-846547594
    • https://github.com/hashicorp/terraform/pull/28634#issuecomment-845934989
  • vSphere Provider: Mapping out the Next Steps

    Wanted to kick off a higher level discussion of what needs to be done on the vSphere provider and in what order.

    • What are the important missing resources?
    • Are there any enhancements that need to be made to the existing functionality?
    • What do we need to do to ensure the provider works with all common versions of vSphere in the wild?

    Pinging @tkak and @mkuzmin to chime in as well as anybody else with interest/knowledge in the community.

  • Destroy 'provisioner' for instance resources

    It would be great to have a sort of 'provisioner' for destroying an instance resource.

    Example: When creating an instance, I bootstrap it with Chef and the node is registered with the Chef server. Now I need a way of automatically deleting the node from the Chef server after Terraform destroys the instance.

  • Cannot use `terraform import` with module that has dynamic provider configuration

    Terraform Version

    Terraform v0.9.1

    Affected Resource(s)

    N/A

    Terraform Configuration Files

    # ./main.tf
    module "module_us_west_1" {
      source = "./module"
      region = "us-west-1"
    }
    
    # ./module/main.tf
    variable "region" {
      description = "AWS region for provider"
    }
    
    provider "aws" {
      region = "${var.region}"
    }
    
    resource "aws_cloudwatch_log_group" "rds_os" {
      name = "RDSOSMetrics"
      retention_in_days = 30
    }
    

    Debug Output

    https://gist.github.com/ff54870fee49636209ecfaa5de272175

    Panic Output

    N/A

    Expected Behavior

    Resource was imported

    Actual Behavior

    Error importing: 1 error(s) occurred:
    
    * module.module_us_west_1.provider.aws: 1:3: unknown variable accessed: var.region in:
    
    ${var.region}
    

    Steps to Reproduce

    1. terraform import module.module_us_west_1.aws_cloudwatch_log_group.rds_os "arn:aws:logs:us-west-1:FILTERED:log-group:RDSOSMetrics:*"

    Important Factoids

    N/A

    References

    N/A

  • Support HTTP based cloud backends

    What

    Adding support for specifying the URL scheme when using the new cloud backend type. This is primarily an internal-facing change, allowing developers and CI pipelines in Terraform Cloud to host the app without needing to provision an internet-facing HTTPS URL or juggle self-signed certificates.

    TODO

    • [ ] I had to update terraform-svchost to support HTTP as well. This PR is dependent upon https://github.com/hashicorp/terraform-svchost/pull/7 being accepted.
  • The Terraform Registry displays inconsistent and wrong `source` information for submodules

    I'm trying to use several submodules on the terraform registry, and I've noticed that they have the wrong source information.

    https://registry.terraform.io/modules/terraform-google-modules/vm/google/latest/submodules/mig says to use "terraform-google-modules/vm/google/modules/mig"

    https://registry.terraform.io/modules/hashicorp/consul/aws/latest/submodules/consul-client-security-group-rules has an automated box that suggests "hashicorp/consul/aws/modules/consul-client-security-group-rules", but then has a README that says "git::[email protected]:hashicorp/terraform-aws-vault.git//modules/vault-cluster?ref=v0.0.1"

    After getting some help from the community, I see that there's a special // form for submodules. I see that this is documented as a sub-directory, but I never found that because elsewhere it's called a "submodule".

    Ultimately, the correct source seems to be terraform-google-modules/vm/google//modules/mig, but it took me several hours to determine that.
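    In other words, the working module block looks like this (the version constraint is illustrative):

    ```hcl
    module "mig" {
      # "//" separates the registry package from the subdirectory within it.
      source  = "terraform-google-modules/vm/google//modules/mig"
      version = "~> 7.0" # illustrative
    }
    ```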

    Terraform Version

    This is a registry website issue, but:

    $ terraform version
    Terraform v1.1.9
    on darwin_arm64
    

    Terraform Configuration Files

    Debug Output

    │ Error: Invalid module source address
    │ 
    │ Module "mig" (declared at main.tf line 12) has invalid source address
    │ "terraform-google-modules/vm/google/modules/mig": Terraform cannot detect a supported external module
    │ source type for terraform-google-modules/vm/google/modules/mig.
    

    References

    • https://discuss.hashicorp.com/t/using-a-registry-module-results-in-cannot-detect-a-supported-external-module-source-errors/39186
  • Add sha512crypt function

    Current Terraform Version

    v1.1.8

    Use-cases

    Would be useful in generating Linux passwords. sha512crypt is the default password hash format in many Linux distros, including RHEL & Ubuntu.

    Attempted Solutions

    Terraform's sha512 is "pure 512-bit SHA-2" rather than sha512crypt. Example sha512crypt output:

    $ mkpasswd --method=sha512crypt "hello world"
    $6$4sJezH9JHKjn$wjoE34.4jajPXrdnOSBx0rSx8kXdC/M2r1cFvOLwBCTEU.f5rAmVH.RTlp.yB0P9qZUAspqQJAXWNRhsCWJrZ1
    

    Same using Terraform sha512:

    $ terraform console
    > sha512("hello world")
    "309ecc489c12d6eb4cc40f50c902f2b4d0ed77ee511a7c7a9bcd3ca86d4cd86f989dd35bc5ff499670da34255b45b0cfd830e81f605dcf7dc5542e93ae9cd76f"
    

    Proposal

    Create a sha512crypt function
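    There is no sha512crypt built-in today; where the consuming system accepts a different crypt scheme, the existing bcrypt() function can serve as a stopgap (sketch; var.password is hypothetical):

    ```hcl
    # bcrypt() is an existing Terraform built-in. It produces a $2a$-style
    # hash, not sha512crypt, so this only helps where bcrypt is accepted.
    output "password_hash" {
      value     = bcrypt(var.password)
      sensitive = true
    }
    ```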

  • Remote backend support encryption for all

    Current Terraform Version

    Terraform v1.1.9
    on darwin_amd64
    

    Use-cases

    I use Terraform to spin up my infrastructure, and some data in the state is plain text that can be credentials or other secrets. I want sensitive information to be hidden not only in the CLI (via the sensitive property on outputs) but also in the state, while still being usable!

    Attempted Solutions

    One approach could be to use a remote backend, but not all backends support encryption, especially local, which is the default one. This reduces the choice of where to store state when encryption is a criterion.

    Proposal

    It would be great to add options to encrypt the state entirely or partially (sensitive outputs) by adding parameters to remote_backend, regardless of the backend chosen.

    Something like this

    data "terraform_remote_state" "foo" {
      backend = "gcs/local/whatever"
      config = {
        # Specific properties (here gcs example)
        bucket  = "terraform-state"
        prefix  = "prod"
    
        # Generic properties
        encrypt_state = true
        encrypt_privateKey = "..."
        encrypt_publicKey = "..."
      }
    }
    

    This would also solve the Security Notice on tls_private_key, since the state file, and so the generated private keys, would be encrypted.
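    For comparison, some individual backends already expose an encryption parameter; the gcs backend, for instance, accepts a customer-supplied key (sketch; the key value is elided):

    ```hcl
    terraform {
      backend "gcs" {
        bucket = "terraform-state"
        prefix = "prod"

        # Customer-supplied encryption key (base64-encoded 32-byte key).
        encryption_key = "..."
      }
    }
    ```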

    References

    None found.

  • Added example with function argument expansion

    Even if the expansion with three dots is explicitly mentioned in https://www.terraform.io/language/expressions/function-calls#expanding-function-arguments, the additional example would have helped me a lot, as it is a common use case to "flatten" a list of maps.
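    The use case described above can be sketched with merge() and the expansion operator (hypothetical local names):

    ```hcl
    locals {
      maps = [
        { a = 1 },
        { b = 2 },
      ]

      # merge(local.maps...) expands the list into merge({a = 1}, {b = 2}),
      # flattening a list of maps into one map.
      combined = merge(local.maps...)
    }
    ```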
