Terraform enables you to safely and predictably create, change, and improve infrastructure. It is an open source tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned.

Terraform

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.

The key features of Terraform are:

  • Infrastructure as Code: Infrastructure is described using a high-level configuration syntax. This allows a blueprint of your datacenter to be versioned and treated as you would any other code. Additionally, infrastructure can be shared and re-used.

  • Execution Plans: Terraform has a "planning" step where it generates an execution plan. The execution plan shows what Terraform will do when you call apply. This lets you avoid any surprises when Terraform manipulates infrastructure.

  • Resource Graph: Terraform builds a graph of all your resources, and parallelizes the creation and modification of any non-dependent resources. Because of this, Terraform builds infrastructure as efficiently as possible, and operators get insight into dependencies in their infrastructure.

  • Change Automation: Complex changesets can be applied to your infrastructure with minimal human interaction. With the previously mentioned execution plan and resource graph, you know exactly what Terraform will change and in what order, avoiding many possible human errors.
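
The workflow implied by these features can be illustrated with a minimal configuration. This is a hedged sketch: the provider, AMI ID, and instance type are placeholders, not a recommendation.

```hcl
# A declarative description of one piece of infrastructure.
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0" # placeholder AMI ID
  instance_type = "t2.micro"
}
```

Running terraform plan against this configuration prints the proposed actions without touching real infrastructure; terraform apply then executes them in dependency order.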

For more information, see the introduction section of the Terraform website.

Getting Started & Documentation

Documentation is available on the Terraform website.

If you're new to Terraform and want to get started creating infrastructure, please check out our Getting Started guides on HashiCorp's learning platform. There are also additional guides to continue your learning.

Show off your Terraform knowledge by passing a certification exam. Visit the certification page for information about exams and find study materials on HashiCorp's learning platform.

Developing Terraform

This repository contains only Terraform core, which includes the command line interface and the main graph engine. Providers are implemented as plugins, and Terraform can automatically download providers that are published on the Terraform Registry. HashiCorp develops some providers, and others are developed by other organizations. For more information, see Extending Terraform.
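
Provider selection and download can be made explicit with a required_providers block; the version constraint below is illustrative, not prescriptive.

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws" # resolved and downloaded from the Terraform Registry
      version = "~> 4.0"        # illustrative version constraint
    }
  }
}
```

terraform init reads this block and installs matching provider plugins.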

To learn more about compiling Terraform and contributing suggested changes, please refer to the contributing guide.

To learn more about how we handle bug reports, please read the bug triage guide.

License

Mozilla Public License v2.0

Owner
HashiCorp
Consistent workflows to provision, secure, connect, and run any infrastructure for any application.
Comments
  • Support use cases with conditional logic

    It's been important from the beginning that Terraform's configuration language is declarative, which has meant that the core team has intentionally avoided adding flow-control statements like conditionals and loops to the language.

    But in the real world, there are still plenty of perfectly reasonable scenarios that are difficult to express in the current version of Terraform without copious amounts of duplication because of the lack of conditionals. We'd like Terraform to support these use cases one way or another.

    I'm opening this issue to collect some real-world examples where, as a config author, it seems like an if statement would really make things easier.

    Using these examples, we'll play around with different ideas to improve the tools Terraform provides to the config author in these scenarios.

    So please feel free to chime in with some specific examples - ideally with blocks of Terraform configuration included. If you've got ideas for syntax or config language features that could form a solution, those are welcome here too.

    (No need to respond with just "+1" / :+1: on this thread, since it's an issue we're already aware is important.)

  • depends_on cannot be used in a module

    Hi there,

    Terraform Version

    0.8.0 rc1+

    Affected Resource(s)

    module

    Terraform Configuration Files

    module "legacy_site" {
      source = "../../../../../modules/site"
      name = "foo-site"
      health_check_target = "TCP:443"
      azs = "${var.azs}"
      instance_count = "${var.instance_count}"
      vpc = "apps"
      region = "${var.region}"
      environment = "${var.environment}"
      run_list = "hs_site_foo"
    
      #rds_complete = "${module.rds.db_instance_id}"
      #elasticache_cache_complete = "${module.elasticache_cache.elasticache_id}"
      #elasticache_sessions_complete = "${module.elasticache_sessions.elasticache_id}"
    
      depends_on = [
      "module.rds",
      "module.elasticache_sessions"
      ]
    
    }
    

    Debug Output

    Error loading Terraform: module root: module legacy_site: depends_on is not a valid parameter module root: module legacy_site: depends_on is not a valid parameter

    Expected Behavior

    I am trying to use the new depends_on instead of the outputs above, so that I can create and provision my app once I know the database and caches are built.

    Actual Behavior

    Nothing, as terraform errors out as above.

    Steps to Reproduce

    1. terraform apply

    References

    depends_on can reference modules. This allows a resource or output to depend on everything within a module. (#10076)
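
    As the referenced change notes, later Terraform releases (0.13 and newer) accept depends_on inside module blocks, written as bare references rather than quoted strings. A sketch of the configuration above in that newer syntax:

    ```hcl
    module "legacy_site" {
      source = "../../../../../modules/site"
      name   = "foo-site"
      # ...other arguments as above...

      # Module-level depends_on (Terraform 0.13+): bare references, not strings.
      depends_on = [
        module.rds,
        module.elasticache_sessions,
      ]
    }
    ```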

  • Depends_on for module

    Possible workarounds

    For module to module dependencies, this workaround by @phinze may help.

    Original problem

    This issue was prompted by this question on Google Groups.

    Terraform version: Terraform v0.3.7

    I have two terraform modules for creating a digital ocean VM and DNS records that are kept purposely modular so they can be reused by others in my organisation.

    I want to add a series of provisioners using local_exec after a VM has been created and DNS records made.

    Attempted solution

    I tried adding a provisioner directly to my terraform file (i.e. not in a resource) which gave an error.

    I then tried using the null_resource which worked but was executed at the wrong time as it didn't know to wait for the other modules to execute first.

    I then tried adding a depends_on attribute to the null resource using a reference to a module but this doesn't seem to be supported using this syntax:

    depends_on = ["module.module_name"]
    

    Expected result

    Either a way for a resource to depend on a module as a dependency, or a way to "inject" (for lack of a better word) some provisioners for a resource into a module without having to make a custom version of that module (I realise that might be a separate issue, but it would solve my original problem).

    Terraform config used

    # Terraform definition file - this file is used to describe the required infrastructure for this project.
    
    # Digital Ocean provider configuration
    
    provider "digitalocean" {
        token = "${var.digital_ocean_token}"
    }
    
    
    # Resources
    
    # 'whoosh-dev-web1' resource
    
    # VM
    
    module "whoosh-dev-web1-droplet" {
        source = "github.com/antarctica/terraform-module-digital-ocean-droplet?ref=v1.0.0"
        hostname = "whoosh-dev-web1"
        ssh_fingerprint = "${var.ssh_fingerprint}"
    }
    
    # DNS records (public, private and default [which is an APEX record and points to public])
    
    module "whoosh-dev-web1-records" {
        source = "github.com/antarctica/terraform-module-digital-ocean-records?ref=v0.1.1"
        hostname = "whoosh-dev-web1"
        machine_interface_ipv4_public = "${module.whoosh-dev-web1-droplet.ip_v4_address_public}"
        machine_interface_ipv4_private = "${module.whoosh-dev-web1-droplet.ip_v4_address_private}"
    }
    
    
    # Provisioning (using a fake resource as provisioners can't be first class objects)
    
    # Note: The "null_resource" is an undocumented feature and should not be relied upon.
    # See https://github.com/hashicorp/terraform/issues/580 for more information.
    
    resource "null_resource" "provisioning" {
    
        depends_on = ["module.whoosh-dev-web1-records"]
    
        # This replicates the provisioning steps performed by Vagrant
        provisioner "local-exec" {
            command = "ansible-playbook -i provisioning/development provisioning/bootstrap-digitalocean.yml"
        }
    }
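
    The workaround mentioned above generally threads a module output into the stand-in resource, so Terraform records an implicit dependency and orders the operations accordingly. A sketch, assuming the records module exposes a suitable output (the output name dns_fqdn is hypothetical):

    ```hcl
    resource "null_resource" "provisioning" {
        # Referencing a module output creates an implicit dependency, so this
        # resource waits for the module to finish before provisioning runs.
        triggers = {
            records = "${module.whoosh-dev-web1-records.dns_fqdn}"
        }

        provisioner "local-exec" {
            command = "ansible-playbook -i provisioning/development provisioning/bootstrap-digitalocean.yml"
        }
    }
    ```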
    
  • AWS Provider Coverage

    View this spreadsheet for a near-real-time summary of AWS resource coverage. If there's a resource you would like to see covered, just add your GitHub username next to the resource. We will use the number of community upvotes in the spreadsheet to help prioritize our efforts.

    https://docs.google.com/spreadsheets/d/1yJKjLaTmkWcUS3T8TLwvXC6EBwNSpuQbIq0Y7OnMXhw/edit?usp=sharing

  • terraform get: can't use variable in module source parameter?

    I'm trying to avoid hard-coding module sources; the simplest approach would be:

    variable "foo_module_source" {
      default = "github.com/thisisme/terraform-foo-module"
    }
    
    module "foo" {
      source = "${var.foo_module_source}"
    }
    

    The result I get while attempting to run terraform get -update is

    Error loading Terraform: Error downloading modules: error downloading module 'file:///home/thisisme/terraform-env/${var.foo_module_source}': source path error: stat /home/thisisme/terraform-env/${var.foo_module_source}: no such file or directory
    
  • Optional arguments in object variable type definition

    Current Terraform Version

    Terraform v0.12.0-alpha4 (2c36829d3265661d8edbd5014de8090ea7e2a076)
    

    Proposal

    I like the object variable type, and it would be nice to be able to define optional arguments that can carry a null value too:

    variable "network_rules" {
      default = null
      type = object({
        bypass = optional(list(string))
        ip_rules = optional(list(string))
        virtual_network_subnet_ids = optional(list(string))
      })
    }
    
    resource "azurerm_storage_account" "sa" {
      name = random_string.name.result
      location = var.location
      resource_group_name = var.resource_group_name
      account_replication_type = var.account_replication_type
      account_tier = var.account_tier
    
      dynamic "network_rules" {
        for_each = var.network_rules == null ? [] : list(var.network_rules)
    
        content {
          bypass = network_rules.value.bypass
          ip_rules = network_rules.value.ip_rules
          virtual_network_subnet_ids = network_rules.value.virtual_network_subnet_ids
        }
      }
    }

    instead of:

    variable "network_rules" {
      default = null
      type = map(string)
    }
    
    resource "azurerm_storage_account" "sa" {
      name = random_string.name.result
      location = var.location
      resource_group_name = var.resource_group_name
      account_replication_type = var.account_replication_type
      account_tier = var.account_tier
    
      dynamic "network_rules" {
        for_each = var.network_rules == null ? [] : list(var.network_rules)
    
        content {
          bypass = lookup(network_rules.value, "bypass", null) == null ? null : split(",", lookup(network_rules.value, "bypass", null))
          ip_rules = lookup(network_rules.value, "ip_rules", null) == null ? null : split(",", lookup(network_rules.value, "ip_rules", null))
          virtual_network_subnet_ids = lookup(network_rules.value, "virtual_network_subnet_ids", null) == null ? null : split(",", lookup(network_rules.value, "virtual_network_subnet_ids", null))
        }
      }
    }
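
    This proposal was eventually implemented: Terraform 1.3 added optional object type attributes, including a second argument that supplies a default when the caller omits the attribute. A sketch in that syntax (the defaults shown are illustrative):

    ```hcl
    variable "network_rules" {
      default = null
      type = object({
        # optional(TYPE, DEFAULT) is available from Terraform 1.3; an omitted
        # attribute is filled with the default, or null if none is given.
        bypass                     = optional(list(string), [])
        ip_rules                   = optional(list(string), [])
        virtual_network_subnet_ids = optional(list(string))
      })
    }
    ```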
    
  • OpenStack Provider

    UPDATE: 2/11/2015

    To Do:

    • [x] FWaaS
    • [x] Security Groups Update Issue
    • [x] Volume detachment from volume resource
    • [ ] os-floating-ip/ neutron floating IP issue
    • [ ] Refactor Security Group Rules and LB Members to their own files

    This PR is to create an OpenStack Provider. It uses the Gophercloud v1.0 library and currently supports the following resources:

    Compute v2

    • Server
    • Key Pair
    • Security Group
    • Boot From Volume
    • Metadata
    • Resizing (on flavor_id change)

    Networking v2

    • Network
    • Subnet

    Load Balancer v1

    • Pool (with members)
    • Virtual IP
    • Monitor

    Block Storage v1

    • Volume

    Object Storage v1

    • Container

    The PR includes acceptance tests for all the above resources (tested against DevStack), as well as documentation. In addition, the resources are versioned and region-based. Hopefully, this PR includes enough resources to close #51.

  • Using element with splat reference should scope dependency to selected resource

    I'm trying to set up a multi-node cluster with attached EBS volumes. An example below:

    resource "aws_instance" "nodes" {
        instance_type = "${var.model}"
        key_name = "${var.ec2_keypair}"
        ami = "${lookup(var.zk_amis, var.region)}"
        count = "${var.node_count}"
        vpc_security_group_ids = ["${aws_security_group.default.id}"]
        subnet_id = "${lookup(var.subnet_ids, element(keys(var.subnet_ids), count.index))}"
        associate_public_ip_address = true
        user_data = "${file("cloud_init")}"
        tags {
            Name = "${var.cluster_name}-${count.index}"
        }
    }
    
    resource "aws_ebs_volume" "node-ebs" {
    count = "${var.node_count}"
        availability_zone = "${element(keys(var.subnet_ids), count.index)}"
        size = 100
        tags {
            Name = "${var.cluster_name}-ebs-${count.index}"
        }
    }
    
    resource "aws_volume_attachment" "node-attach" {
        count = "${var.node_count}"
        device_name = "/dev/xvdh"
        volume_id = "${element(aws_ebs_volume.node-ebs.*.id, count.index)}"
        instance_id = "${element(aws_instance.nodes.*.id, count.index)}"
    }
    

    If a change happens to a single node (for instance if a single ec2 instance is terminated) ALL of the aws_volume_attachments are recreated.

    Clearly we would not want volume attachments to be removed in a production environment. Worse than that, in conjunction with #2957 you first must unmount these attachments before they can be recreated. This has the effect of making volume attachments only viable on brand new clusters.
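
    One commonly suggested mitigation in later Terraform versions (0.12+ expression syntax) is to index the resources directly, so each attachment's arguments reference only its own instance rather than a splat over all of them. Whether this fully avoids the recreation depends on the Terraform version in use, so treat this as a sketch:

    ```hcl
    resource "aws_volume_attachment" "node-attach" {
      count       = var.node_count
      device_name = "/dev/xvdh"
      # Direct index references instead of element() over a splat expression.
      volume_id   = aws_ebs_volume.node-ebs[count.index].id
      instance_id = aws_instance.nodes[count.index].id
    }
    ```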

  • A way to hide certain expected changes from the "refresh" report ("Objects have changed outside of Terraform")

    After upgrading to 0.15.4, Terraform reports changes that should be ignored. It is exactly as described in this comment: https://github.com/hashicorp/terraform/issues/28776#issuecomment-846547594

    Terraform Version

    Terraform v0.15.4
    on darwin_amd64
    + provider registry.terraform.io/hashicorp/aws v3.42.0
    + provider registry.terraform.io/hashicorp/template v2.2.0
    

    Terraform Configuration Files

    
    resource "aws_batch_compute_environment" "batch_compute" {
      lifecycle {
        ignore_changes = [compute_resources[0].desired_vcpus]
      }
    
    ...
    
      compute_resources {
    ...
      }
    }
    
    resource "aws_db_instance" "postgres_db" {
      ...
    
      lifecycle {
        prevent_destroy = true
        ignore_changes = [latest_restorable_time]
      }
    }
    

    Output

    Note: Objects have changed outside of Terraform
    
    Terraform detected the following changes made outside of Terraform since the last "terraform apply":
    
      # module.db.aws_db_instance.postgres_db has been changed
      ~ resource "aws_db_instance" "postgres_db" {
            id                                    = "db"
          ~ latest_restorable_time                = "2021-05-25T10:24:14Z" -> "2021-05-25T10:29:14Z"
            name                                  = "db"
            tags                                  = {
                "Name" = "DatabaseServer"
            }
            # (47 unchanged attributes hidden)
    
            # (1 unchanged block hidden)
        }
      # module.batch_processor_dot_backend.aws_batch_compute_environment.batch_compute has been changed
      ~ resource "aws_batch_compute_environment" "batch_compute" {
            id                       = "batch-compute"
            tags                     = {}
            # (9 unchanged attributes hidden)
    
          ~ compute_resources {
              ~ desired_vcpus      = 0 -> 2
                tags               = {}
                # (9 unchanged attributes hidden)
            }
        }
    

    Expected Behavior

    No changes should be reported, because these attributes are listed in ignore_changes.

    Actual Behavior

    Changes are reported.

    Steps to Reproduce

    Change any resource outside of Terraform and see that terraform apply reports the changes even when they should be ignored.

    Additional Context

    References

    • https://github.com/hashicorp/terraform/issues/28776
    • https://github.com/hashicorp/terraform/issues/28776#issuecomment-846547594
    • https://github.com/hashicorp/terraform/pull/28634#issuecomment-845934989
  • Problem with dependant module resolution if the path is relative

    Terraform Version

    0.12.13

    Terraform Configuration Files

    Here you can see two examples: https://github.com/xocasdashdash/terraform-test-case

    One works perfectly with 0.11, same one fails on 0.12.13 (and on dev too).

    Debug Output

    2019/11/09 10:56:07 [INFO] Terraform version: 0.12.13
    2019/11/09 10:56:07 [INFO] Go runtime version: go1.12.9
    2019/11/09 10:56:07 [INFO] CLI args: []string{"/usr/local/Cellar/tfenv/0.6.0/versions/0.12.13/terraform", "init"}
    2019/11/09 10:56:07 [DEBUG] Attempting to open CLI config file: /Users/joaquin.fernandez/.terraformrc
    2019/11/09 10:56:07 [DEBUG] File doesn't exist, but doesn't need to. Ignoring.
    2019/11/09 10:56:07 [DEBUG] checking for credentials in "/Users/joaquin.fernandez/.terraform.d/plugins"
    2019/11/09 10:56:07 [DEBUG] checking for credentials in "/Users/joaquin.fernandez/.terraform.d/plugins/darwin_amd64"
    2019/11/09 10:56:07 [INFO] CLI command args: []string{"init"}
    2019/11/09 10:56:07 [TRACE] ModuleInstaller: installing child modules for . into .terraform/modules
    Initializing modules...
    2019/11/09 10:56:07 [DEBUG] Module installer: begin a-module
    2019/11/09 10:56:07 [TRACE] ModuleInstaller: Module installer: a-module <nil> already installed in .terraform/modules/a-module
    2019/11/09 10:56:07 [DEBUG] Module installer: begin a-module.b_module
    2019/11/09 10:56:07 [TRACE] ModuleInstaller: Module installer: a-module.b_module <nil> already installed in /Users/joaquin.fernandez/projects/personal/terraform-test/not-works-on-tf-0.12.13/modules/a-module/b-module
    2019/11/09 10:56:07 [DEBUG] Module installer: begin a-module.b_module.c_module
    2019/11/09 10:56:07 [TRACE] ModuleInstaller: Module installer: a-module.b_module.c_module <nil> already installed in /Users/joaquin.fernandez/projects/personal/terraform-test/not-works-on-tf-0.12.13/modules/a-module/c-module
    2019/11/09 10:56:07 [DEBUG] Module installer: begin a-module.d_module
    2019/11/09 10:56:07 [TRACE] ModuleInstaller: a-module.d_module has local path "../d-module/"
    2019/11/09 10:56:07 [TRACE] ModuleInstaller: a-module.d_module uses directory from parent: .terraform/modules/d-module
    2019/11/09 10:56:07 [DEBUG] Module installer: a-module.d_module installed at
    2019/11/09 10:56:07 [TRACE] modsdir: writing modules manifest to .terraform/modules/modules.json
    - a-module.d_module in
    
    Error: Unreadable module directory
    
    Unable to evaluate directory symlink: lstat .terraform/modules/d-module: no
    such file or directory
    
    
    Error: Failed to read module directory
    
    Module directory  does not exist or cannot be read.
    
    
    Error: Unreadable module directory
    
    Unable to evaluate directory symlink: lstat .terraform/modules/d-module: no
    such file or directory
    
    
    Error: Failed to read module directory
    
    

    Expected Behavior

    It should resolve to the correct module path for the "d-module".

    Actual Behavior

    It does not. But if I change the path to use a local symlink, add a double "//" on the last folder before the module folder ("//a-module"), and set up a symlink from the module to the parent folder, it does work correctly.

    Steps to Reproduce

    Run terraform init in each of the three folders with the last working version (0.11.14 and 0.12.13).

    Additional Context

    I've tried to fix it myself and I think the fix should go to this function: https://github.com/hashicorp/terraform/blob/6f66aad03262441521829ca3a678da2bb6bf51d9/internal/initwd/module_install.go#L226

    I'm going to keep trying to make it work, but I believe a bigger change will be needed to get this working in all cases.

  • vSphere Provider: Mapping out the Next Steps

    Wanted to kick off a higher level discussion of what needs to be done on the vSphere provider and in what order.

    • What are the important missing resources?
    • Are there any enhancements that need to be made to the existing functionality?
    • What do we need to do to ensure the provider works with all common versions of vSphere in the wild?

    Pinging @tkak and @mkuzmin to chime in as well as anybody else with interest/knowledge in the community.

  • Proof-of-concept only: replace "any" type constraint placeholder with "inferred"

    These changes are just an experiment with the idea of replacing the "any" type constraint placeholder with another keyword, "inferred", which has exactly the same functionality but is more explicit about what it represents.

    Since adding any in Terraform v0.12, it's become a bit of an attractive nuisance: its name makes people think it represents full dynamic typing, but it really represents automatic inference of a single exact type. For simple situations the automatic inference does something essentially equivalent to full dynamic typing, so new module authors will often try it and see that it seems to work as they expected, even though they have made an incorrect assumption about its purpose. They then only run into trouble later, when their module is in real-world use and it's become hard to revise the design without breaking backward compatibility.

    This PR is just trying out one possible idea for how to address this. It includes the following:

    • Terraform will accept the keyword inferred in any location where the any placeholder was previously valid, with exactly the same meaning and resulting behavior.
    • Terraform will emit a warning if a module uses any, recommending to adopt inferred instead.
    • terraform fmt will automatically rewrite any to inferred, to make it easy to migrate and thus silence the warnings.

    This is not viable to ship as-is and is not intended to be. The goal here is only to evaluate the technical complexity of making this change, which seems to be relatively light.

    If we did want to do something in this direction in a future release, I expect we'd want to roll it out more gradually rather than all in one go like this.

    Specifically, I'd recommend to make any and inferred exactly equivalent (no deprecation warnings) and include the terraform fmt change for at least one whole minor release before explicitly deprecating any, so that there is a suitable window for module authors to migrate before their modules start generating warnings. We may choose to increase that window over multiple minor releases to ease the tradeoff between ending support for earlier Terraform versions (that won't accept inferred at all) or generating noisy warnings on newer versions of Terraform.

    Updating the docs to primarily describe inferred and to mention any only as a deprecated feature, along with the terraform fmt change, would hopefully go a long way to discourage using any for totally new modules. But we also know that new Terraform users often use existing public modules as a foundation for their learning and so long-deprecated patterns tend to stick around as long as there are highly-visible public modules still using them, and so the effectiveness of this change would be limited as long as there isn't an incentive to update existing modules to use the new keyword.

    This is just here to illustrate one possible path forward. There's no plan to do anything real with this right now, and a final plan in this area might involve doing something entirely different than what I tried here.
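
    The misunderstanding described above can be shown with a small example; the variable name is illustrative:

    ```hcl
    variable "settings" {
      # "any" is a placeholder for a single exact type inferred from the
      # caller's value: if the caller passes {a = 1}, the inferred type is
      # object({a = number}), not a dynamically-typed value that can vary.
      type = any
    }
    ```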

  • Single Nesting Mode Blocks Not Null in PlanResourceChange ProposedNewState

    Terraform Version

    Terraform v1.3.6
    on darwin_arm64
    

    Terraform Configuration Files

    First apply:

    resource "hashicups_order" "test" {
      myblock {
        optional = false
        optional_int = 10
      }
    }
    

    Second apply:

    resource "hashicups_order" "test" {}
    

    Debug Output

    Please reach out if you need this.

    Expected Behavior

    When applying the second configuration without the single nesting mode block, the proposed new state for the block should be null to match the null configuration, causing the plan to succeed without provider-side modification.

    Actual Behavior

    Terraform returns an error due to the proposed new state not being null:

    Error: Provider produced invalid plan
            
    Provider "registry.terraform.io/hashicorp/hashicups" planned an invalid value
    for hashicups_order.test.myblock: planned for existence but config wants
    absence.
    
    This is a bug in the provider, which should be reported in the provider's own
    issue tracker
    

    Using the TF_LOG_SDK_PROTO_DATA_DIR environment variable, such as TF_LOG_SDK_PROTO_DATA_DIR=/tmp, will save files containing MessagePack data from the protocol before it reaches terraform-plugin-framework or provider logic. Viewing those files via https://github.com/wader/fq shows the disparity between the configuration and proposed new state data sent during PlanResourceChange.

    ❯ fq -d msgpack tovalue 1672845198527_PlanResourceChange_Request_Config.msgpack
    {
      "length": 2,
      "pairs": [
        {
          "key": {
            "length": 2,
            "type": "fixstr",
            "value": "id"
          },
          "value": {
            "type": "nil",
            "value": null
          }
        },
        {
          "key": {
            "length": 7,
            "type": "fixstr",
            "value": "myblock"
          },
          "value": {
            "type": "nil",
            "value": null
          }
        }
      ],
      "type": "fixmap"
    }
    
    ❯ fq -d msgpack tovalue 1672845198527_PlanResourceChange_Request_ProposedNewState.msgpack
    {
      "length": 2,
      "pairs": [
        {
          "key": {
            "length": 2,
            "type": "fixstr",
            "value": "id"
          },
          "value": {
            "length": 1,
            "type": "fixstr",
            "value": "1"
          }
        },
        {
          "key": {
            "length": 7,
            "type": "fixstr",
            "value": "myblock"
          },
          "value": {
            "length": 2,
            "pairs": [
              {
                "key": {
                  "length": 8,
                  "type": "fixstr",
                  "value": "optional"
                },
                "value": {
                  "type": "false",
                  "value": false
                }
              },
              {
                "key": {
                  "length": 12,
                  "type": "fixstr",
                  "value": "optional_int"
                },
                "value": {
                  "type": "positive_fixint",
                  "value": 10
                }
              }
            ],
            "type": "fixmap"
          }
        }
      ],
      "type": "fixmap"
    }
    

    Please note if you want to create these files yourself, you likely need https://github.com/hashicorp/terraform-plugin-go/pull/245, to prevent the files from being overwritten across acceptance test steps since the file naming is not time granular enough.

    If the provider logic manually modifies the planned new state to match the configuration when it is null, then the Terraform error goes away.

    // Refer also to the framework issue, which has a schema-defined
    // plan modifier workaround in the comments. This is just a little more
    // copy-pastable into the reproduction codebase.
    func (r *orderResource) ModifyPlan(ctx context.Context, req resource.ModifyPlanRequest, resp *resource.ModifyPlanResponse) {
    	if req.State.Raw.IsNull() {
    		return
    	}
    
    	if req.Plan.Raw.IsNull() {
    		return
    	}
    
    	var config, plan orderResourceModel
    
    	resp.Diagnostics.Append(req.Config.Get(ctx, &config)...)
    	resp.Diagnostics.Append(req.Plan.Get(ctx, &plan)...)
    
    	if resp.Diagnostics.HasError() {
    		return
    	}
    
    	if config.MyBlock == nil {
    		plan.MyBlock = nil
    	}
    
    	resp.Diagnostics.Append(resp.Plan.Set(ctx, &plan)...)
    }
    

    Steps to Reproduce

    1. gh repo clone mvantellingen/terraform-pf-testcase
    2. cd terraform-pf-testcase
    3. TF_ACC=1 go test -count=1 -v ./...

    Additional Context

    Schema definition in terraform-plugin-framework:

    func (r *orderResource) Schema(_ context.Context, _ resource.SchemaRequest, resp *resource.SchemaResponse) {
    	resp.Schema = schema.Schema{
    		Description: "Manages an order.",
    		Attributes: map[string]schema.Attribute{
    			"id": schema.StringAttribute{
    				Description: "Numeric identifier of the order.",
    				Computed:    true,
    				PlanModifiers: []planmodifier.String{
    					stringplanmodifier.UseStateForUnknown(),
    				},
    			},
    		},
    		Blocks: map[string]schema.Block{
    			"myblock": schema.SingleNestedBlock{
    				Attributes: map[string]schema.Attribute{
    					"optional": schema.BoolAttribute{
    						Optional: true,
    					},
    					"optional_int": schema.Int64Attribute{
    						Optional: true,
    					},
    				},
    			},
    		},
    	}
    }
    

    References

    • https://github.com/hashicorp/terraform-plugin-framework/issues/603
  • segmentation fault

    Terraform Version

    Hello,
    I've installed terraform as described here >>> https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli
    
    When I run "terraform -version" I get the following error:
    
    $ terraform -version
    Segmentation fault
    
    Thank & Regards
    

    Terraform Configuration Files

    ...terraform config...
    

    Debug Output

    sudo apt-get update && sudo apt-get install -y gnupg software-properties-common

    wget -O- https://apt.releases.hashicorp.com/gpg |
    gpg --dearmor |
    sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg

    gpg --no-default-keyring \
      --keyring /usr/share/keyrings/hashicorp-archive-keyring.gpg \
      --fingerprint

    echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list

    sudo apt update

    sudo apt-get install terraform

    Expected Behavior

    a working binary

    Actual Behavior

    a faulty binary

    Steps to Reproduce

    sudo apt-get update && sudo apt-get install -y gnupg software-properties-common

    wget -O- https://apt.releases.hashicorp.com/gpg | \
      gpg --dearmor | \
      sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg

    gpg --no-default-keyring \
      --keyring /usr/share/keyrings/hashicorp-archive-keyring.gpg \
      --fingerprint

    echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | \
      sudo tee /etc/apt/sources.list.d/hashicorp.list

    sudo apt update

    sudo apt-get install terraform
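A few quick checks can help narrow down a binary that crashes immediately on startup. These diagnostic commands are a suggested sketch, not part of the original report; a truncated download or an architecture mismatch between the installed package and the host is one plausible class of cause.

```shell
# Suggested diagnostics (hypothetical; not from the original report).
BIN="$(command -v terraform || true)"
echo "terraform found at: ${BIN:-<not installed>}"

# The installed binary should match the machine architecture printed here
# (e.g. x86_64, aarch64).
uname -m

# If the "file" utility is available, inspect the binary type directly.
if [ -n "$BIN" ] && command -v file >/dev/null 2>&1; then
  file "$BIN"
fi
```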

    Additional Context

    No response

    References

    No response

  • Reverse the order of conversion/defaults, and update HCL with more flexible defaults package

    Reverse the order of conversion/defaults, and update HCL with more flexible defaults package

    Note: this PR has failing tests. I've prepped it early as a demonstration, but we do not intend to merge it until a downstream fix for HCL has been released.

    • (eventually) Update to latest version of HCL, with flexible defaults package.
    • Add test case that demonstrates failure of conversion before defaults with any type constraint.
    • Apply defaults before conversion, in line with new HCL version.

    Fixes #32396

    Target Release

    1.3.8 / 1.3.9 / 1.4.0

    Draft CHANGELOG entry

    BUG FIXES

    • Fix terraform crash when applying defaults into a collection with dynamic type constraint.
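To illustrate the class of configuration involved, here is a sketch (not the exact reproduction from #32396, which is not quoted in this entry) of defaults being applied inside a collection whose element type includes a dynamic (`any`) constraint:

```hcl
variable "settings" {
  type = list(object({
    name  = string
    # A default applied into an attribute with a dynamic (any) type
    # constraint -- the combination this fix addresses.
    extra = optional(any, {})
  }))
  default = [{ name = "example" }]
}
```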
  • Add function descriptions

    Add function descriptions

    This PR adds descriptions for all Terraform functions and is the first step in enabling machine-readable function signatures (more: TF-508: Machine-readable function signatures).

    It is planned to export the descriptions via a new terraform metadata functions --json command. The first consumer of the JSON output will be the Terraform language server, to provide function signature information inside the editor. After that, the docs website might be another potential consumer, making the description list the single source of truth.
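A consumer such as the language server could decode that JSON roughly along these lines. The payload shape below is an assumption for illustration only; the real format is defined by TF-508 and may differ.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// functionSignature mirrors one plausible (assumed) entry shape of the
// planned `terraform metadata functions --json` output.
type functionSignature struct {
	Description string `json:"description"`
	ReturnType  string `json:"return_type"`
}

// parseFunctions decodes a payload of the assumed shape into a
// name -> signature map.
func parseFunctions(data []byte) (map[string]functionSignature, error) {
	var doc struct {
		FunctionSignatures map[string]functionSignature `json:"function_signatures"`
	}
	if err := json.Unmarshal(data, &doc); err != nil {
		return nil, err
	}
	return doc.FunctionSignatures, nil
}

// sampleJSON is a hypothetical payload matching the assumed shape above.
var sampleJSON = []byte(`{"function_signatures":{"abs":{"description":"abs returns the absolute value of the given number.","return_type":"number"}}}`)

func main() {
	sigs, err := parseFunctions(sampleJSON)
	if err != nil {
		panic(err)
	}
	fmt.Println(sigs["abs"].Description)
}
```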

    Instead of iterating over the functions list and using WithDescription for each one, I considered another approach: referring to description entries from within each function definition. But that approach requires edits in multiple places, and matching slice indices whenever one wants to add a parameter description.

    Target Release

    1.4.x

  • Fix for no json output of state locking actions for --json flag

    Fix for no json output of state locking actions for --json flag

    Fixes #32265

    Target Release

    1.4.x

    Draft CHANGELOG

    BUG FIXES

    • State locking: when the `--json` flag is passed to a command that locks the state (such as apply or plan), the state locker's output is emitted in JSON format; otherwise it remains human-readable.