Terraform Provider for Confluent Cloud

The Terraform Confluent Cloud provider is a plugin for Terraform that allows for the lifecycle management of Confluent Cloud resources. This provider is maintained by Confluent.

Quick Starts

Documentation

Full documentation is available on the Terraform website.

License

Copyright 2021 Confluent Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Owner
Confluent Inc.
Real-time streams powered by Apache Kafka®
Comments
  • Terraform script fails, error indicates plugin crashed

    I'm getting this error while running the Terraform script below to provision a topic in a Confluent Kafka cluster, passing the cluster_id and secrets as inputs.

    ╷
    │ Error: Plugin did not respond
    │ 
    │   with confluentcloud_kafka_topic.topics["confluent-test-topic"],
    │   on topics.tf line 1, in resource "confluentcloud_kafka_topic" "topics":
    │    1: resource "confluentcloud_kafka_topic" "topics" {
    │ 
    │ The plugin encountered an error, and failed to respond to the
    │ plugin.(*GRPCProvider).ApplyResourceChange call. The plugin logs may
    │ contain more details.
    ╵
    Releasing state lock. This may take a few moments...
    
    Stack trace from the terraform-provider-confluentcloud_0.5.0 plugin:
    
    panic: reflect: call of reflect.Value.FieldByName on zero Value
    
    goroutine 67 [running]:
    reflect.flag.mustBe(...)
            /usr/local/golang/1.16/go/src/reflect/value.go:221
    reflect.Value.FieldByName(0x0, 0x0, 0x0, 0x104ad1e0e, 0x6, 0x0, 0x1b6, 0x0)
            /usr/local/golang/1.16/go/src/reflect/value.go:903 +0x190
    github.com/confluentinc/terraform-provider-ccloud/internal/provider.createDiagnosticsWithDetails(0x104d8bcb8, 0x14000332780, 0x1400008f588, 0x3, 0x3)
            src/github.com/confluentinc/terraform-provider-confluentcloud/internal/provider/utils.go:304 +0x240
    github.com/confluentinc/terraform-provider-ccloud/internal/provider.kafkaTopicCreate(0x104d9b188, 0x1400009d020, 0x14000689480, 0x104cc3ea0, 0x14000182540, 0x140006cea80, 0x14000689300, 0x10482b700)
            src/github.com/confluentinc/terraform-provider-confluentcloud/internal/provider/resource_kafka_topic.go:141 +0x374
    github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).create(0x14000181500, 0x104d9b118, 0x14000416880, 0x14000689480, 0x104cc3ea0, 0x14000182540, 0x0, 0x0, 0x0)
            pkg/mod/github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:341 +0x118
    github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).Apply(0x14000181500, 0x104d9b118, 0x14000416880, 0x14000328680, 0x14000689300, 0x104cc3ea0, 0x14000182540, 0x0, 0x0, 0x0, ...)
            pkg/mod/github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:467 +0x4ec
    github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).ApplyResourceChange(0x1400000d470, 0x104d9b118, 0x14000416880, 0x14000392550, 0x104adad89, 0x12, 0x0)
            pkg/mod/github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/grpc_provider.go:977 +0x870
    github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.(*server).ApplyResourceChange(0x14000688200, 0x104d9b1c0, 0x14000416880, 0x14000198000, 0x0, 0x0, 0x0)
            pkg/mod/github.com/hashicorp/[email protected]/tfprotov5/tf5server/server.go:603 +0x338
    github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_ApplyResourceChange_Handler(0x104d3ef20, 0x14000688200, 0x104d9b1c0, 0x140005cc8a0, 0x1400009c6c0, 0x0, 0x104d9b1c0, 0x140005cc8a0, 0x1400066a600, 0x2e0)
            pkg/mod/github.com/hashicorp/[email protected]/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:380 +0x1c8
    google.golang.org/grpc.(*Server).processUnaryRPC(0x140002be8c0, 0x104da2c38, 0x14000092d80, 0x140006de100, 0x140006865d0, 0x105235900, 0x0, 0x0, 0x0)
            pkg/mod/google.golang.org/[email protected]/server.go:1210 +0x3e8
    google.golang.org/grpc.(*Server).handleStream(0x140002be8c0, 0x104da2c38, 0x14000092d80, 0x140006de100, 0x0)
            pkg/mod/google.golang.org/[email protected]/server.go:1533 +0xa50
    google.golang.org/grpc.(*Server).serveStreams.func1.2(0x140003021b0, 0x140002be8c0, 0x104da2c38, 0x14000092d80, 0x140006de100)
            pkg/mod/google.golang.org/[email protected]/server.go:871 +0x94
    created by google.golang.org/grpc.(*Server).serveStreams.func1
            pkg/mod/google.golang.org/[email protected]/server.go:869 +0x1f8
    
    Error: The terraform-provider-confluentcloud_0.5.0 plugin crashed!
    
    This is always indicative of a bug within the plugin. It would be immensely
    helpful if you could report the crash with the plugin's maintainers so that it
    can be fixed. The output above should help diagnose the issue.
    
  • v0.4.0/v0.5.0 - Unable to create ACL after creating topic (401 error) - basic cluster

    1. Create the service accounts using the Cloud Keys. - WORKS AS EXPECTED
    2. Create a topic with the cluster key. - WORKS AS EXPECTED

    However, when I try to add an ACL to the topic for the created service account, using the same cluster key, I get the error below:

    401 Unauthorized
    │ 
    │   with confluentcloud_kafka_acl.mynamespace-myapp-sample-private-producer,
    │   on topic-sample.tf line 23, in resource "confluentcloud_kafka_acl" "mynamespace-myapp-sample-private-producer":
    │   23: resource "confluentcloud_kafka_acl" "mynamespace-myapp-sample-private-producer" {
    

    Below is the setup

    resource "confluentcloud_kafka_topic" "mynamespace-myapp-sample-private" {
      kafka_cluster    = var.azure_sandbox_cluster_id
      topic_name       = var.mynamespace-myapp-sample-private_topic
      partitions_count = 3
      http_endpoint    = var.azure_sandbox_http_endpoint
      config = {
        "cleanup.policy"      = var.topic_delete_cleanup_policy,
        "max.message.bytes"   = var.topic_max_message_size_bytes,
        "retention.ms"        = var.topic_retention_time_day_ms,
        "min.insync.replicas" = var.topic_min_insync_replicas
      }
    
      credentials {
        key    = var.cluster_api_key
        secret = var.cluster_api_secret
      }
    }
    
    
    ## Producers
    #  --------------------------------------------------------------
    # ACL (WRITE) for Producer
    resource "confluentcloud_kafka_acl" "mynamespace-myapp-sample-private-producer" {
      kafka_cluster = var.azure_sandbox_cluster_id
      resource_type = "TOPIC"
      resource_name = confluentcloud_kafka_topic.mynamespace-myapp-sample-private.topic_name
      pattern_type  = "LITERAL"
      principal     = "User:${var.mynamespace-myproducerapp-sa_id}"
      host          = "*"
      operation     = "WRITE"
      permission    = "ALLOW"
      http_endpoint = var.azure_sandbox_http_endpoint
    
      credentials {
        key    = var.cluster_api_key
        secret = var.cluster_api_secret
      }
    }
    
    

    What is causing the 401 error during ACL creation, when the cluster is the same and the topic was provisioned using the same cluster keys?

    Much appreciated
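
    One way to narrow this down (a hypothetical diagnostic, assuming the endpoint and credentials are held in the variables shown above) is to call the cluster's Kafka REST v3 ACLs endpoint directly with the same cluster API key, bypassing Terraform entirely:

    ```shell
    # Hypothetical check: query the Kafka REST v3 ACLs endpoint with the same
    # cluster API key Terraform uses. A 401 here would confirm the credentials
    # themselves are being rejected, independent of the provider.
    curl -s -o /dev/null -w "%{http_code}\n" \
      -u "$CLUSTER_API_KEY:$CLUSTER_API_SECRET" \
      "$HTTP_ENDPOINT/kafka/v3/clusters/$CLUSTER_ID/acls"
    ```

    If this returns 200 while Terraform still gets a 401, the problem is likely in how the provider sends the credentials rather than in the key itself.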

  • v0.4.0 - Error: Provider produced inconsistent result after apply

    We are trying to upgrade the provider from version 0.2.0 to 0.4.0, because on 0.2.0 we are getting Error: 429 Too Many Requests.

    On apply we get the Plan: 15 to add, 0 to change, 0 to destroy.

    When we confirm the apply, the configuration starts to drift: all the resources are created in Confluent Cloud, but some of them are not reflected in the state file.

    Within the process we get the following errors:

    │ Error: Provider produced inconsistent result after apply
    │
    │ When applying changes to module.kafka_topics.confluentcloud_kafka_topic.kafka_topics["ingest_metrics"], provider
    │ "module.sentry_kafka_topics.provider[\"registry.terraform.io/confluentinc/confluentcloud\"]" produced an unexpected new value: Root resource was present, but now absent.
    │ 
    │ This is a bug in the provider, which should be reported in the provider's own issue tracker.
    

    and

    │ Error: 404 Not Found: 
    │ 
    │   with module.kafka_topics.confluentcloud_kafka_topic.kafka_topics["events_subscription_results"],
    │   on ../../../../confluent-kafka-topics/main.tf line 1, in resource "confluentcloud_kafka_topic" "kafka_topics":
    │    1: resource "confluentcloud_kafka_topic" "kafka_topics" {
    

    What can we do to get around this issue?
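
    One possible workaround (a sketch only; the exact import ID format should be checked against the provider documentation for your version) is to import the topics that were created in Confluent Cloud but missed by the state file, then verify the plan is clean:

    ```shell
    # Hypothetical reconciliation: bring an already-created topic back under
    # Terraform management, then confirm no further changes are planned.
    terraform import \
      'module.kafka_topics.confluentcloud_kafka_topic.kafka_topics["ingest_metrics"]' \
      '<import-id>'   # import ID format: see the provider docs for your version
    terraform plan
    ```

    Repeat for each resource reported as "Root resource was present, but now absent."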

  • Stack trace creating confluentcloud_kafka_acl resource

    Hey Folks,

    I'm getting the following stack trace trying to create a Kafka ACL using Confluent Terraform provider v0.5.0:

    confluentcloud_kafka_acl.describe_cluster: Creating...
    ╷
    │ Error: Request cancelled
    │ 
    │   with confluentcloud_kafka_acl.describe_cluster,
    │   on stack_trace.tf line 22, in resource "confluentcloud_kafka_acl" "describe_cluster":
    │   22: resource "confluentcloud_kafka_acl" "describe_cluster" {
    │ 
    │ The plugin.(*GRPCProvider).ApplyResourceChange request was cancelled.
    ╵
    
    Stack trace from the terraform-provider-confluentcloud_0.5.0 plugin:
    
    panic: reflect: call of reflect.Value.FieldByName on zero Value
    
    goroutine 66 [running]:
    reflect.flag.mustBe(...)
    	/usr/local/golang/1.16/go/src/reflect/value.go:221
    reflect.Value.FieldByName(0x0, 0x0, 0x0, 0xdd983e, 0x6, 0x0, 0x140, 0x12e)
    	/usr/local/golang/1.16/go/src/reflect/value.go:903 +0x25a
    github.com/confluentinc/terraform-provider-ccloud/internal/provider.createDiagnosticsWithDetails(0xeda2e0, 0xc000596140, 0xc000267470, 0x3, 0x3)
    	src/github.com/confluentinc/terraform-provider-confluentcloud/internal/provider/utils.go:304 +0x2c5
    github.com/confluentinc/terraform-provider-ccloud/internal/provider.kafkaAclCreate(0xee8fe8, 0xc00060cc60, 0xc0001cae80, 0xd25360, 0xc00022e930, 0xc0001fe3e0, 0x868caa, 0xc0001cad00)
    	src/github.com/confluentinc/terraform-provider-confluentcloud/internal/provider/resource_kafka_acl.go:179 +0x547
    github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).create(0xc0003b10a0, 0xee8f78, 0xc00009e4c0, 0xc0001cae80, 0xd25360, 0xc00022e930, 0x0, 0x0, 0x0)
    	pkg/mod/github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:341 +0x17f
    github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).Apply(0xc0003b10a0, 0xee8f78, 0xc00009e4c0, 0xc0001d2680, 0xc0001cad00, 0xd25360, 0xc00022e930, 0x0, 0x0, 0x0, ...)
    	pkg/mod/github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:467 +0x67b
    github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).ApplyResourceChange(0xc000304120, 0xee8f78, 0xc00009e4c0, 0xc00014b090, 0xde2957, 0x12, 0x0)
    	pkg/mod/github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/grpc_provider.go:977 +0xacf
    github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.(*server).ApplyResourceChange(0xc000788080, 0xee9020, 0xc00009e4c0, 0xc000242000, 0x0, 0x0, 0x0)
    	pkg/mod/github.com/hashicorp/[email protected]/tfprotov5/tf5server/server.go:603 +0x465
    github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_ApplyResourceChange_Handler(0xda0580, 0xc000788080, 0xee9020, 0xc0001181b0, 0xc00060c360, 0x0, 0xee9020, 0xc0001181b0, 0xc0001be600, 0x2f8)
    	pkg/mod/github.com/hashicorp/[email protected]/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:380 +0x214
    google.golang.org/grpc.(*Server).processUnaryRPC(0xc0002f6540, 0xef0618, 0xc0004b4a80, 0xc0001ba000, 0xc0002a25a0, 0x1396a80, 0x0, 0x0, 0x0)
    	pkg/mod/google.golang.org/[email protected]/server.go:1210 +0x52b
    google.golang.org/grpc.(*Server).handleStream(0xc0002f6540, 0xef0618, 0xc0004b4a80, 0xc0001ba000, 0x0)
    	pkg/mod/google.golang.org/[email protected]/server.go:1533 +0xd0c
    google.golang.org/grpc.(*Server).serveStreams.func1.2(0xc0003101d0, 0xc0002f6540, 0xef0618, 0xc0004b4a80, 0xc0001ba000)
    	pkg/mod/google.golang.org/[email protected]/server.go:871 +0xab
    created by google.golang.org/grpc.(*Server).serveStreams.func1
    	pkg/mod/google.golang.org/[email protected]/server.go:869 +0x1fd
    
    Error: The terraform-provider-confluentcloud_0.5.0 plugin crashed!
    
    This is always indicative of a bug within the plugin. It would be immensely
    helpful if you could report the crash with the plugin's maintainers so that it
    can be fixed. The output above should help diagnose the issue.
    
    

    How to reproduce?

    1. Create a Terraform file with the following content:
    terraform {
      required_providers {
        confluentcloud = {
          source  = "confluentinc/confluentcloud"
          version = "0.5.0"
        }
      }
    }
    
    provider "confluentcloud" {}
    
    resource "confluentcloud_environment" "stack_trace" {
      display_name = "stack_trace"
    }
    
    resource "confluentcloud_service_account" "stack_trace" {
      display_name = "stack-trace"
      description  = "Service account for stack trace reproduction"
    }
    
    resource "confluentcloud_kafka_cluster" "stack_trace" {
      display_name = "default"
      availability = "SINGLE_ZONE"
      cloud        = "GCP"
      region       = "us-west4"
      basic {}
    
      environment {
        id = confluentcloud_environment.stack_trace.id
      }
    }
    
    output "environment_id" {
      value = confluentcloud_environment.stack_trace.id
    }
    
    output "cluster_id" {
      value = confluentcloud_kafka_cluster.stack_trace.id
    }
    
    output "service_account_id" {
      value = confluentcloud_service_account.stack_trace.id
    }
    
    2. Run terraform apply:
    $ terraform apply
    confluentcloud_environment.default: Refreshing state... [id=env-3y5do]
    confluentcloud_service_account.tessitura_integration: Refreshing state... [id=sa-22g2dq]
    confluentcloud_kafka_cluster.default: Refreshing state... [id=lkc-w7769j]
    
    Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
      + create
    
    Terraform will perform the following actions:
    
      # confluentcloud_environment.stack_trace will be created
      + resource "confluentcloud_environment" "stack_trace" {
          + display_name = "stack_trace"
          + id           = (known after apply)
        }
    
      # confluentcloud_kafka_cluster.stack_trace will be created
      + resource "confluentcloud_kafka_cluster" "stack_trace" {
          + api_version        = (known after apply)
          + availability       = "SINGLE_ZONE"
          + bootstrap_endpoint = (known after apply)
          + cloud              = "GCP"
          + display_name       = "default"
          + http_endpoint      = (known after apply)
          + id                 = (known after apply)
          + kind               = (known after apply)
          + rbac_crn           = (known after apply)
          + region             = "us-west4"
    
          + basic {}
    
          + environment {
              + id = (known after apply)
            }
        }
    
      # confluentcloud_service_account.stack_trace will be created
      + resource "confluentcloud_service_account" "stack_trace" {
          + api_version  = (known after apply)
          + description  = "Service account for stack trace reproduction"
          + display_name = "stack-trace"
          + id           = (known after apply)
          + kind         = (known after apply)
        }
    
    Plan: 3 to add, 0 to change, 0 to destroy.
    
    Changes to Outputs:
      + cluster_id         = (known after apply)
      + environment_id     = (known after apply)
      + service_account_id = (known after apply)
    
    Do you want to perform these actions?
      Terraform will perform the actions described above.
      Only 'yes' will be accepted to approve.
    
      Enter a value: yes
    
    confluentcloud_service_account.stack_trace: Creating...
    confluentcloud_environment.stack_trace: Creating...
    confluentcloud_environment.stack_trace: Creation complete after 2s [id=env-zoy90]
    confluentcloud_kafka_cluster.stack_trace: Creating...
    confluentcloud_service_account.stack_trace: Creation complete after 2s [id=sa-rrm5k0]
    confluentcloud_kafka_cluster.stack_trace: Creation complete after 8s [id=lkc-w77kkg]
    
    Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
    
    Outputs:
    
    cluster_id = "lkc-w77kkg"
    environment_id = "env-zoy90"
    service_account_id = "sa-rrm5k0"
    
    3. Create API Key:
    confluent api-key create \
        --service-account $(terraform output -raw service_account_id) \
        --environment $(terraform output -raw environment_id) \
        --resource $(terraform output -raw cluster_id)
    
    It may take a couple of minutes for the API key to be ready.
    Save the API key and secret. The secret is not retrievable later.
    +---------+------------------------------------------------------------------+
    | API Key | XXX_api_key_here_XXX                                             |
    | Secret  | XXX_secret_here_XXX                                              |
    +---------+------------------------------------------------------------------+
    
    4. Using the API Key and Secret from the previous command's output, add the following code to the Terraform file created in step 1:
    resource "confluentcloud_kafka_acl" "describe_cluster" {
      kafka_cluster = confluentcloud_kafka_cluster.stack_trace.id
      http_endpoint = confluentcloud_kafka_cluster.stack_trace.http_endpoint
      resource_type = "CLUSTER"
      resource_name = "kafka-cluster"
      pattern_type  = "LITERAL"
      principal     = "User:${confluentcloud_service_account.stack_trace.id}"
      host          = "*"
      operation     = "DESCRIBE"
      permission    = "ALLOW"
    
      credentials {
        key    = "XXX_api_key_here_XXX"
        secret = "XXX_secret_here_XXX"
      }
    }
    
    5. Finally, run terraform apply again to get the stack trace:
    $ terraform apply
    confluentcloud_service_account.tessitura_integration: Refreshing state... [id=sa-22g2dq]
    confluentcloud_environment.stack_trace: Refreshing state... [id=env-zoy90]
    confluentcloud_environment.default: Refreshing state... [id=env-3y5do]
    confluentcloud_service_account.stack_trace: Refreshing state... [id=sa-rrm5k0]
    confluentcloud_kafka_cluster.stack_trace: Refreshing state... [id=lkc-w77kkg]
    confluentcloud_kafka_cluster.default: Refreshing state... [id=lkc-w7769j]
    
    Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
      + create
    
    Terraform will perform the following actions:
    
      # confluentcloud_kafka_acl.describe_cluster will be created
      + resource "confluentcloud_kafka_acl" "describe_cluster" {
          + host          = "*"
          + http_endpoint = "https://pkc-6ojv2.us-west4.gcp.confluent.cloud:443"
          + id            = (known after apply)
          + kafka_cluster = "lkc-w77kkg"
          + operation     = "DESCRIBE"
          + pattern_type  = "LITERAL"
          + permission    = "ALLOW"
          + principal     = "User:sa-rrm5k0"
          + resource_name = "kafka-cluster"
          + resource_type = "CLUSTER"
    
          + credentials {
              + key    = (sensitive value)
              + secret = (sensitive value)
            }
        }
    
    Plan: 1 to add, 0 to change, 0 to destroy.
    
    Do you want to perform these actions?
      Terraform will perform the actions described above.
      Only 'yes' will be accepted to approve.
    
      Enter a value: yes
    
    confluentcloud_kafka_acl.describe_cluster: Creating...
    ╷
    │ Error: Request cancelled
    │ 
    │   with confluentcloud_kafka_acl.describe_cluster,
    │   on stack_trace.tf line 22, in resource "confluentcloud_kafka_acl" "describe_cluster":
    │   22: resource "confluentcloud_kafka_acl" "describe_cluster" {
    │ 
    │ The plugin.(*GRPCProvider).ApplyResourceChange request was cancelled.
    ╵
    
    Stack trace from the terraform-provider-confluentcloud_0.5.0 plugin:
    
    panic: reflect: call of reflect.Value.FieldByName on zero Value
    
    goroutine 66 [running]:
    reflect.flag.mustBe(...)
    	/usr/local/golang/1.16/go/src/reflect/value.go:221
    reflect.Value.FieldByName(0x0, 0x0, 0x0, 0xdd983e, 0x6, 0x0, 0x140, 0x12e)
    	/usr/local/golang/1.16/go/src/reflect/value.go:903 +0x25a
    github.com/confluentinc/terraform-provider-ccloud/internal/provider.createDiagnosticsWithDetails(0xeda2e0, 0xc000596140, 0xc000267470, 0x3, 0x3)
    	src/github.com/confluentinc/terraform-provider-confluentcloud/internal/provider/utils.go:304 +0x2c5
    github.com/confluentinc/terraform-provider-ccloud/internal/provider.kafkaAclCreate(0xee8fe8, 0xc00060cc60, 0xc0001cae80, 0xd25360, 0xc00022e930, 0xc0001fe3e0, 0x868caa, 0xc0001cad00)
    	src/github.com/confluentinc/terraform-provider-confluentcloud/internal/provider/resource_kafka_acl.go:179 +0x547
    github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).create(0xc0003b10a0, 0xee8f78, 0xc00009e4c0, 0xc0001cae80, 0xd25360, 0xc00022e930, 0x0, 0x0, 0x0)
    	pkg/mod/github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:341 +0x17f
    github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).Apply(0xc0003b10a0, 0xee8f78, 0xc00009e4c0, 0xc0001d2680, 0xc0001cad00, 0xd25360, 0xc00022e930, 0x0, 0x0, 0x0, ...)
    	pkg/mod/github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:467 +0x67b
    github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).ApplyResourceChange(0xc000304120, 0xee8f78, 0xc00009e4c0, 0xc00014b090, 0xde2957, 0x12, 0x0)
    	pkg/mod/github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/grpc_provider.go:977 +0xacf
    github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.(*server).ApplyResourceChange(0xc000788080, 0xee9020, 0xc00009e4c0, 0xc000242000, 0x0, 0x0, 0x0)
    	pkg/mod/github.com/hashicorp/[email protected]/tfprotov5/tf5server/server.go:603 +0x465
    github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_ApplyResourceChange_Handler(0xda0580, 0xc000788080, 0xee9020, 0xc0001181b0, 0xc00060c360, 0x0, 0xee9020, 0xc0001181b0, 0xc0001be600, 0x2f8)
    	pkg/mod/github.com/hashicorp/[email protected]/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:380 +0x214
    google.golang.org/grpc.(*Server).processUnaryRPC(0xc0002f6540, 0xef0618, 0xc0004b4a80, 0xc0001ba000, 0xc0002a25a0, 0x1396a80, 0x0, 0x0, 0x0)
    	pkg/mod/google.golang.org/[email protected]/server.go:1210 +0x52b
    google.golang.org/grpc.(*Server).handleStream(0xc0002f6540, 0xef0618, 0xc0004b4a80, 0xc0001ba000, 0x0)
    	pkg/mod/google.golang.org/[email protected]/server.go:1533 +0xd0c
    google.golang.org/grpc.(*Server).serveStreams.func1.2(0xc0003101d0, 0xc0002f6540, 0xef0618, 0xc0004b4a80, 0xc0001ba000)
    	pkg/mod/google.golang.org/[email protected]/server.go:871 +0xab
    created by google.golang.org/grpc.(*Server).serveStreams.func1
    	pkg/mod/google.golang.org/[email protected]/server.go:869 +0x1fd
    
    Error: The terraform-provider-confluentcloud_0.5.0 plugin crashed!
    
    This is always indicative of a bug within the plugin. It would be immensely
    helpful if you could report the crash with the plugin's maintainers so that it
    can be fixed. The output above should help diagnose the issue.
    

    Additional information

    $ cat /etc/os-release 
    NAME="Ubuntu"
    VERSION="20.04.3 LTS (Focal Fossa)"
    ID=ubuntu
    ID_LIKE=debian
    PRETTY_NAME="Ubuntu 20.04.3 LTS"
    VERSION_ID="20.04"
    HOME_URL="https://www.ubuntu.com/"
    SUPPORT_URL="https://help.ubuntu.com/"
    BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
    PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
    VERSION_CODENAME=focal
    UBUNTU_CODENAME=focal
    
    $ terraform -v
    Terraform v1.1.3
    on linux_amd64
    + provider registry.terraform.io/confluentinc/confluentcloud v0.5.0
    
    Your version of Terraform is out of date! The latest version
    is 1.1.7. You can update by downloading from https://www.terraform.io/downloads.html
    
  • ACL creation crash on both 0.5.0 and 0.4.0

    I am attempting to create an ACL as follows (result from the plan):

      + create
    
    Terraform will perform the following actions:
    
      # confluentcloud_kafka_acl.terraform-cluster-acl-create will be created
      + resource "confluentcloud_kafka_acl" "terraform-cluster-acl-create" {
          + host          = "*"
          + http_endpoint = "https://pkc-xxxxxxy.us-east-1.aws.confluent.cloud:443"
          + id            = (known after apply)
          + kafka_cluster = "lkc-xxxxxx"
          + operation     = "CREATE"
          + pattern_type  = "LITERAL"
          + permission    = "ALLOW"
          + principal     = "User:sa-12p7n5"
          + resource_name = "kafka-cluster"
          + resource_type = "CLUSTER"
    
          + credentials {
              + key    = (sensitive value)
              + secret = (sensitive value)
            }
        }
    
    Plan: 1 to add, 0 to change, 0 to destroy.
    
    

    When I attempt to create the ACL using the Confluent Cloud provider 0.5.0, I get the following issue:

    │ Error: Plugin did not respond
    │ 
    │   with confluentcloud_kafka_acl.terraform-cluster-acl-create,
    │   on providers.tf line 22, in resource "confluentcloud_kafka_acl" "terraform-cluster-acl-create":
    │   22: resource "confluentcloud_kafka_acl" "terraform-cluster-acl-create" {
    │ 
    │ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ApplyResourceChange call. The plugin logs may contain more details.
    
    Stack trace from the terraform-provider-confluentcloud_0.5.0 plugin:
    
    panic: reflect: call of reflect.Value.FieldByName on zero Value
    
    goroutine 41 [running]:
    reflect.flag.mustBe(...)
    	/usr/local/golang/1.16/go/src/reflect/value.go:221
    reflect.Value.FieldByName(0x0, 0x0, 0x0, 0x19da363, 0x6, 0x0, 0x140, 0x12c)
    	/usr/local/golang/1.16/go/src/reflect/value.go:903 +0x25a
    github.com/confluentinc/terraform-provider-ccloud/internal/provider.createDiagnosticsWithDetails(0x1adb6a0, 0xc00038c500, 0xc000207470, 0x3, 0x3)
    	src/github.com/confluentinc/terraform-provider-confluentcloud/internal/provider/utils.go:304 +0x2c5
    github.com/confluentinc/terraform-provider-ccloud/internal/provider.kafkaAclCreate(0x1aea3e8, 0xc00054d860, 0xc000228b00, 0x1926000, 0xc0001cd110, 0xc00022f630, 0x146a3aa, 0xc000228980)
    	src/github.com/confluentinc/terraform-provider-confluentcloud/internal/provider/resource_kafka_acl.go:179 +0x547
    github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).create(0xc00042f0a0, 0x1aea378, 0xc000617840, 0xc000228b00, 0x1926000, 0xc0001cd110, 0x0, 0x0, 0x0)
    	pkg/mod/github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:341 +0x17f
    github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).Apply(0xc00042f0a0, 0x1aea378, 0xc000617840, 0xc00057bc70, 0xc000228980, 0x1926000, 0xc0001cd110, 0x0, 0x0, 0x0, ...)
    	pkg/mod/github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:467 +0x67b
    github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).ApplyResourceChange(0xc000392108, 0x1aea378, 0xc000617840, 0xc000396d20, 0x19e3224, 0x12, 0x0)
    	pkg/mod/github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/grpc_provider.go:977 +0xacf
    github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.(*server).ApplyResourceChange(0xc000784200, 0x1aea420, 0xc000617840, 0xc0001e0cb0, 0x0, 0x0, 0x0)
    	pkg/mod/github.com/hashicorp/[email protected]/tfprotov5/tf5server/server.go:603 +0x465
    github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_ApplyResourceChange_Handler(0x19a12e0, 0xc000784200, 0x1aea420, 0xc000197260, 0xc00054cf60, 0x0, 0x1aea420, 0xc000197260, 0xc00055cc00, 0x2f6)
    	pkg/mod/github.com/hashicorp/[email protected]/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:380 +0x214
    google.golang.org/grpc.(*Server).processUnaryRPC(0xc000298540, 0x1af1b98, 0xc0000a3500, 0xc0003fe400, 0xc0003834d0, 0x1f9bb00, 0x0, 0x0, 0x0)
    	pkg/mod/google.golang.org/[email protected]/server.go:1210 +0x52b
    google.golang.org/grpc.(*Server).handleStream(0xc000298540, 0x1af1b98, 0xc0000a3500, 0xc0003fe400, 0x0)
    	pkg/mod/google.golang.org/[email protected]/server.go:1533 +0xd0c
    google.golang.org/grpc.(*Server).serveStreams.func1.2(0xc000036250, 0xc000298540, 0x1af1b98, 0xc0000a3500, 0xc0003fe400)
    	pkg/mod/google.golang.org/[email protected]/server.go:871 +0xab
    created by google.golang.org/grpc.(*Server).serveStreams.func1
    	pkg/mod/google.golang.org/[email protected]/server.go:869 +0x1fd
    
    Error: The terraform-provider-confluentcloud_0.5.0 plugin crashed!
    
    This is always indicative of a bug within the plugin. It would be immensely
    helpful if you could report the crash with the plugin's maintainers so that it
    can be fixed. The output above should help diagnose the issue.
    
    2022-05-03T12:52:53.342-0400 [DEBUG] provider: plugin exited
    
    

    When I attempt to apply using 0.4.0:

    -----------------------------------------------------
    2022-05-03T12:55:48.265-0400 [DEBUG] [aws-sdk-go] {}
    ╷
    │ Error: 403 Forbidden
    │ 
    │   with confluentcloud_kafka_acl.terraform-cluster-acl-create,
    │   on providers.tf line 22, in resource "confluentcloud_kafka_acl" "terraform-cluster-acl-create":
    │   22: resource "confluentcloud_kafka_acl" "terraform-cluster-acl-create" {
    │ 
    ╵
    
  • Add Kafka API key as a resource

    The provider is missing a resource for creating an API key in the cluster environment. We are migrating from the unofficial provider Mongey/terraform-provider-confluentcloud.

    What I expect as input:

    resource "confluentcloud_api_key" "api_key" {
      cluster_id     = confluentcloud_kafka_cluster.cluster.id
      environment_id = confluentcloud_environment.confluent_environment.id
      description    = "my API KEY for the ${confluentcloud_environment.confluent_environment.display_name} environment"
    }
    

    It would output the key and secret for use as topic credentials:

    resource "confluentcloud_kafka_topic" "orders" {
      kafka_cluster      = confluentcloud_kafka_cluster.basic-cluster.id
       ...
      credentials {
        key    = confluentcloud_api_key.api_key.key
        secret = confluentcloud_api_key.api_key.secret
      }
    }
    
  • Incorrect documentation on service_account_int_id to be of type number

    Incorrect documentation on service_account_int_id to be of type number

    The documentation available on the Confluent Cloud documentation page is incorrect. In the section on how to create the variables.tf file, it specifies that service account IDs are of type number; however, they are actually strings in the following format:

    sa-2f2asd
    

    I have created a PR to correct the docs.
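
    For reference, a variables.tf declaration matching the actual string format could look like this (the variable name mirrors the docs; the description is illustrative):

    variable "service_account_int_id" {
      description = "Service account ID, e.g. sa-2f2asd"
      type        = string
    }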

  • API Key Resource: Automatically creating a cluster and topics

    API Key Resource: Automatically creating a cluster and topics

    It would be super useful to be able to give a service account the rights to access a cluster before creating it.

    I have created a service account and am able to create a cluster with its access key. But, as I understand it, in order to access the newly created cluster and create topics in it, I have to log in to Confluent Cloud, create new API keys for that specific cluster, and use those to create topics.

    Ideally this would be possible using the global access keys, so that you could automatically create a new cluster, create all kinds of topics and configurations in it, run some extensive tests, and delete the cluster afterwards.

  • API Key Resource: How to create a cluster & topic?

    API Key Resource: How to create a cluster & topic?

    I have the following terraform file based on the examples:

    
    variable "env" {
      default = "test"
    }
    
    provider "confluentcloud" {
      api_key    = var.confluentcloud_api_key
      api_secret = var.confluentcloud_api_secret
    }
    
    variable "confluentcloud_api_key" {}
    variable "confluentcloud_api_secret" {}
    
    resource "confluentcloud_environment" "environment" {
      display_name = var.env
    }
    
    resource "confluentcloud_kafka_cluster" "basic-cluster" {
      display_name = var.env
      availability = "SINGLE_ZONE"
      cloud        = "AZURE"
      region       = var.region
    
      basic {
    
      }
    
      environment {
        id = confluentcloud_environment.environment.id
      }
    }
    
    resource "confluentcloud_kafka_topic" "transit-alert-trip-patches" {
      kafka_cluster      = confluentcloud_kafka_cluster.basic-cluster.id
      topic_name         = "transit-alert.trip-patches"
      partitions_count   = 4
      http_endpoint      = confluentcloud_kafka_cluster.basic-cluster.http_endpoint
      config = {
    #    "cleanup.policy"    = "compact"
    #    "max.message.bytes" = "12345"
    #    "retention.ms"      = "67890"
      }
      credentials {
        key    = "<Kafka API Key for confluentcloud_kafka_cluster.basic-cluster>"
        secret = "<Kafka API Secret for confluentcloud_kafka_cluster.basic-cluster>"
      }
    }
    

    What should I put in the confluentcloud_kafka_topic.transit-alert-trip-patches.credentials block? With the values of var.confluentcloud_api_key and var.confluentcloud_api_secret, terraform apply fails. I see no related output from the confluentcloud_kafka_cluster.basic-cluster block.

    Is there a way to:

    • create an environment
    • create the cluster
    • and create the topics in one terraform apply?
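
    If the API key resource requested in the earlier feature request existed, the whole flow could run in a single apply (hypothetical resource and attributes, not part of the current provider):

    # Hypothetical: mirrors the confluentcloud_api_key feature request above.
    resource "confluentcloud_api_key" "cluster_key" {
      cluster_id     = confluentcloud_kafka_cluster.basic-cluster.id
      environment_id = confluentcloud_environment.environment.id
    }

    resource "confluentcloud_kafka_topic" "transit-alert-trip-patches" {
      kafka_cluster    = confluentcloud_kafka_cluster.basic-cluster.id
      topic_name       = "transit-alert.trip-patches"
      partitions_count = 4
      http_endpoint    = confluentcloud_kafka_cluster.basic-cluster.http_endpoint

      credentials {
        key    = confluentcloud_api_key.cluster_key.key
        secret = confluentcloud_api_key.cluster_key.secret
      }
    }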

    Regards,

  • How to create an api key for a service account?

    How to create an api key for a service account?

    It seems we can create a service account by using https://registry.terraform.io/providers/confluentinc/confluentcloud/latest/docs/resources/confluentcloud_service_account. How do we create an API key for that account?

  • Changing topic configuration generates a replace instead of an update

    Changing topic configuration generates a replace instead of an update

    After having created a topic:

    resource "confluentcloud_kafka_topic" "foobar" {
      kafka_cluster    = var.kafka_cluster_id
      topic_name       = "foobar"
      partitions_count = 1
      http_endpoint    = var.kafka_http_endpoint
      config = {
        "retention.ms" = "600000"
      }
      credentials {
        key    = var.kafka_api_key
        secret = var.kafka_api_secret
      }
    }
    

    When changing retention.ms, terraform plan shows it will delete/recreate the topic instead of modifying in place:

    Terraform will perform the following actions:
    
      # confluentcloud_kafka_topic.foobar must be replaced
    -/+ resource "confluentcloud_kafka_topic" "foobar" {
          ~ config           = { # forces replacement
              ~ "retention.ms" = "600000" -> "6000000"
            }
          ~ id               = "<cluster id>/foobar" -> (known after apply)
            # (4 unchanged attributes hidden)
    
            credentials {
              # At least one attribute in this block is (or was) sensitive,
              # so its contents will not be displayed.
            }
        }
    
    Plan: 1 to add, 0 to change, 1 to destroy.
    

    I did not verify, but I suspect it doesn't matter which configuration parameter you change. This is certainly undesirable in most, if not all, cases of changing topic configuration parameters. I also observed that changing partitions_count similarly triggers a replace instead of an update; I could see arguments either way for that one.

    The alternative Terraform provider for managing Kafka topics and ACLs issues an in-place update in the same situations.
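
    Until the provider supports in-place updates, a lifecycle guard can at least turn the silent replace into a hard plan-time error (a workaround sketch; it blocks the destroy rather than fixing the update behavior):

    resource "confluentcloud_kafka_topic" "foobar" {
      kafka_cluster    = var.kafka_cluster_id
      topic_name       = "foobar"
      partitions_count = 1
      http_endpoint    = var.kafka_http_endpoint
      config = {
        "retention.ms" = "600000"
      }
      credentials {
        key    = var.kafka_api_key
        secret = var.kafka_api_secret
      }

      # Abort any plan that would destroy (and thus recreate) this topic.
      lifecycle {
        prevent_destroy = true
      }
    }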
