
Terraform Operator


The Terraform Operator provides support to run Terraform modules in Kubernetes in a declarative way as a Kubernetes manifest.

This project makes running a Terraform module Kubernetes-native through a single Kubernetes CRD. You can apply the manifest with kubectl, Terraform, GitOps tools, and more.

Disclaimer

This project is not a YAML-to-HCL converter. It simply provides a way to run Terraform commands through a Kubernetes CRD. To see how this controller works, have a look at the design doc.

Installation

Helm

  helm repo add kube-champ https://kube-champ.github.io/helm-charts
  helm install terraform-operator kube-champ/terraform-operator

The chart can be found here

Kubectl

  kubectl apply -k https://github.com/kube-champ/terraform-operator/config/crd 
  kubectl apply -k https://github.com/kube-champ/terraform-operator/config/manifest

Documentation

For documentation, check our page here

Usage

For more examples of how to use this CRD, check the samples.

apiVersion: run.terraform-operator.io/v1alpha1
kind: Terraform
metadata:
  name: first-module
spec:
  terraformVersion: 1.0.2

  module:
    source: IbraheemAlSaady/test/module
    ## optional module version
    version:

  ## a terraform workspace to select
  workspace:

  ## a custom terraform backend
  backend: |
    backend "local" {
      path = "/tmp/tfmodule/mytfstate.tfstate"
    }

  ## a custom providers config
  providersConfig:

  ## a list of terraform variables to be provided
  variables:
    - key: length
      value: "16"
    
    - key: AWS_ACCESS_KEY
      valueFrom:
        ## can be configMapKeyRef as well
        secretKeyRef:
          name: aws-credentials
          key: AWS_ACCESS_KEY
      environmentVariable: true

  ## files with ext '.tfvars' or '.tf' that will be mounted into the terraform runner job 
  ## to be passed to terraform as '-var-file'
  variableFiles:
    - key: terraform-env-config
      valueFrom:
        ## can also be 'secret'
        configMap:
          name: "terraform-env-config"

  dependsOn:
    - name: run-base
      ## if it's in another namespace
      namespace:
  
  ## ssh key from a secret to allow pulling modules from private git repos
  gitSSHKey:
    valueFrom:
      ....

  ## outputs defined will be stored in a Kubernetes secret
  outputs:
      ## The Kubernetes Secret key
    - key: my_new_output_name
      ## the output name from the module
      moduleOutputName: result

  ## a flag to run a terraform destroy
  destroy: false

  ## a flag to delete the job after the job is completed
  deleteCompletedJobs: false

  ## number of retries in case of run failure
  retryLimit: 2
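To pull these fields together, here is a minimal manifest using only fields from the annotated spec above (the module source is the one used in the samples); exact output names depend on your module, so treat this as a sketch:

```yaml
apiVersion: run.terraform-operator.io/v1alpha1
kind: Terraform
metadata:
  name: minimal-module
spec:
  terraformVersion: 1.0.2

  module:
    source: IbraheemAlSaady/test/module

  variables:
    - key: length
      value: "16"

  ## the module output 'result' will be stored in a Kubernetes secret
  outputs:
    - key: result
      moduleOutputName: result
```

Applying this with kubectl starts a runner job, and once the run succeeds the declared outputs are written to a Kubernetes secret.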

Roadmap

Check the Terraform Operator Project to see what's on the roadmap.

Contributing

If you find this project useful, help us:

  • Support the development of this project and star this repo!
  • Help new users with issues they may encounter 💪
  • Send a pull request with your new features and bug fixes 🚀

For instructions about setting up your environment to develop and extend the operator, please see contributing.md

Owner

kube-champ
Comments
  • Consul and Terraform

    Consul and Terraform

    I like the concept of terraform operator. I had the following questions:

    • How can Terraform modules pick up their configuration values from Consul when used via the operator?

    Similar to the example defined here

    https://github.com/ned1313/terraform-tuesdays/tree/017f5bd976f91e33513ed2c885866d6f654a29f1/2020-06-02-ConsulDataSource

    • Also, I am using Vault and Terraform. How can I use a Vault token in the CRD to contact Vault via Terraform?
  • Add support for loading terraform data sources

    Add support for loading terraform data sources

    Looking into the possibility of PoC'ing this Terraform operator, I noticed that there's no way to load additional Terraform context beyond the module.

    I have a situation where there are modules available for use, but these require references to existing AWS shared resources loaded via Terraform data sources.

    Example:

    data "aws_sns_topic" "alerts" {
      name = "slack-feed-product_alerts"
    }
    
    data "aws_security_group" "rds_default" {
      tags = {
        Name = "rds-shared-security-group"
      }
    }
    
    module "db" {
      source         = "[email protected]/terraform-modules.git//aws/postgres"
    
      vpc_security_group_ids              = [data.aws_security_group.rds_default.id]
      alerts_sns_topic_arn                = data.aws_sns_topic.alerts.arn
    }
    

    Describe the solution you'd like Provide an alternative field to inject custom Terraform HCL code to be able to overcome such situations. Not sure if there's a better way to handle this, as these data sources are specific to the resources, but an HCL field of sorts would allow us to cater to it.

    Describe alternatives you've considered Looking at the source code for the operator, we can probably use the providersConfig field as a workaround to inject any Terraform configuration, and it would likely work (as it is parsed directly into the template), but it is not intuitive and can be a bit confusing given the name.

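    The proposal could be sketched as a new spec field; the field name additionalBlocks below is invented purely for illustration and does not exist in the operator:

```yaml
apiVersion: run.terraform-operator.io/v1alpha1
kind: Terraform
metadata:
  name: db-module
spec:
  module:
    source: terraform-modules.git//aws/postgres

  ## hypothetical field: raw HCL appended to the generated module file,
  ## letting data sources be declared alongside the module call
  additionalBlocks: |
    data "aws_sns_topic" "alerts" {
      name = "slack-feed-product_alerts"
    }
```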

  • older kubernetes config maps & secrets are not deleted

    older kubernetes config maps & secrets are not deleted

    Terraform objects own resources such as ConfigMaps and Secrets. With each run, a new Secret/ConfigMap is created. A proper cleanup needs to be in place.

    Maybe consider keeping only the last run/workflow

  • rolebinding is not created

    rolebinding is not created

    The role binding is not created if the service account already exists.

    To Reproduce Create a Terraform object in a non-default namespace. There is no error; a condition check causes this bug and suppresses the error. The code is here

    Expected behavior A role binding and a service account should be created if they don't already exist.

    Versions

    • Operator: 0.1.1
    • Runner: 0.4.0
  • default backend is missing namespace field

    default backend is missing namespace field

    When no backend is specified, the Terraform operator adds the Kubernetes backend by default. However, the namespace field is missing from the generated configuration.

    To Reproduce Create a Terraform object in any namespace other than default. Due to the missing namespace config, the terraform-runner tries to list secrets in the default namespace, and the runner pod fails because it lacks the RBAC permission to list secrets there.

    Expected behavior The backend secret should be created in the namespace where the CR was created.

    Versions

    • Operator: 0.1.2
    • Runner: 0.4.0
  • Move to Go 1.18 & Kubebuilder upgrade

    Move to Go 1.18 & Kubebuilder upgrade

    Given Go 1.18 has been out for some time now and seems to be stable except for some minor performance issues with the newly introduced generics, we should start moving to it.

  • pass requeue intervals for jobs watch and dependencies

    pass requeue intervals for jobs watch and dependencies

    It would be nice to control the requeue intervals for the job watch and dependency checks.

    Implementation We can pass these values as flags when running the operator, using flag.DurationVar with default values.

  • deleting an object log does not include the object name being deleted

    deleting an object log does not include the object name being deleted

    When a Terraform kind is being deleted, the following line is logged in the controller

    {"level":"info","ts":1657826774.588966,"logger":"controllers.TerraformController","msg":"Terraform run is being deleted"}
    

    Expected behavior It's not clear from the logs which object is being deleted; the object name needs to be added to the log line.

    Versions

    • Operator: 0.1.1
  • Test error cases and increase coverage

    Test error cases and increase coverage

    Proper testing for error handling on client sets needs to be added.

    We can leverage the Fake library, which is already being used. Here is an example of simulating a failure in creating a service account

    import (
      "errors"

      v1 "k8s.io/api/core/v1"
      "k8s.io/apimachinery/pkg/runtime"
      fakecorev1 "k8s.io/client-go/kubernetes/typed/core/v1/fake"
      testing "k8s.io/client-go/testing"
    )
    
    ....
    
    kube.ClientSet.CoreV1().(*fakecorev1.FakeCoreV1).PrependReactor("create", "serviceaccounts", func(action testing.Action) (handled bool, ret runtime.Object, err error) {
      return true, &v1.ServiceAccount{}, errors.New("Error creating service account")
    })
    

    Unit tests need to be adjusted to create additional workflows with errors.

  • refactor terraform client and dependency querying

    refactor terraform client and dependency querying

    • Remove terraform client implementing the Kube interface and rely on the reconciler to query objects for dependency
    • Drop the Spec suffix from Go struct definitions
    • Move pkg/kube & pkg/utils to /internal
  • Terraform stuck in running state when using dependencyRef variable on non existing key

    Terraform stuck in running state when using dependencyRef variable on non existing key

    If a variable has a dependencyRef pointing at a key that does not exist, the pod fails to mount the secret and gets stuck in the CreateContainerConfigError state with the error couldn't find key number in Secret ....

    To Reproduce The following will produce the issue

    apiVersion: run.terraform-operator.io/v1alpha1
    kind: Terraform
    metadata:
      name: terraform-run1
    spec:
      terraformVersion: 1.1.7
    
      module:
        source: IbraheemAlSaady/test/module
        version: 0.0.3
    
      variables:
        - key: length
          value: "4"
    
      outputs:
        - key: result
          moduleOutputName: result
    ---
    apiVersion: run.terraform-operator.io/v1alpha1
    kind: Terraform
    metadata:
      name: terraform-run2
    spec:
      terraformVersion: 1.1.7
    
      module:
        source: IbraheemAlSaady/test/module
        version: 0.0.3
    
      dependsOn:
        - name: terraform-run1
    
      variables:
        - key: length
          dependencyRef:
            name: terraform-run1
            key: invalid-key
    
    

    Expected behavior If the pod hits an error, the Terraform status needs to be updated to Failed.

    Current evaluations only happen on the job, and the job is not aware that the pod is in that state. We probably need to validate the pod status as well.

    Possibly we need to do the pod evaluation when the job is in a running state

    Versions

    • Operator: 0.1.1
    • Runner: 0.4.0
  • Terraform runner container is running with root privilege

    Terraform runner container is running with root privilege

    The terraform-runner Dockerfile does not have a user set. Check here

    The reason for that decision came when support for private SSH keys was introduced, as there were issues adding the SSH key with the ssh-agent command. (Ref)

    We need to investigate how to make this work while the container runs as a non-root user.

    Versions

    • Runner: 0.4.0
  • Move to Ginkgo V2

    Move to Ginkgo V2

    When the terraform operator was scaffolded with Kubebuilder, it was set up with Ginkgo V1.

    Ginkgo V2 has been out for some time. There is an open issue here on Kubebuilder's repo to upgrade to V2 #2532

  • Terraform runner is logging an error on `/tmp/tfvars` no such file or directory

    Terraform runner is logging an error on `/tmp/tfvars` no such file or directory

    The terraform runner pod is logging the following

    time="2022-07-14T19:19:56Z" level=error msg="failed to list files in the var files path" error="open /tmp/tfvars: no such file or directory"
    

    To Reproduce Apply any Terraform kind

    Expected behavior Check if the directory exists, and log a warning message instead of an error.

    Versions

    • Operator: 0.1.1
    • Runner: 0.4.0
  • run scheduled workflows

    run scheduled workflows

    Describe the solution you'd like A way to schedule Terraform workflows and run either plan or apply

    Additional context The configuration could look something like this

    apiVersion: run.terraform-operator.io/v1alpha1
    kind: ScheduledWorkflow
    metadata:
      name: terraform-aws-s3
    spec:
      schedule: 0 * * * *
      withApply: true
      terraformRef:
        name: terraform-aws-s3
    

    By default, it will schedule a run to execute a plan; the withApply flag can be set to run terraform apply as well.
