A Terraform module to manage cluster authentication (aws-auth) for an Elastic Kubernetes Service (EKS) cluster on AWS.

Archive Notice

The terraform-aws-modules/eks/aws v18.20.0 release has brought back support for the aws-auth configmap! For this reason, I highly encourage users to manage the aws-auth configmap with the EKS module.

I am planning to archive this repo on May 1st, 2022. You are welcome to open an issue here if you are having trouble with the migration steps below, and I will do my best to help.

Migration steps

  1. Remove the aidanmelen/eks-auth/aws declaration from your terraform code.
  2. Remove the aidanmelen/eks-auth/aws resources from terraform state.
  • The aws-auth configmap should still exist on the cluster but will no longer be managed by this module.
  • A plan should show that there are no infrastructure changes to the EKS cluster.
  3. Upgrade the version of the EKS module: version = ">= 18.20.0".
  4. Configure terraform-aws-modules/eks/aws with manage_aws_auth_configmap = true (see the sketch after these steps). This version of the EKS module uses the new kubernetes_config_map_v1_data resource to patch the aws-auth configmap data (just like the v1.0.0 release of this module).
  5. Plan and apply.
  • The aws-auth configmap should now be managed by the EKS module.
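
For step 4, a minimal sketch of the relevant settings, reusing the example role mapping from the Usage section below; the rest of the cluster configuration is omitted:

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = ">= 18.20.0"

  # ... existing cluster configuration ...

  # let the EKS module manage the aws-auth configmap directly
  manage_aws_auth_configmap = true

  aws_auth_roles = [
    {
      rolearn  = "arn:aws:iam::66666666666:role/role1"
      username = "role1"
      groups   = ["system:masters"]
    },
  ]
}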

Please see the complete example for more information.



terraform-aws-eks-auth

A Terraform module to manage cluster authentication for an Elastic Kubernetes Service (EKS) cluster on AWS.

Assumptions

Usage

Grant access to the AWS EKS cluster by adding map_roles, map_users, or map_accounts to the aws-auth configmap.

module "eks" {
  source = "terraform-aws-modules/eks/aws"
  # insert the 15 required variables here
}

module "eks_auth" {
  source = "aidanmelen/eks-auth/aws"
  eks    = module.eks

  map_roles = [
    {
      rolearn  = "arn:aws:iam::66666666666:role/role1"
      username = "role1"
      groups   = ["system:masters"]
    },
  ]

  map_users = [
    {
      userarn  = "arn:aws:iam::66666666666:user/user1"
      username = "user1"
      groups   = ["system:masters"]
    },
    {
      userarn  = "arn:aws:iam::66666666666:user/user2"
      username = "user2"
      groups   = ["system:masters"]
    },
  ]

  map_accounts = [
    "777777777777",
    "888888888888",
  ]
}

Please see the complete example for more information.

Requirements

Name Version
terraform >= 0.14.8
http >= 2.4.1
kubernetes >= 2.10.0

Providers

Name Version
http >= 2.4.1
kubernetes >= 2.10.0

Modules

No modules.

Resources

Name Type
kubernetes_config_map_v1.aws_auth resource
kubernetes_config_map_v1_data.aws_auth resource
http_http.wait_for_cluster data source

Inputs

Name Description Type Default Required
eks The outputs from the terraform-aws-modules/terraform-aws-eks module. any n/a yes
map_accounts Additional AWS account numbers to add to the aws-auth configmap. list(string) [] no
map_roles Additional IAM roles to add to the aws-auth configmap. list(object({ rolearn = string, username = string, groups = list(string) })) [] no
map_users Additional IAM users to add to the aws-auth configmap. list(object({ userarn = string, username = string, groups = list(string) })) [] no
wait_for_cluster_timeout A timeout (in seconds) to wait for the cluster to be available. number 300 no

Outputs

Name Description
aws_auth_configmap_yaml Formatted yaml output for aws-auth configmap.
map_accounts The aws-auth map accounts.
map_roles The aws-auth map roles merged with the eks managed node group, self managed node groups and fargate profile roles.
map_users The aws-auth map users.
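
For example, the rendered configmap can be surfaced from a root module like this (assuming the module block is named eks_auth as in the Usage section):

output "aws_auth_configmap_yaml" {
  description = "The rendered aws-auth configmap."
  value       = module.eks_auth.aws_auth_configmap_yaml
}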

License

Apache 2 Licensed. See LICENSE for full details.

Comments
  • Failed to initialize `aws-auth` configmap with only self managed node group

    When the eks module uses only self managed node groups, no aws-auth configmap exists (only EKS managed node groups create it automatically).

    Nodes cannot join the cluster when no aws-auth configmap exists, so the job never runs because no nodes are connected to the cluster (and the job itself cannot run while aws-auth is missing).

    A solution would be to add an option that creates the aws-auth configmap before the job executes; a rough sketch follows.
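
    A hedged sketch of what such an option could look like; the create_aws_auth_configmap variable and its wiring are hypothetical, not something the module is documented to provide:

    variable "create_aws_auth_configmap" {
      description = "Create the aws-auth configmap for clusters with only self managed node groups."
      type        = bool
      default     = false
    }

    resource "kubernetes_config_map_v1" "aws_auth" {
      count = var.create_aws_auth_configmap ? 1 : 0

      metadata {
        name      = "aws-auth"
        namespace = "kube-system"
      }

      lifecycle {
        # the data is patched separately, so ignore drift on it here
        ignore_changes = [data]
      }
    }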

  • The config map "aws-auth" does not exist

    Describe the Bug

    Getting Error: The config map "aws-auth" does not exist

    with module.eks.module.eks_auth.kubernetes_config_map_v1_data.aws_auth[0]
    on .terraform/modules/eks.eks_auth/main.tf line 42, in resource "kubernetes_config_map_v1_data" "aws_auth":
    resource "kubernetes_config_map_v1_data" "aws_auth" {
    

    Note: ConfigMap is present in my cluster

    Expected Behavior

    It would update the config map.

    Steps to Reproduce the Problem

    1. Apply the module with the following code
    module "eks_auth" {
      source    = "aidanmelen/eks-auth/aws"
      eks       = module.eks
      map_roles = var.map_roles
    }
    

    How to get around this issue?

    Downgrade to 0.9.0; a pinned-version sketch follows.
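
    A sketch of that workaround, pinning the module to the release mentioned above:

    module "eks_auth" {
      source    = "aidanmelen/eks-auth/aws"
      version   = "0.9.0"
      eks       = module.eks
      map_roles = var.map_roles
    }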

  • Does not update the aws-auth for newly created cluster

    I am trying to create a new EKS cluster using the official terraform-aws-eks module, and to ease the pain of creating the new aws-auth configmap I was trying to use this module. But since I have multiple clusters in my kube-config, this module updates the configmap of my current-context and not the newly created EKS cluster. Is this the expected behavior?
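
    A possible fix, sketched under the assumption that the root module configures the kubernetes provider itself: point the provider at the new cluster explicitly instead of relying on the current kube-config context (data source names are illustrative):

    data "aws_eks_cluster" "this" {
      name = module.eks.cluster_id
    }

    data "aws_eks_cluster_auth" "this" {
      name = module.eks.cluster_id
    }

    provider "kubernetes" {
      host                   = data.aws_eks_cluster.this.endpoint
      cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
      token                  = data.aws_eks_cluster_auth.this.token
    }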

  • Possibility of keeping this alive?

    While version v18.20.0 of the terraform-aws-modules/eks/aws module brings back management of the aws-auth configmap, it makes use of a local exec call to retrieve the auth token for its internal kubernetes provider, and this prevents its use on Terraform Cloud. I was wondering if you'd be willing to keep this project alive, as it definitely helps folks who can't use the new EKS module feature. Thanks

  • How to use module with kubernetes provider alias?

    Describe the Bug

    When applying with what I think is the correct configuration:

    module "sandbox_cluster" {
      source          = "terraform-aws-modules/eks/aws"
      version         = "18.11.0"
      cluster_name    = "sandbox"
      cluster_version = "1.21"
    
      vpc_id = module.development_vpc.vpc_id
    
      eks_managed_node_groups = {
        sandbox = {
          min_size     = 1
          max_size     = 5
          desired_size = 1
    
          security_group_name = "sandbox_worker_sg"
    
          instance_types = ["t3.large"]
          capacity_type  = "SPOT"
          labels = {
            Environment = "development"
          }
          taints = {
          }
        }
      }
    }
    
    data "aws_eks_cluster" "sandbox_cluster" {
      name = module.sandbox_cluster.cluster_id
    }
    
    data "aws_eks_cluster_auth" "sandbox_cluster_auth" {
      name = module.sandbox_cluster.cluster_id
    }
    
    provider "kubernetes" {
      alias                  = "sandbox_kubernetes"
      host                   = data.aws_eks_cluster.sandbox_cluster.endpoint
      cluster_ca_certificate = base64decode(data.aws_eks_cluster.sandbox_cluster.certificate_authority[0].data)
      token                  = data.aws_eks_cluster_auth.sandbox_cluster_auth.token
    }
    
    module "eks_auth" {
      source = "aidanmelen/eks-auth/aws"
      eks    = module.sandbox_cluster
    
      map_roles = [
        {
          rolearn  = var.kubectl_access_role_arn
          username = "kubernetes_master"
          groups   = ["system:masters"]
        }
      ]
    }
    

    Returns the following error:

    │ Error: Post "http://localhost/api/v1/namespaces/kube-system/serviceaccounts": dial tcp [::1]:80: connect: connection refused
    │ 
    │   with module.development.module.eks_auth.kubernetes_service_account_v1.aws_auth_init,
    │   on .terraform/modules/development.eks_auth/main.tf line 47, in resource "kubernetes_service_account_v1" "aws_auth_init":
    │   47: resource "kubernetes_service_account_v1" "aws_auth_init" {
    │ 
    

    Expected Behavior

    For the aws_auth configmap to change

    Actual Behavior

    Fails at this point. I tried adding configuration for the kubernetes provider in an attempt to get this working, so it's possible there's something in state that's preventing this from working, or something to do with networking, since the error never shows an actual IP that it's trying to connect to.

    Steps to Reproduce the Problem

    1. Setup a .tf file with the above tf code
    2. run terraform init and terraform apply.

    You'll need to create a variable for the arn that you're trying to add into the configmap.
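
    One thing worth checking, assuming the configuration above is otherwise correct: an aliased provider is not inherited automatically, so the module would need it passed in explicitly, roughly like this:

    module "eks_auth" {
      source = "aidanmelen/eks-auth/aws"
      eks    = module.sandbox_cluster

      # hand the aliased provider to the module instead of the (unconfigured) default one
      providers = {
        kubernetes = kubernetes.sandbox_kubernetes
      }

      map_roles = [
        {
          rolearn  = var.kubectl_access_role_arn
          username = "kubernetes_master"
          groups   = ["system:masters"]
        }
      ]
    }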

  • bug: `data.aws_eks_cluster_auth.cluster.token` expires before completion

    Describe the Bug

    The data.aws_eks_cluster_auth.cluster.token can expire, causing the kubernetes provider to time out and fail.


    Expected Behavior

    The kubernetes job always runs to completion before the data.aws_eks_cluster_auth.cluster.token expires.

    Actual Behavior

    Sometimes the token expires and the terraform apply fails.

    Steps to Reproduce the Problem

    1. Run the complete example.
    2. Most of the time it runs to completion before the token expires.
    3. Sometimes it doesn't and the apply fails.
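
    A possible workaround, assuming the aws CLI is available where terraform runs: configure the kubernetes provider with an exec plugin so a fresh token is fetched whenever the provider needs one, instead of a single token from the data source:

    provider "kubernetes" {
      host                   = module.eks.cluster_endpoint
      cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

      exec {
        api_version = "client.authentication.k8s.io/v1beta1"
        command     = "aws"
        args        = ["eks", "get-token", "--cluster-name", module.eks.cluster_id]
      }
    }
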
  • Allow configurable CA certificate

    Fixes

    https://github.com/aidanmelen/terraform-aws-eks-auth/issues/17

    Proposed Changes

    • Allow the user to specify a custom CA certificate, but fall back to the default if not provided; a rough sketch follows.
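
    A rough sketch of that fallback; the cluster_ca_certificate variable name is hypothetical:

    variable "cluster_ca_certificate" {
      description = "Optional PEM-encoded CA certificate used to verify the cluster endpoint."
      type        = string
      default     = null
    }

    locals {
      # fall back to the CA reported by the EKS module when no override is given
      cluster_ca_certificate = coalesce(
        var.cluster_ca_certificate,
        base64decode(var.eks.cluster_certificate_authority_data)
      )
    }
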
  • Unauthorized Error when executed in same execution as EKS cluster creation

    Describe the Bug

    Thanks so much for this project! I've been following your git comments on the EKS terraform module repo and really appreciate you pulling this out.

    Currently, I am unable to perform this operation in the same terraform execution where the EKS cluster is created. Are you expecting this module to only perform updates to already existing clusters, or do I just have a bug? Hashicorp notes here that interpolation can cause issues: https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs#stacking-with-managed-kubernetes-cluster-resources

    Expected Behavior

    1. create an EKS cluster
    2. Use this module to modify config map
    3. cluster works with config map changes

    Actual Behavior

    1. EKS cluster is created
    2. this module begins to execute
    3. Receive "Unauthorized!" error
    4. Run again, and execution succeeds

    Steps to Reproduce the Problem

    I'm using a setup that is identical to the examples here.

  • Migration path with custom RBAC groups

    You suggest that users should migrate to version 18.20 of the official EKS module which supports handling the configmap directly in the cluster creation.

    However, I ran into a problem. I use your other module, terraform-kubernetes-rbac, to set up all of the RBAC cluster roles and permissions, and then I use the terraform-aws-eks-auth module to map the AWS users to those roles. This is all after initial cluster creation.

    The EKS module does not have the ability to create RBAC groups, so would I use the new aws_auth_roles feature to map to groups that do not exist? Then apply the RBAC module?
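
    For reference, the group names in aws-auth are plain strings and do not need to exist before they are referenced; the RBAC bindings can be applied before or after the mapping. A hedged sketch, with illustrative names, of mapping a role to a custom group via the EKS module and then binding that group with plain kubernetes resources:

    module "eks" {
      source  = "terraform-aws-modules/eks/aws"
      version = ">= 18.20.0"

      # ... cluster configuration ...

      manage_aws_auth_configmap = true

      aws_auth_roles = [
        {
          rolearn  = "arn:aws:iam::111122223333:role/developer"
          username = "developer"
          groups   = ["my-app:developers"]
        },
      ]
    }

    resource "kubernetes_cluster_role_binding_v1" "developers" {
      metadata {
        name = "my-app-developers"
      }

      role_ref {
        api_group = "rbac.authorization.k8s.io"
        kind      = "ClusterRole"
        name      = "view"
      }

      subject {
        api_group = "rbac.authorization.k8s.io"
        kind      = "Group"
        name      = "my-app:developers"
      }
    }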

  • Add ability to provide certificate authority data

    https://github.com/aidanmelen/terraform-aws-eks-auth/blob/9b2468d731e6b580a42c892ac9942b996b3b7499/main.tf#L18

    It would be useful to be able to provide a custom CA for those of us on a corporate network that uses a man-in-the-middle proxy, which presents a different certificate to the client.

  • replace `kubernetes_job` with `kubectl_manifest`

    Fixes

    Proposed Changes

    • Replace kubernetes_job with kubectl_manifest. This simplifies the module quite a bit: the provider handles create-or-patch functionality in a single resource. What's more, since the provider uses the k8s golang libraries, this solution also runs remote operations in Terraform Cloud or in CI/CD.
    • overhauled documentation/examples with .terraform-docs templates.
    • added terraform-aws-modules/http to ensure the cluster is ACTIVE before the kubectl_manifest runs.
    • created test/mock for hacky rapid testing.

    Upgrade Notes

    You will see the kubernetes_job get replaced with the kubectl_manifest when upgrading from v0.8.3. The apply will automatically recreate the aws-auth configmap.
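
    For context, a rough sketch of the kubectl_manifest approach, assuming the gavinbunney/kubectl provider; the module itself renders the full configmap from its inputs rather than hard-coding it like this:

    resource "kubectl_manifest" "aws_auth" {
      # create-or-patch the aws-auth configmap in a single resource
      yaml_body = <<-YAML
        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: aws-auth
          namespace: kube-system
        data:
          mapRoles: |
            - rolearn: arn:aws:iam::66666666666:role/role1
              username: role1
              groups:
                - system:masters
      YAML
    }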
