klum - Kubernetes Lazy User Manager

klum does the following basic tasks:

  • Create/Delete/Modify users
  • Easily manage roles associated with users
  • Issue kubeconfig files for users

This is a very simple controller that just creates service accounts under the hood. Properly configured, it should work on any Kubernetes cluster.

Installation

kubectl apply -f https://raw.githubusercontent.com/ibuildthecloud/klum/master/deploy.yaml
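
To verify that the controller started, look for the klum pod. A quick sanity check (the namespace the controller runs in depends on deploy.yaml, so search across all namespaces):

kubectl get pods --all-namespaces | grep klum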

Usage

Create User

kind: User
apiVersion: klum.cattle.io/v1alpha1
metadata:
  name: darren
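
Save the manifest (for example as user.yaml, a name chosen here for illustration) and apply it; klum then creates a backing ServiceAccount in its configured namespace (klum by default, see Configuration below):

kubectl apply -f user.yaml
kubectl get serviceaccounts -n klum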

Download Kubeconfig

kubectl get kubeconfig darren -o json | jq .spec > kubeconfig
kubectl --kubeconfig=kubeconfig get all

The name of the kubeconfig resource will be the same as the user name.
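
You can also list everything klum has issued by querying the custom resource directly:

kubectl get kubeconfigs.klum.cattle.io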

Delete User

kubectl delete user darren

Assign Roles

kind: User
apiVersion: klum.cattle.io/v1alpha1
metadata:
  name: darren
spec:
  clusterRoles:
  - view
  roles:
  - namespace: default
    # you can assign cluster roles in a namespace
    clusterRole: cluster-admin
  - namespace: other
    # or assign a role specific to that namespace
    role: something-custom

If you don't assign any role, the user gets a default role that is configured on the controller. The default value is cluster-admin, so change it if you want a more secure setup.
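
For example, to make the built-in view role the default, set the DEFAULT_CLUSTER_ROLE environment variable on the controller deployment (the deployment name klum is an assumption based on deploy.yaml; add a namespace flag matching wherever it is installed):

kubectl set env deployment/klum DEFAULT_CLUSTER_ROLE=view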

Disable user

kind: User
apiVersion: klum.cattle.io/v1alpha1
metadata:
  name: darren
spec:
  enabled: false

When the user is re-enabled, a new kubeconfig with a new token will be created.
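
Instead of re-applying YAML, you can toggle the flag with a merge patch; a minimal sketch using the darren user from above:

kubectl patch user darren --type=merge -p '{"spec":{"enabled":false}}'
kubectl patch user darren --type=merge -p '{"spec":{"enabled":true}}'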

Configuration

The controller can be configured as follows. You will need to edit the deployment and change the environment variables:

GLOBAL OPTIONS:
   --namespace value             Namespace to create secrets and SAs in (default: "klum") [$NAMESPACE]
   --context-name value          Context name to put in Kubeconfigs (default: "default") [$CONTEXT_NAME]
   --server value                The external server field to put in the Kubeconfigs (default: "https://localhost:6443") [$SERVER_NAME]
   --ca value                    The value of the CA data to put in the Kubeconfig [$CA]
   --default-cluster-role value  Default cluster-role to assign to users with no roles (default: "cluster-admin") [$DEFAULT_CLUSTER_ROLE]
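
As a sketch, the env section of the controller container could look like this (container name and values are assumptions; set only the variables you need):

    spec:
      containers:
      - name: klum
        env:
        - name: SERVER_NAME
          value: "https://k8s.example.com:6443"
        - name: CONTEXT_NAME
          value: "my-cluster"
        - name: DEFAULT_CLUSTER_ROLE
          value: "view"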

Building

make or just go build
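
A minimal local build sequence (the bin/klum output path is assumed from the Running section below):

git clone https://github.com/ibuildthecloud/klum.git
cd klum
make            # or: go build -o bin/klum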

Running

./bin/klum --kubeconfig=${HOME}/.kube/config

License

Copyright (c) 2020 Rancher Labs, Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Comments
  • Update dependencies to make it compatible with kubernetes 1.22

    The current dependencies used in the project are not compatible with Kubernetes 1.22. This PR updates those dependencies and vendors them. A special replacement was needed (github.com/rancher/wrangler-api => github.com/dylanhitt/wrangler-api v0.7.0), but it can be removed once https://github.com/rancher/wrangler-api/pull/22 is merged. This PR fixes #4 and was tested with Minikube.

  • Support for k8s 1.22

    I updated a cluster to version 1.22 and found that klum is no longer able to find resources (it was working up to 1.21):

    time="2021-11-12T17:22:54Z" level=info msg="Starting klum controller"
    time="2021-11-12T17:22:54Z" level=fatal msg="the server could not find the requested resource"
    

    Is there any roadmap to support newer Kubernetes versions, or is this project no longer under active development?

  • Templatable contextname

    Use-case

    When I create multiple users and save the kubeconfigs as documented:

    kubectl get kubeconfig darren -o json | jq .spec > kubeconfig-darren
    kubectl get kubeconfig lalyos -o json | jq .spec > kubeconfig-lalyos
    

    It is usually easier to switch between contexts than between KUBECONFIG files, so I merge them together as:

    export KUBECONFIG=$KUBECONFIG:./kubeconfig-darren:./kubeconfig-lalyos
    

    But because klum uses a fixed contextName, this doesn't work.

    Proposed solution

    Instead of a fixed context-name string, let's make it a Go template. For example:

    CONTEXT_NAME=workshop-{{ .UserName }}
    

    With the above, the merged KUBECONFIG has distinct contexts:

    k config get-contexts 
    CURRENT   NAME              CLUSTER           AUTHINFO          NAMESPACE
    *         boss              singli            singli            klum
              workshop-darren   workshop-darren   workshop-darren     
              workshop-lalyos   workshop-lalyos   workshop-lalyos
    

    Why send the PR?

    I've seen on Docker Hub that the image was updated "4 months ago" and it has 8k pulls, so why not?

  • get kubeconfig is not found

    Hello,

    This project is exactly what I was looking for, but I can't get it to work. I basically followed the first few steps and everything was fine.

    kubectl apply -f https://raw.githubusercontent.com/ibuildthecloud/klum/master/deploy.yaml

    Create the file below:

    kind: User
    apiVersion: klum.cattle.io/v1alpha1
    metadata:
      name: chris
    

    Apply with: kubectl apply -f create_user_chris.yml

    Then get the kubeconfig.

    kubectl get kubeconfig chris -o json | jq .spec > kubeconfig

    Error from server (NotFound): kubeconfigs.klum.cattle.io "chris" not found

  • Deleting a user does not revoke its access

    When deleting a user, only the user object gets deleted. No other object is deleted by klum, which means the user still has access to the cluster with the same set of permissions. I have tested this in both EKS and Minikube.

  • [Enhancement] allow specifying multiple roles and clusterRoles per namespace

    Allows specifying lists of roles/clusterRoles per namespaceRole, like:

    kind: User
    apiVersion: klum.cattle.io/v1alpha1
    metadata:
      name: iwilltry42
    spec:
      clusterRoles:
        - view
      roles:
        - namespace: iwilltry42
          clusterRoles:
            - view
            - edit
    
  • Fails on EKS

    Tried to deploy it on EKS and it fails.

    My environment:

    $ k version
    Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-01-23T14:21:36Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"darwin/amd64"}
    Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.9-eks-c0eccc", GitCommit:"c0eccca51d7500bb03b2f163dd8d534ffeb2f7a2", GitTreeState:"clean", BuildDate:"2019-12-22T23:14:11Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
    

    No pod comes up, due to:

    $  kubectl describe rs klum-799bb95cd7
    ...
       True    FailedCreate
    Events:
      Type     Reason        Age               From                   Message
      ----     ------        ----              ----                   -------
      Warning  FailedCreate  2s (x4 over 29s)  replicaset-controller  Error creating: No API token found for service account "klum", retry after the token is automatically created and added to the service account
    

    Great stuff, can't wait to use it. Thanks and KUTGW!
