The simplest testing framework for Kubernetes controllers.

KET (Kind E2e Test framework)

KET is the simplest testing framework for Kubernetes controllers. KET is available as open source software, and we welcome contributions from all engineers.

Introduction

The goal of KET is to help you build what you need to test your Kubernetes controller. It is an open platform that lets developers focus only on the responsibilities of the controller, without worrying about the complexities of running a cluster, building resources, and generating the events that drive the Reconciliation Loop.

KET has the following features:

  • Create a kind cluster
  • Provide build and deploy pipelines using Skaffold
  • Provide the necessary client tools, including client-go and kubectl
  • Reproduce declarative resource state, i.e., kubectl apply -f

KET is composed of these components: kind, kubectl, client-go, and Skaffold.

Example

Setup for e2e testing

If you want to do E2E (end-to-end) testing against your Kubernetes controller, we recommend building the cluster environment using TestMain.

import (
	"context"
	"fmt"
	"os"
	"testing"

	"github.com/riita10069/ket/pkg/setup"
	"k8s.io/apimachinery/pkg/types"
)

func TestMain(m *testing.M) {
	os.Exit(func() int {
		ctx := context.Background()
		ctx, cancel := context.WithCancel(ctx)
		defer cancel()

		// Build the test environment: install the CLI binaries, create the
		// kind cluster, apply the CRDs, and deploy the controller with Skaffold.
		cliSet, err := setup.Start(
			ctx,
			setup.WithBinaryDirectory("./_dev/bin"),
			setup.WithKindClusterName("ket-controller"),
			setup.WithKindVersion("0.11.0"),
			setup.WithKubernetesVersion("1.20.2"),
			setup.WithKubeconfigPath("./.kubeconfig"),
			setup.WithCRDKustomizePath("./manifest/crd"),
			setup.WithUseSkaffold(),
			setup.WithSkaffoldVersion("1.26.1"),
			setup.WithSkaffoldYaml("./manifest/skaffold/skaffold.yaml"),
		)
		if err != nil {
			fmt.Fprintf(os.Stderr, "failed to setup kind, kubectl and skaffold: %s\n", err)
			return 1
		}

		// Wait until the controller's Deployment is created and becomes Ready.
		kubectl := cliSet.Kubectl
		_, err = kubectl.WaitAResource(
			ctx,
			"deploy",
			types.NamespacedName{
				Namespace: "CONTROLLER_NAMESPACE",
				Name:      "CONTROLLER_NAME",
			},
		)
		if err != nil {
			fmt.Fprintf(os.Stderr, "failed to wait resource: %s\n", err)
			return 1
		}

		return m.Run()
	}())
}

The setup.Start() function builds the testing environment.

context.Context

setup.Start will start one or more goroutines. It is desirable to give it a context that is canceled at the end of the test.

WithBinaryDirectory

Saves binaries such as kubectl in the specified directory. By default, ./bin is used.

WithKindClusterName

You can specify the name of the Kind cluster. By default, ket is used.

WithKubeconfigPath

It is possible to change the path of the kubeconfig file. By default, $HOME/.kube/config is used.

Please see below for details. https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/

WithCRDKustomizePath

The CRD resources used by the controller are applied using kustomize.

The path to kustomize.yaml should be given here. If you do not use this option, no CRD resources will be applied. If you don't need a CRD, simply omit this option.

WithUseSkaffold

If this option is not used, the controller will not run on the cluster. If you want to build and deploy with Skaffold, make sure to give this option explicitly. If you want the controller to be built directly using the local Go environment, you do not need this option.

WithSkaffoldYaml

Use this together with WithUseSkaffold(). It specifies the path to skaffold.yaml.

ClientSet

The return value of the setup.Start() function is the ClientSet struct.

type ClientSet struct {
	ClientGo *k8s.ClientGo
	Kubectl  *kubectl.Kubectl
	Kind     *kind.Kind
	Skaffold *skaffold.Skaffold
}

The Start() function returns a ClientSet struct, from which you can use the clients you need in your test logic.
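
For example, in a test case you can pull the clients you need out of the struct (a minimal sketch; it assumes cliSet was saved to a package-level variable by TestMain):

kubectl := cliSet.Kubectl   // kubectl CLI wrapper
clientGo := cliSet.ClientGo // client-go based access
kindCli := cliSet.Kind      // kind cluster operations
skaffold := cliSet.Skaffold // Skaffold build/deploy pipeline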

kubectl

ApplyKustomize, ApplyFile

ApplyKustomize and ApplyFile execute kubectl apply -k and kubectl apply -f, respectively.

Also, ApplyAllManifest will apply all files when you pass their paths as an array. By including this code at the beginning of a test case, declarative resource management using YAML files becomes possible.

kubectl.ApplyAllManifest(ctx, tt.fixture.manifestPaths, false)

To avoid affecting the next case, make sure to delete the created resources at the end of the case as follows:

kubectl.DeleteAllManifest(ctx, tt.fixture.manifestPaths, true)

Also, resources created by other actors, such as the controller, can be explicitly deleted as follows:

kubectl.DeleteResource(ctx, "ket", "ket-namespace", "pod")

The fourth argument gives the type of the resource to be deleted. The resource type must be a string from the following table: https://kubernetes.io/ja/docs/reference/kubectl/_print/#resource-types

WaitAResource

This is a command that waits for a resource to be created. The resource type must be a string from the following table: https://kubernetes.io/ja/docs/reference/kubectl/_print/#resource-types

Also, when the resource is a Pod or a Deployment, it will keep waiting not only until the resource is created but also until its Status is Ready. Please note that ReplicaSet and DaemonSet are not supported yet.
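
For example, to block until a Pod is Ready (a minimal sketch reusing the WaitAResource signature from the TestMain example above; the namespace and pod name are placeholders):

_, err := kubectl.WaitAResource(
	ctx,
	"pod",
	types.NamespacedName{
		Namespace: "ket-namespace",
		Name:      "ket",
	},
)
if err != nil {
	t.Fatalf("failed to wait for pod: %s", err)
}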

Verify using kubectl

It is more versatile to use client-go. However, I felt that there is merit in intuitive operation using kubectl, so I created some methods.

GetNamespacesList

You can get a list of Namespaces that exist in the cluster.

GetResourceNameList

You can get a list of the names of resources in a specific Namespace.
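
A minimal usage sketch for both methods (only the method names come from this document; the exact signatures are assumptions, so check the KET source for the real ones):

// Assumed signatures: both take the context, and GetResourceNameList
// additionally takes a namespace and a resource type.
namespaces, err := kubectl.GetNamespacesList(ctx)
if err != nil {
	t.Fatal(err)
}
podNames, err := kubectl.GetResourceNameList(ctx, "ket-namespace", "pod")
if err != nil {
	t.Fatal(err)
}
fmt.Println(namespaces, podNames)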

Kind

Create Cluster

You can create a kind cluster.

Delete Cluster

Using this method, you can also delete the kind cluster at the end of the test.
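
For example, in TestMain you can tear the cluster down once the tests finish (a minimal sketch; the method name DeleteCluster is an assumption based on the heading above, so check the kind package for the real one):

// DeleteCluster is an assumed method name.
defer func() {
	if err := cliSet.Kind.DeleteCluster(ctx); err != nil {
		fmt.Fprintf(os.Stderr, "failed to delete kind cluster: %s\n", err)
	}
}()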

Self-created commands

With KET, it is extremely simple and quick to turn a command you want to execute into a method.

The API KET provides is still limited. However, you can implement your own commands to perform the operations you want. And I am very much looking forward to your contributions as well.

Suppose you want to use the command kubectl get all --all-namespaces -o=jsonpath='{.items[*].metadata.name}' as a method in your test.

All we need to do to execute a kubectl command is to implement it as a method of the Kubectl struct. It is very easy to provide arguments to the command: we just need to create an array.

You can do this as follows:

func (k *Kubectl) AllResourcesNameList(ctx context.Context) (string, error) {
	// Build the argument array exactly as it would appear on the command line.
	args := []string{"get", "all", "--all-namespaces", "-o=jsonpath='{.items[*].metadata.name}'"}
	stdout, _, err := k.Capture(ctx, args)
	if err != nil {
		return "", err
	}

	return stdout, nil
}

That is all it takes to receive the output. If you do not need to receive the output, use Execute instead of Capture.
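
For example, a method that does not need the output could look like this (a minimal sketch; Execute's exact signature is an assumption modeled on Capture's, and the rollout-restart command is just an illustration):

// Execute's signature is assumed here; check the KET source for the real one.
func (k *Kubectl) RestartDeployment(ctx context.Context, name string) error {
	args := []string{"rollout", "restart", "deploy", name}
	return k.Execute(ctx, args)
}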
