A tool for developing with Kubernetes

supplant

Overview

supplant is a tool for improving the development experience with Kubernetes. The concept is to start with a working cluster running all of your application's deployed services, then supplant (replace) one service with a new selector-less service and an endpoint that points to your local machine. The end result is that, from within the cluster, the service now points to a port on your machine outside the cluster. To let the code you are developing and running outside the cluster reach any dependent services inside the cluster, those services are exposed individually via port forwarding.
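
Under the hood, this relies on two standard Kubernetes objects: a Service with no selector and an Endpoints object whose address is your workstation. The sketch below is roughly what supplant manages for you when it replaces a service; the names, IP, and ports are placeholders, and you do not apply it by hand.

apiVersion: v1
kind: Service
metadata:
  name: my-service          # the supplanted service keeps its original name
spec:
  # no selector, so Kubernetes will not manage the endpoints itself
  ports:
   - protocol: TCP
     port: 80
---
apiVersion: v1
kind: Endpoints
metadata:
  name: my-service          # must match the Service name
subsets:
 - addresses:
    - ip: 192.168.1.100     # your machine's address, reachable from the cluster
   ports:
    - port: 40000           # the port your local replacement listens on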

supplant Diagram

Why?

  • Pushing new images to test a change works, but the code/test cycle is very slow
  • It's very convenient to run all of your existing tooling on your machine, including running the service under a debugger
  • It uses standard K8s port forwarding and endpoints as the implementation, which should hopefully be fairly reliable

Why not?

  • Telepresence is another, more seamless approach to doing this, but it has to use a bit of networking magic to make it happen, and I've had a few reliability issues with it.
  • If your cluster can't reach back to your local machine, this won't work.

Installation

Binaries are available for each release.

For Linux x64 you can install the latest release globally with:

sudo curl -sL https://github.com/tzneal/supplant/releases/latest/download/supplant_linux_x86_64 -o /usr/local/bin/supplant

sudo chmod a+x /usr/local/bin/supplant

Alternatively, you can install directly using Go with:

go install github.com/tzneal/supplant@latest
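
go install places the binary in $GOBIN, or $(go env GOPATH)/bin by default, so make sure that directory is on your PATH:

$ export PATH="$(go env GOPATH)/bin:$PATH"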

Production Use

Please don't run this against a production cluster. It attempts to replace services/endpoints and then restore them upon exit, but this hasn't undergone extensive testing. I use this with kind locally.
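
If you want to experiment the same way, a throwaway kind cluster is enough (the cluster name below is arbitrary):

$ kind create cluster --name supplant-test
$ kubectl cluster-info --context kind-supplant-test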

Sample Usage

We'll launch two deployments and expose them via services listening on ports 80 and 81, respectively.

# launch the first
$ kubectl create deployment hello-1 --image=k8s.gcr.io/echoserver:1.4
$ kubectl expose deployment hello-1 --port 80 --target-port 8080

# launch the second
$ kubectl create deployment hello-2 --image=k8s.gcr.io/echoserver:1.4
$ kubectl expose deployment hello-2 --port 81 --target-port 8080
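
Both services should now exist as ClusterIP services on ports 80 and 81; a quick check with plain kubectl (nothing supplant-specific) confirms it:

$ kubectl get svc hello-1 hello-2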

# generate our config file
$ supplant config create test.yml

The generated test.yml will look something like this, with each of the two services listed under both a supplant section and an external section of the YAML file:

supplant:
 - name: hello-1
   namespace: default
   enabled: false
   ports:
    - protocol: TCP
      port: 80
      localport: 0
 - name: hello-2
   namespace: default
   enabled: false
   ports:
    - protocol: TCP
      port: 81
      localport: 0
external:
 - name: hello-1
   namespace: default
   enabled: false
   ports:
    - protocol: TCP
      targetport: 8080
      localport: 0
 - name: hello-2
   namespace: default
   enabled: false
   ports:
    - protocol: TCP
      targetport: 8080
      localport: 0

We want to replace the hello-1 service, but have our replacement be able to access the hello-2 service, so we set enabled: true for hello-1 in the supplant section and for hello-2 in the external section (see the sketch below for a command-line way to do this). We can then clean our config file, which removes any disabled services from it.
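
Editing test.yml by hand is the simplest way to flip the two enabled flags; if you prefer the command line, something like the following should do the same, assuming mikefarah's yq v4 is installed:

$ yq -i '(.supplant[] | select(.name == "hello-1")).enabled = true' test.yml
$ yq -i '(.external[] | select(.name == "hello-2")).enabled = true' test.yml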

$ ./supplant config clean test.yml

The test.yml now looks like this:

supplant:
 - name: hello-1
   namespace: default
   enabled: true
   ports:
    - protocol: TCP
      port: 80
      localport: 0
external:
 - name: hello-2
   namespace: default
   enabled: true
   ports:
    - protocol: TCP
      targetport: 8080
      localport: 0
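
The localport values can stay at 0, in which case supplant picks free ports at random on each run, or you can pin them so they stay stable across runs; for example (the port numbers here are arbitrary):

supplant:
 - name: hello-1
   namespace: default
   enabled: true
   ports:
    - protocol: TCP
      port: 80
      localport: 8080   # your local replacement must listen on this port
external:
 - name: hello-2
   namespace: default
   enabled: true
   ports:
    - protocol: TCP
      targetport: 8080
      localport: 9081   # supplant forwards 127.0.0.1:9081 to hello-2:8080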

We can now run supplant on this configuration file:

=> connecting to K8s
=> K8s version: v1.21.1
=> updating service hello-1
 - 192.168.1.129:40709 is now the endpoint for hello-1:80
=> forwarding for hello-2
 - 127.0.0.1:43099 points to remote hello-2:8080
forwarding ports, hit Ctrl+C to exit

The log lets us know that, from within our cluster, anything trying to reach the hello-1 service will connect to 192.168.1.129:40709. The port 40709 was chosen at random since the listen port was specified as 0; if a non-zero port were specified, it would be used instead. supplant has also forwarded our local port 43099 to the hello-2 service at hello-2:8080. The listen port there works the same way: specifying a non-zero port in the config file will listen on that port instead of a random open one. We can verify that we have replaced the hello-1 service by trying to reach it from the hello-2 pod, which fails since we haven't started anything listening on port 40709 yet.

$ kubectl exec -it deployment/hello-2 -- curl hello-1:80
curl: (7) Failed to connect to hello-1 port 80: Connection refused
command terminated with exit code 7
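
You can also see the swap with plain kubectl: the hello-1 Endpoints object now lists your machine's address rather than a pod IP (output abridged):

$ kubectl get endpoints hello-1
NAME      ENDPOINTS             AGE
hello-1   192.168.1.129:40709   ...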

If we start a web server locally on port 40709, the connection will work. In a separate shell we start a web server:

$ python3 -m http.server 40709                           
Serving HTTP on 0.0.0.0 port 40709 (http://0.0.0.0:40709/) ...

And then retry the connection to the hello-1 service, which now hits our Python web server.

$ kubectl exec -it deployment/hello-2 -- curl hello-1:80
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<title>Directory listing for /</title>
</head>
...

Lastly, we can verify that the port forward works locally by reaching the hello-2 service. This lets our local service access any resources it needs inside the cluster.

$ curl 127.0.0.1:43099
CLIENT VALUES:
client_address=127.0.0.1
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://127.0.0.1:8080/
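
Since the in-cluster DNS name hello-2 is not resolvable from your workstation, point your local code at the forwarded address instead; for example (the environment variable and binary names are hypothetical, and pinning localport in the config keeps the address stable across runs):

$ HELLO_2_URL=http://127.0.0.1:43099 ./my-replacement-service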