Kubernetes IN Docker - local clusters for testing Kubernetes

kind

Please see Our Documentation for more in-depth installation instructions and usage.

kind is a tool for running local Kubernetes clusters using Docker container "nodes". kind was primarily designed for testing Kubernetes itself, but may be used for local development or CI.

If you have go (1.11+) and docker installed, GO111MODULE="on" go get sigs.k8s.io/kind@v0.11.1 && kind create cluster is all you need!

kind consists of:

  • Go packages implementing cluster creation, image build, etc.
  • a command line interface (kind) built on these packages
  • Docker image(s) written to run systemd, Kubernetes, etc.
  • kubetest integration also built on these packages (WIP)

kind bootstraps each "node" with kubeadm. For more details see the design documentation.

NOTE: kind is still a work in progress, see the 1.0 roadmap.

Installation and usage

For a complete install guide see the documentation here.

You can install kind with GO111MODULE="on" go get sigs.k8s.io/kind@v0.11.1.

NOTE: please use the latest go to do this, ideally go 1.13 or greater.

NOTE: go get should not be run from a Go modules enabled project directory, as go get inside a modules-enabled project updates dependencies / behaves differently. Try, for example, cd $HOME first.

This will put kind in $(go env GOPATH)/bin. If you encounter the error kind: command not found after installation, then you may need to either add that directory to your $PATH as shown here, or do a manual installation by cloning the repo and running make build from the repository.
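
For example, assuming a bash shell (a sketch; adjust the profile file for your shell):

export PATH="$(go env GOPATH)/bin:$PATH"                       # current session
echo 'export PATH="$(go env GOPATH)/bin:$PATH"' >> ~/.bashrc   # future sessions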

Without installing go, kind can be built reproducibly with docker using make build.

Stable binaries are also available on the releases page. Stable releases are generally recommended for CI usage in particular. To install, download the binary for your platform from "Assets" and place it into your $PATH:

On Linux:

curl -Lo ./kind "https://kind.sigs.k8s.io/dl/v0.11.1/kind-$(uname)-amd64"
chmod +x ./kind
mv ./kind /some-dir-in-your-PATH/kind

On macOS via Homebrew:

brew install kind

On macOS via MacPorts:

sudo port selfupdate && sudo port install kind

On macOS via Bash:

curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.11.1/kind-darwin-amd64
chmod +x ./kind
mv ./kind /some-dir-in-your-PATH/kind

On Windows:

curl.exe -Lo kind-windows-amd64.exe https://kind.sigs.k8s.io/dl/v0.11.1/kind-windows-amd64
Move-Item .\kind-windows-amd64.exe c:\some-dir-in-your-PATH\kind.exe

# OR via Chocolatey (https://chocolatey.org/packages/kind)
choco install kind

To use kind, you will need to install docker. Once you have docker running you can create a cluster with:

kind create cluster

To delete your cluster use:

kind delete cluster
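
If you manage more than one cluster, a name can be passed explicitly (a quick sketch; the default cluster name is "kind"):

kind create cluster --name kind-2
kind delete cluster --name kind-2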

To create a cluster from Kubernetes source:

  • ensure that Kubernetes is cloned in $(go env GOPATH)/src/k8s.io/kubernetes
  • build a node image and create a cluster with:
kind build node-image
kind create cluster --image kindest/node:latest

Multi-node clusters and other advanced features may be configured with a config file; for more usage see the docs or run kind [command] --help.
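
As a sketch, a minimal multi-node config (assuming the v1alpha4 config API shipped with kind v0.11.x) looks like:

# three node (two workers) cluster config
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker

Saved to a file, this is passed to kind create cluster --config=<path/to/config>.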

Community

Please reach out for bugs, feature requests, and other issues! The maintainers of this project are reachable via the Kubernetes Slack in the #kind channel, or by filing an issue against this repo.

Current maintainers are @BenTheElder, @munnerz, @aojea, and @amwat - feel free to reach out if you have any questions!

Pull Requests are very welcome! If you're planning a new feature, please file an issue to discuss first.

Check the issue tracker for help wanted issues if you're unsure where to start, or feel free to reach out to discuss. 🙂

See also: our own contributor guide and the Kubernetes community page.

Why kind?

  • kind supports multi-node (including HA) clusters
  • kind supports building Kubernetes release builds from source
    • support for make / bash or docker, in addition to pre-published builds
  • kind supports Linux, macOS and Windows
  • kind is a CNCF certified conformant Kubernetes installer

Code of conduct

Participation in the Kubernetes community is governed by the Kubernetes Code of Conduct.

Owner: Kubernetes SIGs (org for Kubernetes SIG-related work)
Comments
  • Cluster doesn't restart when docker restarts

    When docker restarts or is stopped/started (for any reason), the kind node containers remain stopped and aren't restarted properly. When I tried to run docker restart <node container id>, the cluster didn't start either.

    The only solution at this point seems to be recreating the cluster (a sketch for inspecting the stopped containers follows below).

    /kind bug
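
    A sketch for inspecting the stopped node containers by hand, assuming the io.k8s.sigs.kind.cluster label that appears in kind's debug output elsewhere on this page (starting them manually may still not recover the cluster):

    docker ps -a --filter label=io.k8s.sigs.kind.cluster   # list kind node containers, including stopped ones
    docker start kind-control-plane                        # try starting a stopped node container by name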

  • Support arm64

    Device under test is a Packet c1.large.arm 96-core arm64 machine running Ubuntu 18.04.

    ed@ed-2a-bcc-llvm:~$ go version
    go version go1.11.2 linux/arm64
    ed@ed-2a-bcc-llvm:~$ go get sigs.k8s.io/kind
    ed@ed-2a-bcc-llvm:~$ go/bin/kind create cluster
    Creating cluster 'kind-1' ...
     ✓ Ensuring node image (kindest/node:v1.12.2)  
     ✓ [kind-1-control-plane] Creating node container 📦 
     ✗ [kind-1-control-plane] Fixing mounts 🗻 
    Error: failed to create cluster: exit status 1
    Usage:  
      kind create cluster [flags]
            
    Flags:  
          --config string   path to a kind config file
      -h, --help            help for cluster
          --image string    node docker image to use for booting the cluster
          --name string     cluster context name (default "1")
          --retain          retain nodes for debugging when cluster creation fails
          --wait duration   Wait for control plane node to be ready (default 0s)
    
    Global Flags:
          --loglevel string   logrus log level [panic, fatal, error, warning, info, debug] (default "warning")
    
    failed to create cluster: exit status 1
    ed@ed-2a-bcc-llvm:~$ docker version
    Client:
     Version:           18.09.0
     API version:       1.39
     Go version:        go1.10.4
     Git commit:        4d60db4
     Built:             Wed Nov  7 00:52:41 2018
     OS/Arch:           linux/arm64
     Experimental:      false
    
    Server: Docker Engine - Community
     Engine:
      Version:          18.09.0
      API version:      1.39 (minimum version 1.12)
      Go version:       go1.10.4
      Git commit:       4d60db4
      Built:            Wed Nov  7 00:17:01 2018
      OS/Arch:          linux/arm64
      Experimental:     false
    
  • Add dual stack support

    Add dual-stack support to KIND; it also needs dual-stack support in KINDNET (a config sketch follows the dependency lists below). Depends on:

    • [x] https://github.com/kubernetes/kubernetes/pull/79033
    • [x] https://github.com/kubernetes/kubernetes/pull/79386
    • [x] https://github.com/kubernetes/kubernetes/pull/82462
    • [x] https://github.com/kubernetes/kubernetes/pull/82473
    • [x] https://github.com/kubernetes/kubernetes/pull/79993
    • [x] https://github.com/kubernetes/kubernetes/pull/78801
    • [x] https://github.com/kubernetes/kubernetes/pull/83123

    New dependency for kindnet so we don't need annotations on the nodes:

    • [x] https://github.com/kubernetes/enhancements/pull/1665
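
    Presumably (an assumption, extrapolating from the ipFamily option introduced in the "Add IPv6 support" PR below), the eventual user-facing config would be a single networking knob, something like:

    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    networking:
      ipFamily: dual
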
  • mount node product_uuid and product_name in pod containers

    When pods are running in a kind cluster, their product_uuid and product_name are the same, since they share the kernel vfs. This PR adds a new mount to the OCI spec to bind-mount the node's product_uuid and product_name into the pods' containers. This is the result:

    $ kubectl exec nginx-kind-worker cat /sys/class/dmi/id/product_uuid
    053ba73c-3a24-4cfe-b7ca-5a938a4600d7
    $ kubectl exec nginx-kind-worker2 cat /sys/class/dmi/id/product_uuid
    db9f435b-0316-4f66-92a0-8d3632d6f69c
    

    Closes https://github.com/kubernetes-sigs/kind/issues/2318

  • create HA cluster is flaky

    What happened: I started seeing odd failures in the kind-master and -1.14 kubeadm jobs: https://k8s-testgrid.appspot.com/sig-cluster-lifecycle-kubeadm#kubeadm-kind-master https://k8s-testgrid.appspot.com/sig-cluster-lifecycle-kubeadm#kubeadm-kind-1.14

    after switching to this HA config:

    # a cluster with 3 control-planes and 3 workers
    kind: Cluster
    apiVersion: kind.sigs.k8s.io/v1alpha3
    nodes:
    - role: control-plane
    - role: control-plane
    - role: control-plane
    - role: worker
    - role: worker
    - role: worker
    
    I0604 19:15:09.075770     760 join.go:480] [preflight] Retrieving KubeConfig objects
    [preflight] Reading configuration from the cluster...
    [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    I0604 19:15:10.310249     760 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config 401 Unauthorized in 1233 milliseconds
    error execution phase preflight: unable to fetch the kubeadm-config ConfigMap: failed to get config map: Unauthorized 
     ✗ Joining more control-plane nodes 🎮
    DEBU[22:15:10] Running: /usr/bin/docker [docker ps -q -a --no-trunc --filter label=io.k8s.sigs.kind.cluster --format {{.Names}}\t{{.Label "io.k8s.sigs.kind.cluster"}} --filter label=io.k8s.sigs.kind.cluster=kind] 
    $KUBECONFIG is still set to use /home/lubo-it/.kube/kind-config-kind even though that file has been deleted, remember to unset it
    DEBU[22:15:10] Running: /usr/bin/docker [docker rm -f -v kind-control-plane2 kind-control-plane kind-control-plane3 kind-worker kind-worker3 kind-worker2 kind-external-load-balancer] 
    ⠈⠁ Joining more control-plane nodes 🎮 Error: failed to create cluster: failed to join a control plane node with kubeadm: exit status 1
    

    What you expected to happen: no errors.

    How to reproduce it (as minimally and precisely as possible):

    cd kind-src-path
    GO111MODULE=on go build
    # install the kind binary to PATH
    cd kubernetes-src-path
    kind build node-image --kube-root=$(pwd)
    kind create cluster --config=<path-to-above-ha-config> --image kindest/node:latest
    

    Anything else we need to know?:

    • I cannot reproduce the bug without --loglevel=debug.
    • Sometimes it fails during joining the extra control-plane (CP) nodes, sometimes during joining the workers.

    Environment:

    • kind version: (use kind version): master at 43bf0e2594db
    • Kubernetes version: master at 1409ff38e5828f55
    • Docker version: (use docker info):
    Containers: 10
     Running: 7
     Paused: 0
     Stopped: 3
    Images: 128
    Server Version: 18.06.3-ce
    Storage Driver: overlay2
     Backing Filesystem: extfs
     Supports d_type: true
     Native Overlay Diff: true
    Logging Driver: json-file
    Cgroup Driver: systemd
    Plugins:
     Volume: local
     Network: bridge host macvlan null overlay
     Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
    Swarm: inactive
    Runtimes: runc
    Default Runtime: runc
    Init Binary: docker-init
    containerd version: 468a545b9edcd5932818eb9de8e72413e616e86e
    runc version: a592beb5bc4c4092b1b1bac971afed27687340c5
    init version: fec3683
    Security Options:
     apparmor
     seccomp
      Profile: default
    Kernel Version: 4.13.0-41-generic
    Operating System: Ubuntu 17.10
    OSType: linux
    Architecture: x86_64
    CPUs: 4
    Total Memory: 15.66GiB
    Name: luboitvbox
    ID: K2H6:2I6N:FSBZ:S77V:R5CQ:X22B:VYTF:WZ4R:UIKC:HGOT:UCHD:GCR2
    Docker Root Dir: /var/lib/docker
    Debug Mode (client): false
    Debug Mode (server): false
    Registry: https://index.docker.io/v1/
    Labels:
    Experimental: false
    Insecure Registries:
     127.0.0.0/8
    Live Restore Enabled: false
    
    • OS (e.g. from /etc/os-release):
    NAME="Ubuntu"
    VERSION="17.10 (Artful Aardvark)"
    ID=ubuntu
    ID_LIKE=debian
    PRETTY_NAME="Ubuntu 17.10"
    VERSION_ID="17.10"
    HOME_URL="https://www.ubuntu.com/"
    SUPPORT_URL="https://help.ubuntu.com/"
    BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
    PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
    VERSION_CODENAME=artful
    UBUNTU_CODENAME=artful
    

    /kind bug /priority important-soon (?)

  • Add IPv6 support

    This PR allows creating IPv6 Kubernetes clusters with kind, with a future dual-stack implementation in mind; for simplicity, only one address of each protocol is considered.

    It adds a new option, ipFamily, to the v1alpha3 API that allows choosing the IP family of the cluster. To avoid issues with the different networking options, the podSubnet and the serviceSubnet kubeadm values are predefined with the following values:

    	Default PodSubnet          = "10.244.0.0/16"
    	Default ServicesSubnet     = "10.96.0.0/12"
    	Default PodSubnetIPv6      = "fd00:10:244::/64"
    	Default ServicesSubnetIPv6 = "fd00:10:96::/112"
    

    We can create a Kubernetes IPv6 cluster with the following config:

    # necessary for conformance
    kind: Cluster
    apiVersion: kind.sigs.k8s.io/v1alpha3
    networking:
      ipFamily: ipv6
    nodes:
    # the control plane node
    - role: control-plane
    - role: worker
    - role: worker
    

    Test results with IPv4 and IPv6

    References:

    • https://github.com/kubernetes-sigs/kind/issues/280
    • https://docs.google.com/document/d/17e3TWWLfnIZrsVxpln9wNi4x0JVn2oHIHDYjaeENdVE/edit?ts=5c9af5b4#heading=h.on33tp91ehzk

    Fixes #280

  • ARM64 CI

    Per discussion in the #kind slack, we should set up some CI with OpenLab to get kind on arm64. xref #166

    @dims was able to get arm64 working, but we'll need to set this up to keep it working once that goes in, as the maintainers do not otherwise have access to suitable arm machines to test on.

    /assign /kind feature /priority important-longterm

  • overlay network cannot be applied when host is behind a proxy

    Environment

    Host OS: RHEL 7.4
    Host Docker version: 18.09.0
    Host go version: go1.11.2
    Node Image: kindest/node:v1.12.2

    kind create cluster

    [root@localhost bin]# kind create cluster
    Creating cluster 'kind-1' ...
     ✓ Ensuring node image (kindest/node:v1.12.2) 🖼
     ✓ [kind-1-control-plane] Creating node container 📦
     ✓ [kind-1-control-plane] Fixing mounts 🗻
     ✓ [kind-1-control-plane] Starting systemd 🖥
     ✓ [kind-1-control-plane] Waiting for docker to be ready 🐋
     ✗ [kind-1-control-plane] Starting Kubernetes (this may take a minute) ☸
    FATA[07:20:43] Failed to create cluster: failed to apply overlay network: exit status 1
    

    The code below in pkg/cluster/context.go is trying to extract the k8s version using the kubectl version command, in order to download the version-specific weave net.yaml. The code is not OK:

            // TODO(bentheelder): support other overlay networks
            if err = node.Command(
                    "/bin/sh", "-c",
                    `kubectl apply --kubeconfig=/etc/kubernetes/admin.conf -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version --kubeconfig=/etc/kubernetes/admin.conf | base64 | tr -d '\n')"`,
            ).Run(); err != nil {
                    return kubeadmConfig, errors.Wrap(err, "failed to apply overlay network")
            }
    

    Why is the output of the kubectl version command base64-encoded?
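
    A workaround sketch, assuming the proxy works from the host: fetch the manifest on the host, where proxy settings apply, then apply it from there, reusing the URL from the code above (the proxy address here is hypothetical):

    export HTTPS_PROXY=http://proxy.example.com:3128   # hypothetical proxy address
    export KUBECONFIG="$(kind get kubeconfig-path)"    # point kubectl at the kind cluster (older kind releases)
    curl -sLo weave.yaml "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
    kubectl apply -f weave.yaml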

  • Unstable cluster

    When running it locally on my machine, the cluster seems much more unstable than on our CI. The cluster is created inside a privileged container, but then I am getting strange errors:

    $ kubectl cluster-info
    Unable to connect to the server: unexpected EOF
    
    $ kubectl cluster-info
    Error from server (InternalError): an error on the server ("") has prevented the request from succeeding (get services)
    
    $ kubectl cluster-info 
    error: the server doesn't have a resource type "services"
    
  • WSL2 ERROR: failed to create cluster

    ERROR: failed to create cluster: failed to init node with kubeadm: command "docker exec --privileged kind-control-plane kubeadm init --skip-phases=preflight --config=/kind/kubeadm.conf --skip-token-print --v=6" failed with error: exit status 1
    Command Output: I0622 15:16:13.468494 216 initconfiguration.go:246] loading configuration from "/kind/kubeadm.conf"

    What is expected: the cluster should be created without any error.

    How to reproduce it: run the command below: $ kind create cluster

    Anything else we need to know?: I have recently installed Ubuntu as a virtual machine on Windows 10 via WSL 2. I am running Ubuntu in Windows Terminal as an admin user, and have also installed docker and set it up as a non-root user. Below I am providing environment-related information.

    ENVIRONMENT:

    Ubuntu: command used: lsb_release -a

    No LSB modules are available.
    Distributor ID: Ubuntu
    Description:    Ubuntu 20.04.2 LTS
    Release:        20.04
    Codename:       focal

    Kubectl Installation https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#install-using-native-package-management

    Kubectl version: command used: kubectl version --client

    Client Version: version.Info{ Major:"1", Minor:"21", GitVersion:"v1.21.2", GitCommit:"092fbfbf53427de67cac1e9fa54aaa09a28371d7", GitTreeState:"clean", BuildDate:"2021-06-16T12:59:11Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"linux/amd64" }

    kind installation:

    curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.11.1/kind-linux-amd64
    chmod +x ./kind
    sudo mv ./kind /usr/local/bin/kind

    kind version: command used: kind version

    kind v0.11.1 go1.16.4 linux/amd64

    docker info

    command used: docker info

    Client:
     Context:    default
     Debug Mode: false
     Plugins:
      app: Docker App (Docker Inc., v0.9.1-beta3)
      buildx: Build with BuildKit (Docker Inc., v0.5.1-docker)
      scan: Docker Scan (Docker Inc., v0.8.0)

    Server:
     Containers: 1
      Running: 0
      Paused: 0
      Stopped: 1
     Images: 2
     Server Version: 20.10.7
     Storage Driver: overlay2
      Backing Filesystem: extfs
      Supports d_type: true
      Native Overlay Diff: true
      userxattr: false
     Logging Driver: json-file
     Cgroup Driver: cgroupfs
     Cgroup Version: 1
     Plugins:
      Volume: local
      Network: bridge host ipvlan macvlan null overlay
      Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
     Swarm: inactive
     Runtimes: runc io.containerd.runc.v2 io.containerd.runtime.v1.linux
     Default Runtime: runc
     Init Binary: docker-init
     containerd version: d71fcd7d8303cbf684402823e425e9dd2e99285d
     runc version: b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7
     init version: de40ad0
     Security Options:
      seccomp
       Profile: default
     Kernel Version: 5.4.72-microsoft-standard-WSL2
     Operating System: Ubuntu 20.04.2 LTS
     OSType: linux
     Architecture: x86_64
     CPUs: 12
     Total Memory: 6.133GiB
     Name: LAPTOP-TN6NO0LS
     ID: JDCK:NRQ2:ML5P:EUMK:OBYG:76PM:5SXD:FMYK:KHCX:NDTB:IQ4R:KIBJ
     Docker Root Dir: /var/lib/docker
     Debug Mode: false
     Registry: https://index.docker.io/v1/
     Labels:
     Experimental: false
     Insecure Registries:
      127.0.0.0/8
     Live Restore Enabled: false

    WARNING: No blkio throttle.read_bps_device support
    WARNING: No blkio throttle.write_bps_device support
    WARNING: No blkio throttle.read_iops_device support
    WARNING: No blkio throttle.write_iops_device support

  • feat: support for multiple images to kind load

    closes #1905

    Usage:

    Multiple image names provided (all images present locally):

    ./kind load docker-image nginx,busybox --name master    
    
    

    Multiple image names provided (not all images are present locally):

    ./kind load docker-image nginx,busybox,python --name master
    ERROR: image: "python" not present locally
    

    One image name provided without commas (backwards compatibility):

    ./kind load docker-image nginx --name master               
    
    
  • [WIP] Iptables mess

    As explained in https://bugzilla.netfilter.org/show_bug.cgi?id=1632, there is no compatibility guarantee for the userspace iptables binaries; this means the output of iptables-save may differ if the host's iptables version doesn't match the version used inside the kind node.

    Since the iptables rules we need to process for the DNS magic are well known and cannot change without breaking a lot of users, instead of manipulating the existing rules we just obtain the parameters we need, then remove and re-add the necessary rules.

    Fixes: #3054

  • How do I access etcd container?

    Hi all,
    I have a simple cluster with the following configuration:

    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    nodes:
    - role: control-plane
      extraPortMappings:
      - containerPort: 80
        hostPort: 80
      - containerPort: 22
        hostPort: 22
    - role: worker
    - role: worker
    

    I successfully deploy it, and then try to access the etcd container by doing:

    kubectl -n kube-system exec etcd-kind-control-plane -it -- sh
    

    But I get the following error:

    error: Internal error occurred: error executing command in container: failed to exec in container: failed to start exec "a22a7843cc56d0d4b6133e0eca2e1404bf80a683ac46aceff9a2f092de46e852": OCI runtime exec failed: exec failed: unable to start container process: open /dev/ptmx: operation not permitted: unknown
    

    I get the same error if I try to do it from within the control plane container. Any help is much appreciated (one idea is sketched after the version info below). Kind version:

    kind v0.17.0 go1.19.2 linux/amd64
    

    Docker version:

    Client:
     Version:           20.10.22
     API version:       1.41
     Go version:        go1.19.4
     Git commit:        3a2c30b63a
     Built:             Tue Dec 20 20:43:40 2022
     OS/Arch:           linux/amd64
     Context:           default
     Experimental:      true
    
    Server:
     Engine:
      Version:          20.10.22
      API version:      1.41 (minimum version 1.12)
      Go version:       go1.19.4
      Git commit:       42c8b31499
      Built:            Tue Dec 20 20:42:46 2022
      OS/Arch:          linux/amd64
      Experimental:     false
     containerd:
      Version:          v1.6.14
      GitCommit:        9ba4b250366a5ddde94bb7c9d1def331423aa323.m
     runc:
      Version:          1.1.4
      GitCommit:
     docker-init:
      Version:          0.19.0
      GitCommit:        de40ad0
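
    Since the error points at pseudo-terminal allocation (open /dev/ptmx), one thing worth trying (an assumption on my part, not a confirmed fix) is to exec a one-off command without allocating a TTY:

    # drop -it so no pseudo-terminal is allocated
    kubectl -n kube-system exec etcd-kind-control-plane -- etcdctl version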
    
  • kind with podman remote

    What happened: we run podman as a sidecar container in gitlab-ci pipelines; we configured the following:

    $ export DOCKER_HOST=tcp://podman:8888
    $ export DOCKER_DRIVER=fuse-overlayfs
    $ export DOCKER_TLS_CERTDIR=""
    $ export CONTAINER_CONNECTION=tcp://podman:8888
    $ export CONTAINER_HOST=tcp://podman:8888
    
    $ kind create cluster --config test/kind-config.yaml
    enabling experimental podman provider
    Creating cluster "kind" ...
     • Ensuring node image (kindest/node:v1.21.10) 🖼  ...
     ✓ Ensuring node image (kindest/node:v1.21.10) 🖼
     • Preparing nodes 📦   ...
     ✗ Preparing nodes 📦 
    ERROR: failed to create cluster: command "podman run --name kind-control-plane --hostname kind-control-plane --label io.x-k8s.kind.role=control-plane --privileged --tmpfs /tmp --tmpfs /run --volume 011549994c011b0585520606d44fc241d77c97d4d4b88e3379e1b68aefec41e3:/var:suid,exec,dev --volume /lib/modules:/lib/modules:ro -e KIND_EXPERIMENTAL_CONTAINERD_SNAPSHOTTER --detach --tty --net kind --label io.x-k8s.kind.cluster=kind -e container=podman --publish=127.0.0.1:45791:6443/tcp -e KUBECONFIG=/etc/kubernetes/admin.conf docker.io/kindest/node:v1.21.10" failed with error: exit status 126
    Command Output: Error: crun: set xattr for `runc.sha256`: Permission denied: OCI permission denied
    

    podman configuration:

    #!/bin/bash
    echo "starting podman ..."
    unset CONTAINER_HOST
    podman system service --time 0 unix:///var/run/docker.sock & 
    podman system service --time 0 tcp://0.0.0.0:8888
    

    and containers.conf

    [containers]
    netns="host"
    userns="host"
    ipcns="host"
    utsns="host"
    cgroupns="host"
    cgroups="disabled"
    default_sysctls = []
    log_driver = "k8s-file"
    [engine]
    cgroup_manager = "cgroupfs"
    events_logger="file"
    runtime="crun"
    

    Any idea what the problem is?

    What you expected to happen:

    How to reproduce it (as minimally and precisely as possible):

    Anything else we need to know?:

    Environment:

    • kind version: (use kind version):
    • Runtime info: (use docker info or podman info):
    • OS (e.g. from /etc/os-release):
    • Kubernetes version: (use kubectl version):
    • Any proxies or other special environment settings?:
  • RFE: listing supported providers and currently configured provider

    What would you like to be added:

    It would be great if the kind cli can support listing supported providers as well as an option/flag to show the currently configured provider.

    e.g.

    kind provider lists all supported providers (docker, podman, etc.)
    kind provider --current would show the currently configured provider (docker)

    Why is this needed:

    We have use cases where having this would benefit our internal tooling for determining what the current provider configured for kind is. And in the case of forks of kind that set the default provider to something other than docker, we cannot assume that if the KIND_EXPERIMENTAL_PROVIDER env is missing, the configured provider would be docker. So having this subcommand make the configured provider explicit is very useful. (A sketch of today's workaround follows below.)

    Very much open to UX suggestions and other feedback around this.
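
    For reference, a sketch of the fragile heuristic such tooling has to use today; the fallback in the last two lines is exactly the assumption that breaks on forks:

    # assume docker whenever KIND_EXPERIMENTAL_PROVIDER is unset
    provider="${KIND_EXPERIMENTAL_PROVIDER:-docker}"
    echo "configured kind provider: ${provider}"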

  • Adding support for 'kind provider' command

    I'm very much open to UX feedback, as we can do something like kind get provider, but keeping it at kind provider for now as this seems to be somewhat separate from the other operations.

    There are use cases where it would help to be able to list the supported providers as well as display the currently configured runtime provider.

    This change adds a kind provider command which will display all supported providers. The --current flag can be provided to show the currently configured provider which honors the KIND_EXPERIMENTAL_PROVIDER env if set.

    Addresses: #3056

    Signed-off-by: Yibo Zhuang [email protected]

  • Networking somewhat broken with iptables nf_tables mode 1.8.8 on the host

    Quick placeholder issue for https://github.com/kubernetes/minikube/issues/15573#issuecomment-1374286267

    We're going to have to get creative to deal with this properly. cc @aojea
