vcluster - Create fully functional virtual Kubernetes clusters - Each cluster runs inside a Kubernetes namespace and can be started within seconds



Join us on Slack!

vcluster - Virtual Clusters For Kubernetes

  • Lightweight & Low-Overhead - Based on k3s, bundled in a single pod and with super-low resource consumption
  • No Performance Degradation - Pods are scheduled in the underlying host cluster, so they get no performance hit at all while running
  • Reduced Overhead On Host Cluster - Split up large multi-tenant clusters into smaller vclusters to reduce complexity and increase scalability
  • Flexible & Easy Provisioning - Create via vcluster CLI, helm, kubectl, Argo or any of your favorite tools (it is basically just a StatefulSet)
  • No Admin Privileges Required - If you can deploy a web app to a Kubernetes namespace, you will be able to deploy a vcluster as well
  • Single Namespace Encapsulation - Every vcluster and all of its workloads are inside a single namespace of the underlying host cluster
  • Easy Cleanup - Delete the host namespace and the vcluster plus all of its workloads will be gone immediately

Learn more on www.vcluster.com.


Architecture

(Diagram: vcluster intro)

(Diagram: vcluster compatibility)

Learn more in the documentation.


⭐️ Do you like vcluster? Support the project with a star ⭐️


Quick Start

To learn more about vcluster, open the full getting started guide.

1. Download vcluster CLI

Use one of the following commands to download the vcluster CLI binary from GitHub:

Mac (Intel/AMD)
curl -s -L "https://github.com/loft-sh/vcluster/releases/latest" | sed -nE 's!.*"([^"]*vcluster-darwin-amd64)".*!https://github.com\1!p' | xargs -n 1 curl -L -o vcluster && chmod +x vcluster;
sudo mv vcluster /usr/local/bin;
Mac (Silicon/ARM)
curl -s -L "https://github.com/loft-sh/vcluster/releases/latest" | sed -nE 's!.*"([^"]*vcluster-darwin-arm64)".*!https://github.com\1!p' | xargs -n 1 curl -L -o vcluster && chmod +x vcluster;
sudo mv vcluster /usr/local/bin;
Linux (AMD)
curl -s -L "https://github.com/loft-sh/vcluster/releases/latest" | sed -nE 's!.*"([^"]*vcluster-linux-amd64)".*!https://github.com\1!p' | xargs -n 1 curl -L -o vcluster && chmod +x vcluster;
sudo mv vcluster /usr/local/bin;
Linux (ARM)
curl -s -L "https://github.com/loft-sh/vcluster/releases/latest" | sed -nE 's!.*"([^"]*vcluster-linux-arm64)".*!https://github.com\1!p' | xargs -n 1 curl -L -o vcluster && chmod +x vcluster;
sudo mv vcluster /usr/local/bin;
Windows (Powershell)
md -Force "$Env:APPDATA\vcluster"; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.SecurityProtocolType]'Tls,Tls11,Tls12';
Invoke-WebRequest -UseBasicParsing ((Invoke-WebRequest -URI "https://github.com/loft-sh/vcluster/releases/latest" -UseBasicParsing).Content -replace "(?ms).*`"([^`"]*vcluster-windows-amd64.exe)`".*","https://github.com/`$1") -o $Env:APPDATA\vcluster\vcluster.exe;
$env:Path += ";" + $Env:APPDATA + "\vcluster";
[Environment]::SetEnvironmentVariable("Path", $env:Path, [System.EnvironmentVariableTarget]::User);

If you get an error that Windows cannot find vcluster after installing it, restart your computer so that the changes to the PATH variable take effect.


Alternatively, you can download the binary for your platform from the GitHub Releases page and add this binary to your PATH.
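
To verify that the CLI was installed correctly, print its version (a quick sanity check; the output depends on the release you downloaded):

vcluster --version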


2. Create a vcluster

vcluster create vcluster-1 -n host-namespace-1
Alternative A: Helm

Create file vcluster.yaml:

vcluster:
  image: rancher/k3s:v1.19.5-k3s2    
  extraArgs:
    - --service-cidr=10.96.0.0/12    
  baseArgs:
    - server
    - --write-kubeconfig=/k3s-config/kube-config.yaml
    - --data-dir=/data
    - --no-deploy=traefik,servicelb,metrics-server,local-storage
    - --disable-network-policy
    - --disable-agent
    - --disable-scheduler
    - --disable-cloud-controller
    - --flannel-backend=none
    - --kube-controller-manager-arg=controllers=*,-nodeipam,-nodelifecycle,-persistentvolume-binder,-attachdetach,-persistentvolume-expander,-cloud-node-lifecycle
storage:
  size: 5Gi

Deploy vcluster via helm:

helm upgrade --install vcluster-1 vcluster \
  --values vcluster.yaml \
  --repo https://charts.loft.sh \
  --namespace vcluster-1 \
  --repository-config=''
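
Once the release is deployed, a quick way to confirm the vcluster is starting is to check the workloads in the target namespace (a sketch; exact pod names depend on the chart version):

kubectl get statefulsets,pods -n vcluster-1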

Alternative B: kubectl

Create file vcluster.yaml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: vcluster-1
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: vcluster-1
rules:
  - apiGroups: [""]
    resources: ["configmaps", "secrets", "services", "services/proxy", "pods", "pods/proxy", "pods/attach", "pods/portforward", "pods/exec", "pods/log", "events", "endpoints", "persistentvolumeclaims"]
    verbs: ["*"]
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["*"]
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps"]
    resources: ["statefulsets"]
    verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: vcluster-1
subjects:
  - kind: ServiceAccount
    name: vcluster-1
roleRef:
  kind: Role
  name: vcluster-1
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Service
metadata:
  name: vcluster-1
spec:
  type: ClusterIP
  ports:
    - name: https
      port: 443
      targetPort: 8443
      protocol: TCP
  selector:
    app: vcluster-1
---
apiVersion: v1
kind: Service
metadata:
  name: vcluster-1-headless
spec:
  ports:
    - name: https
      port: 443
      targetPort: 8443
      protocol: TCP
  clusterIP: None
  selector:
    app: vcluster-1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: vcluster-1
  labels:
    app: vcluster-1
spec:
  serviceName: vcluster-1-headless
  replicas: 1
  selector:
    matchLabels:
      app: vcluster-1
  template:
    metadata:
      labels:
        app: vcluster-1
    spec:
      terminationGracePeriodSeconds: 10
      serviceAccountName: vcluster-1
      containers:
      - image: rancher/k3s:v1.19.5-k3s2
        name: virtual-cluster
        command:
          - "/bin/k3s"
        args:
          - "server"
          - "--write-kubeconfig=/k3s-config/kube-config.yaml"
          - "--data-dir=/data"
          - "--disable=traefik,servicelb,metrics-server,local-storage"
          - "--disable-network-policy"
          - "--disable-agent"
          - "--disable-scheduler"
          - "--disable-cloud-controller"
          - "--flannel-backend=none"
          - "--kube-controller-manager-arg=controllers=*,-nodeipam,-nodelifecycle,-persistentvolume-binder,-attachdetach,-persistentvolume-expander,-cloud-node-lifecycle"  
          - "--service-cidr=10.96.0.0/12"  
        volumeMounts:
          - mountPath: /data
            name: data
      - name: syncer
        image: "loftsh/virtual-cluster:0.0.27"
        args:
          - --service-name=vcluster-1
          - --suffix=vcluster-1
          - --owning-statefulset=vcluster-1
          - --out-kube-config-secret=vcluster-1
        volumeMounts:
          - mountPath: /data
            name: data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 5Gi

Create vcluster using kubectl:

kubectl apply -f vcluster.yaml
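
Once the StatefulSet is running, the syncer writes a kubeconfig into the vcluster-1 secret (see the --out-kube-config-secret flag above). A rough sketch for retrieving it manually, assuming the kubeconfig is stored under the config key of that secret:

kubectl get secret vcluster-1 -n <your-namespace> -o jsonpath='{.data.config}' | base64 -d > kubeconfig.yaml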
Alternative C: Other. Get the Helm chart or Kubernetes manifest and use any tool you like to deploy a vcluster, e.g. Argo, Flux, etc.

3. Use the vcluster

# Start port-forwarding to the vcluster service + set kube-config file
vcluster connect vcluster-1 -n host-namespace-1
export KUBECONFIG=./kubeconfig.yaml

# OR: Start port-forwarding and add kube-context to current kube-config file
vcluster connect vcluster-1 -n host-namespace-1 --update-current

# Run any kubectl, helm, etc. command in your vcluster
kubectl get namespace
kubectl get pods -n kube-system
kubectl create namespace demo-nginx
kubectl create deployment nginx-deployment -n demo-nginx --image=nginx
kubectl get pods -n demo-nginx
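
Since the vcluster behaves like a regular Kubernetes cluster, helm works against it as well. For example, a sketch using the public Bitnami chart repository:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-nginx bitnami/nginx -n demo-nginx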

4. Cleanup

vcluster delete vcluster-1 -n host-namespace-1

Alternatively, you could also delete the host-namespace using kubectl.
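
For example, deleting the host namespace removes the vcluster and all of its workloads:

kubectl delete namespace host-namespace-1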

Comments
  • DaemonSet Pods stay on a node even if no other workload is present

    This might be intentional, so please close the issue if so and thanks for a wonderful project btw.

    What: When spinning up a multi-node cluster with e.g. minikube and deploying an nginx:latest deployment with a single replica, I only see the node where the workload is scheduled. I then scale the deployment to 10 replicas and I see more nodes, because the 10 workload Pods are spread out by the scheduler. I then scale down to a single replica again and, after some waiting, I see a single node again. That is all to be expected.

    But when I do the same procedure with a DaemonSet deployed, my node count never goes back down, because the DaemonSet workloads are still running on those nodes even though no other relevant workloads are running on them.

    Expected behavior: I would have expected the DaemonSet Pods to be terminated (after some time) on nodes where no relevant workload is running, in order to minimize unnecessary vcluster workload pressure on the "mother ship" cluster.

    Test spec

    • Vcluster version 0.4.5 (latest available version at the time)
    • Minikube version v1.24.0
    • Kubectl version v1.22.4
    # 0) Spin up minikube test cluster
    minikube start --cpus 2 --memory 2048 --nodes=3 --cni=flannel
    
    # 1) Create vcluster
    vcluster create vcluster1 -n vcluster1 --create-namespace --disable-ingress-sync
    
    # 2) Connect to vcluster
    vcluster connect vcluster1 -n vcluster1
    
    # 3) Create deployment
    kubectl --kubeconfig ./kubeconfig.yaml create deployment workload --image=nginx:latest --replicas=1
    
    #  4) Scale deployment
    kubectl --kubeconfig ./kubeconfig.yaml scale deployment workload --replicas=10
    
    # 5) Get nodes
     kubectl --kubeconfig ./kubeconfig.yaml get nodes
    
    # 6) Scale down
    kubectl --kubeconfig ./kubeconfig.yaml scale deployment workload --replicas=1
    
    # 7) Get nodes (OBS: you have to wait for the count to go down)
     kubectl --kubeconfig ./kubeconfig.yaml get nodes
    
    # 8) Apply DaemonSet
    cat <<EOF | kubectl --kubeconfig ./kubeconfig.yaml create -f -
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      labels:
        app: nginx
      name: daemonset
    spec:
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - image: nginx:latest
            name: nginx
    EOF
    
    # Repeat from step 4 up and including step 7
    
  • Error: failed pre-install:timed out waiting for condition


    What happened?

    A new install of vcluster fails and times out. Please see the attached logs. We have a successful install on the same VM/RHEL 7 host, but something happened last month that is not allowing us to create a new vcluster.

    What did you expect to happen?

    A new cluster should have been created.

    How can we reproduce it (as minimally and precisely as possible)?

    Tried to create a new Vcluster but it times out.

    Anything else we need to know?

    vcluster-log.pdf ssg-log.pdf

    Host cluster Kubernetes version

    $ kubectl version
    1.21.1

    Host cluster Kubernetes distribution

    1.21.x

    vcluster version

    $ vcluster --version
    Vcluster 0.7.0 and 0.10.2

    Vcluster Kubernetes distribution (k3s (default), k8s, k0s)

    K8s

    OS and Arch

    OS: RHEL 7
    Arch:

  • feat(syncer): sync csi objects when scheduler enabled


    Signed-off-by: Rohan CJ [email protected]

    See description in #773

    What issue type does this pull request address? (keep at least one, remove the others) /kind bugfix /kind feature

    What does this pull request do? Which issues does it resolve? (use resolves #<issue_number> if possible) resolves #773

    Please provide a short message that should be published in the vcluster release notes Automatically syncs some storage related objects when scheduler is enabled.

    What else do we need to know?

  • Istio Injection issue on vcluster >= 0.5.x


  • Cannot create vcluster inside either k3d or kind on Linux

    vcluster pod fails with the following error:

    time="2021-12-24T17:56:38.949209082Z" level=fatal msg="failed to evacuate root cgroup: mkdir /sys/fs/cgroup/init: read-only file system"

    ❯ vcluster --version
    vcluster version 0.5.0-beta.0

    ❯ kind --version
    kind version 0.11.1

    ❯ k3d --version
    k3d version v5.2.2
    k3s version v1.21.7-k3s1 (default)

  • vcluster does not start in limited RKE cluster


    I got a restricted namespace in our internal RKE cluster managed by Rancher. However, vcluster won't start up. I have no idea what the concrete reason is, given that the log contains a massive output.

    Things seem to start going wrong with this log entry: cluster_authentication_trust_controller.go:493] kube-system/extension-apiserver-authentication failed with : Internal error occurred: resource quota evaluation timed out

    But probably the attached log file will indicate the underlying reason better. vcluster1.log

    The syncer log is very short:

    I0629 13:25:32.393511       1 main.go:223] Using physical cluster at https://10.43.0.1:443
    I0629 13:25:32.575521       1 main.go:254] Can connect to virtual cluster with version v1.20.4+k3s1
    F0629 13:25:32.587987       1 main.go:138] register controllers: register secrets indices: no matches for kind "Ingress" in version "networking.k8s.io/v1beta1"

    Any ideas?
    
  • k0s support beside k3s?


    Hi everyone,

    please, could you also support k0s besides k3s? k0s has many more use cases (bare-metal, cloud, IoT, edge, etc.) compared to k3s (IoT, edge). In addition, k0s is much less opinionated regarding networking, storage, ingress, etc., and its size is also small (187 MB). Finally, k0s is used for conventional staging & production clusters (bare-metal or cloud), which means that dev vclusters with k0s will be much closer to staging & production. So it would be great if you could support it. Please see the following link: https://k0sproject.io/

    Best regards, Thomas

  • vcluster remains in pending after creation, then enters CrashLoopBackOff


    What happened?

    After creating a vcluster using vcluster create csh-vcluster-01 --debug, the setup process hangs here until the command times out:

    root@k8s-ctrl01-nrh:~# vcluster create csh-vcluster-01 --debug
    debug  Will use namespace vcluster-csh-vcluster-01 to create the vcluster...
    info   Creating namespace vcluster-csh-vcluster-01
    info   Create vcluster csh-vcluster-01...
    debug  execute command: helm upgrade csh-vcluster-01 https://charts.loft.sh/charts/vcluster-0.10.2.tgz --kubeconfig /tmp/3510279876 --namespace vcluster-csh-vcluster-01 --install --repository-config='' --values /tmp/2170583696
    done √ Successfully created virtual cluster csh-vcluster-01 in namespace vcluster-csh-vcluster-01
    info   Waiting for vcluster to come up...
    

    Error messages are not verbose enough for me to figure out what exactly causes this to hang. Once this process is either interrupted via keyboard interrupt or by letting it time out, the following is visible when the command vcluster list is run:

    root@k8s-ctrl01-nrh:~# vcluster list
    
     NAME              NAMESPACE                  STATUS    CONNECTED   CREATED                         AGE
     csh-vcluster-01   vcluster-csh-vcluster-01   Pending               2022-07-10 21:51:02 -0400 EDT   5m45s
    

    Attempts to connect to the vcluster demonstrate that the vcluster is similarly unresponsive:

    root@k8s-ctrl01-nrh:~# vcluster connect csh-vcluster-01 --debug
    info   Waiting for vcluster to come up...
    

    What did you expect to happen?

    The vcluster comes up and becomes active, as expected per the documentation's getting started guide.

    How can we reproduce it (as minimally and precisely as possible)?

    install vcluster as outlined in the documentation, then run vcluster create

    Anything else we need to know?

    This cluster is being virtualized within proxmox. However, all other cluster functions are working as expected.

    Host cluster Kubernetes version

    root@k8s-ctrl01-nrh:~# kubectl version
    WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
    Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.2", GitCommit:"f66044f4361b9f1f96f0053dd46cb7dce5e990a8", GitTreeState:"clean", BuildDate:"2022-06-15T14:22:29Z", GoVersion:"go1.18.3", Compiler:"gc", Platform:"linux/amd64"}
    Kustomize Version: v4.5.4
    Server Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.2", GitCommit:"f66044f4361b9f1f96f0053dd46cb7dce5e990a8", GitTreeState:"clean", BuildDate:"2022-06-15T14:15:38Z", GoVersion:"go1.18.3", Compiler:"gc", Platform:"linux/amd64"}
    

    Host cluster Kubernetes distribution

    k8s 1.24.2
    

    vcluster version

    root@k8s-ctrl01-nrh:~# vcluster --version
    vcluster version 0.10.2
    

    Vcluster Kubernetes distribution (k3s (default), k8s, k0s)

    k8s
    

    OS and Arch

    OS:  Debian GNU/Linux 11 (bullseye) 
    Arch: x86_64
    
  • Services syncer cannot be disabled


    What happened?

    I tried to disable the service syncer in favor of a custom syncer shipped as a vcluster plugin (inspired by the original syncer code). But regardless of the --sync flag, services keep getting synced, even when they are disabled. It also doesn't matter whether a custom plugin is loaded or not (as a sidecar container).

    What did you expect to happen?

    No service should be synchronized by default (if disabled), at least as long as no custom plugin is added.

    How can we reproduce it (as minimally and precisely as possible)?

    First of all, the tiny piece of extra.yaml configuration:

    syncer:
      extraArgs:
        - "--sync=-services"
    

    And a dummy service service.yaml:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-service
    spec:
      selector:
        app: MyApp
      ports:
        - name: http
          protocol: TCP
          port: 80
          targetPort: 9376
    
    1. vcluster create test --namespace test -f extra.yaml
    2. vcluster connect test --namespace test &
    3. kubectl --kubeconfig ./kubeconfig.yaml apply -f service.yaml

    The physical cluster lists the service, even if sync is disabled:

    ...
    test           service/my-service-x-default-x-vc-test   ClusterIP      10.111.80.113    <none>        80/TCP
    ...
    

    Anything else we need to know?

    No response

    Host cluster Kubernetes version

    $ kubectl version
    Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.5", GitCommit:"5c99e2ac2ff9a3c549d9ca665e7bc05a3e18f07e", GitTreeState:"clean", BuildDate:"2021-12-16T08:38:33Z", GoVersion:"go1.16.12", Compiler:"gc", Platform:"darwin/amd64"}
    Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.5", GitCommit:"5c99e2ac2ff9a3c549d9ca665e7bc05a3e18f07e", GitTreeState:"clean", BuildDate:"2021-12-16T08:32:32Z", GoVersion:"go1.16.12", Compiler:"gc", Platform:"linux/amd64"}
    

    Host cluster Kubernetes distribution

    Docker4Mac
    

    vcluster version

    $ vcluster --version
    vcluster version 0.6.0
    

    Vcluster Kubernetes distribution (k3s (default), k8s, k0s)

    k3s
    

    OS and Arch

    OS: MacOSX
    Arch: Intel
    
  • vcluster on k3s on WSL2


    Hello, trying out vcluster. Any idea why my attempt to create a simple vcluster is failing here? I successfully installed k3s here in WSL2 and now I'm trying to create my first vcluster inside it...

    k3s version v1.22.2+k3s2 (3f5774b4) go version go1.16.8

    (screenshot omitted)

    This is the k3s log on WSL2:

    ╰─ I1023 18:55:27.145605 31070 scope.go:110] "RemoveContainer" containerID="4709e866a99bcdbe858feaa24263302ca41811aeb1d9989a2096063ce3021402" I1023 18:55:27.145634 31070 scope.go:110] "RemoveContainer" containerID="e5ecccd55e58988e150565eddaec0a34cbd4519e50f27b287988198e5f8bc449" E1023 18:55:27.145866 31070 pod_workers.go:765] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"vcluster\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=vcluster pod=vcluster-1-0_host-namespace-1(c076d934-6820-46cd-b886-c6b35111ece2)\", failed to \"StartContainer\" for \"syncer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=syncer pod=vcluster-1-0_host-namespace-1(c076d934-6820-46cd-b886-c6b35111ece2)\"]" pod="host-namespace-1/vcluster-1-0" podUID=c076d934-6820-46cd-b886-c6b35111ece2 I1023 18:55:40.145573 31070 scope.go:110] "RemoveContainer" containerID="4709e866a99bcdbe858feaa24263302ca41811aeb1d9989a2096063ce3021402" I1023 18:55:40.145604 31070 scope.go:110] "RemoveContainer" containerID="e5ecccd55e58988e150565eddaec0a34cbd4519e50f27b287988198e5f8bc449" E1023 18:55:40.145915 31070 pod_workers.go:765] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"vcluster\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=vcluster pod=vcluster-1-0_host-namespace-1(c076d934-6820-46cd-b886-c6b35111ece2)\", failed to \"StartContainer\" for \"syncer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=syncer pod=vcluster-1-0_host-namespace-1(c076d934-6820-46cd-b886-c6b35111ece2)\"]" pod="host-namespace-1/vcluster-1-0" podUID=c076d934-6820-46cd-b886-c6b35111ece2 W1023 18:55:46.540702 31070 sysinfo.go:203] Nodes topology is not available, providing CPU topology I1023 18:55:54.145219 31070 scope.go:110] "RemoveContainer" containerID="4709e866a99bcdbe858feaa24263302ca41811aeb1d9989a2096063ce3021402" I1023 18:55:54.145249 31070 scope.go:110] "RemoveContainer" containerID="e5ecccd55e58988e150565eddaec0a34cbd4519e50f27b287988198e5f8bc449" E1023 18:55:54.145498 31070 pod_workers.go:765] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"vcluster\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=vcluster pod=vcluster-1-0_host-namespace-1(c076d934-6820-46cd-b886-c6b35111ece2)\", failed to \"StartContainer\" for \"syncer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=syncer pod=vcluster-1-0_host-namespace-1(c076d934-6820-46cd-b886-c6b35111ece2)\"]" pod="host-namespace-1/vcluster-1-0" podUID=c076d934-6820-46cd-b886-c6b35111ece2 E1023 18:55:57.670079 31070 network_policy_controller.go:252] Aborting sync. Failed to run iptables-restore: exit status 2 (iptables-restore v1.8.4 (legacy): Couldn't load matchlimit':No such file or directory

    Error occurred at line: 83 Try iptables-restore -h' or 'iptables-restore --help' for more information. ) *filter :INPUT ACCEPT [40874:12238794] - [0:0] :FORWARD DROP [0:0] - [0:0] :OUTPUT ACCEPT [41155:11345096] - [0:0] :DOCKER - [0:0] - [0:0] :DOCKER-ISOLATION-STAGE-1 - [0:0] - [0:0] :DOCKER-ISOLATION-STAGE-2 - [0:0] - [0:0] :DOCKER-USER - [0:0] - [0:0] :KUBE-EXTERNAL-SERVICES - [0:0] - [0:0] :KUBE-FIREWALL - [0:0] - [0:0] :KUBE-FORWARD - [0:0] - [0:0] :KUBE-KUBELET-CANARY - [0:0] - [0:0] :KUBE-NODEPORTS - [0:0] - [0:0] :KUBE-NWPLCY-DEFAULT - [0:0] - [0:0] :KUBE-PROXY-CANARY - [0:0] - [0:0] :KUBE-ROUTER-FORWARD - [0:0] - [0:0] :KUBE-ROUTER-INPUT - [0:0] - [0:0] :KUBE-ROUTER-OUTPUT - [0:0] - [0:0] :KUBE-SERVICES - [0:0] - [0:0] :KUBE-POD-FW-YTHDYMA2CBWLR2PW - [0:0] :KUBE-POD-FW-CEOFHLPKKYLD56IO - [0:0] :KUBE-POD-FW-NAOEZKKUB5NO4KBI - [0:0] :KUBE-POD-FW-4M52UXR2EWFBQ6QH - [0:0] :KUBE-POD-FW-K2TVNHK5E5ZQHCLK - [0:0] :KUBE-POD-FW-C7NCCGNSUR3CKZKN - [0:0] -A INPUT -m comment --comment "kube-router netpol - 4IA2OSFRMVNDXBVV" -j KUBE-ROUTER-INPUT -A INPUT -m comment --comment "kubernetes health check service ports" -j KUBE-NODEPORTS -A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES -A INPUT -j KUBE-FIREWALL -A FORWARD -m comment --comment "kube-router netpol - TEMCG2JMHZYE7H7T" -j KUBE-ROUTER-FORWARD -A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD -A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES -A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES -A FORWARD -j DOCKER-USER -A FORWARD -j DOCKER-ISOLATION-STAGE-1 -A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -A FORWARD -o docker0 -j DOCKER -A FORWARD -i docker0 ! -o docker0 -j ACCEPT -A FORWARD -i docker0 -o docker0 -j ACCEPT -A FORWARD -o br-ba82458eff39 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -A FORWARD -o br-ba82458eff39 -j DOCKER -A FORWARD -i br-ba82458eff39 ! -o br-ba82458eff39 -j ACCEPT -A FORWARD -i br-ba82458eff39 -o br-ba82458eff39 -j ACCEPT -A FORWARD -s 10.42.0.0/16 -j ACCEPT -A FORWARD -d 10.42.0.0/16 -j ACCEPT -A OUTPUT -m comment --comment "kube-router netpol - VEAAIY32XVBHCSCY" -j KUBE-ROUTER-OUTPUT -A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES -A OUTPUT -j KUBE-FIREWALL -A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2 -A DOCKER-ISOLATION-STAGE-1 -i br-ba82458eff39 ! -o br-ba82458eff39 -j DOCKER-ISOLATION-STAGE-2 -A DOCKER-ISOLATION-STAGE-1 -j RETURN -A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP -A DOCKER-ISOLATION-STAGE-2 -o br-ba82458eff39 -j DROP -A DOCKER-ISOLATION-STAGE-2 -j RETURN -A DOCKER-USER -j RETURN -A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP -A KUBE-FIREWALL ! -s 127.0.0.0/8 -d 127.0.0.0/8 -m comment --comment "block incoming localnet connections" -m conntrack ! 
--ctstate RELATED,ESTABLISHED,DNAT -j DROP -A KUBE-FORWARD -m conntrack --ctstate INVALID -j DROP -A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT -A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack pod source rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack pod destination rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -A KUBE-NWPLCY-DEFAULT -m comment --comment "rule to mark traffic matching a network policy" -j MARK --set-xmark 0x10000/0x10000 -A KUBE-ROUTER-FORWARD -m comment --comment "rule to explicitly ACCEPT traffic that comply to network policies" -m mark --mark 0x20000/0x20000 -j ACCEPT -A KUBE-ROUTER-INPUT -d 10.43.0.0/16 -m comment --comment "allow traffic to cluster IP - M66LPN4N3KB5HTJR" -j RETURN -A KUBE-ROUTER-INPUT -p tcp -m comment --comment "allow LOCAL TCP traffic to node ports - LR7XO7NXDBGQJD2M" -m addrtype --dst-type LOCAL -m multiport --dports 30000:32767 -j RETURN -A KUBE-ROUTER-INPUT -p udp -m comment --comment "allow LOCAL UDP traffic to node ports - 76UCBPIZNGJNWNUZ" -m addrtype --dst-type LOCAL -m multiport --dports 30000:32767 -j RETURN -A KUBE-ROUTER-INPUT -m comment --comment "rule to explicitly ACCEPT traffic that comply to network policies" -m mark --mark 0x20000/0x20000 -j ACCEPT -A KUBE-ROUTER-OUTPUT -m comment --comment "rule to explicitly ACCEPT traffic that comply to network policies" -m mark --mark 0x20000/0x20000 -j ACCEPT -A KUBE-SERVICES -d 10.43.44.208/32 -p udp -m comment --comment "host-namespace-1/kube-dns-x-kube-system-x-vcluster-1:dns has no endpoints" -m udp --dport 53 -j REJECT --reject-with icmp-port-unreachable -A KUBE-SERVICES -d 10.43.44.208/32 -p tcp -m comment --comment "host-namespace-1/kube-dns-x-kube-system-x-vcluster-1:dns-tcp has no endpoints" -m tcp --dport 53 -j REJECT --reject-with icmp-port-unreachable -A KUBE-SERVICES -d 10.43.44.208/32 -p tcp -m comment --comment "host-namespace-1/kube-dns-x-kube-system-x-vcluster-1:metrics has no endpoints" -m tcp --dport 9153 -j REJECT --reject-with icmp-port-unreachable -I KUBE-POD-FW-YTHDYMA2CBWLR2PW 1 -d 10.42.0.7 -m comment --comment "run through default ingress network policy chain" -j KUBE-NWPLCY-DEFAULT -I KUBE-POD-FW-YTHDYMA2CBWLR2PW 1 -s 10.42.0.7 -m comment --comment "run through default egress network policy chain" -j KUBE-NWPLCY-DEFAULT -I KUBE-POD-FW-YTHDYMA2CBWLR2PW 1 -m comment --comment "rule to permit the traffic traffic to pods when source is the pod's local node" -m addrtype --src-type LOCAL -d 10.42.0.7 -j ACCEPT -I KUBE-POD-FW-YTHDYMA2CBWLR2PW 1 -m comment --comment "rule for stateful firewall for pod" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic destined to POD name:svclb-traefik-k4whv namespace: kube-system to chain KUBE-POD-FW-YTHDYMA2CBWLR2PW" -d 10.42.0.7 -j KUBE-POD-FW-YTHDYMA2CBWLR2PW -I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic destined to POD name:svclb-traefik-k4whv namespace: kube-system to chain KUBE-POD-FW-YTHDYMA2CBWLR2PW" -d 10.42.0.7 -j KUBE-POD-FW-YTHDYMA2CBWLR2PW -I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic destined to POD name:svclb-traefik-k4whv namespace: kube-system to chain KUBE-POD-FW-YTHDYMA2CBWLR2PW" -d 10.42.0.7 -j KUBE-POD-FW-YTHDYMA2CBWLR2PW -I KUBE-ROUTER-INPUT 1 -m comment --comment "rule to jump traffic from POD 
name:svclb-traefik-k4whv namespace: kube-system to chain KUBE-POD-FW-YTHDYMA2CBWLR2PW" -s 10.42.0.7 -j KUBE-POD-FW-YTHDYMA2CBWLR2PW -I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic from POD name:svclb-traefik-k4whv namespace: kube-system to chain KUBE-POD-FW-YTHDYMA2CBWLR2PW" -s 10.42.0.7 -j KUBE-POD-FW-YTHDYMA2CBWLR2PW -I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic from POD name:svclb-traefik-k4whv namespace: kube-system to chain KUBE-POD-FW-YTHDYMA2CBWLR2PW" -s 10.42.0.7 -j KUBE-POD-FW-YTHDYMA2CBWLR2PW -I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic from POD name:svclb-traefik-k4whv namespace: kube-system to chain KUBE-POD-FW-YTHDYMA2CBWLR2PW" -s 10.42.0.7 -j KUBE-POD-FW-YTHDYMA2CBWLR2PW -A KUBE-POD-FW-YTHDYMA2CBWLR2PW -m comment --comment "rule to log dropped traffic POD name:svclb-traefik-k4whv namespace: kube-system" -m mark ! --mark 0x10000/0x10000 -j NFLOG --nflog-group 100 -m limit --limit 10/minute --limit-burst 10 -A KUBE-POD-FW-YTHDYMA2CBWLR2PW -m comment --comment "rule to REJECT traffic destined for POD name:svclb-traefik-k4whv namespace: kube-system" -m mark ! --mark 0x10000/0x10000 -j REJECT -A KUBE-POD-FW-YTHDYMA2CBWLR2PW -j MARK --set-mark 0/0x10000 -A KUBE-POD-FW-YTHDYMA2CBWLR2PW -m comment --comment "set mark to ACCEPT traffic that comply to network policies" -j MARK --set-mark 0x20000/0x20000 -I KUBE-POD-FW-CEOFHLPKKYLD56IO 1 -d 10.42.0.11 -m comment --comment "run through default ingress network policy chain" -j KUBE-NWPLCY-DEFAULT -I KUBE-POD-FW-CEOFHLPKKYLD56IO 1 -s 10.42.0.11 -m comment --comment "run through default egress network policy chain" -j KUBE-NWPLCY-DEFAULT -I KUBE-POD-FW-CEOFHLPKKYLD56IO 1 -m comment --comment "rule to permit the traffic traffic to pods when source is the pod's local node" -m addrtype --src-type LOCAL -d 10.42.0.11 -j ACCEPT -I KUBE-POD-FW-CEOFHLPKKYLD56IO 1 -m comment --comment "rule for stateful firewall for pod" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic destined to POD name:vcluster-1-0 namespace: host-namespace-1 to chain KUBE-POD-FW-CEOFHLPKKYLD56IO" -d 10.42.0.11 -j KUBE-POD-FW-CEOFHLPKKYLD56IO -I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic destined to POD name:vcluster-1-0 namespace: host-namespace-1 to chain KUBE-POD-FW-CEOFHLPKKYLD56IO" -d 10.42.0.11 -j KUBE-POD-FW-CEOFHLPKKYLD56IO -I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic destined to POD name:vcluster-1-0 namespace: host-namespace-1 to chain KUBE-POD-FW-CEOFHLPKKYLD56IO" -d 10.42.0.11 -j KUBE-POD-FW-CEOFHLPKKYLD56IO -I KUBE-ROUTER-INPUT 1 -m comment --comment "rule to jump traffic from POD name:vcluster-1-0 namespace: host-namespace-1 to chain KUBE-POD-FW-CEOFHLPKKYLD56IO" -s 10.42.0.11 -j KUBE-POD-FW-CEOFHLPKKYLD56IO -I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic from POD name:vcluster-1-0 namespace: host-namespace-1 to chain KUBE-POD-FW-CEOFHLPKKYLD56IO" -s 10.42.0.11 -j KUBE-POD-FW-CEOFHLPKKYLD56IO -I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic from POD name:vcluster-1-0 namespace: host-namespace-1 to chain KUBE-POD-FW-CEOFHLPKKYLD56IO" -s 10.42.0.11 -j KUBE-POD-FW-CEOFHLPKKYLD56IO -I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic from POD name:vcluster-1-0 namespace: host-namespace-1 to chain KUBE-POD-FW-CEOFHLPKKYLD56IO" -s 10.42.0.11 -j 
KUBE-POD-FW-CEOFHLPKKYLD56IO -A KUBE-POD-FW-CEOFHLPKKYLD56IO -m comment --comment "rule to log dropped traffic POD name:vcluster-1-0 namespace: host-namespace-1" -m mark ! --mark 0x10000/0x10000 -j NFLOG --nflog-group 100 -m limit --limit 10/minute --limit-burst 10 -A KUBE-POD-FW-CEOFHLPKKYLD56IO -m comment --comment "rule to REJECT traffic destined for POD name:vcluster-1-0 namespace: host-namespace-1" -m mark ! --mark 0x10000/0x10000 -j REJECT -A KUBE-POD-FW-CEOFHLPKKYLD56IO -j MARK --set-mark 0/0x10000 -A KUBE-POD-FW-CEOFHLPKKYLD56IO -m comment --comment "set mark to ACCEPT traffic that comply to network policies" -j MARK --set-mark 0x20000/0x20000 -I KUBE-POD-FW-NAOEZKKUB5NO4KBI 1 -d 10.42.0.8 -m comment --comment "run through default ingress network policy chain" -j KUBE-NWPLCY-DEFAULT -I KUBE-POD-FW-NAOEZKKUB5NO4KBI 1 -s 10.42.0.8 -m comment --comment "run through default egress network policy chain" -j KUBE-NWPLCY-DEFAULT -I KUBE-POD-FW-NAOEZKKUB5NO4KBI 1 -m comment --comment "rule to permit the traffic traffic to pods when source is the pod's local node" -m addrtype --src-type LOCAL -d 10.42.0.8 -j ACCEPT -I KUBE-POD-FW-NAOEZKKUB5NO4KBI 1 -m comment --comment "rule for stateful firewall for pod" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic destined to POD name:traefik-74dd4975f9-4fpmk namespace: kube-system to chain KUBE-POD-FW-NAOEZKKUB5NO4KBI" -d 10.42.0.8 -j KUBE-POD-FW-NAOEZKKUB5NO4KBI -I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic destined to POD name:traefik-74dd4975f9-4fpmk namespace: kube-system to chain KUBE-POD-FW-NAOEZKKUB5NO4KBI" -d 10.42.0.8 -j KUBE-POD-FW-NAOEZKKUB5NO4KBI -I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic destined to POD name:traefik-74dd4975f9-4fpmk namespace: kube-system to chain KUBE-POD-FW-NAOEZKKUB5NO4KBI" -d 10.42.0.8 -j KUBE-POD-FW-NAOEZKKUB5NO4KBI -I KUBE-ROUTER-INPUT 1 -m comment --comment "rule to jump traffic from POD name:traefik-74dd4975f9-4fpmk namespace: kube-system to chain KUBE-POD-FW-NAOEZKKUB5NO4KBI" -s 10.42.0.8 -j KUBE-POD-FW-NAOEZKKUB5NO4KBI -I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic from POD name:traefik-74dd4975f9-4fpmk namespace: kube-system to chain KUBE-POD-FW-NAOEZKKUB5NO4KBI" -s 10.42.0.8 -j KUBE-POD-FW-NAOEZKKUB5NO4KBI -I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic from POD name:traefik-74dd4975f9-4fpmk namespace: kube-system to chain KUBE-POD-FW-NAOEZKKUB5NO4KBI" -s 10.42.0.8 -j KUBE-POD-FW-NAOEZKKUB5NO4KBI -I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic from POD name:traefik-74dd4975f9-4fpmk namespace: kube-system to chain KUBE-POD-FW-NAOEZKKUB5NO4KBI" -s 10.42.0.8 -j KUBE-POD-FW-NAOEZKKUB5NO4KBI -A KUBE-POD-FW-NAOEZKKUB5NO4KBI -m comment --comment "rule to log dropped traffic POD name:traefik-74dd4975f9-4fpmk namespace: kube-system" -m mark ! --mark 0x10000/0x10000 -j NFLOG --nflog-group 100 -m limit --limit 10/minute --limit-burst 10 -A KUBE-POD-FW-NAOEZKKUB5NO4KBI -m comment --comment "rule to REJECT traffic destined for POD name:traefik-74dd4975f9-4fpmk namespace: kube-system" -m mark ! 
--mark 0x10000/0x10000 -j REJECT -A KUBE-POD-FW-NAOEZKKUB5NO4KBI -j MARK --set-mark 0/0x10000 -A KUBE-POD-FW-NAOEZKKUB5NO4KBI -m comment --comment "set mark to ACCEPT traffic that comply to network policies" -j MARK --set-mark 0x20000/0x20000 -I KUBE-POD-FW-4M52UXR2EWFBQ6QH 1 -d 10.42.0.2 -m comment --comment "run through default ingress network policy chain" -j KUBE-NWPLCY-DEFAULT -I KUBE-POD-FW-4M52UXR2EWFBQ6QH 1 -s 10.42.0.2 -m comment --comment "run through default egress network policy chain" -j KUBE-NWPLCY-DEFAULT -I KUBE-POD-FW-4M52UXR2EWFBQ6QH 1 -m comment --comment "rule to permit the traffic traffic to pods when source is the pod's local node" -m addrtype --src-type LOCAL -d 10.42.0.2 -j ACCEPT -I KUBE-POD-FW-4M52UXR2EWFBQ6QH 1 -m comment --comment "rule for stateful firewall for pod" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic destined to POD name:local-path-provisioner-64ffb68fd-mv8j4 namespace: kube-system to chain KUBE-POD-FW-4M52UXR2EWFBQ6QH" -d 10.42.0.2 -j KUBE-POD-FW-4M52UXR2EWFBQ6QH -I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic destined to POD name:local-path-provisioner-64ffb68fd-mv8j4 namespace: kube-system to chain KUBE-POD-FW-4M52UXR2EWFBQ6QH" -d 10.42.0.2 -j KUBE-POD-FW-4M52UXR2EWFBQ6QH -I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic destined to POD name:local-path-provisioner-64ffb68fd-mv8j4 namespace: kube-system to chain KUBE-POD-FW-4M52UXR2EWFBQ6QH" -d 10.42.0.2 -j KUBE-POD-FW-4M52UXR2EWFBQ6QH -I KUBE-ROUTER-INPUT 1 -m comment --comment "rule to jump traffic from POD name:local-path-provisioner-64ffb68fd-mv8j4 namespace: kube-system to chain KUBE-POD-FW-4M52UXR2EWFBQ6QH" -s 10.42.0.2 -j KUBE-POD-FW-4M52UXR2EWFBQ6QH -I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic from POD name:local-path-provisioner-64ffb68fd-mv8j4 namespace: kube-system to chain KUBE-POD-FW-4M52UXR2EWFBQ6QH" -s 10.42.0.2 -j KUBE-POD-FW-4M52UXR2EWFBQ6QH -I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic from POD name:local-path-provisioner-64ffb68fd-mv8j4 namespace: kube-system to chain KUBE-POD-FW-4M52UXR2EWFBQ6QH" -s 10.42.0.2 -j KUBE-POD-FW-4M52UXR2EWFBQ6QH -I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic from POD name:local-path-provisioner-64ffb68fd-mv8j4 namespace: kube-system to chain KUBE-POD-FW-4M52UXR2EWFBQ6QH" -s 10.42.0.2 -j KUBE-POD-FW-4M52UXR2EWFBQ6QH -A KUBE-POD-FW-4M52UXR2EWFBQ6QH -m comment --comment "rule to log dropped traffic POD name:local-path-provisioner-64ffb68fd-mv8j4 namespace: kube-system" -m mark ! --mark 0x10000/0x10000 -j NFLOG --nflog-group 100 -m limit --limit 10/minute --limit-burst 10 -A KUBE-POD-FW-4M52UXR2EWFBQ6QH -m comment --comment "rule to REJECT traffic destined for POD name:local-path-provisioner-64ffb68fd-mv8j4 namespace: kube-system" -m mark ! 
--mark 0x10000/0x10000 -j REJECT -A KUBE-POD-FW-4M52UXR2EWFBQ6QH -j MARK --set-mark 0/0x10000 -A KUBE-POD-FW-4M52UXR2EWFBQ6QH -m comment --comment "set mark to ACCEPT traffic that comply to network policies" -j MARK --set-mark 0x20000/0x20000 -I KUBE-POD-FW-K2TVNHK5E5ZQHCLK 1 -d 10.42.0.3 -m comment --comment "run through default ingress network policy chain" -j KUBE-NWPLCY-DEFAULT -I KUBE-POD-FW-K2TVNHK5E5ZQHCLK 1 -s 10.42.0.3 -m comment --comment "run through default egress network policy chain" -j KUBE-NWPLCY-DEFAULT -I KUBE-POD-FW-K2TVNHK5E5ZQHCLK 1 -m comment --comment "rule to permit the traffic traffic to pods when source is the pod's local node" -m addrtype --src-type LOCAL -d 10.42.0.3 -j ACCEPT -I KUBE-POD-FW-K2TVNHK5E5ZQHCLK 1 -m comment --comment "rule for stateful firewall for pod" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic destined to POD name:metrics-server-9cf544f65-xdbg6 namespace: kube-system to chain KUBE-POD-FW-K2TVNHK5E5ZQHCLK" -d 10.42.0.3 -j KUBE-POD-FW-K2TVNHK5E5ZQHCLK -I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic destined to POD name:metrics-server-9cf544f65-xdbg6 namespace: kube-system to chain KUBE-POD-FW-K2TVNHK5E5ZQHCLK" -d 10.42.0.3 -j KUBE-POD-FW-K2TVNHK5E5ZQHCLK -I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic destined to POD name:metrics-server-9cf544f65-xdbg6 namespace: kube-system to chain KUBE-POD-FW-K2TVNHK5E5ZQHCLK" -d 10.42.0.3 -j KUBE-POD-FW-K2TVNHK5E5ZQHCLK -I KUBE-ROUTER-INPUT 1 -m comment --comment "rule to jump traffic from POD name:metrics-server-9cf544f65-xdbg6 namespace: kube-system to chain KUBE-POD-FW-K2TVNHK5E5ZQHCLK" -s 10.42.0.3 -j KUBE-POD-FW-K2TVNHK5E5ZQHCLK -I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic from POD name:metrics-server-9cf544f65-xdbg6 namespace: kube-system to chain KUBE-POD-FW-K2TVNHK5E5ZQHCLK" -s 10.42.0.3 -j KUBE-POD-FW-K2TVNHK5E5ZQHCLK -I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic from POD name:metrics-server-9cf544f65-xdbg6 namespace: kube-system to chain KUBE-POD-FW-K2TVNHK5E5ZQHCLK" -s 10.42.0.3 -j KUBE-POD-FW-K2TVNHK5E5ZQHCLK -I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic from POD name:metrics-server-9cf544f65-xdbg6 namespace: kube-system to chain KUBE-POD-FW-K2TVNHK5E5ZQHCLK" -s 10.42.0.3 -j KUBE-POD-FW-K2TVNHK5E5ZQHCLK -A KUBE-POD-FW-K2TVNHK5E5ZQHCLK -m comment --comment "rule to log dropped traffic POD name:metrics-server-9cf544f65-xdbg6 namespace: kube-system" -m mark ! --mark 0x10000/0x10000 -j NFLOG --nflog-group 100 -m limit --limit 10/minute --limit-burst 10 -A KUBE-POD-FW-K2TVNHK5E5ZQHCLK -m comment --comment "rule to REJECT traffic destined for POD name:metrics-server-9cf544f65-xdbg6 namespace: kube-system" -m mark ! 
--mark 0x10000/0x10000 -j REJECT -A KUBE-POD-FW-K2TVNHK5E5ZQHCLK -j MARK --set-mark 0/0x10000 -A KUBE-POD-FW-K2TVNHK5E5ZQHCLK -m comment --comment "set mark to ACCEPT traffic that comply to network policies" -j MARK --set-mark 0x20000/0x20000 -I KUBE-POD-FW-C7NCCGNSUR3CKZKN 1 -d 10.42.0.5 -m comment --comment "run through default ingress network policy chain" -j KUBE-NWPLCY-DEFAULT -I KUBE-POD-FW-C7NCCGNSUR3CKZKN 1 -s 10.42.0.5 -m comment --comment "run through default egress network policy chain" -j KUBE-NWPLCY-DEFAULT -I KUBE-POD-FW-C7NCCGNSUR3CKZKN 1 -m comment --comment "rule to permit the traffic traffic to pods when source is the pod's local node" -m addrtype --src-type LOCAL -d 10.42.0.5 -j ACCEPT -I KUBE-POD-FW-C7NCCGNSUR3CKZKN 1 -m comment --comment "rule for stateful firewall for pod" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic destined to POD name:coredns-85cb69466-sfmfm namespace: kube-system to chain KUBE-POD-FW-C7NCCGNSUR3CKZKN" -d 10.42.0.5 -j KUBE-POD-FW-C7NCCGNSUR3CKZKN -I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic destined to POD name:coredns-85cb69466-sfmfm namespace: kube-system to chain KUBE-POD-FW-C7NCCGNSUR3CKZKN" -d 10.42.0.5 -j KUBE-POD-FW-C7NCCGNSUR3CKZKN -I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic destined to POD name:coredns-85cb69466-sfmfm namespace: kube-system to chain KUBE-POD-FW-C7NCCGNSUR3CKZKN" -d 10.42.0.5 -j KUBE-POD-FW-C7NCCGNSUR3CKZKN -I KUBE-ROUTER-INPUT 1 -m comment --comment "rule to jump traffic from POD name:coredns-85cb69466-sfmfm namespace: kube-system to chain KUBE-POD-FW-C7NCCGNSUR3CKZKN" -s 10.42.0.5 -j KUBE-POD-FW-C7NCCGNSUR3CKZKN -I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic from POD name:coredns-85cb69466-sfmfm namespace: kube-system to chain KUBE-POD-FW-C7NCCGNSUR3CKZKN" -s 10.42.0.5 -j KUBE-POD-FW-C7NCCGNSUR3CKZKN -I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic from POD name:coredns-85cb69466-sfmfm namespace: kube-system to chain KUBE-POD-FW-C7NCCGNSUR3CKZKN" -s 10.42.0.5 -j KUBE-POD-FW-C7NCCGNSUR3CKZKN -I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic from POD name:coredns-85cb69466-sfmfm namespace: kube-system to chain KUBE-POD-FW-C7NCCGNSUR3CKZKN" -s 10.42.0.5 -j KUBE-POD-FW-C7NCCGNSUR3CKZKN -A KUBE-POD-FW-C7NCCGNSUR3CKZKN -m comment --comment "rule to log dropped traffic POD name:coredns-85cb69466-sfmfm namespace: kube-system" -m mark ! --mark 0x10000/0x10000 -j NFLOG --nflog-group 100 -m limit --limit 10/minute --limit-burst 10 -A KUBE-POD-FW-C7NCCGNSUR3CKZKN -m comment --comment "rule to REJECT traffic destined for POD name:coredns-85cb69466-sfmfm namespace: kube-system" -m mark ! 
--mark 0x10000/0x10000 -j REJECT -A KUBE-POD-FW-C7NCCGNSUR3CKZKN -j MARK --set-mark 0/0x10000 -A KUBE-POD-FW-C7NCCGNSUR3CKZKN -m comment --comment "set mark to ACCEPT traffic that comply to network policies" -j MARK --set-mark 0x20000/0x20000 COMMIT I1023 18:56:09.144877 31070 scope.go:110] "RemoveContainer" containerID="4709e866a99bcdbe858feaa24263302ca41811aeb1d9989a2096063ce3021402" I1023 18:56:09.144909 31070 scope.go:110] "RemoveContainer" containerID="e5ecccd55e58988e150565eddaec0a34cbd4519e50f27b287988198e5f8bc449" ╭─    ~ ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── 1 ✘    at 18:54:23   ╰─ I1023 18:56:20.145216 31070 scope.go:110] "RemoveContainer" containerID="4709e866a99bcdbe858feaa24263302ca41811aeb1d9989a2096063ce3021402" I1023 18:56:20.145252 31070 scope.go:110] "RemoveContainer" containerID="e5ecccd55e58988e150565eddaec0a34cbd4519e50f27b287988198e5f8bc449" E1023 18:56:20.145522 31070 pod_workers.go:765] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"vcluster\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=vcluster pod=vcluster-1-0_host-namespace-1(c076d934-6820-46cd-b886-c6b35111ece2)\", failed to \"StartContainer\" for \"syncer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=syncer pod=vcluster-1-0_host-namespace-1(c076d934-6820-46cd-b886-c6b35111ece2)\"]" pod="host-namespace-1/vcluster-1-0" podUID=c076d934-6820-46cd-b886-c6b35111ece2 I1023 18:56:31.145003 31070 scope.go:110] "RemoveContainer" containerID="4709e866a99bcdbe858feaa24263302ca41811aeb1d9989a2096063ce3021402" I1023 18:56:31.145035 31070 scope.go:110] "RemoveContainer" containerID="e5ecccd55e58988e150565eddaec0a34cbd4519e50f27b287988198e5f8bc449" E1023 18:56:31.145307 31070 pod_workers.go:765] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"vcluster\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=vcluster pod=vcluster-1-0_host-namespace-1(c076d934-6820-46cd-b886-c6b35111ece2)\", failed to \"StartContainer\" for \"syncer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=syncer pod=vcluster-1-0_host-namespace-1(c076d934-6820-46cd-b886-c6b35111ece2)\"]" pod="host-namespace-1/vcluster-1-0" podUID=c076d934-6820-46cd-b886-c6b35111ece2 I1023 18:56:42.145112 31070 scope.go:110] "RemoveContainer" containerID="4709e866a99bcdbe858feaa24263302ca41811aeb1d9989a2096063ce3021402" I1023 18:56:42.145144 31070 scope.go:110] "RemoveContainer" containerID="e5ecccd55e58988e150565eddaec0a34cbd4519e50f27b287988198e5f8bc449" E1023 18:56:42.145447 31070 pod_workers.go:765] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"vcluster\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=vcluster pod=vcluster-1-0_host-namespace-1(c076d934-6820-46cd-b886-c6b35111ece2)\", failed to \"StartContainer\" for \"syncer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=syncer 
pod=vcluster-1-0_host-namespace-1(c076d934-6820-46cd-b886-c6b35111ece2)\"]" pod="host-namespace-1/vcluster-1-0" podUID=c076d934-6820-46cd-b886-c6b35111ece2 Pr

  • create PersistentVolume for PersistentVolumeClaim when the cluster does not support dynamic volume provisioning

    Hello, the vcluster CLI creates a PVC but does not create a PV; instead it waits for a PV to be created via dynamic volume provisioning (IMHO). But if the cluster only supports static provisioning, the vcluster Pods stay in the Pending state.

    It would be nice to also create the PV to avoid this kind of problem 😋
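
    As a workaround sketch (not something the CLI does today), a statically provisioned PV could be pre-created and bound to the data claim of the vcluster StatefulSet. The claim name follows the usual <volumeClaimTemplate>-<statefulset>-<ordinal> pattern; the hostPath below is purely hypothetical:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: vcluster-1-data
    spec:
      capacity:
        storage: 5Gi
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      hostPath:
        path: /var/lib/vcluster-1-data   # hypothetical path on the node
      claimRef:
        namespace: host-namespace-1
        name: data-vcluster-1-0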

  • Include vcluster service name in K3s cert SANs by default


    What issue type does this pull request address? (keep at least one, remove the others) /kind enhancement

    What does this pull request do? Which issues does it resolve? (use resolves #<issue_number> if possible) Ensures that a K3s-based vcluster's Service can be reached by other applications within the host K8s cluster via its DNS name without encountering certificate issues.

    Please provide a short message that should be published in the vcluster release notes Fixed an issue where the vcluster Service name was missing from the K3s certificate SANs

    What else do we need to know? The vcluster Service name is automatically included as a cert SAN when using CNCF K8s. Therefore, for the sake of consistency, it should work the same way with K3s. Otherwise we need to rely on the ClusterIP or manually add a --tls-san=<my_vcluster_name> to the K3s extraArgs every time.
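
    For reference, the manual workaround mentioned above looks roughly like this in the chart values (a sketch; the exact values layout depends on the chart version):

    vcluster:
      extraArgs:
        - --tls-san=<my_vcluster_name>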

  • Set all required fields of securityContext for 'vcluster-rewrite-hosts' initContainer


    What issue type does this pull request address? (keep at least one, remove the others) /kind bugfix

    What does this pull request do? Which issues does it resolve? (use resolves #<issue_number> if possible) resolves #846

    Please provide a short message that should be published in the vcluster release notes Fixed an issue where vcluster would not correctly sync securityContext for StatefulSet workload

    Closes ENG-748

  • Service account with GCP workload Identity annotations in vcluster


    /kind question

    Hi, has anyone successfully managed to run workloads in vclusters that make use of GCP Workload Identity annotations as described in this example? Or is this expected not to work? I've tried, but got this error (it can't authenticate with the gcloud APIs):

    root@workload-identity-deployment-7b5c5d9b98-rjfzl:/# gcloud secrets versions access 1 --secret=test-secret
    ERROR: (gcloud.secrets.versions.access) There was a problem refreshing your current auth tokens: ('Failed to retrieve http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/[email protected]/token from the Google Compute Engine metadata service. Status: 403 Response:\nb"Unable to generate access token; IAM returned 403 Forbidden: Permission \'iam.serviceAccounts.getAccessToken\' denied on resource (or it may not exist).\\nThis error could be caused by a missing IAM policy binding on the target IAM service account.\\nFor more information, refer to the Workload Identity documentation:\\n\\thttps://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity#authenticating_to\\n\\n"', <google.auth.transport.requests._Response object at 0x7f6ccef24a90>)
    

    However, using the Google SA key locally works, i.e. gcloud auth print-access-token works, which means the permissions are also set up correctly. The same setup also works on the host cluster directly.

    I see that under https://www.vcluster.com/docs/architecture/synced-resources there's an option to sync the ServiceAccount resource from the vcluster to the host cluster. I have tried that, but it's also not working.
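
    For anyone trying the same setup, the two pieces involved are the vcluster values that enable ServiceAccount syncing and the annotated ServiceAccount inside the vcluster. A sketch with hypothetical names (the annotation key is the standard GKE Workload Identity annotation; the GSA email is a placeholder):

    # vcluster values (sketch)
    sync:
      serviceaccounts:
        enabled: true

    # ServiceAccount created inside the vcluster (sketch)
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: my-ksa
      annotations:
        iam.gke.io/gcp-service-account: my-gsa@my-project.iam.gserviceaccount.com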

  • feat(nodes): Add filtering to Node Labels on Node Sync


    What issue type does this pull request address? /kind enhancement

    What does this pull request do? Which issues does it resolve? Adds functionality to filter the labels on nodes as they get synced down. Prevents information about the host cluster nodes from being leaked.

    Please provide a short message that should be published in the vcluster release notes Add capability to filter out specific labels from Nodes synced into vcluster.

    What else do we need to know? Could possibly tie this to nodes synced enabled.

  • Not able to access vcluster pod due to Upgrade request required


    Is your feature request related to a problem?

    Issue: Not able to access vcluster Pod

    kubectl exec -it app-95c87457d-tv4mz --kubeconfig /Users/vudao/workspace/gitlab/eks-vcluster/kube-config/dev-kubeconfig.yaml -- bash
    Defaulted container "app" out of: app, glowrootapp (init)
    Error from server (BadRequest): Upgrade request required
    
    ➜  ~ vcluster --version
    vcluster version 0.13.0
    

    EKS v1.23

    Kubectl

    ➜  ~ kubectl version
    WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
    Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.3", GitCommit:"434bfd82814af038ad94d62ebe59b133fcb50506", GitTreeState:"clean", BuildDate:"2022-10-12T10:57:26Z", GoVersion:"go1.19.2", Compiler:"gc", Platform:"darwin/arm64"}
    Kustomize Version: v4.5.7
    Server Version: version.Info{Major:"1", Minor:"23+", GitVersion:"v1.23.13-eks-fb459a0", GitCommit:"55bd5d5cb7d32bc35e4e050f536181196fb8c6f7", GitTreeState:"clean", BuildDate:"2022-10-24T20:35:40Z", GoVersion:"go1.17.13", Compiler:"gc", Platform:"linux/amd64"}
    WARNING: version difference between client (1.25) and server (1.23) exceeds the supported minor version skew of +/-1
    

    I connect vcluster using

    vcluster connect dev -n dev --server=https://dev-eks.simflexcloud.com --service-account admin --cluster-role cluster-admin  --update-current=false --insecure
    

    vcluster config

    service:
      type: NodePort
    
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            - matchExpressions:
                - key: deployment
                  operator: In
                  values:
                    - dev-ops
    
    tolerations:
      - key: 'dedicated'
        operator: 'Equal'
        value: 'dev-ops'
        effect: 'NoSchedule'
    
    syncer:
      extraArgs:
        - --tls-san=dev-eks.simflexcloud.com
        - --sync=nodes
        - --sync-all-nodes
    
    
    sync:
      persistentvolumes:
        enabled: true
      storageclasses:
        enabled: true
      nodes:
        enabled: true
        syncAllNodes: true
      serviceaccounts:
        enabled: true
    

    Which solution do you suggest?

    N/A

    Which alternative solutions exist?

    No response

    Additional context

    No response

  • Vcluster list command


    What happened?

    The vcluster list output reveals details of all vclusters. Is there any way we can remove or hide this command? If I use RBAC to block this command, the vcluster connect command also gets blocked.

    Please help.

    What did you expect to happen?

    Only an admin should be able to see vcluster details using the vcluster list command.

    How can we reproduce it (as minimally and precisely as possible)?

    Vcluster list command

    Anything else we need to know?

    No response

    Host cluster Kubernetes version

    $ kubectl version
    1.22

    Host cluster Kubernetes distribution

    1.22

    vcluster version

    $ vcluster --version
    K3s 1.25

    Vcluster Kubernetes distribution (k3s (default), k8s, k0s)

    K3s

    OS and Arch

    OS: Ubuntu
    Arch: