Clusternet


Managing Your Clusters (including public, private, hybrid, edge, etc) as easily as Visiting the Internet.


Clusternet (Cluster Internet) is an open source add-on that helps you manage millions of Kubernetes clusters as easily as visiting the Internet. No matter whether your clusters are running on public cloud, private cloud, hybrid cloud, or at the edge, Clusternet lets you manage/visit them all as if they were running locally. This also helps eliminate the need to juggle different management tools for each cluster.

Clusternet helps set up network tunnels in a configurable way when your clusters are running in a VPC network, at the edge, or behind a firewall.

Clusternet also provides a Kubernetes-styled API, so you can continue using the Kubernetes way you are familiar with, such as a KubeConfig, to visit a certain managed Kubernetes cluster or a Kubernetes service.

Clusternet supports multiple platforms now, including

  • darwin/amd64 and darwin/arm64;
  • linux/amd64, linux/arm64, linux/ppc64le, linux/s390x, linux/386 and linux/arm;

Architecture

Clusternet is lightweight and consists of two components, clusternet-agent and clusternet-hub.

clusternet-agent is responsible for

  • auto-registering the current cluster to a parent cluster as a child cluster, which is also known as a ManagedCluster;
  • reporting heartbeats of the current cluster, including Kubernetes version, running platform, healthz/readyz/livez status, etc;
  • setting up a websocket connection to the parent cluster that provides full-duplex communication channels over a single TCP connection;

clusternet-hub is responsible for

  • approving cluster registration requests and creating exclusive resources, such as namespaces, serviceaccounts and RBAC rules, for each child cluster;
  • serving as an aggregated apiserver (AA), which works as a websocket server that maintains multiple active websocket connections from child clusters;
  • providing a Kubernetes-styled API to redirect/proxy/upgrade requests to each child cluster;

Note: Since clusternet-hub runs as an AA, please make sure that the parent apiserver can reach the clusternet-hub service.
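
A simple reachability check, assuming clusternet-hub registers an APIService for the proxies.clusternet.io/v1alpha1 group used later in this guide:

# Available should be True once the parent apiserver can reach the clusternet-hub service
$ kubectl get apiservice v1alpha1.proxies.clusternet.io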

Concepts

Every Kubernetes cluster that wants to be managed is called a child cluster. The cluster that child clusters are registering to is called the parent cluster.

clusternet-agent runs in the child cluster, while clusternet-hub runs in the parent cluster.

  • ClusterRegistrationRequest is an object that clusternet-agent creates in the parent cluster for child cluster registration.
  • ManagedCluster is an object that clusternet-hub creates in the parent cluster after approving a ClusterRegistrationRequest.

Building

Building Binaries

Clone the repository, and run

# build for linux/amd64 by default
$ make clusternet-agent clusternet-hub

to build binaries clusternet-agent and clusternet-hub for linux/amd64.

You can also specify other platforms when building, for example,

# build only clusternet-agent for linux/arm64 and darwin/amd64
# use comma to separate multiple platforms
$ PLATFORMS=linux/arm64,darwin/amd64 make clusternet-agent
# below are all the supported platforms
# PLATFORMS=darwin/amd64,darwin/arm64,linux/amd64,linux/arm64,linux/ppc64le,linux/s390x,linux/386,linux/arm

All the built binaries will be placed in the _output folder.
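
A quick way to locate the built binaries (the exact layout under _output depends on the platforms you built for):

$ find _output -type f -name 'clusternet-*'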

Building Docker Images

You can also build docker images. Here docker buildx is used to help build multi-arch container images.

If you're running macOS, please install Docker Desktop and then check the builder,

$ docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS  PLATFORMS
default * docker
  default default         running linux/amd64, linux/arm64, linux/ppc64le, linux/s390x, linux/386, linux/arm/v7, linux/arm/v6

If you're running Linux, please refer to docker buildx docs on the installation.
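
On Linux you typically need to create and bootstrap a multi-arch builder first. A minimal sketch using standard buildx commands (the builder name is arbitrary):

$ docker buildx create --name clusternet-builder --use
$ docker buildx inspect --bootstrap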

Note:

For better docker buildx support, it is recommended to use Ubuntu Focal 20.04 (LTS), Debian Bullseye 11 or CentOS 8.

And install deb/rpm package qemu-user-static as well, such as

apt-get install qemu-user-static

or

yum install qemu-user-static

Then build the images,

# build for linux/amd64 by default
# container images for clusternet-agent and clusternet-hub
$ make images

You can also build container images for other platforms, such as arm64,

$ PLATFORMS=linux/amd64,linux/arm64,linux/ppc64le make images
# below are all the supported platforms
# PLATFORMS=linux/amd64,linux/arm64,linux/ppc64le,linux/s390x,linux/386,linux/arm

Getting Started

Deploy Clusternet

You need to deploy clusternet-agent and clusternet-hub in the child cluster and the parent cluster respectively.

For clusternet-hub

kubectl apply -f deploy/hub

And then create a bootstrap token for clusternet-agent,

# this will create a bootstrap token 07401b.f395accd246ae52d
$ kubectl apply -f manifests/samples
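
If you want to double-check the token, note that a standard Kubernetes bootstrap token is stored as a Secret named bootstrap-token-<token-id> in the kube-system namespace (assuming the sample manifests create a regular bootstrap token):

$ kubectl -n kube-system get secret bootstrap-token-07401b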

For clusternet-agent

First we need to create a secret that contains the token for cluster registration,

# create namespace clusternet-system if not created
$ kubectl create ns clusternet-system
# here we use the token created above
$ PARENTURL=https://192.168.10.10 REGTOKEN=07401b.f395accd246ae52d envsubst < ./deploy/templates/clusternet_agent_secret.yaml | kubectl apply -f -

The PARENTURL above is the apiserver address of the parent cluster that you want to register to.
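
If you are unsure about this address, you can read it from the parent cluster's kubeconfig with a standard kubectl query, for example,

# run this against the parent cluster
$ kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'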

$ kubectl apply -f deploy/agent
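
You can then check that the clusternet-agent pods are up and running in the child cluster,

$ kubectl -n clusternet-system get pods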

Check Cluster Registrations

# clsrr is an alias for ClusterRegistrationRequest
$ kubectl get clsrr
NAME                                              CLUSTER-ID                             STATUS     AGE
clusternet-dc91021d-2361-4f6d-a404-7c33b9e01118   dc91021d-2361-4f6d-a404-7c33b9e01118   Approved   3d6h
$ kubectl get clsrr clusternet-dc91021d-2361-4f6d-a404-7c33b9e01118 -o yaml
apiVersion: clusters.clusternet.io/v1beta1
kind: ClusterRegistrationRequest
metadata:
  creationTimestamp: "2021-05-24T08:24:40Z"
  generation: 1
  labels:
    clusters.clusternet.io/cluster-id: dc91021d-2361-4f6d-a404-7c33b9e01118
    clusters.clusternet.io/cluster-name: clusternet-cluster-dzqkw
    clusters.clusternet.io/registered-by: clusternet-agent
  name: clusternet-dc91021d-2361-4f6d-a404-7c33b9e01118
  resourceVersion: "553624"
  uid: 8531ee8a-c66a-439e-bb5a-3adacfe58952
spec:
  clusterId: dc91021d-2361-4f6d-a404-7c33b9e01118
  clusterName: clusternet-cluster-dzqkw
  clusterType: EdgeClusterSelfProvisioned
status:
  caCertificate: REDACTED
  dedicatedNamespace: clusternet-dhxfs
  managedClusterName: clusternet-cluster-dzqkw
  result: Approved
  token: REDACTED

After a ClusterRegistrationRequest gets approved, its status will be updated with corresponding credentials that can be used to access the parent cluster if needed. Those credentials have been set with scoped RBAC rules, see the two rules below for details.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    clusters.clusternet.io/rbac-autoupdate: "true"
  creationTimestamp: "2021-05-24T08:25:07Z"
  labels:
    clusters.clusternet.io/bootstrapping: rbac-defaults
    clusters.clusternet.io/cluster-id: dc91021d-2361-4f6d-a404-7c33b9e01118
    clusternet.io/created-by: clusternet-hub
  name: clusternet-dc91021d-2361-4f6d-a404-7c33b9e01118
  resourceVersion: "553619"
  uid: 87db2e72-f4c1-4628-9373-1536ed7fd4af
rules:
  - apiGroups:
      - clusters.clusternet.io
    resources:
      - clusterregistrationrequests
    verbs:
      - create
      - get
  - apiGroups:
      - proxies.clusternet.io
    resourceNames:
      - dc91021d-2361-4f6d-a404-7c33b9e01118
    resources:
      - sockets
    verbs:
      - '*'

and

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  annotations:
    clusters.clusternet.io/rbac-autoupdate: "true"
  creationTimestamp: "2021-05-24T08:25:07Z"
  labels:
    clusters.clusternet.io/bootstrapping: rbac-defaults
    clusternet.io/created-by: clusternet-hub
  name: clusternet-managedcluster-role
  namespace: clusternet-dhxfs
  resourceVersion: "553622"
  uid: 7524b743-57f3-4a45-a6cd-ceb3321fe2ff
rules:
  - apiGroups:
      - '*'
    resources:
      - '*'
    verbs:
      - '*'

Check ManagedCluster Status

# mcls is an alias for ManagedCluster
$ kubectl get mcls -A
NAMESPACE          NAME                       CLUSTER-ID                             CLUSTER-TYPE                 KUBERNETES   READYZ   AGE
clusternet-dhxfs   clusternet-cluster-dzqkw   dc91021d-2361-4f6d-a404-7c33b9e01118   EdgeClusterSelfProvisioned   v1.19.10     true     2d20h
$ kubectl get mcls -n clusternet-dhxfs clusternet-cluster-dzqkw -o yaml
apiVersion: clusters.clusternet.io/v1beta1
kind: ManagedCluster
metadata:
  creationTimestamp: "2021-05-24T08:25:07Z"
  generation: 1
  labels:
    clusters.clusternet.io/cluster-id: dc91021d-2361-4f6d-a404-7c33b9e01118
    clusters.clusternet.io/cluster-name: clusternet-cluster-dzqkw
    clusternet.io/created-by: clusternet-agent
  name: clusternet-cluster-dzqkw
  namespace: clusternet-dhxfs
  resourceVersion: "555091"
  uid: e7e7fb5f-1a00-4e4e-aa02-2e943e37e4ff
spec:
  clusterId: dc91021d-2361-4f6d-a404-7c33b9e01118
  clusterType: EdgeClusterSelfProvisioned
status:
  healthz: true
  k8sVersion: v1.19.10
  lastObservedTime: "2021-05-24T08:58:30Z"
  livez: true
  platform: linux/amd64
  readyz: true

The status of ManagedCluster is updated by clusternet-agent every 3 minutes by default, which can be configured by the flag --cluster-status-update-frequency.
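
For example, a minimal sketch of shortening the heartbeat interval to 1 minute by appending this flag to the clusternet-agent container arguments (the deployment and namespace names below follow this guide; the patch assumes the container already defines an args list, so please verify against your own manifests):

$ kubectl -n clusternet-system patch deployment clusternet-agent --type=json \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--cluster-status-update-frequency=1m"}]'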

Visit ManagedCluster

You can visit all your managed clusters using the kubeconfig of the parent cluster. Only a small modification is needed.

# suppose your parent cluster kubeconfig is located at /home/demo/.kube/config
$ kubectl config view --kubeconfig=/home/demo/.kube/config --minify=true --raw=true > ./config-cluster-dc91021d-2361-4f6d-a404-7c33b9e01118
$ export KUBECONFIG=`pwd`/config-cluster-dc91021d-2361-4f6d-a404-7c33b9e01118
$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://10.0.0.10:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
# suppose your child cluster is running at http://demo1.cluster.net
$ kubectl config set-cluster `kubectl config get-clusters | grep -v NAME` \
  --server=https://10.0.0.10:6443/apis/proxies.clusternet.io/v1alpha1/sockets/dc91021d-2361-4f6d-a404-7c33b9e01118/http/demo1.cluster.net
# or just use the direct path
$ kubectl config set-cluster `kubectl config get-clusters | grep -v NAME` \
  --server=https://10.0.0.10:6443/apis/proxies.clusternet.io/v1alpha1/sockets/dc91021d-2361-4f6d-a404-7c33b9e01118/direct

What you need to do is append /apis/proxies.clusternet.io/v1alpha1/sockets/<CLUSTER-ID>/http/<SERVER-URL> or /apis/proxies.clusternet.io/v1alpha1/sockets/<CLUSTER-ID>/direct to the end of the original parent cluster server address.

  • CLUSTER-ID is a UUID for your child cluster, which is auto-populated by clusternet-agent, such as dc91021d-2361-4f6d-a404-7c33b9e01118. You can get this UUID from objects such as ClusterRegistrationRequest and ManagedCluster. This UUID is also labeled with the key clusters.clusternet.io/cluster-id (see the example after this list).
  • SERVER-URL is the apiserver address of your child cluster; it can be localhost, 127.0.0.1, etc, as long as clusternet-agent can access it.
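
For example, a quick way to read this UUID straight from a ClusterRegistrationRequest, using the spec.clusterId field shown earlier:

$ kubectl get clsrr -o jsonpath='{.items[0].spec.clusterId}'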

Currently Clusternet supports only the http scheme. If your child clusters are running with the https scheme, you could run a local proxy instead, for example,

kubectl proxy --address='10.212.0.7' --accept-hosts='^*$'

Please replace 10.212.0.7 with your real local IP address.

Then you can visit the child cluster as usual.
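
With the modified kubeconfig exported above, ordinary kubectl commands are now routed through the parent cluster to the child cluster, for example,

$ kubectl get nodes
$ kubectl get pods -n kube-system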

Comments
  • installing kubectl plugin kubectl-clusternet, distribute application to child clusters failed

    What happened:

    After installing the kubectl plugin kubectl-clusternet, we ran the commands below to distribute this application to child clusters, and we got the following result:

    subscription.apps.clusternet.io/app-demo configured
    Error from server (ServiceUnavailable): error when retrieving current configuration of: Resource: "/v1, Resource=namespaces", GroupVersionKind: "/v1, Kind=Namespace" Name: "foo", Namespace: "" from server for: "foo.yaml": the server is currently unable to handle the request (get namespaces foo)
    Error from server (ServiceUnavailable): error when retrieving current configuration of: Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment" Name: "my-nginx", Namespace: "foo" from server for: "foo.yaml": the server is currently unable to handle the request (get deployments.apps my-nginx)
    Error from server (ServiceUnavailable): error when retrieving current configuration of: Resource: "/v1, Resource=services", GroupVersionKind: "/v1, Kind=Service" Name: "my-nginx-svc", Namespace: "foo" from server for: "foo.yaml": the server is currently unable to handle the request (get services my-nginx-svc)

    What you expected to happen:

    Resources running in each child cluster.

    How to reproduce it (as minimally and precisely as possible):

    Anything else we need to know?:

    Environment:

    • Clusternet version:
      • Clusternet-agent version (use clusternet-agent --version=json): 0.11.0
      • Clusternet-hub version (use clusternet-hub --version=json): 0.11.0
    • Kubernetes version (use kubectl version):

    Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.5", GitCommit:"aea7bbadd2fc0cd689de94a54e5b7b758869d691", GitTreeState:"clean", BuildDate:"2021-09-15T21:10:45Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.5", GitCommit:"aea7bbadd2fc0cd689de94a54e5b7b758869d691", GitTreeState:"clean", BuildDate:"2021-09-15T21:04:16Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}

    • Cloud provider or hardware configuration:
    • OS (e.g: cat /etc/os-release):
      - NAME="CentOS Linux"
      VERSION="7 (Core)"
      ID="centos"
      ID_LIKE="rhel fedora"
      VERSION_ID="7"
      PRETTY_NAME="CentOS Linux 7 (Core)"
      ANSI_COLOR="0;31"
      CPE_NAME="cpe:/o:centos:centos:7"
      HOME_URL="https://www.centos.org/"
      BUG_REPORT_URL="https://bugs.centos.org/"
    • Kernel (e.g. uname -a):

    Linux k8s-m-001 3.10.0-1160.25.1.el7.x86_64 #1 SMP Wed Apr 28 21:49:45 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

    • Others:
  • divided resource to child cluster failed

    $ kubectl  describe desc -n clusternet-6phvq app-demo-generic
    ...
    Status:
      Phase:   Failure
      Reason:  please check whether the advertised apiserver of current child cluster is accessible. Unauthorized
    Events:
      Type     Reason                  Age                    From            Message
      ----     ------                  ----                   ----            -------
      Warning  UnSuccessfullyDeployed  3m31s                  clusternet-hub  failed to deploying Description clusternet-6phvq/app-demo-generic: please check whether the advertised apiserver of current child cluster is accessible. Unauthorized
      Normal   Synced                  3m30s (x2 over 3m31s)  clusternet-hub  Description synced successfully
    
  • Can the kubectl clusternet command define a label for a cluster?

    What happened:

    We want to define a new label and use this label as the application publishing strategy, same as below:

    spec:
      subscribers:   # defines the clusters to be distributed to
        - clusterAffinity:
            matchLabels:
              clusters.clusternet.io/newLabel: newLabelValue

    What you expected to happen:

    How to reproduce it (as minimally and precisely as possible):

    Anything else we need to know?:

    Environment:

    • Clusternet version:
      • Clusternet-agent version (use clusternet-agent --version=json):
      • Clusternet-hub version (use clusternet-hub --version=json):
    • Kubernetes version (use kubectl version):
    • Cloud provider or hardware configuration:
    • OS (e.g: cat /etc/os-release):
    • Kernel (e.g. uname -a):
    • Others: We want to define a new label and use it in clusterAffinity.matchLabels as the publishing strategy, same as described in "What happened" above.
  • fail to create subresources' shadows

    What happened:

    clusternet-hub cannot work because it fails to create subresources' shadows.

    What you expected to happen:

    Subresources' shadows should be created after their parent resources.

    Or only init shadows of built-in resources, and then use another API to create new shadows manually when needed.

    Or just skip subresources?

    How to reproduce it (as minimally and precisely as possible):

    Init clusternet-hub in a cluster with KubeVirt installed.

    $ kubectl api-versions | grep kubevirt
    cdi.kubevirt.io/v1alpha1
    cdi.kubevirt.io/v1beta1
    hostpathprovisioner.kubevirt.io/v1alpha1
    hostpathprovisioner.kubevirt.io/v1beta1
    kubevirt.io/v1
    kubevirt.io/v1alpha3
    snapshot.kubevirt.io/v1alpha1
    subresources.kubevirt.io/v1
    subresources.kubevirt.io/v1alpha3
    upload.cdi.kubevirt.io/v1alpha1
    upload.cdi.kubevirt.io/v1beta1
    

    Anything else we need to know?:

    # apis of kubevirt.io
    $ kubectl get --raw="/apis/kubevirt.io/v1"
    {"kind":"APIResourceList","apiVersion":"v1","groupVersion":"kubevirt.io/v1","resources":[{"name":"virtualmachineinstancepresets","singularName":"virtualmachineinstancepreset","namespaced":true,"kind":"VirtualMachineInstancePreset","verbs":["delete","deletecollection","get","list","patch","create","update","watch"],"shortNames":["vmipreset","vmipresets"],"categories":["all"],"storageVersionHash":"oZZyVoiG8GU="},{"name":"virtualmachineinstances","singularName":"virtualmachineinstance","namespaced":true,"kind":"VirtualMachineInstance","verbs":["delete","deletecollection","get","list","patch","create","update","watch"],"shortNames":["vmi","vmis"],"categories":["all"],"storageVersionHash":"aTZdN6HaFnI="},{"name":"virtualmachineinstancemigrations","singularName":"virtualmachineinstancemigration","namespaced":true,"kind":"VirtualMachineInstanceMigration","verbs":["delete","deletecollection","get","list","patch","create","update","watch"],"shortNames":["vmim","vmims"],"categories":["all"],"storageVersionHash":"m3FObUfKfOI="},{"name":"virtualmachineinstancemigrations/status","singularName":"","namespaced":true,"kind":"VirtualMachineInstanceMigration","verbs":["get","patch","update"]},{"name":"virtualmachineinstancereplicasets","singularName":"virtualmachineinstancereplicaset","namespaced":true,"kind":"VirtualMachineInstanceReplicaSet","verbs":["delete","deletecollection","get","list","patch","create","update","watch"],"shortNames":["vmirs","vmirss"],"categories":["all"],"storageVersionHash":"+P8t02g8MEQ="},{"name":"virtualmachineinstancereplicasets/status","singularName":"","namespaced":true,"kind":"VirtualMachineInstanceReplicaSet","verbs":["get","patch","update"]},{"name":"virtualmachineinstancereplicasets/scale","singularName":"","namespaced":true,"group":"autoscaling","version":"v1","kind":"Scale","verbs":["get","patch","update"]},{"name":"kubevirts","singularName":"kubevirt","namespaced":true,"kind":"KubeVirt","verbs":["delete","deletecollection","get","list","patch","create","update","watch"],"shortNames":["kv","kvs"],"categories":["all"],"storageVersionHash":"rJwWdyhUifw="},{"name":"kubevirts/status","singularName":"","namespaced":true,"kind":"KubeVirt","verbs":["get","patch","update"]},{"name":"virtualmachines","singularName":"virtualmachine","namespaced":true,"kind":"VirtualMachine","verbs":["delete","deletecollection","get","list","patch","create","update","watch"],"shortNames":["vm","vms"],"categories":["all"],"storageVersionHash":"68BSETZ44jA="},{"name":"virtualmachines/status","singularName":"","namespaced":true,"kind":"VirtualMachine","verbs":["get","patch","update"]}]}
    
    # but it failed in PostStartHook:
    F0819 03:31:22.154417       1 hooks.go:202] PostStartHook "start-clusternet-hub-shadowapis" failed: unable to install api resources: unable to setup API &{[shadow/v1alpha1] map[v1alpha1:map[apiservices:0xc000145440 apiservices/status:0xc000145500 bindings:0xc0004f8840 certificatesigningrequests:0xc002ea8780 certificatesigningrequests/approval:0xc002ea8840 certificatesigningrequests/status:0xc002ea8900 clusterrolebindings:0xc002ea8f00 clusterroles:0xc002ea8fc0 componentstatuses:0xc0004f8900 configmaps:0xc0004f8a80 controllerrevisions:0xc0001455c0 cronjobs:0xc002ea8600 cronjobs/status:0xc002ea86c0 csidrivers:0xc002ea9200 csinodes:0xc002ea92c0 customresourcedefinitions:0xc002ea9740 customresourcedefinitions/status:0xc002ea9800 daemonsets:0xc000145680 daemonsets/status:0xc000145740 deployments:0xc000145800 deployments/scale:0xc000435090 deployments/status:0xc000145980 endpoints:0xc0004f8b40 endpointslices:0xc002ea9b00 events:0xc0004f8c00 flowschemas:0xc002ea9bc0 flowschemas/status:0xc002ea9c80 horizontalpodautoscalers:0xc002ea8300 horizontalpodautoscalers/status:0xc002ea83c0 ingressclasses:0xc002ea89c0 ingresses:0xc002ea8a80 ingresses/status:0xc002ea8b40 jobs:0xc002ea8480 jobs/status:0xc002ea8540 leases:0xc002ea9980 limitranges:0xc0004f8cc0 localsubjectaccessreviews:0xc002ea8000 mutatingwebhookconfigurations:0xc002ea95c0 namespaces:0xc0004f8d80 namespaces/finalize:0xc0004f8e40 namespaces/status:0xc0004f8f00 networkpolicies:0xc002ea8c00 nodes:0xc0004f8fc0 nodes/proxy:0xc0004f9080 nodes/status:0xc0004f9140 persistentvolumeclaims:0xc0004f9200 persistentvolumeclaims/status:0xc0004f92c0 persistentvolumes:0xc0004f9380 persistentvolumes/status:0xc0004f9440 poddisruptionbudgets:0xc002ea8cc0 poddisruptionbudgets/status:0xc002ea8d80 pods:0xc0004f9500 pods/attach:0xc0004f95c0 pods/binding:0xc0004f9680 pods/eviction:0xc0004f9740 pods/exec:0xc0004f9800 pods/log:0xc0004f98c0 pods/portforward:0xc0004f9980 pods/proxy:0xc0004f9a40 pods/status:0xc0004f9b00 podsecuritypolicies:0xc002ea8e40 podtemplates:0xc0004f9bc0 priorityclasses:0xc002ea98c0 prioritylevelconfigurations:0xc002ea9d40 prioritylevelconfigurations/status:0xc002ea9e00 replicasets:0xc000145a40 replicasets/scale:0xc0004350d8 replicasets/status:0xc000145bc0 replicationcontrollers:0xc0004f9c80 replicationcontrollers/scale:0xc000434e60 replicationcontrollers/status:0xc0004f9e00 resourcequotas:0xc0004f9ec0 resourcequotas/status:0xc0005f5ec0 rolebindings:0xc002ea9080 roles:0xc002ea9140 runtimeclasses:0xc002ea9a40 secrets:0xc000144fc0 selfsubjectaccessreviews:0xc002ea80c0 selfsubjectrulesreviews:0xc002ea8180 serviceaccounts:0xc000145080 serviceaccounts/token:0xc000145140 services:0xc000145200 services/proxy:0xc0001452c0 services/status:0xc000145380 statefulsets:0xc000145c80 statefulsets/scale:0xc000435130 statefulsets/status:0xc000145e00 storageclasses:0xc002ea9380 subjectaccessreviews:0xc002ea8240 tokenreviews:0xc000145ec0 uploadtokenrequests:0xc002ebe9c0 validatingwebhookconfigurations:0xc002ea9680 virtualmachineinstances/addvolume:0xc002ea9ec0 virtualmachineinstances/console:0xc002ebe000 virtualmachineinstances/filesystemlist:0xc002ebe0c0 virtualmachineinstances/guestosinfo:0xc002ebe180 virtualmachineinstances/pause:0xc002ebe240 virtualmachineinstances/removevolume:0xc002ebe300 virtualmachineinstances/unpause:0xc002ebe3c0 virtualmachineinstances/userlist:0xc002ebe480 virtualmachineinstances/vnc:0xc002ebe540 virtualmachines/migrate:0xc002ebe600 virtualmachines/rename:0xc002ebe6c0 virtualmachines/restart:0xc002ebe780 virtualmachines/start:0xc002ebe840 
virtualmachines/stop:0xc002ebe900 volumeattachments:0xc002ea9440 volumeattachments/status:0xc002ea9500]] v1 <nil> 0xc0002da690 {0xc0002da690 0xc0004883a8 [{application/json application json true 0xc0000ed090 0xc0000ed0e0 0xc0000ed130 0xc0001efcb0} {application/yaml application yaml true 0xc0000ed180 <nil> 0xc0000ed1d0 <nil>}] 0xc0000ed090} 0xc001829b80 <nil>}: [error in registering resource: virtualmachineinstances/addvolume, missing parent storage: "virtualmachineinstances", error in registering resource: virtualmachineinstances/console, missing parent storage: "virtualmachineinstances", error in registering resource: virtualmachineinstances/filesystemlist, missing parent storage: "virtualmachineinstances", error in registering resource: virtualmachineinstances/guestosinfo, missing parent storage: "virtualmachineinstances", error in registering resource: virtualmachineinstances/pause, missing parent storage: "virtualmachineinstances", error in registering resource: virtualmachineinstances/removevolume, missing parent storage: "virtualmachineinstances", error in registering resource: virtualmachineinstances/unpause, missing parent storage: "virtualmachineinstances", error in registering resource: virtualmachineinstances/userlist, missing parent storage: "virtualmachineinstances", error in registering resource: virtualmachineinstances/vnc, missing parent storage: "virtualmachineinstances", error in registering resource: virtualmachines/migrate, missing parent storage: "virtualmachines", error in registering resource: virtualmachines/rename, missing parent storage: "virtualmachines", error in registering resource: virtualmachines/restart, missing parent storage: "virtualmachines", error in registering resource: virtualmachines/start, missing parent storage: "virtualmachines", error in registering resource: virtualmachines/stop, missing parent storage: "virtualmachines"]
    

    beautify:

    {
        [shadow/v1alpha1
        ] map[v1alpha1:map[
            apiservices: 0xc000145440 
            apiservices/status: 0xc000145500 
            bindings: 0xc0004f8840 
            certificatesigningrequests: 0xc002ea8780 
            certificatesigningrequests/approval: 0xc002ea8840 
            certificatesigningrequests/status: 0xc002ea8900 
            clusterrolebindings: 0xc002ea8f00 
            clusterroles: 0xc002ea8fc0 
            componentstatuses: 0xc0004f8900 
            configmaps: 0xc0004f8a80 
            controllerrevisions: 0xc0001455c0 
            cronjobs: 0xc002ea8600 
            cronjobs/status: 0xc002ea86c0 
            csidrivers: 0xc002ea9200 
            csinodes: 0xc002ea92c0 
            customresourcedefinitions: 0xc002ea9740 
            customresourcedefinitions/status: 0xc002ea9800 
            daemonsets: 0xc000145680 
            daemonsets/status: 0xc000145740 
            deployments: 0xc000145800 
            deployments/scale: 0xc000435090 
            deployments/status: 0xc000145980 
            endpoints: 0xc0004f8b40 
            endpointslices: 0xc002ea9b00 
            events: 0xc0004f8c00 
            flowschemas: 0xc002ea9bc0 
            flowschemas/status: 0xc002ea9c80 
            horizontalpodautoscalers: 0xc002ea8300 
            horizontalpodautoscalers/status: 0xc002ea83c0 
            ingressclasses: 0xc002ea89c0 
            ingresses: 0xc002ea8a80 
            ingresses/status: 0xc002ea8b40 
            jobs: 0xc002ea8480 
            jobs/status: 0xc002ea8540 
            leases: 0xc002ea9980 
            limitranges: 0xc0004f8cc0 
            localsubjectaccessreviews: 0xc002ea8000 
            mutatingwebhookconfigurations: 0xc002ea95c0 
            namespaces: 0xc0004f8d80 
            namespaces/finalize: 0xc0004f8e40 
            namespaces/status: 0xc0004f8f00 
            networkpolicies: 0xc002ea8c00 
            nodes: 0xc0004f8fc0 
            nodes/proxy: 0xc0004f9080 
            nodes/status: 0xc0004f9140 
            persistentvolumeclaims: 0xc0004f9200 
            persistentvolumeclaims/status: 0xc0004f92c0 
            persistentvolumes: 0xc0004f9380 
            persistentvolumes/status: 0xc0004f9440 
            poddisruptionbudgets: 0xc002ea8cc0 
            poddisruptionbudgets/status: 0xc002ea8d80 
            pods: 0xc0004f9500 
            pods/attach: 0xc0004f95c0 
            pods/binding: 0xc0004f9680 
            pods/eviction: 0xc0004f9740 
            pods/exec: 0xc0004f9800 
            pods/log: 0xc0004f98c0 
            pods/portforward: 0xc0004f9980 
            pods/proxy: 0xc0004f9a40 
            pods/status: 0xc0004f9b00 
            podsecuritypolicies: 0xc002ea8e40 
            podtemplates: 0xc0004f9bc0 
            priorityclasses: 0xc002ea98c0 
            prioritylevelconfigurations: 0xc002ea9d40 
            prioritylevelconfigurations/status: 0xc002ea9e00 
            replicasets: 0xc000145a40 
            replicasets/scale: 0xc0004350d8 
            replicasets/status: 0xc000145bc0 
            replicationcontrollers: 0xc0004f9c80 
            replicationcontrollers/scale: 0xc000434e60 
            replicationcontrollers/status: 0xc0004f9e00 
            resourcequotas: 0xc0004f9ec0 
            resourcequotas/status: 0xc0005f5ec0 
            rolebindings: 0xc002ea9080 
            roles: 0xc002ea9140 
            runtimeclasses: 0xc002ea9a40 
            secrets: 0xc000144fc0 
            selfsubjectaccessreviews: 0xc002ea80c0 
            selfsubjectrulesreviews: 0xc002ea8180 
            serviceaccounts: 0xc000145080 
            serviceaccounts/token: 0xc000145140 
            services: 0xc000145200 
            services/proxy: 0xc0001452c0 
            services/status: 0xc000145380 
            statefulsets: 0xc000145c80 
            statefulsets/scale: 0xc000435130 
            statefulsets/status: 0xc000145e00 
            storageclasses: 0xc002ea9380 
            subjectaccessreviews: 0xc002ea8240 
            tokenreviews: 0xc000145ec0 
            uploadtokenrequests: 0xc002ebe9c0 
            validatingwebhookconfigurations: 0xc002ea9680 
            virtualmachineinstances/addvolume: 0xc002ea9ec0 
            virtualmachineinstances/console: 0xc002ebe000 
            virtualmachineinstances/filesystemlist: 0xc002ebe0c0 
            virtualmachineinstances/guestosinfo: 0xc002ebe180 
            virtualmachineinstances/pause: 0xc002ebe240 
            virtualmachineinstances/removevolume: 0xc002ebe300 
            virtualmachineinstances/unpause: 0xc002ebe3c0 
            virtualmachineinstances/userlist: 0xc002ebe480 
            virtualmachineinstances/vnc: 0xc002ebe540 
            virtualmachines/migrate: 0xc002ebe600 
            virtualmachines/rename: 0xc002ebe6c0 
            virtualmachines/restart: 0xc002ebe780 
            virtualmachines/start: 0xc002ebe840 
            virtualmachines/stop: 0xc002ebe900 
            volumeattachments: 0xc002ea9440 
            volumeattachments/status: 0xc002ea9500
            ]
        ] v1 <nil> 0xc0002da690 {
            0xc0002da690 0xc0004883a8 [
                {application/json application json true 0xc0000ed090 0xc0000ed0e0 0xc0000ed130 0xc0001efcb0
                } {application/yaml application yaml true 0xc0000ed180 <nil> 0xc0000ed1d0 <nil>
                }
            ] 0xc0000ed090
        } 0xc001829b80 <nil>
    }: [
        error in registering resource: virtualmachineinstances/addvolume, missing parent storage: "virtualmachineinstances", 
        error in registering resource: virtualmachineinstances/console, missing parent storage: "virtualmachineinstances", 
        error in registering resource: virtualmachineinstances/filesystemlist, missing parent storage: "virtualmachineinstances", 
        error in registering resource: virtualmachineinstances/guestosinfo, missing parent storage: "virtualmachineinstances", 
        error in registering resource: virtualmachineinstances/pause, missing parent storage: "virtualmachineinstances", 
        error in registering resource: virtualmachineinstances/removevolume, missing parent storage: "virtualmachineinstances", 
        error in registering resource: virtualmachineinstances/unpause, missing parent storage: "virtualmachineinstances", 
        error in registering resource: virtualmachineinstances/userlist, missing parent storage: "virtualmachineinstances", 
        error in registering resource: virtualmachineinstances/vnc, missing parent storage: "virtualmachineinstances", 
        error in registering resource: virtualmachines/migrate, missing parent storage: "virtualmachines", 
        error in registering resource: virtualmachines/rename, missing parent storage: "virtualmachines", 
        error in registering resource: virtualmachines/restart, missing parent storage: "virtualmachines", 
        error in registering resource: virtualmachines/start, missing parent storage: "virtualmachines", 
        error in registering resource: virtualmachines/stop, missing parent storage: "virtualmachines"
    ]
    

    Environment:

    • Clusternet version:
      • Clusternet-agent version (use clusternet-agent --version=json): -
      • Clusternet-hub version (use clusternet-hub --version=json): v0.11.0
    • Kubernetes version (use kubectl version): v1.20.13-21
    • Cloud provider or hardware configuration:
    • OS (e.g: cat /etc/os-release):
    • Kernel (e.g. uname -a):
    • Others:
  • k3s v1.24.6+k3s1 child cluster working abnormally

    What happened:

    When I upgraded k3s from v1.23.8+k3s2 to v1.24.6+k3s1, Helm releases failed to sync to the child cluster.

    Logs from clusternet-agent:

    W1009 13:05:24.374070       1 clusterstatus_controller.go:140] failed to discover service CIDR: can't get ServiceIPRange
    W1009 13:05:24.383964       1 clusterstatus_controller.go:259] failed to list podMetris with err: 0x24da820
    W1009 13:05:24.385094       1 clusterstatus_controller.go:283] failed to list nodeMetris with err: 0x24da820
    E1009 13:05:26.203675       1 wait.go:188] no secrets found in ServiceAccount clusternet-system/clusternet-app-deployer
    E1009 13:05:31.818574       1 wait.go:188] no secrets found in ServiceAccount clusternet-system/clusternet-app-deployer
    E1009 13:05:38.021635       1 wait.go:188] no secrets found in ServiceAccount clusternet-system/clusternet-app-deployer
    W1009 13:05:43.676526       1 reflector.go:324] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:167: failed to list *v1alpha1.ServiceExport: serviceexports.multicluster.x-k8s.io is forbidden: User "system:serviceaccount:clusternet-system:clusternet-agent" cannot list resource "serviceexports" in API group "multicluster.x-k8s.io" at the cluster scope
    E1009 13:05:43.676571       1 reflector.go:138] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:167: Failed to watch *v1alpha1.ServiceExport: failed to list *v1alpha1.ServiceExport: serviceexports.multicluster.x-k8s.io is forbidden: User "system:serviceaccount:clusternet-system:clusternet-agent" cannot list resource "serviceexports" in API group "multicluster.x-k8s.io" at the cluster scope
    W1009 13:05:44.392428       1 clusterstatus_controller.go:140] failed to discover service CIDR: can't get ServiceIPRange
    
    

    Anything else we need to know?:

    Environment:

    • Clusternet version:
      • Clusternet-agent version (use clusternet-agent --version=json): v0.12.0
      • Clusternet-hub version (use clusternet-hub --version=json): v0.12.0
    • Kubernetes version (use kubectl version): v1.24.6+k3s1
    • Cloud provider or hardware configuration:
    • OS (e.g: cat /etc/os-release): ubuntu 20.04
    • Kernel (e.g. uname -a): 5.4.0-126-generic
  • userextras headers lost in exchanger.ProxyConnect

    When I add an extra header to an HTTP request to the Clusternet proxy (the ClusterRole rule has already been added beforehand), such as

    $ curl -k -XGET  -H "Accept: application/json" \
      -H "Impersonate-User: clusternet" \
      -H "Authorization: ${PARENTCLUSTERAUTH}" \
      -H "Impersonate-Extra-**MY-EXTRA-HEADER**: xxxxxx" \
    

    I found that the exchanger cannot receive MY-EXTRA-HEADER from the HTTP request in the exchanger ProxyConnect func;

    but if I add the prefix "clusternet" before my header, the HTTP request in the exchanger ProxyConnect func can receive it and it works well;

    $ curl -k -XGET  -H "Accept: application/json" \
      -H "Impersonate-User: clusternet" \
      -H "Authorization: ${PARENTCLUSTERAUTH}" \
      -H "Impersonate-Extra-**clusternet-MY-EXTRA-HEADER**: xxxxxx" \
    

    I wonder why only HTTP headers with the "clusternet" prefix can be passed to exchanger.ProxyConnect? Could somebody please explain the reason to me?

  • clusternet-hub took too much memory for lots of failed HelmReleases

    What happened:

    The hub pod started 86m ago and is using 8943Mi of memory.

    ➜  ~ k get po
    NAME                                    READY   STATUS    RESTARTS   AGE
    clusternet-hub-86b5bfc555-zd4rq         1/1     Running   0          86m
    clusternet-scheduler-64bf68577d-52kn4   1/1     Running   0          89m
    clusternet-scheduler-64bf68577d-q8rp6   1/1     Running   0          89m
    clusternet-scheduler-64bf68577d-sjxp5   1/1     Running   0          89m
    ➜  ~ k top po
    NAME                                    CPU(cores)   MEMORY(bytes)
    clusternet-hub-86b5bfc555-zd4rq         292m         8943Mi
    clusternet-scheduler-64bf68577d-52kn4   1m           13Mi
    clusternet-scheduler-64bf68577d-q8rp6   1m           13Mi
    clusternet-scheduler-64bf68577d-sjxp5   1m           15Mi
    

    What you expected to happen:

    How to reproduce it (as minimally and precisely as possible):

    Use the local-running.sh script to create a local kind cluster, create some HelmChart resources, deploy them to 3 child clusters, and wait some minutes; the memory will continue to grow.

    pprof message: (screenshot omitted)

    Use the command go tool pprof -http=:8081 pprof.clusternet-hub.alloc_objects.alloc_space.inuse_objects.inuse_space.005.pb.gz to see the details.

    Anything else we need to know?:

    Environment:

    • Clusternet version:
      • Clusternet-agent version (use clusternet-agent --version=json):
      • Clusternet-hub version (use clusternet-hub --version=json):
    • Kubernetes version (use kubectl version):
    • Cloud provider or hardware configuration:
    • OS (e.g: cat /etc/os-release):
    • Kernel (e.g. uname -a):
    • Others:
  • support helm install flags

    What would you like to be added:

    I hope that Clusternet supports helm install flag options when installing a chart.

    Why is this needed:

    The default helm chart install timeout is 5 minutes. If this time is exceeded, the installation returns a failure. But my installation sometimes needs more than 5 minutes, and Clusternet currently doesn't support helm install flags, so I can't use the --timeout or --wait flags to increase the waiting time.

  • Bug report in "PARENTURL"

    What happened:

    When we execute the step "PARENTURL=https://192.168.10.10 REGTOKEN=07401b.f395accd246ae52d envsubst < ./deploy/templates/clusternet_agent_secret.yaml | kubectl apply -f -", the cmd reports "'PARENTURL' is not recognized as an internal or external command" in the folder "clusternet-main". And when we execute this command in the subfolder "deploy/agent", the cmd reports "the system cannot find the path specified."

    What you expected to happen:

    we can create this secret successfully.

    How to reproduce it (as minimally and precisely as possible):

    do everything in "Get Started" step by step.

    Anything else we need to know?:

    Environment:

    • Clusternet version: v.0.2.0
      • Clusternet-agent version (use clusternet-agent --version=json): v.0.2.0
      • Clusternet-hub version (use clusternet-hub --version=json): v.0.2.0
    • Kubernetes version (use kubectl version): v.1.21.3
    • Cloud provider or hardware configuration:
    • OS (e.g: cat /etc/os-release):
    • Kernel (e.g. uname -a):
    • Others:
  • populate legacy secret-based sa token

    Signed-off-by: Di Xu [email protected]

    What type of PR is this?

    kind/bug

    What this PR does / why we need it:

    only for Kubernetes cluster >= v1.24.0

    Which issue(s) this PR fixes:

    Fixes #500

    Special notes for your reviewer:

  • `kubectl get clsrr` the server doesn't have a resource type "clsrr"

    What happened:

    Followed this guide: https://clusternet.io/docs/installation/install-the-hard-way

    (screenshots omitted: `kubectl get clsrr` output, plus views of the children and parent clusters)

    What you expected to happen:

    kubectl get clsrr should list the children clusters.

    How to reproduce it (as minimally and precisely as possible):

    Anything else we need to know?:

    Environment:

    • Clusternet version:
      • Clusternet-agent version (use clusternet-agent --version=json): ghcr.io/clusternet/clusternet-agent:v0.11.0

        [root@k8s-m-002 ~]# kubectl get deploy -n clusternet-system -o wide
        NAME               READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS         IMAGES                                        SELECTOR
        clusternet-agent   3/3     3            3           12h   clusternet-agent   ghcr.io/clusternet/clusternet-agent:v0.11.0   app=clusternet-agent
        
      • Clusternet-hub version (use clusternet-hub --version=json): ghcr.io/clusternet/clusternet-hub:v0.11.0

            [root@k8s-m-001 ~]# kubectl get deploy -n clusternet-system -o wide
        NAME                   READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS             IMAGES                                            SELECTOR
        clusternet-hub         3/3     3            3           10h   clusternet-hub         ghcr.io/clusternet/clusternet-hub:v0.11.0         app=clusternet-hub
        clusternet-scheduler   3/3     3            3           10h   clusternet-scheduler   ghcr.io/clusternet/clusternet-scheduler:v0.11.0   app=clusternet-scheduler
        
    • Kubernetes version (use kubectl version):
      Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.5", GitCommit:"aea7bbadd2fc0cd689de94a54e5b7b758869d691", GitTreeState:"clean", BuildDate:"2021-09-15T21:10:45Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}
      Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.5", GitCommit:"aea7bbadd2fc0cd689de94a54e5b7b758869d691", GitTreeState:"clean", BuildDate:"2021-09-15T21:04:16Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}
      
    • Cloud provider or hardware configuration:
    • OS (e.g: cat /etc/os-release):
      - NAME="CentOS Linux"
      VERSION="7 (Core)"
      ID="centos"
      ID_LIKE="rhel fedora"
      VERSION_ID="7"
      PRETTY_NAME="CentOS Linux 7 (Core)"
      ANSI_COLOR="0;31"
      CPE_NAME="cpe:/o:centos:centos:7"
      HOME_URL="https://www.centos.org/"
      BUG_REPORT_URL="https://bugs.centos.org/"
      
      CENTOS_MANTISBT_PROJECT="CentOS-7"
      CENTOS_MANTISBT_PROJECT_VERSION="7"
      REDHAT_SUPPORT_PRODUCT="centos"
      REDHAT_SUPPORT_PRODUCT_VERSION="7"
      
    • Kernel (e.g. uname -a):
      Linux k8s-m-001 3.10.0-1160.25.1.el7.x86_64 #1 SMP Wed Apr 28 21:49:45 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
    • Others:
  • feature: add kyverno patch config

    feature: add kyverno style globalization and localization

    What type of PR is this?

    What this PR does / why we need it:

    Which issue(s) this PR fixes:

    Fixes #

    Special notes for your reviewer:

  • replace crds before upgrade helm charts

    What would you like to be added:

    replace crds before upgrade helm charts

    Why is this needed:

    Due to the design of Helm, CRDs will not be upgraded when a chart is upgraded, which will cause conflicts and a failed HelmRelease status, for example

     Warning  FailedSynced  26s (x18 over 25m)  clusternet-hub    unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(Prometheus.spec): unknown field "hostNetwork" in com.coreos.monitoring.v1.Prometheus.spec
    

    The Helm community provides a workaround, which is to execute kubectl replace -f ./crds.

  • Support kyverno-style mutation in globalization and localization

    What would you like to be added:

    Support kyverno-style mutation in globalization and localization

    Why is this needed:

    The current globalization and localization strategies are not flexible enough to modify objects based on specific conditions

    Story 1: on Tencent Cloud, qGPU is used for GPU virtualization, with value 100 for a GPU card: tke.cloud.tencent.com/qgpu-core: 100

    On other clouds, vGPU is used for GPU virtualization, with value 50 for a GPU card: example.com/vgpu: 50

    For user-friendliness, resource type conversion and value conversion are required.

    Story 2: on different clouds, there are different node label values for the same label key; for example: kubernetes.io/region: sg vs kubernetes.io/region: singapore

    For user-friendliness, resource type conversion and value conversion are required.

    Kyverno is the most popular open-source project for validation and mutation, and it can meet the above needs, but it is hard to use kyverno to hook all manifests and descriptions.

    the simple draft I came up with:

    add extra mutation logic

    point 1: add extra field

    const (
    	// HelmType applies Helm values for all matched HelmCharts.
    	// Note: HelmType only works with HelmChart(s).
    	HelmType OverrideType = "Helm"
    
    	// JSONPatchType applies a json patch for all matched objects.
    	// Note: JSONPatchType does not work with HelmChart(s).
    	JSONPatchType OverrideType = "JSONPatch"
    
    	// MergePatchType applies a json merge patch for all matched objects.
    	// Note: MergePatchType does not work with HelmChart(s).
    	MergePatchType OverrideType = "MergePatch"
    
    	// StrategicMergePatchType won't be supported, since `patchStrategy`
    	// and `patchMergeKey` can not be retrieved.
    
            .....
            KyvernoPathType = "Kyverno"
    )
    
    // OverrideConfig holds information that describes a override config.
    type OverrideConfig struct {
    	// Name indicate the OverrideConfig name.
    	//
    	// +optional
    	Name string `json:"name,omitempty"`
    
    	// Value represents override value.
    	//
    	// +required
    	// +kubebuilder:validation:Required
    	// +kubebuilder:validation:Type=string
    	Value string `json:"value"`
    
    	// Type specifies the override type for override value.
    	//
    	// +required
    	// +kubebuilder:validation:Required
    	// +kubebuilder:validation:Type=string
    	// +kubebuilder:validation:Enum=Helm;JSONPatch;MergePatch
    	Type OverrideType `json:"type"`
    
             // Mutation kyverno style mutation configuration
            // refer to https://github.com/kyverno/kyverno/blob/main/api/kyverno/v1/common_types.go
             Mutation kyvernoapi. Mutation
    }
    

    point 2: add mutating logic in Localizer

    point 3: currently a Globalization only selects a specified Feed; to select more than one feed, a resource selector should be added to GlobalizationSpec

  • aggregating status for custom workloads

    I see that extensibility is not supported here: https://github.com/clusternet/clusternet/blob/5f50c6e93ae338939dbadd51c476332e3ece0224/pkg/controllers/apps/aggregatestatus/aggregatestatus.go#L479-L491

  • abnormal manifests deletion when agent runs in parent cluster

    What happened:

    kind/bug

    When clusternet-agent runs in the parent cluster, if we delete a Subscription, the corresponding manifest feeds are deleted as well.

    This only happens when clusternet-agent runs in the parent cluster.

    What you expected to happen:

    How to reproduce it (as minimally and precisely as possible):

    Anything else we need to know?:

    Environment:

    • Clusternet version:
      • Clusternet-agent version (use clusternet-agent --version=json):
      • Clusternet-hub version (use clusternet-hub --version=json):
    • Kubernetes version (use kubectl version):
    • Cloud provider or hardware configuration:
    • OS (e.g: cat /etc/os-release):
    • Kernel (e.g. uname -a):
    • Others: