Karmada

Karmada: Open, Multi-Cloud, Multi-Cluster Kubernetes Orchestration

Karmada (Kubernetes Armada) is a Kubernetes management system that enables you to run your cloud-native applications across multiple Kubernetes clusters and clouds, with no changes to your applications. By speaking Kubernetes-native APIs and providing advanced scheduling capabilities, Karmada enables truly open, multi-cloud Kubernetes.

Karmada aims to provide turnkey automation for multi-cluster application management in multi-cloud and hybrid cloud scenarios, with key features such as centralized multi-cloud management, high availability, failure recovery, and traffic scheduling.

Why Karmada:

  • K8s Native API Compatible

    • Zero-change upgrade from single-cluster to multi-cluster
    • Seamless integration with the existing K8s tool chain
  • Out of the Box

    • Built-in policy sets for common scenarios, including active-active, remote DR, geo-redundancy, etc.
    • Cross-cluster auto-scaling, failover, and load balancing for applications across clusters
  • Avoid Vendor Lock-in

    • Integration with mainstream cloud providers
    • Automatic allocation and migration across clusters
    • Not tied to proprietary vendor orchestration
  • Centralized Management

    • Location-agnostic cluster management
    • Support for clusters in public cloud, on-prem, or at the edge
  • Rich Multi-Cluster Scheduling Policies

    • Cluster affinity, multi-cluster splitting/rebalancing
    • Multi-dimensional HA: region/AZ/cluster/provider
  • Open and Neutral

    • Jointly initiated by Internet, finance, manufacturing, telecom, and cloud providers, among others
    • Targeting open governance with the CNCF

Notice: this project is developed as a continuation of Kubernetes Federation v1 and v2. Some basic concepts are inherited from these two versions.

Architecture

The Karmada Control Plane consists of the following components:

  • Karmada API Server
  • Karmada Controller Manager
  • Karmada Scheduler

etcd stores the Karmada API objects, the API server is the REST endpoint all other components talk to, and the Karmada Controller Manager performs operations based on the API objects you create through the API server.

The Karmada Controller Manager runs the various controllers; the controllers watch Karmada objects and then talk to the underlying clusters' API servers to create regular Kubernetes resources.

  1. Cluster Controller: attaches Kubernetes clusters to Karmada and manages each cluster's lifecycle through a Cluster object (see the sketch after this list).

  2. Policy Controller: watches PropagationPolicy objects. When a PropagationPolicy is added, it selects the group of resources matching its resourceSelector and creates a ResourceBinding for each single resource object.

  3. Binding Controller: watches ResourceBinding objects and creates a Work object for each target cluster, containing the single resource manifest.

  4. Execution Controller: watches Work objects. When Work objects are created, it distributes the contained resources to the member clusters.
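
For reference, a Cluster object (Push mode) created when you join a cluster looks roughly like the following sketch; the endpoint, namespace, and secret name here are illustrative placeholders:

apiVersion: cluster.karmada.io/v1alpha1
kind: Cluster
metadata:
  name: member1
spec:
  # Push mode: the Karmada control plane pushes resources to the member cluster.
  syncMode: Push
  # API endpoint of the member cluster (illustrative value).
  apiEndpoint: https://172.18.0.3:6443
  # Secret holding the credentials Karmada uses to access the member cluster
  # (namespace and name are illustrative).
  secretRef:
    namespace: karmada-cluster
    name: member1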

Concepts

Resource template: Karmada uses the Kubernetes-native API definition as the federated resource template, making it easy to integrate with existing tools that already build on Kubernetes.

Propagation Policy: Karmada offers a standalone Propagation (placement) Policy API to define multi-cluster scheduling and spreading requirements; a sketch follows the list below.

  • Supports a 1:n mapping of policy to workloads, so users don't need to specify scheduling constraints every time they create a federated application.
  • With default policies, users can interact with the plain K8s API.
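
A minimal sketch of such a policy; the target Deployment and member cluster names below are illustrative:

apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
spec:
  # Which resource templates this policy applies to.
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  # Where the selected resources should be scheduled.
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2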

Override Policy: Karmada provides a standalone Override Policy API to automate cluster-specific configuration, for example (see the sketch after this list):

  • Override image prefix according to member cluster region
  • Override StorageClass according to cloud provider
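
A sketch of an Override Policy that rewrites the image registry for one member cluster; the names and registry value are illustrative, and field names follow the policy.karmada.io/v1alpha1 API:

apiVersion: policy.karmada.io/v1alpha1
kind: OverridePolicy
metadata:
  name: nginx-override
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  overrideRules:
    - targetCluster:
        clusterNames:
          - member1
      overriders:
        # Replace the image registry for workloads delivered to member1.
        imageOverrider:
          - component: Registry
            operator: replace
            value: registry.example.com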

The following diagram shows how Karmada resources are involved when propagating resources to member clusters.

karmada-resource-relation

Quick Start

This guide will cover:

  • Install the Karmada control plane components in a Kubernetes cluster, which is known as the host cluster.
  • Join a member cluster to the Karmada control plane.
  • Propagate an application with Karmada.

Prerequisites

Install karmada control plane

1. Clone this repo to your machine:

git clone https://github.com/karmada-io/karmada

2. Change to karmada directory:

cd karmada

3. Deploy and run karmada control plane:

Run the following script:

# hack/local-up-karmada.sh

This script will do the following tasks for you:

  • Start a Kubernetes cluster to run the Karmada control plane, aka the host cluster.
  • Build Karmada control plane components based on the current codebase.
  • Deploy Karmada control plane components on the host cluster.
  • Create member clusters and join them to Karmada.

If everything goes well, you will see messages similar to the following at the end of the script output:

Local Karmada is running.

To start using your karmada, run:
  export KUBECONFIG="$HOME/.kube/karmada.config"
Please use 'kubectl config use-context karmada-host/karmada-apiserver' to switch the host and control plane cluster.

To manage your member clusters, run:
  export KUBECONFIG="$HOME/.kube/members.config"
Please use 'kubectl config use-context member1/member2/member3' to switch to the different member cluster.

There are two contexts for Karmada:

  • karmada-apiserver: kubectl config use-context karmada-apiserver
  • karmada-host: kubectl config use-context karmada-host

The karmada-apiserver context is the main one to use when interacting with the Karmada control plane, while karmada-host is only used for debugging the Karmada installation on the host cluster. You can check all clusters at any time by running kubectl config view. To switch cluster contexts, run kubectl config use-context [CONTEXT_NAME].

Demo

Propagate application

In the following steps, we are going to propagate a deployment with Karmada.

1. Create nginx deployment in karmada.

First, create a deployment named nginx:

kubectl create -f samples/nginx/deployment.yaml
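
The sample manifest is an ordinary Kubernetes Deployment; the sketch below approximates it (the exact content of samples/nginx/deployment.yaml may differ slightly):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx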

2. Create a PropagationPolicy that will propagate nginx to the member clusters

Then, we need to create a policy to drive the deployment to our member clusters.

kubectl create -f samples/nginx/propagationpolicy.yaml

3. Check the deployment status from karmada

You can check the deployment status from Karmada; there is no need to access the member clusters:

$ kubectl get deployment
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   2/2     2            2           20s

Kubernetes compatibility

The compatibility matrix covers Kubernetes 1.15 through 1.21 for Karmada v0.6, v0.7, v0.8, and HEAD (master).

Key:

  • ✓ Karmada and the Kubernetes version are exactly compatible.
  • + Karmada has features or API objects that may not be present in the Kubernetes version.
  • - The Kubernetes version has features or API objects that Karmada can't use.

Meeting

Regular Community Meeting:

Resources:

Contact

If you have questions, feel free to reach out to us in the following ways:

Contributing

If you're interested in being a contributor and want to get involved in developing the Karmada code, please see CONTRIBUTING for details on submitting patches and the contribution workflow.

License

Karmada is under the Apache 2.0 license. See the LICENSE file for details.

Comments
  • CRD resources lost

    CRD resources lost

    What happened: After the server was restarted, all the CRDs under Karmada were lost. What you expected to happen:

    How to reproduce it (as minimally and precisely as possible):

    Anything else we need to know?:

    Environment:

    • Karmada version:
    • kubectl-karmada or karmadactl version (the result of kubectl-karmada version or karmadactl version):
    • Others:
  • Join cluster error

    Join cluster error

    [root@k8s-master ~]# kubectl karmada join kubernetes-admin --kubeconfig=/etc/karmada/karmada-apiserver.config --cluster-kubeconfig=$HOME/.kube/config W0120 16:17:30.264037 12485 cluster.go:106] failed to create cluster(kubernetes-admin). error: Cluster.cluster.karmada.io "kubernetes-admin" is invalid: [spec.secretRef.namespace: Required value, spec.secretRef.name: Required value, spec.impersonatorSecretRef.namespace: Required value, spec.impersonatorSecretRef.name: Required value] W0120 16:17:30.264245 12485 cluster.go:50] failed to create cluster(kubernetes-admin). error: Cluster.cluster.karmada.io "kubernetes-admin" is invalid: [spec.secretRef.namespace: Required value, spec.secretRef.name: Required value, spec.impersonatorSecretRef.namespace: Required value, spec.impersonatorSecretRef.name: Required value] Error: failed to create cluster(kubernetes-admin) object. error: Cluster.cluster.karmada.io "kubernetes-admin" is invalid: [spec.secretRef.namespace: Required value, spec.secretRef.name: Required value, spec.impersonatorSecretRef.namespace: Required value, spec.impersonatorSecretRef.name: Required value]

    kubectl-karmada version: make kubectl-karmada by the latest codes in github [root@k8s-master ~]# kubectl-karmada version kubectl karmada version: version.Info{GitVersion:"", GitCommit:"", GitTreeState:"clean", BuildDate:"2022-01-20T02:30:56Z", GoVersion:"go1.17.6", Compiler:"gc", Platform:"linux/amd64"}

  • karmadactl support apply command

    karmadactl support apply command

    Signed-off-by: carlory [email protected]

    What type of PR is this?

    /kind feature

    What this PR does / why we need it:

    Some open-source products are deployed with a long YAML manifest, such as Calico (https://docs.projectcalico.org/manifests/calico.yaml). This PR provides an easy way to deploy them to member clusters.

    (⎈ |karmada:default)➜  karmada git:(karmadactl-apply) go run cmd/karmadactl/karmadactl.go apply -h
    Apply a configuration to a resource by file name or stdin and propagate them into member clusters. The resource name must be specified. This resource will be created if it doesn't exist yet. To use 'apply', always create the resource initially with either 'apply' or 'create --save-config'.
    
     JSON and YAML formats are accepted.
    
     Alpha Disclaimer: the --prune functionality is not yet complete. Do not use unless you are aware of what the current state is. See https://issues.k8s.io/34274.
    
     Note: It implements the function of 'kubectl apply' by default. If you want to propagate them into member clusters, please use 'kubectl apply --all-clusters'.
    
    Usage:
      karmadactl apply (-f FILENAME | -k DIRECTORY) [flags]
    
    Examples:
      # Apply the configuration without propagation into member clusters. It acts as 'kubectl apply'.
      karmadactl apply -f manifest.yaml
    
      # Apply resources from a directory and propagate them into all member clusters.
      karmadactl apply -f dir/ --all-clusters
    
    Flags:
          --all                             Select all resources in the namespace of the specified resource types.
          --all-clusters                    If present, propagates a group of resources to all member clusters.
          --allow-missing-template-keys     If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats. (default true)
          --cascade string[="background"]   Must be "background", "orphan", or "foreground". Selects the deletion cascading strategy for the dependents (e.g. Pods created by a ReplicationController). Defaults to background. (default "background")
          --dry-run string[="unchanged"]    Must be "none", "server", or "client". If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource. (default "none")
          --field-manager string            Name of the manager used to track field ownership. (default "kubectl-client-side-apply")
      -f, --filename strings                that contains the configuration to apply
          --force                           If true, immediately remove resources from API and bypass graceful deletion. Note that immediate deletion of some resources may result in inconsistency or data loss and requires confirmation.
          --force-conflicts                 If true, server-side apply will force the changes against conflicts.
          --grace-period int                Period of time in seconds given to the resource to terminate gracefully. Ignored if negative. Set to 1 for immediate shutdown. Can only be set to 0 when --force is true (force deletion). (default -1)
      -h, --help                            help for apply
          --karmada-context string          Name of the cluster context in control plane kubeconfig file.
      -k, --kustomize string                Process a kustomization directory. This flag can't be used together with -f or -R.
      -n, --namespace string                If present, the namespace scope for this CLI request
          --openapi-patch                   If true, use openapi to calculate diff when the openapi presents and the resource can be found in the openapi spec. Otherwise, fall back to use baked-in types. (default true)
      -o, --output string                   Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).
          --overwrite                       Automatically resolve conflicts between the modified and live configuration by using values from the modified configuration (default true)
          --prune                           Automatically delete resource objects, that do not appear in the configs and are created by either apply or create --save-config. Should be used with either -l or --all.
          --prune-whitelist stringArray     Overwrite the default whitelist with <group/version/kind> for --prune
      -R, --recursive                       Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.
      -l, --selector string                 Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints.
          --server-side                     If true, apply runs in the server instead of the client.
          --show-managed-fields             If true, keep the managedFields when printing objects in JSON or YAML format.
          --template string                 Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].
          --timeout duration                The length of time to wait before giving up on a delete, zero means determine a timeout from the size of the object
          --validate string                 Must be one of: strict (or true), warn, ignore (or false).
                                            		"true" or "strict" will use a schema to validate the input and fail the request if invalid. It will perform server side validation if ServerSideFieldValidation is enabled on the api-server, but will fall back to less reliable client-side validation if not.
                                            		"warn" will warn about unknown or duplicate fields without blocking the request if server-side field validation is enabled on the API server, and behave as "ignore" otherwise.
                                            		"false" or "ignore" will not perform any schema validation, silently dropping any unknown or duplicate fields. (default "strict")
          --wait                            If true, wait for resources to be gone before returning. This waits for finalizers.
    
    Global Flags:
          --add-dir-header                   If true, adds the file directory to the header of the log messages
          --alsologtostderr                  log to standard error as well as files
          --kubeconfig string                Paths to a kubeconfig. Only required if out-of-cluster.
          --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)
          --log-dir string                   If non-empty, write log files in this directory
          --log-file string                  If non-empty, use this log file
          --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
          --logtostderr                      log to standard error instead of files (default true)
          --one-output                       If true, only write logs to their native severity level (vs also writing to each lower severity level)
          --skip-headers                     If true, avoid header prefixes in the log messages
          --skip-log-headers                 If true, avoid headers when opening log files
          --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)
      -v, --v Level                          number for the log level verbosity
          --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging
    (⎈ |karmada:default)➜  karmada git:(karmadactl-apply) go run cmd/karmadactl/karmadactl.go apply -f ~/manifests.yaml
    deployment.apps/micro-dao-2048 created
    service/micro-dao-2048 created
    (⎈ |karmada:default)➜  karmada git:(karmadactl-apply) go run cmd/karmadactl/karmadactl.go apply -f ~/manifests.yaml --all-clusters
    deployment.apps/micro-dao-2048 unchanged
    propagationpolicy.policy.karmada.io/micro-dao-2048-6d7f8d5f5b created
    service/micro-dao-2048 unchanged
    propagationpolicy.policy.karmada.io/micro-dao-2048-76579ccd86 created
    (⎈ |karmada:default)➜  karmada git:(karmadactl-apply) kubectl get deploy,svc,pp
    NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/micro-dao-2048   0/2     4            0           37s
    
    NAME                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
    service/kubernetes       ClusterIP   10.96.0.1       <none>        443/TCP   2m7s
    service/micro-dao-2048   ClusterIP   10.99.253.139   <none>        80/TCP    37s
    
    NAME                                                            AGE
    propagationpolicy.policy.karmada.io/micro-dao-2048-6d7f8d5f5b   27s
    propagationpolicy.policy.karmada.io/micro-dao-2048-76579ccd86   26s
    

    Which issue(s) this PR fixes: Ref #1934

    Special notes for your reviewer:

    Does this PR introduce a user-facing change?:

    `karmadactl`: Introduced `apply` subcommand to apply a configuration to a resource by file name or stdin.
    
  • Reschedule bindings on cluster change

    Reschedule bindings on cluster change

    What happened: Unjoined clusters still remain in binding.spec.clusters

    What you expected to happen: Unjoined clusters should be deleted from binding.spec.clusters

    How to reproduce it (as minimally and precisely as possible): 1. Set up environment (script v0.8)

    root@myserver:~/karmada# hack/local-up-karmada.sh
    
    root@myserver:~/karmada# hack/create-cluster.sh member1 $HOME/.kube/karmada.config
    
    root@myserver:~/karmada# kubectl config use-context karmada-apiserver
    
    root@myserver:~/karmada# karmadactl join member1 --cluster-kubeconfig=$HOME/.kube/karmada.config
    
    root@myserver:~/karmada# kubectl apply -f samples/nginx
    
    root@myserver:~/karmada# kubectl get deploy
    NAME    READY   UP-TO-DATE   AVAILABLE   AGE
    nginx   1/1     1            1           47h
    

    2. Unjoin member1

    root@myserver:~/karmada# karmadactl unjoin member1
    
    root@myserver:~/karmada# kubectl get clusters
    No resources found
    

    3. Check binding.spec.clusters

    root@myserver:~/karmada# kubectl describe rb
    ......
    Spec:
      Clusters:
        Name:  member1
    ......
    

    Anything else we need to know?: Is this expected behavior? If not, who is supposed to take responsibility for deleting unjoined clusters from the binding: the scheduler or other controllers (like the cluster controller)?

    Environment:

    • Karmada version:v0.8.0
    • Others:
  • ANP cluster is inaccessible, please check authorization or network /proxy/api 503 Service Unavailable

    ANP cluster is inaccessible, please check authorization or network /proxy/api 503 Service Unavailable

    What happened:

    Using ANP pull mode

    kubectl get cluster --kubeconfig karmada-apiserver.config
    NAME      VERSION   MODE   READY   AGE
    member1   v1.19.6   Pull   True    13m
    member2   v1.19.6   Pull   True    9m29s
    

    karmada-agent has already been configured with cluster-api-endpoint and proxy-server-address

          - command:
            - /bin/karmada-agent
            - --karmada-kubeconfig=/etc/kubeconfig/kubeconfig
            - --cluster-name=member1
            - --cluster-api-endpoint=https://member-ip:6443
            - --proxy-server-address=http://proxy-ip:8088
            - --cluster-status-update-frequency=10s
            - --v=4
    
    karmadactl get pod --kubeconfig karmada-apiserver.config
    Error: [cluster(member1) is inaccessible, please check authorization or network, cluster(member2) is inaccessible, please check authorization or network]
    

    What you expected to happen:

    How to reproduce it (as minimally and precisely as possible):

    Anything else we need to know?:

    Environment:

    • Karmada version: 1.2.1
    • kubectl-karmada or karmadactl version (the result of kubectl-karmada version or karmadactl version): karmadactl version: version.Info{GitVersion:"v1.2.1", GitCommit:"de4972b74f848f78a58f9a0f4a4e85f243ba48f8", GitTreeState:"clean", BuildDate:"2022-07-14T09:33:33Z", GoVersion:"go1.17.11", Compiler:"gc", Platform:"linux/amd64"}
    • Others:
  • speed up docker build

    speed up docker build

    Signed-off-by: yingjinhui [email protected]

    What type of PR is this? /kind feature

    What this PR does / why we need it: Speed up docker image building.

    Which issue(s) this PR fixes: Fixes #1729

    Special notes for your reviewer: Implement of https://github.com/karmada-io/karmada/issues/1729#issuecomment-1120238596.

    Does this PR introduce a user-facing change?:

    NONE
    
  • custom enable or disable of scheduler plugins

    custom enable or disable of scheduler plugins

    Signed-off-by: chaunceyjiang [email protected]

    What type of PR is this? /kind feature

    What this PR does / why we need it: custom enable or disable of scheduler plugins

    Which issue(s) this PR fixes: Fixes #

    Special notes for your reviewer:

    Does this PR introduce a user-facing change?:

    `karmada-scheduler`: Introduced `--plugins` flag to enable or disable scheduler plugins 
    
  • feat: agent report secret

    feat: agent report secret

    Signed-off-by: charlesQQ [email protected]

    What type of PR is this? /kind feature

    What this PR does / why we need it: Allow karmada-agent to report Secrets for Pull mode clusters

    Which issue(s) this PR fixes: Part of https://github.com/karmada-io/karmada/issues/1946

    Special notes for your reviewer:

    Does this PR introduce a user-facing change?:

    `karmada-agent`: Introduced `--report-secrets` flag to allow secrets to be reported to the Karmada control plane during registering.
    
    
  • add e2etest for aggregated api endpoint

    add e2etest for aggregated api endpoint

    What type of PR is this?

    What this PR does / why we need it: Add e2e test case for aggregated-api-endpoint Which issue(s) this PR fixes: Fixes #

    Special notes for your reviewer:

    Does this PR introduce a user-facing change?:

    NONE
    
  • Reschedule ResourceBinding when adding a cluster

    Reschedule ResourceBinding when adding a cluster

    Signed-off-by: chaunceyjiang [email protected]

    What type of PR is this? /kind feature

    What this PR does / why we need it:

    Which issue(s) this PR fixes: Fixes #2261

    Special notes for your reviewer: When a new cluster is joined, if the Placement is empty or the replicaSchedulingType is Duplicated, resources will be propagated to the new cluster.

    Does this PR introduce a user-facing change?:

    `karmada-scheduler`: Now the scheduler starts to re-schedule in case of cluster state changes.
    
  • Add karmadactl addons subcommand

    Add karmadactl addons subcommand

    Co-authored-by: duanmeng [email protected] Signed-off-by: wuyingjun [email protected]

    What type of PR is this? /kind feature

    What this PR does / why we need it: Add karmadactl addons subcommand Which issue(s) this PR fixes: Fixes https://github.com/karmada-io/karmada/issues/1957

    Special notes for your reviewer:

    Does this PR introduce a user-facing change?:

    NONE
    
  • stop proxy cache asynchronously

    stop proxy cache asynchronously

    Signed-off-by: yingjinhui [email protected]

    What type of PR is this? /kind bug

    What this PR does / why we need it: When a member cluster is down, stopping the proxy cache for this cluster takes a few seconds (as shown in the log below, stopping the pods cache takes 9s)

    I0106 17:25:07.764975   62644 resource_cache.go:35] Stop store for yjh-1 /v1, Resource=pods
    I0106 17:25:16.609796   62644 resource_cache.go:35] Stop store for yjh-1 /v1, Resource=nodes
    

    While stopping, it holds the lock of MultiClusterCache. This results in client requests being blocked, because request handlers also need this lock:

    https://github.com/karmada-io/karmada/blob/7b4c541bb818fb1e2677311aa34d8ff17b73d119/pkg/search/proxy/store/multi_cluster_cache.go#L162-L168

    https://github.com/karmada-io/karmada/blob/7b4c541bb818fb1e2677311aa34d8ff17b73d119/pkg/search/proxy/store/multi_cluster_cache.go#L319-L321

    Which issue(s) this PR fixes: Fixes #

    Special notes for your reviewer:

    Does this PR introduce a user-facing change?:

    `karmada-search`: avoid proxy request block when member cluster down.
    
  • Support multiple schedulers

    Support multiple schedulers

    Signed-off-by: Poor12 [email protected]

    What type of PR is this? /kind api-change /kind feature

    What this PR does / why we need it: Karmada ships with a default scheduler. If the default scheduler does not suit their needs, users are recommended to implement their own scheduler. The PropagationPolicy now has a schedulerName field, but Karmada has not fully adopted it.

    	// SchedulerName represents which scheduler to proceed the scheduling.
    	// If specified, the policy will be dispatched by specified scheduler.
    	// If not specified, the policy will be dispatched by default scheduler.
    	// +optional
    	SchedulerName string `json:"schedulerName,omitempty"`
    
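
    For illustration, a policy that selects a non-default scheduler through this field might look roughly like the following sketch (the scheduler name and resource selector are placeholders):

    apiVersion: policy.karmada.io/v1alpha1
    kind: PropagationPolicy
    metadata:
      name: nginx-propagation
    spec:
      # Dispatch this policy to a custom scheduler instead of the default one.
      schedulerName: my-custom-scheduler
      resourceSelectors:
        - apiVersion: apps/v1
          kind: Deployment
          name: nginx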

    Which issue(s) this PR fixes: Fixes #279 Part of #3024

    Special notes for your reviewer: None

    Does this PR introduce a user-facing change?:

    api-change: binding.Spec add `SchedulerName` Field.
    karmada-scheduler: add `scheduler-name` and `leader-elect-resource-name` options for multiple schedulers configuration.
    
  • Update protoc setup token

    Update protoc setup token

    What type of PR is this?

    /kind cleanup

    What this PR does / why we need it: Use the automatic token so that the task (Install Protoc) can be run in forked repos.

    https://docs.github.com/en/actions/security-guides/automatic-token-authentication

    Which issue(s) this PR fixes:

    Special notes for your reviewer:

    Does this PR introduce a user-facing change?:

    NONE
    
  • [umbrella]Support multiple schedulers

    [umbrella]Support multiple schedulers

    Karmada ships with a default scheduler. If the default scheduler does not suit their needs, users are recommended to implement their own scheduler. The PropagationPolicy now has a schedulerName field, but Karmada has not fully adopted it.

    	// SchedulerName represents which scheduler to proceed the scheduling.
    	// If specified, the policy will be dispatched by specified scheduler.
    	// If not specified, the policy will be dispatched by default scheduler.
    	// +optional
    	SchedulerName string `json:"schedulerName,omitempty"`
    
    • [ ] set default scheduler name for propagationPolicy. linked issue: https://github.com/karmada-io/karmada/issues/279
    • [ ] add schedulerName for ResourceBinding
    • [ ] add a schedulerName option for karmada-scheduler, so karmada-scheduler can handle scheduling tasks by schedulerName
    • [ ] metrics about scheduler add schedulerName label
    • [ ] strategy about multiple schedulers with descheduler and scheduler-estimator
    • [ ] docs about customizing specific scheduler https://github.com/karmada-io/website/pull/286
  • add karmada operator crd validations

    add karmada operator crd validations

    Signed-off-by: calvin [email protected]

    What type of PR is this? /kind feature

    What this PR does / why we need it: Previously, I wanted to implement a built-in webhook for the karmada operator, but I think that is very complex and troublesome for both developers and users.

    The main reason is that the webhook needs TLS to connect with the apiserver, which means that before running the karmada-operator pod we need to generate a certificate and mount it into the pod so that the pod and the apiserver can authenticate each other. That is very unfriendly to users. Our scenario is very simple: we only need to populate defaults for some fields, so developing a webhook is too heavy for us.

    In the k8s community, there are two ways to replace a webhook in simple scenarios (a sketch follows this list):

    1. replace the mutating webhook by adding +kubebuilder:default annotations to the required fields and generating the schema. https://kubernetes.io/zh-cn/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#efaulting

    2. replace the validation webhook with CEL https://kubernetes.io/blog/2022/09/29/enforce-immutability-using-cel/#basics-of-validation-rules
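
    As a rough illustration of both approaches (the field name and default value here are placeholders, not the actual karmada-operator CRD), a CRD schema fragment can carry a default and a CEL rule directly:

    properties:
      spec:
        type: object
        properties:
          imageRepository:
            type: string
            # Defaulting without a mutating webhook: applied by the apiserver
            # when the field is omitted (placeholder value).
            default: docker.io/karmada
            # Validation without a webhook, expressed as a CEL rule.
            x-kubernetes-validations:
              - rule: "self != ''"
                message: "imageRepository must not be empty"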

    Which issue(s) this PR fixes: Part of # https://github.com/karmada-io/karmada/issues/2979

    Special notes for your reviewer:

    Does this PR introduce a user-facing change?:

    NONE
    
  • The connection to the server was refused

    The connection to the server was refused

    What happened: Karmada suddenly stopped working: couldn't get current server API group list: Get "https://172.20.0.2:5433/api?timeout=32s": dial tcp 172.20.0.2:5433: connect: connection refused The connection to the server 172.20.0.2:5433 was refused - did you specify the right host or port?

    Anything else we need to know?: I installed Karmada with the hack/local-up-karmada.sh script. I expected to see three member clusters by default; however, only one kind cluster 'kind-member2' was created. The control plane was running fine.

    Environment:

    • Karmada version: v1.4.1