Kubernetes Native Edge Computing Framework (project under CNCF)

KubeEdge


KubeEdge is built upon Kubernetes and extends native containerized application orchestration and device management to hosts at the Edge. It consists of a cloud part and an edge part, and provides core infrastructure support for networking, application deployment and metadata synchronization between cloud and edge. It also supports MQTT, which enables edge devices to access the cluster through edge nodes.

With KubeEdge it is easy to deploy existing complex machine learning, image recognition, event processing and other high-level applications to the Edge. With business logic running at the Edge, much larger volumes of data can be secured and processed locally, where the data is produced. With data processed at the Edge, responsiveness increases dramatically and data privacy is protected.

KubeEdge is an incubation-level hosted project of the Cloud Native Computing Foundation (CNCF). See the KubeEdge incubation announcement by CNCF.

Note:

Versions before 1.3 are no longer supported; please upgrade.

Advantages

  • Kubernetes-native support: Manage edge applications and edge devices from the cloud with fully compatible Kubernetes APIs (see the deployment sketch after this list).
  • Cloud-Edge Reliable Collaboration: Ensure reliable message delivery, without loss, over unstable cloud-edge networks.
  • Edge Autonomy: Ensure edge nodes run autonomously and edge applications keep running normally when the cloud-edge network is unstable or the edge node is offline and restarted.
  • Edge Devices Management: Manage edge devices through Kubernetes-native APIs implemented with CRDs.
  • Extremely Lightweight Edge Agent: An extremely lightweight edge agent (EdgeCore) that runs on resource-constrained edge hardware.
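
Because edge nodes register as ordinary Kubernetes nodes, a workload can be pinned to one using the standard Kubernetes APIs. The sketch below uses client-go; the node name edge-node-1 and the kubeconfig path are illustrative assumptions, not KubeEdge defaults.

    package main

    import (
        "context"
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig location; point this at the cluster that cloudcore runs against.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset := kubernetes.NewForConfigOrDie(cfg)

        labels := map[string]string{"app": "edge-nginx"}
        replicas := int32(1)
        deploy := &appsv1.Deployment{
            ObjectMeta: metav1.ObjectMeta{Name: "edge-nginx", Namespace: "default"},
            Spec: appsv1.DeploymentSpec{
                Replicas: &replicas,
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        // Pin the pod to a hypothetical edge node registered by EdgeCore.
                        NodeSelector: map[string]string{"kubernetes.io/hostname": "edge-node-1"},
                        Containers: []corev1.Container{
                            {Name: "nginx", Image: "nginx:1.21"},
                        },
                    },
                },
            },
        }

        created, err := clientset.AppsV1().Deployments("default").Create(context.TODO(), deploy, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("created deployment:", created.Name)
    }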

How It Works

KubeEdge consists of a cloud part and an edge part.

Architecture

In the Cloud

  • CloudHub: a WebSocket server responsible for watching changes at the cloud side, caching messages, and sending them to EdgeHub.
  • EdgeController: an extended Kubernetes controller that manages edge node and pod metadata so that data can be targeted to a specific edge node.
  • DeviceController: an extended Kubernetes controller that manages devices so that device metadata and status can be synced between edge and cloud.

On the Edge

  • EdgeHub: a WebSocket client responsible for interacting with the cloud-side services for edge computing (such as EdgeController in the KubeEdge architecture). This includes syncing cloud-side resource updates to the edge and reporting edge-side host and device status changes to the cloud.
  • Edged: an agent that runs on edge nodes and manages containerized applications.
  • EventBus: an MQTT client that interacts with MQTT servers (e.g. mosquitto), offering publish and subscribe capabilities to other components (see the MQTT sketch after this list).
  • ServiceBus: an HTTP client that interacts with HTTP servers (REST), giving cloud components a way to reach HTTP servers running at the edge.
  • DeviceTwin: responsible for storing device status and syncing it to the cloud. It also provides query interfaces for applications.
  • MetaManager: the message processor between Edged and EdgeHub. It is also responsible for storing/retrieving metadata to/from a lightweight database (SQLite).
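
As an illustration of the MQTT path, the sketch below shows a device-side process publishing a reading to the local broker (e.g. mosquitto) that EventBus listens on. The topic, device ID, and payload format are assumptions for illustration, not a KubeEdge-defined contract.

    package main

    import (
        "fmt"
        "time"

        mqtt "github.com/eclipse/paho.mqtt.golang"
    )

    func main() {
        // Broker assumed to be running locally on the edge node.
        opts := mqtt.NewClientOptions().
            AddBroker("tcp://127.0.0.1:1883").
            SetClientID("temperature-sensor-demo")

        client := mqtt.NewClient(opts)
        if token := client.Connect(); token.Wait() && token.Error() != nil {
            panic(token.Error())
        }
        defer client.Disconnect(250)

        // Hypothetical application topic; EventBus can forward matching MQTT traffic to
        // other edge components, which may relay state to the cloud through EdgeHub.
        topic := "devices/temperature-sensor-01/state"
        payload := fmt.Sprintf(`{"temperature": 21.5, "ts": %d}`, time.Now().Unix())

        if token := client.Publish(topic, 1, false, payload); token.Wait() && token.Error() != nil {
            panic(token.Error())
        }
    }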

Kubernetes compatibility

Kubernetes 1.13 Kubernetes 1.14 Kubernetes 1.15 Kubernetes 1.16 Kubernetes 1.17 Kubernetes 1.18 Kubernetes 1.19
KubeEdge 1.3
KubeEdge 1.4
KubeEdge 1.5
KubeEdge HEAD (master)

Key:

  • ✓ KubeEdge and the Kubernetes version are exactly compatible.
  • + KubeEdge has features or API objects that may not be present in the Kubernetes version.
  • - The Kubernetes version has features or API objects that KubeEdge can't use.

Guides

Get started with this doc.

See our documentation on kubeedge.io for more details.

To learn more about KubeEdge, try the examples in the examples repository.

Roadmap

Meeting

Regular Community Meeting:

Resources:

Contact

If you need support, start with the troubleshooting guide, and work your way through the process that we've outlined.

If you have questions, feel free to reach out to us in the following ways:

Contributing

If you're interested in being a contributor and want to get involved in developing the KubeEdge code, please see CONTRIBUTING for details on submitting patches and the contribution workflow.

License

KubeEdge is under the Apache 2.0 license. See the LICENSE file for details.

Comments
  • Metrics-Server on KubeEdge (configuration process and vague documentation)

    Metrics-Server on KubeEdge (configuration process and vague documentation)

    What happened:

    1. I cannot find the certgen.sh in the folder /etc/kubeedge/, and I found it in the original git clone folder in $GOPATH/src/github.com/kubeedge/kubeedge/build/tools/certgen.sh. Are they the same file or not? (Doc section 4.4.3 "third" step certification part for cloud core.)

    2. I have no idea how to activate cloudStream and edgeStream. In the document, it is said that I could modify cloudcore.yaml or edgecore.yaml. However, in the last sentence, it mentioned that we need to set both cloudStream and edgeStream to true!! (Doc section 4.4.3 "fifth" step cloudStream and edgeStream setting)

    3. I only found the edgecore service with the help of @GsssC (Doc section 4.4.3 "sixth" step for restarting cloudcore and edgecore). However, I still cannot find a way to restart cloudcore. I cannot find any KubeEdge (cloudcore) related services, active or not, using sudo systemctl list-units --all. By the way, I used systemctl restart edgecore to restart edgecore, and found that the kube-proxy containers may cause the problem. I also tried cloudcore restart to restart cloudcore, though I am not sure whether that is the right command.

    4. Does KubeEdge support CNI plugins or not? (@daixiang0 said that CNI plugins are not supported right now, but @GsssC said that they are supported.) I am using the weave-net plugin as my CNI plugin since I heard that it has good support for the ARM CPU architecture. (The edge node is a Raspberry Pi 4 or NVIDIA Jetson TX2 (future).) If the answer is yes, could you please help me with the configuration issues? In the k8s_weave-npc container it says: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined. In the k8s_weave container it says: [kube-peers] Could not get cluster config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined; Failed to get peers.

    5. kubectl top nodes cannot get any KubeEdge edge node metrics. (screenshot)

    What you expected to happen:

    1. Get the certgen.sh for certificates generation.

    2. Specified YAML file I do need to modify. (cloudcore.yaml or edgecore.yaml or both?)

    3. There is a command to restart cloudcore.

    4. No terminated error status on weave-net pod anymore.

    5. I can see metrics like other Kubernetes nodes.

    6. I can see metrics on Kubernetes Dashboard, Grafana, and Prometheus.

    How to reproduce it (as minimally and precisely as possible):

    1. construct a KubeEdge cluster (1 master node, 1 worker node(k8s), 1 edge node(KubeEdge))
    2. deploy Kubernetes Dashboard by Dan Wahlin GitHub scripts https://github.com/DanWahlin/DockerAndKubernetesCourseCode/tree/master/samples/dashboard-security
    3. deploy Grafana, Prometheus, kube-state-metrics, metrics-server by Dan Wahlin GitHub scripts https://github.com/DanWahlin/DockerAndKubernetesCourseCode/tree/master/samples/prometheus

    Anything else we need to know?: Document: https://docs.kubeedge.io/_/downloads/en/latest/pdf/ (section 4.4.3 - kubectl logs) (section 4.4.4 - metrics-server). Weave-Net pod container logs on the RPI4: (screenshots). Grafana, Prometheus, metrics-server, kube-state-metrics scripts: https://github.com/DanWahlin/DockerAndKubernetesCourseCode/tree/master/samples/prometheus

    Environment:

    • KubeEdge version (e.g. cloudcore/edgecore --version):

    cloudcore: (screenshot)

    edgecore: edgecore --version: command not found (screenshot)

    CloudSide Environment:

    • Hardware configuration (e.g. lscpu):

      root@charlie-latest:/home/charlie# lscpu
      Architecture:        x86_64
      CPU op-mode(s):      32-bit, 64-bit
      Byte Order:          Little Endian
      CPU(s):              6
      On-line CPU(s) list: 0-5
      Thread(s) per core:  1
      Core(s) per socket:  6
      Socket(s):           1
      NUMA node(s):        1
      Vendor ID:           GenuineIntel
      CPU family:          6
      Model:               158
      Model name:          Intel(R) Core(TM) i5-8500 CPU @ 3.00GHz
      Stepping:            10
      CPU MHz:             800.071
      CPU max MHz:         4100.0000
      CPU min MHz:         800.0000
      BogoMIPS:            6000.00
      Virtualization:      VT-x
      L1d cache:           32K
      L1i cache:           32K
      L2 cache:            256K
      L3 cache:            9216K
      NUMA node0 CPU(s):   0-5
      Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d

    • OS (e.g. cat /etc/os-release): (screenshot)

    • Kernel (e.g. uname -a): (screenshot)

    • Go version (e.g. go version): (screenshot)

    • Others:

    EdgeSide Environment:

    • edgecore version (e.g. edgecore --version): command not found (screenshot)

    • Hardware configuration (e.g. lscpu): (screenshot)

    • OS (e.g. cat /etc/os-release): (screenshot)

    • Kernel (e.g. uname -a): (screenshot)

    • Go version (e.g. go version): (screenshot)

    • Others:

    @GsssC Hi, I finally submitted an issue! Could you please give me a hand? Thank you very much in advance!

  • How to deploy the edge part into a k8s cluster

    How to deploy the edge part into a k8s cluster

    Which jobs are failing: I tried to deploy edgecore according to the documentation, but it didn't work.

    Which test(s) are failing: I tried to deploy edgecore according to the document. When I finished, the edge node was still in the NotReady state. Looking at the pods, I can see that the pod containing edgecore is in the Pending state.

    See the following (my edge node's name is "172.31.23.166"):

    $ kubectl get nodes
    NAME               STATUS     ROLES    AGE   VERSION
    172.31.23.166      NotReady   <none>   15s
    ip-172-31-27-157   Ready      master   17h   v1.14.1
    
    $ kubectl get pods -n kubeedge
    NAME                              READY   STATUS    RESTARTS   AGE
    172.31.23.166-7464f44944-nlbj2    0/2     Pending   0          9s
    edgecontroller-5464c96d6c-tmqfs   1/1     Running   0          42s
    
    $ kubectl describe pods 172.31.23.166-7464f44944-nlbj2 -n kubeedge
    Name:               172.31.23.166-7464f44944-nlbj2
    Namespace:          kubeedge
    Priority:           0
    PriorityClassName:  <none>
    Node:               <none>
    Labels:             k8s-app=kubeedge
                        kubeedge=edgenode
                        pod-template-hash=7464f44944
    Annotations:        <none>
    Status:             Pending
    IP:
    Controlled By:      ReplicaSet/172.31.23.166-7464f44944
    Containers:
      edgenode:
        Image:      kubeedge/edgecore:latest
        Port:       <none>
        Host Port:  <none>
        Limits:
          cpu:     200m
          memory:  1Gi
        Requests:
          cpu:     100m
          memory:  512Mi
        Environment:
          DOCKER_HOST:  tcp://localhost:2375
        Mounts:
          /etc/kubeedge/certs from certs (rw)
          /etc/kubeedge/edge/conf from conf (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from default-token-lpftk (ro)
      dind-daemon:
        Image:      docker:dind
        Port:       <none>
        Host Port:  <none>
        Requests:
          cpu:        20m
          memory:     512Mi
        Environment:  <none>
        Mounts:
          /var/lib/docker from docker-graph-storage (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from default-token-lpftk (ro)
    Conditions:
      Type           Status
      PodScheduled   False
    Volumes:
      certs:
        Type:          HostPath (bare host directory volume)
        Path:          /etc/kubeedge/certs
        HostPathType:
      conf:
        Type:      ConfigMap (a volume populated by a ConfigMap)
        Name:      edgenodeconf
        Optional:  false
      docker-graph-storage:
        Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
        Medium:
        SizeLimit:  <unset>
      default-token-lpftk:
        Type:        Secret (a volume populated by a Secret)
        SecretName:  default-token-lpftk
        Optional:    false
    QoS Class:       Burstable
    Node-Selectors:  <none>
    Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                     node.kubernetes.io/unreachable:NoExecute for 300s
    Events:
      Type     Reason            Age                From               Message
      ----     ------            ----               ----               -------
      Warning  FailedScheduling  46s (x2 over 46s)  default-scheduler  0/2 nodes are available: 1 Insufficient cpu, 1 Insufficient memory, 1 Insufficient pods, 1 node(s) had taints that the pod didn't tolerate.
    

    Since when has it been failing:

    Reason for failure:

    Anything else we need to know:

    1. I set a taint on the primary node: node-role.kubernetes.io/master=:NoSchedule, which prevents any pods from being scheduled onto the master node; I am not sure if this is correct.
    2. The edge node does not do anything other than pulling the edgecore image and copying the edge certs into the /etc/kubeedge/certs folder. Is there anything I missed?
    3. The edge deploy file has no information about the node selector. When there are multiple edge nodes, how is it deployed to the correct node?
    4. Have you completed the feature for connecting to the edge from the cloud? If not, how does the cloud deploy pods to the edge?
  • Bump ginkgo from v1 to v2

    Bump ginkgo from v1 to v2

    What type of PR is this?

    Add one of the following kinds: /kind feature /kind test

    What this PR does / why we need it:

    Which issue(s) this PR fixes:

    Fixes #3829

    Special notes for your reviewer: This PR includes:

    1. Upgrades vendor, go.mod and go.sum, and replaces "github.com/onsi/ginkgo" with "github.com/onsi/ginkgo/v2".

    2. Replaces the deprecated CurrentGinkgoTestDescription with CurrentSpecReport, as suggested in the Migration Guide.

    3. Refactors the ginkgo.Measure specs and implements them with gmeasure.Experiment, as suggested in the Migration Guide (see the sketch after this list).

    4. Revises the ginkgo version in scripts and GitHub Actions.
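
    For reference, a minimal sketch of the gmeasure.Experiment pattern from the Ginkgo v2 migration guide; the spec name and measured operation below are hypothetical and not taken from this PR's diff.

    package sample_test

    import (
        "time"

        . "github.com/onsi/ginkgo/v2"
        "github.com/onsi/gomega/gmeasure"
    )

    var _ = Describe("sync performance", func() {
        It("samples how long a sync-like operation takes", func() {
            // Create an experiment and attach its results to this spec's report.
            experiment := gmeasure.NewExperiment("sync duration")
            AddReportEntry(experiment.Name, experiment)

            // Run the operation 10 times and record each duration under the name "sync".
            experiment.Sample(func(idx int) {
                experiment.MeasureDuration("sync", func() {
                    time.Sleep(10 * time.Millisecond) // stand-in for the real work being measured
                })
            }, gmeasure.SamplingConfig{N: 10})
        })
    })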

    Does this PR introduce a user-facing change?:

    
    
  • Pods are not leaving pending state

    Pods are not leaving pending state

    What happened: The pods are not leaving pending state on the cloud.

    What you expected to happen: The pod running on the edge node

    How to reproduce it (as minimally and precisely as possible): I have two virtual machines: one acting as the cloud, the other acting as the edge. On the cloud: follow the instructions in the readme. On the edge: follow the instructions in the readme. Then I executed on the cloud side: kubectl apply -f $GOPATH/src/github.com/kubeedge/kubeedge/build/deployment.yaml

    Anything else we need to know?: I have tested the edge setup with make edge_integration_test (all tests passed). The edge node state is Ready. kubectl describe nginx-deployment output: output-kubectl-describe.txt

    Environment:

    • KubeEdge version: 1233e7643b25a81b670fe1bb85a8a93d58d3a163
    • Hardware configuration: Mem: 7.6G; 2 CPU
    • OS (e.g. from /etc/os-release): os-release.txt
    • Kernel (e.g. uname -a): Linux node 4.9.0-8-amd64 #1 SMP Debian 4.9.144-3.1 (2019-02-19) x86_64 GNU/Linux
    • Others: VM running with libvirt/qemu

    Do you need any more information?

  • CSI driver fixes

    CSI driver fixes

    What type of PR is this? /kind bug

    What this PR does / why we need it: This tries to fix some of the problems of the csi implementation.

    Which issue(s) this PR fixes:

    Fixes #2088

    Special notes for your reviewer: These are some of the fixes for the problems that I've found, but not all. I will continue working on this to make the CSI driver work better.

    Does this PR introduce a user-facing change?:

    NONE
    
  • Edge node stays NotReady

    Edge node stays NotReady

    What happened: My edge node shows status NotReady. (screenshot)

    What you expected to happen: My edge node should be Ready. How to reproduce it (as minimally and precisely as possible):

    Anything else we need to know?: I post my logs below. cloudcore.log: (screenshot); edgecore.log: (screenshot); and my edge config file: (screenshot)

    Environment:

    • KubeEdge version: v1.1.0
    • Hardware configuration: amd64
    • OS (e.g. from /etc/os-release): ubuntu 16
    • Kernel (e.g. uname -a):
    • Others:
  • Fix: clean containers after `keadm reset`

    Fix: clean containers after `keadm reset`

    What type of PR is this?

    /kind bug /kind design

    What this PR does / why we need it:

    We need to clean up stale containers to avoid resource occupation, which may cause memory leaks or similar issues.
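
    For context, a rough sketch (not this PR's actual implementation) of how leftover pod containers could be cleaned up with the Docker SDK; filtering on the "io.kubernetes.pod.name" label is an assumption about how pod containers are tagged.

    package main

    import (
        "context"
        "fmt"

        "github.com/docker/docker/api/types"
        "github.com/docker/docker/api/types/filters"
        "github.com/docker/docker/client"
    )

    func main() {
        ctx := context.Background()
        cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
        if err != nil {
            panic(err)
        }

        // List all containers (running or stopped) that were created for pods.
        containers, err := cli.ContainerList(ctx, types.ContainerListOptions{
            All:     true,
            Filters: filters.NewArgs(filters.Arg("label", "io.kubernetes.pod.name")),
        })
        if err != nil {
            panic(err)
        }

        // Force-remove each one so no stale containers keep holding resources after reset.
        for _, c := range containers {
            if err := cli.ContainerRemove(ctx, c.ID, types.ContainerRemoveOptions{Force: true}); err != nil {
                fmt.Printf("failed to remove %s: %v\n", c.ID[:12], err)
                continue
            }
            fmt.Printf("removed container %s (%s)\n", c.ID[:12], c.Image)
        }
    }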

    You can see the details in the following issue.

    Which issue(s) this PR fixes:

    Fixes #1973

    Special notes for your reviewer:

    Does this PR introduce a user-facing change?:

    NONE
    
  • Add maxPods/CPU/MEM SystemReservedResource configuration item

    Add maxPods/CPU/MEM SystemReservedResource configuration item

    What type of PR is this? /kind feature

    What this PR does / why we need it: Add maxPods/CPU/MEM SystemReservedResource configuration items.

    Which issue(s) this PR fixes: Fixes #1832

    Special notes for your reviewer: Changes some API items.

    Does this PR introduce a user-facing change?: NONE

    
    
  • Failed to mount configmap/secret volume because of "no such file or directory"

    Failed to mount configmap/secret volume because of "no such file or directory"

    What happened: Failed to mount a configmap/secret volume because of "no such file or directory", although we can be sure that the related resources are present in the SQLite database.

    I0527 10:35:29.789719     660 edged_volumes.go:54] Using volume plugin "kubernetes.io/empty-dir" to mount wrapped_kube-proxy
    I0527 10:35:29.800195     660 process.go:685] get a message {Header:{ID:8b57a409-25c9-454e-a9ae-b23f0b1861a9 ParentID: Timestamp:1590546929789 ResourceVersion: Sync:true} Router:{Source:edged Group:meta Operation:query Resource:kube-system/configmap/kube-proxy} Content:<nil>}
    I0527 10:35:29.800543     660 metaclient.go:121] send sync message kube-system/configmap/kube-proxy successed and response: {{ab5f3aab-11ff-48cf-8c3b-c5ded97678db 8b57a409-25c9-454e-a9ae-b23f0b1861a9 1590546929800  false} {metaManager meta response kube-system/configmap/kube-proxy} [{"data":{"config.conf":"apiVersion: kubeproxy.config.k8s.io/v1alpha1\nbindAddress: 0.0.0.0\nclientConnection:\n  acceptContentTypes: \"\"\n  burst: 0\n  contentType: \"\"\n  kubeconfig: /var/lib/kube-proxy/kubeconfig.conf\n  qps: 0\nclusterCIDR: 192.168.0.0/16\nconfigSyncPeriod: 0s\nconntrack:\n  maxPerCore: null\n  min: null\n  tcpCloseWaitTimeout: null\n  tcpEstablishedTimeout: null\nenableProfiling: false\nhealthzBindAddress: \"\"\nhostnameOverride: \"\"\niptables:\n  masqueradeAll: false\n  masqueradeBit: null\n  minSyncPeriod: 0s\n  syncPeriod: 0s\nipvs:\n  excludeCIDRs: null\n  minSyncPeriod: 0s\n  scheduler: \"\"\n  strictARP: false\n  syncPeriod: 0s\nkind: KubeProxyConfiguration\nmetricsBindAddress: \"\"\nmode: \"\"\nnodePortAddresses: null\noomScoreAdj: null\nportRange: \"\"\nudpIdleTimeout: 0s\nwinkernel:\n  enableDSR: false\n  networkName: \"\"\n  sourceVip: \"\"","kubeconfig.conf":"apiVersion: v1\nkind: Config\nclusters:\n- cluster:\n    certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt\n    server: https://10.10.102.78:6443\n  name: default\ncontexts:\n- context:\n    cluster: default\n    namespace: default\n    user: default\n  name: default\ncurrent-context: default\nusers:\n- name: default\n  user:\n    tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token"},"metadata":{"creationTimestamp":"2020-04-21T14:50:46Z","labels":{"app":"kube-proxy"},"name":"kube-proxy","namespace":"kube-system","resourceVersion":"193","selfLink":"/api/v1/namespaces/kube-system/configmaps/kube-proxy","uid":"5651c863-c755-4da4-8039-b251efc82470"}}]}
    E0527 10:35:29.800949     660 configmap.go:249] Error creating atomic writer: stat /var/lib/edged/pods/25e6f0ea-6364-4bcc-9937-9760b6ec956a/volumes/kubernetes.io~configmap/kube-proxy: no such file or directory
    W0527 10:35:29.801070     660 empty_dir.go:392] Warning: Unmount skipped because path does not exist: /var/lib/edged/pods/25e6f0ea-6364-4bcc-9937-9760b6ec956a/volumes/kubernetes.io~configmap/kube-proxy
    I0527 10:35:29.801109     660 record.go:24] Warning FailedMount MountVolume.SetUp failed for volume "kube-proxy" : stat /var/lib/edged/pods/25e6f0ea-6364-4bcc-9937-9760b6ec956a/volumes/kubernetes.io~configmap/kube-proxy: no such file or directory
    E0527 10:35:29.801199     660 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/configmap/25e6f0ea-6364-4bcc-9937-9760b6ec956a-kube-proxy\" (\"25e6f0ea-6364-4bcc-9937-9760b6ec956a\")" failed. No retries permitted until 2020-05-27 10:37:31.80112802 +0800 CST m=+2599.727653327 (durationBeforeRetry 2m2s). Error: "MountVolume.SetUp failed for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/25e6f0ea-6364-4bcc-9937-9760b6ec956a-kube-proxy\") pod \"kube-proxy-gbdgw\" (UID: \"25e6f0ea-6364-4bcc-9937-9760b6ec956a\") : stat /var/lib/edged/pods/25e6f0ea-6364-4bcc-9937-9760b6ec956a/volumes/kubernetes.io~configmap/kube-proxy: no such file or directory"
    

    What you expected to happen: mount successfully. How to reproduce it (as minimally and precisely as possible): Sorry, I cannot provide a way to reproduce it. Anything else we need to know?:

    Environment:

    • KubeEdge version(e.g. cloudcore/edgecore --version): v1.3.0
  • Move Docs to website repository and proposals to enhancements repository.

    Move Docs to website repository and proposals to enhancements repository.

    What would you like to be added: Move docs to website repository and proposals to enhancement repository.

    Why is this needed: The hyperlinks in the docs use .html links, which do not redirect to the correct page when browsing the docs in the GitHub repository. Also, moving proposals to the enhancements repository and docs to the website repository will help separate and simplify the management/review of source code, documentation and enhancement proposals.

    Thoughts ?? @kevin-wangzefeng @rohitsardesai83 @m1093782566 @CindyXing @qizha

  • add an unusual case on kind to resource conversion

    add an unusual case on kind to resource conversion

    What type of PR is this? /kind bug

    What this PR does / why we need it: It adds handling for an unusual Kind-to-resource conversion. In #2769 , we found that for resources like Gateway, the corresponding resource name should be gateways instead of gatewaies. Another example is a resource composed of multiple words, such as ServiceEntry. These are special cases that need to be handled specially, so I created a crdmap to record the resource-kind relationship of CRDs (see the sketch below).
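
    For illustration, a hypothetical sketch (not the PR's actual code) of a crdmap-style lookup with a naive pluralization fallback:

    package main

    import (
        "fmt"
        "strings"
    )

    // crdKindToResource records special-case Kind -> resource mappings for CRDs
    // whose plural names do not follow naive pluralization.
    var crdKindToResource = map[string]string{
        "Gateway":      "gateways",       // a naive "y -> ies" rule would give "gatewaies"
        "ServiceEntry": "serviceentries", // multi-word kind, lower-cased and pluralized
    }

    // kindToResource uses the override table first and falls back to a naive rule.
    func kindToResource(kind string) string {
        if r, ok := crdKindToResource[kind]; ok {
            return r
        }
        lower := strings.ToLower(kind)
        if strings.HasSuffix(lower, "y") {
            return strings.TrimSuffix(lower, "y") + "ies"
        }
        return lower + "s"
    }

    func main() {
        for _, k := range []string{"Gateway", "ServiceEntry", "Pod"} {
            fmt.Printf("%s -> %s\n", k, kindToResource(k))
        }
    }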

    Which issue(s) this PR fixes: Fixes #2769

    Special notes for your reviewer: none

    Does this PR introduce a user-facing change?: none

  • Unable to log in to the host after installing the edge node

    Unable to log in to the host after installing the edge node

    What happened: 1. After installing according to the official deployment guide (the documentation is poorly written), the edge side could not connect to the cloud; the cloud-side Service did not expose the port at all, so the edge could not connect, and I had to manually change the port in the edge config file. 2. After deploying the edge side, the virtual machine can no longer be logged into; it gets stuck at the login prompt forever. (screenshot)

    3. On the edge machine, the reboot and shutdown commands no longer work. (screenshot)

    Environment: k8s version: 1.23.5 kubeedge version: 1.12.1 os: ubuntu-20.04.2 (screenshots)

  • Optimize the comparison mode of bool judgment

    Optimize the comparison mode of bool judgment

    Signed-off-by: Fish-pro [email protected]

    What type of PR is this?

    /kind cleanup

    What this PR does / why we need it:

    Which issue(s) this PR fixes:

    Fixes #

    Special notes for your reviewer:

    Does this PR introduce a user-facing change?:

    NONE
    
  • Use errors.Is to check for a specific error

    Use errors.Is to check for a specific error

    Signed-off-by: Fish-pro [email protected]

    What type of PR is this?

    /kind cleanup

    What this PR does / why we need it:

    Comparing with == will fail on wrapped errors; use errors.Is to check for a specific error (see the example below).
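
    For illustration, a minimal example (not taken from this PR's diff) of why == fails on wrapped errors while errors.Is succeeds:

    package main

    import (
        "errors"
        "fmt"
        "io/fs"
        "os"
    )

    func main() {
        _, err := os.Open("/definitely/does/not/exist")

        // os.Open returns a *fs.PathError wrapping fs.ErrNotExist, so a direct
        // comparison with the sentinel value is false.
        fmt.Println("== comparison:       ", err == fs.ErrNotExist) // false

        // errors.Is unwraps the error chain and finds the sentinel.
        fmt.Println("errors.Is comparison:", errors.Is(err, fs.ErrNotExist)) // true
    }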

    Which issue(s) this PR fixes:

    Fixes #

    Special notes for your reviewer:

    Does this PR introduce a user-facing change?:

    NONE
    
  • edge node twin  process failure, resource not found

    edge node twin process failure, resource not found

    W0104 11:41:14.564066 1 upstream.go:218] parse message: f3338ddb-88e5-4185-a131-21022c587893 resource type with error, message resource: node/edge, err: resource type not found
    I0104 11:41:14.564116 1 message_handler.go:122] edge node edge for project e632aba927ea4ac2b575ec1603d56f10 connected
    I0104 11:41:14.564219 1 node_session.go:136] Start session for edge node edge
    I0104 11:41:14.573279 1 upstream.go:89] Dispatch message: fd1bb4bb-a634-4b37-b62b-1199ed974814
    I0104 11:41:14.573302 1 upstream.go:96] Message: fd1bb4bb-a634-4b37-b62b-1199ed974814, resource type is: membership/detail
    E0104 11:41:15.364230 1 upstream.go:1228] Query lease %s failed, error: %vedgelease.coordination.k8s.io "edge" not found
    W0104 11:41:15.438054 1 upstream.go:674] message: 222ad6d5-c857-435c-ac0b-ebe37c373a89 process failure, resource not found, namespace: default, name: edge
    W0104 11:41:15.440159 1 upstream.go:674] message: 71433214-03af-447c-bb8f-4440428b4b48 process failure, resource not found, namespace: default, name: edge
    I0104 11:43:14.456501 1 upstream.go:89] Dispatch message: fd1bb4bb-a634-4b37-b62b-1199ed974814
    I0104 11:43:14.456528 1 upstream.go:96] Message: fd1bb4bb-a634-4b37-b62b-1199ed974814, resource type is: membership/detail
    I0104 11:44:14.464646 1 upstream.go:89] Dispatch message: fd1bb4bb-a634-4b37-b62b-1199ed974814
    I0104 11:44:14.464665 1 upstream.go:96] Message: fd1bb4bb-a634-4b37-b62b-1199ed974814, resource type is: membership/detail
    I0104 11:45:14.785977 1 upstream.go:89] Dispatch message: fd1bb4bb-a634-4b37-b62b-1199ed974814
    I0104 11:45:14.786007 1 upstream.go:96] Message: fd1bb4bb-a634-4b37-b62b-1199ed974814, resource type is: membership/detail
    I0104 11:46:14.478000 1 upstream.go:89] Dispatch message: fd1bb4bb-a634-4b37-b62b-1199ed974814
    I0104 11:46:14.478025 1 upstream.go:96] Message: fd1bb4bb-a634-4b37-b62b-1199ed974814, resource type is: membership/detail

    root@cloud:/home/leapfive# kubectl get nodes
    NAME    STATUS   ROLES                  AGE     VERSION
    cloud   Ready    control-plane,master   21d     v1.22.0
    edge    Ready    agent,edge             7m55s   v1.22.6-kubeedge-v1.12.1-2+c02cdf50c550ed
    gpu     Ready                           22h     v1.22.0

    root@cloud:/home/leapfive# kubectl get pod
    NAME                                    READY   STATUS    RESTARTS      AGE
    kubeedge-counter-app-6f88f7cb5c-cvn6d   1/1     Running   1 (25h ago)   20d
    kubeedge-led-app-85549cf568-6dqk4       1/1     Running   1 (25h ago)   5d1h
    kubeedge-nb2-led-5cb7cfc6bb-fdc8k       1/1     Running   0             17m
    kubeedge-pi-counter-68c86dc747-hd2wt    1/1     Running   0             17m

    root@NB2-SOC-BSP-ALPHA-V1:~# docker ps
    CONTAINER ID   IMAGE                 COMMAND                  CREATED         STATUS         PORTS   NAMES
    9dfe71928c4a   1b4c79e285ab          "/nb2-led nb2-led"       8 minutes ago   Up 8 minutes           k8s_kubeedge-nb2-led_kubeedge-nb2-led-5cb7cfc6bb-fdc8k_default_621c2c67-6f7f-465f-90b4-68fc6c1c7d69_0
    e94d05db7c2f   65768280e31d          "/pi-counter-app pi-…"   8 minutes ago   Up 8 minutes           k8s_kubeedge-pi-counter_kubeedge-pi-counter-68c86dc747-hd2wt_default_eb957e4d-5ce3-4e27-9b42-b79c30bd02c7_0
    6488ae05d67c   carlosedp/pause:3.1   "/pause"                 8 minutes ago   Up 8 minutes           k8s_POD_kubeedge-pi-counter-68c86dc747-hd2wt_default_eb957e4d-5ce3-4e27-9b42-b79c30bd02c7_0
    e88199938d1b   carlosedp/pause:3.1   "/pause"                 8 minutes ago   Up 8 minutes           k8s_POD_kubeedge-nb2-led-5cb7cfc6bb-fdc8k_default_621c2c67-6f7f-465f-90b4-68fc6c1c7d69_0
    17edf3744aa8   carlosedp/pause:3.1   "/pause"                 8 minutes ago   Up 8 minutes           k8s_POD_nvidia-device-plugin-daemonset-rskll_kube-system_e8c62219-cc28-4c72-b98b-c7d29a5fca75_0
    746b6f50822c   carlosedp/pause:3.1   "/pause"                 8 minutes ago   Up 8 minutes           k8s_POD_kube-flannel-ds-pbrmc_kube-flannel_3b9d8fe1-1c8d-4f90-9ed8-162c87139ce5_0
    61c26206ac51   carlosedp/pause:3.1   "/pause"                 8 minutes ago   Up 8 minutes           k8s_POD_kube-proxy-mhmmp_kube-system_e997285c-2a1f-4612-a071-6557dca29d4a_0

  • KubeEdge WSL2 with systemd support

    KubeEdge WSL2 with systemd support

    What happened: Background: Windows recently enhanced its Windows Subsystem for Linux (WSL2) so that it fully supports systemd. Docker, virtualization in general, and Kubernetes clusters such as kind, minikube and microk8s now work with it. I therefore tested out of curiosity whether KubeEdge now works out of the box with WSL2 after enabling systemd. Situation: I can successfully join WSL2 machines to my K8s cluster as edge nodes, and they are also in Ready state.

    $ kubectl get nodes
    bagel    Ready      control-plane,master   57d    v1.22.0
    mammut   Ready      agent,edge             114m   v1.22.6-kubeedge-v1.11.1
    

    However, the pods get stuck in ContainerCannotRun status after scheduling. For every pod there is almost the same error message with the following structure:

    Reason:       ContainerCannotRun
    Message:      xyz is mounted on xyz but it is not a shared or slave mount
    
    This is an example description of a pod that is stuck in the ContainerCannotRun state:
    k describe nginx-main-575658f585-9n2wz
    error: the server doesn't have a resource type "nginx-main-575658f585-9n2wz"
    ➜  ~ kubectl describe pod nginx-main-575658f585-9n2wz
    Name:             nginx-main-575658f585-9n2wz
    Namespace:        default
    Priority:         0
    Service Account:  default
    Node:             mammut/172.19.86.74
    Start Time:       Tue, 03 Jan 2023 17:22:03 +0100
    Labels:           app=nginx-main
                      pod-template-hash=575658f585
    Annotations:      <none>
    Status:           Running
    IP:               172.17.0.7
    IPs:
      IP:           172.17.0.7
    Controlled By:  ReplicaSet/nginx-main-575658f585
    Containers:
      nginx:
        Container ID:   docker://2053e6a2203c95f057169908e1cadcb536f176e1f68d327eafd4b3e7aef460c2
        Image:          nginx
        Image ID:       docker-pullable://nginx@sha256:0047b729188a15da49380d9506d65959cce6d40291ccfb4e039f5dc7efd33286
        Port:           <none>
        Host Port:      <none>
        State:          Terminated
          Reason:       ContainerCannotRun
          Message:      path /var/lib/edged/pods/5e3a126c-b834-48cb-90d9-77d9fdd0c0e1/volumes/kubernetes.io~projected/kube-api-access-jzfvz is mounted on /var/lib/edged/pods/5e3a126c-b834-48cb-90d9-77d9fdd0c0e1/volumes/kubernetes.io~projected/kube-api-access-jzfvz but it is not a shared or slave mount
          Exit Code:    128
          Started:      Tue, 03 Jan 2023 17:57:54 +0100
          Finished:     Tue, 03 Jan 2023 17:57:54 +0100
        Last State:     Terminated
          Reason:       ContainerCannotRun
          Message:      path /var/lib/edged/pods/5e3a126c-b834-48cb-90d9-77d9fdd0c0e1/volumes/kubernetes.io~projected/kube-api-access-jzfvz is mounted on /var/lib/edged/pods/5e3a126c-b834-48cb-90d9-77d9fdd0c0e1/volumes/kubernetes.io~projected/kube-api-access-jzfvz but it is not a shared or slave mount
          Exit Code:    128
          Started:      Tue, 03 Jan 2023 17:57:54 +0100
          Finished:     Tue, 03 Jan 2023 17:57:54 +0100
        Ready:          False
        Restart Count:  11
        Environment:    <none>
        Mounts:
          /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jzfvz (ro)
    Conditions:
      Type           Status
      Initialized    True
      Ready          False
      PodScheduled   True
    Volumes:
      kube-api-access-jzfvz:
        Type:                    Projected (a volume that contains injected data from multiple sources)
        TokenExpirationSeconds:  3607
        ConfigMapName:           kube-root-ca.crt
        ConfigMapOptional:       <nil>
        DownwardAPI:             true
    QoS Class:                   BestEffort
    Node-Selectors:              ip=172.19.86.74
    Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
    Events:
      Type    Reason     Age   From               Message
      ----    ------     ----  ----               -------
      Normal  Scheduled  39m   default-scheduler  Successfully assigned default/nginx-main-575658f585-9n2wz to mammut
    
    

    What you expected to happen: I expect the pods to turn into Running state.

    How to reproduce it (as minimally and precisely as possible): Setup:

    • Windows 11 machine
    • enable WSL and Hyper-V in Windows (Activate Windows Features -> tick Hyper-V / Hyper-V Platform and Windows Subsystem for Linux)
    • Install WSL either via the Microsoft Store or manually and update to newest WSL version to get systemd support
    • Edit: /etc/wsl.conf
    [boot]
    systemd=true
    
    • restart WSL (full shutdown of WSL via wsl --shutdown from Powershell and start WSL)
    • install all kinds of tools (docker, kubectl, keadm, etc.)
    • setup K8s cluster on another machine

    Anything else we need to know?: I run a K8s cluster with version 1.22.0 with KubeEdge 1.11.1. I guess there's a rather easy fix to this, as K8s in general works on WSL2 now. I used the Helm chart to deploy the KubeEdge CloudCore. I'm not even sure whether any changes are required to KubeEdge or if it's just networking-related. I'm willing to work on this issue if it's a rather easy fix and someone knows potential reasons for this error. It would be great, however, to discuss this in detail.

    By default the IP address of the WSL machine is not the same as the IP from Windows!

    journalctl logs:
    ion: Sync:true MessageType:} Router:{Source:edged Destination: Group:meta Operation:query Resource:longhorn-system/secret/longhorn-grpc-tls} Content:<nil>}], resp[{Header:{ID: ParentID: Timestamp:0 ResourceVer
    sion: Sync:false MessageType:} Router:{Source: Destination: Group: Operation: Resource:} Content:<nil>}], err[timeout to get response for message 4265af07-91e4-4a1c-a3d2-2593fe9076b6]
    Jan 03 18:53:57 mammut edgecore[47191]: E0103 18:53:57.559320   47191 process.go:298] remote query failed: timeout to get response for message 4265af07-91e4-4a1c-a3d2-2593fe9076b6
    Jan 03 18:53:57 mammut edgecore[47191]: W0103 18:53:57.559365   47191 context_channel.go:159] Get bad anonName: when sendresp message, do nothing
    Jan 03 18:53:57 mammut edgecore[47191]: E0103 18:53:57.559416   47191 metaclient.go:112] send sync message longhorn-system/secret/longhorn-grpc-tls failed, error:timeout to get response for message 85a58ef0-34
    c0-49e3-b090-45c94c0ba189, retries: 2
    Jan 03 18:53:57 mammut edgecore[47191]: E0103 18:53:57.559480   47191 secret.go:195] Couldn't get secret longhorn-system/longhorn-grpc-tls: get secret from metaManager failed, err: timed out waiting for the co
    ndition
    Jan 03 18:53:57 mammut edgecore[47191]: I0103 18:53:57.559527   47191 record.go:24] Warning FailedMount MountVolume.SetUp failed for volume "longhorn-grpc-tls" : get secret from metaManager failed, err: timed
    out waiting for the condition
    Jan 03 18:53:57 mammut edgecore[47191]: E0103 18:53:57.559598   47191 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/secret/c7ee4d3d-85c6-4cd4-bbbf-5c05b1b0a554-longhorn-grpc-tls podName:c7ee4d3d-85c6-4cd4-bbbf-5c05b1b0a554 nodeName:}" failed. No retries permitted until 2023-01-03 18:55:59.559556778 +0100 CET m=+6423.745431357 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "longhorn-grpc-tls" (UniqueName: "kubernetes.io/secret/c7ee4d3d-85c6-4cd4-bbbf-5c05b1b0a554-longhorn-grpc-tls") pod "longhorn-manager-tvhj8" (UID: "c7ee4d3d-85c6-4cd4-bbbf-5c05b1b0a554") : get secret from metaManager failed, err: timed out waiting for the condition
    Jan 03 18:53:57 mammut edgecore[47191]: I0103 18:53:57.603567   47191 edged.go:992] worker [3] get pod addition item [edgemesh-agent-ht6mq]
    Jan 03 18:53:57 mammut edgecore[47191]: E0103 18:53:57.603593   47191 edged.go:995] consume pod addition backoff: Back-off consume pod [edgemesh-agent-ht6mq] addition  error, backoff: [5m0s]
    Jan 03 18:53:57 mammut edgecore[47191]: I0103 18:53:57.603605   47191 edged.go:997] worker [3] backoff pod addition item [edgemesh-agent-ht6mq] failed, re-add to queue
    Jan 03 18:53:57 mammut edgecore[47191]: I0103 18:53:57.605996   47191 edged.go:992] worker [2] get pod addition item [nginx-main-575658f585-9n2wz]
    Jan 03 18:53:57 mammut edgecore[47191]: E0103 18:53:57.606017   47191 edged.go:995] consume pod addition backoff: Back-off consume pod [nginx-main-575658f585-9n2wz] addition  error, backoff: [5m0s]
    Jan 03 18:53:57 mammut edgecore[47191]: I0103 18:53:57.606036   47191 edged.go:997] worker [2] backoff pod addition item [nginx-main-575658f585-9n2wz] failed, re-add to queue
    Jan 03 18:53:57 mammut edgecore[47191]: I0103 18:53:57.638294   47191 edged.go:992] worker [4] get pod addition item [edgemesh-agent-ht6mq]
    Jan 03 18:53:57 mammut edgecore[47191]: E0103 18:53:57.638341   47191 edged.go:995] consume pod addition backoff: Back-off consume pod [edgemesh-agent-ht6mq] addition  error, backoff: [5m0s]
    Jan 03 18:53:57 mammut edgecore[47191]: I0103 18:53:57.638354   47191 edged.go:997] worker [4] backoff pod addition item [edgemesh-agent-ht6mq] failed, re-add to queue
    Jan 03 18:53:57 mammut edgecore[47191]: I0103 18:53:57.967442   47191 edged.go:992] worker [0] get pod addition item [edgemesh-agent-ht6mq]
    Jan 03 18:53:57 mammut edgecore[47191]: E0103 18:53:57.967552   47191 edged.go:995] consume pod addition backoff: Back-off consume pod [edgemesh-agent-ht6mq] addition  error, backoff: [5m0s]
    Jan 03 18:53:57 mammut edgecore[47191]: I0103 18:53:57.967642   47191 edged.go:997] worker [0] backoff pod addition item [edgemesh-agent-ht6mq] failed, re-add to queue
    Jan 03 18:53:57 mammut edgecore[47191]: I0103 18:53:57.973585   47191 edged.go:992] worker [1] get pod addition item [nginx-main-575658f585-9n2wz]
    Jan 03 18:53:57 mammut edgecore[47191]: E0103 18:53:57.973699   47191 edged.go:995] consume pod addition backoff: Back-off consume pod [nginx-main-575658f585-9n2wz] addition  error, backoff: [5m0s]
    Jan 03 18:53:57 mammut edgecore[47191]: I0103 18:53:57.973797   47191 edged.go:997] worker [1] backoff pod addition item [nginx-main-575658f585-9n2wz] failed, re-add to queue
    Jan 03 18:53:58 mammut edgecore[47191]: I0103 18:53:58.037770   47191 edged.go:992] worker [3] get pod addition item [edgemesh-agent-ht6mq]
    Jan 03 18:53:58 mammut edgecore[47191]: E0103 18:53:58.037794   47191 edged.go:995] consume pod addition backoff: Back-off consume pod [edgemesh-agent-ht6mq] addition  error, backoff: [5m0s]
    Jan 03 18:53:58 mammut edgecore[47191]: I0103 18:53:58.037805   47191 edged.go:997] worker [3] backoff pod addition item [edgemesh-agent-ht6mq] failed, re-add to queue
    Jan 03 18:53:58 mammut edgecore[47191]: W0103 18:53:58.127940   47191 context_channel.go:159] Get bad anonName:3568dcf8-c100-46b7-b502-527e64c73954 when sendresp message, do nothing
    Jan 03 18:53:58 mammut edgecore[47191]: I0103 18:53:58.344926   47191 edged.go:992] worker [2] get pod addition item [edgemesh-agent-ht6mq]
    Jan 03 18:53:58 mammut edgecore[47191]: E0103 18:53:58.344960   47191 edged.go:995] consume pod addition backoff: Back-off consume pod [edgemesh-agent-ht6mq] addition  error, backoff: [5m0s]
    Jan 03 18:53:58 mammut edgecore[47191]: I0103 18:53:58.344980   47191 edged.go:997] worker [2] backoff pod addition item [edgemesh-agent-ht6mq] failed, re-add to queue
    Jan 03 18:53:58 mammut edgecore[47191]: I0103 18:53:58.349313   47191 edged.go:992] worker [4] get pod addition item [nginx-main-575658f585-9n2wz]
    Jan 03 18:53:58 mammut edgecore[47191]: E0103 18:53:58.349333   47191 edged.go:995] consume pod addition backoff: Back-off consume pod [nginx-main-575658f585-9n2wz] addition  error, backoff: [5m0s]
    Jan 03 18:53:58 mammut edgecore[47191]: I0103 18:53:58.349341   47191 edged.go:997] worker [4] backoff pod addition item [nginx-main-575658f585-9n2wz] failed, re-add to queue
    Jan 03 18:53:58 mammut edgecore[47191]: I0103 18:53:58.379335   47191 edged.go:992] worker [0] get pod addition item [edgemesh-agent-ht6mq]
    Jan 03 18:53:58 mammut edgecore[47191]: E0103 18:53:58.379360   47191 edged.go:995] consume pod addition backoff: Back-off consume pod [edgemesh-agent-ht6mq] addition  error, backoff: [5m0s]
    Jan 03 18:53:58 mammut edgecore[47191]: I0103 18:53:58.379370   47191 edged.go:997] worker [0] backoff pod addition item [edgemesh-agent-ht6mq] failed, re-add to queue
    Jan 03 18:53:58 mammut edgecore[47191]: I0103 18:53:58.569273   47191 edged.go:992] worker [1] get pod addition item [nginx-main-575658f585-9n2wz]
    Jan 03 18:53:58 mammut edgecore[47191]: E0103 18:53:58.569324   47191 edged.go:995] consume pod addition backoff: Back-off consume pod [nginx-main-575658f585-9n2wz] addition  error, backoff: [5m0s]
    Jan 03 18:53:58 mammut edgecore[47191]: I0103 18:53:58.569338   47191 edged.go:997] worker [1] backoff pod addition item [nginx-main-575658f585-9n2wz] failed, re-add to queue
    Jan 03 18:53:58 mammut edgecore[47191]: I0103 18:53:58.768206   47191 edged.go:992] worker [3] get pod addition item [edgemesh-agent-ht6mq]
    Jan 03 18:53:58 mammut edgecore[47191]: E0103 18:53:58.768249   47191 edged.go:995] consume pod addition backoff: Back-off consume pod [edgemesh-agent-ht6mq] addition  error, backoff: [5m0s]
    Jan 03 18:53:58 mammut edgecore[47191]: I0103 18:53:58.768283   47191 edged.go:997] worker [3] backoff pod addition item [edgemesh-agent-ht6mq] failed, re-add to queue
    Jan 03 18:53:58 mammut edgecore[47191]: I0103 18:53:58.788259   47191 edged.go:992] worker [2] get pod addition item [edgemesh-agent-ht6mq]
    Jan 03 18:53:58 mammut edgecore[47191]: E0103 18:53:58.788287   47191 edged.go:995] consume pod addition backoff: Back-off consume pod [edgemesh-agent-ht6mq] addition  error, backoff: [5m0s]
    Jan 03 18:53:58 mammut edgecore[47191]: I0103 18:53:58.788301   47191 edged.go:997] worker [2] backoff pod addition item [edgemesh-agent-ht6mq] failed, re-add to queue
    Jan 03 18:53:58 mammut edgecore[47191]: I0103 18:53:58.794728   47191 edged.go:992] worker [4] get pod addition item [nginx-main-575658f585-9n2wz]
    Jan 03 18:53:58 mammut edgecore[47191]: E0103 18:53:58.794752   47191 edged.go:995] consume pod addition backoff: Back-off consume pod [nginx-main-575658f585-9n2wz] addition  error, backoff: [5m0s]
    Jan 03 18:53:58 mammut edgecore[47191]: I0103 18:53:58.794779   47191 edged.go:997] worker [4] backoff pod addition item [nginx-main-575658f585-9n2wz] failed, re-add to queue
    Jan 03 18:53:58 mammut edgecore[47191]: I0103 18:53:58.972275   47191 edged.go:992] worker [0] get pod addition item [nginx-main-575658f585-9n2wz]
    Jan 03 18:53:58 mammut edgecore[47191]: E0103 18:53:58.972313   47191 edged.go:995] consume pod addition backoff: Back-off consume pod [nginx-main-575658f585-9n2wz] addition  error, backoff: [5m0s]
    
    

    Environment:

    • Kubernetes version (use kubectl version):

    • KubeEdge version(e.g. cloudcore --version and edgecore --version):

    • Cloud nodes Environment:
      • Hardware configuration (e.g. lscpu):
      • OS (e.g. cat /etc/os-release):
      • Kernel (e.g. uname -a):
      • Go version (e.g. go version):
      • Others:
    • Edge nodes Environment:

      • edgecore version (e.g. edgecore --version): KubeEdge v1.11.1
      • Hardware configuration (e.g. lscpu):
      • OS (e.g. cat /etc/os-release):
    NAME="Ubuntu"
    VERSION="20.04.3 LTS (Focal Fossa)"
    ID=ubuntu
    ID_LIKE=debian
    PRETTY_NAME="Ubuntu 20.04.3 LTS"
    VERSION_ID="20.04"
    HOME_URL="https://www.ubuntu.com/"
    SUPPORT_URL="https://help.ubuntu.com/"
    BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
    PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
    VERSION_CODENAME=focal
    UBUNTU_CODENAME=focal
    
    • Kernel (e.g. uname -a):
    Linux mammut 5.15.79.1-microsoft-standard-WSL2 #1 SMP Wed Nov 23 01:01:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
    
    • Go version (e.g. go version):
    • Others: wsl --version
    WSL version: 1.0.3.0
    Kernel version: 5.15.79.1
    WSLg version: 1.0.47
    MSRDC version: 1.2.3575
    Direct3D version: 1.606.4
    DXCore version: 10.0.25131.1002-220531-1700.rs-onecore-base2-hyp
    Windows version: 10.0.22621.963
    
  • replace default runtime docker with containerd

    replace default runtime docker with containerd

    What type of PR is this?

    What this PR does / why we need it:

    Which issue(s) this PR fixes:

    Fixes #

    Special notes for your reviewer:

    Does this PR introduce a user-facing change?:

    
    
Project Flogo is an open source ecosystem of opinionated event-driven capabilities to simplify building efficient & modern serverless functions, microservices & edge apps.

Project Flogo is an Open Source ecosystem for event-driven apps Ecosystem | Core | Flows | Streams | Flogo Rules | Go Developers | When to use Flogo |

Dec 31, 2022
A project outputs Bluetooth Low Energy (BLE) sensors data in InfluxDB line protocol format

Intro This project outputs Bluetooth Low Energy (BLE) sensors data in InfluxDB line protocol format. It integrates nicely with the Telegraf execd inpu

Apr 15, 2022
Raspberry pi project that controls jack-o-lantern via servo motor and PIR motion sensors

pumpkin-pi ?? Raspberry pi project that controls jack-o-lantern via servo motor and PIR motion sensors to simulate it "watching" you. Inspired by Ryde

Sep 13, 2022
Golang framework for robotics, drones, and the Internet of Things (IoT)

Gobot (https://gobot.io/) is a framework using the Go programming language (https://golang.org/) for robotics, physical computing, and the Internet of

Dec 29, 2022
Gobot - Golang framework for robotics, drones, and the Internet of Things (IoT)

Gobot (https://gobot.io/) is a framework using the Go programming language (https://golang.org/) for robotics, physical computing, and the Internet of Things.

Jan 8, 2023
OpenYurt - Extending your native Kubernetes to edge(project under CNCF)

openyurtio/openyurt English | 简体中文 What is NEW! Latest Release: September 26th, 2021. OpenYurt v0.5.0. Please check the CHANGELOG for details. First R

Jan 7, 2023
An edge-native container management system for edge computing

SuperEdge is an open source container management system for edge computing to manage compute resources and container applications in multiple edge regions. These resources and applications, in the current approach, are managed as one single Kubernetes cluster. A native Kubernetes cluster can be easily converted to a SuperEdge cluster.

Dec 29, 2022
A Kubernetes Native Batch System (Project under CNCF)

Volcano is a batch system built on Kubernetes. It provides a suite of mechanisms that are commonly required by many classes of batch & elastic workloa

Jan 9, 2023
Microshift is a research project that is exploring how OpenShift1 Kubernetes can be optimized for small form factor and edge computing.


Nov 1, 2021
Edge Orchestration project is to implement distributed computing between Docker Container enabled devices.

Edge Orchestration Introduction The main purpose of Edge Orchestration project is to implement distributed computing between Docker Container enabled

Dec 17, 2021
a small form factor OpenShift/Kubernetes optimized for edge computing

Microshift Microshift is OpenShift1 Kubernetes in a small form factor and optimized for edge computing. Edge devices deployed out in the field pose ve

Dec 29, 2022
🦖 Streaming-Serverless Framework for Low-latency Edge Computing applications, running atop QUIC protocol, engaging 5G technology.

YoMo YoMo is an open-source Streaming Serverless Framework for building Low-latency Edge Computing applications. Built atop QUIC Transport Protocol an

Dec 29, 2022
Zero - If Google Drive says that 1 is under copyright, 0 must be under copyleft

zero Zero under copyleft license Google Drive's copyright detector says that fil

May 16, 2022
Provide cloud-edge message synergy solutions for companies and individuals.the cloud-edge message system based on NATS.

Swarm This project is a cloud-edge synergy solution based on NATS. quikly deploy cloud deploy on k8s #pull the project. git clone https://github.com/g

Jan 11, 2022
dockin ops is a project used to handle the exec request for kubernetes under supervision

Dockin Ops - Dockin Operation service English | 中文 Dockin operation and maintenance management system is a safe operation and maintenance management s

Aug 12, 2022
🐻 The Universal Service Mesh. CNCF Sandbox Project.

Kuma is a modern Envoy-based service mesh that can run on every cloud, in a single or multi-zone capacity, across both Kubernetes and VMs. Thanks to i

Aug 10, 2021
🐻 The Universal Service Mesh. CNCF Sandbox Project.

Kuma is a modern Envoy-based service mesh that can run on every cloud, in a single or multi-zone capacity, across both Kubernetes and VMs. Thanks to i

Jan 8, 2023
MOSN is a cloud native proxy for edge or service mesh. https://mosn.io

中文 MOSN is a network proxy written in Golang. It can be used as a cloud-native network data plane, providing services with the following proxy functio

Dec 30, 2022
MatrixOne is a planet scale, cloud-edge native big data engine crafted for heterogeneous workloads.

What is MatrixOne? MatrixOne is a planet scale, cloud-edge native big data engine crafted for heterogeneous workloads. It provides an end-to-end data

Dec 26, 2022
An easy-to-use Map Reduce Go parallel-computing framework inspired by 2021 6.824 lab1. It supports multiple workers on a single machine right now.

MapReduce This is an easy-to-use Map Reduce Go framework inspired by 2021 6.824 lab1. Feature Multiple workers on single machine right now. Easy to pa

Dec 5, 2022