OpenYurt - Extending your native Kubernetes to edge (project under CNCF)

openyurtio/openyurt



English | 简体中文

What is NEW!
Latest Release: September 26th, 2021. OpenYurt v0.5.0. Please check the CHANGELOG for details.
First Release: May 29th, 2020. OpenYurt v0.1.0-beta.1

OpenYurt is built based on upstream Kubernetes and now hosted by the Cloud Native Computing Foundation (CNCF) as a Sandbox Level Project.

OpenYurt has been designed to meet various DevOps requirements against typical edge infrastructures. It provides the same user experience for managing the edge applications as if they were running in the cloud infrastructure. It addresses specific challenges for cloud-edge orchestration in Kubernetes such as unreliable or disconnected cloud-edge networking, edge node autonomy, edge device management, region-aware deployment and so on. OpenYurt preserves intact Kubernetes API compatibility, is vendor agnostic, and more importantly, is SIMPLE to use.

Architecture

OpenYurt follows a classic cloud-edge architecture design. It uses a centralized Kubernetes control plane residing in the cloud site to manage multiple edge nodes residing in the edge sites. Each edge node has moderate compute resources available in order to run edge applications plus the required OpenYurt components. The edge nodes in a cluster can span multiple physical regions, which are referred to as Pools in OpenYurt.


The above figure demonstrates the core OpenYurt architecture. The major components consist of:

  • YurtHub: A node daemon that serves as a proxy for the outbound traffic from typical Kubernetes node daemons such as the kubelet, kube-proxy, CNI plugins and so on. It caches the states of all the API resources that they might access in the edge node's local storage. In case the edge node is disconnected from the cloud, YurtHub can recover the states when the node restarts (see the kubeconfig sketch after this list).
  • Yurt controller manager: It supplements the upstream node controller to support edge computing requirements. For example, Pods on nodes that are in autonomy mode will not be evicted by the APIServer even if the node heartbeats are missing.
  • Yurt app manager: It manages two CRD resources introduced in OpenYurt: NodePool and YurtAppSet (previously UnitedDeployment). The former provides convenient management of a pool of nodes within the same region or site. The latter defines a pool-based application management workload.
  • Yurt tunnel (server/agent): TunnelServer connects with the TunnelAgent daemon running in each edge node via a reverse proxy to establish a secure network access between the cloud site control plane and the edge nodes that are connected to the intranet.
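
For illustration, this is roughly how the kubelet on an edge node is pointed at YurtHub instead of the cloud APIServer: its kubeconfig targets YurtHub's local proxy endpoint (a minimal sketch mirroring the /var/lib/openyurt/kubelet.conf shown in the node-autonomy report further below; 10261 is YurtHub's default local port):

# Sketch: the kubelet is started with --kubeconfig=/var/lib/openyurt/kubelet.conf
cat > /var/lib/openyurt/kubelet.conf <<EOF
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: http://127.0.0.1:10261   # YurtHub's local proxy endpoint
  name: default-cluster
contexts:
- context:
    cluster: default-cluster
    namespace: default
  name: default-context
current-context: default-context
EOF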

In addition, OpenYurt also includes auxiliary controllers for integration and customization purposes.

  • Node resource manager: It manages additional edge node resources such as LVM, QuotaPath and Persistent Memory. Please refer to node-resource-manager repo for more details.
  • EdgeX Foundry integration: Integrates the EdgeX Foundry platform and uses Kubernetes CRDs to manage edge devices.

OpenYurt introduces Yurt-edgex-manager to manage the lifecycle of the EdgeX Foundry software suite, and Yurt-device-controller to manage edge devices hosted by EdgeX Foundry via Kubernetes custom resources. Please refer to the respective repos for more details.

Prerequisites

Please check the resource and system requirements before installing OpenYurt.

Getting started

OpenYurt supports Kubernetes versions up to 1.20. Using higher Kubernetes versions may cause compatibility issues.

You can set up the OpenYurt cluster manually, but we recommend starting OpenYurt with the yurtctl CLI tool. To quickly build and install yurtctl, assuming the build system has Go 1.13+ and bash installed, simply run the following:

git clone https://github.com/openyurtio/openyurt.git
cd openyurt
make build WHAT=cmd/yurtctl

The yurtctl binary can be found at _output/bin. The commonly used CLI commands include:

yurtctl convert --provider [minikube|kubeadm|kind]  // To convert an existing Kubernetes cluster to an OpenYurt cluster
yurtctl revert                                      // To uninstall and revert back to the original cluster settings
yurtctl join                                        // To allow a new node to join OpenYurt
yurtctl reset                                       // To revert changes to the node made by the join command

Please check yurtctl tutorial for more details.
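
For example, a typical conversion followed by a quick sanity check might look like the following (a hedged sketch: the node name "master" is a placeholder, and the --cloud-nodes flag and the alibabacloud.com/is-edge-worker label are the ones that appear in the issue reports further below and may differ across OpenYurt releases):

yurtctl convert --provider kubeadm --cloud-nodes master     # keep "master" as a cloud node, convert the others to edge nodes
kubectl get pods -n kube-system | grep yurt                 # expect yurt-controller-manager and a yurt-hub-<node> pod per node
kubectl get nodes -l alibabacloud.com/is-edge-worker=true   # list the nodes labeled as edge workers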

Tutorials

To experience the power of OpenYurt, please try the detailed tutorials.

Roadmap

Community

Contributing

If you are willing to be a contributor for the OpenYurt project, please refer to our CONTRIBUTING document for details. We have also prepared a developer guide to help the code contributors.

Meeting

  • APAC Friendly Community meeting: Bi-weekly APAC (Starting Sep 2, 2020), Wednesday 11:00AM GMT+8
  • Meeting link (APAC Friendly meeting): https://us02web.zoom.us/j/82828315928?pwd=SVVxek01T2Z0SVYraktCcDV4RmZlUT09
  • Meeting notes: Notes and agenda
  • Meeting recordings: OpenYurt bilibili Channel

Contact

If you have any questions or want to contribute, you are welcome to communicate most things via GitHub issues or pull requests. Other active communication channels are also available.

License

OpenYurt is under the Apache 2.0 license. See the LICENSE file for details. Certain implementations in OpenYurt rely on the existing code from Kubernetes and the credits go to the original Kubernetes authors.

Comments
  • [BUG] kubectl exec failed with unable to upgrade connection after OpenYurt install

    What happened:

    kubectl exec (or kubectl port-forward / istioctl ps) fails with the following error. Only the master control-plane node can reproduce this issue.

    root@control-plane:~# kubectl exec --stdin --tty ubuntu22-deamonset-5q6rg -- date
    error: unable to upgrade connection: fail to setup the tunnel: fail to setup TLS handshake through the Tunnel: write unix @->/tmp/interceptor-proxier.sock: write: broken pipe
    

    What you expected to happen:

    kubectl exec (or kubectl port-forward / istioctl ps) succeeds without any error.

    How to reproduce it (as minimally and precisely as possible):

    1. Set up a Kubernetes cluster with flannel; only the control-plane node is necessary.
    2. Set up OpenYurt v1.0 manually.
    3. Execute kubectl exec for any container running on the control-plane.

    Anything else we need to know?:

    Environment:

    • OpenYurt version: v1.0.0 (git clone with this tag v1.0.0)

    • Kubernetes version (use kubectl version): v1.22.13

    • OS (e.g: cat /etc/os-release):

    root@ceci-control-plane:~# cat /etc/os-release 
    NAME="Ubuntu"
    VERSION="20.04.5 LTS (Focal Fossa)"
    ID=ubuntu
    ID_LIKE=debian
    PRETTY_NAME="Ubuntu 20.04.5 LTS"
    VERSION_ID="20.04"
    HOME_URL="https://www.ubuntu.com/"
    SUPPORT_URL="https://help.ubuntu.com/"
    BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
    PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
    VERSION_CODENAME=focal
    UBUNTU_CODENAME=focal
    
    • Kernel (e.g. uname -a):
    root@ceci-control-plane:~# uname -a
    Linux ceci-control-plane 5.4.0-126-generic #142-Ubuntu SMP Fri Aug 26 12:12:57 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
    
    • Install tools: N.A

    others

    • This is 100% reproducible with a Vagrant VirtualBox virtualized instance.
    • Using a physical machine, we are unable to reproduce this issue.
    • Could this be related to the underlying network interfaces, or to options for kubeadm or kubelet?

    /kind bug

  • [BUG] Cannot setup openyurt with `yurtctl convert --provider kind`

    What happened: Hello, I'd like to deploy the openyurt cluster with yurtctl and kind. It seems that yurtctl supports kind with the option --provider kind. However, when I used the following command, it resulted in an error.

    yurtctl convert -t --provider kind --cloud-nodes ${cloudnodes}
    
    F0921 08:08:07.173618   12871 convert.go:98] fail to complete the convert option: failed to read file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, open /etc/systemd/system/kubelet.service.d/10-kubeadm.conf: no such file or directory
    

    I read the code and found that when yurtctl starts, it reads 10-kubeadm.conf (at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf by default) to get the pod manifest path. However, that file and directory do not exist when using kind. Maybe we should come up with a better way to solve this.
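
    On a kubeadm-provisioned node the static pod manifest path can usually be recovered from the kubelet drop-in or the kubelet config file, which seems to be what yurtctl relies on (a hedged sketch; the paths are the ones quoted above and in the node-autonomy report further below):

    # where kubeadm normally records the kubelet flags / config
    grep -i "pod-manifest-path" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf /var/lib/kubelet/kubeadm-flags.env 2>/dev/null
    grep -i "staticPodPath" /var/lib/kubelet/config.yaml 2>/dev/null
    # with kind, these files live inside the node containers rather than on the host, hence the error above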

    What you expected to happen: We can use yurtctl to deploy openyurt with kind.

    How to reproduce it (as minimally and precisely as possible): Use yurtctl convert -t --provider kind --cloud-nodes ${cloudnodes} to deploy openyurt with kind.

    Environment:

    • OpenYurt version: commit: 797c43d
    • Kubernetes version (use kubectl version): 1.20
  • Failed to start yurt controller Pod (evicted)

    Which jobs are failing:

    Which test(s) are failing:

    Since when has it been failing:

    Testgrid link:

    Reason for failure:

    Anything else we need to know:

    labels

    /kind failing-test

  • tunnel-server: server connection closed

    I have set up a k8s cluster with the master and a worker node on separate networks. I referenced this tutorial to set up the tunnel server and agent, but I can't access the pod on the edge node through yurt-tunnel. The logs from the tunnel-server:

    $ kubectl logs yurt-tunnel-server-74cfdd4bc7-7rrmr -n kube-system
    I1110 12:53:57.737387       1 cmd.go:143] server will accept yurttunnel-agent requests at: 192.168.1.101:10262, server will accept master https requests at: 192.168.1.101:10263server will accept master http request at: 192.168.1.101:10264
    W1110 12:53:57.737429       1 client_config.go:541] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
    I1110 12:53:57.968315       1 iptables.go:474] clear conntrack entries for ports ["10250" "10255"] and nodes ["192.168.1.101" "192.168.122.55" "127.0.0.1"]
    E1110 12:53:57.992841       1 iptables.go:491] clear conntrack for 192.168.1.101:10250 failed: "conntrack v1.4.4 (conntrack-tools): 0 flow entries have been deleted.\n", error message: exit status 1
    E1110 12:53:58.011089       1 iptables.go:491] clear conntrack for 192.168.122.55:10250 failed: "conntrack v1.4.4 (conntrack-tools): 0 flow entries have been deleted.\n", error message: exit status 1
    E1110 12:53:58.025873       1 iptables.go:491] clear conntrack for 127.0.0.1:10250 failed: "conntrack v1.4.4 (conntrack-tools): 0 flow entries have been deleted.\n", error message: exit status 1
    E1110 12:53:58.035197       1 iptables.go:491] clear conntrack for 192.168.1.101:10255 failed: "conntrack v1.4.4 (conntrack-tools): 0 flow entries have been deleted.\n", error message: exit status 1
    E1110 12:53:58.042357       1 iptables.go:491] clear conntrack for 192.168.122.55:10255 failed: "conntrack v1.4.4 (conntrack-tools): 0 flow entries have been deleted.\n", error message: exit status 1
    E1110 12:53:58.048433       1 iptables.go:491] clear conntrack for 127.0.0.1:10255 failed: "conntrack v1.4.4 (conntrack-tools): 0 flow entries have been deleted.\n", error message: exit status 1
    I1110 12:54:03.073595       1 csrapprover.go:52] starting the crsapprover
    I1110 12:54:03.209064       1 csrapprover.go:174] successfully approve yurttunnel csr(csr-sqfnw)
    I1110 12:54:08.070368       1 anpserver.go:101] start handling request from interceptor
    I1110 12:54:08.070787       1 anpserver.go:137] start handling https request from master at 192.168.1.101:10263
    I1110 12:54:08.070872       1 anpserver.go:151] start handling http request from master at 192.168.1.101:10264
    I1110 12:54:08.071365       1 anpserver.go:189] start handling connection from agents
    I1110 12:54:09.087254       1 server.go:418] Connect request from agent ubuntu-standard-pc-i440fx-piix-1996
    I1110 12:54:09.087319       1 backend_manager.go:99] register Backend &{0xc000158480} for agentID ubuntu-standard-pc-i440fx-piix-1996
    W1110 12:54:24.273510       1 server.go:451] stream read error: rpc error: code = Canceled desc = context canceled
    I1110 12:54:24.273532       1 backend_manager.go:119] remove Backend &{0xc000158480} for agentID ubuntu-standard-pc-i440fx-piix-1996
    I1110 12:54:24.273562       1 server.go:531] <<< Close backend &{0xc000158480} of agent ubuntu-standard-pc-i440fx-piix-1996
    I1110 12:54:37.682857       1 csrapprover.go:174] successfully approve yurttunnel csr(csr-6lcjl)
    I1110 12:54:42.969063       1 server.go:418] Connect request from agent ubuntu-standard-pc-i440fx-piix-1996
    I1110 12:54:42.969111       1 backend_manager.go:99] register Backend &{0xc000158180} for agentID ubuntu-standard-pc-i440fx-piix-1996
    

    Logging in to the edge node, the tunnel-agent container log indicates a "connection closed" error. Any idea how to solve this issue? Thanks.

    I1110 12:54:37.583915       1 cmd.go:106] neither --kube-config nor --apiserver-addr is set, will use /etc/kubernetes/kubelet.conf as the kubeconfig
    I1110 12:54:37.583964       1 cmd.go:110] create the clientset based on the kubeconfig(/etc/kubernetes/kubelet.conf).
    I1110 12:54:37.647689       1 cmd.go:135] yurttunnel-server address: 192.168.1.101:31302
    I1110 12:54:37.647990       1 anpagent.go:54] start serving grpc request redirected from yurttunel-server: 192.168.1.101:31302
    E1110 12:54:37.657318       1 clientset.go:155] rpc error: code = Unavailable desc = connection closed
    I1110 12:54:42.970218       1 stream.go:255] Connect to server 3656d4f5-746f-411f-b0eb-c1c69b1ff2c1
    I1110 12:54:42.970241       1 clientset.go:184] sync added client connecting to proxy server 3656d4f5-746f-411f-b0eb-c1c69b1ff2c1
    I1110 12:54:42.970266       1 client.go:122] Start serving for serverID 3656d4f5-746f-411f-b0eb-c1c69b1ff2c1
    
  • fix: cache the server version info of kubernetes

    What type of PR is this?

    Uncomment only one /kind <> line, hit enter to put that in a new line, and remove leading whitespace from that line: /kind bug /kind documentation /kind enhancement /kind good-first-issue /kind feature /kind question /kind design /sig ai /sig iot /sig network /sig storage

    What this PR does / why we need it:

    Cache the server version info of Kubernetes, and fix the issue where CoreDNS becomes abnormal when an edge node restarts.

    Which issue(s) this PR fixes:

    Fixes #880

    Special notes for your reviewer:

    Does this PR introduce a user-facing change?

    
    

    other Note

  • [Vote] About the naming of the networking project.

    Hi, dear community:

    We are going to develop a new networking project to enhance the networking capabilities of OpenYurt (proposal link: https://github.com/openyurtio/openyurt/pull/637).

    We are looking forward to your advice on the project naming. Here are some candidates:

    • lite-vpn
    • NetEdge
    • FusionNet
    • FiberLink
    • Cobweb
    • Pigeon
    • Magpie
    • Wormhole
    • Raven
    • Over-connector
    • Hypernet
    • Roaming
    • PodNet

    Welcome to vote! Other names are very welcome too.

  • [vote] the name of component that provide governance ability in nodepool.

    What would you like to be added: In the proposal, a new component (named pool-spirit) will be added in the edge NodePool. The new component mainly provides the following abilities:

    • To store metadata for NodePool as a kv storage
    • To provide a distributed lock for leader electing.
    • To expose the above two abilities through the native Kubernetes API.

    Given these abilities, the name pool-spirit may confuse end users, so the OpenYurt community has decided to rename the new component. All candidate names are as follows:

    • pool-spirit: node pool spirit
    • pool-coordinator: node pool coordinator
    • pool-linker: node pool linker
    • pool-fort: base camp
    • sheepdog: sheepdog
    • yurt-minister: yurt host
    • shepherd: shepherd
    • hive: hive
    • pool-harbor: node pool harbor

    Please select your favourite name and reply to this issue.

    By the way, other names for the new component are also welcome.

  • [BUG] Coredns cannot resolve node hostname

    What happened: I have deployed metrics-server on the cloud node. It continues to report the following error:

    E1203 12:49:23.192743       1 scraper.go:139] "Failed to scrape node" err="Get \"https://node-219:10250/stats/summary?only_cpu_and_memory=true\": context deadline exceeded" node="node-219"
    E1203 12:49:23.192760       1 scraper.go:139] "Failed to scrape node" err="Get \"https://node-221:10250/stats/summary?only_cpu_and_memory=true\": context deadline exceeded" node="node-221"
    E1203 12:49:23.192766       1 scraper.go:139] "Failed to scrape node" err="Get \"https://node-224:10250/stats/summary?only_cpu_and_memory=true\": context deadline exceeded" node="node-224"
    E1203 12:49:23.192769       1 scraper.go:139] "Failed to scrape node" err="Get \"https://center:10250/stats/summary?only_cpu_and_memory=true\": context deadline exceeded" node="center"
    E1203 12:49:23.192746       1 scraper.go:139] "Failed to scrape node" err="Get \"https://node-222:10250/stats/summary?only_cpu_and_memory=true\": context deadline exceeded" node="node-222"
    E1203 12:49:23.192801       1 scraper.go:139] "Failed to scrape node" err="Get \"https://node-218:10250/stats/summary?only_cpu_and_memory=true\": context deadline exceeded" node="node-218"
    E1203 12:49:23.192802       1 scraper.go:139] "Failed to scrape node" err="Get \"https://node-223:10250/stats/summary?only_cpu_and_memory=true\": context deadline exceeded" node="node-223"
    E1203 12:49:23.193890       1 scraper.go:139] "Failed to scrape node" err="Get \"https://node-220:10250/stats/summary?only_cpu_and_memory=true\": context deadline exceeded" node="node-220"
    E1203 12:49:23.193916       1 scraper.go:139] "Failed to scrape node" err="Get \"https://dell2015:10250/stats/summary?only_cpu_and_memory=true\": context deadline exceeded" node="dell2015"
    E1203 12:49:23.193923       1 scraper.go:139] "Failed to scrape node" err="Get \"https://node-225:10250/stats/summary?only_cpu_and_memory=true\": context deadline exceeded" node="node-225"
    I1203 12:49:23.445026       1 server.go:188] "Failed probe" probe="metric-storage-ready" err="not metrics to serve"
    I1203 12:49:33.445641       1 server.go:188] "Failed probe" probe="metric-storage-ready" err="not metrics to serve"
    

    When I turned on the log function of coredns and checked the logs, I found that coredns could not resolve the hostname:

    [INFO] 10.244.0.21:57363 - 10071 "AAAA IN node-221.kube-system.svc.cluster.local. udp 56 false 512" NXDOMAIN qr,aa,rd 149 0.000101561s
    [INFO] 10.244.0.21:49140 - 15804 "A IN node-225.kube-system.svc.cluster.local. udp 56 false 512" NXDOMAIN qr,aa,rd 149 0.000197454s
    [INFO] 10.244.0.21:52591 - 40725 "AAAA IN node-223.kube-system.svc.cluster.local. udp 56 false 512" NXDOMAIN qr,aa,rd 149 0.000170565s
    [INFO] 10.244.0.21:46343 - 27268 "A IN node-222.svc.cluster.local. udp 44 false 512" NXDOMAIN qr,aa,rd 137 0.000144675s
    [INFO] 10.244.0.21:53605 - 21188 "AAAA IN dell2015.svc.cluster.local. udp 44 false 512" NXDOMAIN qr,aa,rd 137 0.00017991s
    [INFO] 10.244.0.21:56493 - 14043 "A IN node-223.kube-system.svc.cluster.local. udp 56 false 512" NXDOMAIN qr,aa,rd 149 0.000145095s
    [INFO] 10.244.0.21:57767 - 20232 "A IN node-225.kube-system.svc.cluster.local. udp 56 false 512" NXDOMAIN qr,aa,rd 149 0.000104592s
    [INFO] 10.244.0.21:55905 - 46769 "AAAA IN dell2015.svc.cluster.local. udp 44 false 512" NXDOMAIN qr,aa,rd 137 0.000100891s
    [INFO] 10.244.0.21:38400 - 21470 "A IN node-218.svc.cluster.local. udp 44 false 512" NXDOMAIN qr,aa,rd 137 0.000154556s
    [INFO] 10.244.0.21:42241 - 28115 "AAAA IN node-223.svc.cluster.local. udp 44 false 512" NXDOMAIN qr,aa,rd 137 0.00014793s
    [INFO] 10.244.0.21:46009 - 15495 "AAAA IN node-225.svc.cluster.local. udp 44 false 512" NXDOMAIN qr,aa,rd 137 0.000150244s
    [INFO] 10.244.0.21:43989 - 42034 "A IN node-223.kube-system.svc.cluster.local. udp 56 false 512" NXDOMAIN qr,aa,rd 149 0.000086667s
    [INFO] 10.244.0.21:37473 - 36930 "AAAA IN node-218.kube-system.svc.cluster.local. udp 56 false 512" NXDOMAIN qr,aa,rd 149 0.000160677s
    [INFO] 10.244.0.21:38626 - 9816 "A IN node-225.kube-system.svc.cluster.local. udp 56 false 512" NXDOMAIN qr,aa,rd 149 0.00009503s
    [INFO] 10.244.0.21:57427 - 45436 "A IN dell2015.kube-system.svc.cluster.local. udp 56 false 512" NXDOMAIN qr,aa,rd 149 0.000181907s
    [INFO] 10.244.0.21:42602 - 2082 "AAAA IN node-223.kube-system.svc.cluster.local. udp 56 false 512" NXDOMAIN qr,aa,rd 149 0.00021916s
    [INFO] 10.244.0.21:48372 - 64152 "AAAA IN dell2015.kube-system.svc.cluster.local. udp 56 false 512" NXDOMAIN qr,aa,rd 149 0.000215355s
    [INFO] 10.244.0.21:38931 - 17188 "A IN node-220.kube-system.svc.cluster.local. udp 56 false 512" NXDOMAIN qr,aa,rd 149 0.000149272s
    [INFO] 10.244.0.21:47704 - 5818 "A IN node-218.svc.cluster.local. udp 44 false 512" NXDOMAIN qr,aa,rd 137 0.000100259s
    [INFO] 10.244.0.21:43007 - 5861 "AAAA IN dell2015.svc.cluster.local. udp 44 false 512" NXDOMAIN qr,aa,rd 137 0.00007362s
    [INFO] 10.244.0.21:56270 - 62782 "A IN dell2015.svc.cluster.local. udp 44 false 512" NXDOMAIN qr,aa,rd 137 0.000167426s
    

    In fact, I have mounted the yurt-tunnel-nodes configmap to coredns:

    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      ...
      name: coredns
      ...
    spec:
      ...
      template:
        ...
        spec:
          containers:
            - args:
                - '-conf'
                - /etc/coredns/Corefile
              image: 'registry.aliyuncs.com/google_containers/coredns:1.8.4'
              ...
              volumeMounts:
                - mountPath: /etc/edge       # here
                  name: edge
                  readOnly: true
                - mountPath: /etc/coredns
                  name: config-volume
                  readOnly: true
          ...
          volumes:
            - configMap:
                defaultMode: 420
                name: yurt-tunnel-nodes     # here
              name: edge
            - configMap:
                defaultMode: 420
                items:
                  - key: Corefile
                    path: Corefile
                name: coredns
              name: config-volume
      ...
    

    And I added the hosts to the configmap of coredns:

    ---
    apiVersion: v1
    data:
      Corefile: |
        .:53 {
            errors
            health {
               lameduck 5s
            }
            log {
            }
            ready
            hosts /etc/edge/tunnel-nodes {    # here
                reload 300ms
                fallthrough
            }
            kubernetes cluster.local in-addr.arpa ip6.arpa {
               pods verified
               fallthrough in-addr.arpa ip6.arpa
               ttl 30
            }
            prometheus :9153
            forward . /etc/resolv.conf {
               max_concurrent 1000
            }
            cache 30
            loop
            reload
            loadbalance
        }
    kind: ConfigMap
    metadata:
      name: coredns
      namespace: kube-system
      resourceVersion: '1363115'
    
    

    And the yurt-tunnel-nodes configmap is as shown below, where 10.107.2.246 is the ClusterIP of x-tunnel-server-internal-svc:

    ---
    apiVersion: v1
    data:
      tunnel-nodes: "10.107.2.246\tdell2015\n10.107.2.246\tnode-218\n10.107.2.246\tnode-219\n10.107.2.246\tnode-220\n10.107.2.246\tnode-221\n10.107.2.246\tnode-222\n10.107.2.246\tnode-223\n10.107.2.246\tnode-224\n10.107.2.246\tnode-225\n172.26.146.181\tcenter"
    kind: ConfigMap
    metadata:
      annotations: {}
      name: yurt-tunnel-nodes
      namespace: kube-system
      resourceVersion: '1296168'
    

    I think all of these configurations are correct, so why does coredns return NXDOMAIN when resolving the node hostnames?

    What you expected to happen: Coredns can resolve the node hostname to the ClusterIP of x-tunnel-server-internal-svc.

    How to reproduce it (as minimally and precisely as possible):

    Anything else we need to know?:

    Environment:

    • OpenYurt version: 1.1
    • Kubernetes version (use kubectl version): 1.22.8
    • OS (e.g: cat /etc/os-release): Ubuntu 22.04.1 LTS
    • Kernel (e.g. uname -a): 5.15.0-46-generic
    • Install tools: Manually Setup
    • Others:

    others

    /kind bug

  • After completing the node autonomy test, the edge node status still stays Ready

    Situation description

    1. I installed the kubernetes cluster using kubeadm. The version of the cluster is 1.16. The cluster has a master and three nodes.

    2. After I finished installing open-yurt manually, I started trying to test whether the result of my installation was successful

    3. I used the Test node autonomy chapter in https://github.com/alibaba/openyurt/blob/master/docs/tutorial/yurtctl.md to test

    4. After I completed the actions in the Test node autonomy chapter, the edge node status still stayed Ready.

    Operation steps

    1. I created a sample pod
    kubectl apply -f-<<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: bbox
    spec:
      nodeName: node3       
      containers:
      - image: busybox
        command:
        - top
        name: bbox
    EOF
    
    • node3 is the edge node. I chose the simplest way to schedule the sample pod to the edge node, although this method is not recommended in the kubernetes documentation
    2. I modified yurt-hub.yaml, making the value of --server-addr= a non-existent IP and port:
      - --server-addr=https://1.1.1.1:6448
      
    3. Then I used the curl -s http://127.0.0.1:10261 command to verify whether the edge node can work normally in offline mode. The result of the command is as expected:
      {
        "kind": "Status",
        "metadata": {
      
        },
        "status": "Failure",
        "message": "request( get : /) is not supported when cluster is unhealthy",
        "reason": "BadRequest",
        "code": 400
      }
      
    4. But node3 status still stays Ready, and yurt-hub enters the Pending state:
      kubectl get nodes
      NAME     STATUS   ROLES    AGE   VERSION
      master   Ready    master   23h   v1.16.6
      node1    Ready    <none>   23h   v1.16.6
      node2    Ready    <none>   23h   v1.16.6
      node3    Ready    <none>   23h   v1.16.6
      
      # kubectl get pods -n kube-system | grep yurt
      yurt-controller-manager-59544577cc-t948z   1/1     Running   0          5h42m
      yurt-hub-node3                             0/1     Pending   0          5h32m
      

    Some configuration items and logs that may be used as reference

    1. Label information of each node
      root@master:~# kubectl describe nodes master | grep Labels
      Labels:             alibabacloud.com/is-edge-worker=false
      root@master:~# kubectl describe nodes node1 | grep Labels
      Labels:             alibabacloud.com/is-edge-worker=false
      root@master:~# kubectl describe nodes node2 | grep Labels
      Labels:             alibabacloud.com/is-edge-worker=false
      root@master:~# kubectl describe nodes node3 | grep Labels
      Labels:             alibabacloud.com/is-edge-worker=true
      
    2. Configuration of kube-controller-manager
          - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
          - --controllers=*,bootstrapsigner,tokencleaner,-nodelifecycle
          - --kubeconfig=/etc/kubernetes/controller-manager.conf
      
    3. /etc/kubernetes/manifests/yurthub.yml
      # cat yurthub.yml
      apiVersion: v1
      kind: Pod
      metadata:
        labels:
          k8s-app: yurt-hub
        name: yurt-hub
        namespace: kube-system
      spec:
        volumes:
        - name: pki
          hostPath:
            path: /etc/kubernetes/pki
            type: Directory
        - name: kubernetes
          hostPath:
            path: /etc/kubernetes
            type: Directory
        - name: pem-dir
          hostPath:
            path: /var/lib/kubelet/pki
            type: Directory
        containers:
        - name: yurt-hub
          image: openyurt/yurthub:latest
          imagePullPolicy: Always
          volumeMounts:
          - name: kubernetes
            mountPath: /etc/kubernetes
          - name: pki
            mountPath: /etc/kubernetes/pki
          - name: pem-dir
            mountPath: /var/lib/kubelet/pki
          command:
          - yurthub
          - --v=2
          - --server-addr=https://1.1.1.1:6448
          - --node-name=$(NODE_NAME)
          livenessProbe:
            httpGet:
              host: 127.0.0.1
              path: /v1/healthz
              port: 10261
            initialDelaySeconds: 300
            periodSeconds: 5
            failureThreshold: 3
          resources:
            requests:
              cpu: 150m
              memory: 150Mi
            limits:
              memory: 300Mi
          securityContext:
            capabilities:
              add: ["NET_ADMIN", "NET_RAW"]
          env:
          - name: NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
        hostNetwork: true
        priorityClassName: system-node-critical
        priority: 2000001000
      
    4. /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
      # cat  /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
      # Note: This dropin only works with kubeadm and kubelet v1.11+
      [Service]
      #Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/var/lib/openyurt/kubelet.conf"
      Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/var/lib/openyurt/kubelet.conf"
      Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
      # This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
      EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
      # This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
      # the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
      EnvironmentFile=-/etc/default/kubelet
      ExecStart=
      ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
      
    5. /var/lib/openyurt/kubelet.conf
      # cat /var/lib/openyurt/kubelet.conf
      apiVersion: v1
      clusters:
      - cluster:
          server: http://127.0.0.1:10261
        name: default-cluster
      contexts:
      - context:
          cluster: default-cluster
          namespace: default
        name: default-context
      current-context: default-context
      kind: Config
      preferences: {}
      users:
      - name: default-auth
      
    6. Use kubectl describe to view yurt-hub pod information
      # kubectl describe pods yurt-hub-node3 -n kube-system
      Name:                 yurt-hub-node3
      Namespace:            kube-system
      Priority:             2000001000
      Priority Class Name:  system-node-critical
      Node:                 node3/
      Labels:               k8s-app=yurt-hub
      Annotations:          kubernetes.io/config.hash: 7be1318d63088969eafcd2fa5887f2ef
                            kubernetes.io/config.mirror: 7be1318d63088969eafcd2fa5887f2ef
                            kubernetes.io/config.seen: 2020-08-18T08:41:27.431580091Z
                            kubernetes.io/config.source: file
      Status:               Pending
      IP:
      IPs:                  <none>
      Containers:
        yurt-hub:
          Image:      openyurt/yurthub:latest
          Port:       <none>
          Host Port:  <none>
          Command:
            yurthub
            --v=2
            --server-addr=https://10.10.13.82:6448
            --node-name=$(NODE_NAME)
          Limits:
            memory:  300Mi
          Requests:
            cpu:     150m
            memory:  150Mi
          Liveness:  http-get http://127.0.0.1:10261/v1/healthz delay=300s timeout=1s period=5s #success=1 #failure=3
          Environment:
            NODE_NAME:   (v1:spec.nodeName)
          Mounts:
            /etc/kubernetes from kubernetes (rw)
            /etc/kubernetes/pki from pki (rw)
            /var/lib/kubelet/pki from pem-dir (rw)
      Volumes:
        pki:
          Type:          HostPath (bare host directory volume)
          Path:          /etc/kubernetes/pki
          HostPathType:  Directory
        kubernetes:
          Type:          HostPath (bare host directory volume)
          Path:          /etc/kubernetes
          HostPathType:  Directory
        pem-dir:
          Type:          HostPath (bare host directory volume)
          Path:          /var/lib/kubelet/pki
          HostPathType:  Directory
      QoS Class:         Burstable
      Node-Selectors:    <none>
      Tolerations:       :NoExecute
      Events:            <none>
      
    7. Use docker ps on the edge node to view the log of the yurt-hub container. Intercept the last 20 lines
      # docker logs 0c89efbe949b --tail 20
      I0818 13:54:13.293068       1 health_checker.go:151] ping cluster healthz with result, Get https://1.1.1.1:6448/healthz: dial tcp 1.1.1.1:6448: connect: connection refused
      I0818 13:54:13.561262       1 util.go:177] kubelet get nodes: /api/v1/nodes/node3?resourceVersion=0&timeout=10s with status code 200, spent 331.836µs, left 10 requests in flight
      I0818 13:54:15.746576       1 util.go:177] kubelet update leases: /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/node3?timeout=10s with status code 200, spent 83.127µs, left 10 requests in flight
      I0818 13:54:15.828560       1 util.go:177] kubelet get pods: /api/v1/namespaces/kube-system/pods/yurt-hub-node3 with status code 200, spent 436.489µs, left 10 requests in flight
      I0818 13:54:15.829628       1 util.go:177] kubelet patch pods: /api/v1/namespaces/kube-system/pods/yurt-hub-node3/status with status code 200, spent 307.187µs, left 10 requests in flight
      I0818 13:54:17.831366       1 util.go:177] kubelet delete pods: /api/v1/namespaces/kube-system/pods/yurt-hub-node3 with status code 200, spent 147.492µs, left 10 requests in flight
      I0818 13:54:17.833762       1 util.go:177] kubelet create pods: /api/v1/namespaces/kube-system/pods with status code 201, spent 111.762µs, left 10 requests in flight
      I0818 13:54:22.273899       1 health_checker.go:151] ping cluster healthz with result, Get https://1.1.1.1:6448/healthz: dial tcp 1.1.1.1:6448: connect: connection refused
      I0818 13:54:23.486523       1 util.go:177] kubelet watch configmaps: /api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dkube-flannel-cfg&resourceVersion=2161&timeout=7m54s&timeoutSeconds=474&watch=true with status code 200, spent 7m54.000780359s, left 9 requests in flight
      I0818 13:54:23.648871       1 util.go:177] kubelet get nodes: /api/v1/nodes/node3?resourceVersion=0&timeout=10s with status code 200, spent 266.182µs, left 10 requests in flight
      I0818 13:54:25.748497       1 util.go:177] kubelet update leases: /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/node3?timeout=10s with status code 200, spent 189.694µs, left 10 requests in flight
      I0818 13:54:25.830919       1 util.go:177] kubelet get pods: /api/v1/namespaces/kube-system/pods/yurt-hub-node3 with status code 200, spent 1.375535ms, left 10 requests in flight
      I0818 13:54:25.835015       1 util.go:177] kubelet patch pods: /api/v1/namespaces/kube-system/pods/yurt-hub-node3/status with status code 200, spent 1.363765ms, left 10 requests in flight
      I0818 13:54:33.733913       1 util.go:177] kubelet get nodes: /api/v1/nodes/node3?resourceVersion=0&timeout=10s with status code 200, spent 303.499µs, left 10 requests in flight
      I0818 13:54:34.261504       1 health_checker.go:151] ping cluster healthz with result, Get https://1.1.1.1:6448/healthz: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
      I0818 13:54:35.751002       1 util.go:177] kubelet update leases: /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/node3?timeout=10s with status code 200, spent 144.723µs, left 10 requests in flight
      I0818 13:54:35.830895       1 util.go:177] kubelet get pods: /api/v1/namespaces/kube-system/pods/yurt-hub-node3 with status code 200, spent 1.146812ms, left 10 requests in flight
      I0818 13:54:35.834366       1 util.go:177] kubelet patch pods: /api/v1/namespaces/kube-system/pods/yurt-hub-node3/status with status code 200, spent 744.857µs, left 10 requests in flight
      I0818 13:54:42.274049       1 health_checker.go:151] ping cluster healthz with result, Get https://1.1.1.1:6448/healthz: dial tcp 1.1.1.1:6448: connect: connection refused
      I0818 13:54:43.818381       1 util.go:177] kubelet get nodes: /api/v1/nodes/node3?resourceVersion=0&timeout=10s with status code 200, spent 248.672µs, left 10 requests in flight
      
    8. Use kubectl logs to view the logs of yurt-controller-manager. Intercept the last 20 lines
      # kubectl logs yurt-controller-manager-59544577cc-t948z -n kube-system --tail 20
      E0818 13:56:07.239721       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
      E0818 13:56:10.560864       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
      E0818 13:56:13.288544       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
      E0818 13:56:16.726605       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
      E0818 13:56:19.623694       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
      E0818 13:56:23.572803       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
      E0818 13:56:26.809117       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
      E0818 13:56:29.021205       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
      E0818 13:56:31.271086       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
      E0818 13:56:34.083918       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
      E0818 13:56:37.493386       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
      E0818 13:56:40.222869       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
      E0818 13:56:44.149011       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
      E0818 13:56:47.699211       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
      E0818 13:56:50.177053       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
      E0818 13:56:52.553163       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
      E0818 13:56:55.573328       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
      E0818 13:56:58.677034       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
      E0818 13:57:02.844152       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
      E0818 13:57:05.044990       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
      

    At last

    I very much hope that you can help me solve the problem or point out my mistakes. If there is any other information that needs to be provided, please let me know.

  • ci(master): ln hotfix to build in macOS

    What type of PR is this?

    Uncomment only one /kind <> line, hit enter to put that in a new line, and remove leading whitespace from that line: /kind bug /kind documentation /kind enhancement /kind good-first-issue /kind feature /kind question /kind design /sig ai /sig iot /sig network /sig storage /sig storage

    /kind feature

    What this PR does / why we need it:

    Hotfix: when building the linux arch on macOS, the 'ln' step does not work on this OS.

    Which issue(s) this PR fixes:

    Special notes for your reviewer:

    Does this PR introduce a user-facing change?

    
    On macOS, running `GOOS=linux GOARCH=amd64 make build WHAT=cmd/yurtctl` ends up linking the local output dir to the linux dir after the build.
    So I changed the `HOST_PLATFORM` default value to `$(go env GOOS)/$(go env GOARCH)` (see the sketch below).
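
    In shell terms, the described default amounts to something like the following (a sketch of the intent, not the actual Makefile change):

    # default HOST_PLATFORM to the platform of the machine running the build
    HOST_PLATFORM=${HOST_PLATFORM:-$(go env GOOS)/$(go env GOARCH)}
    echo "building on ${HOST_PLATFORM}, cross-compiling for ${GOOS:-$(go env GOOS)}/${GOARCH:-$(go env GOARCH)}"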
    
    

    other Note

  • Proposal: OpenYurt Convertor Operator for converting K8S to OpenYurt

    Signed-off-by: nunu [email protected]

    What type of PR is this?

    Uncomment only one /kind <> line, hit enter to put that in a new line, and remove leading whitespace from that line: /kind bug /kind documentation /kind enhancement /kind good-first-issue /kind feature /kind question /kind design /sig ai /sig iot /sig network /sig storage /sig storage

    What this PR does / why we need it:

    Which issue(s) this PR fixes:

    Fixes #

    Special notes for your reviewer:

    Does this PR introduce a user-facing change?

    
    

    other Note

  • add selfsigned pool-coordinator cert manager

    What type of PR is this?

    Uncomment only one /kind <> line, hit enter to put that in a new line, and remove leading whitespace from that line: /kind bug /kind documentation /kind enhancement /kind good-first-issue /kind feature /kind question /kind design /sig ai /sig iot /sig network /sig storage

    What this PR does / why we need it:

    Add self signed certmanager for poolcoordinator

    Which issue(s) this PR fixes:

    Fixes #

    Special notes for your reviewer:

    Does this PR introduce a user-facing change?

    
    

    other Note

  • [BUG] flush kube-proxy entries in nat table in edge-autonomy e2e test

    What happened:

    Currently we use iptables -F in the edge autonomy test when testing kube-proxy. However, by default this flushes the filter table, while kube-proxy mainly uses the nat table for service discovery. So we need to clean the nat table instead.

    On the other hand, we may not want to flush the whole nat table either, because it contains not only the rules set by kube-proxy but also rules from other components such as flannel, which will not be recreated by restarting kube-proxy.

    Chain FLANNEL-POSTRTG (1 references)
    target     prot opt source               destination         
    RETURN     all  --  10.244.0.0/16        10.244.0.0/16        /* flanneld masq */
    MASQUERADE  all  --  10.244.0.0/16       !base-address.mcast.net/4  /* flanneld masq */ random-fully
    RETURN     all  -- !10.244.0.0/16        10.244.2.0/24        /* flanneld masq */
    MASQUERADE  all  -- !10.244.0.0/16        10.244.0.0/16        /* flanneld masq */ random-fully
    

    As a solution, we can clean only the nat rules of the service we will test (see the sketch below).
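
    For reference, the difference between the commands is roughly the following (a hedged sketch using standard iptables semantics; KUBE-SERVICES is kube-proxy's entry chain in the nat table):

    iptables -F                        # flushes only the filter table (the default table)
    iptables -t nat -F                 # flushes the whole nat table, including flannel's chains
    iptables -t nat -F KUBE-SERVICES   # narrower: flush only kube-proxy's service dispatch chain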

    What you expected to happen:

    How to reproduce it (as minimally and precisely as possible):

    Anything else we need to know?:

    Environment:

    • OpenYurt version:
    • Kubernetes version (use kubectl version):
    • OS (e.g: cat /etc/os-release):
    • Kernel (e.g. uname -a):
    • Install tools:
    • Others:

    others

    /kind bug /kind test

  • make yurt-controller-manager take care of webhook configurations and certs

    What type of PR is this?

    Uncomment only one /kind <> line, hit enter to put that in a new line, and remove leading whitespace from that line: /kind bug /kind documentation /kind enhancement /kind good-first-issue /kind feature /kind question /kind design /sig ai /sig iot /sig network /sig storage

    What this PR does / why we need it:

    When yurt-controller-manager introduced the pod webhook for create/update, the webhook configuration, including the CA and certs, should be provisioned together with the yurt-controller-manager pod as an atomic operation. Thus, the creation/update of the webhook configurations and the CA and certs are taken care of by yurt-controller-manager itself. So this PR does the following:

    1. ensure webhooks' configurations creation/update;
    2. self-sign the webhooks;

    Which issue(s) this PR fixes:

    Fixes #1040

    Special notes for your reviewer:

    Does this PR introduce a user-facing change?

    
    

    other Note

  • using kubectl logs -f order,Error from server: Get...

    What happened: The other nodes are normal; only on this node, when I use kubectl logs, does the error below occur. I have removed the tunnel-agent dir and restarted, but the error still exists. Error from server: Get https://xxxxx:10250/containerLogs/kube-system/yurt-tunnel-agent-7xg8z/yurt-tunnel-agent?follow=true: dial tcp xxxx:10250: connect: connection timed out

    These are the tunnel agent logs:

    I1224 14:06:16.217175 1 start.go:49] yurttunnel-agent version: projectinfo.Info{GitVersion:"v0.7.1", GitCommit:"f6fb68c", BuildDate:"2022-08-30T14:38:44Z", GoVersion:"go1.17.1", Compiler:"gc", Platform:"linux/amd64"}
    I1224 14:06:16.217651 1 options.go:148] ipv4=10.200.202.5&host=vm-202-5-centos is set for agent identifies
    I1224 14:06:16.217663 1 options.go:153] neither --kube-config nor --apiserver-addr is set, will use /etc/kubernetes/kubelet.conf as the kubeconfig
    I1224 14:06:16.217669 1 options.go:157] create the clientset based on the kubeconfig(/etc/kubernetes/kubelet.conf).
    I1224 14:06:16.220583 1 start.go:86] yurttunnel-server address: xxxxxx:32000
    W1224 14:06:16.220663 1 filestore_wrapper.go:49] unexpected error occurred when loading the certificate: no cert/key files read at "/var/lib/yurttunnel-agent/pki/yurttunnel-agent-current.pem", ("", "") or ("/var/lib/yurttunnel-agent/pki", "/var/lib/yurttunnel-agent/pki"), will regenerate it
    I1224 14:06:16.220687 1 certificate_manager.go:270] kubernetes.io/kube-apiserver-client: Certificate rotation is enabled
    I1224 14:06:16.220739 1 certificate_manager.go:270] kubernetes.io/kube-apiserver-client: Rotating certificates
    I1224 14:06:17.225445 1 csr.go:188] error fetching v1 certificate signing request: the server could not find the requested resource
    I1224 14:06:17.627273 1 csr.go:283] certificate signing request csr-9kkm5 is issued
    I1224 14:06:18.827150 1 certificate_manager.go:270] kubernetes.io/kube-apiserver-client: Certificate expiration is 2023-12-24 14:01:16 +0000 UTC, rotation deadline is 2023-10-24 14:19:49.167659663 +0000 UTC
    I1224 14:06:18.827220 1 certificate_manager.go:270] kubernetes.io/kube-apiserver-client: Waiting 7296h13m30.340444063s for next certificate rotation
    I1224 14:06:19.827365 1 certificate_manager.go:270] kubernetes.io/kube-apiserver-client: Certificate expiration is 2023-12-24 14:01:16 +0000 UTC, rotation deadline is 2023-10-08 00:53:46.921425722 +0000 UTC
    I1224 14:06:19.827391 1 certificate_manager.go:270] kubernetes.io/kube-apiserver-client: Waiting 6898h47m27.09403708s for next certificate rotation
    I1224 14:06:21.221772 1 start.go:105] certificate yurttunnel-agent ok
    I1224 14:06:21.221930 1 anpagent.go:57] start serving grpc request redirected from yurttunnel-server: 8.219.150.133:32000
    I1224 14:06:21.222157 1 util.go:75] "start handling meta requests(metrics/pprof)" server endpoint="127.0.0.1:10266"
    I1224 14:06:22.007848 1 client.go:224] "Connect to" server="ab5a1070-c5e8-47a0-aab1-3c9b06a98bf8"
    I1224 14:06:22.007869 1 clientset.go:190] "sync added client connecting to proxy server" serverID="ab5a1070-c5e8-47a0-aab1-3c9b06a98bf8"
    I1224 14:06:22.007895 1 client.go:326] "Start serving" serverID="ab5a1070-c5e8-47a0-aab1-3c9b06a98bf8"
    I1224 14:06:27.910149 1 client.go:224] "Connect to" server="ac1ae4c8-f8fe-4ed6-bbec-43bf396a59f3"
    I1224 14:06:27.910173 1 clientset.go:190] "sync added client connecting to proxy server" serverID="ac1ae4c8-f8fe-4ed6-bbec-43bf396a59f3"
    I1224 14:06:27.910236 1 client.go:326] "Start serving" serverID="ac1ae4c8-f8fe-4ed6-bbec-43bf396a59f3"
    I1224 14:06:34.009288 1 client.go:224] "Connect to" server="ac1ae4c8-f8fe-4ed6-bbec-43bf396a59f3"
    E1224 14:06:34.009315 1 clientset.go:186] "closing connection failure when adding a client" err="client for proxy server ac1ae4c8-f8fe-4ed6-bbec-43bf396a59f3 already exists"
    I1224 14:06:39.770591 1 client.go:224] "Connect to" server="cc260094-7b65-4f3c-8e96-69e9ea4ba4db"
    I1224 14:06:39.770616 1 clientset.go:190] "sync added client connecting to proxy server" serverID="cc260094-7b65-4f3c-8e96-69e9ea4ba4db"
    I1224 14:06:39.770644 1 client.go:326] "Start serving" serverID="cc260094-7b65-4f3c-8e96-69e9ea4ba4db"

    What you expected to happen: Show the pod logs through the tunnel.

    How to reproduce it (as minimally and precisely as possible):

    Anything else we need to know?:

    Environment:

    • OpenYurt version: 0.7.1
    • Kubernetes version (use kubectl version): 1.18.20
    • OS (e.g: cat /etc/os-release): centos 7.9
    • Kernel (e.g. uname -a): 3.10
    • Install tools: kubeadm
    • Others:

    others /kind question

  • [BUG] Init

    What happened:

    What you expected to happen:

    How to reproduce it (as minimally and precisely as possible):

    Anything else we need to know?:

    Environment:

    • OpenYurt version:
    • Kubernetes version (use kubectl version):
    • OS (e.g: cat /etc/os-release):
    • Kernel (e.g. uname -a):
    • Install tools:
    • Others:

    others

    /kind bug

  • add_initializer_test.go

    What type of PR is this?

    Uncomment only one /kind <> line, hit enter to put that in a new line, and remove leading whitespace from that line: /kind bug /kind documentation /kind enhancement /kind good-first-issue /kind feature /kind question /kind design /sig ai /sig iot /sig network /sig storage

    What this PR does / why we need it:

    Which issue(s) this PR fixes:

    Fixes #

    Special notes for your reviewer:

    Does this PR introduce a user-facing change?

    
    

    other Note
