A Kubernetes Network Fabric for Enterprises that is Rich in Functions and Easy in Operations

kube_ovn_logo

License Build Tag Go Report Card Slack Card FOSSA Status

Chinese Tutorial (中文教程)

Kube-OVN, a CNCF Sandbox Level Project, integrates OVN-based network virtualization with Kubernetes. It offers an advanced Container Network Fabric for enterprises that is rich in functions and easy to operate.

Community

The Kube-OVN community is waiting for your participation!

  • Follow us at Twitter
  • Chat with us at Slack
  • For other issues, please send an email to [email protected]
  • WeChat users can add liumengxinfly to join the "Kube-OVN 开源交流群" (Kube-OVN open-source discussion group); please mention Kube-OVN and your personal information.

Features

  • Namespaced Subnets: Each Namespace can have a unique Subnet (backed by a Logical Switch). Pods within the Namespace will have IP addresses allocated from the Subnet. It's also possible for multiple Namespaces to share a Subnet (a sketch follows this list).
  • Subnet Isolation: Can configure a Subnet to deny any traffic from source IP addresses not within the same Subnet. Can whitelist specific IP addresses and IP ranges.
  • Network Policy: Implements the networking.k8s.io/NetworkPolicy API with high-performance OVN ACLs.
  • Static IP Addresses for Workloads: Allocate random or static IP addresses to workloads.
  • DualStack IP Support: Pod can run in IPv4-Only/IPv6-Only/DualStack mode.
  • Pod NAT and EIP: Manage pod external traffic and external IPs like a traditional VM.
  • Multi-Cluster Network: Connect different clusters into one L3 network.
  • IPAM for Multi NIC: A cluster-wide IPAM for CNI plugins other than Kube-OVN, such as macvlan/vlan/host-device, to take advantage of the subnet and static IP allocation functions in Kube-OVN.
  • Dynamic QoS: Configure Pod/Gateway Ingress/Egress traffic rate limits on the fly.
  • Embedded Load Balancers: Replace kube-proxy with the OVN embedded high performance distributed L2 Load Balancer.
  • Distributed Gateways: Every Node can act as a Gateway to provide external network connectivity.
  • Namespaced Gateways: Every Namespace can have a dedicated Gateway for Egress traffic.
  • Direct External Connectivity: Pod IP can be exposed to the external network directly.
  • BGP Support: Pod/Subnet IPs can be exposed to the external network via the BGP routing protocol.
  • Traffic Mirror: Duplicate container network traffic for monitoring, diagnosis and replay.
  • Hardware Offload: Boost network performance and save CPU resource by offloading OVS flow table to hardware.
  • Vlan/Underlay Support: Kube-OVN also supports underlay and VLAN mode networking for better performance and direct connectivity with the physical network.
  • DPDK Support: DPDK application now can run in Pod with OVS-DPDK.
  • ARM Support: Kube-OVN can run on x86_64 and arm64 platforms.
  • VPC Support: Multi-tenant network with overlapped address spaces.
  • TroubleShooting Tools: Handy tools to diagnose, trace, monitor and dump container network traffic to help troubleshoot complicated network issues.
  • Prometheus & Grafana Integration: Exposing network quality metrics like pod/node/service/dns connectivity/latency in Prometheus format.
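
Below is a minimal sketch of how the Namespaced Subnets and Static IP features are typically expressed: a Subnet bound to a single Namespace and a Pod pinned to one address. The spec fields mirror the Subnet object shown in the comments further down this page; the namespaces list and the ovn.kubernetes.io/ip_address annotation are given from memory rather than from this page, so verify the exact field and annotation names against the documentation for your release.

    # Hypothetical example: a Subnet dedicated to the "dev" Namespace.
    apiVersion: kubeovn.io/v1
    kind: Subnet
    metadata:
      name: dev-subnet
    spec:
      protocol: IPv4
      cidrBlock: 10.66.0.0/16
      gateway: 10.66.0.1
      excludeIps:
      - 10.66.0.1
      gatewayType: distributed
      natOutgoing: true
      namespaces:        # assumed field: bind this Subnet to the listed Namespaces
      - dev
    ---
    # Hypothetical example: a Pod requesting a fixed address from that Subnet.
    apiVersion: v1
    kind: Pod
    metadata:
      name: static-ip-pod
      namespace: dev
      annotations:
        ovn.kubernetes.io/ip_address: 10.66.0.100   # assumed annotation name
    spec:
      containers:
      - name: app
        image: nginx:alpine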

Planned Future Work

  • Policy-based QoS
  • More Metrics and Traffic Graph
  • More Diagnosis and Tracing Tools

Network Topology

The Switch, Router and Firewall shown in the diagram below are all distributed across all Nodes. There is no single point of failure for the in-cluster network.

topology

Monitoring Dashboard

Kube-OVN offers Prometheus integration with Grafana dashboards to visualise network quality.

dashboard

Quick Start

Kube-OVN is easy to install, with all necessary components/dependencies included. If you already have a Kubernetes cluster without any CNI plugin, please refer to the Installation Guide.

If you want to install Kubernetes from scratch, you can try kubespray, or Chinese users can try kubeasz, to deploy a production-ready Kubernetes cluster with Kube-OVN embedded.

Documents

Contribution

We are looking forward to your PR!

FAQ

  1. Q: How about the scalability of Kube-OVN?

    A: We have simulated 200 Nodes with 10k Pods using kubemark, and it works fine. Some community users have deployed one cluster with 250+ Nodes and 3k+ Pods in production. This still has not reached the limit, but we don't have enough resources to find out where the limit is.

  2. Q: What's the Addressing/IPAM? Node-specific or cluster-wide?

    A: Kube-OVN uses a cluster-wide IPAM; a Pod's address can float to any node in the cluster.

  3. Q: What's the encapsulation?

    A: For overlay mode, Kube-OVN uses Geneve to encapsulate packets between nodes. For Vlan/Underlay mode there is no encapsulation.

Kube-OVN vs. Other CNI Implementation

Different CNI implementations have different function scopes and network topologies. There is no single implementation that can solve all network problems. In this section, we compare Kube-OVN to some other options to help users assess which network will fit their infrastructure.

Kube-OVN vs. ovn-kubernetes

ovn-kubernetes is developed by the OVN community to integrate OVN with Kubernetes. As both projects use OVN/OVS as the data plane, they share some function sets and architecture. The main differences come from the network topology and gateway implementation.

ovn-kubernetes implements a subnet-per-node network topology. That means each node has a fixed CIDR range, and IP allocation is performed by each node when a pod is started by kubelet.

Kube-OVN implements a subnet-per-namespace network topology. That means a CIDR can span all cluster nodes, and IP allocation is performed centrally by kube-ovn-controller. Kube-OVN can then apply many network configurations at the subnet level, like CIDR, gateway, exclude_ips, NAT and so on. This topology also gives Kube-OVN more control over how IPs are allocated; on top of it, Kube-OVN can allocate static IPs for workloads.

We believe the subnet-per-namespace topology will give more flexibility to evolve the network.
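
Building on that central IPAM, here is a hedged sketch of workload-level static addressing. It assumes the ovn.kubernetes.io/ip_pool pod-template annotation, which does not appear elsewhere on this page, so treat the name and format as assumptions to be checked against the docs: the Deployment's replicas would only receive addresses from the listed pool, handed out by kube-ovn-controller.

    # Hypothetical example: pin a Deployment's replicas to a fixed IP pool.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
          annotations:
            # Assumed annotation: replicas draw addresses only from this pool.
            ovn.kubernetes.io/ip_pool: 10.16.0.15,10.16.0.16
        spec:
          containers:
          - name: web
            image: nginx:alpine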

On the gateway side, ovn-kubernetes uses the native OVN gateway concept to control traffic. The native OVN gateway relies on a dedicated NIC, or needs to move the NIC's IP to another device so the NIC can be bound to the OVS bridge. This implementation can achieve better performance; however, not all environments meet the network requirements, especially in the cloud.

Kube-OVN uses policy routing, ipset and iptables to implement the gateway functions entirely in software, which fits more infrastructures and gives more flexibility for additional functions.

Kube-OVN vs. Calico

Calico is an open-source networking and network security solution for containers, virtual machines, and native host-based workloads. It's known for its good performance and security policy.

The main design difference is the encapsulation method. Calico uses no encapsulation or lightweight IPIP encapsulation, while Kube-OVN uses Geneve to encapsulate packets. No encapsulation can achieve better network performance in both throughput and latency. However, this method exposes the pod network directly to the underlay network, which brings a burden in deployment and maintenance. In some managed network environments where BGP and IPIP are not allowed, encapsulation is a must.

Using encapsulation lowers the requirements on the underlying network and logically isolates containers from the underlay network. Overlay technology lets us build more complex network concepts, like routers, gateways, and VPCs. For performance, OVS can make use of hardware offload and DPDK to improve throughput and latency.

Kube-OVN can also work in non-encapsulation mode, making use of underlay switches to forward the packets, or using hardware offload to achieve better performance than the kernel datapath.

In terms of function set, Kube-OVN offers some extra abilities like static IP, QoS and traffic mirroring. The Subnet in Kube-OVN and the IPPool in Calico share a similar function set.
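
As a concrete illustration of the extra abilities mentioned above, here is a hedged sketch of per-Pod rate limiting. The ovn.kubernetes.io/ingress_rate and ovn.kubernetes.io/egress_rate annotations (values in Mbit/s) are quoted from memory rather than from this page, so confirm them against the documentation for your release.

    # Hypothetical example: limit a Pod to 10 Mbit/s ingress and 5 Mbit/s egress.
    apiVersion: v1
    kind: Pod
    metadata:
      name: limited-pod
      annotations:
        ovn.kubernetes.io/ingress_rate: "10"   # assumed annotation name, Mbit/s
        ovn.kubernetes.io/egress_rate: "5"     # assumed annotation name, Mbit/s
    spec:
      containers:
      - name: app
        image: nginx:alpine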

License

FOSSA Status

Comments
  • IP resources are not reclaimed; stale subnet IP allocations remain

    Expected Behavior

    IP resources are not reclaimed; stale subnet IP allocations remain.

    Actual Behavior

    Steps to Reproduce the Problem

    apiVersion: kubeovn.io/v1
    kind: Subnet
    metadata:
      name: subnet-cdq57ea8j5gqg4vf8ak0
    spec:
      cidrBlock: 168.50.8.0/24
      default: false
      excludeIps:
      - 168.50.8.254
      gateway: 168.50.8.254
      gatewayNode: ""
      gatewayType: distributed
      natOutgoing: false
      private: false
      protocol: IPv4
      provider: ovn
      vpc: vpc-cdq56t28j5gqg4vf8ajg
    
    NAME                          PROVIDER   VPC                        PROTOCOL   CIDR            PRIVATE   NAT     DEFAULT   GATEWAYTYPE   V4USED   V4AVAILABLE   V6USED   V6AVAILABLE   EXCLUDEIPS
    subnet-cdq57ea8j5gqg4vf8ak0   ovn        vpc-cdq56t28j5gqg4vf8ajg   IPv4       168.50.8.0/24   false     false   false     distributed   11       242           0        0             ["168.50.8.254"]
    
    [root@iaas-cms-ctrl-1 ~]# k get ip | grep 168.50.8.
    vm-ce3fr4q8j5gh613m5u50.yiaas.net1.yiaas.ovn                                                       168.50.8.2               00:00:00:A9:2E:06   iaas-cms-ctrl-1   subnet-cdq57ea8j5gqg4vf8ak0
    vm-ce3vprq8j5ggeis9ivig.yiaas.net1.yiaas.ovn                                                       168.50.8.1               00:00:00:BF:18:12   iaas-cms-ctrl-1   subnet-cdq57ea8j5gqg4vf8ak0
    vm-ce418ki8j5ggeis9ivmg.yiaas.net1.yiaas.ovn                                                       168.50.8.2               00:00:00:21:DA:C3   iaas-cms-ctrl-2   subnet-cdq57ea8j5gqg4vf8ak0
    vm-ce41etq8j5ggeis9ivo0.yiaas.net1.yiaas.ovn                                                       168.50.8.3               00:00:00:52:22:AC   iaas-cms-ctrl-1   subnet-cdq57ea8j5gqg4vf8ak0
    vm-ce41hri8j5ggeis9ivqg.yiaas.net1.yiaas.ovn                                                       168.50.8.4               00:00:00:5A:55:C0   iaas-cms-ctrl-2   subnet-cdq57ea8j5gqg4vf8ak0
    vm-ce41kta8j5ggeis9ivs0.yiaas.net1.yiaas.ovn                                                       168.50.8.5               00:00:00:9A:39:09   iaas-cms-ctrl-1   subnet-cdq57ea8j5gqg4vf8ak0
    vm-ce441k28j5ggeis9ivug.yiaas.net1.yiaas.ovn                                                       168.50.8.6               00:00:00:E9:33:BB   iaas-cms-ctrl-2   subnet-cdq57ea8j5gqg4vf8ak0
    vm-ce4l38i8j5ggeis9j050.yiaas.net1.yiaas.ovn                                                       168.50.8.7               00:00:00:FC:DB:BC   iaas-cms-ctrl-1   subnet-cdq57ea8j5gqg4vf8ak0
    vm-ce4qfgq8j5ggeis9j070.yiaas.net1.yiaas.ovn                                                       168.50.8.8               00:00:00:EF:C2:10   iaas-cms-ctrl-2   subnet-cdq57ea8j5gqg4vf8ak0
    vm-ce4s0hq8j5ggeis9j0hg.yiaas.net1.yiaas.ovn                                                       168.50.8.9               00:00:00:81:B5:1B   iaas-cms-ctrl-2   subnet-cdq57ea8j5gqg4vf8ak0
    vm-ce4s9qq8j5ggeis9j1lg.yiaas.net1.yiaas.ovn                                                       168.50.8.10              00:00:00:B5:37:26   iaas-cms-ctrl-2   subnet-cdq57ea8j5gqg4vf8ak0
    

    Additional Info

    • Kubernetes version:

      Output of kubectl version:

    Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.7", GitCommit:"42c05a547468804b2053ecf60a3bd15560362fc2", GitTreeState:"clean", BuildDate:"2022-05-24T12:30:55Z", GoVersion:"go1.17.10", Compiler:"gc", Platform:"linux/amd64"}

    
    • kube-ovn version:

      v1.10.7

    • operation-system/kernel version:

      Output of awk -F '=' '/PRETTY_NAME/ { print $2 }' /etc/os-release: Output of uname -r:

      CentOS Stream 8
      5.4.223-1.el8.elrepo.x86_64
    
  • Kubeovn pod communication between different K8s cluster nodes not working when DPDK is enabled

    Hi,

    I have deployed a multi-node Kubernetes setup with Kube-OVN as the default CNI. I installed Kube-OVN with DPDK support following https://github.com/alauda/kube-ovn/blob/master/docs/dpdk.md, on OpenStack VMs with OVS-DPDK and virtio network interfaces attached to the cluster VMs.

    But I am facing an issue when my pods are scheduled on different nodes: they are not able to communicate with each other even over the Kube-OVN interface. I understand that for DPDK-based interfaces this communication needs to be configured manually, as the userspace CNI does not support it, but communication over the Kube-OVN interface should work fine. The same setup works fine when I deploy Kube-OVN without DPDK support.

    My environment details:
    • OS: Ubuntu 18 virtual machines over OpenStack with OVS-DPDK
    • RAM: 16GB
    • Cores: 8
    • NIC: Virtio network device
    • K8s: Version 1.134
    • Kube-OVN with DPDK: "v1.3.0-pre"

    One thing I observed is that no geneve ports are added to the br-int bridge provided by Kube-OVN when DPDK is enabled.

    Kubeovn with DPDK

    root@k8s-master:~# ovs-vsctl show
    1d99a1eb-1d46-4016-8639-bc00ab08ca83
        Bridge br-int
            fail_mode: secure
            Port br-int
                Interface br-int
                    type: internal
            Port mirror0
                Interface mirror0
                    type: internal
            Port "7cd52e4a918f_h"
                Interface "7cd52e4a918f_h"
            Port ovn0
                Interface ovn0
                    type: internal
        ovs_version: "2.13.0"

    Kubeovn Without DPDK

    root@k8s2-master:~# ovs-vsctl show
    42591383-8fd4-4d44-b9c9-90be02958d71
        Bridge br-int
            fail_mode: secure
            Port ovn-9f840e-0
                Interface ovn-9f840e-0
                    type: geneve
                    options: {csum="true", key=flow, remote_ip=<minion1_ip>}
            Port br-int
                Interface br-int
                    type: internal
            Port f7aaa44c4a5c_h
                Interface f7aaa44c4a5c_h
            Port ovn0
                Interface ovn0
                    type: internal
            Port ovn-b6bfb4-0
                Interface ovn-b6bfb4-0
                    type: geneve
                    options: {csum="true", key=flow, remote_ip=<minion2_ip>}
            Port ovn-7d41af-0
                Interface ovn-7d41af-0
                    type: geneve
                    options: {csum="true", key=flow, remote_ip=<minion3_ip>}
            Port mirror0
                Interface mirror0
                    type: internal
            Port "92756136d181_h"
                Interface "92756136d181_h"
        ovs_version: "2.13.0"

    Even the kubernetes.default DNS is not reachable when DPDK is enabled in a multi-host K8s environment.

    Here is the trace of the dnsutils container:

    ~# kubectl ko trace default/dnsutils 10.96.0.10 udp 53

    • kubectl exec ovn-central-5b86b448c8-6jb64 -n kube-system -- ovn-trace --ct=new ovn-default 'inport == "dnsutils.default" && ip.ttl == 64 && eth.src == 00:00:00:F7:E9:EE && ip4.src == 10.16.0.6 && eth.dst == 00:00:00:DD:D2:BC && ip4.dst == 10.96.0.10 && udp.src == 10000 && udp.dst == 53'

    udp,reg14=0x6,vlan_tci=0x0000,dl_src=00:00:00:f7:e9:ee,dl_dst=00:00:00:dd:d2:bc,nw_src=10.16.0.6,nw_dst=10.96.0.10,nw_tos=0,nw_ecn=0,nw_ttl=64,tp_src=10000,tp_dst=53

    ingress(dp="ovn-default", inport="dnsutils.default") 0. ls_in_port_sec_l2 (ovn-northd.c:4629): inport == "dnsutils.default" && eth.src == {00:00:00:f7:e9:ee}, priority 50, uuid 06181115 next;

    1. ls_in_port_sec_ip (ovn-northd.c:4281): inport == "dnsutils.default" && eth.src == 00:00:00:f7:e9:ee && ip4.src == {10.16.0.6}, priority 90, uuid 292b42d3 next;
    2. ls_in_pre_acl (ovn-northd.c:4805): ip, priority 100, uuid 59276b34 reg0[0] = 1; next;
    3. ls_in_pre_lb (ovn-northd.c:4961): ip && ip4.dst == 10.96.0.10, priority 100, uuid b7662b98 reg0[0] = 1; next;
    4. ls_in_pre_stateful (ovn-northd.c:4992): reg0[0] == 1, priority 100, uuid 206aea47 ct_next;

    ct_next(ct_state=new|trk) 6. ls_in_acl (ovn-northd.c:5368): ip && (!ct.est || (ct.est && ct_label.blocked == 1)), priority 1, uuid 2b126df4 reg0[1] = 1; next; 10. ls_in_stateful (ovn-northd.c:5726): ct.new && ip4.dst == 10.96.0.10 && udp.dst == 53, priority 120, uuid f4ab5458 ct_lb(backends=10.16.0.2:53,10.16.0.4:53);

    ct_lb 19. ls_in_l2_lkup (ovn-northd.c:6912): eth.dst == 00:00:00:dd:d2:bc, priority 50, uuid af68a5a3 outport = "ovn-default-ovn-cluster"; output;

    egress(dp="ovn-default", inport="dnsutils.default", outport="ovn-default-ovn-cluster") 0. ls_out_pre_lb (ovn-northd.c:4977): ip, priority 100, uuid 6cca8aba reg0[0] = 1; next;

    1. ls_out_pre_acl (ovn-northd.c:4748): ip && outport == "ovn-default-ovn-cluster", priority 110, uuid 3fb2824f next;
    2. ls_out_pre_stateful (ovn-northd.c:4994): reg0[0] == 1, priority 100, uuid d7986c70 ct_next;

    ct_next(ct_state=est|trk /* default (use --ct to customize) */) 3. ls_out_lb (ovn-northd.c:5609): ct.est && !ct.rel && !ct.new && !ct.inv, priority 65535, uuid 2f537878 reg0[2] = 1; next; 7. ls_out_stateful (ovn-northd.c:5771): reg0[2] == 1, priority 100, uuid ed9ca3a1 ct_lb;

    ct_lb 9. ls_out_port_sec_l2 (ovn-northd.c:4695): outport == "ovn-default-ovn-cluster", priority 50, uuid 8bdc173c output; /* output to "ovn-default-ovn-cluster", type "patch" */

    ingress(dp="ovn-cluster", inport="ovn-cluster-ovn-default") 0. lr_in_admission (ovn-northd.c:7974): eth.dst == 00:00:00:dd:d2:bc && inport == "ovn-cluster-ovn-default", priority 50, uuid 780ff8c5 next;

    1. lr_in_lookup_neighbor (ovn-northd.c:8023): 1, priority 0, uuid 4e24c5d4 reg9[3] = 1; next;
    2. lr_in_learn_neighbor (ovn-northd.c:8029): reg9[3] == 1 || reg9[2] == 1, priority 100, uuid 8a3ad9f6 next;
    3. lr_in_ip_routing (ovn-northd.c:7598): ip4.dst == 10.16.0.0/16, priority 33, uuid 7e9728c9 ip.ttl--; reg8[0..15] = 0; reg0 = ip4.dst; reg1 = 10.16.0.1; eth.src = 00:00:00:dd:d2:bc; outport = "ovn-cluster-ovn-default"; flags.loopback = 1; next;
    4. lr_in_ip_routing_ecmp (ovn-northd.c:9593): reg8[0..15] == 0, priority 150, uuid 52ad4463 next;
    5. lr_in_arp_resolve (ovn-northd.c:9861): outport == "ovn-cluster-ovn-default" && reg0 == 10.16.0.4, priority 100, uuid 15ef66f9 eth.dst = 00:00:00:ae:ad:23; next;
    6. lr_in_arp_request (ovn-northd.c:10265): 1, priority 0, uuid 2b3d52d9 output;

    egress(dp="ovn-cluster", inport="ovn-cluster-ovn-default", outport="ovn-cluster-ovn-default") 3. lr_out_delivery (ovn-northd.c:10311): outport == "ovn-cluster-ovn-default", priority 100, uuid 8807ddda output; /* output to "ovn-cluster-ovn-default", type "patch" */

    ingress(dp="ovn-default", inport="ovn-default-ovn-cluster") 0. ls_in_port_sec_l2 (ovn-northd.c:4629): inport == "ovn-default-ovn-cluster", priority 50, uuid 9f808a14 next; 3. ls_in_pre_acl (ovn-northd.c:4745): ip && inport == "ovn-default-ovn-cluster", priority 110, uuid 1defc5dd next; 9. ls_in_lb (ovn-northd.c:5606): ct.est && !ct.rel && !ct.new && !ct.inv, priority 65535, uuid 9edb7d1e reg0[2] = 1; next; 10. ls_in_stateful (ovn-northd.c:5769): reg0[2] == 1, priority 100, uuid 02a8d618 ct_lb;

    ct_lb 19. ls_in_l2_lkup (ovn-northd.c:6912): eth.dst == 00:00:00:ae:ad:23, priority 50, uuid 63f29f2c outport = "coredns-86c58d9df4-82qgb.kube-system"; output;

    egress(dp="ovn-default", inport="ovn-default-ovn-cluster", outport="coredns-86c58d9df4-82qgb.kube-system") 0. ls_out_pre_lb (ovn-northd.c:4977): ip, priority 100, uuid 6cca8aba reg0[0] = 1; next;

    1. ls_out_pre_acl (ovn-northd.c:4807): ip, priority 100, uuid b6219115 reg0[0] = 1; next;
    2. ls_out_pre_stateful (ovn-northd.c:4994): reg0[0] == 1, priority 100, uuid d7986c70 ct_next;

    ct_next(ct_state=est|trk /* default (use --ct to customize) */) 3. ls_out_lb (ovn-northd.c:5609): ct.est && !ct.rel && !ct.new && !ct.inv, priority 65535, uuid 2f537878 reg0[2] = 1; next; 7. ls_out_stateful (ovn-northd.c:5771): reg0[2] == 1, priority 100, uuid ed9ca3a1 ct_lb;

    ct_lb 8. ls_out_port_sec_ip (ovn-northd.c:4281): outport == "coredns-86c58d9df4-82qgb.kube-system" && eth.dst == 00:00:00:ae:ad:23 && ip4.dst == {255.255.255.255, 224.0.0.0/4, 10.16.0.4, 10.16.255.255}, priority 90, uuid 662c2b5f next; 9. ls_out_port_sec_l2 (ovn-northd.c:4695): outport == "coredns-86c58d9df4-82qgb.kube-system" && eth.dst == {00:00:00:ae:ad:23}, priority 50, uuid 35ab4345 output; /* output to "coredns-86c58d9df4-82qgb.kube-system", type "" */

    • set +x

    Start OVS Tracing

    • kubectl exec ovs-ovn-k79ds -n kube-system -- ovs-appctl ofproto/trace br-int in_port=6,udp,nw_src=10.16.0.6,nw_dst=10.96.0.10,dl_src=00:00:00:F7:E9:EE,dl_dst=00:00:00:DD:D2:BC,tp_src=1000,tp_dst=53 Bad openflow flow syntax: in_port=6,udp,nw_src=10.16.0.6,nw_dst=10.96.0.10,dl_src=00:00:00:F7:E9:EE,dl_dst=00:00:00:DD:D2:BC,tp_src=1000,tp_dst=53: prerequisites not met for setting tp_src ovs-appctl: ovs-vswitchd: server returned an error command terminated with exit code 2

    kubectl ko nbctl list load_balancer

    _uuid : f2db64c0-7c90-468f-bf51-9807b93229b2 external_ids : {} health_check : [] ip_port_mappings : {} name : cluster-tcp-loadbalancer protocol : tcp selection_fields : [] vips : {"10.100.21.21:10665"="172.19.104.78:10665", "10.101.2.32:10660"="172.19.104.78:10660", "10.105.152.58:6642"="172.19.104.78:6642", "10.107.165.205:8080"="10.16.0.5:8080", "10.96.0.10:53"="10.16.0.2:53,10.16.0.4:53", "10.96.0.1:443"="172.19.104.78:6443", "10.97.248.180:6641"="172.19.104.78:6641"}

    _uuid : 2a4f9f2f-21cf-4bd6-a35e-d4575e7c9117 external_ids : {} health_check : [] ip_port_mappings : {} name : cluster-udp-loadbalancer protocol : udp selection_fields : [] vips : {"10.96.0.10:53"="10.16.0.2:53,10.16.0.4:53"}

    kubectl ko nbctl list logical_switch

    _uuid : 1e169528-a148-436f-811f-b3a83c089e04 acls : [821479e8-2151-4760-b27a-54d901ddfc70] dns_records : [] external_ids : {} forwarding_groups : [] load_balancer : [] name : join other_config : {exclude_ips="100.64.0.1", gateway="100.64.0.1", subnet="100.64.0.0/16"} ports : [1a6dcb70-084c-460e-a84f-7505f820f276, 1e4994f0-e749-4c8e-83e8-25502e11b769] qos_rules : []

    _uuid : bdc0fac5-6caf-4800-bd23-8aaf80c273ce acls : [c9ca44b2-d3a6-4b74-b668-baef0dc32d67] dns_records : [] external_ids : {} forwarding_groups : [] load_balancer : [2a4f9f2f-21cf-4bd6-a35e-d4575e7c9117, f2db64c0-7c90-468f-bf51-9807b93229b2] name : ovn-default other_config : {exclude_ips="10.16.0.1", gateway="10.16.0.1", subnet="10.16.0.0/16"} ports : [64a29d31-c9d7-4719-a09b-911d816982af, b150f67a-7e55-4fa2-ad92-63c45e74cd7a, b155c0b3-40fb-41b0-9b9a-8a163bb3497c, ca1a9e8b-78d3-42ce-804d-0e2244316e77, f17ea181-9bc8-4f5d-91e9-e79fc6ddd95c] qos_rules : []

  • ovs-ovn-dpdk fails to start with errors

    Expected Behavior

    ovs-ovn-dpdk starts normally.

    Actual Behavior

    ovs-ovn-dpdk fails to start and stays in NotReady state.

    • Log output:
    2022-08-04T02:30:59.948Z|02274|dpdk|ERR|EAL: Failed to attach device on primary process
    2022-08-04T02:30:59.948Z|02275|netdev_dpdk|WARN|Error attaching device '0000:86:00.0' to DPDK
    2022-08-04T02:30:59.948Z|02276|netdev|WARN|dpdk0: could not set configuration (Invalid argument)
    2022-08-04T02:31:00.017Z|02277|dpdk|ERR|EAL: failed to parse device "0000:86:00.0"
    2022-08-04T02:31:00.017Z|02278|dpdk|ERR|EAL: failed to parse device "0000:86:00.0"
    2022-08-04T02:31:00.017Z|02279|dpdk|ERR|EAL: Failed to attach device on primary process
    2022-08-04T02:31:00.017Z|02280|netdev_dpdk|WARN|Error attaching device '0000:86:00.0' to DPDK
    2022-08-04T02:31:00.017Z|02281|netdev|WARN|dpdk0: could not set configuration (Invalid argument)
    2022-08-04T02:31:00.190Z|02282|dpdk|ERR|EAL: failed to parse device "0000:86:00.0"
    2022-08-04T02:31:00.190Z|02283|dpdk|ERR|EAL: failed to parse device "0000:86:00.0"
    2022-08-04T02:31:00.190Z|02284|dpdk|ERR|EAL: Failed to attach device on primary process
    2022-08-04T02:31:00.190Z|02285|netdev_dpdk|WARN|Error attaching device '0000:86:00.0' to DPDK
    2022-08-04T02:31:00.190Z|02286|netdev|WARN|dpdk0: could not set configuration (Invalid argument)
    2022-08-04T02:31:00.222Z|02287|bridge|INFO|bridge br-phy: deleted interface br-phy on port 65534
    2022-08-04T02:31:00.222Z|02288|bridge|INFO|bridge br-hci-storage: deleted interface storage-0.201 on port 1
    2022-08-04T02:31:00.222Z|02289|bridge|INFO|bridge br-hci-storage: deleted interface br-hci-storage on port 65534
     * Exiting ovs-vswitchd (285253)
    2022-08-04T02:31:00.232Z|02290|bridge|INFO|bridge br-int: deleted interface 8bf1163514b0_h on port 54
    2022-08-04T02:31:00.232Z|02291|bridge|INFO|bridge br-int: deleted interface ovn0 on port 3
    2022-08-04T02:31:00.232Z|02292|bridge|INFO|bridge br-int: deleted interface mirror0 on port 2
    2022-08-04T02:31:00.232Z|02293|bridge|INFO|bridge br-int: deleted interface 227ed9e48a39_h on port 4
    2022-08-04T02:31:00.232Z|02294|bridge|INFO|bridge br-int: deleted interface d6e35d7c8413_h on port 5
    2022-08-04T02:31:00.232Z|02295|bridge|INFO|bridge br-int: deleted interface 8cfc916d3e32_h on port 1
    2022-08-04T02:31:00.232Z|02296|bridge|INFO|bridge br-int: deleted interface br-int on port 65534
    2022-08-04T02:31:00.232Z|02297|bridge|INFO|bridge br-int: deleted interface c75d0fcfd2c2_h on port 6
    2022-08-04T02:31:00.232Z|02298|bridge|INFO|bridge br-int: deleted interface ovn-7a4236-0 on port 7
    2022-08-04T02:31:00.232Z|02299|bridge|INFO|bridge br-int: deleted interface 49849c2edf51_h on port 8
    2022-08-04T02:31:00.233Z|02300|bridge|INFO|bridge br-int: deleted interface db048851620d_h on port 10
    2022-08-04T02:31:00.233Z|02301|bridge|INFO|bridge br-int: deleted interface b3ba1eebc8fa_h on port 9
    2022-08-04T02:31:00.233Z|02302|bridge|INFO|bridge br-int: deleted interface ce61268ce192_h on port 11
    2022-08-04T02:31:00.852Z|02303|ofproto_dpif_rid|ERR|recirc_id 48 left allocated when ofproto (br-int) is destructed
    2022-08-04T02:31:00.852Z|02304|ofproto_dpif_rid|ERR|recirc_id 51 left allocated when ofproto (br-int) is destructed
    2022-08-04T02:31:00.852Z|02305|ofproto_dpif_rid|ERR|recirc_id 47 left allocated when ofproto (br-int) is destructed
    2022-08-04T02:31:00.852Z|02306|ofproto_dpif_rid|ERR|recirc_id 49 left allocated when ofproto (br-int) is destructed
    2022-08-04T02:31:00.852Z|02307|ofproto_dpif_rid|ERR|recirc_id 53 left allocated when ofproto (br-int) is destructed
    2022-08-04T02:31:00.852Z|02308|ofproto_dpif_rid|ERR|recirc_id 52 left allocated when ofproto (br-int) is destructed
    2022-08-04T02:31:00.852Z|02309|ofproto_dpif_rid|ERR|recirc_id 50 left allocated when ofproto (br-int) is destructed
    2022-08-04T02:31:00.863Z|02310|connmgr|INFO|br-int<->unix#0: 133 flow_mods in the 8 s starting 9 s ago (57 adds, 58 deletes, 18 modifications)
    2022-08-04T02:31:01.435Z|00002|daemon_unix(monitor)|INFO|pid 285253 died, exit status 0, exiting
     * Exiting ovsdb-server (285197)
     * Exiting ovn-controller (285460)
     * Killing ovn-controller (285460)
    

    Steps to Reproduce the Problem

    1. Follow the steps at https://kubeovn.github.io/docs/v1.10.x/en/advance/dpdk/

    Additional Info

    1. NIC type: CX5
    2. Driver: uio_pci_generic
    3. OS: fedora-coreos-34.20210626.3.1
    4. OVN-DPDK image: kubeovn/kube-ovn:v1.10.0-dpdk
    • Kubernetes version:

      Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v0.21.0-beta.1", GitCommit:"52a2fd9", GitTreeState:"clean", BuildDate:"2022-02-08T01:26:19Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/amd64"}
      Server Version: version.Info{Major:"1", Minor:"22+", GitVersion:"v1.22.1-1839+b93fd35dd03051-dirty", GitCommit:"b93fd35dd030519c24dd02f8bc2a7f873d2928cd", GitTreeState:"dirty", BuildDate:"2022-02-11T06:14:59Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/amd64"}
      
    • kube-ovn version:

      kubeovn/kube-ovn:v1.10.0-dpdk
      
    • operation-system/kernel version:

      Fedora CoreOS 34
      5.14.14-200.fc34.x86_64
      
  • Connectivity issue with kube-ovn 1.5.0

    With a Fedora 32 CI setup (nft), I can't manage to get a working fresh install of kube-ovn 1.4.0.

    Here are the only changes between the 2 jobs : https://gitlab.com/kargo-ci/kubernetes-sigs-kubespray/-/commit/1f2a4ee2de4cf37d762815fcf1c497e2790a72a8 (upgrade version and yamls)

    Before I dig deeper, any insight into what might have changed between the 2 versions?

  • Pod IP not allocated correctly

    The default subnet is set to 172.30.0.0/16, but after deploying a test busybox application, the allocated IP is 172.17.0.2 (Docker's default network segment). Could someone point out where the problem is? Thanks!

    [root@host-10-19-17-139 ~]# kubectl get node -o wide
    NAME                STATUS   ROLES            AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
    host-10-19-17-139   Ready    compute,master   25d   v1.15.2   10.19.17.139   <none>        CentOS Linux 7 (Core)   5.2.11-1.el7.elrepo.x86_64   docker://19.3.1
    host-10-19-17-140   Ready    compute          25d   v1.15.2   10.19.17.140   <none>        CentOS Linux 7 (Core)   5.2.11-1.el7.elrepo.x86_64   docker://19.3.1
    host-10-19-17-141   Ready    compute          25d   v1.15.2   10.19.17.141   <none>        CentOS Linux 7 (Core)   5.2.11-1.el7.elrepo.x86_64   docker://19.3.1
    
    [root@host-10-19-17-139 ~]# kubectl get Subnet
    NAME          PROTOCOL   CIDR            PRIVATE   NAT     DEFAULT   GATEWAYTYPE   USED   AVAILABLE
    join          IPv4       100.64.0.0/16   false     false   false     distributed   3      65532
    ovn-default   IPv4       172.30.0.0/16   false     true    true      distributed   0      65535
    
    [root@host-10-19-17-139 ~]# kubectl get pod --all-namespaces -o wide
    NAMESPACE   NAME                                   READY   STATUS    RESTARTS   AGE   IP             NODE                NOMINATED NODE   READINESS GATES
    default     test-588865b-zd5vt                     1/1     Running   1          31m   172.17.0.2     host-10-19-17-141   <none>           <none>
    kube-ovn    kube-ovn-cni-dvgzj                     1/1     Running   0          33m   10.19.17.141   host-10-19-17-141   <none>           <none>
    kube-ovn    kube-ovn-cni-krsnc                     1/1     Running   0          33m   10.19.17.140   host-10-19-17-140   <none>           <none>
    kube-ovn    kube-ovn-cni-w74td                     1/1     Running   0          33m   10.19.17.139   host-10-19-17-139   <none>           <none>
    kube-ovn    kube-ovn-controller-86d7c8d6c4-p4ndm   1/1     Running   0          33m   10.19.17.141   host-10-19-17-141   <none>           <none>
    kube-ovn    kube-ovn-controller-86d7c8d6c4-zjvbv   1/1     Running   0          33m   10.19.17.139   host-10-19-17-139   <none>           <none>
    kube-ovn    ovn-central-8ddc7dd8-ww7mf             1/1     Running   0          39m   10.19.17.139   host-10-19-17-139   <none>           <none>
    kube-ovn    ovs-ovn-dbrg2                          1/1     Running   0          39m   10.19.17.139   host-10-19-17-139   <none>           <none>
    kube-ovn    ovs-ovn-jxjc5                          1/1     Running   0          39m   10.19.17.140   host-10-19-17-140   <none>           <none>
    kube-ovn    ovs-ovn-s5rxz                          1/1     Running   0          39m   10.19.17.141   host-10-19-17-141   <none>           <none>
    
  • service not working, seems wrong SNAT rules

    Expected Behavior

    The k8s service forwards traffic correctly.

    Actual Behavior

    The k8s service does not forward traffic if the backend pod is not on the same node as the service.

    Steps to Reproduce the Problem

    1. Create 2 pods in the default subnet
    2. Create a service for these 2 pods
    3. Access the service from the host or from a pod

    Additional Info

    • Kubernetes version:

      Output of kubectl version:

    1.21

    
    • kube-ovn version:

      1.10

    • operation-system/kernel version:

      Output of awk -F '=' '/PRETTY_NAME/ { print $2 }' /etc/os-release: Output of uname -r:

      4.19.0

    
    We have tried both ipvs and iptables; both failed.

    We did some digging: the conntrack entries show src_ip=clusterip and dst_ip=pod_ip. We believe that is not right!
    src_ip should be changed to host_ip by SNAT, otherwise packets cannot be returned.

    The iptables rule added by Kube-OVN is below:
    "iptables -A POSTROUTING -m set ! --match-set ovn40subnets src -m set ! --match-set ovn40other-node src -m set --match-set ovn40subnets-nat dst -j RETURN"
    If we access a service whose pod is in the default subnet, src_ip will be the cluster_ip, which matches "! --match-set ovn40subnets src -m set ! --match-set ovn40other-node src", and dst_ip will be in the default subnet, which matches "--match-set ovn40subnets-nat dst". This will definitely cause "RETURN" and no SNAT will happen.

    If we add an SNAT rule before this "RETURN" rule, the service becomes OK again.
    
    
  • dpdk commands can not run inside kube-ovn container

    I installed Kube-OVN with OVS-DPDK following the guide https://github.com/alauda/kube-ovn/blob/master/docs/dpdk.md and replaced the OVS image with kubeovn/kube-ovn-dpdk:19.11.2.

    But when trying to debug DPDK, I found that the following commands do not work:

    [root@node01 dpdk-stable-19.11.2]# kubectl exec -it -n kube-ovn ovs-ovn-wmmgw bash
    [root@node01 /]# dpdk-pdump 
    Illegal instruction
    [root@node01 /]# dpdk-proc-info 
    Illegal instruction
    [root@node01 /]# 
    
    

    The DPDK kernel modules were compiled and installed following http://docs.openvswitch.org/en/latest/intro/install/dpdk/#installing with DPDK version 19.11.2. Kernel version: 4.14.172. Host OS version: CentOS Linux release 7.4.1708 (Core).

  • Creating 50 subnets concurrently causes mass health-check timeouts for pods on the same virtual router

    Steps to Reproduce the Problem

    1. Create 50 subnets concurrently in the default VPC.
    2. During creation, pods under the default virtual subnet experience mass health-check timeouts.
    3. This severely impacts business workloads.

    Additional Info

    • Kubernetes version:

      Output of kubectl version:

      v1.20.10
      
    • kube-ovn version:

      v1.7.3
      
    • operation-system/kernel version:

      Output of awk -F '=' '/PRETTY_NAME/ { print $2 }' /etc/os-release: Output of uname -r:

       An operating system based on RHEL 8.4.1
      
  • Illegal instruction (core dumped)

    Hi,

    We would like to use the ACL feature, so we are deploying the master branch. On Monday we deployed and tested successfully, but today we could not deploy the master branch. So we planned to deploy Monday's version, tried to build from Monday's commit, and followed the development guide. However, we got many errors related to "stack smashing detected"; one of them is below.

    [line 0 : column 0] - loading files from package "cmd": err: signal: aborted (core dumped): stderr: *** stack smashing detected ***: terminated

    Could you please make a recommendation to overcome this issue?

    Thanks and regards.

  • Add OVS-DPDK support, for issue 104

    This commit adds OVS-DPDK support to Kube-OVN. User instructions are included in a new file docs/dpdk.md

    A new Dockerfile has been added to include OVS-DPDK along with OVN. Where DPDK is required, this image is used for the ovs-ovn pod, in place of the existing kernel-OVS “kube-ovn” image. This Dockerfile is currently based on Fedora 32 for reasons noted as comments within the file. It should later be possible to change this to CentOS when full DPDK 19 support is available.

    I recommend the above Dockerfile is built and tagged as kube-ovn-dpdk:<version>, where the version corresponds to the DPDK version used within the image (in this case 19.11) rather than the Kube-OVN version. I recommend this as DPDK applications have a strong dependency on the DPDK version. If we force an end user to always use the latest version, then we will likely break their DPDK app. I propose over time we provide images for multiple DPDK versions and let the user pick to suit their needs. I don't see these images or Dockerfiles requiring maintenance or support. They should be completely independent of Kube-OVN versions and releases.

    The install.sh script has been modified. It now takes a flag --with-dpdk=<version> so the user can indicate they want to install OVS-DPDK based on a particular version of DPDK. The required DPDK version determines the OVS version, and this is already built into the Docker image provided. The Kube-OVN version installed is still set at the top of the script as the VERSION variable. This should still be the case going forward; Kube-OVN and DPDK versions should operate independently of each other. However, it's something to watch. If future versions of Kube-OVN have a strong dependency on newer versions of OVS, then the older version of OVS used for DPDK may become an issue. We may have to update the install script so a user wanting an older version of DPDK has no choice but to use an older version of Kube-OVN that's known to be compatible. I don't foresee this being an issue, but one to watch as I said.

    New startup and healthcheck scripts added for OVS-DPDK.

  • TCP connection failed in Rocky Linux 8.6

    Expected Behavior

    TCP between PODs on the same host should work.

    Actual Behavior

    TCP between PODs on the same host does not work.

    Steps to Reproduce the Problem

    1. Create two PODs under subnet ovn-default of vpc ovn-cluster; the POD YAML is as follows:

       apiVersion: apps/v1
       kind: Deployment
       metadata:
         name: deploy
       spec:
         selector:
           matchLabels:
             app: deploy
         replicas: 2
         template:
           metadata:
             labels:
               app: deploy
             annotations:
               ovn.kubernetes.io/default_route: "true"
               ovn.kubernetes.io/logical_switch: ovn-default
           spec:
             nodeSelector:
               kubernetes.io/hostname: XXX
             containers:
             - name: centos
               image: centos:7
               command: ["bash","-c","sleep 365d"]
               imagePullPolicy: Always
             tolerations:
             - key: key
               value: value
               effect: NoSchedule
    2. Start a TCP server in one POD (POD-1) with the command: nc -l -t 12345
    3. Start a TCP client in the other POD (POD-2) with the command: ncat 172.10.0.97 12345
    4. The TCP client and TCP server cannot communicate.
    5. Capture the POD-2 veth traffic on the host, as shown below: image

    According to the packet capture, the TCP three-way handshake completes normally, but after the TCP client sends a packet, the TCP server replies from another port, 5511 (it should have been 12345)??

    As a result, the TCP connection is dropped.


    After testing:

    • 1.9.2 on "Rocky Linux 8.6 (Green Obsidian)" 4.18.0-372.9.1.el8.x86_64 does not show the above problem
    • 1.10.1 on "Rocky Linux 8.5 (Green Obsidian)" 4.18.0-348.23.1.el8_5.x86_64 does not show the above problem
    • 1.10.1 on "Rocky Linux 8.6 (Green Obsidian)" 4.18.0-372.9.1.el8.x86_64 shows the above problem
    • 1.10.1 on "Rocky Linux 8.6 (Green Obsidian)" 4.18.0-372.9.1.el8.x86_64 with a subnet outside vpc ovn-cluster does not show the above problem

    Additional Info

    • Kubernetes version:

      Output of kubectl version:

      Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.6", GitCommit:"ad3338546da947756e8a88aa6822e9c11e7eac22", GitTreeState:"clean", BuildDate:"2022-04-14T08:49:13Z", GoVersion:"go1.17.9", Compiler:"gc", Platform:"linux/amd64"}
      Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.6", GitCommit:"ad3338546da947756e8a88aa6822e9c11e7eac22", GitTreeState:"clean", BuildDate:"2022-04-14T08:43:11Z", GoVersion:"go1.17.9", Compiler:"gc", Platform:"linux/amd64"}
      
    • kube-ovn version:

      v1.10.1,commit 4935fa6adc8a0088b173603e819cec274996ed29
      
    • operation-system/kernel version:

      Output of awk -F '=' '/PRETTY_NAME/ { print $2 }' /etc/os-release: Output of uname -r:

      "Rocky Linux 8.6 (Green Obsidian)"
      4.18.0-372.9.1.el8.x86_64
      
  • Network policy E2E fails

    Expected Behavior

    Actual Behavior

    2022-12-30T09:53:48.0127828Z Summarizing 6 Failures:
    2022-12-30T09:53:48.0128460Z   [FAIL] [sig-network] Netpol NetworkPolicy between server and client [It] should allow ingress access from namespace on one named port [Feature:NetworkPolicy]
    2022-12-30T09:53:48.0128842Z   /home/runner/go/pkg/mod/k8s.io/[email protected]/test/e2e/network/netpol/test_helper.go:126
    2022-12-30T09:53:48.0129486Z   [FAIL] [sig-network] Netpol NetworkPolicy between server and client [It] should enforce except clause while egress access to server in CIDR block [Feature:NetworkPolicy]
    2022-12-30T09:53:48.0129860Z   /home/runner/go/pkg/mod/k8s.io/[email protected]/test/e2e/network/netpol/test_helper.go:126
    2022-12-30T09:53:48.0130500Z   [FAIL] [sig-network] Netpol NetworkPolicy between server and client [It] should allow ingress access on one named port [Feature:NetworkPolicy]
    2022-12-30T09:53:48.0130871Z   /home/runner/go/pkg/mod/k8s.io/[email protected]/test/e2e/network/netpol/test_helper.go:126
    2022-12-30T09:53:48.0131600Z   [FAIL] [sig-network] Netpol NetworkPolicy between server and client [It] should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy]
    2022-12-30T09:53:48.0131961Z   /home/runner/go/pkg/mod/k8s.io/[email protected]/test/e2e/network/netpol/test_helper.go:126
    2022-12-30T09:53:48.0132532Z   [FAIL] [sig-network] Netpol NetworkPolicy between server and client [It] should allow egress access to server in CIDR block [Feature:NetworkPolicy]
    2022-12-30T09:53:48.0132900Z   /home/runner/go/pkg/mod/k8s.io/[email protected]/test/e2e/network/netpol/test_helper.go:126
    2022-12-30T09:53:48.0133714Z   [FAIL] [sig-network] Netpol NetworkPolicy between server and client [It] should allow egress access on one named port [Feature:NetworkPolicy]
    2022-12-30T09:53:48.0134091Z   /home/runner/go/pkg/mod/k8s.io/[email protected]/test/e2e/network/netpol/test_helper.go:126
    

    Steps to Reproduce the Problem

    Additional Info

    • Kubernetes version:

      Output of kubectl version:

      (paste your output here)
      
    • kube-ovn version:

      (paste your output here)
      
    • operation-system/kernel version:

      Output of awk -F '=' '/PRETTY_NAME/ { print $2 }' /etc/os-release: Output of uname -r:

      (paste your output here)
      
  • OVN IC E2E fails

    Expected Behavior

    Actual Behavior

    The OVN IC E2E case "should be able to update az name" fails.

    Steps to Reproduce the Problem

    Additional Info

    • Kubernetes version:

      Output of kubectl version:

      (paste your output here)
      
    • kube-ovn version:

      (paste your output here)
      
    • operation-system/kernel version:

      Output of awk -F '=' '/PRETTY_NAME/ { print $2 }' /etc/os-release: Output of uname -r:

      (paste your output here)
      
  • add u2o test case

    What type of this PR

    Examples of user facing changes:

    • Features
    • Bug fixes
    • Docs
    • Tests

    Which issue(s) this PR fixes:

    Fixes #2050

  • request ip return 500 no address allocated to pod $POD_NAME provider ovn, please see kube-ovn-controller logs to find errors

    Expected Behavior

    The pod can be successfully created and started.

    Actual Behavior

    The nginx pod events show:

      Warning  FailedCreatePodSandBox  4m7s (x92 over 54m)  kubelet            (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "b0ce178c18403cbe12d02b36ef6588e2cd6c1dfc53703d1c35b23ada614d5282": plugin type="kube-ovn" failed (add): RPC failed; request ip return 500 no address allocated to pod default/nginx-5b4664bcd4-hk4s6 provider ovn, please see kube-ovn-controller logs to find errors
    

    The kube-ovn-controller logs show:

    I1222 17:59:54.688648       7 pod.go:319] handle add pod default/nginx-5b4664bcd4-hk4s6
    I1222 17:59:54.688693       7 ipam.go:51] allocate v4 10.16.0.16 v6  mac 00:00:00:0F:9C:A4 for default/nginx-5b4664bcd4-hk4s6 from subnet ovn-default
    E1222 17:59:54.712915       7 pod.go:331] error syncing 'default/nginx-5b4664bcd4-hk4s6': map: map[] does not contain declared merge key: name, requeuing
    I1222 17:59:54.718920       7 pod.go:319] handle add pod default/nginx-5b4664bcd4-hk4s6
    I1222 17:59:54.718973       7 ipam.go:51] allocate v4 10.16.0.16 v6  mac 00:00:00:0F:9C:A4 for default/nginx-5b4664bcd4-hk4s6 from subnet ovn-default
    E1222 17:59:54.732737       7 pod.go:331] error syncing 'default/nginx-5b4664bcd4-hk4s6': map: map[] does not contain declared merge key: name, requeuing
    I1222 17:59:54.743283       7 pod.go:319] handle add pod default/nginx-5b4664bcd4-hk4s6
    I1222 17:59:54.743350       7 ipam.go:51] allocate v4 10.16.0.16 v6  mac 00:00:00:0F:9C:A4 for default/nginx-5b4664bcd4-hk4s6 from subnet ovn-default
    E1222 17:59:54.754261       7 pod.go:331] error syncing 'default/nginx-5b4664bcd4-hk4s6': map: map[] does not contain declared merge key: name, requeuing
    I1222 17:59:54.774560       7 pod.go:319] handle add pod default/nginx-5b4664bcd4-hk4s6
    I1222 17:59:54.774602       7 ipam.go:51] allocate v4 10.16.0.16 v6  mac 00:00:00:0F:9C:A4 for default/nginx-5b4664bcd4-hk4s6 from subnet ovn-default
    E1222 17:59:54.780337       7 pod.go:331] error syncing 'default/nginx-5b4664bcd4-hk4s6': map: map[] does not contain declared merge key: name, requeuing
    I1222 17:59:54.820879       7 pod.go:319] handle add pod default/nginx-5b4664bcd4-hk4s6
    I1222 17:59:54.820938       7 ipam.go:51] allocate v4 10.16.0.16 v6  mac 00:00:00:0F:9C:A4 for default/nginx-5b4664bcd4-hk4s6 from subnet ovn-default
    E1222 17:59:54.827388       7 pod.go:331] error syncing 'default/nginx-5b4664bcd4-hk4s6': map: map[] does not contain declared merge key: name, requeuing
    I1222 17:59:54.908164       7 pod.go:319] handle add pod default/nginx-5b4664bcd4-hk4s6
    I1222 17:59:54.908213       7 ipam.go:51] allocate v4 10.16.0.16 v6  mac 00:00:00:0F:9C:A4 for default/nginx-5b4664bcd4-hk4s6 from subnet ovn-default
    E1222 17:59:54.913282       7 pod.go:331] error syncing 'default/nginx-5b4664bcd4-hk4s6': map: map[] does not contain declared merge key: name, requeuing
    I1222 17:59:55.073611       7 pod.go:319] handle add pod default/nginx-5b4664bcd4-hk4s6
    I1222 17:59:55.073693       7 ipam.go:51] allocate v4 10.16.0.16 v6  mac 00:00:00:0F:9C:A4 for default/nginx-5b4664bcd4-hk4s6 from subnet ovn-default
    E1222 17:59:55.079675       7 pod.go:331] error syncing 'default/nginx-5b4664bcd4-hk4s6': map: map[] does not contain declared merge key: name, requeuing
    I1222 17:59:55.400343       7 pod.go:319] handle add pod default/nginx-5b4664bcd4-hk4s6
    I1222 17:59:55.400407       7 ipam.go:51] allocate v4 10.16.0.16 v6  mac 00:00:00:0F:9C:A4 for default/nginx-5b4664bcd4-hk4s6 from subnet ovn-default
    E1222 17:59:55.406840       7 pod.go:331] error syncing 'default/nginx-5b4664bcd4-hk4s6': map: map[] does not contain declared merge key: name, requeuing
    I1222 17:59:56.046868       7 pod.go:319] handle add pod default/nginx-5b4664bcd4-hk4s6
    I1222 17:59:56.046925       7 ipam.go:51] allocate v4 10.16.0.16 v6  mac 00:00:00:0F:9C:A4 for default/nginx-5b4664bcd4-hk4s6 from subnet ovn-default
    E1222 17:59:56.052742       7 pod.go:331] error syncing 'default/nginx-5b4664bcd4-hk4s6': map: map[] does not contain declared merge key: name, requeuing
    I1222 17:59:57.333755       7 pod.go:319] handle add pod default/nginx-5b4664bcd4-hk4s6
    I1222 17:59:57.333811       7 ipam.go:51] allocate v4 10.16.0.16 v6  mac 00:00:00:0F:9C:A4 for default/nginx-5b4664bcd4-hk4s6 from subnet ovn-default
    E1222 17:59:57.339608       7 pod.go:331] error syncing 'default/nginx-5b4664bcd4-hk4s6': map: map[] does not contain declared merge key: name, requeuing
    I1222 17:59:59.899676       7 pod.go:319] handle add pod default/nginx-5b4664bcd4-hk4s6
    I1222 17:59:59.899717       7 ipam.go:51] allocate v4 10.16.0.16 v6  mac 00:00:00:0F:9C:A4 for default/nginx-5b4664bcd4-hk4s6 from subnet ovn-default
    E1222 17:59:59.906024       7 pod.go:331] error syncing 'default/nginx-5b4664bcd4-hk4s6': map: map[] does not contain declared merge key: name, requeuing
    I1222 18:00:05.026723       7 pod.go:319] handle add pod default/nginx-5b4664bcd4-hk4s6
    I1222 18:00:05.026782       7 ipam.go:51] allocate v4 10.16.0.16 v6  mac 00:00:00:0F:9C:A4 for default/nginx-5b4664bcd4-hk4s6 from subnet ovn-default
    E1222 18:00:05.032407       7 pod.go:331] error syncing 'default/nginx-5b4664bcd4-hk4s6': map: map[] does not contain declared merge key: name, requeuing
    I1222 18:00:15.272715       7 pod.go:319] handle add pod default/nginx-5b4664bcd4-hk4s6
    I1222 18:00:15.272809       7 ipam.go:51] allocate v4 10.16.0.16 v6  mac 00:00:00:0F:9C:A4 for default/nginx-5b4664bcd4-hk4s6 from subnet ovn-default
    E1222 18:00:15.279510       7 pod.go:331] error syncing 'default/nginx-5b4664bcd4-hk4s6': map: map[] does not contain declared merge key: name, requeuing
    I1222 18:00:35.760030       7 pod.go:319] handle add pod default/nginx-5b4664bcd4-hk4s6
    I1222 18:00:35.760102       7 ipam.go:51] allocate v4 10.16.0.16 v6  mac 00:00:00:0F:9C:A4 for default/nginx-5b4664bcd4-hk4s6 from subnet ovn-default
    E1222 18:00:35.766227       7 pod.go:331] error syncing 'default/nginx-5b4664bcd4-hk4s6': map: map[] does not contain declared merge key: name, requeuing
    I1222 18:01:16.727042       7 pod.go:319] handle add pod default/nginx-5b4664bcd4-hk4s6
    I1222 18:01:16.727101       7 ipam.go:51] allocate v4 10.16.0.16 v6  mac 00:00:00:0F:9C:A4 for default/nginx-5b4664bcd4-hk4s6 from subnet ovn-default
    E1222 18:01:16.732675       7 pod.go:331] error syncing 'default/nginx-5b4664bcd4-hk4s6': map: map[] does not contain declared merge key: name, requeuing
    I1222 18:02:38.652971       7 pod.go:319] handle add pod default/nginx-5b4664bcd4-hk4s6
    I1222 18:02:38.653040       7 ipam.go:51] allocate v4 10.16.0.16 v6  mac 00:00:00:0F:9C:A4 for default/nginx-5b4664bcd4-hk4s6 from subnet ovn-default
    E1222 18:02:38.659001       7 pod.go:331] error syncing 'default/nginx-5b4664bcd4-hk4s6': map: map[] does not contain declared merge key: name, requeuing
    I1222 18:05:22.499650       7 pod.go:319] handle add pod default/nginx-5b4664bcd4-hk4s6
    I1222 18:05:22.499856       7 ipam.go:51] allocate v4 10.16.0.16 v6  mac 00:00:00:0F:9C:A4 for default/nginx-5b4664bcd4-hk4s6 from subnet ovn-default
    E1222 18:05:22.507852       7 pod.go:331] error syncing 'default/nginx-5b4664bcd4-hk4s6': map: map[] does not contain declared merge key: name, requeuing
    I1222 18:06:26.045239       7 gc.go:356] gc logical switch port nginx-5b4664bcd4-hk4s6.default
    I1222 18:06:26.045244       7 ovn-nbctl-legacy.go:112] delete lsp nginx-5b4664bcd4-hk4s6.default
    I1222 18:06:26.057211       7 subnet.go:460] release v4 10.16.0.16 mac 00:00:00:0F:9C:A4 for default/nginx-5b4664bcd4-hk4s6, add ip to released list
    I1222 18:10:50.188109       7 pod.go:319] handle add pod default/nginx-5b4664bcd4-hk4s6
    I1222 18:10:50.188184       7 ipam.go:51] allocate v4 10.16.0.23 v6  mac 00:00:00:8C:14:94 for default/nginx-5b4664bcd4-hk4s6 from subnet ovn-default
    E1222 18:10:50.203880       7 pod.go:331] error syncing 'default/nginx-5b4664bcd4-hk4s6': map: map[] does not contain declared merge key: name, requeuing
    I1222 18:18:26.181381       7 gc.go:356] gc logical switch port nginx-5b4664bcd4-hk4s6.default
    I1222 18:18:26.181386       7 ovn-nbctl-legacy.go:112] delete lsp nginx-5b4664bcd4-hk4s6.default
    I1222 18:18:26.193591       7 subnet.go:460] release v4 10.16.0.23 mac 00:00:00:8C:14:94 for default/nginx-5b4664bcd4-hk4s6, add ip to released list
    I1222 18:21:45.563920       7 pod.go:319] handle add pod default/nginx-5b4664bcd4-hk4s6
    I1222 18:21:45.563998       7 ipam.go:51] allocate v4 10.16.0.30 v6  mac 00:00:00:65:A9:91 for default/nginx-5b4664bcd4-hk4s6 from subnet ovn-default
    E1222 18:21:45.578536       7 pod.go:331] error syncing 'default/nginx-5b4664bcd4-hk4s6': map: map[] does not contain declared merge key: name, requeuing
    I1222 18:30:26.316763       7 gc.go:356] gc logical switch port nginx-5b4664bcd4-hk4s6.default
    I1222 18:30:26.316768       7 ovn-nbctl-legacy.go:112] delete lsp nginx-5b4664bcd4-hk4s6.default
    I1222 18:30:26.345989       7 subnet.go:460] release v4 10.16.0.30 mac 00:00:00:65:A9:91 for default/nginx-5b4664bcd4-hk4s6, add ip to released list
    I1222 18:38:25.578563       7 pod.go:319] handle add pod default/nginx-5b4664bcd4-hk4s6
    I1222 18:38:25.578644       7 ipam.go:51] allocate v4 10.16.0.36 v6  mac 00:00:00:03:20:2B for default/nginx-5b4664bcd4-hk4s6 from subnet ovn-default
    E1222 18:38:25.593687       7 pod.go:331] error syncing 'default/nginx-5b4664bcd4-hk4s6': map: map[] does not contain declared merge key: name, requeuing
    I1222 18:48:26.445566       7 gc.go:356] gc logical switch port nginx-5b4664bcd4-hk4s6.default
    I1222 18:48:26.445582       7 ovn-nbctl-legacy.go:112] delete lsp nginx-5b4664bcd4-hk4s6.default
    I1222 18:48:26.456952       7 subnet.go:460] release v4 10.16.0.36 mac 00:00:00:03:20:2B for default/nginx-5b4664bcd4-hk4s6, add ip to released list
    

    Steps to Reproduce the Problem

    I don't know how to reproduce it yet. This is my first try with kube-ovn.

    Additional Info

    • Kubernetes version:
    1.26
    
    • kube-ovn version:
    v1.11.0
    
    • operation-system/kernel version:

      Output of awk -F '=' '/PRETTY_NAME/ { print $2 }' /etc/os-release: Output of uname -r:

    Ubuntu 20.04.4 LTS
    5.4.0-100-generic
    
  • Binding an EIP to a pod makes the current network unavailable

    I used the pod-bind-EIP feature, configured by following the relevant documentation section.

    image

    After applying this ConfigMap, all current sessions are interrupted... and I cannot connect any more.

    I can only log in via the management network segment and revert this ConfigMap before I can log in again.

    The official documentation has the following note about this feature:

    image

    Does the node's virtual network switch have to be Open vSwitch? And does a NIC need to be bridged into the br-external bridge?
