
Hybridnet


What is Hybridnet?

Hybridnet is an open source container networking solution integrated with Kubernetes and officially used by the following well-known PaaS platforms:

  • ACK Distro of Alibaba Cloud
  • AECP of Alibaba Cloud
  • SOFAStack of Ant Financial Co.

Hybridnet focuses on large scale, user-friendliness and heterogeneous infrastructure; hundreds of clusters are now running hybridnet all over the world.

Features

  • Flexible network models: a three-level model of Network, Subnet and IPInstance, all implemented as CRDs
  • DualStack: three optional modes, IPv4Only, IPv6Only and DualStack
  • Hybrid network fabric: supports overlay and underlay pods at the same time
  • Advanced IPAM: Network/Subnet/IPInstance assignment; IP retention for stateful workloads
  • Kube-proxy friendly: works well with iptables-mode kube-proxy
  • ARM support: runs on both x86_64 and arm64 architectures

Contributing

Hybridnet welcomes contributions, including bug reports, feature requests and documentation improvements. If you want to contribute, please start with CONTRIBUTING.md.

Contact

For any questions about hybridnet, please reach us via:

  • Slack: #general on the hybridnet slack
  • DingTalk: Group No.35109308
  • E-mail: private or security issues should be reported via e-mail addresses listed in the MAINTAINERS file

License

Apache 2.0 License

Owner

Alibaba Open Source
Comments
  • feat: support specify mac addr via annotation

    Pull Request Description

    Describe what this PR does / why we need it

    Introduce a new feature: support specifying MAC address via pod annotation.

    Does this pull request fix one issue?

    Fixes #190

    Describe how you did it

    1. Verify the validity of the specified MAC address in the validating webhook (a rough sketch of such a check follows below);
    2. Check whether the specified MAC address conflicts with existing IP instances.
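
    For illustration only, here is a minimal Go sketch of the kind of check such a validating webhook could perform, using the standard library's net.ParseMAC. The function name and the conflict map are hypothetical, not hybridnet's actual code.

    ```go
    package main

    import (
        "fmt"
        "net"
    )

    // validateSpecifiedMAC is a hypothetical sketch of a webhook-style check:
    // parse the annotation value and reject MACs already bound to an IP instance.
    func validateSpecifiedMAC(annotation string, usedMACs map[string]string) error {
        hw, err := net.ParseMAC(annotation)
        if err != nil {
            return fmt.Errorf("invalid MAC address %q: %v", annotation, err)
        }
        if owner, ok := usedMACs[hw.String()]; ok {
            return fmt.Errorf("MAC address %s already in use by %s", hw, owner)
        }
        return nil
    }

    func main() {
        used := map[string]string{"00:16:3e:12:34:56": "default/ipinstance-a"}
        fmt.Println(validateSpecifiedMAC("00:16:3e:ab:cd:ef", used)) // <nil>
        fmt.Println(validateSpecifiedMAC("not-a-mac", used))         // error
    }
    ```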

    Describe how to verify it

    Specify a static MAC address via the annotation "networking.alibaba.com/specified-mac" on a Pod.

    Special notes for reviews

  • bugfix: iptables compatibility for nftables-based hosts

    Pull Request Description

    Describe what this PR does / why we need it

    The iptables tooling on the host runs into an nftables compatibility problem, e.g. on CentOS 8.

    kube-proxy and Cilium have already solved this problem, so we should too.

    Does this pull request fix one issue?

    Fixes #29

    Describe how you did it

    COPY and RUN iptables-wrapper-install.sh in the Dockerfile.

    The shell script comes from: https://github.com/kubernetes-sigs/iptables-wrappers/
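
    For context, the wrapper script works by detecting at container start which iptables backend the host actually uses and then pointing the iptables commands at that mode. The Go sketch below only illustrates the detection heuristic (comparing how many entries iptables-legacy-save and iptables-nft-save report); the real logic lives in the shell script linked above.

    ```go
    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    // countEntries runs the given save command and counts chain (":") and
    // rule ("-A ...") lines; errors are treated as zero for this heuristic.
    func countEntries(cmd string) int {
        out, err := exec.Command(cmd).Output()
        if err != nil {
            return 0
        }
        n := 0
        for _, line := range bytes.Split(out, []byte("\n")) {
            if bytes.HasPrefix(line, []byte(":")) || bytes.HasPrefix(line, []byte("-")) {
                n++
            }
        }
        return n
    }

    func main() {
        legacy := countEntries("iptables-legacy-save")
        nft := countEntries("iptables-nft-save")
        mode := "legacy"
        if nft > legacy {
            mode = "nft"
        }
        // A real wrapper would now re-point the iptables/iptables-save/
        // iptables-restore symlinks at the chosen backend.
        fmt.Println("detected iptables backend:", mode)
    }
    ```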

    Describe how to verify it

    Pull the image and deploy it on a k8s cluster with both iptables-legacy based hosts and iptables-nft based hosts.

    Run sudo iptables-save | grep RAMA; you should see related entries on both types of hosts.

    Special notes for reviews

  • Support specify mac pool for stateful workloads

    Pull Request Description

    Describe what this PR does / why we need it

    Support specifying a MAC address pool via pod annotation for stateful workloads.

    Does this pull request fix one issue?

    Fixes #190

    Describe how you did it

    1. Verify the validity of the specified MAC addresses in the validating webhook;
    2. Introduce SpecifiedMACAddress as one of the couple/recouple options;
    3. Check whether the specified MAC addresses conflict with existing IP instances.

    Describe how to verify it

    Similar to IP pool assignment, which is described in the wiki docs. Just specify MAC addresses, separated by commas, via the annotation networking.alibaba.com/mac-pool on stateful workload pod templates.
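
    A minimal sketch, with a hypothetical helper name, of how such a comma-separated pool could be parsed and mapped to a StatefulSet ordinal; it only illustrates the annotation format, not hybridnet's actual assignment logic.

    ```go
    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    // macForOrdinal parses the comma-separated MAC pool annotation value and
    // returns the address reserved for the given StatefulSet ordinal.
    func macForOrdinal(pool string, ordinal int) (net.HardwareAddr, error) {
        macs := strings.Split(pool, ",")
        if ordinal < 0 || ordinal >= len(macs) {
            return nil, fmt.Errorf("ordinal %d out of range for pool of size %d", ordinal, len(macs))
        }
        hw, err := net.ParseMAC(strings.TrimSpace(macs[ordinal]))
        if err != nil {
            return nil, fmt.Errorf("invalid MAC %q in pool: %v", macs[ordinal], err)
        }
        return hw, nil
    }

    func main() {
        pool := "00:16:3e:00:00:01,00:16:3e:00:00:02,00:16:3e:00:00:03"
        hw, err := macForOrdinal(pool, 1)
        fmt.Println(hw, err) // 00:16:3e:00:00:02 <nil>
    }
    ```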

    Special notes for reviews

  • change Kube-ovn to Kube-OVN and fix grammar issues

    Pull Request Description

    Describe what this PR does / why we need it

    Change Kube-ovn to Kube-OVN and fix grammar issues

    Does this pull request fix one issue?

    No

    Describe how you did it

    No

    Describe how to verify it

    Doc change

    Special notes for reviews

    Also clarify that Kube-OVN has a default VPC and Subnet that can be used by workloads in all namespaces. Even if users know nothing about the VPC and the Subnet, the default settings make everything work just like Flannel.

  • iptables not correctly configured on CentOS 8 host

    Bug Report

    Type: bug report

    What happened

    It might be a compatibility problem.

    I set up a k8s cluster with CentOS 8 nodes (which link iptables to nftables) and ran pods on Overlay and Underlay networks. On the host machines, lsmod | grep ip_tables shows that ip_tables is used by iptable_nat, iptable_mangle, and iptable_filter.

    After checking the logs of the daemon pods, I believe the iptables rules are written without error. But on the host machines, no rama-related iptables rules show up in either iptables-save or nft list ruleset.

    It should also be mentioned that iptables-save warns that there are more rules in iptables-legacy.

    What you expected to happen

    I should observe rama-related rules in iptables-save.

    How to reproduce it (as minimally and precisely as possible)

    Set up a k8s cluster with CentOS 8 nodes. Install rama and run several Overlay/Underlay pods.

    Anything else we need to know?

    CentOS 8 removes iptables from packages and links it to nftables.

    Kube-proxy works perfectly.

    Environment

    • rama version: v1
    • OS (e.g. cat /etc/os-release): CentOS 8
    • Kernel (e.g. uname -a): Linux 4.18.0-305.7.1.el8_4.x86_6
    • Kubernetes version: v1.21.
  • Update network.go

    As mentioned in issue #350, vlanId should not be greater than 4094.

    Describe what this PR does / why we need it

    vlanId range bugfix

    Does this pull request fix one issue?

    Fixes #350

    Describe how you did it

    changed 4096 to 4094
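
    For reference, 802.1Q reserves VLAN ID 4095 (and uses 0 for untagged traffic), so 4094 is the largest assignable ID. A minimal sketch of the corrected bounds check, with a hypothetical function name rather than the actual code in network.go:

    ```go
    package main

    import "fmt"

    // validVlanID reports whether id is inside the usable 802.1Q range.
    // 4095 is reserved, hence the 4094 upper bound fixed by this PR; whether
    // 0 (untagged) is accepted is left to the caller's semantics.
    func validVlanID(id int32) bool {
        return id >= 0 && id <= 4094
    }

    func main() {
        fmt.Println(validVlanID(4094)) // true
        fmt.Println(validVlanID(4095)) // false
    }
    ```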

    Describe how to verify it

    Special notes for reviews

  • fix arm build error

    Pull Request Description

    Describe what this PR does / why we need it

    Does this pull request fix one issue?

    Describe how you did it

    Describe how to verify it

    Special notes for reviews

  • introduce typha for large scale

    Pull Request Description

    Describe what this PR does / why we need it

    1. Use the Calico Typha daemon to increase scale and reduce the impact on the datastore.
    2. Add an init container to clean up Felix iptables rules automatically when policy is disabled.

    Does this pull request fix one issue?

    Describe how you did it

    Describe how to verify it

    Special notes for reviews

  • update chart for large scale

    Pull Request Description

    Describe what this PR does / why we need it

    1. Make the liveness probe of the daemon configurable.
    2. Make the metrics port of the manager configurable.
    3. Introduce the manager's performance parameters to the chart for large scale.

    Does this pull request fix one issue?

    NONE

    Describe how you did it

    Describe how to verify it

    Special notes for reviews

  • make cni conf configurable

    Pull Request Description

    Describe what this PR does / why we need it

    1. Rename the default cni conf file from /etc/cni/net.d/00-hybridnet.conflist to /etc/cni/net.d/06-hybridnet.conflist.
    2. Remove the bandwidth plugin from the cni conf.
    3. Introduce some environment variables for the install-cni container of the daemon (see the sketch after this list):
      • Use CNI_CONF_NAME to rename the cni conf file, e.g. 04-hybridnet.conflist.
      • Use NEEDED_COMMUNITY_CNI_PLUGINS to specify which community cni plugins hybridnet should copy from inside the container to the host's /opt/cni/bin/ directory, e.g. loopback,bandwidth.
      • Use CNI_CONF_SRC to specify which file in the container needs to be copied to the host's /etc/cni/net.d directory; with this, users can mount a custom cni conf file (e.g. using a ConfigMap as a volume) into the install-cni container and distribute it to every node.
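
    The Go sketch below is for illustration only and shows how an install step could consume these variables; the actual install-cni logic is a container entrypoint, and everything here apart from the three variable names, the default conf name and the host paths listed above (notably the /cni-plugins source directory) is an assumption.

    ```go
    package main

    import (
        "fmt"
        "io"
        "os"
        "path/filepath"
        "strings"
    )

    func copyFile(src, dst string) error {
        in, err := os.Open(src)
        if err != nil {
            return err
        }
        defer in.Close()
        out, err := os.Create(dst)
        if err != nil {
            return err
        }
        defer out.Close()
        _, err = io.Copy(out, in)
        return err
    }

    func main() {
        confName := os.Getenv("CNI_CONF_NAME")
        if confName == "" {
            confName = "06-hybridnet.conflist" // default name after this PR
        }
        // Copy the requested community plugins from the image (the /cni-plugins
        // source directory is a made-up example path) to the host bin directory.
        for _, p := range strings.Split(os.Getenv("NEEDED_COMMUNITY_CNI_PLUGINS"), ",") {
            if p = strings.TrimSpace(p); p != "" {
                _ = copyFile(filepath.Join("/cni-plugins", p), filepath.Join("/opt/cni/bin", p))
            }
        }
        // Optionally distribute a custom conf file mounted into the container.
        if src := os.Getenv("CNI_CONF_SRC"); src != "" {
            _ = copyFile(src, filepath.Join("/etc/cni/net.d", confName))
        }
        fmt.Println("cni conf installed as", confName)
    }
    ```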

    Does this pull request fix one issue?

    Describe how you did it

    Describe how to verify it

    Special notes for reviews

  • enhanced address is unexpectedly used from node to local pods

    Bug Report

    Type: bug report

    What happened

    The enhanced address is being used when we ping local pods from the node, which causes the ICMP reply to never come back.

    What you expected to happen

    How to reproduce it (as minimally and precisely as possible)

    Anything else we need to know?

    Environment

    • hybridnet version: 3.2.0
    • OS (e.g. cat /etc/os-release):
    • Kernel (e.g. uname -a):
    • Kubernetes version:
    • Install tools:
    • Others:
  • Support IP multicast

    Issue Description

    Type: feature request

    Describe what feature you want

    1. Support IP multicast in the overlay network
    2. Support IP multicast between underlay network pods and the underlying network

    Additional context

  • add a "can-reach" method to choose host NIC

    Issue Description

    Type: feature request

    Describe what feature you want

    Add a "can-reach" parameter for the daemon to choose the host NIC, just like Calico does.
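
    The usual way to implement a can-reach check is to "dial" a UDP socket toward the reachability target and let the kernel's routing table pick the local source address, then map that address back to an interface. A rough Go sketch of that idea (not Calico's or hybridnet's implementation):

    ```go
    package main

    import (
        "fmt"
        "net"
    )

    // nicCanReach returns the host interface whose address the kernel would use
    // to reach target (e.g. "8.8.8.8:53"). No packet is sent; UDP "dialing"
    // only consults the routing table.
    func nicCanReach(target string) (string, net.IP, error) {
        conn, err := net.Dial("udp", target)
        if err != nil {
            return "", nil, err
        }
        defer conn.Close()
        local := conn.LocalAddr().(*net.UDPAddr).IP

        ifaces, err := net.Interfaces()
        if err != nil {
            return "", nil, err
        }
        for _, iface := range ifaces {
            addrs, err := iface.Addrs()
            if err != nil {
                continue
            }
            for _, a := range addrs {
                if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.Equal(local) {
                    return iface.Name, local, nil
                }
            }
        }
        return "", local, fmt.Errorf("no interface owns %s", local)
    }

    func main() {
        name, ip, err := nicCanReach("8.8.8.8:53")
        fmt.Println(name, ip, err)
    }
    ```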

    Additional context

  • choose preferred host interface by subnet

    Issue Description

    Type: feature request

    Describe what feature you want

    Currently hybridnet chooses the preferred host interface through two flags, which are not flexible enough. We need a new CRD that can be attached to subnets.

    Additional context

  • Multi-tenancy

    Issue Description

    Type: feature request

    Describe what feature you want

    Multi-tenancy is a common topic in Kubernetes, and some container networking solutions already have related abilities. If this is done, the following advantages will be gained:

    • Container networking isolation between tenants
    • Allow IPAM conflicts between tenants

    Additional context
