Seesaw v2

Note: This is not an official Google product.

About

Seesaw v2 is a Linux Virtual Server (LVS) based load balancing platform.

It is capable of providing basic load balancing for servers that are on the same network, through to advanced load balancing functionality such as anycast, Direct Server Return (DSR), support for multiple VLANs and centralised configuration.

Above all, it is designed to be reliable and easy to maintain.

Requirements

A Seesaw v2 load balancing cluster requires two Seesaw nodes - these can be physical machines or virtual instances. Each node must have two network interfaces - one for the host itself and the other for the cluster VIP. All four interfaces should be connected to the same layer 2 network.

Building

Seesaw v2 is developed in Go and depends on several Go packages, which are listed in the go get commands below.

Additionally, there is a compile-time and runtime dependency on libnl.

On a Debian/Ubuntu style system, you should be able to prepare for building by running:

apt-get install golang
apt-get install libnl-3-dev libnl-genl-3-dev

If your distro ships a Go version older than 1.5, you may need to fetch a newer release from https://golang.org/dl/.

If you are running Go 1.11 or later, you can use Go modules to avoid installing the Go packages manually. As of Go 1.12, GO111MODULE defaults to auto, so remember to enable modules with export GO111MODULE=on.
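With modules enabled, a build could look like the following (a sketch only, assuming the repository ships a go.mod file and that you build from a clone of the source tree):

export GO111MODULE=on
git clone https://github.com/google/seesaw.git
cd seesaw
make test
make install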

If you are running a Go version older than 1.11, or you don't want to enable GO111MODULE, set GOPATH to an appropriate location (for example ~/go) and then run:

go get -u golang.org/x/crypto/ssh
go get -u github.com/dlintw/goconf
go get -u github.com/golang/glog
go get -u github.com/miekg/dns
go get -u github.com/kylelemons/godebug/pretty
go get -u github.com/golang/protobuf/proto

Ensure that ${GOPATH}/bin is in your ${PATH}, then run the following from the seesaw directory:

make test
make install

If you wish to regenerate the protobuf code, the protobuf compiler is needed:

apt-get install protobuf-compiler

The protobuf code can then be regenerated with:

make proto

Installing

After make install has run successfully, there should be a number of binaries in ${GOPATH}/bin with a seesaw_ prefix. Install these to the appropriate locations:

SEESAW_BIN="/usr/local/seesaw"
SEESAW_ETC="/etc/seesaw"
SEESAW_LOG="/var/log/seesaw"

INIT=`ps -p 1 -o comm=`

install -d "${SEESAW_BIN}" "${SEESAW_ETC}" "${SEESAW_LOG}"

install "${GOPATH}/bin/seesaw_cli" /usr/bin/seesaw

for component in {ecu,engine,ha,healthcheck,ncc,watchdog}; do
  install "${GOPATH}/bin/seesaw_${component}" "${SEESAW_BIN}"
done

if [ $INIT = "init" ]; then
  install "etc/init/seesaw_watchdog.conf" "/etc/init"
elif [ $INIT = "systemd" ]; then
  install "etc/systemd/system/seesaw_watchdog.service" "/etc/systemd/system"
  systemctl --system daemon-reload
fi
install "etc/seesaw/watchdog.cfg" "${SEESAW_ETC}"

# Enable CAP_NET_RAW for seesaw binaries that require raw sockets.
/sbin/setcap cap_net_raw+ep "${SEESAW_BIN}/seesaw_ha"
/sbin/setcap cap_net_raw+ep "${SEESAW_BIN}/seesaw_healthcheck"

The setcap binary can be found in the libcap2-bin package on Debian/Ubuntu.

Configuring

Each node needs a /etc/seesaw/seesaw.cfg configuration file, which provides information about the node and who its peer is. Additionally, each load balancing cluster needs a cluster configuration, which is in the form of a text-based protobuf - this is stored in /etc/seesaw/cluster.pb.

An example seesaw.cfg file can be found in etc/seesaw/seesaw.cfg.example - a minimal seesaw.cfg provides the following:

  • anycast_enabled - True if anycast should be enabled for this cluster.
  • name - The short name of this cluster.
  • node_ipv4 - The IPv4 address of this Seesaw node.
  • peer_ipv4 - The IPv4 address of our peer Seesaw node.
  • vip_ipv4 - The IPv4 address for this cluster VIP.

The VIP floats between the Seesaw nodes and is only active on the current master. This address needs to be allocated within the same netblock as both the node IP address and peer IP address.
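For illustration only, a minimal seesaw.cfg along these lines might look as follows (hypothetical names and addresses; etc/seesaw/seesaw.cfg.example remains authoritative):

[cluster]
anycast_enabled = false
name = example-cluster
node_ipv4 = 192.168.10.2
peer_ipv4 = 192.168.10.3
vip_ipv4 = 192.168.10.1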

An example cluster.pb file can be found in etc/seesaw/cluster.pb.example - a minimal cluster.pb contains a seesaw_vip entry and two node entries. For each service that you want to load balance, a separate vserver entry is needed, with one or more vserver_entry sections (one per port/proto pair), one or more backends and one or more healthchecks. Further information is available in the protobuf definition - see pb/config/config.proto.
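As a rough sketch only (hypothetical hosts and addresses; the bundled example and pb/config/config.proto remain authoritative), a minimal cluster.pb might look like:

seesaw_vip: <
  fqdn: "seesaw-vip.example.com."
  ipv4: "192.168.10.1/24"
  status: PRODUCTION
>
node: <
  fqdn: "seesaw1.example.com."
  ipv4: "192.168.10.2/24"
  status: PRODUCTION
>
node: <
  fqdn: "seesaw2.example.com."
  ipv4: "192.168.10.3/24"
  status: PRODUCTION
>
vserver: <
  name: "syslog.example.com"
  entry_address: <
    fqdn: "syslog.example.com."
    ipv4: "192.168.10.10/24"
    status: PRODUCTION
  >
  rp: "someone@example.com"
  vserver_entry: <
    protocol: UDP
    port: 514
    scheduler: RR
    healthcheck: <
      type: ICMP_PING
      interval: 5
      timeout: 3
      retries: 1
    >
  >
  backend: <
    host: <
      fqdn: "backend1.example.com."
      ipv4: "192.168.10.20/24"
      status: PRODUCTION
    >
    weight: 1
  >
>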

On an upstart based system, running restart seesaw_watchdog will start (or restart) the watchdog process, which will in turn start the other components.

Anycast

Seesaw v2 provides full support for anycast VIPs - that is, it will advertise an anycast VIP when it becomes available and will withdraw the anycast VIP if it becomes unavailable. For this to work the Quagga BGP daemon needs to be installed and configured, with the BGP peers accepting host-specific routes that are advertised from the Seesaw nodes within the anycast range (currently hardcoded as 192.168.255.0/24).
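The BGP configuration itself is site-specific. As a hedged sketch, a Quagga bgpd.conf fragment on a Seesaw node might look something like the following, where the ASN, router-id and neighbor address are placeholders for your own network; the upstream router must also be configured to accept the host routes advertised from within the anycast range.

! /etc/quagga/bgpd.conf (sketch only)
router bgp 64512
 bgp router-id 192.168.10.2
 neighbor 192.168.10.254 remote-as 64512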

Command Line

Once initial configuration has been performed and the Seesaw components are running, the state of the Seesaw can be viewed and controlled via the Seesaw command line interface. Running seesaw (assuming /usr/bin is in your path) will give you an interactive prompt - type ? for a list of top level commands. A quick summary:

  • config reload - reload the cluster.pb from the current config source.
  • failover - failover between the Seesaw nodes.
  • show vservers - list all vservers configured on this cluster.
  • show vserver <name> - show the current state for the named vserver.
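For example, a short interactive session might look like this (prompt and vserver name are illustrative):

$ seesaw
seesaw> show vservers
seesaw> show vserver dns.resolver@example
seesaw> config reload
seesaw> failover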

Troubleshooting

A Seesaw node runs five components under the watchdog; together with the watchdog itself, the process table should show processes for:

  • seesaw_ecu
  • seesaw_engine
  • seesaw_ha
  • seesaw_healthcheck
  • seesaw_ncc
  • seesaw_watchdog

All Seesaw v2 components have their own logs, in addition to the logging provided by the watchdog. If any of the processes are not running, check the corresponding logs in /var/log/seesaw (e.g. seesaw_engine.{log,INFO}).
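A quick way to check, assuming the default paths used earlier in this README:

pgrep -a seesaw_
ls /var/log/seesaw/
tail -n 50 /var/log/seesaw/seesaw_engine.INFO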

Comments
  • config_server is limited to use with google.com domain

    Hi,

    I think config_server is limited to use with the google.com domain.

    • engine/config/config.go
    configServerRE = regexp.MustCompile(`^[\w-\.]+\.google\.com\.?$`)
    
    • engine/config/fetcher.go
                    if !configServerRE.MatchString(server) {
                            log.Errorf("Invalid config server name: %q", server)
                            continue
                    }
    

    Is the config_server feature not supported yet?

    Best regards,

  • optimize the tcp healthcheck to reduce the thread usage

    Under the Go goroutine scheduler (the GMP model), if a goroutine makes a blocking system call, the current P will not release the current M (thread), and if another G wants to run, a new M (thread) must be created. The current TCP healthcheck uses syscall.Connect to connect to the target, so if many goroutines run TCP healthchecks at the same time, Go may create thousands of threads.

    Actually, the Go runtime already provides dial functionality that avoids this issue by using netpoll-based non-blocking I/O. In my test environment, running 100000 goroutines, Go created nearly 8000 threads, close to the default maximum of 10000 threads.

    With the fix in this PR, the thread count is only about 120.
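    As a hedged illustration of the approach (not the actual code in this PR; the function name is hypothetical), a dial-based TCP probe could look like this:

    package healthcheck

    import (
        "net"
        "time"
    )

    // tcpProbe is a sketch: net.DialTimeout uses non-blocking sockets via the
    // runtime netpoller, so a slow connect parks the goroutine instead of tying
    // up an OS thread the way a blocking syscall.Connect does.
    func tcpProbe(host, port string, timeout time.Duration) error {
        conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, port), timeout)
        if err != nil {
            return err
        }
        return conn.Close()
    }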

  • Unable to start SeeSaw

    I have SeeSaw compiled and installed on CentOS 7. The watchdog service starts and kicks off the other five components, but the engine fails to start (see log below).

    The log file doesn't really give me any clues as to what it's not happy about. Any pointers on where I should be looking?

    [root@seesaw-1 bin]# ./seesaw_engine
    F0306 19:55:43.201250 60397 core.go:250] Failed to connect to NCC: Failed to establish connection: dial unix /var/run/seesaw/ncc/ncc.sock: connect: no such file or directory
    goroutine 1 [running]:
    github.com/golang/glog.stacks(0xc420117500, 0xc4201d2000, 0xb0, 0x1c6)

    [root@seesaw-1 seesaw]# more seesaw_engine.INFO
    Log file created at: 2018/03/06 19:46:52
    Running on machine: seesaw-1
    Binary: Built with gc go1.9.2 for linux/amd64
    Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
    I0306 19:46:52.217240 60124 core.go:121] Seesaw Engine starting for seesaw
    F0306 19:46:53.724640 60124 core.go:250] Failed to connect to NCC: Failed to establish connection: dial unix /var/run/seesaw/ncc/ncc.sock: connect: no such file or directory
    goroutine 1 [running]:
    github.com/golang/glog.stacks(0xc42000e001, 0xc4202ce000, 0x3ba, 0x2710)

  • Introduce build inside the docker

    This should simplify adoption by the community, and may save a couple of hours for new users who would like to just try the tool without additional effort.

  • UDP Round Robin Only going to single server

    Dear All,

    I am trying to set up UDP round-robin load balancing for a SIEM application.

    I have seesaw installed and working, and I can see packets (using tcpdump on both VMs) going from the VIP of 172.16.4.165 to the second server, dl-clust-02.

    Expected behaviour: each UDP packet sent to the vserver VIP (172.16.4.165) from my logging endpoint would be sent to each server in turn, i.e. dl-clust-01, then dl-clust-02, then back to dl-clust-01, and so on.

    Actual behaviour: UDP packets are only sent to dl-clust-02.

    seesaw.cfg

    [cluster]
    anycast_enabled = false
    name = defencelogic-lb
    node_ipv4 = 172.16.4.163
    peer_ipv4 = 172.16.4.164
    vip_ipv4 = 172.16.4.160

    [config_server]
    primary = lb1.
    secondary = lb2.

    [interface]
    node = ens192
    lb = ens160

    cluster.pb

    seesaw_vip: < fqdn: "logger.." ipv4: "172.16.4.160/24" status: PRODUCTION >
    node: < fqdn: "lb1.." ipv4: "172.16.4.163/24" status: PRODUCTION >
    node: < fqdn: "lb2.." ipv4: "172.16.4.164/24" status: PRODUCTION >
    vserver: <
      name: "logsvr."
      entry_address: < fqdn: "logsvr.." ipv4: "172.16.4.165/24" status: PRODUCTION >
      rp: "ad1@"
      vserver_entry: <
        protocol: UDP port: 12201 scheduler: RR server_low_watermark: 0.3
        healthcheck: < type: ICMP_PING interval: 5 timeout: 3 retries: 1 >
      >
      backend: < host: < fqdn: "dl-clust-01.." ipv4: "172.16.4.61/24" status: PRODUCTION > weight: 1 >
      backend: < host: < fqdn: "dl-clust-02.." ipv4: "172.16.4.62/24" status: PRODUCTION > weight: 1 >
    >

    Any help is appreciated.

  • use single connection in ncc_client

    ncc_client was dialing and closing connections in a loop, which caused failures when large numbers of vservers were all calling ncc_client.

    This commit connects to the NCC server only once and reuses the same single connection for all IPC calls from the engine. Tests show that it's able to handle 1000 vservers.

    Fixed #48

  • Question: no neighbor statement issued to quagga?

    Hi,

    Trying (and failing) to get seesaw to advertise through quagga, I don't see any neighbor statement being sent to quagga.

    What I see by parsing the code is something like this being sent:

    router bgp 64512
    address-family ipv4 unicast
    network a.b.c.d/32
    

    Manually toying with quagga with our network guys, I can only get it to work by adding a neighbor like:

    router bgp 65500
    address-family ipv4 unicast
    network a.b.c.d/32
    neighbor e.f.g.h remote-as 65500
    

    I have the following in my cluster.pb:

    bgp_remote_asn: 65500
    bgp_local_asn: 65500
    bgp_peer: <
      fqdn: "name.of.my.router."
      ipv4: "e.f.g.h/28"
    >
    

    but as I see it, this peer is not used in any of the vty.Command calls? Am I missing something?

    Best regards, Lasse

  • Export stats struct/publisher interface for monitoring

    Currently, exporting monitoring metrics would require patching the ecu package to add the necessary functionality. Exporting the types allows an implementation to provide a statistics publisher via the ECU config. This PR also makes a few changes to how ecu/stats.go operates.

  • Using go based netlink instead of libnl

    Hi All,

    Is there a plan to use a Go-based netlink library instead of libnl in the future? Or has it been considered already? This is a Go-based netlink implementation, but I'm not sure if it supports VS (virtual server) messages yet.

    We are planning to use seesaw for a larger project like kubernetes to support IPVS and it would be great to know its roadmap.

    CC: @baptr

    Thanks in Advance, Dhilip

  • enables fallback and port for hashing schedulers

    For sh and mh, port is not included in the hash by default. This is counterintuitive.

    This commit enables the sh-port and mh-port options.

    It also enables sh-fallback and mh-fallback, which are helpful when lower/upper thresholds are set for a service.
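    For reference, the equivalent behaviour can be expressed with plain ipvsadm (outside seesaw, with a hypothetical VIP, assuming a reasonably recent ipvsadm and kernel):

    ipvsadm -A -t 192.0.2.10:80 -s sh -b sh-port,sh-fallback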

  • Add tunnel mode healthcheck

    This PR adds tunnel mode healthcheck support. It consists of the following commits:

    • Add tunneling mode healthcheck in config.proto
    • Allow tunnel mode healthcheck to use specified source IP
    • Use engine node address for tunnel mode healthcheck source address
    • Make healthcheckManager aware of TUN mode healthchecks
    • Add test coverage for DSR/TUN healthchecks in dedup() logic
  • Support for clients on realservers

    Maybe I'm missing something, but out-of-the-box seesaw doesn't seem to support having the backend servers initiate TCP connections to outside servers (say, Gmail mail servers) for LVS-DR.

    Here is a longer form explanation of an example setup with one outside client, one outside server, a seesaw load balancer with a VIP and three backend real servers:

    • CIP = Client IP address (outside)
    • SIP = Server IP address (outside)
    • VIP = Virtual IP address of the seesaw service / vserver
    • RIP1 = Real (backend) server IP address number 1
    • RIP2 = Real (backend) server IP address number 2
    • RIP3 = Real (backend) server IP address number 3

    Typically for LVS-DR, a client from outside will initiate TCP (say, port 25) connection with [source IP=CIP, source port=34567, destination IP=VIP, destination port=25] to the load balancer. Load balancer (if it has a matching vserver configured for port 25) then forwards via MAC address re-write to one of the backend servers, say server with RIP3. TCP Server living on that server sees a TCP SYN packet with [source IP=CIP, source port=34567, destination IP=VIP, destination port=25], the VIP being configured on one of its dummy interfaces. The TCP server on port 25 on the backend server with RIP3 will then respond with a SYN,ACK TCP packet with [source IP=VIP, source port=25, destination IP=CIP, destination port=34567] directly to the router and the packets end up on the client with IP address=CIP that originally initiated the TCP connection.

    I am just describing this for clarity. This is what normal LVS-DR looks like and what seesaw typically does. This is when TCP connections are initiated from an outside client towards seesaw/VIP.

    What about the reverse? Say I have a TCP client on one of the real (backend) servers. I want this client to initiate a TCP connection with [source IP=VIP, source port=34568, destination IP=SIP, destination port=25]. For the initial SYN packet, this works fine, as it is sent directly via the router, not through the seesaw director node. The outside server listening on SIP then responds with a SYN,ACK packet with [source IP=SIP, source port=25, destination IP=VIP, destination port=34568]. Then the problem occurs: the seesaw node sees the packet coming in and, since it's sent towards a port that isn't configured as a vserver (ephemeral port 34568), drops the packet and the connection isn't established.

    Now, if I add the following configuration to the seesaw node by hand, it does work:

    # iptables -I INPUT -p tcp -m tcp -d $VIP --sport 25 -j MARK --set-mark 0x1
    # ipvsadm -A -f 1 -s rr
    # ipvsadm -a -f 1 -r $RIP3 -g  # Just using RIP3 as an example here
    

    Is there any other way to do this? As it looks now, I would have to fork seesaw to add this functionality into the code so it works with seesaw node switchover etc.

  • Any chance I can get it working with a macvlan interface?

    Hi,

    I would like to know whether I have any chance of getting this working without needing two physical interfaces, using for example a macvlan interface as the lb interface.

    Thx for help.

  • Come up with a monitoring strategy

    We should come up with at least a high level strategy for monitoring seesaw nodes. Some possibilities:

    • OpenTelemetry
    • Prometheus
    • ???

    In any case, we should make it easy for alternative implementations to exist, should users need to plug in their own.

  • centos7.9 Failed to initialise LB interface: Failed to get network interface: route ip+net: no such network interface

    How do I solve this problem? After starting seesaw_watchdog.service on CentOS, it reports: Failed to get network interface: route ip+net: no such network interface

    Deployment steps:

    yum install epel-release -y
    yum -y erase git
    yum -y install https://repo.ius.io/ius-release-el7.rpm
    yum -y install git222 ipvsadm golang protobuf-compiler libnl3-devel
    echo ip_vs > /etc/modules-load.d/ipvs.conf
    echo ip_vs_wrr>/etc/modules-load.d/ipvs.conf
    echo nf_conntrack_ipv4 > /etc/modules-load.d/nf_conntrack.conf
    modprobe dummy numdummies=1
    echo "options dummy numdummies=1" > /etc/modprobe.d/dummy.conf
    systemctl restart systemd-modules-load.service
    ip link add ip+net type dummy

    cd /root && mkdir go && export GOPATH=/root/go
    go get -u golang.org/x/crypto/ssh
    go get -u github.com/dlintw/goconf
    go get -u github.com/golang/glog
    go get -u github.com/miekg/dns
    go get -u github.com/kylelemons/godebug/pretty
    go get -u github.com/golang/protobuf/proto
    export PATH=$PATH:${GOPATH}/bin
    go get -u github.com/google/seesaw

    cd /root/go/src/github.com/google/seesaw/
    make test
    make install
    cp -r /root/go/src/github.com/google/seesaw/etc /root/go/bin/

    cd /root/go/bin && vi /root/go/bin/install.sh

    SEESAW_BIN="/usr/local/seesaw"
    SEESAW_ETC="/etc/seesaw"
    SEESAW_LOG="/var/log/seesaw"

    INIT=`ps -p 1 -o comm=`

    install -d "${SEESAW_BIN}" "${SEESAW_ETC}" "${SEESAW_LOG}"

    install "${GOPATH}/bin/seesaw_cli" /usr/bin/seesaw

    for component in {ecu,engine,ha,healthcheck,ncc,watchdog}; do
      install "${GOPATH}/bin/seesaw_${component}" "${SEESAW_BIN}"
    done

    if [ $INIT = "init" ]; then
      install "etc/init/seesaw_watchdog.conf" "/etc/init"
    elif [ $INIT = "systemd" ]; then
      install "etc/systemd/system/seesaw_watchdog.service" "/etc/systemd/system"
      systemctl --system daemon-reload
    fi
    install "etc/seesaw/watchdog.cfg" "${SEESAW_ETC}"

    # Enable CAP_NET_RAW for seesaw binaries that require raw sockets.
    /sbin/setcap cap_net_raw+ep "${SEESAW_BIN}/seesaw_ha"
    /sbin/setcap cap_net_raw+ep "${SEESAW_BIN}/seesaw_healthcheck"

    chmod +x install.sh
    ./install.sh
    systemctl status seesaw_watchdog
    systemctl enable seesaw_watchdog
    cd /root/go/bin/etc/seesaw
    cp cluster.pb.example seesaw.cfg.example /etc/seesaw
    cd /etc/seesaw
    mv cluster.pb.example cluster.pb
    mv seesaw.cfg.example seesaw.cfg
    systemctl --system daemon-reload
    systemctl --now enable seesaw_watchdog.service

  • engine.sock no such file or directory

    I installed seesaw on Linux, but when I start seesaw_cli I get the error: Dial failed: dial unix /var/run/seesaw/engine/engine.sock: connect: no such file or directory.

    Also, when I start seesaw_engine I get the error: Dial failed: dial unix /var/run/seesaw/ncc/ncc.sock: connect: no such file or directory.

    What are these .sock files?
