gobetween

gobetween - modern & minimalistic load balancer and reverse-proxy for the ☁️ Cloud era.

Current status: Maintenance mode, accepting PRs. Currently in use in several highly loaded production environments.

Features

  • Fast L4 Load Balancing

  • Clear & Flexible Configuration with TOML or JSON (a minimal example config is shown after this feature list)

    • File - read configuration from the file
    • URL - query URL by HTTP and get configuration from the response body
    • Consul - query Consul key-value storage API for configuration
  • Management REST API

    • System Information - general server info
    • Configuration - dump current config
    • Servers - list, create & delete
    • Stats & Metrics - for servers and backends, including rx/tx, status, active connections, etc.
  • Discovery

    • Static - hardcode backends list in the config file
    • Docker - query backends from Docker / Swarm API filtered by label
    • Exec - execute an arbitrary program and get backends from its stdout
    • JSON - query an arbitrary HTTP URL and pick backends from the response JSON (of any structure)
    • Plaintext - query an arbitrary HTTP URL and parse backends from the response text with a custom regexp
    • SRV - query DNS server and get backends from SRV records
    • Consul - query Consul Services API for backends
    • LXD - query backends from LXD
  • Healthchecks

    • Ping - simple TCP ping healthcheck
    • Exec - execute an arbitrary program, passing host & port as options, and read the healthcheck status from its stdout
    • Probe - send specific bytes to backend (udp, tcp or tls) and expect a correct answer (bytes or regexp)
  • Balancing Strategies (with SNI support)

    • Weight - select a backend from the pool based on the relative weights of the backends
    • Roundrobin - select backends from the pool in simple circular order
    • Iphash - route a client to the same backend based on the client IP hash
    • Iphash1 - same as iphash, but consistent under backend removal (clients keep connecting to the same backend even if some other backends go down)
    • Leastconn - select the backend with the fewest active connections
    • Leastbandwidth - select the backend with the least bandwidth
  • Integrates seamlessly with Docker and with any custom system (thanks to Exec discovery and healthchecks)

  • Single binary distribution
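
As an illustration of how these pieces fit together, here is a minimal example config (a sketch only; the server name, hosts and weights are placeholders) combining static discovery, a ping healthcheck and weighted balancing for one TCP server:

[servers.sample]
bind = "0.0.0.0:3000"
protocol = "tcp"
balance = "weight"

[servers.sample.discovery]
kind = "static"
static_list = [
    "localhost:8000 weight=1",
    "localhost:8001 weight=2"
]

[servers.sample.healthcheck]
kind = "ping"
interval = "10s"
ping_timeout_duration = "2s"

With the management REST API enabled, the resulting servers and their stats can be inspected over HTTP; assuming the API is bound to :8888 as in the examples further below, something like:

  • $ curl http://localhost:8888/servers

  • $ curl http://localhost:8888/servers/sample/stats

(The exact endpoint paths are an assumption here; check the REST API docs for the authoritative list.)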

Architecture

(architecture diagram)

Usage

Hacking

Debug and Test

Run several web servers for tests in different terminals:

  • $ python -m SimpleHTTPServer 8000
  • $ python -m SimpleHTTPServer 8001

Instead of Python's internal HTTP module, you can also use a single binary (Go based) webserver like: https://github.com/udhos/gowebhello

gowebhello has support for SSL certificates as well (HTTPS mode), in case you want to do a quick demo of the TLS+SNI capabilities of gobetween.

Put localhost:8000 and localhost:8001 into the static_list of the static discovery section in the config file (a sample snippet follows the commands below), then try it:

  • $ gobetween -c gobetween.toml

  • $ curl http://localhost:3000
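
For reference, the relevant part of the config might look roughly like this (a sketch; it assumes a server block named servers.sample, as in the example earlier):

[servers.sample.discovery]
kind = "static"
static_list = [
    "localhost:8000",
    "localhost:8001"
]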

Enable the profiler to debug issues you encounter:

[profiler]
enabled = true     # false | true
bind    = ":6060"  # "host:port"
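
Assuming the profiler exposes the standard Go net/http/pprof handlers on the configured bind address (an assumption, not something documented here), profiles can then be collected with the usual tooling:

  • $ go tool pprof http://localhost:6060/debug/pprof/profile

  • $ curl "http://localhost:6060/debug/pprof/goroutine?debug=2"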

Performance

It's Fast! See Performance Testing

The Name

It's a play on words: gobetween ("go between").

Also, it's written in Go, and it's a proxy so it's something that stays between 2 parties 😄

License

MIT. See LICENSE file for more details.

Authors & Maintainers

All Contributors

Community

  • Join gobetween Telegram group here.

Logo

Logo by Max Demchenko

Owner
Yaroslav Pogrebnyak
Comments
  • LXD backend discovery support

    LXD backend discovery support

    Hello,

    I think gobetween would be a great fit for LXD containers, so I made a proof-of-concept discovery. The idea is to configure the LXD discovery like any other discovery mechanism, and then launch an LXD container like so:

    $ lxc launch ubuntu foo --config user.gobetween.label="foo" --config user.gobetween.private_port=80
    

    LXD reserves the user.* config namespace for user-specified metadata. The full config reference is here.

    Right now, discovery is only done on the local server, so only local containers are discovered. However, it could be extended to support remote LXD server(s).

    I'm happy to answer any questions about this feature. In addition, I wasn't able to find a doc that lists the requirements for contributing a patch, so please let me know if I need to do anything else.

    What I would really like to do is use gobetween to dynamically generate server entries based on containers. This is because LXD does not provide a built-in mechanism to automatically handle external-to-container traffic (similar to publishing a docker port). In effect, gobetween would supply that support. I understand this might be too much of a niche case for gobetween, though. Perhaps a better solution is to write a glue utility that bridges the LXD server and gobetween by using the gobetween REST API rather than have gobetween directly support this.
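
    For context, on the gobetween side the discovery section for this proof-of-concept could conceptually look like the sketch below. The key names are purely hypothetical placeholders for illustration and may not match what actually ships:

    [servers.sample.discovery]
    kind = "lxd"
    # hypothetical keys: which user.* config entries to read
    # the label and the private port from
    lxd_config_label_key = "user.gobetween.label"
    lxd_config_port_key  = "user.gobetween.private_port"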

  • gobetween becomes "stuck" (sometimes)

    gobetween becomes "stuck" (sometimes)

    I have GB configured as an SNI router for multiple backend services.

    EDIT: OS: CentOS 7.5 + updates. Docker: Docker version 18.06.1-ce, build e68fc7a (installed as per the instructions on docker.com).

    What happens is that after a bit of run time (say a few hours), GB stops responding, i.e. it accepts the connection but doesn't forward it to the backend. I test using openssl s_client -connect ...

    Restarting the GB docker brings things back to life.

    I have been using GB v 0.5.0 (and now 0.6.0)

    My LoadBalancer VM is typically a 2 CPU 1 GB machine.

    This problem hit again today, so I have bumped my load balancer VM to 4 CPU / 4 GB (I know that is way too high for the usage). I am now in wait-and-watch mode to see if this happens again.

    What debug info could I capture to help isolate the problem, the next time this occurs?

    FWIW, I found an earlier issue which could be related (not sure though) https://github.com/yyyar/gobetween/issues/74

    Relevant section of the config file:

    [api]
    enabled = true
    bind = ":444"
    
    [logging]
    level = "debug"
    output = "stdout"
    
    [defaults]
    protocol = "tcp"
    balance = "leastconn"
    max_connections = 0
    client_idle_timeout = "0"
    backend_idle_timeout = "0"
    backend_connection_timeout = "0"
    ##############################################################################
    
    [servers]
    
    [servers.in443]
    protocol = "tcp"
    bind = ":443"
    sni_enabled = true
    
    [servers.in443.sni]
    enabled = true
    read_timeout = "10s"
    hostname_matching_strategy = "regexp"
    unexpected_hostname_strategy = "reject"
    
    [servers.in443.discovery]
    kind = "consul"
    failpolicy = "setempty"
    consul_host = "myconsulserver:8500"
    # all services should register themselves with this service name
    consul_service_name = "in443service"
    interval = "60s"
    timeout = "10s"
    
    [servers.in443.healthcheck]
    kind = "ping"
    interval = "60s"
    ping_timeout_duration = "5s"
    

    Regards, Shantanu

  • Error: use of closed network connection

    Error: use of closed network connection

    Hi,

    I have used the gobetween proxy for quite a long time, but now I have run into a situation where I don't know what to do or how to fix the issue. The full error message:

    2018-07-19 05:12:57 [ERROR] (udp/server): Error sending data to backend write udp 10.24.87.230:35226->10.24.87.231:4729: use of closed network connection
    

    Running CentOS 7.5, SELinux is disabled. Go version: go1.10.3 linux/amd64.

    Config:

    #
    # gobetween.toml - sample config file
    #
    # Website: http://gobetween.io
    # Documentation: https://github.com/yyyar/gobetween/wiki/Configuration
    #
    # Logging configuration
    #
    [logging]
    level = "info"   # "debug" | "info" | "warn" | "error"
    #output = "stdout" # "stdout" | "stderr" | "/path/to/gobetween.log"
    output = "/var/log/gobetween/gobetween.log" # "stdout" | "stderr" | "/path/to/gobetween.log"
    
    # REST API server configuration
    #
    [api]
    enabled = true  # true | false
    bind = ":8888"  # "host:port"
    cors = false    # cross-origin resource sharing
    
    #
    # Default values for server configuration, may be overridden in [servers] sections.
    # All "duration" fields (for example, postfixed with '_timeout') have the following format:
    # <int><duration> where duration can be one of 'ms', 's', 'm', 'h'.
    # Examples: "5s", "1m", "500ms", etc. "0" value means no limit
    #
    [defaults]
    max_connections = 0              # Maximum simultaneous connections to the server
    client_idle_timeout = "0"        # Client inactivity duration before forced connection drop
    backend_idle_timeout = "0"       # Backend inactivity duration before forced connection drop
    backend_connection_timeout = "0" # Backend connection timeout (ignored in udp)
    
    [servers.vflow]
    bind = "0.0.0.0:4729"
    protocol = "udp"
    
      [servers.vflow.udp]
      max_responses = 0
      max_requests = 1
    
      [servers.vflow.discovery]
      kind = "static"
      static_list = [
          "10.24.87.231:4729 weight=1",
          "10.24.87.232:4729 weight=1"
      ]
    
      [servers.vflow.healthcheck]
      kind = "exec"
      interval = "30s"
      timeout = "30s" 
    
      exec_command = "/usr/share/exec_healthcheck.sh"
      exec_expected_positive_output = "1"
      exec_expected_negative_output = "0"
    

    I have tried commenting out the healthcheck part, but it didn't help.

  • Improve UDP performance

    Improve UDP performance

    I ran a UDP test with iperf and the results are not great. The testbed is a VM with a 4-core CPU and 4 GB of memory, and the upload bandwidth through gobetween is about 50 Mbps. On the same testbed, IPVS reaches about 150 Mbps and HAProxy (TCP) about 450 Mbps. FYI.

  • Do you have plan on supporting TCP+TLS to connect to backends?

    Do you have plan on supporting TCP+TLS to connect to backends?

    Hi all, I have one question if you don't mind: I see that the API supports TLS connections, but what about the TCP reverse proxy? Will you support TLS to backends in the future?

  • [Add] Support for Prometheus Metrics Endpoint

    [Add] Support for Prometheus Metrics Endpoint

    This Adds a Prometheus Metrics Endpoint at port 9284.

    Currently the service would need to be restarted to clear out old metrics. Still not sure how I want to handle the cleanup of that. But this does what we want for our use case.

    Always up for Feedback.
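
    For reference, a later issue in this thread shows the corresponding config section in JSON form ("metrics": { "enabled": true, "bind": "0.0.0.0:9284" }); in TOML that would presumably be:

    [metrics]
    enabled = true        # true | false
    bind = "0.0.0.0:9284" # "host:port"

    The metrics should then be scrapeable at the conventional Prometheus path, e.g. curl http://localhost:9284/metrics (the /metrics path is assumed from Prometheus convention, not stated in this PR).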

  • Track backends list in udp session

    Track backends list in udp session

    Hi! This patchset implements tracking of the backends list (using exec discovery) during UDP proxying, for the case where the client does not close its UDP socket.

    In my environment I need to proxy UDP, and I use exec discovery to find backends (the backend software also provides a TCP ping API, so I can discover backends that way).

    So when the backends list changes (i.e. exec discovery picks up the change), the backend list of an existing client session does not change, and gobetween keeps sending UDP datagrams to "dead" backends.

    The setup looks like this: gobetween.toml

    [servers]
    
    [servers.brubeck]
    bind = "127.0.0.1:8125"
    protocol = "udp"
    
      [servers.brubeck.udp]
      max_responses = 0
    
      [servers.brubeck.discovery]
      kind = "exec"
      interval = "2s"
      timeout = "2s"
      exec_command = ["/home/operator/bin/exec.sh"] #simple curl-based HTTP-check
    

    Client app, which send UDP packets:

    import socket
    from time import sleep
    
    UDP_IP = "127.0.0.1"
    UDP_PORT = 8125
    MESSAGE = "complex.delete_me.tttt:1|c"
    
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) #don't close socket while app works
    
    while True:
        print("send")
        sock.sendto(MESSAGE.encode(), (UDP_IP, UDP_PORT))
        sleep(1)
    

    Starting gobetween and app:

    bin/gobetween --config /home/operator/gobetween.toml &
    /home/operator/sendudp.py
    

    See traffic:

    11:35:02.513432 IP 10.0.2.9.58559 > 10.9.6.81.8126: UDP, length 26 # send to backend01
    ...
    

    When backend01 fails, the client's traffic continues to be sent to it:

    11:35:02.513432 IP 10.0.2.9.58559 > 10.9.6.81.8126: UDP, length 26 # backend01 fails, but gobetween proxy to him...
    

    This patchset fixes this behaviour: after a new discovery run, the client's UDP datagrams are sent to a newly discovered backend.

    (Maybe it would be better to add a corresponding configuration option? I can do this.)

    In all other cases backward compatibility is preserved.

    Thanks in advance!

  • the max_requests option does not work & the xxx_connections stats are wrong for UDP in the new 0.6 release

    the max_requests option does not work & the xxx_connections stats are wrong for UDP in the new 0.6 release

    hi all: for the latest release, the UDP max_requests option doesn't work as expected. This is the configuration I used:

    [servers.11020b75d191432ba6beb04d51d7b400]
    bind = "10.212.1.103:2345"
    protocol = "udp"
    balance = "roundrobin"
    #max_connections = 1
    client_idle_timeout = "60s"
    backend_idle_timeout = "60s"
    backend_connection_timeout = "60s"

    [servers.11020b75d191432ba6beb04d51d7b400.udp]
    max_requests = 1   # (optional) if > 0 accepts no more requests than max_requests and closes session (since 0.5.0)
    max_responses = 0  # (required) if > 0 accepts no more responses than max_responses from backend and closes session (will be optional since 0.5.0)

    [servers.11020b75d191432ba6beb04d51d7b400.discovery]
    kind = "static"
    failpolicy = "keeplast"
    static_list = [
    
      "192.168.48.235:2345",
    
    ]
    
    [servers.11020b75d191432ba6beb04d51d7b400.healthcheck]
    fails = 2
    passes = 2
    interval = "5s"
    timeout="5s"
    kind = "exec"
    exec_command = "/usr/share/healthcheck.sh"  # (required) command to execute
    exec_expected_positive_output = "success"           # (required) expected output of command in case of success
    exec_expected_negative_output = "fail"
    

    I ran two test cases.

    First case: max_requests = 1. Test result: no backend response reaches the client. The stats output:

    {
      "active_connections": 0,
      "rx_total": 0,
      "tx_total": 4,
      "rx_second": 0,
      "tx_second": 0,
      "backends": [
        {
          "host": "192.168.48.235",
          "port": "2345",
          "priority": 1,
          "weight": 1,
          "stats": {
            "live": true,
            "discovered": true,
            "total_connections": 0,
            "active_connections": 0,
            "refused_connections": 0,
            "rx": 0,
            "tx": 4,
            "rx_second": 0,
            "tx_second": 0
          }
        }
      ]
    }

    Second case: max_requests = 2. Test result: more than two clients work at the same time. The stats output:

    {
      "active_connections": 0,
      "rx_total": 8642,
      "tx_total": 891,
      "rx_second": 145,
      "tx_second": 15,
      "backends": [
        {
          "host": "192.168.48.235",
          "port": "2345",
          "priority": 1,
          "weight": 1,
          "stats": {
            "live": true,
            "discovered": true,
            "total_connections": 103,
            "active_connections": 11,
            "refused_connections": 0,
            "rx": 8352,
            "tx": 861,
            "rx_second": 145,
            "tx_second": 15
          }
        }
      ]
    }

    Actually all the clients are working all the time, but the stats total_connections counter increases quickly. I'm sure the clients (ip+port) are not idle, so the total_connections figure should not be growing like this.

    Could you confirm this issue, or tell me where I went wrong? Thanks.

  • SNI support for proxying

    SNI support for proxying

    Hi, Is there a plan in the near future to implement SNI based routing?

    Regards, Shantanu

    BTW: Awesome software which I discovered only by accident!!! The Consul discovery backend is truly great!!! 👍 👍

  • Error: dial tcp :0: getsockopt: connection refused

    Error: dial tcp :0: getsockopt: connection refused

    Hello, I'm trying gobetween on my dev machine. I'm using docker-machine & boot2docker (1.12.3).

    This is my simple docker compose:

    version: '2'
    
    networks:
      backend:
    
    services:
      app:
        image: php:7.0-alpine
        expose:
          - 8080
        volumes:
          - .:/data
        command: php -S 0.0.0.0:8080 /data/index.php
        networks:
          - backend
        labels:
          - "scale.app=true"
    
      gobetween:
        image: yyyar/gobetween
        depends_on:
          - app
        ports:
          - "80:80"
        volumes:
          - "./gobetween/conf:/etc/gobetween/conf/:rw"
          - "/var/run/docker.sock:/var/run/docker.sock"
        networks:
          - backend
    

    Gobetween conf looks like this:

    [logging]
    level = "debug"
    
    [servers.app]
    bind = "0.0.0.0:80"
    protocol = "tcp"
    balance = "roundrobin"
    
      [servers.app.discovery]
        interval = "10s"
        timeout = "2s"
        kind = "docker"
        docker_endpoint = "unix://var/run/docker.sock"  # Docker / Swarm API
        docker_container_label = "scale.app=true"  # label to filter containers
        docker_container_private_port = 8080   # gobetween will take public container port for this private port
        docker_container_host_env_var = "HOSTNAME"
    

    Gobetween is able to discover new backends when I scale up the app, but proxying is not working. See the log output:

    app_1        | PHP 7.0.13 Development Server started at Fri Nov 25 20:08:30 2016
    gobetween_1  | 2016-11-25 20:08:31 [INFO ] (manager): Initializing...
    gobetween_1  | 2016-11-25 20:08:31 [INFO ] (server): Creating 'app': 0.0.0.0:80 roundrobin docker none
    gobetween_1  | 2016-11-25 20:08:31 [INFO ] (scheduler): Starting scheduler
    gobetween_1  | 2016-11-25 20:08:31 [INFO ] (manager): Initialized
    gobetween_1  | 2016-11-25 20:08:31 [INFO ] (api): API disabled
    gobetween_1  | 2016-11-25 20:08:31 [INFO ] (dockerFetch): Fetching unix://var/run/docker.sock scale.app=true 8080
    gobetween_1  | 2016-11-25 20:08:40 [DEBUG] (server.handle): Accepted 10.211.55.2:52649 -> [::]:80
    gobetween_1  | 2016-11-25 20:08:40 [DEBUG] (server.handle): Accepted 10.211.55.2:52648 -> [::]:80
    gobetween_1  | 2016-11-25 20:08:40 [ERROR] (server.handle): dial tcp :0: getsockopt: connection refused
    gobetween_1  | 2016-11-25 20:08:40 [ERROR] (server.handle): dial tcp :0: getsockopt: connection refused
    

    Am I missing something?

  • Building for Windows

    Building for Windows

    I tried to build for Windows AMD64

    1. I tried building on Windows for Windows
    2. I tried to build on Linux for Windows

    On both platforms I get the same issue :-(

    ../pkg/mod/github.com/eric-lindau/[email protected]/udp.go:29:47: undefined: syscall.IPPROTO_RAW
    ../pkg/mod/github.com/eric-lindau/[email protected]/udp.go:38:37: cannot use int(fd) (type int) as type syscall.Handle in argument to syscall.SetsockoptInt
    ../pkg/mod/github.com/eric-lindau/[email protected]/udp.go:38:63: undefined: syscall.IP_HDRINCL

    I know it must be possible because I downloaded the windows binary and that works fine... :-)
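
    For reference, the cross-compile attempt from Linux was presumably the standard Go invocation, something like:

    $ GOOS=windows GOARCH=amd64 go build ./...

    The errors above come from the raw-UDP dependency relying on Unix-specific syscall constants and types that are not defined for Windows, rather than from the build command itself.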

  • Prometheus metrics listener crashes randomly at startup

    Prometheus metrics listener crashes randomly at startup

    What's the problem?

    Running gobetween under OpenWRT x64 starts the process normally most of the time, but on some occasions, the following stack trace can be seen when Prometheus metrics are enabled (via json config in this example):

    "metrics": {
      "enabled": true,
      "bind": "0.0.0.0:9284"
    },  
    

    Faulty startup sequence:

    Mon Dec 26 12:16:15 2022 daemon.err gobetween[42675]: 2022/12/26 12:16:15 gobetween v0.8.0
    Mon Dec 26 12:16:15 2022 daemon.info gobetween[42675]: 2022-12-26 12:16:15 [INFO ] (manager): Initializing...
    Mon Dec 26 12:16:15 2022 daemon.info gobetween[42675]: 2022-12-26 12:16:15 [INFO ] (server): Creating 'REDACTED': 0.0.0.0:22 weight static ping
    Mon Dec 26 12:16:15 2022 daemon.info gobetween[42675]: 2022-12-26 12:16:15 [INFO ] (scheduler): Starting scheduler REDACTED
    Mon Dec 26 12:16:15 2022 daemon.info gobetween[42675]: 2022-12-26 12:16:15 [INFO ] (server): Creating 'REDACTED': 0.0.0.0:6443 weight static ping
    Mon Dec 26 12:16:15 2022 daemon.info gobetween[42675]: 2022-12-26 12:16:15 [INFO ] (scheduler): Starting scheduler REDACTED
    Mon Dec 26 12:16:15 2022 daemon.info gobetween[42675]: 2022-12-26 12:16:15 [INFO ] (server): Creating 'REDACTED': 0.0.0.0:6448 weight static ping
    Mon Dec 26 12:16:15 2022 daemon.info gobetween[42675]: 2022-12-26 12:16:15 [INFO ] (scheduler): Starting scheduler REDACTED
    Mon Dec 26 12:16:15 2022 daemon.info gobetween[42675]: 2022-12-26 12:16:15 [INFO ] (server): Creating 'REDACTED': 0.0.0.0:443 weight static ping
    Mon Dec 26 12:16:15 2022 daemon.info gobetween[42675]: 2022-12-26 12:16:15 [INFO ] (healthcheck/worker): Sending to scheduler: {{REDACTED 6443} false}
    Mon Dec 26 12:16:15 2022 daemon.err gobetween[42675]: panic: runtime error: invalid memory address or nil pointer dereference
    Mon Dec 26 12:16:15 2022 daemon.err gobetween[42675]: [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0xcf95c6]
    Mon Dec 26 12:16:15 2022 daemon.err gobetween[42675]:
    Mon Dec 26 12:16:15 2022 daemon.err gobetween[42675]: goroutine 80 [running]:
    Mon Dec 26 12:16:15 2022 daemon.err gobetween[42675]: github.com/prometheus/client_golang/prometheus.(*GaugeVec).GetMetricWithLabelValues(...)
    Mon Dec 26 12:16:15 2022 daemon.err gobetween[42675]: 	/home/yyyar/go/pkg/mod/github.com/prometheus/[email protected]/prometheus/gauge.go:183
    Mon Dec 26 12:16:15 2022 daemon.err gobetween[42675]: github.com/prometheus/client_golang/prometheus.(*GaugeVec).WithLabelValues(0x0, 0xc0005ae2a0, 0x3, 0x3, 0xc0006a2040, 0xc00062e0d0)
    Mon Dec 26 12:16:15 2022 daemon.err gobetween[42675]: 	/home/yyyar/go/pkg/mod/github.com/prometheus/[email protected]/prometheus/gauge.go:215 +0x26
    Mon Dec 26 12:16:15 2022 daemon.err gobetween[42675]: github.com/yyyar/gobetween/metrics.ReportHandleBackendLiveChange(0xc0006a2040, 0x10, 0xc000036210, 0x1d, 0xc00003622e, 0x4, 0x0)
    Mon Dec 26 12:16:15 2022 daemon.err gobetween[42675]: 	/home/yyyar/workspace/gobetween/src/metrics/metrics.go:236 +0xc2
    Mon Dec 26 12:16:15 2022 daemon.err gobetween[42675]: github.com/yyyar/gobetween/server/scheduler.(*Scheduler).HandleBackendLiveChange(0xc0000f2bf0, 0xc000036210, 0x1d, 0xc00003622e, 0x4, 0x0)
    Mon Dec 26 12:16:15 2022 daemon.err gobetween[42675]: 	/home/yyyar/workspace/gobetween/src/server/scheduler/scheduler.go:224 +0x17e
    Mon Dec 26 12:16:15 2022 daemon.err gobetween[42675]: github.com/yyyar/gobetween/server/scheduler.(*Scheduler).Start.func1(0xc0000f2bf0, 0xc0002d21e0, 0xc00010c540)
    Mon Dec 26 12:16:15 2022 daemon.err gobetween[42675]: 	/home/yyyar/workspace/gobetween/src/server/scheduler/scheduler.go:129 +0x66c
    Mon Dec 26 12:16:15 2022 daemon.err gobetween[42675]: created by github.com/yyyar/gobetween/server/scheduler.(*Scheduler).Start
    Mon Dec 26 12:16:15 2022 daemon.err gobetween[42675]: 	/home/yyyar/workspace/gobetween/src/server/scheduler/scheduler.go:113 +0x21c
    

    Correct startup sequence:

    Mon Dec 26 12:16:52 2022 daemon.err gobetween[42784]: 2022/12/26 12:16:52 gobetween v0.8.0
    Mon Dec 26 12:16:52 2022 daemon.info gobetween[42784]: 2022-12-26 12:16:52 [INFO ] (manager): Initializing...
    Mon Dec 26 12:16:52 2022 daemon.info gobetween[42784]: 2022-12-26 12:16:52 [INFO ] (server): Creating 'REDACTED': 0.0.0.0:443 weight static ping
    Mon Dec 26 12:16:52 2022 daemon.info gobetween[42784]: 2022-12-26 12:16:52 [INFO ] (scheduler): Starting scheduler REDACTED
    Mon Dec 26 12:16:52 2022 daemon.info gobetween[42784]: 2022-12-26 12:16:52 [INFO ] (server): Creating 'REDACTED': 0.0.0.0:22 weight static ping
    Mon Dec 26 12:16:52 2022 daemon.info gobetween[42784]: 2022-12-26 12:16:52 [INFO ] (scheduler): Starting scheduler REDACTED
    Mon Dec 26 12:16:52 2022 daemon.info gobetween[42784]: 2022-12-26 12:16:52 [INFO ] (server): Creating 'REDACTED': 0.0.0.0:6443 weight static ping
    Mon Dec 26 12:16:52 2022 daemon.info gobetween[42784]: 2022-12-26 12:16:52 [INFO ] (scheduler): Starting scheduler REDACTED
    Mon Dec 26 12:16:52 2022 daemon.info gobetween[42784]: 2022-12-26 12:16:52 [INFO ] (server): Creating 'REDACTED': 0.0.0.0:6448 weight static ping
    Mon Dec 26 12:16:52 2022 daemon.info gobetween[42784]: 2022-12-26 12:16:52 [INFO ] (scheduler): Starting scheduler REDACTED
    Mon Dec 26 12:16:52 2022 daemon.info gobetween[42784]: 2022-12-26 12:16:52 [INFO ] (manager): Initialized
    Mon Dec 26 12:16:52 2022 daemon.info gobetween[42784]: 2022-12-26 12:16:52 [INFO ] (metrics): Starting up Metrics server 0.0.0.0:9284
    Mon Dec 26 12:16:52 2022 daemon.info gobetween[42784]: 2022-12-26 12:16:52 [INFO ] (api): Starting up API
    Mon Dec 26 12:16:52 2022 daemon.info gobetween[42784]: 2022-12-26 12:16:52 [INFO ] (api): Starting HTTP server 0.0.0.0:8888
    Mon Dec 26 12:16:52 2022 daemon.info gobetween[42784]: 2022-12-26 12:16:52 [INFO ] (healthcheck/worker): Sending to scheduler: {{REDACTED 6443} false}
    Mon Dec 26 12:16:52 2022 daemon.info gobetween[42784]: 2022-12-26 12:16:52 [INFO ] (healthcheck/worker): Sending to scheduler: {{REDACTED 6443} false}
    Mon Dec 26 12:16:52 2022 daemon.info gobetween[42784]: 2022-12-26 12:16:52 [INFO ] (healthcheck/worker): Sending to scheduler: {{REDACTED 6443} false}
    

    System info

    root@REDACTED:~# uname -a
    Linux us-east-00-router-01 5.10.146 #0 SMP Fri Oct 14 22:44:41 2022 x86_64 GNU/Linux
    root@REDACTED:~# cat /etc/openwrt_release 
    DISTRIB_ID='OpenWrt'
    DISTRIB_RELEASE='22.03.2'
    DISTRIB_REVISION='r19803-9a599fee93'
    DISTRIB_TARGET='x86/64'
    DISTRIB_ARCH='x86_64'
    DISTRIB_DESCRIPTION='OpenWrt 22.03.2 r19803-9a599fee93'
    

    Let me know if I missed anything.

    Thanks.

  • Gobetween main URL is not working

    Gobetween main URL is not working

    Hi, I have installed 3 Solr instances on my machine as "https://solr1:9000", "https://solr2:9002" and "https://solr3:9003"; when I hit any of these URLs they load fine.

    I have installed "gobetween" as a service in my windows laptop using NSSM. It's installed as a service successfully and service is running also. Below are the lines I changed into "gobetween.toml" file. `# ---------- tcp example ----------- # [servers.solrcloud] protocol = "tcp" bind = "localhost:3010"

    [servers.solrcloud.discovery] kind = "static" static_list = ["localhost:9000","localhost:9002","localhost:9003"]`

    Issues/Questions:

    1. When I hit the URL "http://localhost:3010" I get the error "The page isn't working". What am I doing wrong here?
    2. I am not able to find any logs created.
    3. Do I need to make any other changes in the "gobetween.toml" file?
    4. I have tried static_list = ["solr1:9000", "solr2:9002", "solr3:9003"] as well.

    Can you help me on this?

  • Ubuntu | Apparmor enforcing

    Ubuntu | Apparmor enforcing

    Hello,

    Has anyone had issues with AppArmor restricting gobetween?

    Fresh install: AppArmor is enforcing on all gobetween profiles and processes, causing all configured ports to be shut.

    The profile can then be manually removed to bypass this, but according to the Ubuntu forums this shouldn't be needed, and the software in question could have been restricted due to potential security issues or exploits.

    Anyone had similar issues?

    Kr,

  • Skip RRSIG records in response.

    Skip RRSIG records in response.

    Fixes #327.

    Without this change, a domain that has DNSSEC enabled resulted in the following output:

    {"level":"info","msg":"Fetching 1.1.1.1:53 _api._tcp.k8s.example.net.","name":"srvFetch","time":"2022-02-27T12:30:28Z"}
    {"level":"debug","msg":"Fetching 1.1.1.1:53 A/AAAA node1.example.net.","name":"srvFetch","time":"2022-02-27T12:30:28Z"}
    {"level":"warning","msg":"No IP found for node1.example.net., skipping...","name":"srvFetch","time":"2022-02-27T12:30:28Z"}
    {"level":"error","msg":"srv error Non-SRV record in SRV answer retrying in 2s","name":"discovery","time":"2022-02-27T12:30:28Z"}
    {"level":"info","msg":"Applying failpolicy keeplast","name":"discovery","time":"2022-02-27T12:30:28Z"}
    

    With this change the result is:

    {"level":"debug","msg":"Fetching 1.1.1.1:53 A/AAAA node1.example.net.","name":"srvFetch","time":"2022-02-27T07:27:35-05:00"}
    {"level":"debug","msg":"Initial check ping for {X.X.X.X 6443}","name":"healthcheck/worker","time":"2022-02-27T07:27:42-05:00"}
    {"level":"debug","msg":"Got check result ping: {{X.X.X.X 6443} 2}","name":"healthcheck/worker","time":"2022-02-27T07:27:42-05:00"}
    {"level":"info","msg":"Sending to scheduler: {{X.X.X.X 6443} 2}","name":"healthcheck/worker","time":"2022-02-27T07:27:42-05:00"}
    
    
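    Not the actual patch, but the idea can be sketched roughly as below, assuming the SRV fetcher walks the answer section of a github.com/miekg/dns response (the function and package names are illustrative only):

    package discovery

    import (
        "errors"

        "github.com/miekg/dns"
    )

    // srvRecordsFromAnswer filters the answer section of an SRV query,
    // keeping SRV records and skipping DNSSEC RRSIG signatures instead of
    // failing with "Non-SRV record in SRV answer".
    func srvRecordsFromAnswer(answers []dns.RR) ([]*dns.SRV, error) {
        srvs := make([]*dns.SRV, 0, len(answers))
        for _, rr := range answers {
            switch record := rr.(type) {
            case *dns.SRV:
                srvs = append(srvs, record)
            case *dns.RRSIG:
                // DNSSEC signature over the SRV RRset: ignore it.
            default:
                // any other record type is still unexpected, as before
                return nil, errors.New("Non-SRV record in SRV answer")
            }
        }
        return srvs, nil
    }
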
  • DNSSEC breaks SRV discovery

    DNSSEC breaks SRV discovery

    I am trying to use SRV discovery in gobetween (Docker :latest) and it is failing because the domain I am using has DNSSEC enabled. This results in a dns.RRSIG record being included in the answers from the DNS server.

    Output from gobetween:

    {"level":"info","msg":"Fetching 1.1.1.1:53 _api._tcp.k8s.example.net.","name":"srvFetch","time":"2022-02-27T12:30:28Z"}
    {"level":"debug","msg":"Fetching 1.1.1.1:53 A/AAAA node1.example.net.","name":"srvFetch","time":"2022-02-27T12:30:28Z"}
    {"level":"warning","msg":"No IP found for node1.example.net., skipping...","name":"srvFetch","time":"2022-02-27T12:30:28Z"}
    {"level":"error","msg":"srv error Non-SRV record in SRV answer retrying in 2s","name":"discovery","time":"2022-02-27T12:30:28Z"}
    {"level":"info","msg":"Applying failpolicy keeplast","name":"discovery","time":"2022-02-27T12:30:28Z"}
    