ClickHouse HTTP proxy and load balancer


Chproxy is an HTTP proxy and load balancer for the ClickHouse database. It provides the following features:

  • May proxy requests to multiple distinct ClickHouse clusters depending on the input user. For instance, requests from appserver user may go to stats-raw cluster, while requests from reportserver user may go to stats-aggregate cluster.
  • May map input users to per-cluster users. This prevents exposing the real usernames and passwords used in ClickHouse clusters. Additionally, this allows mapping multiple distinct input users to a single ClickHouse user.
  • May accept incoming requests via HTTP and HTTPS.
  • May limit HTTP and HTTPS access by IP/IP-mask lists.
  • May limit per-user access by IP/IP-mask lists.
  • May limit per-user query duration. Timed out or canceled queries are forcibly killed via KILL QUERY.
  • May limit per-user requests rate.
  • May limit per-user number of concurrent requests.
  • All the limits may be independently set for each input user and for each per-cluster user.
  • May delay request execution until it fits per-user limits.
  • Per-user response caching may be configured.
  • Response caches have built-in protection against the thundering herd problem, also known as the dogpile effect.
  • Evenly spreads requests among replicas and nodes using least loaded + round robin technique.
  • Monitors node health and prevents sending requests to unhealthy nodes.
  • Supports automatic HTTPS certificate issuing and renewal via Let’s Encrypt.
  • May proxy requests to each configured cluster via either HTTP or HTTPS.
  • Prepends User-Agent request header with remote/local address and in/out usernames before proxying it to ClickHouse, so this info may be queried from system.query_log.http_user_agent.
  • Exposes various useful metrics in prometheus text format.
  • Configuration may be updated without restart - just send a SIGHUP signal to the chproxy process (see the reload example after the config below).
  • Easy to manage and run - just pass config file path to a single chproxy binary.
  • Easy to configure:
server:
  http:
    listen_addr: ":9090"
    allowed_networks: ["127.0.0.0/24"]

users:
  - name: "default"
    to_cluster: "default"
    to_user: "default"

# by default each cluster has a `default` user, which can be overridden in the `users` section
clusters:
  - name: "default"
    nodes: ["127.0.0.1:8123"]
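
As mentioned in the feature list above, a running chproxy re-reads its config when it receives SIGHUP, so configuration updates don't require a restart. A minimal sketch, assuming the binary runs on the same host under the process name chproxy:

kill -HUP "$(pidof chproxy)"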

How to install

Precompiled binaries

Precompiled chproxy binaries are available here. Just download the latest stable binary, unpack and run it with the desired config:

./chproxy -config=/path/to/config.yml

Building from source

Chproxy is written in Go. The easiest way to install it from sources is:

go get -u github.com/Vertamedia/chproxy

If you don't have Go installed on your system, follow this guide.

Why it was created

ClickHouse may exceed max_execution_time and max_concurrent_queries limits due to various reasons:

  • max_execution_time may be exceeded due to the current implementation deficiencies.
  • max_concurrent_queries works only on a per-node basis. There is no way to limit the number of concurrent queries on a cluster if queries are spread across cluster nodes.

Such "leaky" limits may lead to high resource usage on all the cluster nodes. After facing this problem we had to maintain two distinct HTTP proxies in front of our ClickHouse cluster: one for spreading INSERTs among cluster nodes and another for sending SELECTs to a dedicated node where limits could be enforced. This was fragile and inconvenient to manage, so chproxy was created :)

Use cases

Spread INSERTs among cluster shards

Usually INSERTs are sent from app servers located in a limited number of subnetworks. INSERTs from other subnetworks must be denied.

All the INSERTs may be routed to a distributed table on a single node. But this increases resource usage (CPU and network) on that node compared to the other nodes, since it must parse each row to be inserted and route it to the corresponding node (shard).

It would be better to spread INSERTs among the available shards and to route them directly to per-shard tables instead of distributed tables. The routing logic may be embedded either directly into the applications generating INSERTs or moved to a proxy. The proxy approach is better, since it allows reconfiguring the ClickHouse cluster without modifying application configs and without application downtime. Multiple identical proxies may be started on distinct servers for scalability and availability purposes.

The following minimal chproxy config may be used for this use case:

server:
  http:
      listen_addr: ":9090"

      # Networks with application servers.
      allowed_networks: ["10.10.1.0/24"]

users:
  - name: "insert"
    to_cluster: "stats-raw"
    to_user: "default"

clusters:
  - name: "stats-raw"

    # Requests are spread in `round-robin` + `least-loaded` fashion among nodes.
    # Unreachable and unhealthy nodes are skipped.
    nodes: [
      "10.10.10.1:8123",
      "10.10.10.2:8123",
      "10.10.10.3:8123",
      "10.10.10.4:8123"
    ]

Spread SELECTs from reporting apps among cluster nodes

Reporting apps usually generate various customer reports from SELECT query results. The load generated by such SELECTs on ClickHouse cluster may vary depending on the number of online customers and on the generated report types. It is obvious that the load must be limited in order to prevent cluster overload.

All the SELECTs may be routed to a distributed table on a single node. But this increases resource usage (RAM, CPU and network) on that node compared to the other nodes, since it must do the final aggregation, sorting and filtering for the data obtained from the cluster nodes (shards).

It would be better to create identical distributed tables on each shard and spread SELECTs among all the available shards.

The following minimal chproxy config may be used for this use case:

server:
  http:
      listen_addr: ":9090"

      # Networks with reporting servers.
      allowed_networks: ["10.10.2.0/24"]

users:
  - name: "report"
    to_cluster: "stats-aggregate"
    to_user: "readonly"
    max_concurrent_queries: 6
    max_execution_time: 1m

clusters:
  - name: "stats-aggregate"
    nodes: [
      "10.10.20.1:8123",
      "10.10.20.2:8123"
    ]
    users:
      - name: "readonly"
        password: "****"

Authorize users by passwords via HTTPS

Suppose you need to access the ClickHouse cluster from anywhere by username/password. This may be used for building graphs from ClickHouse-grafana or tabix. It is a bad idea to transfer unencrypted passwords and data over untrusted networks, so HTTPS must be used for accessing the cluster in such cases. The following chproxy config may be used for this use case (a sample client request is shown after the config):

server:
  https:
    listen_addr: ":443"
    autocert:
      cache_dir: "certs_dir"

users:
  - name: "web"
    password: "****"
    to_cluster: "stats-raw"
    to_user: "web"
    max_concurrent_queries: 2
    max_execution_time: 30s
    requests_per_minute: 10
    deny_http: true

    # Allow `CORS` requests for `tabix`.
    allow_cors: true

    # Enable requests queueing - `chproxy` will queue up to `max_queue_size`
    # of incoming requests for up to `max_queue_time` until they stop exceeding
    # the current limits.
    # This allows gracefully handling request bursts when more than
    # `max_concurrent_queries` concurrent requests arrive.
    max_queue_size: 40
    max_queue_time: 25s

    # Enable response caching. See cache config below.
    cache: "shortterm"

clusters:
  - name: "stats-raw"
    nodes: [
     "10.10.10.1:8123",
     "10.10.10.2:8123",
     "10.10.10.3:8123",
     "10.10.10.4:8123"
    ]
    users:
      - name: "web"
        password: "****"

caches:
  - name: "shortterm"
    dir: "/path/to/cache/dir"
    max_size: 150Mb

    # Cached responses will expire in 130s.
    expire: 130s
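
With the config above in place, clients may talk to chproxy over HTTPS using the web user's credentials via BasicAuth. A minimal sketch with curl; the hostname chproxy.example.com and the password are placeholders for your real values:

echo 'SELECT 1' | curl -sS -u 'web:password' 'https://chproxy.example.com/' --data-binary @-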

All the above configs combined

All the above cases may be combined in a single chproxy config:

server:
  http:
      listen_addr: ":9090"
      allowed_networks: ["10.10.1.0/24","10.10.2.0/24"]
  https:
    listen_addr: ":443"
    autocert:
      cache_dir: "certs_dir"

users:
  - name: "insert"
    allowed_networks: ["10.10.1.0/24"]
    to_cluster: "stats-raw"
    to_user: "default"

  - name: "report"
    allowed_networks: ["10.10.2.0/24"]
    to_cluster: "stats-aggregate"
    to_user: "readonly"
    max_concurrent_queries: 6
    max_execution_time: 1m

  - name: "web"
    password: "****"
    to_cluster: "stats-raw"
    to_user: "web"
    max_concurrent_queries: 2
    max_execution_time: 30s
    requests_per_minute: 10
    deny_http: true
    allow_cors: true
    max_queue_size: 40
    max_queue_time: 25s
    cache: "shortterm"

clusters:
  - name: "stats-aggregate"
    nodes: [
      "10.10.20.1:8123",
      "10.10.20.2:8123"
    ]
    users:
    - name: "readonly"
      password: "****"

  - name: "stats-raw"
    nodes: [
     "10.10.10.1:8123",
     "10.10.10.2:8123",
     "10.10.10.3:8123",
     "10.10.10.4:8123"
    ]
    users:
      - name: "default"

      - name: "web"
        password: "****"

caches:
  - name: "shortterm"
    dir: "/path/to/cache/dir"
    max_size: 150Mb
    expire: 130s

Configuration

Server

Chproxy may accept requests over HTTP and HTTPS protocols. HTTPS must be configured with a custom certificate or with automated Let's Encrypt certificates.

Access to chproxy can be limited by a list of IPs or IP masks. This option can be applied to the HTTP, HTTPS, metrics, user and cluster-user sections.

Users

There are two types of users: in-users (in the global section) and out-users (in the cluster section). Each request is matched to an in-user and, if all checks pass, is then mapped to an out-user whose credentials override the original ones.

Suppose we have one ClickHouse user web with read-only permissions and a max_concurrent_queries: 4 limit. There are two distinct applications reading from ClickHouse. We may create two distinct in-users with to_user: "web" and max_concurrent_queries: 2 each, in order to avoid the situation where a single application exhausts the entire 4-query limit on the web user.

Requests to chproxy must be authorized with credentials from user_config. Credentials can be passed via BasicAuth or via user and password query string args.
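
For example, both of the following requests authenticate as the same input user (a sketch; the host, port and credentials are placeholders):

# Credentials via BasicAuth.
curl -sS -u 'web:secret' 'http://chproxy-host:9090/?query=SELECT%201'

# The same credentials via `user` and `password` query string args.
curl -sS 'http://chproxy-host:9090/?user=web&password=secret&query=SELECT%201'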

Limits for in-users and out-users are independent.

Clusters

Chproxy can be configured with multiple clusters. Each cluster must have a name and either a list of nodes or a list of replicas with nodes. See cluster-config for details. Requests to each cluster are balanced among replicas and nodes using a round-robin + least-loaded approach. A node's priority is automatically decreased for a short interval if recent requests to it were unsuccessful. This means that for every new request chproxy chooses the least loaded healthy node within the least loaded replica.

Additionally each node is periodically checked for availability. Unavailable nodes are automatically excluded from the cluster until they become available again. This allows performing node maintenance without removing unavailable nodes from the cluster config.

Chproxy automatically kills queries exceeding the max_execution_time limit. By default chproxy tries to kill such queries under the default user. The user may be overridden with kill_query_user.

If cluster's users section isn't specified, then default user is used with no limits.

Caching

Chproxy may be configured to cache responses. It is possible to create multiple cache configs with various settings. Response caching is enabled by assigning a cache name to a user. Multiple users may share the same cache. Currently only SELECT responses are cached. Caching is disabled for requests with no_cache=1 in the query string. An optional cache namespace may be passed in the query string as cache_namespace=aaaa. This allows caching distinct responses for the identical query under distinct cache namespaces. Additionally, an instant cache flush may be built on top of cache namespaces: just switch to a new namespace in order to flush the cache.
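
A sketch of how these query params may be used with curl; the host, port and credentials are placeholders, and the in-user is assumed to have a cache assigned:

# Bypass the cache for a single request.
curl -sS -u 'web:secret' 'http://chproxy-host:9090/?query=SELECT%201&no_cache=1'

# Cache the same query under two distinct namespaces; switching the application
# to a new namespace effectively flushes its cached responses.
curl -sS -u 'web:secret' 'http://chproxy-host:9090/?query=SELECT%201&cache_namespace=reports-v1'
curl -sS -u 'web:secret' 'http://chproxy-host:9090/?query=SELECT%201&cache_namespace=reports-v2'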

Security

Chproxy removes all the query params from input requests (except the user's params and those listed here) before proxying them to ClickHouse nodes. This prevents unsafe overriding of various ClickHouse settings.

Be careful when configuring limits, allowed networks, passwords etc. By default chproxy tries detecting the most obvious configuration errors such as allowed_networks: ["0.0.0.0/0"] or sending passwords via unencrypted HTTP.

Special option hack_me_please: true may be used for disabling all the security-related checks during config validation (if you are feeling lucky :) ).

Example of full configuration:

# Whether to print debug logs.
#
# By default debug logs are disabled.
log_debug: true

# Whether to ignore security checks during config parsing.
#
# By default security checks are enabled.
hack_me_please: true

# Optional response cache configs.
#
# Multiple distinct caches with different settings may be configured.
caches:
    # Cache name, which may be passed into `cache` option on the `user` level.
    #
    # Multiple users may share the same cache.
  - name: "longterm"

    # Path to directory where cached responses will be stored.
    dir: "/path/to/longterm/cachedir"

    # Maximum cache size.
    # `Kb`, `Mb`, `Gb` and `Tb` suffixes may be used.
    max_size: 100Gb

    # Expiration time for cached responses.
    expire: 1h

    # When multiple requests with identical query simultaneously hit `chproxy`
    # and there is no cached response for the query, then only a single
    # request will be proxied to clickhouse. Other requests will wait
    # for the cached response during this grace duration.
    # This is known as protection from `thundering herd` problem.
    #
    # By default `grace_time` is 5s. Negative value disables the protection
    # from `thundering herd` problem.
    grace_time: 20s

  - name: "shortterm"
    dir: "/path/to/shortterm/cachedir"
    max_size: 100Mb
    expire: 10s

# Optional network lists, might be used as values for `allowed_networks`.
network_groups:
  - name: "office"
    # Each item may contain either IP or IP subnet mask.
    networks: ["127.0.0.0/24", "10.10.0.1"]

  - name: "reporting-apps"
    networks: ["10.10.10.0/24"]

# Optional lists of query params to send with each proxied request to ClickHouse.
# These lists may be used for overriding ClickHouse settings on a per-user basis.
param_groups:
    # Group name, which may be passed into `params` option on the `user` level.
  - name: "cron-job"
    # List of key-value params to send
    params:
      - key: "max_memory_usage"
        value: "40000000000"

      - key: "max_bytes_before_external_group_by"
        value: "20000000000"

  - name: "web"
    params:
      - key: "max_memory_usage"
        value: "5000000000"

      - key: "max_columns_to_read"
        value: "30"

      - key: "max_execution_time"
        value: "30"

# Settings for `chproxy` input interfaces.
server:
  # Configs for input http interface.
  # The interface works only if this section is present.
  http:
    # TCP address to listen to for http.
    # May be in the form IP:port . IP part is optional.
    listen_addr: ":9090"

    # List of allowed networks or network_groups.
    # Each item may contain IP address, IP subnet mask or a name
    # from `network_groups`.
    # By default requests are accepted from all the IPs.
    allowed_networks: ["office", "reporting-apps", "1.2.3.4"]

    # ReadTimeout is the maximum duration for the proxy to read the entire
    # request, including the body.
    # Default value is 1m.
    read_timeout: 5m

    # WriteTimeout is the maximum duration before the proxy times out writes of the response.
    # Default is the largest MaxExecutionTime + MaxQueueTime value from Users or Clusters.
    write_timeout: 10m

    # IdleTimeout is the maximum amount of time for the proxy to wait for the next request.
    # Default is 10m.
    idle_timeout: 20m

  # Configs for input https interface.
  # The interface works only if this section is present.
  https:
    # TCP address to listen to for https.
    listen_addr: ":443"

    # Paths to TLS cert and key files.
    # cert_file: "cert_file"
    # key_file: "key_file"

    # Letsencrypt config.
    # Certificates are automatically issued and renewed if this section
    # is present.
    # There is no need for cert_file and key_file if this section is present.
    # Autocert requires the application to listen on port :80 for certificate generation.
    autocert:
      # Path to the directory where autocert certs are cached.
      cache_dir: "certs_dir"

      # The list of host names the proxy is allowed to respond to.
      # See https://godoc.org/golang.org/x/crypto/acme/autocert#HostPolicy
      allowed_hosts: ["example.com"]

  # Metrics in prometheus format are exposed on the `/metrics` path.
  # Access to `/metrics` endpoint may be restricted in this section.
  # By default access to `/metrics` is unrestricted.
  metrics:
    allowed_networks: ["office"]

# Configs for input users.
users:
    # Name and password are used to authorize access via BasicAuth or
    # via `user`/`password` query params.
    # Password is optional. By default empty password is used.
  - name: "web"
    password: "****"

    # Requests from the user are routed to this cluster.
    to_cluster: "first cluster"

    # Input user is substituted by the given output user from `to_cluster`
    # before proxying the request.
    to_user: "web"

    # Whether to deny input requests over HTTP.
    deny_http: true

    # Whether to allow `CORS` requests like `tabix` does.
    # By default `CORS` requests are denied for security reasons.
    allow_cors: true

    # Requests per minute limit for the given input user.
    #
    # By default there is no per-minute limit.
    requests_per_minute: 4

    # Response cache config name to use.
    #
    # By default responses aren't cached.
    cache: "longterm"

    # An optional group of params to send to ClickHouse with each proxied request.
    # These params may be set in param_groups block.
    #
    # By default no additional params are sent to ClickHouse.
    params: "web"

    # The maximum number of requests that may wait for their chance
    # to be executed because they cannot run now due to the current limits.
    #
    # This option may be useful for handling request bursts from `tabix`
    # or `clickhouse-grafana`.
    #
    # By default all the requests are immediately executed without
    # waiting in the queue.
    max_queue_size: 100

    # The maximum duration the queued requests may wait for their chance
    # to be executed.
    # This option makes sense only if max_queue_size is set.
    # By default requests wait for up to 10 seconds in the queue.
    max_queue_time: 35s

  - name: "default"
    to_cluster: "second cluster"
    to_user: "default"
    allowed_networks: ["office", "1.2.3.0/24"]

    # The maximum number of concurrently running queries for the user.
    #
    # By default there is no limit on the number of concurrently
    # running queries.
    max_concurrent_queries: 4

    # The maximum query duration for the user.
    # Timed out queries are forcibly killed via `KILL QUERY`.
    #
    # By default there is no limit on the query duration.
    max_execution_time: 1m

    # Whether to deny input requests over HTTPS.
    deny_https: true

# Configs for ClickHouse clusters.
clusters:
    # The cluster name is used in `to_cluster`.
  - name: "first cluster"

    # Protocol to use for communicating with cluster nodes.
    # Currently supported values are `http` or `https`.
    # By default `http` is used.
    scheme: "http"

    # Cluster node addresses.
    # Requests are evenly distributed among them.
    nodes: ["127.0.0.1:8123", "shard2:8123"]

    # DEPRECATED: Each cluster node is checked for availability using this interval.
    # By default each node is checked every 5 seconds.
    # Use `heartbeat.interval` instead.
    heartbeat_interval: 1m

    # Configuration for heartbeat requests.
    # Credentials of the first user in `clusters.users` are used for heartbeat requests to ClickHouse.
    heartbeat:
      # An interval for checking all cluster nodes for availability.
      # By default each node is checked every 5 seconds.
      interval: 1m

      # A timeout for waiting for a response from cluster nodes.
      # By default 3s.
      timeout: 10s

      # The URI to request in a health check.
      # By default "/?query=SELECT%201".
      request: "/?query=SELECT%201%2B1"

      # Reference response from ClickHouse for the health check request.
      # By default "1\n".
      response: "2\n"

    # Timed out queries are killed using this user.
    # By default `default` user is used.
    kill_query_user:
      name: "default"
      password: "***"

    # Configuration for cluster users.
    users:
        # The user name is used in `to_user`.
      - name: "web"
        password: "password"
        max_concurrent_queries: 4
        max_execution_time: 1m

  - name: "second cluster"
    scheme: "https"

    # The cluster may contain multiple replicas instead of flat nodes.
    #
    # Chproxy selects the least loaded node among the least loaded replicas.
    replicas:
      - name: "replica1"
        nodes: ["127.0.1.1:8443", "127.0.1.2:8443"]
      - name: "replica2"
        nodes: ["127.0.2.1:8443", "127.0.2.2:8443"]

    users:
      - name: "default"
        max_concurrent_queries: 4
        max_execution_time: 1m

      - name: "web"
        max_concurrent_queries: 4
        max_execution_time: 10s
        requests_per_minute: 10
        max_queue_size: 50
        max_queue_time: 70s
        allowed_networks: ["office"]

Full specification is located here

Metrics

Metrics are exposed in prometheus text format at /metrics path.
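
The endpoint may be inspected with curl, for example to spot-check a few of the metrics listed below (the host and port are placeholders; access may be restricted via the metrics section of the server config):

curl -sS 'http://chproxy-host:9090/metrics' | grep -E '^(request_sum_total|host_health)'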

Name | Type | Description | Labels
bad_requests_total | Counter | The number of unsupported requests |
cache_hits_total | Counter | The amount of cache hits | cache, user, cluster, cluster_user
cache_items | Gauge | The number of items in each cache | cache
cache_miss_total | Counter | The amount of cache misses | cache, user, cluster, cluster_user
cache_size | Gauge | Size of each cache | cache
cached_response_duration_seconds | Summary | Duration for cached responses. Includes the duration for sending the response to the client | cache, user, cluster, cluster_user
canceled_request_total | Counter | The number of requests canceled by the remote client | user, cluster, cluster_user, replica, cluster_node
cluster_user_queue_overflow_total | Counter | The number of overflows for per-cluster_user request queues | user, cluster, cluster_user
concurrent_limit_excess_total | Counter | The number of rejected requests due to the max_concurrent_queries limit | user, cluster, cluster_user, replica, cluster_node
concurrent_queries | Gauge | The number of concurrent queries at the moment | user, cluster, cluster_user, replica, cluster_node
config_last_reload_successful | Gauge | Whether the last configuration reload attempt was successful |
config_last_reload_success_timestamp_seconds | Gauge | Timestamp of the last successful configuration reload |
host_health | Gauge | Health state of hosts by clusters | cluster, replica, cluster_node
host_penalties_total | Counter | The number of penalties given per host | cluster, replica, cluster_node
killed_request_total | Counter | The number of requests killed by the proxy | user, cluster, cluster_user, replica, cluster_node
proxied_response_duration_seconds | Summary | Duration for responses proxied from ClickHouse | user, cluster, cluster_user, replica, cluster_node
request_body_bytes_total | Counter | The amount of bytes read from request bodies | user, cluster, cluster_user, replica, cluster_node
request_duration_seconds | Summary | Request duration. Includes possible queue wait time | user, cluster, cluster_user, replica, cluster_node
request_queue_size | Gauge | Request queue size at the moment | user, cluster, cluster_user
request_success_total | Counter | The number of successfully proxied requests | user, cluster, cluster_user, replica, cluster_node
request_sum_total | Counter | The number of processed requests | user, cluster, cluster_user, replica, cluster_node
response_body_bytes_total | Counter | The amount of bytes written to response bodies | user, cluster, cluster_user, replica, cluster_node
status_codes_total | Counter | Distribution by response status codes | user, cluster, cluster_user, replica, cluster_node, code
timeout_request_total | Counter | The number of timed out requests | user, cluster, cluster_user, replica, cluster_node
user_queue_overflow_total | Counter | The number of overflows for per-user request queues | user, cluster, cluster_user

An example of Grafana's dashboard for chproxy metrics is available here


FAQ

  • Is chproxy production ready?

    Yes, we successfully use it in production for both INSERT and SELECT requests.

  • What about chproxy performance?

    A single chproxy instance easily proxies 1Gbps of compressed INSERT data while using less than 20% of a single CPU core in our production setup.

  • Does chproxy support native interface for ClickHouse?

    No, because currently all our services work with ClickHouse only via HTTP. Support for the native interface may be added in the future.

Comments
  • Wildcarded users

    Wildcarded users

    Description

    Implements Wildcarded users concept https://hackmd.io/LXk8tLI8Q3etKax-HWtGxQ as agreed in https://github.com/ContentSquare/chproxy/pull/170

    The idea is to have chproxy users with a suffix that is out of chproxy's control, like analyst_*. For such users the password from the request to chproxy is retransmitted to ClickHouse as is. So analyst_jane and analyst_bob can send their requests to ClickHouse.

    Rationale: avoid the necessity of listing all users in the chproxy config, and allow using LDAP or Kerberos facilities on the ClickHouse side.

    Pull request type

    Please check the type of change your PR introduces:

    • [x] Feature

    Checklist

    • [x] Linter passes correctly
    • [ ] Add tests which fail without the change (if possible)
    • [x] All tests passing
    • [x] Extended the README / documentation, if necessary

    golangci-lint 1.49.0 reports several errors that are not related to the change

    Does this introduce a breaking change?

    [x] No

    Further comments

    I need some advice on how to add a test that actually covers the new functionality

  • [QUESTION] Strange benchmarking

    [QUESTION] Strange benchmarking

    Hello, I am trying to benchmark chproxy. Instead of ClickHouse I use nginx, tweaked to support health checks. There are three nodes: one runs ab (the client), another chproxy, and the third one nginx.

    The result is rather decent

    [ilejn@golshtein-centos-2 ~]$ ab -t 10 -c 20 -k 'http://10.92.16.214:9090/?query=SELECT%201'
    This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
    Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
    Licensed to The Apache Software Foundation, http://www.apache.org/
    
    Benchmarking 10.92.16.214 (be patient)
    Completed 5000 requests
    Completed 10000 requests
    Completed 15000 requests
    Completed 20000 requests
    Completed 25000 requests
    Finished 27211 requests
    
    
    Server Software:        nginx/1.18.0
    Server Hostname:        10.92.16.214
    Server Port:            9090
    
    Document Path:          /?query=SELECT%201
    Document Length:        612 bytes
    
    Concurrency Level:      20
    Time taken for tests:   10.001 seconds
    Complete requests:      27211
    Failed requests:        0
    Write errors:           0
    Keep-Alive requests:    27211
    Total transferred:      22830029 bytes
    HTML transferred:       16653132 bytes
    Requests per second:    2720.91 [#/sec] (mean)
    Time per request:       7.350 [ms] (mean)
    Time per request:       0.368 [ms] (mean, across all concurrent requests)
    Transfer rate:          2229.34 [Kbytes/sec] received
    
    Connection Times (ms)
                  min  mean[+/-sd] median   max
    Connect:        0    0   0.0      0       1
    Processing:     1    7   3.3      7      41
    Waiting:        1    7   3.3      7      41
    Total:          1    7   3.3      7      41
    
    Percentage of the requests served within a certain time (ms)
      50%      7
      66%      8
      75%      9
      80%      9
      90%     10
      95%     13
      98%     15
      99%     19
     100%     41 (longest request)
    

    But after a while I see significant performance degradation (same test, 10 times longer):

    [ilejn@golshtein-centos-2 ~]$ ab -t 100 -c 20 -k 'http://10.92.16.214:9090/?query=SELECT%201'
    This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
    Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
    Licensed to The Apache Software Foundation, http://www.apache.org/
    
    Benchmarking 10.92.16.214 (be patient)
    Completed 5000 requests
    Completed 10000 requests
    Completed 15000 requests
    Completed 20000 requests
    Completed 25000 requests
    Completed 30000 requests
    Completed 35000 requests
    Completed 40000 requests
    Completed 45000 requests
    Completed 50000 requests
    Finished 50000 requests
    
    
    Server Software:        nginx/1.18.0
    Server Hostname:        10.92.16.214
    Server Port:            9090
    
    Document Path:          /?query=SELECT%201
    Document Length:        612 bytes
    
    Concurrency Level:      20
    Time taken for tests:   61.051 seconds
    Complete requests:      50000
    Failed requests:        6157
       (Connect: 0, Receive: 0, Length: 6157, Exceptions: 0)
    Write errors:           0
    Non-2xx responses:      6157
    Keep-Alive requests:    50000
    Total transferred:      39142022 bytes
    HTML transferred:       28259954 bytes
    Requests per second:    818.99 [#/sec] (mean)
    Time per request:       24.420 [ms] (mean)
    Time per request:       1.221 [ms] (mean, across all concurrent requests)
    Transfer rate:          626.11 [Kbytes/sec] received
    
    Connection Times (ms)
                  min  mean[+/-sd] median   max
    Connect:        0    0   0.0      0       1
    Processing:     1   24  44.1      9     484
    Waiting:        1   24  44.1      9     484
    Total:          1   24  44.1      9     484
    
    Percentage of the requests served within a certain time (ms)
      50%      9
      66%     11
      75%     13
      80%     15
      90%     80
      95%    124
      98%    180
      99%    217
     100%    484 (longest request)
    

    Note that RPS is 3 times lower in the second test. What may be even more important, there are a number of request failures (Non-2xx responses: 6157) in the longer test. These are 'Bad Gateway' responses.

    chproxy utilizes both available CPU cores, while nginx uses something like 10% and can hardly be the bottleneck.

    Direct communication with nginx does not show any issues

    [ilejn@golshtein-centos-2 ~]$ ab -t 100 -c 20  'http://10.92.6.192:8124/?query=SELECT%201'
    This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
    Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
    Licensed to The Apache Software Foundation, http://www.apache.org/
    
    Benchmarking 10.92.6.192 (be patient)
    Completed 5000 requests
    Completed 10000 requests
    Completed 15000 requests
    Completed 20000 requests
    Completed 25000 requests
    Completed 30000 requests
    Completed 35000 requests
    Completed 40000 requests
    Completed 45000 requests
    Completed 50000 requests
    Finished 50000 requests
    
    
    Server Software:        nginx/1.18.0
    Server Hostname:        10.92.6.192
    Server Port:            8124
    
    Document Path:          /?query=SELECT%201
    Document Length:        612 bytes
    
    Concurrency Level:      20
    Time taken for tests:   10.540 seconds
    Complete requests:      50000
    Failed requests:        0
    Write errors:           0
    Total transferred:      42700000 bytes
    HTML transferred:       30600000 bytes
    Requests per second:    4744.05 [#/sec] (mean)
    Time per request:       4.216 [ms] (mean)
    Time per request:       0.211 [ms] (mean, across all concurrent requests)
    Transfer rate:          3956.46 [Kbytes/sec] received
    
    Connection Times (ms)
                  min  mean[+/-sd] median   max
    Connect:        0    2   3.3      2      52
    Processing:     0    2   3.4      2      52
    Waiting:        0    2   3.4      1      52
    Total:          1    4   6.1      3      83
    
    Percentage of the requests served within a certain time (ms)
      50%      3
      66%      3
      75%      4
      80%      4
      90%      5
      95%      6
      98%     26
      99%     45
     100%     83 (longest request)
    

    I am digging into this at the moment. Hints and suggestions are highly appreciated.

  • Can't connect to chproxy with clickhouse-cli and DBeaver with redis cache enabled (v1.17.0)

    Can't connect to chproxy with clickhouse-cli and DBeaver with redis cache enabled (v1.17.0)

    Hi guys, it's me again :)

    I've discovered the following problem with v1.17.0: after some time of normal operation, I can't connect to the chproxy instance or run any queries using https://github.com/hatarist/clickhouse-cli or DBeaver, and I get the following errors:

    # clickhouse-cli                                                                                                                                                                    
    clickhouse-cli version: 0.3.6
    Connecting to <host>:443
    Error: Failed to connect. (Remote end closed connection without response)
    

    That's how it looks in the chproxy logs:

    DEBUG: 2022/09/05 08:26:22 proxy.go:78: [ Id: 1711E74C4DFA0E1D; User "user"(1) proxying as "cluster_user"(2) to "host:8443"(2); RemoteAddr: "addr:41480"; LocalAddr
    : "10.198.12.110:443"; Duration: 17 μs]: request start
    

    I have a distributed Redis cache enabled for that user. Restarting the chproxy instance does not help, but restarting the Redis instance does. It seems like some data in the Redis cache is triggering this issue.

    Here is the part of the chproxy config related to the cache and user:

    - name: my-supa-redis-cache
      mode: redis
      redis:
        addresses:
        - redis-host:6379
        username: default
        password: <>
      expire: 300s
    ...
    - name: user
      password: <>
      to_cluster: cluster
      to_user: cluster_user
      max_concurrent_queries: 4
      max_execution_time: 86400s
      requests_per_minute: 10
      deny_http: true
      deny_https: false
      allow_cors: true
      allowed_networks:
      - all
      cache: my-supa-redis-cache
      params: user_params
    

    I set max_execution_time: 86400s for the reason explained in https://github.com/ContentSquare/chproxy/issues/201#issuecomment-1210525823, because the default 30s limit is way too low.

    UPD: the query result with curl looks like the following:

    $ echo "select 1" | curl --netrc-file netrc  -v -k  'https://host/'  --show-error  -d @-;
    ...
    * Server auth using Basic with user 'user'       
    > POST / HTTP/1.1                               
    > Host: <host>                             
    > Authorization: Basic ...
    > User-Agent: curl/7.64.0                                  
    > Accept: */*                                   
    > Content-Length: 8                                     
    > Content-Type: application/x-www-form-urlencoded
    >                                               
    * upload completely sent off: 8 out of 8 bytes  
    * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):      
    * TLSv1.3 (IN), TLS alert, close notify (256):    
    * Empty reply from server                                        
    * Connection #0 to host <host> left intact  
    curl: (52) Empty reply from server    
    

    Do you have any ideas why this is happening? Thanks!

  • Amount of keys in redis cache is constantly growing

    Amount of keys in redis cache is constantly growing

    chproxy v1.16.3

    There is a Redis cache configured as follows:

    - name: cache-name
      mode: redis
      redis:
        addresses:
        - hostname:6379
      expire: 300s
    

    The following keys are appearing:

    799) "26243a2fa2435c33269f75da4e359f3e-transaction"
    800) "62fcbcea7329cd2615aa4b5a792b0420-transaction"
    801) "063cdf4e211ee0441f15323503515d57-transaction"
    802) "802fa878e67d0516bcd41ddf3521e826-transaction"
    803) "be3ca83940942e976e55f9c04f8ea44c-transaction"
    804) "c8c7144ba9a34b53a6c6069facc3b3d5-transaction"
    805) "1b604171cf60b2320b477e440e39cfcc-transaction"
    806) "4406a4e641f31e7b91b3e24e59d9c6ef-transaction"
    807) "ab841321dadb6921814bad6113b9b0bb-transaction"
    808) "9205c903c51f8fcb1a9e9e9fa0a08959-transaction"
    ...
    
    redis> TTL 9205c903c51f8fcb1a9e9e9fa0a08959-transaction
    (integer) -1
    
    redis> get 9205c903c51f8fcb1a9e9e9fa0a08959-transaction
    "2"
    redis> DEBUG OBJECT 9205c903c51f8fcb1a9e9e9fa0a08959-transaction
    Value at:0x7fb41d22cd20 refcount:1 encoding:int serializedlength:2 lru:15865814 lru_seconds_idle:6
    

    Somehow those keys have no TTL set, and their number is constantly growing, even though the actual cached results have been removed by the TTL: image

    Despite the small size of those keys, with a large number of transactions they may consume all the memory of the Redis instance and then be pushed out by the LRU when memory is required. In this case, consumed memory will stay roughly constant near the configured maximum for the instance.

    Is it expected behavior?

  • add namespace(chproxy) option for metrics name. close #105

    add namespace(chproxy) option for metrics name. close #105

    Description

    • [ ] add prefix "chproxy_" for all Prometheus metrics
    • [ ] format some files

    Pull request type

    • [ ] Feature
    • [ ] Code style update (formatting, renaming)

    Does this introduce a breaking change?

    • [ ] Yes
  • Add proxy.skip_tls_verify config option

    Add proxy.skip_tls_verify config option

    Description

    If server.proxy.skip_tls_verify is set to true, create a custom transport for the reverse proxy that disables TLS verification; this would be used e.g. for clusters that have self-signed certificates for testing.

    Pull request type

    Please check the type of change your PR introduces:

    • [ ] Bugfix
    • [X] Feature
    • [ ] Code style update (formatting, renaming)
    • [ ] Refactoring (no functional changes, no api changes)
    • [ ] Build related changes
    • [ ] Documentation content changes
    • [ ] Other (please describe):

    Checklist

    • [X] Linter passes correctly
    • [X] Add tests which fail without the change (if possible)
    • [X] All tests passing
    • [X] Extended the README / documentation, if necessary

    Does this introduce a breaking change?

    • [ ] Yes
    • [X] No
  • [BUG] Error message "concurrent query failed"

    [BUG] Error message "concurrent query failed"

    Describe the bug I use chproxy with a Redis cache (KeyDB in fact, but it's the same); chproxy is used between ClickHouse and a dataviz tool (Grafana). But I randomly get the error "concurrent query failed" (and this error gets cached :/).

    This message is not really explicit; when I look at the code, the error is fired by the cache code. What can be the origin of this error and how can I solve it?

    Expected behavior No errors, and requests don't fail.

    Environment information chproxy: 1.17.0 Clickhouse: 22.8.3.13 Keydb: v6.3.1

    Kubernetes cluster

  • add binary/docker images for darwin amd64 & arm64

    add binary/docker images for darwin amd64 & arm64

    I've just pulled down docker images versions 1.16.4-arm64 and 1.17.0-arm64 for testing purposes on my M1 Macbook Pro, and am getting this error when running either of them:

    docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "/chproxy": stat /chproxy: no such file or directory: unknown.

    I've tried a couple different configurations (pulled directly from the Chproxy website) with the docker installation command (also pulled from the website):

    $ docker run -d -v /Users/myuser/working/chproxy/config.yml:/config.yml contentsquareplatform/chproxy:1.16.4-arm64 -config /config.yml

    The error reads perhaps as though the entrypoint is not being found. Any thoughts on where to go from here?

  • [BUG] Chproxy adds 30s max_execution_time to users by default

    [BUG] Chproxy adds 30s max_execution_time to users by default

    Describe the bug Chproxy adds 30s max_execution_time to users by default in contrast to the docs. ("By default there is no limit on the query duration." - https://github.com/ContentSquare/chproxy/blob/f4a34c3d09120578979abba15c23f683c2345951/docs/content/en/configuration/default.md?plain=1#L228)

    To Reproduce In the config don't set any max_execution_time, observe the logs of chproxy, and see that a 30s max_execution_time is added to the user.

    Environment information Docker container Chproxy version 1.6.0

  • Support compression transfer data

    Support compression transfer data

    I use clickhouse-jdbc to write data to ClickHouse (1.1.54318) and set the properties compress and decompress to true. The JDBC driver uses LZ4 to compress and decompress the data, and it can exchange data with ClickHouse (1.1.54318) directly. When I connect through chproxy, it doesn't work properly.

    error log

    Caused by: java.io.IOException: Magic is not correct: 103
    	at ru.yandex.clickhouse.response.ClickHouseLZ4Stream.readNextBlock(ClickHouseLZ4Stream.java:93) ~[clickhouse-jdbc-0.1.34.jar:na]
    	at ru.yandex.clickhouse.response.ClickHouseLZ4Stream.checkNext(ClickHouseLZ4Stream.java:74) ~[clickhouse-jdbc-0.1.34.jar:na]
    	at ru.yandex.clickhouse.response.ClickHouseLZ4Stream.read(ClickHouseLZ4Stream.java:50) ~[clickhouse-jdbc-0.1.34.jar:na]
    	at ru.yandex.clickhouse.response.StreamSplitter.readFromStream(StreamSplitter.java:85) ~[clickhouse-jdbc-0.1.34.jar:na]
    	at ru.yandex.clickhouse.response.StreamSplitter.next(StreamSplitter.java:47) ~[clickhouse-jdbc-0.1.34.jar:na]
    	at ru.yandex.clickhouse.response.ClickHouseResultSet.<init>(ClickHouseResultSet.java:65) ~[clickhouse-jdbc-0.1.34.jar:na]
    	at ru.yandex.clickhouse.ClickHouseStatementImpl.executeQuery(ClickHouseStatementImpl.java:117) ~[clickhouse-jdbc-0.1.34.jar:na]
    

    config.yml

    hack_me_please: false
    
    server:
      http:
          listen_addr: "0.0.0.0:9090"
          allowed_networks: ["192.168.1.0/24"]
      metrics:
          allowed_networks: ["192.168.1.0/24"]
    
    users:
      - name: "default"
        to_cluster: "default"
        to_user: "default"
        allow_cors: true
    
  • Memory consumption spikes after upgrading to v1.16.x

    Memory consumption spikes after upgrading to v1.16.x

    We're running chproxy in k8s. Before, we were using v1.13.2 and memory consumption was consistently low; we used a 512Mi limit and it was enough.

    After upgrading to v1.16.x our chproxy pods get OOMed constantly, and with an 8Gi limit the picture looks like the following: image

    There is very clear seasonality: at the beginning of the 10th minute we have a memory consumption spike. But not always: image

    Here are some Go memstats metrics exported by the chproxy instance: image

    For this instance, we do not have any file_system type caches, but have 3 caches in Redis. We have 4 different clusters, 10 cluster users and about 150 users configured.

    I tested this configuration on another chproxy instance that stays constantly idle (no user requests served), and memory consumption there remains constant (no spikes).

    For now, even with an 8Gi limit, we have chproxy pods OOMed sometimes :( Our current version is 1.16.3.

    Any ideas how to find a cause of this issue? Thanks!

  • [attempt to complete transaction] entry not found for key

    [attempt to complete transaction] entry not found for key

    I see the following error in the log: ERROR: 2022/12/15 06:10:21 transaction_registry_inmem.go:80: [attempt to complete transaction] entry not found for key: 522a4cae58031153f199bb4b0e2d5854, registering new entry with 1 status

    Looking at the code: image

    version: 1.19.0

    key configuration:

     parms:
        - key: "max_execution_time"
           value: "1800"
    
      - name: "shortterm"
        mode: "file_system"
        file_system:
          dir: "/data3/chproxy/shortterm/cachedir"
          max_size: 30Gb
        expire: 1800s
        grace_time: 120s
    

    I'm not sure where the problem is.

  • [BUG] failed to reach redis: got 4 elements in cluster

    [BUG] failed to reach redis: got 4 elements in cluster

    Using a Redis cluster configured with the addresses [ "172.18.239.134:6379", "172.18.239.133:6379", "172.18.239.132:6379" ], I get: main.go:55: error while applying config: failed to reach redis: got 4 elements in cluster info address, expected 2 or 3

    image

  • [Feature] Add configuration parameter to disable KILL QUERY feature

    [Feature] Add configuration parameter to disable KILL QUERY feature

    Is your feature request related to a problem? Please describe. There is an issue with the KILL QUERY functionality that may unexpectedly hit the ClickHouse max_concurrent_queries_for_all_users limit. Let's say in ClickHouse we have set max_concurrent_queries_for_all_users = 100. In CHProxy we have set max_concurrent_queries = 80. 20 concurrent queries are left for urgent/maintenance purposes. Imagine now a situation where we have 80 heavy concurrent queries and 30 of them are discarded by end users. CHProxy in this case will generate another 30 queries (KILL QUERY ...) that will hit the ClickHouse max_concurrent_queries_for_all_users limit, and we will not be able to issue any other query until enough queries have been killed. BTW, ClickHouse now has a dedicated setting for similar functionality - https://clickhouse.com/docs/en/operations/settings/settings/#cancel-http-readonly-queries-on-client-close

    Describe the solution you'd like A new setting to fully disable the CHProxy kill query functionality.

  • [BUG] failing test due to a "too many open files" error

    [BUG] failing test due to a "too many open files" error

    The current master branch (commit f8f3e8336dd4c128a266ee609ec3269543573db2) has a failing test on my laptop and @sigua-cs's laptop, whereas it was working before:

    go test ./...

    ok      github.com/contentsquare/chproxy        15.089s
    --- FAIL: TestCacheClean (0.26s)
        filesystem_cache_test.go:214: failed to put it to cache: cache "foobar": cannot create file: 98f713d98e229692173f2b397f5cd86a : open test-data/98f713d98e229692173f2b397f5cd86a: too many open files
    ERROR: 2022/10/12 12:25:19 redis_cache.go:76: failed to fetch nb of bytes in redis: ERR unknown command `info`, with args beginning with: `memory`,
    ERROR: 2022/10/12 12:25:19 redis_cache.go:76: failed to fetch nb of bytes in redis: ERR unknown command `info`, with args beginning with: `memory`,
    FAIL
    FAIL    github.com/contentsquare/chproxy/cache  1.402s
    ok      github.com/contentsquare/chproxy/chdecompressor (cached)
    ?       github.com/contentsquare/chproxy/clients        [no test files]
    ok      github.com/contentsquare/chproxy/config (cached)
    ?       github.com/contentsquare/chproxy/log    [no test files]
    FAIL

    Even though the CI is passing (probably because the ulimit is higher on the GitHub Actions runner), the aim of this task is to understand which commit created this change of behavior and fix the root cause (if needed).

  • [BUG] cache not expire

    [BUG] cache not expire

    Describe the bug The cache does not expire; chproxy version 1.17.2.

    To Reproduce expire is 10s, but the cache always hits. My config.yml:

    server:
      http:
        listen_addr: ":19090"
        #allowed_networks: ["0.0.0.0/0"]

    hack_me_please: true
    log_debug: true

    users:
      - name: "default"
        password: "root123"
        to_cluster: "default"
        to_user: "default"
        max_concurrent_queries: 1000
        max_execution_time: 10m
        # Enable response caching. See cache config below.
        cache: "default_cache"

    clusters:
      - name: "default"
        # Requests are spread in round-robin + least-loaded fashion among nodes.
        # Unreachable and unhealthy nodes are skipped.
        nodes: [
          "192.168.100.74:8123",
          "192.168.192.116:8123",
          "192.168.192.117:8123",
          "192.168.192.118:8123"
        ]
        users:
          - name: "default"
            password: "root"

    caches:
      - name: "default_cache"
        mode: "file_system"
        file_system:
          dir: "/home/zhuzhihao/919/chproxy_bin/cache"
          max_size: 150Mb
        # Cached responses will expire in 130s.
        expire: 10s


  • [DOC] outdated links in chproxy.org

    [DOC] outdated links in chproxy.org

    The doc shown on chproxy.org uses tools like:

    • goreport
    • travis
    • gocovers

    but the links to the tools are either outdated (pointing to an old version of chproxy) or don't work.