Collects many small inserts to ClickHouse and sends them as big batch inserts

ClickHouse-Bulk

Simple Yandex ClickHouse insert collector. It collects requests and sends them to ClickHouse servers.

Installation

Download the binary for your platform

or

Use the Docker image

or build from source (Go 1.13+):

git clone https://github.com/nikepan/clickhouse-bulk
cd clickhouse-bulk
go build

Features

  • Groups n requests and sends them to any of the ClickHouse servers
  • Sends collected data at a configurable interval
  • Tested with VALUES and TabSeparated formats
  • Supports sending to multiple servers
  • Supports the query in query parameters or in the request body
  • Supports other query parameters like username, password, database
  • Supports basic authentication

For example:

INSERT INTO table3 (c1, c2, c3) VALUES ('v1', 'v2', 'v3')
INSERT INTO table3 (c1, c2, c3) VALUES ('v4', 'v5', 'v6')

is sent as

INSERT INTO table3 (c1, c2, c3) VALUES ('v1', 'v2', 'v3')('v4', 'v5', 'v6')

Options

  • -config - path to the config file (JSON); default config.json

Configuration file

{
  "listen": ":8124",
  "flush_count": 10000, // check by \n char
  "flush_interval": 1000, // milliseconds
  "clean_interval": 0, // how often cleanup internal tables - e.g. inserts to different temporary tables, or as workaround for query_id etc. milliseconds
  "remove_query_id": true, // some drivers sends query_id which prevents inserts to be batched
  "dump_check_interval": 300, // interval for try to send dumps (seconds); -1 to disable
  "debug": false, // log incoming requests
  "dump_dir": "dumps", // directory for dump unsended data (if clickhouse errors)
  "clickhouse": {
    "down_timeout": 60, // wait if server in down (seconds)
    "connect_timeout": 10, // wait for server connect (seconds)
    "tls_server_name": "", // override TLS serverName for certificate verification (e.g. in cases you share same "cluster" certificate across multiple nodes)
    "insecure_tls_skip_verify": false, // INSECURE - skip certificate verification at all
    "servers": [
      "http://127.0.0.1:8123"
    ]
  }
}

Environment variables (used for the Docker image)

  • CLICKHOUSE_BULK_DEBUG - enable debug logging
  • CLICKHOUSE_SERVERS - comma-separated list of servers
  • CLICKHOUSE_FLUSH_COUNT - number of rows per insert
  • CLICKHOUSE_FLUSH_INTERVAL - insert interval
  • CLICKHOUSE_CLEAN_INTERVAL - clean interval for internal tables
  • DUMP_CHECK_INTERVAL - interval between attempts to resend dumps
  • CLICKHOUSE_DOWN_TIMEOUT - how long to wait when a server is down
  • CLICKHOUSE_CONNECT_TIMEOUT - ClickHouse server connect timeout
  • CLICKHOUSE_TLS_SERVER_NAME - server name for TLS certificate verification
  • CLICKHOUSE_INSECURE_TLS_SKIP_VERIFY - skip certificate verification entirely
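
With Docker, the same settings can be passed as environment variables. A minimal sketch (the nikepan/clickhouse-bulk image name and the ch1/ch2 host names are assumptions for illustration):

docker run -d -p 8124:8124 \
  -e CLICKHOUSE_SERVERS="http://ch1:8123,http://ch2:8123" \
  -e CLICKHOUSE_FLUSH_COUNT=10000 \
  -e CLICKHOUSE_FLUSH_INTERVAL=1000 \
  nikepan/clickhouse-bulk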

Quickstart

Run ./clickhouse-bulk and send queries to :8124
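
For example, a small insert can be posted to the collector exactly as it would be to ClickHouse itself (assuming the table3 table from the example above exists):

curl 'http://127.0.0.1:8124/?query=INSERT%20INTO%20table3%20(c1,c2,c3)%20VALUES' --data-binary "('v1','v2','v3')"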

Metrics

To manually check the main metrics: curl -s http://127.0.0.1:8124/metrics | grep "^ch_"

  • ch_bad_servers 0 - current count of bad servers
  • ch_dump_count 0 - dumps saved since launch
  • ch_queued_dumps 0 - current count of dump files in the dump directory
  • ch_good_servers 1 - current count of good servers
  • ch_received_count 40 - requests received since launch
  • ch_sent_count 1 - requests sent since launch

Tips

For better performance, the words FORMAT and VALUES must be uppercase.
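
A hedged illustration of this tip (presumably the uppercase keyword lets the proxy locate the boundary between the insert prefix and the row data quickly):

INSERT INTO table3 (c1, c2, c3) VALUES ('v1', 'v2', 'v3')   -- preferred: uppercase VALUES
insert into table3 (c1, c2, c3) values ('v1', 'v2', 'v3')   -- implied to work, but with worse performance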

Comments
  • No working clickhouse servers

    { "listen": ":8124", "flush_count": 10000, "flush_interval": 1000, "dump_check_interval": 300, "debug": false, "dump_dir": "dumps", "clickhouse": { "down_timeout": 60, "connect_timeout": 10, "servers": [ "http://clickhouse:[email protected]:8123" ] } }

    $ curl -s http://127.0.0.1:8124/metrics | grep "^ch_"
    ch_bad_servers 0
    ch_dump_count 0
    ch_good_servers 0
    ch_queued_dumps 0
    ch_received_count 0
    ch_sent_count 0

    $ curl http://clickhouse:[email protected]:8123
    Ok.

    It seems that it does not see the servers; how can I solve this problem?

  • ERROR 502: No working clickhouse servers

    {
      "listen": ":8123",
      "flush_count": 10000,
      "flush_interval": 3000,
      "debug": true,
      "dump_dir": "dumps",
    
      "clickhouse": {
        "down_timeout": 300,
        "servers": [
          "http://0.0.0.0:8070"
        ]
      }
    }
    
    2018/07/11 09:15:27 query query=Insert+into+Log_buffer+FORMAT+JSONEachRow&input_format_skip_unknown_fields=1 {"ts":"2018-07-11 09:15:27","level":"DEBUG","logger":"plugins.base_core","pid":19847,"procname":"wkr:1","file":"base_core.py:352","body":"Action start for '***********'","node":"US-2","jobid":"51399907","uid":"2","type":"monitor","plug":"*****"}
    2018/07/11 09:15:27 Send ERROR 502: No working clickhouse servers
    

    while a direct insert into ClickHouse works fine:

    $ curl 0.0.0.0:8087
    Ok.
    

    🙏

    P.S. Can it not handle the load?

  • Q: is bulk a remedy for max_connections_count

    Hi there, could you kindly confirm my initial thoughts about using your tool as a remedy for my system?

    I have a scenario where small inserts of data are posted to ClickHouse (e.g. GPS updates from a number of mobile devices). Often ClickHouse returns an HTTP 500 error because the max connection count is reached, or due to a timeout. There are some materialized views that are calculated on insert, which might slow things down. I changed the default value from 100 to 500, but it doesn't seem to help; just more queries end up waiting.

    I thought that using your tool could improve the situation, since bulk inserts are advised for their performance boost. The other option I am thinking of is using Buffer tables, as sketched below.

    Thanks!
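
    For reference, a hedged sketch of the Buffer-table alternative mentioned above (the gps_updates table name is hypothetical; the numbers are the engine's num_layers, min/max time, rows, and bytes thresholds):

    CREATE TABLE gps_updates_buffer AS gps_updates
        ENGINE = Buffer(currentDatabase(), gps_updates, 16, 10, 100, 10000, 1000000, 10000000, 100000000);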

  • ERROR: server error (400) Wrong server status 400 after updating to ClickHouse server version 21.4.4.30 (official build)

    Hi. The proxy cannot execute queries and prints this log after the update, but it worked before the update:

    request: "INSERT INTO `display` (`uuid`,`user_id`,`app_uuid`,`uuid1`,`uuid2`,`created_at`) VALUES\n('7ecd41eb-58b6-44d2-ab59-01235bc32135',86,'00806453-89a0-4fd2-9f9f-2b012f45049e','0069f823-f901-48c6-b8bb-3d5a5d61d470','4264487b-fa40-47ae-939b-a492df46caaa','1618914155')"
    2021/04/21 07:12:41.249482 INFO: sending 1 rows to http://192.168.88.1:8123 of INSERT INTO `display` (`uuid`,`user_id`,`app_uuid`,`uuid1`,`uuid1`,`created_at`) VALUES
    2021/04/21 07:12:41.257645 INFO: sent 1 rows to http://192.168.88.1:8123 of INSERT INTO `display` (`uuid`,`user_id`,`app_uuid`,`uuid1`,`uuid1`,`created_at`) VALUES
    2021/04/21 07:12:41.257817 ERROR: server error (400) Wrong server status 400:
    

    Please take a look, it is very urgent for us. Best regards, Arthur

  • ConnectTimeout option does not work properly

    Hi,

    In the NewClickhouse method, the behavior of the ConnectTimeout option is implemented incorrectly:

    c.ConnectTimeout = connectTimeout
    if c.ConnectTimeout > 0 {
        c.ConnectTimeout = 10
    }
    

    So if I set any positive value for ConnectTimeout, it is not used but overwritten with 10 seconds.
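
    Presumably the intended logic is the inverted condition, applying the 10-second default only when no positive value was given:

    c.ConnectTimeout = connectTimeout
    if c.ConnectTimeout <= 0 {
        c.ConnectTimeout = 10 // fall back to the default only when unset or invalid
    }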

  • Not working?

    Hello. I wrote a simple test script in Go:

    package main
    
    import (
        "fmt"
        "io/ioutil"
        "net/http"
        "strings"
    )
    
    func main() {
        fmt.Println("Hello, playground")
        for i := 0; i < 500; i++ {
            post(fmt.Sprintf("(%d)", i))
        }
        println("done")
    }
    
    func post(b string) {
        bod := strings.NewReader(b)
        req, err := http.NewRequest("POST", "http://127.0.0.1:8124/?query=INSERT%20INTO%20t%20VALUES", bod)
        if err != nil {
            panic(err)
        }
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        _, err = ioutil.ReadAll(resp.Body)
    
        if err != nil {
            panic(err)
        }
    }
    

    I have default params in the config and see these records in the log:

    2019/11/18 18:14:07 DEBUG: query query=INSERT%20INTO%20t%20VALUES (493)
    2019/11/18 18:14:07 DEBUG: query query=INSERT%20INTO%20t%20VALUES (494)
    2019/11/18 18:14:07 DEBUG: query query=INSERT%20INTO%20t%20VALUES (495)
    2019/11/18 18:14:07 DEBUG: query query=INSERT%20INTO%20t%20VALUES (496)
    2019/11/18 18:14:07 DEBUG: query query=INSERT%20INTO%20t%20VALUES (497)
    2019/11/18 18:14:07 DEBUG: query query=INSERT%20INTO%20t%20VALUES (498)
    2019/11/18 18:14:07 DEBUG: query query=INSERT%20INTO%20t%20VALUES (499)
    2019/11/18 18:14:08 INFO: send 500 rows to http://u:pass@ip:8123 of INSERT INTO t VALUES
    

    But in ClickHouse I see 255 from curl 'some:8123?query=SELECT%20MAX(a)%20FROM%20t'. Looks like the first packet was sent twice:

    250
    251
    252
    253
    254
    255
    0
    1
    2
    3
    4
    5
    6
    
  • Credential disclosure in logs

    When forwarding queries to a server requiring authentication, if the URL is of the form http://username:password@localhost:8123, these credentials are disclosed in the log file by https://github.com/nikepan/clickhouse-bulk/blob/4f084dd00b9c39e21a32cffedfc54d314bb46f18/clickhouse.go#L182

    We should redact the password portion of this string before echoing it.
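
    A minimal sketch of such redaction using only the standard library (the redacted helper name is hypothetical, not the project's code):

    package main

    import (
        "fmt"
        "net/url"
    )

    // redacted strips the password from a server URL before it is logged.
    func redacted(raw string) string {
        u, err := url.Parse(raw)
        if err != nil {
            return raw // leave unparseable strings untouched
        }
        return u.Redacted() // Go 1.15+: the password is replaced with "xxxxx"
    }

    func main() {
        fmt.Println(redacted("http://username:password@localhost:8123"))
        // Output: http://username:xxxxx@localhost:8123
    }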

  • Correct flush of data to clickhouse after sending "STOP signal"

    There are some problems with sending "STOP signal".

    1. After sending the "STOP signal", POST queries do not work and the insert data is lost. If I use the standard SendQuery instead of the Send method, everything is OK. If I put a Sleep into Send before flushing the table, it is also OK.
    2. It is unclear whether the data was flushed to ClickHouse.

    I propose the following solution.

  • ERROR: Send (503) No working clickhouse servers; response

    Periodically under load we get logs like this:

    clickhouse-bulk_1 | 2021/03/05 11:03:10.847398 ERROR: Send (503) No working clickhouse servers; response
    clickhouse-bulk_1 | 2021/03/05 11:03:10.847752 ERROR: Send (503) No working clickhouse servers; response
    clickhouse-bulk_1 | 2021/03/05 11:03:10.847858 ERROR: Send (503) No working clickhouse servers; response
    clickhouse-bulk_1 | 2021/03/05 11:03:10.847954 ERROR: Send (503) No working clickhouse servers; response
    clickhouse-bulk_1 | 2021/03/05 11:03:10.848023 ERROR: Send (503) No working clickhouse servers; response
    clickhouse-bulk_1 | 2021/03/05 11:03:10.848081 ERROR: Send (503) No working clickhouse servers; response
    clickhouse-bulk_1 | 2021/03/05 11:03:10.848224 ERROR: Send (503) No working clickhouse servers; response
    clickhouse-bulk_1 | 2021/03/05 11:03:10.848425 ERROR: Send (503) No working clickhouse servers; response
    clickhouse-bulk_1 | 2021/03/05 11:03:10.848488 ERROR: Send (503) No working clickhouse servers; response
    clickhouse-bulk_1 | 2021/03/05 11:03:10.848615 ERROR: Send (503) No working clickhouse servers; response
    clickhouse-bulk_1 | 2021/03/05 11:03:10.848839 ERROR: Send (503) No working clickhouse servers; response
    clickhouse-bulk_1 | 2021/03/05 11:03:10.848950 ERROR: Send (503) No working clickhouse servers; response
    clickhouse-bulk_1 | 2021/03/05 11:03:10.849238 ERROR: Send (503) No working clickhouse servers; response
    clickhouse-bulk_1 | 2021/03/05 11:03:10.849771 ERROR: Send (503) No working clickhouse servers; response
    clickhouse-bulk_1 | 2021/03/05 11:03:10.850151 ERROR: Send (503) No working clickhouse servers; response
    clickhouse-bulk_1 | 2021/03/05 11:03:10.850361 ERROR: Send (503) No working clickhouse servers; response
    clickhouse-bulk_1 | 2021/03/05 11:03:10.850426 ERROR: Send (503) No working clickhouse servers; response
    clickhouse-bulk_1 | 2021/03/05 11:03:10.850513 ERROR: Send (503) No working clickhouse servers; response
    clickhouse-bulk_1 | 2021/03/05 11:03:10.850565 ERROR: Send (503) No working clickhouse servers; response
    clickhouse-bulk_1 | 2021/03/05 11:03:10.850753 ERROR: Send (503) No working clickhouse servers; response
    clickhouse-bulk_1 | 2021/03/05 11:04:32.243796 INFO: sending 26 rows to http://default:root@11111111:8123 of INSERT INTO lkdn_profiles.employees (
    clickhouse-bulk_1 | 2021/03/05 11:04:42.244128 ERROR: server down (502): Post http://default:***@11111111:8123: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
    clickhouse-bulk_1 | 2021/03/05 11:04:42.244156 INFO: sending 26 rows to http://default:root@11111111:8123 of INSERT INTO lkdn_profiles.employees (
    clickhouse-bulk_1 | 2021/03/05 11:04:52.244517 ERROR: server down (502): Post http://default:***@11111111:8123: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
    clickhouse-bulk_1 | 2021/03/05 11:04:52.244552 INFO: sending 26 rows to http://default:root@11111111:8123 of INSERT INTO lkdn_profiles.employees (
    clickhouse-bulk_1 | 2021/03/05 11:05:02.244919 ERROR: server down (502): Post http://default:***@11111111:8123: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
    clickhouse-bulk_1 | 2021/03/05 11:05:02.244950 INFO: sending 26 rows to http://default:root@11111111:8123 of INSERT INTO lkdn_profiles.employees (
    clickhouse-bulk_1 | 2021/03/05 11:05:12.245236 ERROR: server down (502): Post http://default:***@11111111:8123: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
    clickhouse-bulk_1 | 2021/03/05 11:05:12.245261 INFO: sending 26 rows to http://default:root@11111111:8123 of INSERT INTO lkdn_profiles.employees (
    clickhouse-bulk_1 | 2021/03/05 11:05:22.245596 ERROR: server down (502): Post http://default:***@11111111:8123: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
    clickhouse-bulk_1 | 2021/03/05 11:05:22.245626 ERROR: server error (503) No working clickhouse servers

    but at the same time the ClickHouse server itself is alive:

    $ echo 'SELECT 1' | curl 'http://default:root@1111:8123/' --data-binary @-
    1

  • add TLS options: serverName & skip verify

    Imagine a situation where you have a single DNS record pointing to multiple ClickHouse nodes; if a single node fails, that whole endpoint gets marked as down (bad). So you have to list your servers one by one - but what if your certificate CN is valid only for that cluster record? Now you can set tlsServerName, so you can use even IP addresses, or whatever you want, as long as the ClickHouse server certificate CN matches tlsServerName.
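
    A hedged config sketch of these options (the host name, IPs, and HTTPS port are illustrative):

    {
      "clickhouse": {
        "tls_server_name": "ch-cluster.example.com",
        "insecure_tls_skip_verify": false,
        "servers": [
          "https://10.0.0.1:8443",
          "https://10.0.0.2:8443"
        ]
      }
    }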

  • Multiline inserts, fix memory leak caused by query_id, fix stack trace when no live server

    • feature: support for multiline inserts (some CH drivers send FORMAT TabSeparated with multiple rows, e.g. Java ExecuteBatch), so we have to count the number of rows in the data
    • improvement: log in µs and log only insert statements - SELECTs are not that interesting, and the actual query isn't logged at all
    • improvement: enable debug with an ENV var
    • fix: memory leak caused by query_id in query params - basically every query is unique -> a new map[]Table for every query with a query_id -> tables were never deleted. I have added two options: clean_interval - all tables which have not been updated for clean_interval will be deleted; remove_query_id - removes query_id=... from the query params before creating / inserting into a Table (under our load the memory usage was 800MB+ within ~3h; with remove_query_id the memory usage is stable at ~45MB). This can probably solve #23
    • fix: return 503 instead of a stack trace when there is no live ClickHouse endpoint
  • Very strange error on insert

    Periodically getting either a database not-found error or an auth error.

    -- auto-generated definition
    create table test
    (
        id   Int32,
        name String
    )
        engine = MergeTree
            PARTITION BY id
            PRIMARY KEY id
            ORDER BY (id, name)
            SETTINGS index_granularity = 8192;

    ⇨ http server started on [::]:8124
    2021/09/10 21:26:10.503957 DEBUG: query INSERT INTO gc.test (id, name) VALUES (7, 'xcvbx')
    2021/09/10 21:26:11.506905 INFO: sending 1 rows to http://10.0.10.141:8123 of INSERT INTO gc.test (id, name) VALUES
    2021/09/10 21:26:11.521327 INFO: sent 1 rows to http://10.0.10.141:8123 of INSERT INTO gc.test (id, name) VALUES
    2021/09/10 21:26:16.768973 DEBUG: query INSERT INTO gc.test (id, name) VALUES (8, 'xcvbx')
    2021/09/10 21:26:17.504073 INFO: sending 1 rows to http://10.0.10.142:8123 of INSERT INTO gc.test (id, name) VALUES
    2021/09/10 21:26:17.517043 INFO: sent 1 rows to http://10.0.10.142:8123 of INSERT INTO gc.test (id, name) VALUES
    2021/09/10 21:26:17.517161 ERROR: Send (500) Wrong server status 500: response: Code: 516, e.displayText() = DB::Exception: chtdidx: Authentication failed: password is incorrect or there is no user with such name (version 21.2.2.8 (official build))

    request: "INSERT INTO gc.test (id, name) VALUES\n(8, 'xcvbx')"; response Code: 516, e.displayText() = DB::Exception: chtdidx: Authentication failed: password is incorrect or there is no user with such name (version 21.2.2.8 (official build))

    2021/09/10 21:26:19.228692 DEBUG: query INSERT INTO gc.test (id, name) VALUES (8, 'xcvbx')
    2021/09/10 21:26:19.508245 INFO: sending 1 rows to http://10.0.10.143:8123 of INSERT INTO gc.test (id, name) VALUES
    2021/09/10 21:26:19.522896 INFO: sent 1 rows to http://10.0.10.143:8123 of INSERT INTO gc.test (id, name) VALUES
    2021/09/10 21:26:19.523012 ERROR: Send (500) Wrong server status 500: response: Code: 516, e.displayText() = DB::Exception: chtdidx: Authentication failed: password is incorrect or there is no user with such name (version 21.2.2.8 (official build))

    request: "INSERT INTO gc.test (id, name) VALUES\n(8, 'xcvbx')"; response Code: 516, e.displayText() = DB::Exception: chtdidx: Authentication failed: password is incorrect or there is no user with such name (version 21.2.2.8 (official build))

    2021/09/10 21:26:22.539692 DEBUG: query INSERT INTO gc.test (id, name) VALUES (8, 'xcvbx')
    2021/09/10 21:26:23.503982 INFO: sending 1 rows to http://10.0.10.141:8123 of INSERT INTO gc.test (id, name) VALUES
    2021/09/10 21:26:23.531772 INFO: sent 1 rows to http://10.0.10.141:8123 of INSERT INTO gc.test (id, name) VALUES

  • Basic authentication credentials being leaked in logs

    Version: 1.3.3

    The upstream server's credentials are leaked into the log output in:

    • Initial boot: use servers XXXX
    • Sending metrics: sending N rows to XXXX
    • Sent metrics: sent N rows to XXXX
  • Option to periodically flush collected data from memory to the disk if needed.

    This option could be helpful in cases where the service might be killed by the OOM killer, an unexpected server reboot, etc. It would prevent losing all of the data collected in memory and guarantee delivery after the service recovers. Useful new options:

    1. Enable flush to disk
    2. How often
    3. Retention policy

    Could this feature be implemented?
  • Better config read error logging

    Currently, if you load an improperly formatted config JSON (e.g. by adding // comments to it), the log output will say the file was not found, which is not necessarily true. This fixes that by displaying the correct error. It also short-circuits and returns when there's an error loading any config, because that causes a fatal exit anyway.
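
    A minimal sketch of the behavior this PR describes (the Config type and readConfig name are illustrative, not the project's actual code):

    package main

    import (
        "encoding/json"
        "log"
        "os"
    )

    // Config stands in for the project's configuration struct.
    type Config struct {
        Listen string `json:"listen"`
    }

    // readConfig surfaces the real error: a read failure and a JSON
    // parse failure are reported distinctly instead of both being
    // logged as "file not found".
    func readConfig(path string) (Config, error) {
        var cfg Config
        data, err := os.ReadFile(path)
        if err != nil {
            return cfg, err // e.g. "no such file or directory"
        }
        err = json.Unmarshal(data, &cfg) // e.g. "invalid character '/'"
        return cfg, err
    }

    func main() {
        cfg, err := readConfig("config.json")
        if err != nil {
            log.Fatal(err) // fatal exit either way, as the PR notes
        }
        log.Printf("listening on %s", cfg.Listen)
    }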
