Type-safe Redis client for Golang

Installation

go-redis supports the two most recent Go versions and requires a Go version with module support. So make sure to initialize a Go module:

go mod init github.com/my/repo

And then install go-redis/v8 (note v8 in the import path; omitting it is a common mistake):

go get github.com/go-redis/redis/v8

Quickstart

import (
    "context"
    "fmt"

    "github.com/go-redis/redis/v8"
)

var ctx = context.Background()

func ExampleClient() {
    rdb := redis.NewClient(&redis.Options{
        Addr:     "localhost:6379",
        Password: "", // no password set
        DB:       0,  // use default DB
    })

    err := rdb.Set(ctx, "key", "value", 0).Err()
    if err != nil {
        panic(err)
    }

    val, err := rdb.Get(ctx, "key").Result()
    if err != nil {
        panic(err)
    }
    fmt.Println("key", val)

    val2, err := rdb.Get(ctx, "key2").Result()
    if err == redis.Nil {
        fmt.Println("key2 does not exist")
    } else if err != nil {
        panic(err)
    } else {
        fmt.Println("key2", val2)
    }
    // Output: key value
    // key2 does not exist
}

Look and feel

Some corner cases:

// SET key value EX 10 NX
set, err := rdb.SetNX(ctx, "key", "value", 10*time.Second).Result()

// SET key value keepttl NX
set, err := rdb.SetNX(ctx, "key", "value", redis.KeepTTL).Result()

// SORT list LIMIT 0 2 ASC
vals, err := rdb.Sort(ctx, "list", &redis.Sort{Offset: 0, Count: 2, Order: "ASC"}).Result()

// ZRANGEBYSCORE zset -inf +inf WITHSCORES LIMIT 0 2
vals, err := rdb.ZRangeByScoreWithScores(ctx, "zset", &redis.ZRangeBy{
    Min: "-inf",
    Max: "+inf",
    Offset: 0,
    Count: 2,
}).Result()

// ZINTERSTORE out 2 zset1 zset2 WEIGHTS 2 3 AGGREGATE SUM
vals, err := rdb.ZInterStore(ctx, "out", &redis.ZStore{
    Keys: []string{"zset1", "zset2"},
    Weights: []int64{2, 3},
}).Result()

// EVAL "return {KEYS[1],ARGV[1]}" 1 "key" "hello"
vals, err := rdb.Eval(ctx, "return {KEYS[1],ARGV[1]}", []string{"key"}, "hello").Result()

// custom command
res, err := rdb.Do(ctx, "set", "key", "value").Result()
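
Do returns the reply as an interface{}, so a type assertion is usually needed. A small hedged example (a successful SET replies with the status string "OK"):

if s, ok := res.(string); ok {
    fmt.Println(s) // "OK"
}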

Run the tests

go-redis will start a redis-server and run the test cases.

The paths to the redis-server binary and the redis config file are defined in main_test.go:

var (
	redisServerBin, _  = filepath.Abs(filepath.Join("testdata", "redis", "src", "redis-server"))
	redisServerConf, _ = filepath.Abs(filepath.Join("testdata", "redis", "redis.conf"))
)

For local testing, you can change the variables to refer to your local files, or create a symlink for redis-server in the corresponding folder and copy the config file to testdata/redis/:

ln -s /usr/bin/redis-server ./go-redis/testdata/redis/src
cp ./go-redis/testdata/redis.conf ./go-redis/testdata/redis/

Lastly, run:

go test

Comments
  • undefined: otel.Meter or cannot find package "go.opentelemetry.io/otel/api/trace"

    To fix cannot find package "go.opentelemetry.io/otel/api/trace" or undefined: otel.Meter:

    1. Make sure to initialize a Go module: go mod init github.com/my/repo

    2. Make sure to use correct import path with v8 in the end: go get github.com/go-redis/redis/v8

    For example:

    mkdir /tmp/redis-test
    cd /tmp/redis-test
    go mod init redis-test
    go get github.com/go-redis/redis/v8
    

    The root cause

    The error is not caused by OpenTelemetry. OpenTelemetry is just the first module Go tries to install. And the error will not go away until you start using Go modules properly.

    The presence of $GOROOT or $GOPATH in error messages indicates that you are NOT using Go modules.

  • V8 performance degradation ~20%

    @monkey92t

    Hi, thank you for your tests. I ran your tests in our environment and saw similar comparative results. However, when I slightly modified the tests to more accurately reflect our use case (and how Go's HTTP server spawns a goroutine for each request), all of a sudden performance degraded for V8. This is especially evident with 100+ concurrency.

    2 changes that were made:

    1. instead of pre-spawning goroutines that each run a fixed number of Get/Set calls in a for loop (this is retained as get2/set2), it works through a fixed number of requests and spawns a goroutine (only up to the concurrency limit) to process each one.
    2. each request generates a random key so the load is spread across the Redis cluster.

    Both V7 and V8 saw a decrease in throughput when comparing pre-spawned goroutines against a goroutine per request, as sketched below. However, the decrease for V7 is very small, as expected, while for V8 it is quite dramatic.
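
    For clarity, a minimal sketch of the goroutine-per-request pattern described above (illustrative only: runPerRequest, the key scheme, and the counters are assumptions, not the actual test program):

    // assumes imports: context, fmt, math/rand, sync, github.com/go-redis/redis/v8
    func runPerRequest(ctx context.Context, rdb *redis.ClusterClient, totalRequests, concurrency int) {
        sem := make(chan struct{}, concurrency) // bounds in-flight goroutines at the concurrency limit
        var wg sync.WaitGroup
        for i := 0; i < totalRequests; i++ {
            sem <- struct{}{}
            wg.Add(1)
            go func() {
                defer func() { wg.Done(); <-sem }()
                // a random key per request spreads load across the cluster's hash slots
                key := fmt.Sprintf("key-%d", rand.Intn(1<<20))
                _ = rdb.Get(ctx, key).Err()
            }()
        }
        wg.Wait()
    }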

    go-redis version: v7.4.0 and v8.6.0

    redis-cluster (version 5.0.7): 84 master instances, 84 slave instances

    This is the RedisCluster test result: https://github.com/go-redis/redis/files/6158805/Results.pdf

    This is the test program: https://github.com/go-redis/redis/files/6158824/perftest.go.gz

  • high memory usage + solution

    Hi,

    I noticed that memory usage was very high in my project. I did memory profiling with inuse_space, and 90% of my memory is used by go-redis in WriteBuffer. If I understand correctly, each connection in the pool has its own WriteBuffer.

    My project runs 80 goroutines (on 8 CPUs), and each goroutine SETs Redis keys. My Redis keys are large, several MB each (but less than 100 MB), so it's easy to see why the memory usage is so high.

    I think I have a solution, but it requires changes in go-redis internals. We could use a global sync.Pool of WriteBuffer instead.

    WDYT?
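
    A minimal sketch of the proposed idea, assuming a global pool of write buffers (the helper name is made up; this is not actual go-redis code):

    var writeBufPool = sync.Pool{
        New: func() interface{} { return new(bytes.Buffer) },
    }

    // withWriteBuffer borrows a buffer only for the duration of one write,
    // so idle pool connections hold no large buffers of their own.
    func withWriteBuffer(fn func(*bytes.Buffer) error) error {
        buf := writeBufPool.Get().(*bytes.Buffer)
        defer func() {
            buf.Reset() // keep the backing array for reuse, drop the contents
            writeBufPool.Put(buf)
        }()
        return fn(buf)
    }

    With this approach memory tracks the number of concurrent writers rather than the pool size, and sync.Pool releases unused buffers at GC.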

  • Constantly Reestablishing Connections in Cluster Mode

    Expected Behavior

    Creating a cluster client using pretty much default settings should not overwhelm Redis with a constant barrage of new connections.

    redis.NewClusterClient(&redis.ClusterOptions{
        Addrs: []string{redisAddr},
        TLSConfig: &tls.Config{},
    })
    

    Current Behavior

    Occasionally, at times completely unrelated to system load/traffic, we are seeing connections being constantly re-established to one of the cluster nodes in our Redis cluster. We are using ElastiCache Redis in cluster mode with TLS enabled, and there seems to be no trigger we can find for this behavior. We also do not see any relevant logs in our service's systemd output in journalctl, other than

    redis_writer:85 {}        Error with write attempt: context deadline exceeded
    

    which seems more like a symptom of an overloaded Redis cluster node rather than a cause.

    When this issue happens, running CLIENT LIST on the affected Redis node shows age=0 or age=1 for all connections every time, which reinforces that connections are being dropped constantly for some reason. New connections plummet on other shards in the Redis cluster, and are all concentrated on one.

    [CloudWatch graphs: NewConnections and CurrConnections for the cluster nodes]

    In the example Cloudwatch graphs above we can also see that the issue can move between Redis cluster shards. As you can see, we're currently running with a 4-shard cluster, where each shard has 1 replica.

    Restarting our service does not fix this problem; to address it we basically need to do a hard reset (completely stop the clients for a while, then start them up again).

    We've reached out to AWS support and they have found no issues with our ElastiCache Redis cluster on their end. Additionally, there are no ElastiCache events happening at the time this issue is triggered.

    Possible Solution

    In this issue I'm mainly hoping to get insight into how I could better troubleshoot this issue, and/or whether there are additional client options we can use to try and mitigate this worst-case scenario (e.g. rate limiting the creation of new connections in the cluster client) in the absence of a root-cause fix.

    My main questions are:

    1. Is there a way for me to gather more data that would be helpful for the Redis/go-redis experts here?
    2. Is there a way for us to rate-limit the creation of new connections in the ClusterClient to keep things from getting too out of control if this does continue to occur?
    3. Has anyone else encountered a similar issue with Cluster mode, whether or not it was with ElastiCache Redis?

    Steps to Reproduce

    The description of our environment/service implementation below, as well as the snippet of our NewClusterClient call at the beginning of this issue, provide a fairly complete summary of how we're using both go-redis and ElastiCache Redis. We've not been able to consistently trigger this issue since it often happens when we're not load testing, and are mainly looking for answers for some of our questions above.

    Context (Environment)

    We're running a service that has a simple algorithm for claiming work from a Redis set, doing something with it, and then cleaning it up from Redis. In a nutshell, the algorithm is as follows (a rough sketch in code follows the list):

    • SRANDMEMBER pending 10 - grab up to 10 random items from the pool of available work
    • ZADD in_progress <current_timestamp> <grabbed_item> for each of our items we got in the previous step
    • Any work items we weren't able to ZADD have been claimed by some other instance of the service, skip them
    • Once we're done with a work item, SREM pending <grabbed_item>
    • Periodically ZREMRANGEBYSCORE in_progress -inf <5_seconds_ago> so that claimed items aren't claimed forever
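
    A rough go-redis v8 sketch of one iteration of the loop above (claimOnce and process are made-up names; the key names follow the description):

    func claimOnce(ctx context.Context, rdb *redis.ClusterClient) error {
        // SRANDMEMBER pending 10
        items, err := rdb.SRandMemberN(ctx, "pending", 10).Result()
        if err != nil {
            return err
        }
        now := float64(time.Now().Unix())
        for _, item := range items {
            // ZADD NX in_progress <now> <item>: succeeds only if no other
            // instance has already claimed this item.
            added, err := rdb.ZAddNX(ctx, "in_progress", &redis.Z{Score: now, Member: item}).Result()
            if err != nil || added == 0 {
                continue // transient error, or claimed elsewhere: skip it
            }
            process(item)                  // hypothetical work function
            rdb.SRem(ctx, "pending", item) // SREM pending <item>
        }
        // Elsewhere, periodically: ZREMRANGEBYSCORE in_progress -inf <5_seconds_ago>
        return nil
    }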

    Currently we run this algorithm on 6 EC2 instances, each running one service. Since each instance has 4 CPU cores, go-redis is calculating a max connection pool size of 20 for our ClusterClient. Each service has 20 goroutines performing this algorithm, and each goroutine sleeps 10ms between each invocation of the algorithm.

    At a steady state with no load on the system (just a handful of heartbeat jobs being added to pending every minute) we see a maximum of ~8% EngineCPUUtilization on each Redis shard, and 1-5 new connections/minute. Overall, pretty relaxed. When this issue has triggered recently, it's happened from this steady state, not during load tests.

    Our service is running on EC2 instances running Ubuntu 18.04 (Bionic), and we have tried using github.com/go-redis/redis/v8 v8.0.0 and github.com/go-redis/redis/v8 v8.11.2 - both have run into this issue.

    As mentioned earlier, we're currently running with a 4-shard ElastiCache Redis cluster with TLS enabled, where each shard has 1 replica.

    Detailed Description

    N/A

    Possible Implementation

    N/A

  • Add redis.Scan() to scan results from redis maps into structs.

    The package uses reflection to decode default types (int, string etc.) from Redis map results (key-value pair sequences) into struct fields where the fields are matched to Redis keys by tags.

    Similar to how encoding/json allows custom decoders using UnmarshalJSON(), the package supports decoding arbitrary types into struct fields by defining a Decode(string) error function on the type.

    The field/type spec of every struct that's passed to Scan() is cached in the package so that subsequent scans avoid iteration and reflection of the struct's fields.
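
    A hedged usage sketch of the described API, assuming a v8 release where map results support Scan (the struct, tags, and key are illustrative):

    type Model struct {
        Name string `redis:"name"`
        Size int    `redis:"size"`
    }

    var m Model
    // HGETALL returns key-value pairs; Scan decodes them into struct
    // fields matched via the redis struct tags.
    if err := rdb.HGetAll(ctx, "model:1").Scan(&m); err != nil {
        panic(err)
    }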

    Issue: https://github.com/go-redis/redis/issues/1603

  • hscan adds support for i386 platform

    set: GOARCH=386

    redis 127.0.0.1:6379> set a 100
    redis 127.0.0.1:6379> set b 123456789123456789

    type Demo struct {
        A int8  `redis:"a"`
        B int64 `redis:"b"`
    }

    client := redis.NewClient(&redis.Options{
        Network: "tcp",
        Addr:    "127.0.0.1:6379",
    })
    ctx := context.Background()
    d := &Demo{}
    err := client.MGet(ctx, "a", "b").Scan(d)
    t.Log(d, err)


    It should run normally on the i386 platform and should not produce an error such as: strconv.ParseInt: parsing "123456789123456789": value out of range.

  • Add Limiter interface

    This is an alternative to https://github.com/go-redis/redis/pull/874. Basically, it defines a rate limiter interface which allows implementing different limiting strategies in separate packages.

    @xianglinghui what do you think? Is the provided API enough to cover your needs? I am aware that code like https://github.com/go-redis/redis/blob/master/ring.go#L618-L621 requires some work in go-redis, but other than that it seems to be enough.
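
    For reference, the interface that ended up in go-redis looks roughly like this (see Options.Limiter):

    type Limiter interface {
        // Allow returns nil if the operation is allowed, an error otherwise.
        Allow() error
        // ReportResult reports the result of the previously allowed operation:
        // nil indicates a success, a non-nil error usually indicates a failure.
        ReportResult(result error)
    }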

  • connection pool timeout

    I am using Redis as a caching layer for long running web services. I initialize the connection like so:

    var (
        Queues  *redis.Client
        Tracker *redis.Client
    )
    
    func Connect(url string) {
        // cut away redis://
        url = url[8:]
    
        // connect to db #0
        Queues = redis.NewClient(&redis.Options{
            Addr:     url,
            Password: "",
            DB:       0,
        })
    
        _, err := Queues.Ping().Result()
        if err != nil {
            panic(err)
        }
    
        // connect to db #1
        Tracker = redis.NewClient(&redis.Options{
            Addr:     url,
            Password: "",
            DB:       1,
        })
    
        _, err = Tracker.Ping().Result()
        if err != nil {
            panic(err)
        }
    }
    

    Albeit in an upcoming patch (our sysadmin is deploying a Redis cluster) it will be like so:

    var (
        Cluster *redis.ClusterClient
    )
    
    func ConnectCluster(cluster, password string) {
        addresses := strings.Split(cluster, ",")
        Cluster = redis.NewClusterClient(&redis.ClusterOptions{
            Addrs: addresses,
            // Password: password,
        })
    
        _, err := Cluster.Ping().Result()
        if err != nil {
            panic(err)
        }
    }
    

    The above code runs once when the service boots up in main.go, and the *redis.ClusterClient is used for the lifetime of the process.

    I realize there is an inherent problem with this approach, which manifests itself in connections timing out after a few days and crashing the application with: redis: connection pool timeout

    Please advise, what would be a proper approach to use go-redis in this situation?

  • dial tcp: i/o timeout

    I am using go-redis version v6.14.2. I have my application running in an AWS cluster behind a load balancer. All Redis requests failed on one of the nodes in the cluster; the rest of the nodes were working as expected. The application started working properly after a restart. We are using ElastiCache. Can you please help me identify the issue? If it is a previously known issue that has been solved in a later version, can you point me to the relevant link?

    The error was "dial tcp: i/o timeout".

    Below is my cluster configuration excluding redis host address and password:

    • ReadOnly : true
    • RouteByLatency : true
    • RouteRandomly : true
    • DialTimeout : 300ms
    • ReadTimeout : 30s
    • Write Timeout : 30s
    • PoolSize : 12000
    • PoolTimeout : 32
    • IdleTimeout : 120s
    • IdleCheckFrequency : 1s

    import (
        goRedisClient "github.com/go-redis/redis"
    )

    func GetRedisClient() *goRedisClient.ClusterClient {
        clusterClientOnce.Do(func() {
            redisClusterClient = goRedisClient.NewClusterClient(
                &goRedisClient.ClusterOptions{
                    Addrs:          viper.GetStringSlice("redis.hosts"),
                    ReadOnly:       true,
                    RouteByLatency: true,
                    RouteRandomly:  true,
                    Password:       viper.GetString("redis.password"),

                    DialTimeout:  viper.GetDuration("redis.dial_timeout"),
                    ReadTimeout:  viper.GetDuration("redis.read_timeout"),
                    WriteTimeout: viper.GetDuration("redis.write_timeout"),

                    PoolSize:           viper.GetInt("redis.max_active_connections"),
                    PoolTimeout:        viper.GetDuration("redis.pool_timeout"),
                    IdleTimeout:        viper.GetDuration("redis.idle_connection_timeout"),
                    IdleCheckFrequency: viper.GetDuration("redis.idle_check_frequency"),
                },
            )

            if err := redisClusterClient.Ping().Err(); err != nil {
                log.WithError(err).Error(errorCreatingRedisClusterClient)
            }
        })
        return redisClusterClient
    }

    As suggested in the comments on https://github.com/go-redis/redis/issues/1194, I wrote the following snippet to dial and test node health for each slot. There were no errors. As mentioned, it happens randomly in one of the clients, not always. It happened again after 3-4 months, and it is always fixed by a restart.

    func CheckRedisSlotConnection(testCase string) {
    	fmt.Println(viper.GetStringSlice("redis.hosts"))
    	fmt.Println("Checking testcase " + testCase)
    	client := redis.GetRedisClient()
    	slots := client.ClusterSlots().Val()
    	addresses := []string{}
    	for _, slot := range slots {
    		for _, node := range slot.Nodes {
    			addresses = append(addresses, node.Addr)
    		}
    	}
    	fmt.Println("Received " + strconv.Itoa(len(addresses)) + " Slots")
    	for _, address := range addresses {
    		fmt.Println("Testing address " + address)
    		conn, err := net.DialTimeout("tcp", address, 500*time.Millisecond)
    		if err != nil {
    			fmt.Println("Error dialing to address " + address + " Error " + err.Error())
    			continue
    		}
    		fmt.Println("Successfully dialled to address " + address)
    		err = conn.Close()
    		if err != nil {
    			fmt.Println("Error closing connection " + err.Error())
    			continue
    		}
    	}
    }
    
  • Attempt to cleanup cluster logic.

    @dim I tried to refactor the code a bit to learn more about Redis cluster. Changes:

    • NewClusterClient does not return an error any more, because NewClient does not either. I personally think an app can't do anything useful except exit when NewClusterClient returns an error, so a panic should be a good alternative.
    • Now ClusterClient.process tries the next available replica before falling back to randomClient. I am not sure that this change is correct, but I hope so :)
    • randomClient is completely rewritten so it does not require allocating a seen map[string]struct{}{} on every request. It also checks that the node is online before returning.
  • How to implement periodic refresh topology

    My Redis cluster runs on top of Kubernetes, so sometimes I may move the entire cluster to another set of nodes, and they all change IP addresses. So my go-redis client needs to refresh the topology from time to time. I am wondering: is there a config option to do that? Or do I need to send some cluster-nodes command from time to time?
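
    One possible approach, sketched under the assumption of go-redis v8, which exposes ClusterClient.ReloadState for triggering a topology reload (rdb is the *redis.ClusterClient; the interval is an arbitrary choice):

    go func() {
        ticker := time.NewTicker(time.Minute)
        defer ticker.Stop()
        for range ticker.C {
            rdb.ReloadState(ctx) // re-reads the cluster slot map, picking up new node addresses
        }
    }()

    The client also reloads cluster state on its own when it sees MOVED redirects, so a periodic reload mainly helps when all known node addresses go stale at once.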

  • Doubts about the retry mechanism

    Hi, I would like to know how the retry mechanism ensures that the command was not executed on the server side. In particular, why do io.EOF and io.ErrUnexpectedEOF allow retries?

    Also, in the v8 version of _process the variable retryTimeout is always equal to 1, which looks like a bug. The v9 version has no such problem.

  • ERR unknown command '--cluster'

    I have created a RedisClient using go-redis

    rdClient := rd.NewClusterClient(rdClusterOpts)
    

    I can do other database operations using the client

    out,err := rdClient.Ping(context.TODO()).Result()
    PONG
    

    I can also do get/set operations using the client. When I try to rebalance the slots, it shows an error.

    out, err := rdClient.Do(context.TODO(), "--cluster", "rebalance", "10.244.0.98", "--cluster-use-empty-masters").Result()
    

    It shows the Error

    ERR unknown command '--cluster', with args beginning with: 'rebalance' '10.244.0.96:6379' '--cluster-use-empty-masters
    

    Is there any way to perform the Redis Cluster Manager commands using go-redis?

  • ExpireNX command should not exist in v8

    The Cmdable's ExpireNX command exists in branch v8 even though it depends on functionality that was added only in Redis 7.

    Expected Behavior

    Cmdable should not define ExpireNX(), ExpireXX(), ExpireGT(), ExpireLT()

    Current Behavior

    Using these functions results in "(error) ERR wrong number of arguments for 'expire' command".

    Possible Solution

    remove ExpireNX(), ExpireXX(), ExpireGT(), ExpireLT() from v8 branch

    Steps to Reproduce

    1. call ExpireNX with any "valid" arguments (see the sketch below)
    2. get error
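
    A minimal repro sketch of the steps above (key and TTL are arbitrary):

    // Against a Redis server older than 7 this fails, because EXPIRE
    // does not accept the NX flag yet:
    ok, err := rdb.ExpireNX(ctx, "key", time.Hour).Result()
    // err: ERR wrong number of arguments for 'expire' command
    _ = ok
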
  • ReadTimeout doesn't seem to work

    The ReadTimeout value used in the FailoverOptions for redis.NewFailoverClient to connect to a Redis Sentinel doesn't seem to have an effect.

    Expected Behavior

    If the read operation is taking more time than the ReadTimeout value, it should return an error.

    Current Behavior

    The ReadTimeout is exceeded, but no error is returned.

    Steps to Reproduce

    1. Connect to a Redis Sentinel with a ReadTimeout value defined. A smaller value makes it easier to reproduce; in the example, I've used 30 microseconds.
    2. Do a read operation which takes more than the ReadTimeout and observe the result

    Code to reproduce:

    package main

    import (
        "fmt"
        "time"

        "github.com/go-redis/redis"
    )

    func main() {
        master := "mymaster"
        SentinelServers := []string{"localhost:26379"}
        Password := ""
        PoolSize := 9
        DB := 11
        client := redis.NewFailoverClient(&redis.FailoverOptions{
            MasterName:    master,
            SentinelAddrs: SentinelServers,
            Password:      Password,
            PoolSize:      PoolSize,
            DB:            DB,
            ReadTimeout:   time.Duration(30) * time.Microsecond,
        })

        res, err := client.Ping().Result()
        if err != nil {
            fmt.Printf("Ping error %s\n", err)
        }
        fmt.Println(res)

        key := "key2"
        value := "val2"
        err = client.Set(key, value, time.Duration(30)*time.Second).Err()
        if err != nil {
            fmt.Printf("Writing error %s\n", err)
        }

        start := time.Now()
        val, err := client.Get(key).Result()
        if err != nil {
            fmt.Printf("Reading error %s\n", err)
            return
        }
        elapsed := time.Since(start)
        fmt.Printf("Cache read in %s\n", elapsed)
        fmt.Println(val)
    }

    Context (Environment)

    There are occasions where the read operations take up to 10s to complete. Our aim is to unblock such operations by using the ReadTimeout value.

    • module version: github.com/go-redis/redis v6.15.9+incompatible
    • redis version: 7.0 (docker.io/bitnami/redis-sentinel:7.0)

  • SSubscribe doesn't return errors (including cross slot errors with multiple keys)

    SSubscribe doesn't return errors, including cross-slot errors when multiple channels are specified that don't map to the same hash slot.

    Expected Behavior

    Expected a multiple-value return that includes an error: func (c *ClusterClient) SSubscribe(ctx context.Context, channels ...string) (*PubSub, error)
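
    An illustrative sketch of the problem (channel names are made up; {a} and {b} map to different hash slots):

    // With the current single-value return, a cross-slot subscription
    // cannot surface an error at call time:
    pubsub := rdb.SSubscribe(ctx, "{a}news", "{b}news")
    defer pubsub.Close()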
