Type-safe Redis client for Golang


❤️ Uptrace.dev - distributed traces, logs, and errors in one place

Ecosystem

Features

Installation

go-redis supports the two most recent Go versions and requires a Go version with module support, so make sure to initialize a Go module:

go mod init github.com/my/repo

And then install go-redis/v8 (note v8 in the import; omitting it is a popular mistake):

go get github.com/go-redis/redis/v8

Quickstart

import (
    "context"
    "fmt"

    "github.com/go-redis/redis/v8"
)

var ctx = context.Background()

func ExampleClient() {
    rdb := redis.NewClient(&redis.Options{
        Addr:     "localhost:6379",
        Password: "", // no password set
        DB:       0,  // use default DB
    })

    err := rdb.Set(ctx, "key", "value", 0).Err()
    if err != nil {
        panic(err)
    }

    val, err := rdb.Get(ctx, "key").Result()
    if err != nil {
        panic(err)
    }
    fmt.Println("key", val)

    val2, err := rdb.Get(ctx, "key2").Result()
    if err == redis.Nil {
        fmt.Println("key2 does not exist")
    } else if err != nil {
        panic(err)
    } else {
        fmt.Println("key2", val2)
    }
    // Output: key value
    // key2 does not exist
}

Look and feel

Some corner cases:

// SET key value EX 10 NX
set, err := rdb.SetNX(ctx, "key", "value", 10*time.Second).Result()

// SET key value keepttl NX
set, err := rdb.SetNX(ctx, "key", "value", redis.KeepTTL).Result()

// SORT list LIMIT 0 2 ASC
vals, err := rdb.Sort(ctx, "list", &redis.Sort{Offset: 0, Count: 2, Order: "ASC"}).Result()

// ZRANGEBYSCORE zset -inf +inf WITHSCORES LIMIT 0 2
vals, err := rdb.ZRangeByScoreWithScores(ctx, "zset", &redis.ZRangeBy{
    Min: "-inf",
    Max: "+inf",
    Offset: 0,
    Count: 2,
}).Result()

// ZINTERSTORE out 2 zset1 zset2 WEIGHTS 2 3 AGGREGATE SUM
vals, err := rdb.ZInterStore(ctx, "out", &redis.ZStore{
    Keys: []string{"zset1", "zset2"},
    Weights: []int64{2, 3},
}).Result()

// EVAL "return {KEYS[1],ARGV[1]}" 1 "key" "hello"
vals, err := rdb.Eval(ctx, "return {KEYS[1],ARGV[1]}", []string{"key"}, "hello").Result()

// custom command
res, err := rdb.Do(ctx, "set", "key", "value").Result()

Run the test

go-redis will start a redis-server and run the test cases.

The paths to the redis-server binary and the Redis config file are defined in main_test.go:

var (
	redisServerBin, _  = filepath.Abs(filepath.Join("testdata", "redis", "src", "redis-server"))
	redisServerConf, _ = filepath.Abs(filepath.Join("testdata", "redis", "redis.conf"))
)

For local testing, you can change the variables to refer to your local files, or create a symbolic link to your redis-server binary in testdata/redis/src/ and copy the config file to testdata/redis/:

ln -s /usr/bin/redis-server ./go-redis/testdata/redis/src
cp ./go-redis/testdata/redis.conf ./go-redis/testdata/redis/

Lastly, run:

go test

See also

Comments
  • undefined: otel.Meter or cannot find package "go.opentelemetry.io/otel/api/trace"


    To fix cannot find package "go.opentelemetry.io/otel/api/trace" or undefined: otel.Meter:

    1. Make sure to initialize a Go module: go mod init github.com/my/repo

    2. Make sure to use correct import path with v8 in the end: go get github.com/go-redis/redis/v8

    For example:

    mkdir /tmp/redis-test
    cd /tmp/redis-test
    go mod init redis-test
    go get github.com/go-redis/redis/v8
    

    The root cause

    The error is not caused by OpenTelemetry. OpenTelemetry is just the first module Go tries to install. And the error will not go away until you start using Go modules properly.

    The presence of $GOROOT or $GOPATH in error messages indicates that you are NOT using Go modules.

  • V8 performance degradation ~20%


    @monkey92t

    Hi, thank you for your tests. I ran your tests in our environment and saw similar comparative results. However, when I slightly modified the tests to reflect our use case more accurately (and how Go's HTTP server spawns a goroutine for each request), the performance suddenly degraded for v8. This is especially evident at 100+ concurrency.

    2 changes that were made:

    1. instead of pre-spawning goroutines that run a fixed number of Get/Set calls in a for loop (this is retained as get2/set2), the test runs through a fixed number of requests and spawns a goroutine for each (up to the concurrency limit) to process them.
    2. each request will generate a random key so the load is spread across the Redis cluster.

    Both v7 and v8 saw a decrease in throughput when comparing pre-spawned goroutines against a goroutine per request. However, the decrease for v7 is very small, as expected, while for v8 it is quite dramatic.

    go-redis version: v7.4.0 and v8.6.0

    redis-cluster (version 5.0.7): master: 84 instances slave: 84 instances

    This is the RedisCluster test result: https://github.com/go-redis/redis/files/6158805/Results.pdf

    This is the test program: https://github.com/go-redis/redis/files/6158824/perftest.go.gz

  • high memory usage + solution


    Hi,

    I noticed that the memory usage was very high in my project. I did memory profiling with inuse_space, and 90% of my memory is used by go-redis in WriteBuffer. If I understand correctly, each connection in the pool has its own WriteBuffer.

    My project runs 80 goroutines (on 8 CPUs), and each goroutine SETs Redis keys. My Redis keys are large: several MB each (less than 100 MB), so it's easy to see why the memory usage is so high.

    I think I have a solution, but it requires changes to go-redis internals: we could use a global sync.Pool of WriteBuffers instead.

    WDYT ?

  • Constantly Reestablishing Connections in Cluster Mode


    Expected Behavior

    Creating a cluster client using pretty much default settings should not overwhelm Redis with a constant barrage of new connections.

    redis.NewClusterClient(&redis.ClusterOptions{
        Addrs: []string{redisAddr},
        TLSConfig: &tls.Config{},
    })
    

    Current Behavior

    Occasionally, at times completely unrelated to system load/traffic, we are seeing connections being constantly re-established to one of the cluster nodes in our Redis cluster. We are using ElastiCache Redis in cluster mode with TLS enabled, and there seems to be no trigger we can find for this behavior. We also do not see any relevant logs in our service's systemd output in journalctl, other than

    redis_writer:85 {}        Error with write attempt: context deadline exceeded
    

    which seems more like a symptom of an overloaded Redis cluster node rather than a cause.

    When this issue happens, running CLIENT LIST on the affected Redis node shows age=0 or age=1 for all connections every time, which reinforces that connections are being dropped constantly for some reason. New connections plummet on other shards in the Redis cluster, and are all concentrated on one.

    New Connections (Cloudwatch)

    NewConnections

    Current Connections (Cloudwatch)

    CurrConnections

    In the example Cloudwatch graphs above we can also see that the issue can move between Redis cluster shards. As you can see, we're currently running with a 4-shard cluster, where each shard has 1 replica.

    Restarting our service does not address this problem, and to address it we basically need to do a hard reset (completely stop the clients for a while, then start them up again).

    We've reached out to AWS support and they have found no issues with our ElastiCache Redis cluster on their end. Additionally, there are no ElastiCache events happening at the time this issue is triggered.

    Possible Solution

    In this issue I'm mainly hoping to get insight into how I could better troubleshoot this issue and/or if there are additional client options we can use to try and mitigate this worst case scenario (i.e. rate limiting the creation of new connections in the cluster client) in absence of a root-cause fix.

    My main questions are:

    1. Is there a way for me to gather more data that would be helpful for the Redis/go-redis experts here?
    2. Is there a way for us to rate-limit the creation of new connections in the ClusterClient to keep things from getting too out of control if this does continue to occur?
    3. Has anyone else encountered a similar issue with Cluster mode, whether or not it was with ElastiCache Redis?

    Steps to Reproduce

    The description of our environment/service implementation below, as well as the snippet of our NewClusterClient call at the beginning of this issue, provide a fairly complete summary of how we're using both go-redis and ElastiCache Redis. We've not been able to consistently trigger this issue since it often happens when we're not load testing, and are mainly looking for answers for some of our questions above.

    Context (Environment)

    We're running a service that has a simple algorithm for claiming work from a Redis set, doing something with it, and then cleaning it up from Redis. In a nutshell, the algorithm is as follows:

    • SRANDMEMBER pending 10 - grab up to 10 random items from the pool of available work
    • ZADD in_progress <current_timestamp> <grabbed_item> for each of our items we got in the previous step
    • Any work items we weren't able to ZADD have been claimed by some other instance of the service, skip them
    • Once we're done with a work item, SREM pending <grabbed_item>
    • Periodically ZREMRANGEBYSCORE in_progress -inf <5_seconds_ago> so that claimed items aren't claimed forever

    Currently we run this algorithm on 6 EC2 instances, each running one service. Since each instance has 4 CPU cores, go-redis is calculating a max connection pool size of 20 for our ClusterClient. Each service has 20 goroutines performing this algorithm, and each goroutine sleeps 10ms between each invocation of the algorithm.

    At a steady state with no load on the system (just a handful of heartbeat jobs being added to pending every minute) we see a maximum of ~8% EngineCPUUtilization on each Redis shard, and 1-5 new connections/minute. Overall, pretty relaxed. When this issue has triggered recently, it's happened from this steady state, not during load tests.

    Our service is running on EC2 instances running Ubuntu 18.04 (Bionic), and we have tried using github.com/go-redis/redis/v8 v8.0.0 and github.com/go-redis/redis/v8 v8.11.2 - both have run into this issue.

    As mentioned earlier, we're currently running with a 4-shard ElastiCache Redis cluster with TLS enabled, where each shard has 1 replica.

    Detailed Description

    N/A

    Possible Implementation

    N/A

  • Add redis.Scan() to scan results from redis maps into structs.


    The package uses reflection to decode default types (int, string etc.) from Redis map results (key-value pair sequences) into struct fields where the fields are matched to Redis keys by tags.

    Similar to how encoding/json allows custom decoders via UnmarshalJSON(), the package supports decoding arbitrary types into struct fields by defining a Decode(string) error function on those types.

    The field/type spec of every struct that's passed to Scan() is cached in the package so that subsequent scans avoid iteration and reflection of the struct's fields.

    Issue: https://github.com/go-redis/redis/issues/1603

  • hscan adds support for i386 platform


    set: GOARCH=386

    redis 127.0.0.1:6379> set a 100
    redis 127.0.0.1:6379> set b 123456789123456789
    
    type Demo struct {
        A int8 `redis:"a"`
        B int64 `redis:"b"`
    }
    
    client := redis.NewClient(&Options{
    	Network: "tcp",
    	Addr:    "127.0.0.1:6379",
    })
    ctx := context.Background()
    d := &Demo{}
    err := client.MGet(ctx, "a", "b").Scan(d)
    t.Log(d, err)
    

    it should run normally on the i386 platform, and there should not be an error such as: strconv.ParseInt: parsing "123456789123456789": value out of range
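The rule behind the fix can be shown without Redis at all: the bitSize passed to strconv.ParseInt must come from the destination field's type (int64 here), not from the platform's word size. The helper name is illustrative:

```go
package main

import (
	"fmt"
	"strconv"
)

// parseField mimics the fix: bits must come from the destination field's
// type, or an int64 field overflows on a 32-bit (GOARCH=386) build.
func parseField(s string, bits int) (int64, error) {
	return strconv.ParseInt(s, 10, bits)
}

func demo() {
	v := "123456789123456789"
	if _, err := parseField(v, 32); err != nil {
		fmt.Println("32-bit parse fails:", err) // what a 386 build effectively did
	}
	if n, err := parseField(v, 64); err == nil {
		fmt.Println("64-bit parse:", n) // correct for an int64 struct field
	}
}
```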

  • Add Limiter interface


    This is an alternative to https://github.com/go-redis/redis/pull/874. Basically it defines a rate limiter interface which allows implementing different limiting strategies in separate packages.

    @xianglinghui what do you think? Is the provided API enough to cover your needs? I am aware that code like https://github.com/go-redis/redis/blob/master/ring.go#L618-L621 requires some work in go-redis, but other than that it seems to be enough.
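The interface shape proposed here (and, as far as I can tell, the one that later shipped in go-redis) is two methods: Allow gates a command before it runs, and ReportResult feeds the outcome back so strategies like circuit breaking can adapt. Below is one possible strategy behind it, a cap on in-flight commands; the implementation is purely illustrative:

```go
package main

import "errors"

// Limiter mirrors the proposed interface.
type Limiter interface {
	Allow() error             // called before a command runs
	ReportResult(result error) // called with the command's outcome
}

var errTooBusy = errors.New("redis: too many in-flight commands")

// maxInflight rejects commands once n are already in flight.
type maxInflight struct {
	slots chan struct{}
}

func newMaxInflight(n int) *maxInflight {
	return &maxInflight{slots: make(chan struct{}, n)}
}

func (l *maxInflight) Allow() error {
	select {
	case l.slots <- struct{}{}:
		return nil
	default:
		return errTooBusy
	}
}

// ReportResult releases the slot regardless of the command's outcome.
func (l *maxInflight) ReportResult(error) { <-l.slots }
```

Because the interface is so small, rate limiting, circuit breakers, and load shedding can all live in separate packages without go-redis knowing about any of them.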

  • dial tcp: i/o timeout


    I am using go-redis version v6.14.2. My application runs in an AWS cluster behind a load balancer. All Redis requests failed on one of the nodes in the cluster; the rest of the nodes were working as expected. The application started working properly after a restart. We are using ElastiCache. Can you please help me identify the issue? If it is a previously known issue that is solved in the latest version, can you point me to the relevant link?

    The error was "dial tcp: i/o timeout".

    Below is my cluster configuration excluding redis host address and password:

    • ReadOnly : true
    • RouteByLatency : true
    • RouteRandomly : true
    • DialTimeout : 300ms
    • ReadTimeout : 30s
    • Write Timeout : 30s
    • PoolSize : 12000
    • PoolTimeout : 32
    • IdleTimeout : 120s
    • IdleCheckFrequency : 1s
    import (
    	goRedisClient "github.com/go-redis/redis"
    )
    
    func GetRedisClient() *goRedisClient.ClusterClient {
    	clusterClientOnce.Do(func() {
    		redisClusterClient = goRedisClient.NewClusterClient(
    			&goRedisClient.ClusterOptions{
    				Addrs:          viper.GetStringSlice("redis.hosts"),
    				ReadOnly:       true,
    				RouteByLatency: true,
    				RouteRandomly:  true,
    				Password:       viper.GetString("redis.password"),
    
    				DialTimeout:  viper.GetDuration("redis.dial_timeout"),
    				ReadTimeout:  viper.GetDuration("redis.read_timeout"),
    				WriteTimeout: viper.GetDuration("redis.write_timeout"),
    
    				PoolSize:           viper.GetInt("redis.max_active_connections"),
    				PoolTimeout:        viper.GetDuration("redis.pool_timeout"),
    				IdleTimeout:        viper.GetDuration("redis.idle_connection_timeout"),
    				IdleCheckFrequency: viper.GetDuration("redis.idle_check_frequency"),
    			},
    		)
    
    		if err := redisClusterClient.Ping().Err(); err != nil {
    			log.WithError(err).Error(errorCreatingRedisClusterClient)
    		}
    	})
    	return redisClusterClient
    }
    

    As suggested in the comments on https://github.com/go-redis/redis/issues/1194, I wrote the following snippet to dial and test node health for each slot. There were no errors. As mentioned, it happens randomly on one of the clients, not always. It happened again after 3-4 months, and it is always fixed by a restart.

    func CheckRedisSlotConnection(testCase string) {
    	fmt.Println(viper.GetStringSlice("redis.hosts"))
    	fmt.Println("Checking testcase " + testCase)
    	client := redis.GetRedisClient()
    	slots := client.ClusterSlots().Val()
    	addresses := []string{}
    	for _, slot := range slots {
    		for _, node := range slot.Nodes {
    			addresses = append(addresses, node.Addr)
    		}
    	}
    	fmt.Println("Received " + strconv.Itoa(len(addresses)) + " Slots")
    	for _, address := range addresses {
    		fmt.Println("Testing address " + address)
    		conn, err := net.DialTimeout("tcp", address, 500*time.Millisecond)
    		if err != nil {
    			fmt.Println("Error dialing to address " + address + " Error " + err.Error())
    			continue
    		}
    		fmt.Println("Successfully dialled to address " + address)
    		err = conn.Close()
    		if err != nil {
    			fmt.Println("Error closing connection " + err.Error())
    			continue
    		}
    	}
    }
    
  • Attempt to cleanup cluster logic.


    @dim I tried to refactor the code a bit to learn more about Redis cluster support. Changes:

    • NewClusterClient no longer returns an error, because NewClient does not either. I personally think an app can't do anything useful except exit when NewClusterClient returns an error, so a panic should be a good alternative.
    • ClusterClient.process now tries the next available replica before falling back to randomClient. I am not sure this change is correct, but I hope so :)
    • randomClient is completely rewritten so that it does not allocate a seen map[string]struct{}{} on every request. It also checks that the node is online before returning.
  • How to implement periodic refresh topology


    My Redis cluster runs on top of Kubernetes, so sometimes I may move the entire cluster to another set of nodes, where all the pods change IP addresses. So my go-redis client needs to refresh the topology from time to time. Is there a config option to do that, or do I need to send a CLUSTER NODES command periodically?

  • redis: can't parse

    redis: can't parse "ype\":\"PerfdataValue\",\"unit\":\"\",\"value\":0.0,\"warn\":null}],\"status\":{\"checkercomponent\":{\"checker\":{\"i"

    We at @Icinga are developing two applications; one writes to Redis (and publishes events) and the other reads (and subscribes to the events).

    The writer periodically PUBLISHes data like...

    {"ApiListener":{"perfdata":[{"counter":false,"crit":null,"label":"api_num_conn_endpoints","max":null,"min":null,"type":"PerfdataValue","unit":"","value":0.0,"warn":null},{"counter":false,"crit":null,"label":"api_num_endpoints","max":null,"min":null,"type":"PerfdataValue","unit":"","value":0.0,"warn":null},{"counter":false,"crit":null,"label":"api_num_http_clients","max":null,"min":null,"type":"PerfdataValue","unit":"","value":0.0,"warn":null},{"counter":false,"crit":null,"label":"api_num_json_rpc_clients","max":null,"min":null,"type":"PerfdataValue","unit":"","value":0.0,"warn":null},{"counter":false,"crit":null,"label":"api_num_json_rpc_relay_queue_item_rate","max":null,"min":null,"type":"PerfdataValue","unit":"","value":46.399999999999998579,"warn":null},{"counter":false,"crit":null,"label":"api_num_json_rpc_relay_queue_items","max":null,"min":null,"type":"PerfdataValue","unit":"","value":0.0,"warn":null},{"counter":false,"crit":null,"label":"api_num_json_rpc_sync_queue_item_rate","max":null,"min":null,"type":"PerfdataValue","unit":"","value":0.0,"warn":null},{"counter":false,"crit":null,"label":"api_num_json_rpc_sync_queue_items","max":null,"min":null,"type":"PerfdataValue","unit":"","value":0.0,"warn":null},{"counter":false,"crit":null,"label":"api_num_json_rpc_work_queue_count","max":null,"min":null,"type":"PerfdataValue","unit":"","value":0.0,"warn":null},{"counter":false,"crit":null,"label":"api_num_json_rpc_work_queue_item_rate","max":null,"min":null,"type":"PerfdataValue","unit":"","value":0.0,"warn":null},{"counter":false,"crit":null,"label":"api_num_json_rpc_work_queue_items","max":null,"min":null,"type":"PerfdataValue","unit":"","value":0.0,"warn":null},{"counter":false,"crit":null,"label":"api_num_not_conn_endpoints","max":null,"min":null,"type":"PerfdataValue","unit":"","value":0.0,"warn":null}],"status":{"api":{"conn_endpoints":[],"http":{"clients":0.0},"identity":"CENSOREDCENSOREDCENSOREDCENSO","json_rpc":{"clients":0.0,"relay_queue_item_rate":46
.399999999999998579,"relay_queue_items":0.0,"sync_queue_item_rate":0.0,"sync_queue_items":0.0,"work_queue_count":0.0,"work_queue_item_rate":0.0,"work_queue_items":0.0},"not_conn_endpoints":[],"num_conn_endpoints":0.0,"num_endpoints":0.0,"num_not_conn_endpoints":0.0,"zones":{"alexanders-mbp.int.netways.de":{"client_log_lag":0.0,"connected":true,"endpoints":["alexanders-mbp.int.netways.de"],"parent_zone":""}}}}},"CIB":{"perfdata":[],"status":{"active_host_checks":1.8500000000000000888,"active_host_checks_15min":1649.0,"active_host_checks_1min":111.0,"active_host_checks_5min":562.0,"active_service_checks":21.350000000000001421,"active_service_checks_15min":19280.0,"active_service_checks_1min":1281.0,"active_service_checks_5min":6399.0,"avg_execution_time":0.021172960599263507958,"avg_latency":0.011358479658762613354,"max_execution_time":0.077728986740112304688,"max_latency":0.045314073562622070312,"min_execution_time":0.001573085784912109375,"min_latency":0.0,"num_hosts_acknowledged":0.0,"num_hosts_down":1.0,"num_hosts_flapping":0.0,"num_hosts_in_downtime":0.0,"num_hosts_pending":0.0,"num_hosts_unreachable":0.0,"num_hosts_up":0.0,"num_services_acknowledged":0.0,"num_services_critical":3.0,"num_services_flapping":0.0,"num_services_in_downtime":0.0,"num_services_ok":4.0,"num_services_pending":0.0,"num_services_unknown":3.0,"num_services_unreachable":12.0,"num_services_warning":2.0,"passive_host_checks":0.0,"passive_host_checks_15min":0.0,"passive_host_checks_1min":0.0,"passive_host_checks_5min":0.0,"passive_service_checks":0.0,"passive_service_checks_15min":0.0,"passive_service_checks_1min":0.0,"passive_service_checks_5min":0.0,"remote_check_queue":0.0,"uptime":18855.292195796966553}},"CheckResultReader":{"perfdata":[],"status":{"checkresultreader":{}}},"CheckerComponent":{"perfdata":[{"counter":false,"crit":null,"label":"checkercomponent_checker_idle","max":null,"min":null,"type":"PerfdataValue","unit":"","value":13.0,"warn":null},{"counter":false,"crit":null,"label":"c
heckercomponent_checker_pending","max":null,"min":null,"type":"PerfdataValue","unit":"","value":0.0,"warn":null}],"status":{"checkercomponent":{"checker":{"idle":13.0,"pending":0.0}}}},"CompatLogger":{"perfdata":[],"status":{"compatlogger":{}}},"ElasticsearchWriter":{"perfdata":[],"status":{"elasticsearchwriter":{}}},"ExternalCommandListener":{"perfdata":[],"status":{"externalcommandlistener":{}}},"FileLogger":{"perfdata":[],"status":{"filelogger":{"main-log":1.0}}},"GelfWriter":{"perfdata":[],"status":{"gelfwriter":{}}},"GraphiteWriter":{"perfdata":[],"status":{"graphitewriter":{}}},"IcingaApplication":{"perfdata":[],"status":{"icingaapplication":{"app":{"enable_event_handlers":true,"enable_flapping":true,"enable_host_checks":true,"enable_notifications":true,"enable_perfdata":true,"enable_service_checks":true,"environment":"production","node_name":"alexanders-mbp.int.netways.de","pid":7700.0,"program_start":1531475256.183437109,"version":"v2.8.4-779-g45b3429fa"}}}},"InfluxdbWriter":{"perfdata":[],"status":{"influxdbwriter":{}}},"LivestatusListener":{"perfdata":[],"status":{"livestatuslistener":{}}},"NotificationComponent":{"perfdata":[],"status":{"notificationcomponent":{"notification":1.0}}},"OpenTsdbWriter":{"perfdata":[],"status":{"opentsdbwriter":{}}},"PerfdataWriter":{"perfdata":[],"status":{"perfdatawriter":{}}},"StatusDataWriter":{"perfdata":[],"status":{"statusdatawriter":{}}},"SyslogLogger":{"perfdata":[],"status":{"sysloglogger":{}}}}
    

    ... and the reader consumes that using this library.

    Wireshark shows nothing special, just these messages and some PINGs, but after a while the reader hits internal/proto/reader.go:106 with line being ...

    ype":"PerfdataValue","unit":"","value":0.0,"warn":null}],"status":{"checkercomponent":{"checker":{"idle":13.0,"pending":0.0}}}},"CompatLogger":{"perfdata":[],"status":{"compatlogger":{}}},"ElasticsearchWriter":{"perfdata":[],"status":{"elasticsearchwriter":{}}},"ExternalCommandListener":{"perfdata":[],"status":{"externalcommandlistener":{}}},"FileLogger":{"perfdata":[],"status":{"filelogger":{"main-log":1.0}}},"GelfWriter":{"perfdata":[],"status":{"gelfwriter":{}}},"GraphiteWriter":{"perfdata":[],"status":{"graphitewriter":{}}},"IcingaApplication":{"perfdata":[],"status":{"icingaapplication":{"app":{"enable_event_handlers":true,"enable_flapping":true,"enable_host_checks":true,"enable_notifications":true,"enable_perfdata":true,"enable_service_checks":true,"environment":"production","node_name":"CENSOREDCENSOREDCENSOREDCENSO","pid":7700.0,"program_start":1531475256.183437109,"version":"v2.8.4-779-g45b3429fa"}}}},"InfluxdbWriter":{"perfdata":[],"status":{"influxdbwriter":{}}},"LivestatusListener":{"perfdata":[],"status":{"livestatuslistener":{}}},"NotificationComponent":{"perfdata":[],"status":{"notificationcomponent":{"notification":1.0}}},"OpenTsdbWriter":{"perfdata":[],"status":{"opentsdbwriter":{}}},"PerfdataWriter":{"perfdata":[],"status":{"perfdatawriter":{}}},"StatusDataWriter":{"perfdata":[],"status":{"statusdatawriter":{}}},"SyslogLogger":{"perfdata":[],"status":{"sysloglogger":{}}}}
    
  • redis: can't marshal map[string]main.student (implement encoding.BinaryMarshaler)



    Expected Behavior

    I hope HSet can support storing a map[string]struct.

    Current Behavior

    redis: can't marshal map[string]main.student (implement encoding.BinaryMarshaler)

    Steps to Reproduce

    type student struct {
    	Name string
    	Age  int
    }
    
    func (s student) MarshalBinary() (data []byte, err error) {
    	data, err = json.Marshal(&s)
    	return
    }
    
    func main() {
    	rdb := redis.NewClient(&redis.Options{
    		Addr:     "localhost:6379",
    		Password: "2000.redis",
    	})
    	m := make(map[string]student)
    	m["1"] = student{"sxy", 18}
    	m["2"] = student{"szw", 0}
    	ctx := context.Background()
    	cmd := rdb.HSet(ctx, "student", m)
    	fmt.Println(cmd.Err())
    	x := rdb.HGet(ctx, "student", "1")
    	fmt.Println(x)
    }

  • "ClusterClient.Ping().Result() error: got 4 elements in cluster info address, expected 2 or 3" when used for accessing redis cluster


    go-redis reports an error when it is used to access a Redis cluster

    Expected Behavior

    go-redis/redis should work well with Redis cluster

    Current Behavior

    go-redis/redis ClusterClient.Ping().Result() reports the error "got 4 elements in cluster info address, expected 2 or 3" (my Redis cluster has six nodes)

    Possible Solution

    Steps to Reproduce

    1. I built my Redis cluster with Docker, and the Redis version is
    "GOSU_VERSION=1.14",
    "REDIS_VERSION=7.0.0",
    "REDIS_DOWNLOAD_URL=http://download.redis.io/releases/redis-7.0.0.tar.gz",
    

    and my go version is

    # go version
    go version go1.18.1 linux/amd64
    
    2. I am sure my Redis cluster works well; for example, I ran a test with redis-cli as follows
    [email protected]:/home/root/code/redis-cluster/cmd# docker run -it --rm redis redis-cli -c -h 192.168.0.28 -p 6700 -a 123456
    Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
    192.168.0.28:6700> cluster nodes
    27bfce46e99d870f7ca4ae5c51df30737581e0cf 192.168.0.180:[email protected] master - 0 1651637749193 7 connected 0-5460
    d93f9f77dc49e7177f6cadf6f32cd27e1ed4ead0 192.168.0.28:[email protected] slave 27139ca982574acfb125a9d0421d99c47789f7c4 0 1651637750196 3 connected
    c3c553a61c78fcdfd07ba6c72402f1d0a23bc006 192.168.0.141:[email protected] slave f366d818f6c5cdf4ab7843ca8a14f39d58e0fa9d 0 1651637748189 2 connected
    27139ca982574acfb125a9d0421d99c47789f7c4 192.168.0.141:[email protected] master - 0 1651637748000 3 connected 10923-16383
    fab30d84431b0e7cebdf0ed257b99e3192ba2d8d 192.168.0.28:[email protected] myself,slave 27bfce46e99d870f7ca4ae5c51df30737581e0cf 0 1651637749000 7 connected
    f366d818f6c5cdf4ab7843ca8a14f39d58e0fa9d 192.168.0.180:[email protected] master - 0 1651637751199 2 connected 5461-10922
    192.168.0.28:6700> cluster info
    cluster_state:ok
    cluster_slots_assigned:16384
    cluster_slots_ok:16384
    cluster_slots_pfail:0
    cluster_slots_fail:0
    cluster_known_nodes:6
    cluster_size:3
    cluster_current_epoch:7
    cluster_my_epoch:7
    cluster_stats_messages_ping_sent:45321
    cluster_stats_messages_pong_sent:45740
    cluster_stats_messages_sent:91061
    cluster_stats_messages_ping_received:45740
    cluster_stats_messages_pong_received:45321
    cluster_stats_messages_update_received:5
    cluster_stats_messages_received:91066
    total_cluster_links_buffer_limit_exceeded:0
    192.168.0.28:6700>
    192.168.0.28:6700> keys *
    1) "add"
    2) "abb"
    3) "bbb"
    4) "kkkkkk"
    5) "age"
    6) "aac"
    192.168.0.28:6700>
    

    When I set or get keys with redis-cli, it works well.

    3. Then I tested with a simple Go program, as follows

    package main
    
    import (
            "context"
            "encoding/json"
            "fmt"
            "github.com/go-redis/redis/v8"
            "time"
    )
    
    func main() {
            rdb := redis.NewClusterClient(&redis.ClusterOptions{
                    Password: "123456",
                    Addrs: []string{"192.168.0.28:6700", "192.168.0.180:6701", "192.168.0.141:6702", "192.168.0.28:6900", "192.168.0.180:6901", "192.168.0.141:6902"},
            })
    
            _, err := rdb.Ping(context.Background()).Result()
            if err != nil {
                    panic(err)
            }
    
            err = testString(rdb)
            if err != nil {
                    panic(err)
            }
    }
    
    func testString(client *redis.ClusterClient) error {
            key := "mykey"
            err := client.Set(context.Background(), key, "myvalue", time.Minute).Err()
            if err != nil {
                    fmt.Println("set err", err)
                    return err
            }
    
            fmt.Println("--------------------------------------")
            for i := 0; i < 6; i++ {
                    keyCnt, err := client.Exists(context.Background(), key).Result()
                    if err != nil {
                            fmt.Println("client.Exists error: ", err)
                    }
                    fmt.Println("client.Exists val: ", keyCnt)
    
                    val, err := client.Get(context.Background(), key).Result()
                    if err != nil {
                            if err == redis.Nil { // key does not exist
                                    fmt.Println("key not exist")
                            } else {
                                    fmt.Println("Get err", err)
                                    return err
                            }
                    } else {
                            fmt.Printf("key: %v, value:%v\n", key, val)
                    }
    
                    time.Sleep(15 * time.Second)
            }
    
            return nil
    }
    

    Building and running this program reports an error, as follows

    # ./main
    panic: got 4 elements in cluster info address, expected 2 or 3
    
    goroutine 1 [running]:
    main.main()
            /home/root/code/redis-cluster/cmd/main.go:19 +0x171
    

    I think this is a bug, please check and fix it. Thank you very much!


  • chore: arrange the arguments 'start' and 'stop' of 'XRevRange(N)' in the way as the order of the redis command.


    In the redis command 'XREVRANGE', 'end' is followed by 'start'.

    https://redis.io/commands/xrevrange/ XREVRANGE key end start [COUNT count]

    In go-redis's 'XRevRange' and 'XRevRangeN', 'start' is followed by 'stop':

    XRevRange(stream, start, stop string) XRevRangeN(stream, start, stop string, count int64)

    There is no problem with the program's behavior, but it might be confusing. So this change reorders the arguments of 'XRevRange' and 'XRevRangeN' so that 'stop' comes before 'start', matching the command.

  • Support Redis 7.0.0


    Version 7.0.0 of Redis was recently released, and contains a variety of backwards-incompatible changes.

    One such example is that version 7.0.0 has extended the number of fields in the "CLUSTER SLOTS" command response, by adding a new "hostname" field. The current code in this library returns an error when it encounters such a response, since it expects only 2 or 3 elements (the IP, Port, and optionally the ID).

    Please work towards support for Version 7.0.0 by resolving this and other incompatibilities.
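
    The CLUSTER SLOTS incompatibility above boils down to a parsing question. Below is a minimal, hypothetical sketch (parseClusterNode and clusterNode are illustrative names, not the actual go-redis internals) of parsing that tolerates the extra Redis 7.0 metadata element instead of erroring:

    ```go
    package main

    import "fmt"

    // clusterNode holds the fields a CLUSTER SLOTS node entry can carry.
    type clusterNode struct {
        Addr string
        ID   string
    }

    // parseClusterNode accepts 2, 3, or more elements per node entry.
    // Redis 7.0 appends a metadata map (e.g. "hostname") as a fourth
    // element, so trailing elements are ignored rather than rejected.
    func parseClusterNode(fields []interface{}) (clusterNode, error) {
        if len(fields) < 2 {
            return clusterNode{}, fmt.Errorf("got %d elements in cluster info address, expected at least 2", len(fields))
        }
        node := clusterNode{
            Addr: fmt.Sprintf("%v:%v", fields[0], fields[1]),
        }
        if len(fields) >= 3 {
            node.ID, _ = fields[2].(string)
        }
        // Any further elements (Redis 7.0 networking metadata) are ignored.
        return node, nil
    }

    func main() {
        // A Redis 7.0-style reply: ip, port, id, metadata.
        fields := []interface{}{"10.0.0.1", int64(6379), "abc123", []interface{}{"hostname", "node-1"}}
        node, err := parseClusterNode(fields)
        if err != nil {
            panic(err)
        }
        fmt.Println(node.Addr, node.ID) // 10.0.0.1:6379 abc123
    }
    ```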

  • add/remove shards in redis ring

    add/remove shards in redis ring

    We use the redis ring client to shard access to redis for our rate-limit infrastructure in Kubernetes https://opensource.zalando.com/skipper/tutorials/ratelimit/#redis-based-cluster-ratelimits. I would like to add and remove shards on demand while Kubernetes scales redis instances out. I tried to implement it by closing and recreating the redis ring, but I think it would be better (fewer locks required) to trigger it via a library call.

    One idea I had was a func() []string that is called on a configurable time.Duration via a time.Ticker to set the Members and propagate them into the library's ringShards. Or we could do the triggering ourselves and the library just provides ReconfigureShards(shards []string). Do you have a better idea how to make this happen?

    I am willing to create a PR if it makes sense for you.
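
    The membership change behind such a call could be computed as a plain set diff. This is an illustrative sketch only: diffShards is a hypothetical helper, and ReconfigureShards is the proposed API, not an existing go-redis function.

    ```go
    package main

    import "fmt"

    // diffShards computes which shard addresses must be added and removed
    // to move from the current member set to a desired one. This is the
    // core of the periodic func() []string + time.Ticker idea: on each
    // tick, diff the new list against the ring's members and apply only
    // the changes.
    func diffShards(current, desired []string) (added, removed []string) {
        cur := make(map[string]bool, len(current))
        for _, a := range current {
            cur[a] = true
        }
        want := make(map[string]bool, len(desired))
        for _, a := range desired {
            want[a] = true
            if !cur[a] {
                added = append(added, a)
            }
        }
        for _, a := range current {
            if !want[a] {
                removed = append(removed, a)
            }
        }
        return added, removed
    }

    func main() {
        added, removed := diffShards(
            []string{"redis-0:6379", "redis-1:6379"},
            []string{"redis-1:6379", "redis-2:6379"},
        )
        fmt.Println(added, removed) // [redis-2:6379] [redis-0:6379]
    }
    ```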

Golang client for redislabs' ReJSON module with support for multiple redis clients (redigo, go-redis)

Go-ReJSON - a golang client for ReJSON (a JSON data type for Redis) Go-ReJSON is a Go client for ReJSON Redis Module. ReJSON is a Redis module that im

May 11, 2022
Redis client Mock Provide mock test for redis query

Redis client Mock Provide mock test for redis query, Compatible with github.com/go-redis/redis/v8 Install Confirm that you are using redis.Client the

May 10, 2022
GoBigdis is a persistent database that implements the Redis server protocol. Any Redis client can interface with it and start to use it right away.

GoBigdis GoBigdis is a persistent database that implements the Redis server protocol. Any Redis client can interface with it and start to use it right

Apr 27, 2022
Bxd redis benchmark - Redis benchmark tool for golang

Use the redis benchmark tool to test redis GET/SET performance with value sizes of 10, 20, 50, 100, 200, 1k, and 5k bytes.

Jan 22, 2022
redis client implemented in golang, inspired by jedis.

godis, a redis client implemented in golang, inspired by jedis. This library implements most redis commands, including normal redis commands, cluster commands,

Apr 25, 2022
Redis client for Golang

Redis client for Golang To ask questions, join Discord or use Discussions. Newsl

Dec 23, 2021
Redis client for Golang

Redis client for Golang Discussions. Newsletter to get latest updates. Documentation Reference Examples RealWorld example app Other projects you may l

Dec 30, 2021
Go client for Redis

Redigo Redigo is a Go client for the Redis database. Features A Print-like API with support for all Redis commands. Pipelining, including pipelined tr

May 14, 2022
Go Redis Client

xredis Built on top of github.com/garyburd/redigo with the idea to simplify creating a Redis client, provide type safe calls and encapsulate the low l

Jan 23, 2022
godis - an old Redis client for Go

godis Implements a few database clients for Redis. There is a stable client and an experimental client, redis and exp, respectively. To use any of the

Apr 16, 2022
Google Go Client and Connectors for Redis

Go-Redis Go Clients and Connectors for Redis. The initial release provides the interface and implementation supporting the (~) full set of current Red

Apr 16, 2022
Redis client library for Go

go-redis go-redis is a Redis client library for the Go programming language. It's built on the skeleton of gomemcache. It is safe to use by multiple g

Jul 15, 2020
Redisx: a library of Go utilities built on the redigo redis client library

redisx redisx is a library of Go utilities built on the redigo redis client libr

Dec 24, 2021
A Golang implemented Redis Server and Cluster.

Godis is a golang implementation of Redis Server, which intends to provide an example of writing highly concurrent middleware using golang.

May 7, 2022
A golang tool to view Redis data in terminal

Redis Viewer A tool to view Redis data in terminal. Usage: KeyBoard Description ctrl+c exit redis viewer ↑ previous key ↓ next key ← previous page → n

May 12, 2022
High-performance framework for building redis-protocol compatible TCP servers/services

Redeo The high-performance Swiss Army Knife for building redis-protocol compatible servers/services. Parts This repository is organised into multiple

May 7, 2022
Simple key-value store abstraction and implementations for Go (Redis, Consul, etcd, bbolt, BadgerDB, LevelDB, Memcached, DynamoDB, S3, PostgreSQL, MongoDB, CockroachDB and many more)

gokv Simple key-value store abstraction and implementations for Go Contents Features Simple interface Implementations Value types Marshal formats Road

May 11, 2022
Redis Sorted Sets Benchmark

redis-zbench-go Redis Sorted Sets Benchmark Overview This repo contains code to trigger load ( ZADD ) or query (ZRANGEBYLEX key min max) benchmarks, w

May 18, 2021
Use Redis' MONITOR to draw things in a terminal

Redis Top Redistop uses MONITOR to watch Redis commands and shows per command and per host statistics. Because MONITOR streams back all commands, its

Mar 22, 2022