Redis-shake is a tool for synchronizing data between two Redis databases, designed to meet flexible synchronization and migration needs.

RedisShake is mainly used to synchronize data from one Redis instance to another.
Thanks to Douyu's WSD team for their support.

Redis-Shake


Redis-shake is developed and maintained by the NoSQL team of the Alibaba Cloud Database department.
Redis-shake builds on redis-port, with bug fixes, performance improvements, and feature enhancements.

Main Functions


The type can be one of the following:

  • decode: Decode a dumped payload to a human-readable format (hex encoding).
  • restore: Restore an RDB file to the target redis.
  • dump: Dump an RDB file from the source redis.
  • sync: Sync data from the source redis to the target redis using the sync or psync command, covering both full and incremental synchronization.
  • rump: Sync data from the source redis to the target redis using the scan command. Only full synchronization is supported. In addition, RedisShake can fetch data for keys listed in an input file when the scan command is unavailable on the source side. This mode is usually used when the sync and psync commands aren't supported.
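A run mode like the ones listed above is typically selected with a -type flag. Below is a minimal sketch of validating such a flag with the standard flag package; the function and flag names are illustrative, not RedisShake's actual internals:

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

// validModes mirrors the run types listed above.
var validModes = map[string]bool{
	"decode": true, "restore": true, "dump": true, "sync": true, "rump": true,
}

// parseMode parses a -type flag from args and rejects unknown modes.
func parseMode(args []string) (string, error) {
	fs := flag.NewFlagSet("redis-shake", flag.ContinueOnError)
	mode := fs.String("type", "sync", "run mode: decode|restore|dump|sync|rump")
	if err := fs.Parse(args); err != nil {
		return "", err
	}
	if !validModes[*mode] {
		return "", fmt.Errorf("unknown type %q", *mode)
	}
	return *mode, nil
}

func main() {
	mode, err := parseMode(os.Args[1:])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("running in mode:", mode)
}
```

The default of sync here is an assumption for the sketch, chosen because sync is the mode used in the start.sh example elsewhere in this document.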

Please check conf/redis-shake.conf for a detailed description of the parameters.

Support


Supports Redis versions from 2.x to 5.0, in Standalone and Cluster deployments, as well as some proxy types such as Codis, twemproxy, Aliyun Cluster Proxy, and Tencent Cloud Proxy.
For Codis and twemproxy there may be some constraints; please check out this question.

Configuration

Redis-shake has several parameters in the configuration file (conf/redis-shake.conf) that may be confusing. If this is your first time using it, please visit this tutorial.

Verification


Users can use RedisFullCheck to verify correctness.

Metric


Redis-shake exposes metrics through a RESTful API and the log file.

  • restful api: curl 127.0.0.1:9320/metric.
  • log: the metric info is printed in the log periodically if enabled.
  • inner goroutine stacks: curl http://127.0.0.1:9310/debug/pprof/goroutine?debug=2

Redis Type


Both the source and target can be a standalone instance, an open-source cluster, or a proxy. Although proxy architectures differ across vendors, we still support several cloud vendors such as alibaba-cloud, tencent-cloud, and so on.
If the target is an open-source redis cluster, redis-shake uses the redis-go-cluster driver to write data. When the target type is proxy, redis-shake writes data in a round-robin fashion.
If the source is a redis cluster, redis-shake launches multiple goroutines to pull data in parallel. Users can use rdb.parallel to control the RDB syncing concurrency.
"Move slot" operations must be disabled on the source side.
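The round-robin proxy write path described above can be sketched as follows; this is an illustrative sketch with made-up endpoint names, not RedisShake's actual writer:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// roundRobin cycles through a fixed list of proxy endpoints so that
// writes are spread evenly across them, as described for the proxy
// target type.
type roundRobin struct {
	endpoints []string
	next      uint64
}

// pick returns the next endpoint in rotation. AddUint64 makes it safe
// for concurrent callers (e.g. multiple sync goroutines).
func (r *roundRobin) pick() string {
	n := atomic.AddUint64(&r.next, 1)
	return r.endpoints[(n-1)%uint64(len(r.endpoints))]
}

func main() {
	rr := &roundRobin{endpoints: []string{"proxy-a:6379", "proxy-b:6379", "proxy-c:6379"}}
	for i := 0; i < 4; i++ {
		fmt.Println(rr.pick()) // proxy-a, proxy-b, proxy-c, then wraps to proxy-a
	}
}
```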

Code branch rules


Version rules: a.b.c.

  • a: major version
  • b: minor version. An even number indicates a stable version.
  • c: bugfix version
Branch name rules:

  • master: the master branch; pushing code to it directly is not allowed. It stores the latest stable version, and the develop branch is merged into it when a new version is released.
  • develop (main branch): the development branch; all the branches below fork from it.
  • feature-*: new-feature branches, forked from develop and merged back once development, testing, and code review are finished.
  • bugfix-*: bugfix branches, forked from develop and merged back once development, testing, and code review are finished.
  • improve-*: improvement branches, forked from develop and merged back once development, testing, and code review are finished.

Tag rules:
A tag is added when releasing: "release-v{version}-{date}", for example "release-v1.0.2-20180628".
Users can use -version to print the version.
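Under the a.b.c rule above, stability can be read directly off the minor version. A small sketch (isStable is a hypothetical helper, not part of RedisShake):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// isStable reports whether an a.b.c version string has an even minor
// version, which the rules above define as a stable release.
func isStable(version string) (bool, error) {
	parts := strings.Split(version, ".")
	if len(parts) != 3 {
		return false, fmt.Errorf("expected a.b.c, got %q", version)
	}
	minor, err := strconv.Atoi(parts[1])
	if err != nil {
		return false, err
	}
	return minor%2 == 0, nil
}

func main() {
	for _, v := range []string{"1.0.2", "2.1.0"} {
		stable, err := isStable(v)
		if err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Printf("%s stable=%v\n", v, stable) // 1.0.2 stable=true, 2.1.0 stable=false
	}
}
```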

Usage


You can download the binary directly from the release package and start it with the start.sh script: ./start.sh redis-shake.conf sync.
You can also build redis-shake yourself with the following steps; go and govendor must be installed before compiling:

  • git clone https://github.com/alibaba/RedisShake.git
  • cd RedisShake
  • export GOPATH=`pwd`
  • cd src/vendor
  • govendor sync # please note: govendor must be installed first (go get -u github.com/kardianos/govendor), then pull all dependencies
  • cd ../../ && ./build.sh
  • ./bin/redis-shake -type=$(type_must_be_sync_dump_restore_decode_or_rump) -conf=conf/redis-shake.conf # please note: users must modify conf/redis-shake.conf first to match their needs.

Shake series tool


We also provide some other synchronization tools in the Shake series.

In addition, we have a DingDing (钉钉) group where users can join and discuss; please scan the QR code to join.

Thanks


Username Mail
ceshihao [email protected]
wangyiyang [email protected]
muicoder [email protected]
zhklcf [email protected]
shuff1e [email protected]
xuhualin [email protected]
Comments
  • DbSyncer[14] Event:FlushFail reported during synchronization

    During full synchronization, redis-shake reports the error below; what could be the cause? DbSyncer[14] Event:FlushFail Id:redis-shake Error:dial tcp 10.18.73.20:11234: i/o timeout [stack]: 1 github.com/alibaba/RedisShake/redis-shake/dbSync/syncIncrease.go:390 github.com/alibaba/RedisShake/redis-shake/dbSync.(*DbSyncer).sendTargetCommand.func1 0 github.com/alibaba/RedisShake/redis-shake/dbSync/syncIncrease.go:444 github.com/alibaba/RedisShake/redis-shake/dbSync.(*DbSyncer).sendTargetCommand ... ...

  • [PANIC] restore command error key:privilege_1409124_xxxxx err:Do failed[MOVED 2361 xxxxx:10000]

    Syncing to an open-source redis cluster, version 3.0.3, fails with the error below.

    [error]: Do failed[MOVED 2361 10.8.60.123:10000] [stack]: 1 /Users/vinllen-ali/code/redis-shake-inner/redis-shake/src/redis-shake/common/utils.go:844 redis-shake/common.RestoreRdbEntry 0 /Users/vinllen-ali/code/redis-shake-inner/redis-shake/src/redis-shake/sync.go:447 redis-shake.(*dbSyncer).syncRDBFile.func1.1 ... ...

  • [BUG] Three masters, three replicas: incremental synchronization fails

    2022/02/10 15:04:26 [INFO] DbSyncer[1] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0 2022/02/10 15:04:27 [INFO] DbSyncer[0] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0 2022/02/10 15:04:27 [ERROR] dbSyncer[0] send offset to source redis failed[write tcp xx.xx.xx.34:42527-> xx.xx.xx.34:6379: write: connection reset by peer] [stack]: 0 github.com/alibaba/RedisShake/redis-shake/dbSync/syncBegin.go:145 github.com/alibaba/RedisShake/redis-shake/dbSync.(*DbSyncer).pSyncPipeCopy.func1 ... ... 2022/02/10 15:04:27 [INFO] DbSyncer[2] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0 2022/02/10 15:04:27 [INFO] DbSyncer[1] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0 2022/02/10 15:04:28 [INFO] DbSyncer[0] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0 2022/02/10 15:04:28 [INFO] DbSyncer[2] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0 2022/02/10 15:04:28 [INFO] DbSyncer[1] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0 2022/02/10 15:04:29 [INFO] DbSyncer[0] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0 2022/02/10 15:04:29 [INFO] DbSyncer[2] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0 2022/02/10 15:04:29 [INFO] DbSyncer[1] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0 2022/02/10 15:04:30 [INFO] DbSyncer[0] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0 2022/02/10 15:04:30 [INFO] DbSyncer[2] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0 2022/02/10 15:04:30 [INFO] DbSyncer[1] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0 2022/02/10 15:04:31 [INFO] DbSyncer[0] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0 2022/02/10 15:04:31 [INFO] DbSyncer[2] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0 2022/02/10 15:04:31 [INFO] DbSyncer[1] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0 2022/02/10 15:04:32 [INFO] DbSyncer[0] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0 2022/02/10 15:04:32 [INFO] DbSyncer[2] 
sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0

    2022/02/10 15:04:39 [INFO] DbSyncer[0] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0 2022/02/10 15:04:39 [INFO] DbSyncer[2] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0 2022/02/10 15:04:39 [INFO] DbSyncer[1] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0 2022/02/10 15:04:40 [INFO] DbSyncer[0] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0 2022/02/10 15:04:40 [INFO] DbSyncer[2] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0 2022/02/10 15:04:40 [INFO] DbSyncer[1] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0 2022/02/10 15:04:41 [INFO] DbSyncer[0] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0 2022/02/10 15:04:41 [INFO] DbSyncer[2] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0 2022/02/10 15:04:41 [INFO] DbSyncer[1] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0 2022/02/10 15:04:42 [INFO] DbSyncer[0] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0 2022/02/10 15:04:42 [INFO] DbSyncer[2] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0 2022/02/10 15:04:42 [INFO] DbSyncer[1] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0 2022/02/10 15:04:43 [INFO] DbSyncer[0] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0 2022/02/10 15:04:43 [INFO] DbSyncer[2] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0 2022/02/10 15:04:43 [INFO] DbSyncer[1] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0 2022/02/10 15:04:44 [INFO] DbSyncer[0] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0 2022/02/10 15:04:44 [INFO] DbSyncer[2] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0 2022/02/10 15:04:44 [INFO] DbSyncer[1] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0 2022/02/10 15:04:45 [INFO] DbSyncer[0] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0 2022/02/10 15:04:45 [INFO] DbSyncer[2] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0 2022/02/10 15:04:45 [ERROR] dbSyncer[2] send offset to source redis 
failed[write tcp xx.xx.xx.34:45407-> xx.xx.xx.37:6379: write: connection reset by peer] [stack]: 0 github.com/alibaba/RedisShake/redis-shake/dbSync/syncBegin.go:145 github.com/alibaba/RedisShake/redis-shake/dbSync.(*DbSyncer).pSyncPipeCopy.func1 ... ... 2022/02/10 15:04:45 [INFO] DbSyncer[1] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0 2022/02/10 15:04:46 [WARN] DbSyncer[0] Event:GetFakeSlaveOffsetFail Id:redis-shake Warn:OffsetNotFoundInInfo 2022/02/10 15:04:46 [INFO] DbSyncer[0] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0 2022/02/10 15:04:46 [WARN] DbSyncer[2] Event:GetFakeSlaveOffsetFail Id:redis-shake Warn:OffsetNotFoundInInfo 2022/02/10 15:04:46 [INFO] DbSyncer[2] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0 2022/02/10 15:04:46 [INFO] DbSyncer[1] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0 2022/02/10 15:04:47 [INFO] DbSyncer[0] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0 2022/02/10 15:04:47 [INFO] DbSyncer[2] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0 2022/02/10 15:04:47 [INFO] DbSyncer[1] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0 2022/02/10 15:04:48 [INFO] DbSyncer[0] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0 2022/02/10 15:04:48 [INFO] DbSyncer[2] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0 2022/02/10 15:04:48 [INFO] DbSyncer[1] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0 2022/02/10 15:04:49 [INFO] DbSyncer[0] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0 2022/02/10 15:04:49 [INFO] DbSyncer[2] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0 2022/02/10 15:04:49 [INFO] DbSyncer[1] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0 2022/02/10 15:04:49 [ERROR] dbSyncer[1] send offset to source redis failed[write tcp xx.xx.xx.34:54523->xx.xx.xx.35:6379: write: connection reset by peer] [stack]: 0 github.com/alibaba/RedisShake/redis-shake/dbSync/syncBegin.go:145 
github.com/alibaba/RedisShake/redis-shake/dbSync.(*DbSyncer).pSyncPipeCopy.func1 ... ... 2022/02/10 15:04:50 [INFO] DbSyncer[0] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0

  • Data synchronization error

    Version: develop, 4a26b1ca10bc9c6849b30fc73004d135c8227063, go1.10.3, 2019-03-08_13:43:41. Redis versions: source redis_version:3.2.4, target 5.0.2. Error: 2019/04/23 19:49:49 [PANIC] Event:NetErrorWhileReceive Id:redis-shake Error:EOF [stack]: 0 /home/zhuzhao.cx/redis-shake/src/redis-shake/sync.go:405 redis-shake.(*CmdSync).SyncCommand.func2

  • Under high ops, redis-shake's CPU usage is high: 13 MB/s, 30K+ ops

    pprof shows that net.Read is called too frequently; a small delay helps, and 2ms seems about right (I don't know the optimum; 1ms was not effective). Our CPU usage was originally 125%, with net.Read accounting for 71% of the pprof profile; after adding a time.Sleep it dropped to around 50% without affecting synchronization. Also, openConnect could use net.DialTCP, set ReadBuffer and WriteBuffer, and then pass the result out as a net.Conn.
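    The buffer tuning described above can be sketched with the standard library; this is an illustrative sketch (the helper name and the 4 MiB buffer size are arbitrary choices, not RedisShake's actual code):

    ```go
    package main

    import (
    	"fmt"
    	"net"
    )

    // dialWithBuffers opens a TCP connection with enlarged kernel socket
    // buffers, then returns it as a plain net.Conn, as suggested above.
    // Larger buffers let each net.Read return more data per call,
    // reducing how often it must be invoked.
    func dialWithBuffers(addr string, bufBytes int) (net.Conn, error) {
    	raddr, err := net.ResolveTCPAddr("tcp", addr)
    	if err != nil {
    		return nil, err
    	}
    	conn, err := net.DialTCP("tcp", nil, raddr)
    	if err != nil {
    		return nil, err
    	}
    	if err := conn.SetReadBuffer(bufBytes); err != nil {
    		conn.Close()
    		return nil, err
    	}
    	if err := conn.SetWriteBuffer(bufBytes); err != nil {
    		conn.Close()
    		return nil, err
    	}
    	return conn, nil // callers see an ordinary net.Conn
    }

    func main() {
    	// Local listener so the demo is self-contained.
    	ln, err := net.Listen("tcp", "127.0.0.1:0")
    	if err != nil {
    		fmt.Println("listen failed:", err)
    		return
    	}
    	defer ln.Close()
    	conn, err := dialWithBuffers(ln.Addr().String(), 4<<20) // 4 MiB buffers
    	if err != nil {
    		fmt.Println("dial failed:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("connected with enlarged socket buffers")
    }
    ```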

  • With psync enabled, synchronization fails: [PANIC] invalid psync response, fullsync

    2019/04/04 15:55:38 [INFO] redis-shake configuration: {"Id":"redis-shake","LogFile":"","SystemProfile":9310,"HttpProfile":9320,"NCpu":0,"Parallel":4,"InputRdb":"local","OutputRdb":"local_dump","SourceAddress":"192.168.111.93:6379","SourcePasswordRaw":"","SourcePasswordEncoding":"","SourceVersion":7,"SourceAuthType":"auth","TargetAddress":"192.168.111.94:6379","TargetPasswordRaw":"","TargetPasswordEncoding":"","TargetVersion":7,"TargetDB":-1,"TargetAuthType":"auth","FakeTime":"","Rewrite":true,"FilterDB":"","FilterKey":[],"FilterSlot":[],"BigKeyThreshold":524288000,"Psync":true,"Metric":true,"MetricPrintLog":false,"HeartbeatUrl":"","HeartbeatInterval":3,"HeartbeatExternal":"test external","HeartbeatNetworkInterface":"","SenderSize":104857600,"SenderCount":5000,"SenderDelayChannelSize":65535,"KeepAlive":0,"PidPath":"","RedisConnectTTL":0,"ReplaceHashTag":false,"ExtraInfo":false,"SockFileName":"","SockFileSize":0,"HeartbeatIp":"127.0.0.1","ShiftTime":0,"TargetRedisVersion":"4.0.12","TargetReplace":true}
    2019/04/04 15:55:38 [INFO] sync from '192.168.111.93:6379' to '192.168.111.94:6379' http '9320'
    2019/04/04 15:55:38 [INFO] sync from '192.168.111.93:6379' to '192.168.111.94:6379'
    2019/04/04 15:55:38 [PANIC] invalid psync response, fullsync
    [error]: bad resp CRLF end
        6   /Users/wangyiyang/Documents/GitHub/RedisShake/src/pkg/redis/decoder.go:179
                pkg/redis.(*Decoder).decodeSingleLineBulkBytesArray
        5   /Users/wangyiyang/Documents/GitHub/RedisShake/src/pkg/redis/decoder.go:97
                pkg/redis.(*Decoder).decodeResp
        4   /Users/wangyiyang/Documents/GitHub/RedisShake/src/pkg/redis/decoder.go:32
                pkg/redis.Decode
        3   /Users/wangyiyang/Documents/GitHub/RedisShake/src/redis-shake/common/utils.go:162
                redis-shake/common.SendPSyncFullsync
        2   /Users/wangyiyang/Documents/GitHub/RedisShake/src/redis-shake/sync.go:180
                redis-shake.(*CmdSync).SendPSyncCmd
        1   /Users/wangyiyang/Documents/GitHub/RedisShake/src/redis-shake/sync.go:119
                redis-shake.(*CmdSync).Main
        0   /Users/wangyiyang/Documents/GitHub/RedisShake/src/redis-shake/main/main.go:115
                main.main
            ... ...
    [stack]:
        3   /Users/wangyiyang/Documents/GitHub/RedisShake/src/redis-shake/common/utils.go:164
                redis-shake/common.SendPSyncFullsync
        2   /Users/wangyiyang/Documents/GitHub/RedisShake/src/redis-shake/sync.go:180
                redis-shake.(*CmdSync).SendPSyncCmd
        1   /Users/wangyiyang/Documents/GitHub/RedisShake/src/redis-shake/sync.go:119
                redis-shake.(*CmdSync).Main
        0   /Users/wangyiyang/Documents/GitHub/RedisShake/src/redis-shake/main/main.go:115
                main.main
            ... ...
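    The "bad resp CRLF end" error above comes from RESP decoding: every RESP line must end with \r\n, and a reply terminated by a bare \n fails the check. A minimal sketch of that validation (not RedisShake's actual decoder):

    ```go
    package main

    import (
    	"bufio"
    	"fmt"
    	"strings"
    )

    // readRespLine reads one RESP line and enforces the \r\n terminator,
    // mirroring the check that produces "bad resp CRLF end".
    func readRespLine(r *bufio.Reader) (string, error) {
    	line, err := r.ReadString('\n')
    	if err != nil {
    		return "", err
    	}
    	if len(line) < 2 || line[len(line)-2] != '\r' {
    		return "", fmt.Errorf("bad resp CRLF end")
    	}
    	return line[:len(line)-2], nil // strip the \r\n
    }

    func main() {
    	good := bufio.NewReader(strings.NewReader("+FULLRESYNC abc 123\r\n"))
    	if s, err := readRespLine(good); err == nil {
    		fmt.Println("ok:", s)
    	}
    	bad := bufio.NewReader(strings.NewReader("+FULLRESYNC abc 123\n"))
    	if _, err := readRespLine(bad); err != nil {
    		fmt.Println("error:", err) // the missing \r triggers the error
    	}
    }
    ```

    A malformed psync reply like this usually means the peer is not speaking plain RESP at that point, e.g. a proxy or managed service intercepting the PSYNC handshake.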
    
  • Does it support synchronizing from AWS ElastiCache?

    As titled: AWS ElastiCache disables the sync/psync commands, so RedisShake's sync mode certainly cannot work. However, AWS does provide a separate sync command for each cluster shard: shard 1 may get a prefix such as xhma21xfks, making its sync command xhma21xfkssync, while shard 2 may get another prefix such as nmfu2sl5o, making its equivalent command nmfu2sl5osync. Could RedisShake support this form somehow?

    I see that scan-based synchronization is provided for when psync/sync are unsupported, but that looks more like a migration scenario; our use case is a long-running synchronization service, so it doesn't quite fit.

    I'd appreciate any suggestions, thanks!

  • RDB finished, tcp broken pipe

    Problem description

    While migrating data from redis 4.0 to redis 5.0, some shards hit a TCP broken pipe just as the RDB file finished writing.

    -- Log --

    2022-11-30 11:36:26 PNC write tcp 172.20.122.103:59626->55.50.3.21:6381: write: broken pipe
    panic: write tcp 172.20.122.103:59626->55.50.3.21:6381: write: broken pipe
    
    goroutine 1 [running]:
    github.com/rs/zerolog.(*Logger).Panic.func1({0xc00231a050, 0x0})
    github.com/rs/[email protected]/log.go:359 +0x2d
    github.com/rs/zerolog.(*Event).msg(0xc0000920c0, {0xc00231a050, 0x43})
    github.com/rs/[email protected]/event.go:156 +0x2b8
    github.com/rs/zerolog.(*Event).Msg(...)
    github.com/rs/[email protected]/event.go:108
    github.com/alibaba/RedisShake/internal/log.logFinally(0xc0000920c0, {0xc00231a000, 0xc000124288}, {0x0, 0x2f, 0xc000016390})
    github.com/alibaba/RedisShake/internal/log/func.go:77 +0x53
    github.com/alibaba/RedisShake/internal/log.Panicf({0xc00231a000, 0x43}, {0x0, 0x0, 0x0})
    github.com/alibaba/RedisShake/internal/log/func.go:27 +0x57
    github.com/alibaba/RedisShake/internal/log.PanicError({0x81bc20, 0xc00015e410})
    github.com/alibaba/RedisShake/internal/log/func.go:31 +0x33
    github.com/alibaba/RedisShake/internal/client.(*Redis).SendBytes(0xc000024100, {0xc002314000, 0xc000038800, 0xc0001ea870})
    github.com/alibaba/RedisShake/internal/client/redis.go:92 +0x3a
    github.com/alibaba/RedisShake/internal/writer.(*redisWriter).Write(0xc0001e74c0, 0xc00012f540)
    github.com/alibaba/RedisShake/internal/writer/redis.go:54 +0x12d
    github.com/alibaba/RedisShake/internal/writer.(*RedisClusterWriter).Write(0xc00021a000, 0xc00012f540)
    github.com/alibaba/RedisShake/internal/writer/redis_cluster.go:114 +0x16e
    main.main()
    github.com/alibaba/RedisShake/cmd/redis-shake/main.go:109 +0x7fa
    

    Source Redis version: 4.0.1, self-hosted


    Target Redis version: 5.0.2, a cluster from a cloud vendor

  • In some cases, data is lost during synchronization

    RedisShake version: release-v1.2.2-20190403. Source redis version: 3.2.8. Target redis version: 4.0.10 cluster. Configuration: only the parallel parameter was increased; all other parameters are defaults. Symptom: no errors were reported during synchronization; after the full sync completed, the sync process was kept running. Counting keys on both ends shows the target redis has only about 90% of the source's keys. Running redis-full-check with default parameters reports an error: https://github.com/alibaba/RedisFullCheck/issues/36. Running redis-full-check -m 3 finds both ends consistent, i.e. the tool considers the keys identical.

    From the debug information we found a key that is inconsistent between the two ends: uc_basic_data_2852157937_1_28521579370000000000000000000010

    Source redis: > type uc_basic_data_2852157937_1_28521579370000000000000000000010 hash > ttl uc_basic_data_2852157937_1_28521579370000000000000000000010 (integer) 596453

    Target redis: > type uc_basic_data_2852157937_1_28521579370000000000000000000010 none > ttl uc_basic_data_2852157937_1_28521579370000000000000000000010 (integer) -2

  • read: connection reset by peer

    The same problem occurred in two separate redis syncs (screenshot: Snipaste_2021-08-19_10-58-02).

    The relevant redis configuration:

    repl-timeout 60
    client-output-buffer-limit slave 0 0 0 
    

    The source-side log shows:

    * Background saving terminated with success
    # Connection with slave client id #2894126 lost.
    

    I tried setting parallel = 64 as suggested in https://github.com/alibaba/RedisShake/issues/282, without success. I also tried raising repl-timeout to 300 and setting client-output-buffer-limit slave to 256mb 64mb 60, also without success. Could the author suggest any tuning advice?

  • release-v2.0.3-20200724 opens too many connections to destination redis host

    Running in "sync" mode as follows: ./redis-shake.linux -conf=standalone2standalone.conf -type=sync

    Used configuration:

    source.type=standalone
    source.address=10.XX.XX.XX:6379
    
    target.type=standalone
    target.address=10.XX.XX.XXX:6379
    

    At most 6 MB/s of data transfer is seen in the logs. (Screen Shot 2021-03-04 at 13 35 29)

    Is there any configuration option to increase the data transfer rate while reducing the number of open network connections?

    Network monitoring logs do not exist for now. I can also provide them if needed.

  • Error when synchronizing Redis 4.0 standalone to redis 5.0 cluster

    • [x] Please make sure you have read the wiki: https://github.com/alibaba/RedisShake/wiki
    • [x] Please make sure you know Markdown syntax; good formatting helps maintainers understand your issue
    • [x] Please provide enough information here for community maintainers to troubleshoot the issue
    • [x] Please delete the extra template text, including these few sentences, before submitting the issue

    sync.toml

    type = "sync"
    
    [source]
    version = 4.0 # redis version, such as 2.8, 4.0, 5.0, 6.0, 6.2, 7.0, ...
    address = "10.106.xx.xxx:6379"
    username = "" # keep empty if not using ACL
    password = "" # keep empty if no authentication is required
    tls = false
    elasticache_psync = "" # using when source is ElastiCache. ref: https://github.com/alibaba/RedisShake/issues/373
    
    [target]
    type = "cluster" # "standalone" or "cluster"
    version = 5.0 # redis version, such as 2.8, 4.0, 5.0, 6.0, 6.2, 7.0, ...
    # When the target is a cluster, write the address of one of the nodes.
    # redis-shake will obtain other nodes through the `cluster nodes` command.
    address = "10.174.xx.xxx:11309"
    username = "" # keep empty if not using ACL
    password = "" # keep empty if no authentication is required
    tls = false
    
    [advanced]
    dir = "data"
    
    # runtime.GOMAXPROCS, 0 means use runtime.NumCPU() cpu cores
    ncpu = 4
    
    # pprof port, 0 means disable
    pprof_port = 0
    
    # metric port, 0 means disable
    metrics_port = 0
    
    # log
    log_file = "redis-shake.log"
    log_level = "info" # debug, info or warn
    log_interval = 5 # in seconds
    
    # redis-shake gets key and value from rdb file, and uses RESTORE command to
    # create the key in target redis. Redis RESTORE will return a "Target key name
    # is busy" error when key already exists. You can use this configuration item
    # to change the default behavior of restore:
    # panic:   redis-shake will stop when meet "Target key name is busy" error.
    # rewrite: redis-shake will replace the key with new value.
    # ignore:  redis-shake will skip restore the key when meet "Target key name is busy" error.
    rdb_restore_command_behavior = "rewrite" # panic, rewrite or skip
    
    # pipeline
    pipeline_count_limit = 1024
    
    # Client query buffers accumulate new commands. They are limited to a fixed
    # amount by default. This amount is normally 1gb.
    target_redis_client_max_querybuf_len = 1024_000_000
    
    # In the Redis protocol, bulk requests, that are, elements representing single
    # strings, are normally limited to 512 mb.
    target_redis_proto_max_bulk_len = 512_000_000
    

    CLI

    sudo ./bin/redis-shake sync.toml
    

    LOG

    2022-12-16 16:11:40 INF GOOS: darwin, GOARCH: arm64
    2022-12-16 16:11:40 INF Ncpu: 4, GOMAXPROCS: 4
    2022-12-16 16:11:40 INF pid: 84003
    2022-12-16 16:11:40 INF pprof_port: 0
    2022-12-16 16:11:40 INF No lua file specified, will not filter any cmd.
    2022-12-16 16:11:40 INF no password. address=[10.174.xx.xxx:11309]
    2022-12-16 16:11:40 INF redisClusterWriter load cluster nodes. line=10a174a54a214a98211000000000000000000024 10.174.xx.xxx:11289@11289 master - 1671174700348 1671174700348 99 connected 12288-16383
    2022-12-16 16:11:40 INF no password. address=[10.174.xx.xxx:11289]
    2022-12-16 16:11:40 INF redisWriter connected to redis successful. address=[10.174.xx.xxx:11289]
    2022-12-16 16:11:40 INF redisClusterWriter load cluster nodes. line=10a174a54a209a90311000000000000000000000 10.174.xx.xxx:11309@11309 myself,master - 1671174700348 1671174700348 99 connected 0-4095
    2022-12-16 16:11:40 INF no password. address=[10.174.xx.xxx:11309]
    2022-12-16 16:11:40 INF redisWriter connected to redis successful. address=[10.174.xx.xxx:11309]
    2022-12-16 16:11:40 INF redisClusterWriter load cluster nodes. line=10a174a54a215a98211000000000000000000008 10.174.xx.xxx:11289@11289 master - 1671174700348 1671174700348 99 connected 4096-8191
    2022-12-16 16:11:40 INF no password. address=[10.174.xx.xxx:11289]
    2022-12-16 16:11:40 INF redisWriter connected to redis successful. address=[10.174.xx.xxx:11289]
    2022-12-16 16:11:40 INF redisClusterWriter load cluster nodes. line=10a174a54a216a93211000000000000000000016 10.174.xx.xxx:11239@11239 master - 1671174700348 1671174700348 99 connected 8192-12287
    2022-12-16 16:11:40 INF no password. address=[10.174.xx.xxx:11239]
    2022-12-16 16:11:40 INF redisWriter connected to redis successful. address=[10.174.xx.xxx:11239]
    2022-12-16 16:11:40 INF redisClusterWriter connected to redis cluster successful. addresses=[10.174.xx.xxx:11289 10.174.xx.xxx:11309 10.174.xx.xxx:11289 10.174.xx.xxx:11239]
    2022-12-16 16:11:40 INF no password. address=[10.106.xx.xxx:6379]
    2022-12-16 16:11:40 INF psyncReader connected to redis successful. address=[10.106.xx.xxx:6379]
    2022-12-16 16:11:40 WRN remove file. filename=[220782554095.aof]
    2022-12-16 16:11:40 WRN remove file. filename=[dump.rdb]
    2022-12-16 16:11:40 INF start save RDB. address=[10.106.xx.xxx:6379]
    2022-12-16 16:11:40 INF send [replconf listening-port 10007]
    2022-12-16 16:11:40 INF send [PSYNC ? -1]
    2022-12-16 16:11:40 INF receive [FULLRESYNC ce6bf38546f40d30f619158710725fc42c1496de 220792215831]
    2022-12-16 16:11:40 INF source db is doing bgsave. address=[10.106.xx.xxx:6379]
    2022-12-16 16:11:40 INF source db bgsave finished. timeUsed=[0.54]s, address=[10.106.xx.xxx:6379]
    2022-12-16 16:11:40 INF received rdb length. length=[18161262]
    2022-12-16 16:11:40 INF create dump.rdb file. filename_path=[dump.rdb]
    2022-12-16 16:11:43 INF save RDB finished. address=[10.106.xx.xxx:6379], total_bytes=[18161262]
    2022-12-16 16:11:43 INF start send RDB. address=[10.106.xx.xxx:6379]
    2022-12-16 16:11:43 INF start save AOF. address=[10.106.xx.xxx:6379]
    2022-12-16 16:11:43 INF RDB version: 8
    2022-12-16 16:11:43 INF RDB AUX fields. key=[redis-ver], value=[4.0.8]
    2022-12-16 16:11:43 INF RDB AUX fields. key=[redis-bits], value=[64]
    2022-12-16 16:11:43 INF RDB AUX fields. key=[ctime], value=[1671174700]
    2022-12-16 16:11:43 INF RDB AUX fields. key=[used-mem], value=[65444160]
    2022-12-16 16:11:43 INF AOFWriter open file. filename=[220792215831.aof]
    2022-12-16 16:11:43 INF RDB repl-stream-db: 0
    2022-12-16 16:11:43 INF RDB AUX fields. key=[repl-id], value=[ce6bf38546f40d30f619158710725fc42c1496de]
    2022-12-16 16:11:43 INF RDB AUX fields. key=[repl-offset], value=[220792215831]
    2022-12-16 16:11:43 INF RDB AUX fields. key=[aof-preamble], value=[0]
    2022-12-16 16:11:43 INF RDB resize db. db_size=[38604], expire_size=[371]
    2022-12-16 16:11:45 INF syncing rdb. percent=[42.11]%, allowOps=[4408.40], disallowOps=[0.00], entryId=[22042], InQueueEntriesCount=[16], unansweredBytesCount=[0]bytes, rdbFileSize=[0.017]G, rdbSendSize=[0.007]G
    2022-12-16 16:11:46 INF send RDB finished. address=[10.106.xx.xxx:6379], repl-stream-db=[0]
    2022-12-16 16:11:47 INF AOFReader open file. aof_filename=[220792215831.aof]
    2022-12-16 16:11:47 PNC redisWriter received error. error=[ERR Unsupported command], argv=[PUBLISH __sentinel__:hello 10.106.xx.xxx,26379,8c54fa8b3b2bde3acaaffbdc94d87e650d271198,33,mymaster,10.106.xx.xxx,6379,19], slots=[], reply=[<nil>]
    panic: redisWriter received error. error=[ERR Unsupported command], argv=[PUBLISH __sentinel__:hello 10.106.xx.xxx,26379,8c54fa8b3b2bde3acaaffbdc94d87e650d271198,33,mymaster,10.106.xx.xxx,6379,19], slots=[], reply=[<nil>]
    
    goroutine 20 [running]:
    github.com/rs/zerolog.(*Logger).Panic.func1({0x1400023ab60?, 0x0?})
    	github.com/rs/[email protected]/log.go:359 +0x30
    github.com/rs/zerolog.(*Event).msg(0x14000090660, {0x1400023ab60, 0xd6})
    	github.com/rs/[email protected]/event.go:156 +0x244
    github.com/rs/zerolog.(*Event).Msg(...)
    	github.com/rs/[email protected]/event.go:108
    github.com/alibaba/RedisShake/internal/log.logFinally(0x14000090660, {0x104e5e835?, 0x18?}, {0x1400025ff68?, 0x1400004ee01?, 0x104e0613c?})
    	github.com/alibaba/RedisShake/internal/log/func.go:77 +0x60
    github.com/alibaba/RedisShake/internal/log.Panicf({0x104e5e835, 0x45}, {0x1400025ff68, 0x4, 0x4})
    	github.com/alibaba/RedisShake/internal/log/func.go:27 +0x50
    github.com/alibaba/RedisShake/internal/writer.(*redisWriter).flushInterval(0x1400007c140)
    	github.com/alibaba/RedisShake/internal/writer/redis.go:75 +0x2f8
    created by github.com/alibaba/RedisShake/internal/writer.NewRedisWriter
    	github.com/alibaba/RedisShake/internal/writer/redis.go:35 +0x1a0
    

    Although the source and target are different, I have synced in the same environment before.

    This time the error pops up like this. Can you tell me what this error log means?

    The difference from that time is that the nodes of this cluster all have different IPs.

  • redis 4.0 cluster syncing to redis 6.0 reports an error

    Problem description

    While syncing from redis 4.0 to redis 6.0, an error occurred and redis-shake exited.

    redis-shake log:

    panic: write tcp 192.168.xxx.2:53364->192.168.xxx.50:6690: write: broken pipe
    
    goroutine 1 [running]:
    github.com/rs/zerolog.(*Logger).Panic.func1({0xc00009ecd0, 0x0})
            github.com/rs/[email protected]/log.go:359 +0x2d
    github.com/rs/zerolog.(*Event).msg(0xc000092300, {0xc00009ecd0, 0x46})
            github.com/rs/[email protected]/event.go:156 +0x2b8
    github.com/rs/zerolog.(*Event).Msg(...)
            github.com/rs/[email protected]/event.go:108
    github.com/alibaba/RedisShake/internal/log.logFinally(0xc000092300, {0xc00009ec30, 0xc0002ee720}, {0x0, 0x32, 0xc000355a40})
            github.com/alibaba/RedisShake/internal/log/func.go:77 +0x53
    github.com/alibaba/RedisShake/internal/log.Panicf({0xc00009ec30, 0x46}, {0x0, 0x0, 0x0})
            github.com/alibaba/RedisShake/internal/log/func.go:27 +0x57
    github.com/alibaba/RedisShake/internal/log.PanicError({0x81bc20, 0xc000351310})
            github.com/alibaba/RedisShake/internal/log/func.go:31 +0x33
    github.com/alibaba/RedisShake/internal/client.(*Redis).flush(0xc00009c180)
            github.com/alibaba/RedisShake/internal/client/redis.go:100 +0x2a
    github.com/alibaba/RedisShake/internal/client.(*Redis).SendBytes(0xc0000126c0, {0xc00008de00, 0xa82738, 0xc0000a0870})
            github.com/alibaba/RedisShake/internal/client/redis.go:94 +0x45
    github.com/alibaba/RedisShake/internal/writer.(*redisWriter).Write(0xc00005ca40, 0xc0003fb400)
            github.com/alibaba/RedisShake/internal/writer/redis.go:54 +0x12d
    github.com/alibaba/RedisShake/internal/writer.(*RedisClusterWriter).Write(0x0, 0xc0003fb400)
            github.com/alibaba/RedisShake/internal/writer/redis_cluster.go:100 +0x92
    main.main()
            github.com/alibaba/RedisShake/cmd/redis-shake/main.go:109 +0x7fa
    panic: redisWriter received error. error=[EOF], argv=[script load local val = 0 if redis.call('GET', KEYS[1]) == ARGV[1] then val = redis.call('DEL', KEYS[1]) end return tostring(val)], slots=[], reply=[<nil>]
    
    goroutine 20 [running]:
    github.com/rs/zerolog.(*Logger).Panic.func1({0xc0004c00d0, 0x0})
            github.com/rs/[email protected]/log.go:359 +0x2d
    github.com/rs/zerolog.(*Event).msg(0xc00038eae0, {0xc0004c00d0, 0xca})
            github.com/rs/[email protected]/event.go:156 +0x2b8
    github.com/rs/zerolog.(*Event).Msg(...)
            github.com/rs/[email protected]/event.go:108
    github.com/alibaba/RedisShake/internal/log.logFinally(0xc00038eae0, {0x7b742a, 0x40c0ca}, {0xc000313f78, 0x745ea0, 0x407001})
            github.com/alibaba/RedisShake/internal/log/func.go:77 +0x53
    github.com/alibaba/RedisShake/internal/log.Panicf({0x7b742a, 0x45}, {0xc000313f78, 0x4, 0x4})
            github.com/alibaba/RedisShake/internal/log/func.go:27 +0x57
    github.com/alibaba/RedisShake/internal/writer.(*redisWriter).flushInterval(0xc00005ca40)
            github.com/alibaba/RedisShake/internal/writer/redis.go:75 +0x3bd
    created by github.com/alibaba/RedisShake/internal/writer.NewRedisWriter
            github.com/alibaba/RedisShake/internal/writer/redis.go:35 +0x19c
    Paste the shake logs here
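The trace above shows redisWriter dying inside `SendBytes`/`flush` while forwarding the quoted `SCRIPT LOAD` command. For background, here is a minimal sketch (not RedisShake's actual code) of how such a command is framed on the wire in the RESP protocol, which is what the writer ultimately sends to the target:

```go
package main

import (
	"fmt"
	"strings"
)

// encodeRESP frames a command as a RESP array of bulk strings, the
// wire format a replication writer sends to the target Redis.
func encodeRESP(argv ...string) string {
	var b strings.Builder
	fmt.Fprintf(&b, "*%d\r\n", len(argv))
	for _, a := range argv {
		fmt.Fprintf(&b, "$%d\r\n%s\r\n", len(a), a)
	}
	return b.String()
}

func main() {
	// The compare-and-delete script quoted in the panic message above.
	script := "local val = 0 if redis.call('GET', KEYS[1]) == ARGV[1] then val = redis.call('DEL', KEYS[1]) end return tostring(val)"
	fmt.Print(encodeRESP("script", "load", script))
}
```

If the target closes the connection while such a payload is in flight, the flush fails and the writer panics as shown.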
    

    Source Redis: version 4.0.10 (redis-cli), self-hosted, cluster. Logs:

    22337:M 16 Dec 10:08:32.420 * Full resync requested by slave 192.168.xxx.2:10007
    22337:M 16 Dec 10:08:32.420 * Starting BGSAVE for SYNC with target: disk
    22337:M 16 Dec 10:08:32.693 * Background saving started by pid 9366
    22337:M 16 Dec 10:08:47.381 # Connection with slave 192.168.xxx.2:10007 lost.
    22337:M 16 Dec 10:09:28.406 * Slave 192.168.xxx.2:10007 asks for synchronization
    22337:M 16 Dec 10:09:28.406 * Full resync requested by slave 192.168.xxx.2:10007
    22337:M 16 Dec 10:09:28.406 * Can't attach the slave to the current BGSAVE. Waiting for next BGSAVE for SYNC
    

    Destination Redis: version 6.2.6, cluster, self-hosted. Logs:

    24861:M 16 Dec 2022 10:55:23.038 * 1 changes in 3600 seconds. Saving...
    24861:M 16 Dec 2022 10:55:23.038 * Background saving started by pid 45524
    45524:C 16 Dec 2022 10:55:23.040 * DB saved on disk
    45524:C 16 Dec 2022 10:55:23.040 * RDB: 0 MB of memory used by copy-on-write
    24861:M 16 Dec 2022 10:55:23.058 * Background saving terminated with success
    24861:M 16 Dec 2022 10:55:45.750 * Starting automatic rewriting of AOF on 6715309000% growth
    24861:M 16 Dec 2022 10:55:45.752 * Background append only file rewriting started by pid 45529
    24861:M 16 Dec 2022 10:55:47.435 * AOF rewrite child asks to stop sending diffs.
    45529:C 16 Dec 2022 10:55:47.435 * Parent agreed to stop sending diffs. Finalizing AOF...
    45529:C 16 Dec 2022 10:55:47.435 * Concatenating 4.19 MB of AOF diff received from parent.
    45529:C 16 Dec 2022 10:55:47.447 * SYNC append only file rewrite performed
    Paste the logs here
    
  • Data synchronization between two redis-clusters

    The documentation says that cluster-to-cluster synchronization follows the [standalone-to-cluster] approach: deploy multiple redis-shake processes. Suppose the redis-cluster uses the common three-master, three-replica layout.

    Question 1: Do we also need to start 6 redis-shake services, one per instance?

    Question 2: For the target section, [target] type = "cluster" address = "192.168.1.1:6380" # any node address of cluster D

    Can this address use the v2 format, i.e. list all cluster node addresses here separated by ';'?
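For reference, a sketch of the v3-style [target] block quoted in the question (the address is the question's own placeholder; whether a semicolon-separated multi-node list, as in the older v2 conf style the question describes, is still accepted is exactly what is being asked):

```toml
[target]
type = "cluster"
# v3 style: any single node of cluster D; the rest of the
# cluster topology is discovered from it.
address = "192.168.1.1:6380"
```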

  • After redis-shake starts synchronization, the target-side sentinels discover the source-side sentinels

    Problem description

    After redis-shake starts synchronization, the sentinels on the target side discover the sentinels on the source side. In some scenarios the target-side sentinels then initiate a failover that demotes the target instance to a slave, which makes redis-shake synchronization fail. How can this be avoided? (The sentinels on the source and the target cannot be disabled.)

    Source master/slave state before sync: [screenshot]

    Target master/slave state before sync: [screenshot]

    Target sentinel state before sync: [screenshot]

    Target sentinel state after sync: [screenshot]

    State after redis-shake synchronization was started

    redis-shake logs:

    Paste the shake logs here
    

    Source Redis: version 3.0, self-hosted redis, standalone mode


    Destination Redis: version 7.0, self-hosted redis, standalone mode

  • Synchronization exception between redis clusters

    After synchronizing for a while, redis-shake exited on its own. The last log lines were:

    {"level":"panic","time":"2022-11-24T16:46:49+08:00","message":"EOF"}
    {"level":"info","time":"2022-11-24T16:46:49+08:00","message":"AOFWriter close file. filename=[1385795846.aof], filesize=[21401536]"}

    I don't know what caused this.
