Multiple databases, read/write splitting for GORM

DBResolver

DBResolver adds multi-database support to GORM; the following features are supported:

  • Multiple sources, replicas
  • Read/Write Splitting
  • Automatic connection switching based on the working table/struct
  • Manual connection switching
  • Sources/Replicas load balancing
  • Works for RAW SQL

Quick Start

import (
  "gorm.io/gorm"
  "gorm.io/plugin/dbresolver"
  "gorm.io/driver/mysql"
)

DB, err := gorm.Open(mysql.Open("db1_dsn"), &gorm.Config{})

DB.Use(dbresolver.Register(dbresolver.Config{
  // use `db2` as sources, `db3`, `db4` as replicas
  Sources:  []gorm.Dialector{mysql.Open("db2_dsn")},
  Replicas: []gorm.Dialector{mysql.Open("db3_dsn"), mysql.Open("db4_dsn")},
  // sources/replicas load balancing policy
  Policy: dbresolver.RandomPolicy{},
}).Register(dbresolver.Config{
  // use `db1` as sources (DB's default connection), `db5` as replicas for `User`, `Address`
  Replicas: []gorm.Dialector{mysql.Open("db5_dsn")},
}, &User{}, &Address{}).Register(dbresolver.Config{
  // use `db6`, `db7` as sources, `db8` as replicas for `orders`, `Product`
  Sources:  []gorm.Dialector{mysql.Open("db6_dsn"), mysql.Open("db7_dsn")},
  Replicas: []gorm.Dialector{mysql.Open("db8_dsn")},
}, "orders", &Product{}, "secondary"))

Transaction

When running inside a transaction, DBResolver keeps using the connection the transaction started on and won't switch between sources/replicas

Automatic connection switching

DBResolver automatically switches connections based on the table/struct being worked on.

For raw SQL, DBResolver extracts the table name from the SQL to match a registered resolver, and will use sources unless the SQL begins with SELECT, for example:

// `User` Resolver Examples
DB.Table("users").Rows() // replicas `db5`
DB.Model(&User{}).Find(&AdvancedUser{}) // replicas `db5`
DB.Exec("update users set name = ?", "jinzhu") // sources `db1`
DB.Raw("select name from users").Row().Scan(&name) // replicas `db5`
DB.Create(&user) // sources `db1`
DB.Delete(&User{}, "name = ?", "jinzhu") // sources `db1`
DB.Table("users").Update("name", "jinzhu") // sources `db1`

// Global Resolver Examples
DB.Find(&Pet{}) // replicas `db3`/`db4`
DB.Save(&Pet{}) // sources `db2`

// Orders Resolver Examples
DB.Find(&Order{}) // replicas `db8`
DB.Table("orders").Find(&Report{}) // replicas `db8`

Read/Write Splitting

DBResolver performs read/write splitting based on the GORM callback currently in use:

  • For the Query and Row callbacks, replicas are used unless Write mode is specified
  • For the Raw callback, statements are considered read-only and will use replicas if the SQL starts with SELECT

Manual connection switching

// Use Write Mode: read user from sources `db1`
DB.Clauses(dbresolver.Write).First(&user)

// Specify Resolver: read user from `secondary`'s replicas: db8
DB.Clauses(dbresolver.Use("secondary")).First(&user)

// Specify Resolver and Write Mode: read user from `secondary`'s sources: db6 or db7
DB.Clauses(dbresolver.Use("secondary"), dbresolver.Write).First(&user)

Load Balancing

GORM supports load balancing across sources/replicas based on a policy; a policy is any type that implements the following interface:

type Policy interface {
	Resolve([]gorm.ConnPool) gorm.ConnPool
}

Currently only RandomPolicy is implemented, and it is the default option if no policy is specified.
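
A custom policy only needs a Resolve method. The following is a minimal round-robin sketch; note that ConnPool below is a local stand-in so the example runs standalone, while a real implementation would use the gorm.ConnPool type from gorm.io/gorm:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// ConnPool is a local stand-in for gorm.ConnPool, used so this
// sketch runs standalone.
type ConnPool interface{}

// RoundRobinPolicy cycles through the connection pools in order
// instead of picking one at random.
type RoundRobinPolicy struct {
	next uint64
}

func (p *RoundRobinPolicy) Resolve(pools []ConnPool) ConnPool {
	n := atomic.AddUint64(&p.next, 1)
	return pools[(n-1)%uint64(len(pools))]
}

func main() {
	pools := []ConnPool{"replica-1", "replica-2", "replica-3"}
	policy := &RoundRobinPolicy{}
	for i := 0; i < 4; i++ {
		fmt.Println(policy.Resolve(pools))
	}
	// prints replica-1, replica-2, replica-3, replica-1
}
```

It could then be plugged in as Policy: &RoundRobinPolicy{} in dbresolver.Config.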

Connection Pool

DB.Use(
  dbresolver.Register(dbresolver.Config{ /* xxx */ }).
  SetConnMaxIdleTime(time.Hour).
  SetConnMaxLifetime(24 * time.Hour).
  SetMaxIdleConns(100).
  SetMaxOpenConns(200)
)
Comments
  • Upgraded from 1.1.0 to 1.2.0, creates a panic in Clauses:27

    Here is the line where it fails. We are currently using GORM in bux: https://github.com/BuxOrg/bux

    We recently upgraded this package and it now panics in the Query() method: stmt.DB.Callback().Query().Get("gorm:db_resolver")(stmt.DB)

    goroutine 8906 [running]:
    testing.tRunner.func1.2({0x184d840, 0x2b6ceb0})
    	/opt/hostedtoolcache/go/1.17.9/x64/src/testing/testing.go:1209 +0x24e
    testing.tRunner.func1()
    	/opt/hostedtoolcache/go/1.17.9/x64/src/testing/testing.go:1212 +0x218
    panic({0x184d840, 0x2b6ceb0})
    	/opt/hostedtoolcache/go/1.17.9/x64/src/runtime/panic.go:1038 +0x215
    gorm.io/plugin/dbresolver.Operation.ModifyStatement({0x1a74da3, 0xc000f1d801}, 0xc000f1dc00)
    	/home/runner/go/pkg/mod/gorm.io/plugin/[email protected]/clauses.go:27 +0x1c6
    gorm.io/gorm.(*DB).Clauses(0x1ee0850, {0xc000e9da50, 0x1, 0x18d0120})
    	/home/runner/go/pkg/mod/gorm.io/[email protected]/chainable_api.go:32 +0x24f
    

    Additional outputs from debugging: (screenshots omitted)
  • fix #6

    • [x] Do only one thing
    • [x] Non breaking API changes
    • [ ] Tested

    What did this pull request do?

    When the replicas are unavailable but the sources are, SELECT statements are served by the sources; even after the replica service recovers, queries keep using the sources.

    So I want to add an option that forces the use of the replicas, even when the replica service is unavailable.

    fix #6

    Use Case Description

    Our MySQL setup is one master with multiple slaves.

    If the read load is very large, sending all reads to the sources is dangerous.

  • For Read/Write splitting, allow logging of which DSN each query goes to

    Describe the feature

    I am using the Read/Write splitting feature. I would like to log for each query which DSN it goes to (or at least if it goes to a source or replica).

    Motivation

    This would be used for troubleshooting failed or slow DB instances, and for testing.

    Related Issues

    N/A.

  • dbresolver v1.2.3 compilation error

    GORM Playground Link

    https://github.com/go-gorm/playground/pull/1

    Description

    go.mod requires:

    	gorm.io/gen v0.3.16
    	gorm.io/gorm v1.24.0
    	gorm.io/plugin/dbresolver v1.2.3
    

    Compiling produces the following error:

    2022/10/08 10:15:38 ERROR    ▶ 0006 Failed to build the application: # gorm.io/plugin/dbresolver
    /go/pkg/mod/gorm.io/plugin/[email protected]/dbresolver.go:139:18: 
    cannot use map[string]gorm.Stmt{} (value of type map[string]gorm.Stmt) as type map[string]*gorm.Stmt in struct literal
    

  • My lower version of gorm

    My gorm version is: github.com/jinzhu/gorm v1.9.16

    I want to introduce DB resolver into our project to achieve a read/write split and so on.

    But I found that DBResolver doesn't work with the gorm version we are using.

    Can you give me some advice?

  • How to set gorm.Config on the resolver

    My current situation

    I have two MySQL databases that need to be connected and used at the same time, and I want to set DisableForeignKeyConstraintWhenMigrating. Currently only the primary connection gets this setting; it doesn't seem possible to apply it to the *dbresolver.DBResolver.

    var Conn *gorm.DB
    
    func init() {
    	InitMultiDatabase()
    
    	// apply some gorm settings
    	Conn.Config.Apply(&gorm.Config{
    		DisableForeignKeyConstraintWhenMigrating: true,
    		PrepareStmt:                              true,
    	})
    
    	Conn.Logger = logger.NewGormLogger()
    
    	// create the tables
    	Conn.AutoMigrate(
    	 	&model.Customer{},
    	 	&model.CustomerBrand{},
    		&model.FlowNode{},
    		&model.FlowCurrent{},
    		&model.FlowUnknown{},
    	)
    	if !Conn.Migrator().HasTable(model.FlowUnknown{}) {
    		Conn.Migrator().CreateTable(&model.FlowUnknown{})
    	}
    }
    
    func InitMultiDatabase() {
    	var err error
    	Conn, err = gorm.Open(mysql.Open(config.DB_APP.DSN), &gorm.Config{})
    	if err != nil {
    		panic(err)
    	}
    	// configure the primary database's connection pool
    	sqlDB, err := Conn.DB()
    	sqlDB.SetMaxIdleConns(config.Pool.MaxIdleConns)
    	sqlDB.SetMaxOpenConns(config.Pool.MaxOpenConns)
    	sqlDB.SetConnMaxIdleTime(config.Pool.ConnMaxIdleTime)
    	sqlDB.SetConnMaxLifetime(config.Pool.ConnMaxLifetime)
    	if err != nil {
    		panic(err)
    	}
    
    	// route specific tables to a specific database here
    	slover := dbresolver.Register(
    		dbresolver.Config{
    			Sources: []gorm.Dialector{mysql.Open(config.DB_DATA.DSN)},
    		},
    		&model.Agent{},
    		&model.Flow{},
    		&model.Global{},
    		&model.Item{},
    	)
    
    	// connection pool settings
    	slover.SetConnMaxIdleTime(config.Pool.ConnMaxIdleTime).
    		SetConnMaxLifetime(config.Pool.ConnMaxLifetime).
    		SetMaxIdleConns(config.Pool.MaxIdleConns).
    		SetMaxOpenConns(config.Pool.MaxOpenConns)
    
    	// writing it like this panics with a nil pointer
    	slover.Apply(&gorm.Config{
    		DisableForeignKeyConstraintWhenMigrating: true,
    		PrepareStmt:                              true,
    	})
    
    	Conn.Use(slover)
    }
    
  • multi source write

    ClickHouse supports distributed writes through a Distributed engine table.

    dsn1 := "192.168.1.1:9000"
    dsn2 := "192.168.1.2:9000"
    dsn3 := "192.168.1.3:9000"
    dsn4 := "192.168.1.4:9000"

    dbConn, err := gorm.Open(clickhouse.Open(dsn1), &gorm.Config{
    	SkipDefaultTransaction: true,
    })
    if err != nil {
    	fmt.Println(err)
    	return
    }
    
    
    err = dbConn.Use(dbresolver.Register(dbresolver.Config{
    	// `db2` as sources, `db3`/`db4` as replicas
    	Sources: []gorm.Dialector{clickhouse.Open(dsn2), clickhouse.Open(dsn1), clickhouse.Open(dsn3), clickhouse.Open(dsn4)},
    	// sources/replicas load balancing policy
    	Policy: dbresolver.RandomPolicy{},
    }))
    if err != nil {
    	fmt.Println(err)
    	return
    }
    

    Writing data:

    err = dbConn.Transaction(func(tx *gorm.DB) error {
    	if err = tx.Create(&[]module.Tbl{t}).Error; err != nil {
    		return err
    	}
    	// return nil to commit the transaction
    	return nil
    })
    

    There are four ClickHouse instances forming one cluster. When writing through dbConn, the data always goes through dsn1, while reads go through all four DSNs. Why is that? Expected: writes should also pick a node at random.

  • Please update the dependency versions declared in go.mod

    • 1. My project already references the latest gorm, v1.21.12
    • 2. Continuing to use this read/write-splitting package pulls in an old gorm, so I end up with multiple gorm versions locally.
    • 3. Please update this package promptly and tag a new release
    # below are the dependency versions declared by the dbresolver package; please update them
    require (
    	gorm.io/driver/mysql v1.1.0   //  latest is v1.1.1
    	gorm.io/gorm v1.21.9  //  latest is v1.21.12
    )
    
    
  • I want to be able to enable read replicas for transactions as well

    Describe the feature

    We are using ROW LEVEL SECURITY to achieve multi-tenancy. Since RLS is transaction-based, we are unable to use the read-replica feature, as transactions are always forwarded to the write connection. We would like to be able to control this behavior via some flag on the query operation.

  • Connection pool limits have no effect

    The problem

    1. Please look at the pool's maximum-connections setting in the code below; it has no effect. I set the pool's max open connections to 30, but during a concurrency test MySQL shows the client count as the test's concurrency (128) plus existing connections. With the raw database/sql approach, the result is 30 plus existing connections.
    2. Please check the code and tell me whether my code is wrong or this is a bug. Please help resolve this.

    	gormDb, err := gorm.Open(mysql.Open("Dsn"), &gorm.Config{
    	})
    	if err != nil {
    		// the gorm database driver failed to initialize
    		return nil, err
    	}
    	var resolverConf dbresolver.Config
    
    	// if read/write splitting is enabled, configure the read databases (source, read, replicas)
    		resolverConf = dbresolver.Config{
    			Sources:  []gorm.Dialector{mysql.Open("WriteDsn")}, 
    			Replicas: []gorm.Dialector{mysql.Open("ReadDsn")},  
    			Policy:   dbresolver.RandomPolicy{}, 
    		}
    		
    	err = gormDb.Use(dbresolver.Register(resolverConf, "").SetConnMaxIdleTime(time.Minute).
    		SetConnMaxLifetime(10* time.Second).
    		SetMaxIdleConns(10).
    		SetMaxOpenConns(30))   // the pool's max open connections is set to 30 here
    	if err != nil {
    		return nil, err
    	}
    	
    
    	// concurrency benchmark that also exercises the connection pool
    	var wg sync.WaitGroup
    	// run this test with max concurrency set to 128; the problem:
    	// with raw database/sql, `show processlist` during the run shows 30 + existing connections
    	// with gorm, the pool limit has no effect and the connection count is 128 + existing connections
    
    	var conNum = make(chan uint16, 128)
    	wg.Add(1000)
    	time1 := time.Now()
    	for i := 1; i <= 1000; i++ {
    		conNum <- 1
    		go func() {
    			defer func() {
    				<-conNum
    				wg.Done()
    			}()
    			var received []tb_code_lists
    			variable.GormDbMysql.Table("tb_code_list").Select("code", "name", "company_name", "province", "city", "remark", "status", "created_at", "updated_at").Where("id<=?", 3500).Find(&received)
    			//fmt.Printf("rows read this pass: %d\n", len(received))
    		}()
    	}
    	wg.Wait()
    	fmt.Printf("elapsed (ms): %d\n", time.Now().Sub(time1).Milliseconds())
    
    
  • Bump gorm.io/gorm from 1.24.0 to 1.24.2

    Bumps gorm.io/gorm from 1.24.0 to 1.24.2.

  • Bump gorm.io/gorm from 1.24.0 to 1.24.3

    Bumps gorm.io/gorm from 1.24.0 to 1.24.3.

  • An error occurred after the MySQL active/standby switch: The MySQL server is running with the --read-only option so it cannot execute this statement

    GORM Playground Link

    https://github.com/go-gorm/playground/pull/1

    Description

    1. Stop the primary database directly, then perform the active/standby switch
    2. The log prints: The MySQL server is running with the --read-only option so it cannot execute this statement
    3. The read_only parameter is confirmed to be set to 0
    4. MySQL 5.6; dsn: master; db2Dsn: slave

               err = db.Use(dbresolver.Register(dbresolver.Config{
    		Replicas: []gorm.Dialector{mysql.Open(db2Dsn), mysql.Open(dsn)},
    		// sources/replicas load balancing policy; random by default
    		// todo: weighted random between reading master and slaves; customize Policy to adjust
    		Policy: dbresolver.RandomPolicy{},
    	}, tabs...).
    		SetMaxOpenConns(slaveCnf.MaxOpenCons).
    		SetMaxIdleConns(slaveCnf.MaxIdleCons).
    		SetConnMaxLifetime(time.Duration(slaveCnf.MaxLifetime) * time.Second))
    


  • Fallback Policy for Replicas

    Describe the feature

    Currently the only available policy is random. We need a fallback policy: if the first replica fails, fall back to the second replica.

    E.g. Source: M1 (Master 1); Replicas: R1 (Replica 1), R2 (Replica 2)

    • Normal Scenario - Writes Served by M1, Reads Served by R1
    • Disaster Master Down - Writes don't work, Reads Served by R1
    • R1 Promoted to Master (R1 becomes M1) - Writes Served by M1, Reads Fail. In this Scenario we want Reads to Shift to R2.

    The first two scenarios are handled today; we need a fallback policy for the third.

    We have replicas that would be in the process of being promoted to master during a disaster. Thus the first replica may become unavailable while the second continues to function. In such a scenario a fallback policy would be required.
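
    A fallback policy along these lines could be sketched against the Policy interface from the README. This is only a sketch: ConnPool and the Healthy probe are local stand-ins so it runs standalone; with real pools one might type-assert to *sql.DB and call Ping instead:

```go
package main

import "fmt"

// ConnPool is a local stand-in for gorm.ConnPool.
type ConnPool interface{}

// healthChecker is a hypothetical probe; with real pools you could
// type-assert to *sql.DB and call Ping instead.
type healthChecker interface{ Healthy() bool }

// FallbackPolicy returns the first pool that reports healthy,
// so reads shift to R2 when R1 goes down.
type FallbackPolicy struct{}

func (FallbackPolicy) Resolve(pools []ConnPool) ConnPool {
	for _, p := range pools {
		if hc, ok := p.(healthChecker); ok && !hc.Healthy() {
			continue // skip unhealthy replicas
		}
		return p
	}
	return pools[0] // nothing healthy: fall back to the first pool
}

type replica struct {
	name string
	up   bool
}

func (r replica) Healthy() bool { return r.up }

func main() {
	pools := []ConnPool{replica{"R1", false}, replica{"R2", true}}
	chosen := FallbackPolicy{}.Resolve(pools)
	fmt.Println(chosen.(replica).name) // prints R2
}
```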

  • add change service operation hook

    Describe the feature

    Add an option so that when the operation type switches between write and read, a user-supplied function is called.

    Motivation

    I'm writing trace logs and need to know the current operation status (write / read).

    With such a hook, I could know the current operation status.
