A powerful MySQL toolset with Go

go-mysql

A pure go library to handle MySQL network protocol and replication.

Call for Committer/Maintainer

Sorry, I don't have enough time to maintain this project fully. If you like this project and want to help me improve it continuously, please contact me through email ([email protected]).

Requirement: in the email, please list some things (including but not limited to the items below) to convince me that we can work together.

  • Your GitHub ID
  • Your previous contributions to go-mysql, including PRs or issues.
  • The reasons why you can improve go-mysql.

Replication

The replication package handles the MySQL replication protocol, like python-mysql-replication.

You can use it as a MySQL slave to sync the binlog from the master and then act on the changes, e.g. updating a cache.

Example

import (
	"context"
	"os"
	"time"

	"github.com/siddontang/go-mysql/mysql"
	"github.com/siddontang/go-mysql/replication"
)
// Create a binlog syncer with a unique server ID; it must be different from the
// server IDs of the other MySQL servers. The flavor is mysql or mariadb.
cfg := replication.BinlogSyncerConfig {
	ServerID: 100,
	Flavor:   "mysql",
	Host:     "127.0.0.1",
	Port:     3306,
	User:     "root",
	Password: "",
}
syncer := replication.NewBinlogSyncer(cfg)

// Start sync with specified binlog file and position
streamer, _ := syncer.StartSync(mysql.Position{Name: binlogFile, Pos: binlogPos})

// Or you can start GTID-based replication instead:
// streamer, _ := syncer.StartSyncGTID(gtidSet)
// a MySQL GTID set looks like "de278ad0-2106-11e4-9f8e-6edd0ca20947:1-2"
// a MariaDB GTID set looks like "0-1-100"

for {
	ev, _ := streamer.GetEvent(context.Background())
	// Dump event
	ev.Dump(os.Stdout)
}

// Or you can use a timeout context:
for {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	ev, err := streamer.GetEvent(ctx)
	cancel()

	if err == context.DeadlineExceeded {
		// meet timeout
		continue
	}

	ev.Dump(os.Stdout)
}
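
For the GTID-based start shown in the comments above, parse the GTID set from its string form first. A minimal sketch using the mysql package's ParseGTIDSet helper (the GTID string is just an example):

// Parse a GTID set for the chosen flavor ("mysql" or "mariadb"), then start
// syncing from it instead of a file/position pair.
gtidSet, err := mysql.ParseGTIDSet("mysql", "de278ad0-2106-11e4-9f8e-6edd0ca20947:1-2")
if err != nil {
	// handle the invalid GTID string
}
streamer, _ := syncer.StartSyncGTID(gtidSet)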

The output of ev.Dump looks like:

=== RotateEvent ===
Date: 1970-01-01 08:00:00
Log position: 0
Event size: 43
Position: 4
Next log name: mysql.000002

=== FormatDescriptionEvent ===
Date: 2014-12-18 16:36:09
Log position: 120
Event size: 116
Version: 4
Server version: 5.6.19-log
Create date: 2014-12-18 16:36:09

=== QueryEvent ===
Date: 2014-12-18 16:38:24
Log position: 259
Event size: 139
Salve proxy ID: 1
Execution time: 0
Error code: 0
Schema: test
Query: DROP TABLE IF EXISTS `test_replication` /* generated by server */

Canal

Canal is a package that can sync your MySQL data to other systems, such as Redis or Elasticsearch.

First, canal dumps your MySQL data, then it incrementally syncs the changed data using the binlog.

You must use the ROW binlog format, and the full binlog row image is preferred, because errors may occur when a primary key is changed in an UPDATE under the minimal or noblob row image.

A simple example:

import (
	"github.com/siddontang/go-log/log"
	"github.com/siddontang/go-mysql/canal"
)

cfg := canal.NewDefaultConfig()
cfg.Addr = "127.0.0.1:3306"
cfg.User = "root"
// We only care about table canal_test in database test
cfg.Dump.TableDB = "test"
cfg.Dump.Tables = []string{"canal_test"}

c, err := canal.NewCanal(cfg)

type MyEventHandler struct {
	canal.DummyEventHandler
}

func (h *MyEventHandler) OnRow(e *canal.RowsEvent) error {
	log.Infof("%s %v\n", e.Action, e.Rows)
	return nil
}

func (h *MyEventHandler) String() string {
	return "MyEventHandler"
}

// Register a handler to handle RowsEvent
c.SetEventHandler(&MyEventHandler{})

// Start canal
c.Run()

See go-mysql-elasticsearch for an example of syncing MySQL data into Elasticsearch.
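
If you are unsure whether the source is configured as required, you can check the binlog settings mentioned above before starting canal. A minimal sketch using the client package from the next section (host, credentials and expected values are illustrative; error handling omitted):

// Verify that the source uses the ROW binlog format with a FULL row image.
conn, _ := client.Connect("127.0.0.1:3306", "root", "", "test")

r, _ := conn.Execute("SHOW GLOBAL VARIABLES LIKE 'binlog_format'")
format, _ := r.GetString(0, 1)

r, _ = conn.Execute("SHOW GLOBAL VARIABLES LIKE 'binlog_row_image'")
image, _ := r.GetString(0, 1)

if format != "ROW" || image != "FULL" {
	// fix binlog_format / binlog_row_image on the MySQL server first
}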

Client

The client package provides a simple MySQL connection driver that you can use to communicate with a MySQL server.

Example

import (
	"github.com/siddontang/go-mysql/client"
	"github.com/siddontang/go-mysql/mysql"
)

// Connect to MySQL at 127.0.0.1:3306 with user root, an empty password, and database test
conn, _ := client.Connect("127.0.0.1:3306", "root", "", "test")

// Or use an SSL/TLS connection if the MySQL server supports TLS
//conn, _ := client.Connect("127.0.0.1:3306", "root", "", "test", func(c *client.Conn) {c.UseSSL(true)})

// Or set your own client-side certificates for identity verification
//tlsConfig := client.NewClientTLSConfig(caPem, certPem, keyPem, false, "your-server-name")
//conn, _ := client.Connect("127.0.0.1:3306", "root", "", "test", func(c *client.Conn) {c.SetTLSConfig(tlsConfig)})

conn.Ping()

// Insert
r, _ := conn.Execute(`insert into mytable (id, name) values (1, "abc")`)

// Get last insert id
println(r.InsertId)
// Or affected rows count
println(r.AffectedRows)

// Select
r, err := conn.Execute(`select id, name from mytable where id = 1`)

// Close the result to let its memory be reused (not required, but useful)
defer r.Close()

// Handle resultset
v, _ := r.GetInt(0, 0)
v, _ = r.GetIntByName(0, "id")

// Direct access to fields
for _, row := range r.Values {
	for _, val := range row {
		_ = val.Value() // interface{}
		// or
		if val.Type == mysql.FieldValueTypeFloat {
			_ = val.AsFloat64() // float64
		}
	}   
}
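
Execute also accepts placeholder arguments, in which case the statement is prepared and executed in one call. A brief sketch (mytable is the same example table as above; behaviour may vary slightly between go-mysql versions):

// Same query as above, but with a placeholder argument.
r, _ = conn.Execute(`select id, name from mytable where id = ?`, 1)
v, _ = r.GetIntByName(0, "id")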

Tested MySQL versions for the client include:

  • 5.5.x
  • 5.6.x
  • 5.7.x
  • 8.0.x

Server

The server package supplies a framework for implementing a simple MySQL server that can handle packets from a MySQL client. You can use it to build your own MySQL proxy. The server connection is compatible with MySQL 5.5, 5.6, 5.7, and 8.0, so most MySQL clients should be able to connect to it without modifications.

Example

import (
	"github.com/siddontang/go-mysql/server"
	"net"
)

l, _ := net.Listen("tcp", "127.0.0.1:4000")

c, _ := l.Accept()

// Create a connection with user root and an empty password.
// You can use your own handler to handle command here.
conn, _ := server.NewConn(c, "root", "", server.EmptyHandler{})

for {
	conn.HandleCommand()
}

Then, in another shell:

mysql -h127.0.0.1 -P4000 -uroot -p 
// Because the empty handler does nothing, the MySQL client can only connect to the proxy server here. :-)
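
To make the server actually answer queries, replace EmptyHandler with your own handler. A minimal sketch that embeds server.EmptyHandler and overrides HandleQuery; the OK-on-nil-result behaviour is an assumption, so build a mysql.Result yourself if you need to return rows:

import (
	"log"

	"github.com/siddontang/go-mysql/mysql"
	"github.com/siddontang/go-mysql/server"
)

// LoggingHandler logs every query and keeps server.EmptyHandler's default
// behaviour for all other commands.
type LoggingHandler struct {
	server.EmptyHandler
}

func (h LoggingHandler) HandleQuery(query string) (*mysql.Result, error) {
	log.Printf("received query: %s", query)
	// Returning a nil result is assumed to be answered with a plain OK packet.
	return nil, nil
}

// Then pass it to NewConn instead of server.EmptyHandler{}:
// conn, _ := server.NewConn(c, "root", "", LoggingHandler{})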

NewConn() will use default server configurations:

  1. automatically generate default server certificates and enable TLS/SSL support.
  2. support the three mainstream authentication methods 'mysql_native_password', 'caching_sha2_password', and 'sha256_password', with 'mysql_native_password' as the default.
  3. use an in-memory credential provider to store the user and password.

To customize the server configuration, use NewServer() and create the connection via NewCustomizedConn().
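
A hedged sketch of that customized path; the NewServer arguments (server version, collation ID, default auth method, RSA public key, TLS config), the constants, and the in-memory provider below are assumptions based on the server and mysql packages and may differ between versions:

// Customized server settings. With mysql_native_password the public key and
// TLS config are assumed to be optional; supply them (see NewServerTLSConfig)
// for sha256_password or caching_sha2_password over TLS.
svr := server.NewServer("5.7.0", mysql.DEFAULT_COLLATION_ID, mysql.AUTH_NATIVE_PASSWORD, nil, nil)

// Store credentials in an in-memory provider instead of a single user/password pair.
provider := server.NewInMemoryProvider()
provider.AddUser("root", "secret")

conn, _ := server.NewCustomizedConn(c, svr, provider, server.EmptyHandler{})

for {
	conn.HandleCommand()
}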

Failover

Failover supports promoting a new master and letting the other slaves replicate from it automatically when the old master goes down.

Failover supports MySQL >= 5.6.9 with GTID mode. If you use a lower version, e.g. MySQL 5.0 - 5.5, please use MHA or orchestrator instead.

Failover also supports MariaDB >= 10.0.9 with GTID mode.

Why only GTID? Supporting failover without GTID mode is very hard, because a slave cannot find the proper binlog filename and position on the new master. Although many companies still use MySQL 5.0 - 5.5, I think upgrading to MySQL 5.6 or higher is easy.

Driver

The driver package lets you use go-mysql with Go's database/sql like other drivers. A simple example:

package main

import (
	"database/sql"

	_ "github.com/siddontang/go-mysql/driver"
)

func main() {
	// dsn format: "user:password@addr?dbname"
	dsn := "root@127.0.0.1:3306?test"
	db, _ := sql.Open("mysql", dsn)
	db.Close()
}
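
Once the driver is imported for its side effects, queries go through the standard database/sql API. A short sketch (the table name is illustrative):

db, _ := sql.Open("mysql", "root@127.0.0.1:3306?test")
defer db.Close()

var name string
// database/sql prepares and executes the statement through the go-mysql driver.
if err := db.QueryRow("select name from mytable where id = ?", 1).Scan(&name); err != nil {
	// handle the error
}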

We pass all the tests in https://github.com/bradfitz/go-sql-test using the go-mysql driver. :-)

Donate

If you like the project and want to buy me a cola, you can do so through:

PayPal or WeChat (微信)

Feedback

go-mysql is still in development, your feedback is very welcome.

Gmail: [email protected]

Comments
  • ERRO[0006] close sync with err: data len 0 < expected 1

    When using canal with a MySQL 5.7 binlog, an error is reported around GTID 85649a98-8b3e-11e5-b9c9-5510536e2f9f:240:

    2018/05/08 00:07:11 [Info] HjEventHandler.go:27 mysql gtid set : 85649a98-8b3e-11e5-b9c9-5510536e2f9f:242 ERRO[0006] close sync with err: data len 0 < expected 1 INFO[0006] table structure changed, clear table cache: huajuan.hj_goods_preferential

    2018/05/08 00:07:11 [Info] HjEventHandler.go:27 mysql gtid set : 85649a98-8b3e-11e5-b9c9-5510536e2f9f:243 ERRO[0006] canal start sync binlog err: data len 0 < expected 1

    I have uploaded the sample binlog file; I hope someone can take a look: mysql-bin.000003.zip

  • Fix replication of TIMESTAMP on non-UTC machines

    To get the timestamp in the timezone of your choice, add the following to the codebase consuming this library (substituting your timezone):

    func init() {
    	replication.TimeStringLocation = time.UTC
    }
    

    Fixes: #63

    Also see: github/gh-ost#182 and Shopify/ghostferry#23

    review: @siddontang

  • Add support for MySQL 8.0 and support for TLS/SSL for both Server and Client

    This PR:

    • Added support for MySQL 8.0 for the Client: starting from MySQL 8.0.4, MySQL uses 'caching_sha2_password' as default authentication method, which caused the connection to fail. This PR fixed the problem by supporting three mainstream auth methods 'mysql_native_password', 'caching_sha2_password', and 'sha256_password', which covers a wide range of MySQL versions from MySQL 5.5 to MySQL 8.0.
    • Added support of new auth methods for the Server: the server now accepts 'mysql_native_password', 'caching_sha2_password', and 'sha256_password' as auth methods. Other old and deprecated auth methods are not supported.
    • Supports TLS/SSL for the Client and the Server: the new design maintains compatibility with the previous releases. Customizations are optional.

    Since the upgrade of auth methods affects the Client and Server design, I made lots of changes to the code while trying to reuse the old code. Some minor existing bugs are fixed too. For instance, the buffer reader for the net.Conn is now removed because it causes the SSL handshake to fail.

    Changing and refactoring can be risky, though, as they can introduce new bugs. However, I tried my best to keep the old tests passing and added new feature tests for different MySQL versions using docker compose to increase test coverage. For now, all test cases pass.

  • Allow TLS connections in the driver

    Opening this for discussion and feedback. I'm not expecting this to be merged in as-is.

    • This mostly exposes what is already elsewhere in this codebase to the driver

    • Allows for an optional ssl= URL parameter (query string) with values true and custom. A database must be specified for this to work.

    • true just does c.UseSSL(true) on the connection

    • custom uses a package level variable of customTLSConfig to do c.SetTLSConfig(customTLSConfig)

    • That variable is set via a new function, SetCustomTLSConfig

    • To use a custom config you'd do a full import of the driver (not for side-effects only) and then do something like:

        driver.SetCustomTLSConfig(CaPem, make([]byte, 0), make([]byte, 0), false, "my.domain.name")
      

    For what I need, the basic TLS support is enough (ssl=true and c.UseSSL(true)), so if need be I could reduce this PR down to the first commit. However, since we are basically exposing what is already supported elsewhere in this codebase to the driver, I thought I should get the custom TLS config support into a usable state (this PR).

    Things that I could do with feedback/opinions on:

    • Since the DSN style for this driver uses a ? as a separator/indicator for the database, the ssl parameter only works and makes sense if a database is specified. E.g.:

        `?mydatabase&ssl=true`
      

      vs

        `?ssl=true`
      

      Which would try to connect to a database called ssl=true.

      I am ok with this limitation/requirement.

    • I have also tried to keep the current style for parsing the URL parameters as opposed to re-working everything to use url.Parse. I assume there is a reason why that's not used and why ? is used instead of / for the database separator/indicator; at the very least, I understand that switching to / would be a breaking change for many folks.

    • I've allowed the Cert and Key to be optional in NewClientTLSConfig. I think that is a good idea, but I don't know if I've gone about it the best way

    • I'm just allowing for a single custom TLSConfig. This means that if a program uses the driver then only one custom config can be set - which might not be desirable if that driver is connecting to multiple endpoints. For me, right now, that's fine. But I'm willing to look at this some more

    • SetCustomTLSConfig is a very simple pass-through function. I welcome suggestions on how to improve. But it means I can just import the driver and still take advantage of NewClientTLSConfig

  • Add more MySQL-8.0 meta data to GTIDEvent and TableMapEvent

    This PR adds more metadata introduced in MySQL 8.0:

    GTIDEvent:

    • immediate/original commit timestamp for this trx
    • transaction length (in bytes) of all binlog events of this trx, including this GTIDEvent; this is useful for detecting transaction boundaries
    • immediate/original server version

    TableMapEvent:

    • column names
    • primary key info
    • signedness info for numeric columns
    • collation info for character and enum/set columns
    • geometry type info

    Hopefully these can help solve #427 when using MySQL 8.0.

    Example GTIDEvent dump (MySQL-8.0):

    === GTIDEvent === 
    Date: 2020-02-01 19:15:26
    Log position: 3440812
    Event size: 88
    Commit flag: 1                                                                                                                                           
    GTID_NEXT: 5aa72a7f-44a8-11ea-947f-0242ac190002:55
    LAST_COMMITTED: 54
    SEQUENCE_NUMBER: 55
    Immediate commmit timestamp: 1580555726309342 (2020-02-01T19:15:26.309342+08:00)
    Orignal commmit timestamp: 0 (<n/a>)
    Transaction length: 197
    Immediate server version: 80019
    Orignal server version: 0
    

    Example GTIDEvent dump for the same event (MySQL-5.7):

    === GTIDEvent ===
    Date: 2020-02-01 19:15:26
    Log position: 15156
    Event size: 65
    Commit flag: 1
    GTID_NEXT: 5aa72a7f-44a8-11ea-947f-0242ac190002:55
    LAST_COMMITTED: 49                                                                                                                                       
    SEQUENCE_NUMBER: 50
    Immediate commmit timestamp: 0 (<n/a>)
    Orignal commmit timestamp: 0 (<n/a>)
    Transaction length: 0
    Immediate server version: 0
    Orignal server version: 0
    
    

    Example TableMapEvent dump (MySQL-8.0):

    === TableMapEvent ===
    Date: 2020-03-10 15:24:58
    Log position: 78747
    Event size: 580
    TableID: 118
    TableID size: 6
    Flags: 1
    Schema: test
    Table: _types
    Column count: 42
    Column type: 
    00000000  10 01 01 02 09 03 08 f6  04 05 01 02 09 03 08 f6  |................|
    00000010  04 05 0d 0a 13 13 12 12  11 11 fe 0f fe 0f fc fc  |................|
    00000020  fc fc fc fc fc fc fe fe  ff f5                    |..........|
    NULL bitmap: 
    00000000  00 00 fc c0 ff 03                                 |......|
    Signedness bitmap: 
    00000000  00 7f 80                                          |...|
    Default charset: []
    Column charset: [224 224 63 63 63 63 63 63 224 224 224 224]
    Set str value: [[1 2]]
    Enum str value: [[a b]]
    Column name: [b_bit n_boolean n_tinyint n_smallint n_mediumint n_int n_bigint n_decimal n_float n_double nu_tinyint nu_smallint nu_mediumint nu_int nu_bigint nu_decimal nu_float nu_double t_year t_date t_time t_ftime t_datetime t_fdatetime t_timestamp t_ftimestamp c_char c_varchar c_binary c_varbinary c_tinyblob c_blob c_mediumblob c_longblob c_tinytext c_text c_mediumtext c_longtext e_enum s_set g_geometry j_json]
    Geometry type: [0]
    Primary key: []
    Primary key prefix: []
    Enum/set default charset: [224]
    Enum/set column charset: []
    
    

    Example TableMapEvent dump for the same event (MySQL-5.7):

    === TableMapEvent ===
    Date: 2020-03-10 15:24:58
    Log position: 31058
    Event size: 133
    TableID: 117
    TableID size: 6
    Flags: 1
    Schema: test
    Table: _types
    Column count: 42
    Column type: 
    00000000  10 01 01 02 09 03 08 f6  04 05 01 02 09 03 08 f6  |................|
    00000010  04 05 0d 0a 13 13 12 12  11 11 fe 0f fe 0f fc fc  |................|
    00000020  fc fc fc fc fc fc fe fe  ff f5                    |..........|
    NULL bitmap: 
    00000000  00 00 fc c0 ff 03                                 |......|
    Signedness bitmap: 
    Default charset: []
    Column charset: []
    Set str value: []
    Enum str value: []
    Column name: []
    Geometry type: []
    Primary key: []
    Primary key prefix: []
    Enum/set default charset: []
    Enum/set column charset: []
    
    

    ref:

    • https://mysqlhighavailability.com/more-metadata-is-written-into-binary-log/
    • https://mysqlhighavailability.com/taking-advantage-of-new-transaction-length-metadata/
    • https://dev.mysql.com/doc/refman/8.0/en/replication-options-binary-log.html#sysvar_binlog_row_metadata
    • https://dev.mysql.com/doc/dev/mysql-server/latest/classbinary__log_1_1Gtid__event.html
    • https://dev.mysql.com/doc/dev/mysql-server/latest/classbinary__log_1_1Table__map__event.html
  • TWEAK: use gtid from dump position, and incrementally update it with binlog syncer

    This is an improvement/bugfix to the previous PR.

    1. In the previous PR, we continued with file-based binlog replication from a GTID dump, which introduced a special case. Instead, we can use global.gtid_purged.

    2. Reimplement SyncedGTIDSet, because a single GTID is generally useless for restoring sync.

    3. Rename some gtid variables to gset (a set of GTIDs) for clarity.

    #260

  • Allow logger override

    Problem

    Ideally, libraries wouldn't have a logger at all, but this is a smaller change: it allows a user to set the Logger, and the previous behaviour is still available if needed.

    Closes: https://github.com/go-mysql-org/go-mysql/issues/698 and maybe others

    Feedback?

    Should errors ever be logged, or just returned?

  • parse gtid for parseHandler if MySQL works in GTID_MODE, and begin to startWithGTID after mysqldump is done

    Purpose of This PR:

    1. If MySQL has GTID_MODE=ON, canal will start to replicate with GTID after mysqldump is done.
    2. Add one more func, GtidSet(), to parseHandler to save the GTID set after parsing the GTID from the mysqldump output.
  • Added test showing that the decimal decode error.

    I am getting some very, very small numbers instead of the expected ones. The issue turned out to be with the initial byte: when the number was negative and small enough to fit in one byte, the initial byte size would be 1, but we were using a 4-byte mask to mask out a 1-byte value, leaving the mask as part of the value and causing the error. This was only an issue when the number was negative, since the mask was the two's complement; for positive numbers the mask is zero, so it did not contribute to the final value.

    I've added test cases that show the errors (the negative values) as well as test cases that were working.

  • TWEAK: do not set global log level in NewBinlogSyncer

    This reverts https://github.com/siddontang/go-mysql/commit/0baad4d700af91f9031f5b4e3135f4eb53ed35ce.

    Setting a global log level in an initializer causes unwanted side effects in many cases. Users who need this should call log.SetLevelByString in the main program only.

  • add master slave heartbeat and kill invalid thread id

    Due to network jitter or other reasons, the MySQL server may not kill an invalid canal database connection, which wastes resources. With the heartbeat set, canal proactively kills the invalid connection and reconnects to the database.

  • canal can't parse value of ddl

    canal.valueExpr does not work:

    ALTER TABLE `sometable` MODIFY COLUMN `some_col` varchar(1024) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '' COMMENT 'comments' AFTER `other_col`
    

    becomes

    ALTER TABLE `sometable` MODIFY COLUMN `some_col` VARCHAR(1024) CHARACTER SET UTF8 COLLATE utf8_bin NOT NULL DEFAULT  COMMENT  AFTER `other_col`
    

    After (*ast.AlterTableStmt).Restore(ctx *format.RestoreCtx), the default value and comment value are lost.

    change "github.com/pingcap/parser" test_driver to public or copy the implements to canal?

  • canal: handling binlogs deleted by cloud RDS platforms

    https://github.com/alibaba/canal/issues/727 Cloud RDS platforms move binlogs to cloud storage after a certain time, so the local binlog disappears, and the larger the binlog, the sooner it is gone. If position-based consumption breaks and the repair takes a long time, the position may no longer be consumable after recovery. To keep consumption continuous, PR #741 was added:
    when a binlog file has disappeared, it is downloaded locally via a file-download method, parsed, and then pushed to the EventHandler. This enables automatic switching between offline and online binlogs and guarantees the continuity of canal consumption. Each cloud platform can implement the file-download method with its own API.
