go-mysql

A pure go library to handle MySQL network protocol and replication.

How to migrate to this repo

To switch your repo to this package, it is enough to add this replace directive to your go.mod:

replace github.com/siddontang/go-mysql => github.com/go-mysql-org/go-mysql v1.2.1

v1.2.1 is the latest tag in this repo; feel free to choose the version you want.

Changelog

This repo maintains a changelog.


Content

Replication

The replication package handles the MySQL replication protocol, similar to python-mysql-replication.

You can use it as a MySQL slave to sync the binlog from the master and then act on the changes, e.g. updating a cache.

Example

import (
	"context"
	"os"
	"time"

	"github.com/go-mysql-org/go-mysql/mysql"
	"github.com/go-mysql-org/go-mysql/replication"
)
// Create a binlog syncer with a unique server id; the server id must be
// different from that of any other MySQL server. Flavor is mysql or mariadb.
cfg := replication.BinlogSyncerConfig {
	ServerID: 100,
	Flavor:   "mysql",
	Host:     "127.0.0.1",
	Port:     3306,
	User:     "root",
	Password: "",
}
syncer := replication.NewBinlogSyncer(cfg)

// Start sync with specified binlog file and position
streamer, _ := syncer.StartSync(mysql.Position{Name: binlogFile, Pos: binlogPos})

// Or you can start GTID-based replication like this:
// streamer, _ := syncer.StartSyncGTID(gtidSet)
// A MySQL GTID set looks like "de278ad0-2106-11e4-9f8e-6edd0ca20947:1-2"
// A MariaDB GTID set looks like "0-1-100"

for {
	ev, _ := streamer.GetEvent(context.Background())
	// Dump event
	ev.Dump(os.Stdout)
}

// or we can use a timeout context
for {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	ev, err := streamer.GetEvent(ctx)
	cancel()

	if err == context.DeadlineExceeded {
		// meet timeout
		continue
	}

	ev.Dump(os.Stdout)
}

The output looks like this:

=== RotateEvent ===
Date: 1970-01-01 08:00:00
Log position: 0
Event size: 43
Position: 4
Next log name: mysql.000002

=== FormatDescriptionEvent ===
Date: 2014-12-18 16:36:09
Log position: 120
Event size: 116
Version: 4
Server version: 5.6.19-log
Create date: 2014-12-18 16:36:09

=== QueryEvent ===
Date: 2014-12-18 16:38:24
Log position: 259
Event size: 139
Slave proxy ID: 1
Execution time: 0
Error code: 0
Schema: test
Query: DROP TABLE IF EXISTS `test_replication` /* generated by server */
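As an aside on the GTID strings used with StartSyncGTID: a single MySQL GTID entry has a simple "uuid:start-stop" shape. The sketch below is illustrative only — parseGTIDEntry is my own helper, and in practice you would call this library's mysql.ParseGTIDSet instead:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// gtidInterval is a sketch of one entry in a MySQL GTID set,
// e.g. "de278ad0-2106-11e4-9f8e-6edd0ca20947:1-2".
type gtidInterval struct {
	UUID        string
	Start, Stop int64
}

// parseGTIDEntry splits a single entry into its server UUID and interval.
// Illustrative only: mysql.ParseGTIDSet is the library's real parser.
func parseGTIDEntry(s string) (gtidInterval, error) {
	uuid, rng, ok := strings.Cut(s, ":")
	if !ok {
		return gtidInterval{}, fmt.Errorf("missing ':' in %q", s)
	}
	lo, hi, ok := strings.Cut(rng, "-")
	if !ok {
		hi = lo // a single transaction id such as "uuid:5"
	}
	start, err := strconv.ParseInt(lo, 10, 64)
	if err != nil {
		return gtidInterval{}, err
	}
	stop, err := strconv.ParseInt(hi, 10, 64)
	if err != nil {
		return gtidInterval{}, err
	}
	return gtidInterval{UUID: uuid, Start: start, Stop: stop}, nil
}

func main() {
	iv, err := parseGTIDEntry("de278ad0-2106-11e4-9f8e-6edd0ca20947:1-2")
	fmt.Println(iv, err)
}
```

mysql.ParseGTIDSet("mysql", s) additionally handles comma-separated entries and multi-interval GTIDs, which this sketch ignores.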

Canal

Canal is a package that can sync your MySQL data into other systems, like Redis or Elasticsearch.

First, canal dumps your existing MySQL data, then it syncs changed data incrementally using the binlog.

You must use the ROW binlog format, and the full binlog row image is preferred: with the minimal or noblob row image, errors may occur when the primary key changes in an update.
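To check that a server meets these requirements, you can inspect the two relevant system variables (standard MySQL statements, shown here as a convenience; not part of canal):

```sql
-- canal requires row-based binlog events
SHOW VARIABLES LIKE 'binlog_format';    -- must be ROW
-- FULL is preferred; MINIMAL/NOBLOB can break primary-key updates
SHOW VARIABLES LIKE 'binlog_row_image';
```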

A simple example:

package main

import (
	"github.com/siddontang/go-log/log"
	"github.com/go-mysql-org/go-mysql/canal"
)

type MyEventHandler struct {
	canal.DummyEventHandler
}

func (h *MyEventHandler) OnRow(e *canal.RowsEvent) error {
	log.Infof("%s %v\n", e.Action, e.Rows)
	return nil
}

func (h *MyEventHandler) String() string {
	return "MyEventHandler"
}

func main() {
	cfg := canal.NewDefaultConfig()
	cfg.Addr = "127.0.0.1:3306"
	cfg.User = "root"
	// We only care about table canal_test in database test
	cfg.Dump.TableDB = "test"
	cfg.Dump.Tables = []string{"canal_test"}

	c, err := canal.NewCanal(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Register a handler to handle RowsEvent
	c.SetEventHandler(&MyEventHandler{})

	// Start canal
	c.Run()
}

You can see go-mysql-elasticsearch for an example of syncing MySQL data into Elasticsearch.

Client

The client package provides a simple MySQL connection driver that you can use to communicate with a MySQL server.

Example

import (
	"github.com/go-mysql-org/go-mysql/client"
)

// Connect MySQL at 127.0.0.1:3306, with user root, an empty password and database test
conn, _ := client.Connect("127.0.0.1:3306", "root", "", "test")

// Or to use SSL/TLS connection if MySQL server supports TLS
//conn, _ := client.Connect("127.0.0.1:3306", "root", "", "test", func(c *Conn) {c.UseSSL(true)})

// Or to set your own client-side certificates for identity verification for security
//tlsConfig := NewClientTLSConfig(caPem, certPem, keyPem, false, "your-server-name")
//conn, _ := client.Connect("127.0.0.1:3306", "root", "", "test", func(c *Conn) {c.SetTLSConfig(tlsConfig)})

conn.Ping()

// Insert
r, _ := conn.Execute(`insert into table (id, name) values (1, "abc")`)

// Get last insert id
println(r.InsertId)
// Or affected rows count
println(r.AffectedRows)

// Select
r, err := conn.Execute(`select id, name from table where id = 1`)

// Close the result to reuse its memory (not strictly necessary, but useful)
defer r.Close()

// Handle resultset
v, _ := r.GetInt(0, 0)
v, _ = r.GetIntByName(0, "id")

// Direct access to fields
for _, row := range r.Values {
	for _, val := range row {
		_ = val.Value() // interface{}
		// or
		if val.Type == mysql.FieldValueTypeFloat {
			_ = val.AsFloat64() // float64
		}
	}   
}

Tested MySQL versions for the client include:

  • 5.5.x
  • 5.6.x
  • 5.7.x
  • 8.0.x

Example for SELECT streaming (since v1.1.1)

You can also use streaming for large SELECT responses. The callback function is called for every result row, without storing the whole resultset in memory. result.Fields will be filled before the first callback call.

// ...
var result mysql.Result
err := conn.ExecuteSelectStreaming(`select id, name from table LIMIT 100500`, &result, func(row []mysql.FieldValue) error {
    for idx, val := range row {
    	field := result.Fields[idx]
    	// You must not save FieldValue.AsString() value after this callback is done.
    	// Copy it if you need.
    	// ...
    }
    return nil
})

// ...
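The "must not save AsString() after the callback" rule exists because the row buffer may be reused. A stdlib-only sketch of the same pitfall, with names and the buffer being illustrative rather than the library's internals:

```go
package main

import "fmt"

func main() {
	buf := []byte("alice") // stands in for a reused network buffer

	saved := buf                          // aliases the buffer: unsafe to keep
	copied := append([]byte(nil), buf...) // copies the bytes: safe to keep

	copy(buf, "bobby") // the next row overwrites the buffer

	fmt.Println(string(saved))  // "bobby" - the alias changed underneath us
	fmt.Println(string(copied)) // "alice" - the copy survived
}
```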

Server

Server package supplies a framework to implement a simple MySQL server which can handle the packets from the MySQL client. You can use it to build your own MySQL proxy. The server connection is compatible with MySQL 5.5, 5.6, 5.7, and 8.0 versions, so that most MySQL clients should be able to connect to the Server without modifications.

Example

import (
	"github.com/go-mysql-org/go-mysql/server"
	"net"
)

l, _ := net.Listen("tcp", "127.0.0.1:4000")

c, _ := l.Accept()

// Create a connection with user root and an empty password.
// You can use your own handler to handle command here.
conn, _ := server.NewConn(c, "root", "", server.EmptyHandler{})

for {
	conn.HandleCommand()
}

In another shell:

mysql -h127.0.0.1 -P4000 -uroot -p
// Because the empty handler does nothing, the MySQL client can only connect to the proxy server here. :-)

NewConn() will use the default server configuration:

  1. automatically generate default server certificates and enable TLS/SSL support.
  2. support three mainstream authentication methods, 'mysql_native_password', 'caching_sha2_password', and 'sha256_password', with 'mysql_native_password' as the default.
  3. use an in-memory user credential provider to store users and passwords.

To customize the server configuration, use NewServer() and create connections via NewCustomizedConn().

Failover

Failover supports promoting a new master and letting the other slaves replicate from it automatically when the old master goes down.

Failover supports MySQL >= 5.6.9 with GTID mode. If you use a lower version, e.g. MySQL 5.0 - 5.5, please use MHA or orchestrator instead.

Failover also supports MariaDB >= 10.0.9 with GTID mode.

Why only GTID? Supporting failover without GTID mode is very hard, because a slave cannot find the proper binlog filename and position on the new master. Although many companies still use MySQL 5.0 - 5.5, upgrading to 5.6 or higher is easy.

Driver

The driver package lets you use go-mysql with Go's database/sql like other drivers. A simple example:

package main

import (
	"database/sql"

	_ "github.com/go-mysql-org/go-mysql/driver"
)

func main() {
	// dsn format: "user:password@addr?dbname"
	dsn := "root@127.0.0.1:3306?test"
	db, _ := sql.Open("mysql", dsn)
	db.Close()
}
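The dsn shape "user:password@addr?dbname" can be decomposed with plain string operations. This is a sketch of the format only — splitDSN is my own helper, not the driver's actual parser:

```go
package main

import (
	"fmt"
	"strings"
)

// splitDSN decomposes "user:password@addr?dbname" into its parts.
// Illustrative of the dsn format; the driver does its own parsing.
func splitDSN(dsn string) (user, password, addr, dbname string, err error) {
	cred, rest, ok := strings.Cut(dsn, "@")
	if !ok {
		return "", "", "", "", fmt.Errorf("missing '@' in %q", dsn)
	}
	user, password, _ = strings.Cut(cred, ":") // password may be empty
	addr, dbname, _ = strings.Cut(rest, "?")   // dbname is optional
	return user, password, addr, dbname, nil
}

func main() {
	user, pass, addr, db, err := splitDSN("root:secret@127.0.0.1:3306?test")
	fmt.Println(user, pass, addr, db, err)
}
```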

We pass all tests in https://github.com/bradfitz/go-sql-test using go-mysql driver. :-)

Donate

If you like the project and want to buy me a cola, you can do so through:

PayPal or WeChat Pay

Feedback

go-mysql is still in development, your feedback is very welcome.

Gmail: [email protected]

Comments
  • ERRO[0006] close sync with err: data len 0 < expected 1

    When using canal with MySQL 5.7 binlogs, an error is reported around GTID 85649a98-8b3e-11e5-b9c9-5510536e2f9f:240:

    2018/05/08 00:07:11 [Info] HjEventHandler.go:27 mysql gtid set : 85649a98-8b3e-11e5-b9c9-5510536e2f9f:242 ERRO[0006] close sync with err: data len 0 < expected 1 INFO[0006] table structure changed, clear table cache: huajuan.hj_goods_preferential

    2018/05/08 00:07:11 [Info] HjEventHandler.go:27 mysql gtid set : 85649a98-8b3e-11e5-b9c9-5510536e2f9f:243 ERRO[0006] canal start sync binlog err: data len 0 < expected 1

    I have uploaded the sample binlog file; I hope someone can take a look: mysql-bin.000003.zip

  • Fix replication of TIMESTAMP on non-UTC machines

    To get the timestamp in any timezone, in the codebase consuming this library, add (and substitute with the timezone of your choice):

    func init() {
    	replication.TimeStringLocation = time.UTC
    }
    

    Fixes: #63

    Also see: github/gh-ost#182 and Shopify/ghostferry#23

    review: @siddontang

  • Add support for MySQL 8.0 and support for TLS/SSL for both Server and Client

    This PR:

    • Added support for MySQL 8.0 for the Client: starting from MySQL 8.0.4, MySQL uses 'caching_sha2_password' as default authentication method, which caused the connection to fail. This PR fixed the problem by supporting three mainstream auth methods 'mysql_native_password', 'caching_sha2_password', and 'sha256_password', which covers a wide range of MySQL versions from MySQL 5.5 to MySQL 8.0.
    • Added support of new auth methods for the Server: the server now accepts 'mysql_native_password', 'caching_sha2_password', and 'sha256_password' as auth methods. Other old and deprecated auth methods are not supported.
    • Supports TLS/SSL for the Client and the Server: the new design maintains compatibility with the previous releases. Customizations are optional.

    Since the upgrade of auth methods affects the Client and Server design, I made lots of changes to the code while trying to reuse the old code. Some minor existing bugs are fixed too. For instance, the buffer reader for the net.Conn is now removed because it causes the SSL handshake to fail.

    Changing and refactoring can be a bad thing though as they can introduce new bugs. However, I tried my best to make the old tests pass and added new feature tests for different MySQL versions using docker compose to increase testing coverage. For now all test cases passed.

  • Allow TLS connections in the driver

    Opening this for discussion and feedback. I'm not expecting this to be merged in as-is.

    • This mostly exposes what is already elsewhere in this codebase to the driver

    • Allows for an optional ssl= URL parameter (query string) of values true and custom. Database must be specified for this to work

    • true just does c.UseSSL(true) on the connection

    • custom uses a package level variable of customTLSConfig to do c.SetTLSConfig(customTLSConfig)

    • That variable is set via a new function, SetCustomTLSConfig

    • To use a custom config you'd do a full import of the driver (not for side-effects only) and then do something like:

        driver.SetCustomTLSConfig(CaPem, make([]byte, 0), make([]byte, 0), false, "my.domain.name")
      

    For what I need, the basic TLS support is enough (ssl=true and c.UseSSL(true)), so if need be I could reduce this PR down to the first commit. However, since we are basically exposing what is already supported elsewhere in this codebase to the driver, I thought I should get the custom TLS config support into a usable state (this PR).

    Things that I could do with feedback/opinions on:

    • Since the DSN style for this driver uses a ? as a separator/indicator for the database the ssl parameter only works and makes sense if a database is specified. E.g:

        `?mydatabase&ssl=true`
      

      vs

        `?ssl=true`
      

      Which would try to connect to a database called ssl=true.

      I am ok with this limitation/requirement.

    • I have also tried to keep the current style for parsing the URL parameters as opposed to re-working everything to use url.Parse. I assume there is a reason why that's not used and why ? is used instead of / for the database separator/indicator; At the very least I understand that switching to / would be a breaking change for many folk

    • I've allowed the Cert and Key to be optional in NewClientTLSConfig. I think that is a good idea, but I don't know if I've gone about it the best way

    • I'm just allowing for a single custom TLSConfig. This means that if a program uses the driver then only one custom config can be set - which might not be desirable if that driver is connecting to multiple endpoints. For me, right now, that's fine. But I'm willing to look at this some more

    • SetCustomTLSConfig is a very simple pass-through function. I welcome suggestions on how to improve. But it means I can just import the driver and still take advantage of NewClientTLSConfig

  • Add more MySQL-8.0 meta data to GTIDEvent and TableMapEvent

    This PR adds more metadata introduced in MySQL 8.0:

    GTIDEvent:

    • immediate/original commit timestamp for this trx
    • transaction length (in bytes) for all binlog events of this trx, including this GTIDEvent. This is useful to detect transaction boundaries
    • immediate/original server version

    TableMapEvent:

    • column names
    • primary key info
    • signedness info for numeric columns
    • collation info for character and enum/set columns
    • geometry type info

    Hopefully these can help solve #427 when using MySQL 8.0.

    Example GTIDEvent dump (MySQL-8.0):

    === GTIDEvent === 
    Date: 2020-02-01 19:15:26
    Log position: 3440812
    Event size: 88
    Commit flag: 1                                                                                                                                           
    GTID_NEXT: 5aa72a7f-44a8-11ea-947f-0242ac190002:55
    LAST_COMMITTED: 54
    SEQUENCE_NUMBER: 55
    Immediate commit timestamp: 1580555726309342 (2020-02-01T19:15:26.309342+08:00)
    Original commit timestamp: 0 (<n/a>)
    Transaction length: 197
    Immediate server version: 80019
    Original server version: 0
    

    Example GTIDEvent dump for the same event (MySQL-5.7):

    === GTIDEvent ===
    Date: 2020-02-01 19:15:26
    Log position: 15156
    Event size: 65
    Commit flag: 1
    GTID_NEXT: 5aa72a7f-44a8-11ea-947f-0242ac190002:55
    LAST_COMMITTED: 49                                                                                                                                       
    SEQUENCE_NUMBER: 50
    Immediate commit timestamp: 0 (<n/a>)
    Original commit timestamp: 0 (<n/a>)
    Transaction length: 0
    Immediate server version: 0
    Original server version: 0
    
    

    Example TableMapEvent dump (MySQL-8.0):

    === TableMapEvent ===
    Date: 2020-03-10 15:24:58
    Log position: 78747
    Event size: 580
    TableID: 118
    TableID size: 6
    Flags: 1
    Schema: test
    Table: _types
    Column count: 42
    Column type: 
    00000000  10 01 01 02 09 03 08 f6  04 05 01 02 09 03 08 f6  |................|
    00000010  04 05 0d 0a 13 13 12 12  11 11 fe 0f fe 0f fc fc  |................|
    00000020  fc fc fc fc fc fc fe fe  ff f5                    |..........|
    NULL bitmap: 
    00000000  00 00 fc c0 ff 03                                 |......|
    Signedness bitmap: 
    00000000  00 7f 80                                          |...|
    Default charset: []
    Column charset: [224 224 63 63 63 63 63 63 224 224 224 224]
    Set str value: [[1 2]]
    Enum str value: [[a b]]
    Column name: [b_bit n_boolean n_tinyint n_smallint n_mediumint n_int n_bigint n_decimal n_float n_double nu_tinyint nu_smallint nu_mediumint nu_int nu_bigint nu_decimal nu_float nu_double t_year t_date t_time t_ftime t_datetime t_fdatetime t_timestamp t_ftimestamp c_char c_varchar c_binary c_varbinary c_tinyblob c_blob c_mediumblob c_longblob c_tinytext c_text c_mediumtext c_longtext e_enum s_set g_geometry j_json]
    Geometry type: [0]
    Primary key: []
    Primary key prefix: []
    Enum/set default charset: [224]
    Enum/set column charset: []
    
    

    Example TableMapEvent dump for the same event (MySQL-5.7):

    === TableMapEvent ===
    Date: 2020-03-10 15:24:58
    Log position: 31058
    Event size: 133
    TableID: 117
    TableID size: 6
    Flags: 1
    Schema: test
    Table: _types
    Column count: 42
    Column type: 
    00000000  10 01 01 02 09 03 08 f6  04 05 01 02 09 03 08 f6  |................|
    00000010  04 05 0d 0a 13 13 12 12  11 11 fe 0f fe 0f fc fc  |................|
    00000020  fc fc fc fc fc fc fe fe  ff f5                    |..........|
    NULL bitmap: 
    00000000  00 00 fc c0 ff 03                                 |......|
    Signedness bitmap: 
    Default charset: []
    Column charset: []
    Set str value: []
    Enum str value: []
    Column name: []
    Geometry type: []
    Primary key: []
    Primary key prefix: []
    Enum/set default charset: []
    Enum/set column charset: []
    
    

    ref:

    • https://mysqlhighavailability.com/more-metadata-is-written-into-binary-log/
    • https://mysqlhighavailability.com/taking-advantage-of-new-transaction-length-metadata/
    • https://dev.mysql.com/doc/refman/8.0/en/replication-options-binary-log.html#sysvar_binlog_row_metadata
    • https://dev.mysql.com/doc/dev/mysql-server/latest/classbinary__log_1_1Gtid__event.html
    • https://dev.mysql.com/doc/dev/mysql-server/latest/classbinary__log_1_1Table__map__event.html
  • TWEAK: use gtid from dump position, and incrementally update it with binlog syncer

    This is an improvement/bugfix to the previous PR.

    1. In the previous PR, we continued file-based binlog replication from the gtid dump, which introduced a special case. Instead, we can use global.gtid_purged.

    2. Reimplement SyncedGTIDSet, because a single gtid is generally useless for restoring sync.

    3. Rename some gtid variables to gset (a set of GTIDs), to be clear.

    #260

  • Allow logger override

    Problem

    Ideally, libraries wouldn't have a logger at all, but this is a smaller change: it allows a user to set the Logger, and the previous behaviour remains available if needed.

    Closes: https://github.com/go-mysql-org/go-mysql/issues/698 and maybe others

    Feedback?

    Should errors ever log or just return?

  • parse gtid for parseHandler if MySQL works in GTID_MODE, and begin to startWithGTID after mysqldump is done

    Purpose of This PR:

    1. If MySQL runs with GTID_MODE=ON, canal will start replicating with GTID after mysqldump is done.
    2. Add one more func, GtidSet(), to parseHandler, to save the GTID set after parsing it from mysqldump.
  • Added a test showing the decimal decode error.

    I am getting some very, very small numbers instead of the expected ones. The issue turned out to be with the initial byte. When the number was negative and small enough to fit in one byte, the initial byte size would be 1. We were then using a 4-byte mask to mask out a 1-byte value, leaving the mask as part of the value and causing the error. This was only an issue when the number was negative, because the mask is the two's complement; for positive numbers the mask is zero, so it does not contribute to the final value.

    I've added test cases that show the errors (the negative values) as well as test cases that were already working.
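    The mask bug can be reproduced in isolation. Roughly, negative digit groups are stored complemented, and decoding XORs them with an all-ones mask as wide as the field; a fixed 4-byte mask on a 1-byte field leaks the mask's high bits into the result (the helper names below are illustrative, not the library's):

```go
package main

import "fmt"

// Stored form of a negative 1-byte digit group with magnitude 5:
// the bytes are complemented, so 5 (0x05) is stored as 0xFA.
const stored = byte(0xFA)

// decodeRight XORs with a mask as wide as the field (1 byte).
func decodeRight(b byte) uint32 { return uint32(b) ^ 0xFF } // recovers 5

// decodeWrong XORs with a fixed 4-byte mask: the mask's high
// three bytes survive in the result, which is the reported bug.
func decodeWrong(b byte) uint32 { return uint32(b) ^ 0xFFFFFFFF } // 0xFFFFFF05

func main() {
	fmt.Printf("right: %d, wrong: %#x\n", decodeRight(stored), decodeWrong(stored))
}
```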

  • TWEAK: do not set global log level in NewBinlogSyncer

    This reverts https://github.com/siddontang/go-mysql/commit/0baad4d700af91f9031f5b4e3135f4eb53ed35ce.

    Setting a global log level in an initializer causes unwanted side effects in many cases. Users who need this should call log.SetLevelByString in the main program only.

  • add master slave heartbeat and kill invalid thread id

    Due to network jitter or other reasons, the MySQL server may fail to kill invalid canal database connections, which wastes resources. With a heartbeat set, canal proactively kills the invalid connection and reconnects to the database.
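    A liveness guard like this can be approximated with fields BinlogSyncerConfig exposes; a hedged config sketch (field names taken from the current library, worth verifying against your version):

```go
cfg := replication.BinlogSyncerConfig{
	ServerID: 100,
	Flavor:   "mysql",
	Host:     "127.0.0.1",
	Port:     3306,
	User:     "root",

	HeartbeatPeriod: 30 * time.Second, // master sends a heartbeat event while the binlog is idle
	ReadTimeout:     90 * time.Second, // no event for this long => read error, triggering reconnect
}
```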

  • canal can't parse value of ddl

    canal.valueExpr doesn't work:

    ALTER TABLE `sometable` MODIFY COLUMN `some_col` varchar(1024) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '' COMMENT 'comments' AFTER `other_col`
    

    becomes

    ALTER TABLE `sometable` MODIFY COLUMN `some_col` VARCHAR(1024) CHARACTER SET UTF8 COLLATE utf8_bin NOT NULL DEFAULT  COMMENT  AFTER `other_col`
    

    After (*ast.AlterTableStmt).Restore(ctx *format.RestoreCtx), the default value and the comment value are lost.

    Should we change "github.com/pingcap/parser" test_driver to public, or copy the implementation into canal?

  • canal: work around binlog files being purged on cloud RDS platforms

    https://github.com/alibaba/canal/issues/727 Cloud database platforms purge binlogs to cloud storage after a while, so local binlogs disappear, and the larger the binlog, the sooner it goes. If consumption is interrupted by a bad position and the fix takes a long time, consumption cannot resume from that position afterwards. To keep consumption continuous, PR #741
    adds the following: when a binlog file has disappeared, download it with a file-download method, parse it locally, and push the events to the EventHandler. This switches automatically between offline and online binlogs, keeping canal consumption continuous. Each cloud platform can implement the file-download method against its own API.
