RethinkDB-go - RethinkDB Driver for Go

Current version: v6.2.1 (RethinkDB v2.4)

Please note that this version of the driver only supports versions of RethinkDB using the v0.4 protocol (versions of RethinkDB older than 2.0 will not work).

If you need any help you can find me on the RethinkDB slack in the #gorethink channel.

Installation

go get gopkg.in/rethinkdb/rethinkdb-go.v6

Replace v6 with v5 or v4 to use previous versions.

Example

package rethinkdb_test

import (
	"fmt"
	"log"

	r "gopkg.in/rethinkdb/rethinkdb-go.v6"
)

func Example() {
	session, err := r.Connect(r.ConnectOpts{
		Address: url, // endpoint without http
	})
	if err != nil {
		log.Fatalln(err)
	}

	res, err := r.Expr("Hello World").Run(session)
	if err != nil {
		log.Fatalln(err)
	}

	var response string
	err = res.One(&response)
	if err != nil {
		log.Fatalln(err)
	}

	fmt.Println(response)

	// Output:
	// Hello World
}

Connection

Basic Connection

Setting up a basic connection with RethinkDB is simple:

func ExampleConnect() {
	var err error

	session, err = r.Connect(r.ConnectOpts{
		Address: url,
	})
	if err != nil {
		log.Fatalln(err.Error())
	}
}

See the documentation for a list of supported arguments to Connect().

Connection Pool

The driver uses a connection pool at all times; by default it creates and frees connections automatically. It is safe for concurrent use by multiple goroutines.

To configure the connection pool, InitialCap, MaxOpen and Timeout can be specified when connecting. If you wish to change the value of InitialCap or MaxOpen at runtime, the functions SetInitialPoolCap and SetMaxOpenConns can be used.

func ExampleConnect_connectionPool() {
	var err error

	session, err = r.Connect(r.ConnectOpts{
		Address:    url,
		InitialCap: 10,
		MaxOpen:    10,
	})
	if err != nil {
		log.Fatalln(err.Error())
	}
}
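If the pool needs to grow after the session has been created, the SetInitialPoolCap and SetMaxOpenConns functions mentioned above can be called on the session; a minimal sketch, reusing the session from the example above:

```go
// Assumes an existing session created with r.Connect as shown above.
// Grow the pool at runtime; these correspond to the InitialCap and
// MaxOpen options passed to Connect.
session.SetInitialPoolCap(20)
session.SetMaxOpenConns(20)
```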

Connect to a cluster

To connect to a RethinkDB cluster which has multiple nodes you can use the following syntax. When connecting to a cluster with multiple nodes, queries will be distributed between those nodes.

func ExampleConnect_cluster() {
	var err error

	session, err = r.Connect(r.ConnectOpts{
		Addresses: []string{url},
		//  Addresses: []string{url1, url2, url3, ...},
	})
	if err != nil {
		log.Fatalln(err.Error())
	}
}

When DiscoverHosts is true, any nodes added to the cluster after the initial connection will be added to the pool of available nodes used by RethinkDB-go. Note that the canonical address of each server in the cluster MUST be set, as otherwise clients will try to connect to the database nodes locally. For more information about how to set a RethinkDB server's canonical address, see this page http://www.rethinkdb.com/docs/config-file/.

User Authentication

To log in with a username and password you should first create a user. This can be done by writing to the users system table and then granting that user access to any tables or databases they need access to. These queries can also be executed in the RethinkDB admin console.

err := r.DB("rethinkdb").Table("users").Insert(map[string]string{
    "id": "john",
    "password": "p455w0rd",
}).Exec(session)
...
err = r.DB("blog").Table("posts").Grant("john", map[string]bool{
    "read": true,
    "write": true,
}).Exec(session)
...

Finally the username and password should be passed to Connect when creating your session, for example:

session, err := r.Connect(r.ConnectOpts{
    Address: "localhost:28015",
    Database: "blog",
    Username: "john",
    Password: "p455w0rd",
})

Please note that DiscoverHosts will not work with user authentication at this time, because RethinkDB restricts access to the required system tables.

Query Functions

This library is based on the official drivers, so the code on the API page should require very few changes to work.

To view full documentation for the query functions, check the API reference or GoDoc.

Slice Expr Example

r.Expr([]interface{}{1, 2, 3, 4, 5}).Run(session)

Map Expr Example

r.Expr(map[string]interface{}{"a": 1, "b": 2, "c": 3}).Run(session)

Get Example

r.DB("database").Table("table").Get("GUID").Run(session)

Map Example (Func)

r.Expr([]interface{}{1, 2, 3, 4, 5}).Map(func(row r.Term) interface{} {
    return row.Add(1)
}).Run(session)

Map Example (Implicit)

r.Expr([]interface{}{1, 2, 3, 4, 5}).Map(r.Row.Add(1)).Run(session)

Between (Optional Args) Example

r.DB("database").Table("table").Between(1, 10, r.BetweenOpts{
    Index: "num",
    RightBound: "closed",
}).Run(session)

For any queries which use callbacks the function signature is important, as your function needs to be a valid RethinkDB-go callback; you can see an example of this in the map example above. The simplified explanation is that all arguments must be of type r.Term. This is because of how the query is sent to the database: your callback is not actually executed in your Go application, but encoded as JSON and executed by RethinkDB. The return argument can be anything you want it to be (as long as it is a valid return value for the current query), so it usually makes sense to return interface{}. Here is an example of a callback for the conflict callback of an insert operation:

r.Table("test").Insert(doc, r.InsertOpts{
    Conflict: func(id, oldDoc, newDoc r.Term) interface{} {
        return newDoc.Merge(map[string]interface{}{
            "count": oldDoc.Add(newDoc.Field("count")),
        })
    },
})

Optional Arguments

As shown above in the Between example, optional arguments are passed to the function as a struct. Each function that has optional arguments has a related struct. These structs are named in the format FunctionNameOpts; for example, BetweenOpts is the related struct for Between.

Cancelling queries

For query cancellation, use the Context argument in RunOpts. If Context is nil and ReadTimeout or WriteTimeout in ConnectOpts is non-zero, a Context will be formed from the sum of these timeouts.

For unlimited timeouts with Changes(), pass context.Background().

Results

Different result types are returned depending on what function is used to execute the query.

  • Run returns a cursor which can be used to view all rows returned.
  • RunWrite returns a WriteResponse and should be used for queries such as Insert, Update, etc...
  • Exec sends a query to the server and closes the connection immediately after reading the response from the database. If you do not wish to wait for the response then you can set the NoReply flag.

Example:

res, err := r.DB("database").Table("tablename").Get(key).Run(session)
if err != nil {
    // error
}
defer res.Close() // Always ensure you close the cursor to ensure connections are not leaked
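For completeness, an Exec call with the NoReply flag might look like this (a sketch; doc is a placeholder document and session is assumed from the earlier examples):

```go
// Fire-and-forget write: with NoReply set the driver does not
// wait for the server's response.
err = r.DB("database").Table("tablename").Insert(doc).Exec(session, r.ExecOpts{
	NoReply: true,
})
if err != nil {
	// error
}
```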

Cursors have a number of methods available for accessing the query results:

  • Next retrieves the next document from the result set, blocking if necessary.
  • All retrieves all documents from the result set into the provided slice.
  • One retrieves the first document from the result set.

Examples:

var row interface{}
for res.Next(&row) {
    // Do something with row
}
if res.Err() != nil {
    // error
}

var rows []interface{}
err := res.All(&rows)
if err != nil {
    // error
}

var row interface{}
err := res.One(&row)
if err == r.ErrEmptyResult {
    // row not found
}
if err != nil {
    // error
}

Encoding/Decoding

When passing structs to Expr (and functions that use Expr, such as Insert and Update) the structs are encoded into a map before being sent to the server. Each exported field is added to the map unless

  • the field's tag is "-", or
  • the field is empty and its tag specifies the "omitempty" option.

Each field's default name in the map is the field name, but this can be overridden in the struct field's tag value. The "rethinkdb" key in the struct field's tag value is the key name, followed by an optional comma and options. Examples:

// Field is ignored by this package.
Field int `rethinkdb:"-"`
// Field appears as key "myName".
Field int `rethinkdb:"myName"`
// Field appears as key "myName" and
// the field is omitted from the object if its value is empty,
// as defined above.
Field int `rethinkdb:"myName,omitempty"`
// Field appears as key "Field" (the default), but
// the field is skipped if empty.
// Note the leading comma.
Field int `rethinkdb:",omitempty"`
// When the tag name includes an index expression
// a compound field is created
Field1 int `rethinkdb:"myName[0]"`
Field2 int `rethinkdb:"myName[1]"`

NOTE: It is strongly recommended that struct tags are used to explicitly define the mapping between your Go type and how the data is stored by RethinkDB. This is especially important when using an Id field, as by default RethinkDB will create a field named id as the primary key (note that the RethinkDB field is lowercase while the Go field starts with a capital letter).

When encoding maps with non-string keys, the key values are automatically converted to strings where possible; however, it is recommended that you use string keys where possible (for example map[string]T).

If you wish to use json tags with RethinkDB-go, call SetTags("rethinkdb", "json") when starting your program; this will cause RethinkDB-go to check for json tags after checking for rethinkdb tags. By default this feature is disabled. This function also lets you support any other tags; the driver checks for tags in the same order as the parameters.
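For example, a type that carries only json tags could be reused for RethinkDB like this (a sketch; the Post type is hypothetical):

```go
func init() {
	// Check rethinkdb tags first, then fall back to json tags.
	r.SetTags("rethinkdb", "json")
}

// Post has no rethinkdb tags; with SetTags above the driver falls
// back to the json tags, so it is stored with keys "id" and "title".
type Post struct {
	ID    string `json:"id,omitempty"`
	Title string `json:"title"`
}
```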

NOTE: Old-style gorethink struct tags are supported but deprecated.

Pseudo-types

RethinkDB contains some special types which can be used to store special value types; currently supported are binary values, times and geometry data types. RethinkDB-go supports these data types natively, however there are some gotchas:

  • Time types: To store times in RethinkDB with RethinkDB-go you must pass a time.Time value to your query; due to the way Go works, type aliasing or embedding is not supported here
  • Binary types: To store binary data pass a byte slice ([]byte) to your query
  • Geometry types: As Go does not include any built-in data structures for storing geometry data, RethinkDB-go includes its own in the github.com/rethinkdb/rethinkdb-go/types package. Any of the types (Geometry, Point, Line and Lines) can be passed to a query to create a RethinkDB geometry type.
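A sketch covering all three cases (table names are placeholders, and session is assumed from the earlier examples):

```go
// Time: pass a time.Time value directly (not an alias of it).
err := r.Table("events").Insert(map[string]interface{}{
	"timestamp": time.Now(),
}).Exec(session)

// Binary: pass a []byte.
err = r.Table("files").Insert(map[string]interface{}{
	"data": []byte{0x89, 0x50, 0x4e, 0x47},
}).Exec(session)

// Geometry: use the driver's types package.
err = r.Table("places").Insert(map[string]interface{}{
	"location": types.Point{Lon: -122.42, Lat: 37.77},
}).Exec(session)
```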

Compound Keys

RethinkDB unfortunately does not support compound primary keys using multiple fields, however it does support compound keys using an array of values. For example, if you wanted to create a compound key for a book where the key contained the author ID and book name, then the ID might look like this: ["author_id", "book name"]. Luckily RethinkDB-go allows you to easily manage these keys while keeping the fields separate in your structs. For example:

type Book struct {
  AuthorID string `rethinkdb:"id[0]"`
  Name     string `rethinkdb:"id[1]"`
}
// Creates the following document in RethinkDB
{"id": [AUTHORID, NAME]}

References

Sometimes you may want to use a Go struct that references a document in another table. Instead of creating a new struct which is only used when writing to RethinkDB, you can annotate your struct with the reference tag option. This tells RethinkDB-go that when encoding your data it should "pluck" the ID field from the nested document and use that instead.

This is all quite complicated, so hopefully this example will help. First let's assume you have two types, Author and Book, and you want to insert a new book into your database without including the entire author struct in the books table. As you can see, the Author field in the Book struct has some extra tags. Firstly we have added the reference tag option, which tells RethinkDB-go to pluck a field from the Author struct instead of inserting the whole author document. We also have the rethinkdb_ref tag, which tells RethinkDB-go to look for the id field in the Author document; without this tag RethinkDB-go would instead look for the author_id field.

type Author struct {
    ID      string  `rethinkdb:"id,omitempty"`
    Name    string  `rethinkdb:"name"`
}

type Book struct {
    ID      string  `rethinkdb:"id,omitempty"`
    Title   string  `rethinkdb:"title"`
    Author  Author `rethinkdb:"author_id,reference" rethinkdb_ref:"id"`
}

The resulting data in RethinkDB should look something like this:

{
    "author_id": "author_1",
    "id":  "book_1",
    "title":  "The Hobbit"
}

If you wanted to read back the book with the author included then you could run the following RethinkDB-go query:

r.Table("books").Get("1").Merge(func(p r.Term) interface{} {
    return map[string]interface{}{
        "author_id": r.Table("authors").Get(p.Field("author_id")),
    }
}).Run(session)

You are also able to reference an array of documents, for example if each book stored multiple authors you could do the following:

type Book struct {
    ID       string  `rethinkdb:"id,omitempty"`
    Title    string  `rethinkdb:"title"`
    Authors  []Author `rethinkdb:"author_ids,reference" rethinkdb_ref:"id"`
}
{
    "author_ids": ["author_1", "author_2"],
    "id":  "book_1",
    "title":  "The Hobbit"
}

The query for reading the data back is slightly more complicated but is very similar:

r.Table("books").Get("book_1").Merge(func(p r.Term) interface{} {
    return map[string]interface{}{
        "author_ids": r.Table("authors").GetAll(r.Args(p.Field("author_ids"))).CoerceTo("array"),
    }
})

Custom Marshalers/Unmarshalers

Sometimes the default behaviour for converting Go types to and from ReQL is not desired; for these situations the driver allows you to implement both the Marshaler and Unmarshaler interfaces. These interfaces might look familiar if you are used to the encoding/json package, however instead of dealing with []byte the interfaces deal with interface{} values (which are later encoded by the encoding/json package when communicating with the database).

A good example of how to use these interfaces is in the types package: there the Point type is encoded as the GEOMETRY pseudo-type instead of a normal JSON object.

Alternatively, you can register external encode/decode functions with the SetTypeEncoding function.

Logging

By default the driver's logs are disabled; when enabled, the driver will log errors when it fails to connect to the database. If you would like more verbose error logging you can call r.SetVerbose(true).

Alternatively, if you wish to modify the logging behaviour you can modify the logger provided by github.com/sirupsen/logrus. For example, the following code completely disables the logger:

// Enabled
r.Log.Out = os.Stderr
// Disabled
r.Log.Out = ioutil.Discard

Tracing

The driver supports opentracing-go. You can enable this feature by setting UseOpentracing to true in the ConnectOpts. The driver will then expect an opentracing.Span in RunOpts.Context and will start new child spans for queries. You also need to configure a tracer in your program yourself.

The driver starts a span for the whole query, from the first byte sent until the cursor is closed, and a second-level span for each fetch of data.

This lets you trace how much time your program spends on RethinkDB queries.

Mocking

The driver includes the ability to mock queries, meaning that you can test your code without needing to talk to a real RethinkDB cluster; this is perfect for ensuring that your application has high unit test coverage.

To write tests with mocking you should create an instance of Mock and then set up expectations using On and Return. Expectations allow you to define what results should be returned when a known query is executed. They are configured by passing the query term you want to mock to On, and then the response and error to Return. If a non-nil error is passed to Return, that error will be returned any time the query is executed; if no error is passed, a cursor will be built using the value passed to Return. Once all your expectations have been created you should then execute your queries using the Mock instead of a Session.

Here is an example that shows how to mock a query that returns multiple rows; the resulting cursor can be used as normal.

func TestSomething(t *testing.T) {
	mock := r.NewMock()
	mock.On(r.Table("people")).Return([]interface{}{
		map[string]interface{}{"id": 1, "name": "John Smith"},
		map[string]interface{}{"id": 2, "name": "Jane Smith"},
	}, nil)

	cursor, err := r.Table("people").Run(mock)
	if err != nil {
		t.Errorf("err is: %v", err)
	}

	var rows []interface{}
	err = cursor.All(&rows)
	if err != nil {
		t.Errorf("err is: %v", err)
	}

	// Test result of rows

	mock.AssertExpectations(t)
}

If you want the cursor to block on some of the response values, you can pass in a value of type chan interface{} and the cursor will block until a value is available to read on the channel. Or you can pass in a function with signature func() interface{}: the cursor will call the function (which may block). Here is the example above adapted to use a channel.

func TestSomething(t *testing.T) {
	mock := r.NewMock()
	ch := make(chan []interface{})
	mock.On(r.Table("people")).Return(ch, nil)
	go func() {
		ch <- []interface{}{
			map[string]interface{}{"id": 1, "name": "John Smith"},
			map[string]interface{}{"id": 2, "name": "Jane Smith"},
		}
		ch <- []interface{}{map[string]interface{}{"id": 3, "name": "Jack Smith"}}
		close(ch)
	}()
	cursor, err := r.Table("people").Run(mock)
	if err != nil {
		t.Errorf("err is: %v", err)
	}

	var rows []interface{}
	err = cursor.All(&rows)
	if err != nil {
		t.Errorf("err is: %v", err)
	}

	// Test result of rows

	mock.AssertExpectations(t)
}

The mocking implementation is based on the amazing https://github.com/stretchr/testify library; thanks to @stretchr for their awesome work!

Benchmarks

Everyone wants their project's benchmarks to be speedy. And while we know that RethinkDB and the RethinkDB-go driver are quite fast, our primary goal is for our benchmarks to be correct. They are designed to give you, the user, an accurate picture of writes per second (w/s). If you come up with an accurate test that meets this aim, please submit a pull request.

Thanks to @jaredfolkins for the contribution.

Type                   Value
Model Name             MacBook Pro
Model Identifier       MacBookPro11,3
Processor Name         Intel Core i7
Processor Speed        2.3 GHz
Number of Processors   1
Total Number of Cores  4
L2 Cache (per Core)    256 KB
L3 Cache               6 MB
Memory                 16 GB
BenchmarkBatch200RandomWrites                    20    557227775 ns/op
BenchmarkBatch200RandomWritesParallel10          30    354465417 ns/op
BenchmarkBatch200SoftRandomWritesParallel10     100    761639276 ns/op
BenchmarkRandomWrites                           100     10456580 ns/op
BenchmarkRandomWritesParallel10                1000      1614175 ns/op
BenchmarkRandomSoftWrites                      3000       589660 ns/op
BenchmarkRandomSoftWritesParallel10           10000       247588 ns/op
BenchmarkSequentialWrites                        50     24408285 ns/op
BenchmarkSequentialWritesParallel10            1000      1755373 ns/op
BenchmarkSequentialSoftWrites                  3000       631211 ns/op
BenchmarkSequentialSoftWritesParallel10       10000       263481 ns/op

Examples

Many functions have examples viewable in the godoc; alternatively, view some more fully featured examples on the wiki.

Another good place to find examples is the tests; almost every term has a couple of tests that demonstrate how it can be used.


License

Copyright 2013 Daniel Cannon

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Comments
  • Terrible performance; extreme cpu usage; degrades further over time


    We are experiencing extremely bad performance with this driver, and it's easily reproducible.

    This is using the most recently committed gorethink driver and RethinkDB 1.14.1. The platform is a DigitalOcean host running Ubuntu 14.04 with 4 cores and 8 GB RAM, but I've reproduced the same thing on other hosts with varying configurations.

    A simple program to insert 30000 tiny test documents using 3 concurrent functions inserting 10000 documents each completes in:

    real    7m0.511s
    user    10m56.033s
    sys     0m47.415s

    Which works out to be 71 documents per second.

    And during this time CPU usage for go, not so much rethink, goes nuts.

    The really odd thing is that the performance gets worse as time goes on. It starts initially at around 300 ops/s and near the end of the test the performance drops down to about 25 ops/s.

    I am posting this issue in gorethink instead of rethinkdb primarily because of the CPU - the driver is burning cpu like the devil inserting just a few thousand tiny documents.

    Here is the small app used in this test:

    package main
    
    import "log"
    import "strconv"
    import "runtime"
    import rdb "github.com/dancannon/gorethink"
    
    type test_document struct {
            Id       string `gorethink:"name"`
            ParentId string `gorethink:"parent_id"`
            Username string `gorethink:"username"`
            Status   int    `gorethink:"status"`
    }
    
    func he(cur *rdb.Cursor, err error) {
            if err != nil {
                    panic(err)
            }
    }
    
    func main() {
    
            runtime.GOMAXPROCS(2)
    
            log.Printf("Creating DB...")
            opts := rdb.ConnectOpts{
                    Address: "localhost:28015",
                    MaxIdle: 10,
            }
    
            ses, err := rdb.Connect(opts)
            if err != nil {
                    panic(err)
            }
    
            _, _ = rdb.DbDrop("perf_test").Run(ses)
    
            _, err = rdb.DbCreate("perf_test").Run(ses)
            if err != nil {
                    panic(err)
            }
    
            ses.Use("perf_test")
    
            tcopts := rdb.TableCreateOpts{PrimaryKey: "id", Durability: "soft"}
    
            log.Printf("Creating Table...\n")
    
            he(rdb.Db("perf_test").TableCreate("users", tcopts).Run(ses))
            he(rdb.Db("perf_test").Table("users").IndexCreate("username").Run(ses))
            he(rdb.Db("perf_test").Table("users").IndexCreate("parent_id").Run(ses))
    
            log.Printf("Populating table...\n")
    
            go what(0, 10000, ses)
            go what(10000, 20000, ses)
            what(20000, 30000, ses)
    
            log.Printf("Done.\n")
    
    }
    
    func what(begin int, end int, ses *rdb.Session) {
            for i := begin; i < end; i++ {
                    user := test_document{
                            Id:       strconv.Itoa(i),
                            Username: strconv.Itoa(i),
                            ParentId: strconv.Itoa(i - 1),
                            Status:   0,
                    }
                    rdb.Table("users").Insert(&user).Run(ses)
            }
    }
    
  • Bug when updating time.Time in a nested map


    The driver loses type information from a nested map:

    when := time.Now()
    rdb.DB("test_db").Table("test_table").Get(id).
      Update(
        map[string]map[string]time.Time{
          "LastSeen": map[string]time.Time{"some_tag": when}}
      ).RunWrite(a.conn)
    

    The time ends up being updated as a string type instead of the expected {'$reql_type$': 'TIME'} type.

  • Improve connection pool performance


    During my investigation of #125 I noticed that the performance of the connection pool is pretty bad; the method I use for closing connections does not work well with RethinkDB due to the reuse of connections for continue + end queries.

    I have looked into removing the connection pool completely as most of the official drivers do not use connection pools however proper concurrency support is pretty important for a Go driver IMO.

  • "Token ## not in stream cache" error

    When making multiple async queries to the driver, we sometimes get the above error. It happens on about 20-60% of the calls we're making, so we've had to disable async calls to our API. The Python driver appears to have had this issue as well, and they resolved it:

    https://github.com/rethinkdb/rethinkdb/issues/2337

    https://github.com/rethinkdb/rethinkdb/commit/ca9ab2835f88a6cf933878eeb3300767e64c9765#diff-d73ba4e9a072dff9a19fe841ca493c8fR228

    Is this something that can be addressed in this project as well?

  • Unable to check if Get() returns nothing


    As with rethinkgo, Get() always returns a single row and does not provide any way of checking if a record exists.

    row := r.Table(table).Get("missing key").RunRow(rs)
    err := row.Scan(obj)
    // err == nil, obj has default values
    
    rows, err := r.Table(table).Get("missing key").Run(rs)
    // err == nil
    for rows.Next() { // returns true
        err := rows.Scan(obj)
        // err == nil, obj has default values
    }
    

    GetAll works:

    rows, err := r.Table(table).GetAll("missing key").Run(rs)
    // err == nil
    for rows.Next() { // returns false
        err := rows.Scan(obj)
    }
    

    RethinkDB returns a single NULL datum on Get queries returning no result, and this null value is scanned to the object.

    Proposition:

    • [BC break] ResultRow.Scan returns ErrNotFound on a NULL datum (as mgo does for MongoDB in similar cases)
    • add func (*ResultRow) IsNull() (or IsEmpty()/IsNil())

    The code would work as follows:

    row := r.Table(table).Get("missing key").RunRow(rs)
    err := row.Scan(obj)
    // err == ErrNotFound
    
    rows, err := r.Table(table).Get("missing key").Run(rs)
    // err == nil
    for rows.Next() && !rows.IsNull() { // true + false
        err := rows.Scan(obj)
    }
    

    If this is ok, I can prepare a PR.

    [EDIT: replaced RunRow by Run on last piece of code]

  • This project is no longer maintained


    Unfortunately I have decided to stop maintaining GoRethink. This is due to the following reasons:

    • Over the last few years while I have spent a lot of time maintaining this driver I have not used it very much for my own personal projects.
    • My job has been keeping me very busy lately and I don't have as much time to work on this project as I used to.
    • The company behind RethinkDB has shut down and while I am sure the community will keep the database going it seems like a good time for me to step away from the project.
    • The driver itself is in a relatively good condition and many companies are using the existing version in production.

    I hope you understand my decision to step back from the project. If you have any questions, or would be interested in taking over some of the maintenance of the project, please let me know. To make this process easier I have also decided to move the repository to the GoRethink organisation. All existing imports should still work.

    Thanks to everybody who got involved with this project over the last ~4 years and helped out, I have truly enjoyed the time I have spent building this library and I hope both RethinkDB and this driver manage to keep going.

  • Connection pool is exhausting connections, eventually hangs


    After running my server for a while I start to see file descriptors being used up and when I dump goroutines I see tons of these:

    goroutine 123981 [chan receive, 455 minutes]:
    github.com/dancannon/gorethink.(*Pool).conn(0xc20805a1b0, 0x7bbf80, 0x0, 0x0)
            /home/web/apps/fbrss/src/github.com/dancannon/gorethink/pool.go:252 +0x2a2
    github.com/dancannon/gorethink.(*Pool).query(0xc20805a1b0, 0xc200000001, 0x0, 0xc20834b720, 0xc208226420, 0x0, 0x0, 0x0)
            /home/web/apps/fbrss/src/github.com/dancannon/gorethink/pool.go:504 +0x40
    github.com/dancannon/gorethink.(*Pool).Query(0xc20805a1b0, 0x1, 0x0, 0xc20834b720, 0xc208226420, 0x0, 0x0, 0x0)
            /home/web/apps/fbrss/src/github.com/dancannon/gorethink/pool.go:496 +0x94
    github.com/dancannon/gorethink.Term.Run(0x92df70, 0x5, 0x4700000000, 0x0, 0x0, 0xc2081dfdd0, 0x2, 0x2, 0xc208226390, 0xc208010310, ...)
            /home/web/apps/fbrss/src/github.com/dancannon/gorethink/query.go:197 +0x10c
    main.LoadUser(0xc20834b5ea, 0x28, 0x945b50, 0x6, 0xc208446d90, 0x0, 0x0)
            /home/web/apps/fbrss/data.go:235 +0xa75
    main.feedHandler(0xc2080691e0)
            /home/web/apps/fbrss/feed.go:182 +0x1b4
    

    (lines are off by one, 252 for me is https://github.com/dancannon/gorethink/blob/master/pool.go#L253)

    Depending on how big I make the pool this starts happening after 30-90 minutes of running at 2-5 reqs/sec. My code isn't doing anything extraordinary, looks roughly like this:

        rows, err = r.Table("users").GetAllByIndex("cookie", value).Limit(1).Run(rethinkSession)
        if err != nil {
            return nil, nil
        }
        var user User
        err = rows.One(&user)
        ...wrapping up....
    

    Weird thing is, after I start the server it keeps opening more connections until the limit is reached and it runs fine at the limit for a while. Eventually, something triggers the build up of opening new connections and everything stalls. Let me know if I can add anything else.

  • Inserts get truncated data at high concurrency


    I'm playing around with the driver at runtime.GOMAXPROCS(8) and trying to insert 29k documents using this Zip JSON. After a while, the driver gives this error at a random location and stops:

    gorethink: String `CALCASIEU` (truncated) contains NULL byte at offset 9. in:
    r.Insert(r.Table("zips"), {City="CALCASIEU\x00", Loc=[-91.875149, 31.906412], Pop=124, State="LA"})
    

    Everything runs fine at runtime.GOMAXPROCS(4), though, and rethinkdb gets to around 3k inserts/sec. The sample code is here (http://play.golang.org/p/ye2BpuJzlE).

  • Driver panics on user data


    The driver panics on user data in https://github.com/dancannon/gorethink/blob/master/query_control.go#L26. The panic is not documented; the doc just states "If the value cannot be converted, an error is returned at query .Run(session) time", which is only partially true.

    Panicking on user data is not a good pattern. The driver should be able to handle any input without crashing.

    If I do an Insert on user-provided data, the only way to avoid the panic is to pre-parse it myself to ensure the maximum nesting depth is not exceeding the driver-imposed limit. Which is a lot of unnecessary code duplication and extra CPU cycles. Wrapping every call to gorethink with Recover does not seem like a clean solution either.

    Why not just

    return Term{
        termType: p.Term_DATUM,
        data:     nil,
    }
    

    instead of panicking?
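    Until the panic is removed, one defensive workaround is to recover at the call boundary. The wrapper below is a sketch of that idea; `safeQuery` is a hypothetical helper name, not part of the driver:

```go
package main

import "fmt"

// safeQuery runs fn and converts any panic (for example, one raised
// while building a term from overly nested user data) into an
// ordinary error, so the process does not crash on bad input.
func safeQuery(fn func() error) (err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("gorethink panic: %v", r)
		}
	}()
	return fn()
}

func main() {
	// A panicking closure stands in for a driver call on bad input.
	err := safeQuery(func() error {
		panic("maximum nesting depth exceeded")
	})
	fmt.Println(err) // prints "gorethink panic: maximum nesting depth exceeded"
}
```

    This keeps the recovery logic in one place instead of wrapping every call site with its own deferred Recover.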

  • Changefeed crash

    I'm having recurring crashes with changefeed cursors (see the stack trace below).

    Thanks in advance for looking into it, Dan. :-)

    panic: runtime error: invalid memory address or nil pointer dereference
    [signal 0xb code=0x1 addr=0x20 pc=0x62df73]
    
    goroutine 7932 [running]:
    github.com/dancannon/gorethink.(*Cursor).bufferNextResponse(0xc820517b80, 0x0, 0x0)
        /home/jenkins/.gvm/pkgsets/go1.5.1/global/src/bitbucket.org/cloudintel/vatomizer/Godeps/_workspace/src/github.com/dancannon/gorethink/cursor.go:632 +0x263
    github.com/dancannon/gorethink.(*Cursor).seekCursor(0xc820517b80, 0x100000001, 0x0, 0x0)
        /home/jenkins/.gvm/pkgsets/go1.5.1/global/src/bitbucket.org/cloudintel/vatomizer/Godeps/_workspace/src/github.com/dancannon/gorethink/cursor.go:570 +0xe7
    github.com/dancannon/gorethink.(*Cursor).nextLocked(0xc820517b80, 0xd9bfe0, 0xc8200769a0, 0xfeb601, 0xc8200769a0, 0x0, 0x0)
        /home/jenkins/.gvm/pkgsets/go1.5.1/global/src/bitbucket.org/cloudintel/vatomizer/Godeps/_workspace/src/github.com/dancannon/gorethink/cursor.go:205 +0x3c
    github.com/dancannon/gorethink.(*Cursor).Next(0xc820517b80, 0xd9bfe0, 0xc8200769a0, 0xd9bfe0)
        /home/jenkins/.gvm/pkgsets/go1.5.1/global/src/bitbucket.org/cloudintel/vatomizer/Godeps/_workspace/src/github.com/dancannon/gorethink/cursor.go:188 +0xb0
    github.com/dancannon/gorethink.(*Cursor).Listen.func1(0xdd3720, 0xc8206341e0, 0xc820517b80)
        /home/jenkins/.gvm/pkgsets/go1.5.1/global/src/bitbucket.org/cloudintel/vatomizer/Godeps/_workspace/src/github.com/dancannon/gorethink/cursor.go:447 +0x19d
    created by github.com/dancannon/gorethink.(*Cursor).Listen
        /home/jenkins/.gvm/pkgsets/go1.5.1/global/src/bitbucket.org/cloudintel/vatomizer/Godeps/_workspace/src/github.com/dancannon/gorethink/cursor.go:456 +0x49
    
  • fix: refactored tests dependant on float assertion

    Hi Dan,

    As promised, here are the fixes for the tests.

    I created some helper methods which are essentially a rip-off of Go's stdlib.

    From there I refactored the coordinates into float64 vertices.

    I then used a helper method I created to compare the instantiated coordinates to the Lines and Points returned, throwing an error if the deviation is too great.

    I think this is readable but am open to feedback.

    Jared
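    The comparison described above presumably boils down to a tolerance check along these lines (a sketch; the function names and the epsilon value are assumptions, not the PR's actual code):

```go
package main

import (
	"fmt"
	"math"
)

// almostEqual reports whether two float64 values differ by no more
// than eps, sidestepping exact-equality assertions on coordinates.
func almostEqual(a, b, eps float64) bool {
	return math.Abs(a-b) <= eps
}

// almostEqualSlices compares two coordinate slices element-wise
// with the same tolerance.
func almostEqualSlices(a, b []float64, eps float64) bool {
	if len(a) != len(b) {
		return false
	}
	for i := range a {
		if !almostEqual(a[i], b[i], eps) {
			return false
		}
	}
	return true
}

func main() {
	got := []float64{-122.423246, 37.779388}
	want := []float64{-122.4232461, 37.7793879}
	fmt.Println(almostEqualSlices(got, want, 1e-6)) // prints "true"
}
```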

  • Utilize MarshalJSON and UnmarshalJSON interface implementations

    Is your feature request related to a problem? Please describe. Custom types can sometimes produce empty values in a RethinkDB document. I have implemented my own decimal type:

    type Decimal struct {
    	flags uint32
    	high  uint32
    	low   uint32
    	mid   uint32
    }
    
    // MarshalJSON returns the decimal as a text string without quotes
    func (d Decimal) MarshalJSON() ([]byte, error) { return d.MarshalText() }
    
    // MarshalText encodes the receiver into UTF-8-encoded text and returns the result.
    func (d Decimal) MarshalText() (text []byte, err error) {
    	text = []byte(d.String())
    	return text, nil
    }
    
    // UnmarshalJSON unmarshals the JSON value, ignoring quotes
    func (d *Decimal) UnmarshalJSON(text []byte) error {
    	return d.UnmarshalText(text)
    }
    
    // UnmarshalText unmarshals the decimal from the provided text.
    func (d *Decimal) UnmarshalText(text []byte) (err error) {
    	*d, err = Parse(string(text))
    	return err
    }
    

    It implements both json.Marshaler and json.Unmarshaler, and it encodes and decodes without issue using the standard encoding/json package. So I was surprised to see the following document stored in RethinkDB:

    {
      "candle": {
        "bar_seqno": 12024547,
        "close_price": { },
        "high_price": { },
        "low_price": { },
        "open_price": { }
      },
      ....
    }
    

    when using

    type Candle struct {
      BarSeqno int `json:"bar_seqno"`
      OpenPrice Decimal `json:"open_price"`
      HighPrice Decimal `json:"high_price"`
      LowPrice Decimal `json:"low_price"`
      ClosePrice Decimal `json:"close_price"`
    }
    
    candle := Candle{
      BarSeqno: 12024547,
      OpenPrice: decimal.NewFromString("1.33028"),
      HighPrice: decimal.NewFromString("1.33028"),
      LowPrice: decimal.NewFromString("1.33028"),
      ClosePrice: decimal.NewFromString("1.33028"),
    }
    
    err := r.Table("candles").Insert(candle).Exec(session)
    

    What I would expect to see is

    {
      "candle": {
        "bar_seqno": 12024547,
        "close_price": 1.33028,
        "high_price": 1.33028,
        "low_price": 1.33028,
        "open_price": 1.33028
      },
      ....
    }
    

    Describe the solution you'd like If this library could use the json.Marshaler and json.Unmarshaler implementations, I would get the expected value by just using

    err := r.Table("candles").Insert(candle).Exec(session)
    

    Describe alternatives you've considered My workaround comes from this issue and is basically:

    candles := []Candle{candle1, candle2, candle3}
    b := new(bytes.Buffer)
    for _, candle := range candles {
    	if err = json.NewEncoder(b).Encode(candle); err != nil {
    		return err
    	}
    	if err = r.Table(name).Insert(r.JSON(b.String())).Exec(session); err != nil {
    		return err
    	}
    	b.Reset()
    }
    

    This is not only more verbose but also makes a separate Insert call per document instead of sending a batch, which hurts performance and eliminates the transaction-like quality of sending a slice of objects to a single Insert.
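    A batched variant of the workaround could pre-encode every document with encoding/json (which does honor json.Marshaler) and then send all of them in one Insert call. This is a sketch: the encoding step below is plain stdlib, and the assumption is that each resulting string would be wrapped in r.JSON and the whole slice passed to a single Insert.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Candle is a reduced version of the struct from the report; a plain
// string stands in for the custom Decimal type in this sketch.
type Candle struct {
	BarSeqno  int    `json:"bar_seqno"`
	OpenPrice string `json:"open_price"`
}

// encodeDocs marshals each document to a standalone JSON string using
// encoding/json, which invokes any MarshalJSON implementation. The
// strings could then each be wrapped in r.JSON and the resulting
// slice passed to one Insert call, preserving batch semantics.
func encodeDocs(candles []Candle) ([]string, error) {
	docs := make([]string, 0, len(candles))
	for _, c := range candles {
		b, err := json.Marshal(c)
		if err != nil {
			return nil, err
		}
		docs = append(docs, string(b))
	}
	return docs, nil
}

func main() {
	docs, err := encodeDocs([]Candle{{BarSeqno: 1, OpenPrice: "1.33028"}})
	fmt.Println(docs, err) // prints [{"bar_seqno":1,"open_price":"1.33028"}] <nil>
}
```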

    Additional context Is there something about RethinkDB or this library that would prevent adding this functionality? I would be happy to give it a try, but not if someone has already shown it is a bad idea.

  • WriteResponse does not return GeneratedKeys

    Describe the bug I am running the query below:

    alert := dbEntities.Alert{
    	Acknowledged:          false,
    	AcknowledgedTimestamp: nil,
    	AutoAcknowledged:      false,
    	Class:                 alertClass,
    	Count:                 1,
    	Level:                 alertLevel,
    	Message:               message,
    	Ref:                   ref,
    	Timestamp:             nil,
    	Type:                  alertType,
    	RuleId:                ruleId,
    }
    res, err := r.Table(hConstant.TableAlert).Insert(alert).RunWrite(rethinkHelper.RethinkSession)
    

    There is no error. In res, Inserted is 1 but the GeneratedKeys slice is empty.

    To Reproduce Steps to reproduce the behavior:

    1. Create the database.
    2. Create the table.
    3. Run the above query.
    4. Check the response object.

    Expected behavior GeneratedKeys should contain at least one value: the generated id of the inserted record.


    System info

    • OS: Ubuntu 18.04.6 LTS
    • RethinkDB Version: 2.4.2~0bionic (GCC 7.5.0)
  • Contexts not working properly in certain scenarios

    Describe the bug

    To describe the bug, I'd like to look at the following "database outage" scenario:

    • A microservice with pretty high workload, which connects to RethinkDB
    • The RethinkDB server goes down (maybe due to rolling update of a worker node in K8s or whatever...)

    What can then happen is:

    • Response times for users of the microservice get slower and slower, even though all DB queries are run with contexts properly set (max 30s, but response times can quickly stack up to >600s)
    • Goroutines start to build up
    • Eventually the microservice gets OOM-killed

    If I read the code correctly, the connection pool holds a mutex while distributing queries to a connection (to prevent concurrent creation of a new connection?). I guess in my scenario, creating a connection takes longer (because the bad connection needs to be recreated) than requests take to arrive, so goroutines queue up waiting for the mutex until the database connection is re-established, which stops this behavior. In the application logs I eventually see the connection refused error from this driver.

    This shows a disadvantage of mutexes in these kinds of scenarios: a goroutine blocked on a mutex cannot give up, even when a context is provided. Instead, the implementation should use something like go-lock, or a construct built on channels, so that goroutines can both be notified when the connection is ready and react to context cancellation.

    Perhaps others will stumble upon the same problem, and this will help them understand the observed behavior.

    To Reproduce

    • Produce high workload
    • Shutdown RethinkDB server

    Expected behavior The queries to the database are cancelled by the context and do not queue up.

    Screenshots

    (goroutine count graph omitted) As soon as the DB server is shut down, goroutines start queueing up (how quickly depends on the workload).

    (pprof graph omitted) This is a bit complex as it's created with pprof for a real microservice, but the important information is at the bottom: goroutines are queuing up in the conn function of the connection pool.

    System info

    • RethinkDB Version: 2.4.1
  • Connection and Cursor can be used concurrently

    The "not thread safe" comments are seven years old and no longer apply.

    Reason for the change Prevent other developers from being misled if they read the library's documentation but not its code.


  • Added helper func to check if err is PK too long error

    Signed-off-by: Wahab Ali [email protected]

    Reason for the change Ease of use for users consuming the go-rethink API.

    Description This PR adds a helper function that checks whether an error returned by RethinkDB is a "primary key too long" error. RethinkDB has an unusual limitation on the length of primary keys, so I think this helper is useful for consumers of the go-rethink API, especially if they are new to RethinkDB.

    Code examples N/A

    Checklist

    References N/A
