Go Postgres Data Access Toolkit

dat

dat (Data Access Toolkit) is a fast, lightweight Postgres library for Go.

  • Focused on Postgres. See Insect, Upsert, SelectDoc, QueryJSON

  • Built on a solid foundation, sqlx

    // the embedded DB field is a *sqlx.DB
    DB.DB.Queryx(`SELECT * FROM users`)
  • SQL and backtick friendly

    DB.SQL(`SELECT * FROM people LIMIT 10`).QueryStructs(&people)
  • JSON Document retrieval (single trip to Postgres, requires Postgres 9.3+)

    DB.SelectDoc("id", "user_name", "avatar").
        Many("recent_comments", `SELECT id, title FROM comments WHERE id = users.id LIMIT 10`).
        Many("recent_posts", `SELECT id, title FROM posts WHERE author_id = users.id LIMIT 10`).
        One("account", `SELECT balance FROM accounts WHERE user_id = users.id`).
        From("users").
        Where("id = $1", 4).
        QueryStruct(&obj) // obj must be a type json.Unmarshal() can decode into

    results in

    {
        "id": 4,
        "user_name": "mario",
        "avatar": "https://imgur.com/a23x.jpg",
        "recent_comments": [{"id": 1, "title": "..."}],
        "recent_posts": [{"id": 1, "title": "..."}],
        "account": {
            "balance": 42.00
        }
    }
  • JSON marshalable bytes (requires Postgres 9.3+)

    var b []byte
    b, _ = DB.SQL(`SELECT id, user_name, created_at FROM users WHERE user_name = $1 `,
        "mario",
    ).QueryJSON()
    
    // straight into map
    var obj map[string]interface{}
    DB.SQL(`SELECT id, user_name, created_at FROM users WHERE user_name = $1 `,
        "mario",
    ).QueryObject(&obj)
  • Ordinal placeholders

    DB.SQL(`SELECT * FROM people WHERE state = $1`, "CA").Exec()
  • SQL-like API

    err := DB.
        Select("id, user_name").
        From("users").
        Where("id = $1", id).
        QueryStruct(&user)
  • Redis caching

    // cache result for 30 seconds
    key := "user:" + strconv.Itoa(user.ID)
    err := DB.
        Select("id, user_name").
        From("users").
        Where("id = $1", user.ID).
        Cache(key, 30 * time.Second, false).
        QueryStruct(&user)
  • Nested transactions

  • Per-query timeout with database cancellation logic (pg_cancel_backend)

  • SQL and slow query logging

  • Performant

    • ordinal placeholder logic is optimized to be nearly as fast as using ?
    • dat can interpolate queries locally, resulting in a performance increase over plain database/sql and sqlx. See Benchmarks

Getting Started

Get it

dat.v1 uses the glide package dependency manager. Earlier builds relied on gopkg.in, which at the time was as good a solution as any. dat will move to dep once it is stable.

glide get gopkg.in/mgutz/dat.v1/sqlx-runner

Use it

import (
    "database/sql"
    "fmt"
    "time"

    _ "github.com/lib/pq"
    "gopkg.in/mgutz/dat.v1"
    "gopkg.in/mgutz/dat.v1/sqlx-runner"
)

// global database (pooling provided by SQL driver)
var DB *runner.DB

func init() {
    // create a normal database connection through database/sql
    db, err := sql.Open("postgres", "dbname=dat_test user=dat password=!test host=localhost sslmode=disable")
    if err != nil {
        panic(err)
    }

    // ensures the database can be pinged with an exponential backoff (15 min)
    runner.MustPing(db)

    // set to reasonable values for production
    db.SetMaxIdleConns(4)
    db.SetMaxOpenConns(16)

    // set this to enable interpolation
    dat.EnableInterpolation = true

    // set to check things like sessions closing.
    // Should be disabled in production/release builds.
    dat.Strict = false

    // Log any query over 10ms as warnings. (optional)
    runner.LogQueriesThreshold = 10 * time.Millisecond

    DB = runner.NewDB(db, "postgres")
}

type Post struct {
    ID        int64         `db:"id"`
    Title     string        `db:"title"`
    Body      string        `db:"body"`
    UserID    int64         `db:"user_id"`
    State     string        `db:"state"`
    UpdatedAt dat.NullTime  `db:"updated_at"`
    CreatedAt dat.NullTime  `db:"created_at"`
}

func main() {
    var post Post
    err := DB.
        Select("id, title").
        From("posts").
        Where("id = $1", 13).
        QueryStruct(&post)
    fmt.Println("Title", post.Title)
}

Feature highlights

Use Builders or SQL

Query Builder

var posts []*Post
err := DB.
    Select("title", "body").
    From("posts").
    Where("created_at > $1", someTime).
    OrderBy("id ASC").
    Limit(10).
    QueryStructs(&posts)

Plain SQL

err = DB.SQL(`
    SELECT title, body
    FROM posts WHERE created_at > $1
    ORDER BY id ASC LIMIT 10`,
    someTime,
).QueryStructs(&posts)

Note: dat does not trim the SQL string, thus any extra whitespace is transmitted to the database.

In practice, SQL is easier to write with backticks. Indeed, the reason this library exists is that most SQL builders introduce a DSL to insulate the user from SQL.

Query builders shine when dealing with data transfer objects (structs).

Fetch Data Simply

Query then scan result to struct(s)

var post Post
err := DB.
    Select("id, title, body").
    From("posts").
    Where("id = $1", id).
    QueryStruct(&post)

var posts []*Post
err = DB.
    Select("id, title, body").
    From("posts").
    Where("id > $1", 100).
    QueryStructs(&posts)

Query scalar values or a slice of values

var n int64
DB.SQL("SELECT count(*) FROM posts WHERE title=$1", title).QueryScalar(&n)

var ids []int64
DB.SQL("SELECT id FROM posts").QuerySlice(&ids)

Field Mapping

dat DOES NOT map fields automatically like sqlx. You must explicitly set db struct tags in your types.

Embedded fields are mapped breadth-first.

type Realm struct {
    RealmUUID string `db:"realm_uuid"`
}
type Group struct {
    GroupUUID string `db:"group_uuid"`
    *Realm
}

g := &Group{Realm: &Realm{"11"}, GroupUUID: "22"}

sql, args := InsertInto("groups").Columns("group_uuid", "realm_uuid").Record(g).ToSQL()
expected := `
    INSERT INTO groups ("group_uuid", "realm_uuid")
    VALUES ($1, $2)
`

Blacklist and Whitelist

Control which columns get inserted or updated when processing external data

// userData came in from http.Handler, prevent them from setting protected fields
DB.InsertInto("payments").
    Blacklist("id", "updated_at", "created_at").
    Record(userData).
    Returning("id").
    QueryScalar(&userData.ID)

// ensure session user can only update his information
DB.Update("users").
    SetWhitelist(user, "user_name", "avatar", "quote").
    Where("id = $1", session.UserID).
    Exec()

IN queries

applicable when dat.EnableInterpolation == true

Simpler IN queries which expand correctly

ids := []int64{10,20,30,40,50}
b := DB.SQL("SELECT * FROM posts WHERE id IN $1", ids)
b.MustInterpolate() == "SELECT * FROM posts WHERE id IN (10,20,30,40,50)"

Tracing SQL

dat uses logxi for logging. By default, logxi logs all warnings and errors to the console. dat logs the SQL and its arguments on any error. In addition, dat logs slow queries as warnings if runner.LogQueriesThreshold > 0

To trace all SQL, set environment variable

LOGXI=dat* yourapp

CRUD

Create

Use Returning and QueryStruct to insert and update struct fields in one trip

var post Post

err := DB.
    InsertInto("posts").
    Columns("title", "state").
    Values("My Post", "draft").
    Returning("id", "created_at", "updated_at").
    QueryStruct(&post)

Use Blacklist and Whitelist to control which record (input struct) fields are inserted.

post := Post{Title: "Go is awesome", State: "open"}
err := DB.
    InsertInto("posts").
    Blacklist("id", "user_id", "created_at", "updated_at").
    Record(&post).
    Returning("id", "created_at", "updated_at").
    QueryStruct(&post)

// use wildcard to include all columns
err := DB.
    InsertInto("posts").
    Whitelist("*").
    Record(&post).
    Returning("id", "created_at", "updated_at").
    QueryStruct(&post)

Insert Multiple Records

// create builder
b := DB.InsertInto("posts").Columns("title")

// add some new posts
for i := 0; i < 3; i++ {
    b.Record(&Post{Title: fmt.Sprintf("Article %d", i)})
}

// OR (this is more efficient as it does not do any reflection)
for i := 0; i < 3; i++ {
    b.Values(fmt.Sprintf("Article %d", i))
}

// execute statement
_, err := b.Exec()

Insect inserts a record if it does not exist, or selects the existing row, in one trip to the database

sql, args := DB.
    Insect("tab").
    Columns("b", "c").
    Values(1, 2).
    Where("d = $1", 3).
    Returning("id", "f", "g").
    ToSQL()

sql == `
WITH
    sel AS (SELECT id, f, g FROM tab WHERE (d = $1)),
    ins AS (
        INSERT INTO "tab"("b","c")
        SELECT $2,$3
        WHERE NOT EXISTS (SELECT 1 FROM sel)
        RETURNING "id","f","g"
    )
SELECT * FROM ins UNION ALL SELECT * FROM sel
`

Read

var other Post

err = DB.
    Select("id, title").
    From("posts").
    Where("id = $1", post.ID).
    QueryStruct(&other)

published := `
    WHERE user_id = $1
        AND state = 'published'
`

var posts []*Post
err = DB.
    Select("id, title").
    From("posts").
    Scope(published, 100).
    QueryStructs(&posts)

Update

Use Returning to fetch columns updated by triggers. For example, an update trigger on the "updated_at" column

err = DB.
    Update("posts").
    Set("title", "My New Title").
    Set("body", "markdown text here").
    Where("id = $1", post.ID).
    Returning("updated_at").
    QueryScalar(&post.UpdatedAt)

Upsert - Update or Insert

sql, args := DB.
    Upsert("tab").
    Columns("b", "c").
    Values(1, 2).
    Where("d=$1", 4).
    Returning("f", "g").
    ToSQL()

expected := `
WITH
    upd AS (
        UPDATE tab
        SET "b" = $1, "c" = $2
        WHERE (d=$3)
        RETURNING "f","g"
    ), ins AS (
        INSERT INTO "tab"("b","c")
        SELECT $1,$2
        WHERE NOT EXISTS (SELECT 1 FROM upd)
        RETURNING "f","g"
    )
SELECT * FROM ins UNION ALL SELECT * FROM upd
`

applicable when dat.EnableInterpolation == true

To reset columns to their default DDL value, use dat.DEFAULT. For example, to reset payment_type

res, err := DB.
    Update("payments").
    Set("payment_type", dat.DEFAULT).
    Where("id = $1", 1).
    Exec()

Use SetBlacklist and SetWhitelist to control which fields are updated.

// create blacklists for each of your structs
blacklist := []string{"id", "created_at"}
p := paymentStructFromHandler

_, err := DB.
    Update("payments").
    SetBlacklist(p, blacklist...).
    Where("id = $1", p.ID).
    Exec()

Use a map of attributes

attrsMap := map[string]interface{}{"name": "Gopher", "language": "Go"}
result, err := DB.
    Update("developers").
    SetMap(attrsMap).
    Where("language = $1", "Ruby").
    Exec()

Delete

result, err = DB.
    DeleteFrom("posts").
    Where("id = $1", otherPost.ID).
    Exec()

Joins

Define JOINs in the argument to From

err = DB.
    Select("u.*, p.*").
    From(`
        users u
        INNER JOIN posts p on (p.author_id = u.id)
    `).
    Where("p.state = 'published'").
    QueryStructs(&liveAuthors)

Scopes

Scopes predefine JOIN and WHERE conditions. Scopes may be used with DeleteFrom, Select and Update.

As an example, a "published" scope might define published posts by user.

publishedPosts := `
    INNER JOIN users u on (p.author_id = u.id)
    WHERE
        p.state = 'published' AND
        p.deleted_at IS NULL AND
        u.user_name = $1
`

unpublishedPosts := `
    INNER JOIN users u on (p.author_id = u.id)
    WHERE
        p.state != 'published' AND
        p.deleted_at IS NULL AND
        u.user_name = $1
`

err = DB.
    Select("p.*").                      // must qualify columns
    From("posts p").
    Scope(publishedPosts, "mgutz").
    QueryStructs(&posts)

Creating Connections

All queries are made in the context of a connection, which is acquired from the underlying SQL driver's pool.

For one-off operations, use DB directly

err := DB.SQL(sql).QueryStruct(&post)

For multiple operations, create a Tx transaction. Either defer Tx.AutoCommit() or defer Tx.AutoRollback() MUST be called

func PostsIndex(rw http.ResponseWriter, r *http.Request) {
    tx, _ := DB.Begin()
    defer tx.AutoRollback()

    // Do queries with the session
    var post Post
    err := tx.Select("id, title").
        From("posts").
        Where("id = $1", post.ID).
        QueryStruct(&post)
    if err != nil {
        // `defer AutoRollback()` is used, no need to rollback on error
        rw.WriteHeader(500)
        return
    }

    // do more queries with transaction ...

    // MUST commit or AutoRollback() will rollback
    tx.Commit()
}

DB and Tx implement the runner.Connection interface to keep code DRY

func getUsers(conn runner.Connection) ([]*dto.Users, error) {
    sql := `
        SELECT *
        FROM users
    `
    var users []*dto.Users
    err := conn.SQL(sql).QueryStructs(&users)
    if err != nil {
        return nil, err
    }
    return users, nil
}

Nested Transactions

Nested transaction logic is as follows:

  • If Commit is called in a nested transaction, the operation results in no operation (NOOP). Only the top level Commit commits the transaction to the database.

  • If Rollback is called in a nested transaction, then the entire transaction is rolled back. Tx.IsRollbacked is set to true.

  • Either defer Tx.AutoCommit() or defer Tx.AutoRollback() MUST BE CALLED for each corresponding Begin. The internal state of nested transactions is tracked in these two methods.

func nested(conn runner.Connection) error {
    tx, err := conn.Begin()
    if err != nil {
        return err
    }
    defer tx.AutoRollback()

    _, err = tx.SQL(`INSERT INTO users (email) VALUES ($1)`, "user@example.com").Exec()
    if err != nil {
        return err
    }
    // prevents AutoRollback; Commit returns any commit error
    return tx.Commit()
}

func top() {
    tx, err := DB.Begin()
    if err != nil {
        logger.Fatal("Could not create transaction")
    }
    defer tx.AutoRollback()

    err = nested(tx)
    if err != nil {
        return
    }
    // top level commits the transaction
    tx.Commit()
}

Timeouts

A timeout may be set on any Query* or Exec with the Timeout method. When a timeout is set, the query runs in a separate goroutine and, should the timeout occur, dat cancels the query via Postgres' pg_cancel_backend.

_, err := DB.SQL("SELECT pg_sleep(1)").Timeout(1 * time.Millisecond).Exec()
err == dat.ErrTimedout

Dates

Use the dat.NullTime type to properly handle nullable dates from JSON and Postgres.
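
For example (a sketch; the Event type and column names are illustrative):

type Event struct {
    ID         int64        `db:"id" json:"id"`
    FinishedAt dat.NullTime `db:"finished_at" json:"finishedAt"`
}

var event Event
err := DB.
    Select("id, finished_at").
    From("events").
    Where("id = $1", 1).
    QueryStruct(&event)

// a NULL column scans with Valid == false; json.Marshal renders it as null
if event.FinishedAt.Valid {
    fmt.Println(event.FinishedAt.Time)
}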

Constants

applicable when dat.EnableInterpolation == true

dat provides often-used constants for SQL statements; a short usage sketch follows the list

  • dat.DEFAULT - inserts DEFAULT
  • dat.NOW - inserts NOW()
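
For instance (a sketch; the posts table and its columns are assumed), both constants inline directly into the generated SQL when interpolation is enabled:

var id int64
err := DB.
    InsertInto("posts").
    Columns("title", "created_at").
    Values("Constants example", dat.NOW).
    Returning("id").
    QueryScalar(&id)
// the interpolated SQL contains NOW() rather than a bound placeholder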

Defining Constants

UnsafeStrings and constants will panic unless dat.EnableInterpolation == true

To define SQL constants, use UnsafeString

const CURRENT_TIMESTAMP = dat.UnsafeString("NOW()")
DB.SQL("UPDATE table SET updated_at = $1", CURRENT_TIMESTAMP)

UnsafeString is exactly that, UNSAFE. If you must use it, create a constant and NEVER use UnsafeString directly as an argument like this

DB.SQL("UPDATE table SET updated_at = $1", dat.UnsafeString(someVar))

Primitive Values

Load scalar and slice values.

var id int64
var userID string
err := DB.
    Select("id", "user_id").From("posts").Limit(1).QueryScalar(&id, &userID)

var ids []int64
err = DB.Select("id").From("posts").QuerySlice(&ids)

Caching

dat implements caching backed by an in-memory or Redis store. The in-memory store is not recommended for production use. Because Redis stores string values, caching works with any struct or primitive type that can be marshaled/unmarshaled cleanly with the json package.

Time is especially problematic, as JavaScript, Postgres and Go have different time formats. Use dat.NullTime if you are getting "cannot parse time" errors.

Caching is performed before the database driver, lessening the workload on the database.

// key-value store (kvs) package
import "gopkg.in/mgutz/dat.v1/kvs"

func init() {
    // Redis: namespace is the prefix for keys and should be unique
    store, err := kvs.NewRedisStore("namespace:", ":6379", "passwordOrEmpty")

    // Or, in-memory store provided by [go-cache](https://github.com/pmylund/go-cache)
    cleanupInterval := 30 * time.Second
    store = kvs.NewMemoryStore(cleanupInterval)

    runner.SetCache(store)
}

// Cache states query for a year using key "namespace:states"
b, err := DB.
    SQL(`SELECT * FROM states`).
    Cache("states", 365 * 24 * time.Hour, false).
    QueryJSON()

// Without a key, the checksum of the query is used as the cache key.
// In this example, the interpolated SQL will contain the user_name value
// (if EnableInterpolation is true) effectively caching each user.
//
// cacheID == checksum("SELECT * FROM users WHERE user_name='mario'")
b, err := DB.
    SQL(`SELECT * FROM users WHERE user_name = $1`, user.UserName).
    Cache("", 365 * 24 * time.Hour, false).
    QueryJSON()

// Prefer using known unique IDs to avoid the computation cost
// of the checksum key.
key := "user:" + user.UserName
b, err := DB.
    SQL(`SELECT * FROM users WHERE user_name = $1`, user.UserName).
    Cache(key, 15 * time.Minute, false).
    QueryJSON()

// Set invalidate to true to force setting the key
statesUpdated := true
b, err := DB.
    SQL(`SELECT * FROM states`).
    Cache("states", 365 * 24 * time.Hour, statesUpdated).
    QueryJSON()

// Clears the entire cache
runner.Cache.FlushDB()

// Delete a single key
runner.Cache.Del("fookey")

SQL Interpolation

Interpolation is DISABLED by default. Set dat.EnableInterpolation = true to enable.

dat can interpolate locally to inline query arguments. For example, this statement

db.Exec(
    "INSERT INTO t (a, b, c, d) VALUES ($1, $2, $3, $4)",
    1, 2, 3, 4,
)

is sent to the database with inlined args, bypassing prepared statement logic in the lib/pq layer

"INSERT INTO t (a, b, c, d) VALUES (1, 2, 3, 4)"

Interpolation provides these benefits:

  • Performance improvements
  • Debugging/tracing is simpler with interpolated SQL
  • May use safe SQL constants like dat.NOW and dat.DEFAULT
  • Expand placeholders with slice values $1 => (1, 2, 3)

Read SQL Interpolation in the wiki for more details, including notes on SQL injection.

LICENSE

The MIT License (MIT)

Comments
  • Serializing custom types

    I have a struct like this

    type JSON map[string]interface{}
    type Data struct {
        ID   string `db:"id" json:"id,omitempty"`
        meta JSON   `db:"meta" json:"meta,omitempty"`
    }
    

    But when I try to insert it, it gives this error: sql: converting Exec argument #7's type: unsupported type common.JSON, a map. Is there some way to overcome this? I want this meta to be serialized to a string and then stored.
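
    One common workaround (a sketch using the standard database/sql interfaces, not something from this thread) is to implement driver.Valuer and sql.Scanner on the map type, so the driver serializes it to JSON on the way in and decodes it on the way out:

    import (
        "database/sql/driver"
        "encoding/json"
        "errors"
    )

    type JSON map[string]interface{}

    // Value marshals the map to JSON bytes for the driver
    func (j JSON) Value() (driver.Value, error) {
        return json.Marshal(j)
    }

    // Scan decodes a JSON value coming back from Postgres
    func (j *JSON) Scan(src interface{}) error {
        b, ok := src.([]byte)
        if !ok {
            return errors.New("JSON.Scan: expected []byte")
        }
        return json.Unmarshal(b, j)
    }

    Note the struct field must also be exported (Meta, not meta) for reflection to see it.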

  • Glide requirement causing major issues

    This is more a feature request than an issue, but I wish you'd remove the requirement for Glide. You should leave the vendoring up to the application utilizing your package, since vendoring isn't "inherited".

    You've utilized Glide-specific functionality in your code (subpackages), which means if our application doesn't use Glide (it doesn't), we have to take extra manual steps which aren't easily reproducible locally for development and updates or remotely during continuous integration, etc.

  • What's the state of v2?

    First of all, thanks for the logxi library! Been using it for quite some time now.

    Dat looks very promising since I'm already using both logxi and sqlx, but what's the state of v2? Should I start using it right away or should I wait?

    Best regards, Christoffer

  • Why the custom interpolation logic for `dat.NullTime`?

    I was trying to figure out why dat.NullTime instances were being inserted as the 0 value instead of NULL when EnableInterpolation is true. It turns out it's because interpolating prefers the Interpolator interface to the Valuer interface, and dat.NullTime implements this here: https://github.com/mgutz/dat/blob/v1/types.go#L264.

    Besides the fact that it should definitely return NULL if the time is not valid, I'm not sure why it implements the interpolator interface in the first place. Is the default serialization of time.Time not sufficient? If you remove that function, NULLs work correctly and times get serialized, but I'm unclear if they're being serialized in the same manner or not.

  • IN queries and MustInterpolate

    I'm using the v2 version of dat, and MustInterpolate is mentioned only in the documentation; I can't find it anywhere in the code. Without it, queries end up with $brj$ spliced in.

    What's correct way of doing IN queries and what project branch is most actively developed at the moment?

  • Add "ForUpdate" to SelectBuilder?

    It would be nice to have a ForUpdate function in the SelectBuilder for working with transactions and row-level locking.

    Or is there another approach I'm missing?

  • Support for scoped queries

    The query syntax is pretty nice, but it would be nice to not have to write every expression as a string. Specifically, for Where().

    What do you think of this form:

    err = sess.Select("*").From("posts").Where(dat.Cond{"id >": 100}).QueryStructs(&posts)
    

    dat.Cond is defined as:

    type Cond map[string]interface{}
    

    and the cool thing is you can easily have cond := dat.Cond{} and then do cond["field"] = "val"

    and then build the condition string as you build the entire query.

    if .Where() accepted an ...interface{} then both forms could easily be supported. Or leave .Where() and add .Scope() or .WhereCond(), ...

    also, is Select("*") implied?

  • logxi

    Hello,

    after yesterday's commit with logxi, we can no longer build our application.

    It says: gopkg.in\mgutz\dat.v1\init.go:7:2: no buildable Go source files in .....\github.com\mgutz\logxi

    We tried deleting all packages and installing them again, updating, etc. Same error every time.

    Could you check it out please?

    Thank you

  • Doesn't support schemas

    When using schemas, table names are quoted incorrectly by dat in the builders. For example:

    db.Update("hello.world").Set("name", "John Doe").Where("id = $1", 23).Exec()
    

    Generates a query that looks like:

    update "hello.world" set "name" = 'John Doe' where "id" = 23
    

    It should be:

    update "hello"."world" set "name" = 'John Doe' where "id" = 23
    
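
    A sketch of the quoting rule the fix implies (illustrative, not dat's actual code): split the identifier on the dot and quote each part separately:

    import "strings"

    // quoteIdent turns hello.world into "hello"."world"
    // (naive: identifiers containing literal dots would need escaping)
    func quoteIdent(name string) string {
        parts := strings.Split(name, ".")
        for i, p := range parts {
            parts[i] = `"` + p + `"`
        }
        return strings.Join(parts, ".")
    }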
  • Use dependency management solution

    With vendor/ on by default in Go 1.6, it would be really convenient if dat used it for its dependencies. We use glide for our dependency management, which will trace our dependencies (including dat) if they implement a vendor folder, or use Glide, godep, GB or GPM.

  • Update().SetBlacklist() doesn't understand embedded structs

    When using Update(...).SetBlacklist(...), the reflection methods fail to traverse into embedded structs with the error message:

    <xxx Value> must have db struct tags for all fields: db:""

    It seems there is a use of reflectx throughout the dat package which understands struct traversal, but the update methods use their own reflection.

    For example:

    type Model struct {
        ID        string    `json:"id" db:"id"`
        CreatedAt time.Time `json:"createdAt" db:"created_at"`
        UpdatedAt time.Time `json:"updatedAt" db:"updated_at"`
    }
    
    type Customer struct {
        Model
        First              string         `json:"first" db:"first"`
        Last               string         `json:"last" db:"last"`
    }
    
    customer := Customer{}
    err := tx.
        Update(TableCustomers).
        SetBlacklist(customer, "id", "created_at", "updated_at").
        Where("id = $1", customer.ID).
        Returning("updated_at").
        QueryScalar(&customer.UpdatedAt)
    

    Not only that, but it fails with a panic which takes everything down with it.

  • Dependency issue with mgutz logxi

    so I noticed in init.go

    package dat
    
    import (
    	"fmt"
    	"strconv"
    
    	"github.com/mgutz/logxi"
    )
    

    This version doesn't have any go files at all - as soon as I added v1 to it, it fixed my dependency issues.

    Is there supposed to be a version I am missing for this to reference the right v1 folder?

  • Support setting nullable fields

    Issue

    Added support for setting a nullable field. Previously this would fail because a nil pointer cannot be set:

    // Nullable  *string         `db:"nullable"`
    
    person3 := Person{Name: "Barack", Nullable: nil}
    err = s.
    	InsertInto("people").
    	Columns("name", "nullable").
    	Record(person3).
    	Returning("id", "nullable").
    	QueryStruct(&person3)
    

    Fix

    The fix (discovered by @pascallouisperez) is to skip fields where !v.CanSet().
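
    A minimal sketch of that idea (illustrative, not the actual patch): when walking the record's fields with reflect, skip anything the reflection API reports as unsettable:

    // record is a pointer to the struct being inserted or updated
    v := reflect.ValueOf(record).Elem()
    for i := 0; i < v.NumField(); i++ {
        f := v.Field(i)
        if !f.CanSet() {
            continue // e.g. unexported fields cannot be assigned via reflection
        }
        // ... bind the column value to f as before
    }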

    Tests

    I added tests for inserting, selecting, and updating nullable fields.

  • sqlx-runner exec error

    Line 719 in sqlx-runner/exec.go won't compile, as the NewV4 method returns a string and an error, which causes a multi-value return error:

    https://github.com/satori/go.uuid/blob/master/generator.go

    func NewV4() (UUID, error) { return global.NewV4() }

  • Wrapping guid.NewV4() to get prior behavior returning a single value.

    Recently, the go.uuid library was modified to return a string and an error when generating UUIDs. See [1]. For backwards compatibility, a Must func helper is provided which panics if there are any errors, recovering the prior behavior.

    [1] https://github.com/satori/go.uuid/commit/0ef6afb2f6cdd6cdaeee3885a95099c63f18fc8c
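
    The helper follows the common Go Must pattern; a sketch of its shape (names per go.uuid):

    // Must panics if err is non-nil, restoring the old single-value call style:
    //   id := uuid.Must(uuid.NewV4())
    func Must(u uuid.UUID, err error) uuid.UUID {
        if err != nil {
            panic(err)
        }
        return u
    }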

  • Fix panic on NullTime.UnmarshalJSON

    type WithDate struct {
    	Time dat.NullTime `json:"time"`
    }
    
    func TestParseNumberAsDate(t *testing.T) {
    	data := `{"time":5}`
    	result := &WithDate{}
    	json.Unmarshal([]byte(data), result) // PANIC!
    }
    
  • add support for context

    Allow passing context.Context to the sql package, using the *Context funcs on sqlx. This needs a more recent version of sqlx - see https://github.com/jmoiron/sqlx/pull/270

Data-builder - Data builder with golang

databuilder import "github.com/go-coldbrew/data-builder"

Feb 5, 2022
Type safe SQL builder with code generation and automatic query result data mapping

Jet Jet is a complete solution for efficient and high performance database access, consisting of type-safe SQL builder with code generation and automa

Jan 6, 2023
A Go (golang) package that enhances the standard database/sql package by providing powerful data retrieval methods as well as DB-agnostic query building capabilities.

ozzo-dbx Summary Description Requirements Installation Supported Databases Getting Started Connecting to Database Executing Queries Binding Parameters

Dec 31, 2022
Squat is an application that provides simple SQL data generation functionality.

Squat Squat is an application that provides simple SQL data generation functionality. It generates synthetic SQL data based on the table definition, t

Sep 22, 2022
Tpf2-tpnetmap-toolkit - A toolkit to create svg map images from TransportFever2 world data

tpf2-tpnetmap-toolkit A toolkit that creates svg map images from TransportFever2 world data. 1. Installation

Feb 17, 2022
Simple to do list API with Gin and Gorm (with Postgres)

go-todo Simple to do list API with Gin and Gorm (with Postgres) Docker Clone this repository and run: docker-compose up You can then hit the followin

Aug 29, 2022
Go-postgres - go-postgres library provide NoSQL functionality which can execute queries with pool of connections

GO Postgres go-postgres library provide NoSQL functionality which can execute queries with pool of connections. What is this repository for? Establish

Dec 31, 2021
Sensu-go-postgres-metrics - The sensu-go-postgres-metrics is a sensu check that collects PostgreSQL metrics

sensu-go-postgres-metrics Table of Contents Overview Known issues Usage examples

Jan 12, 2022
rpCheckup is an AWS resource policy security checkup tool that identifies public, external account access, intra-org account access, and private resources.

rpCheckup - Catch AWS resource policy backdoors like Endgame rpCheckup is an AWS resource policy security checkup tool that identifies public, externa

Dec 26, 2022
A cloud native Identity & Access Proxy / API (IAP) and Access Control Decision API

Heimdall Heimdall is inspired by Ory's OAthkeeper, tries however to resolve the functional limitations of that product by also building on a more mode

Jan 6, 2023
Simple and easy to use client for stock market, forex and crypto data from finnhub.io written in Go. Access real-time financial market data from 60+ stock exchanges, 10 forex brokers, and 15+ crypto exchanges

go-finnhub Simple and easy to use client for stock, forex and crpyto data from finnhub.io written in Go. Access real-time market data from 60+ stock e

Dec 28, 2022
Graphoscope: a solution to access multiple independent data sources from a common UI and show data relations as a graph

Graphoscope A solution to access multiple independent data sources from a common UI and show data relations as a graph: Contains a list of by default

May 26, 2022
The open and composable observability and data visualization platform. Visualize metrics, logs, and traces from multiple sources like Prometheus, Loki, Elasticsearch, InfluxDB, Postgres and many more.

The open-source platform for monitoring and observability. Grafana allows you to query, visualize, alert on and understand your metrics no matter wher

Jan 3, 2023
Ltree Visualizer - A golang library to visualize postgres ltree type data using DOT language and Graphviz

Ltree Visualizer A golang library to visualize postgres ltree type data using DOT language and Graphviz What is Ltree? Ltree is a data type which is u

Jun 12, 2022
2/2 microservice to serve requests when postgres data is required.

prmeet-nats This is a NATs communication layer for prmeet microservice. Intention. The library is intended to be a starting point for a project with a

Jan 11, 2022
A toolkit for replaying time series data.

Replay Toolkit The replay package provides some simple tools for replaying captured data at realtime. I use this in various tools that take logged dat

Aug 13, 2019
GoAdmin is a toolkit to help you build a data visualization admin panel for your golang app.

the missing golang data admin panel builder tool. Documentation | 中文文档 | 中文介绍 | DEMO | 中文DEMO | Twitter | Forum Inspired by laravel-admin Preface GoAd

Nov 25, 2021
Substation is a cloud native toolkit for building modular ingest, transform, and load (ITL) data pipelines

Substation Substation is a cloud native data pipeline toolkit. What is Substation? Substation is a modular ingest, transform, load (ITL) application f

Dec 30, 2022
Data access layer for PostgreSQL, CockroachDB, MySQL, SQLite and MongoDB with ORM-like features.

upper/db is a productive data access layer (DAL) for Go that provides agnostic tools to work with different data sources

Jan 3, 2023
V3IO Frames ("Frames") is a Golang based remote data frames access (over gRPC or HTTP stream)

V3IO Frames ("Frames") is a multi-model open-source data-access library that provides a unified high-performance DataFrame API for working with different types of data sources (backends). The library was developed by Iguazio to simplify working with data in the Iguazio Data Science Platform ("the platform"), but it can be extended to support additional backend types.

Oct 1, 2022