pq - A pure Go postgres driver for Go's database/sql package


Install

go get github.com/lib/pq

Features

  • SSL
  • Handles bad connections for database/sql
  • Scan time.Time correctly (i.e. timestamp[tz], time[tz], date)
  • Scan binary blobs correctly (i.e. bytea)
  • Package for hstore support
  • COPY FROM support
  • pq.ParseURL for converting URLs to connection strings for sql.Open.
  • Many libpq compatible environment variables
  • Unix socket support
  • Notifications: LISTEN/NOTIFY
  • pgpass support
  • GSS (Kerberos) auth
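
For illustration, here is a rough standard-library sketch of the kind of URL-to-connection-string conversion pq.ParseURL performs (value escaping and defaulting are omitted; toConnString is a hypothetical helper, not part of pq):

```go
package main

import (
	"fmt"
	"net/url"
	"sort"
	"strings"
)

// toConnString converts a postgres:// URL into a space-separated
// key=value connection string, roughly as pq.ParseURL does.
func toConnString(raw string) (string, error) {
	u, err := url.Parse(raw)
	if err != nil {
		return "", err
	}
	kv := map[string]string{}
	if u.User != nil {
		kv["user"] = u.User.Username()
		if pw, ok := u.User.Password(); ok {
			kv["password"] = pw
		}
	}
	if h := u.Hostname(); h != "" {
		kv["host"] = h
	}
	if p := u.Port(); p != "" {
		kv["port"] = p
	}
	if db := strings.TrimPrefix(u.Path, "/"); db != "" {
		kv["dbname"] = db
	}
	for k, v := range u.Query() {
		kv[k] = v[0]
	}
	// Sort keys for deterministic output.
	keys := make([]string, 0, len(kv))
	for k := range kv {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	parts := make([]string, 0, len(keys))
	for _, k := range keys {
		parts = append(parts, k+"="+kv[k])
	}
	return strings.Join(parts, " "), nil
}

func main() {
	s, _ := toConnString("postgres://bob:secret@1.2.3.4:5432/mydb?sslmode=verify-full")
	fmt.Println(s) // dbname=mydb host=1.2.3.4 password=secret port=5432 sslmode=verify-full user=bob
}
```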

Tests

go test is used for testing. See TESTS.md for more details.

Status

This package is effectively in maintenance mode and is not actively developed. Small patches and features are only rarely reviewed and merged. We recommend using pgx which is actively maintained.

Comments
  • Multiple "pq: unexpected describe rows response" errors

    We've been experiencing errors with lib/pq that eventually result in a state where no database connections in the pool are free. Max connections, as set by SetMaxOpenConns, is not reached.

    The errors normally start with db.Prepare:

    Could not prepare statement: pq: unexpected describe rows response: '3'
    

    We also see the unexpected response 'C'. These develop into multiple occurrences of

    sql: statement expects 0 inputs; got 4
    

    and similar errors on statements that have previously been prepared with no errors.

    To reiterate, we don't query any prepared statements that returned errors from Prepare. These errors are returned from queries on successfully prepared statements.

    Then, we see occurrences of failures of Begin:

    Could not start a transaction: pq: unknown response for simple query: '1'
    Could not start a transaction: unexpected command tag 
    Could not start a transaction: pq: unexpected transaction status idle in transaction
    

    The only error in postgres around this time is:

    FATAL:  invalid frontend message type 90
    

    This error appeared around 20 seconds after the initial errors were seen in our application log.

    pprof indicates that all goroutines querying the database were netpolling:

    #   0x423886    netpollblock+0xa6                       /usr/local/src/go/src/pkg/runtime/netpoll.goc:280
    #   0x4231ea    net.runtime_pollWait+0x6a                   /usr/local/src/go/src/pkg/runtime/netpoll.goc:116
    #   0x695534    net.(*pollDesc).Wait+0x34                   /usr/local/src/go/src/pkg/net/fd_poll_runtime.go:81
    #   0x695590    net.(*pollDesc).WaitRead+0x30                   /usr/local/src/go/src/pkg/net/fd_poll_runtime.go:86
    #   0x696910    net.(*netFD).Read+0x2a0                     /usr/local/src/go/src/pkg/net/fd_unix.go:204
    #   0x6a5825    net.(*conn).Read+0xc5                       /usr/local/src/go/src/pkg/net/net.go:122
    #   0x48a1a0    bufio.(*Reader).fill+0x110                  /usr/local/src/go/src/pkg/bufio/bufio.go:91
    #   0x48a5a4    bufio.(*Reader).Read+0x1a4                  /usr/local/src/go/src/pkg/bufio/bufio.go:159
    #   0x46f566    io.ReadAtLeast+0xf6                     /usr/local/src/go/src/pkg/io/io.go:288
    #   0x46f6d1    io.ReadFull+0x71                        /usr/local/src/go/src/pkg/io/io.go:306
    #   0x4f5f9b    github.com/lib/pq.(*conn).recvMessage+0x10b         /home/matt/dev/go/src/github.com/lib/pq/conn.go:637
    #   0x4f6357    github.com/lib/pq.(*conn).recv1+0x27                /home/matt/dev/go/src/github.com/lib/pq/conn.go:690
    #   0x4f5242    github.com/lib/pq.(*conn).prepareToSimpleStmt+0x822     /home/matt/dev/go/src/github.com/lib/pq/conn.go:508
    #   0x4f4997    github.com/lib/pq.(*conn).prepareTo+0x87            /home/matt/dev/go/src/github.com/lib/pq/conn.go:486
    #   0x4f5670    github.com/lib/pq.(*conn).Prepare+0x120             /home/matt/dev/go/src/github.com/lib/pq/conn.go:539
    #   0x4e5199    database/sql.(*driverConn).prepareLocked+0x49           /usr/local/src/go/src/pkg/database/sql/sql.go:250
    #   0x4e7ce1    database/sql.(*DB).prepare+0xb1                 /usr/local/src/go/src/pkg/database/sql/sql.go:828
    #   0x4e7b9c    database/sql.(*DB).Prepare+0x5c                 /usr/local/src/go/src/pkg/database/sql/sql.go:808
    

    In a separate instance of unresponsiveness, all the goroutines were waiting on mutexes, e.g.:

    #   0x4241d0    sync.runtime_Semacquire+0x30                            /usr/local/src/go/src/pkg/runtime/sema.goc:199
    #   0x47f136    sync.(*Mutex).Lock+0xd6                             /usr/local/src/go/src/pkg/sync/mutex.go:66
    #   0x4e6dd2    database/sql.(*DB).conn+0x42                            /usr/local/src/go/src/pkg/database/sql/sql.go:616
    #   0x4e7c82    database/sql.(*DB).prepare+0x32                         /usr/local/src/go/src/pkg/database/sql/sql.go:823
    #   0x4e7bbc    database/sql.(*DB).Prepare+0x5c                         /usr/local/src/go/src/pkg/database/sql/sql.go:808
    

    The logged errors weren't present in that case, so these could be two totally different problems.

    We're running Ubuntu 12.04 and pg 9.2 with go 1.2.1, and we're not assuming a trouble-free network by any means.

    I appreciate this is somewhat vague and we don't have a reproducible test case (yet), but any pointers for further investigation would be appreciated. Let me know if further background info would be helpful.

  • Add initial support for LISTEN/NOTIFY.

    It works via db.Query("LISTEN relname"), returning an infinite Rows object. This is not optimal, as a Rows.Next() call cannot be interrupted, but it should fit the API well.

    • It's somewhat ugly and shoe-horned, but seems to work.
    • I strived for minimum impact on the normal code path.
    • Only db.Query() or db.Prepare() will cause the new path to be taken. db.Exec() and Tx-specific code is unaffected. db.Exec() doesn't make sense as we need a return value, and we don't care about transactions.
  • Add support for binary_mode

    Here's my work so far towards making #209 happen. It passes all tests with both binary_mode off and on, which suggests that we don't have enough tests.

    This functionality is split into three separate commits:

    1. Refactor writeBuf into something which can be used to send multiple messages in a single write() system call. This should reduce the overhead a bit when there are a lot of threads writing at the same time.
    2. Decoupling rows and stmt so that a rows object can live without having an associated stmt object. Needed for the single round trip mode.
    3. The implementation of binary_mode, where all []byte values are sent over marked as "binary" and everything else is sent as text. Something like this is necessary because, if we want to do only a single round trip to the server per call to *sql.Query(), we don't know the SQL types of the input parameters.

    There's still likely a lot of work to do:

    1. There's no documentation
    2. There are no additional tests
    3. This might break some driver.Valuers which return a byte slice that isn't actually valid as input for the type in binary mode. If we can adopt a general guideline of "string means text, []byte means binary" then things might work out great, but e.g. the allocation penalties of such a guideline are not clear to me. This has some impact on e.g. the ongoing work for supporting arrays. Another approach might be to give up on binary mode altogether and instead always send everything over as text, but provide a special Valuer for bytea values (similarly to how we're likely going to have to have a separate Valuer for bytea arrays). Both approaches would involve trade-offs.

    I'd really like to stress point 3 directly above; this is not the only way to achieve single roundtrip mode, and at the moment I'm not convinced it's the best one, either. The performance gains from using binary mode have been in my tests almost negligible (though I've mostly tested with bytea values, since this patch doesn't use binary mode for e.g. ints even though it theoretically could), so don't get too hung up on that.
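
For reference, the "always send text" alternative mentioned in point 3 would lean on Postgres's hex text format for bytea (available since 9.0). A minimal sketch of that encoding (encodeByteaHex is a hypothetical helper, not part of this patch):

```go
package main

import (
	"encoding/hex"
	"fmt"
)

// encodeByteaHex renders a []byte in Postgres's text-format hex
// bytea encoding: a literal backslash-x followed by hex digits.
func encodeByteaHex(b []byte) string {
	return `\x` + hex.EncodeToString(b)
}

func main() {
	fmt.Println(encodeByteaHex([]byte{0xde, 0xad, 0xbe, 0xef})) // \xdeadbeef
}
```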

    Any thoughts?

  • Unchecked range error on slice.

    I've refactored my code to work around this, but there's an unchecked range error that sometimes raises a panic() when, for some reason, the list is empty:

    https://github.com/lib/pq/blob/master/conn.go#L736

    I've added a logging line, and typically I see something like this:

    === RUN Test_CreatingANewUserSuccessfully
    About to read index 0 of [25 25 1043 1043]
    About to read index 1 of [25 25 1043 1043]
    About to read index 2 of [25 25 1043 1043]
    About to read index 3 of [25 25 1043 1043]
    
    About to read index 0 of [2950]
    

    I have seen (but after refactoring and tidying up on my side, I can't now replicate) an issue where a panic is raised after output like:

    === RUN Test_CreatingANewUserSuccessfully
    About to read index 0 of []
    

    I don't quite understand how I could get an empty slice here; I'm passing something like:

    query := "INSERT INTO users (uuid, name, email, password_hash) VALUES (CASE WHEN $1::text = '' THEN uuid_generate_v4() ELSE $1::uuid END, $2, $3, $4) RETURNING uuid"
    err = store.tx.QueryRow(query, u.Uuid, u.Name, u.Email, password_hash).Scan(&uuid)
    if err != nil {
        return "", err
    }
    
  • Incorrect "idle in transaction" on Tx.Commit()

    Hi there,

    I don't know if this has been reported before or isn't considered a bug. I searched the issues for something like it but didn't find anything.

    I encountered the following situation:

    • open a transaction
    • execute a query that returns rows
    • forget to close the rows
    • call Commit()
    • repeat

    The call to Commit() returns an error but the transaction actually gets committed (as can be checked via psql). The next call to Begin() then returns an error saying "pq: unexpected transaction status idle in transaction", even though the previous tx got committed and PostgreSQL thinks there's no idle-in-transaction connection either (ps output says "idle", not "idle in transaction").

    Maybe a little code:

    func Test(user string, age int, conn *sql.DB) (int, error) {
        tx, err := conn.Begin()
        if err != nil {
            return 0, err
        }
        rollback := false

        defer func() {
            if rollback {
                tx.Rollback()
            } else {
                err := tx.Commit()
                fmt.Println(err)
            }
        }()

        rows, err := tx.Query("INSERT INTO users(id, name, age) VALUES (DEFAULT, $1, $2) RETURNING (id)", user, age)
        if err != nil {
            rollback = true
            return 0, err
        }

        var id int
        rows.Next() // because I know there will only be one row
        err = rows.Scan(&id)
        if err != nil {
            rollback = true
            return 0, err
        }

        return id, nil
    }
    

    Call this function twice and the second call will fail even though the data from the first call reached the database correctly. If the connection really were idle in transaction the data could not be seen in the corresponding database table.

    Note: I know I should have used QueryRow (and I currently do), but that's how I stumbled over this. Inserting a rows.Close() call before returning works, too.

    But it looks like pq and PostgreSQL have differing ideas about what constitutes a successful commit. Btw, the call to Commit() returns "unexpected command tag INSERT" in this case.

    I believe this is a bug in pq or should at least be clarified in the documentation.

  • Array Support

    I've been browsing the issues and PRs to see what the status of array support is. Along with a bunch of others, support is something I'd like to see.

    I'm curious as to what the consensus is as of now. Would it be nice for support to be baked into this package, or should it come from another package? Should it affect the encode/decode process, or simply be a set of scanners?

    To speak more generally, what should be supported, what cases should be handled, and what should be avoided?

    This is something I'm willing to work on; I just want to know what will be mergable.
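
    As a point of reference for the discussion, here is a minimal sketch of the text encoding an array Valuer would need to produce for a []int64 (quoting, escaping, and NULL handling omitted; arrayLiteral is a hypothetical helper, not pq API):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// arrayLiteral renders a []int64 as a Postgres array literal of the
// form {1,2,3}. Element quoting and NULLs are deliberately omitted.
func arrayLiteral(a []int64) string {
	parts := make([]string, len(a))
	for i, v := range a {
		parts[i] = strconv.FormatInt(v, 10)
	}
	return "{" + strings.Join(parts, ",") + "}"
}

func main() {
	fmt.Println(arrayLiteral([]int64{235, 401})) // {235,401}
}
```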

  • Very slow on Windows

    I've run pq on both Linux (Ubuntu 12.10) and Windows 7. I downloaded the latest version as of 15-Jan-2012.

    On Linux, no problems encountered, but I haven't done a lot of testing.

    On Windows, it is very slow: e.g. a simple SELECT on a small table takes (say) 2 seconds, and a minor update involving an update on one table and an insert on another, with a COMMIT, takes 4.5 seconds.

    psql runs fine on Windows, no slowness detected.

    Running go test takes 0.321 seconds on Linux and 39.287 seconds on Windows. Subsequent runs on Linux get just under 0.100 seconds, and on Windows just under 39 seconds.

    I had to alter conn_test.go (on Windows) because running the test gave repeated errors of 'pq: Role "ta10\Brian" does not exist'. I attempted to create the role 'ta10\Brian', but psql gave an error. I created the role 'Brian', but still got the error. Therefore, I altered conn_test.go as follows:

    // conn, err := sql.Open("postgres", "")
    conn, err := sql.Open("postgres",
        "user=postgres dbname=postgres password=super")
    
  • Add Scanner/Value implementations for Postgres network address types.

    An implementation of Scanner/Value types for the Postgres network address types, intended to make these types easier to work with. Includes the three current network address types: cidr, inet, and macaddr. Addresses issue #121.

    Some quick comments:

    1. The package name 'netaddr' was chosen to conform with Go recommended best practices. Happy to change this package name if there's a strong preference on the maintainer's part for another package name.
    2. I decided to give all of the types a consistent behavior with regards to NULL handling. Like the native value types (Bool, Int64, etc.) the corresponding structs have a 'Valid' member. When false, this indicates that the corresponding database value is NULL. Technically the native types for inet and macaddr support a nil value, and so for these cases a simpler struct could have been used. But I opted for a more consistent interface.
    3. It was unclear to me from the existing Hstore implementation whether an error in a Scan should result in a panic or the explicit return of an error. I chose to explicitly return an error rather than panic if Scan received a value that cannot be coerced to a byte array.

    Happy to discuss or revisit any of the above.

  • Simplify error handling, add error codes

    This work simplifies and improves the custom error type(s). It partially reverts an earlier contribution.

    Primary change is moving to an Error type with fields corresponding to the underlying pq error fields.

  • No way to see an Insert error from QueryRow()

    I want to insert into a table with constraints, and also get the inserted id value on success. The problem is that if QueryRow() with an INSERT...RETURNING fails, the error is a rather unhelpful "no rows in result set".

    The error I want would be something like duplicate key violates unique constraint "user_email_unique", which is the error returned by Exec(); but with Exec() I don't get the last inserted id when everything goes well.

  • implement ConnPrepareContext/StmtQueryContext/StmtExecContext interfaces

    See #1046 for more context. (pun intended)

    This is almost a line for line copy of #921 with test cases (thanks @kylejbrock). Please let me know what else needs to be done to get this merged.

    Also dropped CI testing of unsupported versions (9.5 and 9.4) per the pg docs, and added CI testing of versions 11, 12, and 13.

    closes #921 closes #1046

  • Rewrite array.go to support a generic array type?

    Hello,

    AFAIK this is the current use of the generic array:

    var x []sql.NullInt64
    db.QueryRow(`SELECT ARRAY[235, 401]`).Scan(pq.Array(&x))
    

    It would be nice if we could rewrite the generic array, or add another version of it, to support the following use case in sqlx:

    type Foobar struct {
    }
    
    type Foo struct {
    	Bar pq.GenericArrayV2[bool]
    	Baz pq.GenericArrayV2[Foobar]
    }
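
    A minimal sketch of what such a generic type could look like (GenericArrayV2 is hypothetical and not part of pq; only literal formatting is shown, without quoting, escaping, or Scan support):

```go
package main

import (
	"fmt"
	"strings"
)

// GenericArrayV2 is a hypothetical sketch of the proposed API shape.
type GenericArrayV2[T any] struct{ Elems []T }

// Literal renders the slice as a Postgres array literal, using fmt
// formatting for each element (no quoting or escaping).
func (a GenericArrayV2[T]) Literal() string {
	parts := make([]string, len(a.Elems))
	for i, e := range a.Elems {
		parts[i] = fmt.Sprint(e)
	}
	return "{" + strings.Join(parts, ",") + "}"
}

func main() {
	fmt.Println(GenericArrayV2[bool]{Elems: []bool{true, false}}.Literal()) // {true,false}
}
```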
    
  • Reduced the complexity of handlePgpass

    The complexity of handlePgpass was 30, and most of it seemed unnecessary: an anonymous func was used like a named function, and the for loop over scanner.Scan added at least 15 points of complexity on its own. So I broke out the anonymous function and moved the for loop into a function with the same behavior as the original loop. I was unable to test the changes; however, the complexity of all three functions is now under 15 and they are ready to be tested. The before-and-after Sonar reports are attached.

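
    For context, the core of what handlePgpass has to do per .pgpass line can be sketched as follows (a simplified model: escaped colons and comment lines, which the real function handles, are omitted; matchPgpass is hypothetical):

```go
package main

import (
	"fmt"
	"strings"
)

// matchPgpass checks one .pgpass line of the form
// host:port:dbname:user:password, where "*" matches anything,
// and returns the password on a match.
func matchPgpass(line, host, port, db, user string) (string, bool) {
	f := strings.Split(line, ":")
	if len(f) != 5 {
		return "", false
	}
	match := func(pat, val string) bool { return pat == "*" || pat == val }
	if match(f[0], host) && match(f[1], port) && match(f[2], db) && match(f[3], user) {
		return f[4], true
	}
	return "", false
}

func main() {
	pw, ok := matchPgpass("*:5432:mydb:bob:s3cret", "db.example.com", "5432", "mydb", "bob")
	fmt.Println(pw, ok) // s3cret true
}
```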

  • Improved the performance of CopyIn and CopyInSchema and added BufferQuoteIdentifier

    Wide tables with large column names made CopyIn and CopyInSchema slow to call. Created makeStmt to reduce duplication between CopyIn and CopyInSchema. Created BufferQuoteIdentifier to generate column names efficiently and reduce string allocations. Created BenchmarkCopy, which tests only CopyIn's ability to copy large data fast; the results showed the changed copy ran at about 20 ms versus around 700 ms for the original.

  • pq.Array does not support array of smallint from postgresql

    As of github.com/lib/pq v1.10.7, pq.Array does not support []int16, which corresponds to the smallint array data type in PostgreSQL.

    func Array(a interface{}) interface {
        driver.Valuer
        sql.Scanner
    } {
        switch a := a.(type) {
        case []bool:
            return (*BoolArray)(&a)
        case []float64:
            return (*Float64Array)(&a)
        case []float32:
            return (*Float32Array)(&a)
        case []int64:
            return (*Int64Array)(&a)
        case []int32:
            return (*Int32Array)(&a)
        case []string:
            return (*StringArray)(&a)
        case [][]byte:
            return (*ByteaArray)(&a)

        case *[]bool:
            return (*BoolArray)(a)
        case *[]float64:
            return (*Float64Array)(a)
        case *[]float32:
            return (*Float32Array)(a)
        case *[]int64:
            return (*Int64Array)(a)
        case *[]int32:
            return (*Int32Array)(a)
        case *[]string:
            return (*StringArray)(a)
        case *[][]byte:
            return (*ByteaArray)(a)
        }

        return GenericArray{a}
    }

    This causes an error during Row.Scan with the following message:

    Error: Received unexpected error: sql: Scan error on column index 2, name "columnName": pq: parsing array element index 0: pq: scanning to int16 is not implemented; only sql.Scanner
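
    Until support lands, one workaround is a custom sql.Scanner for []int16. A simplified sketch (quoted elements and NULLs are not handled; Int16Array here is hypothetical, not a pq type):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// Int16Array scans a Postgres smallint[] text literal like {1,2,3}
// into a []int16. Quoted elements and NULLs are not handled.
type Int16Array []int16

func (a *Int16Array) Scan(src interface{}) error {
	b, ok := src.([]byte)
	if !ok {
		return fmt.Errorf("cannot convert %T to Int16Array", src)
	}
	s := strings.Trim(string(b), "{}")
	if s == "" {
		*a = Int16Array{}
		return nil
	}
	parts := strings.Split(s, ",")
	out := make(Int16Array, len(parts))
	for i, p := range parts {
		n, err := strconv.ParseInt(strings.TrimSpace(p), 10, 16)
		if err != nil {
			return err
		}
		out[i] = int16(n)
	}
	*a = out
	return nil
}

func main() {
	var a Int16Array
	if err := a.Scan([]byte("{1,2,3}")); err != nil {
		panic(err)
	}
	fmt.Println(a) // [1 2 3]
}
```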

  • Standard registered driver does not respect context during connection

    The default driver does not implement driver.DriverContext:

    // Driver is the Postgres database driver.
    type Driver struct{}
    
    // Open opens a new connection to the database. name is a connection string.
    // Most users should only use it through database/sql package from the standard
    // library.
    func (d Driver) Open(name string) (driver.Conn, error) {
    	return Open(name)
    }
    
    func init() {
    	sql.Register("postgres", &Driver{})
    }
    

    As a result, when src/database/sql/sql.go does

    ci, err := db.connector.Connect(ctx)

    it uses the default connector, which ignores the context:

    func Open(driverName, dataSourceName string) (*DB, error) {
    	driversMu.RLock()
    	driveri, ok := drivers[driverName]
    	driversMu.RUnlock()
    	if !ok {
    		return nil, fmt.Errorf("sql: unknown driver %q (forgotten import?)", driverName)
    	}
    
    	if driverCtx, ok := driveri.(driver.DriverContext); ok {
    		connector, err := driverCtx.OpenConnector(dataSourceName)
    		if err != nil {
    			return nil, err
    		}
    		return OpenDB(connector), nil
    	}
    
    	return OpenDB(dsnConnector{dsn: dataSourceName, driver: driveri}), nil
    }
    

    This results in connections hanging indefinitely and leaking ports if the database becomes unreachable during connection setup.

  • An error: byte data as a query condition

    The code is as follows:

    tokenPlaintext := "J7H4KRUHODF42PI57KZQ43XDUM"
    hash, _ := bcrypt.GenerateFromPassword([]byte(tokenPlaintext), 12)
    query := `SELECT users.id, users.create_at, users.name, users.email, users.password_hash, users.activated, users.version FROM users INNER JOIN tokens ON users.id = tokens.user_id WHERE tokens.scope = $1 AND tokens.expiry > $2 AND tokens.hash = $3`
    args := []interface{}{tokenScope, t1, hash[:]}
    var user User
    ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
    defer cancel()
    err = db.QueryRowContext(ctx, query, args...).Scan(
        &user.ID,
        &user.CreateAt,
        &user.Name,
        &user.Account,
        &user.Password.hash,
        &user.Activated,
        &user.Version,
    )
    fmt.Println(user)

    {0 0001-01-01 00:00:00 +0000 UTC {<nil> []} false 0}

    But if we read the value of tokens.hash from the database, we can make the following comparison:

    err := bcrypt.CompareHashAndPassword(token.hash, []byte(tokenPlaintext))

    err equals nil.

    This makes me very confused. It seems that I cannot find the corresponding token for the byte-slice hash I passed in, yet when we read the hash from the database and compare it against the plaintext, the two are indeed equal.
