pggen - generate type safe Go methods from Postgres SQL queries

pggen is a tool that generates Go code to provide a typesafe wrapper around Postgres queries. If Postgres can run the query, pggen can generate code for it.

  1. Write SQL queries with a name and :one, :many, or :exec annotation.

    -- FindAuthors finds authors by first name.
    -- name: FindAuthors :many
    SELECT * FROM author WHERE first_name = pggen.arg('FirstName');
  2. Run pggen to generate Go code to create type-safe methods for each query.

    pggen gen go \
        --schema-glob author/schema.sql \
        --query-glob 'author/*.sql'
  3. Use the generated code.

    var conn *pgx.Conn
    q := NewQuerier(conn)
    aliceAuthors, err := q.FindAuthors(ctx, "alice")

Pitch

Why should you use pggen instead of the myriad of Go SQL bindings?

  • pggen is narrowly tailored to only generate code for queries you write in SQL.

  • pggen works with any Postgres database. Under the hood, pggen runs each query and uses the Postgres catalog tables, pg_type, pg_class, and pg_attribute, to get perfect type information for both the query parameters and result columns.

  • pggen works with all Postgres queries. If Postgres can run the query, pggen can generate Go code for the query.

  • pggen uses pgx, a faster replacement for lib/pq, the original Go Postgres library that's now in maintenance mode.

  • pggen provides a batch interface for each generated query with pgx.Batch. Using a batch enables sending multiple queries in a single network round-trip instead of one network round-trip per query.

Anti-pitch

I'd like to try to convince you why you shouldn't use pggen. Often, this is far more revealing than the pitch.

  • You want auto-generated models for every table in your database. pggen only generates code for each query in a query file. pggen requires custom SQL for even the simplest CRUD queries. Use gorm or any of the alternatives listed at awesome Go ORMs.

  • You use a database other than Postgres. pggen only supports Postgres. sqlc, a similar tool that inspired pggen, has early support for MySQL.

  • You want an active-record pattern where models have methods like find, create, update, and delete. pggen only generates code for queries you write. Use gorm.

  • You prefer building queries in a Go dialect instead of SQL. I'd recommend investing in really learning SQL; it will pay off. Otherwise, use squirrel, goqu, or go-sqlbuilder.

  • You don't want to add a Postgres or Docker dependency to your build phase. Use sqlc, though you might still need Docker. sqlc generates code by parsing the schema file and queries in Go without using Postgres.

Install

go get github.com/jschaf/pggen

# Make sure pggen works.
pggen gen go --help

Usage

Generate code using Docker to create the Postgres database from a schema file:

# --schema-glob runs all matching files on Dockerized Postgres during database 
# creation.
pggen gen go \
    --schema-glob author/schema.sql \
    --query-glob author/query.sql

# Output: author/query.sql.go

# Or with multiple schema files. The schema files run on Postgres
# in the order they appear on the command line.
pggen gen go \
    --schema-glob author/schema.sql \
    --schema-glob book/schema.sql \
    --schema-glob publisher/schema.sql \
    --query-glob author/query.sql

# Output: author/query.sql.go

Generate code using an existing Postgres database (useful for custom setups):

pggen gen go \
    --query-glob author/query.sql \
    --postgres-connection "user=postgres port=5555 dbname=pggen"

# Output: author/query.sql.go

Generate code for multiple query files. All the query files must reside in the same directory. If query files reside in different directories, you can use --output-dir to set a single output directory:

pggen gen go \
    --schema-glob schema.sql \
    --query-glob author/fiction.sql \
    --query-glob author/nonfiction.sql \
    --query-glob author/bestselling.sql

# Output: author/fiction.sql.go
#         author/nonfiction.sql.go
#         author/bestselling.sql.go

# Or, using a glob. Notice quotes around glob pattern to prevent shell 
# expansion.
pggen gen go \
    --schema-glob schema.sql \
    --query-glob 'author/*.sql'
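
Generate code for query files in different directories into a single output
directory with --output-dir. A sketch; the book directory and internal/db path
are illustrative:

pggen gen go \
    --schema-glob schema.sql \
    --query-glob 'author/*.sql' \
    --query-glob 'book/*.sql' \
    --output-dir internal/db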

Examples

Examples are embedded in the repo under the example directory, such as example/author and example/custom_types.

Features

  • JSON struct tags: All Row structs include JSON struct tags using the Postgres column name. To change the struct tag, use an SQL column alias.

    -- name: FindAuthors :many
    SELECT first_name, last_name as family_name FROM author;

    Generates:

    type FindAuthorsRow struct {
        FirstName   string `json:"first_name"`
        FamilyName  string `json:"family_name"`
    }
  • Acronyms: Custom acronym support so that author_id renders as AuthorID instead of AuthorId. Supports two formats:

    1. Long form: --acronym <word>=<replacement> replaces <word> with <replacement> literally. Useful for plural acronyms like author_ids, which should render as AuthorIDs, not AuthorIds. For the IDs example, use --acronym ids=IDs.

    2. Short form: --acronym <word> replaces <word> with the uppercased <WORD>. Equivalent to --acronym <word>=<WORD>.

    By default, pggen includes --acronym id to render id as ID.
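
    For example, a hypothetical invocation that combines both forms might look
    like this (a sketch; the sku column name is illustrative):

    pggen gen go \
        --schema-glob author/schema.sql \
        --query-glob author/query.sql \
        --acronym ids=IDs \
        --acronym sku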

  • Enums: Postgres enums map to Go string constant enums. The Postgres type:

    CREATE TYPE device_type AS ENUM ('undefined', 'phone', 'ipad');

    Generates the Go code:

    // DeviceType represents the Postgres enum device_type.
    type DeviceType string
    
    const (
        DeviceTypeUndefined DeviceType = "undefined"
        DeviceTypePhone     DeviceType = "phone"
        DeviceTypeIpad      DeviceType = "ipad"
    )
    
    func (d DeviceType) String() string { return string(d) }
  • Custom types: Use a custom Go type to represent a Postgres type with the --go-type flag. The format is <pg-type>=<fully-qualified-go-type>. For example:

    pggen gen go \
        --schema-glob example/custom_types/schema.sql \
        --query-glob example/custom_types/query.sql \
        --go-type 'text=github.com/jschaf/pggen/example/custom_types/mytype.String'

    pgx must be able to deserialize the Postgres type using the Go type. That means the Go type must fulfill at least one of the following:

    • The Go type is a wrapper around a primitive type, like type AuthorID int. pgx will use the decode methods on the primitive type.

    • The Go type implements both pgtype.BinaryDecoder and pgtype.TextDecoder. pgx will use the correct decoder based on the wire format. See the pgtype repo for many example types.

    • The pgx connection executing the query must have registered a data type using the Go type with ConnInfo.RegisterDataType.

      ci := conn.ConnInfo()
      
      ci.RegisterDataType(pgtype.DataType{
      	Value: new(mytype.Numeric),
      	Name: "numeric",
      	OID: pgtype.NumericOID,
      })
    • The Go type implements sql.Scanner.

    • pgx is able to use reflection to build an object to write fields into.
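
    As a minimal sketch of the first option, a wrapper around a primitive type
    might look like this (the mytype package and AuthorID name are illustrative,
    not part of pggen or the example repo):

    // Package mytype holds hand-written Go types mapped with --go-type.
    package mytype

    // AuthorID wraps int32, so pgx decodes it with the built-in int32 logic.
    type AuthorID int32

    The type is then wired up with a flag like
    --go-type 'int4=github.com/yourorg/yourmod/mytype.AuthorID'.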

Tutorial

Let's say we have a database with the following schema in author/schema.sql:

CREATE TABLE author (
  author_id  serial PRIMARY KEY,
  first_name text NOT NULL,
  last_name  text NOT NULL,
  suffix     text NULL
);

First, write a query in the file author/query.sql. The query name is FindAuthors and the query returns :many rows. A query can return :many rows, :one row, or :exec for update, insert, and delete queries.

-- FindAuthors finds authors by first name.
-- name: FindAuthors :many
SELECT * FROM author WHERE first_name = pggen.arg('FirstName');
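
For comparison, :one and :exec queries use the same annotation syntax. These
are illustrative sketches, not part of the example query file:

-- FindAuthorByID finds one author by primary key.
-- name: FindAuthorByID :one
SELECT * FROM author WHERE author_id = pggen.arg('AuthorID');

-- DeleteAuthor deletes an author by primary key.
-- name: DeleteAuthor :exec
DELETE FROM author WHERE author_id = pggen.arg('AuthorID');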

Second, use pggen to generate Go code to author/query.sql.go:

pggen gen go \
    --schema-glob author/schema.sql \
    --query-glob author/query.sql

We'll walk through the generated file author/query.sql.go:

  • The Querier interface defines the interface with methods for each SQL query. Each SQL query compiles into three methods: one method to run the query by itself, and two methods to support batching the query with pgx.Batch.

    // Querier is a typesafe Go interface backed by SQL queries.
    //
    // Methods ending with Batch enqueue a query to run later in a pgx.Batch. After
    // calling SendBatch on pgx.Conn, pgxpool.Pool, or pgx.Tx, use the Scan methods
    // to parse the results.
    type Querier interface {
        // FindAuthors finds authors by first name.
        FindAuthors(ctx context.Context, firstName string) ([]FindAuthorsRow, error)
        // FindAuthorsBatch enqueues a FindAuthors query into batch to be executed
        // later by the batch.
        FindAuthorsBatch(batch *pgx.Batch, firstName string)
        // FindAuthorsScan scans the result of an executed FindAuthorsBatch query.
        FindAuthorsScan(results pgx.BatchResults) ([]FindAuthorsRow, error)
    }

    To use the batch interface, create a *pgx.Batch, call the Batch methods, send the batch, and finally get the results with the Scan methods. See example/author/query.sql_test.go for a complete example.

    q := NewQuerier(conn)
    batch := &pgx.Batch{}
    q.FindAuthorsBatch(batch, "alice")
    q.FindAuthorsBatch(batch, "bob")
    results := conn.SendBatch(context.Background(), batch)
    aliceAuthors, err := q.FindAuthorsScan(results)
    bobAuthors, err := q.FindAuthorsScan(results)
  • The DBQuerier struct implements the Querier interface with concrete implementations of each query method.

    type DBQuerier struct {
        conn genericConn
    }
  • Create DBQuerier with NewQuerier. The genericConn parameter is an interface over the different pgx connection transports so that DBQuerier doesn't force you to use a specific connection transport. *pgx.Conn, pgx.Tx, and *pgxpool.Pool all implement genericConn.

    // NewQuerier creates a DBQuerier that implements Querier. conn is typically
    // *pgx.Conn, pgx.Tx, or *pgxpool.Pool.
    func NewQuerier(conn genericConn) *DBQuerier {
        return &DBQuerier{
            conn: conn,
        }
    }
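
    For example, with a pgxpool connection pool (a sketch; the connection string
    is a placeholder and pgxpool refers to github.com/jackc/pgx/v4/pgxpool):

    pool, err := pgxpool.Connect(ctx, "postgres://postgres@localhost:5432/pggen")
    if err != nil {
        return fmt.Errorf("connect: %w", err)
    }
    defer pool.Close()
    q := NewQuerier(pool)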
  • pggen embeds the SQL query formatted for a Postgres PREPARE statement with parameters indicated by $1, $2, etc. instead of pggen.arg('FirstName').

    const findAuthorsSQL = `SELECT * FROM author WHERE first_name = $1;`
  • pggen generates a row struct for each query, named <query_name>Row. pggen transforms the output column names into struct field names from lower_snake_case to UpperCamelCase in internal/casing/casing.go. pggen derives JSON struct tags from the Postgres column names. To change the JSON struct tag, change the column name in the query.

    type FindAuthorsRow struct {
        AuthorID  int32       `json:"author_id"`
        FirstName string      `json:"first_name"`
        LastName  string      `json:"last_name"`
        Suffix    pgtype.Text `json:"suffix"`
    }

    As a convenience, if a query only generates a single column, pggen skips creating the Row struct and returns the type directly. For example, the generated query for SELECT author_id from author returns int32, not a Row struct.
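
    As a sketch of that convenience (the query name is illustrative), a query
    like:

    -- name: FindAuthorIDs :many
    SELECT author_id FROM author;

    would generate a method that returns the column type directly:

    FindAuthorIDs(ctx context.Context) ([]int32, error)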

    pggen infers struct field types by running the query. When Postgres returns query results, Postgres also sends the column types as a header for the results. pggen looks up the types in the header using the pg_type catalog table and chooses an appropriate Go type in internal/codegen/golang/types.go.

    Choosing an appropriate type is more difficult than might seem at first glance due to null. When Postgres reports that a column has a type text, that column can have both text and null values. So, the Postgres text represented in Go can be either a string or nil. pgtype provides nullable types for all built-in Postgres types. pggen tries to infer if a column is nullable or non-nullable. If a column is nullable, pggen uses a pgtype Go type like pgtype.Text. If a column is non-nullable, pggen uses a more ergonomic type like string. pggen's nullability inference in internal/pginfer/nullability.go is rudimentary; a proper approach requires a full AST with some control flow analysis.

  • Lastly, pggen generates the implementation for each query.

    As a convenience, if there are only one or two query parameters, pggen inlines the parameters into the method definition, as with firstName below. If there are three or more parameters, pggen creates a struct named <query_name>Params to pass the parameters to the query method; see the sketch after the implementation below.

    // FindAuthors implements Querier.FindAuthors.
    func (q *DBQuerier) FindAuthors(ctx context.Context, firstName string) ([]FindAuthorsRow, error) {
        rows, err := q.conn.Query(ctx, findAuthorsSQL, firstName)
        if rows != nil {
            defer rows.Close()
        }
        if err != nil {
            return nil, fmt.Errorf("query FindAuthors: %w", err)
        }
        items := []FindAuthorsRow{}
        for rows.Next() {
            var item FindAuthorsRow
            if err := rows.Scan(&item.AuthorID, &item.FirstName, &item.LastName, &item.Suffix); err != nil {
                return nil, fmt.Errorf("scan FindAuthors row: %w", err)
            }
            items = append(items, item)
        }
        if err := rows.Err(); err != nil {
            return nil, err
        }
        return items, err
    }
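
    For a query with three or more parameters, the generated method instead
    takes a params struct. A hedged sketch with a hypothetical FindAuthorsByNames
    query (not part of the example query file):

    type FindAuthorsByNamesParams struct {
        FirstName string
        LastName  string
        Suffix    pgtype.Text
    }

    // FindAuthorsByNames finds authors by first name, last name, and suffix.
    FindAuthorsByNames(ctx context.Context, params FindAuthorsByNamesParams) ([]FindAuthorsByNamesRow, error)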

How it works

In a nutshell, pggen runs each query on Postgres to extract type information, and generates the appropriate code. In detail:

  • pggen determines input parameters by using a PREPARE statement and querying the pg_prepared_statements catalog view to get type information for each parameter.

  • pggen determines output columns by executing the query and reading the field descriptions returned with the query result rows. The field descriptions contain the type ID for each output column. The type ID is a Postgres object ID (OID), the primary key to identify a row in the pg_type catalog table.

  • pggen determines if an output column can be null using heuristics. If a column cannot be null, pggen uses more ergonomic types to represent the output like string instead of pgtype.Text. The heuristics are quite simple; see internal/pginfer/nullability.go. A proper approach requires a full Postgres SQL syntax parser with control flow analysis to determine nullability.
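
Roughly, the parameter-type lookup boils down to SQL like the following (a
sketch; the statement name is arbitrary):

PREPARE find_authors AS
SELECT * FROM author WHERE first_name = $1;

SELECT name, parameter_types
FROM pg_prepared_statements
WHERE name = 'find_authors';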

For more detail, see the original, outdated design doc and discussion with the pgx author and sqlc author.

Comparison to sqlc

The primary difference between pggen and sqlc is how each tool infers the type and nullability of the input parameters and output columns for SQL queries.

sqlc parses the queries in Go code, using cgo to call the Postgres parser.c library. After parsing, sqlc infers the types of the query parameters and result columns using custom logic in Go. In contrast, pggen gets the same type information by running the queries on Postgres and then fetching the type information from the Postgres catalog tables.

Use sqlc if you don't wish to run Postgres to generate code or if you need better nullability analysis than pggen provides.

Use pggen if you can run Postgres for code generation and you use complex queries that sqlc is unable to parse. Additionally, use pggen if you have a custom database setup that's difficult to replicate in a schema file. pggen supports running on any database with any extensions.

Comments
  • Using custom types + arrays as pggen.arg()

    Actual:

    When using custom types as an argument (or an array), the generated code just passes the generated []CustomType Go argument to QueryRow().

    select pggen.arg('my_types')::custom_type[]
    

    However []CustomType is not encodable by pgx, so it fails.

    Expected: It should create an encodable pgtype Value (pgtype.NewArrayType) before passing it to the query.

  • Support for migration tools

    Is it possible to enable support for migration tools? I tried to use pggen with golang-migrate. pggen didn't ignore the down SQL files, and codegen failed.

  • proposal: generate "queries" by stored pg function

    proposal: generate "queries" by stored pg function

    Motivation

    Generally, allow any complex nested queries. Currently it's unclear what is possible and when pggen cannot handle a query (as a result, this issue stands in contrast to the promising project description).

    In contrast, the limitations caused on the Postgres side ("optional arguments", because no reliable argument & result type reflection is possible) are clear.

    example schema

    create type list_item as (
       name text,
       color text
    );
    
    create type list_stats as (
       val1 hstore,
       val2 int[]
    );
    
    create or replace function fn1(
        param1 bigint,
        page int = 1,
        item_count int = 10,
        out res_items list_item[],
        out res_stats list_stats
    ) language plpgsql as $$begin
        -- single or many query which fills res_items and res_stats 
        -- ex: with x as (...subquery...) select array_agg(...), array_agg(...) into res_items, res_stats
    end$$;
    

    Keep in mind the same return values could also be produced by a single statement used directly as a pggen query. This example just demonstrates the approach through stored functions, since that was the situation that led me to this idea.

    example query

    -- name: Fn1 :one
    select fn1(
        pggen.arg('x')
    );
    

    My intention is to create plpgsql stored functions that take all needed arguments once and return at least two different datasets. There could be an arbitrary number of return values. They can be arrays, single values, and of course nested.

    advantages

    • one function with one parameter list to create multiple datasets
    • pggen (future): optional function parameters could be recognized (when create go code directly from a function signature)

    disadvantages

    • increased complexity

    challenges

    • OIDs of the composite types or array types must be known? If I understand correctly, this is one of the main issues, right?

    questions

    • What are the current issues blocking support for this? I saw that pggen queries all the information about the result schema, like FindDescendantOIDs. If some OIDs must be preprocessed in pgx, maybe a preparation function (like PrepareAll in #26) could be a solution (since the Postgres binary message format doesn't support more info at query time, preparing pgx is indispensable anyway).

    workaround

    • batch execute multiple standalone queries

    unclear

    • performance
      • regarding the scan process for the nested composite type arrays
      • call plpgsql function vs run batched prepared statements in general
  • The code output by `pggen` is dependent on the order of the columns in a table in `schema.sql`, though this order doesn't matter according to spec

    Thanks for building pggen -- it's pretty remarkable.

    I've been running into an issue where the order of columns listed in schema.sql changes sometimes (though the columns themselves don't change). According to the SQL spec, reorderings of the columns in a table are no-ops, but the code output by pggen is substantively different since the fields in a given struct have been reordered. The resulting symptom is an error due to a type mismatch when trying to scan data into a struct.

    Am I just using this library wrong? Is there a workaround for this issue?

  • Nullability seemingly not respected when only one string column is returned

    Hey there, great project, I'm getting a lot of use out of it.

    Here is the problem query:

    -- name: xxxGetLogs :one
    select
        string_agg(t.format, e'\n') as logs
    from
        (
            select
                format(
                    '[%s] %s',
                    to_char(
                        time at time zone 'utc' at time zone 'america/los_angeles',
                        'MM-DD-YY HH12:MI:SS AM'
                    ),
                    line
                ) as format
            from
                table2
            where
                parse_id = (
                    select
                        id
                    from
                        table1
                    where
                        val = pggen.arg ('val')
                )
        ) as t;
    

    Which generates this code:

    func (q *DBQuerier) xxxGetLogs(ctx context.Context, val string) (*string, error) {
    	ctx = context.WithValue(ctx, "pggen_query_name", "xxxGetLogs")
    	row := q.conn.QueryRow(ctx, xxxGetLogsSQL, val)
    	var item string
    	if err := row.Scan(&item); err != nil {
    		return &item, fmt.Errorf("query xxxGetLogs: %w", err)
    	}
    	return &item, nil
    }
    

    Everything seems to be working great except that row.Scan is reading into var item string instead of var item *string, since it can be nullable. The functionality works as expected if there are two columns (it reads into a struct where the column is *string), but when a single column is returned, it seems to force rows.Scan to read into string instead of *string.

    I understand nullability inference is hard but that's not the problem here, since I believe it thinks this column should be nullable which is correct, but it just isn't generating the right code to read it into a nullable golang type.

    Any suggestions?

  • Mapping SQL types to slices and/or pointers of imported types is broken

    Mapping an SQL type to a pointer and/or a slice of a Go type which must be imported results in invalid generated Go code, as pggen is confused by the slice and pointer markers ([] and *).

    Minimally reproducible example

    pggen Version

    $ pggen version
    pggen version 2021-09-07, commit 7f87888
    

    Gen Command

    pggen gen go --query-glob "queries/*.sql" \
        --postgres-connection "postgres://postgres@localhost/tit?sslmode=disable" \
        --go-type "timestamp=*time.Time" \
        --go-type "_timestamptz=[]time.Time" \
        --go-type "_timestamp=[]*time.Time" \
        --go-type "_interval=[]util/custom/times.Interval" \
        --go-type "_date=util/custom/[]times.Date" 
    

    Queries

    -- name: GetNilTime :one
    SELECT pggen.arg('data')::TIMESTAMP;
    
    -- name: GetTimes :many
    SELECT * FROM UNNEST(pggen.arg('data')::TIMESTAMP WITH TIME ZONE[]);
    
    -- name: GetNilTimes :many
    SELECT * FROM UNNEST(pggen.arg('data')::TIMESTAMP[]);
    
    -- name: GetCustomIntervals :many
    SELECT * FROM UNNEST(pggen.arg('data')::INTERVAL[]);
    
    -- name: GetCustomDates :many
    SELECT * FROM UNNEST(pggen.arg('data')::DATE[]);
    

    Generated File

    // Code generated by pggen. DO NOT EDIT.
    
    package queries
    
    import (
    	// --snip--
    
    	"time"
    	"util/custom/[]times"
    	"util/custom/times"
    )
    
    // --snip--
    type Querier interface {
    	GetNilTime(ctx context.Context, data time.*Time) (time.*Time, error)
    	// --snip--
    
    	GetTimes(ctx context.Context, data time.[]Time) ([]pgtype.Timestamptz, error)
    	// --snip--
    
    	GetNilTimes(ctx context.Context, data time.[]*Time) ([]time.*Time, error)
    	// --snip--
    
    	GetCustomIntervals(ctx context.Context, data times.[]Interval) ([]pgtype.Interval, error)
    	// --snip--
    
    	GetCustomDates(ctx context.Context, data []times.Date) ([]pgtype.Date, error)
    	// --snip--
    }
    

    Expected

    // Code generated by pggen. DO NOT EDIT.
    
    package queries
    
    import (
    	// --snip--
    
    	"time"
    	"util/custom/times"
    )
    
    // --snip--
    type Querier interface {
    	GetNilTime(ctx context.Context, data *time.Time) (*time.Time, error)
    	// --snip--
    
    	GetTimes(ctx context.Context, data []time.Time) ([]time.Time, error)
    	// --snip--
    
    	GetNilTimes(ctx context.Context, data []*time.Time) ([]*time.Time, error)
    	// --snip--
    
    	GetCustomIntervals(ctx context.Context, data []times.Interval) ([]times.Interval, error)
    	// --snip--
    
    	GetCustomDates(ctx context.Context, data []times.Date) ([]times.Date, error)
    	// --snip--
    }
    

    The two main issues are the mishandling of the [] and * markers and the failure to correctly map the return values for:

    • "_timestamptz=[]time.Time" (GetTimes)
    • "_interval=[]util/custom/times.Interval" (GetCustomIntervals)
    • "_date=util/custom/[]times.Date" (GetCustomDates)

    For custom types, I placed the slice marker ([]) at different positions; each results in the correct placement either as the argument type or as the import, and fails at the other location.


    As a side note (though this might be worth a separate issue if you think it's worth it), I used a different SQL type for each query as an example since I cannot map a single SQL type to multiple Go types. It would be nice to be able to further specify a type mapping per query argument and for the return value. I.e., I can map timestamp=time.Time overall, but for a query I could add a comment to map a specific argument to a different type, like:

    -- name: GetNilTime :one *time.Time
    -- param: data *time.Time
    SELECT pggen.arg('data')::TIMESTAMP;
    
  • Support for custom tags in struct

    I've been using sqlc for quite some time, but I want to migrate to pggen since the pgx library is faster than lib/pq. Is it possible to implement something similar to this: https://github.com/kyleconroy/sqlc/issues/534

    I want to add custom tags like "json" or "validate" in the generated structs. That way, we can reuse the structs generated by pggen when binding JSON to a struct and marshaling a struct to JSON. Below is an example use case:

    CREATE TABLE user (
      id   BIGSERIAL PRIMARY KEY,

      -- tags:`validate:"min=10,max=0"`
      username text NOT NULL,

      -- tags:`json:"-"`
      password text NOT NULL,

      -- tags:`validate:"gt=18"`
      age  integer NOT NULL
    );
    

    So in the above example, we can use the struct generated by pggen to bind JSON directly to this struct and use the Validate module to validate it, and also use the same struct to send a JSON response to the user without the password field since we are using custom tags for JSON. Unlike sqlc, pggen uses an actual database server to get information about tables. Is it possible to implement this feature? Perhaps via the --schema-glob file or some custom config files?

  • array_agg(table) returning a []Table?

    hi thanks for the cool tool!

    given a schema

    create table if not exists screenshots (
        id bigint primary key               
    );                                      
    
    create table if not exists blocks (                            
        id serial primary key,                                     
        screenshot_id bigint not null references screenshots (id), 
        body text not null                                         
    );                                                             
    

    and query

    -- name: SearchScreenshots :many                        
    select                                                  
        screenshots.*,                                      
        array_agg(blocks) blocks                  
    from                                                    
        screenshots                                         
        join blocks on blocks.screenshot_id = screenshots.id
    where                                                   
        pggen.arg('Body') % blocks.body                    
    group by                                                
        screenshots.id                                      
    limit pggen.arg('Limit')
    offset pggen.arg('Offset');  
    

    ('str' % column being a pg_trgm thing)

    pggen generates

    type SearchScreenshotsParams struct {                                                                            
        Body   string                                                                            
        Limit  int                                                                            
        Offset int                                                                            
    }                                                                            
                                                                                
    type SearchScreenshotsRow struct {                                                                            
        ID             pgtype.Int8      `json:"id"`                                                                                                                        
        Blocks         []Blocks         `json:"blocks"`                                                                            
    }                                                                            
                                                                                
    // SearchScreenshots implements Querier.SearchScreenshots.                                                                      
    func (q *DBQuerier) SearchScreenshots(ctx context.Context, params SearchScreenshotsParams) ([]SearchScreenshotsRow, error) {    
        // some stuff
        return items, err
    }                                                              
    

    however the Blocks in SearchScreenshotsRow's []Blocks is undefined

    so my question is, is it somehow possible, or in the scope of this project, that pggen could generate something like

     Blocks         []Block         `json:"blocks"`
    

    where Block is a struct with fields matching the blocks table?

    Also, Postgres shows the type of the array_agg(blocks) column as block[].

    any help is much appreciated, thanks!

  • cannot unmarshal number into Go value of type map[string]interface {}

    I am just trying to get through a basic tutorial. I installed through go install: go install github.com/jschaf/pggen/cmd/pggen@latest

    Then I tried to run the following command against my Postgres database:

    pggen gen go \
        --schema-glob author/schema.sql \
        --query-glob author/query.sql

    It creates the author schema in the database correctly, but I get the following error while trying to generate code for query.sql:

    ERROR: parse template query file "C:\\code\\api-brand-allocation\\author\\query.sql": infer typed named query FindAuthorByID: infer output types for query: infer output type nullability: explain prepared query: can't scan into dest[0]: json: cannot unmarshal number into Go value of type map[string]interface {}

    I pulled my query.sql and schema.sql from the following location: https://github.com/jschaf/pggen/tree/main/example/author

    Can't figure out what I missed, hopefully something simple. Thanks!

  • Question: Insert statement with multiple value rows.

    Hello,

    Is it possible to generate an insert func that can insert multiple rows at once? Ex:

    INSERT INTO user 
        (username, password, created_at) 
    VALUES
        ('a', '123', NOW()),
        ('b', 'abc', NOW()),
        ('c', '456', NOW()),
        ('d', 'def', NOW());
    

    I understand that there's a batch API available, but that still executes N queries on the DB vs just one with multiple value rows. At that point, would you know if there's a difference on the DB side performance-wise?

    Thanks!

  • Commit broke generation of custom types

    Good morning. It looks like commit 2489d54a806652d73f2c83bb5736a7574582092c broke generation of custom types. I'm using the citext type, which was previously generated to proper Go code:

    // newUserEmails creates a new pgtype.ValueTranscoder for the Postgres
    // composite type 'user_emails'.
    func (tr *typeResolver) newUserEmails() pgtype.ValueTranscoder {
    	return tr.newCompositeValue(
    		"user_emails",
    		compositeField{"id", "uuid", &pgtype.UUID{}},
    		compositeField{"user_id", "uuid", &pgtype.UUID{}},
    		compositeField{"email", "citext", &pgtype.Text{}},
    		compositeField{"is_verified", "bool", &pgtype.Bool{}},
    		compositeField{"is_primary", "bool", &pgtype.Bool{}},
    		compositeField{"created_at", "timestamptz", &pgtype.Timestamptz{}},
    		compositeField{"updated_at", "timestamptz", &pgtype.Timestamptz{}},
    	)
    }
    

    but after I upgraded, it generates this:

    // newUserEmails creates a new pgtype.ValueTranscoder for the Postgres
    // composite type 'user_emails'.
    func (tr *typeResolver) newUserEmails() pgtype.ValueTranscoder {
    	return tr.newCompositeValue(
    		"user_emails",
    		compositeField{"id", "uuid", &pgtype.UUID{}},
    		compositeField{"user_id", "uuid", &pgtype.UUID{}},
    		compositeField{"email", "citext", &Text{}},
    		compositeField{"is_verified", "bool", &pgtype.Bool{}},
    		compositeField{"is_primary", "bool", &pgtype.Bool{}},
    		compositeField{"created_at", "timestamptz", &pgtype.Timestamptz{}},
    		compositeField{"updated_at", "timestamptz", &pgtype.Timestamptz{}},
    	)
    }
    

    What's wrong is that the email field does not have the package specified:

    compositeField{"email", "citext", &Text{}},
    
  • PGX v5

    There's a new pgx v5 beta announced and available for testing already.

    https://github.com/jackc/pgx/issues/1273

    Will pggen be ported to support pgx v5? There are some major incompatible changes, it seems.

  • Interval processing causes panics

    Hello, I'm getting the following when trying to pass an interval to a PostgreSQL stored function:

    panic: encode []ProductIn: cannot convert {31207680000000 0 0 2} to Interval
    

    I tried to debug and found that the error happens here: https://github.com/jackc/pgtype/blob/master/interval.go#L45 The object passed there is of type pgtype.Interval, which can't be processed properly, as you can see. Could you please suggest how I can fix the issue? I'd be happy to open a PR.

  • Proposal: Custom go type mapping per query argument

    Based on the side note in #43, detailed in a separate issue here as requested by @jschaf.

    Currently, pggen allows mapping an SQL type to a Go type using the --go-type <sql-type>=<go-type> argument (repeated for each SQL type we want to map). This implies that an SQL type is only ever mapped to one specific Go type for all arguments of all queries.

    However, it might be useful to map a SQL type to different Go types based on context. This can apply either to parameters across queries or within queries, as well as the return value. A strong argument in favor of different mappings is the fact that PostgreSQL does not distinguish between a type which does not contain NULLs and a NULL-able type.

    As a simple example, consider a schema to track flights, we are interested in the flight number, departure time, ETA, and arrival time. The SQL table for the data could look like:

    CREATE TABLE flights (
        flight_number INTEGER PRIMARY KEY,
        departure TIMESTAMP WITH TIME ZONE NOT NULL,
        eta TIMESTAMP WITH TIME ZONE,
        arrival TIMESTAMP WITH TIME ZONE
    )
    

    We see the issue here, the departure/arrival times and ETA are all of type timestamptz. However, while the departure time is never NULL, the ETA and arrival time may be NULL. (This makes sense since the departure time is scheduled and known in advance, while the arrival time will only be known upon landing and the ETA is an estimation which may not be present, for example if the flight is delayed or canceled.)

    Let's consider some queries for flights:

    -- name: InsertFlight :exec
    INSERT INTO flights (flight_number, departure, eta)
        VALUES (pggen.arg('flightNumber'), pggen.arg('departure'), pggen.arg('eta'));
    
    -- GetTodaysFlights :many
    SELECT f.flight_number, f.departure, f.arrival, f.eta
    FROM flights f
    WHERE CURRENT_DATE <= f.departure AND f.departure < CURRENT_DATE + 1
    ORDER BY f.departure DESC;
    
    -- GetTodaysDepartures :many
    SELECT f.departure
    FROM flights f
    WHERE CURRENT_DATE <= f.departure AND f.departure < CURRENT_DATE + 1
    ORDER BY f.departure DESC;
    

    For those queries, all timestamptz arguments can be mapped to *time.Time but not to time.Time, as that would fail for GetTodaysFlights for flights without an arrival time and/or an ETA. Having *time.Time implies that the departure parameter in InsertFlight will be a pointer, which is not needed, and will fail at runtime if a nil pointer is given.

    It would be much better to individually map parameters and return values to the desired Go types. This mapping would take precedence over the global mapping (via --go-type), and of course, if not specified, the global mapping would be used.

    This individual mapping could be specified in the query comment, reusing the existing syntax used to specify the function name and return cardinality. Assuming the following global mapping: --go-type "timestamptz=*time.Time" --go-type "integer=int", the queries can be defined as:

    -- name: InsertFlight :exec
    -- arg: flightNumber int
    -- arg: departure time.Time
    INSERT INTO flights (flight_number, departure, eta)
        VALUES (pggen.arg('flightNumber'), pggen.arg('departure'), pggen.arg('eta'));
    
    -- GetTodaysFlights :many
    -- return: departure time.Time
    SELECT f.flight_number, f.departure, f.arrival, f.eta
    FROM flights f
    WHERE CURRENT_DATE <= f.departure AND f.departure < CURRENT_DATE + 1
    ORDER BY f.departure DESC;
    
    -- GetTodaysDepartures :many
    -- return: departure time.Time
    SELECT f.departure
    FROM flights f
    WHERE CURRENT_DATE <= f.departure AND f.departure < CURRENT_DATE + 1
    ORDER BY f.departure DESC;
    

    The format here is:

    • argument: -- arg: <arg-name> <go-type>
    • return column: -- return: <column-name> <go-type>

    Note that:

    1. The value of <go-type> should be identical to that in the global mapping: --go-type <sql-type>=<go-type>.
    2. Even if the return is a single column, the column name is still specified, but the return value does not need to be a struct with a single column; it can just be a slice (with :many) or an instance (with :one) of the column type. The custom type specified is always for the column (of a single row), not for many rows; in that case the result would be a slice of the type.

    So the Go functions should look like:

    func InsertFlight(flightNumber int, departure time.Time, eta *time.Time) error
    
    type GetTodaysFlightsRow struct {
       FlightNumber int    `json:"flight_number"`
       Departure time.Time `json:"departure"`
       ETA *time.Time      `json:"eta"`
       Arrival *time.Time  `json:"arrival"`
    }
    
    func GetTodaysFlights() ([]GetTodaysDeparturesRow, error)
    
    func GetTodaysDepartures() ([]time.Time, error)
    

    Hopefully it should be straightforward enough to implement, and it avoids any corner cases I can think of.

  • Generate the same type for queries with the same SELECT ... FROM

    First of all: thanks so much for this library! It's just incredible -- it feels exactly like how I want to work with databases in Go.

    Background

    Most of our tables are representations of an rpc type, so that we have an entity CostCenter that is stored in the cost_center table.

    Then we have several different ways of querying for the CostCenter (FindByID, FindBySubscriber, ListBySubscribers etc). All of these queries always do SELECT * FROM cost_center and then do the same mapping from the returned database type to our internal rpc type.

    This requires repeating the mapping from the pggen-generated row type to the rpc type for each query -- even though the structs have exactly the same structure.

    Suggestion

    Add a way to have SELECT statements that return the exact same thing, also use the same type.

    e.g. currently we have this generated code (removing comments and batching for readability):

    	FindByID(ctx context.Context, costCenterID db.UUID) (FindByIDRow, error)
    	FindBySubscriber(ctx context.Context, subscriberID db.UUID) (FindBySubscriberRow, error)
    	ListBySubscribers(ctx context.Context, subscriberIDs pgtype.UUIDArray) ([]ListBySubscribersRow, error)
    

    But FindByIDRow, FindBySubscriberRow and ListBySubscribersRow are all identical structs (except for the name).

    What would be nice is something like:

    	FindByID(ctx context.Context, costCenterID db.UUID) (SelectCostCenterRow, error)
    	FindBySubscriber(ctx context.Context, subscriberID db.UUID) (SelectCostCenterRow, error)
    	ListBySubscribers(ctx context.Context, subscriberIDs pgtype.UUIDArray) ([]SelectCostCenterRow, error)
    