Goose database migration tool - fork of https://bitbucket.org/liamstask/goose

goose

Goose is a database migration tool. Manage your database schema by creating incremental SQL changes or Go functions.

Goals of this fork

github.com/pressly/goose is a fork of bitbucket.org/liamstask/goose with the following changes:

  • No config files
  • Default goose binary can migrate SQL files only
  • Go migrations:
    • We don't go build Go migration functions on-the-fly from within the goose binary
    • Instead, we let you create your own custom goose binary, register your Go migration functions explicitly and run complex migrations with your own *sql.DB connection
    • Go migration functions let you run your code within an SQL transaction, if you use the *sql.Tx argument
  • The goose pkg is decoupled from the binary:
    • goose pkg doesn't register any SQL drivers anymore, thus no driver panic() conflict within your codebase!
    • goose pkg doesn't have any vendor dependencies anymore
  • We use timestamped migrations by default but recommend a hybrid approach of using timestamps in the development process and sequential versions in production.

Install

$ go get -u github.com/pressly/goose/cmd/goose

This will install the goose binary to your $GOPATH/bin directory.

For a lite version of the binary without DB-connection-dependent commands, use the exclusive build tags:

$ go build -tags='no_postgres no_mysql no_sqlite3' -i -o goose ./cmd/goose

Usage

Usage: goose [OPTIONS] DRIVER DBSTRING COMMAND

Drivers:
    postgres
    mysql
    sqlite3
    mssql
    redshift

Examples:
    goose sqlite3 ./foo.db status
    goose sqlite3 ./foo.db create init sql
    goose sqlite3 ./foo.db create add_some_column sql
    goose sqlite3 ./foo.db create fetch_user_data go
    goose sqlite3 ./foo.db up

    goose postgres "user=postgres dbname=postgres sslmode=disable" status
    goose mysql "user:password@/dbname?parseTime=true" status
    goose redshift "postgres://user:password@qwerty.us-east-1.redshift.amazonaws.com:5439/db" status
    goose tidb "user:password@/dbname?parseTime=true" status
    goose mssql "sqlserver://user:password@dbname:1433?database=master" status

Options:

  -dir string
    	directory with migration files (default ".")
  -table string
    	migrations table name (default "goose_db_version")
  -h	print help
  -v	enable verbose mode
  -version
    	print version

Commands:
    up                   Migrate the DB to the most recent version available
    up-by-one            Migrate the DB up by 1
    up-to VERSION        Migrate the DB to a specific VERSION
    down                 Roll back the version by 1
    down-to VERSION      Roll back to a specific VERSION
    redo                 Re-run the latest migration
    reset                Roll back all migrations
    status               Dump the migration status for the current DB
    version              Print the current version of the database
    create NAME [sql|go] Creates new migration file with the current timestamp
    fix                  Apply sequential ordering to migrations

create

Create a new SQL migration.

$ goose create add_some_column sql
$ Created new file: 20170506082420_add_some_column.sql

Edit the newly created file to define the behavior of your migration.

You can also create a Go migration, which you then invoke with your own goose binary:

$ goose create fetch_user_data go
$ Created new file: 20170506082421_fetch_user_data.go

up

Apply all available migrations.

$ goose up
$ OK    001_basics.sql
$ OK    002_next.sql
$ OK    003_and_again.go

up-to

Migrate up to a specific version.

$ goose up-to 20170506082420
$ OK    20170506082420_create_table.sql

up-by-one

Migrate up a single migration from the current version.

$ goose up-by-one
$ OK    20170614145246_change_type.sql

down

Roll back a single migration from the current version.

$ goose down
$ OK    003_and_again.go

down-to

Roll back migrations to a specific version.

$ goose down-to 20170506082527
$ OK    20170506082527_alter_column.sql

redo

Roll back the most recently applied migration, then run it again.

$ goose redo
$ OK    003_and_again.go
$ OK    003_and_again.go

status

Print the status of all migrations:

$ goose status
$   Applied At                  Migration
$   =======================================
$   Sun Jan  6 11:25:03 2013 -- 001_basics.sql
$   Sun Jan  6 11:25:03 2013 -- 002_next.sql
$   Pending                  -- 003_and_again.go

Note: for MySQL, the parseTime flag must be enabled.

version

Print the current version of the database:

$ goose version
$ goose: version 002

Migrations

goose supports migrations written in SQL or in Go.

SQL Migrations

A sample SQL migration looks like:

-- +goose Up
CREATE TABLE post (
    id int NOT NULL,
    title text,
    body text,
    PRIMARY KEY(id)
);

-- +goose Down
DROP TABLE post;

Notice the annotations in the comments. Any statements following -- +goose Up will be executed as part of a forward migration, and any statements following -- +goose Down will be executed as part of a rollback.

By default, all migrations are run within a transaction. Some statements like CREATE DATABASE, however, cannot be run within a transaction. You may optionally add -- +goose NO TRANSACTION to the top of your migration file in order to skip transactions within that specific migration file. Both Up and Down migrations within this file will be run without transactions.

By default, SQL statements are delimited by semicolons - in fact, query statements must end with a semicolon to be properly recognized by goose.

More complex statements (PL/pgSQL) that have semicolons within them must be annotated with -- +goose StatementBegin and -- +goose StatementEnd to be properly recognized. For example:

-- +goose Up
-- +goose StatementBegin
CREATE OR REPLACE FUNCTION histories_partition_creation( DATE, DATE )
returns void AS $$
DECLARE
  create_query text;
BEGIN
  FOR create_query IN SELECT
      'CREATE TABLE IF NOT EXISTS histories_'
      || TO_CHAR( d, 'YYYY_MM' )
      || ' ( CHECK( created_at >= timestamp '''
      || TO_CHAR( d, 'YYYY-MM-DD 00:00:00' )
      || ''' AND created_at < timestamp '''
      || TO_CHAR( d + INTERVAL '1 month', 'YYYY-MM-DD 00:00:00' )
      || ''' ) ) inherits ( histories );'
    FROM generate_series( $1, $2, '1 month' ) AS d
  LOOP
    EXECUTE create_query;
  END LOOP;  -- LOOP END
END;         -- FUNCTION END
$$
language plpgsql;
-- +goose StatementEnd

Go Migrations

  1. Create your own goose binary (see the sketch further below)
  2. Import github.com/pressly/goose
  3. Register your migration functions
  4. Run the goose command, e.g. goose.Up(db *sql.DB, dir string)

A sample Go migration 00002_users_add_email.go file looks like:

package migrations

import (
	"database/sql"

	"github.com/pressly/goose"
)

func init() {
	goose.AddMigration(Up, Down)
}

func Up(tx *sql.Tx) error {
	_, err := tx.Exec("UPDATE users SET username='admin' WHERE username='root';")
	if err != nil {
		return err
	}
	return nil
}

func Down(tx *sql.Tx) error {
	_, err := tx.Exec("UPDATE users SET username='root' WHERE username='admin';")
	if err != nil {
		return err
	}
	return nil
}
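
For reference, a minimal custom goose binary that applies the migrations above might look like the sketch below. This is an illustrative sketch, not the canonical example from the repository: the Postgres driver, the import path myapp/migrations, and the connection string are placeholder assumptions to replace with your own.

package main

import (
	"database/sql"
	"log"

	"github.com/pressly/goose"

	// The SQL driver is registered here, in your own binary, not inside the goose pkg.
	_ "github.com/lib/pq"

	// Importing your migrations package runs its init() functions,
	// which register the Go migration functions with goose (placeholder path).
	_ "myapp/migrations"
)

func main() {
	// Placeholder connection string; use your own.
	db, err := sql.Open("postgres", "user=postgres dbname=postgres sslmode=disable")
	if err != nil {
		log.Fatalf("open db: %v", err)
	}
	defer db.Close()

	if err := goose.SetDialect("postgres"); err != nil {
		log.Fatalf("set dialect: %v", err)
	}

	// Apply all pending SQL and Go migrations from ./migrations.
	if err := goose.Up(db, "./migrations"); err != nil {
		log.Fatalf("goose up: %v", err)
	}
}

Because the SQL driver is registered by your own binary rather than by the goose pkg, there is no driver panic() conflict with the rest of your codebase.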

Hybrid Versioning

Please read the versioning problem first.

We strongly recommend adopting a hybrid versioning approach, using both timestamps and sequential numbers. Migrations created during the development process are timestamped, and sequential versions are run in production. We believe this method will prevent the problem of conflicting versions when writing software in a team environment.

To help you adopt this approach, create will use the current timestamp as the migration version. When you're ready to deploy your migrations in a production environment, we also provide a helpful fix command to convert your migrations into sequential order, while preserving the timestamp ordering. We recommend running fix in the CI pipeline, and only when the migrations are ready for production.
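
For illustration, running fix on a directory containing only 20170506082420_add_some_column.sql would rename it to a sequential version such as 00001_add_some_column.sql; with several files, the assigned numbers follow the original timestamp order.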

License

Licensed under MIT License

Comments
  • sequential numbering with postgres or redshift schemas

    I've just started using this fork because it has support for Redshift. However, I use schemas to organize tables in databases like Redshift and Postgres, but the new sequential numbering of files, instead of timestamps, means that I have to put all my migration files in the same directory; otherwise the counter will start again at 00001 for each directory.

    Say I have a single database in redshift or postgres, and it has multiple schemas CREATE SCHEMA foo; and CREATE SCHEMA bar;. If I wanted to organize my goose migrations by folders, I might have:

    foo/
    bar/
    

    If I then ran cd foo && goose create add_cars sql, then cd ../bar && goose create add_trains sql, goose will create foo/00001_add_cars.sql and bar/00001_add_trains.sql. So running:

    $ cd foo
    $ goose postgres "user=rkulla dbname=postgres sslmode=disable" up
    goose: no migrations to run. current version: 1
    $ cd ../bar
    $ goose postgres "user=rkulla dbname=postgres sslmode=disable" up
    goose: no migrations to run. current version: 1

    would only apply the migration under foo/ but not the one under bar/, because both migrations start with 00001_ and postgres or redshift only get ONE goose_db_version table to share amongst the different schema names. This makes them harder to organize unless I do goose create add_cars_foo sql and goose create add_trains_bar sql, then run commands like ls *_foo.sql and ls *_bar.sql.

    It's not that big of a deal right now I guess, but I'm wondering if there's a better way and if it's really worth not using timestamps. IIRC, Ruby on Rails's migration feature used to use sequential numbers but developers complained because it caused a lot of conflicts when 2 different developers working on the same project made separate changes but both generated a migration with the same number. So Rails switched to UTC timestamps.

  • Exit the program when migration can't be parsed

    Closes #115

    With un-parsable SQL, goose should exit its current execution.

    Previously, goose would effectively skip over this invalid migration but still mark it as applied on the database.

    This PR will make it so that goose will error out of its current execution and the migration will not be applied.

    I have also added files created by tests to the .gitignore

  • Add support for seeding the database with test data

    I have used goose at pretty much every job to manage my relational databases. It works like a charm, but I always find myself creating some sort of wrapper that allows me to use goose to seed my local and development environments with test data. It would be really nice if this were supported already.

    I propose creating a new command goose seed that will allow users to continue managing their schema migrations with goose and also seed their non-production environments with test data for integration & end-to-end testing. Please note that I am not suggesting actual seed data be managed this way - if your application requires some static, pre-existing data, that can just be inserted along with the regular schema migrations.

    What are your thoughts on this? If we agree that this is useful and decide on an interface, I'd like to take on this work and open a PR to implement it.

  • Add vendoring support using dep vendor package manager.

    • Add vendor dependencies for better macos homebrew support
    • Referencing #42
    • Referencing Homebrew/homebrew-core#14724

    Signed-off-by: Mario Kozjak [email protected]

  • error: `CREATE INDEX CONCURRENTLY` cannot run inside a transaction block

    Hi, I am getting the following error whenever I am trying to create indexes separately:

    failed to run SQL migration: failed to execute SQL query "..."
    pq: CREATE INDEX CONCURRENTLY cannot run inside a transaction block
    

    Here's a small reproduction of my table migration:

    -- +goose Up
    -- +goose StatementBegin
    CREATE TABLE users (
        id UUID NOT NULL PRIMARY KEY DEFAULT gen_random_uuid(),
        email VARCHAR NOT NULL,
        username VARCHAR NOT NULL
    );
    
    CREATE INDEX CONCURRENTLY users_id_pkey ON users(id);
    CREATE UNIQUE INDEX CONCURRENTLY users_email_key ON users(email);
    CREATE UNIQUE INDEX CONCURRENTLY users_username_key ON users(username);
    -- +goose StatementEnd
    

    Is there any way to fix this, or should I move each index creation query to a separate migration file?

  • Avoid to ignore empty lines in SQL files

    The current implementation ignores blank lines in the migration sql file.

    But in my use case, this behavior produced unexpected results. In some cases, a blank line in the SQL may be meaningful. The following code is an example of SQL using the E string syntax in PostgreSQL.

    INSERT INTO article (id, content) VALUES ('id_0001',  E'# My markdown doc
    
    first paragraph
    
    second paragraph');
    

    In this markdown text, empty lines must NOT be removed.

    As far as I could check, the same behavior was not observed in command-line mode; it occurs only when goose is used as a library. Therefore, if there is no strong motivation to ignore blank lines, this behavior should be fixed.

    I would love to hear your opinion. Thanks for the great library.

  • goose is no longer able to run with GOOSE_DBSTRING set as env

    I have not added or changed anything but did not pin the version of goose.

    In the latest version I am now unable to set this env variable, and therefore unable to get the correct DB string for deployments, which ultimately causes production issues.

    goose run: "postgres": no such command
    

    I am running this command:

    ./goose -dir migrations postgres $GOOSE_DBSTRING up 
    
  • Binary releases

    Hi all,

    First off - thanks for stepping up and taking over this project!

    I'd love to see binary releases published to your GitHub repo. Doing a go get to grab the CLI at runtime is a little scary, since we can't pin a version (to my knowledge). Even scarier given that this is mucking with databases.

    In terms of implementation, we're big fans of goreleaser - it does all the heavy lifting for you.

    Would be happy to take a crack at a PR if y'all would be open to it.

  • goose run: no separator found

    I've installed the binary using go get -u github.com/pressly/goose/cmd/goose. I can generate the migration files anywhere, but not in my project directory. When I type something like goose postgres <connection_string> status, I get the error goose run: no separator found. Is there something I might have misconfigured, or am I misunderstanding the usage? Thanks

    I'm using

    • golang:1.9.2 darwin/amd64
    • macOS High Sierra 10.13
  • Feature Request: Custom migration templates

    Hi,

    If I'm willing to create a PR, would you be open to adding an option to allow us to specify custom templates?

    The reason I ask is that the current sql template produces:

    
    -- +goose Up
    -- SQL in section 'Up' is executed when this migration is applied
    
    
    -- +goose Down
    -- SQL section 'Down' is executed when this migration is rolled back
    
    
    

    but I would like it to produce:

    
    -- +goose Up
    -- +goose StatementBegin
    -- +goose StatementEnd
    
    
    -- +goose Down
    -- +goose StatementBegin
    -- +goose StatementEnd
    
    
    

    I want to always use the begin and end statements since we will be doing some complex migrations including plpgsql functions and I'd rather just always use the same format.

    Allowing me to specify a custom template would avoid me having to replace the default template in the newly-created file after creating a new migration.

    If you're open to this, please give me a suggestion as to how you'd like the template specified and I'd be happy to follow that. I guess I'd see these templates specified in code, but I suppose a command-line arg would work as well.

    Thoughts?

    Thanks!

  • vertica driver support

    • [x] Add VerticaDialect
    • [x] Add vertica-sql-go dependency
    • [x] Add !no_vertica build tag in driver_vertica.go
    • [x] Updated README
    • [x] Updated CLI help message
    • [x] Test coverage

    I've been using these changes successfully already with a custom build of goose to manage vertica migrations for a project. I'm willing to include test coverage similar to what has been done for other drivers if there is interest in accepting this PR. Let me know if there is any other feedback you have for me that I may have missed. Thanks!

  • Bump modernc.org/sqlite from 1.19.2 to 1.20.2

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
  • "Authentication failed" to Clickhouse

    Relevant for version 3.6.0 and higher.

    I executed the command: goose -dir "./db/clickhouse" clickhouse "tcp://10.0.2.7:9000?database=default&username=default&password=mypass" up

    and got this error: goose run: code: 516, message: default: Authentication failed: password is incorrect or there is no user with such name

    The username and password are correct. I tried using clickhouse-client and was able to connect to the ClickHouse DB with it. Goose version 3.5.3 also connected and worked.

  • Up to one & up to failed command

    I just ran up-to-one and got the following error (also on down-to-one):

    2022/12/22 15:42:53 goose run: "up-to-one": no such command

    and when I try to run the up-to command, it looks like it runs the up command (all migration files):

    $ goose mysql "root:root@/database?parseTime=true" up-to 20221031141145
    2022/12/22 15:45:10 goose run: error: found 8 missing migrations:
    
  • Type of the 'tstamp' field in the 'goose_db_version' table

    In the goose_db_version table, the TIMESTAMP type is used for the tstamp field. In MySQL, the TIMESTAMP type is affected by the year 2038 problem and causes malfunctions. https://dev.mysql.com/doc/refman/8.0/en/datetime.html

    For MySQL, we recommend using the DATETIME type for tstamp fields.

  • Feature request: baseline/ignore migrations

    We perform data migrations in IaSQL, which is built on PostgreSQL, and we need to ignore certain migrations, as databases are sometimes recreated and the migration table gets blown away. This could be done with an explicit blacklist, but ideally by defining a baseline that ignores migrations before a provided date or version, similar to Flyway. The interface should be as simple as an additional, optional configuration value called baseline timestamp or version.

    This is a high priority for us and we are considering building our own migration system as we have not found one that fits all of our needs. If guidance can be provided someone from our team can contribute as soon as next week :)
