Atlas - A Database Toolkit

Atlas is a CLI designed to help companies work better with their data. It includes several components that can be used individually, but they are designed to work well together.

Supported databases:

  • MySQL
  • MariaDB
  • PostgreSQL
  • SQLite

Runs on all platforms:

  • Mac
  • Linux
  • Windows

Quick Installation

Download the latest release.

curl -LO https://release.ariga.io/atlas/atlas-darwin-amd64-latest

Make the atlas binary executable.

chmod +x ./atlas-darwin-amd64-latest

Move the atlas binary to a file location on your system PATH.

sudo mv ./atlas-darwin-amd64-latest /usr/local/bin/atlas
sudo chown root: /usr/local/bin/atlas

Instructions for other platforms and databases: Getting Started.
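
For scripted installs, the release artifact name can be derived from uname. This is a minimal sketch that assumes the other artifacts follow the darwin-amd64 naming pattern shown above (check the Getting Started page for the actual list; Windows naming may differ):

```shell
#!/usr/bin/env bash
# artifact_name OS ARCH: map a platform to a release artifact name,
# assuming the atlas-<os>-<arch>-latest pattern used above.
artifact_name() {
  local os="$1" arch="$2"
  printf 'atlas-%s-%s-latest\n' "$os" "$arch"
}

# Derive OS and arch for the current machine (darwin or linux).
os=$(uname -s | tr '[:upper:]' '[:lower:]')
arch=$(uname -m)
[ "$arch" = x86_64 ] && arch=amd64
echo "https://release.ariga.io/atlas/$(artifact_name "$os" "$arch")"
```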

Getting Started

Schema Inspection

Inspect the database and save the output to a schema file.

atlas schema inspect -d "mysql://root:pass@tcp(localhost:3306)/example" > atlas.hcl

Apply Changes to the Schema

atlas schema apply -d "mysql://root:pass@tcp(localhost:3306)/example" -f atlas.hcl
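
For illustration, the inspected atlas.hcl describes the schema as HCL blocks along these lines. This is a minimal sketch with hypothetical table and column names; the exact types and attributes depend on your database:

```hcl
schema "example" {
}
table "users" {
  schema = schema.example
  column "id" {
    null = false
    type = int
  }
  primary_key {
    columns = [column.id]
  }
}
```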

Full CLI documentation.

About the Project

Read more about the motivation behind the project here.

Comments
  • Error doing anything

    Error doing anything

    version: "3.7"
    
    services:
      mysql:
        image: mysql:8.0
        volumes:
          - ./mysql-seed-data:/var/lib/mysql
        ports:
          - '3306:3306'
        expose:
          - '3306'
        environment: 
          MYSQL_ROOT_PASSWORD: password
          MYSQL_DATABASE: ${MYSQL_DB:?please provide a database}
          MYSQL_USER: ${MYSQL_USER:-user}
          MYSQL_PASSWORD: ${MYSQL_PASSWORD:-password}
        healthcheck:
          test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-u", "root", "-p$$MYSQL_ROOT_PASSWORD"]
          timeout: 20s
          retries: 5
    
    volumes:
      mysql-seed-data:
        driver: local
    
    function generate_initial_schema() {
        DB="$1"
        SERVICE="$2"
        NAME="$3"
    
        dateSuffix=$(date -u '+%Y%m%d%H%M%S')
        parent_path=$(dirname "$0")
        
        docker run -d --name "${DB}" -p 3306:3306 -e MYSQL_ROOT_PASSWORD=password -e MYSQL_DATABASE="${DB}" mysql:8.0
        trap "docker stop ${DB}; docker rm ${DB}" EXIT
    
        # MYSQL_DB="${DB}" docker-compose -f infrafiles/mysql.docker-compose.yaml up -d 
        # trap "MYSQL_DB=${DB} docker-compose -f infrafiles/mysql.docker-compose.yaml down" EXIT   
        bash "$parent_path/wait-for-db.sh" 3306
    
        LOCAL_DB_URL="mysql://root:password@localhost:3306/$DB"
    
        REMOTE_DB_URL=$(get_remote_mysql_db_url "$DB")
        REMOTE_DB_URL=$(strip_tcp_from_url "$REMOTE_DB_URL")
        REMOTE_DB_URL="mysql://$REMOTE_DB_URL"
    
        echo "$LOCAL_DB_URL" 
        echo "$REMOTE_DB_URL" 
    
        FILENAME_PREFIX="migrations/${SERVICE}/${dateSuffix}_${NAME}"
        echo "generating table schema in ${FILENAME_PREFIX}.up.sql"

        [[ ! -d "migrations/${SERVICE}" ]] && mkdir -p "migrations/${SERVICE}"

        atlas schema diff --from "$LOCAL_DB_URL" --to "$REMOTE_DB_URL" > "${FILENAME_PREFIX}.up.sql"
        touch "${FILENAME_PREFIX}.down.sql"

        sed -i '' -E "s/\`${DB}\`\.//g" "${FILENAME_PREFIX}.up.sql"
        sed -i '' '/[^;] *$/s/$/;/' "${FILENAME_PREFIX}.up.sql"
    }
    

    Also, sometimes I get:

    generating table schema in migrations/clients/20221216124159_create_tables.up.sql
    [mysql] 2022/12/16 18:12:02 packets.go:37: unexpected EOF
    [mysql] 2022/12/16 18:12:02 packets.go:37: unexpected EOF
    [mysql] 2022/12/16 18:12:02 packets.go:37: unexpected EOF
    Error: mysql: query system variables: driver: bad connection
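
    The intermittent "unexpected EOF" errors usually mean the container accepted the TCP connection before mysqld finished initializing. The wait-for-db.sh referenced above isn't shown; as a minimal sketch, a wait loop could look like the following (a hypothetical helper using bash's /dev/tcp, so it requires bash, not sh):

```shell
#!/usr/bin/env bash
# wait_for_port HOST PORT [RETRIES]: poll a TCP port once per second until
# it accepts connections. Returns non-zero if all retries are exhausted.
wait_for_port() {
  local host="$1" port="$2" retries="${3:-30}"
  local i
  for ((i = 0; i < retries; i++)); do
    # Probe in a subshell so the file descriptor closes automatically.
    if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
      return 0
    fi
    sleep 1
  done
  echo "timed out waiting for ${host}:${port}" >&2
  return 1
}
```

    Note that a TCP check alone doesn't guarantee mysqld is ready to authenticate; polling with mysqladmin ping, as the compose healthcheck above does, is more reliable.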
    
  • versioned migration throwing syntax error when creating functions in mariadb

    versioned migration throwing syntax error when creating functions in mariadb

    DELIMITER $$
    
    CREATE OR REPLACE FUNCTION gen_uuid() RETURNS VARCHAR(22) 
    BEGIN
        RETURN concat(
            date_format(NOW(6), '%Y%m%d%i%s%f'),
            ROUND(1 + RAND() * (100 - 2))
        );
    END;$$
    
    DELIMITER ;
    

    This code throws a syntax error at line 4 when running the migration from the SQL file.

    atlas version: v0.6.0-f064262-canary
    mariadb version:
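
    One likely cause: DELIMITER is a directive of the mysql command-line client, not part of SQL, so tools that send statements directly to the server reject it. For a single-statement body, MariaDB allows dropping BEGIN...END entirely, which sidesteps the delimiter problem. A hedged rewrite of the function above:

```sql
-- No DELIMITER needed: the whole function is a single statement.
-- Depending on binary-log settings, a characteristic such as
-- NOT DETERMINISTIC or log_bin_trust_function_creators may be required.
CREATE OR REPLACE FUNCTION gen_uuid() RETURNS VARCHAR(22)
RETURN concat(
    date_format(NOW(6), '%Y%m%d%i%s%f'),
    ROUND(1 + RAND() * (100 - 2))
);
```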

  • No way to break glass and use not-yet-implemented types

    No way to break glass and use not-yet-implemented types

    I was expecting to be able to set the type of a column to a custom SQL expression by using sql().

    But with the input:

      column "duration" {
        null    = false
        type    = sql("interval")
        comment = "The duration that the reservation will last for."
      }
    

    I get the error output:

    Error: reverse alter table "mytable": postgres: unsupported type: "interval"
    

    The error comes from:

    https://github.com/ariga/atlas/blob/89197145cc46bd66ba330f01ac18b5e236effb97/sql/postgres/convert.go#L129-L130

    However, based on this testcase I would have expected it to work:

    https://github.com/ariga/atlas/blob/89197145cc46bd66ba330f01ac18b5e236effb97/internal/integration/cockroach_test.go#L331

    Could you maybe point me in a direction for this one?

  • Bug: Scan error on column index 4, name "REFERENCED_TABLE_NAME": converting NULL to string is unsupported

    Bug: Scan error on column index 4, name "REFERENCED_TABLE_NAME": converting NULL to string is unsupported

    Hi @a8m ,

    ➜ OS
    MacBook Air M1
    ➜ atlas version
    atlas version v0.8.3-2b5df3b-canary
    ➜ Database
    MySQL 5.7 (GCP Cloud SQL)
    
    # The instance hosts multiple databases (one service per database)
    ➜ export MYSQL_HOST=mysql://<user>:<pass>@<host>:3306
    ➜ atlas schema inspect -u $MYSQL_HOST -s competition_db > competition_db.hcl
    Error: mysql: sql: Scan error on column index 4, name "REFERENCED_TABLE_NAME": converting NULL to string is unsupported
    
    # Check database
    ➜ use competition_db;
    ➜ show create table competition_user;
    
    CREATE TABLE `competition_user` (
      `cu_id` int(11) unsigned NOT NULL AUTO_INCREMENT,
      `fk_competition_id` int(11) unsigned NOT NULL,
      `fk_user_id` int(11) unsigned NOT NULL,
      `cu_created_date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
      `cu_updated_date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
      PRIMARY KEY (`cu_id`),
      UNIQUE KEY `fk_competition_id` (`fk_competition_id`,`fk_user_id`),
      KEY `fk_user_id` (`fk_user_id`),
      CONSTRAINT `competition_user_ibfk_1` FOREIGN KEY (`fk_competition_id`) REFERENCES `competition` (`c_id`) ON UPDATE CASCADE,
      CONSTRAINT `competition_user_ibfk_2` FOREIGN KEY (`fk_user_id`) REFERENCES `admin_db`.`users` (`u_id`) ON UPDATE CASCADE
    ) ENGINE=InnoDB AUTO_INCREMENT=3 DEFAULT CHARSET=utf8
    

    Thanks.

  • Add option to connect to database without SSL (fix "SSL is not enabled on the server" error)

    Add option to connect to database without SSL (fix "SSL is not enabled on the server" error)

    Type

    Bug

    System

    OS: macOS 11.6.3, DB: Postgres 12.8

    Details

    Installation went smoothly, but when I tried to inspect my existing local DB schema, I got an error:

    > atlas schema inspect --dsn "postgres://eric@localhost:5432/my_db"
    Error: postgres: scanning system variables: pq: SSL is not enabled on the server
    

    A quick search led me to this SO answer.

    It appears that Atlas is attempting to connect with the raw DSN, without specifying an SSL mode (SSL appears to be enabled by default):

    https://github.com/ariga/atlas/blob/ee1ec7abca136edea8428a7588b0d33b104868c7/cmd/action/provider.go#L35

    Perhaps this is necessary in some cases, but for local DB inspection it probably is not. The tests appear to get around this by explicitly disabling SSL:

    https://github.com/ariga/atlas/blob/ee1ec7abca136edea8428a7588b0d33b104868c7/internal/integration/postgres_test.go#L41

    I verified this fix by running Atlas with the ?sslmode=disable query param added to my DB URL, and it worked:

    > atlas schema inspect --dsn "postgres://eric@localhost:5432/my_db?sslmode=disable"
    
    🎉 prints a schema to terminal
    

    I am not well versed enough in Go to propose a specific solution, but I think the most flexible option would be to add a flag for SSL mode (and perhaps default it to disabled for inspection, since I would imagine inspecting a local DB is the most common initial step).
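
    As a stopgap, the query parameter can be appended in a wrapper script. A minimal sketch (ensure_sslmode is a hypothetical helper, not part of Atlas):

```shell
#!/usr/bin/env bash
# ensure_sslmode URL: append sslmode=disable to a Postgres URL unless an
# sslmode parameter is already present.
ensure_sslmode() {
  local url="$1"
  case "$url" in
    *sslmode=*) printf '%s\n' "$url" ;;                    # already set, leave as-is
    *\?*)       printf '%s\n' "${url}&sslmode=disable" ;;  # has other query params
    *)          printf '%s\n' "${url}?sslmode=disable" ;;  # no query string yet
  esac
}
```

    Usage: atlas schema inspect --dsn "$(ensure_sslmode "postgres://eric@localhost:5432/my_db")"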

  • Error: specutil: failed converting to *schema.Realm: sqlspec: table not found

    Error: specutil: failed converting to *schema.Realm: sqlspec: table not found

    Issue:

    We are in the process of developing a new database, and I am trying out Atlas as a tool to apply schema changes made on a local server to a deployment Postgres server.

    I keep getting this error:

    Error: specutil: failed converting to *schema.Realm: sqlspec: table "geoid_models" not found

    It has nothing to do with that specific table. If I remove that table from the schema, the error just moves on to another table:

    Error: specutil: failed converting to *schema.Realm: sqlspec: table "vertical_datums" not found

    inspect command

    atlas schema inspect -u "postgres://postgres:<pass>@localhost:5432/<db>?sslmode=disable" > src/db/schema.hcl

    apply command

    atlas schema apply -u "postgres://<user>:<pass>@<server>:5432/<db>" -f src/db/schema.hcl

    schema.hcl

    table "file_formats" {
      schema = schema.public
      column "id" {
        null = false
        type = serial
      }
      column "code" {
        null = false
        type = character_varying
      }
      column "description" {
        null = false
        type = character_varying
      }
      column "data_type" {
        null = false
        type = character_varying
      }
      primary_key {
        columns = [column.id]
      }
    }
    table "geoid_models" {
      schema = schema.public
      column "id" {
        null = false
        type = integer
      }
      column "name" {
        null = false
        type = character_varying
      }
      column "sort_order" {
        null = false
        type = smallint
      }
      primary_key {
        columns = [column.id]
      }
    }
    table "job_statuses" {
      schema = schema.public
      column "id" {
        null = false
        type = serial
      }
      column "status" {
        null = false
        type = character_varying
      }
      primary_key {
        columns = [column.id]
      }
    }
    table "request_statuses" {
      schema = schema.public
      column "id" {
        null = false
        type = serial
      }
      column "status" {
        null = false
        type = character_varying
      }
      primary_key {
        columns = [column.id]
      }
    }
    table "spatial_ref_sys" {
      schema = schema.public
      column "srid" {
        null = false
        type = integer
      }
      column "auth_name" {
        null = true
        type = character_varying(256)
      }
      column "auth_srid" {
        null = true
        type = integer
      }
      column "srtext" {
        null = true
        type = character_varying(2048)
      }
      column "proj4text" {
        null = true
        type = character_varying(2048)
      }
      primary_key {
        columns = [column.srid]
      }
      check "spatial_ref_sys_srid_check" {
        expr = "((srid > 0) AND (srid <= 998999))"
      }
    }
    table "vertical_datums" {
      schema = schema.public
      column "id" {
        null = false
        type = integer
      }
      column "name" {
        null = false
        type = character_varying
      }
      column "native_name" {
        null = false
        type = character_varying
      }
      primary_key {
        columns = [column.id]
      }
    }
    table "image_fp" {
      schema = schema.spatial
      column "objectid" {
        null = false
        type = serial
      }
      column "geom" {
        null = true
        type = sql("geometry(geometry,4269)")
      }
      column "location" {
        null = true
        type = character_varying(254)
      }
      column "url" {
        null = true
        type = character_varying(254)
      }
      column "missionid" {
        null = true
        type = integer
      }
      column "azureblobname" {
        null = true
        type = character_varying(1000)
      }
      primary_key {
        columns = [column.objectid]
      }
      index "image_fp_geom_geom_idx" {
        columns = [column.geom]
        type    = GIST
      }
    }
    table "dem_jobs" {
      schema = schema.transactional
      column "id" {
        null = false
        type = bigserial
      }
      column "job_id" {
        null = false
        type = bigint
      }
      column "vertical_datum_id" {
        null = false
        type = integer
      }
      column "contour_interval" {
        null = true
        type = real
      }
      column "geoid_model_id" {
        null = true
        type = integer
      }
      primary_key {
        columns = [column.id]
      }
      foreign_key "geoid_models_fkey" {
        columns     = [column.geoid_model_id]
        ref_columns = [table.geoid_models.column.id]
        on_update   = NO_ACTION
        on_delete   = NO_ACTION
      }
      foreign_key "jobs_fkey" {
        columns     = [column.job_id]
        ref_columns = [table.jobs.column.id]
        on_update   = NO_ACTION
        on_delete   = NO_ACTION
      }
      foreign_key "vertical_datums_fkey" {
        columns     = [column.vertical_datum_id]
        ref_columns = [table.vertical_datums.column.id]
        on_update   = NO_ACTION
        on_delete   = NO_ACTION
      }
    }
    table "imagery_jobs" {
      schema = schema.transactional
      column "id" {
        null = false
        type = bigserial
      }
      column "job_id" {
        null = false
        type = bigint
      }
      column "jpeg_quality" {
        null    = false
        type    = smallint
        default = 80
      }
      primary_key {
        columns = [column.id]
      }
      foreign_key "jobs_fkey" {
        columns     = [column.job_id]
        ref_columns = [table.jobs.column.id]
        on_update   = NO_ACTION
        on_delete   = NO_ACTION
      }
    }
    table "job_outputs" {
      schema = schema.transactional
      column "id" {
        null = false
        type = bigserial
      }
      column "job_id" {
        null = false
        type = bigint
      }
      column "file_path" {
        null = true
        type = character_varying
      }
      column "cleanup_status" {
        null = true
        type = character_varying
      }
      primary_key {
        columns = [column.id]
      }
      foreign_key "jobs_fkey" {
        columns     = [column.job_id]
        ref_columns = [table.jobs.column.id]
        on_update   = NO_ACTION
        on_delete   = NO_ACTION
      }
      index "job_id_unique" {
        unique  = true
        columns = [column.job_id]
        type    = BTREE
      }
    }
    table "jobs" {
      schema = schema.transactional
      column "id" {
        null = false
        type = bigserial
      }
      column "request_id" {
        null = true
        type = bigint
      }
      column "data_type" {
        null = false
        type = character_varying
      }
      column "progress" {
        null = true
        type = character_varying
      }
      column "processing_started_at" {
        null = true
        type = time
      }
      column "processing_finished_at" {
        null = true
        type = time
      }
      column "job_status_id" {
        null = false
        type = integer
      }
      column "created_at" {
        null = false
        type = time
      }
      column "updated_at" {
        null = false
        type = time
      }
      column "geom" {
        null = true
        type = polygon
      }
      column "horizontal_datum_id" {
        null = false
        type = integer
      }
      column "projection_id" {
        null = false
        type = integer
      }
      column "mission_id" {
        null = false
        type = integer
      }
      column "file_format_id" {
        null = false
        type = integer
      }
      primary_key {
        columns = [column.id]
      }
      foreign_key "file_formats_fkey" {
        columns     = [column.file_format_id]
        ref_columns = [table.file_formats.column.id]
        on_update   = NO_ACTION
        on_delete   = NO_ACTION
      }
      foreign_key "job_statuses_fkey" {
        columns     = [column.job_status_id]
        ref_columns = [table.job_statuses.column.id]
        on_update   = NO_ACTION
        on_delete   = NO_ACTION
      }
      foreign_key "requests_fkey" {
        columns     = [column.request_id]
        ref_columns = [table.requests.column.id]
        on_update   = NO_ACTION
        on_delete   = NO_ACTION
      }
    }
    table "landcover_jobs" {
      schema = schema.transactional
      column "id" {
        null = false
        type = bigserial
      }
      column "job_id" {
        null = false
        type = bigint
      }
      primary_key {
        columns = [column.id]
      }
      foreign_key "jobs_fkey" {
        columns     = [column.job_id]
        ref_columns = [table.jobs.column.id]
        on_update   = NO_ACTION
        on_delete   = NO_ACTION
      }
    }
    table "lidar_jobs" {
      schema = schema.transactional
      column "id" {
        null = false
        type = bigserial
      }
      column "job_id" {
        null = false
        type = bigint
      }
      column "bin_unit" {
        null = true
        type = sql("linear_unit")
      }
      column "bin_size" {
        null = true
        type = real
      }
      column "fill_gaps" {
        null = true
        type = boolean
      }
      column "contour_interval" {
        null = true
        type = real
      }
      column "create_single_contour" {
        null = true
        type = boolean
      }
      column "ancillary_data" {
        null = true
        type = sql("channels")
      }
      column "data_returns" {
        null = true
        type = sql("lidar_returns")
      }
      column "vertical_unit" {
        null = false
        type = sql("linear_unit")
      }
      column "classes" {
        null = false
        type = sql("smallint[]")
      }
      column "point_size_estimate" {
        null = true
        type = bigint
      }
      column "geoid_model" {
        null = false
        type = sql("geoid")
      }
      primary_key {
        columns = [column.id]
      }
      foreign_key "jobs_fkey" {
        columns     = [column.job_id]
        ref_columns = [table.jobs.column.id]
        on_update   = NO_ACTION
        on_delete   = NO_ACTION
      }
    }
    table "requests" {
      schema = schema.transactional
      column "id" {
        null = false
        type = bigserial
      }
      column "confirmation_sent" {
        null    = true
        type    = boolean
        default = false
      }
      column "created_at" {
        null = false
        type = timestamp
      }
      column "updated_at" {
        null = false
        type = timestamp
      }
      column "user_id" {
        null = false
        type = bigint
      }
      column "request_status_id" {
        null = false
        type = integer
      }
      primary_key {
        columns = [column.id]
      }
      foreign_key "request_status_fk" {
        columns     = [column.request_status_id]
        ref_columns = [table.request_statuses.column.id]
        on_update   = NO_ACTION
        on_delete   = NO_ACTION
      }
      foreign_key "users_fk" {
        columns     = [column.user_id]
        ref_columns = [table.users.column.id]
        on_update   = NO_ACTION
        on_delete   = NO_ACTION
      }
    }
    table "users" {
      schema = schema.transactional
      column "id" {
        null = false
        type = bigserial
      }
      column "email" {
        null = false
        type = character_varying
      }
      column "organization" {
        null = true
        type = character_varying
      }
      column "survey_optin" {
        null    = true
        type    = boolean
        default = false
      }
      primary_key {
        columns = [column.id]
      }
    }
    schema "public" {
    }
    schema "spatial" {
    }
    schema "transactional" {
    }
    
  • Implementation of RevisionReadWriter interface for postgresql

    Implementation of RevisionReadWriter interface for postgresql

    I am working on versioned migrations for my project and couldn't find a default implementation of RevisionReadWriter for PostgreSQL to use with the Executor. Is there a better way of applying the migration files to the DB?

  • Panic when inspecting pg

    Panic when inspecting pg

    Hi team,

    Saw the post on HN and wanted to try the tool. After reading the docs (it could be cool to have a simple brew install), I got to this step:

    atlas schema inspect -d "postgres://postgres:<pwd>@localhost:26432/postgres?sslmode=disable" -w

    But the process crashes with the error panic: inconsistent map element types

    Stacktrace:
    panic: inconsistent map element types (cty.Object(map[string]cty.Type{"__ref":cty.String, "column":cty.Map(cty.Object(map[string]cty.Type{"__ref":cty.String, "default":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)}), "null":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)}), "type":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)})})), "foreign_key":cty.Map(cty.Object(map[string]cty.Type{"__ref":cty.String, "columns":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), 
TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)}), "on_delete":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)}), "on_update":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)}), "ref_columns":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface 
{})(nil)})})), "index":cty.Map(cty.Object(map[string]cty.Type{"__ref":cty.String, "columns":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)}), "on":cty.Map(cty.Object(map[string]cty.Type{"__ref":cty.String, "column":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)}), "desc":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)}), "expr":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), 
RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)})})), "unique":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)})})), "primary_key":cty.Map(cty.Object(map[string]cty.Type{"__ref":cty.String, "columns":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)})})), "schema":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)})}) then 
cty.Object(map[string]cty.Type{"__ref":cty.String, "column":cty.Map(cty.Object(map[string]cty.Type{"__ref":cty.String, "default":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)}), "null":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)}), "type":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)})})), "foreign_key":cty.Map(cty.Object(map[string]cty.Type{"__ref":cty.String, "columns":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), 
Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)}), "on_delete":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)}), "on_update":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)}), "ref_columns":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)})})), 
"index":cty.Map(cty.Object(map[string]cty.Type{"__ref":cty.String, "columns":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)}), "on":cty.Object(map[string]cty.Type{"__ref":cty.String, "column":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)}), "desc":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)}), "expr":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, 
interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)})}), "unique":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)})})), "primary_key":cty.Map(cty.Object(map[string]cty.Type{"__ref":cty.String, "columns":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)})})), "schema":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)})}))
    
    goroutine 1 [running]:
    github.com/zclconf/go-cty/cty.MapVal(0xc000398ec8)
    	/Users/runner/go/pkg/mod/github.com/zclconf/[email protected]/cty/value_init.go:207 +0x47d
    ariga.io/atlas/schema/schemaspec/schemahcl.blockVars(0xc0004b9b80, {0x0, 0x0}, 0xc000336c00)
    	/Users/runner/go/pkg/mod/ariga.io/[email protected]/schema/schemaspec/schemahcl/context.go:87 +0x253
    ariga.io/atlas/schema/schemaspec/schemahcl.evalCtx(0xc00000e180, 0xc000239bc0)
    	/Users/runner/go/pkg/mod/ariga.io/[email protected]/schema/schemaspec/schemahcl/context.go:41 +0xc5
    ariga.io/atlas/schema/schemaspec/schemahcl.decode(0x7440108, {0xc0003e2a80, 0x63b4, 0x6a80})
    	/Users/runner/go/pkg/mod/ariga.io/[email protected]/schema/schemaspec/schemahcl/hcl.go:76 +0x145
    ariga.io/atlas/schema/schemaspec/schemahcl.(*state).UnmarshalSpec(0xc000062900, {0xc0003e2a80, 0x63b4, 0x6a80}, {0x4ca9de0, 0xc000296140})
    	/Users/runner/go/pkg/mod/ariga.io/[email protected]/schema/schemaspec/schemahcl/hcl.go:55 +0x5d
    ariga.io/atlas/sql/postgres.UnmarshalSpec({0xc0003e2a80, 0x63b4, 0x6a80}, {0x656d3a0, 0xc0001b0de0}, {0x4e08660, 0xc0002960f0})
    	/Users/runner/go/pkg/mod/ariga.io/[email protected]/sql/postgres/sqlspec.go:39 +0x9b
    ariga.io/atlas/sql/postgres.glob..func1({0xc0003e2a80, 0xc0003dc000, 0x63b4}, {0x4e08660, 0xc0002960f0})
    	/Users/runner/go/pkg/mod/ariga.io/[email protected]/sql/postgres/sqlspec.go:97 +0x3f
    ariga.io/atlantis/ent/schema.validateSchema.func1({0x658e9d0, 0xc000336390}, 0xc000148630)
    	/Users/runner/work/ariga/ariga/atlantis/ent/schema/atlasschema.go:98 +0x3d3
    ariga.io/atlantis/ent/hook.AtlasSchemaFunc.Mutate(0x6570dc0, {0x658e9d0, 0xc000336390}, {0x65b52f0, 0xc000148630})
    	/Users/runner/work/ariga/ariga/atlantis/ent/hook/hook.go:22 +0x49
    ariga.io/atlantis/ent/hook.If.func1.1({0x658e9d0, 0xc000336390}, {0x65b52f0, 0xc000148630})
    	/Users/runner/work/ariga/ariga/atlantis/ent/hook/hook.go:229 +0x89
    entgo.io/ent.MutateFunc.Mutate(0xc000698240, {0x658e9d0, 0xc000336390}, {0x65b52f0, 0xc000148630})
    	/Users/runner/go/pkg/mod/entgo.io/[email protected]/ent.go:347 +0x3d
    ariga.io/atlantis/ent/runtime.init.0.func1.1({0x658e9d0, 0xc000336390}, {0x65b52f0, 0xc000148630})
    	/Users/runner/work/ariga/ariga/atlantis/ent/runtime/runtime.go:33 +0xa2
    entgo.io/ent.MutateFunc.Mutate(0x6596c58, {0x658e9d0, 0xc000336390}, {0x65b52f0, 0xc000148630})
    	/Users/runner/go/pkg/mod/entgo.io/[email protected]/ent.go:347 +0x3d
    ariga.io/atlantis/ent.historyMutator.AtlasSchemaHistoryMutateHook.func1({0x658e9d0, 0xc000336390}, {0x65b52f0, 0xc000148630})
    	/Users/runner/work/ariga/ariga/atlantis/ent/enthistory.go:54 +0x16c
    entgo.io/ent.MutateFunc.Mutate(0x6570dc0, {0x658e9d0, 0xc000336390}, {0x65b52f0, 0xc000148630})
    	/Users/runner/go/pkg/mod/entgo.io/[email protected]/ent.go:347 +0x3d
    ariga.io/atlantis/ent.(*AtlasSchemaCreate).Save(0xc0002960a0, {0x658e9d0, 0xc000336390})
    	/Users/runner/work/ariga/ariga/atlantis/ent/atlasschema_create.go:97 +0x22c
    ariga.io/atlantis/ent.(*AtlasSchemaCreate).SaveX(...)
    	/Users/runner/work/ariga/ariga/atlantis/ent/atlasschema_create.go:106
    ariga.io/atlantis/cmd/atlas/web.Inspect({0x658e928, 0xc0000c61c0}, 0xc000148580, {0x20b330808, 0x45}, {0x0, 0xc0000c61c0, 0xc0005f4d40})
    	/Users/runner/work/ariga/ariga/atlantis/cmd/atlas/web/web.go:55 +0x405
    main.initConfig.func1({0x658e928, 0xc0000c61c0}, 0xc0000c61c0)
    	/Users/runner/work/ariga/ariga/atlantis/cmd/atlas/main.go:48 +0x46
    main.run({0x4ea6bdf, 0xe}, 0x643ff10)
    	/Users/runner/work/ariga/ariga/atlantis/cmd/atlas/main.go:77 +0x10c
    main.registerWeb.func1(0x6d87460, {0x4e8eb10, 0x3, 0x3})
    	/Users/runner/work/ariga/ariga/atlantis/cmd/atlas/main.go:58 +0x25
    github.com/spf13/cobra.(*Command).execute(0x6d87460, {0xc0000be5a0, 0x3, 0x3})
    	/Users/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:860 +0x5f8
    github.com/spf13/cobra.(*Command).ExecuteC(0x6d86a60)
    	/Users/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:974 +0x3bc
    github.com/spf13/cobra.(*Command).Execute(...)
    	/Users/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:902
    main.main()
    	/Users/runner/work/ariga/ariga/atlantis/cmd/atlas/main.go:35 +0x115
    

    Version

    • atlas CLI version v0.3.3
    • macOS 12.2

    Best Regards,

  • Move core functionality from cmd/* to sql/*

    Move core functionality from cmd/* to sql/*

    While going through the documentation of Atlas I stumbled on this section: https://atlasgo.io/integrations/go-api There it says that the core engine's capabilities can be used as a Go library / embedded as a module.

    Unfortunately, at the moment some very core features seem to be implemented in cmd/* - this makes replicating the functionality of the CLI tool impossible.

    For example - there is no way of reading a set of revisions from the DB revisions table as the ent implementation of the sql/migrate.RevisionReadWriter is not bundled with the ariga.io/atlas go module.

    As far as I can tell, the ariga.io/atlas Go module basically packages / exposes the sql/ directory from this repo, and in the case of this example the implementation of RevisionReadWriter is EntRevisions inside cmd/atlas/internal/migrate/migrate.go

    Could you please move the core functionality to the exported Go module so that the main features of the CLI can be replicated programmatically by using the Atlas Go API / embedding Atlas as a Go module?
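    To illustrate the gap, a caller would currently have to bring their own revision store. The sketch below is a simplified stand-in, NOT the actual sql/migrate.RevisionReadWriter interface (whose method set differs); it only shows the shape of what such an implementation would need to do:

```go
package main

import (
	"fmt"
	"sort"
)

// Revision is a simplified stand-in for a migration revision record;
// the real type lives in ariga.io/atlas/sql/migrate.
type Revision struct {
	Version     string
	Description string
}

// RevisionStore mirrors, in spirit, what a RevisionReadWriter must do:
// persist applied revisions and list them back. The actual interface in
// ariga.io/atlas has a different, richer method set.
type RevisionStore interface {
	WriteRevision(r *Revision) error
	ReadRevisions() ([]*Revision, error)
}

// memStore is a toy in-memory implementation, useful only as illustration.
type memStore struct {
	revs map[string]*Revision
}

func newMemStore() *memStore { return &memStore{revs: map[string]*Revision{}} }

func (s *memStore) WriteRevision(r *Revision) error {
	s.revs[r.Version] = r
	return nil
}

func (s *memStore) ReadRevisions() ([]*Revision, error) {
	out := make([]*Revision, 0, len(s.revs))
	for _, r := range s.revs {
		out = append(out, r)
	}
	// Revisions are keyed by version; return them in version order.
	sort.Slice(out, func(i, j int) bool { return out[i].Version < out[j].Version })
	return out, nil
}

func main() {
	var store RevisionStore = newMemStore()
	store.WriteRevision(&Revision{Version: "20220101", Description: "init"})
	store.WriteRevision(&Revision{Version: "20220202", Description: "drop_admins"})
	revs, _ := store.ReadRevisions()
	for _, r := range revs {
		fmt.Println(r.Version, r.Description)
	}
}
```

    The EntRevisions implementation referenced above does essentially this against the atlas_schema_revisions table, but lives in cmd/atlas/internal and is therefore not importable.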

  • Empty ...down.sql when drop tables

    Empty ...down.sql when drop tables

    Description: I'm using Atlas in combination with GORM's auto-migrate feature, as described in the docs, to generate migration scripts, and I'm running Atlas with "golang-migrate" as the format. But when I drop a table and run "atlas migrate diff", the down.sql generated by Atlas is empty. This leads golang-migrate to fail when migrating to an older version.

    Steps To Reproduce:

    • Create DBs diff, gorm, and app.
    • Create two entity models users and admins.
    • Create tables on gorm db by using GORM auto migrate.
    • Run Atlas with the following command.
    atlas migrate diff "init" --dir "./migrations"?format=golang-migrate --dev-url "diff_db_url" --to "gorm_db_url" 
    
    • See ..._init.up and ..._init.down SQL files.
    • Drop diff and gorm DBs.
    • Apply migrations on app DB via golang-migrate.
    • Succeed
    • Remove admins model.
    • Create DBs diff and gorm.
    • Create tables on gorm db by using GORM auto migrate. (users only)
    • Run Atlas with the following command.
    atlas migrate diff "drop_admins" --dir "./migrations"?format=golang-migrate --dev-url "diff_db_url" --to "gorm_db_url" 
    
    • See ..._drop_admins.down.sql is empty.

    Expected: ...drop_admins.up.sql:

    -- drop "admins" table
    DROP TABLE "public"."admins";
    

    ...drop_admins.down.sql:

    -- create "admins" table
    CREATE TABLE "public"."admins" ("id" text NOT NULL, "name" text NULL, "surname" text NULL, PRIMARY KEY ("id"));
    

    Actual ...drop_admins.up.sql:

    -- drop "admins" table
    DROP TABLE "public"."admins";
    

    ...drop_admins.down.sql:

  • Error: sql/schema: create

    Error: sql/schema: create "atlas_schema_revisions" table: pq: relation "atlas_schema_revisions" already exists

    Cannot apply migrations anymore.

    When running this command

    atlas migrate --dir file://ent/migrations apply --url=${DATABASE_URL}
    

    I am getting Error: sql/schema: create "atlas_schema_revisions" table: pq: relation "atlas_schema_revisions" already exists this error.

  • sql/postgres: Constraints on Ranges

    sql/postgres: Constraints on Ranges

    @a8m It would be nice to be able to define constraints on ranges via the EXCLUDE clause. (https://www.postgresql.org/docs/current/rangetypes.html#RANGETYPES-CONSTRAINT)
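    For reference, an exclusion constraint on a range type looks like the following in PostgreSQL DDL (example adapted from the linked PostgreSQL docs; Atlas HCL currently has no syntax for it):

```sql
-- Prevent overlapping reservations for the same room.
CREATE TABLE room_reservation (
    room   text,
    during tsrange,
    EXCLUDE USING GIST (room WITH =, during WITH &&)
);
```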

  • Postgres Identity Field Error

    Postgres Identity Field Error

    I am trying to add an identity column but am getting an error. I am using the latest v0.8.3 of Atlas, installed with brew.

    
    table "application" {
      schema = schema.public
      column "id" {
        null = false
        type = integer
        identity {
          generated = ALWAYS
        }
      }
      primary_key {
        columns = [column.id]
      }
    }
    schema "public" {
    }
    

    Error: reverse alter table "application": unexpected attribute change (expect IDENTITY): []

    Any help?

  • Support SQL-VIEWs

    Support SQL-VIEWs

    I have a schema which has SQL views. In the HCL export these views look like this:

    table "view_customer" {
      schema  = schema.testing
      comment = "VIEW"
      column "customer_id" {
        null    = false
        type    = int
        default = 0
      }
      column "user_id" {
        null = true
        type = int
      }
    }
    

    When this schema is applied, the views are created as tables.

    • Is it possible to ignore all views without having to list them in --exclude?
    • Is support for views planned / possible?
  • [Feature] Support Timescale DB

    [Feature] Support Timescale DB

    Timescale is essentially an extension of Postgres, with its own additional functionality like continuous queries.

    Features required:

    • support postgres extensions https://github.com/ariga/atlas/issues/520
    • inspect/create/convert hypertable
    • support postgres materialized views as continuous aggregates

    There are probably more, but these would be the most common.
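    As an illustration of what inspect/create support would need to cover: in TimescaleDB a hypertable is a regular table converted via the create_hypertable function, after which partitions ("chunks") are managed by the extension (sketch, assuming the timescaledb extension is installed):

```sql
CREATE EXTENSION IF NOT EXISTS timescaledb;

-- A plain table that will be converted into a hypertable.
CREATE TABLE metrics (
    time  TIMESTAMPTZ NOT NULL,
    value DOUBLE PRECISION
);

-- Convert it; from here on TimescaleDB partitions the data by time.
SELECT create_hypertable('metrics', 'time');
```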
