Dolt

Dolt is Git for Data!

Dolt is a SQL database that you can fork, clone, branch, merge, push and pull just like a git repository. Connect to Dolt just like any MySQL database to run queries or update the data using SQL commands. Use the command line interface to import CSV files, commit your changes, push them to a remote, or merge your teammate's changes.

All the commands you know for Git work exactly the same for Dolt. Git versions files, Dolt versions tables. It's like Git and MySQL had a baby!

We also built DoltHub, a place to share Dolt databases. We host public data for free!

Join us on Discord to say hi and ask questions!

What's it for?

Lots of things! Dolt is a generally useful tool with countless applications. But if you want some ideas, here's how people are using it so far.

How do I use it?

Check out our quick-start guide to skip the docs and get started as fast as humanly possible! Or keep reading for a high level overview of how to use the command line tool.

Having problems? Read the FAQ to find answers.

Dolt CLI

The dolt CLI has the same commands as git, with some extras.

$ dolt
Valid commands for dolt are
                init - Create an empty Dolt data repository.
              status - Show the working tree status.
                 add - Add table changes to the list of staged table changes.
               reset - Remove table changes from the list of staged table changes.
              commit - Record changes to the repository.
                 sql - Run a SQL query against tables in repository.
          sql-server - Start a MySQL-compatible server.
                 log - Show commit logs.
                diff - Diff a table.
               blame - Show what revision and author last modified each row of a table.
               merge - Merge a branch.
              branch - Create, list, edit, delete branches.
                 tag - Create, list, delete tags.
            checkout - Checkout a branch or overwrite a table from HEAD.
              remote - Manage set of tracked repositories.
                push - Push to a dolt remote.
                pull - Fetch from a dolt remote data repository and merge.
               fetch - Update the database from a remote data repository.
               clone - Clone from a remote data repository.
               creds - Commands for managing credentials.
               login - Login to a dolt remote host.
             version - Displays the current Dolt cli version.
              config - Dolt configuration.
                  ls - List tables in the working set.
              schema - Commands for showing and importing table schemas.
               table - Commands for copying, renaming, deleting, and exporting tables.
           conflicts - Commands for viewing and resolving merge conflicts.
             migrate - Executes a repository migration to update to the latest format.
         read-tables - Fetch table(s) at a specific commit into a new dolt repo
                  gc - Cleans up unreferenced data from the repository.

Installation

From Latest Release

To install on Linux or Mac based systems run this command in your terminal:

sudo bash -c 'curl -L https://github.com/dolthub/dolt/releases/latest/download/install.sh | bash'

This will download the latest dolt release and put it in /usr/local/bin/, which is probably on your $PATH.

Homebrew

Dolt is on Homebrew, updated every release.

brew install dolt

Windows

Download the latest Microsoft Installer (.msi file) in releases and run it.

For information on running on Windows, see here.

Chocolatey

You can install dolt using Chocolatey:

choco install dolt

From Source

Make sure you have Go installed, and that go is in your path.

Clone this repository and cd into the go directory. Then run:

go install ./cmd/dolt

Configuration

Verify that your installation has succeeded by running dolt in your terminal.

$ dolt
Valid commands for dolt are
[...]

Configure dolt with your user name and email, which you'll need to create commits. The commands work exactly the same as git.

$ dolt config --global --add user.email you@example.com
$ dolt config --global --add user.name "YOUR NAME"

Getting started

Let's create our first repo, storing state population data.

$ mkdir state-pops
$ cd state-pops

Run dolt init to set up a new dolt repo, just like you do with git. Then run some SQL queries to insert data.

$ dolt init
Successfully initialized dolt data repository.
$ dolt sql -q "create table state_populations ( state varchar(14), population int, primary key (state) )"
$ dolt sql -q "show tables"
+-------------------+
| tables            |
+-------------------+
| state_populations |
+-------------------+
$ dolt sql -q "insert into state_populations (state, population) values
('Delaware', 59096),
('Maryland', 319728),
('Tennessee', 35691),
('Virginia', 691937),
('Connecticut', 237946),
('Massachusetts', 378787),
('South Carolina', 249073),
('New Hampshire', 141885),
('Vermont', 85425),
('Georgia', 82548),
('Pennsylvania', 434373),
('Kentucky', 73677),
('New York', 340120),
('New Jersey', 184139),
('North Carolina', 393751),
('Maine', 96540),
('Rhode Island', 68825)"
Query OK, 17 rows affected

Use dolt sql to jump into a SQL shell, or run single queries with the -q option.

$ dolt sql -q "select * from state_populations where state = 'New York'"
+----------+------------+
| state    | population |
+----------+------------+
| New York | 340120     |
+----------+------------+

Add the new tables and commit them. Every command matches git exactly, but with tables instead of files.

$ dolt add .
$ dolt commit -m "initial data"
$ dolt status
On branch master
nothing to commit, working tree clean

Update the tables with more SQL commands, this time using the shell:

$ dolt sql
# Welcome to the DoltSQL shell.
# Statements must be terminated with ';'.
# "exit" or "quit" (or Ctrl-D) to exit.
state_pops> update state_populations set population = 0 where state like 'New%';
Query OK, 3 rows affected
Rows matched: 3  Changed: 3  Warnings: 0
state_pops> exit
Bye

See what you changed with dolt diff:

$ dolt diff
diff --dolt a/state_populations b/state_populations
--- a/state_populations @ qqr3vd0ea6264oddfk4nmte66cajlhfl
+++ b/state_populations @ 17cinjh5jpimilefd57b4ifeetjcbvn2
+-----+---------------+------------+
|     | state         | population |
+-----+---------------+------------+
|  <  | New Hampshire | 141885     |
|  >  | New Hampshire | 0          |
|  <  | New Jersey    | 184139     |
|  >  | New Jersey    | 0          |
|  <  | New York      | 340120     |
|  >  | New York      | 0          |
+-----+---------------+------------+

Then commit your changes once more with dolt add and dolt commit.

$ dolt add state_populations
$ dolt commit -m "More like Old Jersey"

See the history of your repository with dolt log.

$ dolt log
commit babgn65p1r5n36ao4gfdj99811qauo8j
Author: Zach Musgrave 
Date:   Wed Nov 11 13:42:27 -0800 2020

    More like Old Jersey

commit 9hgk7jb7hlkvvkbornpldcopqh2gn6jo
Author: Zach Musgrave 
Date:   Wed Nov 11 13:40:53 -0800 2020

    initial data

commit 8o8ldh58pjovn8uvqvdq2olf7dm63dj9
Author: Zach Musgrave 
Date:   Wed Nov 11 13:36:24 -0800 2020

    Initialize data repository

Importing data

If you have data in flat files like CSV or JSON, you can import them using the dolt table import command. Use dolt table import -u to add data to an existing table, or dolt table import -c to create a new one.

$ head -n3 data.csv
state,population
Delaware,59096
Maryland,319728
$ dolt table import -c -pk=state state_populations data.csv

Branch and merge

Just like with git, it's a good idea to make changes on your own branch, then merge them back to master. The dolt checkout command works exactly the same as git checkout.

$ dolt checkout -b <branch>

The merge command works the same too.

$ dolt merge <branch>
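The same branch-and-merge workflow is also available from a SQL session through Dolt's SQL functions (a minimal sketch; the branch name `feature` and the table edit are hypothetical):

```sql
-- Create and switch to a new branch, make a change, commit, then merge it back.
select dolt_checkout('-b', 'feature');
update state_populations set population = 643077 where state = 'Vermont';
select dolt_commit('-a', '-m', 'update Vermont population');
select dolt_checkout('master');
select dolt_merge('feature');
```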

Working with remotes

Dolt supports remotes just like git. Remotes are set up automatically when you clone data from one.

$ dolt clone dolthub/corona-virus
...
$ cd corona-virus
$ dolt remote -v
origin https://doltremoteapi.dolthub.com/dolthub/corona-virus

To push to a remote, you'll need credentials. Run dolt login to open a browser to sign in and cache your local credentials. You can sign into DoltHub with your Google account, your Github account, or with a user name and password.

$ dolt login

If you have a repo that you created locally that you now want to push to a remote, add a remote exactly like you would with git.

$ dolt remote add origin myname/myRepo
$ dolt remote -v
origin https://doltremoteapi.dolthub.com/myname/myRepo

And then push to it.

$ dolt push origin master

Other remotes

dolt also supports directory, aws, and gcs based remotes:

  • file - Use a directory on your machine
dolt remote add <remote> file:///Users/xyz/abs/path/
  • aws - Use an S3 bucket
dolt remote add <remote> aws://dynamo-table:s3-bucket/database
  • gs - Use a GCS bucket
dolt remote add <remote> gs://gcs-bucket/database

Interesting datasets to clone

DoltHub has lots of interesting datasets to explore and clone. Here are some of our favorites.

More documentation

There's a lot more to Dolt than can fit in a README file! For full documentation, check out the docs on DoltHub. Some of the topics we didn't cover here:

Credits and License

Dolt relies heavily on open source code and ideas from the Noms project. We are very thankful to the Noms team for making this code freely available, without which we would not have been able to build Dolt so rapidly.

Dolt is licensed under the Apache License, Version 2.0. See LICENSE for details.

Comments
  • Ambiguous error message when type checking fails makes it almost impossible to figure out why an import failed, especially when using triggers.

    I ran into an issue attempting to load a CSV file, which is unusual because this hadn't happened previously despite identical steps.

    After doing some investigation, it seems that new tables created with the below schema definition are not allowing me to import the attached csv. What is strange is that this previously worked fine, and I have located a particular branch on one of the test repos I had set up, where despite an identical schema the csv still imports.

    Repo: geweldon/Source3. Branch with error: main. Branch that still somehow works: history5.

    //Schema SBQQ__ProductOption__c @ working
    CREATE TABLE SBQQ__ProductOption__c (
      Id varchar(18),
      OwnerId varchar(18),
      SBQQ__AppliedImmediatelyContext__c varchar(255),
      SBQQ__AppliedImmediately__c tinyint DEFAULT "0",
      SBQQ__Bundled__c tinyint DEFAULT "0",
      SBQQ__ComponentCodePosition__c decimal(6,0),
      SBQQ__ComponentCode__c varchar(60),
      SBQQ__ComponentDescriptionPosition__c decimal(6,0),
      SBQQ__ComponentDescription__c varchar(255),
      SBQQ__ConfiguredSKU__c varchar(18),
      SBQQ__DefaultPricingTable__c varchar(255),
      SBQQ__DiscountAmount__c decimal(14,2),
      SBQQ__DiscountSchedule__c varchar(18),
      SBQQ__Discount__c decimal(8,2),
      SBQQ__DiscountedByPackage__c tinyint DEFAULT "0",
      SBQQ__ExistingQuantity__c decimal(12,2),
      SBQQ__Feature__c varchar(18),
      SBQQ__MaxQuantity__c decimal(12,2),
      SBQQ__MinQuantity__c decimal(12,2),
      SBQQ__Number__c decimal(5,0),
      SBQQ__OptionalSKU__c varchar(18),
      SBQQ__QuantityEditable__c tinyint DEFAULT "0",
      SBQQ__Quantity__c decimal(12,2),
      SBQQ__QuoteLineVisibility__c varchar(255),
      SBQQ__RenewalProductOption__c varchar(18),
      SBQQ__Required__c tinyint DEFAULT "0",
      SBQQ__Selected__c tinyint DEFAULT "0",
      SBQQ__SubscriptionScope__c varchar(255),
      SBQQ__System__c tinyint DEFAULT "0",
      SBQQ__Type__c varchar(255) DEFAULT "Component",
      SBQQ__UnitPrice__c decimal(14,2),
      SBQQ__UpliftedByPackage__c tinyint DEFAULT "0",
      CPQ_Custom_1__c tinyint DEFAULT "1",
      CPQ_Custom_2__c tinyint DEFAULT "0",
      Custom_Picklist__c varchar(255),
      Example_Record__c tinyint DEFAULT "0",
      Pre_Existing__c tinyint DEFAULT "0",
      ProdRecordSeedExtID__c varchar(255),
      ATGExtId__c varchar(255),
      ATGSourceId__c varchar(255) NOT NULL,
      PRIMARY KEY (ATGSourceId__c),
      KEY Id (Id),
      KEY OwnerId (OwnerId),
      KEY SBQQ__ConfiguredSKU__c (SBQQ__ConfiguredSKU__c),
      KEY SBQQ__DiscountSchedule__c (SBQQ__DiscountSchedule__c),
      KEY SBQQ__Feature__c (SBQQ__Feature__c),
      KEY SBQQ__OptionalSKU__c (SBQQ__OptionalSKU__c),
      KEY SBQQ__RenewalProductOption__c (SBQQ__RenewalProductOption__c)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

    partner_org___dep_train_3-backup-sbqq__productoption__c-20220413205903.csv

  • Can't connect to mysql with navicat

    I started a new server and I can connect to MySQL with the CLI (see attached image). But when I create a connection to MySQL with Navicat, it reports an error (see attached image). Excuse me, why is this?

  • `as of` queries fail in StarRocks

    Customer is trying to use StarRocks DB:

    https://github.com/StarRocks/starrocks

    They are able to connect, but apparently `as of` queries don't work. This might need a change to their code.

  • new sysbench runner and CI for systab

    Harness and CI for testing system table performance. Scripts are codegen'd in https://github.com/dolthub/systab-sysbench-scripts. We compare native system tables to materialized versions with logically equivalent rows.

    +-------------------------------------+-----------+----------+-------+
    | name                                | mean_mult | med_mult | stdd  |
    +-------------------------------------+-----------+----------+-------+
    | dolt_commit_ancestors_commit_filter | 6.89      | 7.65     | 0.377 |
    | dolt_commits_commit_filter          | 6.08      | 6.74     | 0.311 |
    | dolt_diff_log_join_on_commit        | 1.39      | 1.36     | 5.834 |
    | dolt_diff_table_commit_filter       | 37.77     | 40.8     | 1.756 |
    | dolt_diffs_commit_filter            | 25.8      | 29.53    | 0.738 |
    | dolt_history_commit_filter          | 25.52     | 27.95    | 0.799 |
    | dolt_log_commit_filter              | 4.64      | 5.14     | 0.401 |
    +-------------------------------------+-----------+----------+-------+
    
  • Error: [2057] when fetching remote table at non-HEAD commit following schema change using as of

    Trying to fetch non-HEAD commits with the doltr R package produces an error if the schema has been changed.

    Error: The number of parameters in bound buffers differs from number of columns in resultset [2057]

    doltr issue #46

    I'll post dolt server logs this evening

  • Missing indexes when join condition uses extra conditions

    I have a GROUP BY query with multiple JOINs which runs quite slow in Dolt (~90 sec) compared to MySQL (~400 ms) when executed against the data in https://www.dolthub.com/repositories/knutwannheden/lobbywatch:

    select rechtsform,
           count(1)                                                  anz,
           count(i.id)                                               anz_bind,
           count(distinct pa.id)                                     anz_parl,
           count(m.id)                                               anz_mand,
           count(distinct p.id)                                      anz_pers,
           count(case when i.id is null and m.id is null then 1 end) anz_ohne
    from organisation o
             left join interessenbindung i
                       on o.id = i.organisation_id and now() between coalesce(i.von, now()) and coalesce(i.bis, now())
             left join parlamentarier pa
                       on pa.id = i.parlamentarier_id and now() between pa.im_rat_seit and coalesce(pa.im_rat_bis, now())
             left join mandat m on o.id = m.organisation_id and now() between coalesce(m.von, now()) and coalesce(m.bis, now())
             left join person p on p.id = m.person_id and now() >= coalesce(p.zutrittsberechtigung_von, now())
    group by rechtsform
    order by 2 desc
    ;
    

    Here is the link to the named query ("slow analytics query"): https://www.dolthub.com/repositories/knutwannheden/lobbywatch/query/master?q=select+rechtsform%2C++++++++count%281%29++++++++++++++++++++++++++++++++++++++++++++++++++anz%2C++++++++count%28i.id%29+++++++++++++++++++++++++++++++++++++++++++++++anz_bind%2C++++++++count%28distinct+pa.id%29+++++++++++++++++++++++++++++++++++++anz_parl%2C++++++++count%28m.id%29+++++++++++++++++++++++++++++++++++++++++++++++anz_mand%2C++++++++count%28distinct+p.id%29++++++++++++++++++++++++++++++++++++++anz_pers%2C++++++++count%28case+when+i.id+is+null+and+m.id+is+null+then+1+end%29+anz_ohne+from+organisation+o++++++++++left+join+interessenbindung+i++++++++++++++++++++on+o.id+%3D+i.organisation_id+and+now%28%29+between+coalesce%28i.von%2C+now%28%29%29+and+coalesce%28i.bis%2C+now%28%29%29++++++++++left+join+parlamentarier+pa++++++++++++++++++++on+pa.id+%3D+i.parlamentarier_id+and+now%28%29+between+pa.im_rat_seit+and+coalesce%28pa.im_rat_bis%2C+now%28%29%29++++++++++left+join+mandat+m+on+o.id+%3D+m.organisation_id+and+now%28%29+between+coalesce%28m.von%2C+now%28%29%29+and+coalesce%28m.bis%2C+now%28%29%29++++++++++left+join+person+p+on+p.id+%3D+m.person_id+and+now%28%29+%3E%3D+coalesce%28p.zutrittsberechtigung_von%2C+now%28%29%29+group+by+rechtsform+order+by+2+desc+%3B&active=Queries

  • Starting dolt as a sql server specifying user and host parameter creates a user at the host as defined. This should be the `%` default.

    Specifying the -u command line parameter no longer works (Dolt version 0.40.25). I kept working my way backward through earlier releases to finally make it work as documented (version 0.39.5).

     Main PID: 30022 (dolt)
       CGroup: /system.slice/doltdb.service
               └─30022 /usr/local/bin/dolt sql-server -H10.1.6.210 -uroot
    

    dolt_conn_user_not_found_error

  • As of reach dependent on view commit history

    We observed a strange problem where a view we had established to union three long-running tables suddenly started always referring to HEAD when using as of. After some exploration we found that this occurs when a view is dropped and then added back in. It appears that a newly created view can't use as of to reach further back into the commit history than its own creation commit.

    I also wonder if this is a more general issue with as of. For example, we've been dropping tables then immediately re-adding them back in without an intervening commit in order to handle changes to the schema. This was a cool trick but can dolt's as of handle that? Or will it stop at the last time a table was added?

    reprex:

    dolt sql -q "create table test (pk int, c1 int, primary key(pk))" 
    dolt sql -q "insert into test values (1,2), (2,4), (3,6)"
    dolt commit -am "create test table and add values"
    dolt sql -q "select * from test" 
    
    dolt sql -q "create view test_view as select * from test where c1 < 3"
    dolt sql -q "select * from test_view" 
    
    dolt sql -q "insert into test values (4,1), (5,3), (6,5)"
    dolt commit -am "add some more values"
    
    dolt sql -q "select * from test" 
    dolt sql -q "select * from test_view as of 'HEAD'"
    dolt sql -q "select * from test_view as of 'HEAD^'"
    
    dolt sql -q "alter table test add column c2 int"
    dolt commit -am "Added column c2"
    dolt sql -q "select * from test_view as of 'HEAD'"
    dolt sql -q "select * from test_view as of 'HEAD^'"
    dolt sql -q "select * from test_view as of 'HEAD~2'"
    
    dolt sql -q "drop view test_view"
    dolt commit -am "drop test_view"
    
    dolt sql -q "create view test_view as select * from test where c1 < 3"
    dolt commit -am "re-add test_view"
    dolt sql -q "select * from test_view as of 'HEAD'"
    dolt sql -q "select * from test_view as of 'HEAD^'"
    dolt sql -q "select * from test_view as of 'HEAD~2'" 
    
  • `dolt table import` cannot handle table names with `-` in them

    I've been diving into updating tables from a CSV file via the dolt command line. I imported and created a series of tables from some CSV exports from another database, and then attempted to update using a comparable but unique CSV. However, I keep running into this error: "Error creating reader for csv file:partner_org___dep_train_3-backup-sbqq__productoption__c-20220308222108.csv." I have included the full set of commands and results below, but I am baffled as to what is going on here.

    I am able to import the CSVs in question using the DoltHub UI, and I can import them using the create option in an empty branch, but trying to update a table just keeps giving me an error.

    grantweldon@MacbookPros-MacBook-Pro Org-record-backup-3 % dolt ls
    Tables in working set:
        partner_org___dep_train_3-backup-product2
        partner_org___dep_train_3-backup-sbqq__customaction__c
        partner_org___dep_train_3-backup-sbqq__priceaction__c
        partner_org___dep_train_3-backup-sbqq__pricecondition__c
        partner_org___dep_train_3-backup-sbqq__pricerule__c
        partner_org___dep_train_3-backup-sbqq__productfeature__c
        partner_org___dep_train_3-backup-sbqq__productoption__c

    grantweldon@MacbookPros-MacBook-Pro Org-record-backup-3 % dolt table import -u partner_org___dep_train_3-backup-sbqq__productoption__c partner_org___dep_train_3-backup-sbqq__productoption__c-20220308222108.csv

    Error creating reader for csv file:partner_org___dep_train_3-backup-sbqq__productoption__c-20220308222108.csv. cause: syntax error at position 41 near 'partner_org___dep_train_3' When attempting to move data from csv file:partner_org___dep_train_3-backup-sbqq__productoption__c-20220308222108.csv to partner_org___dep_train_3-backup-sbqq__productoption__c, could not open a reader.

    grantweldon@MacbookPros-MacBook-Pro Org-record-backup-3 % dolt table import -r partner_org___dep_train_3-backup-sbqq__productoption__c partner_org___dep_train_3-backup-sbqq__productoption__c-20220308222108.csv

    Error creating reader for csv file:partner_org___dep_train_3-backup-sbqq__productoption__c-20220308222108.csv.

  • Autocommit blocks access to conflict resolution table

    Autocommit (on by default) prevents transactions from executing when conflicts exist. This blocks the select * from dolt_conflicts query, which is needed to resolve those conflicts.

    Repro below

    create database test_db;
    use test_db;
    
    select dolt_checkout('main');
    create table state_populations (state varchar(14) not null,population int,rate double,primary key (state));
    insert into state_populations (state, population, rate) values ('Delaware', 59096, 0.98),('New Hampshire', 141885, 1.007);
    select dolt_commit('-a', '-m', 'initial table for merge conflict path');
    select dolt_checkout('-b', 'mybranch_2', 'main');
    select dolt_checkout('-b', 'mybranch_1', 'main');
    
    update state_populations set population = 200000 where state like 'New%';
    select dolt_commit('-a', '-m', 'commit data to mybranch_1');
    select dolt_checkout('main');
    select dolt_merge('mybranch_1');
    select dolt_commit('-m', 'dolt_commit merge mybranch_1');
    
    select dolt_checkout('mybranch_2');
    
    update state_populations set population = 300000 where state like 'New%';
    select dolt_commit('-a', '-m', 'commit data to mybranch_2');
    select dolt_checkout('main');
    select dolt_merge('mybranch_2');
    select * from dolt_conflicts;
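One workaround, hedged and untested here, is to disable autocommit for the session before the conflicting merge, so the transaction stays open and the conflicts table remains reachable:

```sql
-- Hedged sketch: with autocommit off, the merge leaves an open transaction
-- whose conflicts can be inspected and resolved before committing.
set autocommit = 0;
select dolt_merge('mybranch_2');
select * from dolt_conflicts;
```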
    
  • Support VARCHAR(MAX)

    Currently the maximum size for a VARCHAR is 16383 and one needs to specify VARCHAR(16383) for it. It would be nice to support the VARCHAR(MAX) syntax so one doesn't have to remember the number.
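For context, the 16383 limit appears to follow from MySQL's 65,535-byte maximum row size divided by utf8mb4's four bytes per character (an inference, not a statement from this issue):

```sql
-- 65535 bytes / 4 bytes per utf8mb4 character = 16383 characters
select floor(65535 / 4) as max_utf8mb4_varchar;  -- 16383
```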

  • Bump github.com/aws/aws-sdk-go from 1.32.6 to 1.33.0 in /go

    Bumps github.com/aws/aws-sdk-go from 1.32.6 to 1.33.0.

    Changelog

    Sourced from github.com/aws/aws-sdk-go's changelog.

    Release v1.33.0 (2020-07-01)

    Service Client Updates

    • service/appsync: Updates service API and documentation
    • service/chime: Updates service API and documentation
      • This release supports third party emergency call routing configuration for Amazon Chime Voice Connectors.
    • service/codebuild: Updates service API and documentation
      • Support build status config in project source
    • service/imagebuilder: Updates service API and documentation
    • service/rds: Updates service API
      • This release adds the exceptions KMSKeyNotAccessibleFault and InvalidDBClusterStateFault to the Amazon RDS ModifyDBInstance API.
    • service/securityhub: Updates service API and documentation

    SDK Features

    • service/s3/s3crypto: Introduces EncryptionClientV2 and DecryptionClientV2 encryption and decryption clients which support a new key wrapping algorithm kms+context. (#3403)
      • DecryptionClientV2 maintains the ability to decrypt objects encrypted using the EncryptionClient.
      • Please see s3crypto documentation for migration details.

    Release v1.32.13 (2020-06-30)

    Service Client Updates

    • service/codeguru-reviewer: Updates service API and documentation
    • service/comprehendmedical: Updates service API
    • service/ec2: Updates service API and documentation
      • Added support for tag-on-create for CreateVpc, CreateEgressOnlyInternetGateway, CreateSecurityGroup, CreateSubnet, CreateNetworkInterface, CreateNetworkAcl, CreateDhcpOptions and CreateInternetGateway. You can now specify tags when creating any of these resources. For more information about tagging, see AWS Tagging Strategies.
    • service/ecr: Updates service API and documentation
      • Add a new parameter (ImageDigest) and a new exception (ImageDigestDoesNotMatchException) to PutImage API to support pushing image by digest.
    • service/rds: Updates service documentation
      • Documentation updates for rds

    Release v1.32.12 (2020-06-29)

    Service Client Updates

    • service/autoscaling: Updates service documentation and examples
      • Documentation updates for Amazon EC2 Auto Scaling.
    • service/codeguruprofiler: Updates service API, documentation, and paginators
    • service/codestar-connections: Updates service API, documentation, and paginators
    • service/ec2: Updates service API, documentation, and paginators
      • Virtual Private Cloud (VPC) customers can now create and manage their own Prefix Lists to simplify VPC configurations.

    Release v1.32.11 (2020-06-26)

    Service Client Updates

    • service/cloudformation: Updates service API and documentation
      • ListStackInstances and DescribeStackInstance now return a new StackInstanceStatus object that contains DetailedStatus values: a disambiguation of the more generic Status value. ListStackInstances output can now be filtered on DetailedStatus using the new Filters parameter.
    • service/cognito-idp: Updates service API

    ... (truncated)

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

  • Alias bug with `INSERT INTO ON DUPLICATE KEY UPDATE`

    schema:

    CREATE TABLE `image_labels` (
      `image_id` int NOT NULL,
      `label` varchar(100) NOT NULL,
      `top_left_x` int,
      `top_left_y` int,
      `bottom_right_x` int,
      `bottom_right_y` int,
      PRIMARY KEY (`image_id`,`label`),
      CONSTRAINT `chk_esd5b974` CHECK ((`top_left_x` < `bottom_right_x`)),
      CONSTRAINT `chk_r4sbrdks` CHECK ((`top_left_y` < `bottom_right_y`))
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_bin;
    
    image_labels2 @ working
    CREATE TABLE `image_labels2` (
      `image_id` int NOT NULL,
      `label` varchar(100) NOT NULL,
      `top_left_x` int,
      `top_left_y` int,
      `bottom_right_x` int,
      `bottom_right_y` int
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_bin;
    

    When running:

    INSERT INTO image_labels 
    SELECT * from image_labels2 
    ON DUPLICATE KEY UPDATE 
    image_labels.top_left_x = image_labels2.top_left_x,
    image_labels.top_left_y = image_labels2.top_left_y,
    image_labels.bottom_right_x = image_labels2.bottom_right_x,
    image_labels.bottom_right_y = image_labels2.bottom_right_y;
    

    I get:

    error on line 1 for query INSERT INTO image_labels
    SELECT * from image_labels2
    ON DUPLICATE KEY UPDATE
    image_labels.top_left_x = image_labels2.top_left_x,
    image_labels.top_left_y = image_labels2.top_left_y,
    image_labels.bottom_right_x = image_labels2.bottom_right_x,
    image_labels.bottom_right_y = image_labels2.bottom_right_y: ambiguous column name "top_left_x", it's present in all these tables: image_labels, image_labels2
    ambiguous column name "top_left_x", it's present in all these tables: image_labels, image_labels2
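A possible workaround, borrowed from MySQL 8's documented pattern for INSERT ... SELECT ... ON DUPLICATE KEY UPDATE rather than confirmed against Dolt, is to alias the selected rows with a derived table and reference that alias in the update clause:

```sql
-- Hypothetical workaround: alias the source rows as `src` to avoid ambiguity.
INSERT INTO image_labels
SELECT * FROM (SELECT * FROM image_labels2) AS src
ON DUPLICATE KEY UPDATE
  top_left_x     = src.top_left_x,
  top_left_y     = src.top_left_y,
  bottom_right_x = src.bottom_right_x,
  bottom_right_y = src.bottom_right_y;
```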
    
  • `CREATE TABLE AS SELECT *` does not replicate CHECK constraints

    Schema is:

    CREATE TABLE `image_labels` (
      `image_id` int NOT NULL,
      `label` varchar(100) NOT NULL,
      `top_left_x` int,
      `top_left_y` int,
      `bottom_right_x` int,
      `bottom_right_y` int,
      PRIMARY KEY (`image_id`,`label`),
      CONSTRAINT `chk_esd5b974` CHECK ((`top_left_x` < `bottom_right_x`)),
      CONSTRAINT `chk_r4sbrdks` CHECK ((`top_left_y` < `bottom_right_y`))
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_bin;
    

    Run

    CREATE TABLE image_labels2 AS SELECT * FROM image_labels;
    

    Result:

    CREATE TABLE `image_labels2` (
      `image_id` int NOT NULL,
      `label` varchar(100) NOT NULL,
      `top_left_x` int,
      `top_left_y` int,
      `bottom_right_x` int,
      `bottom_right_y` int
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_bin;
    
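    If the goal is a structural copy that preserves keys and CHECK constraints, `CREATE TABLE ... LIKE` followed by `INSERT ... SELECT` is a possible alternative; a sketch assuming MySQL-style `LIKE` semantics, which copy indexes and CHECK constraints (with regenerated constraint names) but not the data:

    ```sql
    CREATE TABLE image_labels2 LIKE image_labels;
    INSERT INTO image_labels2 SELECT * FROM image_labels;
    ```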
  • [auto-bump] [no-release-notes] dependency by max-hoffman

    :coffee: An Automated Dependency Version Bump PR :crown:

    Initial Changes

    The initial changes contained in this PR were produced by `go get`ing the dependency.

    ```
    $ cd ./go
    $ go get github.com/dolthub//go@
    ```

    Before Merging

    This PR must have passing CI and a review before merging.

  • dolt doesn't handle large numbers without a cast

    SQL:

    use db;
    
    SET @testValue = 809826404100301269648758758005707100;
    CREATE TABLE t(a INT, b DECIMAL(40, 2));
    INSERT INTO t(a, b) VALUES (1, 1), (2, 42);
    SELECT *, 'expecting 2 rows' FROM t WHERE b <= @testValue;
    
    INSERT INTO t(a, b) VALUES (3, @testValue - 100);
    SELECT *, 'expecting 3 rows' FROM t WHERE b < @testValue;
    

    MySQL output:

    +------+-------+------------------+
    | a    | b     | expecting 2 rows |
    +------+-------+------------------+
    |    1 |  1.00 | expecting 2 rows |
    |    2 | 42.00 | expecting 2 rows |
    +------+-------+------------------+
    +------+-----------------------------------------+------------------+
    | a    | b                                       | expecting 3 rows |
    +------+-----------------------------------------+------------------+
    |    1 |                                    1.00 | expecting 3 rows |
    |    2 |                                   42.00 | expecting 3 rows |
    |    3 | 809826404100301269648758758005707000.00 | expecting 3 rows |
    +------+-----------------------------------------+------------------+
    

    dolt output:

    error on line 3 for query 
    
    SET @testValue = 809826404100301269648758758005707100: strconv.ParseUint: parsing "809826404100301269648758758005707100": value out of range
    strconv.ParseUint: parsing "809826404100301269648758758005707100": value out of range
    

    If we instead assign @testValue by casting from a string to a DECIMAL, we can see that dolt handles large decimals just fine.

    SQL difference:

    SET @testValue = CAST("809826404100301269648758758005707100" AS DECIMAL(40, 2));
    

    dolt output:

    Query OK, 2 rows affected (0.00 sec)
    +---+-------+------------------+
    | a | b     | expecting 2 rows |
    +---+-------+------------------+
    | 1 | 1.00  | expecting 2 rows |
    | 2 | 42.00 | expecting 2 rows |
    +---+-------+------------------+
    
    Query OK, 1 row affected (0.00 sec)
    +---+-----------------------------------------+------------------+
    | a | b                                       | expecting 3 rows |
    +---+-----------------------------------------+------------------+
    | 1 | 1.00                                    | expecting 3 rows |
    | 3 | 809826404100301269648758758005707000.00 | expecting 3 rows |
    | 2 | 42.00                                   | expecting 3 rows |
    +---+-----------------------------------------+------------------+
    

    MySQL output stays the same:

    +------+-------+------------------+
    | a    | b     | expecting 2 rows |
    +------+-------+------------------+
    |    1 |  1.00 | expecting 2 rows |
    |    2 | 42.00 | expecting 2 rows |
    +------+-------+------------------+
    +------+-----------------------------------------+------------------+
    | a    | b                                       | expecting 3 rows |
    +------+-----------------------------------------+------------------+
    |    1 |                                    1.00 | expecting 3 rows |
    |    2 |                                   42.00 | expecting 3 rows |
    |    3 | 809826404100301269648758758005707000.00 | expecting 3 rows |
    +------+-----------------------------------------+------------------+
    