sq: swiss-army knife for data

sq is a command line tool that provides jq-style access to structured data sources such as SQL databases, or document formats like CSV or Excel.

sq can perform cross-source joins, execute database-native SQL, and output to a multitude of formats including JSON, Excel, CSV, HTML, Markdown and XML, or insert directly into a SQL database. sq can also inspect sources to view metadata about the source structure (tables, columns, size), and has commands for common database operations such as copying or dropping tables.

Install

For other installation options, see here.

It is strongly advised to install shell completion.

macOS

brew install neilotoole/sq/sq

Windows

scoop bucket add sq https://github.com/neilotoole/sq
scoop install sq

Linux

apt

curl -fsSLO https://github.com/neilotoole/sq/releases/latest/download/sq-linux-amd64.deb && sudo apt install -y ./sq-linux-amd64.deb && rm ./sq-linux-amd64.deb

rpm

sudo rpm -i https://github.com/neilotoole/sq/releases/latest/download/sq-linux-amd64.rpm

yum

yum localinstall -y https://github.com/neilotoole/sq/releases/latest/download/sq-linux-amd64.rpm

Shell completion

Shell completion is available for bash, zsh, fish, and powershell. Installing it is strongly recommended.

Execute sq completion --help for installation instructions.
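
sq uses spf13/cobra under the hood (see the shell-completion issue in the comments below), so the usual cobra-style pattern should apply. Treat the following as a sketch only — the install path is an assumption and varies by system, so defer to sq completion --help:

# load completion for the current bash session
source <(sq completion bash)

# or install it permanently (path varies by distro)
sq completion bash > /etc/bash_completion.d/sq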

Quickstart

Use sq help to see command help. The tutorial is the best place to start. The cookbook has recipes for common actions.

The major concept is: sq operates on data sources, which are treated as SQL databases (even if the source is really a CSV or XLSX file etc).

In a nutshell, you sq add a source (giving it a handle), and then execute commands against the source.

Sources

Initially there are no sources.

$ sq ls

Let's add a source. First we'll add a SQLite database, but this could also be Postgres, SQL Server, Excel, etc. Download the sample DB, and sq add the source. We use the -h flag to specify the handle.

$ wget https://sq.io/testdata/sakila.db

$ sq add ./sakila.db -h @sakila_sl3
@sakila_sl3  sqlite3  sakila.db

$ sq ls -v
HANDLE       DRIVER   LOCATION                 OPTIONS
@sakila_sl3* sqlite3  sqlite3:///root/sakila.db

$ sq ping @sakila_sl3
@sakila_sl3  1ms  pong

$ sq src
@sakila_sl3  sqlite3  sakila.db

The sq ping command simply pings the source to verify that it's available.

sq src lists the active source, which in our case is @sakila_sl3. You can change the active source using sq src @other_src. When there's an active source specified, you can usually omit the handle from sq commands. Thus you could instead do:

$ sq ping
@sakila_sl3  1ms  pong

Query

Fundamentally, sq is for querying data. Using our jq-style syntax:

$ sq '.actor | .actor_id < 100 | .[0:3]'
actor_id  first_name  last_name     last_update
1         PENELOPE    GUINESS       2020-02-15T06:59:28Z
2         NICK        WAHLBERG      2020-02-15T06:59:28Z
3         ED          CHASE         2020-02-15T06:59:28Z
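
Column selection uses the same pipe-style syntax. Assuming the Sakila column names shown above, a query along these lines should return just the name columns:

$ sq '.actor | .[0:3] | .first_name, .last_name'
first_name  last_name
PENELOPE    GUINESS
NICK        WAHLBERG
ED          CHASE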

The above query selected some rows from the actor table. You could also use native SQL, e.g.:

$ sq sql 'SELECT * FROM actor WHERE actor_id < 100 LIMIT 3'
actor_id  first_name  last_name  last_update
1         PENELOPE    GUINESS    2020-02-15T06:59:28Z
2         NICK        WAHLBERG   2020-02-15T06:59:28Z
3         ED          CHASE      2020-02-15T06:59:28Z

But we're flying a bit blind here: how did we know about the actor table?

Inspect

sq inspect is your friend (output abbreviated):

$ sq inspect
HANDLE          DRIVER   NAME       FQ NAME         SIZE   TABLES  LOCATION
@sakila_sl3     sqlite3  sakila.db  sakila.db/main  5.6MB  21      sqlite3:///root/sakila.db

TABLE                   ROWS   TYPE   SIZE  NUM COLS  COL NAMES                                                                          COL TYPES
actor                   200    table  -     4         actor_id, first_name, last_name, last_update                                       numeric, VARCHAR(45), VARCHAR(45), TIMESTAMP
address                 603    table  -     8         address_id, address, address2, district, city_id, postal_code, phone, last_update  int, VARCHAR(50), VARCHAR(50), VARCHAR(20), INT, VARCHAR(10), VARCHAR(20), TIMESTAMP
category                16     table  -     3         category_id, name, last_update

Use --json (-j) to output in JSON (output abbreviated):

$ sq inspect -j
{
  "handle": "@sakila_sl3",
  "name": "sakila.db",
  "driver": "sqlite3",
  "db_version": "3.31.1",
  "location": "sqlite3:///root/sakila.db",
  "size": 5828608,
  "tables": [
    {
      "name": "actor",
      "table_type": "table",
      "row_count": 200,
      "columns": [
        {
          "name": "actor_id",
          "position": 0,
          "primary_key": true,
          "base_type": "numeric",
          "column_type": "numeric",
          "kind": "decimal",
          "nullable": false
        }

Combine sq inspect with jq for some useful capabilities. Here's how to list all the table names in the active source:

$ sq inspect -j | jq -r '.tables[] | .name'
actor
address
category
city
country
customer
[...]

And here's how you could export each table to a CSV file:

$ sq inspect -j | jq -r '.tables[] | .name' | xargs -I % sq .% --csv --output %.csv
$ ls
actor.csv     city.csv	    customer_list.csv  film_category.csv  inventory.csv  rental.csv		     staff.csv
address.csv   country.csv   film.csv	       film_list.csv	  language.csv	 sales_by_film_category.csv  staff_list.csv
category.csv  customer.csv  film_actor.csv     film_text.csv	  payment.csv	 sales_by_store.csv	     store.csv

Note that you can also inspect an individual table:

$ sq inspect @sakila_sl3.actor
TABLE  ROWS  TYPE   SIZE  NUM COLS  COL NAMES                                     COL TYPES
actor  200   table  -     4         actor_id, first_name, last_name, last_update  numeric, VARCHAR(45), VARCHAR(45), TIMESTAMP
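
As at the source level, the table-level inspect output can be combined with jq. Assuming the per-table JSON mirrors the structure shown earlier (a columns array with name fields), this would list the actor table's column names:

$ sq inspect -j @sakila_sl3.actor | jq -r '.columns[] | .name'
actor_id
first_name
last_name
last_update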

Insert Output Into Database Source

sq query results can be output in various formats (JSON, XML, CSV, etc.), and can also be inserted directly into a database source.

That is, you can use sq to insert the results of a Postgres query into a MySQL table, copy an Excel worksheet into a SQLite table, or push a CSV file into a SQL Server table, etc.

Note: If you want to copy a table inside the same (database) source, use sq tbl copy instead, which uses the database's native table copy functionality.

For this example, we'll insert an Excel worksheet into our @sakila_sl3 SQLite database. First, we download the XLSX file, and sq add it as a source.

$ wget https://sq.io/testdata/xl_demo.xlsx

$ sq add ./xl_demo.xlsx --opts header=true
@xl_demo_xlsx  xlsx  xl_demo.xlsx

$ sq @xl_demo_xlsx.person
uid  username    email                  address_id
1    neilotoole  [email protected]  1
2    ksoze       [email protected]        2
3    kubla       [email protected]          NULL
[...]

Now, execute the same query, but this time sq inserts the results into a new table (person) in @sakila_sl3:

$ sq @xl_demo_xlsx.person --insert @sakila_sl3.person
Inserted 7 rows into @sakila_sl3.person

$ sq inspect @sakila_sl3.person
TABLE   ROWS  TYPE   SIZE  NUM COLS  COL NAMES                         COL TYPES
person  7     table  -     4         uid, username, email, address_id  INTEGER, TEXT, TEXT, INTEGER

$ sq @sakila_sl3.person
uid  username    email                  address_id
1    neilotoole  [email protected]  1
2    ksoze       [email protected]        2
3    kubla       [email protected]          NULL
[...]

Cross-Source Join

sq has rudimentary support for cross-source joins. That is, you can join an Excel worksheet with a CSV file, a Postgres table, etc.

Note: The current mechanism for these joins is highly naive: sq copies the joined table from each source to a "scratch database" (SQLite by default), and then performs the JOIN using the scratch database's SQL interface. Thus, performance is abysmal for larger tables. There are massive optimizations to be made, but none have been implemented yet.

See the tutorial for further details, but given an Excel source @xl_demo and a CSV source @csv_demo, you can do:

$ sq '@csv_demo.data, @xl_demo.address | join(.D == .address_id) | .C, .city'
C                      city
[email protected]  Washington
[email protected]        Ulan Bator
[email protected]        Washington
[email protected]    Ulan Bator
[email protected]        Washington

Table Commands

sq provides several handy commands for working with tables. Note that these commands work directly against SQL database sources, using their native SQL commands.

$ sq tbl copy .actor .actor_copy
Copied table: @sakila_sl3.actor --> @sakila_sl3.actor_copy (200 rows copied)

$ sq tbl truncate .actor_copy
Truncated 200 rows from @sakila_sl3.actor_copy

$ sq tbl drop .actor_copy
Dropped table @sakila_sl3.actor_copy

UNIX Pipes

For file-based sources (such as CSV or XLSX), you can sq add the source file, but you can also pipe it:

$ cat ./example.xlsx | sq .Sheet1

Similarly, you can inspect:

$ cat ./example.xlsx | sq inspect
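
The same piping works for other document formats. For CSV input, the rows typically land in a single table (named data), so a sketch like this should work:

$ cat ./example.csv | sq '.data | .[0:5]'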

Data Source Drivers

sq knows how to deal with a data source type via a driver implementation. To view the installed/supported drivers:

$ sq drivers
DRIVER     DESCRIPTION                            USER-DEFINED  DOC
sqlite3    SQLite                                 false         https://github.com/mattn/go-sqlite3
postgres   PostgreSQL                             false         https://github.com/jackc/pgx
sqlserver  Microsoft SQL Server                   false         https://github.com/denisenkom/go-mssqldb
mysql      MySQL                                  false         https://github.com/go-sql-driver/mysql
csv        Comma-Separated Values                 false         https://en.wikipedia.org/wiki/Comma-separated_values
tsv        Tab-Separated Values                   false         https://en.wikipedia.org/wiki/Tab-separated_values
json       JSON                                   false         https://en.wikipedia.org/wiki/JSON
jsona      JSON Array: LF-delimited JSON arrays   false         https://en.wikipedia.org/wiki/JSON
jsonl      JSON Lines: LF-delimited JSON objects  false         https://en.wikipedia.org/wiki/JSON_streaming#Line-delimited_JSON
xlsx       Microsoft Excel XLSX                   false         https://en.wikipedia.org/wiki/Microsoft_Excel
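
For the SQL drivers, the source location is a connection URL rather than a file path. As a sketch (the credentials, host, and database name below are placeholders), a Postgres source might be added like so:

$ sq add 'postgres://user:password@localhost:5432/sakila' -h @sakila_pg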

Output Formats

sq has many output formats:

  • --table: Text/Table
  • --json: JSON
  • --jsona: JSON Array
  • --jsonl: JSON Lines
  • --csv / --tsv : CSV / TSV
  • --xlsx: XLSX (Microsoft Excel)
  • --html: HTML
  • --xml: XML
  • --markdown: Markdown
  • --raw: Raw (bytes)
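
For example, the same query can be rendered differently just by switching the flag; combined with --output (shown earlier in the CSV export), results can be written to a file instead of stdout:

$ sq '.actor | .[0:2]' --json
$ sq '.actor | .[0:2]' --markdown
$ sq '.actor | .[0:2]' --xlsx --output actor.xlsx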

Acknowledgements

  • Much inspiration is owed to jq.
  • See go.mod for a list of third-party packages.
  • Additionally, sq incorporates modified versions of several third-party packages.
  • The Sakila example databases were lifted from jOOQ, which in turn owe their heritage to earlier work on Sakila.

Comments
  • MySQL driver options are stripped during "add"

    Trying to add a MySQL host with old passwords:

    sq add --handle=@segv 'mysql://user:password@host/database?allowOldPasswords=1'
    
    sq: failed to ping @segv [mysql://segv:****@segvdb/segv]: this user requires old password authentication. If you still want to use it, please add 'allowOldPasswords=1' to your DSN. See also https://github.com/go-sql-driver/mysql/wiki/old_passwords
    

    Stepping through the code shows this line: https://github.com/neilotoole/sq/blob/b7cb0a0b66a981e5c4d16b7f533eeb241d4c1f04/drivers/mysql/mysql.go#L410

    resulting in a DSN of:

    "user:password@tcp(host)/database"
    

    It is possible that I'm misunderstanding how to add driver options.

  • Installing through brew gets an error

    brew install neilotoole/sq/sq

    Cloning into '/opt/homebrew/Library/Taps/neilotoole/homebrew-sq'...
    remote: Enumerating objects: 164, done.
    remote: Counting objects: 100% (164/164), done.
    remote: Compressing objects: 100% (119/119), done.
    remote: Total 164 (delta 74), reused 18 (delta 4), pack-reused 0
    Receiving objects: 100% (164/164), 17.71 KiB | 4.43 MiB/s, done.
    Resolving deltas: 100% (74/74), done.
    Error: Invalid formula: /opt/homebrew/Library/Taps/neilotoole/homebrew-sq/Formula/sq.rb
    sq: wrong number of arguments (given 1, expected 0)
    Error: Cannot tap neilotoole/sq: invalid syntax in tap!
    
  • Missing quotes when joining tables

    When joining two tables by columns with different names, generated SQL lacks double quotes around table and column specifiers:

    E.g.

    '.user, .org | j(.org.id == .user.org_id)'
    
    >> SELECT * FROM "user" INNER JOIN "org"  ON org.id = user.org_id
    

    Joining by the common column name works fine:

    '.user, .org | j(.id)'
    >>> SELECT * FROM "user" INNER JOIN "org"  ON "user"."id" = "org"."id"
    

    Double quotes are necessary if you have to operate on such an unfortunate schema that uses reserved keywords as table names (like user in this case).

  • Bug #87: generated SQL should always quote table and column names in join statement

    See #87.

    • BaseFragmentBuilder now always quotes table and col names in the JOIN statement.
    • Refactoring of libsq.engine so that the SQL generated from SQL input can be tested.
  • Shell completion not working

    Shell completion simply doesn't work. We're also on an ancient version of spf13/cobra, and the completion functionality has moved on massively since then.

    Therefore:

    • Upgrade the cobra version
    • Implement any additional completion logic
  • Windows tests failing due to sqlite/file closing order

    The test suite fails on Windows due to the way we close SQLite. This is related to our Files implementation and the close order.

    See: https://github.com/neilotoole/sq/runs/1627556486?check_suite_focus=true

  • Add a remote/server/cluster feature

    You should be able to have the sq query execute remotely on a server/cluster. For example:

    > sq remote create aws myaws1 ...
    Created cluster and added remote "myaws1"
    > sq --remote=myaws1 '@mysql1.tbluser, @pg1.tbladdress | join(.uid)'
    # Add a pre-existing cluster, e.g. one set up by an admin
    > sq remote add myazure1 http://my_sq_lb.azure.com --username=ETC
    > sq --remote=myazure1 '@mysql1.tbluser, @pg1.tbladdress | join(.uid)'
    

    There are two pieces here: the first is creating (or adding) the remote cluster; the second is executing the query remotely.

    The service consists of a load balancer that directs requests to a bunch of sq server instances (just web servers wrapping libsq deployed in Docker containers). The sq --remote=myaws1 QUERY command posts the query and the local config to the server. If the query refers to a local data source, the server will send a response with upload URLs for the local files (e.g. http://myaws1.aws.amazon.com/upload/GUID). sq will upload those files, and the server will run the query, occasionally sending status updates to the client, while the client waits for the job to complete.

  • Add a "history" feature

    This would work similarly to bash's history:

    > sq history
    704 SUCCESS   sq '@pg1.user'
    705 FAIL      sq '@my1.tbluser'
    > sq history err 705
    Error: invalid username/password for data source @my1: ....
    > sq history run 704
    .... [executes the query again]
    > sq history run
    ... [executes most recent query, equivalent to "sq history run 0"]
    > sq history run -3
    ... [executes the 3rd most recent query]
    

    The history command can integrate with the archive command [#46], e.g.

    > sq history archive 704 --name=mygoodquery
    Archived query 704 "mygoodquery": sq '@pg1.user'
    > sq archive run mygoodquery
    ... [runs the named query]
    
  • Add a "saved queries" (archive) feature

    You should be able to archive queries for future use. For example:

    > sq --archive=query1 '@my1.tbluser, @pg1.tbladdress | join(.uid) | .[0:100] | .uid, .email, .city, .zip'
    > sq archive ls
    NAME     SLQ
    query1    @my1.tbluser, @pg1.tbladdress | join(.uid) | .[0:100] | .uid, .email, .city, .zip
    > sq archive run query1
    

    This could be implemented by having an archive section in ~/.sq/sq.yml
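
    Purely as a sketch (none of this exists today), that archive section of ~/.sq/sq.yml might look something like:

    $ cat ~/.sq/sq.yml
    # ...existing config...
    archive:
      query1: '@my1.tbluser, @pg1.tbladdress | join(.uid) | .[0:100] | .uid, .email, .city, .zip'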

  • Generate client code/lib in multiple languages

    Given a sq statement like this:

    > sq '@my1.tbluser, @pg1.tbladdress | join(.uid) | .[0:100] | .uid, .email, .city, .zip'
    

    It should be possible to generate client source code that executes that query, in multiple languages. For example:

    > sq generate java myquery '@my1.tbluser, @pg1.tbladdress | join(.uid) | .[0:100] | .uid, .email, .city, .zip'
    Generated Java code to "./myquery"
    > cd myquery
    > ls myquery
    build.sh myquery.java myquery.gradle
    > ./build.sh
    > java -jar myquery.jar
    ... Executing query...
    

    The generated code wouldn't need to include a full SLQ parser. The generate command would run the query through the sq engine, capturing the generated SQL for each data source, and the plan steps (joins, etc), and write those into the .java files.

  • Add ability to push (publish) sq statements to remote BI systems

    Let's say you've got a sq statement like so:

    sq '@my1.tbluser, @pg1.tbladdress | join(.uid) | .[0:100] | .uid, .email, .city, .zip'
    

    And you have an account on a business intelligence (BI) service like AWS QuickSight:

    > sq bi add quicksight @qs1 USERNAME PASS
    

    Then it should be possible to "push" or publish this query to other BI/cross-join services, e.g.

    > sq publish @qs1 --name=qs1_users '@my1.tbluser, @pg1.tbladdress | join(.uid) | .[0:100] | .uid, .email, .city, .zip'
    Added BI view @qs1_users
    

    Under the hood, sq will invoke QuickSight's native APIs to create the equivalent query in their cloud. You can then use the QuickSight tools to interact with that view, or you can use sq to query it:

    > sq '@qs1_users | .uid, .email, .city, .zip'
    
  • feat: allow connection to custom schema in postgres

    By default, sq successfully displays all tables under the default "public" schema/namespace, e.g. public.my_table. It would be useful to allow (or document) some way to connect to a table with a custom schema in Postgres, e.g. customer1.my_table.
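
    Depending on how sq hands the connection string to pgx (its Postgres driver, per the drivers table above), one possible workaround is setting the schema via a search_path runtime parameter in the connection URL, e.g. something like:

    sq add 'postgres://user:password@host/database?search_path=customer1'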

  • name conflicts with /usr/bin/sq -- some alternatives

    Has anyone else run into the naming conflict with the /usr/bin/sq that's shipped with ispell? (For lurkers -- 'ispell' has been around since the early 70's and tends to be installed today as a backend for many text editors, including both vim and emacs, and is available for all major operating systems, including UNIX, Linux, MacOS, and even DOS/Windows.)

    In my own case, I wound up deploying this package as 'seeq' to avoid the conflict. A slightly shorter name that I don't like as much is 'siq'. Neither of those comes up in the output of apt-file search bin/siq or apt-file search bin/seeq, or when googling for man siq or man seeq.
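
    If you just want both tools available, a shell alias or a rename at install time is enough; for example (the install path here is only an assumption):

    alias seeq='sq'
    # or
    sudo mv /usr/local/bin/sq /usr/local/bin/seeq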

  • Auto Increment Duplicate Column name

    When I use an XLSX file as input and the file has duplicate headers (line 1), sq fails. Could you add an option to automatically append an index to the duplicated column names? I think it would complement your suggestion in https://github.com/neilotoole/sq/issues/15.

  • Use space in column name (xlsx)

    Thanks for such a nice tool. How can I select a table whose name contains a space? Inspecting the whole file works fine, but I couldn't find a way to select a table if there's a space in its name.

  • Adding a source with special characters in password

    Works: sq add 'sqlserver://test:test@localhost?database=x'

    Doesn't work: sq add 'sqlserver://test:test#@localhost?database=x'

    Error: sq: parse "sqlserver://test:test": invalid port ":test" after host

    What's the recommended way to add a source in case the password contains special characters like #? Can a source be manually added, for example?
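
    The usual fix for URL-style connection strings is to percent-encode reserved characters in the password (# becomes %23). Assuming sq runs the location through a standard URL parser, something like this should get past the parse error:

    sq add 'sqlserver://test:test%23@localhost?database=x'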

Similar / Related / Noteworthy Projects

  • xyr [WIP]: a very lightweight, simple and powerful data ETL platform that helps you to query available data sources using SQL.
  • Graphik: a Backend as a Service implemented as an identity-aware document & graph database with support for gRPC and GraphQL.
  • Dud: a lightweight tool for versioning data alongside source code and building data pipelines.
  • Baker: a high-performance, composable and extendable data-processing pipeline for the big data era. It shines at converting, processing, extracting or storing records (structured data).
  • CUE: an open source data constraint language which aims to simplify tasks involving defining and using data.
  • Heka (deprecated): data collection and processing made easy.
  • Kapacitor: an open source framework for processing, monitoring, and alerting on time series data.
  • Ratchet: a library for performing data pipeline / ETL tasks in Go.
  • Veneur: a distributed, fault-tolerant pipeline for observability data.
  • Kanzi: a modern, modular, expandable and efficient lossless data compressor implemented in Go.
  • ClickHouse Data Synchromesh: data syncing in Go for ClickHouse, based on go-zero.
  • Machine: a library for creating data workflows, which can be either very concise or quite complex, even allowing cycles for flows that need retry or self-healing mechanisms.
  • churro: a cloud-native Extract-Transform-Load (ETL) application designed to build, scale, and manage data pipeline applications.
  • bqwriter: a Go package to stream data into Google BigQuery concurrently using InsertAll() or BQ Storage.
  • Dev Lake: a one-stop solution that integrates, analyzes, and visualizes software development data throughout the software development life cycle (SDLC).
  • csv-sql: a command-line tool to load CSV and Excel (XLSX) files and run SQL commands, including joins, with SQLite-compatible SQL.
  • fakegen: a single-binary CLI for generating a random schema of M columns to populate N rows of JSON, CSV, Excel, etc.
  • OctoSQL: a query tool that allows you to join, analyse and transform data from multiple databases, streaming sources and file formats using SQL.
  • Prometheus Common Data Exporter: parses JSON, XML, YAML or other data from various sources (HTTP, local file, TCP and UDP response messages) into Prometheus metric data.
  • xo: a command-line tool to generate idiomatic Go code for SQL databases, supporting PostgreSQL, MySQL, SQLite, Oracle, and Microsoft SQL Server.