MatrixOne is a planet-scale, cloud-edge native big data engine crafted for heterogeneous workloads.


What is MatrixOne?

MatrixOne is a planet-scale, cloud-edge native big data engine crafted for heterogeneous workloads. It provides an end-to-end data processing platform that is highly autonomous and easy to use, empowering users to store, manipulate, and analyze data across devices, edges, and clouds with minimal operational overhead.

Features

Planet Scalability

A MatrixOne cluster can easily expand its SQL processing, computation, and storage capacity by adding nodes on the fly.

Cloud-Edge Native

Whether running on public clouds, hybrid clouds, on-premises servers, or smart devices, MatrixOne adapts to a wide range of infrastructure while still delivering low latency and high throughput.

Hybrid Streaming, Transactional and Analytical Processing Engine

By converging multiple engines, MatrixOne supports hybrid streaming, transactional, and analytical workloads; its pluggable architecture allows for easy integration with third-party engines.

High Availability

MatrixOne uses a Raft-based consensus algorithm to provide fault tolerance within a single zone. A more advanced state-machine replication protocol is planned to achieve geo-distributed active-active deployments.

Ease of Use

An important goal of MatrixOne is to make it easy for users to operate and manage data, making daily work almost effortless.

  • No Dependency: Download, install, and start MatrixOne straightforwardly, without depending on external tooling.
  • Simplified Administration: Re-balancing, failover, system tuning, and other administrative tasks are fully automatic.
  • MySQL-compatible Syntax: MatrixOne allows you to query data using traditional SQL queries.

End-to-End Automated Data Science

Through streaming SQL and user-defined functions, MatrixOne provides end-to-end data processing pipelines to deliver productive data science applications.

Architecture


Query Parser Layer

  • Parser: Parses SQL, streaming queries, or Python code into an abstract syntax tree for further processing.
  • Planner: Finds the best execution plan through rule-based and cost-based optimization algorithms, and transforms the abstract syntax tree into a plan tree.
  • IR Generator: Converts Python code into an intermediate representation.

Computation Layer

  • JIT Compilation: Turns the SQL plan tree or IR code into a native program using LLVM at runtime.
  • Vectorized Execution: MatrixOne leverages SIMD instructions to construct vectorized execution pipelines (see the sketch after this list).
  • Cache: Multi-version cache of data, indexes, and metadata for queries.
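
The batch-at-a-time idea behind vectorized execution can be shown with a small, self-contained Go sketch. This is a hand-written illustration, not MatrixOne's executor: the batch size, function name, and data are hypothetical, chosen only to show why tight per-batch loops over column values are friendly to SIMD code generation.

package main

import "fmt"

// batchSize is a typical vector/batch size; the value is illustrative only.
const batchSize = 1024

// sumGreaterThan scans a column in fixed-size batches and sums values above
// a threshold. Each inner loop works on one contiguous batch, with no
// per-row interpretation overhead.
func sumGreaterThan(col []int64, threshold int64) int64 {
	var total int64
	for start := 0; start < len(col); start += batchSize {
		end := start + batchSize
		if end > len(col) {
			end = len(col)
		}
		for _, v := range col[start:end] {
			if v > threshold {
				total += v
			}
		}
	}
	return total
}

func main() {
	col := make([]int64, 10000)
	for i := range col {
		col[i] = int64(i)
	}
	fmt.Println(sumGreaterThan(col, 5000)) // prints 37492500
}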

Cluster Management Layer

MatrixCube is a fundamental library for building distributed systems, offering guarantees of reliability, consistency, and scalability. It is designed to facilitate building distributed, stateful applications, so that developers only need to focus on the business logic of a single node. MatrixCube is currently built upon multi-Raft to provide a replicated state machine and will migrate to the Paxos family of protocols to better support scenarios spanning multiple data centers.

  • Prophet: Used by MatrixCube to manage and schedule the MatrixOne cluster.
  • Transaction Manager: MatrixOne supports distributed transactions with snapshot isolation.
  • Replicated State Machine: MatrixOne uses Raft-based consensus algorithms and hybrid logical clocks to implement strong consistency across the cluster (a generic clock sketch follows below). More advanced state-machine replication protocols are planned.
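
A hybrid logical clock combines a physical timestamp with a logical counter, so causally related events always receive increasing timestamps even when physical clocks drift. Below is a minimal, generic HLC sketch in Go written only to illustrate the update rules; it is not MatrixOne's implementation, and all type and function names are hypothetical.

package main

import (
	"fmt"
	"sync"
	"time"
)

// Timestamp pairs a physical ("wall") component with a logical counter.
type Timestamp struct {
	Wall    int64  // physical component, nanoseconds
	Logical uint32 // logical counter ordering events with equal wall time
}

// HLC is a minimal hybrid logical clock.
type HLC struct {
	mu   sync.Mutex
	last Timestamp
}

func physicalNow() int64 { return time.Now().UnixNano() }

// Now issues a timestamp for a local or send event.
func (c *HLC) Now() Timestamp {
	c.mu.Lock()
	defer c.mu.Unlock()
	if pt := physicalNow(); pt > c.last.Wall {
		c.last = Timestamp{Wall: pt}
	} else {
		c.last.Logical++
	}
	return c.last
}

// Update merges a timestamp received from another node, so the result is
// strictly greater than both the local and the remote timestamps.
func (c *HLC) Update(remote Timestamp) Timestamp {
	c.mu.Lock()
	defer c.mu.Unlock()
	pt := physicalNow()
	switch {
	case pt > c.last.Wall && pt > remote.Wall:
		c.last = Timestamp{Wall: pt}
	case remote.Wall > c.last.Wall:
		c.last = Timestamp{Wall: remote.Wall, Logical: remote.Logical + 1}
	case c.last.Wall > remote.Wall:
		c.last.Logical++
	default: // equal wall times: take the larger counter and bump it
		if remote.Logical > c.last.Logical {
			c.last.Logical = remote.Logical
		}
		c.last.Logical++
	}
	return c.last
}

func main() {
	clock := &HLC{}
	a := clock.Now()
	b := clock.Update(Timestamp{Wall: a.Wall + 10, Logical: 3})
	fmt.Println(a, b) // b is greater than both a and the remote timestamp
}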

Replicated Storage Layer

  • Row Storage: Stores serving workloads, metadata, and the catalog.
  • Column Storage: Stores analytical workloads and materialized views.

Storage Provision Layer

MatrixOne stores data in shared storage such as S3 or HDFS, or on local disks, on-premises servers, hybrid and public clouds, or even smart devices.

Quick Start

Get started with MatrixOne quickly with the following steps.

Installation

You can install MatrixOne either by building from source or by using Docker.

Building from source

  1. Install Go (version 1.17 is required).

  2. Get the MatrixOne code:

$ git clone https://github.com/matrixorigin/matrixone.git
$ cd matrixone
  3. Run make:

    You can run make debug, make clean, or anything else our Makefile offers.

$ make config
$ make build
  4. Boot MatrixOne server:
$ ./mo-server system_vars_config.toml

Using docker

  1. Install Docker, then verify that Docker daemon is running in the background:
$ docker --version
  2. Create and run the container for the latest release of MatrixOne. It will pull the image from Docker Hub if it does not exist locally.
$ docker run -d -p 6001:6001 --name matrixone matrixorigin/matrixone:latest

Connecting to the MatrixOne server

  1. Install MySQL client.

    MatrixOne supports the MySQL wire protocol, so you can use MySQL client drivers to connect from various languages. Currently, MatrixOne is only compatible with the Oracle MySQL client, which means that some features might not work with the MariaDB client.

  2. Connect to the MatrixOne server:

$ mysql -h IP -P PORT -uUsername -p

The connection string follows the same format that MySQL accepts. You need to provide a user name and a password.

Use the built-in test account, for example:

  • user: dump
  • password: 111
$ mysql -h 127.0.0.1 -P 6001 -udump -p
Enter password:

Currently, MatrixOne only supports the TCP listener.
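
For programmatic access, any driver that speaks the MySQL wire protocol should work. Below is a minimal Go sketch using database/sql with the go-sql-driver/mysql driver, the built-in dump/111 test account, and the default port 6001 from the steps above; the demo database, the t1 table, and the assumption that IF NOT EXISTS and multi-row INSERT are accepted are illustrative only.

package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/go-sql-driver/mysql" // MatrixOne speaks the MySQL wire protocol
)

func main() {
	// Built-in test account and default port from the steps above;
	// adjust host, port, and credentials for your deployment.
	db, err := sql.Open("mysql", "dump:111@tcp(127.0.0.1:6001)/")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Plain MySQL-compatible SQL; database and table names are examples only.
	statements := []string{
		"CREATE DATABASE IF NOT EXISTS demo",
		"CREATE TABLE IF NOT EXISTS demo.t1 (id INT PRIMARY KEY, name VARCHAR(64))",
		"INSERT INTO demo.t1 VALUES (1, 'hello'), (2, 'matrixone')",
	}
	for _, s := range statements {
		if _, err := db.Exec(s); err != nil {
			log.Fatal(err)
		}
	}

	rows, err := db.Query("SELECT id, name FROM demo.t1 ORDER BY id")
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()
	for rows.Next() {
		var id int
		var name string
		if err := rows.Scan(&id, &name); err != nil {
			log.Fatal(err)
		}
		fmt.Println(id, name)
	}
}

With Go modules enabled, go mod tidy fetches the driver before go run.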

Contributing

See Contributing Guide for details on contribution workflows.

Roadmap

Check out the Roadmap for the MatrixOne development plan.

Community

You can join the MatrixOne community on Slack to discuss and ask questions.

License

MatrixOne is licensed under the Apache License, Version 2.0.

Comments
  • [Bug]: lots of error for bvt test with tae-cn-tae-dn running on k8s


    Is there an existing issue for the same bug?

    • [X] I have checked the existing issues.

    Environment

    - Version or commit-id (e.g. v0.1.0 or 8b23a93): nightly-c9f7c175 (docker tag)
    - Hardware parameters:
    - OS type:
    - Others:
    

    Actual Behavior

    The test pass rate is only 17% when running the bvt test on k8s. The key errors are:

    1. Cannot get a query result, for example: [ERROR] [SCRIPT FILE]: /root/matrixone/test/distributed/cases/auto_increment/auto_increment_columns.sql [ROW NUMBER]: 357 [SQL STATEMENT]: select * from t12 order by a; [EXPECT RESULT]: a b c 0 0 '0' 1 1 '??????' 2 3 'hello' 4 4 'world' 5 5 ' 6 100 'aa' 10 101 'bb' 11 102 'cc' 1000 103 '1000' 1001 104 1002 105 [ACTUAL RESULT]: a b c

    2. SQL parser error: table "mo_role_privs" does not exist

    Expected Behavior

    No response

    Steps to Reproduce

    No response

    Additional information

    mo-tester.tar.gz mo-log.tar.gz

  • [Bug]: loop execute load data, Cause mo service oom.


    Is there an existing issue for the same bug?

    • [X] I have checked the existing issues.

    Environment

    - Version or commit-id (e.g. v0.1.0 or 8b23a93):
    - Hardware parameters:
    - OS type:
    - Others:
    

    Actual Behavior

    Reproducible Steps:

    platform: 128 server

    1. mo path: /data1/tianyahui/matrixone (The failure log still exists)
    2. load data script path: /data/mo-load-data
    3. execute command: nohub bash loop.sh &
    4. loop default setting: 127G size csv. (table_40000000_100_columns.yml)
    5. mo-load-data log: mo-load-data.log
    6. The seventh load data execution failed. (mo server oom, mo.log path: /data1/tianyahui/matrixone/mo.log)

    Expected Behavior

    No response

    Steps to Reproduce

    No response

    Additional information

    No response

  • [Bug]: LOAD TPCH 10G DATA and BATCH INSERT ERROR by context deadline exceeded


    Is there an existing issue for the same bug?

    • [X] I have checked the existing issues.

    Environment

    - Version or commit-id (e.g. v0.1.0 or 8b23a93):92439b1d3f73552ca912de6180099821c39955fd
    - Hardware parameters:
    - OS type:
    - Others:
    

    Actual Behavior

    chbenchmark batch insert data: 2022-11-06 00:49:29 ERROR Loader:335 - context deadline exceeded 2022-11-06 00:49:29 ERROR Loader:336 - s_w_id = 1, s_i_id = 91000

    Loading data/10/orders.tbl in to table orders,please wait..... load data infile '/data1/sudong/mo-regression/tools/mo-tpch/data/10/orders.tbl' into table tpch_10g.orders FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n'; mysql: [Warning] Using a password on the command line interface can be insecure.

    Because the logs are very large, I cannot find the right error message; if you want to reproduce, contact me.

    Expected Behavior

    No response

    Steps to Reproduce

    No response

    Additional information

    No response

  • [Bug]: show database need 27 seconds on eks(s3)


    Is there an existing issue for the same bug?

    • [X] I have checked the existing issues.

    Environment

    - Version or commit-id (e.g. v0.1.0 or 8b23a93): e23e00f1d01d43e33c509a6d239ccc38b5f93409
    - Hardware parameters:
    - OS type:
    - Others:
    

    Actual Behavior

    mysql> show databases;

    +--------------------+
    | Database           |
    +--------------------+
    | mo_task            |
    | information_schema |
    | mysql              |
    | system_metrics     |
    | system             |
    | sbtest             |
    | mo_catalog         |
    +--------------------+
    7 rows in set (1 min 27.44 sec)

    Expected Behavior

    No response

    Steps to Reproduce

    No response

    Additional information

    No response

  • [Bug]: select count(*) hung for serval minutes and return table not exist when restart dn and cn on EKS


    Is there an existing issue for the same bug?

    • [X] I have checked the existing issues.

    Environment

    - Version or commit-id (e.g. v0.1.0 or 8b23a93):adefec5d7907027d66a46199c2d33185f121992d
    - Hardware parameters:
    - OS type:
    - Others:
    

    Actual Behavior

    mysql> select count(*) from sbtest1;
    +----------+
    | count(*) |
    +----------+
    |     3691 |
    +----------+
    1 row in set (0.21 sec)

    mysql> select count(*) from sbtest1;
    +----------+
    | count(*) |
    +----------+
    |     3696 |
    +----------+
    1 row in set (0.17 sec)

    mysql> select count(*) from sbtest1;
    ERROR 1064 (HY000): SQL parser error: table "sbtest1" does not exist
    mysql> select count(*) from sbtest1;
    ERROR 1064 (HY000): SQL parser error: table "sbtest1" does not exist
    mysql> select count(*) from sbtest1;
    ERROR 1064 (HY000): SQL parser error: table "sbtest1" does not exist

    Expected Behavior

    No response

    Steps to Reproduce

    1. mo run on EKS, 1cn-1dn-3log
    2. run sysbench insert test with 10 terminals
    3. delete dn pod and wait new pod started
    4. run select count(*) from sbtest1;
    5. delete dn pod and wait new pod started(2 times)
    6.run select count(*) from sbtest1;
    
    how to run sysbench insert test:
    1.git clone https://github.com/aressu1985/mo-load.git
    2.modify mo addr in mo.yml
    3.create database and table
      ./start.sh -m SYSBENCH -n 1 -s 1
    4.run test
      ./start.sh -c cases/sysbench/simple_insert_10_10000
    

    Additional information

    No response

  • [Bug]: after  update reported deadline error, new mysql client connection login failed


    Is there an existing issue for the same bug?

    • [X] I have checked the existing issues.

    Environment

    - Version or commit-id (e.g. v0.6.0 or 8b23a93):2f62c7d7c082f91416190e882ae9c7e30cbb8855
    - Hardware parameters:
    - OS type:
    - Others:
    

    Actual Behavior

    New mysql client connections then failed; the related logs and screenshots have already been uploaded to this issue.

    Expected Behavior

    No response

    Steps to Reproduce

    No response

    Additional information

    No response

  • [Bug]: cannot query task data by primary key -- real problem, restart database may fail.


    Is there an existing issue for the same bug?

    • [X] I have checked the existing issues.

    Environment

    - Version or commit-id (e.g. v0.1.0 or 8b23a93): reuse/matrixone:cf8c698ae23e3e084f9487533a6a153948133bbc
    - Hardware parameters:
    - OS type:
    - Others:
    

    Actual Behavior

    logservice panics because no task data is returned. We need to know why the task data cannot be queried by primary key.

    (screenshot attached)

    Expected Behavior

    No response

    Steps to Reproduce

    Keep restarting dn and cn on the eks deployment; the cluster cannot restart and reports many different errors.
    

    Additional information

    No response

  • [Bug]: distributed transaction: insert into select duplicate data not reported error


    Is there an existing issue for the same bug?

    • [X] I have checked the existing issues.

    Environment

    - Version or commit-id (e.g. v0.1.0 or 8b23a93):
    - Hardware parameters:
    - OS type:
    - Others:
    

    Actual Behavior

    cn-dn results are incorrect (see attached screenshot).

    tae transaction results are correct (see attached screenshot).

    Expected Behavior

    No response

    Steps to Reproduce

    create table dis_table_02(a int not null auto_increment,b varchar(25),c datetime,primary key(a),key bstr (b),key cdate (c) );
    insert into dis_table_02(b,c) values ('aaaa','2020-09-08');
    insert into dis_table_02(b,c) values ('aaaa','2020-09-08');
    create table dis_table_03(b varchar(25) primary key,c datetime);
    begin ;
    insert into dis_table_03 select b,c from dis_table_02;
    select * from dis_table_03;
    commit;
    

    Additional information

    No response

  • [Bug]: Memtable need gc


    Is there an existing issue for the same bug?

    • [X] I have checked the existing issues.

    Environment

    - Version or commit-id (e.g. v0.1.0 or 8b23a93):
    - Hardware parameters:
    - OS type:
    - Others:
    

    Actual Behavior

    need gc

    Expected Behavior

    No response

    Steps to Reproduce

    No response

    Additional information

    No response

  • [Bug]: mysql-client hangs during sql execution, need manually ctrl-c


    Is there an existing issue for the same bug?

    • [X] I have checked the existing issues.

    Environment

    - Version or commit-id (e.g. v0.1.0 or 8b23a93): 92439b1d3f73552ca912de6180099821c39955fd
    - Hardware parameters:
    - OS type:
    - Others:
    

    Actual Behavior

    While running a SQL file containing several DDL statements, the following error occasionally occurred on the client side. All logs can be found here: https://github.com/matrixorigin/matrixone/files/9946003/multi-cn.log.tar.gz (screenshot attached)

    Expected Behavior

    No response

    Steps to Reproduce

    1. start a cluster with two CNs (if you want to start a mo cluster in eks, you can follow this guideline document: https://github.com/matrixorigin/docs/blob/main/dev/guide/MO%20Operator%20%E7%9A%84%E4%BD%BF%E7%94%A8%E4%B8%8E%E6%B5%8B%E8%AF%95%EF%BC%88%E5%86%85%E9%83%A8%E7%89%88%E6%9C%AC%EF%BC%89.md)
    2. execute drop table if exists / create table xxx repeatedly

    Additional information

    Configuration of each node is here: Archive.zip

  • [Bug]: lost connection during execution


    Is there an existing issue for the same bug?

    • [X] I have checked the existing issues.

    Environment

    - Version or commit-id (e.g. v0.1.0 or 8b23a93): 92439b1d3f73552ca912de6180099821c39955fd
    - Hardware parameters:
    - OS type:
    - Others:
    

    Actual Behavior

    While running a SQL file containing several DDL statements, the following error occurred on the client side and strange errors appeared on the server side. All logs can be found here: multi-cn.log.tar.gz

    (screenshot attached)

    Expected Behavior

    No response

    Steps to Reproduce

    1. start a cluster with two CNs (if you want to start a mo cluster in eks, you can follow this guideline document: https://github.com/matrixorigin/docs/blob/main/dev/guide/MO%20Operator%20%E7%9A%84%E4%BD%BF%E7%94%A8%E4%B8%8E%E6%B5%8B%E8%AF%95%EF%BC%88%E5%86%85%E9%83%A8%E7%89%88%E6%9C%AC%EF%BC%89.md)
    2. execute drop table if exists / create table xxx repeatedly

    Additional information

    Configuration of each node is here: Archive.zip

  • [Bug]: The result of TPCH Q16(1G) are different for different execution, especially for 1st query and the following query


    Is there an existing issue for the same bug?

    • [X] I have checked the existing issues.

    Environment

    - Version or commit-id (e.g. v0.1.0 or 8b23a93):62722889363357c124fc8c7b6e54639956171928
    - Hardware parameters:
    - OS type:
    - Others:
    

    Actual Behavior

    The results of TPCH Q16 (1G) differ across executions, especially between the first query and the following queries.

    first time result: q16.1st.txt

    second time result: q16.2nd.txt

    third time result: q16.3rd.txt

    Expected Behavior

    No response

    Steps to Reproduce

    1. git clone https://github.com/matrixorigin/mo-tpch.git
    2.
    export LC_ALL="C.UTF-8"
    ./run.sh -c -s 1
    ./run.sh -l -s 1
    ./run.sh -q q16 -s 1
    ./run.sh -q q16 -s 1
    ./run.sh -q q16 -s 1
    ./run.sh -q q16 -s 1
    

    Additional information

    No response

  • [Bug]:  Query occurs panic after adding unique constraints to the table


    Is there an existing issue for the same bug?

    • [X] I have checked the existing issues.

    Environment

    - Version or commit-id (e.g. v0.1.0 or 8b23a93):
    - Hardware parameters:
    - OS type:
    - Others:
    
    ./mo-service  -launch ./etc/launch-tae-CN-tae-DN/launch.toml
    

    Actual Behavior

    create table t1(id int,name VARCHAR(255),age int);
    insert into t1 values(1,"Abby", 24);
    insert into t1 values(2,"Bob", 25);
    insert into t1 values(3,"Carol", 23);
    insert into t1 values(4,"Dora", 29);

    mysql> select * from t1;
    +------+-------+------+
    | id   | name  | age  |
    +------+-------+------+
    |    1 | Abby  |   24 |
    |    2 | Bob   |   25 |
    |    3 | Carol |   23 |
    |    4 | Dora  |   29 |
    +------+-------+------+
    4 rows in set (0.01 sec)

    mysql> create unique index idx on t1(name);
    Query OK, 0 rows affected (0.02 sec)

    mysql> select * from t1;
    ERROR 20101 (HY000): internal error: panic runtime error: invalid memory address or nil pointer dereference:
    runtime.panicmem
        /usr/local/go/src/runtime/panic.go:260
    runtime.sigpanic
        /usr/local/go/src/runtime/signal_unix.go:835
    github.com/matrixorigin/matrixone/pkg/vm/engine/disttae.(*database).Relation
        /home/yiming/workspace/matrixone/pkg/vm/engine/disttae/database.go:74
    github.com/matrixorigin/matrixone/pkg/frontend.(*TxnCompilerContext).getRelation
        /home/yiming/workspace/matrixone/pkg/frontend/session.go:1655
    github.com

    Expected Behavior

    mysql> select * from t1;
    +------+-------+------+
    | id   | name  | age  |
    +------+-------+------+
    |    1 | Abby  |   24 |
    |    2 | Bob   |   25 |
    |    3 | Carol |   23 |
    |    4 | Dora  |   29 |
    +------+-------+------+
    4 rows in set (0.01 sec)

    Steps to Reproduce

    No response

    Additional information

    No response

  • fix bit aggregation function bug


    What type of PR is this?

    • [ ] API-change
    • [x] BUG
    • [ ] Improvement
    • [ ] Documentation
    • [ ] Feature
    • [ ] Test and CI
    • [ ] Code Refactoring

    Which issue(s) this PR fixes:

    issue #3734

    What this PR does / why we need it:

    Fix a bit aggregation function bug where the bit_wise function result differs between ARM and X86.

  • fix create unique index panic


    What type of PR is this?

    • [ ] API-change
    • [x] BUG
    • [ ] Improvement
    • [ ] Documentation
    • [ ] Feature
    • [ ] Test and CI
    • [ ] Code Refactoring

    Which issue(s) this PR fixes:

    issue #7387

    What this PR does / why we need it:

    1. Fix the error when creating an index on a table without data.
    2. Fix the error reported when creating an index on a table that already contains a primary key.

  • Make some changes in format function.


    What type of PR is this?

    • [ ] API-change
    • [x] BUG
    • [ ] Improvement
    • [ ] Documentation
    • [ ] Feature
    • [ ] Test and CI
    • [ ] Code Refactoring

    Which issue(s) this PR fixes:

    issue #7356

    What this PR does / why we need it:

    Add a check for scientific notation in the format function. Add tests for the format function.

  • [Bug]: panic runtime error: slice bounds out of range [2:1]:


    Is there an existing issue for the same bug?

    • [X] I have checked the existing issues.

    Environment

    - Version or commit-id (e.g. v0.1.0 or 8b23a93): 5e8b41364349c2a4e40fc9311cf70ac369a62299
    - Hardware parameters:
    - OS type:
    - Others:
    

    Actual Behavior

    
    CREATE TABLE `sys_menu2` (
      `menu_id` bigint NOT NULL AUTO_INCREMENT COMMENT 'ID',
      `pid` bigint DEFAULT NULL COMMENT '上级菜单ID',
      `sub_count` int DEFAULT '0' COMMENT '子菜单数目',
      `type` int DEFAULT NULL COMMENT '菜单类型',
      `title` varchar(255) DEFAULT NULL COMMENT '菜单标题',
      `name` varchar(255) DEFAULT NULL COMMENT '组件名称',
      `component` varchar(255) DEFAULT NULL COMMENT '组件',
      `menu_sort` int DEFAULT NULL COMMENT '排序',
      `icon` varchar(255) DEFAULT NULL COMMENT '图标',
      `path` varchar(255) DEFAULT NULL COMMENT '链接地址',
      `i_frame` varchar(255) DEFAULT NULL COMMENT '是否外链',
      `cache` varchar(255) DEFAULT b'0' COMMENT '缓存',
      `hidden` varchar(255) DEFAULT b'0' COMMENT '隐藏',
      `permission` varchar(255) DEFAULT NULL COMMENT '权限',
      `create_by` varchar(255) DEFAULT NULL COMMENT '创建者',
      `update_by` varchar(255) DEFAULT NULL COMMENT '更新者',
      `create_time` datetime DEFAULT NULL COMMENT '创建日期',
      `update_time` datetime DEFAULT NULL COMMENT '更新时间',
      PRIMARY KEY (`menu_id`) ,
      UNIQUE KEY `uniq_title` (`title`),
      UNIQUE KEY `uniq_name` (`name`),
      KEY `inx_pid` (`pid`)
    ) COMMENT='系统菜单'
    > 20101 - internal error: panic runtime error: slice bounds out of range [2:1]: 
    runtime.goPanicSliceB
    	/usr/local/go/src/runtime/panic.go:153
    github.com/matrixorigin/matrixone/pkg/sql/parsers/dialect/mysql.(*Lexer).toBit
    	/workspace/mo20221226/matrixone/pkg/sql/parsers/dialect/mysql/mysql_lexer.go:159
    github.com/matrixorigin/matrixone/pkg/sql/parsers/dialect/mysql.(*Lexer).Lex
    	/workspace/mo20221226/matrixone/pkg/sql/parsers/dialect/mysql/mysql_lexer.go:82
    github.com/matrixorigin/matrixone/pkg/sql/parsers/dialect/m
    

    Expected Behavior

    No response

    Steps to Reproduce

    No response

    Additional information

    No response
