TiDB is an open source distributed HTAP database compatible with the MySQL protocol


What is TiDB?

TiDB ("Ti" stands for Titanium) is an open-source NewSQL database that supports Hybrid Transactional and Analytical Processing (HTAP) workloads. It is MySQL compatible and features horizontal scalability, strong consistency, and high availability.

  • Horizontal Scalability

    TiDB expands both SQL processing and storage capacity by simply adding new nodes. This makes infrastructure capacity planning both easier and more cost-effective than with traditional relational databases, which only scale vertically.

  • MySQL Compatible Syntax

    TiDB acts like it is a MySQL 5.7 server to your applications. You can continue to use all of the existing MySQL client libraries, and in many cases, you will not need to change a single line of code in your application. Because TiDB is built from scratch, not a MySQL fork, please check out the list of known compatibility differences.

  • Distributed Transactions with Strong Consistency

    TiDB internally shards tables into small range-based chunks that we refer to as "Regions". Each Region defaults to approximately 100 MiB in size, and TiDB uses a two-phase commit protocol internally to ensure that Regions are maintained in a transactionally consistent way (a quick way to inspect Regions is shown after this list).

  • Cloud Native

    TiDB is designed to work in the cloud -- public, private, or hybrid -- making deployment, provisioning, operations, and maintenance simple.

    The storage layer of TiDB, called TiKV, became a Cloud Native Computing Foundation member project in 2018. The architecture of the TiDB platform also allows SQL processing and storage to be scaled independently of each other in a very cloud-friendly manner.

  • Minimize ETL

    TiDB is designed to support both transaction processing (OLTP) and analytical processing (OLAP) workloads. This means that while you may have traditionally transacted on MySQL and then Extracted, Transformed and Loaded (ETL) data into a column store for analytical processing, this step is no longer required.

  • High Availability

    TiDB uses the Raft consensus algorithm to ensure that data is highly available and safely replicated throughout storage in Raft groups. In the event of failure, a Raft group will automatically elect a new leader for the failed member, and self-heal the TiDB cluster without any required manual intervention. Failure and self-healing operations are also transparent to applications.
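
A quick way to see the MySQL compatibility and Region sharding described above from any standard MySQL client (a minimal illustration; the table name `t` is a placeholder for one of your own tables):

    -- The server speaks the MySQL protocol but identifies itself as TiDB.
    SELECT tidb_version();

    -- TiDB extends SQL with statements for inspecting Regions, for example:
    SHOW TABLE t REGIONS;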

For more details and latest updates, see TiDB docs and release notes.

Quick start

See Quick Start Guide, which includes deployment methods using TiUP, Ansible, Docker, and Kubernetes.

To start developing TiDB

The community repository hosts all information about the TiDB community, including how to contribute to TiDB, how the TiDB community is governed, how special interest groups are organized, and so on.

contribution-map

Contributions are welcomed and greatly appreciated. See Contribution Guide for details on submitting patches and the contribution workflow. For more contributing information, click on the contributor icon above.

Adopters

View the current list of in-production TiDB adopters here.

Case studies

Roadmap

Read the Roadmap.

Getting help

Documentation

Blog

TiDB Monthly


Architecture

architecture

License

TiDB is under the Apache 2.0 license. See the LICENSE file for details.

Acknowledgments

Comments
  • privilege: fix `REVOKE` privilege check incompatibility with MySQL (#13014)


    cherry-pick #13014 to release-3.0


    What problem does this PR solve?

    Originally, executing a REVOKE statement required the user to have SuperPriv, which is incompatible with MySQL in cases like the following.

    create user u1;
    create user u2;
    grant select on *.* to u1 with grant option;
    grant select on *.* to u2;
    -- login as u1
    revoke select on *.* from u2;
    

    TiDB returns an error, while MySQL succeeds.

    What is changed and how it works?

    Change the privilege check for REVOKE to work like GRANT: if a user has the grant option on some object, such as a table or database, they can revoke privileges on that object from other users.
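
    For instance, with this change the scenario above can be verified as follows (a minimal sketch; `SHOW GRANTS` is only used here to confirm the result):

    -- connected as u1, who holds SELECT on *.* WITH GRANT OPTION
    revoke select on *.* from u2;
    -- confirm the privilege is gone
    show grants for u2;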

    Check List

    Tests

    • Unit test

    Code changes

    Side effects

    • Increased code complexity

    Related changes

    • Need to cherry-pick to the release branch

    Release note

    • fix privilege check for REVOKE
  • ddl: support concurrent ddl


    Signed-off-by: Weizhen Wang [email protected] Signed-off-by: xiongjiwei [email protected] Signed-off-by: wjhuang2016 [email protected]

    What problem does this PR solve?

    Issue Number: ref https://github.com/pingcap/tidb/issues/32031

    This is a big PR, so we split it into many commits; almost every commit has a single purpose. I will introduce them briefly; you may need to reference the doc https://github.com/pingcap/tidb/pull/33629

    • init ddl tables: create the tidb_ddl_job, tidb_ddl_reorg, and tidb_ddl_history tables with raw meta writes. These 3 tables replace the ddl job queue and the reorg and history hash tables; you can see this part in the doc. (A query sketch for inspecting these tables follows this list.)

    • setup concurrent ddl env and add ddl worker pool: this commit adds the ddl worker pool definition; the ddl job manager will find a job and ship it to a worker in the pool. This commit also provides a sessionctx wrapper used only in ddl-related code; it just wraps begin, commit, and execute.

    • add ddl manager to handle ddl job: this commit implements the ddl manager, which is used to

      • find a runnable ddl job
      • ship the job to the worker

      You can refer to the doc. This commit also adds a function HandleDDLJob, which runs the ddl job and replaces HandleDDLJobQueue. Finally, this commit adds schemaVersionManager, which updates the schema version in a new txn to prevent that txn from conflicting on the schema version key.

    • reorg handler for concurrent ddl: implements the counterpart handling for the reorg information.

    • manage ddl jobs for concurrent ddl: same as above, the counterpart for adding jobs, deleting jobs, and many other operations related to history jobs.

    • change ddl interface caller: because many of the functions now need a session, we just change the callers.

    • add metrics for concurrent ddl: adds the related metrics.

    • migrate ddl between table and queue: supports switching between the old and new ddl frameworks and migrates existing ddl jobs between the queue and the tables.
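
    A sketch of how the new job storage can be inspected from SQL. ADMIN SHOW DDL JOBS is the user-facing view of DDL jobs; the three tables are the ones introduced by this PR, and their location in the `mysql` schema is assumed here:

    -- user-facing view of running and historical DDL jobs
    ADMIN SHOW DDL JOBS;

    -- with the concurrent-DDL framework, pending jobs, reorg state, and history
    -- live in tables instead of the old job queue (schema location assumed)
    SELECT * FROM mysql.tidb_ddl_job;
    SELECT * FROM mysql.tidb_ddl_history LIMIT 10;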

    What is changed and how it works?

    Check List

    Tests

    • [x] Unit test
    • [ ] Integration test
    • [ ] Manual test (add detailed scripts or steps below)
    • [ ] No code

    Release note

    None
    
  • planner: Implement PointGet in TryFastPlan for range/list partition table


    What problem does this PR solve?

    Issue Number: related to #24476, #24150

    Problem Summary:

    • planner: Implement PointGet in TryFastPlan for range/list partition table. Will implement BatchGet in the next PR.

    What is changed and how it works?

    What's Changed:

    • Change the tryPointGetPlan logic to permit partition table types other than hash partition tables, and add logic to locate the target partition.

    How it Works:

    • After getting the pairs variable, we use it to locate the partition where the data point lies (see the example below).
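
    For example (an illustrative sketch; table and partition names are made up), an equality condition on the primary key of a range-partitioned table is now a candidate for a PointGet fast plan:

    CREATE TABLE orders (
        id BIGINT PRIMARY KEY,
        amount DECIMAL(10,2)
    )
    PARTITION BY RANGE (id) (
        PARTITION p0 VALUES LESS THAN (1000),
        PARTITION p1 VALUES LESS THAN (2000),
        PARTITION p2 VALUES LESS THAN (MAXVALUE)
    );

    -- with this change, the planner can locate partition p1 directly and may
    -- build a Point_Get plan instead of scanning all partitions
    EXPLAIN SELECT * FROM orders WHERE id = 1500;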

    Check List

    Tests

    • Unit test

    Side effects

    • N/A

    Release note

    • planner: Implement PointGet in TryFastPlan for range/list partition table
  • docs: make a change line to debug ci


    What problem does this PR solve?

    Issue Number: close #xxx

    Problem Summary:

    What is changed and how it works?

    Check List

    Tests

    • [ ] Unit test
    • [ ] Integration test
    • [ ] Manual test (add detailed scripts or steps below)
    • [ ] No code

    Side effects

    • [ ] Performance regression: Consumes more CPU
    • [ ] Performance regression: Consumes more Memory
    • [ ] Breaking backward compatibility

    Documentation

    • [ ] Affects user behaviors
    • [ ] Contains syntax changes
    • [ ] Contains variable changes
    • [ ] Contains experimental features
    • [ ] Changes MySQL compatibility

    Release note

    Please refer to Release Notes Language Style Guide to write a quality release note.

    None
    
  • *:  Support Failed-Login Tracking and Temporary Account Locking


    What problem does this PR solve?

    Issue Number: close #38938

    Problem Summary:

    What is changed and how it works?

    Most of the feature is the same as MySQL's (see the example after this list), except that:

    • After a server restart, accounts are not reset.
    • Execution of FLUSH PRIVILEGES. (When the server is started with --skip-grant-tables, failed-login tracking still works; in this case, the first execution of FLUSH PRIVILEGES does not reset the accounts.)
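
    A minimal sketch of the MySQL-style syntax this feature targets (the user name is illustrative): lock the account for 1 day after 3 consecutive failed logins, then adjust the policy later with ALTER USER.

    CREATE USER 'app_user'@'%' IDENTIFIED BY 'secret'
        FAILED_LOGIN_ATTEMPTS 3 PASSWORD_LOCK_TIME 1;

    -- PASSWORD_LOCK_TIME UNBOUNDED keeps the account locked until it is
    -- unlocked manually, e.g. via ALTER USER ... ACCOUNT UNLOCK
    ALTER USER 'app_user'@'%' FAILED_LOGIN_ATTEMPTS 5 PASSWORD_LOCK_TIME UNBOUNDED;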

    Check List

    Tests

    • [x] Unit test
    • [ ] Integration test
    • [ ] Manual test (add detailed scripts or steps below)
    • [ ] No code

    Side effects

    • [ ] Performance regression: Consumes more CPU
    • [ ] Performance regression: Consumes more Memory
    • [ ] Breaking backward compatibility

    Documentation

    • [ ] Affects user behaviors
    • [ ] Contains syntax changes
    • [ ] Contains variable changes
    • [ ] Contains experimental features
    • [ ] Changes MySQL compatibility

    Release note

    Please refer to Release Notes Language Style Guide to write a quality release note.

    
    Support failed-login tracking and temporary account locking for users with `FAILED_LOGIN_ATTEMPTS` and/or `PASSWORD_LOCK_TIME`
    
    
  • *: support password reuse policy


    What problem does this PR solve?

    Issue Number: ref #38937

    Problem Summary:

    What is changed and how it works?

    The feature is the same as MySQL's (see the example below).
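
    A minimal sketch of the MySQL-style syntax involved (the user name is illustrative): disallow reusing any of the last 5 passwords or any password used within the last 90 days.

    CREATE USER 'app_user'@'%' IDENTIFIED BY 'initial_pw'
        PASSWORD HISTORY 5
        PASSWORD REUSE INTERVAL 90 DAY;

    -- the policy can be adjusted (or reset to the global default) afterwards
    ALTER USER 'app_user'@'%' PASSWORD HISTORY DEFAULT PASSWORD REUSE INTERVAL DEFAULT;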

    Check List

    Tests

    • [x] Unit test
    • [x] Integration test
    • [ ] Manual test (add detailed scripts or steps below)
    • [ ] No code

    Side effects

    • [ ] Performance regression: Consumes more CPU
    • [ ] Performance regression: Consumes more Memory
    • [ ] Breaking backward compatibility

    Documentation

    • [ ] Affects user behaviors
    • [ ] Contains syntax changes
    • [ ] Contains variable changes
    • [ ] Contains experimental features
    • [ ] Changes MySQL compatibility

    Release note

    Please refer to Release Notes Language Style Guide to write a quality release note.

    Support a password reuse policy like MySQL's.
    
  • Release 4.0


    What problem does this PR solve?

    Issue Number: close #xxx

    Problem Summary:

    What is changed and how it works?

    Proposal: xxx

    What's Changed:

    How it Works:

    Related changes

    • PR to update pingcap/docs/pingcap/docs-cn:
    • Need to cherry-pick to the release branch

    Check List

    Tests

    • Unit test
    • Integration test
    • Manual test (add detailed scripts or steps below)
    • No code

    Side effects

    • Performance regression
      • Consumes more CPU
      • Consumes more MEM
    • Breaking backward compatibility

    Release note

  • br: support batch create table for restore


    What problem does this PR solve?

    Issue Number: close #30284. This PR is only for review of the BR part; for the TiDB review, please refer to: https://github.com/pingcap/tidb/pull/28763

    Problem Summary:

    What is changed and how it works?

    Check List

    Tests

    • [x] Unit test
    • [x] Integration test
    • [ ] Manual test (add detailed scripts or steps below)
    • [ ] No code

    Side effects

    • [ ] Performance regression: Consumes more CPU
    • [ ] Performance regression: Consumes more Memory
    • [ ] Breaking backward compatibility

    Documentation

    • [ ] Affects user behaviors
    • [ ] Contains syntax changes
    • [ ] Contains variable changes
    • [ ] Contains experimental features
    • [ ] Changes MySQL compatibility

    Release note

    None
    
  • Add PROXY protocol support


    Add PROXY protocol V1 and V2 support.

    usage: tidb-server --proxy-protocol-networks "*" --proxy-protocol-header-timeout 5

    Add the --proxy-protocol-networks command-line parameter to enable or disable the PROXY protocol. If you want to limit the HAProxy server IP range, you can set --proxy-protocol-networks to a comma-separated list of CIDRs. For example:

    tidb-server --proxy-protocol-networks "192.168.1.0/24,192.168.2.0/24"

    For more information about the PROXY protocol, please refer to https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt

  • test leak in TestInfo


    FAIL:TestInfo

    Bug Report

    Please answer these questions before submitting your issue. Thanks!

    1. What did you do? If possible, provide a recipe for reproducing the error. https://internal.pingcap.net/idc-jenkins/blue/rest/organizations/jenkins/pipelines/tidb_ghpr_unit_test/runs/24373/nodes/84/log/?start=0
    [2020-02-17T03:08:06.587Z] === RUN   TestInfo
    [2020-02-17T03:08:06.587Z] [2020/02/17 11:08:01.308 +08:00] [ERROR] [syncer.go:199] ["close session failed"] [] [stack="github.com/pingcap/tidb/ddl/util.(*schemaVersionSyncer).Done\n\t/home/jenkins/agent/workspace/tidb_ghpr_unit_test@2/go/src/github.com/pingcap/tidb/ddl/util/syncer.go:199\ngithub.com/pingcap/tidb/domain.TestInfo\n\t/home/jenkins/agent/workspace/tidb_ghpr_unit_test@2/go/src/github.com/pingcap/tidb/domain/domain_test.go:150\ntesting.tRunner\n\t/usr/local/go/src/testing/testing.go:909"]
    [2020-02-17T03:08:06.587Z] [2020/02/17 11:08:01.311 +08:00] [ERROR] [syncer.go:199] ["close session failed"] [] [stack="github.com/pingcap/tidb/ddl/util.(*schemaVersionSyncer).Done\n\t/home/jenkins/agent/workspace/tidb_ghpr_unit_test@2/go/src/github.com/pingcap/tidb/ddl/util/syncer.go:199\ngithub.com/pingcap/tidb/domain.(*Domain).loadSchemaInLoop\n\t/home/jenkins/agent/workspace/tidb_ghpr_unit_test@2/go/src/github.com/pingcap/tidb/domain/domain.go:485"]
    [2020-02-17T03:08:06.587Z] [2020/02/17 11:08:01.314 +08:00] [ERROR] [syncer.go:199] ["close session failed"] [] [stack="github.com/pingcap/tidb/ddl/util.(*schemaVersionSyncer).Done\n\t/home/jenkins/agent/workspace/tidb_ghpr_unit_test@2/go/src/github.com/pingcap/tidb/ddl/util/syncer.go:199\ngithub.com/pingcap/tidb/domain.(*Domain).loadSchemaInLoop\n\t/home/jenkins/agent/workspace/tidb_ghpr_unit_test@2/go/src/github.com/pingcap/tidb/domain/domain.go:485"]
    [2020-02-17T03:08:06.587Z] [2020/02/17 11:08:01.318 +08:00] [ERROR] [syncer.go:199] ["close session failed"] [] [stack="github.com/pingcap/tidb/ddl/util.(*schemaVersionSyncer).Done\n\t/home/jenkins/agent/workspace/tidb_ghpr_unit_test@2/go/src/github.com/pingcap/tidb/ddl/util/syncer.go:199\ngithub.com/pingcap/tidb/domain.(*Domain).loadSchemaInLoop\n\t/home/jenkins/agent/workspace/tidb_ghpr_unit_test@2/go/src/github.com/pingcap/tidb/domain/domain.go:485"]
    [2020-02-17T03:08:06.587Z] [2020/02/17 11:08:01.321 +08:00] [ERROR] [syncer.go:199] ["close session failed"] [] [stack="github.com/pingcap/tidb/ddl/util.(*schemaVersionSyncer).Done\n\t/home/jenkins/agent/workspace/tidb_ghpr_unit_test@2/go/src/github.com/pingcap/tidb/ddl/util/syncer.go:199\ngithub.com/pingcap/tidb/domain.(*Domain).loadSchemaInLoop\n\t/home/jenkins/agent/workspace/tidb_ghpr_unit_test@2/go/src/github.com/pingcap/tidb/domain/domain.go:485"]
    [2020-02-17T03:08:06.587Z] [2020/02/17 11:08:01.324 +08:00] [ERROR] [syncer.go:199] ["close session failed"] [] [stack="github.com/pingcap/tidb/ddl/util.(*schemaVersionSyncer).Done\n\t/home/jenkins/agent/workspace/tidb_ghpr_unit_test@2/go/src/github.com/pingcap/tidb/ddl/util/syncer.go:199\ngithub.com/pingcap/tidb/domain.(*Domain).loadSchemaInLoop\n\t/home/jenkins/agent/workspace/tidb_ghpr_unit_test@2/go/src/github.com/pingcap/tidb/domain/domain.go:485"]
    [2020-02-17T03:08:06.587Z] [2020/02/17 11:08:01.327 +08:00] [ERROR] [syncer.go:199] ["close session failed"] [] [stack="github.com/pingcap/tidb/ddl/util.(*schemaVersionSyncer).Done\n\t/home/jenkins/agent/workspace/tidb_ghpr_unit_test@2/go/src/github.com/pingcap/tidb/ddl/util/syncer.go:199\ngithub.com/pingcap/tidb/domain.(*Domain).loadSchemaInLoop\n\t/home/jenkins/agent/workspace/tidb_ghpr_unit_test@2/go/src/github.com/pingcap/tidb/domain/domain.go:485"]
    [2020-02-17T03:08:06.587Z] [2020/02/17 11:08:01.330 +08:00] [ERROR] [syncer.go:199] ["close session failed"] [] [stack="github.com/pingcap/tidb/ddl/util.(*schemaVersionSyncer).Done\n\t/home/jenkins/agent/workspace/tidb_ghpr_unit_test@2/go/src/github.com/pingcap/tidb/ddl/util/syncer.go:199\ngithub.com/pingcap/tidb/domain.(*Domain).loadSchemaInLoop\n\t/home/jenkins/agent/workspace/tidb_ghpr_unit_test@2/go/src/github.com/pingcap/tidb/domain/domain.go:485"]
    [2020-02-17T03:08:06.588Z] [2020/02/17 11:08:01.334 +08:00] [ERROR] [syncer.go:199] ["close session failed"] [] [stack="github.com/pingcap/tidb/ddl/util.(*schemaVersionSyncer).Done\n\t/home/jenkins/agent/workspace/tidb_ghpr_unit_test@2/go/src/github.com/pingcap/tidb/ddl/util/syncer.go:199\ngithub.com/pingcap/tidb/domain.(*Domain).loadSchemaInLoop\n\t/home/jenkins/agent/workspace/tidb_ghpr_unit_test@2/go/src/github.com/pingcap/tidb/domain/domain.go:485"]
    [2020-02-17T03:08:06.588Z] [2020/02/17 11:08:01.337 +08:00] [ERROR] [syncer.go:199] ["close session failed"] [] [stack="github.com/pingcap/tidb/ddl/util.(*schemaVersionSyncer).Done\n\t/home/jenkins/agent/workspace/tidb_ghpr_unit_test@2/go/src/github.com/pingcap/tidb/ddl/util/syncer.go:199\ngithub.com/pingcap/tidb/domain.(*Domain).loadSchemaInLoop\n\t/home/jenkins/agent/workspace/tidb_ghpr_unit_test@2/go/src/github.com/pingcap/tidb/domain/domain.go:485"]
    [2020-02-17T03:08:06.588Z] [2020/02/17 11:08:01.340 +08:00] [ERROR] [syncer.go:199] ["close session failed"] [] [stack="github.com/pingcap/tidb/ddl/util.(*schemaVersionSyncer).Done\n\t/home/jenkins/agent/workspace/tidb_ghpr_unit_test@2/go/src/github.com/pingcap/tidb/ddl/util/syncer.go:199\ngithub.com/pingcap/tidb/domain.(*Domain).loadSchemaInLoop\n\t/home/jenkins/agent/workspace/tidb_ghpr_unit_test@2/go/src/github.com/pingcap/tidb/domain/domain.go:485"]
    [2020-02-17T03:08:06.588Z] [2020/02/17 11:08:01.343 +08:00] [ERROR] [syncer.go:199] ["close session failed"] [] [stack="github.com/pingcap/tidb/ddl/util.(*schemaVersionSyncer).Done\n\t/home/jenkins/agent/workspace/tidb_ghpr_unit_test@2/go/src/github.com/pingcap/tidb/ddl/util/syncer.go:199\ngithub.com/pingcap/tidb/domain.(*Domain).loadSchemaInLoop\n\t/home/jenkins/agent/workspace/tidb_ghpr_unit_test@2/go/src/github.com/pingcap/tidb/domain/domain.go:485"]
    [2020-02-17T03:08:06.588Z] [2020/02/17 11:08:01.346 +08:00] [ERROR] [syncer.go:199] ["close session failed"] [] [stack="github.com/pingcap/tidb/ddl/util.(*schemaVersionSyncer).Done\n\t/home/jenkins/agent/workspace/tidb_ghpr_unit_test@2/go/src/github.com/pingcap/tidb/ddl/util/syncer.go:199\ngithub.com/pingcap/tidb/domain.(*Domain).loadSchemaInLoop\n\t/home/jenkins/agent/workspace/tidb_ghpr_unit_test@2/go/src/github.com/pingcap/tidb/domain/domain.go:485"]
    [2020-02-17T03:08:06.588Z] [2020/02/17 11:08:01.349 +08:00] [ERROR] [syncer.go:199] ["close session failed"] [] [stack="github.com/pingcap/tidb/ddl/util.(*schemaVersionSyncer).Done\n\t/home/jenkins/agent/workspace/tidb_ghpr_unit_test@2/go/src/github.com/pingcap/tidb/ddl/util/syncer.go:199\ngithub.com/pingcap/tidb/domain.(*Domain).loadSchemaInLoop\n\t/home/jenkins/agent/workspace/tidb_ghpr_unit_test@2/go/src/github.com/pingcap/tidb/domain/domain.go:485"]
    [2020-02-17T03:08:06.588Z] {"level":"warn","ts":"2020-02-17T11:08:01.350+0800","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"endpoint://client-516b5633-69d2-49d2-a3a5-8ed7da21c073/localhost:1525069566242647020","attempt":0,"error":"rpc error: code = NotFound desc = etcdserver: requested lease not found"}
    [2020-02-17T03:08:06.588Z] --- FAIL: TestInfo (2.75s)
    [2020-02-17T03:08:06.588Z]     leaktest.go:143: Test TestInfo check-count 50 appears to have leaked: github.com/pingcap/tidb/domain.(*Domain).loadSchemaInLoop(0xc0006026c0, 0x2faf080)
    [2020-02-17T03:08:06.588Z]         	/home/jenkins/agent/workspace/tidb_ghpr_unit_test@2/go/src/github.com/pingcap/tidb/domain/domain.go:469 +0x269
    [2020-02-17T03:08:06.588Z]         created by github.com/pingcap/tidb/domain.(*Domain).Init
    [2020-02-17T03:08:06.588Z]         	/home/jenkins/agent/workspace/tidb_ghpr_unit_test@2/go/src/github.com/pingcap/tidb/domain/domain.go:696 +0x59c
    [2020-02-17T03:08:06.588Z] FAIL
    
    2. What did you expect to see?

    3. What did you see instead?

    4. What version of TiDB are you using (tidb-server -V or run select tidb_version(); on TiDB)?

  • tidb: support a plan cache for prepared statements


    Since prepared statements are compiled and optimized whenever they are executed, there is room to reduce the execution time of prepared statements.

    To do that, we can compile a prepared statement once and reuse the compiled plan by converting parameter expressions into deferred ones, which are evaluated as late as possible, and storing the plan in a cache.
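
    A minimal sketch of the flow this cache targets (the table `t` and column `id` are placeholders): the statement is compiled at PREPARE time, and later EXECUTEs can reuse the cached plan with the parameter evaluated lazily.

    PREPARE stmt FROM 'SELECT * FROM t WHERE id = ?';
    SET @id = 1;
    EXECUTE stmt USING @id;   -- first execution builds the plan and can cache it
    SET @id = 2;
    EXECUTE stmt USING @id;   -- later executions can reuse the cached plan
    DEALLOCATE PREPARE stmt;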

    The implementation has some limitations:

    • the prepared statements with limit expressions are not cached
    • the statistics related to the cache are not supported
    • on-demand methods for invalidating the cache are not supported
  • ttl: Add `CommandClient` to trigger TTL job manually


    What problem does this PR solve?

    Issue Number: close #40345

    Problem Summary:

    In some cases, we want to trigger a TTL job manually; for example, some test cases want to exercise TTL execution without waiting for the scheduled time.

    What is changed and how it works?

    1. Add a client to send commands to ttl.JobManager; ttl.JobManager can also use it to handle commands.
    2. Support triggering a TTL job manually using the client.
    3. Optimize some interval work in ttl.JobManager to reduce the time cost of job processing and shorten some test times.
    4. Currently, triggering a TTL job manually can only be used in internal tests.

    Check List

    Tests

    • [x] Unit test
    • [ ] Integration test
    • [ ] Manual test (add detailed scripts or steps below)
    • [ ] No code

    Side effects

    • [ ] Performance regression: Consumes more CPU
    • [ ] Performance regression: Consumes more Memory
    • [ ] Breaking backward compatibility

    Documentation

    • [ ] Affects user behaviors
    • [ ] Contains syntax changes
    • [ ] Contains variable changes
    • [ ] Contains experimental features
    • [ ] Changes MySQL compatibility

    Release note

    Please refer to Release Notes Language Style Guide to write a quality release note.

    None
    
  • ttl: don't schedule ttl job when EnableTTLJob is off (#40336)


    This is an automated cherry-pick of #40336

    Signed-off-by: YangKeao [email protected]

    What problem does this PR solve?

    Issue Number: close #40335

    What is changed and how it works?

    Don't try to schedule anything if EnableTTLJob is false.
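
    For reference, a minimal way to exercise this from SQL; `tidb_ttl_job_enable` is the variable mentioned in the release note, and with this fix no TTL job should be scheduled while it is OFF:

    SET GLOBAL tidb_ttl_job_enable = OFF;
    SHOW VARIABLES LIKE 'tidb_ttl_job_enable';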

    Check List

    Tests

    • [ ] Unit test
    • [x] Integration test
    • [ ] Manual test (add detailed scripts or steps below)
    • [ ] No code

    Release note

    Fix the issue that TTL jobs keep being scheduled even when `tidb_ttl_job_enable` is set to `OFF`
    
  • planner: support more types to use IndexMerge to access MVIndex


    What problem does this PR solve?

    Issue Number: ref #40191

    Problem Summary: planner: support more types to use IndexMerge to access MVIndex

    What is changed and how it works?

    planner: support more types to use IndexMerge to access MVIndex
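
    For illustration (the table, index, and JSON path are made up): a multi-valued index (MVIndex) over a JSON array, and a query whose condition on that array is a candidate for IndexMerge access.

    CREATE TABLE t (
        id INT PRIMARY KEY,
        j JSON,
        KEY idx_a ((CAST(j->'$.a' AS SIGNED ARRAY)))
    );

    -- conditions such as MEMBER OF over the indexed array may now be served
    -- through IndexMerge on the MVIndex for more value types
    SELECT /*+ USE_INDEX_MERGE(t, idx_a) */ id
    FROM t
    WHERE 1 MEMBER OF (j->'$.a');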

    Check List

    Tests

    • [x] Unit test
    • [ ] Integration test
    • [ ] Manual test (add detailed scripts or steps below)
    • [ ] No code

    Side effects

    • [ ] Performance regression: Consumes more CPU
    • [ ] Performance regression: Consumes more Memory
    • [ ] Breaking backward compatibility

    Documentation

    • [ ] Affects user behaviors
    • [ ] Contains syntax changes
    • [ ] Contains variable changes
    • [ ] Contains experimental features
    • [ ] Changes MySQL compatibility

    Release note

    Please refer to Release Notes Language Style Guide to write a quality release note.

    None
    
  • Implement "drop column with index" through dropping index before dropping column

    Enhancement

    For an alter table xxx drop column xxx statement, TiDB drops both the column and its index. This brings some problems (e.g. https://github.com/pingcap/tidb/issues/40192). It's also not online enough in some cases: if a NOT NULL UNIQUE column is in write-only mode, the user cannot actually write any data into the table, because the default value of this column is always inserted and then rejected as a duplicate... (though this doesn't sound like a big problem).

    We could try to implement "drop column with index" in multiple steps: drop the index first and then drop the column. This would fix the problem, and it would also make the column-dropping routine more consistent with the other implementations.
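
    An illustrative scenario (the table and names are made up): dropping a NOT NULL column that carries its own unique index. With the proposed change, the DDL would internally drop idx_c first and then drop the column itself.

    CREATE TABLE t (
        id INT PRIMARY KEY,
        c  INT NOT NULL,
        UNIQUE KEY idx_c (c)
    );

    ALTER TABLE t DROP COLUMN c;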

  • query got tikv error "other error: [components/tidb_query_expr/src/types/expr_builder.rs:74]: Unsupported expression type MysqlEnum"

    Bug Report

    Please answer these questions before submitting your issue. Thanks!

    1. Minimal reproduce step (Required)

    CREATE TABLE `3d4cb6ef-0f1d-4251-8bfa-597151b12faf` (
      `4f7b3506-819c-4d2a-b9f7-83d435afcbfa` smallint(6) NOT NULL DEFAULT '-5122',
      `ed811e32-79ef-407f-b0aa-c26cfab07039` decimal(31,5) NOT NULL DEFAULT '-58273067513829223182570619.25537',
      PRIMARY KEY (`4f7b3506-819c-4d2a-b9f7-83d435afcbfa`,`ed811e32-79ef-407f-b0aa-c26cfab07039`) /*T![clustered_index] CLUSTERED */
    );
    
    CREATE TABLE `19034a25-1533-4252-8b25-abbdd6969f26` (
      `2285ff2b-896f-4dfe-a7d5-b7f7fd7a8bc8` mediumint(9) NOT NULL DEFAULT '-4139846',
      `1593583d-41c7-4143-b6a3-ccd5220fad20` enum('j50s','8','4a3yi','kv','lf5b','j','ky76i','q','el','xy','9') DEFAULT 'q',
      `ba579924-abe2-42b3-8f78-b3aad6369dfc` enum('n0pr','u7x','qt','0msar','zrr4','b2b3n','t','xv6','rnw','7cqno','x3elq') DEFAULT 'b2b3n',
      `279ef64e-beab-41cf-acac-998f925c1da1` timestamp DEFAULT '2014-09-29 07:13:02',
      `f4e3f70d-6654-43bd-8ac4-9249ea076743` enum('c','3','he','p2','pww','ux9ib','xk1fg','ws1n','mgiz1','2','j8vj') NOT NULL DEFAULT 'pww',
      `4f52ced6-03a4-48d5-a875-899e78069dc9` decimal(45,23) DEFAULT '7130795821065799090555.21999951481601315693236',
      `4f06def2-7f89-4fab-9567-fa29aa828993` year(4) DEFAULT '1968',
      PRIMARY KEY (`f4e3f70d-6654-43bd-8ac4-9249ea076743`,`2285ff2b-896f-4dfe-a7d5-b7f7fd7a8bc8`) /*T![clustered_index] CLUSTERED */,
      KEY `6abfc501-2a14-40ce-ac53-20b10798b194` (`f4e3f70d-6654-43bd-8ac4-9249ea076743`),
      KEY `0350a244-83ba-4377-9045-a0a9c1483346` (`4f52ced6-03a4-48d5-a875-899e78069dc9`),
      KEY `b972c93b-78c6-42f1-9f30-2b139429b36d` (`f4e3f70d-6654-43bd-8ac4-9249ea076743`,`2285ff2b-896f-4dfe-a7d5-b7f7fd7a8bc8`,`4f52ced6-03a4-48d5-a875-899e78069dc9`,`1593583d-41c7-4143-b6a3-ccd5220fad20`,`279ef64e-beab-41cf-acac-998f925c1da1`),
      UNIQUE KEY `f981fc15-052a-49e7-9ae7-4c9904e80379` (`f4e3f70d-6654-43bd-8ac4-9249ea076743`,`279ef64e-beab-41cf-acac-998f925c1da1`,`2285ff2b-896f-4dfe-a7d5-b7f7fd7a8bc8`)
    ) ENGINE=InnoDB DEFAULT CHARSET=ascii COLLATE=ascii_bin COMMENT='f5d35eed-b7c8-4acc-a3dc-cfdb3c70f576'
    PARTITION BY HASH (`2285ff2b-896f-4dfe-a7d5-b7f7fd7a8bc8`) PARTITIONS 6;
    
    INSERT INTO `19034a25-1533-4252-8b25-abbdd6969f26` VALUES (-7186257,'ky76i','u7x','1985-07-19 16:00:00','p2',62465.40000000000000000000000,1973),(5449941,'el','zrr4','1993-11-18 16:00:00','ws1n',46929.20000000000000000000000,2006),(-4139846,'el','t','2025-06-27 16:00:00','',7130795821065799090555.21999951481601315693236,1996),(4562822,'lf5b','b2b3n','2016-04-15 16:00:00','ux9ib',55415.20000000000000000000000,2021),(4841120,'q','b2b3n','2025-06-27 16:00:00','2',7130795821065799090555.21999951481601315693236,1968),(-209341,'el','t','2025-06-27 16:00:00','',0.09000000000000000000000,1982),(-1047955,'el','t','2025-06-27 16:00:00','c',0.50000000000000000000000,1983),(-7984447,'el','t','1979-10-23 16:00:00','xk1fg',2.30600000000000000000000,2011),(3699259,'j50s','n0pr','1979-12-09 16:00:00','xk1fg',367.70340000000000000000000,2028),(-7625501,'8','t','2030-01-05 16:00:00','c',12864.00000000000000000000000,2025),(-2982875,'lf5b','t','1981-04-02 16:00:00','c',42.00000000000000000000000,2000),(2030843,'xy',NULL,'2013-07-08 16:00:00','he',3446.40000000000000000000000,2004),(7599611,'el','t','2025-06-27 16:00:00','ws1n',7130795821065799090555.21999951481601315693236,1968);
    
    INSERT INTO `3d4cb6ef-0f1d-4251-8bfa-597151b12faf` VALUES (-26167,-71962604850603508117163388.41295),(-20311,35958677810997277612112850.64947),(-10584,7.04220),(-10584,903981.60700),(-10584,96361735411871213251545173.82569),(-5122,68904.10000),(5999,7.04220),(29452,7.04220);
    
    select  field( `19034a25-1533-4252-8b25-abbdd6969f26`.`279ef64e-beab-41cf-acac-998f925c1da1` , `19034a25-1533-4252-8b25-abbdd6969f26`.`279ef64e-beab-41cf-acac-998f925c1da1` , `19034a25-1533-4252-8b25-abbdd6969f26`.`2285ff2b-896f-4dfe-a7d5-b7f7fd7a8bc8` )
    as r0 from `19034a25-1533-4252-8b25-abbdd6969f26` where not( `19034a25-1533-4252-8b25-abbdd6969f26`.`f4e3f70d-6654-43bd-8ac4-9249ea076743` in ( select `ed811e32-79ef-407f-b0aa-c26cfab07039` from `3d4cb6ef-0f1d-4251-8bfa-597151b12faf` where `19034a25-1533-4252-8b25-abbdd6969f26`.`1593583d-41c7-4143-b6a3-ccd5220fad20` in ( select `ed811e32-79ef-407f-b0aa-c26cfab07039` from `3d4cb6ef-0f1d-4251-8bfa-597151b12faf` where `19034a25-1533-4252-8b25-abbdd6969f26`.`ba579924-abe2-42b3-8f78-b3aad6369dfc` >= 'qt' and not( `19034a25-1533-4252-8b25-abbdd6969f26`.`ba579924-abe2-42b3-8f78-b3aad6369dfc` <> 'u7x' ) ) and not( `19034a25-1533-4252-8b25-abbdd6969f26`.`f4e3f70d-6654-43bd-8ac4-9249ea076743` between 'mgiz1' and 'ws1n' ) ) );
    

    2. What did you expect to see? (Required)

    No error

    3. What did you see instead (Required)

    other error: [components/tidb_query_expr/src/types/expr_builder.rs:74]: Unsupported expression type MysqlEnum

    4. What is your TiDB version? (Required)

    master
