TiDB Mesh: Implement Multi-Tenant Keyspace by Decorating Message between Components

There is a traditional assumption in the design of TiDB's multi-tenant features: supporting multi-tenancy would require a great deal of code refactoring, and we would then have to maintain a new code pattern for both single-tenant and multi-tenant deployments. That refactoring work seems unaffordable with the current development resources.

Actually, large-scale code refactoring is not necessary. gRPC consists not only of the generated client and server code, but also of the messages in the network traffic. We can pay more attention to the gRPC messages on the wire, rather than to how those messages are produced and consumed.

In the traditional approach, we have to modify the protobuf definitions, or introduce a new set of protobuf APIs to support multi-tenancy, such as the keyspace support for TiKV. We then rerun the code generator to update the client and server code, fix the errors that arise from the update, implement the multi-tenancy logic around the updated functions, and make the system runnable again.

If we focus on the gRPC messages in the network traffic instead, things become easier. We can simply decorate the gRPC messages used for key-value requests: by making full use of the source and destination information, we can embed the cluster identity into every key-value request.

Let's consider a simple keyspace implementation. We add or remove a prefix at the head of the key in every key-value request that crosses components. When TiDB puts a key-value pair into TiKV, we add the prefix to the key; when TiDB fetches that pair back from TiKV, we remove the prefix. The decoration is transparent to TiDB, and TiKV together with PD can store many sets of TiDB key-value pairs without conflicts. If TiKV implements the keyspace feature described in other design documents, we can use the dedicated keyspace parameters instead of a key prefix.
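
A minimal sketch of this prefix scheme in Go might look like the following (the package name and the "/" separator are illustrative assumptions, not part of any TiDB API):

package keyspace

import (
	"bytes"
	"fmt"
)

// Encode prepends the tenant prefix before a key is written to TiKV.
func Encode(cluster string, key []byte) []byte {
	return append([]byte(cluster+"/"), key...)
}

// Decode strips the tenant prefix from keys in TiKV responses, so the
// decoration stays invisible to TiDB.
func Decode(cluster string, key []byte) ([]byte, error) {
	prefix := []byte(cluster + "/")
	if !bytes.HasPrefix(key, prefix) {
		return nil, fmt.Errorf("key does not belong to keyspace %q", cluster)
	}
	return key[len(prefix):], nil
}

With this scheme, key_a written by cluster1 is stored as cluster1/key_a, while cluster2's key_a becomes cluster2/key_a, so the two never collide in TiKV.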

The procedure is illustrated in the diagram: key_a is a duplicated key from TiKV's point of view, which represents the common problem when multiple sets of TiDB use the same storage layer. A transparent proxy layer decorates the key content in the gRPC messages; it can be implemented with Istio or with a dedicated gRPC proxy service.
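
As a rough sketch of the dedicated-proxy option (the port and the decorateKey placeholder are hypothetical, and the message pumping between client and upstream is omitted), a gRPC server can intercept arbitrary methods with grpc.UnknownServiceHandler, so no generated stubs for the TiKV or PD APIs are needed:

package main

import (
	"fmt"
	"net"

	"google.golang.org/grpc"
)

// decorateKey is a placeholder for the keyspace rewriting logic.
func decorateKey(key []byte) []byte { return key }

// proxyUnknown receives every method the proxy has no registered
// handler for, which is all of them.
func proxyUnknown(srv interface{}, stream grpc.ServerStream) error {
	method, ok := grpc.MethodFromServerStream(stream)
	if !ok {
		return fmt.Errorf("cannot determine method")
	}
	// A real proxy would open a client stream to the upstream TiKV or PD,
	// pump messages in both directions, and apply decorateKey to the key
	// fields of each message. That plumbing is omitted here.
	_ = method
	return nil
}

func main() {
	lis, err := net.Listen("tcp", ":20161") // hypothetical proxy port
	if err != nil {
		panic(err)
	}
	srv := grpc.NewServer(grpc.UnknownServiceHandler(proxyUnknown))
	_ = srv.Serve(lis)
}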

Architecture

In a service mesh system such as Istio, the source and destination of every request are known. We can obtain the component name, which embeds the cluster name, as well as the gRPC method being called. This information is enough to decide which prefix to use when decorating.
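
Here is a sketch of that decision logic, assuming the mesh (or a client sidecar) attaches the source cluster name under a hypothetical x-tidb-cluster metadata key:

package keyspace

import (
	"context"
	"strings"

	"google.golang.org/grpc/metadata"
)

// PrefixFor decides whether a request should be decorated and with which
// prefix, based on its source cluster and the gRPC method it targets.
func PrefixFor(ctx context.Context, fullMethod string) (string, bool) {
	// Only the key-value methods of TiKV need decoration; other traffic
	// (heartbeats, status queries) passes through untouched.
	if !strings.HasPrefix(fullMethod, "/tikvpb.Tikv/") {
		return "", false
	}
	md, ok := metadata.FromIncomingContext(ctx)
	if !ok {
		return "", false
	}
	clusters := md.Get("x-tidb-cluster") // hypothetical header set by the mesh
	if len(clusters) == 0 {
		return "", false
	}
	return clusters[0] + "/", true
}

The proxy from the earlier sketch would call PrefixFor with the method name it extracted from the stream, then apply Encode or Decode to the key fields accordingly.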

Network Traffic

We can use the heterogeneous cluster feature of TiDB Operator to deploy the clusters. Below is a set of example TidbCluster manifests:

apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  name: storage-layer
spec:
  version: v5.2.1
  pd:
    baseImage: pingcap/pd
    replicas: 1
    requests:
      storage: "1Gi"
    config: {}
  tikv:
    baseImage: pingcap/tikv
    replicas: 1
    requests:
      storage: "1Gi"
    config: {}
---
apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  name: cluster1
spec:
  version: v5.2.1
  cluster:
    name: storage-layer
  tidb:
    baseImage: pingcap/tidb
    replicas: 1
    service:
      type: ClusterIP
    config: {}
---
apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  name: cluster2
spec:
  version: v5.2.1
  cluster:
    name: storage-layer
  tidb:
    baseImage: pingcap/tidb
    replicas: 1
    service:
      type: ClusterIP
    config: {}
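
After applying these manifests (for example with kubectl apply -f), cluster1 and cluster2 each run their own TiDB servers, while the spec.cluster reference joins both of them to the PD and TiKV instances of storage-layer.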

There are still many open problems in this implementation, such as decorating coprocessor requests and supporting other features, and we can address them step by step. If the system works as expected, we end up with a universal storage layer and a dedicated computing layer, which brings us closer to a serverless database.

Service mesh is a big topic for TiDB. Once we introduce a service mesh into TiDB, we not only gain the multi-tenant feature but also strengthen the observability of the distributed system. Many more precise network traffic management methods will become available to us in the future.
