PolarDB Stack is a DBaaS implementation for PolarDB-for-PostgreSQL. As a Kubernetes operator, it creates and manages PolarDB/PostgreSQL clusters running in Kubernetes, providing rebuild, failover, switch-over, scale-up/scale-out, and high-availability capabilities for each cluster.

Lifecycle of the PolarDB Stack Open-Source Edition

1 System Overview

PolarDB is a cloud-native relational database developed in-house by Alibaba Cloud, built on a shared-storage architecture that separates compute from storage. The database moves from the traditional Shared-Nothing architecture to Shared-Storage: instead of N copies of compute plus N copies of storage, there are N compute nodes sharing a single copy of storage.

PolarDB Stack is a lightweight PolarDB PaaS software that Alibaba Cloud offers for the on-premises database market. On top of shared storage it provides a one-writer, multi-reader PolarDB database service, with specially tailored and deeply optimized database lifecycle management.

2 Overall Architecture

PolarDB Stack's cluster components are divided into three parts: the data plane, the control plane, and PaaS.

  • Data plane

    • PolarDB Engine: the database engine, consisting of RW (read-write), RO (read-only), and Standby nodes
    • PolarFS: a user-space file system

  • Control plane

    • CM (Cluster Manager): the cluster management module, covering node topology maintenance, primary/standby role switch-over, node status reporting, etc.
    • LifeCycle Operator: manages the database cluster lifecycle
    • Storage Controller: manages storage
    • Daemon: handles network management, in-node maintenance, and status collection

  • PaaS

    • PolarDB Stack must be deployed on Kubernetes; its system components and DB cluster instances run in Docker containers

[figure: PolarDB Stack overall architecture]

2.1 Compute-Storage Separation

PolarDB Stack adopts an architecture that separates storage from compute: all compute nodes share a single copy of the data, which enables minute-level configuration upgrades and downgrades, second-level failure recovery, and global data consistency, and lets the business scale elastically. The compute nodes share the underlying SAN storage through the distributed file system PolarFS, which greatly reduces storage cost. PolarDB Stack delivers this separation to the database engine through Kubernetes and a shared-storage controller: Kubernetes handles the scheduling and allocation of compute resources, while the storage controller handles mounting the shared storage and controlling read/write access. Compute resources can thus be requested, released, and resized flexibly and independently; whenever compute is (re)configured, the storage controller provides the corresponding mount and read/write control, so compute and storage are managed separately yet cooperate.
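
In code terms, the division of labour might be sketched as follows (the interfaces are hypothetical, not the project's API): Kubernetes places the compute pod, and the storage controller mounts the one shared volume read-write on the writer and read-only on each reader:

    package main

    import "fmt"

    // AccessMode mirrors the one-writer/multi-reader model.
    type AccessMode string

    const (
    	ReadWrite AccessMode = "rw" // the single RW node
    	ReadOnly  AccessMode = "ro" // every RO node
    )

    // ComputeScheduler stands in for Kubernetes: it only places compute pods.
    type ComputeScheduler interface {
    	SchedulePod(node string) error
    }

    // StorageController stands in for the shared-storage manager: it mounts
    // the shared volume and controls which node may write.
    type StorageController interface {
    	Mount(node, volumeID string, mode AccessMode) error
    }

    // ProvisionNode shows the two concerns cooperating while staying
    // independent: schedule compute first, then attach storage.
    func ProvisionNode(cs ComputeScheduler, sc StorageController, node, volumeID string, mode AccessMode) error {
    	if err := cs.SchedulePod(node); err != nil {
    		return err
    	}
    	return sc.Mount(node, volumeID, mode)
    }

    type fakeScheduler struct{}

    func (fakeScheduler) SchedulePod(node string) error {
    	fmt.Println("scheduled compute pod on", node)
    	return nil
    }

    type fakeStorage struct{}

    func (fakeStorage) Mount(node, vol string, mode AccessMode) error {
    	fmt.Printf("mounted %s on %s as %s\n", vol, node, mode)
    	return nil
    }

    func main() {
    	cs, sc := fakeScheduler{}, fakeStorage{}
    	_ = ProvisionNode(cs, sc, "node-1", "vol-shared", ReadWrite) // the writer
    	_ = ProvisionNode(cs, sc, "node-2", "vol-shared", ReadOnly)  // a reader
    }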

[figure: compute-storage separation]

2.2 Lifecycle Management

Lifecycle management of a PolarDB Stack database cluster mainly covers the following flows: cluster creation, refreshing engine parameters, specification changes, adding and removing nodes, storage expansion, node migration, rebuild, cluster restart, instance restart, read/write switch-over, and engine minor-version upgrades.

PolarDB Stack uses Kubernetes as its foundation, and lifecycle management of its main component objects is built on Kubernetes operators. The basic workflow is as follows: first a custom Kubernetes resource is defined; a user then creates or modifies an instance of that resource; the management operator watches the resource instance, and on any change triggers reconciliation. During reconciliation a state machine inspects the current resource state and decides whether an action of that state has been triggered; if so, it executes the action and enters the corresponding workflow. If the workflow completes normally, the resource instance reaches its final (steady) state. Because a workflow contains many steps and some actions can take a long time, a failed step is retried; once the automatic retry limit is reached, execution stops and the resource enters an interrupted state, waiting for manual intervention.
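
As a rough illustration of this reconcile-plus-state-machine pattern, here is a minimal Go sketch with hypothetical names; it is not PolarDB Stack's actual code, just the shape of the dispatch and retry logic described above:

    package main

    import (
    	"errors"
    	"fmt"
    )

    type State string

    const (
    	StateCreating    State = "Creating"
    	StateRunning     State = "Running" // the steady (final) state
    	StateInterrupted State = "Interrupted"
    )

    type Cluster struct {
    	Name    string
    	State   State
    	Retries int
    }

    const maxRetries = 3

    // Each non-steady state maps to the workflow that drives the resource
    // toward its steady state.
    var workflows = map[State]func(*Cluster) error{
    	StateCreating: createWorkflow,
    }

    func createWorkflow(c *Cluster) error {
    	// A real workflow runs many steps (allocate storage, schedule pods,
    	// configure the engine, ...); here the first attempt fails so the
    	// retry behaviour is visible.
    	if c.Retries < 1 {
    		return errors.New("step 3/7 failed: storage not ready")
    	}
    	c.State = StateRunning
    	return nil
    }

    // Reconcile is triggered whenever the watched resource instance changes.
    func Reconcile(c *Cluster) {
    	wf, ok := workflows[c.State]
    	if !ok {
    		return // steady or interrupted: nothing to do
    	}
    	if err := wf(c); err != nil {
    		c.Retries++
    		if c.Retries >= maxRetries {
    			c.State = StateInterrupted // wait for manual intervention
    		}
    		fmt.Printf("%s: workflow failed (attempt %d): %v\n", c.Name, c.Retries, err)
    		return
    	}
    	fmt.Printf("%s: reached steady state %q\n", c.Name, c.State)
    }

    func main() {
    	c := &Cluster{Name: "pg-demo", State: StateCreating}
    	Reconcile(c) // fails once and records the retry ...
    	Reconcile(c) // ... then succeeds and reaches "Running"
    }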

[figure: operator reconciliation workflow]

Lifecycle management of the database cluster is the core job of PolarDB Stack. The database cluster's data model is first defined as a Kubernetes CRD; the operator then watches the DB cluster resource for changes. When the resource changes, the state machine is entered and a specific workflow is executed; once its steps all succeed, the resource finally reaches the steady state "Running".
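
To make the data-model step concrete, a kubebuilder-style CRD type for such a DB cluster could look like the following Go sketch; the field names are illustrative assumptions, not the project's real schema:

    package v1

    import (
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // DBClusterSpec is the desired state written by the user.
    type DBClusterSpec struct {
    	EngineVersion string `json:"engineVersion"`           // engine version to run
    	ClassCode     string `json:"classCode"`               // instance specification
    	RoReplicas    int32  `json:"roReplicas"`              // number of RO nodes
    	StorageSizeGi int32  `json:"storageSizeGi,omitempty"` // shared-storage size
    }

    // DBClusterStatus is the observed state maintained by the operator;
    // Phase becomes "Running" once a workflow has completed successfully.
    type DBClusterStatus struct {
    	Phase  string `json:"phase,omitempty"`
    	Reason string `json:"reason,omitempty"`
    }

    // +kubebuilder:object:root=true
    // +kubebuilder:subresource:status

    // DBCluster is the custom resource the operator watches.
    type DBCluster struct {
    	metav1.TypeMeta   `json:",inline"`
    	metav1.ObjectMeta `json:"metadata,omitempty"`

    	Spec   DBClusterSpec   `json:"spec,omitempty"`
    	Status DBClusterStatus `json:"status,omitempty"`
    }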

[figure: database cluster state machine]

2.3 Code Architecture

  1. The base layer's workflow engine and utility library are each implemented as a separate project.
  2. The domain model, the external-dependency interface definitions, the application service layer, and the default adapter implementations live in a single project, the domain library.
  3. The operator imports the workflow engine, the utility library, and the domain library.
  4. The workflows, REST endpoints, and monitor are implemented in the operator; this logic is thin, merely calling into the workflow engine and the domain library. If the default adapters do not meet its needs, the operator also implements custom adapter logic.
  5. The operator interacts only with the domain library's application service layer, which keeps domain logic from leaking into the application. The operator instantiates the adapters and passes them to the service, which in turn injects them into the domain model (see the sketch below).
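
A minimal sketch of item 5's dependency direction, with all names hypothetical: the operator owns the adapter instance and hands it to the application service, which injects it into the domain model, so the domain depends only on the interface:

    package main

    import "fmt"

    // StorageAdapter is an external-dependency interface defined alongside
    // the domain model; the domain knows only this interface.
    type StorageAdapter interface {
    	Mount(clusterName string) error
    }

    // Cluster is the domain model; it uses the injected adapter and never a
    // concrete implementation.
    type Cluster struct {
    	Name    string
    	storage StorageAdapter
    }

    func (c *Cluster) PrepareStorage() error { return c.storage.Mount(c.Name) }

    // ClusterService is the application service layer, the only entry point
    // the operator talks to.
    type ClusterService struct{ storage StorageAdapter }

    func NewClusterService(s StorageAdapter) *ClusterService {
    	return &ClusterService{storage: s}
    }

    func (s *ClusterService) CreateCluster(name string) error {
    	c := &Cluster{Name: name, storage: s.storage} // inject into the domain model
    	return c.PrepareStorage()
    }

    // sanAdapter is a default adapter implementation; an operator can swap
    // in its own when the default does not fit.
    type sanAdapter struct{}

    func (sanAdapter) Mount(name string) error {
    	fmt.Println("mounting shared storage for", name)
    	return nil
    }

    func main() {
    	// Operator side: instantiate the adapter and pass it to the service.
    	svc := NewClusterService(sanAdapter{})
    	_ = svc.CreateCluster("pg-demo")
    }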

[figure: code architecture]

Installation and Usage

Installation Guide · User Manual

Comments
  • How can I get volume_id when creating a PVC?

    I create the PVC by executing curl -X POST "http://10.0.0.77:2002/pvcs" -H "accept: application/json" -H "Content-Type: application/json" -d "{ \"name\": \"pvc-32ze341nncwlczm47bsre\", \"namespace\": \"default\", \"need_format\": true, \"volume_id\": \"32ze341nncwlczm47bsre\", \"volume_type\": \"lun\"}" but get the error: {"error":"can not find lv for wwid 32ze341nncwlczm47bsre"}. The output of multipath -ll is empty. How can I get the correct volume_id?

  • Error with polarstack-daemon

    I installed PolarDB with install.sh and modified env.yaml according to my own configuration. However, the pod created by polarstack-daemon does not work properly. Here is the pod's log.

    ----------------------------------------------------------------------------------------------
    |                                                                                           |
    | polarbox cloud branch:master commitId:b3f3fde34f4e018cf8ca28625e8d9042ee7bb1f1 
    | polarbox repo https://github.com/ApsaraDB/PolarDB-Stack-Daemon.git
    | polarbox commitDate Wed Oct 20 14:33:55 2021 +0800
    |                                                                                           |
    ----------------------------------------------------------------------------------------------
    start polarbox controller-manager cloud-provider
    I0207 18:21:43.031321       1 main.go:48] --------------------------------------------------------------------------------------------
    I0207 18:21:43.031391       1 main.go:49] |                                                                                           |
    I0207 18:21:43.031398       1 main.go:50] |                              polarstack-daemon                                            |
    I0207 18:21:43.031404       1 main.go:51] |                                                                                           |
    I0207 18:21:43.031410       1 main.go:52] --------------------------------------------------------------------------------------------
    W0207 18:21:43.032072       1 client_config.go:541] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
    [GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.
    
    [GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
     - using env:	export GIN_MODE=release
     - using code:	gin.SetMode(gin.ReleaseMode)
    
    [GIN-debug] GET    /healthz                  --> github.com/ApsaraDB/PolarDB-Stack-Daemon/polar-controller-manager/bizapis.health (3 handlers)
    [GIN-debug] GET    /api/v1/TestConn          --> github.com/ApsaraDB/PolarDB-Stack-Daemon/polar-controller-manager/bizapis.Handle.func1 (3 handlers)
    [GIN-debug] GET    /api/v1/GetStandByIp      --> github.com/ApsaraDB/PolarDB-Stack-Daemon/polar-controller-manager/bizapis.Handle.func1 (3 handlers)
    [GIN-debug] POST   /api/v1/RequestCheckCoreVersion --> github.com/ApsaraDB/PolarDB-Stack-Daemon/polar-controller-manager/bizapis.Handle.func1 (3 handlers)
    [GIN-debug] POST   /api/v1/InnerCheckCoreVersion --> github.com/ApsaraDB/PolarDB-Stack-Daemon/polar-controller-manager/bizapis.Handle.func1 (3 handlers)
    panic: runtime error: invalid memory address or nil pointer dereference
    [signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0x12651a0]
    
    goroutine 22 [running]:
    golang.org/x/crypto/ssh.(*connection).clientAuthenticate(0xc0003e5e00, 0xc000327860, 0x0, 0xa)
    	/go/pkg/mod/golang.org/x/[email protected]/ssh/client_auth.go:63 +0x420
    golang.org/x/crypto/ssh.(*connection).clientHandshake(0xc0003e5e00, 0xc000488e40, 0x9, 0xc000327860, 0x0, 0x0)
    	/go/pkg/mod/golang.org/x/[email protected]/ssh/client.go:113 +0x2b6
    golang.org/x/crypto/ssh.NewClientConn(0x180c020, 0xc0000e1bb0, 0xc000488e40, 0x9, 0xc000327380, 0x180c020, 0xc0000e1bb0, 0x0, 0x0, 0xc000488e40, ...)
    	/go/pkg/mod/golang.org/x/[email protected]/ssh/client.go:83 +0xf8
    golang.org/x/crypto/ssh.Dial(0x15d5acb, 0x3, 0xc000488e40, 0x9, 0xc000327380, 0xc000488e40, 0x9, 0x1)
    	/go/pkg/mod/golang.org/x/[email protected]/ssh/client.go:177 +0xb3
    github.com/ApsaraDB/PolarDB-Stack-Daemon/polar-controller-manager.SSHConnect(0x15d6479, 0x4, 0xc000488dc8, 0x6, 0x16, 0x2, 0x2, 0xc00003ce00)
    	/go/src/github.com/ApsaraDB/PolarDB-Stack-Daemon/polar-controller-manager/sshutil.go:74 +0x26a
    github.com/ApsaraDB/PolarDB-Stack-Daemon/polar-controller-manager.(*SSHConnection).Init(0xc0005cb7a0, 0x4, 0xc000488dc8)
    	/go/src/github.com/ApsaraDB/PolarDB-Stack-Daemon/polar-controller-manager/sshutil.go:119 +0x17b
    github.com/ApsaraDB/PolarDB-Stack-Daemon/polar-controller-manager/node_net_status.(*PolarNodeNetworkProbe).__initSSH(0xc000529f80, 0x1b, 0xc0001184e0)
    	/go/src/github.com/ApsaraDB/PolarDB-Stack-Daemon/polar-controller-manager/node_net_status/node_network_probe.go:547 +0x36f
    github.com/ApsaraDB/PolarDB-Stack-Daemon/polar-controller-manager/node_net_status.(*PolarNodeNetworkProbe).Init(0xc000529f80, 0x0, 0x0)
    	/go/src/github.com/ApsaraDB/PolarDB-Stack-Daemon/polar-controller-manager/node_net_status/node_network_probe.go:164 +0xc5
    github.com/ApsaraDB/PolarDB-Stack-Daemon/polar-controller-manager/node_net_status.StartNodeNetworkProbe(0xc00042e000, 0xc000048540)
    	/go/src/github.com/ApsaraDB/PolarDB-Stack-Daemon/polar-controller-manager/node_net_status/node_network_probe.go:116 +0x208
    created by github.com/ApsaraDB/PolarDB-Stack-Daemon/cmd/daemon/app.Run
    	/go/src/github.com/ApsaraDB/PolarDB-Stack-Daemon/cmd/daemon/app/contorllermanager.go:97 +0x1ae
    

    k8s: v1.23 (3 machines), docker: v20.10.12, mysql: v8.0.26

    Here is the information of the pods:

    NAME                                       READY   STATUS             RESTARTS       AGE
    calico-kube-controllers-85b5b5888d-rcpmx   1/1     Running            2 (6d3h ago)   10d
    calico-node-9dcsb                          1/1     Running            0              10d
    calico-node-knnwv                          1/1     Running            0              10d
    calico-node-wgf4h                          1/1     Running            2 (6d3h ago)   10d
    coredns-64897985d-tphjz                    1/1     Running            2 (6d3h ago)   10d
    coredns-64897985d-vq2cq                    1/1     Running            2 (6d3h ago)   10d
    etcd-vm08-1                                1/1     Running            7 (6d3h ago)   10d
    kube-apiserver-vm08-1                      1/1     Running            7 (6d3h ago)   10d
    kube-controller-manager-vm08-1             1/1     Running            3 (6d3h ago)   10d
    kube-proxy-ctc85                           1/1     Running            0              10d
    kube-proxy-gzpxg                           1/1     Running            2 (6d3h ago)   10d
    kube-proxy-vdxmm                           1/1     Running            0              10d
    kube-scheduler-vm08-1                      1/1     Running            3 (6d3h ago)   10d
    manager-65dcc96d8d-49d4z                   1/1     Running            0              6m44s
    manager-65dcc96d8d-6r6ql                   1/1     Running            0              6m44s
    manager-65dcc96d8d-l9rvp                   1/1     Running            0              6m44s
    polardb-sms-manager-66db8bbcbf-4dr7q       1/1     Running            0              6m44s
    polardb-sms-manager-66db8bbcbf-6mhpc       1/1     Running            0              6m44s
    polardb-sms-manager-66db8bbcbf-qzvwf       1/1     Running            0              6m44s
    polarstack-daemon-2fpcg                    0/1     CrashLoopBackOff   6 (48s ago)    6m44s
    polarstack-daemon-knpxs                    0/1     CrashLoopBackOff   6 (47s ago)    6m44s
    polarstack-daemon-mthf7                    0/1     CrashLoopBackOff   6 (59s ago)    6m44s
    

    Here is the information of the ConfigMaps:

    NAME                                                              DATA   AGE
    calico-config                                                     4      10d
    ccm-config                                                        6      24m
    cloud-provider-port-usage-vm08-1                                  0      2d5h
    cloud-provider-port-usage-vm08-2                                  0      2d5h
    cloud-provider-port-usage-vm08-3                                  0      2d5h
    cloud-provider-wwid-usage-vm08-2                                  0      4h51m
    cloud-provider-wwid-usage-vm08-3                                  0      4h51m
    controller-config                                                 27     24m
    coredns                                                           1      10d
    extension-apiserver-authentication                                6      10d
    instance-system-resources                                         3      24m
    kube-proxy                                                        2      10d
    kube-root-ca.crt                                                  1      10d
    kubeadm-config                                                    1      10d
    kubelet-config-1.23                                               1      10d
    metabase-config                                                   1      24m
    mpd.polardb.aliyun.com                                            0      6d2h
    polardb-sms-manager                                               1      24m
    polardb4mpd-controller                                            5      24m
    polarstack-daemon-version-availability-vm08-1                     2      2d5h
    polarstack-daemon-version-availability-vm08-2                     2      2d5h
    polarstack-daemon-version-availability-vm08-3                     2      2d5h
    postgresql-1-0-level-polar-o-x4-large-config-rwo                  17     24m
    postgresql-1-0-level-polar-o-x4-large-resource-rwo                12     24m
    postgresql-1-0-level-polar-o-x4-medium-config-rwo                 17     24m
    postgresql-1-0-level-polar-o-x4-medium-resource-rwo               12     24m
    postgresql-1-0-level-polar-o-x4-xlarge-config-rwo                 17     24m
    postgresql-1-0-level-polar-o-x4-xlarge-resource-rwo               12     24m
    postgresql-1-0-level-polar-o-x8-12xlarge-config-rwo               17     24m
    postgresql-1-0-level-polar-o-x8-12xlarge-exclusive-config-rwo     17     24m
    postgresql-1-0-level-polar-o-x8-12xlarge-exclusive-resource-rwo   13     24m
    postgresql-1-0-level-polar-o-x8-12xlarge-resource-rwo             14     24m
    postgresql-1-0-level-polar-o-x8-2xlarge-config-rwo                17     24m
    postgresql-1-0-level-polar-o-x8-2xlarge-exclusive-config-rwo      17     24m
    postgresql-1-0-level-polar-o-x8-2xlarge-exclusive-resource-rwo    13     24m
    postgresql-1-0-level-polar-o-x8-2xlarge-resource-rwo              12     24m
    postgresql-1-0-level-polar-o-x8-4xlarge-config-rwo                17     24m
    postgresql-1-0-level-polar-o-x8-4xlarge-exclusive-config-rwo      17     24m
    postgresql-1-0-level-polar-o-x8-4xlarge-exclusive-resource-rwo    13     24m
    postgresql-1-0-level-polar-o-x8-4xlarge-resource-rwo              12     24m
    postgresql-1-0-level-polar-o-x8-xlarge-config-rwo                 17     24m
    postgresql-1-0-level-polar-o-x8-xlarge-resource-rwo               12     24m
    postgresql-1-0-minor-version-info-rwo-image-open                  6      24m
    postgresql-1-0-mycnf-template-rwo                                 1      24m
    
  • Chart can't be applied on k8s version 1.23

    I tried to install PolarDB with install.sh, but I get the following error:

    install.go:178: [debug] Original chart version: ""
    install.go:195: [debug] CHART PATH: /home/vm08-1/PolarDB-Stack-Operator/polardb-stack-chart
    
    Error: INSTALLATION FAILED: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
    helm.go:84: [debug] unable to recognize "": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
    unable to build kubernetes objects from release manifest
    
    

    I checked config/all.yaml in this repo against the official Kubernetes documentation and found that the manifest's API versions are not valid for k8s 1.23 (apiextensions.k8s.io/v1beta1 was removed in 1.22). Will the development team update the chart to fit the latest k8s in the future?

  • Applying config/all.yaml problem

    When I try to apply config/all.yaml, I get the following error: Error from server (Invalid): error when creating "config/all.yaml": Deployment.apps "manager" is invalid: spec.template.spec.containers[0].volumeMounts[2].name: Not found: "config-etcd". I read all.yaml and found no volume named config-etcd. How can I fix it?

    k8s: v1.23, docker: v20.10.12
