Topolvm-Operator provides lightweight, high-performance local storage for Kubernetes.

Topolvm-Operator

Topolvm-Operator is an open-source cloud-native local storage orchestrator for Kubernetes, based on topolvm.

Supported environments

  • Kubernetes: 1.20, 1.19
  • Node OS: Linux with LVM2
  • Filesystems: ext4, xfs

The CSIStorageCapacity feature gate must be enabled.
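
For example, on a kubeadm-managed cluster the gate can be enabled on the control-plane components roughly as sketched below. This is only an illustration; how the --feature-gates flag is passed depends on how your control plane is deployed (static pod manifests, a managed service, etc.).

  # Sketch of a kubeadm ClusterConfiguration fragment (illustrative only)
  apiVersion: kubeadm.k8s.io/v1beta2
  kind: ClusterConfiguration
  apiServer:
    extraArgs:
      feature-gates: "CSIStorageCapacity=true"
  scheduler:
    extraArgs:
      feature-gates: "CSIStorageCapacity=true"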

Features

  • Orchestrate topolvm
  • Prepare volume groups
  • Dynamic volume group expansion
  • Storage topology awareness
  • Volume capacity limits

Planned features

  • RAID for volume groups
  • Automatic discovery of available devices
  • Management of user-created volume groups

Components

  • operator: orchestrates topolvm; includes the TopolvmCluster controller and the ConfigMap controller
  • preparevg: prepares the volume group on each node

Diagram

The diagram below shows the components and how they work together:

component diagram

How components work

  1. The TopolvmCluster controller watches TopolvmCluster custom resources (a minimal example is sketched after this list).
  2. When a TopolvmCluster is created, the TopolvmCluster controller starts the ConfigMap controller to watch lvmd ConfigMaps.
  3. The TopolvmCluster controller creates the preparevg Job and the Topolvm-controller Deployment based on the TopolvmCluster spec.
  4. The preparevg Job on each node checks the disks specified in the TopolvmCluster and creates the volume group; if the volume group is created successfully, the Job creates the lvmd ConfigMap for that node.
  5. The ConfigMap controller detects the new lvmd ConfigMap and creates the Topolvm-node Deployment.
  6. The TopolvmCluster controller updates the TopolvmCluster status.
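
A minimal TopolvmCluster, for illustration (the field values here are placeholders; the fields themselves match the sample CRs shown in the comments further down this page):

  apiVersion: topolvm.cybozu.com/v2
  kind: TopolvmCluster
  metadata:
    name: topolvmcluster-sample
    namespace: topolvm-system
  spec:
    topolvmVersion: "quay.io/topolvm/topolvm-with-sidecar:0.10.2"
    storage:
      useAllNodes: true
      useAllDevices: true
      useLoop: false
      volumeGroupName: test
      className: hdd

Applying a CR like this triggers steps 2-6 above: the preparevg Job builds the volume group on each node, and the per-node lvmd ConfigMap then drives creation of the Topolvm-node Deployment.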

Getting started and Documentation

The docs directory contains documentation on installation and specifications.

Topolvm

topolvm-operator is based on topolvm; we forked topolvm/topolvm and made some changes.

See alauda/topolvm.

The changes are:

  • remove topolvm-scheduler
  • lvmd containerized

Docker images

Report a Bug

For filing bugs, suggesting improvements, or requesting new features, please open an issue.

Comments
  • Use Operator SDK for the project

    Removals:

    • TopolvmCluster CR v1 api
    • CRD conversion webhook

    Additions:

    • OperatorSDK Project layout
    • Usage of Kustomize to generate manifests
    • Make targets for creation of bundle & catalog images

    Minor Changes:

    • Rename of controller files

    No Changes:

    • Helm charts
    • Unit, e2e tests and github workflows

    Non-goals:

    • Actual bundle and catalog generations
    • Re-arch of controllers

    Ref: #86

  • WIP: topolvm lvmd deployed as daemonset

    Remove lvmd container from node deployment. Deploy lvmd as Daemonset.

    @little-guy-lxr: I am not sure whether I have chosen the best place to call "MakeLvmdDaemonSet". Any suggestions are welcome.

    This is ongoing, because we need to add some CI tests to this PR and collect more ideas.

    Signed-off-by: Juan Miguel Olmo Martínez [email protected]

  • Debugging a `phase: Failure` custom resource

    • Here are the corresponding CR and the resulting lvmd ConfigMap
    • Issue: phase: "" isn't being set, so the overall CR is in phase: Failure
    ---
    apiVersion: topolvm.cybozu.com/v2
    kind: TopolvmCluster
    metadata:
      annotations:
        kubectl.kubernetes.io/last-applied-configuration: |
          {"apiVersion":"topolvm.cybozu.com/v2","kind":"TopolvmCluster","metadata":{"annotations":{},"name":"sample-cr","namespace":"topolvm-system"},"spec":{"storage":{"className":"hdd","devices":[{"name":"/dev/nvme1n1","type":"disk"},{"name":"/dev/nvme2n1","type":"disk"},{"name":"/dev/nvme3n1","type":"disk"},{"name":"/dev/nvme4n1","type":"disk"}],"useAllDevices":false,"useAllNodes":true,"useLoop":false,"volumeGroupName":"test-master"},"topolvmVersion":"quay.io/topolvm/topolvm-with-sidecar:0.10.2"}}
      creationTimestamp: "2021-11-18T04:12:09Z"
      finalizers:
      - topolvmcluster.topolvm.cybozu.com
      generation: 1
      name: sample-cr
      namespace: topolvm-system
      resourceVersion: "25599"
      uid: a64f6090-56c3-414f-b219-cc92b5b12914
    spec:
      storage:
        className: hdd
        devices:
        - name: /dev/nvme1n1
          type: disk
        - name: /dev/nvme2n1
          type: disk
        - name: /dev/nvme3n1
          type: disk
        - name: /dev/nvme4n1
          type: disk
        useAllDevices: false
        useAllNodes: true
        useLoop: false
        volumeGroupName: test-master
      topolvmVersion: quay.io/topolvm/topolvm-with-sidecar:0.10.2
    status:
      nodeStorageState:
      - failClasses: []
        loops: null
        node: ip-10-0-146-72.ap-south-1.compute.internal
        phase: "" # why is this empty :/
        successClasses:
        - className: hdd
          state: create successful
          vgName: test-master
      phase: Failure # due to empty phase overall phase is 'Failure'
    ---
    apiVersion: v1
    data:
      lvmd.yaml: |
        socket-name: /run/topolvm/lvmd.sock
        device-classes:
        - name: hdd
          volume-group: test-master
          default: true
      status.json: '{"node":"ip-10-0-146-72.ap-south-1.compute.internal","phase":"","failClasses":[],"successClasses":[{"className":"hdd","vgName":"test-master","state":"create
        successful"}],"loops":null}'
    kind: ConfigMap
    metadata:
      annotations:
        node-name: ip-10-0-146-72.ap-south-1.compute.internal
      creationTimestamp: "2021-11-18T04:12:20Z"
      labels:
        topolvm/lvmdconfig: lvmdconfig
      name: lvmdconfig-ip-10-0-146-72.ap-south-1.compute.internal
      namespace: topolvm-system
      ownerReferences:
      - apiVersion: topolvm.cybozu.com/v2
        blockOwnerDeletion: true
        controller: true
        kind: TopolvmCluster
        name: sample-cr
        uid: a64f6090-56c3-414f-b219-cc92b5b12914
      resourceVersion: "25598"
      uid: 92e247a0-e58a-45c4-9295-c1eacb703571
    
    • Output from a single-node cluster
    sh-4.4# pvs
      PV           VG          Fmt  Attr PSize   PFree  
      /dev/nvme1n1 test-master lvm2 a--  <10.00g <10.00g
      /dev/nvme2n1 test-master lvm2 a--  <10.00g <10.00g
      /dev/nvme3n1 test-master lvm2 a--  <10.00g <10.00g
      /dev/nvme4n1 test-master lvm2 a--  <10.00g <10.00g
    sh-4.4# vgs
      VG          #PV #LV #SN Attr   VSize  VFree 
      test-master   4   0   0 wz--n- 39.98g 39.98g
    sh-4.4# lvs
    sh-4.4# 
    
    • Corresponding logs from operator
    2021-11-18 04:12:20.236230 D | lvmd-config: got configmap start process
    2021-11-18 04:12:20.240571 D | topolvm-cluster-reconciler: UpdateStatus phase:Failure
    2021-11-18 04:12:20.245161 D | topolvm-cluster-reconciler: start reconcile
    2021-11-18 04:12:20.250330 I | lvmd-config: psp topolvm-node existing
    2021-11-18 04:12:20.252129 I | lvmd-config: psp topolvm-preparevg existing
    2021-11-18 04:12:20.254318 I | topolvm-cluster-reconciler: class info nothing change no need to start prepare volumegroup job
    2021-11-18 04:12:20.257338 I | topolvm-cluster-reconciler: controller deployment no change need not reconcile
    2021-11-18 04:12:20.259146 I | lvmd-config: cmlvmdconfig-ip-10-0-146-72.ap-south-1.compute.internal  update but data not change no need to update node deployment
    2021-11-18 04:12:20.260313 I | topolvm-cluster-reconciler: node deployment no change need not reconcile
    2021-11-18 04:12:29.950809 D | topolvm-cluster-reconciler: no need to update cluster status
    2021-11-18 04:12:29.950833 D | op-k8sutil: creating servicemonitor topolvm-service-monitor
    [...] last two lines are repeated and no o/p of another reconcile
    
    • This function is not reconciled again and updates the status only once: https://github.com/alauda/topolvm-operator/blob/29fb862c949804637241b7157c9d4c2c539ca485/controllers/topolvmcluster_controller.go#L440
  • Failed to create (intermittent) prepareVG job on AWS

    • Did the usual: created an OCP cluster, added disks, installed the operator, etc.

    • Operator Image (created from origin-topolvm branch): quay.io/rhn_support_lgangava/toperator:origin

    • Issue:

    1. Unable to deploy pod for preparing volume group
    # k describe jobs topolvm-prepare-vg-ip-10-0-155-231.ap-south-1.compute.internal                                                                                                             
    Name:             topolvm-prepare-vg-ip-10-0-155-231.ap-south-1.compute.internal                                                                                                             
    Namespace:        topolvm-system  
    [...]
    Events:
      Type     Reason        Age   From            Message
      ----     ------        ----  ----            -------
    Warning  FailedCreate  30s   job-controller  Error creating: Pod "topolvm-prepare-vg-ip-10-0-155-231.ap-south-1.compute.--1-wxw6f" is invalid: [metadata.generateName: Invalid value: "topolvm-prepare-vg-ip-10-0-155-231.ap-south-1.compute.--1-": a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*'), metadata.name: Invalid value: "topolvm-prepare-vg-ip-10-0-155-231.ap-south-1.compute.--1-wxw6f": a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')]
    [...]
    
    • The corresponding code looks fine, but it is not clear why the pod name starts with double hyphens (--1-): https://github.com/alauda/topolvm-operator/blob/dad1c0a07d8bbe436c79301208d15b0e1806754f/pkg/operator/volumegroup/vgmanager.go#L144-L151
  • Feature request: lvmd deployed as daemonset

    This change is suggested by the topolvm project recommendations for topolvm production deployments.

    Currently lvmd is deployed as part of the node deployment. There is one node deployment per cluster node, and it ensures that the lvmd config passed to each lvmd container (via the lvmdconfig ConfigMap) provides the proper values for the following (see the sketch after this list):

    • Communication socket: must be the same one used by the topolvm-node containers
    • Device-class configuration: can differ for each node, depending on the user's preferences in the TopolvmCluster CRD
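
    For reference, the per-node lvmd configuration that must stay consistent looks roughly like the sketch below (values taken from the lvmdconfig ConfigMap shown in the debugging comment above; socket path, class name, and volume group vary per cluster):

    socket-name: /run/topolvm/lvmd.sock
    device-classes:
      - name: hdd
        volume-group: test-master
        default: true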

    Things to discuss:

    1. Possible implementations:

    a) Deploy lvmd as a DaemonSet: requires passing the per-node configuration to the lvmd instance running on each node in some way... deploy one DaemonSet per node?

    b) Change the node deployment to a DaemonSet.

    c) Change lvmd to accept variable configuration via command-line arguments instead of a config file.

  • Solve problems starting topolvm-controller using topolvm original image

    These changes must be applied on top of the changes proposed in https://github.com/alauda/topolvm-operator/pull/36.

    • The topolvm image in the TopolvmCluster now points to the original topolvm image.
    • Added a self-signed certificate to make it possible to start the metrics and webhook HTTPS servers in topolvm-controller.
    • Added the NAMESPACE env var to the CSI provisioner container in topolvm-controller.
    • Fixed some typos.

    TODO: Discuss the management of certificates (helm chart vs CRDs)

    Signed-off-by: Juan Miguel Olmo Martínez [email protected]

  • Topolvm operator uses forked version of topolvm

    @little-guy-lxr

    Our team at Red Hat is willing to contribute to Topolvm operator.

    Currently, Topolvm Operator uses a forked version of Topolvm. However, we would prefer to use the main topolvm project instead of a forked version.

    The operator docs list two major changes in the forked version: a) topolvm-scheduler removed, b) lvmd containerized.

    • The TopoLVM project now provides a way to run lvmd as a container. See

    Do you think the Topolvm operator could start using the original project instead of the forked version?

  • Disk discovery is only possible after applying the cluster.yaml file.

    Use case:  As a storage admin, I should be able to discover available disks on the cluster in order to make better decisions while creating LVM volume groups. 
    

    The TopoLVM operator has a discovery mechanism that stores the discovered device results in the lvmdconfig-<nodeName> ConfigMap under data.devices. This works well, but the ConfigMap is available only after the cluster CR is applied, so the list of available disks is not visible to the user (storage admin) before creating the cluster.

    @fanzy618 @little-guy-lxr This is not an issue but just a use case for device discovery. Is there a way to get the list of devices before the cluster CR is applied? What do you think about this use case?

    Also, is there a more convenient way to reach out to you guys for questions? (slack, etc)

    Thanks.

  • Moving to Operator SDK

    @little-guy-lxr

    • Can you please create a new branch for operator-sdk?
    • I'll send commits to it from time to time, and once it is on par with the current functionality we can switch branches?
    • Do you still wish to support api/v1?

    I don't have triage rights; please assign this issue to me.

  • Need for extra filters for disk discovery in cloud environments (GCP)

    hello,

    • Thanks for all the effort in creating and maintaining this project. Could you please help me with the issues below, if possible?
    • Deployed the Topolvm operator on OCP 4.x, with the required feature gates enabled, on GCP via OperatorHub (v2.0.0)
    • Below is the TopolvmCluster manifest:
    apiVersion: topolvm.cybozu.com/v2
    kind: TopolvmCluster
    metadata:
      name: topolvmcluster-sample
      namespace: topolvm-system
    spec:
      topolvmVersion: "alaudapublic/topolvm:2.0.0"
      storage:
        useAllNodes: true
        useAllDevices: true
        useLoop: true
        volumeGroupName: test
        className: hdd
    
    • Consider that each node (3 masters + 3 workers) has the disks below:
    -> oc debug nodes/xxx-worker-a-zrtzc.c.xxx -- lsblk
    Starting pod/xxx-worker-a-zrtzc.c.xxx-debug ...
    To use host binaries, run `chroot /host`
    NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    sda      8:0    0   128G  0 disk 
    |-sda1   8:1    0     1M  0 part 
    |-sda2   8:2    0   127M  0 part 
    |-sda3   8:3    0   384M  0 part /host/boot
    `-sda4   8:4    0 127.5G  0 part /host/sysroot
    sdb      8:16   0    10G  0 disk 
    sdc      8:32   0    10G  0 disk 
    
    Removing debug pod ...
    
    • Ideally /dev/sd{b,c} should be considered for VG creation and the partitions on /dev/sda should be excluded
    • However, below is the data stored under devices in lvmdconfig-xxx
    [
      {
        "name": "sda1",
        "parent": "sda",
        "hasChildren": false,
        "devLinks": "/dev/disk/by-id/google-persistent-disk-0-part1 /dev/disk/by-partuuid/2c82abb4-778a-41b5-8e2b-1576bd2b0b8f /dev/disk/by-path/pci-0000:00:03.0-scsi-0:0:1:0-part1 /dev/disk/by-id/scsi-0Google_PersistentDisk_persistent-disk-0-part1 /dev/disk/by-partlabel/BIOS-BOOT",
        "size": 1048576,
        "uuid": "",
        "serial": "0Google_PersistentDisk_persistent-disk-0",
        "type": "part",
        "rotational": false,
        "readOnly": false,
        "Partitions": null,
        "filesystem": "",
        "vendor": "Google",
        "model": "PersistentDisk",
        "wwn": "",
        "wwnVendorExtension": "",
        "empty": false,
        "real-path": "/dev/sda1",
        "kernel-name": "sda1"
      }
    ]
    

    Issues:

    1. The operator picked up sda1 for VG creation and failed due to insufficient space (1 MB), so we might need a minimum disk size filter for VG candidates?
    2. sda1 shouldn't be picked up in the first place, as it is of type BIOS-BOOT and should be skipped; it might contain the MBR?
    3. As a result of issue 1, when VG creation failed the operator didn't move on to the other healthy devices sd{b,c}, since every reconcile fails anyway; it would probably be better to create the VG on those?
  • topolvm controller certs management

    Implements: #62

    I have included a new parameter in the cluster CRD for providing the name of the secret containing the certificate to be used by the topolvm-controller mutating webhook. This allows the user to use cert-manager, or any other certificate provisioner, to create the certificate; all that is required is a TLS secret containing the certificate and the key (see the sketch below).

    If the user does not provide the certsSecret parameter in the CRD, a self-signed certificate is created automatically and used in the topolvm-controller deployment. This also happens if the user has provided the certsSecret parameter but with a wrong name, or if the secret does not exist in the topolvm namespace.
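
    For illustration, the kind of TLS secret that certsSecret could reference might look like the sketch below (the secret name is hypothetical; it would be created, e.g. by cert-manager, in the topolvm namespace):

    apiVersion: v1
    kind: Secret
    metadata:
      name: topolvm-controller-certs   # hypothetical name to set in certsSecret
      namespace: topolvm-system
    type: kubernetes.io/tls
    data:
      tls.crt: <base64-encoded certificate>
      tls.key: <base64-encoded private key>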

    After discussion: more work is needed to add the new parameter to the Helm chart, add documentation, and test.

  • bug: can't deploy in k8s 1.21

    The Helm chart's default topolvm image does not support k8s 1.21.

    After changing it to quay.io/topolvm/topolvm-with-sidecar:0.10, topolvm-operator logs this error:

    {"level":"error","ts":1639815831.7687695,"msg":"error received after stop sequence was engaged","error":"leader election lost"}
    {"level":"error","ts":1639815831.768764,"logger":"setup","msg":"problem running manager","error":"open /certs/tls.crt: no such file or directory","stacktrace":"github.com/spf13/cobra.(*Command).execute\n\t/go/pkg/mod/github.com/spf13/[email protected]/command.go:856\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/go/pkg/mod/github.com/spf13/[email protected]/command.go:974\ngithub.com/spf13/cobra.(*Command).Execute\n\t/go/pkg/mod/github.com/spf13/[email protected]/command.go:902\ngithub.com/topolvm/topolvm/pkg/topolvm-controller/cmd.Execute\n\t/workdir/pkg/topolvm-controller/cmd/root.go:39\nmain.main\n\t/workdir/pkg/hypertopolvm/main.go:44\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:225"}
    Error: open /certs/tls.crt: no such file or directory
    open /certs/tls.crt: no such file or directory
    
  • Using topolvm image without sidecars

    On OpenShift, it is required to use the CSI sidecar images shipped with OpenShift. I would like to modify the operator to use the default sidecar containers available and to use the topolvm image that is built without sidecars.

    Proposal: there are three ways to provide the images:

    1. Hardcode default CSI sidecar images in the code
    2. In the operator config map
    3. Via operator ENV variables

    Highest priority is given to the values in the ENV variables, followed by the ConfigMap. If neither is available, the default hardcoded values are used.
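
    As a rough sketch of option 2 (all key names and image references below are hypothetical; nothing like this exists in the operator yet):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: topolvm-operator-config      # hypothetical name
      namespace: topolvm-system
    data:
      csi-provisioner-image: "registry.example/csi-external-provisioner:latest"
      csi-resizer-image: "registry.example/csi-external-resizer:latest"

    Per the proposal, an operator ENV variable would override these values, and the hardcoded defaults would apply only if neither the ENV variable nor the ConfigMap entry is set.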

  • [Question] Support for devices at storage level with no deviceclasses

    • I can think of a use case for the CR below: specific devices from specific nodes, but no deviceClass for the nodes. Just checking whether it is intentional not to support this kind of spec, or whether it is a bug?
    ---
    apiVersion: topolvm.cybozu.com/v2
    kind: TopolvmCluster
    metadata:
      name: sample-cr
      namespace: topolvm-system
    spec:
      topolvmVersion: "quay.io/topolvm/topolvm-with-sidecar:0.10.2"
      storage:
        # single device class with nodename, devices at storage level
        useAllNodes: false
        useAllDevices: false
        useLoop: false
        devices:
          - name: "/dev/nvme1n1"
            type: "disk"
          - name: "/dev/nvme2n1"
            type: "disk"
          - name: "/dev/nvme3n1"
            type: "disk"
          - name: "/dev/nvme4n1"
            type: "disk"
        deviceClasses:
          - nodeName: "ip-10-0-146-72.ap-south-1.compute.internal"
            classes:
              - volumeGroup: test-master
                className: hdd
                default: true
    
  • topolvm operator fails to push monitoring metrics to Prometheus

    The topolvm operator does not expose any monitoring metrics from the operator and node pods, so there is no way to create alerts and alerting rules for topolvm in Kubernetes / OpenShift.
