Open-Local

English | 简体中文

Open-Local is a local disk management system composed of multiple components. With Open-Local, using local storage in Kubernetes will be as simple as centralized storage.

Features

  • Local storage pool management
  • Dynamic volume provisioning
  • Extended scheduler
  • Volume expansion
  • Volume snapshot
  • Volume metrics

Overall Architecture

Open-Local contains three types of components:

  • Scheduler extender: an extension of the Kubernetes scheduler that adds a local-storage-aware scheduling algorithm
  • CSI plugins: provide the ability to create/delete volumes, expand volumes, and take volume snapshots (see the example after this list)
  • Agent: runs on every node in the K8s cluster and reports local storage device information to the scheduler extender
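
With the scheduler extender and CSI plugins in place, consuming Open-Local storage looks like ordinary dynamic provisioning. A minimal sketch, assuming the chart installs a storage class named open-local-device-hdd (the name appears in the issue reports below); adjust it to the classes your installation actually provides:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: demo-pvc
    spec:
      accessModes: ["ReadWriteOnce"]
      volumeMode: Filesystem
      resources:
        requests:
          storage: 5Gi
      storageClassName: open-local-device-hdd
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: demo-pod
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sh", "-c", "sleep 3600"]
        volumeMounts:
        - name: data
          mountPath: /data   # the local volume is mounted here once the pod is scheduled
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: demo-pvc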

Who uses Open-Local

Open-Local is widely used in production environments; products currently using it include:

  • Alibaba Cloud ECP (Enterprise Container Platform)
  • Alibaba Cloud ADP (Cloud-Native Application Delivery Platform)
  • AntStack Plus Products

User guide

More details here

License

Apache 2.0 License

Owner

Alibaba Open Source

Comments
  • Unable to install open-local on Minikube

    Unable to install open-local on Minikube

    Hello,

    I followed the installation guide here

    When I typed kubectl get po -nkube-system -l app=open-local the output was:

    NAME                                              READY   STATUS      RESTARTS   AGE
    open-local-agent-p2xdq                            3/3     Running     0          13m
    open-local-csi-provisioner-59cd8644ff-n52xc       1/1     Running     0          13m
    open-local-csi-resizer-554f54b5b4-xkw97           1/1     Running     0          13m
    open-local-csi-snapshotter-64dff4b689-9g9wl       1/1     Running     0          13m
    open-local-init-job--1-f9vzz                      0/1     Completed   0          13m
    open-local-init-job--1-j7j8b                      0/1     Completed   0          13m
    open-local-init-job--1-lmvqd                      0/1     Completed   0          13m
    open-local-scheduler-extender-5dc8d8bb49-n44pn    1/1     Running     0          13m
    open-local-snapshot-controller-846c8f6578-2bfhx   1/1     Running     0          13m
    

    However, when I typed kubectl get nodelocalstorage, I got this output:

    NAME       STATE   PHASE   AGENTUPDATEAT   SCHEDULERUPDATEAT   SCHEDULERUPDATESTATUS
    minikube                                                       
    

    According to the installation guide, the STATE column should display DiskReady.

    And if I ran kubectl get nls -o yaml, the output was:

    apiVersion: v1
    items:
    - apiVersion: csi.aliyun.com/v1alpha1
      kind: NodeLocalStorage
      metadata:
        creationTimestamp: "2021-09-20T13:37:09Z"
        generation: 1
        name: minikube
        resourceVersion: "615"
        uid: 6f193362-e2b2-4053-a6e6-81de35c96eaf
      spec:
        listConfig:
          devices: {}
          mountPoints:
            include:
            - /mnt/open-local/disk-[0-9]+
          vgs:
            include:
            - open-local-pool-[0-9]+
        nodeName: minikube
        resourceToBeInited:
          vgs:
          - devices:
            - /dev/sdb
            name: open-local-pool-0
    kind: List
    metadata:
      resourceVersion: ""
      selfLink: ""
    

    I am running Minikube on my desktop computer, which has an SSD.

    Thank you for your help.

  • Use an existing VG?

    Use an existing VG?

    Question

    Is it possible to use an existing VG with this project? I already have a PV and VG created, and the VG has 100GB free to create LVs.

    Would it be possible to configure open-local to create new LVs in the existing VG? If so, I would appreciate any pointers.
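
    A minimal sketch of what that could look like, assuming the existing VG is named my-existing-vg (a placeholder) and reusing the listConfig structure from the Minikube report above; whether you edit the NodeLocalStorage directly or the NodeLocalStorageInitConfig that generates it depends on your installation:

    apiVersion: csi.aliyun.com/v1alpha1
    kind: NodeLocalStorage
    metadata:
      name: <node-name>          # placeholder: one object per node
    spec:
      nodeName: <node-name>
      listConfig:
        vgs:
          include:
          - open-local-pool-[0-9]+
          - my-existing-vg       # placeholder: the VG you already created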

  • Helm install CSIDriver error

    Helm install CSIDriver error

    System info

    Via uname -a && kubectl version && helm version && apt-show-versions lvm2 | grep amd:

    Linux master-node 4.15.0-153-generic #160-Ubuntu SMP Thu Jul 29 06:54:29 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
    Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.0", GitCommit:"c2b5237ccd9c0f1d600d3072634ca66cefdf272f", GitTreeState:"clean", BuildDate:"2021-08-04T18:03:20Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.0", GitCommit:"c2b5237ccd9c0f1d600d3072634ca66cefdf272f", GitTreeState:"clean", BuildDate:"2021-08-04T17:57:25Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/amd64"}
    version.BuildInfo{Version:"v3.6.3", GitCommit:"d506314abfb5d21419df8c7e7e68012379db2354", GitTreeState:"clean", GoVersion:"go1.16.5"}
    lvm2:amd64/bionic-updates 2.02.176-4.1ubuntu3.18.04.3 uptodate
    

    Bug Description

    Setup

    wget https://github.com/alibaba/open-local/archive/refs/tags/v0.1.1.zip
    unzip v0.1.1.zip
    cd open-local-0.1.1
    

    The Problem

    Using release 0.1.1 of open-local and following the current user guide's instructions, when I run

    helm install open-local ./helm
    

    -- I get the following error:

    Error: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "CSIDriver" in version "storage.k8s.io/v1beta1"
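
    Kubernetes 1.22 removed the storage.k8s.io/v1beta1 API for CSIDriver, so a chart that still templates v1beta1 fails on a 1.22 cluster. A hedged sketch of the v1 form the template needs to produce (the spec fields are assumptions, not copied from the chart):

    apiVersion: storage.k8s.io/v1
    kind: CSIDriver
    metadata:
      name: local.csi.aliyun.com
    spec:
      attachRequired: false   # assumption: local volumes need no remote attach step
      podInfoOnMount: true    # assumption

    Newer Open-Local releases may already template the v1 API, in which case upgrading the chart is the simpler fix.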
    
  • device schedule with error Get Response StatusCode 500

    device schedule with error Get Response StatusCode 500

    Ⅰ. Issue Description

    An error is reported when creating a new PVC:

      Normal   WaitForFirstConsumer  2m13s                 persistentvolume-controller                                      waiting for first consumer to be created before binding
      Normal   ExternalProvisioning  11s (x11 over 2m13s)  persistentvolume-controller                                      waiting for a volume to be created, either by external provisioner "local.csi.aliyun.com" or manually created by system administrator
      Normal   Provisioning          6s (x8 over 2m13s)    local.csi.aliyun.com_node1_080636cb-a68d-4ee8-a3a3-db5ae5634cbb  External provisioner is provisioning volume for claim "demo/pvc-open-local-device-hdd-test2-0-d0"
      Warning  ProvisioningFailed    6s (x8 over 2m13s)    local.csi.aliyun.com_node1_080636cb-a68d-4ee8-a3a3-db5ae5634cbb  failed to provision volume with StorageClass "open-local-device-hdd": rpc error: code = InvalidArgument desc = Parse Device part schedule info error rpc error: code = InvalidArgument desc = device schedule with error Get Response StatusCode 500, Response: failed to allocate local storage for pvc demo/pvc-open-local-device-hdd-test2-0-d0: Insufficient Device storage, requested 0, available 0, capacity 0
    

    Ⅱ. Describe what happened

    I deployed open-local in a k3s cluster, but because k3s does not have the kube-scheduler configuration files, the init-job cannot run properly:

    modifying kube-scheduler.yaml...
    grep: /etc/kubernetes/manifests/kube-scheduler.yaml: No such file or directory
    + sed -i '/  hostNetwork: true/a \  dnsPolicy: ClusterFirstWithHostNet' /etc/kubernetes/manifests/kube-scheduler.yaml
    sed: can't read /etc/kubernetes/manifests/kube-scheduler.yaml: No such file or directory
    

    All the other related components are running fine:

    NAME                                              READY   STATUS    RESTARTS   AGE
    open-local-agent-7sd9d                            3/3     Running   0          22h
    open-local-csi-provisioner-785b7f99bd-hlqdv       1/1     Running   0          22h
    open-local-agent-8kg4r                            3/3     Running   0          22h
    open-local-agent-jljlv                            3/3     Running   0          22h
    open-local-scheduler-extender-5d48bc465c-r42pn    1/1     Running   0          22h
    open-local-snapshot-controller-785987975c-hhgr7   1/1     Running   0          22h
    open-local-csi-snapshotter-5f797c4596-wml76       1/1     Running   0          22h
    open-local-csi-resizer-7c9698976f-f7tzz           1/1     Running   0          22h
    
    master1 [~]$ kubectl get nodelocalstorage -ojson master1|jq .status.filteredStorageInfo
    {
      "updateStatusInfo": {
        "lastUpdateTime": "2021-11-12T11:02:56Z",
        "updateStatus": "accepted"
      },
      "volumeGroups": [
        "open-local-pool-0"
      ]
    }
    master1 [~]$ kubectl get nodelocalstorage -ojson node1|jq .status.filteredStorageInfo
    {
      "updateStatusInfo": {
        "lastUpdateTime": "2021-11-12T11:01:56Z",
        "updateStatus": "accepted"
      },
      "volumeGroups": [
        "open-local-pool-0"
      ]
    }
    master1 [~]$ kubectl get nodelocalstorage -ojson node2|jq .status.filteredStorageInfo
    {
      "updateStatusInfo": {
        "lastUpdateTime": "2021-11-12T11:02:56Z",
        "updateStatus": "accepted"
      },
      "volumeGroups": [
        "open-local-pool-0"
      ]
    }
    

    So I suspect that a scheduling problem is preventing the PVC/PV from being created properly.

    Ⅲ. Describe what you expected to happen

    I hope open-local can be deployed and run normally on k3s.

    Ⅳ. How to reproduce it (as minimally and precisely as possible)

    1. It is suggested to implement the scheduling customization the way k8s-scheduler-extender does, to be compatible with k3s and other platforms.

    Ⅴ. Anything else we need to know?

    Ⅵ. Environment:

    • Open-Local version:
    • OS (e.g. from /etc/os-release):
    • Kernel (e.g. uname -a):
    • Install tools:
    • Others:
  • [feature] support SPDK

    [feature] support SPDK

    Why you need it?

    vhost-user-blk/scsi is a highly efficient way to transport data in virtualized environments. Open-Local currently doesn't support vhost-user-blk/scsi.

    How it could be?

    The Storage Performance Development Kit (SPDK) can provide vhost support. To support vhost-user-blk/scsi in Open-Local, the node CSI driver should communicate with SPDK. The following is a brief description:

     1. NodeStageVolume / NodeUnStageVolume
         n/a
     2. NodePublishVolume
         - Create bdev
             # scripts/rpc.py bdev_aio_create <path_to_host_block_dev> <bdev_name>
             # scripts/rpc.py bdev_lvol_create_lvstore <bdev_name> <lvs_name>
             # scripts/rpc.py bdev_lvol_create <lvol_name> <size> -l <lvs_name>
         - Create vhost device
             # scripts/rpc.py vhost_create_blk_controller --cpumask 0x1 vhostblk0 <bdev_name>
             # mknod /var/run/kata-containers/vhost-user/block/devices/vhostblk0 b 241 0
             # mount --bind [...] /var/run/kata-containers/vhost-user/block/devices/vhostblk0 <target_path>
     3. NodeUnPublishVolume
         # umount <target_path>
         # scripts/rpc.py bdev_lvol_delete <lvol_name>
         # rm /var/run/kata-containers/vhost-user/block/devices/vhostblk0
     

    Besides, we need to add a field in NLSC and NLS to indicate whether the storage is provided by SPDK.
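
    Purely as an illustration of where such a flag could live (the field name spdk is hypothetical and does not exist today):

    apiVersion: csi.aliyun.com/v1alpha1
    kind: NodeLocalStorage
    metadata:
      name: <node-name>
    spec:
      nodeName: <node-name>
      spdk: true               # hypothetical field: storage on this node is provided by SPDK
      listConfig:
        vgs:
          include:
          - open-local-pool-[0-9]+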


    Other related information

  • snapshot-controller CrashLoopBackOff and pvc keep in pending status

    snapshot-controller CrashLoopBackOff and pvc keep in pending status

    Ⅰ. Issue Description

    I followed the instructions to install open-local, but it does not seem to work correctly; probably there are some bugs here. The snapshot-controller is in CrashLoopBackOff with the message "Failed to list v1 volumesnapshots with error=the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io)". Besides, when I create the example YAML, the PVC stays in Pending. I'm not sure whether the first problem causes the second.

    Ⅱ. Describe what happened

    The snapshot-controller is in CrashLoopBackOff, but the CRDs exist (I am not sure about their versions) (screenshots attached).

    The open-local-controller also gives the message "Failed to watch v1.VolumeSnapshotClass" (screenshot attached).

    I checked the ClusterRole of open-local (screenshot attached).

    The PVC is in Pending status (screenshots attached).

    The extender scheduler gives an error message (screenshots attached).

    The pod is also Pending (screenshot attached).

    The NodeLocalStorage is not created successfully (screenshot attached).

    I double-checked the NodeLocalStorageInitConfig, which is correct (screenshots attached).

    Ⅲ. Describe what you expected to happen

    The example YAML given in the instructions should be created successfully.

    Ⅵ. Environment:

    • Open-Local version: v0.5.5
    • OS (e.g. from /etc/os-release): centos 7.9
    • Kernel (e.g. uname -a): Linux kube-control-2 3.10.0-1160.el7.x86_64 #1 SMP Wed Nov 18 03:43:48 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
    • Install tools: helm
    • Others:
  • device PV fails to mount as fs

    device PV fails to mount as fs

    Ⅰ. Issue Description

    I try to use the device volume type and mount it as a filesystem. The PV seems to fail to mount.

    Ⅱ. Describe what happened

    Apply a yaml file like this:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: dev-fs-pvc
    spec:
      volumeMode: Filesystem
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi
      storageClassName: open-local-device-hdd
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: "open-local-test-dev-fs"
    spec:
      containers:
      - name: dev-fs
        image: busybox
        volumeMounts:
        - mountPath: "/data"
          name: data
        command:
        - stat 
        - /data
      volumes:
       - name: data
         persistentVolumeClaim:
           claimName: dev-fs-pvc
      restartPolicy: Never
    
    
    Events:
      Type     Reason            Age                From               Message
      ----     ------            ----               ----               -------
      Warning  FailedScheduling  38s                default-scheduler  running PreBind plugin "VolumeBinding": binding volumes: provisioning failed for PVC "dev-fs-pvc"
      Warning  FailedScheduling  35s                default-scheduler  running PreBind plugin "VolumeBinding": binding volumes: provisioning failed for PVC "dev-fs-pvc"
      Normal   Scheduled         31s                default-scheduler  Successfully assigned open-local/open-local-test-dev-fs to node2
      Warning  FailedMount       13s (x6 over 29s)  kubelet            MountVolume.SetUp failed for volume "local-7859a7ae-c56c-48eb-8c36-dd898f1ab22f" : rpc error: code = Internal desc = NodePublishVolume(FileSystem): mount device volume local-7859a7ae-c56c-48eb-8c36-dd898f1ab22f with path /var/lib/kubelet/pods/249942c1-26ee-41e4-8a05-1e67ee9deaab/volumes/kubernetes.io~csi/local-7859a7ae-c56c-48eb-8c36-dd898f1ab22f/mount with error: rpc error: code = Internal desc = mount failed: exit status 32
    Mounting command: mount
    Mounting arguments: -t ext4 -o rw,defaults /dev/vdd /var/lib/kubelet/pods/249942c1-26ee-41e4-8a05-1e67ee9deaab/volumes/kubernetes.io~csi/local-7859a7ae-c56c-48eb-8c36-dd898f1ab22f/mount
    Output: mount: wrong fs type, bad option, bad superblock on /dev/vdd,
           missing codepage or helper program, or other error
    
           In some cases useful info is found in syslog - try
           dmesg | tail or so.
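
    The mount failure says /dev/vdd carries no ext4 filesystem when the kubelet tries to mount it. As a diagnostic sketch (my suggestion, not part of the report), requesting the same class with volumeMode: Block shows whether the device itself is allocated and published correctly, independent of any filesystem formatting:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: dev-block-pvc
    spec:
      volumeMode: Block               # raw block device, no filesystem required
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 5Gi
      storageClassName: open-local-device-hdd
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: open-local-test-dev-block
    spec:
      containers:
      - name: dev-block
        image: busybox
        command: ["sh", "-c", "ls -l /dev/sdd && sleep 3600"]
        volumeDevices:                # volumeDevices instead of volumeMounts for Block mode
        - devicePath: /dev/sdd        # placeholder in-container device path
          name: data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: dev-block-pvc
      restartPolicy: Never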
    

    Ⅲ. Describe what you expected to happen

    The PV should be mounted and the pod should run.

    Ⅳ. How to reproduce it (as minimally and precisely as possible)

    1. Apply the YAML above
    2. The pod is stuck in the creating state
    3. kubectl describe the target pod and see the error message

    Ⅴ. Anything else we need to know?

    Ⅵ. Environment:

    • Open-Local version: 0.5.5
    • OS (e.g. from /etc/os-release): ubuntu
    • Kernel (e.g. uname -a): 5.15
    • Install tools: helm
    • Others:
  • make failed

    make failed

    Ⅰ. Issue Description

    I checked out the repo and ran make; it seems to fail out of the box. Am I missing something?

    Ⅱ. Describe what happened

    lee@ubuntu:~/workspace/picloud/open-local$ make
    go test -v ./...
    ?       github.com/alibaba/open-local/cmd       [no test files]
    ?       github.com/alibaba/open-local/cmd/agent [no test files]
    ?       github.com/alibaba/open-local/cmd/controller    [no test files]
    ?       github.com/alibaba/open-local/cmd/csi   [no test files]
    ?       github.com/alibaba/open-local/cmd/doc   [no test files]
    time="2022-06-08T12:15:26+02:00" level=info msg="test noResyncPeriodFunc"
    time="2022-06-08T12:15:26+02:00" level=info msg="test noResyncPeriodFunc"
    time="2022-06-08T12:15:26+02:00" level=info msg="test noResyncPeriodFunc"
    time="2022-06-08T12:15:26+02:00" level=info msg="Waiting for informer caches to sync"
    time="2022-06-08T12:15:26+02:00" level=info msg="starting http server on port 23000"
    time="2022-06-08T12:15:26+02:00" level=info msg="all informer caches are synced"
    === RUN   TestVGWithName
    time="2022-06-08T12:15:26+02:00" level=info msg="predicating pod testpod with nodes [[node-192.168.0.1 node-192.168.0.2 node-192.168.0.3 node-192.168.0.4]]"
    time="2022-06-08T12:15:26+02:00" level=info msg="predicating pod default/testpod with node node-192.168.0.1"
    time="2022-06-08T12:15:26+02:00" level=info msg="got pvc default/pvc-vg as lvm pvc"
    time="2022-06-08T12:15:26+02:00" level=info msg="allocating lvm volume for pod default/testpod"
    time="2022-06-08T12:15:26+02:00" level=error msg="Insufficient LVM storage on node node-192.168.0.1, vg is ssd, pvc requested 150Gi, vg used 0, vg capacity 100Gi"
    time="2022-06-08T12:15:26+02:00" level=info msg="fits: false,failReasons: [Insufficient LVM storage on node node-192.168.0.1, vg is ssd, pvc requested 150Gi, vg used 0, vg capacity 100Gi], err: Insufficient LVM storage on node node-192.168.0.1, vg is ssd, pvc requested 150Gi, vg used 0, vg capacity 100Gi"
    time="2022-06-08T12:15:26+02:00" level=info msg="pod=default/testpod, node=node-192.168.0.1,fits: false,failReasons: [Insufficient LVM storage on node node-192.168.0.1, vg is ssd, pvc requested 150Gi, vg used 0, vg capacity 100Gi], err: <nil>"
    time="2022-06-08T12:15:26+02:00" level=info msg="node node-192.168.0.1 is not suitable for pod default/testpod, reason: [Insufficient LVM storage on node node-192.168.0.1, vg is ssd, pvc requested 150Gi, vg used 0, vg capacity 100Gi] "
    time="2022-06-08T12:15:26+02:00" level=info msg="predicating pod default/testpod with node node-192.168.0.2"
    time="2022-06-08T12:15:26+02:00" level=info msg="got pvc default/pvc-vg as lvm pvc"
    time="2022-06-08T12:15:26+02:00" level=info msg="allocating lvm volume for pod default/testpod"
    time="2022-06-08T12:15:26+02:00" level=info msg="node node-192.168.0.2 is capable of lvm 1 pvcs"
    time="2022-06-08T12:15:26+02:00" level=info msg="got pvc default/pvc-vg as lvm pvc"
    time="2022-06-08T12:15:26+02:00" level=info msg="fits: true,failReasons: [], err: <nil>"
    time="2022-06-08T12:15:26+02:00" level=info msg="pod=default/testpod, node=node-192.168.0.2,fits: true,failReasons: [], err: <nil>"
    time="2022-06-08T12:15:26+02:00" level=info msg="predicating pod default/testpod with node node-192.168.0.3"
    time="2022-06-08T12:15:26+02:00" level=info msg="got pvc default/pvc-vg as lvm pvc"
    time="2022-06-08T12:15:26+02:00" level=info msg="allocating lvm volume for pod default/testpod"
    time="2022-06-08T12:15:26+02:00" level=info msg="node node-192.168.0.3 is capable of lvm 1 pvcs"
    time="2022-06-08T12:15:26+02:00" level=info msg="got pvc default/pvc-vg as lvm pvc"
    time="2022-06-08T12:15:26+02:00" level=info msg="fits: true,failReasons: [], err: <nil>"
    time="2022-06-08T12:15:26+02:00" level=info msg="pod=default/testpod, node=node-192.168.0.3,fits: true,failReasons: [], err: <nil>"
    time="2022-06-08T12:15:26+02:00" level=info msg="predicating pod default/testpod with node node-192.168.0.4"
    time="2022-06-08T12:15:26+02:00" level=info msg="got pvc default/pvc-vg as lvm pvc"
    time="2022-06-08T12:15:26+02:00" level=info msg="allocating lvm volume for pod default/testpod"
    time="2022-06-08T12:15:26+02:00" level=error msg="no vg(LVM) named ssd in node node-192.168.0.4"
    time="2022-06-08T12:15:26+02:00" level=info msg="fits: false,failReasons: [no vg(LVM) named ssd in node node-192.168.0.4], err: no vg(LVM) named ssd in node node-192.168.0.4"
    time="2022-06-08T12:15:26+02:00" level=info msg="pod=default/testpod, node=node-192.168.0.4,fits: false,failReasons: [no vg(LVM) named ssd in node node-192.168.0.4], err: <nil>"
    time="2022-06-08T12:15:26+02:00" level=info msg="node node-192.168.0.4 is not suitable for pod default/testpod, reason: [no vg(LVM) named ssd in node node-192.168.0.4] "
    unexpected fault address 0x0
    fatal error: fault
    [signal SIGSEGV: segmentation violation code=0x80 addr=0x0 pc=0x46845f]
    
    goroutine 91 [running]:
    runtime.throw({0x178205e?, 0x18?})
            /usr/local/go/src/runtime/panic.go:992 +0x71 fp=0xc0004d71e8 sp=0xc0004d71b8 pc=0x4380b1
    runtime.sigpanic()
            /usr/local/go/src/runtime/signal_unix.go:825 +0x305 fp=0xc0004d7238 sp=0xc0004d71e8 pc=0x44e485
    aeshashbody()
            /usr/local/go/src/runtime/asm_amd64.s:1343 +0x39f fp=0xc0004d7240 sp=0xc0004d7238 pc=0x46845f
    runtime.mapiternext(0xc000788780)
            /usr/local/go/src/runtime/map.go:934 +0x2cb fp=0xc0004d72b0 sp=0xc0004d7240 pc=0x411beb
    runtime.mapiterinit(0x0?, 0x8?, 0x1?)
            /usr/local/go/src/runtime/map.go:861 +0x228 fp=0xc0004d72d0 sp=0xc0004d72b0 pc=0x4118c8
    reflect.mapiterinit(0xc000039cf8?, 0xc0004d7358?, 0x461365?)
            /usr/local/go/src/runtime/map.go:1373 +0x19 fp=0xc0004d72f8 sp=0xc0004d72d0 pc=0x464b79
    github.com/modern-go/reflect2.(*UnsafeMapType).UnsafeIterate(...)
            /home/lee/workspace/picloud/open-local/vendor/github.com/modern-go/reflect2/unsafe_map.go:112
    github.com/json-iterator/go.(*sortKeysMapEncoder).Encode(0xc00058f230, 0xc000497f00, 0xc000039ce0)
            /home/lee/workspace/picloud/open-local/vendor/github.com/json-iterator/go/reflect_map.go:291 +0x225 fp=0xc0004d7468 sp=0xc0004d72f8 pc=0x8553e5
    github.com/json-iterator/go.(*structFieldEncoder).Encode(0xc00058f350, 0x1436da0?, 0xc000039ce0)
            /home/lee/workspace/picloud/open-local/vendor/github.com/json-iterator/go/reflect_struct_encoder.go:110 +0x56 fp=0xc0004d74e0 sp=0xc0004d7468 pc=0x862b36
    github.com/json-iterator/go.(*structEncoder).Encode(0xc00058f3e0, 0x0?, 0xc000039ce0)
            /home/lee/workspace/picloud/open-local/vendor/github.com/json-iterator/go/reflect_struct_encoder.go:158 +0x765 fp=0xc0004d75c8 sp=0xc0004d74e0 pc=0x863545
    github.com/json-iterator/go.(*OptionalEncoder).Encode(0xc00013bb80?, 0x0?, 0x0?)
            /home/lee/workspace/picloud/open-local/vendor/github.com/json-iterator/go/reflect_optional.go:70 +0xa4 fp=0xc0004d7618 sp=0xc0004d75c8 pc=0x85a744
    github.com/json-iterator/go.(*onePtrEncoder).Encode(0xc0004b3210, 0xc000497ef0, 0xc000497f50?)
            /home/lee/workspace/picloud/open-local/vendor/github.com/json-iterator/go/reflect.go:219 +0x82 fp=0xc0004d7650 sp=0xc0004d7618 pc=0x84d7c2
    github.com/json-iterator/go.(*Stream).WriteVal(0xc000039ce0, {0x158a3e0, 0xc000497ef0})
            /home/lee/workspace/picloud/open-local/vendor/github.com/json-iterator/go/reflect.go:98 +0x158 fp=0xc0004d76c0 sp=0xc0004d7650 pc=0x84cad8
    github.com/json-iterator/go.(*frozenConfig).Marshal(0xc00013bb80, {0x158a3e0, 0xc000497ef0})
            /home/lee/workspace/picloud/open-local/vendor/github.com/json-iterator/go/config.go:299 +0xc9 fp=0xc0004d7758 sp=0xc0004d76c0 pc=0x843d89
    github.com/alibaba/open-local/pkg/scheduler/server.PredicateRoute.func1({0x19bfee0, 0xc00019c080}, 0xc000318000, {0x203000?, 0xc00062b928?, 0xc00062b84d?})
            /home/lee/workspace/picloud/open-local/pkg/scheduler/server/routes.go:83 +0x326 fp=0xc0004d7878 sp=0xc0004d7758 pc=0x132d5e6
    github.com/alibaba/open-local/pkg/scheduler/server.DebugLogging.func1({0x19cafb0?, 0xc0005a80e0}, 0xc000056150?, {0x0, 0x0, 0x0})
            /home/lee/workspace/picloud/open-local/pkg/scheduler/server/routes.go:217 +0x267 fp=0xc0004d7988 sp=0xc0004d7878 pc=0x132e4a7
    github.com/julienschmidt/httprouter.(*Router).ServeHTTP(0xc0000b0de0, {0x19cafb0, 0xc0005a80e0}, 0xc000318000)
            /home/lee/workspace/picloud/open-local/vendor/github.com/julienschmidt/httprouter/router.go:387 +0x82b fp=0xc0004d7a98 sp=0xc0004d7988 pc=0x12d61ab
    net/http.serverHandler.ServeHTTP({0x19bc700?}, {0x19cafb0, 0xc0005a80e0}, 0xc000318000)
            /usr/local/go/src/net/http/server.go:2916 +0x43b fp=0xc0004d7b58 sp=0xc0004d7a98 pc=0x7e87fb
    net/http.(*conn).serve(0xc0001da3c0, {0x19cbab0, 0xc0001b68a0})
            /usr/local/go/src/net/http/server.go:1966 +0x5d7 fp=0xc0004d7fb8 sp=0xc0004d7b58 pc=0x7e3cb7
    net/http.(*Server).Serve.func3()
            /usr/local/go/src/net/http/server.go:3071 +0x2e fp=0xc0004d7fe0 sp=0xc0004d7fb8 pc=0x7e914e
    runtime.goexit()
            /usr/local/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc0004d7fe8 sp=0xc0004d7fe0 pc=0x46b061
    created by net/http.(*Server).Serve
            /usr/local/go/src/net/http/server.go:3071 +0x4db
    
    goroutine 1 [chan receive]:
    testing.(*T).Run(0xc000103ba0, {0x178cc75?, 0x516ac5?}, 0x18541b0)
            /usr/local/go/src/testing/testing.go:1487 +0x37a
    testing.runTests.func1(0xc0001b69c0?)
            /usr/local/go/src/testing/testing.go:1839 +0x6e
    testing.tRunner(0xc000103ba0, 0xc00064bcd8)
            /usr/local/go/src/testing/testing.go:1439 +0x102
    testing.runTests(0xc00050a0a0?, {0x2540700, 0x7, 0x7}, {0x7fa22c405a68?, 0x40?, 0x2557740?})
            /usr/local/go/src/testing/testing.go:1837 +0x457
    testing.(*M).Run(0xc00050a0a0)
            /usr/local/go/src/testing/testing.go:1719 +0x5d9
    main.main()
            _testmain.go:59 +0x1aa
    
    goroutine 19 [chan receive]:
    k8s.io/klog/v2.(*loggingT).flushDaemon(0x0?)
            /home/lee/workspace/picloud/open-local/vendor/k8s.io/klog/v2/klog.go:1169 +0x6a
    created by k8s.io/klog/v2.init.0
            /home/lee/workspace/picloud/open-local/vendor/k8s.io/klog/v2/klog.go:417 +0xf6
    
    goroutine 92 [IO wait]:
    internal/poll.runtime_pollWait(0x7fa204607b38, 0x72)
            /usr/local/go/src/runtime/netpoll.go:302 +0x89
    internal/poll.(*pollDesc).wait(0xc0003c6100?, 0xc00050c2e1?, 0x0)
            /usr/local/go/src/internal/poll/fd_poll_runtime.go:83 +0x32
    internal/poll.(*pollDesc).waitRead(...)
            /usr/local/go/src/internal/poll/fd_poll_runtime.go:88
    internal/poll.(*FD).Read(0xc0003c6100, {0xc00050c2e1, 0x1, 0x1})
            /usr/local/go/src/internal/poll/fd_unix.go:167 +0x25a
    net.(*netFD).Read(0xc0003c6100, {0xc00050c2e1?, 0xc000613628?, 0xc00061e000?})
            /usr/local/go/src/net/fd_posix.go:55 +0x29
    net.(*conn).Read(0xc000612180, {0xc00050c2e1?, 0xc0005147a0?, 0x985846?})
            /usr/local/go/src/net/net.go:183 +0x45
    net/http.(*connReader).backgroundRead(0xc00050c2d0)
            /usr/local/go/src/net/http/server.go:672 +0x3f
    created by net/http.(*connReader).startBackgroundRead
            /usr/local/go/src/net/http/server.go:668 +0xca
    
    goroutine 43 [select]:
    net/http.(*persistConn).roundTrip(0xc00056a360, 0xc0006420c0)
            /usr/local/go/src/net/http/transport.go:2620 +0x974
    net/http.(*Transport).roundTrip(0x25410e0, 0xc0004c6600)
            /usr/local/go/src/net/http/transport.go:594 +0x7c9
    net/http.(*Transport).RoundTrip(0x40f405?, 0x19b3900?)
            /usr/local/go/src/net/http/roundtrip.go:17 +0x19
    net/http.send(0xc0004c6600, {0x19b3900, 0x25410e0}, {0x172b2a0?, 0x178c601?, 0x0?})
            /usr/local/go/src/net/http/client.go:252 +0x5d8
    net/http.(*Client).send(0x2556ec0, 0xc0004c6600, {0xd?, 0x1788f4f?, 0x0?})
            /usr/local/go/src/net/http/client.go:176 +0x9b
    net/http.(*Client).do(0x2556ec0, 0xc0004c6600)
            /usr/local/go/src/net/http/client.go:725 +0x8f5
    net/http.(*Client).Do(...)
            /usr/local/go/src/net/http/client.go:593
    net/http.(*Client).Post(0x17b1437?, {0xc000492480?, 0xc00054bdc8?}, {0x178f761, 0x10}, {0x19b0fe0?, 0xc0001b6a20?})
            /usr/local/go/src/net/http/client.go:858 +0x148
    net/http.Post(...)
            /usr/local/go/src/net/http/client.go:835
    github.com/alibaba/open-local/cmd/scheduler.predicateFunc(0xc0000f9800, {0x253ebe0, 0x4, 0x4})
            /home/lee/workspace/picloud/open-local/cmd/scheduler/extender_test.go:348 +0x1e8
    github.com/alibaba/open-local/cmd/scheduler.TestVGWithName(0x4082b9?)
            /home/lee/workspace/picloud/open-local/cmd/scheduler/extender_test.go:135 +0x17e
    testing.tRunner(0xc000103d40, 0x18541b0)
            /usr/local/go/src/testing/testing.go:1439 +0x102
    created by testing.(*T).Run
            /usr/local/go/src/testing/testing.go:1486 +0x35f
    
    goroutine 87 [IO wait]:
    internal/poll.runtime_pollWait(0x7fa204607d18, 0x72)
            /usr/local/go/src/runtime/netpoll.go:302 +0x89
    internal/poll.(*pollDesc).wait(0xc00003a580?, 0xc000064000?, 0x0)
            /usr/local/go/src/internal/poll/fd_poll_runtime.go:83 +0x32
    internal/poll.(*pollDesc).waitRead(...)
            /usr/local/go/src/internal/poll/fd_poll_runtime.go:88
    internal/poll.(*FD).Accept(0xc00003a580)
            /usr/local/go/src/internal/poll/fd_unix.go:614 +0x22c
    net.(*netFD).accept(0xc00003a580)
            /usr/local/go/src/net/fd_unix.go:172 +0x35
    net.(*TCPListener).accept(0xc0001301e0)
            /usr/local/go/src/net/tcpsock_posix.go:139 +0x28
    net.(*TCPListener).Accept(0xc0001301e0)
            /usr/local/go/src/net/tcpsock.go:288 +0x3d
    net/http.(*Server).Serve(0xc0000dc2a0, {0x19cada0, 0xc0001301e0})
            /usr/local/go/src/net/http/server.go:3039 +0x385
    net/http.(*Server).ListenAndServe(0xc0000dc2a0)
            /usr/local/go/src/net/http/server.go:2968 +0x7d
    net/http.ListenAndServe(...)
            /usr/local/go/src/net/http/server.go:3222
    github.com/alibaba/open-local/pkg/scheduler/server.(*ExtenderServer).InitRouter.func1()
            /home/lee/workspace/picloud/open-local/pkg/scheduler/server/web.go:185 +0x157
    created by github.com/alibaba/open-local/pkg/scheduler/server.(*ExtenderServer).InitRouter
            /home/lee/workspace/picloud/open-local/pkg/scheduler/server/web.go:182 +0x478
    
    goroutine 49 [IO wait]:
    internal/poll.runtime_pollWait(0x7fa204607c28, 0x72)
            /usr/local/go/src/runtime/netpoll.go:302 +0x89
    internal/poll.(*pollDesc).wait(0xc00003a800?, 0xc000639000?, 0x0)
            /usr/local/go/src/internal/poll/fd_poll_runtime.go:83 +0x32
    internal/poll.(*pollDesc).waitRead(...)
            /usr/local/go/src/internal/poll/fd_poll_runtime.go:88
    internal/poll.(*FD).Read(0xc00003a800, {0xc000639000, 0x1000, 0x1000})
            /usr/local/go/src/internal/poll/fd_unix.go:167 +0x25a
    net.(*netFD).Read(0xc00003a800, {0xc000639000?, 0x17814b4?, 0x0?})
            /usr/local/go/src/net/fd_posix.go:55 +0x29
    net.(*conn).Read(0xc000495a38, {0xc000639000?, 0x19ce530?, 0xc000370ea0?})
            /usr/local/go/src/net/net.go:183 +0x45
    net/http.(*persistConn).Read(0xc00056a360, {0xc000639000?, 0x40757d?, 0x60?})
            /usr/local/go/src/net/http/transport.go:1929 +0x4e
    bufio.(*Reader).fill(0xc000522a80)
            /usr/local/go/src/bufio/bufio.go:106 +0x103
    bufio.(*Reader).Peek(0xc000522a80, 0x1)
            /usr/local/go/src/bufio/bufio.go:144 +0x5d
    net/http.(*persistConn).readLoop(0xc00056a360)
            /usr/local/go/src/net/http/transport.go:2093 +0x1ac
    created by net/http.(*Transport).dialConn
            /usr/local/go/src/net/http/transport.go:1750 +0x173e
    
    goroutine 178 [select]:
    net/http.(*persistConn).writeLoop(0xc00056a360)
            /usr/local/go/src/net/http/transport.go:2392 +0xf5
    created by net/http.(*Transport).dialConn
            /usr/local/go/src/net/http/transport.go:1751 +0x1791
    FAIL    github.com/alibaba/open-local/cmd/scheduler     0.177s
    ?       github.com/alibaba/open-local/cmd/version       [no test files]
    ?       github.com/alibaba/open-local/pkg       [no test files]
    ?       github.com/alibaba/open-local/pkg/agent/common  [no test files]
    === RUN   TestNewAgent
    
    

    Ⅲ. Describe what you expected to happen

    make should run through.

    Ⅳ. How to reproduce it (as minimally and precisely as possible)

    1. git clone https://github.com/alibaba/open-local.git
    2. cd open-local
    3. make
    4. The build fails

    Ⅴ. Anything else we need to know?

    Ⅵ. Environment:

    • Open-Local version: main branch
    • OS (e.g. from /etc/os-release): ubuntu 22.04
    • Kernel (e.g. uname -a): 5.15.0-33
    • Install tools:
    • Others:
  • [bug] Extender fails to update some nls after the nls resources are deleted and rebuilt in a large-scale cluster

    [bug] Extender fails to update some nls after the nls resources are deleted and rebuilt in a large-scale cluster

    After deleting nls objects in batches, the nls are re-created normally, but the extender reports errors when patching them:

    time="2021-11-26T14:50:31+08:00" level=debug msg="get update on node local cache izbp1277upijzx9vn1t003z"
    time="2021-11-26T14:50:31+08:00" level=debug msg="added vgs: []string{\"yoda-pool0\"}"
    time="2021-11-26T14:50:31+08:00" level=error msg="local storage CRD update Status FilteredStorageInfo error: Operation cannot be fulfilled on nodelocalstorages.csi.aliyun.com \"izbp1277upijzx9vn1t003z\": the object has been modified; please apply your changes to the latest version and try again"
    time="2021-11-26T14:50:31+08:00" level=debug msg="get update on node local cache izbp1277upijzwo2cqyljpz"
    time="2021-11-26T14:50:31+08:00" level=debug msg="added vgs: []string{\"yoda-pool0\"}"
    time="2021-11-26T14:50:31+08:00" level=error msg="local storage CRD update Status FilteredStorageInfo error: Operation cannot be fulfilled on nodelocalstorages.csi.aliyun.com \"izbp1277upijzwo2cqyljpz\": the object has been modified; please apply your changes to the latest version and try again"
    time="2021-11-26T14:50:31+08:00" level=debug msg="get update on node local cache izbp1277upijzwo2cqyli4z"
    time="2021-11-26T14:50:31+08:00" level=debug msg="added vgs: []string{\"yoda-pool0\"}"
    time="2021-11-26T14:50:31+08:00" level=error msg="local storage CRD update Status FilteredStorageInfo error: Operation cannot be fulfilled on nodelocalstorages.csi.aliyun.com \"izbp1277upijzwo2cqyli4z\": the object has been modified; please apply your changes to the latest version and try again"
    time="2021-11-26T14:50:31+08:00" level=debug msg="get update on node local cache izbp1277upijzx9vn1t01pz"
    time="2021-11-26T14:50:31+08:00" level=debug msg="added vgs: []string{\"yoda-pool0\"}"
    time="2021-11-26T14:50:32+08:00" level=error msg="local storage CRD update Status FilteredStorageInfo error: Operation cannot be fulfilled on nodelocalstorages.csi.aliyun.com \"izbp1277upijzx9vn1t01pz\": the object has been modified; please apply your changes to the latest version and try again"
    time="2021-11-26T14:50:32+08:00" level=debug msg="get update on node local cache izbp1277upijzwo2cqyljmz"
    time="2021-11-26T14:50:32+08:00" level=debug msg="added vgs: []string{\"yoda-pool0\"}"
    time="2021-11-26T14:50:32+08:00" level=error msg="local storage CRD update Status FilteredStorageInfo error: Operation cannot be fulfilled on nodelocalstorages.csi.aliyun.com \"izbp1277upijzwo2cqyljmz\": the object has been modified; please apply your changes to the latest version and try again"
    time="2021-11-26T14:50:32+08:00" level=debug msg="get update on node local cache izbp14kyqi4fdsb7ax48itz"
    time="2021-11-26T14:50:32+08:00" level=debug msg="added vgs: []string{\"yoda-pool0\"}"
    time="2021-11-26T14:50:32+08:00" level=error msg="local storage CRD update Status FilteredStorageInfo error: Operation cannot be fulfilled on nodelocalstorages.csi.aliyun.com \"izbp14kyqi4fdsb7ax48itz\": the object has been modified; please apply your changes to the latest version and try again"
    

    The impact is that the extender cannot update the status of those nls, so applications cannot use the storage devices on those nodes.

    The test was done in a large-scale scenario.

  • VG Create Error

    VG Create Error

    Ⅰ. Issue Description

    The NodeLocalStorage's status is null (screenshot attached).

    Ⅱ. Describe what happened

    I installed Open-Local via the Helm chart and added a raw device to a Kubernetes worker, but no VG was created on that worker. Checking the NLS status, I found it was null.
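
    For reference, the agent creates VGs from raw devices according to the resourceToBeInited section of the node's NodeLocalStorage; the fragment below is copied from the Minikube report earlier on this page (the device path is just an example). If that section is empty or points at the wrong device, no VG will be created on the node; a hedged reading of the symptom, not a confirmed diagnosis:

    spec:
      resourceToBeInited:
        vgs:
        - devices:
          - /dev/sdb               # the raw device that should back the VG
          name: open-local-pool-0  # the VG the agent is expected to create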

    Ⅲ. Describe what you expected to happen

    A VG should be created from the raw device on the worker.

    Ⅳ. How to reproduce it (as minimally and precisely as possible)

    1. helm install open-local
    2. Run vgs on the worker
    3. kubectl get nls

    Ⅴ. Anything else we need to know?

    Screenshots are attached for: 1. the scheduler process; 2. scheduler-policy-config.json; 3. driver-registrar logs; 4. agent logs; 5. scheduler-extender logs; 6. NodeLocalStorageInitConfig; 7. the raw device of the worker.

    Ⅵ. Environment:

    • Kubernetes version: see attached screenshot

    • Open-Local version: see attached screenshot

    • OS (e.g. from /etc/os-release): see attached screenshot

    • Kernel (e.g. uname -a): see attached screenshot

    • Install tools: see attached screenshot

    Thanks

  • question about monitor

    question about monitor

    Question

    I am wondering which service provides the metrics. I found that the ServiceMonitor open-local contains a spec like this:

        spec:
          endpoints:
          - path: /metrics
            port: http-metrics
          jobLabel: app
          namespaceSelector:
            matchNames:
            - kube-system
          selector:
            matchLabels:
              app: open-local

    But there is no Service named open-local; only the open-local-scheduler-extender Service exists. So I am confused about which pod provides the metrics: is it open-local-controller, the agent, or open-local-scheduler-extender? (BTW, I have already set monitor to true.)

  • Source PVC's deletion makes VolumeSnapshot broken

    Source PVC's deletion makes VolumeSnapshot broken

    Ⅰ. Issue Description

    1. Create a VolumeSnapshot from a PVC (a sketch follows this list)
    2. Delete the PVC
    3. The VolumeSnapshot is broken and cannot be used; the log shows:
    [ProcessSnapshotPVC] get src pvc ns/[VolumeSnapshot] failed: persistentvolumeclaim [source PVC] not found
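
    For reference, a minimal sketch of step 1 (the snapshot class and PVC names are placeholders, not taken from the report):

    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: demo-snapshot
    spec:
      volumeSnapshotClassName: <open-local-snapshot-class>   # placeholder
      source:
        persistentVolumeClaimName: demo-pvc                  # deleting this PVC later breaks the snapshot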
    

    Ⅱ. Describe what happened

    Ⅲ. Describe what you expected to happen

    The source PVC of a VolumeSnapshot should be protected from deletion with a finalizer.

    Ⅳ. How to reproduce it (as minimally and precisely as possible)

    Ⅴ. Anything else we need to know?

    Ⅵ. Environment:

    • Open-Local version:
    • OS (e.g. from /etc/os-release):
    • Kernel (e.g. uname -a):
    • Install tools:
    • Others:
  • Sane-behavior cannot be set when enabling hierarchical cgroup v1 blkio throttling

    Sane-behavior cannot be set when enabling hierarchical cgroup v1 blkio throttling

    Question

    • OS: Ubuntu 20.04, kernel 5.15.0-46-generic
    • docker: 20.10.12
    • containerd: 20.10.12-0
    • runc: 1.1.0-0
    • cgroup version: v1
    • kubernetes: 1.22.0

    Hello,

    I implemented Open-Local with Kubernetes 1.22.0, and I am trying to throttle LVM block IO following https://github.com/alibaba/open-local/blob/main/docs/user-guide/type-lvm_zh_CN.md.

    The cgroup block IO bps and iops limits are all at the pod level. Since throttling implements hierarchy support, it is reasonable to limit the total block iops and bps of all containers under the values set in the pod's blkio.throttle.read_bps_device, blkio.throttle.write_bps_device, blkio.throttle.read_iops_device, and blkio.throttle.write_iops_device.

    However, throttling's hierarchy support is only enabled if "sane_behavior" is enabled on the cgroup side, and cgroup.sane_behavior (read-only) is set to 0 by default (ref: https://www.kernel.org/doc/Documentation/cgroup-v1/blkio-controller.txt). To limit a container's blkio, I must enable "sane_behavior" by remounting /sys/fs/cgroup/blkio with the flag -o __DEVEL__sane_behavior. The problem is that the kernel doesn't seem to recognize the flag:

    $ umount -l /sys/fs/cgroup/blkio
    $ mount -t cgroup -o blkio -o __DEVEL__sane_behavior none /sys/fs/cgroup/blkio
    mount: /sys/fs/cgroup: wrong fs type, bad option, bad superblock on none,
           missing codepage or helper program, or other error

    Has anyone tried the IO throttling function with success? Could you please give me some info about your kernel version or any problem with my configuration? Thanks!

  • Simplify setup by using dynamic scheduler extenders

    Simplify setup by using dynamic scheduler extenders

    Why you need it?

    Right now open-local uses an init-job to statically configure scheduler extenders, which is not convenient for some Kubernetes distributions like k3s. I would like to propose doing this at controller startup instead of using an external JSON/YAML file.

    How it could be?

    When the open-local controller starts up, it uses client-go to create the scheduler configuration (https://kubernetes.io/docs/reference/config-api/kube-scheduler-config.v1beta3/), and deletes the configuration when it quits.
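
    For context, a hedged sketch of the kind of KubeSchedulerConfiguration such a controller would generate; the urlPrefix, port, and verbs are assumptions based on the service name and port seen elsewhere on this page, not a verified configuration:

    apiVersion: kubescheduler.config.k8s.io/v1beta3
    kind: KubeSchedulerConfiguration
    clientConnection:
      kubeconfig: /etc/kubernetes/scheduler.conf
    extenders:
    - urlPrefix: http://open-local-scheduler-extender.kube-system:23000/scheduler   # assumption
      filterVerb: predicates        # assumption
      prioritizeVerb: priorities    # assumption
      weight: 10
      nodeCacheCapable: true
      ignorable: true               # scheduling still works if the extender is down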

    This should make open-local work more smartly and reliably. :)

    Other related information

    https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#extended-resources

  • Rename: VG --> StoragePool; LogicalVolume to LocalVolume

    Rename: VG --> StoragePool; LogicalVolume to LocalVolume

    Ⅰ. Issue Description

    The project is named open-local, not open-lvm; it is supposed to support local storage solutions beyond LVM.

    Consider refactoring the code to make it more generic and ready to accept new local storage types.

    This seems to be a big job, especially since some of this naming is used in YAML files, which would likely break backward compatibility.

    Not sure if this is feasible.

  • [bug] Pod scheduling problem

    [bug] Pod scheduling problem

    Ⅰ. Issue Description

    Pods with multiple PVCs are scheduled to nodes with insufficient capacity, while other nodes could meet the PVCs' capacity requirements.

    Ⅱ. Describe what happened

    I deployed an example StatefulSet sts-nginx with 3 replicas on 4 nodes; each pod mounts two PVCs, 1T and 100G, which are created on the node's volume groups 'vgdaasdata' and 'vgdaaslogs' respectively.

    1. Initially, the capacity of the two volume groups is 1.7T and 560G, which means that only one pod can be scheduled on each node (two pods would need 2T of 'vgdaasdata', exceeding 1.7T). The cache of scheduler-extender is shown in Figure 1 (screenshot attached).

    2. Figure 2 (screenshot attached) shows that 5 PVs were successfully created and one was not. The reason is that two pods were scheduled to the same node. At this time, the cache of scheduler-extender is shown in Figures 3 and 4 (screenshots attached).

    3. Then I deleted the STS and these PVCs; the cache is shown in Figure 5 (screenshot attached).

    Ⅲ. Describe what you expected to happen

    Only one pod can be scheduled on each node in this situation.

    Ⅳ. How to reproduce it (as minimally and precisely as possible)

    Deploy a workload as I described.
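
    A hedged sketch of the workload shape described above; the storage class names are placeholders, since the report only gives the VG names vgdaasdata and vgdaaslogs:

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: sts-nginx
    spec:
      serviceName: sts-nginx
      replicas: 3
      selector:
        matchLabels:
          app: sts-nginx
      template:
        metadata:
          labels:
            app: sts-nginx
        spec:
          containers:
          - name: nginx
            image: nginx
            volumeMounts:
            - { name: data, mountPath: /data }
            - { name: logs, mountPath: /logs }
      volumeClaimTemplates:
      - metadata:
          name: data
        spec:
          accessModes: ["ReadWriteOnce"]
          storageClassName: <sc-backed-by-vgdaasdata>   # placeholder
          resources: { requests: { storage: 1Ti } }
      - metadata:
          name: logs
        spec:
          accessModes: ["ReadWriteOnce"]
          storageClassName: <sc-backed-by-vgdaaslogs>   # placeholder
          resources: { requests: { storage: 100Gi } }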

    Ⅴ. Anything else we need to know?

    Ⅵ. Environment:

    • Open-Local version: 0.5.5
    • OS (e.g. from /etc/os-release): centos 7.9
    • Kernel (e.g. uname -a):
    • Install tools: helm 3.0
    • Others: kube-scheduler logs are shown in Figure 6 (screenshot attached).