OpenEBS LVM CSI Driver

CSI driver for provisioning Local PVs backed by LVM and more.

Project Status

The LVM CSI Driver is currently in alpha.

Usage

Prerequisites

Before installing the LVM driver, please make sure your Kubernetes cluster meets the following prerequisites:

  1. All the nodes must have the lvm2 utils installed (a quick check is shown below this list).
  2. A volume group has been set up for provisioning the volumes.
  3. You have access to install RBAC components into the kube-system namespace. The OpenEBS LVM driver components are installed in the kube-system namespace to allow them to be flagged as system critical components.
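
A quick way to verify the first two prerequisites on a node (a minimal sketch; the lvm2 package provides the lvm and vgs commands used here):

sudo lvm version   # confirms the lvm2 utils are installed
sudo vgs           # lists the volume groups available for provisioning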

Supported System

K8s: 1.17+

OS: Ubuntu

LVM: 2

Setup

Find the disk that you want to use for LVM. For testing, you can use a loopback device:

truncate -s 1024G /tmp/disk.img
sudo losetup -f /tmp/disk.img --show

Create the volume group on all the nodes; it will be used by the LVM driver for provisioning the volumes:

sudo pvcreate /dev/loop0
sudo vgcreate lvmvg /dev/loop0
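
You can sanity-check the result with the standard LVM reporting tools:

sudo pvs   # /dev/loop0 should appear under the lvmvg volume group
sudo vgs   # lvmvg should be listed with the expected free space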

Installation

Deploy the operator YAML:

kubectl apply -f https://raw.githubusercontent.com/openebs/lvm-localpv/master/deploy/lvm-operator.yaml
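
After the operator is applied, the driver components should come up in the kube-system namespace. The role=openebs-lvm label below matches the pod listings shown in the comments further down this page:

kubectl get pods -n kube-system -l role=openebs-lvm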

Deployment

Deploy the sample fio application:

kubectl apply -f https://raw.githubusercontent.com/openebs/lvm-localpv/master/deploy/sample/fio.yaml
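
The application needs a StorageClass that points at the volume group created during setup. A minimal sketch, mirroring the StorageClass shown in the comments below (the volgroup parameter must match your VG name):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-lvmpv
parameters:
  storage: "lvm"
  volgroup: "lvmvg"
provisioner: local.csi.openebs.io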

Features

  • Access Modes
    • ReadWriteOnce
    • ReadOnlyMany
    • ReadWriteMany
  • Volume modes
    • Filesystem mode
    • Block mode
  • Supports fsTypes: ext4, btrfs, xfs
  • Volume metrics
  • Topology
  • Snapshot
  • Clone
  • Volume Resize
  • Backup/Restore
  • Ephemeral inline volume
Comments
  • Rancher?


    What steps did you take and what happened: Installed this in rancher, but I am not getting any volumes provisioned.

    What did you expect to happen: Volumes to be provisioned

    The output of the following commands will help us better understand what's going on: (Pasting long output into a GitHub gist or other Pastebin is fine.)

• kubectl logs -f openebs-lvm-controller-0 -n kube-system -c openebs-lvm-plugin
    • kubectl logs -f openebs-lvm-node-[xxxx] -n kube-system -c openebs-lvm-plugin
    • kubectl get pods -n kube-system
    • kubectl get lvmvol -A -o yaml

    gist with those

    Anything else you would like to add: [Miscellaneous information that will assist in solving the issue.]

    Environment:

    • LVM Driver version
    • Kubernetes version (use kubectl version)
    kubectl version
    Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.2", GitCommit:"f5743093fd1c663cb0cbc89748f730662345d44d", GitTreeState:"clean", BuildDate:"2020-09-16T13:41:02Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.8", GitCommit:"5575935422cc1cf5169dfc8847cb587aa47bac5a", GitTreeState:"clean", BuildDate:"2021-06-16T12:53:07Z", GoVersion:"go1.15.13", Compiler:"gc", Platform:"linux/amd64"}
    
    • Kubernetes installer & version: rancher
    • Cloud provider or hardware configuration: KVM on linux
    • OS (e.g. from /etc/os-release):
    cat /etc/os-release 
    NAME="RancherOS"
    VERSION=v1.5.8
    ID=rancheros
    ID_LIKE=
    VERSION_ID=v1.5.8
    PRETTY_NAME="RancherOS v1.5.8"
    HOME_URL="http://rancher.com/rancher-os/"
    SUPPORT_URL="https://forums.rancher.com/c/rancher-os"
    BUG_REPORT_URL="https://github.com/rancher/os/issues"
    BUILD_ID=
    
    [rancher@orm01-vault2 ~]$ sudo pvscan
      PV /dev/vda1   VG lvmvg           lvm2 [<33.30 GiB / <33.30 GiB free]
      Total: 1 [<33.30 GiB] / in use: 1 [<33.30 GiB] / in no VG: 0 [0   ]
    [rancher@orm01-vault2 ~]$ sudo vgscan
      Reading all physical volumes.  This may take a while...
      Found volume group "lvmvg" using metadata type lvm2
    
    $ cat openebs-lvm-sc.yaml 
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: openebs-lvmpv
    parameters:
      storage: "lvm"
      volgroup: "lvmvg"
    provisioner: local.csi.openebs.io
    
    $ k get sc
    NAME                        PROVISIONER                                                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
    openebs-device              openebs.io/local                                           Delete          WaitForFirstConsumer   false                  5h38m
    openebs-hostpath            openebs.io/local                                           Delete          WaitForFirstConsumer   false                  5h38m
    openebs-jiva-default        openebs.io/provisioner-iscsi                               Delete          Immediate              false                  5h38m
    openebs-lvmpv (default)     local.csi.openebs.io                                       Delete          Immediate              false                  114m
    openebs-snapshot-promoter   volumesnapshot.external-storage.k8s.io/snapshot-promoter   Delete          Immediate              false                  5h38m
    
    $ k get pvc
    NAME                                       STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS    AGE
    data-whc01elastic-opendistro-es-data-0     Pending                                      openebs-lvmpv   105m
    data-whc01elastic-opendistro-es-master-0   Pending                                      openebs-lvmpv   105m
    
    $ k describe pvc data-whc01elastic-opendistro-es-data-0
    Name:          data-whc01elastic-opendistro-es-data-0
    Namespace:     default
    StorageClass:  openebs-lvmpv
    Status:        Pending
    Volume:        
    Labels:        app=whc01elastic-opendistro-es
                   heritage=Helm
                   release=whc01elastic
                   role=data
    Annotations:   volume.beta.kubernetes.io/storage-provisioner: local.csi.openebs.io
    Finalizers:    [kubernetes.io/pvc-protection]
    Capacity:      
    Access Modes:  
    VolumeMode:    Filesystem
    Mounted By:    whc01elastic-opendistro-es-data-0
    Events:
      Type    Reason                Age                     From                                                                                Message
      ----    ------                ----                    ----                                                                                -------
      Normal  ExternalProvisioning  4m39s (x401 over 104m)  persistentvolume-controller                                                         waiting for a volume to be created, either by external provisioner "local.csi.openebs.io" or manually created by system administrator
      Normal  Provisioning          91s (x29 over 105m)     local.csi.openebs.io_openebs-lvm-controller-0_33139b9f-e336-4a2b-a90a-33b6bb4a91c3  External provisioner is provisioning volume for claim "default/data-whc01elastic-opendistro-es-data-0"
    
  • openebs-lvm-plugin is busylooping on systems without dm-snapshot kernel module loaded


    What steps did you take and what happened:

    I tried to snapshot a volume created by lvm-localpv.

I created a default SnapshotClass without the snapSize parameter, and then a VolumeSnapshot resource:

    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: lvm-localpv-snap
    spec:
      volumeSnapshotClassName: lvmpv-snapclass
      source:
        persistentVolumeClaimName: datadir-redpanda-0
    

    What did you expect to happen: Snapshot to be created.

    • kubectl logs -f openebs-lvm-controller-0 -n kube-system -c openebs-lvm-plugin
    I1017 09:26:48.057768       1 grpc.go:72] GRPC call: /csi.v1.Controller/CreateSnapshot requests {"name":"snapshot-ccd9b76d-b16f-4a17-9809-0f769c0bc2f8","source_volume_id":"pvc-d293d56b-45da-412d-ab62-fec20652e71b"}
    I1017 09:26:48.057937       1 controller.go:572] CreateSnapshot volume snapshot-ccd9b76d-b16f-4a17-9809-0f769c0bc2f8 for pvc-d293d56b-45da-412d-ab62-fec20652e71b
    I1017 09:26:48.068293       1 grpc.go:81] GRPC response: {"snapshot":{"creation_time":{"seconds":1634462808},"snapshot_id":"pvc-d293d56b-45da-412d-ab62-fec20652e71b@snapshot-ccd9b76d-b16f-4a17-9809-0f769c0bc2f8","source_volume_id":"pvc-d293d56b-45da-412d-ab62-fec20652e71b"}}
    I1017 09:26:48.650574       1 grpc.go:72] GRPC call: /csi.v1.Identity/GetPluginInfo requests {}
    I1017 09:26:48.650691       1 grpc.go:81] GRPC response: {"name":"local.csi.openebs.io","vendor_version":"0.8.2"}
    I1017 09:26:48.651376       1 grpc.go:72] GRPC call: /csi.v1.Controller/CreateSnapshot requests {"name":"snapshot-ccd9b76d-b16f-4a17-9809-0f769c0bc2f8","source_volume_id":"pvc-d293d56b-45da-412d-ab62-fec20652e71b"}
    I1017 09:26:48.651521       1 controller.go:572] CreateSnapshot volume snapshot-ccd9b76d-b16f-4a17-9809-0f769c0bc2f8 for pvc-d293d56b-45da-412d-ab62-fec20652e71b
    I1017 09:26:48.657290       1 grpc.go:81] GRPC response: {"snapshot":{"creation_time":{"seconds":1634462808},"snapshot_id":"pvc-d293d56b-45da-412d-ab62-fec20652e71b@snapshot-ccd9b76d-b16f-4a17-9809-0f769c0bc2f8","source_volume_id":"pvc-d293d56b-45da-412d-ab62-fec20652e71b"}}
    I1017 09:26:49.258316       1 grpc.go:72] GRPC call: /csi.v1.Identity/GetPluginInfo requests {}
    I1017 09:26:49.258430       1 grpc.go:81] GRPC response: {"name":"local.csi.openebs.io","vendor_version":"0.8.2"}
    I1017 09:26:49.259088       1 grpc.go:72] GRPC call: /csi.v1.Controller/CreateSnapshot requests {"name":"snapshot-ccd9b76d-b16f-4a17-9809-0f769c0bc2f8","source_volume_id":"pvc-d293d56b-45da-412d-ab62-fec20652e71b"}
    I1017 09:26:49.259214       1 controller.go:572] CreateSnapshot volume snapshot-ccd9b76d-b16f-4a17-9809-0f769c0bc2f8 for pvc-d293d56b-45da-412d-ab62-fec20652e71b
    I1017 09:26:49.274993       1 grpc.go:81] GRPC response: {"snapshot":{"creation_time":{"seconds":1634462809},"snapshot_id":"pvc-d293d56b-45da-412d-ab62-fec20652e71b@snapshot-ccd9b76d-b16f-4a17-9809-0f769c0bc2f8","source_volume_id":"pvc-d293d56b-45da-412d-ab62-fec20652e71b"}}
    I1017 09:26:49.857480       1 grpc.go:72] GRPC call: /csi.v1.Identity/GetPluginInfo requests {}
    I1017 09:26:49.857635       1 grpc.go:81] GRPC response: {"name":"local.csi.openebs.io","vendor_version":"0.8.2"}
    I1017 09:26:49.858364       1 grpc.go:72] GRPC call: /csi.v1.Controller/CreateSnapshot requests {"name":"snapshot-ccd9b76d-b16f-4a17-9809-0f769c0bc2f8","source_volume_id":"pvc-d293d56b-45da-412d-ab62-fec20652e71b"}
    I1017 09:26:49.858548       1 controller.go:572] CreateSnapshot volume snapshot-ccd9b76d-b16f-4a17-9809-0f769c0bc2f8 for pvc-d293d56b-45da-412d-ab62-fec20652e71b
    I1017 09:26:49.874360       1 grpc.go:81] GRPC response: {"snapshot":{"creation_time":{"seconds":1634462809},"snapshot_id":"pvc-d293d56b-45da-412d-ab62-fec20652e71b@snapshot-ccd9b76d-b16f-4a17-9809-0f769c0bc2f8","source_volume_id":"pvc-d293d56b-45da-412d-ab62-fec20652e71b"}}
    I1017 09:26:50.450447       1 grpc.go:72] GRPC call: /csi.v1.Identity/GetPluginInfo requests {}
    I1017 09:26:50.450576       1 grpc.go:81] GRPC response: {"name":"local.csi.openebs.io","vendor_version":"0.8.2"}
    I1017 09:26:50.451314       1 grpc.go:72] GRPC call: /csi.v1.Controller/CreateSnapshot requests {"name":"snapshot-ccd9b76d-b16f-4a17-9809-0f769c0bc2f8","source_volume_id":"pvc-d293d56b-45da-412d-ab62-fec20652e71b"}
    I1017 09:26:50.451583       1 controller.go:572] CreateSnapshot volume snapshot-ccd9b76d-b16f-4a17-9809-0f769c0bc2f8 for pvc-d293d56b-45da-412d-ab62-fec20652e71b
    I1017 09:26:50.456889       1 grpc.go:81] GRPC response: {"snapshot":{"creation_time":{"seconds":1634462810},"snapshot_id":"pvc-d293d56b-45da-412d-ab62-fec20652e71b@snapshot-ccd9b76d-b16f-4a17-9809-0f769c0bc2f8","source_volume_id":"pvc-d293d56b-45da-412d-ab62-fec20652e71b"}}
    
    • kubectl logs -f openebs-lvm-node-[xxxx] -n kube-system -c openebs-lvm-plugin
    E1017 09:19:28.198591       1 snapshot.go:242] error syncing 'openebs/snapshot-ccd9b76d-b16f-4a17-9809-0f769c0bc2f8': exit status 3, requeuing
    E1017 09:19:48.682737       1 lvm_util.go:501] lvm: could not create snapshot lvmvg/ccd9b76d-b16f-4a17-9809-0f769c0bc2f8 cmd [--snapshot --name ccd9b76d-b16f-4a17-9809-0f769c0bc2f8 --permission r /dev/lvmvg/pvc-d293d56b-45da-412d-ab62-fec20652e71b --size 107374182400b] error: modprobe: can't change directory to '/lib/modules': No such file or directory
      /sbin/modprobe failed: 1
      snapshot: Required device-mapper target(s) not detected in your kernel.
      Run `lvcreate --help' for more information.
    E1017 09:19:48.682764       1 snapshot.go:242] error syncing 'openebs/snapshot-ccd9b76d-b16f-4a17-9809-0f769c0bc2f8': exit status 3, requeuing
    I1017 09:20:26.774560       1 lvmnode.go:274] Successfully synced 'openebs/redpanda-2'
    E1017 09:20:29.646872       1 lvm_util.go:501] lvm: could not create snapshot lvmvg/ccd9b76d-b16f-4a17-9809-0f769c0bc2f8 cmd [--snapshot --name ccd9b76d-b16f-4a17-9809-0f769c0bc2f8 --permission r /dev/lvmvg/pvc-d293d56b-45da-412d-ab62-fec20652e71b --size 107374182400b] error: modprobe: can't change directory to '/lib/modules': No such file or directory
      /sbin/modprobe failed: 1
      snapshot: Required device-mapper target(s) not detected in your kernel.
      Run `lvcreate --help' for more information.
    E1017 09:20:29.646928       1 snapshot.go:242] error syncing 'openebs/snapshot-ccd9b76d-b16f-4a17-9809-0f769c0bc2f8': exit status 3, requeuing
    I1017 09:21:26.746526       1 lvmnode.go:274] Successfully synced 'openebs/redpanda-2'
    E1017 09:21:51.571130       1 lvm_util.go:501] lvm: could not create snapshot lvmvg/ccd9b76d-b16f-4a17-9809-0f769c0bc2f8 cmd [--snapshot --name ccd9b76d-b16f-4a17-9809-0f769c0bc2f8 --permission r /dev/lvmvg/pvc-d293d56b-45da-412d-ab62-fec20652e71b --size 107374182400b] error: modprobe: can't change directory to '/lib/modules': No such file or directory
      /sbin/modprobe failed: 1
      snapshot: Required device-mapper target(s) not detected in your kernel.
      Run `lvcreate --help' for more information.
    E1017 09:21:51.571168       1 snapshot.go:242] error syncing 'openebs/snapshot-ccd9b76d-b16f-4a17-9809-0f769c0bc2f8': exit status 3, requeuing
    I1017 09:22:26.766557       1 lvmnode.go:274] Successfully synced 'openebs/redpanda-2'
    I1017 09:23:26.746535       1 lvmnode.go:274] Successfully synced 'openebs/redpanda-2'
    I1017 09:24:26.762518       1 lvmnode.go:274] Successfully synced 'openebs/redpanda-2'
    E1017 09:24:35.415262       1 lvm_util.go:501] lvm: could not create snapshot lvmvg/ccd9b76d-b16f-4a17-9809-0f769c0bc2f8 cmd [--snapshot --name ccd9b76d-b16f-4a17-9809-0f769c0bc2f8 --permission r /dev/lvmvg/pvc-d293d56b-45da-412d-ab62-fec20652e71b --size 107374182400b] error: modprobe: can't change directory to '/lib/modules': No such file or directory
      /sbin/modprobe failed: 1
      snapshot: Required device-mapper target(s) not detected in your kernel.
      Run `lvcreate --help' for more information.
    E1017 09:24:35.415304       1 snapshot.go:242] error syncing 'openebs/snapshot-ccd9b76d-b16f-4a17-9809-0f769c0bc2f8': exit status 3, requeuing
    I1017 09:25:26.754542       1 lvmnode.go:274] Successfully synced 'openebs/redpanda-2'
    I1017 09:26:26.770526       1 lvmnode.go:274] Successfully synced 'openebs/redpanda-2'
    I1017 09:27:26.778556       1 lvmnode.go:274] Successfully synced 'openebs/redpanda-2'
    
    • kubectl get pods -n kube-system
    NAME                                           READY   STATUS    RESTARTS   AGE
    metrics-server-86cbb8457f-sgxtn                1/1     Running   0          31d
    local-path-provisioner-5ff76fc89d-h5hr5        1/1     Running   0          31d
    kube-vip-ds-cp-hmkdq                           1/1     Running   0          31d
    coredns-7448499f4d-6f76n                       1/1     Running   0          31d
    kube-vip-ds-svc-l4ncf                          1/1     Running   0          3d
    kube-vip-ds-svc-qqtlj                          1/1     Running   0          3d
    kube-vip-ds-svc-vlpcs                          1/1     Running   0          3d
    kube-vip-ds-svc-szsb4                          1/1     Running   0          3d
    openebs-lvm-node-v8wpt                         2/2     Running   0          3d
    openebs-lvm-node-v8f6s                         2/2     Running   0          3d
    openebs-lvm-controller-0                       5/5     Running   0          3d
    openebs-lvm-node-rjtgl                         2/2     Running   0          3d
    kube-state-metrics-5f97897c99-b45mq            1/1     Running   0          3d
    openebs-lvm-node-5rh8l                         2/2     Running   0          3d
    kube-vip-ds-svc-kchn4                          1/1     Running   0          3d
    openebs-lvm-node-2jtn9                         2/2     Running   0          3d
    kube-vip-ds-svc-wp597                          1/1     Running   0          3d
    kube-vip-ds-svc-2qv26                          1/1     Running   0          3d
    openebs-lvm-node-2gb2m                         2/2     Running   0          3d
    openebs-lvm-node-mc5rj                         2/2     Running   0          3d
    cloud-provider-equinix-metal-7fb9654c9-2xzxc   1/1     Running   2          31d
    
    • kubectl get lvmvol -A -o yaml
    apiVersion: v1
    items:
    - apiVersion: local.openebs.io/v1alpha1
      kind: LVMVolume
      metadata:
        creationTimestamp: "2021-10-14T09:48:54Z"
        finalizers:
        - lvm.openebs.io/finalizer
        generation: 3
        labels:
          kubernetes.io/nodename: redpanda-2
        name: pvc-d293d56b-45da-412d-ab62-fec20652e71b
        namespace: openebs
        resourceVersion: "10188280"
        uid: 0f25c600-c69e-459e-8274-b7ed70081f0b
      spec:
        capacity: "107374182400"
        ownerNodeID: redpanda-2
        shared: "no"
        thinProvision: "no"
        vgPattern: ^lvmvg$
        volGroup: lvmvg
      status:
        state: Ready
    kind: List
    metadata:
      resourceVersion: ""
      selfLink: ""
    

    Anything else you would like to add:

• The openebs-lvm-controller pod is busylooping a lot. It probably shouldn't retry multiple times per second.
• The logs of the openebs-lvm-node suggest there might be a problem with some missing kernel module: lvm_util.go:501] lvm: could not create snapshot lvmvg/ccd9b76d-b16f-4a17-9809-0f769c0bc2f8 cmd [--snapshot --name ccd9b76d-b16f-4a17-9809-0f769c0bc2f8 --permission r /dev/lvmvg/pvc-d293d56b-45da-412d-ab62-fec20652e71b --size 107374182400b] error: modprobe: can't change directory to '/lib/modules': No such file or directory. Maybe this folder doesn't exist in the pod, it doesn't exist on the host, it's not mounted into the pod, or we need to have another kernel module around on the host?

    If it's some missing kernel feature, this might just need a bit more documentation and some more graceful error handling.
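
    If the dm-snapshot kernel module is indeed the missing piece, loading it on the host is the usual fix (a sketch for Ubuntu-like hosts; adjust for your distro):

    sudo modprobe dm-snapshot                                         # load the module now
    echo dm-snapshot | sudo tee /etc/modules-load.d/dm-snapshot.conf  # reload it after reboots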

    Environment:

    • LVM Driver version: 0.8.2
    • Kubernetes version (use kubectl version): v1.21.2+k3s1
    • Kubernetes installer & version: k3s
    • Cloud provider or hardware configuration: Equinix Metal
    • OS (e.g. from /etc/os-release): Ubuntu 20.04.3 LTS
  • Snapshots are not created


    What steps did you take and what happened: Followed this guide: https://github.com/openebs/lvm-localpv/blob/develop/docs/snapshot.md

    What did you expect to happen: Snapshot should be created

    The output of the following commands will help us better understand what's going on:

    E0501 15:06:50.777462 1 volume.go:270] Get snapshot failed err: lvmsnapshots.local.openebs.io "snapshot-44c1fe7d-f202-4af5-b56e-69081360e95e" not found
    E0501 15:06:50.780672 1 grpc.go:79] GRPC error: rpc error: code = Internal desc = failed to handle CreateSnapshotRequest for pvc-01f43051-83af-409f-922a-9a3653351aad: snapshot-44c1fe7d-f202-4af5-b56e-69081360e95e, {LVMSnapshot.local.openebs.io "snapshot-44c1fe7d-f202-4af5-b56e-69081360e95e" is invalid: [spec.capacity: Required value, spec.vgPattern: Required value]}

    Anything else you would like to add:

    I can create snapshots manually without any problems: lvcreate --size 1G --snapshot --name my-name /dev/tank/pvc-01f43051-83af-409f-922a-9a3653351aad. In the log from the controller I see this name for the snapshot: snapshot-44c1fe7d-f202-4af5-b56e-69081360e95e. When trying to manually create this snapshot I get this error: Names starting "snapshot" are reserved. Please choose a different LV name. Could this be the issue? How do I fix that?

    I have checked all the logs but I cannot find any useful information about why creating the snapshot fails.

  • Missing CSIStorageCapacity object

    Describe the problem/challenge you have: I did not see a CSIStorageCapacity object in kube-system when enabling storageCapacity in the CSIDriver.

    Environment:

    • LVM Driver version: 0.8.0
    • Kubernetes version (use kubectl version): 1.21.3
    • Kubernetes installer & version: kubeadm
    • other:

    kubectl get pods -n kube-system -l role=openebs-lvm
    NAME                       READY   STATUS    RESTARTS   AGE
    openebs-lvm-controller-0   5/5     Running   0          99m
    openebs-lvm-node-2txsf     2/2     Running   0          99m
    openebs-lvm-node-4mjxh     2/2     Running   0          99m
    openebs-lvm-node-kr5jb     2/2     Running   0          99m
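
    If storageCapacity is enabled on the CSIDriver, the created objects can be listed directly (CSIStorageCapacity is a namespaced storage.k8s.io resource):

    kubectl get csistoragecapacities -A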
  • CSIDriver is missing fsGroupPolicy


    What steps did you take and what happened: Created a new volume with fsGroup specified in the pod spec securityContext, but the permissions were root:root 0755.

    What did you expect to happen: Filesystem mounted with group set to GID specified in fsGroup.

    Anything else you would like to add:

    The CSIDriver is missing fsGroupPolicy: File
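
    A sketch of the expected object (fsGroupPolicy is a standard storage.k8s.io/v1 CSIDriver field; the driver name is the one used throughout this page):

    apiVersion: storage.k8s.io/v1
    kind: CSIDriver
    metadata:
      name: local.csi.openebs.io
    spec:
      # ...existing spec fields unchanged...
      fsGroupPolicy: File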

    Environment:

    • lvm-localpv version: 0.4.0
  • lvm-operator.yaml in release doesn't pin container version


    What steps did you take and what happened: I deployed according to the readme, which directs me to kubectl apply -f deploy/lvm-operator.yaml.

    What did you expect to happen: I checked out a specific tag/release (0.8.5 in that case). I expected the yaml file to pin that version.

    However, https://github.com/openebs/lvm-localpv/blob/lvm-localpv-0.8.5/deploy/lvm-operator.yaml#L1258 uses the ci image tag.

    I'd expect a release to pin the version used explicitly, or the release artifacts to include a rendered lvm-operator.yaml file for that specific release.
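
    Until the release artifacts pin the tag, one workaround is to rewrite the image tag while applying (a sketch; it assumes the only :ci suffix in the file is on the image line flagged above):

    curl -sL https://raw.githubusercontent.com/openebs/lvm-localpv/lvm-localpv-0.8.5/deploy/lvm-operator.yaml \
      | sed 's/:ci$/:0.8.5/' \
      | kubectl apply -f -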

  • Fix(scheduler): use SpaceWeighted as the default scheduler


    Why is this PR required? What issue does it fix?: Fix issue https://github.com/openebs/lvm-localpv/issues/188

    What this PR does?: Use SpaceWeighted as the default scheduler

    Does this PR require any upgrade changes?: No

    If the changes in this PR are manually verified, list down the scenarios covered:: 1. Install OpenEBS LVM. 2. Create an LVM StorageClass (use two K8s nodes for LVM provisioning). 3. Create a PVC that uses the SC above to provision an LVM volume, and make sure SpaceWeighted is used when scheduling the volume.

    Any additional information for your reviewer? : Mention if this PR is part of any design or a continuation of previous PRs

    Checklist:

    • [ ] Fixes #
    • [ ] PR Title follows the convention of <type>(<scope>): <subject>
    • [ ] Has the change log section been updated?
    • [ ] Commit has unit tests
    • [ ] Commit has integration tests
    • [ ] (Optional) Are upgrade changes included in this PR? If not, mention the issue/PR to track:
    • [ ] (Optional) If documentation changes are required, which issue on https://github.com/openebs/openebs-docs is used to track them:


  • When custom node Labels has been deleted, LVM LocalPV CSI driver will not schedule the PV to the node


    What steps did you take and what happened:

    • Currently, all custom node labels are registered as topologyKeys when the LVM-LocalPV driver restarts, and the CSINode topologyKeys keep all those keys even after the node labels change. When a custom node label has been deleted, the LVM LocalPV CSI driver will not schedule the PV to that node, and restarting the LVM-LocalPV driver DaemonSet becomes a hard requirement.

    What did you expect to happen:

    • This is not a reasonable way to register the topologyKeys, because it cannot anticipate which labels will be deleted.

    The output of the following commands will help us better understand what's going on:

    # kubectl logs -n kube-system openebs-lvm-controller-0  -c csi-provisioner
    
    E0619 07:43:46.123145       1 controller.go:984] error syncing claim "3ad3265d-8479-4485-a7ec-977b585afa19": failed to provision volume with StorageClass "local-lvm": error generating accessibility requirements: topology labels from selected node map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux node-role.kubernetes.io/master: node-role.kubernetes.io/node: openebs.io/nodename:node1] does not match topology keys from CSINode [beta.kubernetes.io/arch beta.kubernetes.io/os **test** kubernetes.io/arch kubernetes.io/hostname node-role.kubernetes.io/master node-role.kubernetes.io/node openebs.io/nodename]
    
    # kubectl get csinode node1 -o yaml
    ......
        topologyKeys:
        - beta.kubernetes.io/arch
        - beta.kubernetes.io/os
        - test
        - kubernetes.io/arch
        - kubernetes.io/hostname
        - kubernetes.io/os
        - node-role.kubernetes.io/master
        - node-role.kubernetes.io/node
        - openebs.io/nodename
    

    Describe the solution you'd like: Would it be better to register the topologyKeys from an env var rather than from all the custom node labels? When you need to change the topologyKeys, just modify the env var and restart the LVM-LocalPV driver (see the sketch below).
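
    A sketch of what that could look like in the node DaemonSet (ALLOWED_TOPOLOGIES is an illustrative variable name for the proposal, not a confirmed driver flag):

    env:
      - name: ALLOWED_TOPOLOGIES   # illustrative: only the listed labels would be registered as topologyKeys
        value: "kubernetes.io/hostname,openebs.io/nodename"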

  • feat(provisioning): add support for multiple vg to use for provisioning


    Why is this PR required? What issue does it fix?: See #17 for more details.

    What this PR does?: Since the volgroup param now represents a regex, the controller passes it to the node plugin by setting an additional field called VolGroupRegex on the lvmvolume resource. The node plugin controller then chooses a VG matching the provided regex and sets the VolGroup field on the lvmvolume. If multiple VGs match the regex, the node plugin controller chooses the one with the minimum free space (bin packing) that can still accommodate the volume capacity.

    Does this PR require any upgrade changes?:

    If the changes in this PR are manually verified, list down the scenarios covered:: Consider a k8s cluster having 4 nodes with VGs [lvmvg-a, lvmvg-b, lvmvg-a, xyzvg]. Configure the storage class by setting the volgroup parameter to lvmvg* (a regex denoting the lvmvg prefix). After creating a StatefulSet of size 4, we'll see each PVC get scheduled on the first 3 nodes (since xyzvg doesn't match the volgroup regex); a StorageClass sketch follows.
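
    For that scenario, the StorageClass carries the regex in its volgroup parameter (a sketch using the semantics this PR introduces):

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: openebs-lvmpv-regex
    parameters:
      storage: "lvm"
      volgroup: "lvmvg*"
    provisioner: local.csi.openebs.io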

    Any additional information for your reviewer? : Mention if this PR is part of any design or a continuation of previous PRs This pull request is dependent on #21. So, we need to close that first before closing this.

    Checklist:

    • [x] Fixes #17
    • [x] PR Title follows the convention of <type>(<scope>): <subject>
    • [ ] Has the change log section been updated?
    • [ ] Commit has unit tests
    • [ ] Commit has integration tests
    • [ ] (Optional) Are upgrade changes included in this PR? If not, mention the issue/PR to track:
    • [ ] (Optional) If documentation changes are required, which issue on https://github.com/openebs/openebs-docs is used to track them:
  • feat(snapshot): add snapshot support for LVM PV


    Why is this PR required? What issue does it fix?: This PR adds support for LVM snapshots to the lvm-localPV CSI driver. The snapshots created will be read-only (as opposed to the default read-write). Also, once snapshots are created for a volume, resize will not work for that volume, since LVM doesn't support that.

    To create a snapshot, create a snapshot class as given below and then create a VolumeSnapshot resource:

    kind: VolumeSnapshotClass
    apiVersion: snapshot.storage.k8s.io/v1
    metadata:
      name: lvm-localpv-snapclass
      annotations:
        snapshot.storage.kubernetes.io/is-default-class: "true"
    driver: local.csi.openebs.io
    deletionPolicy: Delete
    ---
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: lvm-localpv-snap
    spec:
      volumeSnapshotClassName: lvm-localpv-snapclass
      source:
        persistentVolumeClaimName: <pvc-name>
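
    After applying these, the snapshot objects can be inspected as follows (openebs is the namespace the LVM CRs live in, per the outputs earlier on this page):

    kubectl get volumesnapshot
    kubectl get lvmsnapshot -n openebs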
    

    What this PR does?:

    • adds LVMSnapshot CRDs
    • add snapshot controller to watch for LVMSnapshot CRs
    • adds the volumesnapshot related CRDs from storage.k8s.io to the deployment
    • use container images from k8s.gcr.io for CSI components

    Limitation: Volumes with snapshots cannot be resized, as LVM does not support online resize of origin volumes with a snapshot. ControllerExpandVolume will error out if the volume to be resized has any active snapshots.

    Does this PR require any upgrade changes?: dm-snapshot kernel module should be loaded for snapshot to work

    If the changes in this PR are manually verified, list down the scenarios covered::

    1. Snapshot creation
    2. Try to resize volume with snapshot (will error out the volume expansion)
    3. resize should work after snapshots are removed
    4. create multiple snapshots for the same volume

    Any additional information for your reviewer? : Mention if this PR is part of any design or a continuation of previous PRs

    Checklist:

    • [x] Fixes #10
    • [x] PR Title follows the convention of <type>(<scope>): <subject>
    • [x] Has the change log section been updated?
    • [ ] Commit has unit tests
    • [ ] Commit has integration tests
    • [ ] (Optional) Are upgrade changes included in this PR? If not, mention the issue/PR to track:
    • [ ] (Optional) If documentation changes are required, which issue on https://github.com/openebs/openebs-docs is used to track them:
  • fix(data engine): moving pkg/config into pkg/driver/config (#8)


    Signed-off-by: Oussama Salahouelhadj [email protected]

    Moving pkg/config into pkg/driver/config.

    • relinking import statement to the new config package path "pkg/driver/config" in the following files:
      • cmd/main.go
      • pkg/driver/driver.go
      • pkg/lvm/iolimiter.go
      • pkg/lvm/iolimiter_test.go

    Why is this PR required? What issue does it fix?: this PR is related to this issue: #8

    What this PR does?: changes the config package path from ...pkg/config to ...pkg/driver/config and fixes/relinks imports to the new path.

    Checklist:

    • [x] Fixes #8
    • [x] PR Title follows the convention of <type>(<scope>): <subject>
    • [ ] Has the change log section been updated?
    • [x] Commit has unit tests
    • [x] Commit has integration tests
  • Back-off restarting failed container


    What steps did you take and what happened: Just after kubectl apply -f xxx the container starts to run, but the pod goes into back-off and the pod log says "exec /csi-node-driver-registrar: exec format error".

    Environment: 3 EC2 instances on AWS, Ubuntu 20, Kubernetes 1.24.

  • Still connecting to unix:///var/lib/csi/sockets/pluginproxy/csi.sock


    The openebs-lvm-controller is running, but when I look at the log of this pod, it keeps saying "still connecting to unix:///var/lib/csi/sockets/pluginproxy/csi.sock". I hit this issue on a physical machine; it doesn't occur on a virtual machine. Could you help me? Thanks.

  • LVMVolume specifies fields with boolean meaning as string


    Describe the problem/challenge you have: Due to some preprocessing we're doing on rendered YAML files, I discovered an issue with that process in combination with the Helm charts provided in this repository.

    Is there a reason the fields shared and thinProvision of the LVMVolume resource are strings accepting yes/no instead of booleans? This might be confusing when creating them manually, as yes/no without quotes is automatically converted to a boolean in YAML 1.1 (see the example below).
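
    For example, under YAML 1.1 coercion rules:

    # unquoted: YAML 1.1 parses this as the boolean true, which a string-typed field rejects
    shared: yes
    # quoted: parsed as the string "yes", which the current CRD expects
    shared: "yes"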

    Describe the solution you'd like Change the type of the fields shared and thinProvision of the LVMVolume resource to boolean

    Environment:

    • LVM Driver version: 1.0.0
  • chore: bump go version to 1.19


    Why is this PR required? What issue does it fix?:

    As per https://github.com/openebs/lvm-localpv/pull/212#issuecomment-1323062790 there was a request to bump the golang version.

    What this PR does?:

    Bump the golang version to 1.19

    Does this PR require any upgrade changes?:

    Not that I am aware of.

    Checklist:

    • [ ] Fixes #
    • [ ] PR Title follows the convention of <type>(<scope>): <subject>
    • [ ] Has the change log section been updated?
    • [ ] Commit has unit tests
    • [ ] Commit has integration tests
    • [ ] (Optional) Are upgrade changes included in this PR? If not, mention the issue/PR to track:
    • [ ] (Optional) If documentation changes are required, which issue on https://github.com/openebs/openebs-docs is used to track them:
  • kubernetes `v1.25.4` LVM: unable to mount `xfs` File System


    What steps did you take and what happened: Hey!

    I successfully installed the OpenEBS Operator on my Kubernetes cluster, recently provisioned with Kubespray. I also successfully created 2 StorageClasses, for the ext4 and xfs file systems.

    I'm using the xfs file system for MongoDB, as recommended, and the default ext4 for the other PVCs.

    The ext4 PVC was successfully created and mounted to the desired pod, but the xfs PVC is failing to mount.

    What did you expect to happen: PVC with xfs file system successfully created and mounted to desired mongodb pod

    The output of the following commands will help us better understand what's going on: (Pasting long output into a GitHub gist or other Pastebin is fine.)

    • kubectl logs -f openebs-lvm-node- -n lvm -c openebs-lvm-plugin
    I1121 18:59:58.749533       1 main.go:136] LVM Driver Version :- develop-098a38e:07-06-2022 - commit :- 098a38ea83d2f554ce060c5ba449c345e5a46c8c
    I1121 18:59:58.749603       1 main.go:137] DriverName: local.csi.openebs.io Plugin: agent EndPoint: unix:///plugin/csi.sock NodeID: mic101-06 SetIOLimits: false ContainerRuntime: containerd RIopsPerGB: [] WIopsPerGB: [] RBpsPerGB: [] WBpsPerGB: []
    I1121 18:59:58.749625       1 driver.go:48] enabling volume access mode: SINGLE_NODE_WRITER
    I1121 18:59:58.750463       1 grpc.go:190] Listening for connections on address: &net.UnixAddr{Name:"//plugin/csi.sock", Net:"unix"}
    I1121 18:59:58.751744       1 builder.go:83] Creating event broadcaster
    I1121 18:59:58.751842       1 builder.go:89] Creating lvm volume controller object
    I1121 18:59:58.751875       1 builder.go:99] Adding Event handler functions for lvm volume controller
    I1121 18:59:58.751909       1 start.go:70] Starting informer for lvm volume controller
    I1121 18:59:58.751916       1 start.go:72] Starting Lvm volume controller
    I1121 18:59:58.751922       1 volume.go:294] Starting Vol controller
    I1121 18:59:58.751925       1 volume.go:297] Waiting for informer caches to sync
    I1121 18:59:58.752506       1 builder.go:83] Creating event broadcaster
    I1121 18:59:58.752572       1 builder.go:89] Creating lvm snapshot controller object
    I1121 18:59:58.752589       1 builder.go:98] Adding Event handler functions for lvm snapshot controller
    I1121 18:59:58.752855       1 start.go:70] Starting informer for lvm snapshot controller
    I1121 18:59:58.752870       1 start.go:72] Starting Lvm snapshot controller
    I1121 18:59:58.752876       1 snapshot.go:194] Starting Snap controller
    I1121 18:59:58.752880       1 snapshot.go:197] Waiting for informer caches to sync
    I1121 18:59:58.764412       1 builder.go:93] Creating lvm node controller object
    I1121 18:59:58.764448       1 builder.go:105] Adding Event handler functions for lvm node controller
    I1121 18:59:58.764469       1 start.go:95] Starting informer for lvm node controller
    I1121 18:59:58.764484       1 start.go:98] Starting Lvm node controller
    I1121 18:59:58.764493       1 lvmnode.go:222] Starting Node controller
    I1121 18:59:58.764498       1 lvmnode.go:225] Waiting for informer caches to sync
    I1121 18:59:58.767065       1 lvmnode.go:152] Got add event for lvm node lvm/mic101-06
    I1121 18:59:58.852084       1 volume.go:301] Starting Vol workers
    I1121 18:59:58.852140       1 volume.go:308] Started Vol workers
    I1121 18:59:58.854714       1 snapshot.go:201] Starting Snap workers
    I1121 18:59:58.854743       1 snapshot.go:208] Started Snap workers
    I1121 18:59:58.864631       1 lvmnode.go:230] Starting Node workers
    I1121 18:59:58.864653       1 lvmnode.go:237] Started Node workers
    I1121 18:59:58.900721       1 lvmnode.go:305] Successfully synced 'lvm/mic101-06'
    I1121 18:59:59.346070       1 grpc.go:72] GRPC call: /csi.v1.Identity/GetPluginInfo requests {}
    I1121 18:59:59.347186       1 grpc.go:81] GRPC response: {"name":"local.csi.openebs.io","vendor_version":"develop-098a38e:07-06-2022"}
    I1121 19:00:00.318319       1 grpc.go:72] GRPC call: /csi.v1.Node/NodeGetInfo requests {}
    I1121 19:00:00.322318       1 grpc.go:81] GRPC response: {"accessible_topology":{"segments":{"kubernetes.io/hostname":"mic101-06","openebs.io/nodename":"mic101-06"}},"node_id":"mic101-06"}
    I1121 19:00:58.888204       1 lvmnode.go:305] Successfully synced 'lvm/mic101-06'
    I1121 19:01:19.070229       1 volume.go:84] Got add event for Vol pvc-b4e5b534-300f-460c-ba01-92e312ce6b40
    I1121 19:01:19.070245       1 volume.go:85] lvmvolume object to be enqueued by Add handler: &{{LVMVolume local.openebs.io/v1alpha1} {pvc-b4e5b534-300f-460c-ba01-92e312ce6b40  lvm  f95e7f84-ce74-484d-b4d5-d81352d885d3 171987511 1 2022-11-21 19:01:19 +0000 UTC <nil> <nil> map[] map[] [] []  [{lvm-driver Update local.openebs.io/v1alpha1 2022-11-21 19:01:19 +0000 UTC FieldsV1 {"f:spec":{".":{},"f:capacity":{},"f:ownerNodeID":{},"f:shared":{},"f:thinProvision":{},"f:vgPattern":{},"f:volGroup":{}},"f:status":{".":{},"f:state":{}}}}]} {mic101-06  ^lvmvg$ 21474836480 no no} {Pending <nil>}}
    I1121 19:01:19.070363       1 volume.go:54] Getting lvmvol object name:pvc-b4e5b534-300f-460c-ba01-92e312ce6b40, ns:lvm from cache
    I1121 19:01:19.145099       1 lvm_util.go:287] lvm: created volume lvmvg/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40
    I1121 19:01:19.154097       1 volume.go:366] Successfully synced 'lvm/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40'
    I1121 19:01:20.428406       1 grpc.go:72] GRPC call: /csi.v1.Node/NodePublishVolume requests {"target_path":"/var/lib/kubelet/pods/d47a5fc6-1bd9-45a8-a61c-c97473092a9d/volumes/kubernetes.io~csi/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/ephemeral":"false","csi.storage.k8s.io/pod.name":"mongodb-0","csi.storage.k8s.io/pod.namespace":"mongodb","csi.storage.k8s.io/pod.uid":"d47a5fc6-1bd9-45a8-a61c-c97473092a9d","csi.storage.k8s.io/serviceAccount.name":"mongodb","openebs.io/cas-type":"localpv-lvm","openebs.io/volgroup":"lvmvg","storage.kubernetes.io/csiProvisionerIdentity":"1669057199601-8081-local.csi.openebs.io"},"volume_id":"pvc-b4e5b534-300f-460c-ba01-92e312ce6b40"}
    I1121 19:01:20.448698       1 mount_linux.go:366] Disk "/dev/lvmvg/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40" appears to be unformatted, attempting to format as type: "xfs" with options: [/dev/lvmvg/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40]
    I1121 19:01:20.545574       1 mount_linux.go:376] Disk successfully formatted (mkfs): xfs - /dev/lvmvg/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40 /var/lib/kubelet/pods/d47a5fc6-1bd9-45a8-a61c-c97473092a9d/volumes/kubernetes.io~csi/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40/mount
    E1121 19:01:20.549099       1 mount_linux.go:150] Mount failed: exit status 32
    Mounting command: mount
    Mounting arguments: -t xfs -o defaults /dev/lvmvg/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40 /var/lib/kubelet/pods/d47a5fc6-1bd9-45a8-a61c-c97473092a9d/volumes/kubernetes.io~csi/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40/mount
    Output: mount: /var/lib/kubelet/pods/d47a5fc6-1bd9-45a8-a61c-c97473092a9d/volumes/kubernetes.io~csi/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40/mount: wrong fs type, bad option, bad superblock on /dev/mapper/lvmvg-pvc--b4e5b534--300f--460c--ba01--92e312ce6b40, missing codepage or helper program, or other error.
    
    E1121 19:01:20.549167       1 mount.go:72] lvm: failed to mount volume /dev/lvmvg/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40 [xfs] to /var/lib/kubelet/pods/d47a5fc6-1bd9-45a8-a61c-c97473092a9d/volumes/kubernetes.io~csi/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40/mount, error mount failed: exit status 32
    Mounting command: mount
    Mounting arguments: -t xfs -o defaults /dev/lvmvg/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40 /var/lib/kubelet/pods/d47a5fc6-1bd9-45a8-a61c-c97473092a9d/volumes/kubernetes.io~csi/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40/mount
    Output: mount: /var/lib/kubelet/pods/d47a5fc6-1bd9-45a8-a61c-c97473092a9d/volumes/kubernetes.io~csi/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40/mount: wrong fs type, bad option, bad superblock on /dev/mapper/lvmvg-pvc--b4e5b534--300f--460c--ba01--92e312ce6b40, missing codepage or helper program, or other error.
    E1121 19:01:20.549231       1 grpc.go:79] GRPC error: rpc error: code = Internal desc = failed to format and mount the volume error: mount failed: exit status 32
    Mounting command: mount
    Mounting arguments: -t xfs -o defaults /dev/lvmvg/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40 /var/lib/kubelet/pods/d47a5fc6-1bd9-45a8-a61c-c97473092a9d/volumes/kubernetes.io~csi/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40/mount
    Output: mount: /var/lib/kubelet/pods/d47a5fc6-1bd9-45a8-a61c-c97473092a9d/volumes/kubernetes.io~csi/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40/mount: wrong fs type, bad option, bad superblock on /dev/mapper/lvmvg-pvc--b4e5b534--300f--460c--ba01--92e312ce6b40, missing codepage or helper program, or other error.
    I1121 19:01:21.130985       1 grpc.go:72] GRPC call: /csi.v1.Node/NodePublishVolume requests {"target_path":"/var/lib/kubelet/pods/d47a5fc6-1bd9-45a8-a61c-c97473092a9d/volumes/kubernetes.io~csi/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/ephemeral":"false","csi.storage.k8s.io/pod.name":"mongodb-0","csi.storage.k8s.io/pod.namespace":"mongodb","csi.storage.k8s.io/pod.uid":"d47a5fc6-1bd9-45a8-a61c-c97473092a9d","csi.storage.k8s.io/serviceAccount.name":"mongodb","openebs.io/cas-type":"localpv-lvm","openebs.io/volgroup":"lvmvg","storage.kubernetes.io/csiProvisionerIdentity":"1669057199601-8081-local.csi.openebs.io"},"volume_id":"pvc-b4e5b534-300f-460c-ba01-92e312ce6b40"}
    E1121 19:01:21.173108       1 mount_linux.go:150] Mount failed: exit status 32
    Mounting command: mount
    Mounting arguments: -t xfs -o defaults /dev/lvmvg/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40 /var/lib/kubelet/pods/d47a5fc6-1bd9-45a8-a61c-c97473092a9d/volumes/kubernetes.io~csi/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40/mount
    Output: mount: /var/lib/kubelet/pods/d47a5fc6-1bd9-45a8-a61c-c97473092a9d/volumes/kubernetes.io~csi/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40/mount: wrong fs type, bad option, bad superblock on /dev/mapper/lvmvg-pvc--b4e5b534--300f--460c--ba01--92e312ce6b40, missing codepage or helper program, or other error.
    
    E1121 19:01:21.173140       1 mount.go:72] lvm: failed to mount volume /dev/lvmvg/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40 [xfs] to /var/lib/kubelet/pods/d47a5fc6-1bd9-45a8-a61c-c97473092a9d/volumes/kubernetes.io~csi/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40/mount, error mount failed: exit status 32
    Mounting command: mount
    Mounting arguments: -t xfs -o defaults /dev/lvmvg/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40 /var/lib/kubelet/pods/d47a5fc6-1bd9-45a8-a61c-c97473092a9d/volumes/kubernetes.io~csi/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40/mount
    Output: mount: /var/lib/kubelet/pods/d47a5fc6-1bd9-45a8-a61c-c97473092a9d/volumes/kubernetes.io~csi/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40/mount: wrong fs type, bad option, bad superblock on /dev/mapper/lvmvg-pvc--b4e5b534--300f--460c--ba01--92e312ce6b40, missing codepage or helper program, or other error.
    E1121 19:01:21.173163       1 grpc.go:79] GRPC error: rpc error: code = Internal desc = failed to format and mount the volume error: mount failed: exit status 32
    Mounting command: mount
    Mounting arguments: -t xfs -o defaults /dev/lvmvg/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40 /var/lib/kubelet/pods/d47a5fc6-1bd9-45a8-a61c-c97473092a9d/volumes/kubernetes.io~csi/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40/mount
    Output: mount: /var/lib/kubelet/pods/d47a5fc6-1bd9-45a8-a61c-c97473092a9d/volumes/kubernetes.io~csi/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40/mount: wrong fs type, bad option, bad superblock on /dev/mapper/lvmvg-pvc--b4e5b534--300f--460c--ba01--92e312ce6b40, missing codepage or helper program, or other error.
    I1121 19:01:22.238272       1 grpc.go:72] GRPC call: /csi.v1.Node/NodePublishVolume requests {"target_path":"/var/lib/kubelet/pods/d47a5fc6-1bd9-45a8-a61c-c97473092a9d/volumes/kubernetes.io~csi/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/ephemeral":"false","csi.storage.k8s.io/pod.name":"mongodb-0","csi.storage.k8s.io/pod.namespace":"mongodb","csi.storage.k8s.io/pod.uid":"d47a5fc6-1bd9-45a8-a61c-c97473092a9d","csi.storage.k8s.io/serviceAccount.name":"mongodb","openebs.io/cas-type":"localpv-lvm","openebs.io/volgroup":"lvmvg","storage.kubernetes.io/csiProvisionerIdentity":"1669057199601-8081-local.csi.openebs.io"},"volume_id":"pvc-b4e5b534-300f-460c-ba01-92e312ce6b40"}
    E1121 19:01:22.279818       1 mount_linux.go:150] Mount failed: exit status 32
    Mounting command: mount
    Mounting arguments: -t xfs -o defaults /dev/lvmvg/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40 /var/lib/kubelet/pods/d47a5fc6-1bd9-45a8-a61c-c97473092a9d/volumes/kubernetes.io~csi/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40/mount
    Output: mount: /var/lib/kubelet/pods/d47a5fc6-1bd9-45a8-a61c-c97473092a9d/volumes/kubernetes.io~csi/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40/mount: wrong fs type, bad option, bad superblock on /dev/mapper/lvmvg-pvc--b4e5b534--300f--460c--ba01--92e312ce6b40, missing codepage or helper program, or other error.
    
    E1121 19:01:22.279848       1 mount.go:72] lvm: failed to mount volume /dev/lvmvg/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40 [xfs] to /var/lib/kubelet/pods/d47a5fc6-1bd9-45a8-a61c-c97473092a9d/volumes/kubernetes.io~csi/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40/mount, error mount failed: exit status 32
    Mounting command: mount
    Mounting arguments: -t xfs -o defaults /dev/lvmvg/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40 /var/lib/kubelet/pods/d47a5fc6-1bd9-45a8-a61c-c97473092a9d/volumes/kubernetes.io~csi/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40/mount
    Output: mount: /var/lib/kubelet/pods/d47a5fc6-1bd9-45a8-a61c-c97473092a9d/volumes/kubernetes.io~csi/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40/mount: wrong fs type, bad option, bad superblock on /dev/mapper/lvmvg-pvc--b4e5b534--300f--460c--ba01--92e312ce6b40, missing codepage or helper program, or other error.
    E1121 19:01:22.279891       1 grpc.go:79] GRPC error: rpc error: code = Internal desc = failed to format and mount the volume error: mount failed: exit status 32
    Mounting command: mount
    Mounting arguments: -t xfs -o defaults /dev/lvmvg/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40 /var/lib/kubelet/pods/d47a5fc6-1bd9-45a8-a61c-c97473092a9d/volumes/kubernetes.io~csi/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40/mount
    Output: mount: /var/lib/kubelet/pods/d47a5fc6-1bd9-45a8-a61c-c97473092a9d/volumes/kubernetes.io~csi/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40/mount: wrong fs type, bad option, bad superblock on /dev/mapper/lvmvg-pvc--b4e5b534--300f--460c--ba01--92e312ce6b40, missing codepage or helper program, or other error.
    I1121 19:01:24.358032       1 grpc.go:72] GRPC call: /csi.v1.Node/NodePublishVolume requests {"target_path":"/var/lib/kubelet/pods/d47a5fc6-1bd9-45a8-a61c-c97473092a9d/volumes/kubernetes.io~csi/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/ephemeral":"false","csi.storage.k8s.io/pod.name":"mongodb-0","csi.storage.k8s.io/pod.namespace":"mongodb","csi.storage.k8s.io/pod.uid":"d47a5fc6-1bd9-45a8-a61c-c97473092a9d","csi.storage.k8s.io/serviceAccount.name":"mongodb","openebs.io/cas-type":"localpv-lvm","openebs.io/volgroup":"lvmvg","storage.kubernetes.io/csiProvisionerIdentity":"1669057199601-8081-local.csi.openebs.io"},"volume_id":"pvc-b4e5b534-300f-460c-ba01-92e312ce6b40"}
    E1121 19:01:24.399898       1 mount_linux.go:150] Mount failed: exit status 32
    Mounting command: mount
    Mounting arguments: -t xfs -o defaults /dev/lvmvg/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40 /var/lib/kubelet/pods/d47a5fc6-1bd9-45a8-a61c-c97473092a9d/volumes/kubernetes.io~csi/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40/mount
    Output: mount: /var/lib/kubelet/pods/d47a5fc6-1bd9-45a8-a61c-c97473092a9d/volumes/kubernetes.io~csi/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40/mount: wrong fs type, bad option, bad superblock on /dev/mapper/lvmvg-pvc--b4e5b534--300f--460c--ba01--92e312ce6b40, missing codepage or helper program, or other error.
    
    E1121 19:01:24.399936       1 mount.go:72] lvm: failed to mount volume /dev/lvmvg/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40 [xfs] to /var/lib/kubelet/pods/d47a5fc6-1bd9-45a8-a61c-c97473092a9d/volumes/kubernetes.io~csi/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40/mount, error mount failed: exit status 32
    Mounting command: mount
    Mounting arguments: -t xfs -o defaults /dev/lvmvg/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40 /var/lib/kubelet/pods/d47a5fc6-1bd9-45a8-a61c-c97473092a9d/volumes/kubernetes.io~csi/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40/mount
    Output: mount: /var/lib/kubelet/pods/d47a5fc6-1bd9-45a8-a61c-c97473092a9d/volumes/kubernetes.io~csi/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40/mount: wrong fs type, bad option, bad superblock on /dev/mapper/lvmvg-pvc--b4e5b534--300f--460c--ba01--92e312ce6b40, missing codepage or helper program, or other error.
    E1121 19:01:24.399960       1 grpc.go:79] GRPC error: rpc error: code = Internal desc = failed to format and mount the volume error: mount failed: exit status 32
    Mounting command: mount
    Mounting arguments: -t xfs -o defaults /dev/lvmvg/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40 /var/lib/kubelet/pods/d47a5fc6-1bd9-45a8-a61c-c97473092a9d/volumes/kubernetes.io~csi/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40/mount
    Output: mount: /var/lib/kubelet/pods/d47a5fc6-1bd9-45a8-a61c-c97473092a9d/volumes/kubernetes.io~csi/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40/mount: wrong fs type, bad option, bad superblock on /dev/mapper/lvmvg-pvc--b4e5b534--300f--460c--ba01--92e312ce6b40, missing codepage or helper program, or other error.
    I1121 19:01:28.482745       1 grpc.go:72] GRPC call: /csi.v1.Node/NodePublishVolume requests {"target_path":"/var/lib/kubelet/pods/d47a5fc6-1bd9-45a8-a61c-c97473092a9d/volumes/kubernetes.io~csi/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/ephemeral":"false","csi.storage.k8s.io/pod.name":"mongodb-0","csi.storage.k8s.io/pod.namespace":"mongodb","csi.storage.k8s.io/pod.uid":"d47a5fc6-1bd9-45a8-a61c-c97473092a9d","csi.storage.k8s.io/serviceAccount.name":"mongodb","openebs.io/cas-type":"localpv-lvm","openebs.io/volgroup":"lvmvg","storage.kubernetes.io/csiProvisionerIdentity":"1669057199601-8081-local.csi.openebs.io"},"volume_id":"pvc-b4e5b534-300f-460c-ba01-92e312ce6b40"}
    E1121 19:01:28.524674       1 mount_linux.go:150] Mount failed: exit status 32
    Mounting command: mount
    Mounting arguments: -t xfs -o defaults /dev/lvmvg/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40 /var/lib/kubelet/pods/d47a5fc6-1bd9-45a8-a61c-c97473092a9d/volumes/kubernetes.io~csi/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40/mount
    Output: mount: /var/lib/kubelet/pods/d47a5fc6-1bd9-45a8-a61c-c97473092a9d/volumes/kubernetes.io~csi/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40/mount: wrong fs type, bad option, bad superblock on /dev/mapper/lvmvg-pvc--b4e5b534--300f--460c--ba01--92e312ce6b40, missing codepage or helper program, or other error.
    
    E1121 19:01:28.524711       1 mount.go:72] lvm: failed to mount volume /dev/lvmvg/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40 [xfs] to /var/lib/kubelet/pods/d47a5fc6-1bd9-45a8-a61c-c97473092a9d/volumes/kubernetes.io~csi/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40/mount, error mount failed: exit status 32
    Mounting command: mount
    Mounting arguments: -t xfs -o defaults /dev/lvmvg/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40 /var/lib/kubelet/pods/d47a5fc6-1bd9-45a8-a61c-c97473092a9d/volumes/kubernetes.io~csi/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40/mount
    Output: mount: /var/lib/kubelet/pods/d47a5fc6-1bd9-45a8-a61c-c97473092a9d/volumes/kubernetes.io~csi/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40/mount: wrong fs type, bad option, bad superblock on /dev/mapper/lvmvg-pvc--b4e5b534--300f--460c--ba01--92e312ce6b40, missing codepage or helper program, or other error.
    E1121 19:01:28.524734       1 grpc.go:79] GRPC error: rpc error: code = Internal desc = failed to format and mount the volume error: mount failed: exit status 32
    Mounting command: mount
    Mounting arguments: -t xfs -o defaults /dev/lvmvg/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40 /var/lib/kubelet/pods/d47a5fc6-1bd9-45a8-a61c-c97473092a9d/volumes/kubernetes.io~csi/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40/mount
    Output: mount: /var/lib/kubelet/pods/d47a5fc6-1bd9-45a8-a61c-c97473092a9d/volumes/kubernetes.io~csi/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40/mount: wrong fs type, bad option, bad superblock on /dev/mapper/lvmvg-pvc--b4e5b534--300f--460c--ba01--92e312ce6b40, missing codepage or helper program, or other error.
    [... the identical NodePublishVolume -> "Mount failed: exit status 32" cycle repeats at 19:01:36 and 19:01:52 ...]
    I1121 19:01:58.902968       1 lvmnode.go:109] lvm node controller: node volume groups updated current=[{Name:appvg UUID:5gc1Kp-y6uY-d5Ks-XToJ-c0c1-ie3K-Vuwo5G Size:{i:{value:214744170496 scale:0} d:{Dec:<nil>} s:204796Mi Format:BinarySI} Free:{i:{value:32208060416 scale:0} d:{Dec:<nil>} s:30716Mi Format:BinarySI} LVCount:2 PVCount:1 MaxLV:0 MaxPV:0 SnapCount:0 MissingPVCount:0 MetadataCount:1 MetadataUsedCount:1 MetadataFree:{i:{value:520192 scale:0} d:{Dec:<nil>} s:508Ki Format:BinarySI} MetadataSize:{i:{value:1044480 scale:0} d:{Dec:<nil>} s:1020Ki Format:BinarySI} Permission:0 AllocationPolicy:0} {Name:centos UUID:ftZc0y-rq2X-AUi6-R6P7-Dp98-tb2Z-g5c9Vg Size:{i:{value:61845012480 scale:0} d:{Dec:<nil>} s:58980Mi Format:BinarySI} Free:{i:{value:4194304 scale:0} d:{Dec:<nil>} s:4Mi Format:BinarySI} LVCount:3 PVCount:1 MaxLV:0 MaxPV:0 SnapCount:0 MissingPVCount:0 MetadataCount:1 MetadataUsedCount:1 MetadataFree:{i:{value:519680 scale:0} d:{Dec:<nil>} s:519680 Format:DecimalSI} MetadataSize:{i:{value:1044480 scale:0} d:{Dec:<nil>} s:1020Ki Format:BinarySI} Permission:0 AllocationPolicy:0} {Name:lvmvg UUID:uJlqG9-SzzU-6O5j-MMlt-uVjq-AHFi-BX3hoe Size:{i:{value:1073737629696 scale:0} d:{Dec:<nil>} s:1023996Mi Format:BinarySI} Free:{i:{value:1073737629696 scale:0} d:{Dec:<nil>} s:1023996Mi Format:BinarySI} LVCount:0 PVCount:1 MaxLV:0 MaxPV:0 SnapCount:0 MissingPVCount:0 MetadataCount:1 MetadataUsedCount:1 MetadataFree:{i:{value:520704 scale:0} d:{Dec:<nil>} s:520704 Format:DecimalSI} MetadataSize:{i:{value:1044480 scale:0} d:{Dec:<nil>} s:1020Ki Format:BinarySI} Permission:0 AllocationPolicy:0}], required=[{Name:appvg UUID:5gc1Kp-y6uY-d5Ks-XToJ-c0c1-ie3K-Vuwo5G Size:{i:{value:214744170496 scale:0} d:{Dec:<nil>} s: Format:BinarySI} Free:{i:{value:32208060416 scale:0} d:{Dec:<nil>} s: Format:BinarySI} LVCount:2 PVCount:1 MaxLV:0 MaxPV:0 SnapCount:0 MissingPVCount:0 MetadataCount:1 MetadataUsedCount:1 MetadataFree:{i:{value:520192 scale:0} d:{Dec:<nil>} s: Format:BinarySI} MetadataSize:{i:{value:1044480 scale:0} d:{Dec:<nil>} s: Format:BinarySI} Permission:0 AllocationPolicy:0} {Name:centos UUID:ftZc0y-rq2X-AUi6-R6P7-Dp98-tb2Z-g5c9Vg Size:{i:{value:61845012480 scale:0} d:{Dec:<nil>} s: Format:BinarySI} Free:{i:{value:4194304 scale:0} d:{Dec:<nil>} s: Format:BinarySI} LVCount:3 PVCount:1 MaxLV:0 MaxPV:0 SnapCount:0 MissingPVCount:0 MetadataCount:1 MetadataUsedCount:1 MetadataFree:{i:{value:519680 scale:0} d:{Dec:<nil>} s: Format:BinarySI} MetadataSize:{i:{value:1044480 scale:0} d:{Dec:<nil>} s: Format:BinarySI} Permission:0 AllocationPolicy:0} {Name:lvmvg UUID:uJlqG9-SzzU-6O5j-MMlt-uVjq-AHFi-BX3hoe Size:{i:{value:1073737629696 scale:0} d:{Dec:<nil>} s: Format:BinarySI} Free:{i:{value:1052262793216 scale:0} d:{Dec:<nil>} s: Format:BinarySI} LVCount:1 PVCount:1 MaxLV:0 MaxPV:0 SnapCount:0 MissingPVCount:0 MetadataCount:1 MetadataUsedCount:1 MetadataFree:{i:{value:520192 scale:0} d:{Dec:<nil>} s: Format:BinarySI} MetadataSize:{i:{value:1044480 scale:0} d:{Dec:<nil>} s: Format:BinarySI} Permission:0 AllocationPolicy:0}]
    I1121 19:01:58.903087       1 lvmnode.go:119] lvm node controller: updating node object with &{TypeMeta:{Kind:LVMNode APIVersion:local.openebs.io/v1alpha1} ObjectMeta:{Name:mic101-06 GenerateName: Namespace:lvm SelfLink: UID:be51d8cb-b50b-45e1-84e8-4d0779af27d7 ResourceVersion:171986177 Generation:3 CreationTimestamp:2022-11-21 18:11:49 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[] OwnerReferences:[{APIVersion:v1 Kind:Node Name:mic101-06 UID:a6d194c4-db2a-44ac-95c5-a4a2bd457659 Controller:0xc000134c08 BlockOwnerDeletion:<nil>}] Finalizers:[] ClusterName: ManagedFields:[{Manager:lvm-driver Operation:Update APIVersion:local.openebs.io/v1alpha1 Time:2022-11-21 18:57:49 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"a6d194c4-db2a-44ac-95c5-a4a2bd457659\"}":{}}},"f:volumeGroups":{}}}]} VolumeGroups:[{Name:appvg UUID:5gc1Kp-y6uY-d5Ks-XToJ-c0c1-ie3K-Vuwo5G Size:{i:{value:214744170496 scale:0} d:{Dec:<nil>} s: Format:BinarySI} Free:{i:{value:32208060416 scale:0} d:{Dec:<nil>} s: Format:BinarySI} LVCount:2 PVCount:1 MaxLV:0 MaxPV:0 SnapCount:0 MissingPVCount:0 MetadataCount:1 MetadataUsedCount:1 MetadataFree:{i:{value:520192 scale:0} d:{Dec:<nil>} s: Format:BinarySI} MetadataSize:{i:{value:1044480 scale:0} d:{Dec:<nil>} s: Format:BinarySI} Permission:0 AllocationPolicy:0} {Name:centos UUID:ftZc0y-rq2X-AUi6-R6P7-Dp98-tb2Z-g5c9Vg Size:{i:{value:61845012480 scale:0} d:{Dec:<nil>} s: Format:BinarySI} Free:{i:{value:4194304 scale:0} d:{Dec:<nil>} s: Format:BinarySI} LVCount:3 PVCount:1 MaxLV:0 MaxPV:0 SnapCount:0 MissingPVCount:0 MetadataCount:1 MetadataUsedCount:1 MetadataFree:{i:{value:519680 scale:0} d:{Dec:<nil>} s: Format:BinarySI} MetadataSize:{i:{value:1044480 scale:0} d:{Dec:<nil>} s: Format:BinarySI} Permission:0 AllocationPolicy:0} {Name:lvmvg UUID:uJlqG9-SzzU-6O5j-MMlt-uVjq-AHFi-BX3hoe Size:{i:{value:1073737629696 scale:0} d:{Dec:<nil>} s: Format:BinarySI} Free:{i:{value:1052262793216 scale:0} d:{Dec:<nil>} s: Format:BinarySI} LVCount:1 PVCount:1 MaxLV:0 MaxPV:0 SnapCount:0 MissingPVCount:0 MetadataCount:1 MetadataUsedCount:1 MetadataFree:{i:{value:520192 scale:0} d:{Dec:<nil>} s: Format:BinarySI} MetadataSize:{i:{value:1044480 scale:0} d:{Dec:<nil>} s: Format:BinarySI} Permission:0 AllocationPolicy:0}]}
    I1121 19:01:58.911452       1 lvmnode.go:164] Got update event for lvm node lvm/mic101-06
    I1121 19:01:58.911482       1 lvmnode.go:123] lvm node controller: updated node object lvm/mic101-06
    I1121 19:01:58.911513       1 lvmnode.go:305] Successfully synced 'lvm/mic101-06'
    I1121 19:01:58.935029       1 lvmnode.go:305] Successfully synced 'lvm/mic101-06'
    [... the same mount retry cycle repeats with increasing back-off at 19:02:24, 19:03:28, 19:05:30, 19:07:32, 19:09:35 and 19:11:37, interleaved with periodic "Got update event" / "Successfully synced 'lvm/mic101-06'" messages from the lvm node controller ...]
    
    
    • kubectl get pods -n lvm
    NAME                       READY   STATUS    RESTARTS   AGE
    openebs-lvm-controller-0   5/5     Running   0          15m
    openebs-lvm-node-cc4lh     2/2     Running   0          15m
    openebs-lvm-node-grj6p     2/2     Running   0          15m
    openebs-lvm-node-mp9f6     2/2     Running   0          15m
    openebs-lvm-node-p8nmh     2/2     Running   0          15m
    openebs-lvm-node-t7nvj     2/2     Running   0          15m
    openebs-lvm-node-xfg26     2/2     Running   0          15m
    openebs-lvm-node-z5l7d     2/2     Running   0          15m
    openebs-lvm-node-zcsvh     2/2     Running   0          15m
    
    
    • kubectl get lvmvol -A -o yaml
    apiVersion: v1
    items:
    - apiVersion: local.openebs.io/v1alpha1
      kind: LVMVolume
      metadata:
        creationTimestamp: "2022-11-21T19:01:19Z"
        finalizers:
        - lvm.openebs.io/finalizer
        generation: 3
        labels:
          kubernetes.io/nodename: mic101-06
        name: pvc-b4e5b534-300f-460c-ba01-92e312ce6b40
        namespace: lvm
        resourceVersion: "171987513"
        uid: f95e7f84-ce74-484d-b4d5-d81352d885d3
      spec:
        capacity: "21474836480"
        ownerNodeID: mic101-06
        shared: "no"
        thinProvision: "no"
        vgPattern: ^lvmvg$
        volGroup: lvmvg
      status:
        state: Ready
    - apiVersion: local.openebs.io/v1alpha1
      kind: LVMVolume
      metadata:
        creationTimestamp: "2022-11-21T18:11:56Z"
        finalizers:
        - lvm.openebs.io/finalizer
        generation: 3
        labels:
          kubernetes.io/nodename: mic101-00
        name: pvc-f0e890f7-ce1f-4b8f-a55b-a92ba03cacef
        namespace: lvm
        resourceVersion: "171974076"
        uid: a1aad737-100f-41d7-bdb6-96d5d3672ab9
      spec:
        capacity: "214748364800"
        ownerNodeID: mic101-00
        shared: "no"
        thinProvision: "no"
        vgPattern: ^lvmvg$
        volGroup: lvmvg
      status:
        state: Ready
    kind: List
    metadata:
      resourceVersion: ""
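
    Both volumes report state: Ready, so the LVM side of provisioning succeeded; the failure happens at format/mount time. A quick sanity check on the owner node (device path taken from the logs above; empty blkid output would mean the LV never received a filesystem):

    sudo lvs lvmvg/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40
    sudo blkid /dev/lvmvg/pvc-b4e5b534-300f-460c-ba01-92e312ce6b40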
    
    

    Anything else you would like to add: [Miscellaneous information that will assist in solving the issue.]

    I see that it was trying to mount the volume on node mic101-06, so I did a bit of debugging. If I manually format the new volume as xfs, it mounts successfully into the pod.

    NAME                READY   STATUS     RESTARTS      AGE
    mongodb-0           0/2     Init:0/1   0             30m
    mongodb-arbiter-0   1/1     Running    1 (92s ago)   4m34s
    

    From the host node:

    [root@mic101-06 mnt]# mount /dev/mapper/lvmvg-pvc--b4e5b534--300f--460c--ba01--92e312ce6b40 /mnt/log/
    mount: wrong fs type, bad option, bad superblock on /dev/mapper/lvmvg-pvc--b4e5b534--300f--460c--ba01--92e312ce6b40,
           missing codepage or helper program, or other error
    
           In some cases useful info is found in syslog - try
           dmesg | tail or so.
    
    
    [root@mic101-06 mnt]# mkfs.xfs /dev/mapper/lvmvg-pvc--b4e5b534--300f--460c--ba01--92e312ce6b40 -f
    meta-data=/dev/mapper/lvmvg-pvc--b4e5b534--300f--460c--ba01--92e312ce6b40 isize=512    agcount=4, agsize=1310720 blks
             =                       sectsz=512   attr=2, projid32bit=1
             =                       crc=1        finobt=0, sparse=0
    data     =                       bsize=4096   blocks=5242880, imaxpct=25
             =                       sunit=0      swidth=0 blks
    naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
    log      =internal log           bsize=4096   blocks=2560, version=2
             =                       sectsz=512   sunit=0 blks, lazy-count=1
    realtime =none                   extsz=4096   blocks=0, rtextents=0
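
    The fact that the host's own mkfs.xfs (note crc=1, finobt=0, sparse=0 in the output above) produces a filesystem the kernel will mount suggests a mkfs/kernel feature mismatch rather than an LVM problem: the mkfs.xfs bundled in the driver container may enable newer xfs features that the CentOS 7 3.10 kernel refuses. A way to check this hypothesis (the pod name is a placeholder, and this assumes mkfs.xfs is present in the plugin container):

    kubectl exec -n lvm openebs-lvm-node-xxxx -c openebs-lvm-plugin -- mkfs.xfs -V
    mkfs.xfs -V     # on the host, for comparison
    dmesg | tail    # right after a failed mount; the kernel usually logs why it rejected the superblock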
    
    

    And back to kubectl:

    NAME                READY   STATUS     RESTARTS      AGE
    mongodb-0           2/2     Running    0             41m
    mongodb-1           0/2     Init:0/1   0             24m
    mongodb-arbiter-0   1/1     Running    4 (28m ago)   41m
    
    --------------
    Events:
      Type     Reason            Age                    From               Message
      ----     ------            ----                   ----               -------
      Warning  FailedScheduling  24m                    default-scheduler  0/8 nodes are available: 8 pod has unbound immediate PersistentVolumeClaims. preemption: 0/8 nodes are available: 8 Preemption is not helpful for scheduling.
      Normal   Scheduled         24m                    default-scheduler  Successfully assigned mongodb/mongodb-1 to mic101-01
      Warning  FailedMount       15m (x2 over 18m)      kubelet            Unable to attach or mount volumes: unmounted volumes=[datadir], unattached volumes=[scripts kube-api-access-2vrfj datadir common-scripts]: timed out waiting for the condition
      Warning  FailedMount       4m40s (x6 over 22m)    kubelet            Unable to attach or mount volumes: unmounted volumes=[datadir], unattached volumes=[kube-api-access-2vrfj datadir common-scripts scripts]: timed out waiting for the condition
      Warning  FailedMount       2m26s (x2 over 9m13s)  kubelet            Unable to attach or mount volumes: unmounted volumes=[datadir], unattached volumes=[common-scripts scripts kube-api-access-2vrfj datadir]: timed out waiting for the condition
      Warning  FailedMount       2m17s (x19 over 24m)   kubelet            MountVolume.SetUp failed for volume "pvc-01b6005c-3728-40ae-be05-88dcc8144c9b" : rpc error: code = Internal desc = failed to format and mount the volume error: mount failed: exit status 32
    Mounting command: mount
    Mounting arguments: -t xfs -o defaults /dev/lvmvg/pvc-01b6005c-3728-40ae-be05-88dcc8144c9b /var/lib/kubelet/pods/c38073a8-aafc-440f-9dcd-39530bb2ebbc/volumes/kubernetes.io~csi/pvc-01b6005c-3728-40ae-be05-88dcc8144c9b/mount
    Output: mount: /var/lib/kubelet/pods/c38073a8-aafc-440f-9dcd-39530bb2ebbc/volumes/kubernetes.io~csi/pvc-01b6005c-3728-40ae-be05-88dcc8144c9b/mount: wrong fs type, bad option, bad superblock on /dev/mapper/lvmvg-pvc--01b6005c--3728--40ae--be05--88dcc8144c9b, missing codepage or helper program, or other error.
    --------------
    
    

    Environment:

    • LVM Driver version: image: openebs/lvm-driver:ci
    • Kubernetes version (use kubectl version):
    $ kubectl version
    WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
    Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.3", GitCommit:"434bfd82814af038ad94d62ebe59b133fcb50506", GitTreeState:"clean", BuildDate:"2022-10-12T10:47:25Z", GoVersion:"go1.19.2", Compiler:"gc", Platform:"darwin/amd64"}
    Kustomize Version: v4.5.7
    Server Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.4", GitCommit:"872a965c6c6526caa949f0c6ac028ef7aff3fb78", GitTreeState:"clean", BuildDate:"2022-11-09T13:29:58Z", GoVersion:"go1.19.3", Compiler:"gc", Platform:"linux/amd64"}
    
    • Kubernetes installer & version: [Kubespray](https://github.com/kubernetes-sigs/kubespray), master branch
    • Cloud provider or hardware configuration:
    Bare-metal K8S. VMWare ESXi. CentOS 7.
    
    • OS (e.g. from /etc/os-release):
    NAME="CentOS Linux"
    VERSION="7 (Core)"
    ID="centos"
    ID_LIKE="rhel fedora"
    VERSION_ID="7"
    PRETTY_NAME="CentOS Linux 7 (Core)"
    ANSI_COLOR="0;31"
    CPE_NAME="cpe:/o:centos:centos:7"
    HOME_URL="https://www.centos.org/"
    BUG_REPORT_URL="https://bugs.centos.org/"
    
    CENTOS_MANTISBT_PROJECT="CentOS-7"
    CENTOS_MANTISBT_PROJECT_VERSION="7"
    REDHAT_SUPPORT_PRODUCT="centos"
    REDHAT_SUPPORT_PRODUCT_VERSION="7"
    
    • Host LVM Version: LVM version: 2.02.187(2)-RHEL7 (2020-03-24)
  • fix(exporter): ignore duplicate LVs when collecting metrics

    fix(exporter): ignore duplicate LVs when collecting metrics

    Pull Request template

    Why is this PR required? What issue does it fix?:

    When LVs have more than a single segment (i.e. they are spread over several segments), collecting the metrics no longer works, as lvs returns several entries for the same LV.
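
    The duplication is easy to see on a node with a multi-segment LV. A minimal sketch, assuming a VG named lvmvg (lv_uuid, lv_name, segtype and seg_size are standard lvs report fields):

    # lvs prints one row per segment when segment columns are requested
    sudo lvs --noheadings -o lv_uuid,lv_name,segtype,seg_size lvmvg
    # collapsing rows on lv_uuid mirrors, conceptually, what this fix does during collection
    sudo lvs --noheadings -o lv_uuid,lv_name,segtype,seg_size lvmvg | sort -u -k1,1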

    What this PR does?:

    As I did not want to change the behaviour of ListLVMLogicalVolume (even though it is currently only used in metrics collection), I opted to skip any further occurrences of the same LV (per UUID) during metrics collection.

    Does this PR require any upgrade changes?: NO

    If the changes in this PR are manually verified, list down the scenarios covered:

    Assuming a VG with 100GB of capacity (a scripted sketch of this sequence follows the list):

    • Create 2 PVCs with 10GB size
    • Delete the PVC that sits before the second 10GB on the VG
    • Create a PVC with 85GB in size, so the LV will end up having two segments
    • Collect metrics and see no scrape errors
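
    A sketch of that sequence, assuming a StorageClass named openebs-lvmpv that binds immediately and targets the 100GB VG (names and sizes are illustrative):

    # pvc-a.yaml: create pvc-b.yaml and pvc-c.yaml from the same template,
    # changing metadata.name and, for pvc-c, storage to 85Gi
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-a
    spec:
      storageClassName: openebs-lvmpv
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi

    kubectl apply -f pvc-a.yaml
    kubectl apply -f pvc-b.yaml
    kubectl delete pvc pvc-a                      # frees the extents sitting before pvc-b's LV
    kubectl apply -f pvc-c.yaml                   # 85Gi; its LV wraps around the freed hole
    sudo lvs -o lv_name,segtype,seg_size lvmvg    # pvc-c's LV now shows one row per segment
    # finally, scrape the node plugin's /metrics endpoint and confirm there are no scrape errors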

    Checklist:

    • [x] Fixes #211
    • [x] PR Title follows the convention of <type>(<scope>): <subject>
    • [ ] Has the change log section been updated?
    • [ ] Commit has unit tests
    • [ ] Commit has integration tests
    • [ ] (Optional) Are upgrade changes included in this PR? If not, mention the issue/PR to track:
    • [ ] (Optional) If documentation changes are required, which issue on https://github.com/openebs/openebs-docs is used to track them: