Dynamically provisioning persistent local storage with Kubernetes

Local Path Provisioner

Overview

Local Path Provisioner provides a way for Kubernetes users to utilize the local storage on each node. Based on the user configuration, the Local Path Provisioner will automatically create hostPath-based persistent volumes on the nodes. It utilizes the features introduced by the Kubernetes Local Persistent Volume feature, but makes it a simpler solution than the built-in local volume feature in Kubernetes.

Compared to the built-in Local Persistent Volume feature in Kubernetes

Pros

Dynamic provisioning of volumes using hostPath.

Cons

  1. No support for volume capacity limits currently.
    1. The capacity limit will be ignored for now.

Requirement

Kubernetes v1.12+.

Deployment

Installation

In this setup, the directory /opt/local-path-provisioner will be used across all the nodes as the path for provisioning (i.e., the location where the persistent volume data is stored). The provisioner will be installed in the local-path-storage namespace by default.

kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml

Or, use kustomize to deploy.

kustomize build "github.com/rancher/local-path-provisioner/deploy?ref=master" | kubectl apply -f -

After installation, you should see something like the following:

$ kubectl -n local-path-storage get pod
NAME                                     READY     STATUS    RESTARTS   AGE
local-path-provisioner-d744ccf98-xfcbk   1/1       Running   0          7m

Check and follow the provisioner log using:

$ kubectl -n local-path-storage logs -f -l app=local-path-provisioner
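
You should also see that the local-path StorageClass has been created:

$ kubectl get storageclass local-path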

Usage

Create a hostPath-backed PersistentVolumeClaim and a pod that uses it:

kubectl create -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/examples/pvc/pvc.yaml
kubectl create -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/examples/pod/pod.yaml

Or, use kustomize to deploy them.

kustomize build "github.com/rancher/local-path-provisioner/examples/pod?ref=master" | kubectl apply -f -

You should see the PV has been created:

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                    STORAGECLASS   REASON    AGE
pvc-bc3117d9-c6d3-11e8-b36d-7a42907dda78   2Gi        RWO            Delete           Bound     default/local-path-pvc   local-path               4s

The PVC has been bound:

$ kubectl get pvc
NAME             STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
local-path-pvc   Bound     pvc-bc3117d9-c6d3-11e8-b36d-7a42907dda78   2Gi        RWO            local-path     16s

And the Pod started running:

$ kubectl get pod
NAME          READY     STATUS    RESTARTS   AGE
volume-test   1/1       Running   0          3s

Write something into the pod:

kubectl exec volume-test -- sh -c "echo local-path-test > /data/test"

Now delete the pod using

kubectl delete -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/examples/pod/pod.yaml

After confirming that the pod is gone, recreate the pod using

kubectl create -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/examples/pod/pod.yaml

Check the volume content:

$ kubectl exec volume-test -- cat /data/test
local-path-test

Delete the pod and the PVC:

kubectl delete -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/examples/pod/pod.yaml
kubectl delete -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/examples/pvc/pvc.yaml

Or, use kustomize to delete them.

kustomize build "github.com/rancher/local-path-provisioner/examples/pod?ref=master" | kubectl delete -f -

The volume content stored on the node will be automatically cleaned up. You can check the log of local-path-provisioner-xxx for details.

Now you've verified that the provisioner works as expected.

Configuration

Customize the ConfigMap

The configuration of the provisioner consists of a JSON file config.json, two scripts setup and teardown, and a helper pod template helperPod.yaml, stored in a ConfigMap, e.g.:

kind: ConfigMap
apiVersion: v1
metadata:
  name: local-path-config
  namespace: local-path-storage
data:
  config.json: |-
        {
                "nodePathMap":[
                {
                        "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
                        "paths":["/opt/local-path-provisioner"]
                },
                {
                        "node":"yasker-lp-dev1",
                        "paths":["/opt/local-path-provisioner", "/data1"]
                },
                {
                        "node":"yasker-lp-dev3",
                        "paths":[]
                }
                ]
        }
  setup: |-
        #!/bin/sh
        while getopts "m:s:p:" opt
        do
            case $opt in
                p)
                absolutePath=$OPTARG
                ;;
                s)
                sizeInBytes=$OPTARG
                ;;
                m)
                volMode=$OPTARG
                ;;
            esac
        done

        mkdir -m 0777 -p ${absolutePath}
  teardown: |-
        #!/bin/sh
        while getopts "m:s:p:" opt
        do
            case $opt in
                p)
                absolutePath=$OPTARG
                ;;
                s)
                sizeInBytes=$OPTARG
                ;;
                m)
                volMode=$OPTARG
                ;;
            esac
        done

        rm -rf ${absolutePath}
  helperPod.yaml: |-
        apiVersion: v1
        kind: Pod
        metadata:
          name: helper-pod
        spec:
          containers:
          - name: helper-pod
            image: busybox

config.json

Definition

nodePathMap is the place where the user can customize where to store the data on each node; a minimal example follows the list below.

  1. If one node is not listed in the nodePathMap, and Kubernetes wants to create a volume on it, the paths specified in DEFAULT_PATH_FOR_NON_LISTED_NODES will be used for provisioning.
  2. If one node is listed in the nodePathMap, the paths specified in paths will be used for provisioning.
    1. If one node is listed but with paths set to [], the provisioner will refuse to provision on this node.
    2. If more than one path is specified, the path will be chosen randomly when provisioning.
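
For example, a minimal nodePathMap covering all three cases might look like this (a sketch; the node names other than the default entry are placeholders):

{
        "nodePathMap":[
                {
                        "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
                        "paths":["/opt/local-path-provisioner"]
                },
                {
                        "node":"node-with-two-disks",
                        "paths":["/mnt/disk1", "/mnt/disk2"]
                },
                {
                        "node":"node-without-provisioning",
                        "paths":[]
                }
        ]
}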

Rules

The configuration must obey the following rules:

  1. config.json must be a valid JSON file.
  2. A path must start with /, i.e., it must be an absolute path.
  3. The root directory (/) is prohibited.
  4. No duplicate paths are allowed for one node.
  5. No duplicate nodes are allowed.

Scripts setup and teardown and helperPod.yaml

The script setup will be executed before the volume is created, to prepare the directory on the node for the volume.

The script teardown will be executed after the volume is deleted, to clean up the directory on the node for the volume (a customization example follows the parameter list below).

The YAML file helperPod.yaml defines the helper pod that the provisioner creates (in the local-path-storage namespace) to execute the setup or teardown script with three parameters -p <path> -s <size> -m <mode>:

  • path: the absolute path provisioned on the node
  • size: pvc.Spec.resources.requests.storage in bytes
  • mode: pvc.Spec.VolumeMode
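
For example, a custom teardown that archives the volume data instead of deleting it could look like the following sketch. It follows the same getopts convention as the default scripts shown above; verify the argument-passing convention against the ConfigMap shipped with your version before using it:

  teardown: |-
        #!/bin/sh
        while getopts "m:s:p:" opt
        do
            case $opt in
                p)
                absolutePath=$OPTARG
                ;;
            esac
        done

        # Move the volume directory aside instead of deleting it
        mv "${absolutePath}" "$(dirname "${absolutePath}")/archived-$(basename "${absolutePath}")"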

Reloading

The provisioner supports automatic configuration reloading. Users can change the configuration using kubectl apply or kubectl edit with config map local-path-config. There is a delay between when the user updates the config map and the provisioner picking it up.
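
For example, edit the ConfigMap in place and then watch the provisioner log for the reload:

kubectl -n local-path-storage edit configmap local-path-config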

When the provisioner detects a configuration change, it will try to load the new configuration. Users can observe this in the log:

time="2018-10-03T05:56:13Z" level=debug msg="Applied config: {"nodePathMap":[{"node":"DEFAULT_PATH_FOR_NON_LISTED_NODES","paths":["/opt/local-path-provisioner"]},{"node":"yasker-lp-dev1","paths":["/opt","/data1"]},{"node":"yasker-lp-dev3"}]}"

If the reload fails, the provisioner will log the error and continue using the last valid configuration for provisioning in the meantime.

time="2018-10-03T05:19:25Z" level=error msg="failed to load the new config file: fail to load config file /etc/config/config.json: invalid character '#' looking for beginning of object key string"

time="2018-10-03T05:20:10Z" level=error msg="failed to load the new config file: config canonicalization failed: path must start with / for path opt on node yasker-lp-dev1"

time="2018-10-03T05:23:35Z" level=error msg="failed to load the new config file: config canonicalization failed: duplicate path /data1 on node yasker-lp-dev1

time="2018-10-03T06:39:28Z" level=error msg="failed to load the new config file: config canonicalization failed: duplicate node yasker-lp-dev3"

Uninstall

Before uninstallation, make sure the PVs created by the provisioner have already been deleted. Use kubectl get pv and make sure there are no remaining PVs with the StorageClass local-path.
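
One way to check is to list the PVs and filter for the local-path StorageClass; the command below should print nothing before you proceed:

kubectl get pv | grep local-path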

To uninstall, execute:

kubectl delete -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml

Debug

This provides an out-of-cluster debug environment for developers.

debug

git clone https://github.com/rancher/local-path-provisioner.git
cd local-path-provisioner
go build
kubectl apply -f debug/config.yaml
./local-path-provisioner --debug start --service-account-name=default

example

See the Usage section above.

clear

kubectl delete -f debug/config.yaml

License

Copyright (c) 2014-2020 Rancher Labs, Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Comments
  • Custom teardown script doesn't work

    Custom teardown script doesn't work

    I've tried to set a custom teardown script using Helm values:

    teardown: |-
      #!/bin/sh
      path=$1
      archived_path="$(dirname ${path})/archived-$(basename ${path})"
      mv ${path} ${archived_path}
    

    Although the config map gets updated to the new teardown script, when I delete a pvc local-path-provisioner still deletes the pv folder instead of running the script.

    Any help would be appreciated :-)

  • enable velero backups by using local instead of hostpath

    enable velero backups by using local instead of hostpath

    By using local instead of hostPath it would be possible to use Velero with restic for backups. Velero with restic cannot back up hostPath volumes, but local volumes are supported.

    I use Cassandra with rancher/local-path-provisioner for volumes to get bare-metal disk performance, but backups are also nice...

    velero limitations https://github.com/vmware-tanzu/velero/blob/master/site/docs/master/restic.md#limitations

    host-path vs local https://kubernetes.io/docs/concepts/storage/volumes/#local https://kubernetes.io/docs/concepts/storage/volumes/#hostpath https://kubernetes.io/blog/2019/04/04/kubernetes-1.14-local-persistent-volumes-ga/

    As far as I have read the code, the only place that would need to change is https://github.com/rancher/local-path-provisioner/blob/655eac7962bc1dbafb4bdec60b86bc8bc76b307c/provisioner.go#L212

    Does a technical reason exist not to use local? Would it break current deployments to have a mixture of volume types? Is a merge request welcome?

  • Support for shared filesystems

    Support for shared filesystems

    Instead of spamming in #174 I will create another PR.

    This is an attempt to bring shared file systems support.

    sharedFileSystemPath allows the provisioner to use a filesystem that is mounted on all nodes at the same time. In this case all access modes are supported: ReadWriteOnce, ReadOnlyMany and ReadWriteMany for storage claims.

    In addition volumeBindingMode: Immediate can be used in StorageClass definition.

    Please note that nodePathMap and sharedFileSystemPath are mutually exclusive. If sharedFileSystemPath is used, then nodePathMap must be set to [].
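
    A config.json for this mode might then look like the following (a sketch based on the description above; the mount point is just an example):

    {
            "nodePathMap":[],
            "sharedFileSystemPath":"/mnt/shared-storage"
    }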

  • Helper pod fails on nodes running fedora coreos

    Helper pod fails on nodes running fedora coreos

    I have an rpi node running CoreOS. The local provisioner is running in the same namespace as the pod and PVC. The helper pod that's created on this node goes to Error state immediately after the container is created. I couldn't find any useful logs. Provisioning is successful on another node in the same cluster running Debian.

  • Add option to create local volumes instead of hostPath

    Add option to create local volumes instead of hostPath

    resolve #85

    Took the changes from #91 and added them to be enabled by ~~a config value~~ an annotation on the pvc. Default behavior is still to use hostPath volumes. By using an annotation, we can allow the user to create both hostPath and local volumes, instead of just locking down to one.

    Made some small changes to the README, but will update the README and examples further if code changes are ok from maintainers.

  • Documentation: Multiple Local Path Provisioners in the same cluster

    Documentation: Multiple Local Path Provisioners in the same cluster

    Suppose there are two kinds of drives in the K8s nodes, "fast" (SSD) and "big" (HDD). Suppose I want to create two storage classes, one of which provisions volumes on the "fast" drives and one on the "big" drives. Please document how to achieve this.

    From #80 I gather this is possible by deploying two instances of local-path-provisioner that are backed by directories on the fast and big drives respectively. But how do I specify in the storage class specification which instance of LPP to use? Do I have to change the provisioner value? How do I tell LPP which value of the provisioner field in SC to respond to?

  • Why the provisioner only support ReadWriteOnce

    Why the provisioner only support ReadWriteOnce

    Why does the provisioner only support ReadWriteOnce PVCs and not ReadOnlyMany/ReadWriteMany?

    Since it's just a node-local directory, there's no problem with having multiple writers/readers as long as the application supports this.

  • working: multi-arch-images

    working: multi-arch-images

    This has been a very interesting area of docker to learn about. Hopefully this helps folks get more things running on ARM and other platforms.

    I copied the basic multi-arch build structure from https://github.com/rancher/dapper along with a couple of their scripts/*.

    I'm not certain the go build ... -o binary.arch suffixes in the scripts/build I included in this PR are 100% correct.

    I chose these so they match the arch+variant names associated with the current alpine manifest, (read: not the same as the bin_arch names used in https://github.com/rancher/dapper/blob/master/scripts/build .)

    Also, It looked like the multi-arch-images make target in rancher/dapper was being manually called -- as well as the resulting push.sh it emitted... so I'm not sure where/how the CI pipelines build and push the multi-arch images... ?

    I'm not sure if dapper handles exposing files created in containers back to the host, but there is now a manifest.yaml file that gets created and should be fed to the push.sh script which invokes manifest-tool.

    If there's a better way, please let me know, or maintainers can push more commits to this branch.

    Relates to #12

  • Security context not respected

    Security context not respected

    I'm trying to use local-path-provisioner with kind. While it seems to generally work with multi-node clusters, security contexts are not respected. Volumes are always mounted with root as group. Here's a simple example that demonstrates this:

    apiVersion: v1
    kind: Pod
    metadata:
      name: local-path-test
      labels:
        app.kubernetes.io/name: local-path-test
    spec:
      containers:
        - name: test
          image: busybox
          command:
            - /config/test.sh
          volumeMounts:
            - name: test
              mountPath: /test
            - name: config
              mountPath: /config
      securityContext:
        fsGroup: 1000
        runAsNonRoot: true
        runAsUser: 1000
      terminationGracePeriodSeconds: 0
      volumes:
        - name: test
          persistentVolumeClaim:
            claimName: local-path-test
        - name: config
          configMap:
            name: local-path-test
            defaultMode: 0555
    
    ---
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: local-path-test
      labels:
        app.kubernetes.io/name: local-path-test
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: "1Gi"
      storageClassName: local-path
    
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: local-path-test
      labels:
        app.kubernetes.io/name: local-path-test
    data:
      test.sh: |
        #!/bin/sh
    
        ls -al /test
    
        echo 'Hello from local-path-test'
        cp /config/text.txt /test/test.txt
        touch /test/foo
    
      text.txt: |
        some test content
    

    Here's the log from the container:

    total 4
    drwxr-xr-x    2 root     root            40 Feb 22 09:50 .
    drwxr-xr-x    1 root     root          4096 Feb 22 09:50 ..
    Hello from local-path-test
    cp: can't create '/test/test.txt': Permission denied
    touch: /test/foo: Permission denied
    

    As can be seen, the mounted volume has root as group instead of 1000 as specified by the security context. I also installed local-path-provisioner on Docker4Mac. The result is the same, so it is not a kind issue. Using the default storage class on Docker4Mac, it works as expected.

  • Race condition for helper-pod when multiple pvc's are provisioned

    Race condition for helper-pod when multiple pvc's are provisioned

    Hello. I think I have uncovered a bug. If you provision multiple PVs in rapid succession, the helper pod will only run for the first one.

    "First" here means the helper pod that gets created first, which may or may not correspond to the PV that should be created.

    How to reproduce: On a single node kubernetes cluster:

    kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
    
    1. Instead of creating one pvc and one pod create 3 of them: pvcs.yaml:
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: local-path-pvc1
      namespace: default
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: local-path
      resources:
        requests:
          storage: 2Gi
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: local-path-pvc2
      namespace: default
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: local-path
      resources:
        requests:
          storage: 2Gi
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: local-path-pvc3
      namespace: default
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: local-path
      resources:
        requests:
          storage: 2Gi
    

    pods.yaml:

    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: volume-test01
      namespace: default
    spec:
      containers:
      - name: volume-test
        image: nginx:stable-alpine
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: volv
          mountPath: /data
        ports:
        - containerPort: 80
      volumes:
      - name: volv
        persistentVolumeClaim:
          claimName: local-path-pvc1
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: volume-test02
      namespace: default
    spec:
      containers:
      - name: volume-test
        image: nginx:stable-alpine
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: volv
          mountPath: /data
        ports:
        - containerPort: 80
      volumes:
      - name: volv
        persistentVolumeClaim:
          claimName: local-path-pvc2
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: volume-test03
      namespace: default
    spec:
      containers:
      - name: volume-test
        image: nginx:stable-alpine
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: volv
          mountPath: /data
        ports:
        - containerPort: 80
      volumes:
      - name: volv
        persistentVolumeClaim:
          claimName: local-path-pvc3
    

    Then check the /opt/local-path-provisioner path on the node:

    [skiss-dev2 ~]$ ls -la /opt/local-path-provisioner
    total 0
    drwxr-xr-x  5 root root 222 Nov 11 08:56 .
    drwxr-xr-x. 5 root root  65 Nov 11 08:50 ..
    drwxr-xr-x  2 root root   6 Nov 11 08:56 pvc-919db3fc-6c88-446e-b77c-01e7ae260289_default_local-path-pvc1
    drwxrwxrwx  2 root root   6 Nov 11 08:56 pvc-95bc6533-cd29-4f89-adaf-7b740e311969_default_local-path-pvc2
    drwxr-xr-x  2 root root   6 Nov 11 08:56 pvc-c3e09e1e-19a6-44ec-9cf3-843fe24fe1b5_default_local-path-pvc3
    

    As you can see, all 3 PVs are created, however only one has the correct set of permissions.

    The provisioner logs:

    I1111 08:56:10.202181       1 controller.go:1202] provision "default/local-path-pvc1" class "local-path": started
    I1111 08:56:10.214691       1 controller.go:1202] provision "default/local-path-pvc2" class "local-path": started
    time="2020-11-11T08:56:10Z" level=debug msg="config doesn't contain node master-worker-1487cf6df4b42f3c60ef, use DEFAULT_PATH_FOR_NON_LISTED_NODES instead"
    time="2020-11-11T08:56:10Z" level=info msg="Creating volume pvc-919db3fc-6c88-446e-b77c-01e7ae260289 at master-worker-1487cf6df4b42f3c60ef:/opt/local-path-provisioner/pvc-919db3fc-6c88-446e-b77c-01e7ae260289_default_local-path-pvc1"
    time="2020-11-11T08:56:10Z" level=info msg="create the helper pod helper-pod into local-path-storage"
    I1111 08:56:10.221367       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"local-path-pvc1", UID:"919db3fc-6c88-446e-b77c-01e7ae260289", APIVersion:"v1", ResourceVersion:"26463", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/local-path-pvc1"
    I1111 08:56:10.403174       1 controller.go:1202] provision "default/local-path-pvc3" class "local-path": started
    time="2020-11-11T08:56:10Z" level=debug msg="config doesn't contain node master-worker-1487cf6df4b42f3c60ef, use DEFAULT_PATH_FOR_NON_LISTED_NODES instead"
    time="2020-11-11T08:56:10Z" level=info msg="Creating volume pvc-95bc6533-cd29-4f89-adaf-7b740e311969 at master-worker-1487cf6df4b42f3c60ef:/opt/local-path-provisioner/pvc-95bc6533-cd29-4f89-adaf-7b740e311969_default_local-path-pvc2"
    time="2020-11-11T08:56:10Z" level=info msg="create the helper pod helper-pod into local-path-storage"
    I1111 08:56:10.410110       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"local-path-pvc2", UID:"95bc6533-cd29-4f89-adaf-7b740e311969", APIVersion:"v1", ResourceVersion:"26467", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/local-path-pvc2"
    time="2020-11-11T08:56:10Z" level=debug msg="config doesn't contain node master-worker-1487cf6df4b42f3c60ef, use DEFAULT_PATH_FOR_NON_LISTED_NODES instead"
    time="2020-11-11T08:56:10Z" level=info msg="Creating volume pvc-c3e09e1e-19a6-44ec-9cf3-843fe24fe1b5 at master-worker-1487cf6df4b42f3c60ef:/opt/local-path-provisioner/pvc-c3e09e1e-19a6-44ec-9cf3-843fe24fe1b5_default_local-path-pvc3"
    time="2020-11-11T08:56:10Z" level=info msg="create the helper pod helper-pod into local-path-storage"
    I1111 08:56:10.420310       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"local-path-pvc3", UID:"c3e09e1e-19a6-44ec-9cf3-843fe24fe1b5", APIVersion:"v1", ResourceVersion:"26472", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/local-path-pvc3"
    time="2020-11-11T08:56:25Z" level=info msg="Volume pvc-c3e09e1e-19a6-44ec-9cf3-843fe24fe1b5 has been created on master-worker-1487cf6df4b42f3c60ef:/opt/local-path-provisioner/pvc-c3e09e1e-19a6-44ec-9cf3-843fe24fe1b5_default_local-path-pvc3"
    time="2020-11-11T08:56:25Z" level=info msg="Volume pvc-919db3fc-6c88-446e-b77c-01e7ae260289 has been created on master-worker-1487cf6df4b42f3c60ef:/opt/local-path-provisioner/pvc-919db3fc-6c88-446e-b77c-01e7ae260289_default_local-path-pvc1"
    time="2020-11-11T08:56:25Z" level=info msg="Volume pvc-95bc6533-cd29-4f89-adaf-7b740e311969 has been created on master-worker-1487cf6df4b42f3c60ef:/opt/local-path-provisioner/pvc-95bc6533-cd29-4f89-adaf-7b740e311969_default_local-path-pvc2"
    time="2020-11-11T08:56:28Z" level=error msg="unable to delete the helper pod: pods \"helper-pod\" not found"
    I1111 08:56:28.440603       1 controller.go:1284] provision "default/local-path-pvc1" class "local-path": volume "pvc-919db3fc-6c88-446e-b77c-01e7ae260289" provisioned
    I1111 08:56:28.440662       1 controller.go:1301] provision "default/local-path-pvc1" class "local-path": succeeded
    I1111 08:56:28.440687       1 volume_store.go:212] Trying to save persistentvolume "pvc-919db3fc-6c88-446e-b77c-01e7ae260289"
    I1111 08:56:28.444032       1 controller.go:1284] provision "default/local-path-pvc3" class "local-path": volume "pvc-c3e09e1e-19a6-44ec-9cf3-843fe24fe1b5" provisioned
    I1111 08:56:28.444058       1 controller.go:1301] provision "default/local-path-pvc3" class "local-path": succeeded
    I1111 08:56:28.444067       1 volume_store.go:212] Trying to save persistentvolume "pvc-c3e09e1e-19a6-44ec-9cf3-843fe24fe1b5"
    time="2020-11-11T08:56:28Z" level=error msg="unable to delete the helper pod: pods \"helper-pod\" not found"
    I1111 08:56:28.444364       1 controller.go:1284] provision "default/local-path-pvc2" class "local-path": volume "pvc-95bc6533-cd29-4f89-adaf-7b740e311969" provisioned
    I1111 08:56:28.444381       1 controller.go:1301] provision "default/local-path-pvc2" class "local-path": succeeded
    I1111 08:56:28.444387       1 volume_store.go:212] Trying to save persistentvolume "pvc-95bc6533-cd29-4f89-adaf-7b740e311969"
    I1111 08:56:28.456125       1 volume_store.go:219] persistentvolume "pvc-95bc6533-cd29-4f89-adaf-7b740e311969" saved
    I1111 08:56:28.456388       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"local-path-pvc2", UID:"95bc6533-cd29-4f89-adaf-7b740e311969", APIVersion:"v1", ResourceVersion:"26467", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-95bc6533-cd29-4f89-adaf-7b740e311969
    I1111 08:56:28.457477       1 volume_store.go:219] persistentvolume "pvc-c3e09e1e-19a6-44ec-9cf3-843fe24fe1b5" saved
    I1111 08:56:28.457550       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"local-path-pvc3", UID:"c3e09e1e-19a6-44ec-9cf3-843fe24fe1b5", APIVersion:"v1", ResourceVersion:"26472", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-c3e09e1e-19a6-44ec-9cf3-843fe24fe1b5
    I1111 08:56:28.459975       1 volume_store.go:219] persistentvolume "pvc-919db3fc-6c88-446e-b77c-01e7ae260289" saved
    I1111 08:56:28.460001       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"local-path-pvc1", UID:"919db3fc-6c88-446e-b77c-01e7ae260289", APIVersion:"v1", ResourceVersion:"26463", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-919db3fc-6c88-446e-b77c-01e7ae260289
    

    Note the two messages regarding unable to delete the helper pod. That's because it wasn't created for them. However, there is no creation error message because:

    	// If it already exists due to some previous errors, the pod will be cleaned up later automatically
    	// https://github.com/rancher/local-path-provisioner/issues/27
    	logrus.Infof("create the helper pod %s into %s", helperPod.Name, p.namespace)
    	_, err = p.kubeClient.CoreV1().Pods(p.namespace).Create(helperPod)
    	if err != nil && !k8serror.IsAlreadyExists(err) {
    		return err
    	}
    

    From my understanding it seems that all 3 requests for PV provisioning are sent very close together, and since the helper pod is named the same it cannot be created multiple times. The first request that gets through creates the pod; the rest fail silently. I'm not entirely sure why the path on the node exists in any case, since the helper pod does not get called.

    However, it's pretty clear that only one helper pod runs at a time, and the custom provisioning code (such as the part that sets the permissions) runs only once.

  • Do not create directory if not found

    Do not create directory if not found

    Using type: Directory instead of type: DirectoryOrCreate makes it possible to block workloads from running on provisioned directories, to avoid situations where the initial storage is unmounted or broken.

    docker image containing the fix:

    kvaps/local-path-provisioner:v0.0.17-fix-137
    

    fixes https://github.com/rancher/local-path-provisioner/issues/137

  • xfs quota example helper pod failure

    xfs quota example helper pod failure

    The pod fails with Error: failed to prepare subPath for volumeMount "xfs-quota-projects" of container "helper-pod"

    The projects file is created in /etc.

  • same directory is bind-mounted 32767 times

    same directory is bind-mounted 32767 times

    Moved from https://github.com/k3s-io/k3s/issues/6660.

    I have no idea who to report this bug to, so I'm going to duplicate the report a few places. kubernetes: https://github.com/kubernetes/kubernetes/issues/114583 core-dump-handler: https://github.com/IBM/core-dump-handler/issues/119

    Environmental Info: K3s Version:

    root@dp2426:~# k3s -v
    k3s version v1.23.4+k3s1 (43b1cb48)
    go version go1.17.5
    

    I have also seen this behavior on a different node running a more recent version

    root@dp7744:~# k3s -v
    k3s version v1.25.3+k3s1 (f2585c16)
    go version go1.19.2
    

    Node(s) CPU architecture, OS, and Version:

    root@dp2426:~# uname -a
    Linux dp2426 5.4.0-109-generic #123-Ubuntu SMP Fri Apr 8 09:10:54 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
    

    Cluster Configuration: 6 Server Nodes

    Describe the bug: I'm running core-dump-handler on a few nodes. When core-dump-handler comes under load — we had a service elsewhere that was malfunctioning and segfaulting many times per second — its directory gets bind-mounted over and over and over and over. I do not know by whom.

    mount | grep core
    /dev/md1 on /home/data/core-dump-handler/cores type ext4 (rw,relatime,stripe=256)
    /dev/md1 on /home/data/core-dump-handler/cores type ext4 (rw,relatime,stripe=256)
    /dev/md1 on /home/data/core-dump-handler/cores type ext4 (rw,relatime,stripe=256)
    /dev/md1 on /home/data/core-dump-handler/cores type ext4 (rw,relatime,stripe=256)
    /dev/md1 on /home/data/core-dump-handler/cores type ext4 (rw,relatime,stripe=256)
    /dev/md1 on /home/data/core-dump-handler/cores type ext4 (rw,relatime,stripe=256)
    /dev/md1 on /home/data/core-dump-handler/cores type ext4 (rw,relatime,stripe=256)
    /dev/md1 on /home/data/core-dump-handler/cores type ext4 (rw,relatime,stripe=256)
    /dev/md1 on /home/data/core-dump-handler/cores type ext4 (rw,relatime,stripe=256)
    /dev/md1 on /home/data/core-dump-handler/cores type ext4 (rw,relatime,stripe=256)
    /dev/md1 on /home/data/core-dump-handler/cores type ext4 (rw,relatime,stripe=256)
    [...]
    
    # mount | grep core | wc -l
    32767
    

    Steps To Reproduce: No idea how to reproduce this in an isolated environment, but I'll give it a shot as I continue debugging.

    Here's core-dump-handler's DaemonSet configuration file and the PVCs that back it. The pertinent volumes section:

          volumes:
          - name: host-volume
            persistentVolumeClaim:
              claimName: host-storage-pvc
          - name: core-volume
            persistentVolumeClaim:
              claimName: core-storage-pvc
    [...]
            volumeMounts:
            - mountPath: /home/data/core-dump-handler
              mountPropagation: Bidirectional
              name: host-volume
            - mountPath: /home/data/core-dump-handler/cores
              mountPropagation: Bidirectional
              name: core-volume
    

    Possibly a problem with bind-mounting one directory inside another...?

    I'll certainly be opening a report against core-dump-handler but it seems like it must be k8s's bad behavior someplace to create multiple system-level mounts...?

  • Issue with Velero backup

    Issue with Velero backup

    I am using the latest version of the Rancher Local Path Provisioner. I am trying to back up a PVC, but it is getting backed up without files. If I restore, no files are there. Is this the expected behaviour?

  • Capacity aware dynamic volume provisioning

    Capacity aware dynamic volume provisioning

    HI,

    Is there any plan to add "capacity aware volume scheduling" like the way topLVM does?

    https://www.youtube.com/watch?v=ocERHX3uPtA https://kccnceu20.sched.com/event/ZerD

  • Configuring local-path where it is predeployed

    Configuring local-path where it is predeployed

    I'm using Rancher Desktop 1.6.2 and Kubernetes version v1.24.7 and I'm trying to install local-path-provisioner 0.0.23 as instructed in the README.md, but I'm getting the following error:

    PS C:\Work> kubectl create -f local-path-storage.yaml
    namespace/local-path-storage created
    serviceaccount/local-path-provisioner-service-account created
    deployment.apps/local-path-provisioner created
    configmap/local-path-config created
    Error from server (AlreadyExists): error when creating ".\\local-path-storage.old.yaml": clusterroles.rbac.authorization.k8s.io "local-path-provisioner-role" already exists
    Error from server (AlreadyExists): error when creating ".\\local-path-storage.old.yaml": clusterrolebindings.rbac.authorization.k8s.io "local-path-provisioner-bind" already exists
    Error from server (AlreadyExists): error when creating ".\\local-path-storage.old.yaml": storageclasses.storage.k8s.io "local-path" already exist
    

    It turns out, Rancher Desktop by default already deploys local-path in namespace kube-system (version 0.0.21):

    PS C:\Work> kubectl get deployments -A
    NAMESPACE            NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
    kube-system          coredns                  1/1     1            1           25m
    kube-system          traefik                  1/1     1            1           25m
    kube-system          local-path-provisioner   1/1     1            1           25m
    kube-system          metrics-server           1/1     1            1           25m
    local-path-storage   local-path-provisioner   0/1     1            0           5m31s
    

    However, it deploys an outdated version, and I cannot change its config since it is predeployed. I'm using the newly added RWX feature, which is only available in version 0.0.22 and upwards, and it additionally requires changing local-path-config (adding sharedFileSystemPath to config.json).

    I tried simply applying the config and changing the namespace to kube-system, and indeed it does update it and work fine:

    PS C:\Work> kubectl apply -f local-path-storage.yaml
    serviceaccount/local-path-provisioner-service-account configured
    clusterrole.rbac.authorization.k8s.io/local-path-provisioner-role configured
    clusterrolebinding.rbac.authorization.k8s.io/local-path-provisioner-bind configured
    deployment.apps/local-path-provisioner configured
    storageclass.storage.k8s.io/local-path configured
    configmap/local-path-config configured
    

    However, each time rancher is restarted the old version (0.0.21) with the default config (without sharedFileSystemPath, so RWX doesn't work) is deployed again and I need to reapply it all over again. How is this intended to be configured with Rancher Desktop and is there any way to change the version? Thanks in advance.

  • local-path-provisioner does not work with Pod Security Standards

    local-path-provisioner does not work with Pod Security Standards

    On a modern / recent Kubernetes v1.25+ distro, such as https://www.talos.dev Release v1.2, which enables Pod Security Admission, it appears that this local-path-provisioner does not work; the k -n local-path-storage logs -f -l app=local-path-provisioner (for me) will show:

    I1205 19:41:18.333512       1 controller.go:1202] provision "default/pvc1" class "local-path": started
    time="2022-12-05T19:41:18Z" level=debug msg="config doesn't contain node think, use DEFAULT_PATH_FOR_NON_LISTED_NODES instead"
    time="2022-12-05T19:41:18Z" level=info msg="Creating volume pvc-32bc2773-bfe3-4c78-a687-64ddda0b76d9 at think:/opt/local-path-provisioner/pvc-32bc2773-bfe3-4c78-a687-64ddda0b76d9_default_pvc1"
    time="2022-12-05T19:41:18Z" level=info msg="create the helper pod helper-pod-create-pvc-32bc2773-bfe3-4c78-a687-64ddda0b76d9 into local-path-storage"
    I1205 19:41:18.344099       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"pvc1", UID:"32bc2773-bfe3-4c78-a687-64ddda0b76d9", APIVersion:"v1", ResourceVersion:"1517518", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/pvc1"
    W1205 19:41:18.352086       1 controller.go:893] Retrying syncing claim "32bc2773-bfe3-4c78-a687-64ddda0b76d9" because failures 4 < threshold 15
    E1205 19:41:18.352141       1 controller.go:913] error syncing claim "32bc2773-bfe3-4c78-a687-64ddda0b76d9": failed to provision volume with StorageClass "local-path": failed to create volume pvc-32bc2773-bfe3-4c78-a687-64ddda0b76d9: pods "helper-pod-create-pvc-32bc2773-bfe3-4c78-a687-64ddda0b76d9" is forbidden: violates PodSecurity "baseline:latest": hostPath volumes (volume "data")
    I1205 19:41:18.352224       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"pvc1", UID:"32bc2773-bfe3-4c78-a687-64ddda0b76d9", APIVersion:"v1", ResourceVersion:"1517518", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "local-path": failed to create volume pvc-32bc2773-bfe3-4c78-a687-64ddda0b76d9: pods "helper-pod-create-pvc-32bc2773-bfe3-4c78-a687-64ddda0b76d9" is forbidden: violates PodSecurity "baseline:latest": hostPath volumes (volume "data")
    

    This is the same whether or not I add local, so possibly related to #279:

    metadata:
      name: pvc1
      annotations:
        volumeType: local
    

    I've even tried an example in a privileged namespace, but that still didn't work; I'm not 100% sure why, but I suspect it may be because PersistentVolumes (PV) are not namespaced, so probably even though my PVC and Pod were in the privileged namespace, the PV which this controller tries to create is not?

    I'll go play looking for another CSI provisioner... 😃
