Synology CSI Driver for Kubernetes

The official Container Storage Interface (CSI) driver for Synology NAS.

Container Images & Kubernetes Compatibility

Driver Name: csi.san.synology.com

Driver Version | Image               | Supported K8s Version
v1.0.0         | synology-csi:v1.0.0 | 1.19

The Synology CSI driver supports:

  • Access Modes: Read/Write Multiple Pods
  • Cloning
  • Expansion
  • Snapshot
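
Cloning, for example, is exposed through the standard PVC dataSource field. A minimal sketch (the claim names and size are illustrative; synology-iscsi-storage is the default storage class the deploy script creates):

```yaml
# Sketch: clone an existing PVC via dataSource.
# Names, size, and storage class are illustrative.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-clone
spec:
  storageClassName: synology-iscsi-storage
  dataSource:
    kind: PersistentVolumeClaim
    name: data-original        # existing claim in the same namespace
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

The clone must live in the same namespace and use the same storage class as the source claim.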

Installation

Prerequisites

  • Kubernetes version 1.19
  • Synology NAS running DSM 7.0 or above
  • Go version 1.16 or above is recommended
  • (Optional) Both Volume Snapshot CRDs and the common snapshot controller must be installed in your Kubernetes cluster if you want to use the Snapshot feature

Notice

  1. Before installing the CSI driver, make sure you have created and initialized at least one storage pool and one volume on your DSM.
  2. Make sure that all the worker nodes in your Kubernetes cluster can connect to your DSM.
  3. After you complete the steps below, the full deployment of the CSI driver, including the snapshotter, will be installed. If you don’t need the Snapshot feature, you can install the basic deployment of the CSI driver instead.

Procedure

  1. Clone the git repository. git clone https://github.com/SynologyOpenSource/synology-csi.git

  2. Enter the directory. cd synology-csi

  3. Copy the client-info-template.yml file. cp config/client-info-template.yml config/client-info.yml

  4. Edit config/client-info.yml to configure the connection information for DSM. You can specify one or more storage systems on which the CSI volumes will be created. Change the following parameters as needed:

    • host: The IPv4 address of your DSM.
    • port: The port for connecting to DSM. The default is 5000 for HTTP and 5001 for HTTPS. Only change this if you use a different port.
    • https: Set to "true" to use HTTPS for secure connections. Make sure the port is configured accordingly.
    • username, password: The credentials for connecting to DSM.
  5. Run ./scripts/deploy.sh run to install the driver. This will be a full deployment, which means you'll be building and running all CSI services as well as the snapshotter. If you want a basic deployment, which doesn't include installing a snapshotter, change the command as instructed below.

    • full: ./scripts/deploy.sh run
    • basic: ./scripts/deploy.sh build && ./scripts/deploy.sh install --basic

    If you don’t need to build the driver locally and want to pull the image from Docker Hub instead, run one of the following commands:

    • full: ./scripts/deploy.sh install --all
    • basic: ./scripts/deploy.sh install --basic

    Running the bash script will:

    • Create a namespace named "synology-csi". This is where the driver will be installed.
    • Create a secret named "client-info-secret" using the credentials from the client-info.yml you configured in the previous step.
    • Build a local image and deploy the CSI driver.
    • Create a default storage class named "synology-iscsi-storage" that uses the "Retain" policy.
    • Create a volume snapshot class named "synology-snapshotclass" that uses the "Delete" policy. (Full deployment only)
  6. Check that all pods of the CSI driver are in the Running state. kubectl get pods -n synology-csi
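
Once the pods are Running, you can verify provisioning end to end with a throwaway claim against the default storage class. A minimal sketch (the claim name and size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: synology-iscsi-storage   # default class created by deploy.sh
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

Apply it with kubectl apply -f and check that the claim reaches the Bound state with kubectl get pvc; a corresponding Thin Provisioned LUN should appear on DSM.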

CSI Driver Configuration

Storage classes and the secret are required for the CSI driver to function properly. This section explains how to do the following things:

  1. Create the storage system secret (optional, since deploy.sh already creates it from the config file you set up earlier)
  2. Configure StorageClasses
  3. Configure VolumeSnapshotClasses

Creating a Secret

Create a secret to specify the storage system address and credentials (username and password). Usually deploy.sh creates this secret from the config file, but if you want to create or recreate it manually, follow the instructions below:

  1. Edit the config file config/client-info.yml or create a new one like the example shown here:

    clients:
    - host: 192.168.1.1
      port: 5000
      https: false
      username: <username>
      password: <password>
    - host: 192.168.1.2
      port: 5001
      https: true
      username: <username>
      password: <password>

    The clients field can contain more than one Synology NAS. Separate the entries with the YAML list prefix "-".

  2. Create the secret using the following command (usually done by deploy.sh):

    kubectl create secret -n <namespace> generic client-info-secret --from-file=config/client-info.yml

    • Make sure to replace <namespace> with synology-csi, the default namespace, or with your custom namespace if needed.
    • If you change the secret name "client-info-secret" to a different one, make sure that all files under deploy/kubernetes/<k8s version>/ use the secret name you set.

Creating Storage Classes

Create and apply StorageClasses with the properties you want.

  1. Create YAML files using the one at deploy/kubernetes/<k8s version>/storage-class.yml as an example. Its content is as follows:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      annotations:
        storageclass.kubernetes.io/is-default-class: "false"
      name: synostorage
    provisioner: csi.san.synology.com
    parameters:
      fsType: 'ext4'
      dsm: '192.168.1.1'
      location: '/volume1'
    reclaimPolicy: Retain
    allowVolumeExpansion: true
    
  2. Configure the StorageClass properties by assigning the parameters in the table below. You can leave a parameter blank if you don’t have a preference:

    Name     | Type   | Description                                                                                                 | Default
    dsm      | string | The IPv4 address of your DSM, which must be included in client-info.yml so the CSI driver can log in to DSM | -
    location | string | The location (/volume1, /volume2, ...) on DSM where the LUN for the PersistentVolume will be created        | -
    fsType   | string | The file system with which the PersistentVolumes are formatted when you mount them on the pods              | 'ext4'

    Notice

    • If you leave the parameter location blank, the CSI driver will choose a volume on DSM with available storage to create the volumes.
    • All volumes created by the CSI driver are Thin Provisioned LUNs on DSM. This will allow you to take snapshots of them.
  3. Apply the YAML files to the Kubernetes cluster.

    kubectl apply -f <storage-class.yml>
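
With the class applied, PersistentVolumeClaims can request storage from it. A minimal sketch using the synostorage class from the example above (claim name and size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  storageClassName: synostorage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```

Because the class sets allowVolumeExpansion: true, the volume can later be grown by increasing spec.resources.requests.storage on the bound claim.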

Creating Volume Snapshot Classes

Create and apply VolumeSnapshotClasses with the properties you want.

  1. Create YAML files using the one at deploy/kubernetes/<k8s version>/snapshotter/volume-snapshot-class.yml as an example. Its content is as follows:

    apiVersion: snapshot.storage.k8s.io/v1beta1    # v1 for kubernetes v1.20 and above
    kind: VolumeSnapshotClass
    metadata:
      name: synology-snapshotclass
      annotations:
        storageclass.kubernetes.io/is-default-class: "false"
    driver: csi.san.synology.com
    deletionPolicy: Delete
    # parameters:
    #   description: 'Kubernetes CSI'
    #   is_locked: 'false'
    
  2. Configure the volume snapshot class properties by assigning the following parameters. All parameters are optional:

    Name        | Type   | Description                            | Default
    description | string | The description of the snapshot on DSM | ""
    is_locked   | string | Whether to lock the snapshot on DSM    | 'false'
  3. Apply the YAML files to the Kubernetes cluster.

    kubectl apply -f <volume-snapshot-class.yml>
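
With the class applied, snapshots are requested through VolumeSnapshot objects. A minimal sketch (the snapshot and source claim names are illustrative; use apiVersion snapshot.storage.k8s.io/v1beta1 on Kubernetes versions before 1.20):

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: data-snapshot
spec:
  volumeSnapshotClassName: synology-snapshotclass
  source:
    persistentVolumeClaimName: data-pvc   # existing claim to snapshot
```

A new PVC can then restore from this snapshot by referencing it in its dataSource field with kind: VolumeSnapshot and apiGroup: snapshot.storage.k8s.io.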

Building & Manually Installing

By default, the CSI driver will pull the latest image from Docker Hub.

If you want to use locally built images for installation, edit all files under deploy/kubernetes/<k8s version>/ and make sure imagePullPolicy: IfNotPresent is set on every csi-plugin container.

Building

  • To build the CSI driver, execute make.
  • To build the synocli dev tool, execute make synocli. The output binary will be at bin/synocli.
  • To run unit tests, execute make test.
  • To build a docker image, run ./scripts/deploy.sh build. Afterwards, run docker images to check the newly created image.

Installation

  • To install all pods of the CSI driver, run ./scripts/deploy.sh install --all
  • To install pods of the CSI driver without the snapshotter, run ./scripts/deploy.sh install --basic
  • Run ./scripts/deploy.sh --help to see more information on the usage of the commands.

Uninstallation

If you are no longer using the CSI driver, make sure that no other resources in your Kubernetes cluster are using storage managed by the Synology CSI driver before uninstalling it.

  • ./scripts/uninstall.sh
Comments
  • Compatibility with Nomad

    Hello! I was wondering if synology-csi works with Nomad? At first glance it would appear there is only support for Kubernetes, but I just wanted to double-check. Thank you!

  • arm64 release

    First: congrats on releasing a first version.

    Any plans on releasing an ARM64 version of the images?

    I don't know what you're using for CI, but there are many options to easily build multiple targets with GitHub Actions and goreleaser.

  • Unable to log in, getting 402

    Hi, all

    I have tried both an admin account and a newly created account, but the controller always returns the "Failed to login" error. Does anyone have more information on setting up the CSI driver?

  • Error creating volume using SMB protocol

    Hello,

    Thanks for all your work. This integration looks very good.

    I have tried to use it, but I am getting the error below:

    Name:          test
    Namespace:     vaultwarden
    StorageClass:  synology-smb-storage
    Status:        Pending
    Volume:
    Labels:        <none>
    Annotations:   volume.beta.kubernetes.io/storage-provisioner: csi.san.synology.com
                   volume.kubernetes.io/storage-provisioner: csi.san.synology.com
    Finalizers:    [kubernetes.io/pvc-protection]
    Capacity:
    Access Modes:
    VolumeMode:    Filesystem
    Used By:       <none>
    Events:
      Type     Reason                Age                From                                                             Message
      ----     ------                ----               ----                                                             -------
      Normal   ExternalProvisioning  14s (x3 over 26s)  persistentvolume-controller                                      waiting for a volume to be created, either by external provisioner "csi.san.synology.com" or manually created by system administrator
      Normal   Provisioning          10s (x5 over 26s)  csi.san.synology.com_node2_2fb7c8e5-b9d1-4829-9e76-d2dff23ee566  External provisioner is provisioning volume for claim "vaultwarden/test"
      Warning  ProvisioningFailed    10s (x5 over 26s)  csi.san.synology.com_node2_2fb7c8e5-b9d1-4829-9e76-d2dff23ee566  failed to provision volume with StorageClass "synology-smb-storage": rpc error: code = Internal desc = Couldn't find any host available to create Volume
    

    I have read the documentation and checked that the same host is configured both in the secret referenced by the storage class and in the secret that stores the clients. Here they are:

    StorageClass

    Name:            synology-smb-storage
    IsDefaultClass:  No
    Annotations:     kubectl.kubernetes.io/last-applied-configuration={"allowVolumeExpansion":true,"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"synology-smb-storage"},"parameters":{"csi.storage.k8s.io/node-stage-secret-name":"cifs-csi-credentials","csi.storage.k8s.io/node-stage-secret-namespace":"synology-csi","dsm":"192.168.30.13","location":"/volume1/KubernetesVolumes","protocol":"smb"},"provisioner":"csi.san.synology.com","reclaimPolicy":"Retain"}
    
    Provisioner:           csi.san.synology.com
    Parameters:            csi.storage.k8s.io/node-stage-secret-name=cifs-csi-credentials,csi.storage.k8s.io/node-stage-secret-namespace=synology-csi,dsm=192.168.30.13,location=/volume1/KubernetesVolumes,protocol=smb
    AllowVolumeExpansion:  True
    MountOptions:          <none>
    ReclaimPolicy:         Retain
    VolumeBindingMode:     Immediate
    Events:                <none>
    

    StorageClass Secret

    apiVersion: v1
    data:
      password: xxxxx
      username: xxxxx
    kind: Secret
    metadata:
      annotations:
        kubectl.kubernetes.io/last-applied-configuration: |
          {"apiVersion":"v1","kind":"Secret","metadata":{"annotations":{},"name":"cifs-csi-credentials","namespace":"synology-csi"},"stringData":{"password":"UGVyJmNvMTgxMDE2","username":"ampkaWF6"},"type":"Opaque"}
      creationTimestamp: "2022-06-20T19:25:35Z"
      name: cifs-csi-credentials
      namespace: synology-csi
      resourceVersion: "7344539"
      uid: f283712a-a557-4f5a-83b2-dfea269476c7
    type: Opaque
    

    Clients secret file

    apiVersion: v1
    data:
      client-info.yml: xxxxx
    kind: Secret
    metadata:
      creationTimestamp: "2022-06-20T18:44:45Z"
      name: client-info-secret
      namespace: synology-csi
      resourceVersion: "7338982"
      uid: df09b074-6008-4df2-a5e6-7a870bc840af
    type: Opaque
    

    And content of client-info.yml is

    ---
    clients:
      - host: 192.168.30.13
        port: 5001
        https: true
        username: xxxx
        password: xxxxx
    

    I think everything is configured properly. I can't find any error.

    Logs from the pods of the synology-csi-node deployment look fine (no errors). The only error I can see is from the controller.

    csi-provisioner container

    I0620 19:45:43.549885       1 controller.go:1279] provision "vaultwarden/test" class "synology-smb-storage": started
    I0620 19:45:43.550114       1 connection.go:183] GRPC call: /csi.v1.Controller/CreateVolume
    I0620 19:45:43.550152       1 connection.go:184] GRPC request: {"capacity_range":{"required_bytes":1073741824},"name":"pvc-15584bfb-4154-4d8c-9c3e-64a150d562f1","parameters":{"dsm":"192.168.30.13","location":"/volume1/KubernetesVolumes","protocol":"smb"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":1}}]}
    I0620 19:45:43.550269       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"vaultwarden", Name:"test", UID:"15584bfb-4154-4d8c-9c3e-64a150d562f1", APIVersion:"v1", ResourceVersion:"7346608", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "vaultwarden/test"
    I0620 19:45:43.838611       1 connection.go:186] GRPC response: {}
    I0620 19:45:43.838809       1 connection.go:187] GRPC error: rpc error: code = Internal desc = Couldn't find any host available to create Volume
    I0620 19:45:43.838894       1 controller.go:767] CreateVolume failed, supports topology = false, node selected false => may reschedule = false => state = Finished: rpc error: code = Internal desc = Couldn't find any host available to create Volume
    I0620 19:45:43.839015       1 controller.go:1074] Final error received, removing PVC 15584bfb-4154-4d8c-9c3e-64a150d562f1 from claims in progress
    W0620 19:45:43.839048       1 controller.go:933] Retrying syncing claim "15584bfb-4154-4d8c-9c3e-64a150d562f1", failure 9
    E0620 19:45:43.839104       1 controller.go:956] error syncing claim "15584bfb-4154-4d8c-9c3e-64a150d562f1": failed to provision volume with StorageClass "synology-smb-storage": rpc error: code = Internal desc = Couldn't find any host available to create Volume
    I0620 19:45:43.839166       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"vaultwarden", Name:"test", UID:"15584bfb-4154-4d8c-9c3e-64a150d562f1", APIVersion:"v1", ResourceVersion:"7346608", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "synology-smb-storage": rpc error: code = Internal desc = Couldn't find any host available to create Volume
    E0620 19:46:10.337400       1 controller.go:1025] claim "325f380f-ca75-4b27-98e8-e01a85c8f5e4" in work queue no longer exists
    

    csi-plugin container

    2022-06-20T19:51:09Z [INFO] [driver/utils.go:104] GRPC call: /csi.v1.Controller/CreateVolume
    2022-06-20T19:51:09Z [INFO] [driver/utils.go:105] GRPC request: {"capacity_range":{"required_bytes":1073741824},"name":"pvc-605b08d1-2a1d-4803-a0da-1a79687bfa6a","parameters":{"dsm":"192.168.30.13","location":"/Volume1/KubernetesVolumes","protocol":"smb"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":1}}]}
    2022-06-20T19:51:10Z [ERROR] [service/dsm.go:474] [192.168.30.13] Failed to create Volume: rpc error: code = Internal desc = Failed to create share, err: Share API error. Error code: 3300
    2022-06-20T19:51:10Z [ERROR] [driver/utils.go:108] GRPC error: rpc error: code = Internal desc = Couldn't find any host available to create Volume
    2022-06-20T19:51:11Z [INFO] [driver/utils.go:104] GRPC call: /csi.v1.Controller/CreateVolume
    2022-06-20T19:51:11Z [INFO] [driver/utils.go:105] GRPC request: {"capacity_range":{"required_bytes":1073741824},"name":"pvc-605b08d1-2a1d-4803-a0da-1a79687bfa6a","parameters":{"dsm":"192.168.30.13","location":"/Volume1/KubernetesVolumes","protocol":"smb"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":1}}]}
    2022-06-20T19:51:11Z [ERROR] [service/dsm.go:474] [192.168.30.13] Failed to create Volume: rpc error: code = Internal desc = Failed to create share, err: Share API error. Error code: 3300
    2022-06-20T19:51:11Z [ERROR] [driver/utils.go:108] GRPC error: rpc error: code = Internal desc = Couldn't find any host available to create Volume
    2022-06-20T19:51:13Z [INFO] [driver/utils.go:104] GRPC call: /csi.v1.Controller/CreateVolume
    2022-06-20T19:51:13Z [INFO] [driver/utils.go:105] GRPC request: {"capacity_range":{"required_bytes":1073741824},"name":"pvc-605b08d1-2a1d-4803-a0da-1a79687bfa6a","parameters":{"dsm":"192.168.30.13","location":"/Volume1/KubernetesVolumes","protocol":"smb"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":1}}]}
    2022-06-20T19:51:13Z [ERROR] [service/dsm.go:474] [192.168.30.13] Failed to create Volume: rpc error: code = Internal desc = Failed to create share, err: Share API error. Error code: 3300
    2022-06-20T19:51:13Z [ERROR] [driver/utils.go:108] GRPC error: rpc error: code = Internal desc = Couldn't find any host available to create Volume
    2022-06-20T19:51:17Z [INFO] [driver/utils.go:104] GRPC call: /csi.v1.Controller/CreateVolume
    2022-06-20T19:51:17Z [INFO] [driver/utils.go:105] GRPC request: {"capacity_range":{"required_bytes":1073741824},"name":"pvc-605b08d1-2a1d-4803-a0da-1a79687bfa6a","parameters":{"dsm":"192.168.30.13","location":"/Volume1/KubernetesVolumes","protocol":"smb"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":1}}]}
    2022-06-20T19:51:18Z [ERROR] [service/dsm.go:474] [192.168.30.13] Failed to create Volume: rpc error: code = Internal desc = Failed to create share, err: Share API error. Error code: 3300
    2022-06-20T19:51:18Z [ERROR] [driver/utils.go:108] GRPC error: rpc error: code = Internal desc = Couldn't find any host available to create Volume
    2022-06-20T19:51:26Z [INFO] [driver/utils.go:104] GRPC call: /csi.v1.Controller/CreateVolume
    2022-06-20T19:51:26Z [INFO] [driver/utils.go:105] GRPC request: {"capacity_range":{"required_bytes":1073741824},"name":"pvc-605b08d1-2a1d-4803-a0da-1a79687bfa6a","parameters":{"dsm":"192.168.30.13","location":"/Volume1/KubernetesVolumes","protocol":"smb"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":1}}]}
    2022-06-20T19:51:26Z [ERROR] [service/dsm.go:474] [192.168.30.13] Failed to create Volume: rpc error: code = Internal desc = Failed to create share, err: Share API error. Error code: 3300
    2022-06-20T19:51:26Z [ERROR] [driver/utils.go:108] GRPC error: rpc error: code = Internal desc = Couldn't find any host available to create Volume
    2022-06-20T19:51:42Z [INFO] [driver/utils.go:104] GRPC call: /csi.v1.Controller/CreateVolume
    2022-06-20T19:51:42Z [INFO] [driver/utils.go:105] GRPC request: {"capacity_range":{"required_bytes":1073741824},"name":"pvc-605b08d1-2a1d-4803-a0da-1a79687bfa6a","parameters":{"dsm":"192.168.30.13","location":"/Volume1/KubernetesVolumes","protocol":"smb"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":1}}]}
    2022-06-20T19:51:43Z [ERROR] [service/dsm.go:474] [192.168.30.13] Failed to create Volume: rpc error: code = Internal desc = Failed to create share, err: Share API error. Error code: 3300
    2022-06-20T19:51:43Z [ERROR] [driver/utils.go:108] GRPC error: rpc error: code = Internal desc = Couldn't find any host available to create Volume
    2022-06-20T19:52:15Z [INFO] [driver/utils.go:104] GRPC call: /csi.v1.Controller/CreateVolume
    2022-06-20T19:52:15Z [INFO] [driver/utils.go:105] GRPC request: {"capacity_range":{"required_bytes":1073741824},"name":"pvc-605b08d1-2a1d-4803-a0da-1a79687bfa6a","parameters":{"dsm":"192.168.30.13","location":"/Volume1/KubernetesVolumes","protocol":"smb"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":1}}]}
    2022-06-20T19:52:15Z [ERROR] [service/dsm.go:474] [192.168.30.13] Failed to create Volume: rpc error: code = Internal desc = Failed to create share, err: Share API error. Error code: 3300
    2022-06-20T19:52:15Z [ERROR] [driver/utils.go:108] GRPC error: rpc error: code = Internal desc = Couldn't find any host available to create Volume
    

    I have also checked the user I set, and it has permission to read/write in location /Volume1/KubernetesVolumes.

  • standard_init_linux.go:228: exec user process caused: exec format error

    I have installed the Synology CSI driver, but I am getting this error: "standard_init_linux.go:228: exec user process caused: exec format error". All pods in the csi-synology namespace are crashing with that message. Can anyone assist me? I am running a k3s Kubernetes cluster on 6 Raspberry Pi 4s with Ubuntu installed.

  • LUNs successfully create but fail to mount

    I am able to successfully connect with my client config. Deploying the driver is successful. But using the StorageClass results in the following errors:

    6m31s       Normal    Scheduled                pod/dokuwiki-cf5bf85c9-7bsp4     Successfully assigned dokuwiki/dokuwiki-cf5bf85c9-7bsp4 to loving-kypris
    6m30s       Normal    SuccessfulAttachVolume   pod/dokuwiki-cf5bf85c9-7bsp4     AttachVolume.Attach succeeded for volume "pvc-f2ecf090-4737-41b2-8644-8442f7179b00"
    2m          Warning   FailedMount              pod/dokuwiki-cf5bf85c9-7bsp4     MountVolume.MountDevice failed for volume "pvc-f2ecf090-4737-41b2-8644-8442f7179b00" : rpc error: code = Internal desc = rpc error: code = Internal desc = Failed to login with target iqn [iqn.2000-01.com.synology:mother.pvc-f2ecf090-4737-41b2-8644-8442f7179b00], err: Failed to connect to bus: No data available
    iscsiadm: can not connect to iSCSI daemon (111)!
    iscsiadm: Cannot perform discovery. Initiatorname required.
    iscsiadm: Could not perform SendTargets discovery: could not connect to iscsid
     (exit status 20)
    2m10s   Warning   FailedMount             pod/dokuwiki-cf5bf85c9-7bsp4     Unable to attach or mount volumes: unmounted volumes=[dokuwiki-data], unattached volumes=[kube-api-access-g4bgv dokuwiki-data]: timed out waiting for the condition
    

    Here's an image showing the LUNs successfully created on the NAS side: [image]

  • User permission for csi

    Hi all,

    Is there a way to use synology-csi without an admin account, i.e. with reduced permissions? It does not feel good that an admin account/service user with username/password is available in the Kubernetes cluster and potentially usable by others.

  • The iSCSI remove policy

    After successfully creating a PVC, an iSCSI LUN is generated on my NAS. I tried to remove the PVC, but the iSCSI LUN was not removed after the PVC removal succeeded.

    Are there any rules for removing the iSCSI LUN, or do I have to remove it manually?

  • Couldn't find any host available to create volume

    Using general defaults for the values and updating my connection strings in the config, I am receiving this error:

    Failed to create Volume: rpc error: code = Internal desc = Failed to get available location, err: DSM Api error. Error code:105
    GRPC error: rpc error: code = Internal desc = Couldn't find any host available to create Volume

    Any idea what's happening here? I saw a previous issue where updating the StorageClass parameters was the solution, but it doesn't seem to resolve it for me.

    Thanks for any help!

  • Upgrading to 1.1.0 breaks existing storageclasses with `RPC error: rpc error: code = InvalidArgument desc = Unknown protocol`

    Upgrading to 1.1.0 breaks existing storageclasses with `RPC error: rpc error: code = InvalidArgument desc = Unknown protocol`

    Since upgrading to 1.1.0, mounting no longer works. The csi-plugin daemonset logs:

    2022-04-28T08:34:31Z [ERROR] [driver/utils.go:108] GRPC error: rpc error: code = InvalidArgument desc = Unknown protocol
    

    I tried adding protocol: iscsi to my storage classes, but Kubernetes forbids it on the grounds that parameters can't be edited after storage class creation.

    I expect the csi-plugin to default to iSCSI and to be backwards compatible.

  • Please don't use `imagePullPolicy: Always`

    See https://github.com/SynologyOpenSource/synology-csi/blob/515bc7f0ed52002f2babdbdbf6b2e5cfa6b0af14/deploy/kubernetes/v1.19/controller.yml#L153 and quite some other places.

    Please do not use Always for tags representing stable versions, like 1.0.1 in this case. While Always is very convenient for you during development, it may create unreproducible builds downstream if you ever push an update to a supposedly stable image.

  • Provide documentation on backups, restore and disaster recovery

    My Synology shows many more "volumes"/LUNs created using the Synology CSI than I have stateful sets and they are named arbitrarily. The result is a "black box" of storage that I am unable to reason about for purposes of backup and restore or even cleaning up.

    It would be helpful if documentation provided clear instructions for backing up and restoring volumes in cases of, for example, cluster failure.

    Questions I have:

    • Given unrecoverable cluster failure, how does a user restore data to a new stateful set?
    • How does a user back up their storage to e.g. another Synology NAS? With Hyper Backup? What about generic off-site storage?
    • How does a user clean up created backing storage safely?

    With Docker Compose, it is easy to reason about mounted volumes, especially when using bind mounts: such and such is the specified mounted volume and backup can be as simple as a single rsync command.

  • Helm chart for synology-csi

    Resolves #8

    I implemented a Helm chart for synology-csi deployment.

    It uses GitHub as the Helm chart registry, but requires setting up GitHub Pages as described in https://helm.sh/docs/howto/chart_releaser_action/

  • Label for iSCSI volumes

    Hi,

    It would be great to have an identifiable name for SAN/iSCSI volumes. The ID works, but if a PVC was deleted, it is hard to know which volume can be deleted safely. I know it is shown as "ready" instead of "connected", but it would be great to have some way of knowing which volume is used for what without relying on Kubernetes.

  • The iSCSI remove policy

    Hi there,

    In relation to #5, The iSCSI remove policy.

    If I set the storage class to Retain, the PVs should be retained when I delete a PVC. But if I then delete them with kubectl delete pv, they are not removed from the Synology.

    Do I have to add any rules to remove the iSCSI drives, or do I have to remove them manually?

  • Prometheus metrics support?

    Hello,

    I’m using this CSI driver in my environment, but I wonder if it supports a Prometheus metrics endpoint so that I can scrape PVC and storage usage into Prometheus and Grafana dashboards.
