
Dothill-csi dynamic provisioner for Kubernetes

A dynamic persistent volume (PV) provisioner for Dothill AssuredSAN-based storage systems.


Introduction

Dealing with persistent storage on Kubernetes can be particularly cumbersome, especially for on-premises installations, or when the cloud provider's persistent storage solutions are not applicable.

Entry-level SAN appliances usually offer a low-cost yet powerful solution for redundant persistent storage, with the flexibility of attaching it to any host on your network.

Dothill Systems was acquired by Seagate in 2015 for its AssuredSAN family of hybrid storage.

Seagate continues to maintain the line-up with subsequent series:

It is also privately labeled by some of the world's most prominent storage brands:

This project

Dothill-CSI implements the Container Storage Interface in order to facilitate dynamic provisioning of persistent volumes on Kubernetes clusters.

All Dothill AssuredSAN-based equipment shares a common API, which may or may not be advertised by the final integrator. Although this project is developed and tested on HPE MSA 2050 & 2060 hardware, it should work with many other models from various brands. We are therefore looking for tests and feedback from users of other models.

As this project has reached a certain level of maturity, and as of version 3.0.0, this CSI driver is offered as an open-source project under the MIT license.

Roadmap

This project has reached a beta stage, and we hope to promote it to general availability with the help of external users and contributors. Feel free to help!

The following features are considered for the near future:

  • PV snapshotting (supported by AssuredSAN appliances)
  • Additional Prometheus metrics

To a lesser extent, the following features are considered for the longer term:

  • Raw block volume support
  • Fibre Channel (supported by AssuredSAN appliances)
  • Authentication proxy, as appliances lack proper rights management

Features

Features / availability roadmap (alpha → beta → general availability):

  • dynamic provisioning: 2.3.x
  • resize: 2.4.x
  • snapshot: 3.1.x
  • prometheus metrics: 3.2.x
  • raw blocks: long term
  • fiber channel: long term
  • authentication proxy: long term

Installation

Uninstall iSCSI tools on your node(s)

iscsid and multipathd are now shipped as sidecars on each node, so it is strongly suggested to uninstall any open-iscsi package.

Deploy the provisioner to your Kubernetes cluster

The preferred approach to install this project is to use the provided Helm Chart.

Configure your release

Create a values.yaml file. It should contain configuration for your release.

Please read the dothill-csi helm-chart README.md for more details about this file.

Install the helm chart

You should first add our charts repository, and then install the chart as follows.

helm repo add enix https://charts.enix.io/
helm install my-release enix/dothill-csi -f ./example/values.yaml

Create a storage class

In order to dynamically provision persistent volumes, you first need to create a storage class as well as its associated secret. To do so, please refer to this example.
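As a rough sketch of what such a storage class can look like (the provisioner name and the driver-specific parameters such as fsType and pool are assumptions here, to be checked against the repository's example; the csi.storage.k8s.io/* keys are the standard Kubernetes CSI secret parameters):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: dothill-storage
provisioner: dothill.csi.enix.io          # hypothetical driver name
allowVolumeExpansion: true
parameters:
  # Standard CSI keys pointing at the secret that holds the appliance
  # API credentials (name and namespace are up to you):
  csi.storage.k8s.io/provisioner-secret-name: dothill-credentials
  csi.storage.k8s.io/provisioner-secret-namespace: kube-system
  csi.storage.k8s.io/controller-publish-secret-name: dothill-credentials
  csi.storage.k8s.io/controller-publish-secret-namespace: kube-system
  fsType: ext4                            # hypothetical parameter
  pool: A                                 # hypothetical parameter
```

The referenced secret would typically carry the appliance's API endpoint and base64-encoded credentials.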

Run a test pod

To make sure everything went well, there's an example pod you can deploy from the example/ directory. If the pod reaches the Running status, you're good to go!

kubectl apply -f example/pod.yaml
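If you prefer writing your own manifest, a minimal PVC-plus-pod pair along the same lines might look like this (all names and the storage class are hypothetical, not taken from example/pod.yaml):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: dothill-storage   # hypothetical class name
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test-pvc
```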

Command-line arguments

You can get a list of all available command-line flags using the -help switch.

Logging

Logging verbosity can be adjusted using the -v flag:

  • -v 0: standard logs to follow what's going on (default if not specified)
  • -v 9: debug logs (extremely verbose)

For advanced logging configuration, see klog.

Development

You can start the drivers over TCP so your remote dev cluster can connect to them.

go run ./cmd/ -bind=tcp://0.0.0.0:10000

Testing

You can run sanity checks by using the sanity helper script in the test/ directory:

./test/sanity
Comments
  • Add nodeSelector in helm chart


    In GitLab by @paul.laffitte on Dec 28, 2020, 18:58

    @abuisine: it should be possible to add a nodeSelector in the helm chart, for example to restrict the daemonset to the nodes of a given availability zone

  • improve csi-lib-iscsi logs


    In GitLab by @paul.laffitte on Dec 29, 2020, 18:20

    • ~~Add levels of verbosity~~
    • ~~Logs in Dothill at lower levels of verbosity~~
    • [x] Log output of multipath -f on error (map in use)
  • Migrate configuration to StorageClass


    In GitLab by @arcln on May 10, 2019, 22:05

    • [x] read parameters from storage class
    • [x] implement lazy login/logout
    • [x] fetch storage class from its name in Delete() call
  • Fix sanity script to allow running it locally without custom setup


    In GitLab by @paul.laffitte on Sep 21, 2020, 23:16

    • [x] The file /etc/iscsi/initiatorname.iscsi may not exist and cannot be created by the script without sudo
    • [x] The script should crash if environment variables for secret templating are not available
    • [x] Delete sockets in /tmp after a crash
  • Surround username with double-quotes in logs (%q formatting option)


    In GitLab by @paul.laffitte on Sep 15, 2020, 23:01

    Since klog seems to remove \n at the end of a line, if a user encodes their username with a trailing \n (echo "username" | base64), they will not notice it in the logs. Worse, it can reinforce the false belief that the username is correctly formatted.

    The goal is to help users find unwanted characters in their username (\t or \n, for instance).
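The effect of the %q verb is easy to demonstrate with a short, self-contained Go snippet (pure standard library, independent of the driver code; the quote helper is just for illustration):

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// quote formats a value the way the %q verb would render it in a log line.
func quote(s string) string {
	return fmt.Sprintf("%q", s)
}

func main() {
	// `echo "admin" | base64` keeps the trailing newline in the encoded payload.
	encoded := base64.StdEncoding.EncodeToString([]byte("admin\n"))
	decoded, _ := base64.StdEncoding.DecodeString(encoded)

	// With %s the trailing \n is invisible once the logger trims it;
	// with %q it shows up explicitly as \n inside the quotes.
	fmt.Printf("user with %%s: %s\n", decoded)
	fmt.Printf("user with %%q: %s\n", quote(string(decoded)))
}
```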

  • Iscsi rescan should be executed only after unmapping


    In GitLab by @paul.laffitte on Nov 4, 2020, 18:38

    https://github.com/container-storage-interface/spec/blob/master/spec.md#volume-lifecycle

    • ControllerUnpublish is executed after NodeUnpublish
    • ControllerUnpublish unmaps the volume on the MSA
    • NodeUnpublish rescans after ejecting devices, so they reappear just before being unmapped
  • Dell PowerVault ME4


    Hi there !

    I am using Dell PowerVault ME4024 disk storage, which is claimed to be supported. I tried to use the driver but encountered an authentication problem with the API.

    Controller logs:

    I1103 14:36:37.346453 1 driver.go:112] === [ROUTINE START] /csi.v1.Controller/CreateVolume ===
    I1103 14:36:37.346501 1 controller.go:243] using dothill API at address https://sanip/api
    I1103 14:36:37.346513 1 controller.go:245] dothill client is already configured for this API, skipping login
    I1103 14:36:37.346528 1 provisioner.go:67] received 5368709120B volume request
    I1103 14:36:37.346539 1 provisioner.go:76] creating volume fde4b27e9ae44ddd9750fd92c4368b17 (size 5368709120B) in pool A
    I1103 14:36:37.346568 1 dothill.go:94] -> GET /login/<hidden>
    I1103 14:36:37.719588 1 dothill.go:124] <- [2 Error] <hidden>
    E1103 14:36:37.719723 1 driver.go:118] Dothill API returned non-zero code 2 (Invalid sessionkey)
    I1103 14:36:37.719737 1 driver.go:121] === [ROUTINE END] /csi.v1.Controller/CreateVolume === 
    

    Kubernetes version: 1.24.4

    API documentation: https://www.dell.com/support/manuals/fr-fr/powervault-me4024/me4_series_cli_pub/using-a-script-to-access-the-cli?guid=guid-9ae5ccd6-a207-42df-b2f3-1e02a487a354&lang=en-us

    Can you help me?

    Thanks !

  • arm64 image?


    Would it be possible to provide the image

    enix/san-iscsi-csi
    

    as a multi arch image covering

    • amd64 (as now)
    • arm64 (additionally)

    Background: I have a hybrid cluster with amd64 and arm64 nodes and the daemonset does not run on arm64 nodes.

  • HPE MSA 2060


    Good day!

    I am using HPE MSA 2060 disk storage which is claimed to be supported.

    I took the HPE MSA 2060 API documentation here - https://www.intesiscon.com/ficheros/manuales-tecnicos/255-HPE-a00105313en-us-HPE-MSA-1060-2060-2062-CLI-Reference-Guide.pdf

    Faced some problems while using this project.

    1. https://github.com/enix/san-iscsi-csi/blob/main/pkg/controller/publisher.go#L43 does not match the HPE MSA 2060 API documentation: instead of host-maps, it should be maps (page 347 of the documentation).
    2. If you connect several PVs with one Helm deployment, all volumes in the disk storage receive the same LUN number, which is unacceptable.
    3. For some reason, the partition table and file system are not created in the slave PV.
    4. https://github.com/enix/dothill-api-go/issues/12

    Applications used:

    1. Ubuntu 20.04
    2. Kubernetes 1.21.4
    3. Helm 3.3.4

    PS: if you need additional information, let me know what I should provide.

  • Migrate logging to structured logging


    With structured logging, log messages are no longer formatted, leaving argument marshalling up to the logging client implementation. This allows messages to be a static description of the event.

    All string formatting (%d, %v, %w, %s) should be removed and log message strings simplified. Describing arguments in log messages is no longer needed; only a description of what happened should remain.

    https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/migration-to-structured-logging.md https://kubernetes.io/blog/2020/09/04/kubernetes-1-19-introducing-structured-logs/

  • [2.4.1] warning while expanding persistent volumes


    While running 2.4.1 and expanding volumes, I get a warning event on the PVC resource:

    • reason : ExternalExpanding
    • from : volume_expand
    • message : Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC

    After a rather short period of time (30s), the resize is processed as it should.
