CSI driver for NVMf, mainly supporting RDMA and TCP transports for software-defined storage over NVMe-oF

CSI NVMf driver

Overview

This is the repository for the NVMe-oF CSI Driver. Currently it implements the bare minimum of the CSI spec.

Requirements

The CSI NVMf driver requires both the initiator and the target to run Linux kernel 5.0 or newer. Before using this CSI driver, you should create an NVMf remote disk on the target side and record its traddr/trport/trtype/nqn/deviceuuid.

Load the NVMf kernel modules on the initiator and target

# when using TCP as the transport
$ modprobe nvme-tcp
# when using RDMA as the transport
$ modprobe nvme-rdma
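
You also need a running NVMf target before testing the driver. Below is a minimal sketch of a kernel (nvmet) TCP target configured through configfs; the repository's doc/setup_kernel_nvmf_target.md is the full guide, and the NQN, backing file, address, and port here are example values, not required ones.

# load the target-side modules and enter the nvmet configfs tree
$ modprobe nvmet
$ modprobe nvmet-tcp
$ cd /sys/kernel/config/nvmet
# create a subsystem and allow any host to connect (fine for testing)
$ mkdir subsystems/nqn.2023-01.example.com:testnqn
$ echo 1 > subsystems/nqn.2023-01.example.com:testnqn/attr_allow_any_hosts
# back namespace 1 with a file (a real block device works too)
$ truncate -s 1G /tmp/nvmet-backend.img
$ mkdir subsystems/nqn.2023-01.example.com:testnqn/namespaces/1
$ echo -n /tmp/nvmet-backend.img > subsystems/nqn.2023-01.example.com:testnqn/namespaces/1/device_path
$ echo 1 > subsystems/nqn.2023-01.example.com:testnqn/namespaces/1/enable
# record this UUID: it is the deviceuuid the driver needs
$ cat subsystems/nqn.2023-01.example.com:testnqn/namespaces/1/device_uuid
# expose the subsystem on a TCP port (traddr/trsvcid are examples)
$ mkdir ports/1
$ echo tcp > ports/1/addr_trtype
$ echo ipv4 > ports/1/addr_adrfam
$ echo 192.168.122.18 > ports/1/addr_traddr
$ echo 49153 > ports/1/addr_trsvcid
# the symlink must point at the subsystem directory itself
$ ln -s /sys/kernel/config/nvmet/subsystems/nqn.2023-01.example.com:testnqn \
        ports/1/subsystems/nqn.2023-01.example.com:testnqn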

Test NVMf driver using csc

Get the csc tool from https://github.com/rexray/gocsi/tree/master/csc

$ go get github.com/rexray/gocsi/csc
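
Note: with Go 1.17 and later, go get no longer builds and installs binaries; if the command above fails, an invocation like the following may work instead (assuming the gocsi module supports module-mode installs):

$ go install github.com/rexray/gocsi/csc@latest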

1. Compile the NVMf driver

$ make

2. Start the NVMf driver

$ ./output/nvmfplugin --endpoint tcp://127.0.0.1:10000 --nodeid CSINode

3.1 Get plugin info

$ csc identity plugin-info --endpoint tcp://127.0.0.1:10000
"csi.nvmf.com" "v1.0.0"

3.2 NodePublish a volume

$ export TargetTrAddr="NVMf Target Server IP (Ex: 192.168.122.18)"
$ export TargetTrPort="NVMf Target Server IP Port (Ex: 49153)"
$ export TargetTrType="NVMf Target Type (Ex: tcp | rdma)"
$ export DeviceUUID="NVMf Target Device UUID (Ex: 58668891-c3e4-45d0-b90e-824525c16080)"
$ export NQN="NVMf Target NQN"
$ csc node publish --endpoint tcp://127.0.0.1:10000 --target-path /mnt/nvmf --attrib targetTrAddr=$TargetTrAddr \
                   --attrib targetTrPort=$TargetTrPort --attrib targetTrType=$TargetTrType \
                   --attrib deviceUUID=$DeviceUUID --attrib nqn=$NQN nvmftestvol
nvmftestvol

You should now find a new disk mounted at /mnt/nvmf.
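
Optionally, a few sanity checks on the node; the transport, address, port, and paths below are example values:

# the target's subsystem should appear in the discovery log
$ nvme discover -t tcp -a 192.168.122.18 -s 49153
# a new NVMe controller/namespace should be listed
$ nvme list
# and the filesystem should be mounted at the target path
$ findmnt /mnt/nvmf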

3.3 NodeUnpublish a volume

$ csc node unpublish --endpoint tcp://127.0.0.1:10000 --target-path /mnt/nvmf nvmftestvol
nvmftestvol
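
After unpublishing, the mount should be gone; a rough check (exact disconnect behavior depends on whether other volumes share the controller):

# should report nothing for the target path
$ findmnt /mnt/nvmf
# the test namespace should no longer be listed
$ nvme list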

Test NVMf driver in Kubernetes cluster

TODO: support dynamic provisioning.

1. Build the Docker image

$ make container

2.1 Load Driver

$ kubectl create -f deploy/kubernetes/
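
To verify the driver came up, something like the following should show the driver pods and, if the manifests register one, the CSIDriver object (pod names and namespaces depend on the manifests in deploy/kubernetes):

$ kubectl get pods -A | grep -i nvmf
$ kubectl get csidriver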

2.2 Unload Driver

$ kubectl delete -f deploy/kubernetes/

3.1 Create Storage Class (Dynamic Provisioning)

Not supported yet: the controller service is not ready, so dynamic provisioning does not work.

  • Create
$ kubectl create -f examples/kubernetes/example/storageclass.yaml
  • Check
$ kubectl get sc

3.2 Create PV (Static Provisioning)

  • Create
$ kubectl create -f examples/kubernetes/example/pv.yaml
  • Check
$ kubectl get pv
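
For reference, a hedged sketch of what a static PV plus a matching PVC for this driver can look like, created inline with a heredoc. The volumeAttributes keys mirror the --attrib parameters used with csc above, and the driver name csi.nvmf.com matches the plugin-info output; every other value (names, size, address, NQN, UUID) is an example, not taken from the repository's yaml files:

$ cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: csi-nvmf-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  csi:
    driver: csi.nvmf.com
    volumeHandle: nvmftestvol
    volumeAttributes:
      targetTrAddr: "192.168.122.18"
      targetTrPort: "49153"
      targetTrType: "tcp"
      deviceUUID: "58668891-c3e4-45d0-b90e-824525c16080"
      nqn: "nqn.2023-01.example.com:testnqn"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-nvmf-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""
  resources:
    requests:
      storage: 1Gi
  volumeName: csi-nvmf-pv
EOF

Setting storageClassName to the empty string makes the claim bind to the pre-created PV instead of waiting for a dynamic provisioner.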

4. Create Nginx Container

  • Create Deployment
$ kubectl create -f examples/kubernetes/example/nginx.yaml
  • Check
$ kubectl exec -it nginx-451df123421 -- /bin/bash
$ lsblk

Community, discussion, contribution, and support

You can reach the maintainers of this project at:

Code of conduct

Participation in the Kubernetes community is governed by the Kubernetes Code of Conduct.

Owner
Kubernetes CSI
Kubernetes specific Container-Storage-Interface (CSI) components
Comments
  • Fix some problems

    What type of PR is this?

    /kind bug

    What this PR does / why we need it:

    Fix some problems: when updating the CSI image, the CSI container may not exit.

    Which issue(s) this PR fixes:

    Fixes #

    Special notes for your reviewer:

    Does this PR introduce a user-facing change?:

    NONE

    
  • Deployment according to the docs is not working

    Hi,

    first of all, thanks for this project. I really like the idea of abstracting NVMe-oF mounting with a CSI plugin!

    I'm not sure if I did something wrong or if it is just an error in the examples or the documentation, but after deploying everything, this plugin does not work for me: the mounting never happens.

    A few possible errors I found in the docs were:

    • the storage class names in the examples do not match
      • storage class https://github.com/kubernetes-csi/csi-driver-nvmf/blob/master/examples/kubernetes/example/storageclass.yaml#L4
      • pv https://github.com/kubernetes-csi/csi-driver-nvmf/blob/master/examples/kubernetes/example/pv.yaml#L6
    • "NVMf CSI driver" instead of csi.nvmf.com in the PV example https://github.com/kubernetes-csi/csi-driver-nvmf/blob/master/examples/kubernetes/example/pv.yaml#L12
    • the --attrib parameter of the current https://github.com/rexray/gocsi/tree/master/csc does not exist anymore

    Not sure if there is more, or if the documentation is even complete?

    I basically used the files from deploy/kubernetes to deploy everything on vanilla k8s 1.23; the CSI driver also logs that the registration was successful, and my PV contains the csi.driver field as the registration output showed. The PVC is of course also bound to the PV, and I'm pretty confident in my k8s basics :smiley: but if you need more info I will happily provide everything!

    Is it just me or did I miss something? Does this csi driver work for anyone else as documented?

    Thanks in advance! Vincent

  • chore: fix some error in setup

    What type of PR is this?

    /kind documentation

    What this PR does / why we need it:

    1. add SPDK target setup to the README.
    2. fix the setup_kernel_nvmf_target command error.
    3. fix pv.yaml's storage class name.

    Which issue(s) this PR fixes:

    Fixes #12

    Special notes for your reviewer:

    Does this PR introduce a user-facing change?:

    none
    
  • refactor: refactor some code to make it more readable

    1. modify the go.mod module name.
    2. move disk.go and mounter.go into nvmf.go.
    3. make Connector an object.

    Signed-off-by: Meinhard Zhou [email protected]

  • fix: make example available

    What type of PR is this?

    /kind bug

    What this PR does / why we need it:

    1. fix a path error in pkg/nvmf/fabrics.go
    2. add an NVMf kernel target setup guide
    3. update the example and deploy yaml

    This makes the example available.

    Which issue(s) this PR fixes:

    Fixes #10

    Special notes for your reviewer:

    Does this PR introduce a user-facing change?:

    none
    
  • Failing to test the NVMf driver in a kubernetes cluster

    Hello :)

    I tried to test the NVMf driver in a kubernetes cluster by following https://github.com/kubernetes-csi/csi-driver-nvmf#test-nvmf-driver-in-kubernetes-cluster and failed to successfully bring up the nginx pod.

    The test node is running Debian 11 with kernel 5.19.0-rc4+ and was set up with kubeadm v1.25.2.

    First I wanted to set up an NVMf target as described in https://github.com/kubernetes-csi/csi-driver-nvmf/blob/master/doc/setup_kernel_nvmf_target.md (I used a file as the storage backend device and adjusted the addr_traddr accordingly). However, I got the following error:

    sudo ln -s /sys/kernel/config/nvmet/subsystems/nqn.2022-08.org.test-nvmf.example/namespaces/ /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2022-08.org.test-nvmf.example
    ln: failed to create symbolic link '/sys/kernel/config/nvmet/ports/1/subsystems/nqn.2022-08.org.test-nvmf.example': Invalid argument
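
    (Note for readers hitting the same error: in nvmet's configfs, the symlink in ports/<n>/subsystems/ must point at the subsystem directory itself, not at its namespaces/ subdirectory, so the likely fix, assuming the guide's NQN, is:)

    # link the subsystem directory, not namespaces/, into the port
    sudo ln -s /sys/kernel/config/nvmet/subsystems/nqn.2022-08.org.test-nvmf.example /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2022-08.org.test-nvmf.example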
    

    So instead I used the nvmetcli tool (git://git.infradead.org/users/hch/nvmetcli.git) and modified the tcp.json to point to /tmp/nvmet_test.img (removed nguid and uuid), use port 49153, use the node's IP, and set the subsystems and nqn fields to 'nqn.2022-08.org.test-nvmf.example'. To set up the NVMf target I ran:

    sudo nvme disconnect-all
    sudo python3 nvmetcli clear
    sudo python3 nvmetcli restore conv-test-tcp.json
    

    Running sudo nvme discover -t tcp -a 192.168.121.114 -s 49153 gave me:

    Discovery Log Number of Records 2, Generation counter 3
    =====Discovery Log Entry 0======
    trtype:  tcp
    adrfam:  ipv4
    subtype: unrecognized
    treq:    not specified, sq flow control disable supported
    portid:  1
    trsvcid: 49153
    subnqn:  nqn.2014-08.org.nvmexpress.discovery
    traddr:  192.168.121.114
    sectype: none
    =====Discovery Log Entry 1======
    trtype:  tcp
    adrfam:  ipv4
    subtype: nvme subsystem
    treq:    not specified, sq flow control disable supported
    portid:  1
    trsvcid: 49153
    subnqn:  nqn.2022-08.org.test-nvmf.example
    traddr:  192.168.121.114
    sectype: none
    

    Because I am using podman instead of docker, I replaced the docker commands in release-tools/build.make like this: sed -i 's/docker /podman /g' release-tools/build.make.

    I then adjusted pv.yaml with my targetTrAddr and the deviceUUID from cat /sys/kernel/config/nvmet/subsystems/nqn.2022-08.org.test-nvmf.example/namespaces/1/device_uuid.

    From there on I followed the instructions at https://github.com/kubernetes-csi/csi-driver-nvmf#test-nvmf-driver-in-kubernetes-cluster

    The nginx pod is not starting; running kubectl describe pods gives me:

    Name:             nginx-block-test1-55f6f8ff94-jfwd7
    Namespace:        default
    Priority:         0
    Service Account:  default
    Node:             <none>
    Labels:           app=nginx
                      pod-template-hash=55f6f8ff94
    Annotations:      <none>
    Status:           Pending
    IP:
    IPs:              <none>
    Controlled By:    ReplicaSet/nginx-block-test1-55f6f8ff94
    Containers:
      nginx:
        Image:        nginx
        Port:         80/TCP
        Host Port:    0/TCP
        Environment:  <none>
        Mounts:
          /dev/nvmf from nvmf-volume (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c6mdn (ro)
    Conditions:
      Type           Status
      PodScheduled   False
    Volumes:
      nvmf-volume:
        Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
        ClaimName:  csi-nvmf-pvc
        ReadOnly:   false
      kube-api-access-c6mdn:
        Type:                    Projected (a volume that contains injected data from multiple sources)
        TokenExpirationSeconds:  3607
        ConfigMapName:           kube-root-ca.crt
        ConfigMapOptional:       <nil>
        DownwardAPI:             true
    QoS Class:                   BestEffort
    Node-Selectors:              <none>
    Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
    Events:
      Type     Reason            Age   From               Message
      ----     ------            ----  ----               -------
      Warning  FailedScheduling  16s   default-scheduler  0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.
    

    And kubectl describe pvc outputs:

    Name:          csi-nvmf-pvc
    Namespace:     default
    StorageClass:  csi-nvmf-sc
    Status:        Pending
    Volume:
    Labels:        <none>
    Annotations:   volume.beta.kubernetes.io/storage-provisioner: csi.nvmf.com
                   volume.kubernetes.io/storage-provisioner: csi.nvmf.com
    Finalizers:    [kubernetes.io/pvc-protection]
    Capacity:
    Access Modes:
    VolumeMode:    Filesystem
    Used By:       nginx-block-test1-55f6f8ff94-jfwd7
    Events:
      Type    Reason                Age                   From                         Message
      ----    ------                ----                  ----                         -------
      Normal  ExternalProvisioning  3m36s (x82 over 23m)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "csi.nvmf.com" or manually created by system administrator
    

    What am I missing or doing wrong? :)

  • feature: support Create/Delete Volume with backend_endpoint

    What type of PR is this?

    /kind feature

    What this PR does / why we need it:

    1. add an http_client to communicate with the backend controller
    2. support Create/Delete Volume

    Which issue(s) this PR fixes:

    Fixes #

    Special notes for your reviewer:

    Does this PR introduce a user-facing change?:

    1. Add Create/Delete Volume Feature.
    