
Rook


What is Rook?

Rook is an open source cloud-native storage orchestrator for Kubernetes, providing the platform, framework, and support for a diverse set of storage solutions to natively integrate with cloud-native environments.

Rook turns storage software into self-managing, self-scaling, and self-healing storage services. It does this by automating deployment, bootstrapping, configuration, provisioning, scaling, upgrading, migration, disaster recovery, monitoring, and resource management. Rook uses the facilities provided by the underlying cloud-native container management, scheduling and orchestration platform to perform its duties.

Rook integrates deeply into cloud native environments leveraging extension points and providing a seamless experience for scheduling, lifecycle management, resource management, security, monitoring, and user experience.

For more details about the storage solutions currently supported by Rook, please refer to the project status section below. We plan to continue adding support for other storage systems and environments based on community demand and engagement in future releases. See our roadmap for more details.

Rook is hosted by the Cloud Native Computing Foundation (CNCF) as a graduated level project. If you are a company that wants to help shape the evolution of technologies that are container-packaged, dynamically-scheduled and microservices-oriented, consider joining the CNCF. For details about who's involved and how Rook plays a role, read the CNCF announcement.

Getting Started and Documentation

For installation, deployment, and administration of the NFS storage provider, see our Documentation.

Contributing

We welcome contributions. See Contributing to get started.

Report a Bug

For filing bugs, suggesting improvements, or requesting new features, please open an issue.

Reporting Security Vulnerabilities

If you find a vulnerability or a potential vulnerability in Rook, please let us know immediately at [email protected]. We'll send a confirmation email to acknowledge your report, and a follow-up email once we have determined whether the issue is a genuine vulnerability.

For further details, please see the complete security release process.

Contact

Please use the following to reach members of the community:

Community Meeting

A regular community meeting takes place every other Tuesday at 9:00 AM Pacific Time (convert to your local timezone).

Any changes to the meeting schedule will be added to the agenda doc and posted to Slack #announcements and the rook-dev mailing list.

Anyone who wants to discuss the direction of the project, design and implementation reviews, or general questions with the broader community is welcome and encouraged to join.

Project Status

The status of each storage provider supported by Rook can be found in the main Rook repo.

Name | Details | API Group | Status
NFS | Network File System (NFS) allows remote hosts to mount file systems over a network and interact with those file systems as though they are mounted locally. | nfs.rook.io/v1alpha1 | Alpha
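
For illustration, a minimal NFSServer custom resource in this API group looks roughly like the following sketch (the resource names and claim name are placeholders, and the export assumes a backing PVC already exists):

    apiVersion: nfs.rook.io/v1alpha1
    kind: NFSServer
    metadata:
      name: rook-nfs
      namespace: rook-nfs
    spec:
      replicas: 1
      exports:
      - name: share1
        server:
          accessMode: ReadWrite
          squash: "none"
        # PVC backing the export; it must already exist in the same namespace.
        persistentVolumeClaim:
          claimName: nfs-default-claim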

Official Releases

Official releases of the NFS operator can be found on the releases page. We strongly recommend using official releases of Rook: builds from the master branch can have functionality changed or even removed at any time, without compatibility support and without prior notice, and are not supported as official releases.

Releases of the NFS operator prior to v1.7 are found in the main Rook repo.

Licensing

Rook is under the Apache 2.0 license.

Comments
  • Support client access control for NFS volumes

    Is this a bug report or feature request?

    • Feature Request

What should the feature do: Currently, the NFS export can be accessed by any client. This feature would restrict volume access to specified clients and implement squash and access control on a per-client basis, as described in the NFS CRD design doc. An example of this can be found here.
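
    A rough sketch of what per-client access control on an export could look like, following the direction of the NFS CRD design doc (the allowedClients block and its fields are illustrative assumptions, not part of the released nfs.rook.io/v1alpha1 API):

    exports:
    - name: share1
      server:
        accessMode: ReadWrite
        squash: "none"
        # Hypothetical per-client rules, shown for illustration only.
        allowedClients:
        - name: internal
          clients:
          - 172.17.0.5
          - 172.17.0.0/16
          accessMode: ReadWrite
          squash: "root"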

  • Support NFS CRD update event

    Is this a bug report or feature request?

    • Bug Report

    Deviation from expected behavior: Currently, the NFS operator does not handle update events for the NFS CRD. Supporting updates would be very useful for day-2 operations.

    Expected behavior: The NFSServer deployment should be updated to reflect the new configuration of the NFS CRD when it changes.

    How to reproduce it (minimal and precise): Deploy an NFSServer, then change the configuration of the NFSServer CRD.
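
    For example, once update events are handled, adding a second export to an existing NFSServer (the claim names here are placeholders) should be reconciled into the NFSServer deployment:

    apiVersion: nfs.rook.io/v1alpha1
    kind: NFSServer
    metadata:
      name: rook-nfs
      namespace: rook-nfs
    spec:
      replicas: 1
      exports:
      - name: share1
        server:
          accessMode: ReadWrite
          squash: "none"
        persistentVolumeClaim:
          claimName: nfs-default-claim
      # Newly added export; the operator should pick up this change on update.
      - name: share2
        server:
          accessMode: ReadWrite
          squash: "none"
        persistentVolumeClaim:
          claimName: nfs-extra-claim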

  • mount.nfs connection refused

    Is this a bug report or feature request?

    • Bug Report

    Deviation from expected behavior:

    Expected behavior:

    How to reproduce it (minimal and precise):

    Environment:

    • OS (e.g. from /etc/os-release):
    • Kernel (e.g. uname -a):
    • Cloud provider or hardware configuration:
    • Rook version (use rook version inside of a Rook Pod):
    • Kubernetes version (use kubectl version):
    • Kubernetes cluster type (e.g. Tectonic, GKE, OpenShift):
    • Storage backend status (e.g. for Ceph use ceph health in the Rook Ceph toolbox):
  • NFS Provisioner not upgraded after NFS Operator upgrade

    Is this a bug report or feature request?

    • Bug Report

    Deviation from expected behavior: The NFS provisioner and NFSServer deployments do not upgrade along with the NFS operator.

    Expected behavior: After I upgraded the NFS operator from v1.4.6 to v1.5.4 and then v1.5.6, the rook-nfs-provisioner pods should have been upgraded as well, but they were not.

    I deployed the NFS operator (v1.4.6), which created the rook-nfs-operator and rook-nfs-provisioner. After that I created an NFSServer, which worked correctly. I upgraded the rook-ceph-operator and the NFS operator to v1.5.4, but the provisioner and the created NFSServer stayed at v1.4.6.

    How to reproduce it (minimal and precise):

    • Install the commons.yaml and operator.yaml from v1.4.6 tag
    • Switch to v1.5.4 (or v1.5.6)
    • Apply the common.yaml and operator.yaml again
    $ kubectl get deployments -n rook-nfs-system -o wide
    NAME                   READY   UP-TO-DATE   AVAILABLE   AGE    CONTAINERS             IMAGES            SELECTOR
    rook-nfs-operator      1/1     1            1           138d   rook-nfs-operator      rook/nfs:v1.5.6   app=rook-nfs-operator
    rook-nfs-provisioner   1/1     1            1           138d   rook-nfs-provisioner   rook/nfs:v1.4.6   app=rook-nfs-provisioner
    
  • NFSv4 with Kerberos support

    Is this a bug report or feature request?

    • Feature Request

    What should the feature do: The feature should make it possible to mount NFSv4 storage with Kerberos authentication into Kubernetes -- the home directory of the person logging in should be visible in the pod, mounted on some path.

    What is use case behind this feature: I would be interested in developing this feature for mounting the home folders of members of my university research groups, who use Kubernetes for their scientific computations. The home folders contain data needed for the computations (and final results are saved there too). As access to the storage is behind Kerberos, the question of how to authenticate and provide a username and password for a successful mount is open.

    I am eager to start working on this, but I haven't developed anything in Kubernetes yet and I am not sure how to proceed. Any discussion or ideas are sincerely appreciated. Also, I'd like to contribute directly to the Rook project.

    Environment: I would be interested in developing this feature in a university environment as an experimental feature. If successful, it might later be helpful to other people. We have servers with C8/Deb9 where authorization via kinit to a certain realm is needed; after that, the person's home folder (together with other folders enabled via ACL) is accessible to them.

  • Mongodb replicaset fails when using NFS

    Is this a bug report or feature request?

    • Bug Report

    Deviation from expected behavior: Using the stable mongodb-replicaset Helm chart, replica sets should be created. Instead I get permission errors.

    Expected behavior: Replicasets should be created.

    How to reproduce it (minimal and precise):

    1. Follow the NFS quick start guide
    2. helm repo add stable https://kubernetes-charts.storage.googleapis.com/
    3. helm install --name my-release stable/mongodb-replicaset
    4. helm install --generate-name stable/mongodb-replicaset --set persistentVolume.storageClass=rook-nfs

    File(s) to submit:

    • Crashing pod(s) logs, if necessary
    2020/05/19 01:05:41 Peer list updated 
    was [] 
    now [mongodb-replicaset-1589848729-0.mongodb-replicaset-1589848729.default.svc.cluster.local] 
    2020/05/19 01:05:41 execing: /init/on-start.sh with stdin: mongodb-replicaset-1589848729-0.mongodb-replicaset-1589848729.default.svc.cluster.local 
    2020/05/19 01:05:42 Failed to execute /init/on-start.sh: [2020-05-19T01:05:41,556551292+00:00] [on-start.sh] Bootstrapping MongoDB replica set member: mongodb-replicaset-1589848729-0 
    [2020-05-19T01:05:41,560126221+00:00] [on-start.sh] Reading standard input... 
    [2020-05-19T01:05:41,564158156+00:00] [on-start.sh] Skipping init mongod standalone script 
    [2020-05-19T01:05:41,567682728+00:00] [on-start.sh] Peers: mongodb-replicaset-1589848729-0.mongodb-replicaset-1589848729.default.svc.cluster.local 
    [2020-05-19T01:05:41,571040658+00:00] [on-start.sh] Starting a MongoDB replica 
    [2020-05-19T01:05:41,575776439+00:00] [on-start.sh] Waiting for MongoDB to be ready... 
    2020-05-19T01:05:41.600+0000 I CONTROL  [initandlisten] MongoDB starting : pid=29 port=27017 dbpath=/data/db 64-bit host=mongodb-replicaset-1589848729-0 
    2020-05-19T01:05:41.600+0000 I CONTROL  [initandlisten] db version v3.6.18 
    2020-05-19T01:05:41.600+0000 I CONTROL  [initandlisten] git version: 2005f25eed7ed88fa698d9b800fe536bb0410ba4 
    2020-05-19T01:05:41.600+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.2g  1 Mar 2016 
    2020-05-19T01:05:41.600+0000 I CONTROL  [initandlisten] allocator: tcmalloc 
    2020-05-19T01:05:41.600+0000 I CONTROL  [initandlisten] modules: none 
    2020-05-19T01:05:41.600+0000 I CONTROL  [initandlisten] build environment: 
    2020-05-19T01:05:41.600+0000 I CONTROL  [initandlisten]     distmod: ubuntu1604 
    2020-05-19T01:05:41.600+0000 I CONTROL  [initandlisten]     distarch: x86_64 
    2020-05-19T01:05:41.600+0000 I CONTROL  [initandlisten]     target_arch: x86_64 
    2020-05-19T01:05:41.600+0000 I CONTROL  [initandlisten] options: { config: "/data/configdb/mongod.conf", net: { bindIp: "0.0.0.0", port: 27017 }, replication: { replSet: "rs0" }, storage: { dbPath: "/data/db" } } 
    2020-05-19T01:05:41.602+0000 I STORAGE  [initandlisten] exception in initAndListen: DBPathInUse: Unable to lock the lock file: /data/db/mongod.lock (Resource temporarily unavailable). Another mongod instance is already running on the /data/db directory, terminating 
    2020-05-19T01:05:41.602+0000 F -        [initandlisten] Invariant failure globalStorageEngine src/mongo/db/service_context_d.cpp 272 
    2020-05-19T01:05:41.602+0000 F -        [initandlisten] 
     
    ***aborting after invariant() failure 
     
     
    2020-05-19T01:05:41.617+0000 F -        [initandlisten] Got signal: 6 (Aborted). 
     
     0x559a7aa44991 0x559a7aa43ba9 0x559a7aa4408d 0x7f844a72f390 0x7f844a389428 0x7f844a38b02a 0x559a79145a42 0x559a793ecd28 0x559a7a8eeb01 0x559a7a8eabe7 0x559a791bd262 0x559a7aa3fdd5 0x559a79146c1b 0x559a790df97c 0x559a791c4b19 0x559a79147b79 0x7f844a374830 0x559a791ac579 
    ----- BEGIN BACKTRACE ----- 
    {"backtrace":[{"b":"559A787A9000","o":"229B991","s":"_ZN5mongo15printStackTraceERSo"},{"b":"559A787A9000","o":"229ABA9"},{"b":"559A787A9000","o":"229B08D"},{"b":"7F844A71E000","o":"11390"},{"b":"7F844A354000","o":"35428","s":"gsignal"},{"b":"7F844A354000","o":"3702A","s":"abort"},{"b":"559A787A9000","o":"99CA42","s":"_ZN5mongo22invariantFailedWithMsgEPKcS1_S1_j"},{"b":"559A787A9000","o":"C43D28","s":"_ZN5mongo20ServiceContextMongoD9_newOpCtxEPNS_6ClientEj"},{"b":"559A787A9000","o":"2145B01","s":"_ZN5mongo14ServiceContext20makeOperationContextEPNS_6ClientE"},{"b":"559A787A9000","o":"2141BE7","s":"_ZN5mongo6Client20makeOperationContextEv"},{"b":"559A787A9000","o":"A14262"},{"b":"559A787A9000","o":"2296DD5"},{"b":"559A787A9000","o":"99DC1B","s":"_ZN5mongo8shutdownENS_8ExitCodeERKNS_16ShutdownTaskArgsE"},{"b":"559A787A9000","o":"93697C","s":"_ZZN5mongo13duration_castINS_8DurationISt5ratioILl1ELl1000EEEES2_ILl1ELl1EEEET_RKNS1_IT0_EEENKUlvE_clEv"},{"b":"559A787A9000","o":"A1BB19","s":"_ZN5mongo11mongoDbMainEiPPcS1_"},{"b":"559A787A9000","o":"99EB79","s":"main"},{"b":"7F844A354000","o":"20830","s":"__libc_start_main"},{"b":"559A787A9000","o":"A03579","s":"_start"}],"processInfo":{ "mongodbVersion" : "3.6.18", "gitVersion" : "2005f25eed7ed88fa698d9b800fe536bb0410ba4", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-1062.el7.x86_64", "version" : "#1 SMP Wed Aug 7 18:08:02 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "559A787A9000", "elfType" : 3, "buildId" : "21203110F006D4F5D0C39EFD95E2D99B73139C13" }, { "b" : "7FFDB3CE1000", "elfType" : 3, "buildId" : "FBC1F3D4AA39C1DD87351DC0C29E3D308DB88793" }, { "b" : "7F844B914000", "path" : "/lib/x86_64-linux-gnu/libresolv.so.2", "elfType" : 3, "buildId" : "50A923F8DAFECBCD969C8573116A38C18D0E24D5" }, { "b" : "7F844B4CF000", "path" : "/lib/x86_64-linux-gnu/libcrypto.so.1.0.0", "elfType" : 3, "buildId" : "15FFEB43278726B025F020862BF51302822A40EC" }, { "b" : "7F844B266000", "path" : "/lib/x86_64-linux-gnu/libssl.so 
    .1.0.0", "elfType" : 3, "buildId" : "FF69EA60EBE05F2DD689D2B26FC85A73E5FBC3A0" }, { "b" : "7F844B062000", "path" : "/lib/x86_64-linux-gnu/libdl.so.2", "elfType" : 3, "buildId" : "37BFC3D8F7E3B022DAC7943B1A5FACD40CEBF0AD" }, { "b" : "7F844AE5A000", "path" : "/lib/x86_64-linux-gnu/librt.so.1", "elfType" : 3, "buildId" : "69143E8B39040C964D3958490535322675F15DD3" }, { "b" : "7F844AB51000", "path" : "/lib/x86_64-linux-gnu/libm.so.6", "elfType" : 3, "buildId" : "BAD67A84E56E73D031AE507261DA066B35949D34" }, { "b" : "7F844A93B000", "path" : "/lib/x86_64-linux-gnu/libgcc_s.so.1", "elfType" : 3, "buildId" : "68220AE2C65D65C1B6AAA12FA6765A6EC2F5F434" }, { "b" : "7F844A71E000", "path" : "/lib/x86_64-linux-gnu/libpthread.so.0", "elfType" : 3, "buildId" : "B17C21299099640A6D863E423D99265824E7BB16" }, { "b" : "7F844A354000", "path" : "/lib/x86_64-linux-gnu/libc.so.6", "elfType" : 3, "buildId" : "1CA54A6E0D76188105B12E49FE6B8019BF08803A" }, { "b" : "7F844BB2F000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "C0ADBAD6F9A33944F2B3567C078EC472A1DAE98E" } ] }} 
     mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x559a7aa44991] 
     mongod(+0x229ABA9) [0x559a7aa43ba9] 
     mongod(+0x229B08D) [0x559a7aa4408d] 
     libpthread.so.0(+0x11390) [0x7f844a72f390] 
     libc.so.6(gsignal+0x38) [0x7f844a389428] 
     libc.so.6(abort+0x16A) [0x7f844a38b02a] 
     mongod(_ZN5mongo22invariantFailedWithMsgEPKcS1_S1_j+0x0) [0x559a79145a42] 
     mongod(_ZN5mongo20ServiceContextMongoD9_newOpCtxEPNS_6ClientEj+0x158) [0x559a793ecd28] 
     mongod(_ZN5mongo14ServiceContext20makeOperationContextEPNS_6ClientE+0x41) [0x559a7a8eeb01] 
     mongod(_ZN5mongo6Client20makeOperationContextEv+0x27) [0x559a7a8eabe7] 
     mongod(+0xA14262) [0x559a791bd262] 
     mongod(+0x2296DD5) [0x559a7aa3fdd5] 
     mongod(_ZN5mongo8shutdownENS_8ExitCodeERKNS_16ShutdownTaskArgsE+0x364) [0x559a79146c1b] 
     mongod(_ZZN5mongo13duration_castINS_8DurationISt5ratioILl1ELl1000EEEES2_ILl1ELl1EEEET_RKNS1_IT0_EEENKUlvE_clEv+0x0) [0x559a790df97c] 
     mongod(_ZN5mongo11mongoDbMainEiPPcS1_+0x879) [0x559a791c4b19] 
     mongod(main+0x9) [0x559a79147b79] 
     libc.so.6(__libc_start_main+0xF0) [0x7f844a374830] 
     mongod(_start+0x29) [0x559a791ac579] 
    -----  END BACKTRACE  ----- 
    exception: connect failed 
    [2020-05-19T01:05:42,672568225+00:00] [on-start.sh] mongod shutdown unexpectedly 
    [2020-05-19T01:05:42,677277237+00:00] [on-start.sh] Shutting down MongoDB (force: true)... 
    MongoDB shell version v3.6.18 
    connecting to: mongodb://localhost:27017/admin?gssapiServiceName=mongodb 
    2020-05-19T01:05:42.760+0000 W NETWORK  [thread1] Failed to connect to 127.0.0.1:27017, in(checking socket for error after poll), reason: Connection refused 
    2020-05-19T01:05:42.760+0000 E QUERY    [thread1] Error: couldn't connect to server localhost:27017, connection attempt failed : 
    connect@src/mongo/shell/mongo.js:263:13 
    @(connect):1:6 
    exception: connect failed 
    [2020-05-19T01:05:42,767990933+00:00] [on-start.sh] db.shutdownServer() failed, sending the terminate signal 
    /init/on-start.sh: line 77: kill: (30) - No such process 
    , err: exit status 1 
    

    To get logs, use kubectl -n <namespace> logs <pod name>. When pasting logs, always surround them with backticks or use the insert code button from the GitHub UI. Read the GitHub documentation if you need help.

    Environment:

    • OS (e.g. from /etc/os-release): CentOS Linux 7
    • Kernel (e.g. uname -a): Linux fintech-server 3.10.0-1062.el7.x86_64 #1 SMP Wed Aug 7 18:08:02 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
    • Cloud provider or hardware configuration: Rancher2.0 on premise.
    • Rook version (use rook version inside of a Rook Pod):1.3
    • Storage backend version (e.g. for ceph do ceph -v): NFS
    • Kubernetes version (use kubectl version): Major:"1", Minor:"17", GitVersion:"v1.17.5",
    • Kubernetes cluster type (e.g. Tectonic, GKE, OpenShift): Rancher2.0
  • Backport: Storage provider refactor including latest build scripts and docs to the release-1.7 branch

    Description of your changes: The NFS repo is preparing to release from its own repo instead of the main rook/rook repo. This updates everything from master for critical build changes that are required for this separate build and release process.

    Checklist:

    • [ ] Commit Message Formatting: Commit titles and messages follow guidelines in the developer guide.
    • [ ] Skip Tests for Docs: Add the flag for skipping the build if this is only a documentation change. See here for the flag.
    • [ ] Skip Unrelated Tests: Add a flag to run tests for a specific storage provider. See test options.
    • [ ] Reviewed the developer guide on Submitting a Pull Request
    • [ ] Documentation has been updated, if necessary.
    • [ ] Unit tests have been added, if necessary.
    • [ ] Integration tests have been added, if necessary.
    • [ ] Pending release notes updated with breaking and/or notable changes, if necessary.
    • [ ] Upgrade from previous release is tested and upgrade user guide is updated, if necessary.
    • [ ] Code generation (make codegen) has been run to update object specifications, if necessary.
  • Add dynamic provisioning integration test for nfs

    Feature Request: The PR https://github.com/rook/rook/pull/2758 implements dynamic provisioning for NFS, but there is currently no integration test for this feature, so one needs to be added.

    What should the feature do: It should test different scenarios for NFS.

    What would be solved through this feature: Add some more test coverage regarding NFS.

    Does this have an impact on existing features: Nope
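
    For reference, the kind of dynamic provisioning such a test would exercise is a simple PVC against the StorageClass from the quick start guide (the rook-nfs StorageClass name follows the quick start; the PVC name and size are placeholders):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: rook-nfs-test-pvc
    spec:
      storageClassName: rook-nfs
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 1Mi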

  • docs: add Rook NFS deprecation notice

    Add a deprecation notice to the top of all NFS docs.

    Signed-off-by: Blaine Gardner [email protected]

    Description of your changes:

    Which issue is resolved by this Pull Request: Resolves #

    Checklist:

    • [ ] Commit Message Formatting: Commit titles and messages follow guidelines in the developer guide.
    • [ ] Skip Tests for Docs: Add the flag for skipping the build if this is only a documentation change. See here for the flag.
    • [ ] Skip Unrelated Tests: Add a flag to run tests for a specific storage provider. See test options.
    • [ ] Reviewed the developer guide on Submitting a Pull Request
    • [ ] Documentation has been updated, if necessary.
    • [ ] Unit tests have been added, if necessary.
    • [ ] Integration tests have been added, if necessary.
    • [ ] Pending release notes updated with breaking and/or notable changes, if necessary.
    • [ ] Upgrade from previous release is tested and upgrade user guide is updated, if necessary.
    • [ ] Code generation (make codegen) has been run to update object specifications, if necessary.
  • docs: update quickstart.md

    Description of your changes: In quickstart.md, git clones the repo into the nfs directory by default, not the root.

    Which issue is resolved by this Pull Request: Resolves #44

    Checklist:

    • [ ] Commit Message Formatting: Commit titles and messages follow guidelines in the developer guide.
    • [ ] Skip Tests for Docs: Add the flag for skipping the build if this is only a documentation change. See here for the flag.
    • [ ] Skip Unrelated Tests: Add a flag to run tests for a specific storage provider. See test options.
    • [ ] Reviewed the developer guide on Submitting a Pull Request
    • [ ] Documentation has been updated, if necessary.
    • [ ] Unit tests have been added, if necessary.
    • [ ] Integration tests have been added, if necessary.
    • [ ] Pending release notes updated with breaking and/or notable changes, if necessary.
    • [ ] Upgrade from previous release is tested and upgrade user guide is updated, if necessary.
    • [ ] Code generation (make codegen) has been run to update object specifications, if necessary.
  • Support for resources requests and limits for the NFS Server StatefulSet

    Is this a bug report or feature request?

    • Feature Request

    What should the feature do: Allow the user to specify resources.requests and resources.limits for the NFS Server Statefulset.

    What is use case behind this feature: Like most Pods on a Kubernetes cluster, being able to specify resources requests and limits (both memory and CPU) helps to ensure the Pod has the necessary resources to work its magic, or to prioritize resources for this Pod over others.

    Environment: Any Kubernetes cluster with a Rook NFS Server deployed.
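
    A sketch of what this could look like on the NFSServer spec (the resources field and its placement are assumptions made for illustration; they are not part of the current nfs.rook.io/v1alpha1 API):

    apiVersion: nfs.rook.io/v1alpha1
    kind: NFSServer
    metadata:
      name: rook-nfs
      namespace: rook-nfs
    spec:
      replicas: 1
      # Hypothetical field illustrating the request, applied to the
      # NFS server container in the StatefulSet.
      resources:
        requests:
          cpu: 100m
          memory: 256Mi
        limits:
          cpu: "1"
          memory: 1Gi
      exports:
      - name: share1
        server:
          accessMode: ReadWrite
          squash: "none"
        persistentVolumeClaim:
          claimName: nfs-default-claim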

  • Intermittent access issues with NFS Volumes

    Is this a bug report or feature request?

    • Bug Report

    Deviation from expected behavior: I've deployed rook-nfs using the quick-start guide, then followed the create and initialize NFS server section to establish two NFS servers. One NFS server is backed by HDD storage and the other by SSD storage. The operator built the NFS servers successfully.

    Next I created a deployment and a PVC that used the StorageClass for the NFS server. When the pod first started, it created the PV fine and bound it correctly in the pod. Everything worked as expected for a little while (maybe a week?). Then all of a sudden the pods were unable to access the volumes anymore; opening a shell and running 'ls' on the NFS volume would just hang.

    When I restarted the pod that has the NFS volume, the pod failed to start. It never passes the "init" stage and eventually errors out because it is unable to mount the volume backed by the NFS server.

    I've tried restarting all the nodes and scheduling the pod on another node, but the issue persists.

    The only way I was able to get the pod to mount the volume again is to change the volume spec from PVC to NFS in the deployment:

          volumes:
          - name: gold-nfs-mount
            nfs:
              path: /gold-scratch/dir <--- Export 
              server: 172.30.17.118 <--- Service IP address of NFS Server
    

    The weird thing is that this has happened once before, and the problem eventually went away by itself.

    Expected behavior: Be able to continue to use persistentVolumeClaim for the volume instead of using nfs to mount volumes.

    How to reproduce it (minimal and precise): Create rook-nfs operator using the quick-start guide, then follow create and initialize nfs server section to establish nfs-servers.

    To make it easier, this is my manifest:

    Persistent Volume:

    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: gold-scratch
      labels:
        type: ssd
    spec:
      storageClassName: local-storage
      capacity:
        storage: 200Gi
      volumeMode: Filesystem
      accessModes:
        - ReadWriteMany
      hostPath:
        path: "/mnt/scratch/gold"
      nodeAffinity:
        required:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - node1
              - node2
    

    PVC + NFS Server

    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: gold-scratch
      namespace: rook-nfs
    spec:
      storageClassName: "local-storage"
      accessModes:
      - ReadWriteMany
      selector:
        matchLabels:
          type: ssd
      resources:
        requests:
          storage: 200Gi
    ---
    apiVersion: nfs.rook.io/v1alpha1
    kind: NFSServer
    metadata:
      name: gold-nfs
      namespace: rook-nfs
    spec:
      replicas: 1
      exports:
      - name: gold-scratch
        server:
          accessMode: ReadWrite
          squash: "none"
        persistentVolumeClaim:
          claimName: gold-scratch
      annotations:
        rook-nfs: gold-scratch 
        rook: nfs
    

    StorageClass:

    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      labels:
        rook-nfs: gold-scratch
        type: ssd
      name: gold-local
    parameters:
      exportName: gold-scratch
      nfsServerName: gold-nfs
      nfsServerNamespace: rook-nfs
    provisioner: nfs.rook.io/gold-nfs-provisioner
    reclaimPolicy: Delete
    volumeBindingMode: Immediate
    

    Verify:

    $ kubectl get pods -n rook-nfs --selector=app=gold-nfs
    NAME         READY   STATUS    RESTARTS         AGE
    gold-nfs-0   2/2     Running   16 (4d20h ago)   5d13h
    
    $ kubectl get sc gold-local
    NAME         PROVISIONER                        RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
    gold-local   nfs.rook.io/gold-nfs-provisioner   Delete          Immediate           false                  33d
    

    Deploy an app that uses the gold-local StorageClass for its PVC, and wait.

    File(s) to submit:

    The NFS Server does not show any errors.

    Environment:

    • OS (e.g. from /etc/os-release): Ubuntu 20.04.3 LTS
    • Kernel (e.g. uname -a): Linux 5.11.0-43-generic
    • Cloud provider or hardware configuration: N/A On-Prem
    • Rook version (use rook version inside of a Rook Pod): Rook NFS 1.7.3
    • Storage backend version (e.g. for ceph do ceph -v): Rook NFS 1.7.3
    • Kubernetes version (use kubectl version): v1.23.1
    • Kubernetes cluster type (e.g. Tectonic, GKE, OpenShift): kubeadm
  • Changing provisioned volume mode to 0777

    NFS Provisioner: Changing provisioned volume mode to 0777

    Description of your changes:

    Changing provisioned volume mode to 0777 by adding a chmod call after the dir is created

    This is to allow non-root consumer pod to gain access to the volume. It is quite common now that pods are running as non-root user (sometimes even with restricted SCC). The consumer pod should be responsible for preparing the volume (i.e. changing the permission to proper values)

    I also added a replace clause in go.mod, otherwise it won't build (see #39).

    Which issue is resolved by this Pull Request: Resolves #

    Resolves #22 and potentially a band-aid fix for #39

    I've tested this change using a dev build running on OpenShift 4.8 with Portworx backing the server. With this change, non-root pods are able to mount the provisioned volume and gain access.
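
    For context, the kind of non-root consumer this change targets looks roughly like the following sketch (image, UID, and claim name are placeholders):

    apiVersion: v1
    kind: Pod
    metadata:
      name: nonroot-consumer
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000  # arbitrary non-root UID
      containers:
      - name: app
        image: busybox
        # Writes to the provisioned volume; fails without 0777 (or matching ownership).
        command: ["sh", "-c", "touch /data/hello && sleep 3600"]
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: rook-nfs-pv-claim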

    Checklist:

    • [ ] Commit Message Formatting: Commit titles and messages follow guidelines in the developer guide.
    • [ ] Skip Tests for Docs: Add the flag for skipping the build if this is only a documentation change. See here for the flag.
    • [ ] Skip Unrelated Tests: Add a flag to run tests for a specific storage provider. See test options.
    • [ ] Reviewed the developer guide on Submitting a Pull Request
    • [ ] Documentation has been updated, if necessary.
    • [ ] Unit tests have been added, if necessary.
    • [ ] Integration tests have been added, if necessary.
    • [ ] Pending release notes updated with breaking and/or notable changes, if necessary.
    • [ ] Upgrade from previous release is tested and upgrade user guide is updated, if necessary.
    • [ ] Code generation (make codegen) has been run to update object specifications, if necessary.
  • Add tolerations to NFSServer StatefulSet

    Is this a bug report or feature request?

    • Feature Request

    What should the feature do: Add tolerations to the NFSServer StatefulSet (see the sketch below).

    What is use case behind this feature: NFSServer node placement based on tolerations.

    Environment:
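
    A sketch of what the requested API could look like (the tolerations field on the NFSServer spec is the feature being requested, not an existing field):

    apiVersion: nfs.rook.io/v1alpha1
    kind: NFSServer
    metadata:
      name: rook-nfs
      namespace: rook-nfs
    spec:
      replicas: 1
      # Hypothetical field: passed through to the NFS server StatefulSet pod template.
      tolerations:
      - key: storage
        operator: Equal
        value: nfs
        effect: NoSchedule
      exports:
      - name: share1
        server:
          accessMode: ReadWrite
          squash: "none"
        persistentVolumeClaim:
          claimName: nfs-default-claim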

  • Build Failure with go.mod

    Is this a bug report or feature request?

    • Bug Report

    Deviation from expected behavior: make -j4 produces an error from the go mod command:

    === ensuring modules are tidied
    go: github.com/rook/[email protected] requires
            github.com/libopenstorage/[email protected] requires
            github.com/hashicorp/[email protected] requires
            github.com/hashicorp/[email protected] requires
            github.com/hashicorp/vault/[email protected]: invalid version: unknown revision c478d00be0d6
    go: downloading github.com/csi-addons/volume-replication-operator v0.1.1-0.20210525040814-ab575a2879fb
    go: downloading github.com/k8snetworkplumbingwg/network-attachment-definition-client v1.1.0
    go: downloading github.com/spf13/cobra v1.1.1
    go: downloading github.com/tevino/abool v1.2.0
    go: downloading k8s.io/api v0.22.0
    go: downloading k8s.io/apiextensions-apiserver v0.21.1
    go: downloading k8s.io/apimachinery v0.22.0
    go: downloading k8s.io/client-go v0.22.0
    go: downloading sigs.k8s.io/controller-runtime v0.9.0
    go: downloading github.com/rook/rook v1.7.2
    go: downloading k8s.io/component-helpers v0.21.1
    go: downloading k8s.io/utils v0.0.0-20210707171843-4b05e18ac7d9
    go: downloading sigs.k8s.io/sig-storage-lib-external-provisioner/v6 v6.1.0
    go: downloading github.com/google/uuid v1.1.2
    go: downloading github.com/banzaicloud/k8s-objectmatcher v1.1.0
    go: downloading github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring v0.50.0
    go: downloading github.com/prometheus-operator/prometheus-operator/pkg/client v0.50.0
    go: downloading k8s.io/cloud-provider v0.21.1
    go: github.com/rook/[email protected] requires
            github.com/libopenstorage/[email protected] requires
            github.com/hashicorp/[email protected] requires
            github.com/hashicorp/[email protected] requires
            github.com/hashicorp/vault/[email protected]: invalid version: unknown revision c478d00be0d6
    make: *** [go.mod.check] Error 1
    

    I managed to bypass this by adding a replace directive:

    github.com/hashicorp/vault/sdk => github.com/hashicorp/vault/sdk master
    

    But I would assume it's not a proper fix.

    Expected behavior: The build succeeds.

    How to reproduce it (minimal and precise): Run make -j4 with the current code from the master branch.

    File(s) to submit:

    • Cluster CR (custom resource), typically called cluster.yaml, if necessary
    • Operator's logs, if necessary
    • Crashing pod(s) logs, if necessary

    To get logs, use kubectl -n <namespace> logs <pod name>. When pasting logs, always surround them with backticks or use the insert code button from the GitHub UI. Read the GitHub documentation if you need help.

    Environment:

    • OS (e.g. from /etc/os-release): MacOS, golang ver 1.17
    • Kernel (e.g. uname -a):
    • Cloud provider or hardware configuration:
    • Rook version (use rook version inside of a Rook Pod):
    • Storage backend version (e.g. for ceph do ceph -v):
    • Kubernetes version (use kubectl version):
    • Kubernetes cluster type (e.g. Tectonic, GKE, OpenShift):
  • Be able to specify a nodeSelector for NFSServer

    Is this a bug report or feature request?

    • Feature Request

    What should the feature do: If you specify a nodeSelector in NFSServer.spec.nodeSelector it should be passed to the pod which gets created.

    What is use case behind this feature: If you want to bind a hostPath to make it available to other nodes in the cluster, you want to specify on which node the Pod should be scheduled.
    My use case: I have n nodes, and one of them has a big hard drive attached. I want to make that drive available to the whole cluster, so I want the NFSServer Pod to be scheduled on that specific node. Right now I do not know how to achieve this with the Rook NFSServer.
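
    A sketch of the requested behavior, assuming a spec.nodeSelector field that is copied through to the NFS server pod (this field is the feature being requested, not an existing API):

    apiVersion: nfs.rook.io/v1alpha1
    kind: NFSServer
    metadata:
      name: rook-nfs
      namespace: rook-nfs
    spec:
      replicas: 1
      # Requested field: should be propagated to the pod created for the server,
      # pinning it to the node with the large local disk.
      nodeSelector:
        kubernetes.io/hostname: node-with-big-disk
      exports:
      - name: share1
        server:
          accessMode: ReadWrite
          squash: "none"
        persistentVolumeClaim:
          claimName: nfs-default-claim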
