Asynchronous data replication for Kubernetes volumes

VolSync

VolSync asynchronously replicates Kubernetes persistent volumes between clusters using either rsync or rclone. It also supports creating backups of persistent volumes via restic.


Getting started

The fastest way to get started is to install VolSync in a kind cluster:

  • Install kind if you don't already have it:
    $ go install sigs.k8s.io/kind@latest
  • Use our convenience script to start a cluster and install the CSI hostpath driver and the snapshot controller:
    $ ./hack/setup-kind-cluster.sh
  • Install the latest release via Helm
    $ helm repo add backube https://backube.github.io/helm-charts/
    $ helm install --create-namespace -n volsync-system volsync backube/volsync
  • See the usage instructions for information on setting up replication relationships.

More detailed information on installation and usage can be found in the official documentation.
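
For orientation, a replication relationship pairs a ReplicationDestination (created first, so that its address and key secret exist) with a ReplicationSource that points at the PVC to replicate. The following is a hedged sketch with placeholder names and namespaces; the field names follow the volsync.backube/v1alpha1 API, but treat the official documentation as authoritative:

# On the destination cluster (placeholder namespace "dest"):
kubectl apply -f - <<EOF
apiVersion: volsync.backube/v1alpha1
kind: ReplicationDestination
metadata:
  name: example-dest
  namespace: dest
spec:
  rsync:
    serviceType: ClusterIP
    copyMethod: Snapshot
    capacity: 2Gi
    accessModes: [ReadWriteOnce]
EOF

# On the source cluster (placeholder namespace "source"), after copying the destination's
# SSH key secret and noting the address from the ReplicationDestination status:
kubectl apply -f - <<EOF
apiVersion: volsync.backube/v1alpha1
kind: ReplicationSource
metadata:
  name: example-source
  namespace: source
spec:
  sourcePVC: my-data            # placeholder: the PVC to replicate
  trigger:
    schedule: "*/10 * * * *"    # replicate every 10 minutes
  rsync:
    sshKeys: example-dest-keys  # placeholder: secret copied from the destination
    address: 192.0.2.10         # placeholder: taken from the destination's status
    copyMethod: Clone
EOF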

VolSync kubectl plugin

We're also working on a command line interface to VolSync via a kubectl plugin. To try that out:

make cli
cp bin/kubectl-volsync /usr/local/bin/
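
Assuming /usr/local/bin is on your PATH, kubectl should now discover the plugin; a quick sanity check:

kubectl plugin list | grep volsync
kubectl volsync --help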

NOTE: The volsync plugin is being actively developed. Options, flags, and names are likely to change frequently. PRs and new issues are welcome!

Available commands:

kubectl volsync start-replication
kubectl volsync set-replication
kubectl volsync continue-replication
kubectl volsync remove-replication

Try the current examples in the documentation.

Helpful links

Licensing

This project is licensed under the GNU AGPL 3.0 License, with some exceptions; see the repository for details.

Comments
  • Document migration CLI sub-command

    Document migration CLI sub-command

    Describe what this PR does

    • [x] Document kubectl-volsync migration
    • [x] Fix references to external migration
    • [x] Remove old external sync script

    Is there anything that requires special attention?

    Related issues: Fixes: #154 Depends on: #141 (docs)

  • CLI - pvbackup create, schedule, sync, restore and delete

    CLI - pvbackup create, schedule, sync, restore and delete

    Describe what this PR does The backup and restore functionality is based on the restic utility (a hedged sketch of the underlying ReplicationSource follows the step lists below).

    Create:

    1. Create the secret referencing the restic config file
    2. Create the replication source
    3. Save the details to relationship file

    Schedule:

    1. Set a schedule on the replication source for scheduled backups

    Sync:

    1. Trigger a single manual backup by setting a manual trigger

    Restore:

    1. Create the namespace, PVC, and secret if they don't exist, or reuse them if they do
    2. Pause ReplicationSource
    3. Create ReplicationDestination and trigger the restore
    4. Unpause the ReplicationSource
    5. Save the details to relationship file

    Delete:

    1. Delete the replicationSource/replicationDestination relationship
    2. Delete the secret
    3. Delete the relationship file
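
    For context, a hedged sketch of the object these sub-commands manage (placeholder names; the CLI also handles the relationship file and secret wiring):

      # create/schedule roughly amount to a restic-based ReplicationSource like:
      #   spec:
      #     sourcePVC: mydata                  # placeholder PVC to back up
      #     trigger:
      #       schedule: "0 2 * * *"            # scheduled backups; a manual trigger drives a one-off sync
      #     restic:
      #       repository: restic-config        # Secret holding RESTIC_REPOSITORY, RESTIC_PASSWORD, etc.
      #       copyMethod: Snapshot
      #       retain: {daily: 7, weekly: 4}
      # inspect what the CLI created:
      kubectl -n myns get ReplicationSource/mydata-backup -o yaml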

    Related issues: #193

    Signed-off-by: Vinayakswami Hariharmath [email protected]

  • set minKubeVersion to 1.19.0 in CSV (added also to version.mk file)

    set minKubeVersion to 1.19.0 in CSV (added also to version.mk file)

    Signed-off-by: Tesshu Flower [email protected]

    Describe what this PR does

    • adds MIN_KUBE_VERSION of 1.19.0 to version.mk

    • Subsequent calls to make bundle will update the bundle CSV to use this MIN_KUBE_VERSION.

    • Sets the minKubeVersion in the bundle CSV to 1.19.0
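
    As an illustrative check (this assumes the standard operator-sdk bundle layout; the exact path is not confirmed here):

      # after 'make bundle', the CSV should carry the pinned minimum Kubernetes version
      grep minKubeVersion bundle/manifests/volsync.clusterserviceversion.yaml
      # expected: minKubeVersion: 1.19.0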

    Is there anything that requires special attention?

    Related issues:

  • Port restic e2e tests to ansible

    Port restic e2e tests to ansible

    Signed-off-by: Tesshu Flower [email protected]

    Describe what this PR does Ports the following kuttl tests over to the ansible e2e tests:

    • restic-with-manual-trigger
    • restic-with-previous
    • restic-with-restoreasof
    • restic-without-trigger

    Is there anything that requires special attention?

    • I modified the write_to_pvc role to optionally leave the pod behind to allow for the podAffinity parts to work

    • I had to alter the behavior of restic-without-trigger slightly as I wasn't able to catch the condition Synchronizing going to "false" - previously we did this in 20-ensure-multiple-syncs:

      kubectl -n $NAMESPACE wait --for=condition=Synchronizing=true --timeout=5m ReplicationSource/source
      kubectl -n $NAMESPACE wait --for=condition=Synchronizing=false --timeout=5m ReplicationSource/source
      kubectl -n $NAMESPACE wait --for=condition=Synchronizing=true --timeout=5m ReplicationSource/source
      

      I think with no trigger the status changes so fast that I wasn't able to catch it with the ansible k8s calls. Instead, I wait until another sync completes and make sure the lastSyncTime doesn't match the previous value (an equivalent shell sketch of this wait follows the list below). This might make the test take a bit longer.

    • Additionally, this moves some of the PVC writer roles to use Jobs instead of creating pods directly - this avoids issues when running the tests manually as users with different permissions (for example, running the tests as an admin user means the pods get created with a different SCC, which affects the pod options). It would still make sense to move the other pod roles (the reader ones) to Jobs, but that can be done in a separate PR.
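
    The lastSyncTime-based wait described above, expressed as an equivalent shell sketch rather than the actual ansible task:

      # wait for a new sync by watching lastSyncTime instead of trying to catch Synchronizing=false
      PREV=$(kubectl -n "$NAMESPACE" get ReplicationSource/source -o jsonpath='{.status.lastSyncTime}')
      while [ "$(kubectl -n "$NAMESPACE" get ReplicationSource/source -o jsonpath='{.status.lastSyncTime}')" = "$PREV" ]; do
        sleep 5
      done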

    Related issues:

  • Pvc storage size - clone & snapshot - use capacity if possible

    Pvc storage size - clone & snapshot - use capacity if possible

    Describe what this PR does For pvcFromSnapshot:

    • attempt to use the restoreSize from the snapshot to determine size to create the PVC
    • if restoreSize isn't available, attempt to use the status.capacity from the origin pvc
    • fall back to using the requested storage size from the origin pvc

    For clone:

    • attempt to use the status.capacity from the origin pvc
    • fall back to using the requested storage size from the origin pvc (the kubectl sketch below shows where each value lives)
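
    For reference, illustrative kubectl equivalents of where each size candidate lives (not the controller code itself):

      kubectl get volumesnapshot <snap> -o jsonpath='{.status.restoreSize}'           # preferred for pvcFromSnapshot
      kubectl get pvc <origin-pvc> -o jsonpath='{.status.capacity.storage}'           # fallback: actual capacity
      kubectl get pvc <origin-pvc> -o jsonpath='{.spec.resources.requests.storage}'   # last resort: requested size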

    Is there anything that requires special attention?

    • For snapshots, we had discussed making sure the snapshot is bound at the beginning of pvcFromSnapshot() - it turns out that EnsurePVCFromSrc() calls ensureSnapshot first and then pvcFromSnapshot (see https://github.com/backube/volsync/blob/main/controllers/volumehandler/volumehandler.go#L73-L78) - and in fact ensureSnapshot will only return a snapshot after it's bound. So it turns out the bound check was already there https://github.com/backube/volsync/blob/fa4bdcfb2d88d438f39bdceb1159c5a2f30fbaa6/controllers/volumehandler/volumehandler.go#L405-L414

      The issue I found when recreating this is that it's possible to get a snapshot with a status like this:

      {"boundVolumeSnapshotContentName":"snapcontent-b735e05f-6352-45bf-a5ab-89a87577b64f","readyToUse":false}}
      

      That is, the snapshot is bound but has no restoreSize set yet; only later does it get a restoreSize - so in this sort of situation we'll always be falling back to the PVC capacity.

      We could potentially try to wait for the snapshot readyToUse: true before trying to create a PVC?

      Unfortunately, I wasn't able to find any concrete information about whether these fields are mandatory or not; none are specifically shown as required in the status as far as I can tell.

    • For clone, one thing I can see us hitting is where a user has their storageclass set to WaitForFirstConsumer. In this case, if a user creates a ReplicationSource for that PVC, then capacity may never be filled out, and for clone we'll still fall back to using the requested size. I wasn't sure if this is a real concern out there or not - we could potentially check that the source PVC is in the Bound state before proceeding.

      Note: I don't think this is an issue with the VolumeSnapshot case, as I believe VolumeSnapshots won't go into the bound state until the PVC has progressed to Bound.

    Related issues: https://github.com/backube/volsync/issues/246 https://github.com/backube/volsync/issues/48

  • Syncthing - permission reduction

    Syncthing - permission reduction

    Signed-off-by: Tesshu Flower [email protected]

    Describe what this PR does

    • Runs syncthing as normal user by default
    • Runs syncthing as root with elevated permissions if namespace has the volsync privileged mover annotation set
    • Enables specifying the pod security context in the syncthing spec
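
    As a hedged example of the opt-in (the annotation key reflects my understanding of the VolSync convention; verify against the docs before relying on it):

      # allow movers in this namespace to run as root / with elevated permissions
      kubectl annotate namespace my-ns volsync.backube/privileged-movers="true"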

    Is there anything that requires special attention?

    Related issues: https://github.com/backube/volsync/issues/368

  • Implements the Syncthing data mover

    Implements the Syncthing data mover

    Describe what this PR does

    This PR seeks to implement the Syncthing API in the VolSync operator, making use of Syncthing's REST API.

    The following things are added:

    • [x] Syncthing controller implementation
    • [x] Syncthing controller unit-testing
    • [x] E2E testing for the Syncthing data mover
    • [x] Documentation for the Syncthing mover
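
    For context, the mover is driven through Syncthing's documented REST API; for example, querying system status (placeholder service address and API key):

      curl -sk -H "X-API-Key: $SYNCTHING_API_KEY" https://volsync-syncthing-example:8384/rest/system/status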

    Is there anything that requires special attention?

    Related issues:

  • Fix auto gen check for release branches

    Fix auto gen check for release branches

    • custom scorecard config generation was always using "latest" which doesn't actually match what we want in release branches - where we want to use the custom-scorecard-image tagged with "release-x.y". This attempts to use the correct tag on the custom scorecard image.

    Signed-off-by: Tesshu Flower [email protected]

    Describe what this PR does

    Is there anything that requires special attention?

    Related issues:

  • Use nodeSelector rather than nodeName to not bypass scheduler

    Use nodeSelector rather than nodeName to not bypass scheduler

    Signed-off-by: Tesshu Flower [email protected]

    Describe what this PR does Stops setting NodeName in the mover job spec and instead uses NodeSelector. Specifying NodeName directly bypasses the scheduler, which means that if we have a PVC (like the restic cache PVC) that is Pending (because the storageclass has volumeBindingMode: WaitForFirstConsumer), then the PVC will be stuck waiting for its first consumer while the mover pod is stuck waiting for the cache PVC to be Bound. Using NodeSelector does go through the scheduler, and everything starts as it should.

    Is there anything that requires special attention? We are using the common node label kubernetes.io/hostname for the NodeSelector.
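
    Conceptually, the change to the mover Job's pod template looks like this (a sketch, not the exact controller output), and the result can be inspected on a live Job:

      #   before:  spec.template.spec.nodeName: <node>       # placed directly, bypassing the scheduler
      #   after:   spec.template.spec.nodeSelector:
      #              kubernetes.io/hostname: <node>           # scheduled normally, so WaitForFirstConsumer PVCs can bind
      kubectl -n <ns> get job <volsync-mover-job> -o jsonpath='{.spec.template.spec.nodeSelector}'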

    Related issues: https://github.com/backube/volsync/issues/361#issuecomment-1211065869

  • Syncthing: Fix CI e2e

    Syncthing: Fix CI e2e

    Describe what this PR does Ensures that the config.xml file is readable in the mover image.

    This is to fix the following (from container logs):

    $ kubectl -n kuttl-test-brave-sheepdog logs pod/volsync-syncthing-1-755d844cdb-7trw5
    ===== STARTING CONTAINER =====
    ===== VolSync Syncthing container version: v0.5.0+ed1e00f =====
    ===== run =====
    ===== Running preflight check =====
    ===== ensuring necessary variables are defined =====
    ===== populating /config with /config.xml =====
    cp: cannot open '/config.xml' for reading: Permission denied
    

    It was causing the mover to CrashLoopBackOff (CLBO).
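
    The kind of change involved in the mover image build is along these lines (illustrative only; the actual fix may differ):

      # make the default config readable by the arbitrary, non-root mover user
      chmod a+r /config.xml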

    • ~This is also attempting to fix #309 by failing if the certs can't be copied from the secret~ Turned out to be a CI config problem
    • Attempts to fix #306 by waiting for previous pod to be deleted
    • Enables Syncthing backward compatibility w/ TLS 1.2 because the current FIPS-enabled golang builder doesn't support TLS 1.3

    Is there anything that requires special attention?

    Related issues:

  • Add controller code for rsync-tls mover

    Add controller code for rsync-tls mover

    Describe what this PR does

    • [x] API types for rsync-tls mover
    • [x] Operator code to manage mover
    • [x] Updates to Helm chart to deploy the new mover
    • [x] Updates to OLM manifests for new mover

    Is there anything that requires special attention?

    Related issues:

    • Related to #364
    • Follow-on to #510
    • Requires openshift/release#34231
    • Docs in #516
  • Local-path backup/restore feature

    Local-path backup/restore feature

    Describe the feature you'd like to have. Support for the local-path provisioner, which is often used on bare-metal all-in-one installations.

    What is the value to the end user? (why is it a priority?) Allow backup/restore of local-path provisioned volumes on small systems (Kubernetes at home).

    How will we know we have a good solution? (acceptance criteria)

    • The solution should support the backup of local-path volumes
    • The solution should support the restore of local-path volumes

    Additional context This could be a great feature and could be adopted by many users currently running K8s at home, where there is no need for a more powerful CSI driver.

  • e2e: race in namespace deletion

    e2e: race in namespace deletion

    Describe the bug The test_roles e2e causes the namespace deletion handler to run more than once, and the resulting delete commands are redundant. This can cause the test to fail if the API server has removed the namespaces prior to the second call.

    We should find a way to make this more tolerant of timing, either by ensuring the handler runs only once or by ignoring the "not found" error.
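
    For example, the kubectl equivalent of a delete that tolerates an already-removed namespace:

      kubectl delete namespace test-448836-0 --ignore-not-found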

    TASK [Make sure we got one] ****************************************************
    Wednesday 04 January 2023  15:16:50 +0000 (0:00:00.042)       0:00:09.180 ***** 
    skipping: [localhost]
    
    RUNNING HANDLER [create_namespace : Delete temporary Namespaces] ***************
    Wednesday 04 January 2023  15:16:50 +0000 (0:00:00.045)       0:00:09.225 ***** 
    changed: [localhost] => (item=test-104065-0)
    changed: [localhost] => (item=test-104065-1)
    changed: [localhost] => (item=test-417803-0)
    changed: [localhost] => (item=test-417803-1)
    changed: [localhost] => (item=test-417803-2)
    changed: [localhost] => (item=test-448836-0)
    
    RUNNING HANDLER [create_namespace : Delete temporary Namespaces] ***************
    Wednesday 04 January 2023  15:16:57 +0000 (0:00:06.611)       0:00:15.836 ***** 
    changed: [localhost] => (item=test-104065-0)
    changed: [localhost] => (item=test-104065-1)
    changed: [localhost] => (item=test-417803-0)
    ok: [localhost] => (item=test-417803-1)
    ok: [localhost] => (item=test-417803-2)
    failed: [localhost] (item=test-448836-0) => {"ansible_loop_var": "item", "changed": false, "error": 404, "item": "test-448836-0", "msg": "Namespace test-448836-0: Failed to delete object: b'{\"kind\":\"Status\",\"apiVersion\":\"v1\",\"metadata\":{},\"status\":\"Failure\",\"message\":\"namespaces \\\\\"test-448836-0\\\\\" not found\",\"reason\":\"NotFound\",\"details\":{\"name\":\"test-448836-0\",\"kind\":\"namespaces\"},\"code\":404}\\n'", "reason": "Not Found", "status": 404}
    
    NO MORE HOSTS LEFT *************************************************************
    
    PLAY RECAP *********************************************************************
    localhost                  : ok=21   changed=4    unreachable=0    failed=1    skipped=8    rescued=0    ignored=0   
    

    Steps to reproduce

    Expected behavior

    Actual results

    Additional context

  • ci flake: latestImage being changed?

    ci flake: latestImage being changed?

    Describe the bug It appears that latestImage may be getting set/updated multiple times at the end of a sync.

    Steps to reproduce Seems to happen in openshift CI w/ test_rsync_tls_normal

    Expected behavior

    Actual results

    Additional context

    PVC that is being recreated from latestImage:

            "Name:          data-dest",
            "Namespace:     test-35701-0",
            "StorageClass:  gp3-csi",
            "Status:        Pending",
            "Volume:        ",
            "Labels:        <none>",
            "Annotations:   volume.beta.kubernetes.io/storage-provisioner: ebs.csi.aws.com",
            "               volume.kubernetes.io/selected-node: ip-10-0-232-126.us-east-2.compute.internal",
            "               volume.kubernetes.io/storage-provisioner: ebs.csi.aws.com",
            "Finalizers:    [kubernetes.io/pvc-protection]",
            "Capacity:      ",
            "Access Modes:  ",
            "VolumeMode:    Filesystem",
            "DataSource:",
            "  APIGroup:  snapshot.storage.k8s.io",
            "  Kind:      VolumeSnapshot",
            "  Name:      volsync-test-dst-20230103193317",
            "Used By:     compare-pvcs-7rlxt-lrj7l",
            "Events:",
            "  Type     Reason                Age                   From                                                                                               Message",
            "  ----     ------                ----                  ----                                                                                               -------",
            "  Normal   WaitForFirstConsumer  24m                   persistentvolume-controller                                                                        waiting for first consumer to be created before binding",
            "  Warning  ProvisioningFailed    24m (x5 over 24m)     ebs.csi.aws.com_aws-ebs-csi-driver-controller-99f7b5f7-wrjs6_561cbe81-5cc6-4d69-b366-8bc7dae47703  failed to provision volume with StorageClass \"gp3-csi\": error getting handle for DataSource Type VolumeSnapshot by Name volsync-test-dst-20230103193317: snapshot volsync-test-dst-20230103193317 is not Ready",
            "  Warning  ProvisioningFailed    5m34s (x9 over 24m)   ebs.csi.aws.com_aws-ebs-csi-driver-controller-99f7b5f7-wrjs6_561cbe81-5cc6-4d69-b366-8bc7dae47703  failed to provision volume with StorageClass \"gp3-csi\": error getting handle for DataSource Type VolumeSnapshot by Name volsync-test-dst-20230103193317: snapshot volsync-test-dst-20230103193317 is currently being deleted",
            "  Normal   ExternalProvisioning  4m49s (x84 over 24m)  persistentvolume-controller                                                                        waiting for a volume to be created, either by external provisioner \"ebs.csi.aws.com\" or manually created by system administrator",
            "  Normal   Provisioning          34s (x15 over 24m)    ebs.csi.aws.com_aws-ebs-csi-driver-controller-99f7b5f7-wrjs6_561cbe81-5cc6-4d69-b366-8bc7dae47703  External provisioner is provisioning volume for claim \"test-35701-0/data-dest\"",
    

    ReplicationDestination:

            "Name:         test",
            "Namespace:    test-35701-0",
            "Labels:       <none>",
            "Annotations:  <none>",
            "API Version:  volsync.backube/v1alpha1",
            "Kind:         ReplicationDestination",
            "Metadata:",
            "  Creation Timestamp:  2023-01-03T19:31:41Z",
            "  Generation:          1",
            "  Managed Fields:",
            "    API Version:  volsync.backube/v1alpha1",
            "    Fields Type:  FieldsV1",
            "    fieldsV1:",
            "      f:spec:",
            "        .:",
            "        f:rsyncTLS:",
            "          .:",
            "          f:accessModes:",
            "          f:capacity:",
            "          f:copyMethod:",
            "    Manager:      OpenAPI-Generator",
            "    Operation:    Update",
            "    Time:         2023-01-03T19:31:41Z",
            "    API Version:  volsync.backube/v1alpha1",
            "    Fields Type:  FieldsV1",
            "    fieldsV1:",
            "      f:status:",
            "        .:",
            "        f:conditions:",
            "        f:lastSyncDuration:",
            "        f:lastSyncStartTime:",
            "        f:lastSyncTime:",
            "        f:latestImage:",
            "          .:",
            "          f:apiGroup:",
            "          f:kind:",
            "          f:name:",
            "        f:rsyncTLS:",
            "          .:",
            "          f:address:",
            "          f:keySecret:",
            "    Manager:         manager",
            "    Operation:       Update",
            "    Subresource:     status",
            "    Time:            2023-01-03T19:36:18Z",
            "  Resource Version:  47413",
            "  UID:               81ddb0e5-815f-494b-a9a1-e5cbeb799fdf",
            "Spec:",
            "  Rsync TLS:",
            "    Access Modes:",
            "      ReadWriteOnce",
            "    Capacity:     1Gi",
            "    Copy Method:  Snapshot",
            "Status:",
            "  Conditions:",
            "    Last Transition Time:  2023-01-03T19:36:18Z",
            "    Message:               Synchronization in-progress",
            "    Reason:                SyncInProgress",
            "    Status:                True",
            "    Type:                  Synchronizing",
            "  Last Sync Duration:      2m29.359876117s",
            "  Last Sync Start Time:    2023-01-03T19:36:18Z",
            "  Last Sync Time:          2023-01-03T19:36:18Z",
            "  Latest Image:",
            "    API Group:  snapshot.storage.k8s.io",
            "    Kind:       VolumeSnapshot",
            "    Name:       volsync-test-dst-20230103193618",
            "  Rsync TLS:",
            "    Address:     172.30.145.15",
            "    Key Secret:  volsync-rsync-tls-test",
            "Events:",
            "  Type    Reason                        Age                From                Message",
            "  ----    ------                        ----               ----                -------",
            "  Normal  PersistentVolumeClaimCreated  26m                volsync-controller  created PersistentVolumeClaim/volsync-test-dst to receive incoming data",
            "  Normal  ServiceAddressAssigned        26m (x2 over 26m)  volsync-controller  listening on address 172.30.145.15 for incoming connections",
            "  Normal  VolumeSnapshotCreated         25m                volsync-controller  created VolumeSnapshot/volsync-test-dst-20230103193317 from PersistentVolumeClaim/volsync-test-dst",
            "  Normal  VolumeSnapshotCreated         24m                volsync-controller  created VolumeSnapshot/volsync-test-dst-20230103193341 from PersistentVolumeClaim/volsync-test-dst",
            "  Normal  TransferStarted               21m (x4 over 26m)  volsync-controller  starting Job/volsync-rsync-tls-dst-test to receive data",
            "  Normal  VolumeSnapshotCreated         21m                volsync-controller  created VolumeSnapshot/volsync-test-dst-20230103193618 from PersistentVolumeClaim/volsync-test-dst"
        ]
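
    A quick, hedged way to check which snapshot latestImage currently advertises (names taken from the output above):

      kubectl -n test-35701-0 get ReplicationDestination/test -o jsonpath='{.status.latestImage.name}{"\n"}'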
    
    • LatestImage is volsync-test-dst-20230103193618, but the snapshot we're trying to restore is volsync-test-dst-20230103193317. I'm not sure what explains the 3 minute gap.
    • The source schedule is 0 0 1 1 *, so it should only be syncing once (per year).
    • volsync-test-dst-20230103193618 is ready to use
    • It looks like the transfer may have succeeded on the destination, but failed on the source?
      • Source events:
            "Events:",
            "  Type     Reason                        Age                From                Message",
            "  ----     ------                        ----               ----                -------",
            "  Normal   VolumeSnapshotCreated         26m                volsync-controller  created VolumeSnapshot/volsync-source-src from PersistentVolumeClaim/data-source",
            "  Normal   PersistentVolumeClaimCreated  25m                volsync-controller  created PersistentVolumeClaim/volsync-source-src from VolumeSnapshot/volsync-source-src",
            "  Normal   TransferStarted               22m (x3 over 25m)  volsync-controller  starting Job/volsync-rsync-tls-src-source to transmit data",
            "  Warning  TransferFailed                22m (x2 over 24m)  volsync-controller  mover Job backoff limit reached",
        
      • Destination events:
            "Events:",
            "  Type    Reason                        Age                From                Message",
            "  ----    ------                        ----               ----                -------",
            "  Normal  PersistentVolumeClaimCreated  26m                volsync-controller  created PersistentVolumeClaim/volsync-test-dst to receive incoming data",
            "  Normal  ServiceAddressAssigned        26m (x2 over 26m)  volsync-controller  listening on address 172.30.145.15 for incoming connections",
            "  Normal  VolumeSnapshotCreated         25m                volsync-controller  created VolumeSnapshot/volsync-test-dst-20230103193317 from PersistentVolumeClaim/volsync-test-dst",
            "  Normal  VolumeSnapshotCreated         24m                volsync-controller  created VolumeSnapshot/volsync-test-dst-20230103193341 from PersistentVolumeClaim/volsync-test-dst",
            "  Normal  TransferStarted               21m (x4 over 26m)  volsync-controller  starting Job/volsync-rsync-tls-dst-test to receive data",
            "  Normal  VolumeSnapshotCreated         21m                volsync-controller  created VolumeSnapshot/volsync-test-dst-20230103193618 from PersistentVolumeClaim/volsync-test-dst"
    
  • build(deps): bump github.com/syncthing/syncthing from 1.22.2 to 1.23.0

    build(deps): bump github.com/syncthing/syncthing from 1.22.2 to 1.23.0

    Bumps github.com/syncthing/syncthing from 1.22.2 to 1.23.0.

    Release notes

    Sourced from github.com/syncthing/syncthing's releases.

    v1.23.0

    Bugfixes:

    • #8572: Incorrect rescan interval on auto accepted encrypted folder
    • #8646: Perhaps the list of devices contains empty elements
    • #8686: Properly indicate whether a connection is "LAN" or not in the GUI

    v1.22.3-rc.2

    Bugfixes:

    • #8572: Incorrect rescan interval on auto accepted encrypted folder
    • #8646: Perhaps the list of devices contains empty elements
    • #8686: Properly indicate whether a connection is "LAN" or not in the GUI

    v1.22.3-rc.1

    Bugfixes:

    • #8646: Perhaps the list of devices contains empty elements
    • #8686: Properly indicate whether a connection is "LAN" or not in the GUI
    Commits
    • ded881c gui, man, authors: Update docs, translations, and contributors
    • fb4209e gui: Fix undefined lastSeenDays error in disconnected-inactive status check (...
    • 473ca68 gui, man, authors: Update docs, translations, and contributors
    • c4e69cd gui, api: Indicate running under container (#8728)
    • 634a3d0 lib/fs: Use io/fs errors as recommended in std lib (#8726)
    • 09f4d86 build: Handle co-authors (ref #3744) (#8708)
    • ad0044f lib/fs: Watching is unsupported on android/amd64 (fixes #8709) (#8710)
    • d157d12 lib/model: Only log at info level if setting change time fails (#8725)
    • f9d6847 lib/model: Don't lower rescan interval from default on auto accepted enc fold...
    • f0126fe gui, man, authors: Update docs, translations, and contributors
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
  • build(deps): bump github.com/kubernetes-csi/external-snapshotter/client/v6 from 6.1.0 to 6.2.0

    build(deps): bump github.com/kubernetes-csi/external-snapshotter/client/v6 from 6.1.0 to 6.2.0

    Bumps github.com/kubernetes-csi/external-snapshotter/client/v6 from 6.1.0 to 6.2.0.

    Release notes

    Sourced from github.com/kubernetes-csi/external-snapshotter/client/v6's releases.

    client/v6.2.0

    The release tag client/v6.2.0 is for VolumeSnapshot APIs and client library which are in a separate go package.

    Full Changelog

    https://github.com/kubernetes-csi/external-snapshotter/blob/v6.2.0/CHANGELOG/CHANGELOG-6.2.md

    v6.2.0

    Overall Status

    Volume snapshotting has been a GA feature since Kubernetes v1.20.

    Supported CSI Spec Versions

    1.0-1.7

    Minimum Kubernetes version

    1.20

    Recommended Kubernetes version

    1.24

    Container

    docker pull registry.k8s.io/sig-storage/snapshot-controller:v6.2.0
    docker pull registry.k8s.io/sig-storage/csi-snapshotter:v6.2.0
    docker pull registry.k8s.io/sig-storage/snapshot-validation-webhook:v6.2.0
    

    Full Changelog

    https://github.com/kubernetes-csi/external-snapshotter/blob/v6.2.0/CHANGELOG/CHANGELOG-6.2.md

    Commits
    • 5456412 Merge pull request #800 from xing-yang/changelog_6.2.0
    • fcb21dc Add changelog for v6.2.0
    • 0485546 Merge pull request #802 from sunnylovestiramisu/module-update-master
    • b82eefd Upgrade csi-lib-utils to v0.12.0
    • 77ccc89 Merge pull request #801 from xing-yang/add_replace_go_mod
    • 5d3d28c Add replace clause back to go.mod
    • 59290a1 Merge pull request #798 from xing-yang/update_client_dep
    • f70ede2 Update client dep to golang.org/x/net v0.4.0
    • 3ed7d85 Merge pull request #797 from sunnylovestiramisu/module-update-master
    • e58d9d2 Replace golang.org/x/net to v0.4.0
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)