πŸ›… Backup your Kubernetes Stateful Applications


Stash by AppsCode is a cloud-native data backup and recovery solution for Kubernetes workloads. If you are running production workloads in Kubernetes, you might want to take backups of your disks, databases, and so on. Traditional tools are too complex to set up and maintain in a dynamic compute environment like Kubernetes. Stash is a Kubernetes operator that uses restic or the Kubernetes CSI Driver VolumeSnapshotter functionality to address these issues. Using Stash, you can back up Kubernetes volumes mounted in workloads, stand-alone volumes, and databases. Users can even extend Stash via addons for any custom workload.

Features

| Features | Community Edition (open source, free for everyone) | Enterprise Edition (open core, for production enterprise workloads) | Scope |
| --- | --- | --- | --- |
| Backup & Restore Workload Data | ✓ | ✓ | Deployment, DaemonSet, StatefulSet, ReplicaSet, ReplicationController, OpenShift DeploymentConfig |
| Backup & Restore Stand-alone Volume (PVC) | ✓ | ✓ | PersistentVolumeClaim, PersistentVolume |
| Schedule Backup, Instant Backup | ✓ | ✓ | Schedule through a cron expression or trigger an instant backup using the Stash kubectl plugin |
| Pause Backup | ✓ | ✓ | No new backups are taken while paused. |
| Backup & Restore subset of files | ✓ | ✓ | Only backup/restore the files that match the provided patterns |
| Cleanup old snapshots automatically | ✓ | ✓ | Clean up old snapshots according to different retention policies |
| Encryption, Deduplication (send only diff) | ✓ | ✓ | Encrypts backed-up data with AES-256; Stash only sends the changes since the last backup. |
| CSI Driver Integration | ✓ | ✓ | VolumeSnapshot for Kubernetes workloads. Supported for Kubernetes v1.17.0+. |
| Prometheus Metrics | ✓ | ✓ | Rich backup metrics, restore metrics, and Stash operator metrics. |
| Security | ✓ | ✓ | Built-in support for RBAC, PSP, and NetworkPolicy |
| CLI | ✓ | ✓ | kubectl plugin (for Kubernetes 1.12+) |
| Extensibility and Customizability | ✓ | ✓ | Write addons for bespoke applications and customize currently supported workloads |
| Hooks | ✓ | ✓ | Execute httpGet, httpPost, tcpSocket, and exec hooks before and after the backup or restore process. |
| Cloud Storage as Backend | ✓ | ✓ | Store backup data in AWS S3, Minio, Rook, GCS, Azure, OpenStack Swift, Backblaze B2, and REST Server |
| On-prem Storage as Backend | ✗ | ✓ | Store backup data in any locally mounted Kubernetes volume, such as NFS |
| Backup & Restore databases | ✗ | ✓ | PostgreSQL, MySQL, MongoDB, Elasticsearch, Redis, MariaDB, Percona XtraDB |
| Auto Backup | ✗ | ✓ | Share backup configuration across workloads using templates. Enable backup for a target application via annotations. |
| Batch Backup & Batch Restore | ✗ | ✓ | Backup and restore co-related applications (e.g., a WordPress server and its database) together |
| Point-In-Time Recovery (PITR) | ✗ | Planned | Restore a set of files from a point in time in the past. |

Installation

To install Stash, please follow the guide here.

Using Stash

Want to learn how to use Stash? Please start here.
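
For a quick taste, here is a minimal sketch of a BackupConfiguration that backs up a directory of a Deployment on a schedule. It is assembled from the samples later on this page plus the pause switch from the feature table; names and paths are illustrative, and exact field names can vary between Stash versions.

    apiVersion: stash.appscode.com/v1beta1
    kind: BackupConfiguration
    metadata:
      name: demo-backup                # hypothetical name
      namespace: demo
    spec:
      schedule: '@every 1h'            # cron expressions also work
      paused: false                    # flip to true to pause scheduled backups
      repository:
        name: stash-backup-repo        # Repository crd holding the backend information
      target:
        ref:
          apiVersion: apps/v1
          kind: Deployment
          name: stash-demo
        directories:
        - /source/data                 # only these paths are backed up
      retentionPolicy:
        name: keep-last-5
        keepLast: 5
        prune: true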

Contribution guidelines

Want to help improve Stash? Please start here.

Acknowledgement

• Many thanks to Alexander Neumann for the restic project.

Support

To speak with us, please leave a message on our website.

To join public discussions with the Stash community, join us in the AppsCode Slack team channel #stash. To sign up, use our Slack inviter.

To receive product announcements, follow us on Twitter.

If you have found a bug with Stash or want to request new features, please file an issue.


Comments
  • Backup Jobs spawn multiple backup pods / leave locks

    After updating Stash from 0.9.x to 0.11.1, every backup Job spawned for a BackupConfiguration spawns two backup pods and leaves the lock behind.

    Some jobs work, but at least every second one doesn't, as the lock still exists. In the repository used by the following configuration we currently have 25 locks!

    BackupConfiguration:

    apiVersion: stash.appscode.com/v1beta1
    kind: BackupConfiguration
    metadata:
      annotations:
        helm.fluxcd.io/antecedent: flux:helmrelease/***-4ap-produktiv
      creationTimestamp: "2020-09-29T16:12:07Z"
      finalizers:
      - stash.appscode.com
      generation: 1
      labels:
        app.kubernetes.io/instance: ***-4ap-produktiv
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: 4allportal
        helm.sh/chart: 4allportal-7.8.0
      name: ***-4ap-produktiv-4allportal-assets
      namespace: ***-4ap
    spec:
      driver: Restic
      repository:
        name: ***-4ap-produktiv-4allportal-assets
      retentionPolicy:
        keepLast: 14
        name: retention
        prune: true
      runtimeSettings:
        container:
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
              - ALL
            privileged: false
            readOnlyRootFilesystem: true
            runAsGroup: 1000
            runAsNonRoot: true
            runAsUser: 1000
        pod:
          securityContext:
            fsGroup: 1000
            runAsGroup: 1000
            runAsNonRoot: true
            runAsUser: 1000
      schedule: 0 0 * * *
      target:
        ref:
          apiVersion: v1
          kind: PersistentVolumeClaim
          name: ***-4ap-produktiv-4allportal-assets
      task:
        name: pvc-backup
    status:
      conditions:
      - lastTransitionTime: "2020-09-29T16:12:07Z"
        message: Repository ***-4ap/***-4ap-produktiv-4allportal-assets exist.
        reason: RepositoryAvailable
        status: "True"
        type: RepositoryFound
      - lastTransitionTime: "2020-09-29T16:12:07Z"
        message: Backend Secret ***-4ap/***-4ap-produktiv-4allportal-backup exist.
        reason: BackendSecretAvailable
        status: "True"
        type: BackendSecretFound
      - lastTransitionTime: "2020-09-29T16:12:07Z"
        message: Backup target v1 persistentvolumeclaim/***-4ap-produktiv-4allportal-assets
          found.
        reason: TargetAvailable
        status: "True"
        type: BackupTargetFound
      - lastTransitionTime: "2020-09-30T13:28:28Z"
        message: Successfully created backup triggering CronJob.
        reason: CronJobCreationSucceeded
        status: "True"
        type: CronJobCreated
      observedGeneration: 1
    

    Job:

    apiVersion: batch/v1
    kind: Job
    metadata:
      creationTimestamp: "2020-10-05T00:00:10Z"
      labels:
        app.kubernetes.io/component: stash-backup
        app.kubernetes.io/instance: ***-4ap-produktiv
        app.kubernetes.io/managed-by: stash.appscode.com
        app.kubernetes.io/name: 4allportal
        helm.sh/chart: 4allportal-7.8.0
      name: stash-backup-***-4ap-produktiv-4allportal-assets-1601856010-0
      namespace: ***-4ap
      ownerReferences:
      - apiVersion: stash.appscode.com/v1beta1
        blockOwnerDeletion: true
        controller: true
        kind: BackupSession
        name: ***-4ap-produktiv-4allportal-assets-1601856010
        uid: c04d1e45-85dc-4eb8-822a-53f7e35b1624
      resourceVersion: "195782934"
      selfLink: /apis/batch/v1/namespaces/***-4ap/jobs/stash-backup-***-4ap-produktiv-4allportal-assets-1601856010-0
      uid: a342ab36-e550-4bd3-acb2-d8eb45fa7bf6
    spec:
      backoffLimit: 1
      completions: 1
      parallelism: 1
      selector:
        matchLabels:
          controller-uid: a342ab36-e550-4bd3-acb2-d8eb45fa7bf6
      template:
        metadata:
          creationTimestamp: null
          labels:
            app.kubernetes.io/component: stash-backup
            app.kubernetes.io/instance: ***-4ap-produktiv
            app.kubernetes.io/managed-by: stash.appscode.com
            app.kubernetes.io/name: 4allportal
            controller-uid: a342ab36-e550-4bd3-acb2-d8eb45fa7bf6
            helm.sh/chart: 4allportal-7.8.0
            job-name: stash-backup-***-4ap-produktiv-4allportal-assets-1601856010-0
        spec:
          containers:
          - args:
            - update-status
            - --provider=s3
            - --bucket=pvc-backup
            - --endpoint=http://10.90.45.131:8080
            - --path=***-4ap/***-4ap-produktiv/persistentvolumeclaim/assets
            - --secret-dir=/etc/repository/secret
            - --scratch-dir=/tmp
            - --enable-cache=true
            - --max-connections=0
            - --namespace=***-4ap
            - --backupsession=***-4ap-produktiv-4allportal-assets-1601856010
            - --repository=***-4ap-produktiv-4allportal-assets
            - --invoker-kind=BackupConfiguration
            - --invoker-name=***-4ap-produktiv-4allportal-assets
            - --target-kind=PersistentVolumeClaim
            - --target-name=***-4ap-produktiv-4allportal-assets
            - --output-dir=/tmp/output
            - --metrics-enabled=true
            - --metrics-pushgateway-url=http://stash.kube-system.svc:56789
            - --prom-job-name=***-4ap-produktiv-4allportal-assets
            image: appscode/stash:v0.11.2
            imagePullPolicy: Always
            name: update-status-1
            resources: {}
            securityContext:
              allowPrivilegeEscalation: false
              capabilities:
                drop:
                - ALL
              privileged: false
              readOnlyRootFilesystem: true
              runAsGroup: 1000
              runAsNonRoot: true
              runAsUser: 1000
            terminationMessagePath: /dev/termination-log
            terminationMessagePolicy: File
            volumeMounts:
            - mountPath: /etc/repository/secret
              name: secret-volume
            - mountPath: /tmp
              name: tmp-dir
          dnsPolicy: ClusterFirst
          initContainers:
          - args:
            - backup-pvc
            - --provider=s3
            - --bucket=pvc-backup
            - --endpoint=http://10.90.45.131:8080
            - --path=***-4ap/***-4ap-produktiv/persistentvolumeclaim/assets
            - --secret-dir=/etc/repository/secret
            - --scratch-dir=/tmp
            - --enable-cache=true
            - --max-connections=0
            - --hostname=host-0
            - --backup-paths=/stash-data
            - --exclude=
            - --invoker-kind=BackupConfiguration
            - --invoker-name=***-4ap-produktiv-4allportal-assets
            - --target-kind=PersistentVolumeClaim
            - --target-name=***-4ap-produktiv-4allportal-assets
            - --backupsession=***-4ap-produktiv-4allportal-assets-1601856010
            - --retention-keep-last=14
            - --retention-keep-hourly=0
            - --retention-keep-daily=0
            - --retention-keep-weekly=0
            - --retention-keep-monthly=0
            - --retention-keep-yearly=0
            - --retention-keep-tags=
            - --retention-prune=true
            - --retention-dry-run=false
            - --output-dir=/tmp/output
            image: appscode/stash:v0.11.2
            imagePullPolicy: Always
            name: pvc-backup-0
            resources: {}
            securityContext:
              allowPrivilegeEscalation: false
              capabilities:
                drop:
                - ALL
              privileged: false
              readOnlyRootFilesystem: true
              runAsGroup: 1000
              runAsNonRoot: true
              runAsUser: 1000
            terminationMessagePath: /dev/termination-log
            terminationMessagePolicy: File
            volumeMounts:
            - mountPath: /stash-data
              name: stash-volume
            - mountPath: /etc/repository/secret
              name: secret-volume
            - mountPath: /tmp
              name: tmp-dir
          restartPolicy: Never
          schedulerName: default-scheduler
          securityContext:
            fsGroup: 1000
            runAsGroup: 1000
            runAsNonRoot: true
            runAsUser: 1000
          serviceAccount: stash-backup-***-4ap-produktiv-4allportal-assets-0
          serviceAccountName: stash-backup-***-4ap-produktiv-4allportal-assets-0
          terminationGracePeriodSeconds: 30
          volumes:
          - name: stash-volume
            persistentVolumeClaim:
              claimName: ***-4ap-produktiv-4allportal-assets
          - name: secret-volume
            secret:
              defaultMode: 420
              secretName: ***-4ap-produktiv-4allportal-backup
          - emptyDir: {}
            name: tmp-dir
    status:
      conditions:
      - lastProbeTime: "2020-10-05T02:17:14Z"
        lastTransitionTime: "2020-10-05T02:17:14Z"
        message: Job has reached the specified backoff limit
        reason: BackoffLimitExceeded
        status: "True"
        type: Failed
      failed: 2
      startTime: "2020-10-05T00:00:10Z"
    

    Pods:

    apiVersion: v1
    items:
    - apiVersion: v1
      kind: Pod
      metadata:
        annotations:
          cni.projectcalico.org/podIP: 192.168.64.245/32
        creationTimestamp: "2020-10-05T00:00:10Z"
        generateName: stash-backup-***-4ap-produktiv-4allportal-assets-1601856010-0-
        labels:
          app.kubernetes.io/component: stash-backup
          app.kubernetes.io/instance: ***-4ap-produktiv
          app.kubernetes.io/managed-by: stash.appscode.com
          app.kubernetes.io/name: 4allportal
          controller-uid: a342ab36-e550-4bd3-acb2-d8eb45fa7bf6
          helm.sh/chart: 4allportal-7.8.0
          job-name: stash-backup-***-4ap-produktiv-4allportal-assets-1601856010-0
        name: stash-backup-***-4ap-produktiv-4allportal-assets-16018560fmxng
        namespace: ***-4ap
        ownerReferences:
        - apiVersion: batch/v1
          blockOwnerDeletion: true
          controller: true
          kind: Job
          name: stash-backup-***-4ap-produktiv-4allportal-assets-1601856010-0
          uid: a342ab36-e550-4bd3-acb2-d8eb45fa7bf6
        resourceVersion: "195716189"
        selfLink: /api/v1/namespaces/***-4ap/pods/stash-backup-***-4ap-produktiv-4allportal-assets-16018560fmxng
        uid: 9b0a21fb-ab46-4e4a-82cf-e734cc26232c
      spec:
        containers:
        - args:
          - update-status
          - --provider=s3
          - --bucket=pvc-backup
          - --endpoint=http://10.90.45.131:8080
          - --path=***-4ap/***-4ap-produktiv/persistentvolumeclaim/assets
          - --secret-dir=/etc/repository/secret
          - --scratch-dir=/tmp
          - --enable-cache=true
          - --max-connections=0
          - --namespace=***-4ap
          - --backupsession=***-4ap-produktiv-4allportal-assets-1601856010
          - --repository=***-4ap-produktiv-4allportal-assets
          - --invoker-kind=BackupConfiguration
          - --invoker-name=***-4ap-produktiv-4allportal-assets
          - --target-kind=PersistentVolumeClaim
          - --target-name=***-4ap-produktiv-4allportal-assets
          - --output-dir=/tmp/output
          - --metrics-enabled=true
          - --metrics-pushgateway-url=http://stash.kube-system.svc:56789
          - --prom-job-name=***-4ap-produktiv-4allportal-assets
          image: appscode/stash:v0.11.2
          imagePullPolicy: Always
          name: update-status-1
          resources: {}
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
              - ALL
            privileged: false
            readOnlyRootFilesystem: true
            runAsGroup: 1000
            runAsNonRoot: true
            runAsUser: 1000
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
          - mountPath: /etc/repository/secret
            name: secret-volume
          - mountPath: /tmp
            name: tmp-dir
          - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
            name: stash-backup-***-4ap-produktiv-4allportal-assets-0-token-nn9vt
            readOnly: true
        dnsPolicy: ClusterFirst
        enableServiceLinks: true
        initContainers:
        - args:
          - backup-pvc
          - --provider=s3
          - --bucket=pvc-backup
          - --endpoint=http://10.90.45.131:8080
          - --path=***-4ap/***-4ap-produktiv/persistentvolumeclaim/assets
          - --secret-dir=/etc/repository/secret
          - --scratch-dir=/tmp
          - --enable-cache=true
          - --max-connections=0
          - --hostname=host-0
          - --backup-paths=/stash-data
          - --exclude=
          - --invoker-kind=BackupConfiguration
          - --invoker-name=***-4ap-produktiv-4allportal-assets
          - --target-kind=PersistentVolumeClaim
          - --target-name=***-4ap-produktiv-4allportal-assets
          - --backupsession=***-4ap-produktiv-4allportal-assets-1601856010
          - --retention-keep-last=14
          - --retention-keep-hourly=0
          - --retention-keep-daily=0
          - --retention-keep-weekly=0
          - --retention-keep-monthly=0
          - --retention-keep-yearly=0
          - --retention-keep-tags=
          - --retention-prune=true
          - --retention-dry-run=false
          - --output-dir=/tmp/output
          image: appscode/stash:v0.11.2
          imagePullPolicy: Always
          name: pvc-backup-0
          resources: {}
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
              - ALL
            privileged: false
            readOnlyRootFilesystem: true
            runAsGroup: 1000
            runAsNonRoot: true
            runAsUser: 1000
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
          - mountPath: /stash-data
            name: stash-volume
          - mountPath: /etc/repository/secret
            name: secret-volume
          - mountPath: /tmp
            name: tmp-dir
          - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
            name: stash-backup-***-4ap-produktiv-4allportal-assets-0-token-nn9vt
            readOnly: true
        nodeName: k8s-srv-2
        priority: 0
        restartPolicy: Never
        schedulerName: default-scheduler
        securityContext:
          fsGroup: 1000
          runAsGroup: 1000
          runAsNonRoot: true
          runAsUser: 1000
        serviceAccount: stash-backup-***-4ap-produktiv-4allportal-assets-0
        serviceAccountName: stash-backup-***-4ap-produktiv-4allportal-assets-0
        terminationGracePeriodSeconds: 30
        tolerations:
        - effect: NoExecute
          key: node.kubernetes.io/not-ready
          operator: Exists
          tolerationSeconds: 300
        - effect: NoExecute
          key: node.kubernetes.io/unreachable
          operator: Exists
          tolerationSeconds: 300
        volumes:
        - name: stash-volume
          persistentVolumeClaim:
            claimName: ***-4ap-produktiv-4allportal-assets
        - name: secret-volume
          secret:
            defaultMode: 420
            secretName: ***-4ap-produktiv-4allportal-backup
        - emptyDir: {}
          name: tmp-dir
        - name: stash-backup-***-4ap-produktiv-4allportal-assets-0-token-nn9vt
          secret:
            defaultMode: 420
            secretName: stash-backup-***-4ap-produktiv-4allportal-assets-0-token-nn9vt
      status:
        conditions:
        - lastProbeTime: null
          lastTransitionTime: "2020-10-05T00:45:52Z"
          status: "True"
          type: Initialized
        - lastProbeTime: null
          lastTransitionTime: "2020-10-05T00:00:10Z"
          message: 'containers with unready status: [update-status-1]'
          reason: ContainersNotReady
          status: "False"
          type: Ready
        - lastProbeTime: null
          lastTransitionTime: "2020-10-05T00:00:10Z"
          message: 'containers with unready status: [update-status-1]'
          reason: ContainersNotReady
          status: "False"
          type: ContainersReady
        - lastProbeTime: null
          lastTransitionTime: "2020-10-05T00:00:10Z"
          status: "True"
          type: PodScheduled
        containerStatuses:
        - containerID: docker://a67e2c38d7bbd4997c6fb590845ac1ecd6b4a2a27212115a40f4af93c2ced599
          image: appscode/stash:v0.11.2
          imageID: docker-pullable://appscode/stash@sha256:2c7d47e394bdc49c96abec2becfb1e196429593199ddcb24e894731730ad1fae
          lastState: {}
          name: update-status-1
          ready: false
          restartCount: 0
          started: false
          state:
            terminated:
              containerID: docker://a67e2c38d7bbd4997c6fb590845ac1ecd6b4a2a27212115a40f4af93c2ced599
              exitCode: 255
              finishedAt: "2020-10-05T00:45:55Z"
              reason: Error
              startedAt: "2020-10-05T00:45:54Z"
        hostIP: 10.90.45.12
        initContainerStatuses:
        - containerID: docker://59eaade6ead04f94b0f6afa7809e254a7f55866d600361b6a6367bcfddbc969f
          image: appscode/stash:v0.11.2
          imageID: docker-pullable://appscode/stash@sha256:2c7d47e394bdc49c96abec2becfb1e196429593199ddcb24e894731730ad1fae
          lastState: {}
          name: pvc-backup-0
          ready: true
          restartCount: 0
          state:
            terminated:
              containerID: docker://59eaade6ead04f94b0f6afa7809e254a7f55866d600361b6a6367bcfddbc969f
              exitCode: 0
              finishedAt: "2020-10-05T00:45:51Z"
              reason: Completed
              startedAt: "2020-10-05T00:00:15Z"
        phase: Failed
        podIP: 192.168.64.245
        podIPs:
        - ip: 192.168.64.245
        qosClass: BestEffort
        startTime: "2020-10-05T00:00:10Z"
    - apiVersion: v1
      kind: Pod
      metadata:
        annotations:
          cni.projectcalico.org/podIP: 192.168.18.0/32
        creationTimestamp: "2020-10-05T00:45:55Z"
        generateName: stash-backup-***-4ap-produktiv-4allportal-assets-1601856010-0-
        labels:
          app.kubernetes.io/component: stash-backup
          app.kubernetes.io/instance: ***-4ap-produktiv
          app.kubernetes.io/managed-by: stash.appscode.com
          app.kubernetes.io/name: 4allportal
          controller-uid: a342ab36-e550-4bd3-acb2-d8eb45fa7bf6
          helm.sh/chart: 4allportal-7.8.0
          job-name: stash-backup-***-4ap-produktiv-4allportal-assets-1601856010-0
        name: stash-backup-***-4ap-produktiv-4allportal-assets-16018560wq67b
        namespace: ***-4ap
        ownerReferences:
        - apiVersion: batch/v1
          blockOwnerDeletion: true
          controller: true
          kind: Job
          name: stash-backup-***-4ap-produktiv-4allportal-assets-1601856010-0
          uid: a342ab36-e550-4bd3-acb2-d8eb45fa7bf6
        resourceVersion: "195782691"
        selfLink: /api/v1/namespaces/***-4ap/pods/stash-backup-***-4ap-produktiv-4allportal-assets-16018560wq67b
        uid: d96aaf49-aeb7-47c5-97fe-e3822165fa3f
      spec:
        containers:
        - args:
          - update-status
          - --provider=s3
          - --bucket=pvc-backup
          - --endpoint=http://10.90.45.131:8080
          - --path=***-4ap/***-4ap-produktiv/persistentvolumeclaim/assets
          - --secret-dir=/etc/repository/secret
          - --scratch-dir=/tmp
          - --enable-cache=true
          - --max-connections=0
          - --namespace=***-4ap
          - --backupsession=***-4ap-produktiv-4allportal-assets-1601856010
          - --repository=***-4ap-produktiv-4allportal-assets
          - --invoker-kind=BackupConfiguration
          - --invoker-name=***-4ap-produktiv-4allportal-assets
          - --target-kind=PersistentVolumeClaim
          - --target-name=***-4ap-produktiv-4allportal-assets
          - --output-dir=/tmp/output
          - --metrics-enabled=true
          - --metrics-pushgateway-url=http://stash.kube-system.svc:56789
          - --prom-job-name=***-4ap-produktiv-4allportal-assets
          image: appscode/stash:v0.11.2
          imagePullPolicy: Always
          name: update-status-1
          resources: {}
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
              - ALL
            privileged: false
            readOnlyRootFilesystem: true
            runAsGroup: 1000
            runAsNonRoot: true
            runAsUser: 1000
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
          - mountPath: /etc/repository/secret
            name: secret-volume
          - mountPath: /tmp
            name: tmp-dir
          - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
            name: stash-backup-***-4ap-produktiv-4allportal-assets-0-token-nn9vt
            readOnly: true
        dnsPolicy: ClusterFirst
        enableServiceLinks: true
        initContainers:
        - args:
          - backup-pvc
          - --provider=s3
          - --bucket=pvc-backup
          - --endpoint=http://10.90.45.131:8080
          - --path=***-4ap/***-4ap-produktiv/persistentvolumeclaim/assets
          - --secret-dir=/etc/repository/secret
          - --scratch-dir=/tmp
          - --enable-cache=true
          - --max-connections=0
          - --hostname=host-0
          - --backup-paths=/stash-data
          - --exclude=
          - --invoker-kind=BackupConfiguration
          - --invoker-name=***-4ap-produktiv-4allportal-assets
          - --target-kind=PersistentVolumeClaim
          - --target-name=***-4ap-produktiv-4allportal-assets
          - --backupsession=***-4ap-produktiv-4allportal-assets-1601856010
          - --retention-keep-last=14
          - --retention-keep-hourly=0
          - --retention-keep-daily=0
          - --retention-keep-weekly=0
          - --retention-keep-monthly=0
          - --retention-keep-yearly=0
          - --retention-keep-tags=
          - --retention-prune=true
          - --retention-dry-run=false
          - --output-dir=/tmp/output
          image: appscode/stash:v0.11.2
          imagePullPolicy: Always
          name: pvc-backup-0
          resources: {}
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
              - ALL
            privileged: false
            readOnlyRootFilesystem: true
            runAsGroup: 1000
            runAsNonRoot: true
            runAsUser: 1000
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
          - mountPath: /stash-data
            name: stash-volume
          - mountPath: /etc/repository/secret
            name: secret-volume
          - mountPath: /tmp
            name: tmp-dir
          - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
            name: stash-backup-***-4ap-produktiv-4allportal-assets-0-token-nn9vt
            readOnly: true
        nodeName: k8s-srv-1
        priority: 0
        restartPolicy: Never
        schedulerName: default-scheduler
        securityContext:
          fsGroup: 1000
          runAsGroup: 1000
          runAsNonRoot: true
          runAsUser: 1000
        serviceAccount: stash-backup-***-4ap-produktiv-4allportal-assets-0
        serviceAccountName: stash-backup-***-4ap-produktiv-4allportal-assets-0
        terminationGracePeriodSeconds: 30
        tolerations:
        - effect: NoExecute
          key: node.kubernetes.io/not-ready
          operator: Exists
          tolerationSeconds: 300
        - effect: NoExecute
          key: node.kubernetes.io/unreachable
          operator: Exists
          tolerationSeconds: 300
        volumes:
        - name: stash-volume
          persistentVolumeClaim:
            claimName: ***-4ap-produktiv-4allportal-assets
        - name: secret-volume
          secret:
            defaultMode: 420
            secretName: ***-4ap-produktiv-4allportal-backup
        - emptyDir: {}
          name: tmp-dir
        - name: stash-backup-***-4ap-produktiv-4allportal-assets-0-token-nn9vt
          secret:
            defaultMode: 420
            secretName: stash-backup-***-4ap-produktiv-4allportal-assets-0-token-nn9vt
      status:
        conditions:
        - lastProbeTime: null
          lastTransitionTime: "2020-10-05T02:16:47Z"
          status: "True"
          type: Initialized
        - lastProbeTime: null
          lastTransitionTime: "2020-10-05T02:16:54Z"
          message: 'containers with unready status: [update-status-1]'
          reason: ContainersNotReady
          status: "False"
          type: Ready
        - lastProbeTime: null
          lastTransitionTime: "2020-10-05T02:16:54Z"
          message: 'containers with unready status: [update-status-1]'
          reason: ContainersNotReady
          status: "False"
          type: ContainersReady
        - lastProbeTime: null
          lastTransitionTime: "2020-10-05T00:45:55Z"
          status: "True"
          type: PodScheduled
        containerStatuses:
        - containerID: docker://86e1b550664dc7a7227cb10a4179d053176f372884786c30b450d273cfdc74e8
          image: appscode/stash:v0.11.2
          imageID: docker-pullable://appscode/stash@sha256:2c7d47e394bdc49c96abec2becfb1e196429593199ddcb24e894731730ad1fae
          lastState: {}
          name: update-status-1
          ready: false
          restartCount: 0
          started: false
          state:
            terminated:
              containerID: docker://86e1b550664dc7a7227cb10a4179d053176f372884786c30b450d273cfdc74e8
              exitCode: 255
              finishedAt: "2020-10-05T02:16:53Z"
              reason: Error
              startedAt: "2020-10-05T02:16:49Z"
        hostIP: 10.90.45.11
        initContainerStatuses:
        - containerID: docker://2091a7cd0b17b04f016d27b95af39c294eb22b3a590b2cb7438e9b33bc36558b
          image: appscode/stash:v0.11.2
          imageID: docker-pullable://appscode/stash@sha256:2c7d47e394bdc49c96abec2becfb1e196429593199ddcb24e894731730ad1fae
          lastState: {}
          name: pvc-backup-0
          ready: true
          restartCount: 0
          state:
            terminated:
              containerID: docker://2091a7cd0b17b04f016d27b95af39c294eb22b3a590b2cb7438e9b33bc36558b
              exitCode: 0
              finishedAt: "2020-10-05T02:16:47Z"
              reason: Completed
              startedAt: "2020-10-05T00:45:59Z"
        phase: Failed
        podIP: 192.168.18.0
        podIPs:
        - ip: 192.168.18.0
        qosClass: BestEffort
        startTime: "2020-10-05T00:45:55Z"
    kind: List
    metadata:
      resourceVersion: ""
      selfLink: ""
    
  • Stash does not work with Flux

    I manually deleted the CronJob of a BackupConfiguration, because for some reason the ServiceAccount for the Job did not exist and the pods failed to start. It is best practice for a Kubernetes operator to periodically reconcile the actual state of the cluster to remove inconsistencies, but the CronJob was not recreated by the Stash operator even after 30 minutes, and the status of the BackupConfiguration was not updated either: it still indicated that the CronJob existed, when it did not.

  • Operator keeps restarting, is it normal?

    I just noticed that my Stash operator keeps restarting (I'm over 700 restarts now). The log is the following:

    Stopping Stash controller
    worker.go:44] Shutting down Deployment Queue
    worker.go:44] Shutting down Repository Queue
    worker.go:44] Shutting down DaemonSet Queue
    worker.go:44] Shutting down ReplicationController Queue
    worker.go:44] Shutting down StatefulSet Queue
    worker.go:44] Shutting down Restic Queue
    worker.go:44] Shutting down Job Queue
    worker.go:44] Shutting down ReplicaSet Queue
    worker.go:44] Shutting down Recovery Queue
    main.go:26] Exiting Stash Main
    
  • Stash v1beta1 design discussion

    Stash Design Overview

    We are going to overhaul the design of Stash to simplify the backup and recovery process and to support some of the most requested features. This doc discusses which features Stash is going to support and how these features may work.

    We have introduced some new crds, such as Function and Task, and made the whole process more modular. This will make it easy to add support for new features, and users will also be able to customize the backup process. Furthermore, this will make Stash resources interoperable between different tools and might even allow using Stash resources as functions in a serverless context.

    We are hoping this design will graduate to GA, so we are taking security seriously. We are going to make sure that nobody can bypass cluster security using Stash. This might require removing some existing features (for example, restore from a different namespace); however, we will provide an alternative way to cover those use cases.

    Goal

    The goal of this new design is to support the following features:

    Schedule Backup and Restore Workload Data

    Backup Workload Data

    Users will be able to back up data from a running workload.

    What does the user have to do?

    • Create a Repository crd.
    • Create a BackupConfiguration crd pointing to the targeted workload.

    Sample Repository crd:

    apiVersion: stash.appscode.com/v1alpha2
    kind: Repository
    metadata:
      name: stash-backup-repo
      namespace: demo
    spec:
      backend:
        gcs:
          bucket: stash-backup-repo
          prefix: default/deployment/stash-demo
        storageSecretName: gcs-secret
    

    Sample BackupConfiguration crd:

    apiVersion: stash.appscode.com/v1beta1
    kind: BackupConfiguration
    metadata:
      name: workload-data-backup
      namespace: demo
    spec:
      schedule: '@every 1h'
      # <no backupProcedure required for sidecar model>
      # repository refers to the Repository crd that holds the backend information
      repository:
        name: stash-backup-repo
      # target indicates the target workload that we want to back up
      target:
        ref:
          apiVersion: apps/v1
          kind: Deployment
          name: stash-demo
        # directories indicates the directories inside the workload that we want to back up
        directories:
        - /source/data
      # retentionPolicy specifies the policy to follow when cleaning old backup snapshots
      retentionPolicy:
        keepLast: 5
        prune: true
    

    How will it work?

    • Stash will watch for BackupConfiguration crds. When it finds one, it will inject a sidecar container into the workload and start a cron for the scheduled backup.
    • On each schedule, the cron will create a BackupSession crd.
    • The sidecar container watches for BackupSession crds. When it finds one, it takes a backup instantly and updates the BackupSession status accordingly.

    Sample BackupSession crd:

    apiVersion: stash.appscode.com/v1beta1
    kind: BackupSession
    metadata:
      name: demo-volume-backup-session
      namespace: demo
    spec:
      # backupConfiguration indicates the BackupConfiguration crd of the respective target that we want to back up
      backupConfiguration:
        name: backup-volume-demo
    status:
      observedGeneration: 239844#2
      phase: Succeeded
      stats:
      - directory: /source/data
        snapshot: 40dc1520
        size: 1.720 GiB
        uploaded: 1.200 GiB # upload size can be smaller than original file size if there are some duplicate files
        fileStats:
          new: 5307
          changed: 0
          unmodified: 0
    

    Restore Workload Data

    Users will be able to restore backed-up data either into a separate volume or into the same workload from which the backup was taken. Here is an example of recovering into the same workload.

    What does the user have to do?

    • Create a RestoreSession crd with the target field pointing to the workload.

    Sample RestoreSession crd to restore into the same workload:

    apiVersion: stash.appscode.com/v1beta1
    kind: RestoreSession
    metadata:
      name: recovery-database-demo
      namespace: demo
    spec:
      repository:
        name: stash-backup-repo
      target: # target indicates where the recovered data will be stored
        ref:
          apiVersion: apps/v1
          kind: Deployment
          name: stash-demo
        directories: # indicates which directories will be recovered
        - /source/data
    

    How will it work?

    • When Stash finds a RestoreSession crd created to restore into a workload, it will inject an init-container into the targeted workload.
    • Then, it will restart the workload.
    • The init-container will restore the data inside the workload.

    Warning: Restoring into the same workload requires restarting the workload, so there will be some downtime.

    Schedule Backup and Restore PVC

    Backup PVC

    Users will also be able to back up a stand-alone PVC. This is useful for ReadOnlyMany or ReadWriteMany PVCs.

    What does the user have to do?

    • Create a Repository crd for the respective backend.

    • Create a BackupConfiguration crd with the target field pointing to the volume.

    Sample BackupConfiguration crd to back up a PVC:

    apiVersion: stash.appscode.com/v1beta1
    kind: BackupConfiguration
    metadata:
      name: volume-backup-demo
      namespace: demo
    spec:
      schedule: '@every 1h'
      # task indicates the Task crd that specifies the steps to back up a volume.
      # Stash will create some default Task crds during install to backup/restore various resources.
      # users can also create their own Task to customize backup/recovery
      task:
        name: volumeBackup
      # repository refers to the Repository crd that holds the backend information
      repository:
        name: stash-backup-repo
      # target indicates the target volume that we want to back up
      target:
        ref:
          apiVersion: v1
          kind: PersistentVolumeClaim
          name: demo-pvc  
        mountPath: /source/data
      # retentionPolicy specifies the policy to follow when cleaning old backup snapshots
      retentionPolicy:
        keepLast: 5
        prune: true
    

    How will it work?

    1. Stash will create a CronJob using information from the respective Task crd specified by the task field, as sketched below.
    2. The CronJob will take periodic backups of the target volume.
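
    For illustration, the generated CronJob could look roughly like the following sketch. The container args mirror the real backup Job shown in the first issue on this page; the generated name, image tag, and exact flags are assumptions and vary by release.

    apiVersion: batch/v1beta1
    kind: CronJob
    metadata:
      name: stash-backup-volume-backup-demo    # hypothetical generated name
      namespace: demo
    spec:
      schedule: "0 * * * *"                    # derived from the BackupConfiguration schedule
      jobTemplate:
        spec:
          template:
            spec:
              restartPolicy: Never
              containers:
              - name: pvc-backup               # rendered from the volumeBackup Task's Function
                image: appscode/stash:v0.11.2  # illustrative image tag
                args:
                - backup-pvc
                - --provider=gcs
                - --path=demo/demo-pvc
                - --backup-paths=/source/data
                - --retention-keep-last=5
                - --retention-prune=true
                volumeMounts:
                - name: stash-volume
                  mountPath: /source/data
              volumes:
              - name: stash-volume
                persistentVolumeClaim:
                  claimName: demo-pvc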

    Restore PVC

    Users will be able to restore backed-up data into a volume.

    What does the user have to do?

    • Create a RestoreSession crd with the target field pointing to the volume where the recovered data will be stored.

    Sample RestoreSession crd to restore into a volume:

    apiVersion: stash.appscode.com/v1beta1
    kind: RestoreSession
    metadata:
      name: recovery-volume-demo
      namespace: demo
    spec:
      repository:
        name: stash-backup-repo
      # task indicates the Task crd that specifies the steps to restore a volume
      task:
        name: volumeRecovery
      target: # target indicates where the recovered data will be stored
        ref:
          apiVersion: v1
          kind: PersistentVolumeClaim
          name: demo-pvc  
        mountPath: /source/data
        directories: # indicates which directories will be recovered
        - /source/data
    

    How will it work?

    • When Stash finds a RestoreSession crd created to restore into a volume, it will launch a Job to restore into that volume.
    • The recovery Job will restore the data and store it in the specified volume.

    Schedule Backup and Restore Database

    Backup Database

    Users will be able to back up databases using Stash.

    What does the user have to do?

    • Create a Repository crd for the respective backend.
    • Create an AppBinding crd which holds the connection information for the database. If the database is deployed with KubeDB, an AppBinding crd is created automatically for each database.
    • Create a BackupConfiguration crd pointing to the AppBinding crd.

    Sample AppBinding crd:

    apiVersion: appcatalog.appscode.com/v1alpha1
    kind: AppBinding
    metadata:
      name: quick-postgres
      namespace: demo
      labels:
        kubedb.com/kind: Postgres
        kubedb.com/name: quick-postgres
    spec:
      clientConfig:
        insecureSkipTLSVerify: true
        service:
          name: quick-postgres
          port: 5432
          scheme: "http"
      secret:
        name: quick-postgres-auth
      type: kubedb.com/postgres
    

    Sample BackupConfiguration crd for database backup:

    apiVersion: stash.appscode.com/v1beta1
    kind: BackupConfiguration
    metadata:
      name: database-backup-demo
      namespace: demo
    spec:
      schedule: '@every 1h'
      # task indicates the Task crd that specifies the steps to back up a Postgres database
      task:
        name:   pgBackup
        database: my-postgres # specify this field if you want to back up a particular database.
      # repository refers to the Repository crd that holds the backend information
      repository:
        name: stash-backup-repo
      # target indicates the respective AppBinding crd for the target database
      target:
        ref:
          apiVersion: appcatalog.appscode.com/v1alpha1
          kind: AppBinding
          name: quick-postgres
      # retentionPolicy specifies the policy to follow when cleaning old backup snapshots
      retentionPolicy:
        keepLast: 5
        prune: true
    

    How will it work?

    • When Stash sees a BackupConfiguration crd for a database backup, it will launch a CronJob to take periodic backups of the database.

    Restore Database

    Users will be able to initialize a database from a backed-up snapshot.

    What does the user have to do?

    • Create a RestoreSession crd with the target field pointing to the respective AppBinding crd of the target database.

    Sample RestoreSession crd to restore a database:

    apiVersion: stash.appscode.com/v1beta1
    kind: RestoreSession
    metadata:
      name: database-recovery-demo
      namespace: demo
    spec:
      repository:
        name: stash-backup-repo
      # task indicates the Task crd that specifies the steps to restore a Postgres database
      task:
        name: pgRecovery
      target: # target indicates where to restore
        # indicates the respective AppBinding crd for the target database that we want to initialize from backup
        ref:
          apiVersion: appcatalog.appscode.com/v1alpha1
          kind: AppBinding
          name: quick-postgres
    

    How will it work?

    • Stash will launch a Job to restore the backed-up database and initialize the target with the recovered data.

    Schedule Backup of Cluster YAMLs

    Users will be able to back up the YAMLs of cluster resources. However, Stash will currently not restore the cluster from those YAMLs automatically, so the user will have to recreate the resources manually.

    In the future, Stash might be able to back up and restore not only the YAMLs but the entire cluster.

    What does the user have to do?

    • Create a Repository crd for the respective backend.
    • Create a BackupConfiguration crd with the task field pointing to a Task crd that backs up the cluster.

    Sample BackupConfiguration crd to back up the YAMLs of cluster resources:

    apiVersion: stash.appscode.com/v1beta1
    kind: BackupConfiguration
    metadata:
      name: cluster-backup-demo
      namespace: demo
    spec:
      schedule: '@every 1h'
      # task indicates the Task crd that specifies the steps to back up cluster YAMLs
      task:
        name: clusterBackup
      # repository refers to the Repository crd that holds the backend information
      repository:
        name: stash-backup-repo
      # <no target required for cluster backup>
      # retentionPolicy specifies the policy to follow when cleaning old backup snapshots
      retentionPolicy:
        keepLast: 5
        prune: true
    

    How will it work?

    • Stash will launch a CronJob using information from the Task crd specified through the task field.
    • The CronJob will take periodic backups of the cluster.

    Trigger Backup Instantly

    Users will be able to trigger a scheduled backup instantly.

    What does the user have to do?

    • Create a BackupSession crd pointing to the target BackupConfiguration crd.

    Sample BackupSession crd for triggering an instant backup:

    apiVersion: stash.appscode.com/v1beta1
    kind: BackupSession
    metadata:
      name: demo-volume-backup-session
      namespace: demo
    spec:
      # backupConfiguration indicates the BackupConfiguration crd of the respective target that we want to back up
      backupConfiguration:
        name: volume-backup-demo
    

    How will it work?

    • For a backup scheduled through a sidecar container, the sidecar will take the instant backup as it watches for BackupSession crds.
    • For a backup scheduled through a CronJob, Stash will launch another Job to take an instant backup of the target.

    Default Backup

    Users will also be able to configure a default backup for the cluster. They will then no longer need to create Repository and BackupConfiguration crds for every workload they want to back up; instead, they will only need to add some annotations to the target.

    What does the user have to do?

    • Create a BackupTemplate crd which will hold the backend and backup information.
    • Add some annotations to the target. If the target is a database, add the annotations to the respective AppBinding crd.

    Default Backup of Workload Data

    Sample BackupTemplate crd to back up workload data:

    apiVersion: stash.appscode.com/v1beta1
    kind: BackupTemplate
    metadata:
      name: workload-data-backup-template
    spec:
      backend:
        gcs:
          bucket: stash-backup-repo
          prefix: ${target.namespace}/${target.name} # this prefix template is used to initialize each repository in a different directory of the backend
        storageSecretName: gcs-secret # users must ensure this secret exists in the respective namespace
      schedule: '@every 1h'
      # < no task required >
      retentionPolicy:
        name: 'keep-last-5'
        keepLast: 5
        prune: true
    

    Sample workload with annotations for default backup:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: stash-demo
      namespace: demo
      labels:
        app: stash-demo
      # if Stash finds the annotations below, it will take a backup of this workload.
      annotations:
        stash.appscode.com/backuptemplate: "workload-data-backup-template"
        stash.appscode.com/targetDirectories: "[/source/data]"
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: stash-demo
      template:
        metadata:
          labels:
            app: stash-demo
          name: busybox
        spec:
          containers:
          - args:
            - sleep
            - "3600"
            image: busybox
            imagePullPolicy: IfNotPresent
            name: busybox
            volumeMounts:
            - mountPath: /source/data
              name: source-data
          restartPolicy: Always
          volumes:
          - name: source-data
            configMap:
              name: stash-sample-data
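
    For the Deployment above, the ${target.namespace}/${target.name} prefix template resolves to demo/stash-demo, so the Repository that Stash generates would look roughly like this (the generated object's name is an assumption):

    apiVersion: stash.appscode.com/v1alpha2
    kind: Repository
    metadata:
      name: stash-demo               # hypothetical generated name, derived from the target
      namespace: demo
    spec:
      backend:
        gcs:
          bucket: stash-backup-repo
          prefix: demo/stash-demo    # ${target.namespace}/${target.name}, resolved
        storageSecretName: gcs-secret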
    

    Default Backup of a PVC

    Sample BackupTemplate crd for stand-alone PVC backup:

    apiVersion: stash.appscode.com/v1beta1
    kind: BackupTemplate
    metadata:
      name: volume-backup-template
    spec:
      backend:
        gcs:
          bucket: stash-backup-repo
          prefix: ${target.namespace}/${target.name} # this prefix template is used to initialize each repository in a different directory of the backend
        storageSecretName: gcs-secret # users must ensure this secret exists in the respective namespace
      schedule: '@every 1h'
      task:
        name: volumeBackup
      retentionPolicy:
        name: 'keep-last-5'
        keepLast: 5
        prune: true
    

    Sample PVC with annotation for default backup:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: demo-pvc
      namespace: demo
      # if Stash finds the annotation below, it will take a backup of this PVC.
      annotations:
        stash.appscode.com/backuptemplate: "volume-backup-template"
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 1Gi
    

    Default Backup of Database

    Sample BackupTemplate crd for database backup:

    apiVersion: stash.appscode.com/v1beta1
    kind: BackupTemplate
    metadata:
      name: pgdb-backup-template
    spec:
      backend:
        gcs:
          bucket: stash-backup-repo
          prefix: ${target.namespace}/${target.name} # this prefix template is used to initialize each repository in a different directory of the backend
        storageSecretName: gcs-secret # users must ensure this secret exists in the respective namespace
      schedule: '@every 1h'
      task:
        name: pgBackup
      retentionPolicy:
        name: 'keep-last-5'
        keepLast: 5
        prune: true
    

    Sample AppBinding crd with annotations for default backup:

    apiVersion: appcatalog.appscode.com/v1alpha1
    kind: AppBinding
    metadata:
      name: quick-postgres
      namespace: demo
      labels:
        kubedb.com/kind: Postgres
        kubedb.com/name: quick-postgres
      # if Stash finds the annotation below, it will take a backup of this database.
      annotations:
        stash.appscode.com/backuptemplate: "pgdb-backup-template"
    spec:
      clientConfig:
        insecureSkipTLSVerify: true
        service:
          name: quick-postgres
          port: 5432
          scheme: "http"
      secret:
        name: quick-postgres-auth
      type: kubedb.com/postgres
    

    How will it work?

    • Stash will watch workloads, volumes, and AppBinding crds. When Stash finds one with these annotations, it will create a Repository crd and a BackupConfiguration crd using the information from the respective BackupTemplate.
    • Then, Stash will take a normal backup as discussed earlier.

    Auto Restore

    Users will also be able to configure automatic recovery for a particular workload. Each time the workload restarts, it will first restore the data from backup, and then the original workload's containers will start.

    What does the user have to do?

    • Provide some annotations in the workload.

    Sample workload with annotations to restore on restart:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: stash-demo
      namespace: demo
      labels:
        app: stash-demo
      # These annotations indicate that data should be recovered on each restart of the workload
      annotations:
        stash.appscode.com/restorepolicy: "OnRestart"
        stash.appscode.com/repository: "demo-backup-repo"
        stash.appscode.com/directories: "[/source/data]"
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: stash-demo
      template:
        metadata:
          labels:
            app: stash-demo
          name: busybox
        spec:
          containers:
          - args:
            - sleep
            - "3600"
            image: busybox
            imagePullPolicy: IfNotPresent
            name: busybox
            volumeMounts:
            - mountPath: /source/data
              name: source-data
          restartPolicy: Always
          volumes:
          - name: source-data
            configMap:
              name: stash-sample-data
    

    How will it work?

    • When Stash sees a RestoreSession crd configured for auto recovery, it will inject an init-container into the target.
    • The init-container will perform the recovery on each restart.

    Stash cli/kubectl-plugin

    We are going to provide a Stash plugin for kubectl. This will help perform the following operations:

    • Restore into a local machine instead of the cluster (necessary for testing purposes).
    • Restore into a different namespace from a repository: copy the repository + secret into the desired namespace and then create a RestoreSession object (see the sketch after this list).
    • Backup a PV: creates a matching PVC from the PV (ensures that the user has permission to read the PV).
    • Trigger instant backup.
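
    For the cross-namespace restore flow, the objects the plugin would create in the destination namespace could look like the following sketch, modeled on the Repository and RestoreSession samples earlier in this doc (the destination namespace and names are illustrative):

    # Repository copied into the destination namespace
    apiVersion: stash.appscode.com/v1alpha2
    kind: Repository
    metadata:
      name: stash-backup-repo
      namespace: other-ns                    # hypothetical destination namespace
    spec:
      backend:
        gcs:
          bucket: stash-backup-repo
          prefix: demo/deployment/stash-demo
        storageSecretName: gcs-secret        # the storage secret must be copied as well
    ---
    # RestoreSession pointing at the copied Repository
    apiVersion: stash.appscode.com/v1beta1
    kind: RestoreSession
    metadata:
      name: cross-ns-restore
      namespace: other-ns
    spec:
      repository:
        name: stash-backup-repo
      target:
        ref:
          apiVersion: apps/v1
          kind: Deployment
          name: stash-demo
        directories:
        - /source/data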

    Function

    A Function is an independent, single-container workload specification that performs a single task. For example, pgBackup backs up a PostgreSQL database, and clusterBackup backs up the YAMLs of cluster resources. A Function crd has some variable fields with a $ prefix that have to be resolved while creating the respective workload. You can consider these variable fields as the inputs of a Function.

    Some example Function definitions are given below:

    clusterBackup

    # the clusterBackup function backs up the YAMLs of all resources in the cluster
    apiVersion: stash.appscode.com/v1beta1
    kind: Function
    metadata:
      name: clusterBackup
    spec:
      container:
        image:  appscodeci/cluster-tool:v1
        name:  cluster-tool
        args:
        - backup
        - --sanitize=${sanitize}
        - --provider=${provider}
        - --hostname=${hostname}
        - --path=${repoDir}
        - --output-dir=${outputDir}
        - --retention-policy.policy=${policy}
        - --retention-policy.value=${retentionValue}
        - --metrics.enabled=${enableMetric}
        - --metrics.pushgateway-url=${pushgatewayURL}
        - --metrics.labels="workload-kind=${workloadKind},workload-name=${workloadName}"
        volumeMounts:
        - name: ${tempVolumeName}
          mountPath: /tmp/restic
        - name: ${storageSecretName}
          mountPath: /etc/secrets/storage-secret
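
    To make the variable resolution concrete, here is roughly how the clusterBackup container could look once hypothetical inputs (provider=gcs, hostname=host-0, and so on) are substituted for the ${...} fields:

    containers:
    - name: cluster-tool
      image: appscodeci/cluster-tool:v1
      args:
      - backup
      - --sanitize=true
      - --provider=gcs
      - --hostname=host-0
      - --path=demo/cluster-backup
      - --output-dir=/tmp/restic/output
      - --retention-policy.policy=keep-last
      - --retention-policy.value=5
      - --metrics.enabled=true
      - --metrics.pushgateway-url=http://stash.kube-system.svc:56789
      - --metrics.labels="workload-kind=Deployment,workload-name=stash-demo"
      volumeMounts:
      - name: tmp-volume                     # ${tempVolumeName} resolved
        mountPath: /tmp/restic
      - name: gcs-secret                     # ${storageSecretName} resolved
        mountPath: /etc/secrets/storage-secret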
    

    pgBackup

    # the pgBackup function backs up a PostgreSQL database
    apiVersion: stash.appscode.com/v1beta1
    kind: Function
    metadata:
      name: pgBackup
    spec:
      container:
        image:  appscodeci/postgresql-tool:v1
        name:  postgres-tool
        args:
        - backup
        - --database=${databases}
        - --provider=${provider}
        - --hostname=${hostname}
        - --path=${repoDir}
        - --output-dir=${outputDir}
        - --retention-policy.policy=${policy}
        - --retention-policy.value=${retentionValue}
        - --metrics.enabled=${enableMetric}
        - --metrics.pushgateway-url=${pushgatewayURL}
        - --metrics.labels="workload-kind=${workloadKind},workload-name=${workloadName}"
        env:
        - name:  PGPASSWORD
          valueFrom:
            secretKeyRef:
              name: ${databaseSecret}
              key: "POSTGRES_PASSWORD"
        - name:  DB_USER
          valueFrom:
            secretKeyRef:
              name: ${databaseSecret}
              key: "POSTGRES_USER"
        - name:  DB_HOST
          value: ${host}
        volumeMounts:
        - name: ${tempVolumeName}
          mountPath: /tmp/restic
        - name: ${storageSecretName}
          mountPath: /etc/secrets/storage-secret
    
    

    pgRecovery

    # the pgRecovery function restores a PostgreSQL database
    apiVersion: stash.appscode.com/v1beta1
    kind: Function
    metadata:
      name: pgRecovery
    spec:
      container:
        image:  appscodeci/postgresql-tool:v1
        name:  postgres-tool
        args:
        - restore
        - --provider=${provider}
        - --hostname=${hostname}
        - --path=${repoDir}
        - --output-dir=${outputDir}
        - --metrics.enabled=${enableMetric}
        - --metrics.pushgateway-url=${pushgatewayURL}
        - --metrics.labels="workload-kind=${workloadKind},workload-name=${workloadName}"
        env:
        - name:  PGPASSWORD
          valueFrom:
            secretKeyRef:
              name: ${databaseSecret}
              key: "POSTGRES_PASSWORD"
        - name:  DB_USER
          valueFrom:
            secretKeyRef:
              name: ${databaseSecret}
              key: "POSTGRES_USER"
        - name:  DB_HOST
          value: ${host}
        volumeMounts:
        - name: ${tempVolumeName}
          mountPath: /tmp/restic
        - name: ${storageSecretName}
          mountPath: /etc/secrets/storage-secret
    

    stashPostBackup

    # stashPostBackup updates the Repository and BackupSession status for the respective backup
    apiVersion: stash.appscode.com/v1beta1
    kind: Function
    metadata:
      name: stashPostBackup
    spec:
      container:
        image: appscode/stash:0.9.0
        name:  stash-post-backup
        args:
        - post-backup-update
        - --repository=${repoName}
        - --backupsession=${backupSessionName}
        - --output-json-dir=${outputJsonDir}
        volumeMounts:
        - name: ${outputVolumeName}
          mountPath: /tmp/restic
    

    stashPostRecovery

    # stashPostRecovery updates the RestoreSession status for the respective recovery
    apiVersion: stash.appscode.com/v1beta1
    kind: Function
    metadata:
      name: stashPostRecovery
    spec:
      container:
        image: appscode/stash:0.9.0
        name:  stash-post-recovery
        args:
        - post-recovery-update
        - --recoveryconfiguration=${recoveryConfigurationName}
        - --output-json-dir=${outputJsonDir}
        volumeMounts:
        - name: ${outputVolumeName}
          mountPath: /tmp/restic
    

    Task

    A complete backup process may need to perform multiple Functions. For example, to back up a PostgreSQL database, Stash needs to initialize a Repository, back up the database, and finally update the Repository and BackupSession status to indicate that the backup is complete (and push backup metrics to a Pushgateway). A Task specifies these Functions sequentially, along with their inputs.

    We have chosen to break the complete backup process into several independent steps so that the individual Functions can be used with tools other than Stash. It also makes it easy to add support for new features. For example, to support backing up a new database, we only need to add a Function and a Task CRD; nothing in the Stash operator code has to change. This also helps users back up databases that are not officially supported by Stash.
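
    A BackupConfiguration or RestoreSession then only needs to reference a Task by name; the operator resolves the Task into its Functions and runs them in order. A minimal sketch, with illustrative resource names:

    apiVersion: stash.appscode.com/v1beta1
    kind: BackupConfiguration
    metadata:
      name: sample-postgres-backup
    spec:
      schedule: "*/30 * * * *"
      repository:
        name: my-repo            # illustrative Repository
      task:
        name: pgBackup           # resolved to the Function chain defined by this Task
      retentionPolicy:
        name: keep-last-5
        keepLast: 5
        prune: true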

    Some sample Tasks are given below:

    pgBackup

    # pgBackup specifies the required Functions and their inputs to back up a PostgreSQL database
    apiVersion: stash.appscode.com/v1beta1
    kind: Task
    metadata:
      name: pgBackup
    spec:
      functions:
      - name: pgBackup
        inputs:
          databases: ${databases}
          provider: ${provider}
          hostname: ${hostname}
          repoDir: ${prefix}
          outputDir: ${outputDir}
          policy: ${retentionPolicyName}
          retentionValue: ${retentionPolicyValue}
          enableMetric: ${enableMetric}
          pushgatewayURL: ${pushgatewayURL}
          workloadKind: ${kind}
          workloadName: ${name}
          tempVolumeName: ${tmpVolumeName}
          storageSecretName: ${secretName}
      - name: stashPostBackup
        inputs:
          repoName: ${repoName}
          backupSessionName: ${backupSessionName}
          outputJsonDir: ${output-dir}
          outputVolumeName: ${output-volume-name}
    

    pgRecovery

    # pgRecovery specifies the required Functions and their inputs to restore a PostgreSQL database
    apiVersion: stash.appscode.com/v1beta1
    kind: Task
    metadata:
      name: pgRecovery
    spec:
      functions:
      - name: pgRecovery
        inputs:
          provider: ${provider}
          hostname: ${hostname}
          repoDir: ${prefix}
          outputDir: ${outputDir}
          enableMetric: ${enableMetric}
          pushgatewayURL: ${pushgatewayURL}
          workloadKind: ${kind}
          workloadName: ${name}
          tempVolumeName: ${tmpVolumeName}
          storageSecretName: ${secretName}
      - name: stashPostRecovery
        inputs:
          recoveryConfigurationName: ${recoveryConfigurationName}
          outputJsonDir: ${output-dir}
          outputVolumeName: ${output-volume-name}
    

    clusterBackup

    # clusterBackup specifies the required Functions and their inputs to back up cluster YAML
    apiVersion: stash.appscode.com/v1beta1
    kind: Task
    metadata:
      name: clusterBackup
    spec:
      functions:
      - name: clusterBackup
        inputs:
          sanitize: ${sanitize}
          provider: ${provider}
          hostname: ${hostname}
          repoDir: ${prefix}
          outputDir: ${outputDir}
          policy: ${retentionPolicyName}
          retentionValue: ${retentionPolicyValue}
          enableMetric: ${enableMetric}
          pushgatewayURL: ${pushgatewayURL}
          workloadKind: ${kind}
          workloadName: ${name}
          tempVolumeName: ${tmpVolumeName}
          storageSecretName: ${secretName}
      - name: stashPostBackup
        inputs:
          repoName: ${repoName}
          backupSessionName: ${backupSessionName}
          outputJsonDir: ${output-dir}
          outputVolumeName: ${output-volume-name}
    
  • How can I restore from the backup?

    How can I restore from the backup?

    Hi there, I need to restore a backup, but I get an error.

    stash version: v0.11.6

    helm list -A |grep stash
    stash-operator          stash                   8               2020-11-09 13:59:39.73655288 +0200 EET          deployed        stash-v0.11.6                           v0.11.6
    
    apiVersion: stash.appscode.com/v1beta1
    kind: RestoreSession
    metadata:
      name: jenkins-pvc-restore-2
      namespace: jenkins
    spec:
      repository:
        name: minio-repo
      rules:
      - paths:
        - /var/jenkins_home
      - snapshots:
        - minio-repo-6eb9f707
      hooks:
        preRestore:
          exec:
            command: ["/bin/sh","-c","rm -rf /var/jenkins_home/*"]
          containerName: stash-init
      target:
        ref:
          apiVersion: apps/v1
          kind: Deployment
          name: jenkins-new
        volumeMounts:
        - name:  jenkins-home
          mountPath:  /var/jenkins_home
    
    
    k get restoresessions.stash.appscode.com
    NAME                    REPOSITORY   PHASE       AGE
    jenkins-pvc-restore     minio-repo   Succeeded   42d
    jenkins-pvc-restore-2   minio-repo   Failed      7m8s
    
    k get repositories.stash.appscode.com 
    NAME         INTEGRITY   SIZE   SNAPSHOT-COUNT   LAST-SUCCESSFUL-BACKUP   AGE
    minio-repo                                                                42d
    
    k get snapshots.repositories.stash.appscode.com 
    NAME                  REPOSITORY   HOSTNAME   CREATED AT
    minio-repo-497b073b   minio-repo   host-0     2020-12-12T23:00:06Z
    minio-repo-f6a38424   minio-repo   host-0     2020-12-13T23:00:05Z
    minio-repo-91adbf93   minio-repo   host-0     2020-12-14T23:00:05Z
    minio-repo-da28722c   minio-repo   host-0     2020-12-15T23:00:11Z
    minio-repo-b4bc865f   minio-repo   host-0     2020-12-16T23:00:15Z
    minio-repo-830fab84   minio-repo   host-0     2020-12-17T23:00:08Z
    minio-repo-0f497431   minio-repo   host-0     2020-12-18T23:00:14Z
    minio-repo-4a2018b1   minio-repo   host-0     2020-12-19T23:00:08Z
    minio-repo-6eb9f707   minio-repo   host-0     2020-12-20T23:00:03Z
    minio-repo-dc2c4642   minio-repo   host-0     2020-12-21T23:00:12Z
    
    k describe restoresessions.stash.appscode.com jenkins-pvc-restore-2
    
    Name:         jenkins-pvc-restore-2
    Namespace:    jenkins
    Labels:       <none>
    Annotations:  API Version:  stash.appscode.com/v1beta1
    Kind:         RestoreSession
    Metadata:
      Creation Timestamp:  2020-12-22T03:03:11Z
      Finalizers:
        stash.appscode.com
      Generation:  1
      Managed Fields:
        API Version:  stash.appscode.com/v1beta1
        Fields Type:  FieldsV1
        fieldsV1:
          f:metadata:
            f:annotations:
              .:
              f:kubectl.kubernetes.io/last-applied-configuration:
          f:spec:
            .:
            f:driver:
            f:hooks:
              .:
              f:preRestore:
                .:
                f:containerName:
                f:exec:
                  .:
                  f:command:
            f:repository:
              .:
              f:name:
            f:rules:
            f:target:
              .:
              f:ref:
                .:
                f:apiVersion:
                f:kind:
                f:name:
              f:volumeMounts:
        Manager:      kubectl
        Operation:    Update
        Time:         2020-12-22T03:03:11Z
        API Version:  stash.appscode.com/v1beta1
        Fields Type:  FieldsV1
        fieldsV1:
          f:metadata:
            f:finalizers:
          f:status:
            .:
            f:conditions:
            f:phase:
            f:sessionDuration:
            f:stats:
            f:totalHosts:
        Manager:         stash
        Operation:       Update
        Time:            2020-12-22T03:03:29Z
      Resource Version:  194822748
      Self Link:         /apis/stash.appscode.com/v1beta1/namespaces/jenkins/restoresessions/jenkins-pvc-restore-2
      UID:               ccf68471-70b8-4e8a-8d99-d7b0260bbc49
    Spec:
      Driver:  Restic
      Hooks:
        Pre Restore:
          Container Name:  stash-init
          Exec:
            Command:
              /bin/sh
              -c
              rm -rf /var/jenkins_home/*
      Repository:
        Name:  minio-repo
      Runtime Settings:
      Target:
        Ref:
          API Version:  apps/v1
          Kind:         Deployment
          Name:         jenkins-new
        Rules:
          Paths:
            /var/jenkins_home
          Snapshots:
            minio-repo-6eb9f707
        Volume Mounts:
          Mount Path:  /var/jenkins_home
          Name:        jenkins-home
      Task:
        Name:  
      Temp Dir:
    Status:
      Conditions:
        Last Transition Time:  2020-12-22T03:03:11Z
        Message:               Repository jenkins/minio-repo exist.
        Reason:                RepositoryAvailable
        Status:                True
        Type:                  RepositoryFound
        Last Transition Time:  2020-12-22T03:03:11Z
        Message:               Backend Secret jenkins/minio-secret exist.
        Reason:                BackendSecretAvailable
        Status:                True
        Type:                  BackendSecretFound
        Last Transition Time:  2020-12-22T03:03:11Z
        Message:               Restore target apps/v1 deployment/jenkins-new found.
        Reason:                TargetAvailable
        Status:                True
        Type:                  RestoreTargetFound
        Last Transition Time:  2020-12-22T03:03:11Z
        Message:               Successfully injected stash init-container.
        Reason:                InitContainerInjectionSucceeded
        Status:                True
        Type:                  StashInitContainerInjected
      Phase:                   Failed
      Session Duration:        18.566622799s
      Stats:
        Error:      failed to complete restore process for host host-0. Reason: invalid id "minio-repo-6eb9f707": no matching ID found
        Hostname:   host-0
        Phase:      Failed
      Total Hosts:  1
    Events:
      Type     Reason               Age    From                       Message
      ----     ------               ----   ----                       -------
      Normal   Restore Running      17m    RestoreSession Controller  restore has been started for RestoreSession jenkins/jenkins-pvc-restore-2
      Normal   Restore Running      17m    RestoreSession Controller  restore has been started for RestoreSession jenkins/jenkins-pvc-restore-2
      Normal   Restore Running      17m    RestoreSession Controller  restore has been started for RestoreSession jenkins/jenkins-pvc-restore-2
      Warning  Host Restore Failed  17m    Status Updater             restore failed for host "host-0". Reason: failed to complete restore process for host host-0. Reason: invalid id "minio-repo-6eb9f707": no matching ID found
      Warning  Restore Failed       17m    RestoreSession Controller  Restore has failed to complete. Reason: restore failed for target: Deployment/jenkins-new
      Warning  Host Restore Failed  17m    Status Updater             restore failed for host "host-0". Reason: failed to complete restore process for host host-0. Reason: invalid id "minio-repo-6eb9f707": no matching ID found
      Warning  Host Restore Failed  16m    Status Updater             restore failed for host "host-0". Reason: failed to complete restore process for host host-0. Reason: invalid id "minio-repo-6eb9f707": no matching ID found
      Warning  Host Restore Failed  16m    Status Updater             restore failed for host "host-0". Reason: failed to complete restore process for host host-0. Reason: invalid id "minio-repo-6eb9f707": no matching ID found
      Warning  Host Restore Failed  14m    Status Updater             restore failed for host "host-0". Reason: failed to complete restore process for host host-0. Reason: invalid id "minio-repo-6eb9f707": no matching ID found
      Warning  Host Restore Failed  11m    Status Updater             restore failed for host "host-0". Reason: failed to complete restore process for host host-0. Reason: invalid id "minio-repo-6eb9f707": no matching ID found
      Warning  Host Restore Failed  8m41s  Status Updater             restore failed for host "host-0". Reason: failed to complete restore process for host host-0. Reason: invalid id "minio-repo-6eb9f707": no matching ID found
      Warning  Host Restore Failed  117s   Status Updater             restore failed for host "host-0". Reason: failed to complete restore process for host host-0. Reason: invalid id "minio-repo-6eb9f707": no matching ID found
    
    
    
    k get all
    NAME                               READY   STATUS                  RESTARTS   AGE
    pod/jenkins-new-65db7c65b4-r9kfp   0/1     Init:CrashLoopBackOff   6          11m
    
    NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)     AGE
    service/jenkins-new         ClusterIP   10.233.56.98    <none>        80/TCP      23m
    service/jenkins-new-agent   ClusterIP   10.233.18.253   <none>        50000/TCP   23m
    
    NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/jenkins-new   0/1     1            0           23m
    
    NAME                                     DESIRED   CURRENT   READY   AGE
    replicaset.apps/jenkins-new-65db7c65b4   1         1         0       11m
    replicaset.apps/jenkins-new-749499f95c   0         0         0       23m
    
    NAME                                                           REPOSITORY   HOSTNAME   CREATED AT
    snapshot.repositories.stash.appscode.com/minio-repo-497b073b   minio-repo   host-0     2020-12-12T23:00:06Z
    snapshot.repositories.stash.appscode.com/minio-repo-f6a38424   minio-repo   host-0     2020-12-13T23:00:05Z
    snapshot.repositories.stash.appscode.com/minio-repo-91adbf93   minio-repo   host-0     2020-12-14T23:00:05Z
    snapshot.repositories.stash.appscode.com/minio-repo-da28722c   minio-repo   host-0     2020-12-15T23:00:11Z
    snapshot.repositories.stash.appscode.com/minio-repo-b4bc865f   minio-repo   host-0     2020-12-16T23:00:15Z
    snapshot.repositories.stash.appscode.com/minio-repo-830fab84   minio-repo   host-0     2020-12-17T23:00:08Z
    snapshot.repositories.stash.appscode.com/minio-repo-0f497431   minio-repo   host-0     2020-12-18T23:00:14Z
    snapshot.repositories.stash.appscode.com/minio-repo-4a2018b1   minio-repo   host-0     2020-12-19T23:00:08Z
    snapshot.repositories.stash.appscode.com/minio-repo-6eb9f707   minio-repo   host-0     2020-12-20T23:00:03Z
    snapshot.repositories.stash.appscode.com/minio-repo-dc2c4642   minio-repo   host-0     2020-12-21T23:00:12Z
    
    NAME                                                      REPOSITORY   PHASE       AGE
    restoresession.stash.appscode.com/jenkins-pvc-restore     minio-repo   Succeeded   42d
    restoresession.stash.appscode.com/jenkins-pvc-restore-2   minio-repo   Failed      12m
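
    A likely cause, assuming Stash's usual naming scheme: the Snapshot names shown by kubectl are of the form <repository-name>-<restic-short-id>, while restic itself only knows the eight-character short ID, which would explain the invalid id "minio-repo-6eb9f707" error. A sketch of a corrected rules block under that assumption (a rule should also carry either paths or snapshots, not both, since restic restores a snapshot's recorded paths):

    apiVersion: stash.appscode.com/v1beta1
    kind: RestoreSession
    metadata:
      name: jenkins-pvc-restore-2
      namespace: jenkins
    spec:
      repository:
        name: minio-repo
      rules:
      - snapshots:
        - 6eb9f707   # restic short ID, without the "minio-repo-" repository prefix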
    
  • Unable to rollout the operator

    Unable to rollout the operator

    I am not able to deploy the operator v0.9.0-rc.2 on OKD 3.11.

    While initializing the operator, I always get the following error message:

    I0105 00:47:11.850796       1 run.go:26] Starting operator version v0.9.0-rc.2+8ce4ab865ccb7b57fc4d98a65669cc7fef8a3c9e ...
    I0105 00:47:12.663345       1 lib.go:112] Kubernetes version: &version.Info{Major:"1", Minor:"11+", GitVersion:"v1.11.0+d4cacc0", GitCommit:"d4cacc0", GitTreeState:"clean", BuildDate:"2019-12-19T05:39:29Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}
    Error: CustomResourceDefinition.apiextensions.k8s.io "restics.stash.appscode.com" is invalid: spec.validation.openAPIV3Schema: Invalid value: apiextensions.JSONSchemaProps{ID:"", Schema:"", Ref:(*string)(nil), Description:"", Type:"object", Format:"", Title:"", Default:(*apiextensions.JSON)(nil), Maximum:(*float64)(nil), ExclusiveMaximum:false, Minimum:(*float64)(nil), ExclusiveMinimum:false, MaxLength:(*int64)(nil), MinLength:(*int64)(nil), Pattern:"", MaxItems:(*int64)(nil), MinItems:(*int64)(nil), UniqueItems:false, MultipleOf:(*float64)(nil), Enum:[]apiextensions.JSON(nil), MaxProperties:(*int64)(nil), MinProperties:(*int64)(nil), Required:[]string(nil), Items:(*apiextensions.JSONSchemaPropsOrArray)(nil), AllOf:[]apiextensions.JSONSchemaProps(nil), OneOf:[]apiextensions.JSONSchemaProps(nil), AnyOf:[]apiextensions.JSONSchemaProps(nil), Not:(*apiextensions.JSONSchemaProps)(nil), Properties:map[string]apiextensions.JSONSchemaProps{"kind":apiextensions.JSONSchemaProps{ID:"", Schema:"", Ref:(*string)(nil), Description:"Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds ", Type:"string", Format:"", Title:"", Default:(*apiextensions.JSON)(nil), Maximum:(*float64)(nil), ExclusiveMaximum:false, Minimum:(*float64)(nil), ExclusiveMinimum:false, MaxLength:(*int64)(nil), MinLength:(*int64)(nil), Pattern:"", MaxItems:(*int64)(nil), MinItems:(*int64)(nil), UniqueItems:false, MultipleOf:(*float64)(nil), Enum:[]apiextensions.JSON(nil), MaxProperties:(*int64)(nil), MinProperties:(*int64)(nil), Required:[]string(nil), Items:(*apiextensions.JSONSchemaPropsOrArray)(nil), AllOf:[]apiextensions.JSONSchemaProps(nil), OneOf:[]apiextensions.JSONSchemaProps(nil), AnyOf:[]apiextensions.JSONSchemaProps(nil), Not:(*apiextensions.JSONSchemaProps)(nil), Properties:map[string]apiextensions.JSONSchemaProps(nil), AdditionalProperties:(*apiextensions.JSONSchemaPropsOrBool)(nil), PatternProperties:map[string]apiextensions.JSONSchemaProps(nil), Dependencies:apiextensions.JSONSchemaDependencies(nil), AdditionalItems:(*apiextensions.JSONSchemaPropsOrBool)(nil), Definitions:apiextensions.JSONSchemaDefinitions(nil), ExternalDocs:(*apiextensions.ExternalDocumentation)(nil), Example:(*apiextensions.JSON)(nil)}, 
    [...]
    Definitions:apiextensions.JSONSchemaDefinitions(nil), ExternalDocs:(*apiextensions.ExternalDocumentation)(nil), Example:(*apiextensions.JSON)(nil)}: must only have "properties", "required" or "description" at the root if the status subresource is enabled
    

    Installing version v0.9.0-rc.1 works flawlessly. I had to disable the CR status subresource checking though (--enable-status-subresource=false), a flag which has since been removed.

    Any advice on this? Has anyone tested version 0.9.0-rc2 on OpenShift 3.11?

  • Unable to get snapshots on K8 v1.10.3

    Unable to get snapshots on K8 v1.10.3

    Similar to #599: deploying Stash 0.7.0 on Kubernetes 1.10.3 (server version) using Helm 2.9.1, with RBAC enabled, the correct CA certificate passed (the same one used to sign the kube-apiserver credentials), and both webhooks enabled, I am able to create a Restic resource that seems to be taking backups of the specified volume, judging by the resulting Repository. But when I try to look at the snapshots, I just get:

    the server doesn't have a resource type "snapshots"
    

    here's the output of kubectl get repositories -o yaml:

    apiVersion: v1
    items:
    - apiVersion: stash.appscode.com/v1alpha1
      kind: Repository
      metadata:
        clusterName: ""
        creationTimestamp: 2018-11-04T21:23:31Z
        finalizers:
        - stash
        generation: 1
        labels:
          restic: stash-demo
          workload-kind: Deployment
          workload-name: gitlab-postgresql
        name: deployment.gitlab-postgresql
        namespace: default
        resourceVersion: "13782694"
        selfLink: /apis/stash.appscode.com/v1alpha1/namespaces/default/repositories/deployment.gitlab-postgresql
        uid: e0ef7fbf-e077-11e8-b346-00155d3313a9
      spec:
        backend:
          local:
            hostPath:
              path: /data/stash-test/restic-repo
            mountPath: /safe/data
            subPath: deployment/gitlab-postgresql
          storageSecretName: stash-demo
      status:
        backupCount: 12
        firstBackupTime: 2018-11-04T21:24:31Z
        lastBackupDuration: 2.793389927s
        lastBackupTime: 2018-11-04T21:35:31Z
    kind: List
    metadata:
      resourceVersion: ""
      selfLink: ""
    

    and here's the stash operator logs:

    I1104 21:10:41.182712       1 logs.go:19] FLAG: --alsologtostderr="false"
    I1104 21:10:41.204067       1 logs.go:19] FLAG: --audit-log-batch-buffer-size="10000"
    I1104 21:10:41.204086       1 logs.go:19] FLAG: --audit-log-batch-max-size="400"
    I1104 21:10:41.204097       1 logs.go:19] FLAG: --audit-log-batch-max-wait="30s"
    I1104 21:10:41.204104       1 logs.go:19] FLAG: --audit-log-batch-throttle-burst="15"
    I1104 21:10:41.204112       1 logs.go:19] FLAG: --audit-log-batch-throttle-enable="false"
    I1104 21:10:41.204122       1 logs.go:19] FLAG: --audit-log-batch-throttle-qps="10"
    I1104 21:10:41.204131       1 logs.go:19] FLAG: --audit-log-format="json"
    I1104 21:10:41.204138       1 logs.go:19] FLAG: --audit-log-maxage="0"
    I1104 21:10:41.204143       1 logs.go:19] FLAG: --audit-log-maxbackup="0"
    I1104 21:10:41.204149       1 logs.go:19] FLAG: --audit-log-maxsize="0"
    I1104 21:10:41.204155       1 logs.go:19] FLAG: --audit-log-mode="blocking"
    I1104 21:10:41.204161       1 logs.go:19] FLAG: --audit-log-path="-"
    I1104 21:10:41.204176       1 logs.go:19] FLAG: --audit-policy-file=""
    I1104 21:10:41.204182       1 logs.go:19] FLAG: --audit-webhook-batch-buffer-size="10000"
    I1104 21:10:41.204193       1 logs.go:19] FLAG: --audit-webhook-batch-initial-backoff="10s"
    I1104 21:10:41.204201       1 logs.go:19] FLAG: --audit-webhook-batch-max-size="400"
    I1104 21:10:41.204213       1 logs.go:19] FLAG: --audit-webhook-batch-max-wait="30s"
    I1104 21:10:41.204232       1 logs.go:19] FLAG: --audit-webhook-batch-throttle-burst="15"
    I1104 21:10:41.204246       1 logs.go:19] FLAG: --audit-webhook-batch-throttle-enable="true"
    I1104 21:10:41.204267       1 logs.go:19] FLAG: --audit-webhook-batch-throttle-qps="10"
    I1104 21:10:41.204294       1 logs.go:19] FLAG: --audit-webhook-config-file=""
    I1104 21:10:41.204306       1 logs.go:19] FLAG: --audit-webhook-initial-backoff="10s"
    I1104 21:10:41.204324       1 logs.go:19] FLAG: --audit-webhook-mode="batch"
    I1104 21:10:41.204335       1 logs.go:19] FLAG: --authentication-kubeconfig=""
    I1104 21:10:41.204349       1 logs.go:19] FLAG: --authentication-skip-lookup="false"
    I1104 21:10:41.204366       1 logs.go:19] FLAG: --authentication-token-webhook-cache-ttl="10s"
    I1104 21:10:41.204378       1 logs.go:19] FLAG: --authorization-kubeconfig=""
    I1104 21:10:41.204392       1 logs.go:19] FLAG: --authorization-webhook-cache-authorized-ttl="10s"
    I1104 21:10:41.204403       1 logs.go:19] FLAG: --authorization-webhook-cache-unauthorized-ttl="10s"
    I1104 21:10:41.204423       1 logs.go:19] FLAG: --bind-address="0.0.0.0"
    I1104 21:10:41.204437       1 logs.go:19] FLAG: --burst="100"
    I1104 21:10:41.204451       1 logs.go:19] FLAG: --cert-dir="apiserver.local.config/certificates"
    I1104 21:10:41.204470       1 logs.go:19] FLAG: --client-ca-file=""
    I1104 21:10:41.204483       1 logs.go:19] FLAG: --contention-profiling="false"
    I1104 21:10:41.204500       1 logs.go:19] FLAG: --docker-registry="appscode"
    I1104 21:10:41.204513       1 logs.go:19] FLAG: --enable-analytics="true"
    I1104 21:10:41.204530       1 logs.go:19] FLAG: --enable-swagger-ui="false"
    I1104 21:10:41.204543       1 logs.go:19] FLAG: --help="false"
    I1104 21:10:41.204560       1 logs.go:19] FLAG: --http2-max-streams-per-connection="1000"
    I1104 21:10:41.204576       1 logs.go:19] FLAG: --image-tag="0.7.0"
    I1104 21:10:41.204588       1 logs.go:19] FLAG: --kubeconfig=""
    I1104 21:10:41.204604       1 logs.go:19] FLAG: --log_backtrace_at=":0"
    I1104 21:10:41.204616       1 logs.go:19] FLAG: --log_dir=""
    I1104 21:10:41.204634       1 logs.go:19] FLAG: --logtostderr="false"
    I1104 21:10:41.204646       1 logs.go:19] FLAG: --ops-address=":56790"
    I1104 21:10:41.204663       1 logs.go:19] FLAG: --profiling="true"
    I1104 21:10:41.204677       1 logs.go:19] FLAG: --qps="100"
    I1104 21:10:41.204694       1 logs.go:19] FLAG: --rbac="true"
    I1104 21:10:41.204718       1 logs.go:19] FLAG: --requestheader-allowed-names="[]"
    I1104 21:10:41.204734       1 logs.go:19] FLAG: --requestheader-client-ca-file=""
    I1104 21:10:41.204745       1 logs.go:19] FLAG: --requestheader-extra-headers-prefix="[x-remote-extra-]"
    I1104 21:10:41.204775       1 logs.go:19] FLAG: --requestheader-group-headers="[x-remote-group]"
    I1104 21:10:41.204795       1 logs.go:19] FLAG: --requestheader-username-headers="[x-remote-user]"
    I1104 21:10:41.204809       1 logs.go:19] FLAG: --resync-period="10m0s"
    I1104 21:10:41.204825       1 logs.go:19] FLAG: --scratch-dir="/tmp"
    I1104 21:10:41.204843       1 logs.go:19] FLAG: --secure-port="8443"
    I1104 21:10:41.204861       1 logs.go:19] FLAG: --stderrthreshold="0"
    I1104 21:10:41.204875       1 logs.go:19] FLAG: --tls-ca-file=""
    I1104 21:10:41.204895       1 logs.go:19] FLAG: --tls-cert-file="/var/serving-cert/tls.crt"
    I1104 21:10:41.204914       1 logs.go:19] FLAG: --tls-cipher-suites="[]"
    I1104 21:10:41.204932       1 logs.go:19] FLAG: --tls-min-version=""
    I1104 21:10:41.204945       1 logs.go:19] FLAG: --tls-private-key-file="/var/serving-cert/tls.key"
    I1104 21:10:41.204992       1 logs.go:19] FLAG: --tls-sni-cert-key="[]"
    I1104 21:10:41.205008       1 logs.go:19] FLAG: --v="3"
    I1104 21:10:41.205016       1 logs.go:19] FLAG: --vmodule=""
    I1104 21:11:11.234919       1 run.go:21] Starting operator version 0.7.0+705ecd09ddde608b9a66ed248acbc8229cfe8c24 ...
    I1104 21:11:11.843227       1 audit.go:229] No audit policy file provided for AdvancedAuditing, no events will be recorded.
    I1104 21:11:15.039184       1 clusterrole.go:19] Creating ClusterRole stash-sidecar.
    I1104 21:11:15.059300       1 controller.go:102] Starting Stash controller
    I1104 21:11:15.059544       1 reflector.go:202] Starting reflector *v1alpha1.Restic (10m0s) from github.com/appscode/stash/client/informers/externalversions/factory.go:74
    I1104 21:11:15.059569       1 reflector.go:240] Listing and watching *v1alpha1.Restic from github.com/appscode/stash/client/informers/externalversions/factory.go:74
    I1104 21:11:15.059601       1 reflector.go:202] Starting reflector *v1alpha1.Recovery (10m0s) from github.com/appscode/stash/client/informers/externalversions/factory.go:74
    I1104 21:11:15.059618       1 reflector.go:240] Listing and watching *v1alpha1.Recovery from github.com/appscode/stash/client/informers/externalversions/factory.go:74
    I1104 21:11:15.059655       1 reflector.go:202] Starting reflector *v1beta1.StatefulSet (10m0s) from github.com/appscode/stash/vendor/k8s.io/client-go/informers/factory.go:87
    I1104 21:11:15.059680       1 reflector.go:240] Listing and watching *v1beta1.StatefulSet from github.com/appscode/stash/vendor/k8s.io/client-go/informers/factory.go:87
    I1104 21:11:15.059907       1 reflector.go:202] Starting reflector *v1alpha1.Repository (10m0s) from github.com/appscode/stash/client/informers/externalversions/factory.go:74
    I1104 21:11:15.059923       1 reflector.go:240] Listing and watching *v1alpha1.Repository from github.com/appscode/stash/client/informers/externalversions/factory.go:74
    I1104 21:11:15.059989       1 reflector.go:202] Starting reflector *v1.Job (10m0s) from github.com/appscode/stash/vendor/k8s.io/client-go/informers/factory.go:87
    I1104 21:11:15.060006       1 reflector.go:240] Listing and watching *v1.Job from github.com/appscode/stash/vendor/k8s.io/client-go/informers/factory.go:87
    I1104 21:11:15.060051       1 reflector.go:202] Starting reflector *v1.ReplicationController (10m0s) from github.com/appscode/stash/vendor/k8s.io/client-go/informers/factory.go:87
    I1104 21:11:15.060064       1 reflector.go:240] Listing and watching *v1.ReplicationController from github.com/appscode/stash/vendor/k8s.io/client-go/informers/factory.go:87
    I1104 21:11:15.060271       1 reflector.go:202] Starting reflector *v1beta1.ReplicaSet (10m0s) from github.com/appscode/stash/vendor/k8s.io/client-go/informers/factory.go:87
    I1104 21:11:15.060283       1 reflector.go:240] Listing and watching *v1beta1.ReplicaSet from github.com/appscode/stash/vendor/k8s.io/client-go/informers/factory.go:87
    I1104 21:11:15.060489       1 reflector.go:202] Starting reflector *v1.Namespace (10m0s) from github.com/appscode/stash/vendor/k8s.io/client-go/informers/factory.go:87
    I1104 21:11:15.060506       1 reflector.go:240] Listing and watching *v1.Namespace from github.com/appscode/stash/vendor/k8s.io/client-go/informers/factory.go:87
    I1104 21:11:15.060555       1 reflector.go:202] Starting reflector *v1beta1.DaemonSet (10m0s) from github.com/appscode/stash/vendor/k8s.io/client-go/informers/factory.go:87
    I1104 21:11:15.060560       1 reflector.go:202] Starting reflector *v1beta1.Deployment (10m0s) from github.com/appscode/stash/vendor/k8s.io/client-go/informers/factory.go:87
    I1104 21:11:15.060580       1 reflector.go:240] Listing and watching *v1beta1.Deployment from github.com/appscode/stash/vendor/k8s.io/client-go/informers/factory.go:87
    I1104 21:11:15.060567       1 reflector.go:240] Listing and watching *v1beta1.DaemonSet from github.com/appscode/stash/vendor/k8s.io/client-go/informers/factory.go:87
    I1104 21:11:16.061231       1 daemonsets.go:69] Sync/Add/Update for DaemonSet kube-system/weave-net
    I1104 21:11:16.061253       1 statefulsets.go:69] Sync/Add/Update for StatefulSet default/gitlab-gitaly
    I1104 21:11:16.061322       1 replicasets.go:78] Sync/Add/Update for ReplicaSet default/gitlab-prometheus-server-847c8bb76
    I1104 21:11:16.061342       1 deployment.go:78] Sync/Add/Update for Deployment kube-system/tiller-deploy
    I1104 21:11:16.061395       1 daemonsets.go:69] Sync/Add/Update for DaemonSet rook-ceph-system/rook-ceph-agent
    I1104 21:11:16.061396       1 daemonsets.go:69] Sync/Add/Update for DaemonSet rook-ceph-system/rook-discover
    I1104 21:11:16.061396       1 deployment.go:78] Sync/Add/Update for Deployment rook-ceph/rook-ceph-osd-id-1
    I1104 21:11:16.061457       1 deployment.go:78] Sync/Add/Update for Deployment rook-ceph/rook-ceph-osd-id-3
    I1104 21:11:16.061470       1 controller.go:135] [Listening on :56790]
    I1104 21:11:16.061341       1 replicasets.go:78] Sync/Add/Update for ReplicaSet default/gitlab-redis-7fbd4d8c85
    I1104 21:11:16.061564       1 deployment.go:78] Sync/Add/Update for Deployment tiller-deploy/tiller-deploy
    I1104 21:11:16.061572       1 deployment.go:78] Sync/Add/Update for Deployment default/gitlab-minio
    ...
    I1104 21:11:16.061614       1 deployment.go:78] Sync/Add/Update for Deployment default/gitlab-prometheus-server
    I1104 21:11:16.127344       1 replicasets.go:78] Sync/Add/Update for ReplicaSet default/gitlab-gitlab-shell-5fc6c544f
    I1104 21:11:16.129612       1 serve.go:96] Serving securely on [::]:8443
    I1104 21:11:16.129961       1 replicasets.go:78] Sync/Add/Update for ReplicaSet default/stash-operator-bbfcd6df9
    I1104 21:11:16.130968       1 replicasets.go:78] Sync/Add/Update for ReplicaSet default/gitlab-task-runner-75cdfb785c
    I1104 21:11:16.353194       1 replicasets.go:78] Sync/Add/Update for ReplicaSet default/gitlab-unicorn-7b584cc87d
    ...
    I1104 21:11:16.360031       1 replicasets.go:78] Sync/Add/Update for ReplicaSet rook-ceph/rook-ceph-mgr-a-75cc4ccbf4
    I1104 21:11:19.667856       1 wrap.go:42] GET /healthz: (4.89368ms) 200 [[kube-probe/1.10] 10.34.0.0:37162]
    I1104 21:11:19.709522       1 deployment.go:78] Sync/Add/Update for Deployment default/stash-operator
    I1104 21:11:19.739726       1 replicasets.go:78] Sync/Add/Update for ReplicaSet default/stash-operator-bbfcd6df9
    I1104 21:11:29.663078       1 wrap.go:42] GET /healthz: (179.649Β΅s) 200 [[kube-probe/1.10] 10.34.0.0:37172]
    ...
    I1104 21:21:09.675269       1 wrap.go:42] GET /healthz: (7.830144ms) 200 [[kube-probe/1.10] 10.34.0.0:37784]
    I1104 21:21:15.152428       1 daemonsets.go:69] Sync/Add/Update for DaemonSet rook-ceph-system/rook-discover
    I1104 21:21:15.152542       1 daemonsets.go:69] Sync/Add/Update for DaemonSet rook-ceph-system/rook-ceph-agent
    ...
    I1104 21:21:15.336096       1 replicasets.go:78] Sync/Add/Update for ReplicaSet default/gitlab-gitlab-shell-58f7c74979
    I1104 21:21:19.663221       1 wrap.go:42] GET /healthz: (157.583Β΅s) 200 [[kube-probe/1.10] 10.34.0.0:37794]
    ...
    I1104 21:22:19.668685       1 wrap.go:42] GET /healthz: (194.267Β΅s) 200 [[kube-probe/1.10] 10.34.0.0:37858]
    I1104 21:22:21.679123       1 restics.go:131] Sync/Add/Update for Restic stash-demo
    I1104 21:22:21.679328       1 deployment.go:78] Sync/Add/Update for Deployment default/gitlab-postgresql
    I1104 21:22:21.679525       1 replicasets.go:78] Sync/Add/Update for ReplicaSet default/gitlab-postgresql-5c64b549b9
    I1104 21:22:21.690569       1 rolebinding.go:19] Creating RoleBinding default/gitlab-postgresql-stash-sidecar.
    I1104 21:22:21.761145       1 deployment.go:58] Patching Deployment default/gitlab-postgresql with {"metadata":{"annotations":{"restic.appscode.com/last-applied-configuration":"{\"kind\":\"Restic\",\"apiVersion\":\"stash.appscode.com/v1alpha1\",\"metadata\":{\"name\":\"stash-demo\",\"namespace\":\"default\",\"selfLink\":\"/apis/stash.appscode.com/v1alpha1/namespaces/default/restics/stash-demo\",\"uid\":\"b75a3a5e-e077-11e8-b142-00155d331585\",\"resourceVersion\":\"13780771\",\"generation\":1,\"creationTimestamp\":\"2018-11-04T21:22:21Z\",\"annotations\":{\"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"stash.appscode.com/v1alpha1\\\",\\\"kind\\\":\\\"Restic\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"name\\\":\\\"stash-demo\\\",\\\"namespace\\\":\\\"default\\\"},\\\"spec\\\":{\\\"backend\\\":{\\\"local\\\":{\\\"hostPath\\\":{\\\"path\\\":\\\"/data/stash-test/restic-repo\\\"},\\\"mountPath\\\":\\\"/safe/data\\\"},\\\"storageSecretName\\\":\\\"stash-demo\\\"},\\\"fileGroups\\\":[{\\\"path\\\":\\\"/var/lib/postgresql\\\",\\\"retentionPolicyName\\\":\\\"keep-last-5\\\"}],\\\"retentionPolicies\\\":[{\\\"keepLast\\\":5,\\\"name\\\":\\\"keep-last-5\\\",\\\"prune\\\":true}],\\\"schedule\\\":\\\"@every 1m\\\",\\\"selector\\\":{\\\"matchLabels\\\":{\\\"app\\\":\\\"postgresql\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/postgresql/data/pgdata\\\",\\\"name\\\":\\\"data\\\"}]}}\\n\"}},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"postgresql\"}},\"fileGroups\":[{\"path\":\"/var/lib/postgresql\",\"retentionPolicyName\":\"keep-last-5\"}],\"backend\":{\"storageSecretName\":\"stash-demo\",\"local\":{\"hostPath\":{\"path\":\"/data/stash-test/restic-repo\"},\"mountPath\":\"/safe/data\"}},\"schedule\":\"@every 1m\",\"volumeMounts\":[{\"name\":\"data\",\"mountPath\":\"/var/lib/postgresql/data/pgdata\"}],\"resources\":{},\"retentionPolicies\":[{\"name\":\"keep-last-5\",\"keepLast\":5,\"prune\":true}]}}\n","restic.appscode.com/tag":"0.7.0"}},"spec":{"template":{"metadata":{"annotations":{"restic.appscode.com/resource-hash":"17686287010712179251"}},"spec":{"$setElementOrder/containers":[{"name":"gitlab-postgresql"},{"name":"metrics"},{"name":"stash"}],"$setElementOrder/volumes":[{"name":"data"},{"name":"password-file"},{"name":"stash-scratchdir"},{"name":"stash-podinfo"},{"name":"stash-local"}],"containers":[{"args":["backup","--restic-name=stash-demo","--workload-kind=Deployment","--workload-name=gitlab-postgresql","--docker-registry=appscode","--image-tag=0.7.0","--run-via-cron=true","--pushgateway-url=http://stash-operator.default.svc:56789","--enable-analytics=true","--enable-rbac=true","--logtostderr=false","--alsologtostderr=false","--v=3","--stderrthreshold=0"],"env":[{"name":"NODE_NAME","valueFrom":{"fieldRef":{"fieldPath":"spec.nodeName"}}},{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"APPSCODE_ANALYTICS_CLIENT_ID","value":"d41d8cd98f00b204e9800998ecf8427e"}],"image":"appscode/stash:0.7.0","name":"stash","resources":{},"volumeMounts":[{"mountPath":"/tmp","name":"stash-scratchdir"},{"mountPath":"/etc/stash","name":"stash-podinfo"},{"mountPath":"/var/lib/postgresql/data/pgdata","name":"data","readOnly":true},{"mountPath":"/safe/data","name":"stash-local"}]}],"volumes":[{"emptyDir":{},"name":"stash-scratchdir"},{"downwardAPI":{"items":[{"fieldRef":{"fieldPath":"metadata.labels"},"path":"labels"}]},"name":"stash-podinfo"},{"hostPath":{"path":"/data/stash-test/restic-repo"},"name":"stash-local"}]}}}}.
    I1104 21:22:21.779595       1 deployment.go:78] Sync/Add/Update for Deployment default/gitlab-postgresql
    ...
    
    Is kube 1.10.x also not supported yet?
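
    (One hedged workaround: when kubectl cannot resolve the short resource name, the fully qualified name may still work, i.e. kubectl get snapshots.repositories.stash.appscode.com, as used elsewhere on this page.)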
  • Backup on OKD 4.10 not working due to missing finalizer permission

    Backup on OKD 4.10 not working due to missing finalizer permission

    Hi,

    I have set up the Community Edition of Stash in an OKD 4.10 cluster (version v2022.06.27; with v2022.07.09 I got ImagePullErrors). I created the Repository and BackupConfiguration resources:

    apiVersion: stash.appscode.com/v1alpha1
    kind: Repository
    metadata:
      name: foo
      namespace: example
    spec:
      backend:
        s3:
          endpoint: my.s3.end.point
          bucket: my-bucket
          region: us-west-1
          prefix: /stash
        storageSecretName: s3-secret
    
    apiVersion: stash.appscode.com/v1beta1
    kind: BackupConfiguration
    metadata:
      name: backup
      namespace: example
    spec:
      repository:
        name: foo
      schedule: "*/5 * * * *"
      target:
        ref:
          apiVersion: apps/v1
          kind: Deployment
          name: backup-test
        volumeMounts:
          - name: nginx-logs
            mountPath: /var/log/nginx
        paths:
          - /var/log/nginx
      retentionPolicy:
        name: "keep-last-5"
        keepLast: 5
        prune: true
    

    Stash tries to trigger the cron job, but fails immediately:

    operator W0714 12:54:40.968039       1 sidecar.go:214] Failed to inject stash sidecar into Deployment example/backup-test. Reason: rolebindings.rbac.authorization.k8s.io "stash-sidecar-deployment-backup-test" is forbidden: cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on: , <nil>
    

    How can this be fixed?

    Thanks for your help in advance!

    Cheers
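
    A probable cause, for context: OpenShift/OKD enables the OwnerReferencesPermissionEnforcement admission plugin, so creating an object with blockOwnerDeletion on an ownerReference requires update permission on the owner's finalizers subresource. A minimal sketch of the extra ClusterRole that would need to be bound to the service account that injects the sidecar (the name is illustrative):

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: stash-deployment-finalizers   # illustrative name
    rules:
    - apiGroups: ["apps"]
      resources: ["deployments/finalizers"]
      verbs: ["update"]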

  • Stash operator restarts indefinitely

    Stash operator restarts indefinitely

    Hi, the Stash operator restarts indefinitely, and there don't seem to be enough logs to debug the issue. The following message is logged before each restart:

    F0619 06:53:38.922225 1 main.go:41] Error in Stash Main: timed out waiting for the condition

    stash-operator version: appscode/stash:v0.9.0-rc.6
    push-gateway: prom/pushgateway:v0.5.2

  • Can't read snapshots when using BackupBatch

    Can't read snapshots when using BackupBatch

    I am using BackupBatch to back up my deployments. The backups work fine.

    However, when I run kubectl get snapshots I get the following error.

    Error from server (InternalError): Internal error occurred: Fatal: unable to open config file: Stat: The specified key does not exist. Is there a repository at the following location? s3:s3.amazonaws.com/siliconhills-backups/stash
    
  • Run stash as non root user

    Run stash as non root user

    Hello. Is it possible to run the Stash pods as a non-root user? I use Pod Security Policy, so users must not run pods as root. I added a stash user to the image:

    FROM appscode/stash:0.8.3
    RUN  addgroup -S stash && adduser -S stash -G stash
    USER stash
    

    But I can't recover my files. I get these errors:

    Warning  FailedRecovery      5m    stash-recovery  failed to recover FileGroup /source/data, reason:     /usr/local/go/src/runtime/asm_amd64.s:2361
    Warning  FailedRecovery      5m    stash-recovery  Failed to complete recovery local-recovery, reason:   /usr/local/go/src/runtime/asm_amd64.s:2361 
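
    Rather than rebuilding the image, newer Stash versions expose a securityContext under runtimeSettings.container on the backup and restore CRDs (the field appears in another configuration on this page), which may be enough to satisfy a Pod Security Policy. A minimal sketch, assuming a v1beta1 BackupConfiguration:

    apiVersion: stash.appscode.com/v1beta1
    kind: BackupConfiguration
    spec:
      runtimeSettings:
        container:
          securityContext:
            runAsNonRoot: true
            runAsUser: 1000   # illustrative non-root UID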
    
  • BackupConfiguration tempDir doesn't seem to be used?

    BackupConfiguration tempDir doesn't seem to be used?

    I have this in my BackupConfiguration:

    apiVersion: stash.appscode.com/v1beta1
    kind: BackupConfiguration
    spec:
      tempDir:
        medium: "Memory"
        sizeLimit: "512Mi"
    

    According to https://stash.run/docs/v2022.09.29/concepts/crds/backupconfiguration/:

    Stash mounts an emptyDir for holding temporary files. It is also used for caching for faster backup performance. You can configure the emptyDir using spec.tempDir section.

    However, the Pod has this as its tmp-dir Volume:

      - emptyDir:
          medium: Memory
          sizeLimit: 256Mi
        name: tmp-dir
    

    I.e. the sizeLimit of 512Mi is not applied when the Pod definition is patched. Am I doing this wrong, or is it a bug?

    This is on Stash v0.22.3.

  • Failed BackupSession is still marked as "running"

    Failed BackupSession is still marked as "running"

    I have the following in my stash container logs:

    I0930 05:51:15.930910       1 commands.go:120] Backing up target data
    [golang-sh]$ /bin/restic backup /mnt/data/timemachine --quiet --json --host samba-0 --cache-dir /tmp/restic-cache --cleanup-cache
    E0930 05:53:15.949682       1 leaderelection.go:367] Failed to update lock: context deadline exceeded
    I0930 05:53:17.302915       1 leaderelection.go:283] failed to renew lease samba/lock-statefulset-samba-backup: timed out waiting for the condition
    I0930 05:53:17.509894       1 backupsession.go:398] Lost leadership
    I0930 05:53:18.075803       1 backupsession.go:187] Sync/Add/Update for Backup Session backup-1664245947
    I0930 05:53:19.934081       1 backupsession.go:214] Skip processing BackupSession samba/backup-1664245947. Reason: BackupSession has been processed already for host "samba-0"
    {"message_type":"error","error":{"Op":"lstat","Path":"/mnt/data/timemachine/sjors/[snip]","Err":2},"during":"archival","item":"/mnt/data/timemachine/sjors/[snip]"}
    {"message_type":"error","error":{"Op":"lstat","Path":"/mnt/data/timemachine/sjors/[snip]","Err":2},"during":"archival","item":"/mnt/data/timemachine/sjors/[snip]"}
    Warning: at least one source file could not be read
    W0930 09:48:21.666033       1 backupsession.go:442] Failed to take backup for BackupSession backup-1664505145. Reason: Warning: at least one source file could not be read
    I0930 09:48:21.693846       1 status.go:99] Updating post backup status.......
    

    Inside the stash container, only /stash is running; there is no restic process.

    However, the BackupSession is still Phase: Running:

    Status:
      Conditions:
        Last Transition Time:  2022-09-30T02:32:32Z
        Message:               Repository exist in the backend.
        Reason:                BackendRepositoryFound
        Status:                True
        Type:                  BackendRepositoryInitialized
      Phase:                   Running
      Session Deadline:        2022-10-02T02:32:25Z
    

    Could this be a bug?

  • failed to complete restore [..] no snapshot found

    failed to complete restore [..] no snapshot found

    I am trying to validate my Stash backups by performing a restore to a different namespace. However, I am encountering the "no snapshot found" error during restore.

    First, I performed a backup to Backblaze B2, and the snapshot seems to have been created successfully: the BackupSession is successful; in the bucket, /stash/gitea/snapshots/<long filename> exists; the /bin/restic check in the backup container logs indicates no errors were found. Also, SNAPSHOT-COUNT is 1:

    $ kubectl get repository -n stash gitea-to-b2
    NAME          INTEGRITY   SIZE         SNAPSHOT-COUNT   LAST-SUCCESSFUL-BACKUP   AGE
    gitea-to-b2   true        11.254 MiB   1                101m                     106m
    

    So, I created a new namespace gitea-restore and created the PVC, an Ingress with a different name, a RestoreSession according to https://stash.run/docs/v2022.07.09/guides/use-cases/cross-cluster-backup/, and the StatefulSet itself. The RestoreSession contains the same Repository as the BackupConfiguration; its YAML is below. Indeed, Stash created an init container in the StatefulSet, whose logs contain:

    I0926 10:38:05.508833       1 restore.go:116] Got leadership, preparing for restore
    I0926 10:38:05.572293       1 commands.go:233] Restoring backed up data
    [golang-sh]$ /bin/restic restore latest --path /data --host host-0 --target / --cache-dir /tmp/restic-cache
    latest snapshot for criteria not found: no snapshot found Paths:[/data] Hosts:[host-0]
    I0926 10:38:10.950152       1 status.go:192] Updating hosts status for restore target StatefulSet gitea-restore/gitea.
    F0926 10:38:11.092775       1 restore.go:123] failed to complete restore. Reason: latest snapshot for criteria not found: no snapshot found Paths:[/data] Hosts:[host-0]
    [......goroutine stacks....]
    I0926 10:38:11.092905       1 restore.go:129] Lost leadership
    I0926 10:38:11.096042       1 restore.go:98] Restore completed successfully for RestoreSession gitea-restore/restore
    I0926 10:38:11.096111       1 main.go:45] Exiting Stash Main
    

    Two things jump out at me. First of all: the init container failed to restore, but it still said "restore completed successfully" and exited 0. So now my Gitea container is running without data, while I had expected the init container to fail and block further setup of the application.

        State:          Terminated
          Reason:       Completed
          Exit Code:    0
          Started:      Mon, 26 Sep 2022 12:38:04 +0200
          Finished:     Mon, 26 Sep 2022 12:38:11 +0200
    

    But, worse still, the restore itself failed. With the RestoreSession pointing at the same Repository as the BackupSession (gitea-to-b2 in the namespace stash), I would have expected it to find the exact same backups in the same directory in the same bucket, but it finds no snapshots. I tried to take a look at the contents of the snapshot, but it's binary gibberish so I don't know what's in it. Could you help me figure out why this is / what's going on?

    Here's the RestoreSession YAML:

    apiVersion: stash.appscode.com/v1beta1
    kind: RestoreSession
    metadata:
      name: restore
      namespace: gitea-restore
    spec:
      repository:
        name: gitea-to-b2
        namespace: stash
      target: # target indicates where the recovered data will be stored
        ref:
          apiVersion: apps/v1
          kind: StatefulSet
          name: gitea
        volumeMounts:
        - mountPath: /data
          name: data
        rules:
        - paths:
          - /data
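
    A hedged debugging step: list the snapshots through the Stash API (kubectl get snapshots.repositories.stash.appscode.com -n stash) and compare the HOSTNAME column with what the restore searched for (Hosts:[host-0] in the log above). If the backup was recorded under a different hostname, the rule can pin the source host explicitly; a sketch, assuming the rules support a sourceHost field:

    spec:
      rules:
      - sourceHost: host-0   # set to the HOSTNAME actually shown in the snapshot list
        paths:
        - /data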
    
  • Support permanent cache

    Support permanent cache

    I have several large repos where cache data is many GBs.

    The use of emptyDir for cache is problematic because it's so transient, particularly for one-off jobs like PVC backups.

    Could there be a way to specify a permanent PVC for cache?

  • Backup Rook on PVCs (Block VolumeMode)

    Backup Rook on PVCs (Block VolumeMode)

    Hi,

    I've set up Rook on PVCs in Block VolumeMode, along with the Stash Community Edition. I'd like to back up the PVCs that are used by the Rook OSDs. Is it even possible to back up and restore PVCs in Block VolumeMode?

    This is my working configuration to back up an OSD PVC from Rook:

    apiVersion: stash.appscode.com/v1beta1
    kind: BackupConfiguration
    metadata:
      name: rook-ceph-osd-0-backup
      namespace: rook-ceph
    spec:
      driver: Restic
      paused: true
      repository:
        name: s3-repo
      retentionPolicy:
        keepLast: 5
        name: keep-last-5-3
        prune: true
      runtimeSettings:
        container:
          securityContext:
            runAsUser: 0
      schedule: '*/5 * * * *'
      target:
        paths:
        - /source/data
        ref:
          apiVersion: apps/v1
          kind: Deployment
          name: rook-ceph-osd-0
        volumeMounts:
        - mountPath: /source/data
          name: set1-data-1bzgdq-bridge
          subPath: ceph-0
    

    This is my RestoreSession, which is not working:

    apiVersion: stash.appscode.com/v1beta1
    kind: RestoreSession
    metadata:
      name: deployment-restore
      namespace: rook-ceph
    spec:
      driver: Restic
      repository:
        name: s3-repo
      runtimeSettings: {}
      target:
        ref:
          apiVersion: apps/v1
          kind: Deployment
          name: rook-ceph-osd-0
        rules:
        - paths:
          - /source/data/
        volumeMounts:
        - mountPath: /source/data
          name: set1-data-1bzgdq-bridge
          subPath: ceph-0
      task: {}
      tempDir: {}
    

    The stash-init container shows an error; see the attached log file rook-ceph-osd-0-557656658b-rfqt2_stash-init.log.

    I'm confused by the error: is the pod failing to start because the /source/data/block file already exists, or because of the thrown error?

    Maybe one of you can help me to solve this problem.

    Thanks

Lightweight, single-binary Backup Repository client. Part of E2E Backup Architecture designed by RiotKit

Backup Maker Tiny backup client packed in a single binary. Interacts with a Backup Repository server to store files, uses GPG to secure your backups e

Apr 4, 2022
Kubegres is a Kubernetes operator allowing to create a cluster of PostgreSql instances and manage databases replication, failover and backup.

Kubegres is a Kubernetes operator allowing to deploy a cluster of PostgreSql pods with data replication enabled out-of-the box. It brings simplicity w

Dec 30, 2022
Cmsnr - cmsnr (pronounced "commissioner") is a lightweight framework for running OPA in a sidecar alongside your applications in Kubernetes.

cmsnr Description cmsnr (pronounced "commissioner") is a lightweight framework for running OPA in a sidecar alongside your applications in Kubernetes.

Jan 13, 2022
Kubernetes OS Server - Kubernetes Extension API server exposing OS configuration like sysctl via Kubernetes API

KOSS is an Extension API Server which exposes OS properties and functionality using the Kubernetes API, so it can be accessed using e.g. kubectl. At the moment this is highly experimental and only managing sysctl is supported. To make things actually usable, you must run the KOSS binary as root on the machine you will be managing.

May 19, 2021
Dgraph Backup and Restore (cloud). Read-only mirror.

dgbrx Dgraph Backup and Restore X dgbrx is a Go commandline tool which helps to do a backup, restore or clean on a Dgraph Cloud (aka slash / managed)

Oct 28, 2021
Tape backup software optimized for large WORM data and long-term recoverability

Mixtape Backup software for tape users with lots of WORM data. Draft design License This codebase is not open-source software (or free, or "libre") at

Oct 30, 2022
Kstone is an etcd management platform, providing cluster management, monitoring, backup, inspection, data migration, visual viewing of etcd data, and intelligent diagnosis.

Kstone δΈ­ζ–‡ Kstone is an etcd management platform, providing cluster management, monitoring, backup, inspection, data migration, visual viewing of etcd

Dec 27, 2022
A library for writing backup programs in Golang

Barkup godoc.org/github.com/keighl/barkup Barkup is a library for backing things up. It provides tools for writing bare-bones backup programs in Go. T

Nov 13, 2022
Simple SFTP backup tool for files.

BakTP Simple SFTP backup tool for files. config.example.json Contains an example how to backup a database. This application can be added to crontab -e

Dec 30, 2021
WaffleSyrup - Simple backup solution written by Go.

WaffleSyrup Simple backup solution written by Go. Usage WaffleSyrup runs in the current working directory. It will create ./tmp directory to save tarb

Apr 22, 2022
MongoBackup - This is container that takes backup of MongoDB

MongoBackup This is container that takes backup of MongoDB. It is ment to be ran

Feb 15, 2022
Lxmin - Backup and Restore LXC instances from MinIO

lxmin Backup and restore LXC instances from MinIO Usage NAME: lxmin - backup a

Dec 7, 2022
A simple program to automatically backup a database using git. Err handling by Sentry, Reporting by Betteruptime. Made with 🩸 , πŸ˜“ & 😭

backup What is this? A Simple program to automatically backup a database using git. Err handling by Sentry, Uses heartbeats by Betteruptime Made with

Nov 4, 2022
Simple backup tool for PostgreSQL

pg_back dumps databases from PostgreSQL Description pg_back is a dump tool for PostgreSQL. The goal is to dump all or some databases with globals at o

Dec 25, 2022
Github-backup application

Github-backup application This application clone your github repository with all commits, branch, tags etc. to your local disk Dependencies This App u

Dec 26, 2022
Build and deploy Go applications on Kubernetes

ko: Easy Go Containers ko is a simple, fast container image builder for Go applications. It's ideal for use cases where your image contains a single G

Jan 5, 2023
⚑️ A dev tool for microservice developers to run local applications and/or forward others from/to Kubernetes SSH or TCP

Your new microservice development environment friend. This CLI tool allows you to define a configuration to work with both local applications (Go, Nod

Jan 4, 2023
Continuous Delivery for Declarative Kubernetes, Serverless and Infrastructure Applications

Continuous Delivery for Declarative Kubernetes, Serverless and Infrastructure Applications Explore PipeCD docs Β» Overview PipeCD provides a unified co

Jan 3, 2023
A Kubernetes Operator used for pre-scaling applications in anticipation of load

Pre-Scaling Kubernetes Operator Built out of necessity, the Operator helps pre-scale applications in anticipation of load. At its core, it manages a c

Oct 14, 2021