Cloud-Native distributed storage built on and for Kubernetes

Longhorn

Build Status

  • Engine: Build Status · Go Report Card · FOSSA Status
  • Manager: Build Status · Go Report Card · FOSSA Status
  • Instance Manager: Build Status · Go Report Card · FOSSA Status
  • Share Manager: Build Status · Go Report Card · FOSSA Status
  • Backing Image Manager: Build Status · Go Report Card · FOSSA Status
  • UI: Build Status · FOSSA Status
  • Test: Build Status

Release Status

Release | Version | Type
1.1     | 1.1.2   | Stable
1.2     | 1.2.2   | Latest

Overview

Longhorn is a distributed block storage system for Kubernetes. Longhorn is cloud native storage because it is built using Kubernetes and container primitives.

Longhorn is lightweight, reliable, and powerful. You can install Longhorn on an existing Kubernetes cluster with one kubectl apply command or using Helm charts. Once Longhorn is installed, it adds persistent volume support to the Kubernetes cluster.
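
Requesting a Longhorn-backed volume is then an ordinary PersistentVolumeClaim. A minimal sketch, assuming the default longhorn StorageClass that the standard deployment manifest creates (the claim name is illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: demo-volume           # illustrative name
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: longhorn  # the StorageClass installed by Longhorn
      resources:
        requests:
          storage: 2Gi
    EOF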

Longhorn implements distributed block storage using containers and microservices. Longhorn creates a dedicated storage controller for each block device volume and synchronously replicates the volume across multiple replicas stored on multiple nodes. The storage controller and replicas are themselves orchestrated using Kubernetes. Here are some notable features of Longhorn:

  1. Enterprise-grade distributed storage with no single point of failure
  2. Incremental snapshot of block storage
  3. Backup to secondary storage (NFSv4 or S3-compatible object storage) built on efficient change block detection
  4. Recurring snapshots and backups (see the sketch after this list)
  5. Automated non-disruptive upgrade. You can upgrade the entire Longhorn software stack without disrupting running volumes!
  6. Intuitive GUI dashboard
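
As a sketch of item 4 above: in recent releases (v1.2 era), recurring snapshots and backups are declared through a RecurringJob custom resource. The fields below follow the longhorn.io/v1beta1 schema; the job name and schedule are made up for illustration:

    kubectl apply -f - <<'EOF'
    apiVersion: longhorn.io/v1beta1
    kind: RecurringJob
    metadata:
      name: daily-backup          # illustrative name
      namespace: longhorn-system
    spec:
      cron: "0 3 * * *"           # every day at 03:00
      task: backup                # "snapshot" is the other task type
      groups: ["default"]         # volumes in the default group
      retain: 7                   # keep the last 7 backups
      concurrency: 1
    EOF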

You can read more technical details of Longhorn here.

Get Involved

Community Meeting and Office Hours!: Hosted by the core maintainers of Longhorn on the 2nd Friday of every month at 09:00 Pacific Time (PT)/12:00 Eastern Time (ET) on Zoom: http://bit.ly/longhorn-community-meeting. Gcal event: http://bit.ly/longhorn-events

Longhorn Mailing List!: Stay up to date on the latest news and events: https://lists.cncf.io/g/cncf-longhorn

You can read more about the community and its events here: https://github.com/longhorn/community

Current status

The latest release of Longhorn is listed on the GitHub Releases page.

Source code

Longhorn is 100% open source software. Project source code is spread across a number of repos:

Component | What it does | GitHub repo
Longhorn Backing Image Manager | Backing image download, sync, and deletion in a disk | longhorn/backing-image-manager
Longhorn Engine | Core controller/replica logic | longhorn/longhorn-engine
Longhorn Instance Manager | Controller/replica instance lifecycle management | longhorn/longhorn-instance-manager
Longhorn Manager | Longhorn orchestration, including the CSI driver for Kubernetes | longhorn/longhorn-manager
Longhorn Share Manager | NFS provisioner that exposes Longhorn volumes as ReadWriteMany volumes | longhorn/longhorn-share-manager
Longhorn UI | The Longhorn dashboard | longhorn/longhorn-ui

[Screenshot: the Longhorn UI dashboard]

Requirements

For the installation requirements, refer to the Longhorn documentation.

Installation

Longhorn can be installed on a Kubernetes cluster in several ways: with kubectl, with Helm, or through the Rancher catalog. See the Longhorn documentation for the full steps.
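
As a sketch of the two most common paths (the manifest URL pins a specific release; substitute the version you want and check the documentation for the current one):

    # Option 1: kubectl, applying the deployment manifest of a pinned release
    kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.2.2/deploy/longhorn.yaml

    # Option 2: Helm
    helm repo add longhorn https://charts.longhorn.io
    helm repo update
    helm install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace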

Documentation

The official Longhorn documentation is here.

Community

Longhorn is open source software, so contributions are greatly welcome. Please read the Code of Conduct and the Contributing Guidelines before contributing.

Contributing code is not the only way of contributing. We value feedback very much, and many Longhorn features originated from users' feedback. If you have any feedback, feel free to file an issue and talk to the developers in the CNCF #longhorn Slack channel.

License

Copyright (c) 2014-2021 The Longhorn Authors

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Longhorn is a CNCF Sandbox Project

Comments
  • Feature Request: Support for ARM64 Architecture

    If it is not already working, I would like to request support for the ARM64 architecture, in order to run the full Rancher toolset on bare-metal ARM providers like Packet or Scaleway.

  • [BUG] Self-Hosted Minio-Backupstorage - timeout during Backup

    Describe the bug

    I have Longhorn 1.1.0 on Rancher 2.5.1 and a self-hosted MinIO backup storage. When I run a backup in Longhorn, MinIO sometimes reports a timeout error. The "Snapshots and Backups" view in Longhorn shows that a backup was carried out, but when I click on "Backup" in the Longhorn UI, the last backup is not available.

    To Reproduce

    Manual or automatic backup to MinIO.

    Expected behavior

    The backup is created.

    Log (docker logs -f)

    API: PutObject(bucket=k8s-cluster01, object=backupstore/volumes/79/dd/pvc-dca02b3d-8845-4e35-b4ba-7e004238d70d/blocks/2f/c1/2fc17d80430fbb443f3d6432f3d3565078acb49be1c6eff98a756888fffcc945.blk)
    Time: 13:52:46 UTC 01/28/2021
    DeploymentID: 60a01f5f-7567-48t6-a9f2-d86b7d8df3c6
    RequestID: 165E6977AB3A804E
    RemoteHost: XXX.XXX.XXX.XX
    Host: minio.domain.de
    UserAgent: aws-sdk-go/1.25.16 (go1.14.4; linux; amd64)
    Error: Operation timed out (cmd.OperationTimedOut)
           3: cmd/fs-v1.go:1100:cmd.(*FSObjects).PutObject()
           2: cmd/object-handlers.go:1565:cmd.objectAPIHandlers.PutObjectHandler()
           1: net/http/server.go:2042:http.HandlerFunc.ServeHTTP()
    

    Environment:

    • Longhorn version: 1.1.0
    • Kubernetes distro (e.g. RKE/K3s/EKS/OpenShift) and version: RKE
      • Number of management nodes in the cluster: 3
      • Number of worker nodes in the cluster: 3
    • Node config
      • OS type and version: Ubuntu 20.04
      • CPU per node: 32
      • Memory per node: 256 GB
      • Disk type(e.g. SSD/NVMe): SSD
      • Network bandwidth between the nodes: 1G
    • Underlying Infrastructure (e.g. on AWS/GCE, EKS/GKE, VMWare/KVM, Baremetal): Baremetal
    • Number of Longhorn volumes in the cluster: 25
  • cannot format to ext4 error on various environments

    Hi,

    I can't get Longhorn to work on my Kubernetes cluster. I'm using CentOS 7 as the base OS, and the iscsi-initiator-utils package is installed, which should contain all the tools required for Longhorn. Kubernetes version is 1.9.7.

    Installed Longhorn as follows:

    kubectl apply -f https://raw.githubusercontent.com/rancher/longhorn/v0.2/deploy/longhorn.yaml
    kubectl apply -f https://raw.githubusercontent.com/rancher/longhorn/v0.2/deploy/example-storageclass.yaml
    

    All components come up fine:

    longhorn-system   longhorn-flexvolume-driver-deployer-85dd94b9bc-gcwrv   1/1       Running             0          12m
    longhorn-system   longhorn-flexvolume-driver-td9hk                       1/1       Running             0          12m
    longhorn-system   longhorn-flexvolume-driver-xs62n                       1/1       Running             0          12m
    longhorn-system   longhorn-manager-2p9hp                                 1/1       Running             0          12m
    longhorn-system   longhorn-manager-2xf4g                                 1/1       Running             0          12m
    longhorn-system   longhorn-ui-599694bf-zhndh                             1/1       Running             0          12m
    

    Then tried one of the examples:

    kubectl apply -f https://raw.githubusercontent.com/rancher/longhorn/v0.2/examples/pvc.yaml

    This doesn't work; the pod gets stuck in the ContainerCreating phase:

    default volume-test 0/1 ContainerCreating 0 10m

    However, the pvc is bound:

    NAME                STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    longhorn-volv-pvc   Bound     pvc-8a69cce3-60ac-11e8-a060-02b050e3657a   2Gi        RWO            longhorn       11m
    

    In the Longhorn web UI, I saw the volume switching between attached and detached, and now it seems to have settled on "detached":

    Detached | pvc-8a69cce3-60ac-11e8-a060-02b050e3657a |   |   | 2 Gi | 12 minutes ago
    
    

    I also noticed some other containers spun up, but they got destroyed:

    longhorn-system   pvc-8a69cce3-60ac-11e8-a060-02b050e3657a-r-41479c98    0/1       Terminating         0          46s
    longhorn-system   pvc-8a69cce3-60ac-11e8-a060-02b050e3657a-r-456c4516    0/1       Terminating         0          46s
    longhorn-system   pvc-8a69cce3-60ac-11e8-a060-02b050e3657a-r-d1ffaba1    0/1       Terminating         0          46s
    

    How can I fix this?

  • [BUG] After updating Longhorn to version 1.3.0, one node had problems and I can't even delete it

    Hello, I updated my Longhorn to version 1.3.0, but one node was not healthy and it crashed some volumes; they are now degraded. What can I do? [Screenshot: 2022-07-07 22:00]

    I tried to delete the problematic node, but it just stays in this state:

    [Screenshot: 2022-07-07 22:30]
  • [Question] longhorn-driver-deployer can not start

    kubectl get pods \
    > --namespace longhorn-system
    NAME                                       READY   STATUS             RESTARTS   AGE
    engine-image-ei-eee5f438-s7lb4             1/1     Running            0          10m
    instance-manager-e-2c134851                1/1     Running            0          10m
    instance-manager-r-100de490                1/1     Running            0          10m
    longhorn-driver-deployer-cd74cb75b-dlgvt   0/1     Init:0/1           0          10m
    longhorn-manager-8g48d                     1/1     Running            0          10m
    longhorn-ui-8486987944-r78hc               0/1     CrashLoopBackOff   6          10m
    
    kubectl describe pod longhorn-driver-deployer-cd74cb75b-dlgvt   --namespace longhorn-system
    
    Events:
      Type    Reason     Age        From                              Message
      ----    ------     ----       ----                              -------
      Normal  Scheduled  <unknown>  default-scheduler                 Successfully assigned longhorn-system/longhorn-driver-deployer-cd74cb75b-dlgvt to izj6cco39nfexbhvl3qk7oz
      Normal  Pulled     11m        kubelet, izj6cco39nfexbhvl3qk7oz  Container image "longhornio/longhorn-manager:v1.0.0" already present on machine
      Normal  Created    11m        kubelet, izj6cco39nfexbhvl3qk7oz  Created container wait-longhorn-manager
      Normal  Started    11m        kubelet, izj6cco39nfexbhvl3qk7oz  Started container wait-longhorn-manager
    
    
    kubectl logs longhorn-driver-deployer-cd74cb75b-dlgvt   --namespace longhorn-system                                                                                                      
    Error from server (BadRequest): container "longhorn-driver-deployer" in pod "longhorn-driver-deployer-cd74cb75b-dlgvt" is waiting to start: PodInitializing
    
    kubectl logs longhorn-ui-8486987944-r78hc  --namespace longhorn-system
    2020/07/04 09:17:33 [warn] 1#1: duplicate MIME type "text/html" in /etc/nginx/nginx.conf:7
    nginx: [warn] duplicate MIME type "text/html" in /etc/nginx/nginx.conf:7
    2020/07/04 09:17:33 [emerg] 1#1: host not found in upstream "longhorn-backend" in /etc/nginx/nginx.conf:32
    nginx: [emerg] host not found in upstream "longhorn-backend" in /etc/nginx/nginx.conf:32
    
  • [BUG] helm upgrade won't apply customized default settings

    Is your feature request related to a problem? Please describe.

    The longhorn-default-setting ConfigMap is synced to the Settings CR only when the Settings CR does not exist. This means that if the user changes a setting during helm upgrade, the change won't be applied to the Settings CR (it applies to the longhorn-default-setting ConfigMap only):

    helm upgrade longhorn longhorn/longhorn -n longhorn-system \
        --set defaultSettings.backupTarget=<new-backup-target> \
        --set defaultSettings.backupTargetCredentialSecret=<new-backup-target-credential-secret>
    

    Describe the solution you'd like

    I think we could configure the Settings CR directly, so we wouldn't have to write a setting controller to reconcile the longhorn-default-setting ConfigMap into the Settings CR. However, to accomplish this, we need a structural schema on the CRDs and an admission webhook to validate the input of these settings.
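
    For reference, the settings already exist as namespaced custom resources, so they can be inspected and edited directly today. A rough sketch (resource and namespace names as in a stock install; backup-target is one example setting):

    kubectl -n longhorn-system get settings.longhorn.io
    kubectl -n longhorn-system edit settings.longhorn.io backup-target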

    Describe alternatives you've considered

    N/A

    Additional context

    Related issues

    • https://github.com/longhorn/longhorn/issues/2562#issuecomment-832413461
    • https://github.com/longhorn/longhorn/issues/2539#issuecomment-827290662
    • https://github.com/longhorn/longhorn/issues/2611
    • https://github.com/longhorn/longhorn/issues/2744
    • https://github.com/longhorn/longhorn/issues/2825
    • https://github.com/longhorn/longhorn/issues/3398
    • https://github.com/longhorn/longhorn/issues/3458
  • [IMPROVEMENT] Support K8s 1.25 by updating removed deprecated resource versions like PodSecurityPolicy

    Is your improvement request related to a feature? Please describe

    PodSecurityPolicy has been deprecated and will be removed in K8s 1.25, so we need to find an alternative way to resolve the need for PSP in Longhorn in order to support 1.25.

    Also, some deprecated resource versions are removed in 1.25. We need to resolve this via https://github.com/longhorn/longhorn/issues/4239, and possibly even backport it to 1.3 & 1.2 by adaptively determining the cluster's K8s version to decide which API resource version to use, where possible (except PSP, which is removed entirely rather than version-bumped). The affected resources are listed below; a quick check follows the list.

    • Cronjob v1beta1 -> v1
    • EndpointSlice v1beta1 -> v1
    • Event v1beta1 -> v1
    • HorizontalPodAutoscaler v2beta1 -> v2
    • PodDisruptionBudget v1beta1 -> v1
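
    A quick way to check which of these versions a given cluster actually serves before picking one (plain kubectl; the grep pattern just narrows the output to the groups listed above):

    kubectl api-versions | grep -E '^(batch|discovery.k8s.io|autoscaling|policy)/'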

    Note: client-go is backward compatible with any K8s version. From the client-go compatibility notes ("Compatibility: client-go <-> Kubernetes clusters"): since Kubernetes is backwards compatible with clients, older client-go versions will work with many different Kubernetes cluster versions.

    Describe the solution you'd like

    Deprecate PSP if it's not needed. Otherwise, we need an alternative solution like https://kubernetes.io/docs/tasks/configure-pod-container/migrate-from-psp/.
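
    For what it's worth, the migration path referenced above is Pod Security admission, which is label-driven. A minimal sketch of exempting the Longhorn namespace (the label key and value are standard Kubernetes; choosing the privileged level is an assumption):

    kubectl label namespace longhorn-system pod-security.kubernetes.io/enforce=privileged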

    Describe alternatives you've considered

    N/A

    Additional context

    • https://www.kubernetes.dev/resources/release/#timeline
    • https://kubernetes.io/docs/reference/using-api/deprecation-guide/#v1-25
    • https://github.com/longhorn/longhorn/issues/4239

  • [BUG] Backup - S3 Timeout

    Describe the bug

    In the backup UI, I get an error and the backups are not listed.

    error listing backups: error listing backup volumes: Timeout executing: /var/lib/longhorn/engine-binaries/longhornio-longhorn-engine-v1.0.0/longhorn [backup ls --volume-only s3://PATH/], output , stderr, , error <nil>

    Number of Volumes: 39
    Number of Volumes with backup enabled: 32

    S3 Bucket Size: 2TB

    Expected behavior

    Backups of volumes are shown.

    Log

    time="2020-11-05T12:55:15Z" level=warning msg="backup store monitor: failed to list backup volumes in s3://S3PATH/: error listing backup volumes: Timeout executing: /var/lib/longhorn/engine-binaries/longhornio-longhorn-engine-v1.0.0/longhorn [backup ls --volume-only s3://longhorn-production@ch-dk-2/], output , stderr, , error <nil>"

    Environment:

    • Longhorn version: 1.0.0
    • Kubernetes version: 1.18.3
    • Node OS type and version: Centos 7.7

    Additional context

    On an empty bucket, the backups are listed normally.

    S3 Provider: https://exoscale.com

  • [BUG] Corruption using XFS after node restart or pod scale

    Describe the bug

    Upon either restarting a Kubernetes worker node hosting Longhorn replicas, or during pod scaling (e.g. scaling a StatefulSet to 0), I've sometimes experienced silent data corruption. Following the Longhorn docs for detecting and deleting a failed replica does find a different hash, but deleting the replica doesn't solve the problem. Ultimately, restoring from a snapshot is the only fix.
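
    For reference, that doc procedure boils down to checksumming the same replica's data files on each node and comparing the results. A rough sketch, assuming the default data path /var/lib/longhorn and using one of the volume names from the logs below purely as an illustration (detach the volume first so the data is quiescent):

    # Run on every node that holds a replica of the volume, then compare:
    sha512sum /var/lib/longhorn/replicas/pvc-81fb33a4*/*.img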

    Of note, I'm using XFS as the underlying filesystem, and all affected apps have used SQLite.

    Unfortunately/fortunately, this problem seems to be non-deterministic. I've experienced it twice; once when restarting a node for maintenance, and once scaling an app down and up.

    To Reproduce

    Steps to reproduce the behavior:

    1. Use StatefulSets with VolumeClaimTemplates to define PVCs.
    2. Restart a node containing replicas, or scale StatefulSets utilizing the PVCs to 0 and back to n>0.

    Expected behavior

    The replica volumes to return to service with no corruption.

    Log or Support bundle

    Example dmesg from a longhorn-manager pod. Note that while this shows the recovery completing, others do not. I unfortunately don't have timestamps to match up a failed one, although I can confirm that I experienced issues with this linked PVC, requiring a snapshot recovery.

    
    [ 1332.278329] XFS (sdc): Metadata CRC error detected at xfs_agfl_read_verify+0xa2/0xf0 [xfs], xfs_agfl block 0x3
    [ 1332.278332] XFS (sdc): Unmount and run xfs_repair
    [ 1332.278334] XFS (sdc): First 128 bytes of corrupted metadata buffer:
    [ 1332.278343] 00000000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    [ 1332.278344] 00000010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    [ 1332.278345] 00000020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    [ 1332.278346] 00000030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    [ 1332.278346] 00000040: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    [ 1332.278347] 00000050: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    [ 1332.278348] 00000060: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    [ 1332.278348] 00000070: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    [ 1332.278368] XFS (sdc): metadata I/O error in "xfs_trans_read_buf_map" at daddr 0x3 len 1 error 74
    [ 1332.365389] XFS (sdc): xfs_do_force_shutdown(0x8) called from line 446 of file fs/xfs/libxfs/xfs_defer.c. Return address = 0000000061a87018
    [ 1332.365393] XFS (sdc): Corruption of in-memory data detected.  Shutting down filesystem
    [ 1332.365394] XFS (sdc): Please unmount the filesystem and rectify the problem(s)
    [ 1447.117969] cni0: port 13(veth6f28afba) entered disabled state
    [ 1447.129033] device veth6f28afba left promiscuous mode
    [ 1447.129059] cni0: port 13(veth6f28afba) entered disabled state
    [ 1448.304291] XFS (sdc): Unmounting Filesystem
    [ 1452.882691] sd 7:0:0:1: [sdc] Synchronizing SCSI cache
    [ 1459.419198] scsi host7: iSCSI Initiator over TCP/IP
    [ 1459.448202] scsi 7:0:0:0: RAID              IET      Controller       0001 PQ: 0 ANSI: 5
    [ 1459.450009] scsi 7:0:0:0: Attached scsi generic sg3 type 12
    [ 1459.451601] scsi 7:0:0:1: Direct-Access     IET      VIRTUAL-DISK     0001 PQ: 0 ANSI: 5
    [ 1459.452849] sd 7:0:0:1: Attached scsi generic sg4 type 0
    [ 1459.453233] sd 7:0:0:1: Power-on or device reset occurred
    [ 1459.455747] sd 7:0:0:1: [sdc] 4194304 512-byte logical blocks: (2.15 GB/2.00 GiB)
    [ 1459.456106] sd 7:0:0:1: [sdc] Write Protect is off
    [ 1459.456111] sd 7:0:0:1: [sdc] Mode Sense: 69 00 10 08
    [ 1459.456794] sd 7:0:0:1: [sdc] Write cache: enabled, read cache: enabled, supports DPO and FUA
    [ 1460.586295] sd 7:0:0:1: [sdc] Attached SCSI disk
    [ 1469.044275] XFS (sdc): Mounting V5 Filesystem
    [ 1471.123131] XFS (sdc): Starting recovery (logdev: internal)
    [ 1471.572637] XFS (sdc): Ending recovery (logdev: internal)
    [ 1471.624639] xfs filesystem being mounted at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-81fb33a4-4b2e-425b-b6df-9dbd7af36034/globalmount supports timestamps until 2038 (0x7fffffff)
    [ 1471.850475] xfs filesystem being remounted at /var/lib/kubelet/pods/f0f3ff6b-841c-4c4e-a0f9-28cbd9fbac80/volumes/kubernetes.io~csi/pvc-81fb33a4-4b2e-425b-b6df-9dbd7af36034/mount supports timestamps until 2038
    (0x7fffffff)
    
    

    /sys/devices/system/edac/mc/mc{0,1} on each node shows 0 corrected and 0 uncorrected errors.

    Environment

    • Longhorn version: v1.2.3
    • Installation method (e.g. Rancher Catalog App/Helm/Kubectl): Helm
    • Kubernetes distro (e.g. RKE/K3s/EKS/OpenShift) and version: K3s v1.21.5+k3s2
      • Number of management nodes in the cluster: 3
      • Number of worker nodes in the cluster: 3
    • Node config
      • OS type and version: k3OS v0.21.5-k3s2r1
      • CPU per node: E5-2650 v2
        • Manager: 4
        • Worker: 28
      • Memory per node: DDR3-PC10600R
        • Manager: 8 Gi
        • Worker: 24 Gi
      • Disk type(e.g. SSD/NVMe): SSD (Intel DC S3500 300GB)
      • Network bandwidth between the nodes: 1 GbE
    • Underlying Infrastructure (e.g. on AWS/GCE, EKS/GKE, VMWare/KVM, Baremetal): KVM (Proxmox)
    • Number of Longhorn volumes in the cluster: 20


  • Removing the first backup snapshot of a restored volume results in the rebuild always failing

    There is a problem with the other two replicas, which weren't rebuilding:

    time="2018-12-27T07:00:05Z" level=error msg="Error in request: Replica tcp://10.42.0.51:9502's chain not equal to RW replica tcp://10.42.1.245:9502's chain"
    

    This caused the rebuilding replica to be reopened repeatedly without actually being worked on.

    time="2018-12-27T07:45:01Z" level=error msg="Error in request: Replica must be closed, Can not add in state: open"
    

    See the comments starting at: https://github.com/rancher/longhorn/issues/253#issuecomment-443540620

  • [BUG] Instance managers and Pods with attached volumes restarted every hour

    Describe the bug

    On a number of different clusters I've had over the past few months (k3s of various versions, on various clouds, and OKE (Oracle Cloud)) on 1.19 and 1.20, I've had an issue where all instance managers and Pods with attached volumes get restarted precisely every hour. Sometimes redeploying the whole of Longhorn from scratch and restoring from backup resolves the issue. The nodes are all healthy in this scenario, and the rest of the cluster is stable and unchanged. However, like clockwork, every hour it restarts all my pods.

    Related to: https://github.com/longhorn/longhorn/issues/2435

    To Reproduce

    I need to do further testing to see if it happens with a completely fresh cluster without my backups restored. However, it is currently happening for me on a two-node ARM-based OKE cluster on Kubernetes 1.20, with 6 small volumes (100-500 MB) restored from S3 backups and 3 larger 1-10 GB volumes which were created fresh.

    Expected behavior

    Everything should not restart every hour.

    Log

    There is nothing in the logs indicating this is about to happen. The only thing that gets logged is the recovery.

    time="2021-07-24T08:53:13Z" level=debug msg="Polling backup store for new volume backups" component=backup-store-monitor controller=longhorn-setting node=10.0.121.185
    time="2021-07-24T08:53:13Z" level=debug msg="Refreshed all volumes last backup based on backup store information" component=backup-store-monitor controller=longhorn-setting node=10.0.121.185
    time="2021-07-24T09:00:48Z" level=debug msg="Stop monitoring instance manager instance-manager-r-07214e36" controller=longhorn-instance-manager instance manager=instance-manager-r-07214e36 node=10.0.121.185
    time="2021-07-24T09:00:48Z" level=debug msg="removed the engine from imc.instanceManagerMonitorMap" controller=longhorn-instance-manager instance manager=instance-manager-r-07214e36 node=10.0.121.185
    time="2021-07-24T09:00:48Z" level=error msg="error receiving next item in engine watch: rpc error: code = Canceled desc = context canceled" controller=longhorn-instance-manager instance manager=instance-manager-r-07214e36 node=10.0.121.185
    time="2021-07-24T09:00:48Z" level=error msg="error receiving next item in engine watch: rpc error: code = Unavailable desc = transport is closing" controller=longhorn-instance-manager instance manager=instance-manager-e-04771971 node=10.0.121.185
    time="2021-07-24T09:00:48Z" level=debug msg="Stop monitoring instance manager instance-manager-e-04771971" controller=longhorn-instance-manager instance manager=instance-manager-e-04771971 node=10.0.121.185
    time="2021-07-24T09:00:48Z" level=debug msg="removed the engine from imc.instanceManagerMonitorMap" controller=longhorn-instance-manager instance manager=instance-manager-e-04771971 node=10.0.121.185
    time="2021-07-24T09:00:48Z" level=warning msg="Cannot find the instance manager for the running instance pvc-7be8efab-347f-463a-b507-3875c8e369fc-r-62b67066, will mark the instance as state ERROR"
    time="2021-07-24T09:00:48Z" level=debug msg="Instance handler updated instance pvc-7be8efab-347f-463a-b507-3875c8e369fc-r-62b67066 state, old state running, new state error"
    time="2021-07-24T09:00:48Z" level=warning msg="Cannot find the instance manager for the running instance pvc-5c4f2a7f-97e0-4c23-8055-4d78c302c270-r-d98ed33c, will mark the instance as state ERROR"
    time="2021-07-24T09:00:48Z" level=debug msg="Instance handler updated instance pvc-5c4f2a7f-97e0-4c23-8055-4d78c302c270-r-d98ed33c state, old state running, new state error"
    time="2021-07-24T09:00:48Z" level=warning msg="Cannot find the instance manager for the running instance pvc-ee608cc7-bc48-4abc-97d2-c72caf3a0c72-r-553d4483, will mark the instance as state ERROR"
    time="2021-07-24T09:00:48Z" level=debug msg="Instance handler updated instance pvc-ee608cc7-bc48-4abc-97d2-c72caf3a0c72-r-553d4483 state, old state running, new state error"
    time="2021-07-24T09:00:48Z" level=warning msg="Cannot find the instance manager for the running instance pvc-de808e12-9704-413c-a4b9-86bcabe4ca6e-r-2b7f722e, will mark the instance as state ERROR"
    time="2021-07-24T09:00:48Z" level=debug msg="Instance handler updated instance pvc-de808e12-9704-413c-a4b9-86bcabe4ca6e-r-2b7f722e state, old state running, new state error"
    time="2021-07-24T09:00:48Z" level=warning msg="Cannot find the instance manager for the running instance pvc-65580647-e44a-4210-8545-2aff63ff0fe2-r-711d6852, will mark the instance as state ERROR"
    time="2021-07-24T09:00:48Z" level=debug msg="Instance handler updated instance pvc-65580647-e44a-4210-8545-2aff63ff0fe2-r-711d6852 state, old state running, new state error"
    time="2021-07-24T09:00:48Z" level=warning msg="Cannot find the instance manager for the running instance pvc-ab37a40c-a031-4b0d-bb6d-a09f01bd3338-r-af4a8f2e, will mark the instance as state ERROR"
    time="2021-07-24T09:00:48Z" level=debug msg="Instance handler updated instance pvc-ab37a40c-a031-4b0d-bb6d-a09f01bd3338-r-af4a8f2e state, old state running, new state error"
    time="2021-07-24T09:00:48Z" level=warning msg="Cannot find the instance manager for the running instance pvc-3775cc56-b35a-4d5f-8d4b-ed16880d19c1-r-4a8c740c, will mark the instance as state ERROR"
    time="2021-07-24T09:00:48Z" level=debug msg="Instance handler updated instance pvc-3775cc56-b35a-4d5f-8d4b-ed16880d19c1-r-4a8c740c state, old state running, new state error"
    time="2021-07-24T09:00:48Z" level=warning msg="Cannot find the instance manager for the running instance pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa-r-d55a13c9, will mark the instance as state ERROR"
    time="2021-07-24T09:00:48Z" level=debug msg="Instance handler updated instance pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa-r-d55a13c9 state, old state running, new state error"
    time="2021-07-24T09:00:48Z" level=warning msg="Cannot find the instance manager for the running instance pvc-dd45f8fb-fc5e-4abb-bc15-62ed9d042dd6-r-5878c031, will mark the instance as state ERROR"
    time="2021-07-24T09:00:48Z" level=debug msg="Instance handler updated instance pvc-dd45f8fb-fc5e-4abb-bc15-62ed9d042dd6-r-5878c031 state, old state running, new state error"
    time="2021-07-24T09:00:48Z" level=warning msg="Try to get requested log for pvc-5c4f2a7f-97e0-4c23-8055-4d78c302c270-r-d98ed33c on node 10.0.121.185"
    time="2021-07-24T09:00:48Z" level=warning msg="cannot get requested log for instance pvc-5c4f2a7f-97e0-4c23-8055-4d78c302c270-r-d98ed33c on node 10.0.121.185, error invalid Instance Manager instance-manager-r-07214e36, state: error, IP: "
    time="2021-07-24T09:00:48Z" level=debug msg="Instance handler updated instance pvc-5c4f2a7f-97e0-4c23-8055-4d78c302c270-r-d98ed33c state, old state error, new state stopped"
    time="2021-07-24T09:00:48Z" level=warning msg="Try to get requested log for pvc-7be8efab-347f-463a-b507-3875c8e369fc-r-62b67066 on node 10.0.121.185"
    time="2021-07-24T09:00:48Z" level=warning msg="cannot get requested log for instance pvc-7be8efab-347f-463a-b507-3875c8e369fc-r-62b67066 on node 10.0.121.185, error invalid Instance Manager instance-manager-r-07214e36, state: error, IP: "
    time="2021-07-24T09:00:48Z" level=debug msg="Instance handler updated instance pvc-7be8efab-347f-463a-b507-3875c8e369fc-r-62b67066 state, old state error, new state stopped"
    time="2021-07-24T09:00:48Z" level=warning msg="Try to get requested log for pvc-65580647-e44a-4210-8545-2aff63ff0fe2-r-711d6852 on node 10.0.121.185"
    time="2021-07-24T09:00:48Z" level=warning msg="cannot get requested log for instance pvc-65580647-e44a-4210-8545-2aff63ff0fe2-r-711d6852 on node 10.0.121.185, error invalid Instance Manager instance-manager-r-07214e36, state: error, IP: "
    time="2021-07-24T09:00:48Z" level=debug msg="Instance handler updated instance pvc-65580647-e44a-4210-8545-2aff63ff0fe2-r-711d6852 state, old state error, new state stopped"
    time="2021-07-24T09:00:58Z" level=info msg="Created instance manager pod instance-manager-e-04771971 for instance manager instance-manager-e-04771971"
    time="2021-07-24T09:00:58Z" level=info msg="Created instance manager pod instance-manager-r-07214e36 for instance manager instance-manager-r-07214e36"
    time="2021-07-24T09:00:58Z" level=warning msg="The starting instance manager instance-manager-r-07214e36 shouldn't contain the running instance pvc-de808e12-9704-413c-a4b9-86bcabe4ca6e-r-2b7f722e, will mark the instance as state ERROR"
    time="2021-07-24T09:00:58Z" level=warning msg="The starting instance manager instance-manager-r-07214e36 shouldn't contain the running instance pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa-r-d55a13c9, will mark the instance as state ERROR"
    time="2021-07-24T09:00:58Z" level=warning msg="The starting instance manager instance-manager-r-07214e36 shouldn't contain the running instance pvc-dd45f8fb-fc5e-4abb-bc15-62ed9d042dd6-r-5878c031, will mark the instance as state ERROR"
    time="2021-07-24T09:00:58Z" level=warning msg="The starting instance manager instance-manager-r-07214e36 shouldn't contain the running instance pvc-ab37a40c-a031-4b0d-bb6d-a09f01bd3338-r-af4a8f2e, will mark the instance as state ERROR"
    time="2021-07-24T09:00:58Z" level=warning msg="The starting instance manager instance-manager-r-07214e36 shouldn't contain the running instance pvc-3775cc56-b35a-4d5f-8d4b-ed16880d19c1-r-4a8c740c, will mark the instance as state ERROR"
    time="2021-07-24T09:00:58Z" level=warning msg="The starting instance manager instance-manager-r-07214e36 shouldn't contain the running instance pvc-ee608cc7-bc48-4abc-97d2-c72caf3a0c72-r-553d4483, will mark the instance as state ERROR"
    time="2021-07-24T09:00:59Z" level=debug msg="Start monitoring instance manager instance-manager-e-04771971" controller=longhorn-instance-manager instance manager=instance-manager-e-04771971 node=10.0.121.185
    time="2021-07-24T09:00:59Z" level=debug msg="Start monitoring instance manager instance-manager-r-07214e36" controller=longhorn-instance-manager instance manager=instance-manager-r-07214e36 node=10.0.121.185
    time="2021-07-24T09:00:59Z" level=warning msg="Cannot find the instance status in instance manager instance-manager-r-07214e36 for the running instance pvc-3775cc56-b35a-4d5f-8d4b-ed16880d19c1-r-4a8c740c, will mark the instance as state ERROR"
    time="2021-07-24T09:00:59Z" level=warning msg="Cannot find the instance status in instance manager instance-manager-r-07214e36 for the running instance pvc-de808e12-9704-413c-a4b9-86bcabe4ca6e-r-2b7f722e, will mark the instance as state ERROR"
    time="2021-07-24T09:00:59Z" level=warning msg="Cannot find the instance status in instance manager instance-manager-r-07214e36 for the running instance pvc-ab37a40c-a031-4b0d-bb6d-a09f01bd3338-r-af4a8f2e, will mark the instance as state ERROR"
    time="2021-07-24T09:00:59Z" level=warning msg="Cannot find the instance status in instance manager instance-manager-r-07214e36 for the running instance pvc-ee608cc7-bc48-4abc-97d2-c72caf3a0c72-r-553d4483, will mark the instance as state ERROR"
    time="2021-07-24T09:00:59Z" level=warning msg="Cannot find the instance status in instance manager instance-manager-r-07214e36 for the running instance pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa-r-d55a13c9, will mark the instance as state ERROR"
    time="2021-07-24T09:00:59Z" level=warning msg="Cannot find the instance status in instance manager instance-manager-r-07214e36 for the running instance pvc-dd45f8fb-fc5e-4abb-bc15-62ed9d042dd6-r-5878c031, will mark the instance as state ERROR"
    time="2021-07-24T09:01:02Z" level=debug msg="Instance handler updated instance pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa-r-d55a13c9 state, old state error, new state stopped"
    time="2021-07-24T09:01:04Z" level=debug msg="Instance handler updated instance pvc-ee608cc7-bc48-4abc-97d2-c72caf3a0c72-r-553d4483 state, old state error, new state stopped"
    time="2021-07-24T09:01:06Z" level=debug msg="Instance handler updated instance pvc-ab37a40c-a031-4b0d-bb6d-a09f01bd3338-r-af4a8f2e state, old state error, new state stopped"
    time="2021-07-24T09:01:07Z" level=debug msg="Instance handler updated instance pvc-de808e12-9704-413c-a4b9-86bcabe4ca6e-r-2b7f722e state, old state error, new state stopped"
    time="2021-07-24T09:01:09Z" level=debug msg="Instance handler updated instance pvc-3775cc56-b35a-4d5f-8d4b-ed16880d19c1-r-4a8c740c state, old state error, new state stopped"
    time="2021-07-24T09:01:13Z" level=debug msg="Instance handler updated instance pvc-dd45f8fb-fc5e-4abb-bc15-62ed9d042dd6-r-5878c031 state, old state error, new state stopped"
    time="2021-07-24T09:01:16Z" level=debug msg="Stop monitoring instance manager instance-manager-r-07214e36" controller=longhorn-instance-manager instance manager=instance-manager-r-07214e36 node=10.0.121.185
    time="2021-07-24T09:01:16Z" level=debug msg="removed the engine from imc.instanceManagerMonitorMap" controller=longhorn-instance-manager instance manager=instance-manager-r-07214e36 node=10.0.121.185
    time="2021-07-24T09:01:16Z" level=error msg="error receiving next item in engine watch: rpc error: code = Canceled desc = context canceled" controller=longhorn-instance-manager instance manager=instance-manager-r-07214e36 node=10.0.121.185
    time="2021-07-24T09:01:17Z" level=debug msg="Stop monitoring instance manager instance-manager-e-04771971" controller=longhorn-instance-manager instance manager=instance-manager-e-04771971 node=10.0.121.185
    time="2021-07-24T09:01:17Z" level=debug msg="removed the engine from imc.instanceManagerMonitorMap" controller=longhorn-instance-manager instance manager=instance-manager-e-04771971 node=10.0.121.185
    time="2021-07-24T09:01:17Z" level=error msg="error receiving next item in engine watch: rpc error: code = Canceled desc = context canceled" controller=longhorn-instance-manager instance manager=instance-manager-e-04771971 node=10.0.121.185
    time="2021-07-24T09:01:19Z" level=warning msg="Error syncing Longhorn replica longhorn-system/pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa-r-d55a13c9" controller=longhorn-replica error="fail to sync replica for longhorn-system/pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa-r-d55a13c9: invalid Instance Manager instance-manager-r-07214e36, state: error, IP: " node=10.0.121.185
    time="2021-07-24T09:01:19Z" level=warning msg="Error syncing Longhorn replica longhorn-system/pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa-r-d55a13c9" controller=longhorn-replica error="fail to sync replica for longhorn-system/pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa-r-d55a13c9: invalid Instance Manager instance-manager-r-07214e36, state: error, IP: " node=10.0.121.185
    E0724 09:01:19.450376       1 replica_controller.go:178] fail to sync replica for longhorn-system/pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa-r-d55a13c9: invalid Instance Manager instance-manager-r-07214e36, state: error, IP:
    time="2021-07-24T09:01:19Z" level=warning msg="Dropping Longhorn replica longhorn-system/pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa-r-d55a13c9 out of the queue" controller=longhorn-replica error="fail to sync replica for longhorn-system/pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa-r-d55a13c9: invalid Instance Manager instance-manager-r-07214e36, state: error, IP: " node=10.0.121.185
    10.244.2.136 - - [24/Jul/2021:09:01:25 +0000] "GET /v1/volumes/pvc-3775cc56-b35a-4d5f-8d4b-ed16880d19c1 HTTP/1.1" 200 3453 "" "Go-http-client/1.1"
    time="2021-07-24T09:01:25Z" level=info msg="Volume pvc-3775cc56-b35a-4d5f-8d4b-ed16880d19c1 detachment from node 10.0.117.182 requested"
    10.244.2.136 - - [24/Jul/2021:09:01:25 +0000] "GET /v1/volumes/pvc-ab37a40c-a031-4b0d-bb6d-a09f01bd3338 HTTP/1.1" 200 4564 "" "Go-http-client/1.1"
    time="2021-07-24T09:01:25Z" level=info msg="Volume pvc-ab37a40c-a031-4b0d-bb6d-a09f01bd3338 detachment from node 10.0.117.182 requested"
    10.244.2.136 - - [24/Jul/2021:09:01:25 +0000] "POST /v1/volumes/pvc-3775cc56-b35a-4d5f-8d4b-ed16880d19c1?action=detach HTTP/1.1" 200 2229 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:25 +0000] "GET /v1/volumes/pvc-ee608cc7-bc48-4abc-97d2-c72caf3a0c72 HTTP/1.1" 200 3525 "" "Go-http-client/1.1"
    time="2021-07-24T09:01:25Z" level=info msg="Volume pvc-ee608cc7-bc48-4abc-97d2-c72caf3a0c72 detachment from node 10.0.117.182 requested"
    10.244.2.136 - - [24/Jul/2021:09:01:25 +0000] "POST /v1/volumes/pvc-ab37a40c-a031-4b0d-bb6d-a09f01bd3338?action=detach HTTP/1.1" 200 3362 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:25 +0000] "GET /v1/volumes/pvc-7be8efab-347f-463a-b507-3875c8e369fc HTTP/1.1" 200 4568 "" "Go-http-client/1.1"
    time="2021-07-24T09:01:25Z" level=info msg="Volume pvc-7be8efab-347f-463a-b507-3875c8e369fc detachment from node 10.0.117.182 requested"
    10.244.2.136 - - [24/Jul/2021:09:01:25 +0000] "GET /v1/volumes/pvc-dd45f8fb-fc5e-4abb-bc15-62ed9d042dd6 HTTP/1.1" 200 4558 "" "Go-http-client/1.1"
    time="2021-07-24T09:01:25Z" level=info msg="Volume pvc-dd45f8fb-fc5e-4abb-bc15-62ed9d042dd6 detachment from node 10.0.117.182 requested"
    10.244.2.136 - - [24/Jul/2021:09:01:25 +0000] "POST /v1/volumes/pvc-7be8efab-347f-463a-b507-3875c8e369fc?action=detach HTTP/1.1" 200 3366 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:25 +0000] "POST /v1/volumes/pvc-dd45f8fb-fc5e-4abb-bc15-62ed9d042dd6?action=detach HTTP/1.1" 200 3376 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:25 +0000] "GET /v1/volumes/pvc-de808e12-9704-413c-a4b9-86bcabe4ca6e HTTP/1.1" 200 3465 "" "Go-http-client/1.1"
    time="2021-07-24T09:01:25Z" level=info msg="Volume pvc-de808e12-9704-413c-a4b9-86bcabe4ca6e detachment from node 10.0.117.182 requested"
    10.244.2.136 - - [24/Jul/2021:09:01:25 +0000] "POST /v1/volumes/pvc-ee608cc7-bc48-4abc-97d2-c72caf3a0c72?action=detach HTTP/1.1" 200 2324 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:25 +0000] "POST /v1/volumes/pvc-de808e12-9704-413c-a4b9-86bcabe4ca6e?action=detach HTTP/1.1" 200 2244 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:27 +0000] "GET /v1/volumes/pvc-3775cc56-b35a-4d5f-8d4b-ed16880d19c1 HTTP/1.1" 200 3454 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:27 +0000] "GET /v1/volumes/pvc-ab37a40c-a031-4b0d-bb6d-a09f01bd3338 HTTP/1.1" 200 4584 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:27 +0000] "GET /v1/volumes/pvc-7be8efab-347f-463a-b507-3875c8e369fc HTTP/1.1" 200 4568 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:27 +0000] "GET /v1/volumes/pvc-dd45f8fb-fc5e-4abb-bc15-62ed9d042dd6 HTTP/1.1" 200 4558 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:27 +0000] "GET /v1/volumes/pvc-ee608cc7-bc48-4abc-97d2-c72caf3a0c72 HTTP/1.1" 200 3526 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:27 +0000] "GET /v1/volumes/pvc-de808e12-9704-413c-a4b9-86bcabe4ca6e HTTP/1.1" 200 3466 "" "Go-http-client/1.1"
    time="2021-07-24T09:01:28Z" level=info msg="Created instance manager pod instance-manager-r-07214e36 for instance manager instance-manager-r-07214e36"
    time="2021-07-24T09:01:28Z" level=info msg="Created instance manager pod instance-manager-e-04771971 for instance manager instance-manager-e-04771971"
    10.244.2.136 - - [24/Jul/2021:09:01:29 +0000] "GET /v1/volumes/pvc-3775cc56-b35a-4d5f-8d4b-ed16880d19c1 HTTP/1.1" 200 3454 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:29 +0000] "GET /v1/volumes/pvc-ee608cc7-bc48-4abc-97d2-c72caf3a0c72 HTTP/1.1" 200 3506 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:29 +0000] "GET /v1/volumes/pvc-de808e12-9704-413c-a4b9-86bcabe4ca6e HTTP/1.1" 200 3446 "" "Go-http-client/1.1"
    time="2021-07-24T09:01:29Z" level=debug msg="Start monitoring instance manager instance-manager-e-04771971" controller=longhorn-instance-manager instance manager=instance-manager-e-04771971 node=10.0.121.185
    time="2021-07-24T09:01:30Z" level=debug msg="Start monitoring instance manager instance-manager-r-07214e36" controller=longhorn-instance-manager instance manager=instance-manager-r-07214e36 node=10.0.121.185
    10.244.2.136 - - [24/Jul/2021:09:01:31 +0000] "GET /v1/volumes/pvc-3775cc56-b35a-4d5f-8d4b-ed16880d19c1 HTTP/1.1" 200 3434 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:31 +0000] "GET /v1/volumes/pvc-ee608cc7-bc48-4abc-97d2-c72caf3a0c72 HTTP/1.1" 200 3526 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:31 +0000] "GET /v1/nodes/10.0.117.182 HTTP/1.1" 200 2197 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:31 +0000] "GET /v1/volumes/pvc-dd45f8fb-fc5e-4abb-bc15-62ed9d042dd6 HTTP/1.1" 200 4558 "" "Go-http-client/1.1"
    time="2021-07-24T09:01:31Z" level=info msg="Volume pvc-dd45f8fb-fc5e-4abb-bc15-62ed9d042dd6 attachment to 10.0.117.182 with disableFrontend false requested"
    10.244.2.136 - - [24/Jul/2021:09:01:31 +0000] "POST /v1/volumes/pvc-dd45f8fb-fc5e-4abb-bc15-62ed9d042dd6?action=attach HTTP/1.1" 200 3376 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:31 +0000] "GET /v1/volumes/pvc-de808e12-9704-413c-a4b9-86bcabe4ca6e HTTP/1.1" 200 3466 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:33 +0000] "GET /v1/volumes/pvc-3775cc56-b35a-4d5f-8d4b-ed16880d19c1 HTTP/1.1" 200 3454 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:33 +0000] "GET /v1/volumes/pvc-ee608cc7-bc48-4abc-97d2-c72caf3a0c72 HTTP/1.1" 200 3506 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:33 +0000] "GET /v1/volumes/pvc-dd45f8fb-fc5e-4abb-bc15-62ed9d042dd6 HTTP/1.1" 200 4558 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:33 +0000] "GET /v1/volumes/pvc-de808e12-9704-413c-a4b9-86bcabe4ca6e HTTP/1.1" 200 3466 "" "Go-http-client/1.1"
    time="2021-07-24T09:01:34Z" level=debug msg="Prepare to create instance pvc-dd45f8fb-fc5e-4abb-bc15-62ed9d042dd6-r-5878c031"
    time="2021-07-24T09:01:34Z" level=info msg="Event(v1.ObjectReference{Kind:\"Replica\", Namespace:\"longhorn-system\", Name:\"pvc-dd45f8fb-fc5e-4abb-bc15-62ed9d042dd6-r-5878c031\", UID:\"908ed51e-b8a2-4e57-8bc0-c519bb40c3c9\", APIVersion:\"longhorn.io/v1beta1\", ResourceVersion:\"9015760\", FieldPath:\"\"}): type: 'Normal' reason: 'Start' Starts pvc-dd45f8fb-fc5e-4abb-bc15-62ed9d042dd6-r-5878c031"
    time="2021-07-24T09:01:35Z" level=debug msg="Instance pvc-dd45f8fb-fc5e-4abb-bc15-62ed9d042dd6-r-5878c031 starts running, IP 10.244.2.205"
    time="2021-07-24T09:01:35Z" level=debug msg="Instance pvc-dd45f8fb-fc5e-4abb-bc15-62ed9d042dd6-r-5878c031 starts running, Port 10000"
    time="2021-07-24T09:01:35Z" level=debug msg="Instance handler updated instance pvc-dd45f8fb-fc5e-4abb-bc15-62ed9d042dd6-r-5878c031 state, old state stopped, new state running"
    10.244.2.136 - - [24/Jul/2021:09:01:35 +0000] "GET /v1/volumes/pvc-3775cc56-b35a-4d5f-8d4b-ed16880d19c1 HTTP/1.1" 200 3437 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:35 +0000] "GET /v1/volumes/pvc-ee608cc7-bc48-4abc-97d2-c72caf3a0c72 HTTP/1.1" 200 3506 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:35 +0000] "GET /v1/nodes/10.0.117.182 HTTP/1.1" 200 2197 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:35 +0000] "GET /v1/volumes/pvc-dd45f8fb-fc5e-4abb-bc15-62ed9d042dd6 HTTP/1.1" 200 4609 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:35 +0000] "GET /v1/volumes/pvc-3775cc56-b35a-4d5f-8d4b-ed16880d19c1 HTTP/1.1" 200 3437 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:35 +0000] "GET /v1/nodes/10.0.117.182 HTTP/1.1" 200 2197 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:35 +0000] "GET /v1/volumes/pvc-3775cc56-b35a-4d5f-8d4b-ed16880d19c1 HTTP/1.1" 200 3437 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:35 +0000] "GET /v1/volumes/pvc-de808e12-9704-413c-a4b9-86bcabe4ca6e HTTP/1.1" 200 3449 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:35 +0000] "GET /v1/nodes/10.0.117.182 HTTP/1.1" 200 2197 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:35 +0000] "GET /v1/volumes/pvc-de808e12-9704-413c-a4b9-86bcabe4ca6e HTTP/1.1" 200 3449 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:35 +0000] "GET /v1/nodes/10.0.117.182 HTTP/1.1" 200 2197 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:35 +0000] "GET /v1/volumes/pvc-de808e12-9704-413c-a4b9-86bcabe4ca6e HTTP/1.1" 200 3449 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:35 +0000] "GET /v1/nodes/10.0.117.182 HTTP/1.1" 200 2197 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:35 +0000] "GET /v1/volumes/pvc-de808e12-9704-413c-a4b9-86bcabe4ca6e HTTP/1.1" 200 3449 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:36 +0000] "GET /v1/nodes/10.0.117.182 HTTP/1.1" 200 2197 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:36 +0000] "GET /v1/volumes/pvc-3775cc56-b35a-4d5f-8d4b-ed16880d19c1 HTTP/1.1" 200 3437 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:36 +0000] "GET /v1/nodes/10.0.117.182 HTTP/1.1" 200 2197 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:36 +0000] "GET /v1/volumes/pvc-de808e12-9704-413c-a4b9-86bcabe4ca6e HTTP/1.1" 200 3429 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:37 +0000] "GET /v1/nodes/10.0.117.182 HTTP/1.1" 200 2197 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:37 +0000] "GET /v1/nodes/10.0.117.182 HTTP/1.1" 200 2197 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:37 +0000] "GET /v1/volumes/pvc-7be8efab-347f-463a-b507-3875c8e369fc HTTP/1.1" 200 4548 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:37 +0000] "GET /v1/volumes/pvc-ab37a40c-a031-4b0d-bb6d-a09f01bd3338 HTTP/1.1" 200 4544 "" "Go-http-client/1.1"
    time="2021-07-24T09:01:37Z" level=info msg="Volume pvc-7be8efab-347f-463a-b507-3875c8e369fc attachment to 10.0.117.182 with disableFrontend false requested"
    time="2021-07-24T09:01:37Z" level=info msg="Volume pvc-ab37a40c-a031-4b0d-bb6d-a09f01bd3338 attachment to 10.0.117.182 with disableFrontend false requested"
    10.244.2.136 - - [24/Jul/2021:09:01:37 +0000] "POST /v1/volumes/pvc-ab37a40c-a031-4b0d-bb6d-a09f01bd3338?action=attach HTTP/1.1" 200 3362 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:37 +0000] "GET /v1/volumes/pvc-ee608cc7-bc48-4abc-97d2-c72caf3a0c72 HTTP/1.1" 200 4562 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:37 +0000] "POST /v1/volumes/pvc-7be8efab-347f-463a-b507-3875c8e369fc?action=attach HTTP/1.1" 200 3366 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:37 +0000] "GET /v1/volumes/pvc-dd45f8fb-fc5e-4abb-bc15-62ed9d042dd6 HTTP/1.1" 200 4609 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:37 +0000] "GET /v1/nodes/10.0.117.182 HTTP/1.1" 200 2197 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:37 +0000] "GET /v1/volumes/pvc-ee608cc7-bc48-4abc-97d2-c72caf3a0c72 HTTP/1.1" 200 4562 "" "Go-http-client/1.1"
    time="2021-07-24T09:01:37Z" level=info msg="Volume pvc-ee608cc7-bc48-4abc-97d2-c72caf3a0c72 attachment to 10.0.117.182 with disableFrontend false requested"
    10.244.2.136 - - [24/Jul/2021:09:01:37 +0000] "POST /v1/volumes/pvc-ee608cc7-bc48-4abc-97d2-c72caf3a0c72?action=attach HTTP/1.1" 200 3360 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:39 +0000] "GET /v1/volumes/pvc-ab37a40c-a031-4b0d-bb6d-a09f01bd3338 HTTP/1.1" 200 4544 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:39 +0000] "GET /v1/volumes/pvc-7be8efab-347f-463a-b507-3875c8e369fc HTTP/1.1" 200 4548 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:39 +0000] "GET /v1/volumes/pvc-dd45f8fb-fc5e-4abb-bc15-62ed9d042dd6 HTTP/1.1" 200 3552 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:39 +0000] "GET /v1/volumes/pvc-ee608cc7-bc48-4abc-97d2-c72caf3a0c72 HTTP/1.1" 200 4562 "" "Go-http-client/1.1"
    time="2021-07-24T09:01:40Z" level=debug msg="Prepare to create instance pvc-7be8efab-347f-463a-b507-3875c8e369fc-r-62b67066"
    time="2021-07-24T09:01:40Z" level=info msg="Event(v1.ObjectReference{Kind:\"Replica\", Namespace:\"longhorn-system\", Name:\"pvc-7be8efab-347f-463a-b507-3875c8e369fc-r-62b67066\", UID:\"6ad0043e-62ff-4ec5-b0d2-399c22c3b563\", APIVersion:\"longhorn.io/v1beta1\", ResourceVersion:\"9015878\", FieldPath:\"\"}): type: 'Normal' reason: 'Start' Starts pvc-7be8efab-347f-463a-b507-3875c8e369fc-r-62b67066"
    10.244.2.136 - - [24/Jul/2021:09:01:40 +0000] "GET /v1/nodes/10.0.117.182 HTTP/1.1" 200 2197 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:40 +0000] "GET /v1/volumes/pvc-3775cc56-b35a-4d5f-8d4b-ed16880d19c1 HTTP/1.1" 200 3377 "" "Go-http-client/1.1"
    time="2021-07-24T09:01:41Z" level=debug msg="Instance pvc-7be8efab-347f-463a-b507-3875c8e369fc-r-62b67066 starts running, IP 10.244.2.205"
    time="2021-07-24T09:01:41Z" level=debug msg="Instance pvc-7be8efab-347f-463a-b507-3875c8e369fc-r-62b67066 starts running, Port 10015"
    time="2021-07-24T09:01:41Z" level=debug msg="Instance handler updated instance pvc-7be8efab-347f-463a-b507-3875c8e369fc-r-62b67066 state, old state stopped, new state running"
    time="2021-07-24T09:01:41Z" level=debug msg="Prepare to create instance pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa-r-d55a13c9"
    time="2021-07-24T09:01:41Z" level=info msg="Event(v1.ObjectReference{Kind:\"Replica\", Namespace:\"longhorn-system\", Name:\"pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa-r-d55a13c9\", UID:\"f40af8c3-b27e-41f4-9df0-d8fa96b353f5\", APIVersion:\"longhorn.io/v1beta1\", ResourceVersion:\"9015897\", FieldPath:\"\"}): type: 'Normal' reason: 'Start' Starts pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa-r-d55a13c9"
    10.244.2.136 - - [24/Jul/2021:09:01:41 +0000] "GET /v1/volumes/pvc-ab37a40c-a031-4b0d-bb6d-a09f01bd3338 HTTP/1.1" 200 4524 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:41 +0000] "GET /v1/volumes/pvc-7be8efab-347f-463a-b507-3875c8e369fc HTTP/1.1" 200 4599 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:41 +0000] "GET /v1/volumes/pvc-dd45f8fb-fc5e-4abb-bc15-62ed9d042dd6 HTTP/1.1" 200 3552 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:41 +0000] "GET /v1/volumes/pvc-ee608cc7-bc48-4abc-97d2-c72caf3a0c72 HTTP/1.1" 200 4562 "" "Go-http-client/1.1"
    time="2021-07-24T09:01:41Z" level=debug msg="Prepare to create instance pvc-ab37a40c-a031-4b0d-bb6d-a09f01bd3338-r-af4a8f2e"
    time="2021-07-24T09:01:41Z" level=info msg="Event(v1.ObjectReference{Kind:\"Replica\", Namespace:\"longhorn-system\", Name:\"pvc-ab37a40c-a031-4b0d-bb6d-a09f01bd3338-r-af4a8f2e\", UID:\"47166af1-dc55-4945-aabf-b68034792aa1\", APIVersion:\"longhorn.io/v1beta1\", ResourceVersion:\"9015902\", FieldPath:\"\"}): type: 'Normal' reason: 'Start' Starts pvc-ab37a40c-a031-4b0d-bb6d-a09f01bd3338-r-af4a8f2e"
    time="2021-07-24T09:01:42Z" level=debug msg="Instance pvc-ab37a40c-a031-4b0d-bb6d-a09f01bd3338-r-af4a8f2e starts running, IP 10.244.2.205"
    time="2021-07-24T09:01:42Z" level=debug msg="Instance pvc-ab37a40c-a031-4b0d-bb6d-a09f01bd3338-r-af4a8f2e starts running, Port 10045"
    time="2021-07-24T09:01:42Z" level=debug msg="Instance handler updated instance pvc-ab37a40c-a031-4b0d-bb6d-a09f01bd3338-r-af4a8f2e state, old state stopped, new state running"
    time="2021-07-24T09:01:42Z" level=debug msg="Instance pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa-r-d55a13c9 starts running, IP 10.244.2.205"
    time="2021-07-24T09:01:42Z" level=debug msg="Instance pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa-r-d55a13c9 starts running, Port 10030"
    time="2021-07-24T09:01:42Z" level=debug msg="Instance handler updated instance pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa-r-d55a13c9 state, old state stopped, new state running"
    10.244.2.136 - - [24/Jul/2021:09:01:43 +0000] "GET /v1/volumes/pvc-ab37a40c-a031-4b0d-bb6d-a09f01bd3338 HTTP/1.1" 200 4595 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:43 +0000] "GET /v1/volumes/pvc-7be8efab-347f-463a-b507-3875c8e369fc HTTP/1.1" 200 4599 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:43 +0000] "GET /v1/volumes/pvc-dd45f8fb-fc5e-4abb-bc15-62ed9d042dd6 HTTP/1.1" 200 3552 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:43 +0000] "GET /v1/volumes/pvc-ee608cc7-bc48-4abc-97d2-c72caf3a0c72 HTTP/1.1" 200 3505 "" "Go-http-client/1.1"
    time="2021-07-24T09:01:44Z" level=debug msg="Prepare to create instance pvc-5c4f2a7f-97e0-4c23-8055-4d78c302c270-r-d98ed33c"
    time="2021-07-24T09:01:44Z" level=info msg="Event(v1.ObjectReference{Kind:\"Replica\", Namespace:\"longhorn-system\", Name:\"pvc-5c4f2a7f-97e0-4c23-8055-4d78c302c270-r-d98ed33c\", UID:\"5b8f6e66-4d04-4064-90ea-00efa1de7df6\", APIVersion:\"longhorn.io/v1beta1\", ResourceVersion:\"9015945\", FieldPath:\"\"}): type: 'Normal' reason: 'Start' Starts pvc-5c4f2a7f-97e0-4c23-8055-4d78c302c270-r-d98ed33c"
    time="2021-07-24T09:01:44Z" level=debug msg="Instance process pvc-5c4f2a7f-97e0-4c23-8055-4d78c302c270-r-d98ed33c had been created, need to wait for instance manager update"
    10.244.2.136 - - [24/Jul/2021:09:01:44 +0000] "GET /v1/nodes/10.0.117.182 HTTP/1.1" 200 2197 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:44 +0000] "GET /v1/volumes/pvc-de808e12-9704-413c-a4b9-86bcabe4ca6e HTTP/1.1" 200 4462 "" "Go-http-client/1.1"
    time="2021-07-24T09:01:44Z" level=info msg="Volume pvc-de808e12-9704-413c-a4b9-86bcabe4ca6e attachment to 10.0.117.182 with disableFrontend false requested"
    10.244.2.136 - - [24/Jul/2021:09:01:44 +0000] "POST /v1/volumes/pvc-de808e12-9704-413c-a4b9-86bcabe4ca6e?action=attach HTTP/1.1" 200 3280 "" "Go-http-client/1.1"
    time="2021-07-24T09:01:45Z" level=debug msg="Instance pvc-5c4f2a7f-97e0-4c23-8055-4d78c302c270-r-d98ed33c starts running, IP 10.244.2.205"
    time="2021-07-24T09:01:45Z" level=debug msg="Instance pvc-5c4f2a7f-97e0-4c23-8055-4d78c302c270-r-d98ed33c starts running, Port 10060"
    time="2021-07-24T09:01:45Z" level=debug msg="Instance handler updated instance pvc-5c4f2a7f-97e0-4c23-8055-4d78c302c270-r-d98ed33c state, old state stopped, new state running"
    time="2021-07-24T09:01:45Z" level=debug msg="Prepare to create instance pvc-65580647-e44a-4210-8545-2aff63ff0fe2-r-711d6852"
    time="2021-07-24T09:01:45Z" level=info msg="Event(v1.ObjectReference{Kind:\"Replica\", Namespace:\"longhorn-system\", Name:\"pvc-65580647-e44a-4210-8545-2aff63ff0fe2-r-711d6852\", UID:\"3cdcfb9c-c86d-477e-b682-d16e8ee24638\", APIVersion:\"longhorn.io/v1beta1\", ResourceVersion:\"9015973\", FieldPath:\"\"}): type: 'Normal' reason: 'Start' Starts pvc-65580647-e44a-4210-8545-2aff63ff0fe2-r-711d6852"
    10.244.2.136 - - [24/Jul/2021:09:01:45 +0000] "GET /v1/volumes/pvc-ab37a40c-a031-4b0d-bb6d-a09f01bd3338 HTTP/1.1" 200 3538 "" "Go-http-client/1.1"
    time="2021-07-24T09:01:45Z" level=debug msg="Prepare to create instance pvc-de808e12-9704-413c-a4b9-86bcabe4ca6e-r-2b7f722e"
    time="2021-07-24T09:01:45Z" level=info msg="Event(v1.ObjectReference{Kind:\"Replica\", Namespace:\"longhorn-system\", Name:\"pvc-de808e12-9704-413c-a4b9-86bcabe4ca6e-r-2b7f722e\", UID:\"0b2f24bc-2883-4164-b608-25a150d1876a\", APIVersion:\"longhorn.io/v1beta1\", ResourceVersion:\"9015978\", FieldPath:\"\"}): type: 'Normal' reason: 'Start' Starts pvc-de808e12-9704-413c-a4b9-86bcabe4ca6e-r-2b7f722e"
    10.244.2.136 - - [24/Jul/2021:09:01:45 +0000] "GET /v1/volumes/pvc-7be8efab-347f-463a-b507-3875c8e369fc HTTP/1.1" 200 3542 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:45 +0000] "GET /v1/volumes/pvc-dd45f8fb-fc5e-4abb-bc15-62ed9d042dd6 HTTP/1.1" 200 3552 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:45 +0000] "GET /v1/volumes/pvc-ee608cc7-bc48-4abc-97d2-c72caf3a0c72 HTTP/1.1" 200 3505 "" "Go-http-client/1.1"
    time="2021-07-24T09:01:46Z" level=debug msg="Instance pvc-de808e12-9704-413c-a4b9-86bcabe4ca6e-r-2b7f722e starts running, IP 10.244.2.205"
    time="2021-07-24T09:01:46Z" level=debug msg="Instance pvc-de808e12-9704-413c-a4b9-86bcabe4ca6e-r-2b7f722e starts running, Port 10090"
    time="2021-07-24T09:01:46Z" level=debug msg="Instance handler updated instance pvc-de808e12-9704-413c-a4b9-86bcabe4ca6e-r-2b7f722e state, old state stopped, new state running"
    time="2021-07-24T09:01:46Z" level=debug msg="Instance pvc-65580647-e44a-4210-8545-2aff63ff0fe2-r-711d6852 starts running, IP 10.244.2.205"
    time="2021-07-24T09:01:46Z" level=debug msg="Instance pvc-65580647-e44a-4210-8545-2aff63ff0fe2-r-711d6852 starts running, Port 10075"
    time="2021-07-24T09:01:46Z" level=debug msg="Instance handler updated instance pvc-65580647-e44a-4210-8545-2aff63ff0fe2-r-711d6852 state, old state stopped, new state running"
    10.244.2.136 - - [24/Jul/2021:09:01:46 +0000] "GET /v1/volumes/pvc-de808e12-9704-413c-a4b9-86bcabe4ca6e HTTP/1.1" 200 4533 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:47 +0000] "GET /v1/volumes/pvc-ab37a40c-a031-4b0d-bb6d-a09f01bd3338 HTTP/1.1" 200 3609 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:47 +0000] "GET /v1/volumes/pvc-7be8efab-347f-463a-b507-3875c8e369fc HTTP/1.1" 200 3613 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:47 +0000] "GET /v1/volumes/pvc-dd45f8fb-fc5e-4abb-bc15-62ed9d042dd6 HTTP/1.1" 200 3623 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:47 +0000] "GET /v1/volumes/pvc-ee608cc7-bc48-4abc-97d2-c72caf3a0c72 HTTP/1.1" 200 3576 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:48 +0000] "GET /v1/nodes/10.0.117.182 HTTP/1.1" 200 2197 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:48 +0000] "GET /v1/volumes/pvc-3775cc56-b35a-4d5f-8d4b-ed16880d19c1 HTTP/1.1" 200 4450 "" "Go-http-client/1.1"
    time="2021-07-24T09:01:48Z" level=info msg="Volume pvc-3775cc56-b35a-4d5f-8d4b-ed16880d19c1 attachment to 10.0.117.182 with disableFrontend false requested"
    10.244.2.136 - - [24/Jul/2021:09:01:48 +0000] "POST /v1/volumes/pvc-3775cc56-b35a-4d5f-8d4b-ed16880d19c1?action=attach HTTP/1.1" 200 3265 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:48 +0000] "GET /v1/volumes/pvc-de808e12-9704-413c-a4b9-86bcabe4ca6e HTTP/1.1" 200 4533 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:49 +0000] "GET /v1/volumes/pvc-ab37a40c-a031-4b0d-bb6d-a09f01bd3338 HTTP/1.1" 200 3609 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:49 +0000] "GET /v1/volumes/pvc-7be8efab-347f-463a-b507-3875c8e369fc HTTP/1.1" 200 3625 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:49 +0000] "GET /v1/volumes/pvc-dd45f8fb-fc5e-4abb-bc15-62ed9d042dd6 HTTP/1.1" 200 3623 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:49 +0000] "GET /v1/volumes/pvc-ee608cc7-bc48-4abc-97d2-c72caf3a0c72 HTTP/1.1" 200 3588 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:50 +0000] "GET /v1/volumes/pvc-3775cc56-b35a-4d5f-8d4b-ed16880d19c1 HTTP/1.1" 200 4450 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:50 +0000] "GET /v1/volumes/pvc-de808e12-9704-413c-a4b9-86bcabe4ca6e HTTP/1.1" 200 3476 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:51 +0000] "GET /v1/volumes/pvc-ab37a40c-a031-4b0d-bb6d-a09f01bd3338 HTTP/1.1" 200 3621 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:51 +0000] "GET /v1/volumes/pvc-7be8efab-347f-463a-b507-3875c8e369fc HTTP/1.1" 200 3625 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:51 +0000] "GET /v1/volumes/pvc-dd45f8fb-fc5e-4abb-bc15-62ed9d042dd6 HTTP/1.1" 200 3635 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:51 +0000] "GET /v1/volumes/pvc-ee608cc7-bc48-4abc-97d2-c72caf3a0c72 HTTP/1.1" 200 3588 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:52 +0000] "GET /v1/volumes/pvc-3775cc56-b35a-4d5f-8d4b-ed16880d19c1 HTTP/1.1" 200 4450 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:52 +0000] "GET /v1/volumes/pvc-de808e12-9704-413c-a4b9-86bcabe4ca6e HTTP/1.1" 200 3547 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:53 +0000] "GET /v1/volumes/pvc-ab37a40c-a031-4b0d-bb6d-a09f01bd3338 HTTP/1.1" 200 3621 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:53 +0000] "GET /v1/volumes/pvc-7be8efab-347f-463a-b507-3875c8e369fc HTTP/1.1" 200 3625 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:53 +0000] "GET /v1/volumes/pvc-dd45f8fb-fc5e-4abb-bc15-62ed9d042dd6 HTTP/1.1" 200 3635 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:53 +0000] "GET /v1/volumes/pvc-ee608cc7-bc48-4abc-97d2-c72caf3a0c72 HTTP/1.1" 200 3588 "" "Go-http-client/1.1"
    time="2021-07-24T09:01:53Z" level=debug msg="Prepare to create instance pvc-3775cc56-b35a-4d5f-8d4b-ed16880d19c1-r-4a8c740c"
    time="2021-07-24T09:01:53Z" level=info msg="Event(v1.ObjectReference{Kind:\"Replica\", Namespace:\"longhorn-system\", Name:\"pvc-3775cc56-b35a-4d5f-8d4b-ed16880d19c1-r-4a8c740c\", UID:\"c28e49bc-2bea-49fb-a581-076b13e8f67e\", APIVersion:\"longhorn.io/v1beta1\", ResourceVersion:\"9016091\", FieldPath:\"\"}): type: 'Normal' reason: 'Start' Starts pvc-3775cc56-b35a-4d5f-8d4b-ed16880d19c1-r-4a8c740c"
    time="2021-07-24T09:01:54Z" level=debug msg="Instance pvc-3775cc56-b35a-4d5f-8d4b-ed16880d19c1-r-4a8c740c starts running, IP 10.244.2.205"
    time="2021-07-24T09:01:54Z" level=debug msg="Instance pvc-3775cc56-b35a-4d5f-8d4b-ed16880d19c1-r-4a8c740c starts running, Port 10105"
    time="2021-07-24T09:01:54Z" level=debug msg="Instance handler updated instance pvc-3775cc56-b35a-4d5f-8d4b-ed16880d19c1-r-4a8c740c state, old state stopped, new state running"
    10.244.2.136 - - [24/Jul/2021:09:01:54 +0000] "GET /v1/volumes/pvc-3775cc56-b35a-4d5f-8d4b-ed16880d19c1 HTTP/1.1" 200 4521 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:54 +0000] "GET /v1/volumes/pvc-de808e12-9704-413c-a4b9-86bcabe4ca6e HTTP/1.1" 200 3559 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:55 +0000] "GET /v1/volumes/pvc-ab37a40c-a031-4b0d-bb6d-a09f01bd3338 HTTP/1.1" 200 3648 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:55 +0000] "GET /v1/volumes/pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa HTTP/1.1" 200 3417 "" "Go-http-client/1.1"
    time="2021-07-24T09:01:55Z" level=info msg="Volume pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa detachment from node 10.0.117.182 requested"
    10.244.2.136 - - [24/Jul/2021:09:01:55 +0000] "GET /v1/volumes/pvc-7be8efab-347f-463a-b507-3875c8e369fc HTTP/1.1" 200 3625 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:55 +0000] "GET /v1/volumes/pvc-65580647-e44a-4210-8545-2aff63ff0fe2 HTTP/1.1" 200 3651 "" "Go-http-client/1.1"
    time="2021-07-24T09:01:55Z" level=info msg="Volume pvc-65580647-e44a-4210-8545-2aff63ff0fe2 detachment from node 10.0.117.182 requested"
    10.244.2.136 - - [24/Jul/2021:09:01:55 +0000] "POST /v1/volumes/pvc-65580647-e44a-4210-8545-2aff63ff0fe2?action=detach HTTP/1.1" 200 2316 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:55 +0000] "GET /v1/volumes/pvc-dd45f8fb-fc5e-4abb-bc15-62ed9d042dd6 HTTP/1.1" 200 3635 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:55 +0000] "GET /v1/volumes/pvc-5c4f2a7f-97e0-4c23-8055-4d78c302c270 HTTP/1.1" 200 3682 "" "Go-http-client/1.1"
    time="2021-07-24T09:01:55Z" level=info msg="Volume pvc-5c4f2a7f-97e0-4c23-8055-4d78c302c270 detachment from node 10.0.117.182 requested"
    10.244.2.136 - - [24/Jul/2021:09:01:55 +0000] "POST /v1/volumes/pvc-5c4f2a7f-97e0-4c23-8055-4d78c302c270?action=detach HTTP/1.1" 200 2320 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:55 +0000] "GET /v1/volumes/pvc-ee608cc7-bc48-4abc-97d2-c72caf3a0c72 HTTP/1.1" 200 3659 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:55 +0000] "POST /v1/volumes/pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa?action=detach HTTP/1.1" 200 2053 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:56 +0000] "GET /v1/volumes/pvc-3775cc56-b35a-4d5f-8d4b-ed16880d19c1 HTTP/1.1" 200 3464 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:56 +0000] "GET /v1/volumes/pvc-de808e12-9704-413c-a4b9-86bcabe4ca6e HTTP/1.1" 200 3559 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:57 +0000] "GET /v1/volumes/pvc-ab37a40c-a031-4b0d-bb6d-a09f01bd3338 HTTP/1.1" 200 3648 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:57 +0000] "GET /v1/volumes/pvc-7be8efab-347f-463a-b507-3875c8e369fc HTTP/1.1" 200 3625 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:57 +0000] "GET /v1/volumes/pvc-65580647-e44a-4210-8545-2aff63ff0fe2 HTTP/1.1" 200 3679 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:57 +0000] "GET /v1/volumes/pvc-dd45f8fb-fc5e-4abb-bc15-62ed9d042dd6 HTTP/1.1" 200 3635 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:57 +0000] "GET /v1/volumes/pvc-5c4f2a7f-97e0-4c23-8055-4d78c302c270 HTTP/1.1" 200 3727 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:57 +0000] "GET /v1/volumes/pvc-ee608cc7-bc48-4abc-97d2-c72caf3a0c72 HTTP/1.1" 200 5456 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:57 +0000] "GET /v1/volumes/pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa HTTP/1.1" 200 3462 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:58 +0000] "GET /v1/volumes/pvc-3775cc56-b35a-4d5f-8d4b-ed16880d19c1 HTTP/1.1" 200 3535 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:58 +0000] "GET /v1/volumes/pvc-de808e12-9704-413c-a4b9-86bcabe4ca6e HTTP/1.1" 200 3559 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:59 +0000] "GET /v1/volumes/pvc-ab37a40c-a031-4b0d-bb6d-a09f01bd3338 HTTP/1.1" 200 3692 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:59 +0000] "GET /v1/volumes/pvc-7be8efab-347f-463a-b507-3875c8e369fc HTTP/1.1" 200 3625 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:59 +0000] "GET /v1/volumes/pvc-65580647-e44a-4210-8545-2aff63ff0fe2 HTTP/1.1" 200 3659 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:59 +0000] "GET /v1/volumes/pvc-dd45f8fb-fc5e-4abb-bc15-62ed9d042dd6 HTTP/1.1" 200 3662 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:59 +0000] "GET /v1/volumes/pvc-5c4f2a7f-97e0-4c23-8055-4d78c302c270 HTTP/1.1" 200 3695 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:59 +0000] "GET /v1/volumes/pvc-ee608cc7-bc48-4abc-97d2-c72caf3a0c72 HTTP/1.1" 200 5456 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:01:59 +0000] "GET /v1/volumes/pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa HTTP/1.1" 200 3442 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:00 +0000] "GET /v1/volumes/pvc-3775cc56-b35a-4d5f-8d4b-ed16880d19c1 HTTP/1.1" 200 3547 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:00 +0000] "GET /v1/volumes/pvc-de808e12-9704-413c-a4b9-86bcabe4ca6e HTTP/1.1" 200 3559 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:01 +0000] "GET /v1/volumes/pvc-ab37a40c-a031-4b0d-bb6d-a09f01bd3338 HTTP/1.1" 200 3692 "" "Go-http-client/1.1"
    time="2021-07-24T09:02:01Z" level=debug msg="Prepare to create instance pvc-ee608cc7-bc48-4abc-97d2-c72caf3a0c72-r-553d4483"
    time="2021-07-24T09:02:01Z" level=info msg="Event(v1.ObjectReference{Kind:\"Replica\", Namespace:\"longhorn-system\", Name:\"pvc-ee608cc7-bc48-4abc-97d2-c72caf3a0c72-r-553d4483\", UID:\"acbef7c7-8e56-4687-933f-718fbac4b78a\", APIVersion:\"longhorn.io/v1beta1\", ResourceVersion:\"9016207\", FieldPath:\"\"}): type: 'Normal' reason: 'Start' Starts pvc-ee608cc7-bc48-4abc-97d2-c72caf3a0c72-r-553d4483"
    10.244.2.136 - - [24/Jul/2021:09:02:01 +0000] "GET /v1/volumes/pvc-7be8efab-347f-463a-b507-3875c8e369fc HTTP/1.1" 200 5493 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:01 +0000] "GET /v1/volumes/pvc-65580647-e44a-4210-8545-2aff63ff0fe2 HTTP/1.1" 200 3659 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:01 +0000] "GET /v1/volumes/pvc-dd45f8fb-fc5e-4abb-bc15-62ed9d042dd6 HTTP/1.1" 200 3706 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:01 +0000] "GET /v1/volumes/pvc-5c4f2a7f-97e0-4c23-8055-4d78c302c270 HTTP/1.1" 200 3695 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:01 +0000] "GET /v1/volumes/pvc-ee608cc7-bc48-4abc-97d2-c72caf3a0c72 HTTP/1.1" 200 5636 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:01 +0000] "GET /v1/volumes/pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa HTTP/1.1" 200 3791 "" "Go-http-client/1.1"
    time="2021-07-24T09:02:02Z" level=debug msg="Instance pvc-ee608cc7-bc48-4abc-97d2-c72caf3a0c72-r-553d4483 starts running, IP 10.244.2.205"
    time="2021-07-24T09:02:02Z" level=debug msg="Instance pvc-ee608cc7-bc48-4abc-97d2-c72caf3a0c72-r-553d4483 starts running, Port 10120"
    time="2021-07-24T09:02:02Z" level=debug msg="Instance handler updated instance pvc-ee608cc7-bc48-4abc-97d2-c72caf3a0c72-r-553d4483 state, old state stopped, new state running"
    10.244.2.136 - - [24/Jul/2021:09:02:02 +0000] "GET /v1/volumes/pvc-3775cc56-b35a-4d5f-8d4b-ed16880d19c1 HTTP/1.1" 200 3547 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:02 +0000] "GET /v1/volumes/pvc-de808e12-9704-413c-a4b9-86bcabe4ca6e HTTP/1.1" 200 3586 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:03 +0000] "GET /v1/volumes/pvc-ab37a40c-a031-4b0d-bb6d-a09f01bd3338 HTTP/1.1" 200 3692 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:03 +0000] "GET /v1/volumes/pvc-7be8efab-347f-463a-b507-3875c8e369fc HTTP/1.1" 200 5493 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:03 +0000] "GET /v1/volumes/pvc-65580647-e44a-4210-8545-2aff63ff0fe2 HTTP/1.1" 200 3703 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:03 +0000] "GET /v1/volumes/pvc-dd45f8fb-fc5e-4abb-bc15-62ed9d042dd6 HTTP/1.1" 200 5503 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:03 +0000] "GET /v1/volumes/pvc-5c4f2a7f-97e0-4c23-8055-4d78c302c270 HTTP/1.1" 200 3695 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:03 +0000] "GET /v1/volumes/pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa HTTP/1.1" 200 3791 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:04 +0000] "GET /v1/volumes/pvc-3775cc56-b35a-4d5f-8d4b-ed16880d19c1 HTTP/1.1" 200 3547 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:04 +0000] "GET /v1/volumes/pvc-de808e12-9704-413c-a4b9-86bcabe4ca6e HTTP/1.1" 200 3586 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:05 +0000] "GET /v1/volumes/pvc-ab37a40c-a031-4b0d-bb6d-a09f01bd3338 HTTP/1.1" 200 5489 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:05 +0000] "GET /v1/volumes/pvc-7be8efab-347f-463a-b507-3875c8e369fc HTTP/1.1" 200 5842 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:05 +0000] "GET /v1/volumes/pvc-65580647-e44a-4210-8545-2aff63ff0fe2 HTTP/1.1" 200 3703 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:05 +0000] "GET /v1/volumes/pvc-dd45f8fb-fc5e-4abb-bc15-62ed9d042dd6 HTTP/1.1" 200 5503 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:05 +0000] "GET /v1/volumes/pvc-5c4f2a7f-97e0-4c23-8055-4d78c302c270 HTTP/1.1" 200 3624 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:05 +0000] "GET /v1/volumes/pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa HTTP/1.1" 200 4182 "" "Go-http-client/1.1"
    time="2021-07-24T09:02:06Z" level=debug msg="Prepare to delete instance pvc-5c4f2a7f-97e0-4c23-8055-4d78c302c270-r-d98ed33c"
    time="2021-07-24T09:02:06Z" level=info msg="Event(v1.ObjectReference{Kind:\"Replica\", Namespace:\"longhorn-system\", Name:\"pvc-5c4f2a7f-97e0-4c23-8055-4d78c302c270-r-d98ed33c\", UID:\"5b8f6e66-4d04-4064-90ea-00efa1de7df6\", APIVersion:\"longhorn.io/v1beta1\", ResourceVersion:\"9016263\", FieldPath:\"\"}): type: 'Normal' reason: 'Stop' Stops pvc-5c4f2a7f-97e0-4c23-8055-4d78c302c270-r-d98ed33c"
    time="2021-07-24T09:02:06Z" level=debug msg="Prepare to delete instance pvc-5c4f2a7f-97e0-4c23-8055-4d78c302c270-r-d98ed33c"
    time="2021-07-24T09:02:06Z" level=info msg="Event(v1.ObjectReference{Kind:\"Replica\", Namespace:\"longhorn-system\", Name:\"pvc-5c4f2a7f-97e0-4c23-8055-4d78c302c270-r-d98ed33c\", UID:\"5b8f6e66-4d04-4064-90ea-00efa1de7df6\", APIVersion:\"longhorn.io/v1beta1\", ResourceVersion:\"9016264\", FieldPath:\"\"}): type: 'Normal' reason: 'Stop' Stops pvc-5c4f2a7f-97e0-4c23-8055-4d78c302c270-r-d98ed33c"
    10.244.2.136 - - [24/Jul/2021:09:02:06 +0000] "GET /v1/volumes/pvc-3775cc56-b35a-4d5f-8d4b-ed16880d19c1 HTTP/1.1" 200 3547 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:06 +0000] "GET /v1/volumes/pvc-de808e12-9704-413c-a4b9-86bcabe4ca6e HTTP/1.1" 200 3630 "" "Go-http-client/1.1"
    time="2021-07-24T09:02:07Z" level=debug msg="Instance handler updated instance pvc-5c4f2a7f-97e0-4c23-8055-4d78c302c270-r-d98ed33c state, old state running, new state stopped"
    10.244.2.136 - - [24/Jul/2021:09:02:07 +0000] "GET /v1/volumes/pvc-ab37a40c-a031-4b0d-bb6d-a09f01bd3338 HTTP/1.1" 200 5489 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:07 +0000] "GET /v1/volumes/pvc-65580647-e44a-4210-8545-2aff63ff0fe2 HTTP/1.1" 200 4052 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:07 +0000] "GET /v1/volumes/pvc-dd45f8fb-fc5e-4abb-bc15-62ed9d042dd6 HTTP/1.1" 200 5852 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:07 +0000] "GET /v1/volumes/pvc-5c4f2a7f-97e0-4c23-8055-4d78c302c270 HTTP/1.1" 200 3553 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:07 +0000] "GET /v1/volumes/pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa HTTP/1.1" 200 4166 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:08 +0000] "GET /v1/volumes/pvc-3775cc56-b35a-4d5f-8d4b-ed16880d19c1 HTTP/1.1" 200 3547 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:08 +0000] "GET /v1/volumes/pvc-de808e12-9704-413c-a4b9-86bcabe4ca6e HTTP/1.1" 200 3630 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:09 +0000] "GET /v1/volumes/pvc-ab37a40c-a031-4b0d-bb6d-a09f01bd3338 HTTP/1.1" 200 5838 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:09 +0000] "GET /v1/volumes/pvc-65580647-e44a-4210-8545-2aff63ff0fe2 HTTP/1.1" 200 4052 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:09 +0000] "GET /v1/volumes/pvc-5c4f2a7f-97e0-4c23-8055-4d78c302c270 HTTP/1.1" 200 3553 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:09 +0000] "GET /v1/volumes/pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa HTTP/1.1" 200 4046 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:10 +0000] "GET /v1/volumes/pvc-3775cc56-b35a-4d5f-8d4b-ed16880d19c1 HTTP/1.1" 200 3618 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:10 +0000] "GET /v1/volumes/pvc-de808e12-9704-413c-a4b9-86bcabe4ca6e HTTP/1.1" 200 5427 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:11 +0000] "GET /v1/volumes/pvc-65580647-e44a-4210-8545-2aff63ff0fe2 HTTP/1.1" 200 4443 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:11 +0000] "GET /v1/volumes/pvc-5c4f2a7f-97e0-4c23-8055-4d78c302c270 HTTP/1.1" 200 3429 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:11 +0000] "GET /v1/volumes/pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa HTTP/1.1" 200 4046 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:12 +0000] "GET /v1/volumes/pvc-3775cc56-b35a-4d5f-8d4b-ed16880d19c1 HTTP/1.1" 200 5764 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:12 +0000] "GET /v1/volumes/pvc-de808e12-9704-413c-a4b9-86bcabe4ca6e HTTP/1.1" 200 5427 "" "Go-http-client/1.1"
    time="2021-07-24T09:02:13Z" level=debug msg="Prepare to delete instance pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa-r-d55a13c9"
    time="2021-07-24T09:02:13Z" level=info msg="Event(v1.ObjectReference{Kind:\"Replica\", Namespace:\"longhorn-system\", Name:\"pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa-r-d55a13c9\", UID:\"f40af8c3-b27e-41f4-9df0-d8fa96b353f5\", APIVersion:\"longhorn.io/v1beta1\", ResourceVersion:\"9016350\", FieldPath:\"\"}): type: 'Normal' reason: 'Stop' Stops pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa-r-d55a13c9"
    time="2021-07-24T09:02:13Z" level=debug msg="Prepare to delete instance pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa-r-d55a13c9"
    time="2021-07-24T09:02:13Z" level=info msg="Event(v1.ObjectReference{Kind:\"Replica\", Namespace:\"longhorn-system\", Name:\"pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa-r-d55a13c9\", UID:\"f40af8c3-b27e-41f4-9df0-d8fa96b353f5\", APIVersion:\"longhorn.io/v1beta1\", ResourceVersion:\"9016353\", FieldPath:\"\"}): type: 'Normal' reason: 'Stop' Stops pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa-r-d55a13c9"
    10.244.2.136 - - [24/Jul/2021:09:02:13 +0000] "GET /v1/volumes/pvc-65580647-e44a-4210-8545-2aff63ff0fe2 HTTP/1.1" 200 4427 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:13 +0000] "GET /v1/volumes/pvc-5c4f2a7f-97e0-4c23-8055-4d78c302c270 HTTP/1.1" 200 3358 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:13 +0000] "GET /v1/volumes/pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa HTTP/1.1" 200 3235 "" "Go-http-client/1.1"
    time="2021-07-24T09:02:14Z" level=debug msg="Instance handler updated instance pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa-r-d55a13c9 state, old state running, new state stopped"
    10.244.2.136 - - [24/Jul/2021:09:02:14 +0000] "GET /v1/volumes/pvc-de808e12-9704-413c-a4b9-86bcabe4ca6e HTTP/1.1" 200 5776 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:15 +0000] "GET /v1/volumes/pvc-65580647-e44a-4210-8545-2aff63ff0fe2 HTTP/1.1" 200 4307 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:15 +0000] "GET /v1/volumes/pvc-5c4f2a7f-97e0-4c23-8055-4d78c302c270 HTTP/1.1" 200 4538 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:15 +0000] "GET /v1/volumes/pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa HTTP/1.1" 200 3164 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:15 +0000] "GET /v1/nodes/10.0.117.182 HTTP/1.1" 200 2197 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:15 +0000] "GET /v1/volumes/pvc-5c4f2a7f-97e0-4c23-8055-4d78c302c270 HTTP/1.1" 200 4538 "" "Go-http-client/1.1"
    time="2021-07-24T09:02:15Z" level=info msg="Volume pvc-5c4f2a7f-97e0-4c23-8055-4d78c302c270 attachment to 10.0.117.182 with disableFrontend false requested"
    10.244.2.136 - - [24/Jul/2021:09:02:15 +0000] "POST /v1/volumes/pvc-5c4f2a7f-97e0-4c23-8055-4d78c302c270?action=attach HTTP/1.1" 200 3356 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:17 +0000] "GET /v1/volumes/pvc-65580647-e44a-4210-8545-2aff63ff0fe2 HTTP/1.1" 200 3496 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:17 +0000] "GET /v1/volumes/pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa HTTP/1.1" 200 3093 "" "Go-http-client/1.1"
    time="2021-07-24T09:02:17Z" level=debug msg="Prepare to create instance pvc-5c4f2a7f-97e0-4c23-8055-4d78c302c270-r-d98ed33c"
    time="2021-07-24T09:02:17Z" level=info msg="Event(v1.ObjectReference{Kind:\"Replica\", Namespace:\"longhorn-system\", Name:\"pvc-5c4f2a7f-97e0-4c23-8055-4d78c302c270-r-d98ed33c\", UID:\"5b8f6e66-4d04-4064-90ea-00efa1de7df6\", APIVersion:\"longhorn.io/v1beta1\", ResourceVersion:\"9016418\", FieldPath:\"\"}): type: 'Normal' reason: 'Start' Starts pvc-5c4f2a7f-97e0-4c23-8055-4d78c302c270-r-d98ed33c"
    10.244.2.136 - - [24/Jul/2021:09:02:17 +0000] "GET /v1/volumes/pvc-5c4f2a7f-97e0-4c23-8055-4d78c302c270 HTTP/1.1" 200 4538 "" "Go-http-client/1.1"
    time="2021-07-24T09:02:17Z" level=debug msg="Prepare to delete instance pvc-65580647-e44a-4210-8545-2aff63ff0fe2-r-711d6852"
    time="2021-07-24T09:02:17Z" level=info msg="Event(v1.ObjectReference{Kind:\"Replica\", Namespace:\"longhorn-system\", Name:\"pvc-65580647-e44a-4210-8545-2aff63ff0fe2-r-711d6852\", UID:\"3cdcfb9c-c86d-477e-b682-d16e8ee24638\", APIVersion:\"longhorn.io/v1beta1\", ResourceVersion:\"9016420\", FieldPath:\"\"}): type: 'Normal' reason: 'Stop' Stops pvc-65580647-e44a-4210-8545-2aff63ff0fe2-r-711d6852"
    time="2021-07-24T09:02:17Z" level=debug msg="Prepare to delete instance pvc-65580647-e44a-4210-8545-2aff63ff0fe2-r-711d6852"
    time="2021-07-24T09:02:17Z" level=info msg="Event(v1.ObjectReference{Kind:\"Replica\", Namespace:\"longhorn-system\", Name:\"pvc-65580647-e44a-4210-8545-2aff63ff0fe2-r-711d6852\", UID:\"3cdcfb9c-c86d-477e-b682-d16e8ee24638\", APIVersion:\"longhorn.io/v1beta1\", ResourceVersion:\"9016421\", FieldPath:\"\"}): type: 'Normal' reason: 'Stop' Stops pvc-65580647-e44a-4210-8545-2aff63ff0fe2-r-711d6852"
    time="2021-07-24T09:02:18Z" level=debug msg="Instance pvc-5c4f2a7f-97e0-4c23-8055-4d78c302c270-r-d98ed33c starts running, IP 10.244.2.205"
    time="2021-07-24T09:02:18Z" level=debug msg="Instance pvc-5c4f2a7f-97e0-4c23-8055-4d78c302c270-r-d98ed33c starts running, Port 10030"
    time="2021-07-24T09:02:18Z" level=debug msg="Instance handler updated instance pvc-5c4f2a7f-97e0-4c23-8055-4d78c302c270-r-d98ed33c state, old state stopped, new state running"
    time="2021-07-24T09:02:18Z" level=debug msg="Instance handler updated instance pvc-65580647-e44a-4210-8545-2aff63ff0fe2-r-711d6852 state, old state running, new state stopped"
    10.244.2.136 - - [24/Jul/2021:09:02:19 +0000] "GET /v1/volumes/pvc-65580647-e44a-4210-8545-2aff63ff0fe2 HTTP/1.1" 200 4534 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:19 +0000] "GET /v1/volumes/pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa HTTP/1.1" 200 4273 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:19 +0000] "GET /v1/nodes/10.0.117.182 HTTP/1.1" 200 2197 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:19 +0000] "GET /v1/nodes/10.0.117.182 HTTP/1.1" 200 2197 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:19 +0000] "GET /v1/volumes/pvc-65580647-e44a-4210-8545-2aff63ff0fe2 HTTP/1.1" 200 4534 "" "Go-http-client/1.1"
    time="2021-07-24T09:02:19Z" level=info msg="Volume pvc-65580647-e44a-4210-8545-2aff63ff0fe2 attachment to 10.0.117.182 with disableFrontend false requested"
    10.244.2.136 - - [24/Jul/2021:09:02:19 +0000] "GET /v1/volumes/pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa HTTP/1.1" 200 4273 "" "Go-http-client/1.1"
    time="2021-07-24T09:02:19Z" level=info msg="Volume pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa attachment to 10.0.117.182 with disableFrontend false requested"
    10.244.2.136 - - [24/Jul/2021:09:02:19 +0000] "GET /v1/volumes/pvc-5c4f2a7f-97e0-4c23-8055-4d78c302c270 HTTP/1.1" 200 3635 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:19 +0000] "POST /v1/volumes/pvc-65580647-e44a-4210-8545-2aff63ff0fe2?action=attach HTTP/1.1" 200 3352 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:19 +0000] "POST /v1/volumes/pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa?action=attach HTTP/1.1" 200 3089 "" "Go-http-client/1.1"
    time="2021-07-24T09:02:20Z" level=debug msg="Prepare to create instance pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa-r-d55a13c9"
    time="2021-07-24T09:02:20Z" level=info msg="Event(v1.ObjectReference{Kind:\"Replica\", Namespace:\"longhorn-system\", Name:\"pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa-r-d55a13c9\", UID:\"f40af8c3-b27e-41f4-9df0-d8fa96b353f5\", APIVersion:\"longhorn.io/v1beta1\", ResourceVersion:\"9016464\", FieldPath:\"\"}): type: 'Normal' reason: 'Start' Starts pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa-r-d55a13c9"
    time="2021-07-24T09:02:20Z" level=debug msg="Prepare to create instance pvc-65580647-e44a-4210-8545-2aff63ff0fe2-r-711d6852"
    time="2021-07-24T09:02:20Z" level=info msg="Event(v1.ObjectReference{Kind:\"Replica\", Namespace:\"longhorn-system\", Name:\"pvc-65580647-e44a-4210-8545-2aff63ff0fe2-r-711d6852\", UID:\"3cdcfb9c-c86d-477e-b682-d16e8ee24638\", APIVersion:\"longhorn.io/v1beta1\", ResourceVersion:\"9016469\", FieldPath:\"\"}): type: 'Normal' reason: 'Start' Starts pvc-65580647-e44a-4210-8545-2aff63ff0fe2-r-711d6852"
    time="2021-07-24T09:02:21Z" level=debug msg="Instance pvc-65580647-e44a-4210-8545-2aff63ff0fe2-r-711d6852 starts running, IP 10.244.2.205"
    time="2021-07-24T09:02:21Z" level=debug msg="Instance pvc-65580647-e44a-4210-8545-2aff63ff0fe2-r-711d6852 starts running, Port 10075"
    time="2021-07-24T09:02:21Z" level=debug msg="Instance handler updated instance pvc-65580647-e44a-4210-8545-2aff63ff0fe2-r-711d6852 state, old state stopped, new state running"
    time="2021-07-24T09:02:21Z" level=debug msg="Instance pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa-r-d55a13c9 starts running, IP 10.244.2.205"
    time="2021-07-24T09:02:21Z" level=debug msg="Instance pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa-r-d55a13c9 starts running, Port 10060"
    time="2021-07-24T09:02:21Z" level=debug msg="Instance handler updated instance pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa-r-d55a13c9 state, old state stopped, new state running"
    10.244.2.136 - - [24/Jul/2021:09:02:21 +0000] "GET /v1/volumes/pvc-5c4f2a7f-97e0-4c23-8055-4d78c302c270 HTTP/1.1" 200 3662 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:21 +0000] "GET /v1/volumes/pvc-65580647-e44a-4210-8545-2aff63ff0fe2 HTTP/1.1" 200 3548 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:21 +0000] "GET /v1/volumes/pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa HTTP/1.1" 200 3358 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:23 +0000] "GET /v1/volumes/pvc-5c4f2a7f-97e0-4c23-8055-4d78c302c270 HTTP/1.1" 200 5503 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:23 +0000] "GET /v1/volumes/pvc-65580647-e44a-4210-8545-2aff63ff0fe2 HTTP/1.1" 200 3658 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:23 +0000] "GET /v1/volumes/pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa HTTP/1.1" 200 3397 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:25 +0000] "GET /v1/volumes/pvc-5c4f2a7f-97e0-4c23-8055-4d78c302c270 HTTP/1.1" 200 5503 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:25 +0000] "GET /v1/volumes/pvc-65580647-e44a-4210-8545-2aff63ff0fe2 HTTP/1.1" 200 5499 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:25 +0000] "GET /v1/volumes/pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa HTTP/1.1" 200 5238 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:27 +0000] "GET /v1/volumes/pvc-5c4f2a7f-97e0-4c23-8055-4d78c302c270 HTTP/1.1" 200 5503 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:27 +0000] "GET /v1/volumes/pvc-65580647-e44a-4210-8545-2aff63ff0fe2 HTTP/1.1" 200 5499 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:27 +0000] "GET /v1/volumes/pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa HTTP/1.1" 200 5238 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:29 +0000] "GET /v1/volumes/pvc-5c4f2a7f-97e0-4c23-8055-4d78c302c270 HTTP/1.1" 200 6243 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:29 +0000] "GET /v1/volumes/pvc-65580647-e44a-4210-8545-2aff63ff0fe2 HTTP/1.1" 200 5499 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:29 +0000] "GET /v1/volumes/pvc-75559a4f-b71b-4998-8ad7-1c76a1129afa HTTP/1.1" 200 5978 "" "Go-http-client/1.1"
    10.244.2.136 - - [24/Jul/2021:09:02:31 +0000] "GET /v1/volumes/pvc-65580647-e44a-4210-8545-2aff63ff0fe2 HTTP/1.1" 200 6239 "" "Go-http-client/1.1"
    time="2021-07-24T09:03:13Z" level=debug msg="Polling backup store for new volume backups" component=backup-store-monitor controller=longhorn-setting node=10.0.121.185
    time="2021-07-24T09:03:13Z" level=debug msg="Refreshed all volumes last backup based on backup store information" component=backup-store-monitor controller=longhorn-setting node=10.0.121.185
    

    A Support Bundle is attached (generated via the link in the footer of the Longhorn UI): longhorn-support-bundle_d4963230-e637-4991-9edd-4526a0295afe_2021-07-22T18-24-07Z.zip

    Environment:

    • Longhorn version: v1.1.2
    • Installation method (e.g. Rancher Catalog App/Helm/Kubectl): Helm
    • Kubernetes distro (e.g. RKE/K3s/EKS/OpenShift) and version: OKE 1.20
      • Number of management nodes in the cluster: Unknown (managed service).
      • Number of worker nodes in the cluster: 2
    • Node config
      • OS type and version: Oracle Linux 7.9
      • CPU per node: 2 (ARM64)
      • Memory per node: 12GB
      • Disk type (e.g. SSD/NVMe): Network-attached SSDs of some kind
      • Network bandwidth between the nodes: Unknown but unlikely to be a factor
    • Underlying Infrastructure (e.g. on AWS/GCE, EKS/GKE, VMWare/KVM, Baremetal): Oracle Cloud OKE
    • Number of Longhorn volumes in the cluster: 9

    This has happened to me on single-node (Intel) k3s clusters in AWS, on a three-node (Intel) k3s cluster on Civo Cloud, and on my current two-node ARM64 Oracle setup, all with replica counts set appropriately for the number of nodes, of course.

    Additional context

    If the support bundle comes up with nothing useful, I'll spin up a few different clusters and see if I can get a better idea of what triggers it.

    FAO @joshimoo

  • [BUG] RWX doesn't work with release 1.4.0

    [BUG] RWX doesn't work with release 1.4.0

    Describe the bug (🐛 if you encounter this issue)

    I reinstalled Longhorn 1.4.0 on k3s 1.25.5. Everything is fine, but mounting an RWX volume repeatedly fails.

    To Reproduce

    Steps to reproduce the behavior:

    1. Create a volume with RWX access mode
    2. Mount it with a pod (see the sketch below)
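
    For illustration, a minimal repro sketch using a heredoc manifest; the PVC/pod names are hypothetical, and it assumes the default "longhorn" StorageClass from the chart:

        # Sketch: an RWX PVC plus a pod that mounts it.
        kubectl apply -f - <<'EOF'
        apiVersion: v1
        kind: PersistentVolumeClaim
        metadata:
          name: shared-volume-test
        spec:
          accessModes:
            - ReadWriteMany            # RWX volumes are served via the share-manager (NFS)
          storageClassName: longhorn
          resources:
            requests:
              storage: 1Gi
        ---
        apiVersion: v1
        kind: Pod
        metadata:
          name: rwx-test
        spec:
          containers:
            - name: app
              image: busybox
              command: ["sh", "-c", "sleep 3600"]
              volumeMounts:
                - name: shared
                  mountPath: /data
          volumes:
            - name: shared
              persistentVolumeClaim:
                claimName: shared-volume-test
        EOF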

    Expected behavior

    The RWX volume should mount, as it did in 1.3.2.

    Log or Support bundle

    Here is the log from the share-manager- pod:

    31/12/2022 22:40:52 : epoch 63b0ba74 : share-manager-shared-volume : nfs-ganesha-29[main] SetComponentLogLevel :LOG :NULL :LOG: Changing log level of COMPONENT_FILEHANDLE from NIV_EVENT to NIV_INFO
    31/12/2022 22:40:52 : epoch 63b0ba74 : share-manager-shared-volume : nfs-ganesha-29[main] SetComponentLogLevel :LOG :NULL :LOG: Changing log level of COMPONENT_DISPATCH from NIV_EVENT to NIV_INFO
    31/12/2022 22:40:52 : epoch 63b0ba74 : share-manager-shared-volume : nfs-ganesha-29[main] SetComponentLogLevel :LOG :NULL :LOG: Changing log level of COMPONENT_CACHE_INODE from NIV_EVENT to NIV_INFO
    31/12/2022 22:40:52 : epoch 63b0ba74 : share-manager-shared-volume : nfs-ganesha-29[main] SetComponentLogLevel :LOG :NULL :LOG: Changing log level of COMPONENT_CACHE_INODE_LRU from NIV_EVENT to NIV_INFO
    31/12/2022 22:40:52 : epoch 63b0ba74 : share-manager-shared-volume : nfs-ganesha-29[main] SetComponentLogLevel :LOG :NULL :LOG: Changing log level of COMPONENT_HASHTABLE from NIV_EVENT to NIV_INFO
    31/12/2022 22:40:52 : epoch 63b0ba74 : share-manager-shared-volume : nfs-ganesha-29[main] SetComponentLogLevel :LOG :NULL :LOG: Changing log level of COMPONENT_HASHTABLE_CACHE from NIV_EVENT to NIV_INFO
    31/12/2022 22:40:52 : epoch 63b0ba74 : share-manager-shared-volume : nfs-ganesha-29[main] SetComponentLogLevel :LOG :NULL :LOG: Changing log level of COMPONENT_DUPREQ from NIV_EVENT to NIV_INFO
    31/12/2022 22:40:52 : epoch 63b0ba74 : share-manager-shared-volume : nfs-ganesha-29[main] SetComponentLogLevel :LOG :NULL :LOG: Changing log level of COMPONENT_INIT from NIV_EVENT to NIV_INFO
    31/12/2022 22:40:52 : epoch 63b0ba74 : share-manager-shared-volume : nfs-ganesha-29[main] SetComponentLogLevel :LOG :NULL :LOG: Changing log level of COMPONENT_MAIN from NIV_EVENT to NIV_INFO
    31/12/2022 22:40:52 : epoch 63b0ba74 : share-manager-shared-volume : nfs-ganesha-29[main] SetComponentLogLevel :LOG :NULL :LOG: Changing log level of COMPONENT_IDMAPPER from NIV_EVENT to NIV_INFO
    31/12/2022 22:40:52 : epoch 63b0ba74 : share-manager-shared-volume : nfs-ganesha-29[main] SetComponentLogLevel :LOG :NULL :LOG: Changing log level of COMPONENT_NFS_READDIR from NIV_EVENT to NIV_INFO
    31/12/2022 22:40:52 : epoch 63b0ba74 : share-manager-shared-volume : nfs-ganesha-29[main] SetComponentLogLevel :LOG :NULL :LOG: Changing log level of COMPONENT_NFS_V4_LOCK from NIV_EVENT to NIV_INFO
    31/12/2022 22:40:52 : epoch 63b0ba74 : share-manager-shared-volume : nfs-ganesha-29[main] SetComponentLogLevel :LOG :NULL :LOG: Changing log level of COMPONENT_CONFIG from NIV_EVENT to NIV_INFO
    31/12/2022 22:40:52 : epoch 63b0ba74 : share-manager-shared-volume : nfs-ganesha-29[main] SetComponentLogLevel :LOG :NULL :LOG: Changing log level of COMPONENT_CLIENTID from NIV_EVENT to NIV_INFO
    31/12/2022 22:40:52 : epoch 63b0ba74 : share-manager-shared-volume : nfs-ganesha-29[main] SetComponentLogLevel :LOG :NULL :LOG: Changing log level of COMPONENT_SESSIONS from NIV_EVENT to NIV_INFO
    31/12/2022 22:40:52 : epoch 63b0ba74 : share-manager-shared-volume : nfs-ganesha-29[main] SetComponentLogLevel :LOG :NULL :LOG: Changing log level of COMPONENT_PNFS from NIV_EVENT to NIV_INFO
    31/12/2022 22:40:52 : epoch 63b0ba74 : share-manager-shared-volume : nfs-ganesha-29[main] SetComponentLogLevel :LOG :NULL :LOG: Changing log level of COMPONENT_RW_LOCK from NIV_EVENT to NIV_INFO
    31/12/2022 22:40:52 : epoch 63b0ba74 : share-manager-shared-volume : nfs-ganesha-29[main] SetComponentLogLevel :LOG :NULL :LOG: Changing log level of COMPONENT_NLM from NIV_EVENT to NIV_INFO
    31/12/2022 22:40:52 : epoch 63b0ba74 : share-manager-shared-volume : nfs-ganesha-29[main] SetComponentLogLevel :LOG :NULL :LOG: Changing log level of COMPONENT_RPC from NIV_EVENT to NIV_INFO
    31/12/2022 22:40:52 : epoch 63b0ba74 : share-manager-shared-volume : nfs-ganesha-29[main] SetComponentLogLevel :LOG :NULL :LOG: Changing log level of COMPONENT_TIRPC from NIV_EVENT to NIV_INFO
    31/12/2022 22:40:52 : epoch 63b0ba74 : share-manager-shared-volume : nfs-ganesha-29[main] SetComponentLogLevel :LOG :NULL :LOG: Changing log level of COMPONENT_NFS_CB from NIV_EVENT to NIV_INFO
    31/12/2022 22:40:52 : epoch 63b0ba74 : share-manager-shared-volume : nfs-ganesha-29[main] SetComponentLogLevel :LOG :NULL :LOG: Changing log level of COMPONENT_THREAD from NIV_EVENT to NIV_INFO
    31/12/2022 22:40:52 : epoch 63b0ba74 : share-manager-shared-volume : nfs-ganesha-29[main] SetComponentLogLevel :LOG :NULL :LOG: Changing log level of COMPONENT_NFS_V4_ACL from NIV_EVENT to NIV_INFO
    31/12/2022 22:40:52 : epoch 63b0ba74 : share-manager-shared-volume : nfs-ganesha-29[main] SetComponentLogLevel :LOG :NULL :LOG: Changing log level of COMPONENT_STATE from NIV_EVENT to NIV_INFO
    31/12/2022 22:40:52 : epoch 63b0ba74 : share-manager-shared-volume : nfs-ganesha-29[main] SetComponentLogLevel :LOG :NULL :LOG: Changing log level of COMPONENT_9P from NIV_EVENT to NIV_INFO
    31/12/2022 22:40:52 : epoch 63b0ba74 : share-manager-shared-volume : nfs-ganesha-29[main] SetComponentLogLevel :LOG :NULL :LOG: Changing log level of COMPONENT_9P_DISPATCH from NIV_EVENT to NIV_INFO
    31/12/2022 22:40:52 : epoch 63b0ba74 : share-manager-shared-volume : nfs-ganesha-29[main] SetComponentLogLevel :LOG :NULL :LOG: Changing log level of COMPONENT_FSAL_UP from NIV_EVENT to NIV_INFO
    31/12/2022 22:40:52 : epoch 63b0ba74 : share-manager-shared-volume : nfs-ganesha-29[main] SetComponentLogLevel :LOG :NULL :LOG: Changing log level of COMPONENT_DBUS from NIV_EVENT to NIV_INFO
    31/12/2022 22:40:52 : epoch 63b0ba74 : share-manager-shared-volume : nfs-ganesha-29[main] SetComponentLogLevel :LOG :NULL :LOG: Changing log level of COMPONENT_NFS_MSK from NIV_EVENT to NIV_INFO
    31/12/2022 22:40:52 : epoch 63b0ba74 : share-manager-shared-volume : nfs-ganesha-29[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
    31/12/2022 22:40:52 : epoch 63b0ba74 : share-manager-shared-volume : nfs-ganesha-29[main] init_fds_limit :INODE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
    31/12/2022 22:40:52 : epoch 63b0ba74 : share-manager-shared-volume : nfs-ganesha-29[main] init_server_pkgs :NFS STARTUP :INFO :State lock layer successfully initialized
    31/12/2022 22:40:52 : epoch 63b0ba74 : share-manager-shared-volume : nfs-ganesha-29[main] init_server_pkgs :NFS STARTUP :INFO :IP/name cache successfully initialized
    31/12/2022 22:40:52 : epoch 63b0ba74 : share-manager-shared-volume : nfs-ganesha-29[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
    31/12/2022 22:40:52 : epoch 63b0ba74 : share-manager-shared-volume : nfs-ganesha-29[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
    31/12/2022 22:40:52 : epoch 63b0ba74 : share-manager-shared-volume : nfs-ganesha-29[main] nfs4_recovery_init :CLIENT ID :INFO :Recovery Backend Init for longhorn
    31/12/2022 22:40:52 : epoch 63b0ba74 : share-manager-shared-volume : nfs-ganesha-29[main] longhorn_recov_init :CLIENT ID :EVENT :Initialize recovery backend 'share-manager-shared-volume'
    31/12/2022 22:40:52 : epoch 63b0ba74 : share-manager-shared-volume : nfs-ganesha-29[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
    31/12/2022 22:40:52 : epoch 63b0ba74 : share-manager-shared-volume : nfs-ganesha-29[main] longhorn_read_recov_clids :CLIENT ID :EVENT :Read clients from recovery backend share-manager-shared-volume
    31/12/2022 22:40:52 : epoch 63b0ba74 : share-manager-shared-volume : nfs-ganesha-29[main] read_clids :CLIENT ID :EVENT :response={"actions":{},"clients":[],"hostname":"share-manager-shared-volume","id":"share-manager-shared-volume","links":{"self":"http://longhorn-re
    31/12/2022 22:40:52 : epoch 63b0ba74 : share-manager-shared-volume : nfs-ganesha-29[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
    31/12/2022 22:40:52 : epoch 63b0ba74 : share-manager-shared-volume : nfs-ganesha-29[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
    31/12/2022 22:40:52 : epoch 63b0ba74 : share-manager-shared-volume : nfs-ganesha-29[main] longhorn_recov_end_grace :CLIENT ID :EVENT :End grace for recovery backend 'share-manager-shared-volume' version LUUZWL8T
    31/12/2022 22:40:52 : epoch 63b0ba74 : share-manager-shared-volume : nfs-ganesha-29[main] http_call :CLIENT ID :EVENT :HTTP error: 500 (url=http://longhorn-recovery-backend:9600/v1/recoverybackend/share-manager-shared-volume, payload={"version": "LUUZWL8T"})
    31/12/2022 22:40:52 : epoch 63b0ba74 : share-manager-shared-volume : nfs-ganesha-29[main] longhorn_recov_end_grace :CLIENT ID :FATAL :HTTP call error: res=-1 ((null))
    time="2022-12-31T22:40:52Z" level=error msg="NFS server exited with error" encrypted=false error="ganesha.nfsd failed with error: exit status 2, output: " volume=shared-volume
    W1231 22:40:52.523325       1 mount_helper_common.go:133] Warning: "/export/shared-volume" is not a mountpoint, deleting
    time="2022-12-31T22:40:52Z" level=debug msg="Device /dev/mapper/shared-volume is not an active LUKS device" error="failed to run cryptsetup args: [status shared-volume] output:  error: exit status 4"
    

    Environment

    • Longhorn version: 1.4.0
    • Installation method (e.g. Rancher Catalog App/Helm/Kubectl): Helm
    • Kubernetes distro (e.g. RKE/K3s/EKS/OpenShift) and version: k3s 1.25.5
      • Number of management nodes in the cluster: 2
      • Number of worker nodes in the cluster: 2
    • Node config
      • OS type and version: Ubuntu 20.04
      • CPU per node: 64
      • Memory per node: 384Gi
      • Disk type (e.g. SSD/NVMe): SSD
      • Network bandwidth between the nodes: 10G + 10G (link aggregated)
    • Underlying Infrastructure (e.g. on AWS/GCE, EKS/GKE, VMWare/KVM, Baremetal): On-prem
    • Number of Longhorn volumes in the cluster: 7


  • [TASK] Update the test log instruction in the test repo.

    [TASK] Update the test log instruction in the test repo.

    What's the task? Please describe

    The instructions for watching the test logs need to be updated in our test repo: https://github.com/longhorn/longhorn-tests/tree/master/manager/integration

    They should be updated to the following command: kubectl logs -f longhorn-test -c longhorn-test

  • [TASK] Clarify if any upcoming K8s API deprecation/removal will impact Longhorn 1.4

    [TASK] Clarify if any upcoming K8s API deprecation/removal will impact Longhorn 1.4

    What's the task? Please describe

    After discussing with @PhanLe1010 again, we need to double-check whether the storage v1beta1 API removal in Kubernetes 1.27 really impacts us.

    • There is no v1beta1 storage.k8s.io usage in our current code base (1.4 and master), because we have already upgraded to v1 (see the quick audit sketch below).
    • https://kubernetes.io/docs/reference/using-api/deprecation-guide/#csistoragecapacity-v127 is not related to the CSI snapshot CRDs, which are separate from the Kubernetes built-in APIs.
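
    A quick audit sketch (assuming a checkout of the longhorn-manager repo; the import path and client method name are the usual Kubernetes Go conventions, not Longhorn-specific):

        # Sketch: confirm no beta storage API group is referenced in the Go sources.
        grep -rn -e "k8s.io/api/storage/v1beta1" -e "StorageV1beta1" --include="*.go" . \
          || echo "no v1beta1 storage API usage found"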

    Describe the items of the task (DoD, definition of done) you'd like

    • [ ] Clarify whether the upcoming API deprecation/removal will impact Longhorn 1.4

    Additional context

    cc @longhorn/dev

  • [TEST] Create a testing guidance for feature/regression testing

    [TEST] Create a testing guidance for feature/regression testing

    What's the test to develop? Please describe

    Usually, QA relies on the test cases provided by engineering for feature testing or bug regression testing. However, this has the potential to overlook unhappy paths.

    We should provide a non-programming framework or guidance that QA members can follow when testing a new feature or a regression, so that they not only rely on the test cases provided by engineering but also build on them to cover possible unhappy paths, especially involuntary factors/events such as network partition, node reboot, node deletion, kubelet restart, etc. The framework and guidance could take the form of a cheat sheet.
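
    As a starting point, several of the involuntary events above can be simulated with ordinary cluster operations; a hedged sketch (node and pod names are placeholders, for disposable test clusters only):

        # Sketch: common "unhappy path" disruptions for feature/regression testing.
        kubectl drain <node> --ignore-daemonsets --delete-emptydir-data   # node maintenance / eviction
        kubectl delete node <node>                                        # node deletion
        kubectl -n longhorn-system delete pod <instance-manager-pod>      # component crash
        ssh <node> sudo systemctl restart kubelet                         # kubelet restart
        # Network partition typically needs iptables/tc rules on the node itself.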

    Describe the items of the test development (DoD, definition of done) you'd like

    • [ ] Create a cheat sheet/framework/guidance doc to guide QA on how to test a feature or a bug regression.

    Additional context

    A good example when testing online volume expansion, https://github.com/longhorn/longhorn/issues/1674#issuecomment-1366969645

    cc @longhorn/qa

"rsync for cloud storage" - Google Drive, S3, Dropbox, Backblaze B2, One Drive, Swift, Hubic, Wasabi, Google Cloud Storage, Yandex Files

Website | Documentation | Download | Contributing | Changelog | Installation | Forum Rclone Rclone ("rsync for cloud storage") is a command-line progr

Jan 9, 2023
Rook is an open source cloud-native storage orchestrator for Kubernetes

Rook is an open source cloud-native storage orchestrator for Kubernetes, providing the platform, framework, and support for a diverse set of storage solutions to natively integrate with cloud-native environments.

Oct 25, 2022
High Performance, Kubernetes Native Object Storage
High Performance, Kubernetes Native Object Storage

MinIO Quickstart Guide MinIO is a High Performance Object Storage released under GNU Affero General Public License v3.0. It is API compatible with Ama

Jan 2, 2023
s3git: git for Cloud Storage. Distributed Version Control for Data.
s3git: git for Cloud Storage. Distributed Version Control for Data.

s3git: git for Cloud Storage. Distributed Version Control for Data. Create decentralized and versioned repos that scale infinitely to 100s of millions of files. Clone huge PB-scale repos on your local SSD to make changes, commit and push back. Oh yeah, it dedupes too and offers directory versioning.

Dec 27, 2022
QingStor Object Storage service support for go-storage

go-services-qingstor QingStor Object Storage service support for go-storage. Install go get github.com/minhjh/go-service-qingstor/v3 Usage import ( "

Dec 13, 2021
SFTPGo - Fully featured and highly configurable SFTP server with optional FTP/S and WebDAV support - S3, Google Cloud Storage, Azure Blob

SFTPGo - Fully featured and highly configurable SFTP server with optional FTP/S and WebDAV support - S3, Google Cloud Storage, Azure Blob

Jan 4, 2023
Storj is building a decentralized cloud storage network
Storj is building a decentralized cloud storage network

Ongoing Storj v3 development. Decentralized cloud object storage that is affordable, easy to use, private, and secure.

Jan 8, 2023
Storage Orchestration for Kubernetes

What is Rook? Rook is an open source cloud-native storage orchestrator for Kubernetes, providing the platform, framework, and support for a diverse se

Dec 29, 2022
This is a simple file storage server. User can upload file, delete file and list file on the server.
This is a simple file storage server.  User can upload file,  delete file and list file on the server.

Simple File Storage Server This is a simple file storage server. User can upload file, delete file and list file on the server. If you want to build a

Jan 19, 2022
Perkeep (nÊe Camlistore) is your personal storage system for life: a way of storing, syncing, sharing, modelling and backing up content.

Perkeep is your personal storage system. It's a way to store, sync, share, import, model, and back up content. Keep your stuff for life. For more, see

Dec 26, 2022
An encrypted object storage system with unlimited space backed by Telegram.

TGStore An encrypted object storage system with unlimited space backed by Telegram. Please only upload what you really need to upload, don't abuse any

Nov 28, 2022
tstorage is a lightweight local on-disk storage engine for time-series data
tstorage is a lightweight local on-disk storage engine for time-series data

tstorage is a lightweight local on-disk storage engine for time-series data with a straightforward API. Especially ingestion is massively opt

Jan 1, 2023
storage interface for local disk or AWS S3 (or Minio) platform

storage interface for local disk or AWS S3 (or Minio) platform

Apr 19, 2022
Terraform provider for the Minio object storage.

terraform-provider-minio A Terraform provider for Minio, a self-hosted object storage server that is compatible with S3. Check out the documenation on

Dec 1, 2022
A Redis-compatible server with PostgreSQL storage backend

postgredis A wild idea of having Redis-compatible server with PostgreSQL backend. Getting started As a binary: ./postgredis -addr=:6380 -db=postgres:/

Nov 8, 2021
CSI for S3 compatible SberCloud Object Storage Service

sbercloud-csi-obs CSI for S3 compatible SberCloud Object Storage Service This is a Container Storage Interface (CSI) for S3 (or S3 compatible) storage

Feb 17, 2022
Void is a zero storage cost large file sharing system.

void void is a zero storage cost large file sharing system. License Copyright Š 2021 Changkun Ou. All rights reserved. Unauthorized using, copying, mo

Nov 22, 2021
A High Performance Object Storage released under Apache License
A High Performance Object Storage released under Apache License

MinIO Quickstart Guide MinIO is a High Performance Object Storage released under Apache License v2.0. It is API compatible with Amazon S3 cloud storag

Sep 30, 2021
Akutan is a distributed knowledge graph store, sometimes called an RDF store or a triple store.

Akutan is a distributed knowledge graph store, sometimes called an RDF store or a triple store. Knowledge graphs are suitable for modeling data that is highly interconnected by many types of relationships, like encyclopedic information about the world. A knowledge graph store enables rich queries on its data, which can be used to power real-time interfaces, to complement machine learning applications, and to make sense of new, unstructured information in the context of the existing knowledge.

Jan 7, 2023