Workflow engine for Kubernetes

What is Argo Workflows?

Argo Workflows is an open source container-native workflow engine for orchestrating parallel jobs on Kubernetes. Argo Workflows is implemented as a Kubernetes CRD (Custom Resource Definition).

  • Define workflows where each step in the workflow is a container.
  • Model multi-step workflows as a sequence of tasks or capture the dependencies between tasks using a directed acyclic graph (DAG).
  • Easily run compute intensive jobs for machine learning or data processing in a fraction of the time using Argo Workflows on Kubernetes.
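The "each step is a container" model can be seen in a minimal hello-world Workflow manifest (image and names here are illustrative):

```yaml
# A minimal Workflow: one template, one container step.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-    # the controller appends a random suffix
spec:
  entrypoint: main              # template to run first
  templates:
  - name: main
    container:                  # each step runs as a container
      image: busybox
      command: [echo]
      args: ["hello world"]
```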

Argo is a Cloud Native Computing Foundation (CNCF) hosted project.

Argo Workflows in 5 minutes

Use Cases

  • Machine Learning pipelines
  • Data and batch processing
  • ETL
  • Infrastructure automation
  • CI/CD

Why Argo Workflows?

  • Argo Workflows is the most popular workflow execution engine for Kubernetes.
  • It can run 1000s of workflows a day, each with 1000s of concurrent tasks.
  • Our users say it is lighter-weight, faster, more powerful, and easier to use.
  • Designed from the ground up for containers without the overhead and limitations of legacy VM and server-based environments.
  • Cloud agnostic and can run on any Kubernetes cluster.

Read what people said in our latest survey

Try Argo Workflows

Access the demo environment (login using GitHub)

Ecosystem

Argo Events | Argo Workflows Catalog | Couler | Katib | Kedro | Kubeflow Pipelines | Onepanel | Ploomber | Seldon | SQLFlow

Client Libraries

Check out our Java, Golang and Python clients.

Quickstart

    kubectl create namespace argo
    kubectl apply -n argo -f https://raw.githubusercontent.com/argoproj/argo-workflows/master/manifests/install.yaml
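Once the manifests are installed, workflows are created like any other Kubernetes resource. A sketch of a DAG-based workflow (task and image names are illustrative):

```yaml
# Diamond-shaped DAG: B and C run in parallel after A; D waits for both.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: dag-diamond-
spec:
  entrypoint: diamond
  templates:
  - name: diamond
    dag:
      tasks:
      - name: A
        template: echo
        arguments:
          parameters: [{name: message, value: A}]
      - name: B
        dependencies: [A]       # B runs after A completes
        template: echo
        arguments:
          parameters: [{name: message, value: B}]
      - name: C
        dependencies: [A]       # C runs in parallel with B
        template: echo
        arguments:
          parameters: [{name: message, value: C}]
      - name: D
        dependencies: [B, C]    # D waits for both branches
        template: echo
        arguments:
          parameters: [{name: message, value: D}]
  - name: echo
    inputs:
      parameters:
      - name: message
    container:
      image: busybox
      command: [echo, "{{inputs.parameters.message}}"]
```

A manifest like this can be submitted with `kubectl create -n argo -f <file>.yaml` or via the argo CLI.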

Who uses Argo Workflows?

Official Argo Workflows user list

Documentation

Features

  • UI to visualize and manage Workflows
  • Artifact support (S3, Artifactory, Alibaba Cloud OSS, HTTP, Git, GCS, raw)
  • Workflow templating to store commonly used Workflows in the cluster
  • Archiving Workflows after executing for later access
  • Scheduled workflows using cron
  • Server interface with REST API (HTTP and GRPC)
  • DAG or Steps based declaration of workflows
  • Step level input & outputs (artifacts/parameters)
  • Loops
  • Parameterization
  • Conditionals
  • Timeouts (step & workflow level)
  • Retry (step & workflow level)
  • Resubmit (memoized)
  • Suspend & Resume
  • Cancellation
  • K8s resource orchestration
  • Exit Hooks (notifications, cleanup)
  • Garbage collection of completed workflows
  • Scheduling (affinity/tolerations/node selectors)
  • Volumes (ephemeral/existing)
  • Parallelism limits
  • Daemoned steps
  • DinD (docker-in-docker)
  • Script steps
  • Event emission
  • Prometheus metrics
  • Multiple executors
  • Multiple pod and workflow garbage collection strategies
  • Automatically calculated resource usage per step
  • Java/Golang/Python SDKs
  • Pod Disruption Budget support
  • Single sign-on (OAuth2/OIDC)
  • Webhook triggering
  • CLI
  • Out-of-the-box and custom Prometheus metrics
  • Windows container support
  • Embedded widgets
  • Multiplex log viewer
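Several of the features above (script steps, loops, conditionals) combine naturally in a single workflow; a sketch, with illustrative names:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: loops-and-conditionals-
spec:
  entrypoint: main
  templates:
  - name: main
    steps:
    - - name: flip-coin
        template: flip              # script step producing an output
    - - name: report-heads
        template: print
        when: "{{steps.flip-coin.outputs.result}} == heads"   # conditional
        arguments:
          parameters: [{name: message, value: heads}]
    - - name: print-fruit
        template: print
        withItems: [apple, banana, cherry]                    # loop
        arguments:
          parameters: [{name: message, value: "{{item}}"}]
  - name: flip
    script:
      image: python:alpine3.9
      command: [python]
      source: |
        import random
        print("heads" if random.randint(0, 1) == 0 else "tails")
  - name: print
    inputs:
      parameters:
      - name: message
    container:
      image: busybox
      command: [echo, "{{inputs.parameters.message}}"]
```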

Community Meetings

We host monthly community meetings where we and the community showcase demos and discuss the current and future state of the project. Feel free to join us! For Community Meeting information, minutes and recordings please see here.

Participation in the Argo Workflows project is governed by the CNCF Code of Conduct.

Community Blogs and Presentations

Project Resources

Comments
  • 3.4-rc2 - Workflows UI can no longer get logs (s3)

    Checklist

    • [x] Double-checked my configuration.
    • [x] Tested using the latest version.
    • [x] Used the Emissary executor.

    Summary

    This occurs following an upgrade from workflows 3.3.9 to 3.4-rc2.

    Logs are still correctly sent to s3 by Argo Workflows; I can see main.log in s3, and the contents of the log file are correct.

    However, once the workflow has finished and the pod has been archived, the logs field is empty in the UI. Clicking "Try getting logs from the artifacts" results in an Internal Server Error.

    The argo-server logs show this error, but only when clicking the "Try getting logs from the artifacts" link: level=error msg="Artifact Server returned internal error" error="artifact not found: main-logs"

    No errors at all when just viewing in the UI, and none on the controller either.

    Note, as I'm using IRSA, I have the following patch on my argo-server:

        spec:
          securityContext:
            fsGroup: 65534
    

    What version are you running? 3.4-rc2

    Config summary: controller-configmap:

      artifactRepository: |
        # archiveLogs will archive the main container logs as an artifact
        archiveLogs: true
    
        s3:
          endpoint: s3.amazonaws.com
          bucket: my-bucket-name
          region: us-east-1
          insecure: false
          keyFormat: "my-artifacts\
            /{{workflow.creationTimestamp.Y}}\
            /{{workflow.creationTimestamp.m}}\
            /{{workflow.creationTimestamp.d}}\
            /{{workflow.name}}\
            /{{pod.name}}"
          useSDKCreds: true
    

    The argo-server ServiceAccount has the eks.amazonaws.com/role-arn annotation as always.


    Message from the maintainers:

    Impacted by this regression? Give it a πŸ‘. We prioritise the issues with the most πŸ‘.

  • v2.10/v2.11/latest(21st Sep): Too many warn & error messages in Argo Workflow Controller (msg="error in entry template execution" error="Deadline exceeded")

    Summary

    Too many warning and error messages inside Argo workflow controllers

    Argo workflow controller logs

    $ kubectl logs --tail=20 workflow-controller-cb99d68cf-znssr

    time="2020-09-16T13:46:45Z" level=error msg="error in entry template execution" error="Deadline exceeded" namespace=argo workflow=internal-data-ingestionczpht
    time="2020-09-16T13:46:45Z" level=warning msg="Deadline exceeded" namespace=argo workflow=internal-data-ingestioncbmt6
    time="2020-09-16T13:46:45Z" level=error msg="error in entry template execution" error="Deadline exceeded" namespace=argo workflow=internal-data-ingestioncbmt6
    time="2020-09-16T13:46:46Z" level=warning msg="Deadline exceeded" namespace=argo workflow=internal-data-ingestionvz4km
    time="2020-09-16T13:46:46Z" level=error msg="error in entry template execution" error="Deadline exceeded" namespace=argo workflow=internal-data-ingestionvz4km
    time="2020-09-16T13:46:46Z" level=warning msg="Deadline exceeded" namespace=argo workflow=internal-data-ingestionhvnhs
    time="2020-09-16T13:46:46Z" level=error msg="error in entry template execution" error="Deadline exceeded" namespace=argo workflow=internal-data-ingestionhvnhs
    time="2020-09-16T13:46:46Z" level=warning msg="Deadline exceeded" namespace=argo workflow=internal-data-ingestionnnsbb
    time="2020-09-16T13:46:46Z" level=error msg="error in entry template execution" error="Deadline exceeded" namespace=argo workflow=internal-data-ingestionnnsbb
    time="2020-09-16T13:46:46Z" level=warning msg="Deadline exceeded" namespace=argo workflow=internal-data-ingestionkc5sb
    time="2020-09-16T13:46:46Z" level=error msg="error in entry template execution" error="Deadline exceeded" namespace=argo workflow=internal-data-ingestionkc5sb
    time="2020-09-16T13:46:46Z" level=warning msg="Deadline exceeded" namespace=argo workflow=internal-data-ingestionc9fcz
    time="2020-09-16T13:46:46Z" level=error msg="error in entry template execution" error="Deadline exceeded" namespace=argo workflow=internal-data-ingestionc9fcz
    time="2020-09-16T13:46:46Z" level=warning msg="Deadline exceeded" namespace=argo workflow=internal-data-ingestionpjczx
    time="2020-09-16T13:46:46Z" level=error msg="error in entry template execution" error="Deadline exceeded" namespace=argo workflow=internal-data-ingestionpjczx
    time="2020-09-16T13:46:46Z" level=warning msg="Deadline exceeded" namespace=argo workflow=internal-data-ingestionftmdh
    time="2020-09-16T13:46:46Z" level=error msg="error in entry template execution" error="Deadline exceeded" namespace=argo workflow=internal-data-ingestionftmdh
    time="2020-09-16T13:46:46Z" level=warning msg="Deadline exceeded" namespace=argo workflow=internal-data-ingestionbfrc5
    time="2020-09-16T13:46:46Z" level=error msg="error in entry template execution" error="Deadline exceeded" namespace=argo workflow=internal-data-ingestionbfrc5
    

    Workflows are getting stuck after some time and not completing within 12+ hours, while normal execution takes around one minute.

    I am creating almost 1000 workflows, each containing 4 pods, in a short span of time. There are enough worker nodes to do the processing, so there are no issues on the Kubernetes cluster side.

    internal-data-ingestiontj79x error in entry template execution: Deadline exceeded
    github.com/argoproj/argo/errors.New
    	/go/src/github.com/argoproj/argo/errors/errors.go:49
    github.com/argoproj/argo/workflow/controller.init
    	/go/src/github.com/argoproj/argo/workflow/controller/operator.go:102
    runtime.doInit
    	/usr/local/go/src/runtime/proc.go:5222
    runtime.doInit
    	/usr/local/go/src/runtime/proc.go:5217
    runtime.main
    	/usr/local/go/src/runtime/proc.go:190
    runtime.goexit
    	/usr/local/go/src/runtime/asm_amd64.s:1357
    --
    

    Diagnostics

    What version of Argo Workflows are you running?

    Argo v2.10.1


    Message from the maintainers:

    Impacted by this bug? Give it a πŸ‘. We prioritise the issues with the most πŸ‘.

  • Successfully finished pods show stuck in pending phase indefinitely in GKE + Kubernetes v1.20

    Summary

    We had been running reasonably complex Argo workflows without issues for a long time. However, around the time we updated the Kubernetes version to 1.19.10-gke.1000 (running Argo in GKE), we started experiencing frequent problems with workflows getting stuck: a pod that was successfully started by Argo and finished is shown stuck in the Pending state in Argo, even though we can see from the logs that the main container finished successfully. We have tried the PNS and k8sapi executors, but that did not fix the issue. We have removed the Argo namespace and recreated it, and the issue is still happening. We updated from Argo 2.x to 3.0.8, to 3.1.1, and to stable 3.0.3, and it still occurred. Currently we are on the latest tag (argoproj/workflow-controller:v3.0.3).

    Diagnostics

    What Kubernetes provider are you using? We are using GKE with Kubernetes version 1.19.10-gke.1000

    What version of Argo Workflows are you running? Tested on multiple versions. Currently running 3.0.3, but it started in a 2.x version and also happened in 3.1.1 and 3.0.8.

    What executor are you running? Docker/K8SAPI/Kubelet/PNS/Emissary It failed with the default (I assume Docker), K8SAPI, and PNS.

    Did this work in a previous version? I.e. is it a regression? We are not sure if that was the cause, but it worked without issues on 2.6 on GKE before updating Kubernetes to 1.19.

    # Simplified failing part of workflow. 
     Parents-parent
        our-workflow-1625018400-118234800:
          boundaryID: our-workflow-1625018400-2940179300
          children:
            - our-workflow-1625018400-1820417972
            - our-workflow-1625018400-1463043834
          displayName: '[2]'
          finishedAt: null
          id: our-workflow-1625018400-118234800
          name: our-workflow-1625018400[2].fill-supervised-product-clusters[0].fill-supervised-product-cluster(19:configs:["things"],products:["params"],supervised-configs:[])[2]
          phase: Running
          progress: 0/2
          startedAt: "2021-06-30T09:34:44Z"
          templateScope: local/our-workflow-1625018400
          type: StepGroup   
      Parent stuck running
        our-workflow-1625018400-1820417972:
          boundaryID: our-workflow-1625018400-2940179300
          children:
          - our-workflow-1625018400-655392735
          displayName: fill-products(0:params)
          finishedAt: null
          id: our-workflow-1625018400-1820417972
          inputs:
            parameters:
            - name: products
              value: send_val
          name: our-workflow-1625018400[2].fill-supervised-product-clusters[0].fill-supervised-product-cluster(19:configs:["things"],products:["params"],supervised-configs:[])[2].fill-products(0:params)
          phase: Running
          progress: 0/1
          startedAt: "2021-06-30T09:34:44Z"
          templateName: fill-products
          templateScope: local/our-workflow-1625018400
          type: Retry
      Finished pod stuck pending
        our-workflow-1625018400-655392735:
          boundaryID: our-workflow-1625018400-2940179300
          displayName: fill-products(0:params)(0)
          finishedAt: null
          id: our-workflow-1625018400-655392735
          inputs:
            parameters:
              - name: products
                value: send_val
          name: our-workflow-1625018400[2].fill-supervised-product-clusters[0].fill-supervised-product-cluster(19:configs:["things"],products:["params"],supervised-configs:[])[2].fill-products(0:params)(0)
          phase: Pending
          progress: 0/1
          startedAt: "2021-06-30T09:34:44Z"
          templateName: fill-products
          templateScope: local/our-workflow-1625018400
          type: Pod
    
    # Logs from the workflow controller:
    # Workflow container restarts occasionally failing logs:
    #  Edit: NOTE: Pods get stuck even if workflow controller doesn't restart.
    time="2021-06-30T08:03:26.894Z" level=info msg="Workflow update successful" namespace=default phase=Running resourceVersion=349445621 workflow=workflow-working-fine-1625040000
    time="2021-06-30T08:03:26.898Z" level=info msg="SG Outbound nodes of affected-workflow-1625018400-3050482061 are [affected-workflow-1625018400-3894231772]" namespace=default workflow=affected-workflow-1625018400
    time="2021-06-30T08:03:26.898Z" level=info msg="SG Outbound nodes of affected-workflow-1625018400-483040858 are [affected-workflow-1625018400-3675982209]" namespace=default workflow=affected-workflow-1625018400
    time="2021-06-30T08:03:26.899Z" level=info msg="SG Outbound nodes of affected-workflow-1625018400-744350190 are [affected-workflow-1625018400-952860445]" namespace=default workflow=affected-workflow-1625018400
    E0630 08:03:27.386046       1 leaderelection.go:361] Failed to update lock: Put "https://10.47.240.1:443/apis/coordination.k8s.io/v1/namespaces/argo/leases/workflow-controller": context deadline exceeded
    I0630 08:03:27.681192       1 leaderelection.go:278] failed to renew lease argo/workflow-controller: timed out waiting for the condition
    time="2021-06-30T08:03:28.071Z" level=info msg="Update leases 409"
    time="2021-06-30T08:03:29.474Z" level=info msg="Create events 201"
    time="2021-06-30T08:03:31.003Z" level=info msg="cleaning up pod" action=labelPodCompleted key=default/workflow-working-fine-1625040000-957740190/labelPodCompleted
    E0630 08:03:31.031922       1 leaderelection.go:301] Failed to release lock: Operation cannot be fulfilled on leases.coordination.k8s.io "workflow-controller": the object has been modified; please apply your changes to the latest version and try again
    time="2021-06-30T08:03:31.477Z" level=info msg="stopped leading" id=workflow-controller-7c568955d7-bxtdj
    E0630 08:03:31.640872       1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
    goroutine 78 [running]:
    k8s.io/apimachinery/pkg/util/runtime.logPanic(0x1b6eb00, 0x2c9aa80)
    	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:74 +0x95
    k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:48 +0x89
    panic(0x1b6eb00, 0x2c9aa80)
    	/usr/local/go/src/runtime/panic.go:969 +0x1b9
    github.com/argoproj/argo-workflows/v3/workflow/controller.(*WorkflowController).Run.func2()
    	/go/src/github.com/argoproj/argo-workflows/workflow/controller/controller.go:256 +0x7c
    k8s.io/client-go/tools/leaderelection.(*LeaderElector).Run.func1(0xc0006547e0)
    	/go/pkg/mod/k8s.io/[email protected]/tools/leaderelection/leaderelection.go:200 +0x29
    k8s.io/client-go/tools/leaderelection.(*LeaderElector).Run(0xc0006547e0, 0x206c3e0, 0xc0002a48c0)
    	/go/pkg/mod/k8s.io/[email protected]/tools/leaderelection/leaderelection.go:210 +0x15d
    k8s.io/client-go/tools/leaderelection.RunOrDie(0x206c3e0, 0xc000522d40, 0x2078780, 0xc00085e000, 0x37e11d600, 0x2540be400, 0x12a05f200, 0xc000cb9c00, 0xc000a6a700, 0xc001096060, ...)
    	/go/pkg/mod/k8s.io/[email protected]/tools/leaderelection/leaderelection.go:222 +0x9c
    created by github.com/argoproj/argo-workflows/v3/workflow/controller.(*WorkflowController).Run
    	/go/src/github.com/argoproj/argo-workflows/workflow/controller/controller.go:241 +0xfb8
    panic: runtime error: invalid memory address or nil pointer dereference [recovered]
    	panic: runtime error: invalid memory address or nil pointer dereference
    [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x19cf4fc]
    
    goroutine 78 [running]:
    k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:55 +0x10c
    panic(0x1b6eb00, 0x2c9aa80)
    	/usr/local/go/src/runtime/panic.go:969 +0x1b9
    github.com/argoproj/argo-workflows/v3/workflow/controller.(*WorkflowController).Run.func2()
    	/go/src/github.com/argoproj/argo-workflows/workflow/controller/controller.go:256 +0x7c
    k8s.io/client-go/tools/leaderelection.(*LeaderElector).Run.func1(0xc0006547e0)
    	/go/pkg/mod/k8s.io/[email protected]/tools/leaderelection/leaderelection.go:200 +0x29
    k8s.io/client-go/tools/leaderelection.(*LeaderElector).Run(0xc0006547e0, 0x206c3e0, 0xc0002a48c0)
    	/go/pkg/mod/k8s.io/[email protected]/tools/leaderelection/leaderelection.go:210 +0x15d
    k8s.io/client-go/tools/leaderelection.RunOrDie(0x206c3e0, 0xc000522d40, 0x2078780, 0xc00085e000, 0x37e11d600, 0x2540be400, 0x12a05f200, 0xc000cb9c00, 0xc000a6a700, 0xc001096060, ...)
    	/go/pkg/mod/k8s.io/[email protected]/tools/leaderelection/leaderelection.go:222 +0x9c
    created by github.com/argoproj/argo-workflows/v3/workflow/controller.(*WorkflowController).Run
    	/go/src/github.com/argoproj/argo-workflows/workflow/controller/controller.go:241 +0xfb8
    
    # Other than that is acting like this: 
    time="2021-06-30T11:59:37.960Z" level=info msg="template (node failing-workflow-1625018400-2046183375) active children parallelism reached 3/3" namespace=default workflow=failing-workflow-1625018400
    time="2021-06-30T11:59:37.960Z" level=info msg="template (node failing-workflow-1625018400-2046183375) active children parallelism reached 3/3" namespace=default workflow=failing-workflow-1625018400
    time="2021-06-30T11:59:37.961Z" level=info msg="template (node failing-workflow-1625018400-2046183375) active children parallelism reached 3/3" namespace=default workflow=failing-workflow-1625018400
    time="2021-06-30T11:59:37.962Z" level=info msg="template (node failing-workflow-1625018400-2046183375) active children parallelism reached 3/3" namespace=default workflow=failing-workflow-1625018400
    time="2021-06-30T11:59:37.963Z" level=info msg="template (node failing-workflow-1625018400-2046183375) active children parallelism reached 3/3" namespace=default workflow=failing-workflow-1625018400
    time="2021-06-30T11:59:37.963Z" level=info msg="template (node failing-workflow-1625018400-2046183375) active children parallelism reached 3/3" namespace=default workflow=failing-workflow-1625018400
    time="2021-06-30T11:59:37.964Z" level=info msg="template (node failing-workflow-1625018400-2046183375) active children parallelism reached 3/3" namespace=default workflow=failing-workflow-1625018400
    time="2021-06-30T11:59:37.965Z" level=info msg="template (node failing-workflow-1625018400-2046183375) active children parallelism reached 3/3" namespace=default workflow=failing-workflow-1625018400
    time="2021-06-30T11:59:37.965Z" level=info msg="template (node failing-workflow-1625018400-2046183375) active children parallelism reached 3/3" namespace=default workflow=failing-workflow-1625018400
    time="2021-06-30T11:59:37.966Z" level=info msg="Patch events 200"
    time="2021-06-30T11:59:37.966Z" level=info msg="template (node failing-workflow-1625018400-2046183375) active children parallelism reached 3/3" namespace=default workflow=failing-workflow-1625018400
    time="2021-06-30T11:59:37.968Z" level=info msg="template (node failing-workflow-1625018400-2046183375) active children parallelism reached 3/3" namespace=default workflow=failing-workflow-1625018400
    time="2021-06-30T11:59:37.968Z" level=info msg="template (node failing-workflow-1625018400-2046183375) active children parallelism reached 3/3" namespace=default workflow=failing-workflow-1625018400
    time="2021-06-30T11:59:37.969Z" level=info msg="template (node failing-workflow-1625018400-2046183375) active children parallelism reached 3/3" namespace=default workflow=failing-workflow-1625018400
    time="2021-06-30T11:59:37.970Z" level=info msg="template (node failing-workflow-1625018400-2046183375) active children parallelism reached 3/3" namespace=default workflow=failing-workflow-1625018400
    time="2021-06-30T11:59:37.970Z" level=info msg="Workflow step group node failing-workflow-1625018400-1253377659 not yet completed" namespace=default workflow=failing-workflow-1625018400
    time="2021-06-30T11:59:37.970Z" level=info msg="Workflow step group node failing-workflow-1625018400-1369709231 not yet completed" namespace=default workflow=failing-workflow-1625018400
    time="2021-06-30T11:59:37.972Z" level=info msg="Patch events 200"
    time="2021-06-30T11:59:37.977Z" level=info msg="Patch events 200"
    time="2021-06-30T11:59:39.413Z" level=info msg="Get leases 200"
    time="2021-06-30T11:59:39.420Z" level=info msg="Update leases 200"
    time="2021-06-30T11:59:43.564Z" level=info msg="Enforcing history limit 
    
    # Pending pod main logs:
    Logs everything as expected and finishes. 
    
    # Logs from pending workflow's wait container, something like:
    time="2021-06-30T09:34:45.973Z" level=info msg="Starting Workflow Executor" version="{v3.0.3 2021-05-11T21:14:20Z 02071057c082cf295ab8da68f1b2027ff8762b5a v3.0.3 clean go1.
    15.7 gc linux/amd64}"
    time="2021-06-30T09:34:45.979Z" level=info msg="Creating a docker executor"
    time="2021-06-30T09:34:45.979Z" level=info msg="Executor (version: v3.0.3, build_date: 2021-05-11T21:14:20Z) initialized (pod: default/failing-workflow-workfl
    ow-1625018400-531201185) with template:\n{\"name\":\"fill-products\",\"inputs\":{\"parameters\":[{\"name\":\"products\",\"value\":\"products\"}]},\"outputs\":{},\"metadata\":{},\"container\":{\"name\":\"\",\"i
    mage\":\"us.gcr.io/spaceknow-backend/failing-workflow-workflow-base-delivery:287\",\"command\":[\"python3\"],\"args\":[\"deliveries/datacube-fill-products.py\
    ",\"--products\",\"products\"],\"envF
    rom\":[{\"secretRef\":{\"name\":\"failing-workflow-workflow\"}}],\"resources\":{\"limits\":{\"memory\":\"2Gi\"},\"requests\":{\"cpu\":\"2\",\"memory\":\"2Gi\"
    }}},\"retryStrategy\":{\"limit\":2,\"retryPolicy\":\"Always\"}}"
    time="2021-06-30T09:34:45.979Z" level=info msg="Starting annotations monitor"
    time="2021-06-30T09:34:45.979Z" level=info msg="docker ps --all --no-trunc --format={{.Status}}|{{.Label \"io.kubernetes.container.name\"}}|{{.ID}}|{{.CreatedAt}} --filter=
    label=io.kubernetes.pod.namespace=default --filter=label=io.kubernetes.pod.name=failing-workflow-workflow-1625018400-531201185"
    time="2021-06-30T09:34:45.980Z" level=info msg="Starting deadline monitor"
    time="2021-06-30T09:34:46.039Z" level=info msg="mapped container name \"main\" to container ID \"bcd7bcab1bd8ab36553933c1b41cb81deacfbab26a9a440f36360aecef06af6f\" (created
     at 2021-06-30 09:34:45 +0000 UTC, status Created)"
    time="2021-06-30T09:34:46.039Z" level=info msg="mapped container name \"wait\" to container ID \"34831534a3aefb25a5744dfd102c8060dc2369767cab729b343f3b18d375828e\" (created
     at 2021-06-30 09:34:45 +0000 UTC, status Up)"
    time="2021-06-30T09:34:46.980Z" level=info msg="docker wait bcd7bcab1bd8ab36553933c1b41cb81deacfbab26a9a440f36360aecef06af6f"
    time="2021-06-30T09:35:15.026Z" level=info msg="docker ps --all --no-trunc --format={{.Status}}|{{.Label \"io.kubernetes.container.name\"}}|{{.ID}}|{{.CreatedAt}} --filter=
    label=io.kubernetes.pod.namespace=default --filter=label=io.kubernetes.pod.name=failing-workflow-workflow-1625018400-531201185"
    time="2021-06-30T09:35:16.066Z" level=info msg="docker ps --all --no-trunc --format={{.Status}}|{{.Label \"io.kubernetes.container.name\"}}|{{.ID}}|{{.CreatedAt}} --filter=
    label=io.kubernetes.pod.namespace=default --filter=label=io.kubernetes.pod.name=failing-workflow-workflow-1625018400-531201185"
    time="2021-06-30T09:35:26.101Z" level=info msg="docker ps --all --no-trunc --format={{.Status}}|{{.Label \"io.kubernetes.container.name\"}}|{{.ID}}|{{.CreatedAt}} --filter=
    label=io.kubernetes.pod.namespace=default --filter=label=io.kubernetes.pod.name=failing-workflow-workflow-1625018400-531201185"
    time="2021-06-30T09:35:36.135Z" level=info msg="docker ps --all --no-trunc --format={{.Status}}|{{.Label \"io.kubernetes.container.name\"}}|{{.ID}}|{{.CreatedAt}} --filter=
    label=io.kubernetes.pod.namespace=default --filter=label=io.kubernetes.pod.name=failing-workflow-workflow-1625018400-531201185"
    time="2021-06-30T09:35:46.168Z" level=info msg="docker ps --all --no-trunc --format={{.Status}}|{{.Label \"io.kubernetes.container.name\"}}|{{.ID}}|{{.CreatedAt}} --filter=
    label=io.kubernetes.pod.namespace=default --filter=label=io.kubernetes.pod.name=failing-workflow-workflow-1625018400-531201185"
    time="2021-06-30T09:35:56.200Z" level=info msg="docker ps --all --no-trunc --format={{.Status}}|{{.Label \"io.kubernetes.container.name\"}}|{{.ID}}|{{.CreatedAt}} --filter=
    label=io.kubernetes.pod.namespace=default --filter=label=io.kubernetes.pod.name=failing-workflow-workflow-1625018400-531201185"
    time="2021-06-30T09:36:06.234Z" level=info msg="docker ps --all --no-trunc --format={{.Status}}|{{.Label \"io.kubernetes.container.name\"}}|{{.ID}}|{{.CreatedAt}} --filter=
    label=io.kubernetes.pod.namespace=default --filter=label=io.kubernetes.pod.name=failing-workflow-workflow-1625018400-531201185"
    time="2021-06-30T09:36:16.266Z" level=info msg="docker ps --all --no-trunc --format={{.Status}}|{{.Label \"io.kubernetes.container.name\"}}|{{.ID}}|{{.CreatedAt}} --filter=
    label=io.kubernetes.pod.namespace=default --filter=label=io.kubernetes.pod.name=failing-workflow-workflow-1625018400-531201185"
    time="2021-06-30T09:36:26.299Z" level=info msg="docker ps --all --no-trunc --format={{.Status}}|{{.Label \"io.kubernetes.container.name\"}}|{{.ID}}|{{.CreatedAt}} --filter=
    label=io.kubernetes.pod.namespace=default --filter=label=io.kubernetes.pod.name=failing-workflow-workflow-1625018400-531201185"
    time="2021-06-30T09:36:36.333Z" level=info msg="docker ps --all --no-trunc --format={{.Status}}|{{.Label \"io.kubernetes.container.name\"}}|{{.ID}}|{{.CreatedAt}} --filter=
    label=io.kubernetes.pod.namespace=default --filter=label=io.kubernetes.pod.name=failing-workflow-workflow-1625018400-531201185"
    time="2021-06-30T09:36:46.372Z" level=info msg="docker ps --all --no-trunc --format={{.Status}}|{{.Label \"io.kubernetes.container.name\"}}|{{.ID}}|{{.CreatedAt}} --filter=
    label=io.kubernetes.pod.namespace=default --filter=label=io.kubernetes.pod.name=failing-workflow-workflow-1625018400-531201185"
    time="2021-06-30T09:36:56.406Z" level=info msg="docker ps --all --no-trunc --format={{.Status}}|{{.Label \"io.kubernetes.container.name\"}}|{{.ID}}|{{.CreatedAt}} --filter=
    label=io.kubernetes.pod.namespace=default --filter=label=io.kubernetes.pod.name=failing-workflow-workflow-1625018400-531201185"
    time="2021-06-30T09:37:04.406Z" level=info msg="Main container completed"
    time="2021-06-30T09:37:04.406Z" level=info msg="No Script output reference in workflow. Capturing script output ignored"
    time="2021-06-30T09:37:04.406Z" level=info msg="Capturing script exit code"
    time="2021-06-30T09:37:04.438Z" level=info msg="No output parameters"
    time="2021-06-30T09:37:04.438Z" level=info msg="No output artifacts"
    time="2021-06-30T09:37:04.438Z" level=info msg="Annotating pod with output"
    time="2021-06-30T09:37:04.454Z" level=info msg="Patch pods 200"
    time="2021-06-30T09:37:04.464Z" level=info msg="Killing sidecars []"
    time="2021-06-30T09:37:04.464Z" level=info msg="Alloc=5435 TotalAlloc=11825 Sys=73041 NumGC=4 Goroutines=10"
    

    Message from the maintainers:

    Impacted by this bug? Give it a πŸ‘. We prioritise the issues with the most πŸ‘.

  • downloading artifact from s3 in ui, timed out waiting for condition

    Checklist:

    • [x] I've included the version.
    • [x] I've included reproduction steps.
    • [] I've included the workflow YAML.
    • [x] I've included the logs.

    What happened: Installed the latest 2.5.0-rc7 via install.yaml on EKS 1.14 and added the diff shown in the output below to install.yaml, so archiveLogs and the s3 config are enabled (workflow-controller-configmap):

    336,344d322
    < data:
    <   config: |
    <     artifactRepository:
    <       archiveLogs: true
    <       s3:
    <         bucket: "example-argo"
    <         keyPrefix: "example"
    <         endpoint: "s3.amazonaws.com"
    < 
    

    To gain UI access on localhost, port-forward the argo-server service in Kubernetes: kubectl port-forward svc/argo-server 2746:2746 -n argo

    Run a basic hello-world workflow via the argo CLI; the workflow completes as expected, and clicking the artifacts link in the UI shows the main-logs object as expected. But when you click to download the actual artifact in the UI, the browser eventually returns a "timed out waiting on condition".

    What you expected to happen: I expect clicking on the link to download the requested artifact.

    How to reproduce it (as minimally and precisely as possible): use install.yaml with an s3 config similar to the above, run any workflow, and then try to download the resulting main-logs artifact.

    Logs argo-server log shows:

    time="2020-01-31T23:02:06Z" level=info msg="S3 Load path: artifact368826374, key: example/local-script-gd5zj/local-script-gd5zj/main.log"
    time="2020-01-31T23:02:06Z" level=info msg="Creating minio client s3.amazonaws.com using IAM role"
    time="2020-01-31T23:02:06Z" level=info msg="Getting from s3 (endpoint: s3.amazonaws.com, bucket: example-argo, key: example/local-script-gd5zj/local-script-gd5zj/main.log) to artifact368826374"
    time="2020-01-31T23:02:06Z" level=warning msg="Failed get file: Get https://s3.amazonaws.com/example-argo/?location=: x509: certificate signed by unknown authority"
    

    Message from the maintainers:

    If you are impacted by this bug please add a πŸ‘ reaction to this issue! We often sort issues this way to know what to prioritize.

  • Workflow - Could not get container status

    Summary

    What happened/what you expected to happen?

    The workflow should successfully run all of the triggered tasks to completion. The anomalous behaviour was not easy to reproduce outside of the cluster, and it does not happen every single time, so the following configuration may help to reproduce it:

    argo-server: 3 replicas
    executor: pns
    parallelism: 1000
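
    The executor and parallelism settings above are configured in the `workflow-controller-configmap` (a sketch; only the two keys listed above come from the report, the surrounding ConfigMap structure is assumed):

    ```yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: workflow-controller-configmap
      namespace: argo
    data:
      config: |
        # Use the Process Namespace Sharing executor for all workflow pods
        containerRuntimeExecutor: pns
        # Limit the number of workflows that can run concurrently
        parallelism: 1000
    ```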
    

    Diagnostics

    What Kubernetes provider are you using? GKE or Kind

    What version of Argo Workflows are you running?

    ❯ argo version
    argo: v3.0.0-rc3
      BuildDate: 2021-02-23T21:06:58Z
      GitCommit: c0c364c229e3b72306bd0b73161df090d24e0c31
      GitTreeState: clean
      GitTag: v3.0.0-rc3
      GoVersion: go1.13
      Compiler: gc
      Platform: darwin/amd64
    

    Workflow template

    apiVersion: argoproj.io/v1alpha1
    kind: Workflow
    metadata:
      creationTimestamp: "2021-03-02T00:36:25Z"
      generateName: ci-
      generation: 10
      labels:
        submit-from-ui: "true"
        workflows.argoproj.io/completed: "true"
        workflows.argoproj.io/creator: system-serviceaccount-argo-argo-server
        workflows.argoproj.io/phase: Error
        workflows.argoproj.io/workflow-template: ci
      managedFields:
      - apiVersion: argoproj.io/v1alpha1
        fieldsType: FieldsV1
        fieldsV1:
          f:metadata:
            f:generateName: {}
            f:labels:
              .: {}
              f:submit-from-ui: {}
              f:workflows.argoproj.io/creator: {}
              f:workflows.argoproj.io/workflow-template: {}
          f:spec:
            .: {}
            f:arguments: {}
            f:entrypoint: {}
            f:workflowTemplateRef: {}
          f:status:
            .: {}
            f:storedTemplates: {}
        manager: argo
        operation: Update
        time: "2021-03-02T00:36:25Z"
      - apiVersion: argoproj.io/v1alpha1
        fieldsType: FieldsV1
        fieldsV1:
          f:metadata:
            f:labels:
              f:workflows.argoproj.io/completed: {}
              f:workflows.argoproj.io/phase: {}
          f:status:
            f:artifactRepositoryRef: {}
            f:conditions: {}
            f:finishedAt: {}
            f:nodes: {}
            f:phase: {}
            f:progress: {}
            f:resourcesDuration: {}
            f:startedAt: {}
            f:storedWorkflowTemplateSpec: {}
        manager: workflow-controller
        operation: Update
        time: "2021-03-02T00:37:47Z"
      name: ci-t8vq2
      namespace: argo
      resourceVersion: "119304"
      uid: fb296b31-b810-42c0-90bc-c4e8bb879b22
    spec:
      arguments: {}
      entrypoint: main
      workflowTemplateRef:
        name: ci
    status:
      artifactRepositoryRef:
        default: true
      conditions:
      - status: "False"
        type: PodRunning
      - status: "True"
        type: Completed
      finishedAt: "2021-03-02T00:37:47Z"
      nodes:
        ci-t8vq2:
          children:
          - ci-t8vq2-28546012
          - ci-t8vq2-212400199
          displayName: ci-t8vq2
          finishedAt: "2021-03-02T00:37:47Z"
          id: ci-t8vq2
          name: ci-t8vq2
          outboundNodes:
          - ci-t8vq2-2699866519
          - ci-t8vq2-2921795641
          - ci-t8vq2-935633491
          - ci-t8vq2-2460421745
          - ci-t8vq2-1497633967
          - ci-t8vq2-2529876561
          - ci-t8vq2-2517646915
          - ci-t8vq2-2594947737
          - ci-t8vq2-2608532967
          - ci-t8vq2-3834947628
          - ci-t8vq2-1156673745
          - ci-t8vq2-2620463353
          - ci-t8vq2-3303001669
          - ci-t8vq2-139394661
          - ci-t8vq2-1087124617
          - ci-t8vq2-1466005665
          - ci-t8vq2-3097822101
          - ci-t8vq2-3815346741
          - ci-t8vq2-3045333153
          - ci-t8vq2-2135555749
          - ci-t8vq2-1671854890
          - ci-t8vq2-2189885778
          - ci-t8vq2-2454096578
          - ci-t8vq2-3752440098
          - ci-t8vq2-1958506170
          - ci-t8vq2-569544386
          - ci-t8vq2-743674290
          - ci-t8vq2-553546802
          - ci-t8vq2-3806784823
          - ci-t8vq2-13895053
          - ci-t8vq2-2653728782
          - ci-t8vq2-3144997424
          - ci-t8vq2-2221870794
          - ci-t8vq2-2998710952
          - ci-t8vq2-924300910
          - ci-t8vq2-2687447728
          - ci-t8vq2-441930226
          - ci-t8vq2-2027865656
          - ci-t8vq2-857446365
          - ci-t8vq2-1089779475
          - ci-t8vq2-1533362641
          - ci-t8vq2-2193859387
          - ci-t8vq2-2924122029
          - ci-t8vq2-245955437
          - ci-t8vq2-3119937059
          - ci-t8vq2-3119595669
          - ci-t8vq2-418065567
          - ci-t8vq2-2120215045
          - ci-t8vq2-4250566955
          - ci-t8vq2-145289723
          - ci-t8vq2-1642984369
          - ci-t8vq2-1325696727
          - ci-t8vq2-3992325445
          - ci-t8vq2-3744868327
          - ci-t8vq2-1719834657
          - ci-t8vq2-178844961
          phase: Error
          progress: 57/57
          resourcesDuration:
            cpu: 889
            memory: 889
          startedAt: "2021-03-02T00:36:26Z"
          templateName: main
          templateScope: local/
          type: DAG
        ci-t8vq2-13895053:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(29:context:L)
          finishedAt: "2021-03-02T00:36:54Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-13895053
          inputs:
            parameters:
            - name: message
              value: L
          name: ci-t8vq2.echo-multiple-times(29:context:L)
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 17
            memory: 17
          startedAt: "2021-03-02T00:36:27Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-28546012:
          boundaryID: ci-t8vq2
          children:
          - ci-t8vq2-2699866519
          - ci-t8vq2-2921795641
          - ci-t8vq2-935633491
          - ci-t8vq2-2460421745
          - ci-t8vq2-1497633967
          - ci-t8vq2-2529876561
          - ci-t8vq2-2517646915
          - ci-t8vq2-2594947737
          - ci-t8vq2-2608532967
          - ci-t8vq2-3834947628
          - ci-t8vq2-1156673745
          - ci-t8vq2-2620463353
          - ci-t8vq2-3303001669
          - ci-t8vq2-139394661
          - ci-t8vq2-1087124617
          - ci-t8vq2-1466005665
          - ci-t8vq2-3097822101
          - ci-t8vq2-3815346741
          - ci-t8vq2-3045333153
          - ci-t8vq2-2135555749
          - ci-t8vq2-1671854890
          - ci-t8vq2-2189885778
          - ci-t8vq2-2454096578
          - ci-t8vq2-3752440098
          - ci-t8vq2-1958506170
          - ci-t8vq2-569544386
          - ci-t8vq2-743674290
          - ci-t8vq2-553546802
          - ci-t8vq2-3806784823
          - ci-t8vq2-13895053
          - ci-t8vq2-2653728782
          - ci-t8vq2-3144997424
          - ci-t8vq2-2221870794
          - ci-t8vq2-2998710952
          - ci-t8vq2-924300910
          - ci-t8vq2-2687447728
          - ci-t8vq2-441930226
          - ci-t8vq2-2027865656
          displayName: echo-multiple-times
          finishedAt: "2021-03-02T00:37:26Z"
          id: ci-t8vq2-28546012
          name: ci-t8vq2.echo-multiple-times
          phase: Succeeded
          progress: 38/38
          resourcesDuration:
            cpu: 705
            memory: 705
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: TaskGroup
        ci-t8vq2-128512104:
          boundaryID: ci-t8vq2
          children:
          - ci-t8vq2-3119937059
          - ci-t8vq2-3119595669
          - ci-t8vq2-418065567
          - ci-t8vq2-2120215045
          - ci-t8vq2-4250566955
          displayName: task4
          finishedAt: "2021-03-02T00:37:47Z"
          id: ci-t8vq2-128512104
          name: ci-t8vq2.task4
          phase: Succeeded
          progress: 5/5
          resourcesDuration:
            cpu: 42
            memory: 42
          startedAt: "2021-03-02T00:37:16Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: TaskGroup
        ci-t8vq2-139394661:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(13:context:O)
          finishedAt: "2021-03-02T00:36:46Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-139394661
          inputs:
            parameters:
            - name: message
              value: O
          name: ci-t8vq2.echo-multiple-times(13:context:O)
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 13
            memory: 13
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-145289723:
          boundaryID: ci-t8vq2
          displayName: task5
          finishedAt: "2021-03-02T00:37:33Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-145289723
          inputs:
            parameters:
            - name: message
              value: task5
          name: ci-t8vq2.task5
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 10
            memory: 10
          startedAt: "2021-03-02T00:37:17Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-162067342:
          boundaryID: ci-t8vq2
          children:
          - ci-t8vq2-1642984369
          - ci-t8vq2-1325696727
          - ci-t8vq2-3992325445
          - ci-t8vq2-3744868327
          - ci-t8vq2-1719834657
          displayName: task6
          finishedAt: "2021-03-02T00:37:47Z"
          id: ci-t8vq2-162067342
          name: ci-t8vq2.task6
          phase: Succeeded
          progress: 5/5
          resourcesDuration:
            cpu: 46
            memory: 46
          startedAt: "2021-03-02T00:37:17Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: TaskGroup
        ci-t8vq2-178844961:
          boundaryID: ci-t8vq2
          displayName: task7
          finishedAt: "2021-03-02T00:37:35Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-178844961
          inputs:
            parameters:
            - name: message
              value: task7
          name: ci-t8vq2.task7
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 11
            memory: 11
          startedAt: "2021-03-02T00:37:17Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-212400199:
          boundaryID: ci-t8vq2
          children:
          - ci-t8vq2-229177818
          - ci-t8vq2-245955437
          - ci-t8vq2-128512104
          - ci-t8vq2-145289723
          - ci-t8vq2-162067342
          - ci-t8vq2-178844961
          displayName: task1
          finishedAt: "2021-03-02T00:37:08Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-212400199
          inputs:
            parameters:
            - name: message
              value: task1
          name: ci-t8vq2.task1
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 26
            memory: 26
          startedAt: "2021-03-02T00:36:27Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-229177818:
          boundaryID: ci-t8vq2
          children:
          - ci-t8vq2-857446365
          - ci-t8vq2-1089779475
          - ci-t8vq2-1533362641
          - ci-t8vq2-2193859387
          - ci-t8vq2-2924122029
          displayName: task2
          finishedAt: "2021-03-02T00:37:47Z"
          id: ci-t8vq2-229177818
          name: ci-t8vq2.task2
          phase: Error
          progress: 5/5
          resourcesDuration:
            cpu: 42
            memory: 42
          startedAt: "2021-03-02T00:37:16Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: TaskGroup
        ci-t8vq2-245955437:
          boundaryID: ci-t8vq2
          displayName: task3
          finishedAt: "2021-03-02T00:37:29Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-245955437
          inputs:
            parameters:
            - name: message
              value: task3
          name: ci-t8vq2.task3
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 7
            memory: 7
          startedAt: "2021-03-02T00:37:16Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-418065567:
          boundaryID: ci-t8vq2
          displayName: task4(2:context:C)
          finishedAt: "2021-03-02T00:37:26Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-418065567
          inputs:
            parameters:
            - name: message
              value: task4
          name: ci-t8vq2.task4(2:context:C)
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 5
            memory: 5
          startedAt: "2021-03-02T00:37:17Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-441930226:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(36:context:S)
          finishedAt: "2021-03-02T00:37:08Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-441930226
          inputs:
            parameters:
            - name: message
              value: S
          name: ci-t8vq2.echo-multiple-times(36:context:S)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 28
            memory: 28
          startedAt: "2021-03-02T00:36:27Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-553546802:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(27:context:I)
          finishedAt: "2021-03-02T00:37:02Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-553546802
          inputs:
            parameters:
            - name: message
              value: I
          name: ci-t8vq2.echo-multiple-times(27:context:I)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 23
            memory: 23
          startedAt: "2021-03-02T00:36:27Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-569544386:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(25:context:G)
          finishedAt: "2021-03-02T00:37:01Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-569544386
          inputs:
            parameters:
            - name: message
              value: G
          name: ci-t8vq2.echo-multiple-times(25:context:G)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 25
            memory: 25
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-743674290:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(26:context:H)
          finishedAt: "2021-03-02T00:36:56Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-743674290
          inputs:
            parameters:
            - name: message
              value: H
          name: ci-t8vq2.echo-multiple-times(26:context:H)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 16
            memory: 16
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-857446365:
          boundaryID: ci-t8vq2
          displayName: task2(0:context:A)
          finishedAt: "2021-03-02T00:37:29Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-857446365
          inputs:
            parameters:
            - name: message
              value: task2
          name: ci-t8vq2.task2(0:context:A)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 9
            memory: 9
          startedAt: "2021-03-02T00:37:16Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-924300910:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(34:context:Q)
          finishedAt: "2021-03-02T00:37:03Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-924300910
          inputs:
            parameters:
            - name: message
              value: Q
          name: ci-t8vq2.echo-multiple-times(34:context:Q)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 24
            memory: 24
          startedAt: "2021-03-02T00:36:27Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-935633491:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(2:context:C)
          finishedAt: "2021-03-02T00:36:45Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-935633491
          inputs:
            parameters:
            - name: message
              value: C
          name: ci-t8vq2.echo-multiple-times(2:context:C)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 18
            memory: 18
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-1087124617:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(14:context:P)
          finishedAt: "2021-03-02T00:36:49Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-1087124617
          inputs:
            parameters:
            - name: message
              value: P
          name: ci-t8vq2.echo-multiple-times(14:context:P)
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 14
            memory: 14
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-1089779475:
          boundaryID: ci-t8vq2
          displayName: task2(1:context:B)
          finishedAt: "2021-03-02T00:37:31Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-1089779475
          inputs:
            parameters:
            - name: message
              value: task2
          name: ci-t8vq2.task2(1:context:B)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 12
            memory: 12
          startedAt: "2021-03-02T00:37:16Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-1156673745:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(10:context:L)
          finishedAt: "2021-03-02T00:36:53Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-1156673745
          inputs:
            parameters:
            - name: message
              value: L
          name: ci-t8vq2.echo-multiple-times(10:context:L)
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 20
            memory: 20
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-1325696727:
          boundaryID: ci-t8vq2
          displayName: task6(1:context:B)
          finishedAt: "2021-03-02T00:37:33Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-1325696727
          inputs:
            parameters:
            - name: message
              value: task6
          name: ci-t8vq2.task6(1:context:B)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 11
            memory: 11
          startedAt: "2021-03-02T00:37:17Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-1466005665:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(15:context:Q)
          finishedAt: "2021-03-02T00:36:46Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-1466005665
          inputs:
            parameters:
            - name: message
              value: Q
          name: ci-t8vq2.echo-multiple-times(15:context:Q)
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 12
            memory: 12
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-1497633967:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(4:context:E)
          finishedAt: "2021-03-02T00:36:52Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-1497633967
          inputs:
            parameters:
            - name: message
              value: E
          name: ci-t8vq2.echo-multiple-times(4:context:E)
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 20
            memory: 20
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-1533362641:
          boundaryID: ci-t8vq2
          displayName: task2(2:context:C)
          finishedAt: "2021-03-02T00:37:24Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-1533362641
          inputs:
            parameters:
            - name: message
              value: task2
          message: 'Error (exit code 1): Could not get container status'
          name: ci-t8vq2.task2(2:context:C)
          phase: Error
          progress: 1/1
          resourcesDuration:
            cpu: 4
            memory: 4
          startedAt: "2021-03-02T00:37:16Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-1642984369:
          boundaryID: ci-t8vq2
          displayName: task6(0:context:A)
          finishedAt: "2021-03-02T00:37:27Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-1642984369
          inputs:
            parameters:
            - name: message
              value: task6
          name: ci-t8vq2.task6(0:context:A)
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 5
            memory: 5
          startedAt: "2021-03-02T00:37:17Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-1671854890:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(20:context:B)
          finishedAt: "2021-03-02T00:36:48Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-1671854890
          inputs:
            parameters:
            - name: message
              value: B
          name: ci-t8vq2.echo-multiple-times(20:context:B)
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 13
            memory: 13
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-1719834657:
          boundaryID: ci-t8vq2
          displayName: task6(4:context:E)
          finishedAt: "2021-03-02T00:37:32Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-1719834657
          inputs:
            parameters:
            - name: message
              value: task6
          name: ci-t8vq2.task6(4:context:E)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 11
            memory: 11
          startedAt: "2021-03-02T00:37:17Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-1958506170:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(24:context:F)
          finishedAt: "2021-03-02T00:36:50Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-1958506170
          inputs:
            parameters:
            - name: message
              value: F
          name: ci-t8vq2.echo-multiple-times(24:context:F)
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 14
            memory: 14
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-2027865656:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(37:context:T)
          finishedAt: "2021-03-02T00:37:02Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-2027865656
          inputs:
            parameters:
            - name: message
              value: T
          name: ci-t8vq2.echo-multiple-times(37:context:T)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 24
            memory: 24
          startedAt: "2021-03-02T00:36:27Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-2120215045:
          boundaryID: ci-t8vq2
          displayName: task4(3:context:D)
          finishedAt: "2021-03-02T00:37:28Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-2120215045
          inputs:
            parameters:
            - name: message
              value: task4
          name: ci-t8vq2.task4(3:context:D)
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 8
            memory: 8
          startedAt: "2021-03-02T00:37:17Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-2135555749:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(19:context:A)
          finishedAt: "2021-03-02T00:36:47Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-2135555749
          inputs:
            parameters:
            - name: message
              value: A
          name: ci-t8vq2.echo-multiple-times(19:context:A)
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 13
            memory: 13
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-2189885778:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(21:context:C)
          finishedAt: "2021-03-02T00:36:55Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-2189885778
          inputs:
            parameters:
            - name: message
              value: C
          name: ci-t8vq2.echo-multiple-times(21:context:C)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 19
            memory: 19
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-2193859387:
          boundaryID: ci-t8vq2
          displayName: task2(3:context:D)
          finishedAt: "2021-03-02T00:37:30Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-2193859387
          inputs:
            parameters:
            - name: message
              value: task2
          name: ci-t8vq2.task2(3:context:D)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 9
            memory: 9
          startedAt: "2021-03-02T00:37:16Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-2221870794:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(32:context:O)
          finishedAt: "2021-03-02T00:36:53Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-2221870794
          inputs:
            parameters:
            - name: message
              value: O
          name: ci-t8vq2.echo-multiple-times(32:context:O)
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 17
            memory: 17
          startedAt: "2021-03-02T00:36:27Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-2454096578:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(22:context:D)
          finishedAt: "2021-03-02T00:36:59Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-2454096578
          inputs:
            parameters:
            - name: message
              value: D
          name: ci-t8vq2.echo-multiple-times(22:context:D)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 22
            memory: 22
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-2460421745:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(3:context:D)
          finishedAt: "2021-03-02T00:36:49Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-2460421745
          inputs:
            parameters:
            - name: message
              value: D
          name: ci-t8vq2.echo-multiple-times(3:context:D)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 18
            memory: 18
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-2517646915:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(6:context:G)
          finishedAt: "2021-03-02T00:36:46Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-2517646915
          inputs:
            parameters:
            - name: message
              value: G
          name: ci-t8vq2.echo-multiple-times(6:context:G)
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 13
            memory: 13
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-2529876561:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(5:context:F)
          finishedAt: "2021-03-02T00:36:48Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-2529876561
          inputs:
            parameters:
            - name: message
              value: F
          name: ci-t8vq2.echo-multiple-times(5:context:F)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 18
            memory: 18
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-2594947737:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(7:context:H)
          finishedAt: "2021-03-02T00:36:48Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-2594947737
          inputs:
            parameters:
            - name: message
              value: H
          name: ci-t8vq2.echo-multiple-times(7:context:H)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 18
            memory: 18
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-2608532967:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(8:context:I)
          finishedAt: "2021-03-02T00:36:52Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-2608532967
          inputs:
            parameters:
            - name: message
              value: I
          name: ci-t8vq2.echo-multiple-times(8:context:I)
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 18
            memory: 18
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-2620463353:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(11:context:M)
          finishedAt: "2021-03-02T00:36:50Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-2620463353
          inputs:
            parameters:
            - name: message
              value: M
          name: ci-t8vq2.echo-multiple-times(11:context:M)
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 14
            memory: 14
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-2653728782:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(30:context:M)
          finishedAt: "2021-03-02T00:37:05Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-2653728782
          inputs:
            parameters:
            - name: message
              value: M
          name: ci-t8vq2.echo-multiple-times(30:context:M)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 25
            memory: 25
          startedAt: "2021-03-02T00:36:27Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-2687447728:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(35:context:R)
          finishedAt: "2021-03-02T00:37:02Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-2687447728
          inputs:
            parameters:
            - name: message
              value: R
          name: ci-t8vq2.echo-multiple-times(35:context:R)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 26
            memory: 26
          startedAt: "2021-03-02T00:36:27Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-2699866519:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(0:context:A)
          finishedAt: "2021-03-02T00:36:54Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-2699866519
          inputs:
            parameters:
            - name: message
              value: A
          name: ci-t8vq2.echo-multiple-times(0:context:A)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 26
            memory: 26
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-2921795641:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(1:context:B)
          finishedAt: "2021-03-02T00:36:45Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-2921795641
          inputs:
            parameters:
            - name: message
              value: B
          name: ci-t8vq2.echo-multiple-times(1:context:B)
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 18
            memory: 18
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-2924122029:
          boundaryID: ci-t8vq2
          displayName: task2(4:context:E)
          finishedAt: "2021-03-02T00:37:28Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-2924122029
          inputs:
            parameters:
            - name: message
              value: task2
          name: ci-t8vq2.task2(4:context:E)
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 8
            memory: 8
          startedAt: "2021-03-02T00:37:16Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-2998710952:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(33:context:P)
          finishedAt: "2021-03-02T00:37:05Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-2998710952
          inputs:
            parameters:
            - name: message
              value: P
          name: ci-t8vq2.echo-multiple-times(33:context:P)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 29
            memory: 29
          startedAt: "2021-03-02T00:36:27Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-3045333153:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(18:context:T)
          finishedAt: "2021-03-02T00:36:51Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-3045333153
          inputs:
            parameters:
            - name: message
              value: T
          name: ci-t8vq2.echo-multiple-times(18:context:T)
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 17
            memory: 17
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-3097822101:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(16:context:R)
          finishedAt: "2021-03-02T00:36:45Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-3097822101
          inputs:
            parameters:
            - name: message
              value: R
          name: ci-t8vq2.echo-multiple-times(16:context:R)
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 12
            memory: 12
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-3119595669:
          boundaryID: ci-t8vq2
          displayName: task4(1:context:B)
          finishedAt: "2021-03-02T00:37:31Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-3119595669
          inputs:
            parameters:
            - name: message
              value: task4
          name: ci-t8vq2.task4(1:context:B)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 8
            memory: 8
          startedAt: "2021-03-02T00:37:17Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-3119937059:
          boundaryID: ci-t8vq2
          displayName: task4(0:context:A)
          finishedAt: "2021-03-02T00:37:34Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-3119937059
          inputs:
            parameters:
            - name: message
              value: task4
          name: ci-t8vq2.task4(0:context:A)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 12
            memory: 12
          startedAt: "2021-03-02T00:37:16Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-3144997424:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(31:context:N)
          finishedAt: "2021-03-02T00:37:09Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-3144997424
          inputs:
            parameters:
            - name: message
              value: "N"
          name: ci-t8vq2.echo-multiple-times(31:context:N)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 29
            memory: 29
          startedAt: "2021-03-02T00:36:27Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-3303001669:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(12:context:N)
          finishedAt: "2021-03-02T00:36:52Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-3303001669
          inputs:
            parameters:
            - name: message
              value: "N"
          name: ci-t8vq2.echo-multiple-times(12:context:N)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 20
            memory: 20
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-3744868327:
          boundaryID: ci-t8vq2
          displayName: task6(3:context:D)
          finishedAt: "2021-03-02T00:37:34Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-3744868327
          inputs:
            parameters:
            - name: message
              value: task6
          name: ci-t8vq2.task6(3:context:D)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 11
            memory: 11
          startedAt: "2021-03-02T00:37:17Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-3752440098:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(23:context:E)
          finishedAt: "2021-03-02T00:36:45Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-3752440098
          inputs:
            parameters:
            - name: message
              value: E
          name: ci-t8vq2.echo-multiple-times(23:context:E)
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 9
            memory: 9
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-3806784823:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(28:context:K)
          finishedAt: "2021-03-02T00:36:50Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-3806784823
          inputs:
            parameters:
            - name: message
              value: K
          name: ci-t8vq2.echo-multiple-times(28:context:K)
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 15
            memory: 15
          startedAt: "2021-03-02T00:36:27Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-3815346741:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(17:context:S)
          finishedAt: "2021-03-02T00:36:51Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-3815346741
          inputs:
            parameters:
            - name: message
              value: S
          name: ci-t8vq2.echo-multiple-times(17:context:S)
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 16
            memory: 16
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-3834947628:
          boundaryID: ci-t8vq2
          displayName: echo-multiple-times(9:context:K)
          finishedAt: "2021-03-02T00:36:47Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-3834947628
          inputs:
            parameters:
            - name: message
              value: K
          name: ci-t8vq2.echo-multiple-times(9:context:K)
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 12
            memory: 12
          startedAt: "2021-03-02T00:36:26Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-3992325445:
          boundaryID: ci-t8vq2
          displayName: task6(2:context:C)
          finishedAt: "2021-03-02T00:37:27Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-3992325445
          inputs:
            parameters:
            - name: message
              value: task6
          name: ci-t8vq2.task6(2:context:C)
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 8
            memory: 8
          startedAt: "2021-03-02T00:37:17Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
        ci-t8vq2-4250566955:
          boundaryID: ci-t8vq2
          displayName: task4(4:context:E)
          finishedAt: "2021-03-02T00:37:30Z"
          hostNodeName: kind-control-plane
          id: ci-t8vq2-4250566955
          inputs:
            parameters:
            - name: message
              value: task4
          name: ci-t8vq2.task4(4:context:E)
          outputs:
            exitCode: "0"
          phase: Succeeded
          progress: 1/1
          resourcesDuration:
            cpu: 9
            memory: 9
          startedAt: "2021-03-02T00:37:17Z"
          templateRef:
            name: base
            template: main
          templateScope: local/
          type: Pod
      phase: Error
      progress: 57/57
      resourcesDuration:
        cpu: 889
        memory: 889
      startedAt: "2021-03-02T00:36:25Z"
      storedTemplates:
        namespaced/base/main:
          container:
            args:
            - echo {{inputs.parameters.message}}
            command:
            - sh
            - -c
            image: curlimages/curl:7.75.0
            name: ""
            resources: {}
          inputs:
            parameters:
            - name: message
          metadata: {}
          name: main
          outputs: {}
        namespaced/ci/main:
          dag:
            tasks:
            - arguments:
                parameters:
                - name: message
                  value: task1
              name: task1
              templateRef:
                name: base
                template: main
            - arguments:
                parameters:
                - name: message
                  value: '{{item.context}}'
              name: echo-multiple-times
              templateRef:
                name: base
                template: main
              withItems:
              - context: A
              - context: B
              - context: C
              - context: D
              - context: E
              - context: F
              - context: G
              - context: H
              - context: I
              - context: K
              - context: L
              - context: M
              - context: "N"
              - context: O
              - context: P
              - context: Q
              - context: R
              - context: S
              - context: T
              - context: A
              - context: B
              - context: C
              - context: D
              - context: E
              - context: F
              - context: G
              - context: H
              - context: I
              - context: K
              - context: L
              - context: M
              - context: "N"
              - context: O
              - context: P
              - context: Q
              - context: R
              - context: S
              - context: T
            - arguments:
                parameters:
                - name: message
                  value: task2
              dependencies:
              - task1
              name: task2
              templateRef:
                name: base
                template: main
              withItems:
              - context: A
              - context: B
              - context: C
              - context: D
              - context: E
            - arguments:
                parameters:
                - name: message
                  value: task3
              dependencies:
              - task1
              name: task3
              templateRef:
                name: base
                template: main
            - arguments:
                parameters:
                - name: message
                  value: task4
              dependencies:
              - task1
              name: task4
              templateRef:
                name: base
                template: main
              withItems:
              - context: A
              - context: B
              - context: C
              - context: D
              - context: E
            - arguments:
                parameters:
                - name: message
                  value: task5
              dependencies:
              - task1
              name: task5
              templateRef:
                name: base
                template: main
            - arguments:
                parameters:
                - name: message
                  value: task6
              dependencies:
              - task1
              name: task6
              templateRef:
                name: base
                template: main
              withItems:
              - context: A
              - context: B
              - context: C
              - context: D
              - context: E
            - arguments:
                parameters:
                - name: message
                  value: task7
              dependencies:
              - task1
              name: task7
              templateRef:
                name: base
                template: main
          inputs: {}
          metadata: {}
          name: main
          outputs: {}
      storedWorkflowTemplateSpec:
        arguments: {}
        entrypoint: main
        parallelism: 1000
        serviceAccountName: argo
        templates:
        - dag:
            tasks:
            - arguments:
                parameters:
                - name: message
                  value: task1
              name: task1
              templateRef:
                name: base
                template: main
            - arguments:
                parameters:
                - name: message
                  value: '{{item.context}}'
              name: echo-multiple-times
              templateRef:
                name: base
                template: main
              withItems:
              - context: A
              - context: B
              - context: C
              - context: D
              - context: E
              - context: F
              - context: G
              - context: H
              - context: I
              - context: K
              - context: L
              - context: M
              - context: "N"
              - context: O
              - context: P
              - context: Q
              - context: R
              - context: S
              - context: T
              - context: A
              - context: B
              - context: C
              - context: D
              - context: E
              - context: F
              - context: G
              - context: H
              - context: I
              - context: K
              - context: L
              - context: M
              - context: "N"
              - context: O
              - context: P
              - context: Q
              - context: R
              - context: S
              - context: T
            - arguments:
                parameters:
                - name: message
                  value: task2
              dependencies:
              - task1
              name: task2
              templateRef:
                name: base
                template: main
              withItems:
              - context: A
              - context: B
              - context: C
              - context: D
              - context: E
            - arguments:
                parameters:
                - name: message
                  value: task3
              dependencies:
              - task1
              name: task3
              templateRef:
                name: base
                template: main
            - arguments:
                parameters:
                - name: message
                  value: task4
              dependencies:
              - task1
              name: task4
              templateRef:
                name: base
                template: main
              withItems:
              - context: A
              - context: B
              - context: C
              - context: D
              - context: E
            - arguments:
                parameters:
                - name: message
                  value: task5
              dependencies:
              - task1
              name: task5
              templateRef:
                name: base
                template: main
            - arguments:
                parameters:
                - name: message
                  value: task6
              dependencies:
              - task1
              name: task6
              templateRef:
                name: base
                template: main
              withItems:
              - context: A
              - context: B
              - context: C
              - context: D
              - context: E
            - arguments:
                parameters:
                - name: message
                  value: task7
              dependencies:
              - task1
              name: task7
              templateRef:
                name: base
                template: main
          inputs: {}
          metadata: {}
          name: main
          outputs: {}
        workflowTemplateRef:
          name: ci
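One detail worth noting in the dump above: some succeeded Pod nodes carry an `outputs.exitCode` field and others do not. A small illustrative helper (not part of the report; the inline sample data is abbreviated from the `status.nodes` mapping above) can scan a nodes mapping and list succeeded Pod nodes with no recorded outputs:

```python
# Hypothetical sketch: find succeeded Pod nodes missing "outputs",
# mirroring the inconsistency visible in the status.nodes dump above.

# Minimal inline sample shaped like status.nodes (abbreviated).
nodes = {
    "ci-t8vq2-2221870794": {"type": "Pod", "phase": "Succeeded"},
    "ci-t8vq2-2454096578": {
        "type": "Pod",
        "phase": "Succeeded",
        "outputs": {"exitCode": "0"},
    },
}

def pods_missing_outputs(nodes):
    """Return sorted IDs of succeeded Pod nodes with no outputs field."""
    return sorted(
        node_id
        for node_id, node in nodes.items()
        if node.get("type") == "Pod"
        and node.get("phase") == "Succeeded"
        and "outputs" not in node
    )

print(pods_missing_outputs(nodes))  # → ['ci-t8vq2-2221870794']
```

In practice the same check could be run against `kubectl get wf <name> -o json`, but the sketch above keeps the data inline to stay self-contained.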
    

    Controller logs

    time="2021-03-02T00:36:25.857Z" level=info msg="Processing workflow" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:25.865Z" level=info msg="Updated phase  -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.010Z" level=info msg="DAG node ci-t8vq2 initialized Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.013Z" level=info msg="TaskGroup node ci-t8vq2-28546012 initialized Running (message: )" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.013Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(0:context:A) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.014Z" level=info msg="Pod node ci-t8vq2-2699866519 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.032Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(0:context:A) (ci-t8vq2-2699866519)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.032Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(1:context:B) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.035Z" level=info msg="Pod node ci-t8vq2-2921795641 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.061Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(1:context:B) (ci-t8vq2-2921795641)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.062Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(2:context:C) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.063Z" level=info msg="Pod node ci-t8vq2-935633491 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.082Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(2:context:C) (ci-t8vq2-935633491)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.082Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(3:context:D) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.087Z" level=info msg="Pod node ci-t8vq2-2460421745 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.097Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(3:context:D) (ci-t8vq2-2460421745)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.097Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(4:context:E) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.105Z" level=info msg="Pod node ci-t8vq2-1497633967 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.150Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(4:context:E) (ci-t8vq2-1497633967)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.151Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(5:context:F) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.157Z" level=info msg="Pod node ci-t8vq2-2529876561 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.184Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(5:context:F) (ci-t8vq2-2529876561)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.184Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(6:context:G) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.185Z" level=info msg="Pod node ci-t8vq2-2517646915 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.199Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(6:context:G) (ci-t8vq2-2517646915)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.199Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(7:context:H) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.199Z" level=info msg="Pod node ci-t8vq2-2594947737 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.222Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(7:context:H) (ci-t8vq2-2594947737)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.222Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(8:context:I) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.222Z" level=info msg="Pod node ci-t8vq2-2608532967 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.243Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(8:context:I) (ci-t8vq2-2608532967)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.244Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(9:context:K) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.264Z" level=info msg="Pod node ci-t8vq2-3834947628 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.274Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(9:context:K) (ci-t8vq2-3834947628)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.274Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(10:context:L) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.275Z" level=info msg="Pod node ci-t8vq2-1156673745 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.294Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(10:context:L) (ci-t8vq2-1156673745)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.294Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(11:context:M) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.294Z" level=info msg="Pod node ci-t8vq2-2620463353 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.315Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(11:context:M) (ci-t8vq2-2620463353)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.315Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(12:context:N) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.315Z" level=info msg="Pod node ci-t8vq2-3303001669 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.347Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(12:context:N) (ci-t8vq2-3303001669)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.347Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(13:context:O) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.347Z" level=info msg="Pod node ci-t8vq2-139394661 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.378Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(13:context:O) (ci-t8vq2-139394661)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.378Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(14:context:P) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.378Z" level=info msg="Pod node ci-t8vq2-1087124617 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.426Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(14:context:P) (ci-t8vq2-1087124617)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.426Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(15:context:Q) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.427Z" level=info msg="Pod node ci-t8vq2-1466005665 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.466Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(15:context:Q) (ci-t8vq2-1466005665)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.466Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(16:context:R) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.479Z" level=info msg="Pod node ci-t8vq2-3097822101 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.506Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(16:context:R) (ci-t8vq2-3097822101)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.506Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(17:context:S) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.506Z" level=info msg="Pod node ci-t8vq2-3815346741 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.554Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(17:context:S) (ci-t8vq2-3815346741)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.556Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(18:context:T) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.569Z" level=info msg="Pod node ci-t8vq2-3045333153 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.623Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(18:context:T) (ci-t8vq2-3045333153)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.623Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(19:context:A) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.624Z" level=info msg="Pod node ci-t8vq2-2135555749 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.657Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(19:context:A) (ci-t8vq2-2135555749)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.657Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(20:context:B) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.664Z" level=info msg="Pod node ci-t8vq2-1671854890 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.696Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(20:context:B) (ci-t8vq2-1671854890)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.696Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(21:context:C) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.697Z" level=info msg="Pod node ci-t8vq2-2189885778 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.711Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(21:context:C) (ci-t8vq2-2189885778)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.712Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(22:context:D) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.712Z" level=info msg="Pod node ci-t8vq2-2454096578 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.732Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(22:context:D) (ci-t8vq2-2454096578)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.732Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(23:context:E) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.732Z" level=info msg="Pod node ci-t8vq2-3752440098 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.764Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(23:context:E) (ci-t8vq2-3752440098)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.764Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(24:context:F) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.766Z" level=info msg="Pod node ci-t8vq2-1958506170 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.874Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(24:context:F) (ci-t8vq2-1958506170)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.874Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(25:context:G) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.875Z" level=info msg="Pod node ci-t8vq2-569544386 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.920Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(25:context:G) (ci-t8vq2-569544386)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.923Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(26:context:H) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:26.924Z" level=info msg="Pod node ci-t8vq2-743674290 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.040Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(26:context:H) (ci-t8vq2-743674290)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.040Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(27:context:I) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.042Z" level=info msg="Pod node ci-t8vq2-553546802 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.100Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(27:context:I) (ci-t8vq2-553546802)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.100Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(28:context:K) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.100Z" level=info msg="Pod node ci-t8vq2-3806784823 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.134Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(28:context:K) (ci-t8vq2-3806784823)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.134Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(29:context:L) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.138Z" level=info msg="Pod node ci-t8vq2-13895053 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.178Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(29:context:L) (ci-t8vq2-13895053)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.178Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(30:context:M) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.178Z" level=info msg="Pod node ci-t8vq2-2653728782 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.198Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(30:context:M) (ci-t8vq2-2653728782)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.207Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(31:context:N) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.208Z" level=info msg="Pod node ci-t8vq2-3144997424 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.230Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(31:context:N) (ci-t8vq2-3144997424)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.230Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(32:context:O) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.231Z" level=info msg="Pod node ci-t8vq2-2221870794 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.268Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(32:context:O) (ci-t8vq2-2221870794)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.268Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(33:context:P) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.268Z" level=info msg="Pod node ci-t8vq2-2998710952 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.298Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(33:context:P) (ci-t8vq2-2998710952)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.298Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(34:context:Q) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.298Z" level=info msg="Pod node ci-t8vq2-924300910 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.340Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(34:context:Q) (ci-t8vq2-924300910)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.340Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(35:context:R) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.340Z" level=info msg="Pod node ci-t8vq2-2687447728 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.362Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(35:context:R) (ci-t8vq2-2687447728)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.362Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(36:context:S) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.366Z" level=info msg="Pod node ci-t8vq2-441930226 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.383Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(36:context:S) (ci-t8vq2-441930226)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.383Z" level=info msg="All of node ci-t8vq2.echo-multiple-times(37:context:T) dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.397Z" level=info msg="Pod node ci-t8vq2-2027865656 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.476Z" level=info msg="Created pod: ci-t8vq2.echo-multiple-times(37:context:T) (ci-t8vq2-2027865656)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.476Z" level=info msg="All of node ci-t8vq2.task1 dependencies [] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.476Z" level=info msg="Pod node ci-t8vq2-212400199 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.494Z" level=info msg="Created pod: ci-t8vq2.task1 (ci-t8vq2-212400199)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:27.620Z" level=info msg="Workflow update successful" namespace=argo phase=Running resourceVersion=118446 workflow=ci-t8vq2
    time="2021-03-02T00:36:36.042Z" level=info msg="Processing workflow" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:36.044Z" level=info msg="Updating node ci-t8vq2-1958506170 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:36.044Z" level=info msg="Updating node ci-t8vq2-2460421745 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:36.044Z" level=info msg="Updating node ci-t8vq2-2189885778 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:36.045Z" level=info msg="Updating node ci-t8vq2-2620463353 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:36.045Z" level=info msg="Updating node ci-t8vq2-3144997424 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:36.045Z" level=info msg="Updating node ci-t8vq2-2594947737 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:36.045Z" level=info msg="Updating node ci-t8vq2-3815346741 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:36.045Z" level=info msg="Updating node ci-t8vq2-2608532967 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:36.045Z" level=info msg="Updating node ci-t8vq2-3806784823 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:36.045Z" level=info msg="Updating node ci-t8vq2-1497633967 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:36.045Z" level=info msg="Updating node ci-t8vq2-2529876561 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:36.045Z" level=info msg="Updating node ci-t8vq2-3303001669 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:36.081Z" level=info msg="Updating node ci-t8vq2-2454096578 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:36.081Z" level=info msg="Updating node ci-t8vq2-743674290 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:36.081Z" level=info msg="Updating node ci-t8vq2-2221870794 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:36.087Z" level=info msg="Updating node ci-t8vq2-1087124617 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:36.087Z" level=info msg="Updating node ci-t8vq2-3045333153 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:36.087Z" level=info msg="Updating node ci-t8vq2-1156673745 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:36.825Z" level=info msg="Workflow update successful" namespace=argo phase=Running resourceVersion=118536 workflow=ci-t8vq2
    time="2021-03-02T00:36:46.220Z" level=info msg="Processing workflow" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:46.221Z" level=info msg="Updating node ci-t8vq2-212400199 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:46.221Z" level=info msg="Updating node ci-t8vq2-1466005665 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:46.221Z" level=info msg="Updating node ci-t8vq2-1671854890 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:46.222Z" level=info msg="Updating node ci-t8vq2-2687447728 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:46.222Z" level=info msg="Updating node ci-t8vq2-924300910 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:46.222Z" level=info msg="Updating node ci-t8vq2-441930226 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:46.222Z" level=info msg="Updating node ci-t8vq2-2921795641 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:46.222Z" level=info msg="Updating node ci-t8vq2-3097822101 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:46.222Z" level=info msg="Updating node ci-t8vq2-2027865656 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:46.222Z" level=info msg="Updating node ci-t8vq2-2998710952 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:46.222Z" level=info msg="Updating node ci-t8vq2-2135555749 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:46.223Z" level=info msg="Updating node ci-t8vq2-3834947628 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:46.223Z" level=info msg="Updating node ci-t8vq2-3752440098 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:46.223Z" level=info msg="Updating node ci-t8vq2-569544386 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:46.223Z" level=info msg="Updating node ci-t8vq2-2517646915 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:46.224Z" level=info msg="Updating node ci-t8vq2-553546802 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:46.224Z" level=info msg="Updating node ci-t8vq2-2699866519 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:46.224Z" level=info msg="Setting node ci-t8vq2-935633491 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:46.224Z" level=info msg="Updating node ci-t8vq2-935633491 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:46.224Z" level=info msg="Updating node ci-t8vq2-2653728782 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:46.224Z" level=info msg="Updating node ci-t8vq2-13895053 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:46.224Z" level=info msg="Updating node ci-t8vq2-139394661 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:46.418Z" level=info msg="Workflow update successful" namespace=argo phase=Running resourceVersion=118629 workflow=ci-t8vq2
    time="2021-03-02T00:36:56.412Z" level=info msg="Processing workflow" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.413Z" level=info msg="Updating node ci-t8vq2-1958506170 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.413Z" level=info msg="Updating node ci-t8vq2-13895053 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.413Z" level=info msg="Updating node ci-t8vq2-2620463353 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.413Z" level=info msg="Updating node ci-t8vq2-139394661 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.413Z" level=info msg="Updating node ci-t8vq2-1671854890 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.413Z" level=info msg="Setting node ci-t8vq2-2529876561 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.413Z" level=info msg="Updating node ci-t8vq2-2529876561 status Pending -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.413Z" level=info msg="Setting node ci-t8vq2-2189885778 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.414Z" level=info msg="Updating node ci-t8vq2-2189885778 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.414Z" level=info msg="Setting node ci-t8vq2-2699866519 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.414Z" level=info msg="Updating node ci-t8vq2-2699866519 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.414Z" level=info msg="Setting node ci-t8vq2-3303001669 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.414Z" level=info msg="Updating node ci-t8vq2-3303001669 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.416Z" level=info msg="Setting node ci-t8vq2-2594947737 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.416Z" level=info msg="Updating node ci-t8vq2-2594947737 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.416Z" level=info msg="Updating node ci-t8vq2-3815346741 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.416Z" level=info msg="Updating node ci-t8vq2-2608532967 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.416Z" level=info msg="Updating node ci-t8vq2-3834947628 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.416Z" level=info msg="Updating node ci-t8vq2-1466005665 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.417Z" level=info msg="Setting node ci-t8vq2-2460421745 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.417Z" level=info msg="Updating node ci-t8vq2-2460421745 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.417Z" level=info msg="Updating node ci-t8vq2-3045333153 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.418Z" level=info msg="Updating node ci-t8vq2-1497633967 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.418Z" level=info msg="Updating node ci-t8vq2-3806784823 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.418Z" level=info msg="Setting node ci-t8vq2-743674290 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.418Z" level=info msg="Updating node ci-t8vq2-743674290 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.418Z" level=info msg="Updating node ci-t8vq2-1156673745 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.418Z" level=info msg="Updating node ci-t8vq2-2135555749 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.418Z" level=info msg="Updating node ci-t8vq2-1087124617 status Pending -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.419Z" level=info msg="Updating node ci-t8vq2-2221870794 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:36:56.564Z" level=info msg="Workflow update successful" namespace=argo phase=Running resourceVersion=118736 workflow=ci-t8vq2
    time="2021-03-02T00:36:56.620Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-1087124617/labelPodCompleted
    time="2021-03-02T00:36:56.620Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-2529876561/labelPodCompleted
    time="2021-03-02T00:37:06.639Z" level=info msg="Processing workflow" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.639Z" level=info msg="Setting node ci-t8vq2-2687447728 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.639Z" level=info msg="Updating node ci-t8vq2-2687447728 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.639Z" level=info msg="Updating node ci-t8vq2-2620463353 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.642Z" level=info msg="Updating node ci-t8vq2-2921795641 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.643Z" level=info msg="Updating node ci-t8vq2-935633491 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.644Z" level=info msg="Updating node ci-t8vq2-139394661 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.644Z" level=info msg="Updating node ci-t8vq2-1466005665 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.645Z" level=info msg="Setting node ci-t8vq2-924300910 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.645Z" level=info msg="Updating node ci-t8vq2-924300910 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.645Z" level=info msg="Updating node ci-t8vq2-3806784823 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.647Z" level=info msg="Setting node ci-t8vq2-2653728782 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.647Z" level=info msg="Updating node ci-t8vq2-2653728782 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.648Z" level=info msg="Setting node ci-t8vq2-2998710952 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.648Z" level=info msg="Updating node ci-t8vq2-2998710952 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.649Z" level=info msg="Setting node ci-t8vq2-2454096578 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.650Z" level=info msg="Updating node ci-t8vq2-2454096578 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.651Z" level=info msg="Setting node ci-t8vq2-2027865656 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.651Z" level=info msg="Updating node ci-t8vq2-2027865656 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.653Z" level=info msg="Setting node ci-t8vq2-553546802 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.653Z" level=info msg="Updating node ci-t8vq2-553546802 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.655Z" level=info msg="Updating node ci-t8vq2-1958506170 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.655Z" level=info msg="Updating node ci-t8vq2-2460421745 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.655Z" level=info msg="Setting node ci-t8vq2-569544386 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.656Z" level=info msg="Updating node ci-t8vq2-569544386 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.656Z" level=info msg="Updating node ci-t8vq2-3097822101 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.656Z" level=info msg="Updating node ci-t8vq2-3752440098 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:06.756Z" level=info msg="Workflow update successful" namespace=argo phase=Running resourceVersion=118850 workflow=ci-t8vq2
    time="2021-03-02T00:37:06.778Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-2620463353/labelPodCompleted
    time="2021-03-02T00:37:06.778Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-139394661/labelPodCompleted
    time="2021-03-02T00:37:06.779Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-1466005665/labelPodCompleted
    time="2021-03-02T00:37:06.780Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-2460421745/labelPodCompleted
    time="2021-03-02T00:37:06.860Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-3752440098/labelPodCompleted
    time="2021-03-02T00:37:06.862Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-2921795641/labelPodCompleted
    time="2021-03-02T00:37:06.878Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-935633491/labelPodCompleted
    time="2021-03-02T00:37:06.881Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-3806784823/labelPodCompleted
    time="2021-03-02T00:37:06.909Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-1958506170/labelPodCompleted
    time="2021-03-02T00:37:06.910Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-3097822101/labelPodCompleted
    time="2021-03-02T00:37:16.763Z" level=info msg="Processing workflow" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.764Z" level=info msg="Updating node ci-t8vq2-1671854890 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.765Z" level=info msg="Updating node ci-t8vq2-2189885778 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.765Z" level=info msg="Updating node ci-t8vq2-3303001669 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.765Z" level=info msg="Setting node ci-t8vq2-441930226 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.766Z" level=info msg="Updating node ci-t8vq2-441930226 status Pending -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.766Z" level=info msg="Updating node ci-t8vq2-924300910 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.766Z" level=info msg="Updating node ci-t8vq2-2608532967 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.766Z" level=info msg="Updating node ci-t8vq2-2454096578 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.767Z" level=info msg="Updating node ci-t8vq2-2517646915 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.767Z" level=info msg="Updating node ci-t8vq2-2687447728 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.767Z" level=info msg="Updating node ci-t8vq2-2221870794 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.768Z" level=info msg="Setting node ci-t8vq2-3144997424 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.768Z" level=info msg="Updating node ci-t8vq2-3144997424 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.768Z" level=info msg="Setting node ci-t8vq2-212400199 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.768Z" level=info msg="Updating node ci-t8vq2-212400199 status Pending -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.768Z" level=info msg="Updating node ci-t8vq2-553546802 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.769Z" level=info msg="Updating node ci-t8vq2-1156673745 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.769Z" level=info msg="Updating node ci-t8vq2-2594947737 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.770Z" level=info msg="Updating node ci-t8vq2-569544386 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.770Z" level=info msg="Updating node ci-t8vq2-2135555749 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.771Z" level=info msg="Updating node ci-t8vq2-3834947628 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.771Z" level=info msg="Updating node ci-t8vq2-2998710952 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.772Z" level=info msg="Updating node ci-t8vq2-13895053 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.772Z" level=info msg="Updating node ci-t8vq2-2653728782 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.772Z" level=info msg="Updating node ci-t8vq2-2699866519 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.772Z" level=info msg="Updating node ci-t8vq2-2027865656 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.798Z" level=info msg="TaskGroup node ci-t8vq2-229177818 initialized Running (message: )" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.798Z" level=info msg="All of node ci-t8vq2.task2(0:context:A) dependencies [task1] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.800Z" level=info msg="Pod node ci-t8vq2-857446365 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.819Z" level=info msg="Created pod: ci-t8vq2.task2(0:context:A) (ci-t8vq2-857446365)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.820Z" level=info msg="All of node ci-t8vq2.task2(1:context:B) dependencies [task1] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.821Z" level=info msg="Pod node ci-t8vq2-1089779475 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.845Z" level=info msg="Created pod: ci-t8vq2.task2(1:context:B) (ci-t8vq2-1089779475)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.845Z" level=info msg="All of node ci-t8vq2.task2(2:context:C) dependencies [task1] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.849Z" level=info msg="Pod node ci-t8vq2-1533362641 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.889Z" level=info msg="Created pod: ci-t8vq2.task2(2:context:C) (ci-t8vq2-1533362641)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.890Z" level=info msg="All of node ci-t8vq2.task2(3:context:D) dependencies [task1] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.894Z" level=info msg="Pod node ci-t8vq2-2193859387 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.910Z" level=info msg="Created pod: ci-t8vq2.task2(3:context:D) (ci-t8vq2-2193859387)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.911Z" level=info msg="All of node ci-t8vq2.task2(4:context:E) dependencies [task1] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.914Z" level=info msg="Pod node ci-t8vq2-2924122029 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.936Z" level=info msg="Created pod: ci-t8vq2.task2(4:context:E) (ci-t8vq2-2924122029)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.936Z" level=info msg="All of node ci-t8vq2.task3 dependencies [task1] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.937Z" level=info msg="Pod node ci-t8vq2-245955437 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.970Z" level=info msg="Created pod: ci-t8vq2.task3 (ci-t8vq2-245955437)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.971Z" level=info msg="TaskGroup node ci-t8vq2-128512104 initialized Running (message: )" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.971Z" level=info msg="All of node ci-t8vq2.task4(0:context:A) dependencies [task1] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:16.972Z" level=info msg="Pod node ci-t8vq2-3119937059 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.015Z" level=info msg="Created pod: ci-t8vq2.task4(0:context:A) (ci-t8vq2-3119937059)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.015Z" level=info msg="All of node ci-t8vq2.task4(1:context:B) dependencies [task1] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.016Z" level=info msg="Pod node ci-t8vq2-3119595669 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.039Z" level=info msg="Created pod: ci-t8vq2.task4(1:context:B) (ci-t8vq2-3119595669)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.039Z" level=info msg="All of node ci-t8vq2.task4(2:context:C) dependencies [task1] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.040Z" level=info msg="Pod node ci-t8vq2-418065567 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.069Z" level=info msg="Created pod: ci-t8vq2.task4(2:context:C) (ci-t8vq2-418065567)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.070Z" level=info msg="All of node ci-t8vq2.task4(3:context:D) dependencies [task1] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.072Z" level=info msg="Pod node ci-t8vq2-2120215045 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.103Z" level=info msg="Created pod: ci-t8vq2.task4(3:context:D) (ci-t8vq2-2120215045)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.104Z" level=info msg="All of node ci-t8vq2.task4(4:context:E) dependencies [task1] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.104Z" level=info msg="Pod node ci-t8vq2-4250566955 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.124Z" level=info msg="Created pod: ci-t8vq2.task4(4:context:E) (ci-t8vq2-4250566955)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.125Z" level=info msg="All of node ci-t8vq2.task5 dependencies [task1] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.127Z" level=info msg="Pod node ci-t8vq2-145289723 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.171Z" level=info msg="Created pod: ci-t8vq2.task5 (ci-t8vq2-145289723)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.175Z" level=info msg="TaskGroup node ci-t8vq2-162067342 initialized Running (message: )" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.175Z" level=info msg="All of node ci-t8vq2.task6(0:context:A) dependencies [task1] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.176Z" level=info msg="Pod node ci-t8vq2-1642984369 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.262Z" level=info msg="Created pod: ci-t8vq2.task6(0:context:A) (ci-t8vq2-1642984369)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.262Z" level=info msg="All of node ci-t8vq2.task6(1:context:B) dependencies [task1] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.274Z" level=info msg="Pod node ci-t8vq2-1325696727 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.337Z" level=info msg="Created pod: ci-t8vq2.task6(1:context:B) (ci-t8vq2-1325696727)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.337Z" level=info msg="All of node ci-t8vq2.task6(2:context:C) dependencies [task1] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.340Z" level=info msg="Pod node ci-t8vq2-3992325445 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.423Z" level=info msg="Created pod: ci-t8vq2.task6(2:context:C) (ci-t8vq2-3992325445)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.423Z" level=info msg="All of node ci-t8vq2.task6(3:context:D) dependencies [task1] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.437Z" level=info msg="Pod node ci-t8vq2-3744868327 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.497Z" level=info msg="Created pod: ci-t8vq2.task6(3:context:D) (ci-t8vq2-3744868327)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.497Z" level=info msg="All of node ci-t8vq2.task6(4:context:E) dependencies [task1] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.518Z" level=info msg="Pod node ci-t8vq2-1719834657 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.587Z" level=info msg="Created pod: ci-t8vq2.task6(4:context:E) (ci-t8vq2-1719834657)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.587Z" level=info msg="All of node ci-t8vq2.task7 dependencies [task1] completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.587Z" level=info msg="Pod node ci-t8vq2-178844961 initialized Pending" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.607Z" level=info msg="Created pod: ci-t8vq2.task7 (ci-t8vq2-178844961)" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:17.718Z" level=info msg="Workflow update successful" namespace=argo phase=Running resourceVersion=119020 workflow=ci-t8vq2
    time="2021-03-02T00:37:17.779Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-2454096578/labelPodCompleted
    time="2021-03-02T00:37:17.785Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-2687447728/labelPodCompleted
    time="2021-03-02T00:37:17.789Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-2221870794/labelPodCompleted
    time="2021-03-02T00:37:17.795Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-924300910/labelPodCompleted
    time="2021-03-02T00:37:17.831Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-2608532967/labelPodCompleted
    time="2021-03-02T00:37:17.848Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-553546802/labelPodCompleted
    time="2021-03-02T00:37:17.868Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-2135555749/labelPodCompleted
    time="2021-03-02T00:37:17.868Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-3834947628/labelPodCompleted
    time="2021-03-02T00:37:17.910Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-13895053/labelPodCompleted
    time="2021-03-02T00:37:17.911Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-2699866519/labelPodCompleted
    time="2021-03-02T00:37:17.976Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-1671854890/labelPodCompleted
    time="2021-03-02T00:37:17.977Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-2189885778/labelPodCompleted
    time="2021-03-02T00:37:17.986Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-3303001669/labelPodCompleted
    time="2021-03-02T00:37:18.022Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-441930226/labelPodCompleted
    time="2021-03-02T00:37:18.056Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-2594947737/labelPodCompleted
    time="2021-03-02T00:37:18.073Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-2517646915/labelPodCompleted
    time="2021-03-02T00:37:18.103Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-1156673745/labelPodCompleted
    time="2021-03-02T00:37:18.074Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-212400199/labelPodCompleted
    time="2021-03-02T00:37:18.105Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-569544386/labelPodCompleted
    time="2021-03-02T00:37:18.120Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-2998710952/labelPodCompleted
    time="2021-03-02T00:37:18.182Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-2653728782/labelPodCompleted
    time="2021-03-02T00:37:18.204Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-2027865656/labelPodCompleted
    time="2021-03-02T00:37:26.829Z" level=info msg="Processing workflow" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:26.830Z" level=info msg="Updating node ci-t8vq2-743674290 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:26.830Z" level=info msg="Updating node ci-t8vq2-857446365 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:26.830Z" level=info msg="Updating node ci-t8vq2-1089779475 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:26.830Z" level=info msg="Updating node ci-t8vq2-1533362641 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:26.830Z" level=info msg="Updating node ci-t8vq2-1497633967 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:26.830Z" level=info msg="Updating node ci-t8vq2-1642984369 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:26.830Z" level=info msg="Updating node ci-t8vq2-1719834657 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:26.832Z" level=info msg="Updating node ci-t8vq2-2193859387 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:26.832Z" level=info msg="Updating node ci-t8vq2-2924122029 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:26.832Z" level=info msg="Updating node ci-t8vq2-3992325445 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:26.832Z" level=info msg="Updating node ci-t8vq2-3144997424 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:26.832Z" level=info msg="Updating node ci-t8vq2-1325696727 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:26.833Z" level=info msg="Updating node ci-t8vq2-3815346741 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:26.833Z" level=info msg="Updating node ci-t8vq2-4250566955 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:26.833Z" level=info msg="Updating node ci-t8vq2-3744868327 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:26.833Z" level=info msg="Updating node ci-t8vq2-145289723 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:26.833Z" level=info msg="Updating node ci-t8vq2-3119595669 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:26.833Z" level=info msg="Updating node ci-t8vq2-245955437 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:26.833Z" level=info msg="Updating node ci-t8vq2-2120215045 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:26.833Z" level=info msg="Updating node ci-t8vq2-3045333153 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:26.833Z" level=info msg="Updating node ci-t8vq2-418065567 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:26.834Z" level=info msg="Updating node ci-t8vq2-3119937059 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:26.834Z" level=info msg="Updating node ci-t8vq2-178844961 message: ContainerCreating" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:26.904Z" level=info msg="node ci-t8vq2-28546012 phase Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:26.915Z" level=info msg="node ci-t8vq2-28546012 finished: 2021-03-02 00:37:26.9150578 +0000 UTC" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:27.012Z" level=info msg="Workflow update successful" namespace=argo phase=Running resourceVersion=119139 workflow=ci-t8vq2
    time="2021-03-02T00:37:27.050Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-743674290/labelPodCompleted
    time="2021-03-02T00:37:27.060Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-1497633967/labelPodCompleted
    time="2021-03-02T00:37:27.061Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-3144997424/labelPodCompleted
    time="2021-03-02T00:37:27.063Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-3815346741/labelPodCompleted
    time="2021-03-02T00:37:27.115Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-3045333153/labelPodCompleted
    time="2021-03-02T00:37:37.073Z" level=info msg="Processing workflow" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.073Z" level=info msg="Setting node ci-t8vq2-3744868327 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.073Z" level=info msg="Updating node ci-t8vq2-3744868327 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.073Z" level=info msg="Setting node ci-t8vq2-1325696727 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.073Z" level=info msg="Updating node ci-t8vq2-1325696727 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.074Z" level=info msg="Updating node ci-t8vq2-1642984369 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.074Z" level=info msg="Setting node ci-t8vq2-245955437 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.074Z" level=info msg="Updating node ci-t8vq2-245955437 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.074Z" level=info msg="Setting node ci-t8vq2-1089779475 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.074Z" level=info msg="Updating node ci-t8vq2-1089779475 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.074Z" level=info msg="Setting node ci-t8vq2-3119937059 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.074Z" level=info msg="Updating node ci-t8vq2-3119937059 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.074Z" level=info msg="Updating node ci-t8vq2-418065567 status Pending -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.074Z" level=info msg="Setting node ci-t8vq2-1719834657 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.074Z" level=info msg="Updating node ci-t8vq2-1719834657 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.075Z" level=info msg="Setting node ci-t8vq2-178844961 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.075Z" level=info msg="Updating node ci-t8vq2-178844961 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.075Z" level=info msg="Pod failed: Error (exit code 1): Could not get container status" displayName="task2(2:context:C)" namespace=argo pod=ci-t8vq2-1533362641 templateName= workflow=ci-t8vq2
    time="2021-03-02T00:37:37.075Z" level=info msg="Updating node ci-t8vq2-1533362641 status Running -> Error" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.075Z" level=info msg="Updating node ci-t8vq2-1533362641 message: Error (exit code 1): Could not get container status" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.075Z" level=info msg="Updating node ci-t8vq2-2924122029 status Pending -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.075Z" level=info msg="Setting node ci-t8vq2-2193859387 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.075Z" level=info msg="Updating node ci-t8vq2-2193859387 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.075Z" level=info msg="Setting node ci-t8vq2-145289723 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.076Z" level=info msg="Updating node ci-t8vq2-145289723 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.076Z" level=info msg="Setting node ci-t8vq2-4250566955 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.076Z" level=info msg="Updating node ci-t8vq2-4250566955 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.076Z" level=info msg="Setting node ci-t8vq2-3119595669 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.076Z" level=info msg="Updating node ci-t8vq2-3119595669 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.076Z" level=info msg="Setting node ci-t8vq2-857446365 outputs" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.076Z" level=info msg="Updating node ci-t8vq2-857446365 status Pending -> Running" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.076Z" level=info msg="Updating node ci-t8vq2-3992325445 status Pending -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.076Z" level=info msg="Updating node ci-t8vq2-2120215045 status Pending -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:37.141Z" level=info msg="Workflow update successful" namespace=argo phase=Running resourceVersion=119254 workflow=ci-t8vq2
    time="2021-03-02T00:37:37.161Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-1533362641/labelPodCompleted
    time="2021-03-02T00:37:37.161Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-2924122029/labelPodCompleted
    time="2021-03-02T00:37:37.162Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-3992325445/labelPodCompleted
    time="2021-03-02T00:37:37.163Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-2120215045/labelPodCompleted
    time="2021-03-02T00:37:37.198Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-418065567/labelPodCompleted
    time="2021-03-02T00:37:47.113Z" level=info msg="Processing workflow" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.115Z" level=info msg="Updating node ci-t8vq2-3119595669 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.115Z" level=info msg="Updating node ci-t8vq2-3744868327 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.116Z" level=info msg="Updating node ci-t8vq2-1642984369 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.117Z" level=info msg="Updating node ci-t8vq2-2193859387 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.118Z" level=info msg="Updating node ci-t8vq2-4250566955 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.118Z" level=info msg="Updating node ci-t8vq2-1325696727 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.119Z" level=info msg="Updating node ci-t8vq2-857446365 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.119Z" level=info msg="Updating node ci-t8vq2-245955437 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.121Z" level=info msg="Updating node ci-t8vq2-1089779475 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.123Z" level=info msg="Updating node ci-t8vq2-178844961 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.123Z" level=info msg="Updating node ci-t8vq2-145289723 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.124Z" level=info msg="Updating node ci-t8vq2-3119937059 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.124Z" level=info msg="Updating node ci-t8vq2-1719834657 status Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.136Z" level=info msg="node ci-t8vq2-229177818 phase Running -> Error" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.136Z" level=info msg="node ci-t8vq2-229177818 finished: 2021-03-02 00:37:47.1368811 +0000 UTC" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.138Z" level=info msg="node ci-t8vq2-128512104 phase Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.138Z" level=info msg="node ci-t8vq2-128512104 finished: 2021-03-02 00:37:47.1385009 +0000 UTC" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.140Z" level=info msg="node ci-t8vq2-162067342 phase Running -> Succeeded" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.140Z" level=info msg="node ci-t8vq2-162067342 finished: 2021-03-02 00:37:47.1409097 +0000 UTC" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.141Z" level=info msg="Outbound nodes of ci-t8vq2 set to [ci-t8vq2-2699866519 ci-t8vq2-2921795641 ci-t8vq2-935633491 ci-t8vq2-2460421745 ci-t8vq2-1497633967 ci-t8vq2-2529876561 ci-t8vq2-2517646915 ci-t8vq2-2594947737 ci-t8vq2-2608532967 ci-t8vq2-3834947628 ci-t8vq2-1156673745 ci-t8vq2-2620463353 ci-t8vq2-3303001669 ci-t8vq2-139394661 ci-t8vq2-1087124617 ci-t8vq2-1466005665 ci-t8vq2-3097822101 ci-t8vq2-3815346741 ci-t8vq2-3045333153 ci-t8vq2-2135555749 ci-t8vq2-1671854890 ci-t8vq2-2189885778 ci-t8vq2-2454096578 ci-t8vq2-3752440098 ci-t8vq2-1958506170 ci-t8vq2-569544386 ci-t8vq2-743674290 ci-t8vq2-553546802 ci-t8vq2-3806784823 ci-t8vq2-13895053 ci-t8vq2-2653728782 ci-t8vq2-3144997424 ci-t8vq2-2221870794 ci-t8vq2-2998710952 ci-t8vq2-924300910 ci-t8vq2-2687447728 ci-t8vq2-441930226 ci-t8vq2-2027865656 ci-t8vq2-857446365 ci-t8vq2-1089779475 ci-t8vq2-1533362641 ci-t8vq2-2193859387 ci-t8vq2-2924122029 ci-t8vq2-245955437 ci-t8vq2-3119937059 ci-t8vq2-3119595669 ci-t8vq2-418065567 ci-t8vq2-2120215045 ci-t8vq2-4250566955 ci-t8vq2-145289723 ci-t8vq2-1642984369 ci-t8vq2-1325696727 ci-t8vq2-3992325445 ci-t8vq2-3744868327 ci-t8vq2-1719834657 ci-t8vq2-178844961]" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.141Z" level=info msg="node ci-t8vq2 phase Running -> Error" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.141Z" level=info msg="node ci-t8vq2 finished: 2021-03-02 00:37:47.1418331 +0000 UTC" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.142Z" level=info msg="Checking daemoned children of ci-t8vq2" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.142Z" level=info msg="Updated phase Running -> Error" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.142Z" level=info msg="Marking workflow completed" namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.142Z" level=info msg="Checking daemoned children of " namespace=argo workflow=ci-t8vq2
    time="2021-03-02T00:37:47.182Z" level=info msg="Workflow update successful" namespace=argo phase=Error resourceVersion=119304 workflow=ci-t8vq2
    time="2021-03-02T00:37:47.206Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-1642984369/labelPodCompleted
    time="2021-03-02T00:37:47.207Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-1325696727/labelPodCompleted
    time="2021-03-02T00:37:47.206Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-2193859387/labelPodCompleted
    time="2021-03-02T00:37:47.206Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-4250566955/labelPodCompleted
    time="2021-03-02T00:37:47.241Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-857446365/labelPodCompleted
    time="2021-03-02T00:37:47.264Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-245955437/labelPodCompleted
    time="2021-03-02T00:37:47.274Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-3119595669/labelPodCompleted
    time="2021-03-02T00:37:47.305Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-3744868327/labelPodCompleted
    time="2021-03-02T00:37:47.313Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-145289723/labelPodCompleted
    time="2021-03-02T00:37:47.320Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-3119937059/labelPodCompleted
    time="2021-03-02T00:37:47.335Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-1719834657/labelPodCompleted
    time="2021-03-02T00:37:47.349Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-1089779475/labelPodCompleted
    time="2021-03-02T00:37:47.349Z" level=info msg="cleaning up pod" action=labelPodCompleted key=argo/ci-t8vq2-178844961/labelPodCompleted
    

    kubectl logs ci-t8vq2-1533362641 -n argo -c wait

    time="2021-03-02T00:37:22.325Z" level=info msg="secured root for pid 24 root: runc:[2:INIT]"
    time="2021-03-02T00:37:22.327Z" level=info msg="mapped pid 24 to container ID \"42af41ccd93a5ced82d44e0230c07f3cd0592fc368e3694ee8ff400ce257a0b4\""
    time="2021-03-02T00:37:22.329Z" level=info msg="mapped container name \"main\" to container ID \"42af41ccd93a5ced82d44e0230c07f3cd0592fc368e3694ee8ff400ce257a0b4\" and pid 24"
    time="2021-03-02T00:37:23.568Z" level=info msg="Waiting for \"main\" pid 24 to complete"
    time="2021-03-02T00:37:23.568Z" level=info msg="\"main\" pid 24 completed"
    time="2021-03-02T00:37:23.568Z" level=info msg="Main container completed"
    time="2021-03-02T00:37:23.568Z" level=info msg="No Script output reference in workflow. Capturing script output ignored"
    time="2021-03-02T00:37:23.569Z" level=info msg="Capturing script exit code"
    time="2021-03-02T00:37:23.569Z" level=info msg="Getting exit code of main"
    time="2021-03-02T00:37:23.612Z" level=info msg="Get pods 200"
    time="2021-03-02T00:37:23.799Z" level=error msg="executor error: Could not get container status\ngithub.com/argoproj/argo-workflows/v3/errors.Wrap\n\t/go/src/github.com/argoproj/argo-workflows/errors/errors.go:88\ngithub.com/argoproj/argo-workflows/v3/errors.InternalWrapError\n\t/go/src/github.com/argoproj/argo-workflows/errors/errors.go:73\ngithub.com/argoproj/argo-workflows/v3/workflow/executor/k8sapi.(*K8sAPIExecutor).GetExitCode\n\t/go/src/github.com/argoproj/argo-workflows/workflow/executor/k8sapi/k8sapi.go:50\ngithub.com/argoproj/argo-workflows/v3/workflow/executor.(*WorkflowExecutor).CaptureScriptExitCode\n\t/go/src/github.com/argoproj/argo-workflows/workflow/executor/executor.go:714\ngithub.com/argoproj/argo-workflows/v3/cmd/argoexec/commands.waitContainer\n\t/go/src/github.com/argoproj/argo-workflows/cmd/argoexec/commands/wait.go:55\ngithub.com/argoproj/argo-workflows/v3/cmd/argoexec/commands.NewWaitCommand.func1\n\t/go/src/github.com/argoproj/argo-workflows/cmd/argoexec/commands/wait.go:18\ngithub.com/spf13/cobra.(*Command).execute\n\t/go/pkg/mod/github.com/spf13/[email protected]/command.go:846\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/go/pkg/mod/github.com/spf13/[email protected]/command.go:950\ngithub.com/spf13/cobra.(*Command).Execute\n\t/go/pkg/mod/github.com/spf13/[email protected]/command.go:887\nmain.main\n\t/go/src/github.com/argoproj/argo-workflows/cmd/argoexec/main.go:14\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:204\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1374"
    time="2021-03-02T00:37:23.799Z" level=info msg="Alloc=6303 TotalAlloc=11141 Sys=73041 NumGC=3 Goroutines=10"
    time="2021-03-02T00:37:23.812Z" level=fatal msg="Could not get container status\ngithub.com/argoproj/argo-workflows/v3/errors.Wrap\n\t/go/src/github.com/argoproj/argo-workflows/errors/errors.go:88\ngithub.com/argoproj/argo-workflows/v3/errors.InternalWrapError\n\t/go/src/github.com/argoproj/argo-workflows/errors/errors.go:73\ngithub.com/argoproj/argo-workflows/v3/workflow/executor/k8sapi.(*K8sAPIExecutor).GetExitCode\n\t/go/src/github.com/argoproj/argo-workflows/workflow/executor/k8sapi/k8sapi.go:50\ngithub.com/argoproj/argo-workflows/v3/workflow/executor.(*WorkflowExecutor).CaptureScriptExitCode\n\t/go/src/github.com/argoproj/argo-workflows/workflow/executor/executor.go:714\ngithub.com/argoproj/argo-workflows/v3/cmd/argoexec/commands.waitContainer\n\t/go/src/github.com/argoproj/argo-workflows/cmd/argoexec/commands/wait.go:55\ngithub.com/argoproj/argo-workflows/v3/cmd/argoexec/commands.NewWaitCommand.func1\n\t/go/src/github.com/argoproj/argo-workflows/cmd/argoexec/commands/wait.go:18\ngithub.com/spf13/cobra.(*Command).execute\n\t/go/pkg/mod/github.com/spf13/[email protected]/command.go:846\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/go/pkg/mod/github.com/spf13/[email protected]/command.go:950\ngithub.com/spf13/cobra.(*Command).Execute\n\t/go/pkg/mod/github.com/spf13/[email protected]/command.go:887\nmain.main\n\t/go/src/github.com/argoproj/argo-workflows/cmd/argoexec/main.go:14\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:204\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1374"
    

    Message from the maintainers:

    Impacted by this bug? Give it a πŸ‘. We prioritise the issues with the most πŸ‘.

  • OOM error not caught by the `emissary` executor forcing the workflow to hang in "Running" state

    OOM error not caught by the `emissary` executor forcing the workflow to hang in "Running" state

    I am opening a new issue, but you can check https://github.com/argoproj/argo-workflows/issues/8456#issuecomment-1120206141 for context.


    The error below has been reproduced on master (07/05/2022), 3.3.5 and 3.2.11.

    When a workflow gets OOM-killed by Kubernetes, the emissary executor sometimes fails to detect it. As a consequence, the workflow hangs in the "Running" state forever.

    The error does not happen when using the pns or docker executor. This is a major regression for us, since the previous executors worked just fine. For now, we are falling back to docker.

    I have been able to make Argo detect the killed process by manually sshing into the pod and killing the /var/run/argo/argoexec emissary -- bash --login /argo/staging/script process (sending it a SIGTERM). When doing that, the main container is killed immediately, as is the workflow. The workflow is correctly marked as failed with the expected OOMKilled (exit code 137) error (the same error reported when using the pns and docker executors).
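    The manual workaround above can be sketched as follows. This is a hedged sketch, not an official fix: the pod name is a placeholder, the process path is the one observed in this report, and the `exit_reason` helper is purely illustrative of how exit code 137 (128 + SIGKILL) is reported as OOMKilled.

```shell
# 1. Find the hung emissary process inside the pod and send it SIGTERM
#    (placeholder pod name; path as observed in this report):
#
#    kubectl exec <pod-name> -c main -- \
#      pkill -TERM -f '/var/run/argo/argoexec emissary'

# 2. Illustrative-only helper showing how the resulting container exit code
#    is conventionally interpreted (137 = 128 + 9, i.e. SIGKILL, which
#    Kubernetes reports as OOMKilled when the OOM killer fired):
exit_reason() {
  case "$1" in
    0)   echo "Succeeded" ;;
    137) echo "OOMKilled (SIGKILL, 128+9)" ;;
    143) echo "Terminated (SIGTERM, 128+15)" ;;
    *)   echo "Error (exit code $1)" ;;
  esac
}

exit_reason 137   # -> OOMKilled (SIGKILL, 128+9)
```

    After the SIGTERM, the workflow should transition out of "Running" and report the OOMKilled error, as described above.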

    Unfortunately, so far all my attempts to reproduce it using openly available code, images and packages have been unsuccessful (I'll keep trying). I can only reproduce it using our private internal stack and images.

    The workload is deeply nested machine learning code that relies heavily on the Python and PyTorch multiprocessing and distributed modules. My guess is that some zombie child processes prevent the Argo executor or workflow controller from detecting the main container as completed.

    I will be happy to provide more information, logs or config if it helps you make sense of this (while on my side I'll keep trying to build a workflow that reproduces the bug and that I can share).

    While this bug affects us, I am quite confident that other people running ML workloads with Python on Argo will hit this bug at some point.

  • Workflow steps fail with a 'pod deleted' message.

    Workflow steps fail with a 'pod deleted' message.

    Summary

    Maybe related to #3381?

    Some of the workflow steps end up in an Error state with "pod deleted". I am not sure which of the following data points are relevant, but I'm listing all observations:

    • the workflow uses PodGC: strategy: OnPodSuccess.
    • we are seeing this for ~5% of workflow steps.
    • affected steps are part of a withItems loop.
    • the workflow is not large - ~170 to 300 concurrent nodes.
    • this has been observed since deploying v2.12.0rc2 yesterday (including the v2.12.0rc2 executor image). We were previously on v2.11.6 and briefly on v2.11.7, and had not seen this.
    • k8s events confirm the pods ran to completion.
    • cluster scaling has been ruled out as the cause - this is observed on multiple k8s nodes, all of which are still running.
    • we have not tried the same workflow without PodGC yet.
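    For context on the first observation: `OnPodSuccess` deletes a pod as soon as it succeeds, so one plausible mechanism for the errors above is the controller observing a pod as already gone before it has recorded the outcome. A hedged sketch of what each documented `podGC` strategy does (the helper is purely illustrative, not Argo's implementation):

```shell
# Illustrative-only helper: would the given podGC strategy delete a pod
# that finished in the given phase? (Not Argo's actual code; strategy
# names are the ones the podGC docs list.)
pod_gc_deletes() {  # usage: pod_gc_deletes <strategy> <phase>
  case "$1/$2" in
    OnPodSuccess/Succeeded)                           echo yes ;;
    OnPodCompletion/Succeeded|OnPodCompletion/Failed) echo yes ;;
    OnWorkflowSuccess/*|OnWorkflowCompletion/*)       echo "at workflow end" ;;
    *)                                                echo no ;;
  esac
}

pod_gc_deletes OnPodSuccess Succeeded   # -> yes
```

    Trying the same workflow with `OnWorkflowSuccess` (or no PodGC) would help confirm or rule this out.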

    Diagnostics

    What Kubernetes provider are you using?

    docker

    What version of Argo Workflows are you running?

    v2.12.0rc2 for all components



  • DAG/STEPS Hang v3.0.2 - Sidecars not being killed

    DAG/STEPS Hang v3.0.2 - Sidecars not being killed

    Summary

    What happened?

    DAG tasks randomly hang
    

    What did you expect to happen?

    DAG tasks successfully finished
    

    Diagnostics

    What Kubernetes provider are you using?

    GKE
    Server Version: version.Info{Major:"1", Minor:"19+", GitVersion:"v1.19.9-gke.1900", GitCommit:"008fd38bf3dc201bebdd4fe26edf9bf87478309a", GitTreeState:"clean", BuildDate:"2021-04-14T09:22:08Z", GoVersion:"go1.15.8b5", Compiler:"gc", Platform:"linux/amd64"}
    

    What version of Argo Workflows are you running?

    v3.0.2
    

    kubectl get wf -o yaml ${workflow}

    The workflow contains sensitive information regarding our organization, if it's important reach out me on CNCF slack

    kubectl logs -n argo $(kubectl get pods -l app=workflow-controller -n argo -o name) | grep ${workflow}

    controller-dag-hang.txt

    Wait container logs

    {},\"mirrorVolumeMounts\":true}],\"sidecars\":[{\"name\":\"mysql\",\"image\":\"mysql:5.6\",\"env\":[{\"name\":\"MYSQL_ALLOW_EMPTY_PASSWORD\",\"value\":\"true\"}],\"reso
    urces\":{},\"mirrorVolumeMounts\":true},{\"name\":\"redis\",\"image\":\"redis:alpine3.13\",\"resources\":{},\"mirrorVolumeMounts\":true},{\"name\":\"nginx\",\"image\":\
    "nginx:1.19.7-alpine\",\"resources\":{},\"mirrorVolumeMounts\":true}],\"archiveLocation\":{\"archiveLogs\":true,\"gcs\":{\"bucket\":\"7shitfs-argo-workflow-artifacts\",
    \"serviceAccountKeySecret\":{\"name\":\"devops-argo-workflow-sa\",\"key\":\"credentials.json\"},\"key\":\"argo-workflow-logs/2021/04/29/github-20979-9df1440/github-2097
    9-9df1440-2290904989\"}},\"retryStrategy\":{\"limit\":\"1\",\"retryPolicy\":\"Always\"},\"tolerations\":[{\"key\":\"node_type\",\"operator\":\"Equal\",\"value\":\"large
    \",\"effect\":\"NoSchedule\"}],\"hostAliases\":[{\"ip\":\"127.0.0.1\",\"hostnames\":[\"xyz.dev\",\"xyz.test\",\"cypress.xyz.test\",\"codeception.xyz.dev
    \"]}],\"podSpecPatch\":\"containers:\\n- name: main\\n  resources:\\n    request:\\n      memory: \\\"8Gi\\\"\\n      cpu: \\\"2\\\"\\n    limits:\\n      memory: \\\"8
    Gi\\\"\\n      cpu: \\\"2\\\"\\n- name: mysql\\n  resources:\\n    request:\\n      memory: \\\"2Gi\\\"\\n      cpu: \\\"0.5\\\"\\n    limits:\\n      memory: \\\"2Gi\\
    \"\\n      cpu: \\\"0.5\\\"\\n- name: redis\\n  resources:\\n    request:\\n      memory: \\\"50Mi\\\"\\n      cpu: \\\"0.05\\\"\\n    limits:\\n      memory: \\\"50Mi\
    \\"\\n      cpu: \\\"0.05\\\"\\n- name: nginx\\n  resources:\\n    request:\\n      memory: \\\"50Mi\\\"\\n      cpu: \\\"0.05\\\"\\n    limits:\\n      memory: \\\"50M
    i\\\"\\n      cpu: \\\"0.05\\\"\\n\",\"timeout\":\"1200s\"}"
    time="2021-04-29T22:33:05.291Z" level=info msg="Starting annotations monitor"
    time="2021-04-29T22:33:05.291Z" level=info msg="Starting deadline monitor"
    time="2021-04-29T22:33:10.299Z" level=info msg="Watch pods 200"
    time="2021-04-29T22:38:05.291Z" level=info msg="Alloc=4475 TotalAlloc=47692 Sys=75089 NumGC=15 Goroutines=10"
    time="2021-04-29T22:42:44.410Z" level=info msg="Main container completed"
    time="2021-04-29T22:42:44.410Z" level=info msg="No Script output reference in workflow. Capturing script output ignored"
    time="2021-04-29T22:42:44.410Z" level=info msg="Capturing script exit code"
    time="2021-04-29T22:42:44.410Z" level=info msg="Getting exit code of main"
    time="2021-04-29T22:42:44.413Z" level=info msg="Get pods 200"
    time="2021-04-29T22:42:44.414Z" level=info msg="Saving logs"
    time="2021-04-29T22:42:44.415Z" level=info msg="Getting output of main"
    time="2021-04-29T22:42:44.424Z" level=info msg="List log 200"
    time="2021-04-29T22:42:44.427Z" level=info msg="GCS Save path: /tmp/argo/outputs/logs/main.log, key: argo-workflow-logs/2021/04/29/github-20979-9df1440/github-20979-9df
    1440-2290904989/main.log"
    time="2021-04-29T22:42:44.763Z" level=info msg="not deleting local artifact" localArtPath=/tmp/argo/outputs/logs/main.log
    time="2021-04-29T22:42:44.763Z" level=info msg="Successfully saved file: /tmp/argo/outputs/logs/main.log"
    time="2021-04-29T22:42:44.763Z" level=info msg="No output parameters"
    time="2021-04-29T22:42:44.763Z" level=info msg="No output artifacts"
    time="2021-04-29T22:42:44.763Z" level=info msg="Annotating pod with output"
    time="2021-04-29T22:42:44.778Z" level=info msg="Patch pods 200"
    time="2021-04-29T22:42:44.779Z" level=info msg="Killing sidecars []"
    time="2021-04-29T22:42:44.779Z" level=info msg="Alloc=28577 TotalAlloc=72566 Sys=75089 NumGC=18 Goroutines=11"
    

    I've been continuously trying to upgrade our Argo Workflows version, but since 3.x.x DAG tasks have stopped working properly. I'm currently using v2.12 with no problems at all.
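    The "Killing sidecars []" log line above shows the executor computing an empty sidecar list even though mysql, redis and nginx are defined. As a stop-gap on an affected version, hung sidecars can be terminated by hand. This is a hedged sketch: the pod name is hypothetical, the container names come from the template above, and the filtering helper is illustrative only (not Argo's code).

```shell
# Argo injects its own "init" and "wait" containers and runs the user code
# in "main"; every other container in the pod is a sidecar. Illustrative
# helper that filters a container list down to the sidecars:
sidecars_to_kill() {
  for c in "$@"; do
    case "$c" in init|wait|main) ;; *) echo "$c" ;; esac
  done
}

sidecars_to_kill init wait main mysql redis nginx   # -> mysql redis nginx (one per line)

# Manual kill, assuming the sidecar images ship a shell and PID 1 handles
# the signal (hypothetical pod name):
#
#   for c in $(sidecars_to_kill init wait main mysql redis nginx); do
#     kubectl exec <pod-name> -c "$c" -- sh -c 'kill 1' || true
#   done
```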



  • artifactory main pod log storing using workflow-controller-configmap gives nil pointer panic

    artifactory main pod log storing using workflow-controller-configmap gives nil pointer panic

    Summary

    I wanted to store the main pod logs of my steps directly in Artifactory, so I followed the (scattered) docs I found to configure an Artifactory repo in the workflow-controller-configmap, but I see the wait container (argoexec) crashing.

    Diagnostics

    Using docker executor

    What version of Argo Workflows are you running? 3.0.1

    apiVersion: argoproj.io/v1alpha1
    kind: Workflow
    metadata:
      creationTimestamp: "2021-04-21T21:20:08Z"
      generateName: nested-workflow-
      generation: 5
      labels:
        workflows.argoproj.io/phase: Running
      name: nested-wf-test
      namespace: default
      resourceVersion: "150308175"
      selfLink: /apis/argoproj.io/v1alpha1/namespaces/default/workflows/nested-wf-test
      uid: 8f930f77-8a87-4915-9815-ff97b872e952
    spec:
      arguments: {}
      entrypoint: nested-workflow-example
      templates:
      - inputs: {}
        metadata: {}
        name: nested-workflow-example
        outputs: {}
        steps:
        - - arguments:
              parameters:
              - name: excluded_node
                value: ""
            continueOn:
              failed: true
            name: runtb
            template: sleepabit
        - - arguments:
              parameters:
              - name: excluded_node
                value: '{{workflow.outputs.parameters.nodename}}'
            continueOn:
              failed: true
            name: retryrun
            template: sleepabit
            when: '{{steps.runtb.status}} != Succeeded'
      - affinity:
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
              - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: NotIn
                  values:
                  - '{{inputs.parameters.excluded_node}}'
        container:
          args:
          - echo $NODE_NAME > nodename.txt && echo blablabla && sleep 5 && false
          command:
          - sh
          - -c
          env:
          - name: NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
          image: alpine:3.7
          name: ""
          resources: {}
        inputs:
          parameters:
          - name: excluded_node
        metadata: {}
        name: sleepabit
        outputs:
          parameters:
          - globalName: nodename
            name: nodename
            valueFrom:
              path: nodename.txt
        retryStrategy:
          limit: 2
          retryPolicy: OnError
    status:
      artifactRepositoryRef:
        default: true
      conditions:
      - status: "False"
        type: PodRunning
      finishedAt: null
      nodes:
        nested-wf-test:
          children:
          - nested-wf-test-4187779359
          displayName: nested-wf-test
          finishedAt: null
          id: nested-wf-test
          name: nested-wf-test
          phase: Running
          progress: 1/2
          startedAt: "2021-04-21T21:20:08Z"
          templateName: nested-workflow-example
          templateScope: local/nested-wf-test
          type: Steps
        nested-wf-test-73877493:
          boundaryID: nested-wf-test
          children:
          - nested-wf-test-1033837250
          displayName: runtb(0)
          finishedAt: "2021-04-21T21:20:24Z"
          hostNodeName: dev11-gsn107-k8s-med-worker-1
          id: nested-wf-test-73877493
          inputs:
            parameters:
            - name: excluded_node
              value: ""
          message: Error (exit code 1)
          name: nested-wf-test[0].runtb(0)
          phase: Failed
          progress: 1/1
          resourcesDuration:
            cpu: 10
            memory: 10
          startedAt: "2021-04-21T21:20:08Z"
          templateName: sleepabit
          templateScope: local/nested-wf-test
          type: Pod
        nested-wf-test-1033837250:
          boundaryID: nested-wf-test
          children:
          - nested-wf-test-3144780967
          displayName: '[1]'
          finishedAt: null
          id: nested-wf-test-1033837250
          name: nested-wf-test[1]
          phase: Running
          progress: 0/1
          startedAt: "2021-04-21T21:20:27Z"
          templateScope: local/nested-wf-test
          type: StepGroup
        nested-wf-test-1133525750:
          boundaryID: nested-wf-test
          children:
          - nested-wf-test-73877493
          displayName: runtb
          finishedAt: "2021-04-21T21:20:27Z"
          id: nested-wf-test-1133525750
          inputs:
            parameters:
            - name: excluded_node
              value: ""
          message: Error (exit code 1)
          name: nested-wf-test[0].runtb
          phase: Failed
          progress: 1/2
          resourcesDuration:
            cpu: 10
            memory: 10
          startedAt: "2021-04-21T21:20:08Z"
          templateName: sleepabit
          templateScope: local/nested-wf-test
          type: Retry
        nested-wf-test-3144780967:
          boundaryID: nested-wf-test
          children:
          - nested-wf-test-3241021146
          displayName: retryrun
          finishedAt: null
          id: nested-wf-test-3144780967
          inputs:
            parameters:
            - name: excluded_node
              value: '{{workflow.outputs.parameters.nodename}}'
          name: nested-wf-test[1].retryrun
          phase: Running
          progress: 0/1
          startedAt: "2021-04-21T21:20:27Z"
          templateName: sleepabit
          templateScope: local/nested-wf-test
          type: Retry
        nested-wf-test-3241021146:
          boundaryID: nested-wf-test
          displayName: retryrun(0)
          finishedAt: null
          id: nested-wf-test-3241021146
          inputs:
            parameters:
            - name: excluded_node
              value: '{{workflow.outputs.parameters.nodename}}'
          message: 'Unschedulable: 0/72 nodes are available: 72 node(s) didn''t match
            node selector.'
          name: nested-wf-test[1].retryrun(0)
          phase: Pending
          progress: 0/1
          startedAt: "2021-04-21T21:20:27Z"
          templateName: sleepabit
          templateScope: local/nested-wf-test
          type: Pod
        nested-wf-test-4187779359:
          boundaryID: nested-wf-test
          children:
          - nested-wf-test-1133525750
          displayName: '[0]'
          finishedAt: "2021-04-21T21:20:27Z"
          id: nested-wf-test-4187779359
          name: nested-wf-test[0]
          phase: Succeeded
          progress: 1/2
          resourcesDuration:
            cpu: 10
            memory: 10
          startedAt: "2021-04-21T21:20:08Z"
          templateScope: local/nested-wf-test
          type: StepGroup
      phase: Running
      progress: 1/2
      resourcesDuration:
        cpu: 10
        memory: 10
      startedAt: "2021-04-21T21:20:08Z"
    
    

    I then recompiled argoexec to add some extra prints to see the various variables in use:

    kubectl logs nested-wf-test-73877493 -c wait -f
    time="2021-04-21T21:20:19.230Z" level=info msg="Starting Workflow Executor" version="{untagged 2021-04-21T21:06:04Z 46221c5c901ce3df1ce3144cf6d54705c1e8eb04 untagged clean go1.15.7 gc linux/amd64}"
    I0421 21:20:19.231385       1 merged_client_builder.go:121] Using in-cluster configuration
    I0421 21:20:19.231668       1 merged_client_builder.go:163] Using in-cluster namespace
    time="2021-04-21T21:20:19.235Z" level=info msg="Creating a docker executor"
    time="2021-04-21T21:20:19.235Z" level=info msg="Executor (version: untagged, build_date: 2021-04-21T21:06:04Z) initialized (pod: default/nested-wf-test-73877493) with template:\n{\"name\":\"sleepabit\",\"inputs\":{\"parameters\":[{\"name\":\"excluded_node\",\"value\":\"\"}]},\"outputs\":{\"parameters\":[{\"name\":\"nodename\",\"valueFrom\":{\"path\":\"nodename.txt\"},\"globalName\":\"nodename\"}]},\"affinity\":{\"nodeAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":{\"nodeSelectorTerms\":[{\"matchExpressions\":[{\"key\":\"kubernetes.io/hostname\",\"operator\":\"NotIn\",\"values\":[\"\"]}]}]}}},\"metadata\":{},\"container\":{\"name\":\"\",\"image\":\"alpine:3.7\",\"command\":[\"sh\",\"-c\"],\"args\":[\"echo $NODE_NAME \\u003e nodename.txt \\u0026\\u0026 echo blablabla \\u0026\\u0026 sleep 5 \\u0026\\u0026 false\"],\"env\":[{\"name\":\"NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"spec.nodeName\"}}}],\"resources\":{}},\"archiveLocation\":{\"archiveLogs\":true,\"artifactory\":{\"url\":\"http://artifactory-espoo1.int.net.nokia.com/artifactory/fixedaccess-sw-rpm-local/nested-wf-test/nested-wf-test-73877493\",\"usernameSecret\":{\"name\":\"artifactory-sandbox\",\"key\":\"username\"},\"passwordSecret\":{\"name\":\"artifactory-sandbox\",\"key\":\"password\"}}},\"retryStrategy\":{\"limit\":2,\"retryPolicy\":\"OnError\"}}"
    time="2021-04-21T21:20:19.235Z" level=info msg="Starting annotations monitor"
    time="2021-04-21T21:20:19.235Z" level=info msg="docker ps --all --no-trunc --format={{.Status}}|{{.Label \"io.kubernetes.container.name\"}}|{{.ID}}|{{.CreatedAt}} --filter=label=io.kubernetes.pod.namespace=default --filter=label=io.kubernetes.pod.name=nested-wf-test-73877493"
    time="2021-04-21T21:20:19.235Z" level=info msg="Starting deadline monitor"
    time="2021-04-21T21:20:19.280Z" level=info msg="mapped container name \"main\" to container ID \"7c3b4ec01ec4f5015110f7307aaba01caf68d7a90dd20385ea13a95831c2d530\" (created at 2021-04-21 21:20:19 +0000 UTC, status Created)"
    time="2021-04-21T21:20:19.280Z" level=info msg="mapped container name \"wait\" to container ID \"b5004cf8e03affa2199aead9ea5767d22a799490e9d24ce9d5066a3376eabd9b\" (created at 2021-04-21 21:20:19 +0000 UTC, status Up)"
    time="2021-04-21T21:20:20.235Z" level=info msg="docker wait 7c3b4ec01ec4f5015110f7307aaba01caf68d7a90dd20385ea13a95831c2d530"
    time="2021-04-21T21:20:20.280Z" level=info msg="docker ps --all --no-trunc --format={{.Status}}|{{.Label \"io.kubernetes.container.name\"}}|{{.ID}}|{{.CreatedAt}} --filter=label=io.kubernetes.pod.namespace=default --filter=label=io.kubernetes.pod.name=nested-wf-test-73877493"
    time="2021-04-21T21:20:21.314Z" level=info msg="docker ps --all --no-trunc --format={{.Status}}|{{.Label \"io.kubernetes.container.name\"}}|{{.ID}}|{{.CreatedAt}} --filter=label=io.kubernetes.pod.namespace=default --filter=label=io.kubernetes.pod.name=nested-wf-test-73877493"
    time="2021-04-21T21:20:22.350Z" level=info msg="docker ps --all --no-trunc --format={{.Status}}|{{.Label \"io.kubernetes.container.name\"}}|{{.ID}}|{{.CreatedAt}} --filter=label=io.kubernetes.pod.namespace=default --filter=label=io.kubernetes.pod.name=nested-wf-test-73877493"
    time="2021-04-21T21:20:23.384Z" level=info msg="docker ps --all --no-trunc --format={{.Status}}|{{.Label \"io.kubernetes.container.name\"}}|{{.ID}}|{{.CreatedAt}} --filter=label=io.kubernetes.pod.namespace=default --filter=label=io.kubernetes.pod.name=nested-wf-test-73877493"
    time="2021-04-21T21:20:24.420Z" level=info msg="docker ps --all --no-trunc --format={{.Status}}|{{.Label \"io.kubernetes.container.name\"}}|{{.ID}}|{{.CreatedAt}} --filter=label=io.kubernetes.pod.namespace=default --filter=label=io.kubernetes.pod.name=nested-wf-test-73877493"
    time="2021-04-21T21:20:24.560Z" level=info msg="Main container completed"
    time="2021-04-21T21:20:24.560Z" level=info msg="No Script output reference in workflow. Capturing script output ignored"
    time="2021-04-21T21:20:24.560Z" level=info msg="Capturing script exit code"
    time="2021-04-21T21:20:24.595Z" level=info msg="Saving logs"
    time="2021-04-21T21:20:24.595Z" level=info msg="[docker logs 7c3b4ec01ec4f5015110f7307aaba01caf68d7a90dd20385ea13a95831c2d530]"
    time="2021-04-21T21:20:24.630Z" level=info msg="art: &Artifact{Name:main-logs,Path:,Mode:nil,From:,ArtifactLocation:ArtifactLocation{ArchiveLogs:nil,S3:nil,Git:nil,HTTP:nil,Artifactory:&ArtifactoryArtifact{URL:/artifactory/fixedaccess-sw-rpm-local/nested-wf-test/nested-wf-test-73877493/main.log,ArtifactoryAuth:ArtifactoryAuth{UsernameSecret:nil,PasswordSecret:nil,},},HDFS:nil,Raw:nil,OSS:nil,GCS:nil,},GlobalName:,Archive:nil,Optional:false,SubPath:,RecurseMode:false,}"
    time="2021-04-21T21:20:24.630Z" level=info msg="driverArt: &Artifact{Name:main-logs,Path:,Mode:nil,From:,ArtifactLocation:ArtifactLocation{ArchiveLogs:nil,S3:nil,Git:nil,HTTP:nil,Artifactory:&ArtifactoryArtifact{URL:/artifactory/fixedaccess-sw-rpm-local/nested-wf-test/nested-wf-test-73877493/main.log,ArtifactoryAuth:ArtifactoryAuth{UsernameSecret:nil,PasswordSecret:nil,},},HDFS:nil,Raw:nil,OSS:nil,GCS:nil,},GlobalName:,Archive:nil,Optional:false,SubPath:,RecurseMode:false,}"
    time="2021-04-21T21:20:24.630Z" level=info msg="driverArt: &Artifact{Name:main-logs,Path:,Mode:nil,From:,ArtifactLocation:ArtifactLocation{ArchiveLogs:nil,S3:nil,Git:nil,HTTP:nil,Artifactory:&ArtifactoryArtifact{URL:/artifactory/fixedaccess-sw-rpm-local/nested-wf-test/nested-wf-test-73877493/main.log,ArtifactoryAuth:ArtifactoryAuth{UsernameSecret:nil,PasswordSecret:nil,},},HDFS:nil,Raw:nil,OSS:nil,GCS:nil,},GlobalName:,Archive:nil,Optional:false,SubPath:,RecurseMode:false,}"
    time="2021-04-21T21:20:24.630Z" level=info msg=NewDriver
    time="2021-04-21T21:20:24.630Z" level=info msg=Artifactory
    time="2021-04-21T21:20:24.630Z" level=info msg="Artifactory: &ArtifactoryArtifact{URL:/artifactory/fixedaccess-sw-rpm-local/nested-wf-test/nested-wf-test-73877493/main.log,ArtifactoryAuth:ArtifactoryAuth{UsernameSecret:nil,PasswordSecret:nil,},}"
    time="2021-04-21T21:20:24.630Z" level=info msg="Alloc=4327 TotalAlloc=9206 Sys=74577 NumGC=4 Goroutines=9"
    time="2021-04-21T21:20:24.630Z" level=fatal msg="executor panic: runtime error: invalid memory address or nil pointer dereference\ngoroutine 1 [running]:\nruntime/debug.Stack(0x2053ed2, 0x14, 0xc0006b41c0)\n\t/usr/local/go/src/runtime/debug/stack.go:24 +0x9f\ngithub.com/argoproj/argo-workflows/v3/workflow/executor.(*WorkflowExecutor).HandleError(0xc00037b600, 0x23706c0, 0xc000190020)\n\t/go/src/github.com/argoproj/argo-workflows/workflow/executor/executor.go:126 +0x1d6\npanic(0x1dac5c0, 0x3064ec0)\n\t/usr/local/go/src/runtime/panic.go:975 +0x47a\ngithub.com/argoproj/argo-workflows/v3/workflow/artifacts.NewDriver(0x23706c0, 0xc000190020, 0xc000726240, 0x23324c0, 0xc00037b600, 0xc00071f960, 0xa388f5, 0xc0002330e0, 0x2005e00)\n\t/go/src/github.com/argoproj/argo-workflows/workflow/artifacts/artifacts.go:99 +0xa21\ngithub.com/argoproj/argo-workflows/v3/workflow/executor.(*WorkflowExecutor).InitDriver(0xc00037b600, 0x23706c0, 0xc000190020, 0xc000726240, 0xc00071f9f0, 0x1, 0x1, 0xa8)\n\t/go/src/github.com/argoproj/argo-workflows/workflow/executor/executor.go:586 +0x65\ngithub.com/argoproj/argo-workflows/v3/workflow/executor.(*WorkflowExecutor).saveArtifactFromFile(0xc00037b600, 0x23706c0, 0xc000190020, 0xc000726180, 0x20400e7, 0x8, 0xc000734b60, 0x1f, 0x0, 0xc000190020)\n\t/go/src/github.com/argoproj/argo-workflows/workflow/executor/executor.go:316 +0x1be\ngithub.com/argoproj/argo-workflows/v3/workflow/executor.(*WorkflowExecutor).SaveLogs(0xc00037b600, 0x23706c0, 0xc000190020, 0x0, 0x0, 0xc00071fba0)\n\t/go/src/github.com/argoproj/argo-workflows/workflow/executor/executor.go:542 +0x23d\ngithub.com/argoproj/argo-workflows/v3/cmd/argoexec/commands.waitContainer(0x23706c0, 0xc000190020, 0x0, 0x0)\n\t/go/src/github.com/argoproj/argo-workflows/cmd/argoexec/commands/wait.go:61 +0x61f\ngithub.com/argoproj/argo-workflows/v3/cmd/argoexec/commands.NewWaitCommand.func1(0xc00037ab00, 0xc0000a93e0, 0x0, 
0x6)\n\t/go/src/github.com/argoproj/argo-workflows/cmd/argoexec/commands/wait.go:18 +0x3d\ngithub.com/spf13/cobra.(*Command).execute(0xc00037ab00, 0xc0000a9380, 0x6, 0x6, 0xc00037ab00, 0xc0000a9380)\n\t/go/pkg/mod/github.com/spf13/[email protected]/command.go:846 +0x2c2\ngithub.com/spf13/cobra.(*Command).ExecuteC(0xc00037a2c0, 0xc000086778, 0xc00071ff78, 0x4062c5)\n\t/go/pkg/mod/github.com/spf13/[email protected]/command.go:950 +0x375\ngithub.com/spf13/cobra.(*Command).Execute(...)\n\t/go/pkg/mod/github.com/spf13/[email protected]/command.go:887\nmain.main()\n\t/go/src/github.com/argoproj/argo-workflows/cmd/argoexec/main.go:14 +0x2b\n"
    

    workflow controller configmap

    apiVersion: v1
    data:
      artifactRepository: |
        # archiveLogs will archive the main container logs as an artifact
        archiveLogs: true
        artifactory:
          repoURL: "http://artifactory-espoo1.int.net.nokia.com/artifactory/fixedaccess-sw-rpm-local"
          usernameSecret:
            name: artifactory-sandbox
            key: username
          passwordSecret:
            name: artifactory-sandbox
            key: password
      executor: |
        imagePullPolicy: Always
        args:
        - --loglevel
        - debug
        - --gloglevel
        - "6"
    kind: ConfigMap
    metadata:
      annotations:
        kubectl.kubernetes.io/last-applied-configuration: |
          {"apiVersion":"v1","kind":"ConfigMap","metadata":{"annotations":{},"name":"workflow-controller-configmap","namespace":"argo"}}
      creationTimestamp: "2021-01-27T18:28:36Z"
      name: workflow-controller-configmap
      namespace: argo
      resourceVersion: "150280397"
      selfLink: /api/v1/namespaces/argo/configmaps/workflow-controller-configmap
      uid: 62afdfd0-32bb-4ed5-88bf-f8db88ed6706
    

    and the secret (base64 removed)

    apiVersion: v1
    data:
      password: xxxxxxxxxxxxxxx
      username: xxxxxxxxxxx
    kind: Secret
    metadata:
      creationTimestamp: "2021-04-21T16:17:47Z"
      name: artifactory-sandbox
      namespace: default
      resourceVersion: "149896347"
      selfLink: /api/v1/namespaces/default/secrets/artifactory-sandbox
      uid: 806ba8d8-183b-4210-a39f-06eeb2b2b9e5
    type: Opaque
    


  • Keep getting "Connection closed to api/v1/workflow-events/argo?listOptions.resourceVersion=30237..." from Argo Workflow UI

    Keep getting "Connection closed to api/v1/workflow-events/argo?listOptions.resourceVersion=30237..." from Argo Workflow UI

    Summary

    What happened / what did you expect to happen? I exposed the Argo Workflows UI using Ambassador under the /argo/ subpath. I was able to access the UI; however, I got an "Unable to load data" error.

    After that I got a recurring "Connection closed" error: Connection closed to api/v1/workflow-events/argo?listOptions.resourceVersion=7273&fields=result.object.metadata.name,result.object.metadata.namespace,result.object.metadata.resourceVersion,result.object.metadata.uid,result.object.status.finishedAt,result.object.status.phase,result.object.status.startedAt,result.object.status.estimatedDuration,result.object.status.progress,result.type,result.object.metadata.labels,result.object.spec.suspend

    Diagnostics

    What Kubernetes provider are you using?

    1.19.4

    What version of Argo Workflows are you running?

    2.12.2

    Steps to reproduce

    1. Create K3D cluster
    k3d cluster create dev-ld --api-port 6443 -p 8080:80@loadbalancer --agents 2 \
        --k3s-server-arg '--flannel-backend=none' --k3s-server-arg '--no-deploy=traefik' \
        --volume "$(CURRENT_DIR)/k3d_custom/calico.yaml:/var/lib/rancher/k3s/server/manifests/calico.yaml"
    
    2. Install Ambassador Edge Stack
    kubectl apply -f https://www.getambassador.io/yaml/aes-crds.yaml && \
    kubectl wait --for condition=established --timeout=90s crd -lproduct=aes && \
    kubectl apply -f https://www.getambassador.io/yaml/aes.yaml && \
    kubectl -n ambassador wait --for condition=available --timeout=90s deploy -lproduct=aes
    
    3. Install metallb (metal-lb-layer2-config.yaml)
    apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: metallb-system
      name: config
    data:
      config: |
        address-pools:
        - name: default
          protocol: layer2
          addresses:
          - 172.18.0.3-172.18.0.254
    
    # https://kubernetes.github.io/ingress-nginx/deploy/baremetal/
    kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/namespace.yaml
    kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/metallb.yaml
    # On first install only
    kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
    kubectl apply -f $(CURRENT_DIR)/k3d_custom/metal-lb-layer2-config.yaml
    
    4. Configure tuntap - Mac only
    # Install tuntap:
    brew install Caskroom/cask/tuntap
    # https://blog.kubernauts.io/k3s-with-k3d-and-metallb-on-mac-923a3255c36e
    # only needed for Mac
    # create a bridge interface between the Physical Machine and the Host Virtual Machine.
    # download scripts form https://github.com/arashkaffamanesh/k3d-k3s-metallb
    k3d cluster stop dev-ld
    ./k3d_custom/docker-tuntap-osx/sbin/docker_tap_install.sh
    echo "Wait till docker is restarted."
    sleep 40
    echo "..done"
    ./k3d_custom/docker-tuntap-osx/sbin/docker_tap_up.sh
    sudo route -v add -net 172.18.0.0 -netmask 255.255.255.0 10.0.75.2
    k3d cluster start dev-ld
    
    5. Install argo
    kubectl create namespace argo || true
    kubectl -n argo apply -f https://raw.githubusercontent.com/argoproj/argo/master/manifests/quick-start-postgres.yaml
    # Since K3S uses containerd, you need to configure the workflow controller to use the PNS (Process Namespace Sharing) executor
    kubectl -n argo patch cm workflow-controller-configmap -p '{"data": {"containerRuntimeExecutor": "pns"}}';
    kubectl -n argo wait pod -lapp=argo-server --for condition=available --timeout=180s
    
    6. Patch argo (argocd-patch.yaml)
    spec:
      template:
        spec:
          containers:
          - args:
            - server
            - --namespaced
            - --auth-mode
            - server
            - --auth-mode
            - client
            - --basehref
            - /argo/
            image: argoproj/argocli:latest
            name: argo-server
    
    kubectl -n argo patch deployment/argo-server -p "$(cat argocd-patch.yaml)"
    
    7. Create Ambassador mapping (argo-ingress-ambassador.yaml)
    apiVersion: getambassador.io/v2
    kind: Mapping
    metadata:
      name: argo-server
      namespace: argo
    spec:
      prefix: /argo/
      regex_rewrite:
        pattern: '/argo(/|$)(.*)'
        substitution: '/\2'
      service: argo-server:2746
    
    kubectl -n argo apply -f resources/argo-ingress-ambassador.yaml
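    The `regex_rewrite` in the Mapping above strips the `/argo` prefix before forwarding to `argo-server:2746`. The rewrite can be sanity-checked locally with GNU sed using the same pattern and substitution (illustrative only; the `rewrite` helper is not part of Ambassador):

```shell
# Apply the Mapping's regex_rewrite (pattern '/argo(/|$)(.*)',
# substitution '/\2') to a request path, using GNU sed's ERE mode:
rewrite() {
  echo "$1" | sed -E 's#^/argo(/|$)(.*)#/\2#'
}

rewrite /argo/api/v1/workflow-events/argo   # -> /api/v1/workflow-events/argo
rewrite /argo/                              # -> /
```

    If the UI loads but API calls fail, checking that paths like `/argo/api/v1/...` really arrive at the server as `/api/v1/...` is a quick way to rule the rewrite in or out.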
    
    8. argo-server log
    ❯ kubectl -n argo logs argo-server-db794976-gchd2                                           
    time="2020-12-27T05:06:17.892Z" level=info authModes="[server client]" baseHRef=/argo/ managedNamespace=argo namespace=argo secure=false
    time="2020-12-27T05:06:17.893Z" level=warning msg="You are running in insecure mode. Learn how to enable transport layer security: https://argoproj.github.io/argo/tls/"
    time="2020-12-27T05:06:17.893Z" level=info msg="config map" name=workflow-controller-configmap
    time="2020-12-27T05:06:17.893Z" level=info msg="SSO disabled"
    time="2020-12-27T05:06:18.061Z" level=info msg="Starting Argo Server" instanceID= version=764f118ca4180b476a9b053873e55eeccbc5e202
    time="2020-12-27T05:06:18.062Z" level=info msg="Creating DB session"
    time="2020-12-27T05:06:18.121Z" level=info msg="Node status offloading config" ttl=5m0s
    time="2020-12-27T05:06:18.122Z" level=info msg="Creating event controller" operationQueueSize=16 workerCount=4
    time="2020-12-27T05:06:18.125Z" level=info msg="Argo Server started successfully on http://localhost:2746"
    time="2020-12-27T05:10:23.551Z" level=info msg="finished unary call with code OK" grpc.code=OK grpc.method=GetVersion grpc.service=info.InfoService grpc.start_time="2020-12-27T05:10:23Z" grpc.time_ms=37.122 span.kind=server system=grpc
    time="2020-12-27T05:10:23.661Z" level=warning msg="finished unary call with code PermissionDenied" error="rpc error: code = PermissionDenied desc = workflowtemplates.argoproj.io is forbidden: User \"system:serviceaccount:argo:argo-server\" cannot list resource \"workflowtemplates\" in API group \"argoproj.io\" at the cluster scope" grpc.code=PermissionDenied grpc.method=ListWorkflowTemplates grpc.service=workflowtemplate.WorkflowTemplateService grpc.start_time="2020-12-27T05:10:23Z" grpc.time_ms=81.194 span.kind=server system=grpc
    time="2020-12-27T05:10:23.665Z" level=warning msg="finished unary call with code PermissionDenied" error="rpc error: code = PermissionDenied desc = cronworkflows.argoproj.io is forbidden: User \"system:serviceaccount:argo:argo-server\" cannot list resource \"cronworkflows\" in API group \"argoproj.io\" at the cluster scope" grpc.code=PermissionDenied grpc.method=ListCronWorkflows grpc.service=cronworkflow.CronWorkflowService grpc.start_time="2020-12-27T05:10:23Z" grpc.time_ms=81.08 span.kind=server system=grpc
    time="2020-12-27T05:10:23.757Z" level=warning msg="finished unary call with code PermissionDenied" error="rpc error: code = PermissionDenied desc = workflows.argoproj.io is forbidden: User \"system:serviceaccount:argo:argo-server\" cannot list resource \"workflows\" in API group \"argoproj.io\" at the cluster scope" grpc.code=PermissionDenied grpc.method=ListWorkflows grpc.service=workflow.WorkflowService grpc.start_time="2020-12-27T05:10:23Z" grpc.time_ms=5.809 span.kind=server system=grpc
    time="2020-12-27T05:10:23.917Z" level=info msg="finished unary call with code OK" grpc.code=OK grpc.method=GetInfo grpc.service=info.InfoService grpc.start_time="2020-12-27T05:10:23Z" grpc.time_ms=0.886 span.kind=server system=grpc
    time="2020-12-27T05:10:24.080Z" level=info msg="finished unary call with code OK" grpc.code=OK grpc.method=ListWorkflowTemplates grpc.service=workflowtemplate.WorkflowTemplateService grpc.start_time="2020-12-27T05:10:23Z" grpc.time_ms=120.036 span.kind=server system=grpc
    time="2020-12-27T05:10:24.090Z" level=info msg="finished unary call with code OK" grpc.code=OK grpc.method=ListCronWorkflows grpc.service=cronworkflow.CronWorkflowService grpc.start_time="2020-12-27T05:10:23Z" grpc.time_ms=124.877 span.kind=server system=grpc
    time="2020-12-27T05:10:24.150Z" level=info msg="finished unary call with code OK" grpc.code=OK grpc.method=ListWorkflows grpc.service=workflow.WorkflowService grpc.start_time="2020-12-27T05:10:24Z" grpc.time_ms=132.258 span.kind=server system=grpc
    time="2020-12-27T05:10:27.242Z" level=info msg="finished streaming call with code OK" grpc.code=OK grpc.method=WatchWorkflows grpc.service=workflow.WorkflowService grpc.start_time="2020-12-27T05:10:24Z" grpc.time_ms=3000.936 span.kind=server system=grpc
    time="2020-12-27T05:10:37.329Z" level=info msg="finished unary call with code OK" grpc.code=OK grpc.method=ListWorkflows grpc.service=workflow.WorkflowService grpc.start_time="2020-12-27T05:10:37Z" grpc.time_ms=15.356 span.kind=server system=grpc
    time="2020-12-27T05:10:40.387Z" level=info msg="finished streaming call with code OK" grpc.code=OK grpc.method=WatchWorkflows grpc.service=workflow.WorkflowService grpc.start_time="2020-12-27T05:10:37Z" grpc.time_ms=2999.413 span.kind=server system=grpc
    
    1. workflow-controller log
    ❯ kubectl logs -n argo $(kubectl get pods -l app=workflow-controller -n argo -o name) | more            
    time="2020-12-27T05:08:33.189Z" level=info msg="config map" name=workflow-controller-configmap
    time="2020-12-27T05:08:33.233Z" level=info msg="Configuration:\nartifactRepository:\n  archiveLogs: true\n  s3:\n    accessKeySecret:\n      key: accesskey\n      name: my-minio-cred\n    bucket: my-bucket\n    endpoint: minio:9000\n    insecure: true\n    secretKeySecret:\n      key: secretkey\n      name: my-minio-cred\ncontainerRuntimeExecutor: pns\ninitialDelay: 0s\nlinks:\n- name: Example Workflow Link\n  scope: workflow\n  url: http://logging-facility?namespace=${metadata.namespace}&workflowName=${metadata.name}&startedAt=${status.startedAt}&finishedAt=${status.finishedAt}\n- name: Example Pod Link\n  scope: pod\n  url: http://logging-facility?namespace=${metadata.namespace}&podName=${metadata.name}&startedAt=${status.startedAt}&finishedAt=${status.finishedAt}\nmetricsConfig:\n  disableLegacy: true\n  enabled: true\n  path: /metrics\n  port: 9090\nnodeEvents: {}\npersistence:\n  archive: true\n  archiveTTL: 168h0m0s\n  connectionPool:\n    maxIdleConns: 100\n  nodeStatusOffLoad: true\n  postgresql:\n    database: postgres\n    host: postgres\n    passwordSecret:\n      key: password\n      name: argo-postgres-config\n    port: 5432\n    tableName: argo_workflows\n    userNameSecret:\n      key: username\n      name: argo-postgres-config\npodSpecLogStrategy: {}\ntelemetryConfig: {}\n"
    time="2020-12-27T05:08:33.233Z" level=info msg="Persistence configuration enabled"
    time="2020-12-27T05:08:33.234Z" level=info msg="Creating DB session"
    time="2020-12-27T05:08:33.257Z" level=info msg="Persistence Session created successfully"
    time="2020-12-27T05:08:33.265Z" level=info msg="Migrating database schema" clusterName=default dbType=postgres
    time="2020-12-27T05:08:33.275Z" level=info msg="applying database change" change="create table if not exists argo_workflows (\n    id varchar(128) ,\n    name varchar(256),\n    phase varchar(25),\n    namespace varchar(256),\n    workflow text,\n    startedat timestamp default CURRENT_TIMESTAMP,\n    finishedat timestamp default CURRENT_TIMESTAMP,\n    primary key (id, namespace)\n)" changeSchemaVersion=0
    time="2020-12-27T05:08:33.286Z" level=info msg="applying database change" change="create unique index idx_name on argo_workflows (name)" changeSchemaVersion=1
    time="2020-12-27T05:08:33.295Z" level=info msg="applying database change" change="create table if not exists argo_workflow_history (\n    id varchar(128) ,\n    name varchar(256),\n    phase varchar(25),\n    namespace varchar(256),\n    workflow text,\n    startedat timestamp default CURRENT_TIMESTAMP,\n    finishedat timestamp default CURRENT_TIMESTAMP,\n    primary key (id, namespace)\n)" changeSchemaVersion=2
    time="2020-12-27T05:08:33.304Z" level=info msg="applying database change" change="alter table argo_workflow_history rename to argo_archived_workflows" changeSchemaVersion=3
    time="2020-12-27T05:08:33.310Z" level=info msg="applying database change" change="drop index idx_name" changeSchemaVersion=4
    time="2020-12-27T05:08:33.315Z" level=info msg="applying database change" change="create unique index idx_name on argo_workflows(name, namespace)" changeSchemaVersion=5
    time="2020-12-27T05:08:33.321Z" level=info msg="applying database change" change="alter table argo_workflows drop constraint argo_workflows_pkey" changeSchemaVersion=6
    time="2020-12-27T05:08:33.326Z" level=info msg="applying database change" change="alter table argo_workflows add primary key(name,namespace)" changeSchemaVersion=7
    time="2020-12-27T05:08:33.331Z" level=info msg="applying database change" change="alter table argo_archived_workflows drop constraint argo_workflow_history_pkey" changeSchemaVersion=8
    time="2020-12-27T05:08:33.335Z" level=info msg="applying database change" change="alter table argo_archived_workflows add primary key(id)" changeSchemaVersion=9
    time="2020-12-27T05:08:33.342Z" level=info msg="applying database change" change="alter table argo_archived_workflows rename column id to uid" changeSchemaVersion=10
    time="2020-12-27T05:08:33.345Z" level=info msg="applying database change" change="alter table argo_archived_workflows alter column uid set not null" changeSchemaVersion=11
    time="2020-12-27T05:08:33.351Z" level=info msg="applying database change" change="alter table argo_archived_workflows alter column phase set not null" changeSchemaVersion=12
    time="2020-12-27T05:08:33.356Z" level=info msg="applying database change" change="alter table argo_archived_workflows alter column namespace set not null" changeSchemaVersion=13
    time="2020-12-27T05:08:33.360Z" level=info msg="applying database change" change="alter table argo_archived_workflows alter column workflow set not null" changeSchemaVersion=14
    time="2020-12-27T05:08:33.365Z" level=info msg="applying database change" change="alter table argo_archived_workflows alter column startedat set not null" changeSchemaVersion=15
    time="2020-12-27T05:08:33.370Z" level=info msg="applying database change" change="alter table argo_archived_workflows alter column finishedat set not null" changeSchemaVersion=16
    time="2020-12-27T05:08:33.374Z" level=info msg="applying database change" change="alter table argo_archived_workflows add clustername varchar(64)" changeSchemaVersion=17
    time="2020-12-27T05:08:33.378Z" level=info msg="applying database change" change="update argo_archived_workflows set clustername = 'default' where clustername is null" changeSchemaVersion=18
    time="2020-12-27T05:08:33.381Z" level=info msg="applying database change" change="alter table argo_archived_workflows alter column clustername set not null" changeSchemaVersion=19
    time="2020-12-27T05:08:33.384Z" level=info msg="applying database change" change="alter table argo_archived_workflows drop constraint argo_archived_workflows_pkey" changeSchemaVersion=20
    time="2020-12-27T05:08:33.390Z" level=info msg="applying database change" change="alter table argo_archived_workflows add primary key(clustername,uid)" changeSchemaVersion=21
    time="2020-12-27T05:08:33.395Z" level=info msg="applying database change" change="create index argo_archived_workflows_i1 on argo_archived_workflows (clustername,namespace)" changeSchemaVersion=22
    time="2020-12-27T05:08:33.405Z" level=info msg="applying database change" change="alter table argo_workflows drop column phase" changeSchemaVersion=23
    time="2020-12-27T05:08:33.409Z" level=info msg="applying database change" change="alter table argo_workflows drop column startedat" changeSchemaVersion=24
    time="2020-12-27T05:08:33.412Z" level=info msg="applying database change" change="alter table argo_workflows drop column finishedat" changeSchemaVersion=25
    time="2020-12-27T05:08:33.415Z" level=info msg="applying database change" change="alter table argo_workflows rename column id to uid" changeSchemaVersion=26
    time="2020-12-27T05:08:33.419Z" level=info msg="applying database change" change="alter table argo_workflows alter column uid set not null" changeSchemaVersion=27
    time="2020-12-27T05:08:33.422Z" level=info msg="applying database change" change="alter table argo_workflows alter column namespace set not null" changeSchemaVersion=28
    time="2020-12-27T05:08:33.425Z" level=info msg="applying database change" change="alter table argo_workflows add column clustername varchar(64)" changeSchemaVersion=29
    time="2020-12-27T05:08:33.429Z" level=info msg="applying database change" change="update argo_workflows set clustername = 'default' where clustername is null" changeSchemaVersion=30
    time="2020-12-27T05:08:33.432Z" level=info msg="applying database change" change="alter table argo_workflows alter column clustername set not null" changeSchemaVersion=31
    time="2020-12-27T05:08:33.436Z" level=info msg="applying database change" change="alter table argo_workflows add column version varchar(64)" changeSchemaVersion=32
    time="2020-12-27T05:08:33.440Z" level=info msg="applying database change" change="alter table argo_workflows add column nodes text" changeSchemaVersion=33
    time="2020-12-27T05:08:33.443Z" level=info msg="applying database change" change="backfillNodes{argo_workflows}" changeSchemaVersion=34
    time="2020-12-27T05:08:33.443Z" level=info msg="Backfill node status"
    time="2020-12-27T05:08:33.447Z" level=info msg="applying database change" change="alter table argo_workflows alter column nodes set not null" changeSchemaVersion=35
    time="2020-12-27T05:08:33.450Z" level=info msg="applying database change" change="alter table argo_workflows drop column workflow" changeSchemaVersion=36
    time="2020-12-27T05:08:33.454Z" level=info msg="applying database change" change="alter table argo_workflows add column updatedat timestamp not null default current_timestamp" changeSchemaVersion=37
    time="2020-12-27T05:08:33.457Z" level=info msg="applying database change" change="alter table argo_workflows drop constraint argo_workflows_pkey" changeSchemaVersion=38
    time="2020-12-27T05:08:33.462Z" level=info msg="applying database change" change="drop index idx_name" changeSchemaVersion=39
    time="2020-12-27T05:08:33.466Z" level=info msg="applying database change" change="alter table argo_workflows drop column name" changeSchemaVersion=40
    time="2020-12-27T05:08:33.470Z" level=info msg="applying database change" change="alter table argo_workflows add primary key(clustername,uid,version)" changeSchemaVersion=41
    time="2020-12-27T05:08:33.475Z" level=info msg="applying database change" change="create index argo_workflows_i1 on argo_workflows (clustername,namespace)" changeSchemaVersion=42
    time="2020-12-27T05:08:33.482Z" level=info msg="applying database change" change="alter table argo_archived_workflows alter column workflow type json using workflow::json" changeSchemaVersion=43
    time="2020-12-27T05:08:33.496Z" level=info msg="applying database change" change="alter table argo_archived_workflows alter column name set not null" changeSchemaVersion=44
    time="2020-12-27T05:08:33.506Z" level=info msg="applying database change" change="create index argo_workflows_i2 on argo_workflows (clustername,namespace,updatedat)" changeSchemaVersion=45
    time="2020-12-27T05:08:33.512Z" level=info msg="applying database change" change="create table if not exists argo_archived_workflows_labels (\n\tclustername varchar(64) not null,\n\tuid varchar(128) not null,\n    name varchar(317) not null,\n    value varchar(63) not null,\n    primary key (clustername, uid, name),\n \tforeign key (clustername, uid) references argo_archived_workflows(clustername, uid) on delete cascade\n)" changeSchemaVersion=46
    time="2020-12-27T05:08:33.523Z" level=info msg="applying database change" change="alter table argo_workflows alter column nodes type json using nodes::json" changeSchemaVersion=47
    time="2020-12-27T05:08:33.536Z" level=info msg="applying database change" change="alter table argo_archived_workflows add column instanceid varchar(64)" changeSchemaVersion=48
    time="2020-12-27T05:08:33.540Z" level=info msg="applying database change" change="update argo_archived_workflows set instanceid = '' where instanceid is null" changeSchemaVersion=49
    time="2020-12-27T05:08:33.545Z" level=info msg="applying database change" change="alter table argo_archived_workflows alter column instanceid set not null" changeSchemaVersion=50
    time="2020-12-27T05:08:33.550Z" level=info msg="applying database change" change="drop index argo_archived_workflows_i1" changeSchemaVersion=51
    time="2020-12-27T05:08:33.555Z" level=info msg="applying database change" change="create index argo_archived_workflows_i1 on argo_archived_workflows (clustername,instanceid,namespace)" changeSchemaVersion=52
    time="2020-12-27T05:08:33.562Z" level=info msg="applying database change" change="drop index argo_workflows_i1" changeSchemaVersion=53
    time="2020-12-27T05:08:33.566Z" level=info msg="applying database change" change="drop index argo_workflows_i2" changeSchemaVersion=54
    time="2020-12-27T05:08:33.571Z" level=info msg="applying database change" change="create index argo_workflows_i1 on argo_workflows (clustername,namespace,updatedat)" changeSchemaVersion=55
    time="2020-12-27T05:08:33.577Z" level=info msg="applying database change" change="create index argo_archived_workflows_i2 on argo_archived_workflows (clustername,instanceid,finishedat)" changeSchemaVersion=56
    time="2020-12-27T05:08:33.582Z" level=info msg="Node status offloading config" ttl=5m0s
    time="2020-12-27T05:08:33.582Z" level=info msg="Node status offloading is enabled"
    time="2020-12-27T05:08:33.583Z" level=info msg="Workflow archiving is enabled"
    time="2020-12-27T05:08:33.585Z" level=info msg="Starting Workflow Controller" version=764f118ca4180b476a9b053873e55eeccbc5e202
    time="2020-12-27T05:08:33.585Z" level=info msg="Workers: workflow: 32, pod: 32, pod cleanup: 4"
    time="2020-12-27T05:08:33.771Z" level=info msg="Manager initialized successfully"
    I1227 05:08:33.774426       1 leaderelection.go:242] attempting to acquire leader lease  argo/workflow-controller...
    I1227 05:08:33.792510       1 leaderelection.go:252] successfully acquired lease argo/workflow-controller
    time="2020-12-27T05:08:33.792Z" level=info msg="new leader" id=workflow-controller-75cb95b89d-6l49r leader=workflow-controller-75cb95b89d-6l49r
    time="2020-12-27T05:08:33.793Z" level=info msg="started leading" id=workflow-controller-75cb95b89d-6l49r
    time="2020-12-27T05:08:33.793Z" level=info msg="Performing periodic GC every 5m0s"
    time="2020-12-27T05:08:33.793Z" level=info msg="Performing archived workflow GC" periodicity=24h0m0s ttl=604800000000000
    time="2020-12-27T05:08:33.799Z" level=info msg="Starting workflow TTL controller (resync 20m0s, workflowTTLWorkers 4)"
    time="2020-12-27T05:08:33.799Z" level=info msg="Started workflow TTL worker"
    time="2020-12-27T05:08:33.801Z" level=info msg="Starting prometheus metrics server at localhost:9090/metrics"
    time="2020-12-27T05:08:33.805Z" level=info msg="Starting CronWorkflow controller"
    time="2020-12-27T05:13:32.973Z" level=info msg="Alloc=5797 TotalAlloc=22882 Sys=70080 NumGC=8 Goroutines=201"
    time="2020-12-27T05:13:33.581Z" level=info msg="Performing periodic workflow GC"
    time="2020-12-27T05:13:33.593Z" level=info msg="Zero old offloads, nothing to do"
    


  • Difficulty scaling when running many workflows and/or steps


    Problem statement

    • Use case: processing media assets as they are uploaded by customers
      • We are trying to test the case of continuously created workflows, with the hope of relatively predictable responsiveness.
      • In an ideal world, we would be able to run ~2500 workflows in parallel at peak times
      • Expected workflow run time varies quite a bit, but most of them will take less than five minutes
    • In the synthetic benchmarks detailed below, our nodegroup scale limits are never reached. We don’t seem to be able to run workflows quickly enough to saturate our capacity.
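
    As a rough sanity check on these numbers (a sketch using Little's law, assuming a sustained ~3 workflows/second submission rate and a ~5 minute average runtime):

    ```shell
    # Little's law: steady-state running workflows = arrival rate x average runtime.
    # Assumed figures: 3 workflows/s submitted, 300 s (5 min) average runtime.
    rate=3        # workflows per second
    runtime=300   # seconds per workflow
    echo $((rate * runtime))   # prints 900, i.e. ~900 concurrent workflows
    ```

    So even this synthetic load should stay well under the ~2500-workflow peak target, which is why the observed backlog points at the controller rather than at cluster capacity.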

    Environment details

    • Running on EKS
      • Kubernetes version: 1.17
      • Controller running on instance type: m5.8xlarge (32 vCPU, 128GB RAM)
      • Workflow executors instance type: m5.xlarge (4 vCPU, 16GB RAM)
    • Argo details:
      • Modified for additional metrics based on master at commit 5c538d7a918e41029d3911a92c6ac615f04d3b80
      • Running with parallelism: 800, since otherwise we observed the EKS control plane becoming unresponsive
      • Running with containerRuntimeExecutor: kubelet on AWS Bottlerocket instances

    Case 1 (many workflows, few steps)

    • We created a workflow template (see fig.1) with a modifiable number of steps. We launched workflows at a rate of 3 per second with a script (see fig.2)
    • Initially the controller seems to keep up, with the running workflow count growing, and no noticeable delays. After a few minutes we observe a growing number of pending workflows, as well as an oscillation pattern in several metrics.
    • Instead of scaling up to meet demand, we see Kubernetes node capacity go unused as the number of running workflows oscillates
    • We added custom metrics in the controller to monitor the workflow and pod queues (wfc.wfQueue and wfc.podQueue in controller/controller.go). The workflow queue oscillates between 1000 and 1500 items during our test. However, the pod queue consistently stays at 0.

    Case 2 (few workflows, many steps)

    • We created a similar template (fig. 6) and script (fig. 7) that generate a number of “work” steps that run in parallel, concluding with a final “followup” step.
    • When launching one workflow at a time, or at an interval larger than 20s, everything ran smoothly (pods completed and workflows were successful)
    • When the flow of workflows was increased, we began to see the “zombie” phenomenon (fig. 4): even though the pod was marked as Completed, the workflow lingers in the Running state (fig. 5).

    Things we tried

    In trying to address these issues, we changed the values of the following parameters without much success:

    • pod-workers
    • workflow-workers (the default of 32 was a bottleneck, but anything over 128 didn’t make a difference)
    • INFORMER_WRITE_BACK=false
    • --qps, --burst
    • We increased the memory and CPU resources available to the controller
    • workflowResyncPeriod and podResyncPeriod
    • We’ve also tried various experimental branches from recent seemingly related issues (and we are happy to try more!)
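
    For reference, a sketch of how these knobs are applied to the workflow-controller Deployment; the values below are examples of what we tried, not recommendations:

    ```yaml
    # Illustrative fragment of the workflow-controller Deployment spec.
    # Flag and env var names are the parameters listed above; values are examples only.
    spec:
      template:
        spec:
          containers:
          - name: workflow-controller
            args:
            - --workflow-workers=128   # the default of 32 was a bottleneck
            - --pod-workers=32
            - --qps=30
            - --burst=60
            env:
            - name: INFORMER_WRITE_BACK
              value: "false"
    ```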

    fig. 1

    apiVersion: argoproj.io/v1alpha1
    kind: WorkflowTemplate
    metadata:
      name: sleep-test-template
      generateName: sleep-test-
      namespace: argo-workflows
    spec:
      entrypoint: sleep
      ttlStrategy:
        secondsAfterSuccess: 0
        secondsAfterFailure: 600
      podGC:
        strategy: OnPodCompletion
      arguments:
        parameters:
          - name: friendly-name
            value: sleep_test # Use underscores, not hyphens
          - name: cpu-limit
            value: 2000m
          - name: mem-limit
            value: 1024Mi
          - name: step-count
            value: "200"
          - name: sleep-seconds
            value: "8"
      metrics:
        prometheus:
          - name: "workflow_duration"      # Metric name (will be prepended with "argo_workflows_")
            help: "Duration gauge by name" # A help doc describing your metric. This is required.
            labels:
               - key: workflow_template
                 value: "{{workflow.parameters.friendly-name}}"
            gauge:                            # The metric type. Available are "gauge", "histogram", and "counter".
              value: "{{workflow.duration}}"  # The value of your metric. It could be an Argo variable (see variables doc) or a literal value
          - name: "workflow_processed"
            help: "Workflow processed count"
            labels:
               - key: workflow_template
                 value: "{{workflow.parameters.friendly-name}}"
               - key: status
                 value: "{{workflow.status}}"
            counter:
              value: "1"
      templates:
      - name: sleep
        nodeSelector:
          intent: task-workers
        steps:
          - - name: generate
              template: gen-number-list
          - - name: "sleep"
              template: snooze
              arguments:
                parameters: [{name: input_asset, value: "{{workflow.parameters.sleep-seconds}}", id: "{{item}}"}]
              withParam: "{{steps.generate.outputs.result}}"
    
      # Generate a list of numbers in JSON format
      - name: gen-number-list
        nodeSelector:
          intent: task-workers
        script:
          image: python:3.8.5-alpine3.12
          imagePullPolicy: IfNotPresent
          command: [python]
          source: |
            import json
            import sys
            json.dump([i for i in range(0, {{workflow.parameters.step-count}})], sys.stdout)
      - name: snooze
        metrics:
          prometheus:
            - name: "resource_duration_cpu"      # Metric name (will be prepended with "argo_workflows_")
              help: "Resource Duration CPU" # A help doc describing your metric. This is required.
              labels:
                 - key: workflow_template
                   value: "{{workflow.parameters.friendly-name}}"
              gauge:                            # The metric type. Available are "gauge", "histogram", and "counter".
                value: "{{resourcesDuration.cpu}}"  # The value of your metric. It could be an Argo variable (see variables doc) or a literal value
            - name: "resource_duration_memory"      # Metric name (will be prepended with "argo_workflows_")
              help: "Resource Duration Memory" # A help doc describing your metric. This is required.
              labels:
                 - key: workflow_template
                   value: "{{workflow.parameters.friendly-name}}"
              gauge:                            # The metric type. Available are "gauge", "histogram", and "counter".
                value: "{{resourcesDuration.memory}}"  # The value of your metric. It could be an Argo variable (see variables doc) or a literal value
        nodeSelector:
          intent: task-workers
        inputs:
          parameters:
            - name: input_asset
        podSpecPatch: '{"containers":[{"name":"main", "resources":{"requests":{"cpu": "{{workflow.parameters.cpu-limit}}", "memory": "{{workflow.parameters.mem-limit}}"}, "limits":{"cpu": "{{workflow.parameters.cpu-limit}}", "memory": "{{workflow.parameters.mem-limit}}" }}}]}'
        container:
          image: alpine
          imagePullPolicy: IfNotPresent
          command: [sleep]
          args: ["{{workflow.parameters.sleep-seconds}}"]
    

    fig. 2

    #!/usr/bin/env bash
    set -euo pipefail
    while true; do
      for i in {1..3}; do
        argo submit \
            -n argo-workflows \
            --from workflowtemplate/sleep-test-template \
            -p step-count="1" \
            -p sleep-seconds="60" &>/dev/null &
      done
      sleep 1
      echo -n "."
    done
    

    fig. 3

    (screenshot, 2020-12-01: metrics during the test; image not included)

    fig.4

    ❯ argo -n argo-workflows get sleep-fanout-test-template-6dtjp
    Name:                sleep-fanout-test-template-6dtjp
    Namespace:           argo-workflows
    ServiceAccount:      default
    Status:              Running
    Created:             Wed Dec 02 15:39:59 -0500 (6 minutes ago)
    Started:             Wed Dec 02 15:39:59 -0500 (6 minutes ago)
    Duration:            6 minutes 21 seconds
    ResourcesDuration:   42m21s*(1 cpu),2h30m41s*(100Mi memory)
    Parameters:
      step-count:        100
      sleep-seconds:     8
    
    STEP                                 TEMPLATE         PODNAME                                      DURATION  MESSAGE
     ● sleep-fanout-test-template-6dtjp  sleep
     ├---✔ generate                      gen-number-list  sleep-fanout-test-template-6dtjp-2151903814  7s
     ├-·-✔ sleep(0:0)                    snooze           sleep-fanout-test-template-6dtjp-1189074090  14s
     | ├-✔ sleep(1:1)                    snooze           sleep-fanout-test-template-6dtjp-1828931302  25s
    ...
     | └-✔ sleep(99:99)                  snooze           sleep-fanout-test-template-6dtjp-1049774502  16s
     └---◷ followup                      snooze           sleep-fanout-test-template-6dtjp-1490893639  5m
    

    fig. 5

    ❯ kubectl -n argo-workflows get pod/sleep-fanout-test-template-6dtjp-1490893639
    NAME                                          READY   STATUS      RESTARTS   AGE
    sleep-fanout-test-template-6dtjp-1490893639   0/2     Completed   0          5m43s
    

    fig. 6

    apiVersion: argoproj.io/v1alpha1
    kind: WorkflowTemplate
    metadata:
      name: sleep-fanout-test-template
      generateName: sleep-fanout-test-
      namespace: argo-workflows
    spec:
      entrypoint: sleep
      ttlStrategy:
        secondsAfterSuccess: 0
        secondsAfterFailure: 600
      podGC:
        strategy: OnPodCompletion
      arguments:
        parameters:
          - name: friendly-name
            value: sleep_fanout_test # Use underscores, not hyphens
          - name: cpu-limit
            value: 2000m
          - name: mem-limit
            value: 1024Mi
          - name: step-count
            value: "200"
          - name: sleep-seconds
            value: "8"
      metrics:
        prometheus:
          - name: "workflow_duration"      # Metric name (will be prepended with "argo_workflows_")
            help: "Duration gauge by name" # A help doc describing your metric. This is required.
            labels:
               - key: workflow_template
                 value: "{{workflow.parameters.friendly-name}}"
            gauge:                            # The metric type. Available are "gauge", "histogram", and "counter".
              value: "{{workflow.duration}}"  # The value of your metric. It could be an Argo variable (see variables doc) or a literal value
          - name: "workflow_processed"
            help: "Workflow processed count"
            labels:
               - key: workflow_template
                 value: "{{workflow.parameters.friendly-name}}"
               - key: status
                 value: "{{workflow.status}}"
            counter:
              value: "1"
      templates:
      - name: sleep
        nodeSelector:
          intent: task-workers
        steps:
          - - name: generate
              template: gen-number-list
          - - name: "sleep"
              template: snooze
              withParam: "{{steps.generate.outputs.result}}"
          - - name: "followup"
              template: snooze
    
      # Generate a list of numbers in JSON format
      - name: gen-number-list
        nodeSelector:
          intent: task-workers
        script:
          image: python:3.8.5-alpine3.12
          imagePullPolicy: IfNotPresent
          command: [python]
          source: |
            import json
            import sys
            json.dump([i for i in range(0, {{workflow.parameters.step-count}})], sys.stdout)
      - name: snooze
        metrics:
          prometheus:
            - name: "resource_duration_cpu"      # Metric name (will be prepended with "argo_workflows_")
              help: "Resource Duration CPU" # A help doc describing your metric. This is required.
              labels:
                 - key: workflow_template
                   value: "{{workflow.parameters.friendly-name}}"
              gauge:                            # The metric type. Available are "gauge", "histogram", and "counter".
                value: "{{resourcesDuration.cpu}}"  # The value of your metric. It could be an Argo variable (see variables doc) or a literal value
            - name: "resource_duration_memory"      # Metric name (will be prepended with "argo_workflows_")
              help: "Resource Duration Memory" # A help doc describing your metric. This is required.
              labels:
                 - key: workflow_template
                   value: "{{workflow.parameters.friendly-name}}"
              gauge:                            # The metric type. Available are "gauge", "histogram", and "counter".
                value: "{{resourcesDuration.memory}}"  # The value of your metric. It could be an Argo variable (see variables doc) or a literal value
        nodeSelector:
          intent: task-workers
        podSpecPatch: '{"containers":[{"name":"main", "resources":{"requests":{"cpu": "{{workflow.parameters.cpu-limit}}", "memory": "{{workflow.parameters.mem-limit}}"}, "limits":{"cpu": "{{workflow.parameters.cpu-limit}}", "memory": "{{workflow.parameters.mem-limit}}" }}}]}'
        container:
          image: alpine
          imagePullPolicy: IfNotPresent
          command: [sleep]
          args: ["{{workflow.parameters.sleep-seconds}}"]
    

    fig. 7

    #!/usr/bin/env bash
    set -euo pipefail
    while true; do
      argo submit \
        -n argo-workflows \
        --from workflowtemplate/sleep-fanout-test-template \
        -p step-count="100" \
        -p sleep-seconds="8" &>/dev/null
      echo -n "."
      sleep 10
    done
    

    Message from the maintainers:

    Impacted by this bug? Give it a πŸ‘. We prioritise the issues with the most πŸ‘.

  • Support authentication via Shared Access Signatures (SAS) for Azure artifacts

    Support authentication via Shared Access Signatures (SAS) for Azure artifacts

    Summary

    Support authentication via Shared Access Signatures (SAS) for Azure artifacts

    Use Cases

    With Argo v3.4, support for Azure artifacts was implemented. As far as I understand, the current implementation only supports authentication via Storage Account access keys, not via Shared Access Signatures (SAS). I would like to be able to use SAS for authentication, too.

    @brianloss Since you worked on this feature, could you confirm whether my assumption is correct? I could not find a way to make it work with SAS tokens. (And thanks, by the way, for implementing it!)

    My team and I would potentially be willing to contribute this.
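
    For context, the key-based configuration that v3.4 does support looks roughly like this (a sketch; the account, container, and secret names are placeholders, and there is no SAS token field today):

    # Azure artifact configured with a Storage Account access key (v3.4).
    # A SAS-based variant would presumably need an analogous credential
    # field, which does not exist yet.
    artifacts:
      - name: output
        path: /tmp/output
        azure:
          endpoint: https://mystorageaccount.blob.core.windows.net
          container: my-container
          blob: path/in/container
          accountKeySecret:        # Storage Account access key; no SAS equivalent
            name: my-azure-credentials
            key: account-access-key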


    Message from the maintainers:

    Love this enhancement proposal? Give it a πŸ‘. We prioritise the proposals with the most πŸ‘.

  • Add functionality to exclude specific dates to not run workflow on (ex. holidays/non-working days).

    Add functionality to exclude specific dates to not run workflow on (ex. holidays/non-working days).

    Summary

    Functionality to not run workflow(s) on certain dates, similar to the holiday feature in SOS Berlin's JobScheduler.

    Use Cases

    For example when you or your company has fixed holidays/non-working days when you do not want certain workflow(s) to run.
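
    CronWorkflow has no native excluded-dates field today; a common workaround is to make the first step check the current date against an exclusion list and fail (or set an output) so the remaining steps are skipped. A minimal sketch of that check — the holiday list and exit-code convention below are illustrative, not part of Argo:

    ```python
    import sys
    from datetime import date

    # Illustrative exclusion list; in a real workflow this could come from a
    # ConfigMap or a workflow parameter instead of being hard-coded.
    HOLIDAYS = {"2023-01-01", "2023-12-25"}

    def should_run(today: date) -> bool:
        """Return False when today is an excluded date, True otherwise."""
        return today.isoformat() not in HOLIDAYS

    if __name__ == "__main__":
        # Exit non-zero on holidays so a `when:` condition or retry policy
        # can short-circuit the rest of the workflow.
        sys.exit(0 if should_run(date.today()) else 1)
    ```

    Wrapped in a `script` template, this gate runs before the real work and keeps the cron schedule itself unchanged.
    
    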


    Message from the maintainers:

    Love this enhancement proposal? Give it a πŸ‘. We prioritise the proposals with the most πŸ‘.

  • SSO RBAC is not working in version 3.4.3 when keycloak is used as OIDC provider

    SSO RBAC is not working in version 3.4.3 when keycloak is used as OIDC provider

    Pre-requisites

    • [X] I have double-checked my configuration
    • [X] I can confirm the issues exists when I tested with :latest
    • [ ] I'd like to contribute the fix myself (see contributing guide)

    What happened/what you expected to happen?

    Hi, I am trying to bring up the Argo server with SSO enabled. I followed the documentation (https://argoproj.github.io/argo-workflows/argo-server-sso/#sso-rbac), created the required secret, and set up the other necessary resources. When I log in with a Keycloak user, the login succeeds, but the service account associated with the group is not shown in the UI, nor is it applied automatically. Please help me with this issue.
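
    One thing worth checking in this situation (an assumption, not a confirmed diagnosis): the ID token issued by Keycloak may not carry a `groups` claim at all unless a group-membership mapper is added to the client, in which case the `rbac-rule` expression can never match. A quick way to inspect the token is to decode its payload segment locally:

    ```python
    import base64
    import json

    def jwt_claims(token: str) -> dict:
        """Decode the (unverified) payload segment of a JWT for inspection."""
        payload = token.split(".")[1]
        # Restore base64url padding before decoding.
        payload += "=" * (-len(payload) % 4)
        return json.loads(base64.urlsafe_b64decode(payload))

    if __name__ == "__main__":
        # Demo with a dummy unsigned token carrying a groups claim; paste the
        # real id_token from Keycloak here instead.
        payload = base64.urlsafe_b64encode(
            json.dumps({"groups": ["argoadmingroups"]}).encode()
        ).decode().rstrip("=")
        dummy = "e30." + payload + "."
        # The rbac-rule "'argoadmingroups' in groups" only matches if this
        # claim exists and contains the group name.
        print(jwt_claims(dummy).get("groups"))  # ['argoadmingroups']
    ```
    
    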

    (screenshot attached)

    My RBAC configuration is as follows:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: argoAdmin
    rules:
      - apiGroups: [""]
        resources:
          - configmaps
          - events
        verbs:
          - get
          - watch
          - list
      - apiGroups: [""]
        resources:
          - secrets
        verbs:
          - get
          - create
      - apiGroups: [""]
        resources:
          - serviceaccounts
        verbs:
          - get
          - list
      - apiGroups: [""]
        resources:
          - pods
          - pods/exec
          - pods/log
        verbs:
          - get
          - list
          - watch
          - delete
          - patch
      - apiGroups: ["policy"]
        resources:
          - poddisruptionbudgets
        verbs:
          - create
          - get
          - delete
      - apiGroups: [argoproj.io]
        resources:
          - workflows
          - workfloweventbindings
          - workflowtemplates
          - cronworkflows
          - cronworkflows/finalizers
          - workflowtaskresults
          - workflowtasksets
          - workflowartifactgctasks
          # - clusterworkflowtemplates  # If creating a ClusterRoleBinding, uncomment this
        verbs:
          - create
          - get
          - list
          - watch
          - update
          - patch
          - delete
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: argoadmin-cluster-template
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: argoAdmin
    subjects:
      - kind: ServiceAccount
        name: argoadmin
        namespace: argotest
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: argoadmin
      namespace: argotest
      annotations:
        workflows.argoproj.io/rbac-rule: "'argoadmingroups' in groups"
        workflows.argoproj.io/rbac-rule-precedence: "1"

    No error or warning logs are present in the argo server logs.

    Version

    v3.4.3

    Paste a small workflow that reproduces the issue. We must be able to run the workflow; don't enter a workflow that uses private images.

    NA
    

    Logs from the workflow controller

    kubectl logs -n argo deploy/workflow-controller | grep ${workflow}
    NA
    

    Logs from in your workflow's wait container

    kubectl logs -n argo -c wait -l workflows.argoproj.io/workflow=${workflow},workflow.argoproj.io/phase!=Succeeded
    
    NA
    
  • chore(deps): bump nick-fields/retry from 2.8.2 to 2.8.3

    chore(deps): bump nick-fields/retry from 2.8.2 to 2.8.3

    Bumps nick-fields/retry from 2.8.2 to 2.8.3.

    Release notes

    Sourced from nick-fields/retry's releases.

    v2.8.3

    2.8.3 (2022-12-30)

    Bug Fixes

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
  • fix: improve rate at which we catch transient gcp errors in artifact driver Fixes #10282 #10174

    fix: improve rate at which we catch transient gcp errors in artifact driver Fixes #10282 #10174

    Fixes #10282 #10174

    Please do not open a pull request until you have checked ALL of these:

    • [x] Create the PR as draft .
    • [x] Run make pre-commit -B to fix codegen and lint problems.
    • [ ] Sign-off your commits (otherwise the DCO check will fail).
    • [x] Use a conventional commit message (otherwise the commit message check will fail).
    • [x] "Fixes #" is in both the PR title (for release notes) and this description (to automatically link and close the issue).
    • [x] Add unit or e2e tests. Say how you tested your changes. If you changed the UI, attach screenshots.
    • [ ] Github checks are green.
    • [ ] Once required tests have passed, mark your PR "Ready for review".

    If changes were requested, and you've made them, dismiss the review to get it reviewed again.

  • Allow controlling `parallelism` within a `containerSet`

    Allow controlling `parallelism` within a `containerSet`

    Summary

    The parallelism setting (docs) "limits the max total parallel pods that can execute at the same time in a workflow". I hoped to find a field for container sets (docs) that would allow limiting container parallelism, or, alternatively, that this field would limit "step" parallelism rather than pod parallelism.

    Use Cases

    We have some logic acting as a middle layer between the user and Argo. Depending on some condition A, a given pipeline runs either as a container set or as a "normal" workflow where one pod is created per step; this is transparent to the user. We would like to offer the ability to limit step parallelism without resorting to custom logic based on condition A.
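
    For comparison, here is the existing workflow-level field next to the field this proposal asks for. The `parallelism` key under `containerSet` is hypothetical — it does not exist in the API today:

    apiVersion: argoproj.io/v1alpha1
    kind: Workflow
    metadata:
      generateName: containerset-parallelism-
    spec:
      parallelism: 2          # existing: caps concurrent *pods* in the workflow
      entrypoint: main
      templates:
        - name: main
          containerSet:
            # parallelism: 2  # proposed (hypothetical): cap concurrent containers
            containers:
              - name: a
                image: alpine
                command: [sh, -c, "sleep 2"]
              - name: b
                image: alpine
                command: [sh, -c, "sleep 2"]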


    Message from the maintainers:

    Love this enhancement proposal? Give it a πŸ‘. We prioritise the proposals with the most πŸ‘.
