Serverless Container Workflows

direktiv: event-based serverless container workflows

Check out our online demo: wf.direktiv.io

What is Direktiv?

Direktiv is a specification for a serverless computing workflow language that aims to be simple and powerful above all else.

Direktiv defines a selection of intentionally primitive states, which can be strung together to create workflows as simple or complex as the author requires. The powerful jq JSON processor allows authors to implement sophisticated control-flow logic, and when combined with the ability to run containers as part of Direktiv workflows, just about any logic can be implemented.

Workflows can be triggered by CloudEvents for event-based solutions, can use cron scheduling to handle periodic tasks, and can be scripted using the APIs for everything else.
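
For illustration, here is a minimal sketch of an event-triggered workflow, using the same YAML constructs that appear in the examples later in this README; the event type and payload field (com.example.file.uploaded, .data.size) are hypothetical placeholders:

id: on-upload
start:
  type: event
  state: check-size
  event:
    type: com.example.file.uploaded
states:
- id: check-size
  type: switch
  conditions:
  - condition: .data.size > 1048576
    transition: reject
  defaultTransition: accept
- id: accept
  type: noop
  transform: '{ result: "accepted" }'
- id: reject
  type: noop
  transform: '{ result: "rejected" }'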

Why use Direktiv?

Direktiv was created to address four problems faced with workflow engines in general:

  • Cloud agnostic: we wanted Direktiv to run on any platform or cloud, support any code or capability and NOT be dependent on the cloud provider's services for running the workflow or executing the actions (but obviously support it all)
  • Simplicity: the configuration of the workflow components should be simple more than anything else. Using only YAML and jq you should be able to express all workflow states, transitions, evaluations and actions needed to complete the workflow
  • Reusable: if you're going to the effort and trouble of pushing all your microservices, code or application components into a container platform, why not have the ability to reuse and standardise this code across all of your workflows? We wanted to ensure that your code always remains reusable and portable, not tied to a specific vendor format, requirement or language - so we've modelled Direktiv's specification on the CNCF Serverless Workflow Specification, with the ultimate goal of making it feature-complete and easy to implement.
  • Multi-tenanted and secure: we want to use Direktiv in a multi-tenant service provider space, which means all workflow executions have to be isolated, data access secured and isolated and all workflows and actions are truly ephemeral (or serverless).

Direktiv internals

This repository contains a reference implementation that runs Docker containers as isolated virtual machines on Firecracker using Vorteil.io.


Quickstart

Starting the Server

Getting a local playground environment is easy with either Vorteil.io or Docker. Direktiv's default isolation level is Firecracker, based on Vorteil machines. This behaviour can be changed in the configuration file or via an environment variable.

Using Docker:

Firecracker Isolation

docker run --privileged -p6666:6666 -eDIREKTIV_INGRESS_BIND=0.0.0.0:6666 vorteil/direktiv

Container Isolation:

docker run --privileged -p6666:6666 -eDIREKTIV_INGRESS_BIND=0.0.0.0:6666 -eDIREKTIV_ISOLATION=container vorteil/direktiv

Note:

  • You may need to run this command as an administrator.

  • In a public cloud instance, nested virtualization is needed to support the Firecracker micro-VMs. Each public cloud provider has different configuration settings which need to be applied to enable nested virtualization.

Using Vorteil:

With Vorteil installed (full instructions here):

  1. download direktiv.vorteil from the releases page,
  2. run one of the following commands from within your downloads folder:

vorteil run direktiv.vorteil (firecracker/vorteil isolation)

vorteil run --program[2].env="DIREKTIV_ISOLATION=container" direktiv.vorteil (container isolation)

Testing Direktiv:

Download the direkcli command-line tool from the releases page and create your first namespace by running:

$ direkcli namespaces create demo
Created namespace: demo
$ direkcli namespaces list
+------+
| NAME |
+------+
| demo |
+------+

Workflow specification

The example below is the minimal configuration needed for a workflow, following the workflow language specification:

id: helloworld
states:
- id: hello
  type: noop
  transform: '{ msg: ("Hello, " + .name + "!") }'
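
A state can also declare a transition to chain states together, with each transform reshaping the instance data via jq. Below is a small sketch using only constructs shown elsewhere in this README (ascii_upcase is a standard jq builtin):

id: helloworld-chain
states:
- id: hello
  type: noop
  transform: '{ msg: ("Hello, " + .name + "!") }'
  transition: shout
- id: shout
  type: noop
  transform: '{ msg: (.msg | ascii_upcase) }'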

Creating and Running a Workflow

The following script does everything required to run the first workflow. This includes creating a namespace & workflow and running the workflow the first time.

$ direkcli namespaces create demo
Created namespace: demo
$ cat > helloworld.yml <<- EOF
id: helloworld
states:
- id: hello
  type: noop
  transform: '{ msg: ("Hello, " + .name + "!") }'
EOF
$ direkcli workflows create demo helloworld.yml
Created workflow 'helloworld'
$ cat > input.json <<- EOF
{
  "name": "Alan"
}
EOF
$ direkcli workflows execute demo helloworld --input=input.json
Successfully invoked, Instance ID: demo/helloworld/aqMeFX <---CHANGE_THIS_TO_YOUR_VALUE
$ direkcli instances get demo/helloworld/aqMeFX
ID: demo/helloworld/aqMeFX
Input: {
  "name": "Alan"
}
Output: {"msg":"Hello, Alan!"}

Roadmap

  • Installation instructions (Kubernetes, Non-Kubernetes environments, Container/Vorteil setting)
  • Providing individual vorteil / docker containers for individual components (workflow, isolates etc.)
  • HTTP API & Simple UI
  • Service Mesh configuration

Code of Conduct

We have adopted the Contributor Covenant code of conduct.

Contributing

Any feedback and contributions are welcome. Read our contributing guidelines for details.

License

Distributed under the Apache 2.0 License. See LICENSE for more information.

Comments
  • Update Helm charts to make ingress-nginx more configurable

    Given we want to try out Direktiv in Kubernetes, when we install the dependencies, then we want the ability to bring our own ingress-nginx and set our own annotations for the ingress, for things like external-dns, cert-manager, etc.

    Acceptance criteria:

    API Ingress Manifest

    charts/direktiv/templates/ingress-api.yaml

    • [ ] Add Annotations to charts/direktiv/values.yaml under ingress line 92 for ingress-api.yaml

    UI Ingress Manifest

    charts/direktiv/templates/ingress-ui.yaml

    • [ ] Add Annotations to charts/direktiv/values.yaml under ingress line 92 for ingress-ui.yaml

    Note: it could be better to put the ingress configuration under the ui: and api: keys, as this will make it more scalable in the future; a hypothetical sketch follows at the end of this item.

    Add ingress-nginx dependency as optional

    charts/direktiv/Chart.yaml

    • [ ] Have a flag for whether you install the ingress-nginx or not.
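
    A hypothetical sketch of what the proposed values.yaml layout could look like; every key below is an assumption for illustration, not the chart's current schema (the annotation keys shown are standard cert-manager and external-dns annotations):

    ingress:
      install: false   # proposed flag: skip the bundled ingress-nginx dependency
    api:
      ingress:
        annotations:
          cert-manager.io/cluster-issuer: letsencrypt
    ui:
      ingress:
        annotations:
          external-dns.alpha.kubernetes.io/hostname: direktiv.example.com
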
  • access to instance id

    Is your feature request related to a problem? Please describe.

    It would be good if an instance could access its own instance id. For example, if I want to publish events, I could use the instance id as a context value to wait for multiple async subflows.

    Describe the solution you'd like

    Up for discussion

    Describe alternatives you've considered

    At the moment I'm using something like jq(.name + (now | todateiso8601 | fromdate | tostring)), which does not produce very nice output.

  • Workflow Example did not work

    Describe the bug: the example workflow below could not be created.

    To reproduce, follow these steps:

    1. docker run --privileged -p 8080:80 -ti vorteil/direktiv-kube (done, it works)
    2. direkcli namespaces create demo (done, it works)
    3. create a test.yml file with the content below (done)

    id: check-nsfw-image
    description: "Classify an image uploaded to Azure Blob Storage as SFW or NSFW using Google Vision, AWS Lambda and Azure Storage functions"
    start:
      type: event
      state: getRatingFromGoogleVision
      event:
        type: Microsoft.Storage.BlobCreated
    functions:
    - id: imageCheck
      image: vorteil/imagerecognition:v2
    - id: awslambda
      image: vorteil/lambda:v2
    - id: send-email
      image: vorteil/smtp:v2
    - id: azureupload
      image: vorteil/azure-upload:v2
    - id: azurecli
      image: vorteil/azgo:v2
      size: large
    states:
    - id: getRatingFromGoogleVision
      type: action
      action:
        secrets: ["GOOGLE_SERVICE_ACCOUNT_KEY"]
        function: imageCheck
        input: '{ "url": ."Microsoft.Storage.BlobCreated".url, "serviceAccountKey": .secrets.GOOGLE_SERVICE_ACCOUNT_KEY }'
      transition: checkRatingForImage
    - id: checkRatingForImage
      log: "."
      type: switch
      conditions:
      - condition: .return.safeForWork == true
        transition: addWaterMarkApproved
      defaultTransition: addWaterMarkNotApproved
    - id: addWaterMarkApproved
      type: action
      action:
        function: awslambda
        secrets: ["LAMBDA_KEY", "LAMBDA_SECRET"]
        input: '{ key: .secrets.LAMBDA_KEY, secret: .secrets.LAMBDA_SECRET, region: "ap-southeast-2", function: "python-watermark", body: { imageurl: ."Microsoft.Storage.BlobCreated".url, message: "Approved by Direktiv.io", } }'
      transform: '.notify = .return | del(.return)'
      transition: copyFileToSafeForWork
    - id: addWaterMarkNotApproved
      type: action
      action:
        function: awslambda
        secrets: ["LAMBDA_KEY", "LAMBDA_SECRET"]
        input: '{ key: .secrets.LAMBDA_KEY, secret: .secrets.LAMBDA_SECRET, region: "ap-southeast-2", function: "python-watermark", body: { imageurl: ."Microsoft.Storage.BlobCreated".url, message: "Not approved by Direktiv.io", } }'
      transform: '.notify = .return | del(.return)'
      transition: sendEmail
    - id: sendEmail
      type: action
      log: "."
      action:
        function: send-email
        secrets: ["GMAIL_PASSWORD"]
        input: '{ "from": "[email protected]", "to": "[email protected]", "subject": "Direktiv NSFW Image Workflow", "message": "NSFW Image detected", "server": "smtp.gmail.com", "port": 587, "password": .secrets.GMAIL_PASSWORD }'
      transition: copyFileToNotSafeForWork
    - id: copyFileToNotSafeForWork
      type: action
      log: "."
      action:
        secrets: ["AZ_STORAGE_ACCOUNT", "AZ_STORAGE_KEY"]
        function: azureupload
        input: '{ "container": "not-safe-for-work", "storage-account": .secrets.AZ_STORAGE_ACCOUNT, "storage-account-key": .secrets.AZ_STORAGE_KEY, "data": .notify.body, "upload-name": (."Microsoft.Storage.BlobCreated".url | capture("(?<filename>[a-z.]+$)").filename) }'
      transition: cleanup
    - id: copyFileToSafeForWork
      type: action
      log: "."
      action:
        secrets: ["AZ_STORAGE_ACCOUNT", "AZ_STORAGE_KEY"]
        function: azureupload
        input: '{ "container": "safe-for-work", "storage-account": .secrets.AZ_STORAGE_ACCOUNT, "storage-account-key": .secrets.AZ_STORAGE_KEY, "data": .notify.body, "upload-name": (."Microsoft.Storage.BlobCreated".url | capture("(?<filename>[a-z.]+$)").filename) }'
      transition: cleanup
    - id: cleanup
      type: action
      action:
        secrets: ["AZ_STORAGE_ACCOUNT", "AZ_NAME", "AZ_PASSWORD", "AZ_TENANT", "AZ_STORAGE_KEY"]
        function: azurecli
        input: '{ "name": .secrets.AZ_NAME, "password": .secrets.AZ_PASSWORD, "tenant": .secrets.AZ_TENANT, "command": ["storage", "blob", "delete", "--container", "processing", "--name", (."Microsoft.Storage.BlobCreated".url | split("processing/")[1]), "--account-name", .secrets.AZ_STORAGE_ACCOUNT, "--account-key", .secrets.AZ_STORAGE_KEY] }'
    4. ./direkcli workflows create demo test.yml (failed)

    Desktop (please complete the following information):

    • OS: macOS Big Sur 11.3
  • How can we create a function to do a CI job

    I want to use Direktiv for some CI jobs in a serverless worker, but after looking at the function documentation (link), I found that we cannot mount anything into the pod, so we cannot do Docker-in-Docker, which means we cannot use docker or podman to work with images. How can we create a function to do a CI job now?

  • Filter broadcast events

    Description

    Purpose

    • [ ] Bug fix
    • [ ] New feature
    • [ ] Other

    How was this tested? (if applicable)

    Test Platform Details (if applicable)

    Operating system: OSX/Windows 10/Ubuntu 20.04/etc.

    CLI version:

    Hypervisors/Platforms (if applicable): qemu/hyper-v/google cloud platform/vmware workstation/etc

    Kernel version (if applicable):

    Checklist

    • [ ] Code is commented
    • [ ] Unit test coverage encompasses new code
    • [ ] Existing unit tests pass with these changes
    • [ ] PR is signed
  • New ent schema will not migrate database gracefully.

    Describe the bug

    Upgrading to the latest ent schemas triggers the following error on Postgres when it is using an older database.

    2021-11-30 13:57:10.201 AEST [864] ERROR:  column "created_at" contains null values
    2021-11-30 13:57:10.201 AEST [864] STATEMENT:  ALTER TABLE "refs" ADD COLUMN "created_at" timestamp with time zone NOT NULL
    

    We need to investigate whether it's possible to adjust ent's migration process to accommodate this new column, or whether there is something we can change in the schema.
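
    A common way to make such a migration graceful, sketched below as hypothetical manual SQL (not ent's generated output; the now() backfill value is an assumption):

    -- add the column as nullable first
    ALTER TABLE "refs" ADD COLUMN "created_at" timestamp with time zone;
    -- backfill existing rows so the NOT NULL constraint can be satisfied
    UPDATE "refs" SET "created_at" = now() WHERE "created_at" IS NULL;
    -- only then enforce the constraint
    ALTER TABLE "refs" ALTER COLUMN "created_at" SET NOT NULL;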

  • error state seems to be inconsistent to other states

    Is your feature request related to a problem? Please describe.

    All other states accept jq input for their fields, but the error state can only accept a string for the error field. I wanted to catch an error, handle it, and then still raise it afterwards. The use-case would be to use jq(.error.code) in the error field.

    The message field has the same "issue": if it accepted jq instead of the args field it would be easier to use, because Direktiv basically has a sprintf with jq built in.

    Now:

    - id: error-out-of-date
      type: error
      error: validation.outOfDate
      message: "food item %s is out of date"
      args:
      - jq(.item.name)
    

    New:

    - id: error-out-of-date
      type: error
      error: jq(.whatever) OR my.new.error
      message: 'This is an error jq(.myvalue)'
    
  • Wait API with no body data now assumes JSON input even when no input is provided

    Signed-off-by: Alan Murtagh [email protected]

    I've decided that for GET requests it's fine to assume JSON input. GET requests cannot reasonably handle binary data anyway, and if you really want to you can emulate it by passing ?input.input= and setting it to base64-encoded binary data.

  • how to config subflow

    In the previous version, a parent workflow related to its child workflow via the "id" field. Workflows no longer have an "id" field, so how does a parent workflow reference a child workflow?

  • Cannot re-trigger event received: error: "send namespace event: method is not allowed"

    Describe the bug: Received an event from Google Cloud EventArc via the Direktiv container as described in the documentation. The event is received and can execute a workflow. When I try to trigger the event again, the following error appears:

    "send namespace event: method is not allowed"

    An example of the event input is shown below (note: the data is binary; something is wrong in the Direktiv Google EventArc container code):

    Context Attributes,
      specversion: 1.0
      type: google.cloud.audit.log.v1.written
      source: //cloudaudit.googleapis.com/projects/exalted-iridium-367300/logs/activity
      id: projects/exalted-iridium-367300/logs/cloudaudit.googleapis.com%2Factivityr7a0hndbrmi1667689902263958
      time: 2022-11-05T23:11:42.837387365Z
      datacontenttype: application/json
    Data (binary),
      { "protoPayload": { "authenticationInfo": { "principalEmail": "[email protected]" }, "requestMetadata": { "callerIp": "2601:643:4001:4c0:9c47:2941:e9b5:bc90", "callerSuppliedUserAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36,gzip(gfe),gzip(gfe)" }, "serviceName": "compute.googleapis.com", "methodName": "v1.compute.instances.delete", "resourceName": "projects/exalted-iridium-367300/zones/us-central1-a/instances/instance-template-1", "serviceData": {}, "request": { "@type": "type.googleapis.com/compute.instances.delete" } }, "insertId": "r7a0hndbrmi", "resource": { "type": "gce_instance", "labels": { "instance_id": "3705468946020520950", "zone": "us-central1-a", "project_id": "exalted-iridium-367300" } }, "timestamp": "2022-11-05T23:11:42.263958Z", "severity": "NOTICE", "logName": "projects/exalted-iridium-367300/logs/cloudaudit.googleapis.com%2Factivity", "operation": { "id": "operation-1667689855034-5ecc14d67a55a-625b9bf3-04683458", "producer": "compute.googleapis.com", "last": true }, "receiveTimestamp": "2022-11-05T23:11:42.837387365Z" }

  • Git sync throws ent error on renaming

    Describe the bug

    I renamed a few files in https://github.com/direktiv/apps-svc. The following rename actions were applied:

     rename generate-tags.yaml => generate-tags._yaml (100%)
     rename get-html.yaml => get-html._yaml (100%)
     rename get-swagger.yaml => get-swagger._yaml (100%)
     rename list.yaml => list._yaml (100%)
     rename load-spec.yaml => load-spec_.yaml (100%)
     rename load-specs.yaml => load-specs._yaml (100%)
     rename md.yaml => md._yaml (100%)
     rename pq.yaml => pq._yaml (100%)
     rename spec.yaml => spec._yaml (100%)
     rename tags.yaml => tags._yaml (100%)
     rename uri-key.yaml => uri-key._yaml (100%)
     rename uri-tag-key.yaml => uri-tag-key._yaml (100%)
    

    The sync fails with:

    Mirror activity 'sync' failed: ent: validator failed for field "Inode.name": value does not match validation
    

    The last log lines:

    {"level":"info","timestamp":"2022-07-26T08:27:34Z","caller":"flow/logs.go:126","msg":"Deleted workflow '/get-swagger'.","component":"flow","build":"10e25bf","trace":"00000000000000000000000000000000","namespace":"svc","namespace-id":"8fb6dc8f-1b1d-470b-a5fa-d44b1ed75f80"}
    {"level":"debug","timestamp":"2022-07-26T08:27:34Z","caller":"flow/pubsub.go:517","msg":"PS Notify Inode: c4f1d45f-8ca0-43d3-a968-26627cdb7090","component":"flow","build":"10e25bf"}
    {"level":"info","timestamp":"2022-07-26T08:27:34Z","caller":"flow/logs.go:126","msg":"Deleted workflow '/uri-tag-key'.","component":"flow","build":"10e25bf","trace":"00000000000000000000000000000000","namespace":"svc","namespace-id":"8fb6dc8f-1b1d-470b-a5fa-d44b1ed75f80"}
    {"level":"debug","timestamp":"2022-07-26T08:27:34Z","caller":"flow/pubsub.go:517","msg":"PS Notify Inode: c4f1d45f-8ca0-43d3-a968-26627cdb7090","component":"flow","build":"10e25bf"}
    {"level":"info","timestamp":"2022-07-26T08:27:34Z","caller":"flow/logs.go:271","msg":"Mirror activity 'sync' failed: ent: validator failed for field \"Inode.name\": value does not match validation","component":"flow","build":"10e25bf","trace":"00000000000000000000000000000000","namespace":"svc","namespace-id":"8fb6dc8f-1b1d-470b-a5fa-d44b1ed75f80","mirror-id":"13340a02-1184-4ad5-b8c7-9413ae545d96"}
    
    

    It seems obvious that this is caused by the rename load-spec.yaml => load-spec_.yaml, since that was a rename to an invalid workflow name. The solution is probably to improve the logs.

  • 738 install k3s instructions in readme is broken

    Description

    The Kubernetes 'TTLAfterFinished' feature gate was removed (see this).

    This PR removes 'TTLAfterFinished' usage in the k3s installation instruction.

  • Install k3s instructions in Readme is broken

    Describe the bug: the install k3s instructions in the README are broken due to the removal of this k8s feature.

    --kube-apiserver-arg feature-gates=TTLAfterFinished=true needs to be removed.

  • Expand service configuration

    Is your feature request related to a problem? Please describe.

    At the moment services can define commands and environment variables (0.8.0).

    Additionally, they should be able to run as privileged and potentially mount disks.
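
    A hypothetical sketch of how this could look, extending the functions syntax from the workflow example earlier in this document; the privileged and mounts fields are assumptions describing the request, not existing syntax:

    functions:
    - id: builder
      image: example/builder:v1
      size: large
      privileged: true   # proposed: run the service as a privileged container
      mounts:            # proposed: mount host disks into the service
      - /var/run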

  • API first approach

    At the moment the API has swagger configurations, but it follows an implementation-first approach. This can lead to discrepancies between the implementation and the swagger documentation.

    The API should be built swagger-first, with the implementation following. That guarantees consistent documentation and implementation.

  • A few services have SSE implementations only

    Is your feature request related to a problem? Please describe.

    The API has three services which only implement Server-Sent Events:

    • watchNamespaceRevisions
    • watchNamespaceRevision
    • singleNamespaceServiceSSE

    Although they are not needed for the UI, they should be implemented for consistency.
