µTask is an automation engine that models and executes business processes declared in yaml. ✏️📋

µTask, the Lightweight Automation Engine


µTask is an automation engine built for the cloud. It is:

  • simple to operate: only a postgres DB is required
  • secure: all data is encrypted, only visible to authorized users
  • extensible: you can develop custom actions in golang

µTask allows you to model business processes in a declarative yaml format. Describe a set of inputs and a graph of actions and their inter-dependencies: µTask will asynchronously handle the execution of each action, working its way around transient errors and keeping an encrypted, auditable trace of all intermediary states until completion.
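As a first taste of the format, here is a minimal, hypothetical template sketch (the hello-world name, input and step are illustrative; the full schema is covered in "Authoring Task Templates" below):

```yaml
name: hello-world
description: Greet a user
title_format: "Greeting {{.input.user}}"
inputs:
- name: user
  description: Name of the user to greet
steps:
  greet:
    description: Say hello
    action:
      type: echo
      configuration:
        output:
          message: "Hello {{.input.user}}!"
```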


Real-world examples

Here are a few real-world examples that can be implemented with µTask:

Kubernetes ingress TLS certificate provisioning

A new ingress is created on the production kubernetes cluster. A hook triggers a µTask template that:

  • generates a private key
  • requests a new certificate
  • meets the certificate issuer's challenges
  • commits the resulting certificate back to the cluster

New team member bootstrap

A new member joins the team. The team leader starts a task specifying the new member's name, that:

  • asks the new team member to generate an SSH key pair and copy the public key in a µTask-generated form
  • registers the public SSH key centrally
  • creates accounts on internal services (code repository, CI/CD, internal PaaS, ...) for the new team member
  • triggers another task to spawn a development VM
  • sends a welcome email full of GIFs

Payments API asynchronous processing

The payments API receives a request that requires an asynchronous antifraud check. It spawns a task on its companion µTask instance that:

  • calls a first risk-assessing API which returns a number
  • if the risk is low, the task succeeds immediately
  • otherwise it calls a SaaS antifraud solution API which returns a score
  • if the score is good, the task succeeds
  • if the score is very bad, the task fails
  • if it is in between, it triggers a human investigation step where an operator can enter a score in a µTask-generated form
  • when it is done, the task sends an event to the payments API to notify of the result

The payments API keeps a reference to the running workflow via its task ID. Operators of the payments API can follow the state of current tasks by requesting the µTask instance directly. Depending on the payments API implementation, it may allow its callers to follow a task's state.

Quick start

Running with docker-compose

Download our latest install script, set up your environment and launch your own local instance of µTask.

mkdir utask && cd utask
wget https://github.com/ovh/utask/releases/latest/download/install-utask.sh
sh install-utask.sh
docker-compose up

All the configuration for the application is found in the environment variables in docker-compose.yaml. You'll see that basic auth is set up for user admin with password 1234. Try logging in with this user on the graphical dashboard: http://localhost:8081/ui/dashboard.

You can also explore the API schema: http://localhost:8081/unsecured/spec.json.

Request a new task:

Get an overview of all tasks:

Get a detailed view of a running task:

Browse available task templates:

Running with your own postgres service

Alternatively, you can clone this repository and build the µTask binary:

make all

Operating in production

The folder you created in the previous step is meant to become a git repo where you version your own task templates and plugins. Re-download and run the latest install script to bump your version of µTask.

You'll deploy your version of µTask by building a docker image based on the official µTask image, which will include your extensions. See the Dockerfile generated during installation.

Architecture

µTask is designed to run a task scheduler and perform the task workloads within a single runtime: work is not delegated to external agents. Multiple instances of the application will coordinate around a single postgres database: each will be able to determine independently which tasks are available. When an instance of µTask decides to execute a task, it will take hold of that task to avoid collisions, then release it at the end of an execution cycle.

A task will keep running as long as its steps are successfully executed. If a task's execution is interrupted before completion, it will become available to be re-collected by one of the active instances of µTask. That means that execution might start in one instance and resume on a different one.

Maintenance procedures

Key rotation

  1. Generate a new key with symmecrypt, with the 'storage' label.
  2. Add it to your configuration items. The library will take all keys into account and use the latest possible key, falling back to older keys when finding older data.
  3. Set your API in maintenance mode (env var or command line arg, see config below): all write actions will be refused when you reboot the API.
  4. Reboot API.
  5. Make a POST request on the /key-rotate endpoint of the API.
  6. All data will be encrypted with the latest key, you can delete older keys.
  7. De-activate maintenance mode.
  8. Reboot API.

Configuration 🔨

Command line args

The µTask binary accepts the following arguments, as command-line args or environment variables. All are optional and have a default value:

  • init-path: the directory from where initialization plugins (see "Developing plugins") are loaded in *.so form (default: ./init)
  • plugins-path: the directory from where action plugins (see "Developing plugins") are loaded in *.so form (default: ./plugins)
  • templates-path: the directory where yaml-formatted task templates are loaded from (default: ./templates)
  • functions-path: the directory where yaml-formatted functions templates are loaded from (default: ./functions)
  • region: an arbitrary identifier, to aggregate a running group of µTask instances (commonly containers), and differentiate them from another group, in a separate region (default: default)
  • http-port: the port on which the HTTP API listens (default: 8081)
  • debug: a boolean flag to activate verbose logs (default: false)
  • maintenance-mode: a boolean to switch API to maintenance mode (default: false)

Config keys and files

Checkout the µTask config keys and files README.

Authentication

The vanilla version of µTask doesn't handle authentication by itself, it is meant to be placed behind a reverse proxy that provides a username through the "x-remote-user" http header. A username found there will be trusted as is, and used for authorization purposes (admin actions, task resolution, etc...).

For development purposes, an optional basic-auth configstore item can be provided to define a mapping of usernames and passwords. This is not meant for use in production.
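A minimal sketch of such a basic-auth item, assuming it is a plain JSON map of usernames to passwords (matching the admin/1234 pair used in the docker-compose quick start):

```json
{
  "admin": "1234"
}
```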

Extending this basic authentication mechanism is possible by developing an "init" plugin, as described below.

Notification

Every task state change can be notified to a notification backend. µTask implements three different notification backends: Slack, TaT, and generic webhooks.

The default payload sent to generic webhooks is:

{
    "message": "string",
    "task_id": "public_task_uuid",
    "title": "task title string",
    "state": "current task state",
    "template": "template_name",
    "requester": "optional",
    "resolver": "optional",
    "steps": "14/20",
    "potential_resolvers": "user1,user2,admin",
    "resolution_id": "optional,public_resolution_uuid",
    "tags": "{\"tag1\":\"value1\"}"
}

Notification backends can be configured in the global µTask configuration, as described here.

Authoring Task Templates

Checkout the µTask examples directory.

A process that can be executed by µTask is modelled as a task template: it is written in yaml format and describes a sequence of steps, their interdependencies, and additional conditions and constraints to control the flow of execution.

The user that creates a task is called requester, and the user that executes it is called resolver. Both can be the same user in some scenarios.

A user can be allowed to resolve a task in three ways:

  • the user is included in the global configuration's list of admin_usernames
  • the user is included in the task's template list of allowed_resolver_usernames
  • the user is included in the task resolver_usernames list

Value Templating

µTask uses the go templating engine in order to introduce dynamic values during a task's execution. As you'll see in the example template below, template handles can be used to access values from different sources. Here's a summary of how you can access values through template handles:

  • .input.[INPUT_NAME]: the value of an input provided by the task's requester
  • .resolver_input.[INPUT_NAME]: the value of an input provided by the task's resolver
  • .step.[STEP_NAME].output.foo: field foo from the output of a named step
  • .step.[STEP_NAME].metadata.HTTPStatus: field HTTPStatus from the metadata of a named step
  • .step.[STEP_NAME].children: the collection of results from a 'foreach' step
  • .step.[STEP_NAME].error: error message from a failed step
  • .step.[STEP_NAME].state: current state of the given step
  • .config.[CONFIG_ITEM].bar: field bar from a config item (configstore, see above)
  • .iterator.foo: field foo from the iterator in a loop (see foreach steps below)
  • .pre_hook.output.foo: field foo from the output of the step's preHook (see preHooks below)
  • .pre_hook.metadata.HTTPStatus: field HTTPStatus from the metadata of the step's preHook (see preHooks below)
  • .function_args.[ARG_NAME]: argument that needs to be given in the configuration section to the function (see functions below)
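A short, hypothetical sketch combining several handles inside a step's configuration (the step and input names are made up):

```yaml
steps:
  notify:
    description: Report the created user
    dependencies: [createUser]
    action:
      type: echo
      configuration:
        output:
          message: "User {{.step.createUser.output.name}} created for {{.input.team}}"
```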

The following templating functions are available:

  • Golang: builtin functions from Go's text/template package (Doc)
  • Sprig: extended set of functions from the Sprig project (Doc)
  • field: equivalent to the dot notation, for entries with forbidden characters: {{field `config` `foo.bar`}}
  • fieldFrom: equivalent to the dot notation, for entries with forbidden characters; it takes the previous template expression as source for the templating values, e.g. {{ `{"foo.foo":"bar"}` | fromJson | fieldFrom `foo.foo` }}
  • eval: evaluates the value of a template variable: {{eval `var1`}}
  • evalCache: evaluates the value of a template variable and caches it for future usage (to avoid further computation): {{evalCache `var1`}}
  • fromJson: decodes a JSON document into a structure; if the input cannot be decoded as JSON, the function returns an empty string: {{fromJson `{"a":"b"}`}}
  • mustFromJson: similar to fromJson, but returns an error when the JSON is invalid. A common use case is returning a JSON-stringified data structure (object, array) from a JavaScript expression and using one of its members in the template, e.g. {{(eval `myExpression` | fromJson).myArr}}: {{mustFromJson `{"a":"b"}`}}

Basic properties

  • name: a short unique human-readable identifier
  • description: sentence-long description of intent
  • long_description: paragraph-long basic documentation
  • doc_link: URL for external documentation about the task
  • title_format: templateable text, generates a title for a task based on this template
  • result_format: templateable map, used to generate a final result object from data collected during execution

Advanced properties

  • allowed_resolver_usernames: a list of usernames with the right to resolve a task based on this template
  • allow_all_resolver_usernames: boolean (default: false): when true, any user can execute a task based on this template
  • auto_runnable: boolean (default: false): when true, the task will be executed directly after being created, IF the requester is an accepted resolver or allow_all_resolver_usernames is true
  • blocked: boolean (default: false): no tasks can be created from this template
  • hidden: boolean (default: false): the template is not listed on the API, it is concealed to regular users
  • retry_max: int (default: 100): maximum amount of consecutive executions of a task based on this template, before being blocked for manual review

Inputs

When creating a new task, a requester needs to provide parameters described as a list of objects under the inputs property of a template. Additional parameters can be requested from a task's resolver user: those are represented under the resolver_inputs property of a template.

An input's definition allows you to set validation constraints on the values provided for that input. See example template above.

Input properties

  • name: unique name, used to access the value provided by the task's requester
  • description: human readable description of the input, meant to give context to the task's requester
  • regex: (optional) a regular expression that the provided value must match
  • legal_values: (optional) a list of possible values accepted for this input
  • collection: boolean (default: false) a list of values is accepted, instead of a single value
  • type: (string|number|bool) (default: string) the type of data accepted
  • optional: boolean (default: false) the input can be left empty
  • default: (optional) a value assigned to the input if left empty
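A sketch combining these properties (the input names, values and regex are hypothetical):

```yaml
inputs:
- name: environment
  description: Target environment for the deployment
  legal_values: [staging, production]
  default: staging
- name: instance_count
  description: Number of instances to deploy
  type: number
  optional: true
- name: hostnames
  description: Hostnames to configure
  collection: true
  regex: "^[a-z0-9.-]+$"
```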

Variables

A template variable is a named holder of either:

  • a fixed value
  • a JavaScript expression evaluated on the fly.

See the example template above to see variables in action. The expression in a variable can contain template handles to introduce values dynamically (from executed steps, for instance), like a step's configuration.

The JavaScript evaluation is done using otto.
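A hypothetical sketch of the two kinds of variables (names and expression are illustrative); a variable is then dereferenced with the eval templating function, e.g. {{eval `uppercaseName`}}:

```yaml
variables:
- name: apiVersion
  value: "1.0"
- name: uppercaseName
  expression: "'{{.input.name}}'.toUpperCase()"
```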

Steps

A step is the smallest unit of work that can be performed within a task. At its heart, a step defines an action: several types of actions are available, and each type requires a different configuration, provided as part of the step definition. The state of a step will change during a task's resolution process, and determines which steps become eligible for execution. Custom states can be defined for a step, to fine-tune execution flow (see below).

A sequence of ordered steps constitutes the entire workload of a task. Steps are ordered by declaring dependencies between each other. A step declares its dependencies as a list of step names on which it waits, meaning that a step's execution will be on hold until its dependencies have been resolved. More details about dependencies.

The flow of this sequence can further be controlled with conditions on the steps: a condition is a clause that can be run before or after the step's action. A condition can either be used:

  • to skip a step altogether
  • to analyze its outcome and override the engine's default behaviour

Several conditions can be specified, the first one to evaluate as true is applied. A condition is composed of:

  • a type (skip or check)
  • a list of if assertions (value, operator, expected) which all have to be true (AND on the collection),
  • a then object to impact the state of steps (this refers to the current step)
  • an optional message to convey the intention of the condition, making it easier to inspect tasks

Here's an example of a skip condition. The value of an input is evaluated to determine the result: if the value of runType is dry, the createUser step will not be executed, its state will be set directly to DONE.

inputs:
- name: runType
  description: Run this task with/without side effects
  legal_values: [dry, wet]
steps:
  createUser:
    description: Create new user
    action:
      ... etc...
    conditions:
    - type: skip
      if:
      - value: '{{.input.runType}}'
        operator: EQ
        expected: dry
      then:
        this: DONE
      message: Dry run, skip user creation

Here's an example of a check condition. Here the return of an http call is inspected: a 404 status will put the step in a custom NOT_FOUND state. The default behavior would be to consider any 4xx status as a client error, which blocks execution of the task. The check condition allows you to consider this situation as normal, and proceed with other steps that take the NOT_FOUND state into account (creating the missing resource, for instance).

steps:
  getUser:
    description: Get user
    custom_states: [NOT_FOUND]
    action:
      type: http
      configuration:
        url: http://example.org/user/{{.input.id}}
        method: GET
    conditions:
    - type: check
      if:
      - value: '{{.step.getUser.metadata.HTTPStatus}}'
        operator: EQ
        expected: '404'
      then:
        this: NOT_FOUND
      message: User {{.input.id}} not found
  createUser:
    description: Create the user
    dependencies: ["getUser:NOT_FOUND"]
    action:
      type: http
      configuration:
        url: http://example.org/user
        method: POST
        body: |-
          {"user_id":"{{.input.id}}"}

Condition Operators

A condition can use one of the following operators:

  • EQ: equal
  • NE: not equal
  • GT: greater than
  • LT: less than
  • GE: greater or equal
  • LE: less than or equal
  • REGEXP: match a regexp
  • IN: found in a list of values
  • NOTIN: not found in a list of values

Note that the operators IN and NOTIN expect a list of acceptable values in the field value, instead of a single one. You can specify the separator character to use to split the values of the list using the field list_separator (default: ,). Each value of the list will be trimmed of its leading and trailing white spaces before comparison.
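Following the note above, a sketch of an IN assertion (hypothetical input and values), with the list of acceptable values in the value field and a custom separator:

```yaml
conditions:
- type: skip
  if:
  - value: "dev; staging"
    operator: IN
    expected: "{{.input.environment}}"
    list_separator: ";"
  then:
    this: DONE
  message: Non-production environment, skipping
```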

Basic Step Properties

  • name: a unique identifier
  • description: a human readable sentence to convey the step's intent
  • action: the actual task the step executes
  • pre_hook: an action that can be executed before the actual action of the step
  • dependencies: a list of step names on which this step waits before running
  • custom_states: a list of custom allowed states for this step (can be assigned to the step's state using conditions)
  • retry_pattern: (seconds, minutes, hours) define on what temporal order of magnitude the re-runs of this step should be spread (default = seconds)
  • resources: a list of resources that will be used during the step execution, to control and limit the concurrent execution of the step (more information in the resources section).

Action

The action field of a step defines the actual workload to be performed. It consists of at least a type chosen among the registered action plugins, and a configuration fitting that plugin. See below for a detailed description of builtin plugins. For information on how to develop your own action plugins, refer to this section.

When an action's configuration is repeated across several steps, it can be factored by defining base_configurations at the root of the template. For example:

base_configurations:
  postMessage:
    method: POST
    url: http://message.board/new

This base configuration can then be leveraged by any step wanting to post a message, with different bodies:

steps:
  sayHello:
    description: Say hello on the message board
    action:
      type: http
      base_configuration: postMessage
      configuration:
        body: Hello
  sayGoodbye:
    description: Say goodbye on the message board
    dependencies: [sayHello]
    action:
      type: http
      base_configuration: postMessage
      configuration:
        body: Goodbye

These two step definitions are the equivalent of:

steps:
  sayHello:
    description: Say hello on the message board
    action:
      type: http
      configuration:
        body: Hello
        method: POST
        url: http://message.board/new
  sayGoodbye:
    description: Say goodbye on the message board
    dependencies: [sayHello]
    action:
      type: http
      configuration:
        body: Goodbye
        method: POST
        url: http://message.board/new

The output of an action can be enriched by means of an output. For example, in a template with an input field named id, value 1234 and a call to a service which returns the following payload:

{
  "name": "username"
}

The following action definition:

steps:
  getUser:
    description: Prefix an ID received as input, return both
    action:
      type: http
      output:
        strategy: merge
        format:
          id: "{{.input.id}}"
      configuration:
        method: GET
        url: http://directory/user/{{.input.id}}

Will render the following output, a combination of the action's raw output and the output format:

{
  "id": "1234",
  "name": "username"
}

All the strategies available are:

  • merge: data in format must be a dict and will be merged with the output of the action (see the example above)
  • template: the action will return exactly the data in format that can be templated (see Value Templating)
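For comparison, a sketch of the template strategy: the step's result is exactly the templated format object, here discarding the action's raw payload (illustrative):

```yaml
output:
  strategy: template
  format:
    user_id: "{{.input.id}}"
```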

Builtin actions

Browse builtin actions

  • echo: print out a pre-determined result (Access plugin doc)
  • http: make an http request (Access plugin doc)
  • subtask: spawn a new task on µTask (Access plugin doc)
  • notify: dispatch a notification over a registered channel (Access plugin doc)
  • apiovh: make a signed call on OVH's public API (requires credentials retrieved from configstore, containing the fields endpoint, appKey, appSecret, consumerKey; more info here) (Access plugin doc)
  • ssh: connect to a remote system and run commands on it (Access plugin doc)
  • email: send an email (Access plugin doc)
  • ping: send a ping to a hostname (warning: this plugin keeps running until the count is done) (Access plugin doc)
  • script: execute a script from the scripts folder (Access plugin doc)

PreHooks

The pre_hook field of a step can be set to define an action that is executed before the step's action. This field supports all the same fields as the action. It aims to fetch data needed by the action that can change over time and must be fetched at every retry, such as OTPs. All the result values of the preHook are available under the templating variable .pre_hook.

doSomeAuthPost:
  pre_hook:
    type: http
    configuration:
      method: "GET"
      url: "https://example.org/otp"
  action:
    type: http
    configuration:
      method: "POST"
      url: "https://example.org/doSomePost"
      headers:
        X-Otp: "{{ .pre_hook.output }}"

Functions

Functions are abstractions over actions, used to define behavior that can be re-used in templates. They act like plugins but are fully declared in the dedicated functions directory. They can take arguments, which must be given in the configuration section of the action, and which can be used within the function's declaration by accessing the templating variables under .function_args.

name: ovh::request
description: Execute a call to the ovh API
pre_hook:
  type: http
  configuration:
    method: "GET"
    url: https://api.ovh.com/1.0/auth/time
action:
  type: http
  configuration:
    headers:
    - name: X-Ovh-Signature
      value: '{{ printf "%s+%s+%s+%s%s+%s+%v" .config.apiovh.applicationSecret .config.apiovh.consumerKey .function_args.method .config.apiovh.basePath .function_args.path .function_args.body .pre_hook.output | sha1sum | printf "$1$%s"}}'
    - name: X-Ovh-Timestamp
      value: "{{ .pre_hook.output }}"
    - name: X-Ovh-Consumer
      value: "{{ .config.apiovh.consumerKey }}"
    - name: X-Ovh-Application
      value: "{{ .config.apiovh.applicationKey }}"
    method: "{{ .function_args.method }}"
    url: "{{.config.apiovh.basePath}}{{ .function_args.path }}"
    body: "{{ .function_args.body }}"

This function can be used in a template like this:

steps:
  getService:
    description: Get Service
    action:
      type: ovh::request
      configuration:
        path: "{{.input.path}}"
        method: GET
        body: ""

Dependencies

Dependencies can be declared on a step, to indicate what requirements should be met before the step can actually run. A step can have multiple dependencies, which will all have to be met before the step can start running.

A dependency can be qualified with a step's state (stepX:stateY, it depends on stepX, finishing in stateY). If omitted, then DONE is assumed.

There are two different kinds of states: builtin and custom. Builtin states are provided by uTask and include: TODO, RUNNING, DONE, CLIENT_ERROR, SERVER_ERROR, FATAL_ERROR, CRASHED, PRUNE, TO_RETRY, AFTERRUN_ERROR. Additionally, a step can define custom states via its custom_states field. These custom states provide a way for the step to express that it ran successfully, but the result may be different from the normal expected case (e.g. a custom state NOT_FOUND would let the rest of the workflow proceed, but may trigger additional provisioning steps).

A dependency (stepX:stateY) can be on any of stepX's custom states, along with DONE (builtin). These are all considered final (uTask will not touch that step anymore, it has been run to completion). Conversely, other builtin states (CLIENT_ERROR, ...) may not be used in a dependency, since those imply a transient state and the uTask engine still has work to do on these.

If you wish to declare a dependency on something normally considered as a CLIENT_ERROR (e.g. GET HTTP returns a 404), you can write a check condition to inspect your step result, and change it to a custom state instead (meaning an alternative termination, see the NOT_FOUND example)

It is possible that a dependency will never match the expected state. For example, step1 is in DONE state, and step2 has a dependency declared as step1:NOT_FOUND: it means that step2 requires that step1 finishes its execution with state NOT_FOUND. In that case, step2 will never be allowed to run, as step1 finished with state DONE. To remedy this, uTask will remove step2 from the workflow by setting its state to the special state PRUNE. Any further step depending on step2 will also be pruned, removing entire alternative execution branches. This allows crossroads patterns, where a step may be followed by two mutually exclusive branches (one for DONE, one for ALTERNATE_STATE_XXX). (Note: PRUNE may also be used in conditions to manually eliminate entire branches of execution)

A special qualifier that can be used as a dependency state is ANY (stepX:ANY). ANY matches all custom states and DONE, and it also does not get PRUNE'd recursively if stepX is set to PRUNE. This is used mostly for sequencing, either when the actual result of the step does not matter, but its timing does; or to reconcile mutually exclusive branches in a diamond pattern (using e.g. the coalesce templating function to mix optional step results).

For example, step2 can declare a dependency on step1 in the following ways:

  • step1: wait for step1 to be in state DONE (could also be written as step1:DONE)
  • step1:DONE,ALREADY_EXISTS: wait for step1 to be either in state DONE or ALREADY_EXISTS
  • step1:ANY: wait for step1 to be in any "final" state, ie. it cannot keep running
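In template form, such a dependency might look like this (hypothetical steps):

```yaml
steps:
  step2:
    description: Runs once step1 reaches DONE or ALREADY_EXISTS
    dependencies: ["step1:DONE,ALREADY_EXISTS"]
    action:
      type: echo
      configuration:
        output:
          ok: "true"
```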

Loops

A step can be configured to take a json-formatted collection as input, in its foreach property. It will be executed once for each element in the collection, and its result will be the collection of each iteration's result. This scheme makes it possible to chain several steps with the foreach property.

For the following step definition (note json-format of foreach):

steps:
  prefixStrings:
    description: Process a collection of strings, adding a prefix
    foreach: '[{"id":"a"},{"id":"b"},{"id":"c"}]'
    action:
      type: echo
      configuration:
        output:
          prefixed: pre-{{.iterator.id}}

The following output can be expected to be accessible at {{.step.prefixStrings.children}}

[{
  "prefixed": "pre-a"
},{
  "prefixed": "pre-b"
},{
  "prefixed": "pre-c"
}]

This output can be then passed to another step in json format:

foreach: '{{.step.prefixStrings.children | toJson}}'

It's possible to configure the strategy used to run the elements: the default strategy is parallel, where all elements run in parallel to maximize throughput; with sequence, each element runs after the previous one is done, guaranteeing order between elements. It can be declared in the template as follows:

foreach_strategy: "sequence"

Resources

Resources are a way to restrict the concurrency factor of certain operations, to control the throughput and avoid dangerous behavior e.g. flooding the targets.

High level view:

  • For each action to execute, a list of target resources is determined. (see later)
  • In the µTask configuration, numerical limits can be set for each resource label. This acts as a semaphore, allowing a certain number of concurrent slots for the given resource label. If no limit is set for a resource label, the previously mentioned target resources have no effect. Limits are declared in the resource_limits property.

The target resources for a step can be defined in its YAML definition, using the resources property.

steps:
  foobar:
    description: A dummy step, that should not execute in parallel
    resources: ["myLimitedResource"]
    action:
      type: echo
      configuration:
        output:
          foobar: fuzz

Alternatively, some target resources are determined automatically by µTask Engine:

  • When a task is run, the resource template:my-template-name is used automatically.
  • When a step is run, the plugin in charge of the execution automatically generates a list of resources. This includes generic resources such as socket, url:www.example.org, fork... allowing the µTask administrator to set up generic limits such as "socket": 48 or "url:www.example.org": 1.

Each builtin plugin declares resources, which are documented in the plugin's README (see the http plugin for an example).

Declared resource_limits must be positive integers. When a step is to be executed and the maximum number of concurrent executions has been reached, the µTask engine will wait for a slot to be released. If a resource is limited to 0, the step will not be executed: it is set to the TO_RETRY state, and will run once the instance allows the execution of its resources. By default the µTask engine waits 1 minute for a resource to become available, but this can be configured using the resource_acquire_timeout property.
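An illustrative configuration fragment (values and resource labels are made up; where exactly these properties live is described in the configuration README linked above):

```json
{
  "resource_limits": {
    "socket": 48,
    "url:www.example.org": 1,
    "template:my-template-name": 5
  },
  "resource_acquire_timeout": "30s"
}
```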

Task templates validation

A JSON-schema file is available to validate the syntax of task templates: hack/template-schema.json.

Validation can be performed at writing time if you are using a modern IDE or editor.

Validation with Visual Studio Code

  • Install YAML extension from RedHat.
    • Ctrl+P, then type ext install redhat.vscode-yaml
  • Edit your workspace configuration (settings.json file) to add:
{
    "yaml.schemas": {
        "./hack/template-schema.json": ["/*.yaml"]
    }
}
  • Every template will be validated in real time while editing.

Task template snippets with Visual Studio Code

Code snippets are available in this repository to be used for task template editing: hack/templates.code-snippets

To use them inside your repository, copy the templates.code-snippets file into your .vscode workspace folder.

Available snippets:

  • template
  • variable
  • input
  • step

Extending µTask with plugins

µTask is extensible with Go plugins compiled in *.so format. Two kinds of plugins exist:

  • action plugins, that you can re-use in your task templates to implement steps
  • init plugins, a way to customize the authentication mechanism of the API, and to draw data from different providers of the configstore library

The installation script for utask creates a folder structure that will automatically package and build your code in a docker image, with your plugins ready to be loaded by the main binary at boot time. Create a separate folder for each of your plugins, within either the plugins or the init folders.

Action Plugins

Action plugins allow you to extend the kind of work that can be performed during a task. An action plugin has a name, that will be referred to as the action type in a template. It declares a configuration structure, a validation function for the data received from the template as configuration, and an execution function which performs an action based on valid configuration.

Create a new folder within the plugins folder of your utask repo. There, develop a main package that exposes a Plugin variable implementing the TaskPlugin interface defined in the plugins package:

type TaskPlugin interface {
	ValidConfig(baseConfig json.RawMessage, config json.RawMessage) error
	Exec(stepName string, baseConfig json.RawMessage, config json.RawMessage, ctx interface{}) (interface{}, interface{}, error)
	Context(stepName string) interface{}
	PluginName() string
	PluginVersion() string
	MetadataSchema() json.RawMessage
}

The taskplugin package provides helper functions to build a Plugin:

package main

import (
	"github.com/ovh/utask/pkg/plugins/taskplugin"
)

var (
	Plugin = taskplugin.New("my-plugin", "v0.1", exec,
		taskplugin.WithConfig(validConfig, Config{}))
)

type Config struct { ... }

func validConfig(config interface{}) (err error) {
  cfg := config.(*Config)
  ...
  return
}

func exec(stepName string, config interface{}, ctx interface{}) (output interface{}, metadata interface{}, err error) {
  cfg := config.(*Config)
  ...
  return
}

The Exec function returns 3 values:

  • output: an object representing the output of the plugin, usable as {{.step.xxx.output}} in the templating engine.
  • metadata: an object representing the metadata of the plugin, usable as {{.step.xxx.metadata}} in the templating engine.
  • err: an error if the execution of the plugin failed. µTask relies on the github.com/juju/errors package to determine whether the returned error is a CLIENT_ERROR or a SERVER_ERROR.

Warning: output and metadata should not be named structures but plain maps. Otherwise, you might encounter inconsistencies in templating, as keys could differ before and after marshalling in the database.
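A minimal, self-contained sketch of this rule, with a hypothetical Config type (the field and key names are only examples): the exec function returns output and metadata as plain maps, so that keys are identical before and after a marshal/unmarshal round-trip to the database.

```go
package main

import "fmt"

// Config is a hypothetical plugin configuration for this sketch.
type Config struct {
	Message string
}

// exec returns output and metadata as plain maps, never as named structs:
// map keys survive marshalling into the database unchanged, whereas struct
// field names may be rewritten by JSON tags.
func exec(stepName string, config interface{}, ctx interface{}) (interface{}, interface{}, error) {
	cfg := config.(*Config)
	output := map[string]interface{}{"result": cfg.Message}
	metadata := map[string]interface{}{"HTTPStatus": 200}
	return output, metadata, nil
}

func main() {
	out, meta, err := exec("example", &Config{Message: "ok"}, nil)
	fmt.Println(out, meta, err)
}
```

In a real plugin, this exec would be passed to taskplugin.New alongside the validation function, as shown above.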

Init Plugins

Init plugins allow you to customize your instance of µTask by giving you access to its underlying configuration store and its API server.

Create a new folder within the init folder of your utask repo. There, develop a main package that exposes a Plugin variable implementing the InitializerPlugin interface defined in the plugins package:

type Service struct {
	Store  *configstore.Store
	Server *api.Server
}

type InitializerPlugin interface {
	Init(service *Service) error // access configstore and server to customize µTask
	Description() string         // describe what the initialization plugin does
}

As of version v1.0.0, this is meant to give you access to two features:

  • service.Store exposes the RegisterProvider(name string, f configstore.Provider) method, which allows you to plug in different data sources for your configuration that are not available by default in the main runtime
  • service.Server exposes the WithAuth(authProvider func(*http.Request) (string, error)) method, where you can provide a custom source of authentication and authorization based on the incoming HTTP requests

If you develop more than one initialization plugin, they will all be loaded in alphabetical order. You might want to provide a default initialization, plus more specific behaviour under certain scenarios.

Contributing

Backend

In order to iterate on feature development, run the utask server plus a backing postgres DB by invoking make run-test-stack-docker in a terminal. Use SIGINT (Ctrl+C) to reboot the server, and SIGQUIT (Ctrl+4) to tear down the server and its DB.

In a separate terminal, rebuild (make re) each time you want to iterate on a code patch, then reboot the server in the terminal where it is running.

To visualize API routes, a swagger-ui interface is available with the docker image, accessible through your browser at http://hostname.example/ui/swagger/.

Frontend

µTask serves two graphical interfaces: one for general use of the tool (dashboard), the other for task template authoring (editor). They're found in the ui folder, and each has its own Makefile for development purposes.

Run make dev to launch live-reloading on your machine. The editor is a standalone GUI, while the dashboard needs a backing µTask API (see above to run a server).

Run the tests

Run all test suites against an ephemeral postgres DB:

$ make test-docker

Get in touch

You've developed a cool new feature? Fixed an annoying bug? We'll be happy to hear from you! Take a look at CONTRIBUTING.md.

License

The µTask logo is an original artwork under the Creative Commons 3.0 license, based on a design by Renee French under the Creative Commons 3.0 Attribution license.

Swagger UI is open-source software, under the Apache 2 license.

For the rest of the code, see LICENSE.

Comments
  • feature(callback): create callback plugin

    • What kind of change does this PR introduce? (Bug fix, feature, docs update, ...) New feature

    • What is the new behavior (if this is a feature change)? We can now manage callback steps.

    • Does this PR introduce a breaking change? (What changes might users need to make in their application due to this PR?) New SQL schema version (table creation).

    • Other information: N/A

  • feat(auth): Add group support #235

    • What kind of change does this PR introduce? (Bug fix, feature, docs update, ...)

    Feature

    • What is the new behavior ?

    This PR introduces group support to handle access permissions from an identity provider. The goal is to be able to grant rights (resolve tasks, be an administrator) to groups. All users of these groups inherit the granted rights.

    New fields have been added:

    • allowed_resolvers_groups on template: authorizes the users of these groups to resolve the tasks
    • admin_groups on configuration: considers the users of these groups to be administrators

    A new method is exposed by service.Server:

    • WithGroupAuth(groupAuthProvider func(*http.Request) (string, []string, error)), which has the same use as WithAuth (provide a custom source of authentication and authorization). The difference is that this method returns both the user (string) and their groups ([]string).
    • Does this PR introduce a breaking change? (What changes might users need to make in their application due to this PR?)

    No breaking change

  • update angular to version 12

    • What kind of change does this PR introduce? (Bug fix, feature, docs update, ...)

    • What is the current behavior? (You can also link to an open issue here)

    • What is the new behavior (if this is a feature change)?

    • Does this PR introduce a breaking change? (What changes might users need to make in their application due to this PR?)

    • Other information:

  • feat: improve audit logs middleware

    Signed-off-by: William Poussier [email protected]

    • What kind of change does this PR introduce? (Bug fix, feature, docs update, ...)

    Refactor of the error logs middleware.

    • What is the current behavior? (You can also link to an open issue here)

    Only errors with a few fields are logged.

    • What is the new behavior (if this is a feature change)?

    This logs more metadata, such as the action name, user, ...

    • Does this PR introduce a breaking change? (What changes might users need to make in their application due to this PR?)

    No.

  • chore(dep): migrating µTask build to Go 1.18

    • What kind of change does this PR introduce? (Bug fix, feature, docs update, ...) Chore

    • What is the current behavior? (You can also link to an open issue here) Build using Go 1.17

    • What is the new behavior (if this is a feature change)? Build using Go 1.18

    • Does this PR introduce a breaking change? (What changes might users need to make in their application due to this PR?) No

    • Other information:

  • feat: add compress package & the ability to compress steps in a resolution

    • What kind of change does this PR introduce? (Bug fix, feature, docs update, ...)

    Feature

    • What is the current behavior? (You can also link to an open issue here)

    The steps aren't compressed.

    • What is the new behavior (if this is a feature change)?

    Now, we can compress the steps in a resolution to reduce the row size in database. The compression algorithm used is saved in database to ensure backward compatibility in the case where the compression algorithm is changed.

    By default, no compression algorithm is used.

    • Does this PR introduce a breaking change? (What changes might users need to make in their application due to this PR?)

    No

    • Other information:

    To compress in gzip, update utask-cfg to add:

    {
        "steps_compression": "gzip"
    }
    
  • fix(subtask): do not auto-resume parent task of subtask if parent resolution is paused

    • What kind of change does this PR introduce? (Bug fix, feature, docs update, ...) Bug fix

    • What is the current behavior? (You can also link to an open issue here) Most of the time, a parent resolution in the Paused state means that a human operator has stopped the processing for patch management. We should not auto-resume the parent task, to prevent any issue.

    • Does this PR introduce a breaking change? (What changes might users need to make in their application due to this PR?) Not that much.

    • Other information:

  • feat(notify): use logrus to capture errors

    • What kind of change does this PR introduce? (Bug fix, feature, docs update, ...)

    Improvement: use logrus to capture errors from Send Notify to standardize the errors logs.

    • What is the current behavior? (You can also link to an open issue here)

    The error logs aren't standardized.

    • What is the new behavior (if this is a feature change)?

    All notify backends use the same function to capture an error, and the error logs are standardized. Some fields have also been added to simplify debugging.

    • Does this PR introduce a breaking change? (What changes might users need to make in their application due to this PR?)

    The format of error logs has changed to be standardized with the rest of the application. If an alerting system that parses the logs is in use, users will need to update their configuration/rules.

    • Other information:

    An example of notify_config to test the new behaviour:

    "notify_config":{
       "test":{
          "type":"webhook",
          "config":{
             "webhook_url":"http://localhost:7777"
          }
       }
    }
    
    # Before
    2022/10/05 21:31:43 Post "http://localhost:7777": dial tcp [::1]:7777: connect: connection refused
    
    # After
    ERRO[2022-10-05T21:26:22+02:00] Error while sending notification on webhook   error="Post \"http://localhost:7777\": dial tcp [::1]:7777: connect: connection refused" instance_id=8 notification_type=task_state_update notifier_name=test notify_backend=webhook task_id=8f971130-cadf-498d-ac3c-50c799e97d50
    
  • feat: Make a step loop while a condition is true

    • What kind of change does this PR introduce? (Bug fix, feature, docs update, ...)

    A new type of condition: loop. This forces a step to be evaluated again while the condition is true. There is also a new runnable state, IN_LOOP.

    Here is a trivial example where a step counts to a number:

    steps:
        count:
            description: counts
            conditions:
            - type: loop
              if:
              - value: '{{ .step.this.output }}'
                operator: LT
                expected: '{{ .input.max }}'
            action:
                type: echo
                configuration:
                    output: '{{ add (default .step.count.output 0) 1 }}'
    
    • What is the current behavior? (You can also link to an open issue here)

    Currently there is no easy way to write a loop with a condition.

    1. foreach is not suitable for cases where we don’t know how many items we need to iterate on.
    2. The closest way we can do this is to have a step setting its own state to TO_RETRY with a check condition. However, the backoff duration quickly adds up and the result is less than ideal.
    3. Another solution would be to write a plugin or a script, though we won’t benefit from important μTask features like retrying and blocking.
    • What is the new behavior (if this is a feature change)?

    A new condition type: loop.

    • Does this PR introduce a breaking change? (What changes might users need to make in their application due to this PR?)

    No

    • Other information:

    Nothing of importance, but while testing this feature I came across an interesting behaviour: if a step has a foreach property and a loop condition, the children will loop independently from each other. This might be used to factorise similar steps together.

  • feature: add resolution id as task value

    • What kind of change does this PR introduce? (Bug fix, feature, docs update, ...) Add resolution_id as one of the task values. It really helps to leverage resolution_id for some customized init plugins or actions.

    • What is the current behavior? (You can also link to an open issue here) resolution_id was not in task values.

    • Does this PR introduce a breaking change? (What changes might users need to make in their application due to this PR?) No

    • Other information:

  • chore(deps): bump terser and @angular-devkit/build-angular in /vscode/web

    Bumps terser and @angular-devkit/build-angular. These dependencies needed to be updated together. Updates terser from 5.11.0 to 5.14.2

    Changelog

    Sourced from terser's changelog.

    v5.14.2

    • Security fix for RegExps that should not be evaluated (regexp DDOS)
    • Source maps improvements (#1211)
    • Performance improvements in long property access evaluation (#1213)

    v5.14.1

    • keep_numbers option added to TypeScript defs (#1208)
    • Fixed parsing of nested template strings (#1204)

    v5.14.0

    • Switched to @​jridgewell/source-map for sourcemap generation (#1190, #1181)
    • Fixed source maps with non-terminated segments (#1106)
    • Enabled typescript types to be imported from the package (#1194)
    • Extra DOM props have been added (#1191)
    • Delete the AST while generating code, as a means to save RAM

    v5.13.1

    • Removed self-assignments (varname=varname) (closes #1081)
    • Separated inlining code (for inlining things into references, or removing IIFEs)
    • Allow multiple identifiers with the same name in var destructuring (eg var { a, a } = x) (#1176)

    v5.13.0

    • All calls to eval() were removed (#1171, #1184)
    • source-map was updated to 0.8.0-beta.0 (#1164)
    • NavigatorUAData was added to domprops to avoid property mangling (#1166)

    v5.12.1

    • Fixed an issue with function definitions inside blocks (#1155)
    • Fixed parens of new in some situations (closes #1159)

    v5.12.0

    • TERSER_DEBUG_DIR environment variable
    • @​copyright comments are now preserved with the comments="some" option (#1153)
    Commits

    Updates @angular-devkit/build-angular from 13.3.7 to 13.3.9

    Release notes

    Sourced from @​angular-devkit/build-angular's releases.

    v13.3.9

    13.3.9 (2022-07-20)

    @​angular-devkit/build-angular

    Commit Description
    fix - 0d62716ae update terser to address CVE-2022-25858

    Special Thanks

    Alan Agius and Charles Lyding

    v13.3.8

    13.3.8 (2022-06-15)

    @​angular/pwa

    Commit Description
    fix - c7f994f88 add peer dependency on Angular CLI

    Special Thanks

    Alan Agius

    Changelog

    Sourced from @​angular-devkit/build-angular's changelog.

    13.3.9 (2022-07-20)

    @​angular-devkit/build-angular

    Commit Type Description
    0d62716ae fix update terser to address CVE-2022-25858

    Special Thanks

    Alan Agius and Charles Lyding

    14.0.6 (2022-07-13)

    @​angular/cli

    Commit Type Description
    178550529 fix handle cases when completion is enabled and running in an older CLI workspace
    10f24498e fix remove deprecation warning of no prefixed schema options

    @​schematics/angular

    Commit Type Description
    dfa6d73c5 fix remove browserslist configuration

    @​angular-devkit/build-angular

    Commit Type Description
    4d848c4e6 fix generate different content hashes for scripts which are changed during the optimization phase

    @​angular-devkit/core

    Commit Type Description
    2500f34a4 fix provide actionable warning when a workspace project has missing root property

    Special Thanks

    Alan Agius and martinfrancois

    ... (truncated)

    Commits
    • d091bb0 release: cut the v13.3.9 release
    • 0d62716 fix(@​angular-devkit/build-angular): update terser to address CVE-2022-25858
    • 0bb875d build: mark external only bazel rules
    • 62f46c8 release: cut the v13.3.8 release
    • d27fc7e test: always install a compatible version of @angular/material-moment-adapter
    • c7f994f fix(@​angular/pwa): add peer dependency on Angular CLI
    • See full diff in compare view


  • chore(docker): add UI node_modules into the .dockerignore file

    • What kind of change does this PR introduce? (Bug fix, feature, docs update, ...) bug fix

    • What is the current behavior? (You can also link to an open issue here) if you have a local node_modules directory, it's copied into the docker image

    • What is the new behavior (if this is a feature change)? local node_modules folder is ignored and not copied into the image

    • Does this PR introduce a breaking change? (What changes might users need to make in their application due to this PR?) no breaking change

    • Other information:

  • chore(deps): bump json5 from 2.2.1 to 2.2.3 in /ui/dashboard

    Bumps json5 from 2.2.1 to 2.2.3.

    Release notes

    Sourced from json5's releases.

    v2.2.3

    v2.2.2

    • Fix: Properties with the name __proto__ are added to objects and arrays. (#199) This also fixes a prototype pollution vulnerability reported by Jonathan Gregson! (#295).
    Changelog

    Sourced from json5's changelog.

    v2.2.3 [code, diff]

    v2.2.2 [code, diff]

    • Fix: Properties with the name __proto__ are added to objects and arrays. (#199) This also fixes a prototype pollution vulnerability reported by Jonathan Gregson! (#295).
    Commits
    • c3a7524 2.2.3
    • 94fd06d docs: update CHANGELOG for v2.2.3
    • 3b8cebf docs(security): use GitHub security advisories
    • f0fd9e1 docs: publish a security policy
    • 6a91a05 docs(template): bug -> bug report
    • 14f8cb1 2.2.2
    • 10cc7ca docs: update CHANGELOG for v2.2.2
    • 7774c10 fix: add proto to objects and arrays
    • edde30a Readme: slight tweak to intro
    • 97286f8 Improve example in readme
    • Additional commits viewable in compare view


  • feat(task): add Kafka consumer to create tasks from Kafka topic

    • What kind of change does this PR introduce? (Bug fix, feature, docs update, ...) Feature

    • What is the new behavior (if this is a feature change)? Add the possibility to create tasks from a Kafka topic

    • Does this PR introduce a breaking change? (What changes might users need to make in their application due to this PR?) No

    • Other information:

  • feat(ssh): add timeout support for SSH plugin

    • What kind of change does this PR introduce? (Bug fix, feature, docs update, ...) Feature

    • What is the new behavior (if this is a feature change)? Add timeout to SSH plugin

    • Does this PR introduce a breaking change? (What changes might users need to make in their application due to this PR?) No

    • Other information: Closes #209

  • fix(garbage): subtasks are kept until the parent task is in a final state

    • What kind of change does this PR introduce? (Bug fix, feature, docs update, ...)

    Bug fix

    • What is the current behavior? (You can also link to an open issue here)

    The subtasks are deleted even if the parent task is not done.

    Side effect: if the parent task hasn't been executed before the deletion of the subtask, the parent task is left in an inconsistent state: it can't determine the subtask's state in order to continue the execution.

    • What is the new behavior (if this is a feature change)?

    The subtasks are kept until the parent task is in a final state.

    • Does this PR introduce a breaking change? (What changes might users need to make in their application due to this PR?)

    No

    • Other information:

    Config

    utask-cfg

    {
        "completed_task_expiration": "1m"
    }
    

    Scripts

    scripts/sleep.sh

    #!/bin/bash
    
    /bin/sleep 60
    

    Templates

    Click here to show templates

    name:             garbage-parent
    description:      Test garbage (parent)
    
    title_format:     Test garbage
    
    allowed_resolver_usernames:   []
    allow_all_resolver_usernames: true
    auto_runnable: true
    blocked:       false
    hidden:        false
    
    steps:
      createSubtask:
        description: "Create subtask"
        action:
          type: subtask
          configuration:
            template: garbage-subtask
      Waiting:
        dependencies:
          - createSubtask
        description: "Wait one minute"
        action:
          type: script
          configuration:
            file_path: sleep.sh
    
    name:             garbage-subtask
    description:      Test garbage (subtask)
    
    title_format:     Test garbage (subtask)
    
    allowed_resolver_usernames:   []
    allow_all_resolver_usernames: true
    auto_runnable: true
    blocked:       false
    hidden:        true
    
    steps:
      Hello:
        description: "Display a message"
        action:
          type: echo
          configuration:
            output:
              foo: 'bar'
    

  • feat(plugin): create Kafka plugin

    • What kind of change does this PR introduce? (Bug fix, feature, docs update, ...)

    Feature : add a Kafka plugin to produce messages (see https://github.com/ovh/utask/issues/3)

    • What is the current behavior? (You can also link to an open issue here)

    uTask can't directly publish a message to a Kafka topic.

    • What is the new behavior (if this is a feature change)?

    Now it can: a task can publish a message to a Kafka topic via this plugin.

    • Does this PR introduce a breaking change? (What changes might users need to make in their application due to this PR?)

    No.

    • Other information: Closes https://github.com/ovh/utask/issues/3
    Template used to test the plugin (click me)

    Required: a kafka broker that listens on localhost:9092. How to Install Apache Kafka Using Docker — The Easy Way

    name:             kafka
    description:      Send message into kafka topic
    long_description: Send message into kafka topic
    
    title_format:     Send Kafka message
    result_format:
      echo_message: 'ok'
    
    allowed_resolver_usernames:   []
    allow_all_resolver_usernames: true
    auto_runnable: true
    blocked:       false
    hidden:        false
    
    steps:
      sendKafka:
        description: Send message into kafka topic
        action:
          type: kafka
          configuration:
            brokers:
              - localhost:9092
            timeout: 20s
            message:
              topic: "utask"
              key: 'test'
              value: |
                  {
                    "message": "hello world"
                  }
    

    Result (click me)

    Consumer

    $ kaf consume utask
    Key:         test
    Partition:   0
    Offset:      11
    Timestamp:   2022-10-05 16:47:07.267 +0200 CEST
    {
      "message": "hello world"
    }
    

    kaf is a CLI for Apache Kafka: https://github.com/birdayz/kaf

    Step output

    {
      "name": "sendKafka",
      "description": "Send message into kafka topic",
      "idempotent": false,
      "action": {
        "type": "kafka",
        "configuration": {
          "brokers": [
            "localhost:9092"
          ],
          "message": {
            "key": "test",
            "topic": "utask",
            "value": "{\n  \"message\": \"hello world\"\n}\n"
          }
        },
        "output": null
      },
      "output": {
        "offset": 11,
        "partition": 0
      },
      "state": "DONE",
      "try_count": 1,
      "max_retries": 10000,
      "last_run": "2022-10-05T16:47:07.279139+02:00",
      "foreach_strategy": "",
      "resources": null,
      "tags": null
    }
    
