Furiko

Cloud-native, enterprise-level cron job platform for Kubernetes

Furiko is a cloud-native, enterprise-level cron and adhoc job platform for Kubernetes.

The main website for documentation and updates is hosted at https://furiko.io.

Introduction

Furiko is a Kubernetes-native operator for managing, scheduling and executing scheduled and adhoc jobs and workflows. It aims to be a general-purpose job platform that supports a diverse range of use cases, including cron jobs, batch processing, workflow automation, etc.

Furiko is built from the ground up to support enterprise-level use cases and to run self-hosted in a private Kubernetes cluster, supporting users across a large organization.

Some use cases that are a perfect fit for Furiko include (see the example manifest after this list):

  • Cron-based scheduling of massive numbers of periodic jobs per day across a large organization
  • Scheduling some jobs to run once at a later time, with a set of specific inputs
  • Starting multiple jobs to execute one after another, once the previous job has finished
  • Event-driven, offline/asynchronous job processing via webhooks
  • Building a platform to automate business operations via form-based inputs (with Furiko as the job engine)
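
As an illustration, a minimal JobConfig that runs a periodic Pod might look like the following (a sketch assembled from the JobConfig examples that appear later on this page; the schedule and container are illustrative):

  apiVersion: execution.furiko.io/v1alpha1
  kind: JobConfig
  metadata:
    name: send-weekly-report
  spec:
    concurrency:
      policy: Forbid
    schedule:
      cron:
        # Illustrative schedule: every Monday at 17:00.
        expression: "0 17 * * 1"
        timezone: Asia/Singapore
    template:
      spec:
        taskTemplate:
          pod:
            spec:
              containers:
                - name: job-container
                  image: bash
                  args:
                    - bash
                    - -c
                    - echo Hello from Furiko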

Contributing

See CONTRIBUTING.md.

License

NOTE: Although started within Shopee, Furiko is not an official Shopee project or product.

Furiko is licensed under the Apache License, Version 2.0.

Logo is designed by Duan Weiwei, and is distributed under CC-BY 4.0.

Comments
  • chore: Support arm/v7 and arm64 and migrate to GitHub Container Registry

    This PR adds support for linux/arm/v7 and linux/arm64 architectures, releasing container images for multiple architectures using docker manifest. Additionally, we are migrating from Docker Hub to GitHub Container Registry! 🎉


    Changes

    1. Follows the tutorial from GoReleaser on how to build multi-arch Docker releases: https://carlosbecker.com/posts/multi-platform-docker-images-goreleaser-gh-actions/
    2. Revamp nightly release workflows and avoid using GoReleaser's snapshot feature.
    3. Rewrite some ./hack scripts to use getopts.
    4. Migrate from Docker Hub to GitHub Container Registry. Images can be pulled from here instead: https://github.com/orgs/furiko-io/packages
  • fix(cli): Support AllowCustom and fix validation for select

    Fixes several CLI issues for furiko run.

    1. Support AllowCustom for Select options by using survey.Input in place of survey.Select

      (asciicast demo)

      • Support for Multi options can't be added yet, because there isn't a good equivalent input mechanism
      • The UX is also not perfect, perhaps this can be better fulfilled by https://github.com/go-survey/survey/issues/339
    2. Fixes a validation bug where Select options could not be selected when required is true.

    3. Adds unit tests.

  • feat(concurrency): Support variable MaxConcurrency

    Closes #16.

    Implements MaxConcurrency in ConcurrencySpec, which allows specifying a custom maximum concurrency value. This applies to both Forbid and Enqueue.
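
    For illustration, a JobConfig that queues up to 3 concurrent Jobs might be configured as follows (a minimal sketch; the maxConcurrency field name and placement are assumed from the ConcurrencySpec naming and issue #16):

    spec:
      concurrency:
        policy: Enqueue
        # Field name assumed from MaxConcurrency in ConcurrencySpec.
        maxConcurrency: 3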

  • feat(execution): Add Indexes to ParallelStatus, update GetJobCommand

    • Update ParallelStatus to contain list of Indexes.
    • Update furiko get job command:
      • Support --output detail and show all tasks and parallel task groups.
      • Show parallel task summary based on index counts.
      • Show job status task summary based on task counts.
    • Other changes:
      • Fix incorrect enum value for TaskTerminating.
      • Deprecate TaskPendingTimeout result, TaskKilled will be used instead, and PendingTimeout will be stored in the Reason.
      • Add more GetPhase unit tests.
  • feat(cli): Add enable/disable subcommands

    Adds two new subcommands:

    Usage:
      furiko [command]
    
    Available Commands:
      ...
      disable     Disable automatic scheduling for a JobConfig.
      enable      Enable automatic scheduling for a JobConfig.
      ...
    

    Detailed subcommand help:

    Disables automatic scheduling for a JobConfig.
    
    If the specified JobConfig does not have a schedule, then an error will be thrown.
    If the specified JobConfig is already disabled, then this is a no-op.
    
    Usage:
      furiko disable [flags]
    
    Examples:
      # Disable scheduling for the JobConfig.
      furiko disable send-weekly-report
    
    Enables automatic scheduling for a JobConfig.
    
    If the specified JobConfig does not have a schedule, then an error will be thrown.
    If the specified JobConfig is already enabled, then this is a no-op.
    
    Usage:
      furiko enable [flags]
    
    Examples:
      # Enable scheduling for the JobConfig.
      furiko enable send-weekly-report
    
  • fix(webhook): Add default values for parallelism.completionStrategy

    Bug Reproduction

    Trying to apply the following YAML fails with a validation error:

    apiVersion: execution.furiko.io/v1alpha1
    kind: JobConfig
    metadata:
      name: jobconfig-parallel-sleep
    spec:
      concurrency:
        policy: Forbid
      schedule:
        cron:
          expression: "H/15 * * * *"
          timezone: Asia/Singapore
        disabled: false
      template:
        spec:
          parallelism:
            withKeys:
              - "30"
              - "60"
              - "120"
          taskTemplate:
            pod:
              spec:
                containers:
                  - name: job-container
                    args:
                      - bash
                      - -c
                      - "echo Sleeping for ${task.index_key}; sleep ${task.index_key}"
                    image: bash
    

    The following error is displayed:

    $ k apply -f jobconfig-parallel-sleep.yaml
    The JobConfig "jobconfig-parallel-sleep" is invalid: spec.template.spec.parallelism.completionStrategy: Required value
    

    Bugfix

    Properly, we should also add default values for the JobTemplate in JobConfigSpec. The default completionStrategy should be set to AllSuccessful.
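
    With this defaulting in place, the parallelism block in the YAML above would be persisted roughly as follows (a sketch of the intended defaulting behavior):

    parallelism:
      withKeys:
        - "30"
        - "60"
        - "120"
      # Defaulted by the mutating webhook when not specified.
      completionStrategy: AllSuccessful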

  • feat(execution): Avoid using activeDeadlineSeconds to kill tasks

    Closes #64. Also helps #63.

    Avoids using activeDeadlineSeconds to kill tasks; tasks will instead be killed via API deletion, which supports graceful termination.

  • feat(execution): Implement parallelism in Job

    Closes #71.

    This implements all necessary features to support parallel tasks in a single Job. The following changes are introduced:

    1. Added API changes to introduce ParallelismSpec according to the proposal in #71.
    2. Revamped API for JobStatus fields, and reduced the possible set of phase, results, states, etc. to reduce duplication.
    3. Changed the job and task naming convention to delimit name components with hyphens instead of periods (e.g. jobconfig-parallel-1653824280 and jobconfig-parallel-sleep-1653822660-ge3tgm-0)
    4. Added mutation and validation handlers for ParallelismSpec.
    5. Compute uncreated tasks and create them in reconciler.

    Remaining TODO items:

    • [x] Immediately terminate all remaining tasks when a single task fails (all retries exceeded) when using AllSuccessful
    • [x] Create reconciler integration tests
  • feat(api): Make PodTemplateSpec schemaless

    Addresses an issue mentioned in #63:

    Another solution without needing to use a string is to use type: object in the CRD definition without properties, which prevents any schema validation. We could consider doing this for the PodTemplateSpec field currently as well.

    By avoiding an embed of the core/v1 API types in the CRD's OpenAPI schema, we avoid a few problems:

    1. Unknown fields error/pruning of unknown fields when Furiko is embedding a higher API version than the API version of the apiserver
    2. Incorrect/incomplete schema from kubectl explain
    3. (unlikely) Support backwards-incompatible changes from one API version to the next

    Also fixes an issue where metav1.ObjectMeta fields (when not at the root of a CRD) cannot store non-string values, because, for some reason, the apiserver treats the field as map[string]string rather than an Object.
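
    For reference, a CRD field is typically made schemaless in its OpenAPI v3 schema roughly like this (a generic sketch, not the exact field path used in Furiko's CRDs):

    # Sketch: schema for a schemaless PodTemplateSpec-like field.
    pod:
      type: object
      # Accept and preserve arbitrary nested fields instead of validating/pruning them.
      x-kubernetes-preserve-unknown-fields: true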

  • Proposal: Support task-level parallelism

    Motivation

    Users may wish to shard their periodic jobs into multiple Pods. For example, every day at 12am we need to process a large batch of work, the volume of which may grow significantly over time, and the work cannot be done before 12am (i.e. on the previous day). As things stand, the only option is vertical scaling of a single Pod, which is obviously impractical beyond a certain point. As such, we want to evaluate how to support horizontal scaling of Job pods (i.e. task-level parallelism).

    The Job object in K8s currently supports parallel execution: https://kubernetes.io/docs/concepts/workloads/controllers/job/#parallel-jobs. However, it is my personal opinion that the K8s JobController API for controlling parallelism of multiple Pods is not very clear or well-designed. We will outline the use cases, and attempt to propose an API design that supports them and potentially improves on the existing one in K8s.

    It is also important to avoid over-designing this feature. A good principle to keep in mind is that Furiko's main focus is on automatic timed/periodic jobs, not so much user-invoked or complex task workflows; we are better off delegating the latter to more complex workflow engines (e.g. Argo Workflows).

    Use Cases

    We will outline some use cases (including/highlighting those we have received internally):

    1. Support running a fixed number of Pods per Job at the same time: This would basically be equivalent to "scaling up" a Job to more replicas, but each Pod has to be assigned work independently of other Pods. This usually involves an external message queue.
      • The equivalent use case in K8s is "Parallel Jobs with a work queue".
      • The advantage is that there is a well-defined upper bound in the amount of resources required to run the Job, but the disadvantage is that any unexpected increase in work to be done could result in longer processing times.
    2. Support running a variable number of Pods per Job at the same time: This is an extension of (1), except that the parallelism factor is variable.
      • One idea could be to use (1) but allow changing the parallelism at both adhoc invocation time (prior to creation), and during runtime (after it is started) which could be controlled by an autoscaler of sorts. See (3) for more details on the latter.
      • If we allow the parallelism factor to depend on an external data source (e.g. Kafka topic lag), then it becomes dangerously close to a MapReduce design. I think it may be better to require the parallelism factor to be defined in the workload spec itself.
    3. Horizontal scaling of Pods while a Job is running: While a Job is running, we may want to update the number of Pod replicas if we realize that it may not complete in time without stopping its progress. (just an idea, no real use case for now)
      • This can be utilized by jobs which read from a queue with a central coordinator, but not so much when the number of shards is fixed. One notable exception is Kafka, where consumers can rebalance when new consumers are added to the consumer group, and scale up to a maximum of the number of partitions.
      • Implementing this should be straightforward, but we have to be careful about the scale-down case, since it may conflict with completion policies (see below). A simple way to get around this is to prevent scale-down, and only allow scale-up.
    4. Stateless/Stateful parallel worker patterns: When a Job has multiple parallel Pods, it could be possible that some Pods can pick up work from a queue such that other Pods don't need to do so, so it would be sufficient to terminate once any Pod is finished. On the other hand, if every Pod works on its own subset of work and nothing else (e.g. using consistent hashing), then we need to wait for ALL Pods to finish before terminating. As such, we need to support both use cases.

    I personally don't think there is a need for "fixed completion count" jobs like in K8s; at least I have never encountered a use case which depends on this. Perhaps the closest equivalent of "fixed completion count" is to start N Jobs at the same time with a fixed concurrency factor, which is slightly different from the topic we are currently discussing.

    Requirements

    1. The parallelism feature must not conflict with the retries feature of Jobs. In other words, the distinction between retries and parallel tasks should be clear. In the batch/v1 JobController, it depends on a Pod's restartPolicy to retry containers, but we explicitly support pod-level retries.
    2. Control the completion and success policy: The user may want to be explicit about what and when constitutes a successful or a completed Job. In the case of a single Pod, using exit code 0 (i.e. the Pod's phase should be Success) is sufficient to indicate a successful Job, but it becomes less clear once we have parallel Pods.
    3. Control early termination policies: When a single Pod fails, it could be possible that we want to immediately terminate early from the Job in order to avoid unnecessary work, or to wait for all Pods to gracefully finish their existing work.

    Proposed Implementation 1

    We are not going forward with this proposal.

    We will implement a new CRD ShardedSet (NOTE: the name is currently TBC). This CRD is most similar to a ReplicaSet, except that it controls running to completion (which ironically makes it similar to batch/v1 Job itself).

    The implementation of a ShardedSet will follow closely to the Indexed Job pattern in batch/v1 (https://kubernetes.io/docs/tasks/job/indexed-parallel-processing-static/), but defines completion policy in a much more explicit manner than is currently supported by the batch/v1 Job API, and avoids the confusion with having to define completions. See https://github.com/kubernetes/kubernetes/issues/28486 for some related discussion about how completions are handled.

    CRD Design

    Example of a proposed ShardedSet custom resource object, with all possible API fields (including future expansion):

    apiVersion: execution.furiko.io/v1alpha1
    kind: ShardedSet
    spec:
      # Defines that exactly 5 tasks are run in parallel.
      # Each task receives a separate task index, and it is guaranteed that
      # no two concurrently running tasks will receive the same task index.
      parallelism: 5
    
      # Defines retry policy.
      retries:
        # Maximum attempts per shard, beyond which we stop creating new tasks for the shard. Defaults to 1.
        maxAttemptsPerShard: 3
    
        # Cannot exceed maxAttemptsPerShard * parallelism (also the default).
        # If a shard fails but this is exceeded, then it is considered a shard failure.
        maxAttemptsTotal: 15
    
      # Defines completion policy.
      completion:
        # Defines when a ShardedSet is completed. Options:
        #  - OnAllShardsSuccess (default): Stop once all shards are successful.
        #  - OnAnyShardSuccess: Stop once any shard is successful.
        #  - OnAnyShardFailure: Stop once any shard is failed.
        condition: OnAllShardsSuccess
    
        # Defines what to do on completion, depending on whether the ShardedSet is successful or failed, the defaults are shown below.
        # Note that this has no effect for OnAllShardsSuccess, since by definition all shards would have completed prior to taking this action.
        onSuccess: WaitForRemaining
        onFailure: TerminateRemaining
    
      # The TaskTemplateSpec itself, we could further compose other task executors too!
      template: 
        pod:
          spec:
            containers: 
              - name: container
                image: alpine
                args: ["echo", "Hello world"]
                env:
                  # The task can determine its shard index using this env var.
                  - name: SHARD_INDEX
                    value: "${parallel.shard_index}"
    

    Complete breakdown for completion.condition success cases:

    • OnAllShardsSuccess
      • When some shard succeeds, do nothing.
      • When all shards succeed, succeed the ShardedSet.
    • OnAnyShardSuccess
      • When some shard succeeds, immediately succeed the ShardedSet.
    • OnAnyShardFailure
      • When some shard succeeds, do nothing.
      • When all shards succeed, succeed the ShardedSet.

    Complete breakdown for completion.condition failure cases:

    • OnAllShardsSuccess
      • If any shard cannot retry further (exceed maxAttempts), immediately fail the ShardedSet.
    • OnAnyShardSuccess
      • If any shard cannot retry further (exceed maxAttempts), do nothing.
      • If all shards failed and cannot retry further (exceed maxAttempts), fail the ShardedSet.
    • OnAnyShardFailure
      • If any shard cannot retry further (exceed maxAttempts), immediately fail the ShardedSet.

    Note that:

    • In the success case, OnAllShardsSuccess == OnAnyShardFailure
    • In the failure case, OnAllShardsSuccess == OnAnyShardFailure

    Since OnAllShardsSuccess and OnAnyShardFailure behave identically in both the success and failure cases, only two distinct behaviors remain, and we can simplify the options to just AllShardsSucceeded and AnyShardSucceeded. (Help me verify this claim??)

    Inside a JobConfig, the user will define it like this:

    apiVersion: execution.furiko.io/v1alpha1
    kind: JobConfig
    spec:
      template:
        spec:
          # Retry the ShardedSet up to 3 times
          maxAttempts: 3
          task:
            template:
              # This is the start of the ShardedSetTemplateSpec
              parallel:
                metadata:
                  labels: ...
                spec:
                  parallelism: 5
                  template:
                    pod:
                      spec:
                        containers: [ ... ]
    

    Pros and Cons

    • Pros:
      • Very easy to reason about. The composition of two separate APIs is clear from both a developer and a user perspective, and future extensions to the ShardedSet controller avoid conflicting with the core JobController.
      • Most users will not need to think about the additional API fields that are introduced for parallelism if they don't need it. In my opinion, this is the biggest issue with the batch/v1 Job.
    • Cons:
      • By composing a secondary task executor to achieve task-level parallelism, we may be prematurely confining the design to only support a subset of use cases. For example, by separating the retry and parallel sets into distinct layers we may constrain the possible expansion options in the future.
      • Additional implementation cost, though part of it is offset because we avoid the work (needed in Option 2) of ensuring that the existing JobController behavior is not broken.
      • Potentially duplicate logic in both ShardedSet and Job controllers (e.g. task adoption, task executor).

    Proposed Implementation 2

    Another way is to avoid the use of a CRD, but implement it directly in the JobController and update the Job API.

    We will add the following fields to the spec:

    spec:
      taskTemplate:
        parallelism:
          withCount: 3
          completionStrategy: AllSuccessful
    

    Some possible parallelism types:

    • withCount: Specify an absolute number; index numbers from 0 to N-1 will be generated and made available in ${task.index_num}
    • withKeys: Specify a list of string keys, it will be made available in ${task.index_key}
    • withMatrix: Specify a map of keys to a list of string values, each key will be available in ${task.index_matrix.<matrix_key>}. This is to support common parallel patterns (e.g. CI on multiple platform and version combinations)
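
    For example, a withMatrix parallelism spec might look like the following (a sketch; the matrix keys and values are illustrative):

    spec:
      taskTemplate:
        parallelism:
          withMatrix:
            # Each key/value combination becomes one parallel task index, available
            # as ${task.index_matrix.platform} and ${task.index_matrix.version}.
            platform: ["linux", "darwin"]
            version: ["1.19", "1.20"]
          completionStrategy: AllSuccessful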

    Some considerations:

    1. Retries will take place at the parallel index level. This means that with a withCount of 3, each index (0, 1, 2) independently has a maximum of 3 retries.
    2. The completionStrategy is similar to Proposal (1).

    The main reason we are not going with Proposal (1) is the complexity introduced by nested retries; it is actually clearer to inline the implementation into the same CRD/controller.

    Alternatives

    There are some alternatives to the above design to achieve the same requirements.

    1. Creating one JobConfig for each desired shard. The obvious downside is that you have duplicate objects and higher cost of maintenance, configuration drift, etc.
    2. Support starting multiple Jobs at each schedule. This is a very simple solution, but there are some drawbacks:
      • Each Job started at the same time are basically independent of each other, and we cannot determine the status or control the workload as a single atomic unit.
      • Multiple Jobs started concurrently that spill over their normal run duration may eat into the maxConcurrency of the JobConfig (see #16), resulting in fewer total Jobs being run than expected.

    TODO List

    • #81
    • #83
  • chore: Move console to separate package

    The github.com/hinshun/vt10x package does not support Windows:

    ../../../../pkg/mod/github.com/hinshun/[email protected]/vt_other.go:26:8: t.cur.attr undefined (type Cursor has no field or method attr, but does have Attr)
    ../../../../pkg/mod/github.com/hinshun/[email protected]/vt_other.go:27:8: t.cur.attr undefined (type Cursor has no field or method attr, but does have Attr)
    

    Since we only use it for unit tests, we should move it into its own package so that it does not need to be imported when building the CLI.

  • feat(cli): Implement completion for flags and arguments

    Supports auto-completion of arguments and flags for all commands:

    (GIF: completion demo)

    • -n/--namespace: Autocompletes based on list of namespaces.
    • -o/--output: Autocompletes list of output formats.
    • Autocompletes Job/JobConfig names.

    Side note: The above GIF was generated with https://github.com/charmbracelet/vhs

  • feat(taskexecutor): Implement Argo Workflows task executor

    Implements a new task executor: Argo Workflows.

    Currently, the task executor is very simple:

    1. One task = one Workflow. This means that retries (at the Furiko level) will create a new Workflow, and parallelism (at the Furiko level) will result in multiple concurrent Workflows.
    2. Most task fields have an equivalent meaning in Argo Workflows.
      • RunningTimestamp: workflow.status.startedAt (set once the Workflow is non-pending)
      • FinishTimestamp: workflow.status.finishedAt
    3. Substitution only occurs in .spec.arguments.parameters.*.value. This avoids conflicting with Argo's own substitution mechanism.
    4. If the Argo Workflow CRD is not installed, Furiko will not watch or support the argoWorkflow task executor. If it is later installed or uninstalled, a restart of execution-controller is currently required for the change to take effect, because we need to reinitialize the informers.
      • Users can still create JobConfigs with the argoWorkflow task executor, but a warning will be returned by the admission webhook.
      • When trying to run a new Job with the argoWorkflow task executor, AdmissionRefused will be thrown if the task cannot be created because the CRD is not installed.
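
    To illustrate, a JobConfig task template using this executor might look roughly like the following (a sketch; the argoWorkflow field name and placement are assumed from the naming above, and the Workflow spec is a minimal example):

    taskTemplate:
      # Field name assumed from the "argoWorkflow task executor" naming.
      argoWorkflow:
        spec:
          entrypoint: main
          arguments:
            parameters:
              # Substitution happens only in parameter values (see point 3 above).
              - name: task-index
                value: "${task.index_num}"
          templates:
            - name: main
              container:
                image: alpine
                command: [echo, "task {{workflow.parameters.task-index}}"]
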
  • Migrate to k8s.io/utils/clock

    Ref https://github.com/kubernetes/kubernetes/issues/94738

    Internally we upgraded to v0.24.x of client-go (previously on v0.20.9), and found that some usages of k8s.io/apimachinery/pkg/util/clock are not 100% compatible with the now-preferred k8s.io/utils/clock.

    It seems that v0.23.0 is still okay, but we should migrate anyway.

  • Bug: Changing defaultPendingTimeoutSeconds should not affect already created jobs

    When updating defaultPendingTimeoutSeconds, any already ongoing jobs will be affected by the change. This is probably undesirable: if the cluster administrator shortens the timeout, previously created jobs may be killed. We should uphold the principle that old resources should NOT be affected by newly updated defaults.

    As such, we should add pendingTimeoutSeconds via the JobMutatingWebhook if it is not specified, so that the controller does not wrongly use the new global configuration.

  • Feature: Support NotDuring constraints for JobConfig

    Users may want to have a periodically scheduled job that runs regularly on an interval, except during some explicitly defined time ranges. For example, we may want to specify that during deploy freezes or some other real-world event, a cron job should not be run.

    Users should be able to specify a list of time ranges (start and end time, inclusive), during which schedules are not allowed.

    API Design

    apiVersion: execution.furiko.io/v1alpha1
    kind: JobConfig
    metadata:
      name: my-job-config
      namespace: my-namespace
    spec:
      schedule:
        # Schedule every hour from 10AM to 6PM.
        cron:
          expression: 0 10-18 * * *
          timezone: Asia/Singapore
        constraints:
          # Example of multiple notDuring freeze periods, inclusive.
          notDuring:
            - start: 2022-04-01T00:00:00+08:00
              end: 2022-04-02T11:59:59+08:00
            - start: 2022-04-28T00:00:00+08:00
              end: 2022-04-28T15:59:59+08:00
    

    Using the above example:

    • On 1st April, no schedules will be created at all.
    • On 2nd April, the first schedule that day will be at 12:00:00.
    • On 28th April, the first schedule that day will be at 16:00:00.

    Possible Extensions

    • Sharing constraints across multiple JobConfigs
    • Regular constraints instead of fixed time range: Using a cron expression to define exclusions to the main cron schedule, similar to GitLab
  • Enhancement: CronController can avoid back-scheduling for Forbid

    Since we have updated to use the JobQueueController, the process of admitting a Job to be started is now asynchronous. The CronController will back-schedule multiple Jobs even though it should know at this point to skip back-scheduling for ConcurrencyPolicyForbid, because the JobQueueController will just reject them as AdmissionError:

    NAME                          AGE   PHASE            CREATED TASKS   RUN TIME   FINISH TIME
    jobconfig-sample.1650055799   7s    Succeeded        1                          3s
    jobconfig-sample.1650055814   7s    AdmissionError   0                          7s
    jobconfig-sample.1650055829   7s    AdmissionError   0                          7s
    jobconfig-sample.1650055844   7s    AdmissionError   0                          7s
    jobconfig-sample.1650055859   7s    AdmissionError   0                          7s
    

    In case users set a high back-scheduling limit, this may result in huge bursts of unnecessary Job creation.
