KEDA is a Kubernetes-based Event Driven Autoscaling component. It provides event-driven scale for any container running in Kubernetes.

Kubernetes-based Event Driven Autoscaling

KEDA allows for fine-grained autoscaling (including to/from zero) for event driven Kubernetes workloads. KEDA serves as a Kubernetes Metrics Server and allows users to define autoscaling rules using a dedicated Kubernetes custom resource definition.

KEDA can run on both the cloud and the edge, integrates natively with Kubernetes components such as the Horizontal Pod Autoscaler, and has no external dependencies.
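
For illustration, here is a minimal ScaledObject sketch showing how an autoscaling rule is declared; KEDA then manages the underlying HPA for the target workload. The resource names and the cron schedule are placeholders, not taken from this repository:

    apiVersion: keda.sh/v1alpha1
    kind: ScaledObject
    metadata:
      name: my-app-scaler            # illustrative name
    spec:
      scaleTargetRef:
        name: my-app                 # Deployment (or other scalable workload) to drive
      minReplicaCount: 0             # KEDA can scale the workload down to zero
      maxReplicaCount: 10
      triggers:
      - type: cron                   # any supported event source works here
        metadata:
          timezone: Etc/UTC
          start: 0 8 * * *
          end: 0 18 * * *
          desiredReplicas: "5"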

We are a Cloud Native Computing Foundation (CNCF) sandbox project.

Getting started

You can find several samples for various event sources here.

Deploying KEDA

There are many ways to deploy KEDA including Helm, Operator Hub and YAML files.

Documentation

Interested in learning more? Head over to keda.sh.

Governance & Policies

You can learn about the governance of KEDA here.

Community

If you are interested in contributing or participating in the direction of KEDA, you can join our community meetings.

Just want to learn or chat about KEDA? Feel free to join the conversation in #KEDA on the Kubernetes Slack!

Become a listed KEDA user!

We are always happy to list users who run KEDA in production; learn more about it here.

Releases

You can find the latest releases here.

Contributing

You can find the contributing guide here.

Building & deploying locally

Learn how to build & deploy KEDA locally here.

Comments
  • Provide support for explicitly pausing autoscaling of workloads.

    Provide support for explicitly stating workloads to scale to zero without the option of scaling up.

    This can be useful for manually scaling instances to zero because:

    • You want to do maintenance
    • Your cluster is suffering from resource starvation and you want to remove non-mission-critical workloads

    Why not delete the deployment? Glad you've asked! Because we don't want to touch the applications themselves but merely remove the instances they are running, from an operational perspective. Once everything is good to go, we can enable them to scale again.

    Suggestion

    Introduce a new CRD, for example ManualScaleToZero, which targets a given deployment/workload and provides a description of why it is scaled to 0 for now.

    If scaled objects/jobs are configured, they are ignored in favor of the new CRD.
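
    A purely hypothetical sketch of what such a CRD could look like (the kind, fields and names below are invented for illustration and do not exist in KEDA):

    apiVersion: keda.sh/v1alpha1        # hypothetical
    kind: ManualScaleToZero             # hypothetical kind proposed above
    metadata:
      name: order-processor-maintenance
    spec:
      scaleTargetRef:
        name: order-processor           # workload to force to zero
      reason: "Planned maintenance window; re-enable scaling after the upgrade"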

  • feat: Azure AD Workload Identity support for Azure Scalers and Key Vault

    Signed-off-by: Vighnesh Shenoy [email protected]

    Support for Workload Identity as a pod identity provider.
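
    For context, a minimal sketch of how this would be consumed (assuming the provider is exposed as azure-workload; the resource name is illustrative):

    apiVersion: keda.sh/v1alpha1
    kind: TriggerAuthentication
    metadata:
      name: azure-workload-identity-auth   # illustrative name
    spec:
      podIdentity:
        provider: azure-workload           # assumed provider value for AAD Workload Identity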

    Related PRs: Helm changes - https://github.com/kedacore/charts/pull/263, https://github.com/kedacore/charts/pull/264; doc changes - https://github.com/kedacore/keda-docs/pull/752

    Checklist

    • [x] Commits are signed with Developer Certificate of Origin (DCO - learn more)
    • [x] Tests have been added
    • [x] Changelog has been updated and is aligned with our changelog requirements
    • [x] A PR is opened to update our Helm chart (repo) (if applicable, ie. when deployment manifests are modified)
    • [x] A PR is opened to update the documentation on (repo) (if applicable)

    Relates to #2487

  • Improve e2e test reliability

    Signed-off-by: jorturfer [email protected]

    This PR increases some timeouts across the different e2e tests and makes these relevant changes:

    • Azure Pipelines: the e2e tests now check scaling from/to 0 instead of between 1 and 3, because otherwise we get random failures if the pods removed during scale-in are still executing an AzDO job.
    • New Relic: the e2e test has changed a bit to maintain the load until scale-out is done.
    • Selenium: the e2e test has changed; nodes now start from 0 and the test checks scaling back to 0 instead of the job status, because sometimes the job started before the hub was ready and produced more than one result (making the test fail).
    • Global: the timeout for each scaler's e2e tests has been increased from 10m to 30m.
    • Global: all the resources have been removed except for the cpu e2e tests.
    • Global: failing e2e tests are retried one more time.

    Checklist

    • [x] Commits are signed with Developer Certificate of Origin (DCO - learn more)
    • [x] Tests have been added
    • [x] A PR is opened to update our Helm chart (repo) (if applicable, ie. when deployment manifests are modified)
    • [x] A PR is opened to update the documentation on (repo) (if applicable)
    • [x] Changelog has been updated
  • Allow HPA minReplicas other than 1 while still scaling to 0

    Proposal

    We currently have a scenario where we need to scale our deployments to 0 or otherwise have at least, e.g., 5 replicas available. We were not able to achieve this by overriding the HPA scale behavior. In the code, the HPA's minReplicas is always set to 1 if we want to scale to 0.

    Therefore, we propose making the HPA minReplicas explicitly configurable.
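
    A sketch of what the desired behaviour could look like on a ScaledObject (the idle/min field combination shown here is illustrative of the proposal, not an API confirmed by this issue):

    apiVersion: keda.sh/v1alpha1
    kind: ScaledObject
    metadata:
      name: streams-app-scaler       # illustrative name
    spec:
      scaleTargetRef:
        name: streams-app
      idleReplicaCount: 0            # scale to zero while there is no lag
      minReplicaCount: 5             # once active, never run fewer than 5 replicas
      maxReplicaCount: 20
      triggers:
      - type: kafka
        metadata:
          bootstrapServers: kafka.svc:9092
          consumerGroup: my-group
          topic: test-topic
          lagThreshold: "5"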

    Use-Case

    We are running Kafka Streams apps on Kubernetes and automatically scale them using KEDA. If there is no message lag, we can safely scale to 0. However, when processing messages, we use Kafka Streams state stores, which are loaded into the memory of our pods. Because the resources of a single pod are insufficient, we replicate our deployment and thus distribute the state and require fewer resources per pod.

    Anything else?

    No response

  • ARM Support

    I am trying to install KEDA on K3s running on an RPi 4, and it seems the latest version does not support ARM. The kedacore/keda:arm image seems to be dead and is not available from Docker Hub. I also tried to compile it manually, but I ran into many issues compiling the Operator SDK.

    ARM support is necessary nowadays since most edge devices are ARM-based. It would be great if an ARM image could be made available on Docker Hub.

    Open items

    • [x] Automatically build container image on ARM for PRs
    • [x] Automatically run unit tests on ARM
    • [x] https://github.com/kedacore/keda/issues/2262
    • [x] https://github.com/kedacore/keda/issues/2263
  • KEDA scaler not working on AKS with trigger authentication using pod identity

    Report

    The KEDA scaler does not scale a ScaledObject defined with a trigger that uses pod identity for authentication against a Service Bus queue. I'm following this KEDA Service Bus triggered scaling project.
    The scaling works fine with the connection string, but when I try to scale using pod identity for the KEDA scaler, the KEDA operator fails to get the Azure identity bound to it, with the following operator error log:

    github.com/kedacore/keda/v2/pkg/scaling.(*scaleHandler).isScaledObjectActive
            /workspace/pkg/scaling/scale_handler.go:228
    github.com/kedacore/keda/v2/pkg/scaling.(*scaleHandler).checkScalers
            /workspace/pkg/scaling/scale_handler.go:211
    github.com/kedacore/keda/v2/pkg/scaling.(*scaleHandler).startScaleLoop
            /workspace/pkg/scaling/scale_handler.go:145
    2021-10-10T17:35:53.916Z        ERROR   azure_servicebus_scaler error   {"error": "failed to refresh token, error: adal: Refresh request failed. Status Code = '400'. Response body: {\"error\":\"invalid_request\",\"error_description\":\"Identity not found\"}\n"}
    
    

    My scaler objects are defined as below:

    apiVersion: keda.sh/v1alpha1
    kind: TriggerAuthentication
    metadata:
      name: trigger-auth-service-bus-orders
    spec:
      podIdentity:
        provider: azure
    ---
    apiVersion: keda.sh/v1alpha1 
    kind: ScaledObject
    metadata:
      name: order-scaler
    spec:
      scaleTargetRef:
        name: order-processor
      # minReplicaCount: 0   # change to define the minimum number of replicas you want
      maxReplicaCount: 10
      triggers:
      - type: azure-servicebus
        metadata:
          namespace: demodemobus
          queueName: orders
          messageCount: '5'
        authenticationRef:
          name: trigger-auth-service-bus-orders
    

    I'm deploying the Azure identity to the keda namespace where my KEDA deployment resides, and I install KEDA with the following Helm command to set the pod identity binding:

    helm install keda kedacore/keda --set podIdentity.activeDirectory.identity=app-autoscaler --namespace keda
    

    Expected Behavior

    The KEDA scaler should have worked fine with the assigned pod identity and access token to perform scaling.

    Actual Behavior

    The KEDA operator is not able to find the assigned Azure identity, and scaling fails.

    Steps to Reproduce the Problem

    1. Create the Azure identity and bindings for KEDA
    2. Install KEDA with the aadpodidentitybinding
    3. Create the scaledobject and triggerauthentication using KEDA pod identity
    4. The scaler fails to authenticate and scale

    Logs from KEDA operator

    github.com/kedacore/keda/v2/pkg/scaling.(*scaleHandler).isScaledObjectActive
            /workspace/pkg/scaling/scale_handler.go:228
    github.com/kedacore/keda/v2/pkg/scaling.(*scaleHandler).checkScalers
            /workspace/pkg/scaling/scale_handler.go:211
    github.com/kedacore/keda/v2/pkg/scaling.(*scaleHandler).startScaleLoop
            /workspace/pkg/scaling/scale_handler.go:145
    2021-10-10T17:41:54.909Z        ERROR   azure_servicebus_scaler error   {"error": "failed to refresh token, error: adal: Refresh request failed. Status Code = '400'. Response body: {\"error\":\"invalid_request\",\"error_description\":\"Identity not found\"}\n"}
    

    KEDA Version

    No response

    Kubernetes Version

    1.20

    Platform

    Microsoft Azure

    Scaler Details

    Azure Service Bus

    Anything else?

    No response

  • External metric provider via HTTP

    • Scaler Source: This scaler will send a GET request to an API that returns a JSON response.

    • How do you want to scale: Users can access a numeric value in the API response that will be used as the current value.

    • Authentication: Not sure about this one, but probably some headers. Not sure if we want to authenticate with each request; initially, we can start with public endpoints.

    Let's consider an example. My application has an endpoint that returns some useful statistics which I would love to use as a source of information for the HPA. When requested, it returns the following response:

    {"stats": {"magic_resource": {"value": 42}}}
    

    To access this value I have to specify valueLocation (as in jq). Example ScaledObject:

    apiVersion: keda.k8s.io/v1alpha1
    kind: ScaledObject
    metadata:
      name: api-scaledobject
      namespace: my-project
    spec:
      scaleTargetRef:
        deploymentName: worker
      triggers:
      - type: api-request
        metadata:
          targetValue: 42
          url: http://my-resource:3001/some/stats/endpoint
          valueLocation: stats.magic_resource.value
    

    This scaler is inspired by a Slack question: https://kubernetes.slack.com/archives/C09R1LV8S/p1594244628163800

  • Provide support for Azure Key Vault in TriggerAuthentication.

    Authentication via Azure Key Vault is now supported.
    A sample of the new TriggerAuthenticationSpec:

    apiVersion: keda.sh/v1alpha1
    kind: TriggerAuthentication
    metadata:
      name: triggerAuthName
      namespace: default
    spec:
      azureKeyVault:
        vaultUri: <vault-address>
        credentials:  
          clientId: <azureAD-clientID>
          clientSecret:
            valueFrom:
              secretKeyRef:
                name: <secret-name>
                key: <key-within-secret>
          tenantId: <azureAD-tenantID>
        secrets: 
        - parameter: <param-name-for-authenticating>
          name: <secret-name-in-key-vault>
          version: <secret-version> # Optional
    
    

    Edit 1: Updated spec and checklist. Edit 2: Raised documentation PR. Edit 3: Updated checklist with tests added.

    Checklist

    • [x] Commits are signed with Developer Certificate of Origin (DCO - learn more)
    • [x] Tests have been added
    • [x] A PR is opened to update our Helm chart (repo) (if applicable, ie. when deployment manifests are modified)
    • [x] A PR is opened to update the documentation on (repo) (if applicable)
    • [x] Changelog has been updated and is aligned with our changelog requirements

    Fixes #900

  • Start migrating e2e tests to Go.

    Signed-off-by: Vighnesh Shenoy [email protected]

    Start migrating e2e tests to Go for code parity.

    Checklist

    • [x] Commits are signed with Developer Certificate of Origin (DCO - learn more)
    • [x] Tests have been added
    • [x] A PR is opened to update our Helm chart (repo) (if applicable, ie. when deployment manifests are modified)
    • [x] A PR is opened to update the documentation on (repo) (if applicable)
    • [x] Changelog has been updated and is aligned with our changelog requirements

    Relates to #2737

  • Add github action to run e2e command "on-demand"

    Signed-off-by: jorturfer [email protected]

    We have been discussing how to run e2e tests in PRs before merging them, whenever anyone with write permissions requests it. There are several options to achieve this behavior; in this PR I propose using several actions in a row to allow it.

    When someone with write permissions comments on the PR with the message /e2e, this action is executed automatically, reacting to the comment with 🚀 to indicate that the pipeline is in progress, and then with 👍 / 👎 to notify whether the e2e tests have passed.

    Also, the trigger supports setting the e2e test discovery regex if you want, using the format /e2e REGEX, e.g.:

    • /run-e2e *.test.ts
    • /run-e2e cron*

    Note: To use this feature, we should build and push the image ghcr.io/kedacore/build-tools:main, because a few extra packages are needed, such as an up-to-date git or hub pr. Note 2: Because of this, a PAT with delete permissions on packages should also be added in order to be able to drop the e2e-test tag from GHCR; GITHUB_TOKEN is not enough. The expected secret name is GHCR_AUTH_PAT.
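
    A rough sketch of the comment-triggered workflow (the workflow name, condition and test command are illustrative, not the exact files added in this PR):

    name: pr-e2e
    on:
      issue_comment:
        types: [created]
    jobs:
      run-e2e:
        # Only run for PR comments whose body starts with the trigger command.
        if: github.event.issue.pull_request && startsWith(github.event.comment.body, '/run-e2e')
        runs-on: ubuntu-latest
        container: ghcr.io/kedacore/build-tools:main   # image mentioned above
        steps:
          - uses: actions/checkout@v3
          - name: Run e2e tests
            run: make e2e-test    # assumed make target; the real command may differ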

    Achievements:

    • [x] Execute e2e test on demand using a command
    • [x] Allow providing the regex to select a subset of the tests
    • [x] Execute the process to ensure that it works

    Checklist

    • [x] Commits are signed with Developer Certificate of Origin (DCO - learn more)
    • [x] Tests have been added
    • [x] A PR is opened to update our Helm chart (repo) (if applicable, ie. when deployment manifests are modified)
    • [x] A PR is opened to update the documentation on (repo) (if applicable)
    • [x] Changelog has been updated

    Fixes https://github.com/kedacore/keda/discussions/2224

  • feat: option to query metrics only on polling interval

    Signed-off-by: Zbynek Roubalik [email protected]

    This adds an option to the trigger spec that enables caching metric values during the polling interval. Each request coming to the KEDA Metrics Server from the k8s API server (HPA controller) then reads the value from this cache instead of querying the external service directly. This reduces load on external services, and users have the ability to select this option for an individual trigger in a ScaledObject.

    • this feature is EXPERIMENTAL and I'd like to gather feedback before making it a stable one
    • this feature is available only for ScaledObjects
    • I named this property ~queryMetricsOnPollingInterval~, though I am not happy with the name and I am eager to hear suggestions -> useCachedMetrics

    See example:

      - type: kafka
        useCachedMetrics: true     # <-- NEW OPTION, default value: false
        metadata:
          bootstrapServers: kafka.svc:9092
          consumerGroup: my-group
          topic: test-topic
          lagThreshold: '5'
    

    Code changes:

    • I merged the scaler.IsActive() and scaler.GetMetrics() calls into a single method, scaler.GetMetricsAndActivity() -> this way we don't ask for the same information twice. I was able to reduce these calls for the majority of scalers; there are some leftovers to be refactored where the logic was more complex (for those, scaler.GetMetricsAndActivity() calls the scaler.IsActive() and scaler.GetMetrics() methods instead of making a single call) - TODO: tracking issue for leftovers
    • refactored scalehandler and scalerscache a little bit

    Outstanding issues:

    • [x] How to name this property?
    • [x] e2e test
    • [x] more unit tests
    • [x] print a warning message if this option is used in ScaledJobs - it doesn't work there
    • [x] print an error if this is used by a scaler where it doesn't make sense - cpu, memory, cron. Where else?

    Checklist

    • [x] Commits are signed with Developer Certificate of Origin (DCO - learn more)
    • [x] Tests have been added
    • [x] A PR is opened to update the documentation on https://github.com/kedacore/keda-docs/pull/1003
    • [x] Changelog has been updated and is aligned with our changelog requirements

    Fixes https://github.com/kedacore/keda/issues/3921

    Relates to https://github.com/kedacore/keda/issues/2282

  • Redis cluster 7.x.x is not supported by KEDA 2.9.1

    Report

    While trying to use KEDA with Redis cluster v7.x.x to scale based on list length:

    Spec:
      Advanced:
      Scale Target Ref:
        Name:  serving
      Triggers:
        Metadata:
          Addresses:    ***
          List Length:  3
          List Name:    ***
        Type:           redis-cluster
    Status:
      Conditions:
        Message:  Failed to ensure HPA is correctly created for ScaledObject
        Reason:   ScaledObjectCheckFailed
        Status:   False
        Type:     Ready
        Message:  ScaledObject check failed
        Reason:   UnkownState
        Status:   Unknown
        Type:     Active
        Message:  No fallbacks are active on this scaled object
        Reason:   NoFallbackFound
        Status:   False
        Type:     Fallback
      External Metric Names:
        s0-redis-input-queue
      Health:
        s0-redis-input-queue:
          Number Of Failures:  0
          Status:              Happy
      Hpa Name:                keda-hpa-serving
      Last Active Time:        2022-12-23T15:25:25Z
      Original Replica Count:  1
      Scale Target GVKR:
        Group:            apps
        Kind:             Deployment
        Resource:         deployments
        Version:          v1
      Scale Target Kind:  apps/v1.Deployment
    Events:
      Type     Reason                   Age                 From           Message
      ----     ------                   ----                ----           -------
      Warning  ScaledObjectCheckFailed  10s (x12 over 14s)  keda-operator  Failed to ensure HPA is correctly created for ScaledObject
      Warning  KEDAScalerFailed         8s (x13 over 14s)   keda-operator  connection to redis cluster failed: got 4 elements in cluster info address, expected 2 or 3
    

    With Redis cluster v6.x.x it works fine.

    Expected Behavior

    It should work the same as for Redis v6.x.x.

    Actual Behavior

    ScaledObject failing during initialization:

      Type     Reason                   Age                 From           Message
      ----     ------                   ----                ----           -------
      Warning  ScaledObjectCheckFailed  10s (x12 over 14s)  keda-operator  Failed to ensure HPA is correctly created for ScaledObject
      Warning  KEDAScalerFailed         8s (x13 over 14s)   keda-operator  connection to redis cluster failed: got 4 elements in cluster info address, expected 2 or 3
    

    Steps to Reproduce the Problem

    1. deploy redis cluster v7.0.0
    2. deploy ScaledObject with Type of redis-cluster

    Logs from KEDA operator

    2022-12-30T11:34:46Z    ERROR    scalehandler    Error getting scalers    {"object": {"apiVersion": "keda.sh/v1alpha1", "kind": "ScaledObject", "namespace": "default", "name": "serving"}, "error": "connection to redis cluster failed: got 4 elements in cluster info address, expected 2 or 3"}                      
    github.com/kedacore/keda/v2/pkg/scaling.(*scaleHandler).checkScalers                                                                                                                                                                                                                                                          
        /workspace/pkg/scaling/scale_handler.go:347                                                                                                                                                                                                                                                                               
    github.com/kedacore/keda/v2/pkg/scaling.(*scaleHandler).startScaleLoop                                                                                                                                                                                                                                                        
        /workspace/pkg/scaling/scale_handler.go:162                                                                                                                                                                                                                                                                               
    2022-12-30T11:34:51Z    ERROR    scalehandler    error resolving auth params    {"scalerIndex": 0, "object": {"apiVersion": "keda.sh/v1alpha1", "kind": "ScaledObject", "namespace": "default", "name": "serving"}, "error": "connection to redis cluster failed: got 4 elements in cluster info address, expected 2 or 3
    github.com/kedacore/keda/v2/pkg/scaling.(*scaleHandler).buildScalers                                                                                                                                                                                                                                                          
        /workspace/pkg/scaling/scale_handler.go:543                                                                                                                                                                                                                                                                               
    github.com/kedacore/keda/v2/pkg/scaling.(*scaleHandler).performGetScalersCache                                                                                                                                                                                                                                                
        /workspace/pkg/scaling/scale_handler.go:269                                                                                                                                                                                                                                                                               
    github.com/kedacore/keda/v2/pkg/scaling.(*scaleHandler).GetScalersCache                                                                                                                                                                                                                                                       
        /workspace/pkg/scaling/scale_handler.go:190                                                                                                                                                                                                                                                                               
    github.com/kedacore/keda/v2/pkg/scaling.(*scaleHandler).checkScalers                                                                                                                                                                                                                                                          
        /workspace/pkg/scaling/scale_handler.go:345                                                                                                                                                                                                                                                                               
    github.com/kedacore/keda/v2/pkg/scaling.(*scaleHandler).startScaleLoop                                                                                                                                                                                                                                                        
        /workspace/pkg/scaling/scale_handler.go:162                                                                                                                                                                                                                                                                               
    2022-12-30T11:34:51Z    ERROR    scalehandler    Error getting scalers    {"object": {"apiVersion": "keda.sh/v1alpha1", "kind": "ScaledObject", "namespace": "default", "name": "serving"}, "error": "connection to redis cluster failed: got 4 elements in cluster info address, expected 2 or 3"}                      
    github.com/kedacore/keda/v2/pkg/scaling.(*scaleHandler).checkScalers                                                                                                                                                                                                                                                          
        /workspace/pkg/scaling/scale_handler.go:347                                                                                                                                                                                                                                                                               
    github.com/kedacore/keda/v2/pkg/scaling.(*scaleHandler).startScaleLoop                                                                                                                                                                                                                                                        
        /workspace/pkg/scaling/scale_handler.go:162                                                                                                                                                                                                                                                                               
    

    KEDA Version

    2.9.1

    Kubernetes Version

    1.24

    Platform

    Amazon Web Services

    Scaler Details

    redis-cluster

    Anything else?

    No response

  • metrics-apiserver json logs

        > I'm very sorry @JorTurFer - I was looking at logs from the `keda-operator-metrics-apiserver` deployment. It seems that it only has a level setting, as opposed to the operator, which has level, format, and timeEncoding.
    

    Our logs for keda-operator are correctly being emitted as JSON 🤦

    That said, I'd like to have the metrics-apiserver log in JSON format as well, primarily because all info logs are emitted to stderr and those show up as error logs in our logging system. Could this functionality please be added?
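
    For reference, the operator side exposes these settings as controller-runtime zap flags on the container args; a sketch of what the ask would look like if the metrics apiserver accepted the same options (image tag and flag values are illustrative):

    containers:
      - name: keda-operator-metrics-apiserver
        image: ghcr.io/kedacore/keda-metrics-apiserver:<version>   # placeholder tag
        args:
          - --zap-log-level=info
          - --zap-encoder=json            # "format"
          - --zap-time-encoding=rfc3339   # "timeEncoding"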

    Originally posted by @naseemkullah in https://github.com/kedacore/keda/issues/3655#issuecomment-1367852952

  • Is it possible to combine multiple k8s secrets into single value in TriggerAuthentication

    Proposal

    I have the following secret:

    ---
    apiVersion: v1
    kind: Secret
    type: Opaque
    metadata:
      name: my-database-credentials
    data:
      username: ...
      password: ...
    

    Use-Case

    I want to combine the values into a new one: host: amqp://$(username):$(password)@rbtmq.default.svc.cluster.local:5672
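
    For reference, a TriggerAuthentication today maps each trigger parameter to exactly one secret key via secretTargetRef, with no templating of combined values; a minimal sketch (the parameter names depend on the scaler being used):

    apiVersion: keda.sh/v1alpha1
    kind: TriggerAuthentication
    metadata:
      name: my-database-trigger-auth        # illustrative name
    spec:
      secretTargetRef:                      # each entry maps one parameter to one key
      - parameter: username
        name: my-database-credentials
        key: username
      - parameter: password
        name: my-database-credentials
        key: password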

    Anything else?

    If it's possible, may I have an example, please?

  • Azure log analytics scaler now supports the unsafeSsl flag

    Allow users to configure whether or not TLS certificate verification is skipped (unsafeSsl) when using the Azure Monitor Log Analytics scaler. Inspired by the Loki scaler.

    Fixes #4046

    The default value still remains false.
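
    A sketch of how the flag would appear on a trigger (the other metadata fields are illustrative):

    - type: azure-log-analytics
      metadata:
        workspaceId: <workspace-id>
        query: "Heartbeat | count"
        threshold: "25"
        unsafeSsl: "false"    # new flag; the default stays false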

  • Keda Cron Scaler can't be unsuspended

    Report

    Hello, we have a ScaledObject that is scaling our application based on the Cron scaler.

    Sometimes we have production events that make us want even more replicas than what is configured in the cron, so we just change the number of desired replicas using the paused-replicas annotation:

    kubectl annotate scaledObject -n my-apps my-app autoscaling.keda.sh/paused-replicas=25
    

    This works and immediately changes the replicas to the number stated in the annotation. The problem is unpausing, i.e. removing the annotation:

    kubectl annotate scaledObject -n my-apps my-app autoscaling.keda.sh/paused-replicas-
    

    This doesn't do anything, so basically once we add a paused-replicas annotation we are stuck with that number of replicas. We also don't see anything in the logs regarding the removal of the annotation, like we do with other scalers when we remove it. So it seems there is a bug in picking up the event that the annotation was removed when using cron.

    (A graph showing how this looks was attached to the issue.)

    At first we were using paused-replicas at 30; after removing it nothing happened, so we deleted and recreated the ScaledObject and everything went back to normal. Then we set paused-replicas again at 25 replicas, removed it, and now we are stuck at 25 replicas.
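
    For completeness, the same pause can be expressed directly on the ScaledObject manifest instead of via kubectl annotate (a sketch with illustrative names):

    apiVersion: keda.sh/v1alpha1
    kind: ScaledObject
    metadata:
      name: my-app
      namespace: my-apps
      annotations:
        autoscaling.keda.sh/paused-replicas: "25"   # removing this should unpause
    spec:
      scaleTargetRef:
        name: my-app
      triggers:
      - type: cron
        metadata:
          timezone: Etc/UTC
          start: 0 8 * * *
          end: 0 18 * * *
          desiredReplicas: "10"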

    Expected Behavior

    When the paused-replicas annotation is removed, the workload should return to the needed number of replicas.

    Actual Behavior

    Nothing happens when the annotation is removed.

    Steps to Reproduce the Problem

    1. use cron scaler
    2. add the paused-replicas annotation
    3. remove the paused-replicas annotation

    Logs from KEDA operator

    No response

    KEDA Version

    2.7.1

    Kubernetes Version

    < 1.23

    Platform

    Amazon Web Services

    Scaler Details

    Cron

    Anything else?

    No response

Amazon ECS Container Agent: a component of Amazon Elastic Container Service

Amazon ECS Container Agent The Amazon ECS Container Agent is a component of Amazon Elastic Container Service (Amazon ECS) and is responsible for manag

Dec 28, 2021
A distributed append only commit log used for quick writes and reads to any scale

Maestro-DB A distributed append only commit log used for quick writes and reads to any scale Part 1 - Scaffolding Part-1 Notes Going to start off with

Nov 28, 2021
Moby Project - a collaborative project for the container ecosystem to assemble container-based systems

The Moby Project Moby is an open-source project created by Docker to enable and accelerate software containerization. It provides a "Lego set" of tool

Jan 8, 2023
A docker container that can be deployed as a sidecar on any kubernetes pod to monitor PSI metrics

CgroupV2 PSI Sidecar CgroupV2 PSI Sidecar can be deployed on any kubernetes pod with access to cgroupv2 PSI metrics. About This is a docker container

Nov 23, 2021
TriggerMesh open source event-driven integration platform powered by Kubernetes and Knative.

TriggerMesh open source event-driven integration platform powered by Kubernetes and Knative. TriggerMesh allows you to declaratively define event flows between sources and targets as well as add even filter, splitting and processing using functions.

Dec 30, 2022
Kube-step-podautoscaler - Controller to scale workloads based on steps

Refer controller/*controller.go for implementation details and explanation for a better understanding.

Sep 5, 2022
Large-scale Kubernetes cluster diagnostic tool.

KubeProber What is KubeProber? KubeProber is a diagnostic tool designed for large-scale Kubernetes clusters. It is used to perform diag

Dec 21, 2022
A component for sync services between Nacos and Kubernetes.

Introduction: This project is used to synchronize service information between Kubernetes and Nacos. Currently it only supports syncing Kubernetes Service -> Nacos Service. TODO: add a high-performance zap logger; add Nacos Service -> Kubernetes Service sync; watch

May 16, 2022
A Pulumi Kubernetes CertManager component

Pulumi Cert Manager Component This repo contains the Pulumi Cert Manager component for Kubernetes. This add-on automates the management and issuance o

Nov 30, 2022
A Pulumi Kubernetes CoreDNS component

Pulumi Kubernetes CoreDNS Component This repo contains the Pulumi CoreDNS component for Kubernetes. CoreDNS is a fast and flexible DNS server, providi

Dec 1, 2021
Nanovms running in Docker x86 container for M1 Mac ARM64.

Docker Ops This project is an attempt to enable Nanos unikernels to be managed by Ops on non-intel architectures such as the Mac M1 ARM64. Unless ther

Nov 22, 2021
Display (Namespace, Pod, Container, Primary PID) from a host PID, fails if the target process is running on host

Oct 17, 2022
How to get a Go / Golang app using the Gin web framework running natively on Windows Azure App Service WITHOUT using a Docker container

Go on Azure App Service View the running app -> https://go-azure-appservice.azurewebsites.net ?? This is an example repo of how to get a Go / Golang a

Nov 28, 2022
Boxygen is a container as code framework that allows you to build container images from code

Boxygen is a container as code framework that allows you to build container images from code, allowing integration of container image builds into other tooling such as servers or CLI tooling.

Dec 13, 2021
The Container Storage Interface (CSI) Driver for Fortress Block Storage. This driver allows you to use Fortress Block Storage with your container orchestrator

fortress-csi The Container Storage Interface (CSI) Driver for Fortress Block Storage This driver allows you to use Fortress Block Storage with your co

Jan 23, 2022
A tool to build, deploy, and release any application on any platform.

Waypoint Website: https://www.waypointproject.io Tutorials: HashiCorp Learn Forum: Discuss Waypoint allows developers to define their application buil

Dec 28, 2022
Hexagonal architecture paradigms, such as dividing adapters into primary (driver) and secondary (driven)

authorizer Architecture In this project, I tried to apply hexagonal architecture paradigms, such as dividing adapters into primary (driver) and second

Dec 7, 2021
Carrier is a Kubernetes controller for running and scaling game servers on Kubernetes.

Carrier is a Kubernetes controller for running and scaling game servers on Kubernetes. This project is inspired by agones. Introduction Genera

Nov 25, 2022