A controller managing namespaces' Deployments, StatefulSets, and CronJobs. Inspired by kube-downscaler.

kube-ns-suspender

A Kubernetes controller managing the life cycle of namespaces.

Goal

This controller watches the cluster's namespaces and "suspends" them by scaling some of the resources within those namespaces down to 0 at a given time. However, once a namespace is in a "suspended" state, it will not be restarted automatically the following day (or at any other time). This allows namespaces to be "reactivated" only when required, which reduces costs.

Usage

Internals

This controller can be split into two parts:

  • The watcher
  • The suspender

The watcher

The watcher function checks all the namespaces every X seconds (X being set by the -watcher-idle flag or by the KUBE_NS_SUSPENDER_WATCHER_IDLE environment variable). When it finds a namespace that has the kube-ns-suspender/desiredState annotation, it sends it to the suspender. It also manages all the metrics exposed about the watched namespaces' states.

The suspender

The suspender function does all the work of reading namespace and resource annotations, and (un)suspending the resources when required.

Flags

/* explain the different flags, the associated env vars... */
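
Until this section is written, here is a hedged sketch (values are examples only, and the list is not exhaustive) of how the environment variables that appear elsewhere in this document can be set on the controller's container; KUBE_NS_SUSPENDER_WATCHER_IDLE is the counterpart of the -watcher-idle flag described above:

    env:
    - name: "KUBE_NS_SUSPENDER_WATCHER_IDLE"    # same setting as the -watcher-idle flag
      value: "15s"
    - name: "KUBE_NS_SUSPENDER_UI_EMBEDDED"     # serve the embedded web UI
      value: "true"
    - name: "KUBE_NS_SUSPENDER_CONTROLLER_NAME" # controller name reported in the logs
      value: "kube-ns-suspender"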

Resources

Currently supported resources are:

  • Deployments
  • StatefulSets
  • CronJobs

States

Namespaces watched by kube-ns-suspender can be in three different states:

  • Running: the namespace is "up", and all the resources have the desired number of replicas.
  • Suspended: the namespace is "paused", and all the supported resources are scaled down to 0 or suspended.
  • Running Forced: the namespace has been suspended, and then reactivated manually. It will be "running" for a pre-defined duration, then go back to the "suspended" state.

Annotations

Annotations are employed to save the original state of a resource.

On namespaces

In order for a namespace to be watched by the controller, it needs to have the kube-ns-suspender/desiredState annotation set to any of the supported values, which are:

  • Running
  • RunningForced
  • Suspended

To be suspended at a given time, a namespace must have the annotation kube-ns-suspender/suspendAt set to a valid value. Valid values are any that match the time.Kitchen time format, for example 8:15PM or 12:45AM.
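
As a minimal illustration (the namespace name is a placeholder and the time is an example), a namespace managed by the controller could be annotated like this:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: my-namespace                          # placeholder name
      annotations:
        kube-ns-suspender/desiredState: "Running" # watched and currently running
        kube-ns-suspender/suspendAt: "8:15PM"     # suspend time, time.Kitchen format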

On resources

Deployments and StatefulSets

As those resources have a spec.replicas value, they must have a kube-ns-suspender/originalReplicas annotation set to the same value as spec.replicas. This annotation is used when a resource is "unsuspended" to restore the original number of replicas.
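
For example (name, image, and replica count are placeholders), a deployment running three replicas would carry:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app                                # placeholder name
      annotations:
        kube-ns-suspender/originalReplicas: "3"   # must match spec.replicas
    spec:
      replicas: 3                                 # scaled to 0 on suspend, restored on unsuspend
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
          - name: my-app
            image: nginx:1.25                     # placeholder image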

CronJobs

CronJobs have a spec.suspend value that indicates whether they must run or not. As this value is a boolean, no other annotation is required.
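
For instance (name, image, and schedule are placeholders), a cron job suspended by the controller simply ends up with:

    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: my-cronjob              # placeholder name
    spec:
      schedule: "*/10 * * * *"      # placeholder schedule
      suspend: true                 # set to true on suspend, back to false on unsuspend
      jobTemplate:
        spec:
          template:
            spec:
              restartPolicy: Never
              containers:
              - name: my-cronjob
                image: busybox:1.36 # placeholder image
                command: ["date"]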

Contributing

/* add CONTRIBUTING file at root */

License

MIT

Comments
  • [Feature]: UI Button to suspend namespace

    Right now you can manually suspend a namespace on demand by changing the annotation. It would be nice to have an option to do this from the UI, as it is easy to see which namespaces are suspended and which are not.

  • [Bug]: Panic on checkSuspendedStatefulsetsConformity

    Version

    v2.1.0

    What happened?

    Thanks for this project. It looks like it meets my use case exactly. However, when I first tried to use this on my minikube setup, I hit a panic. I'm on a slightly older version of Kubernetes. Not sure if the issue is related to that:

    Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.14", GitCommit:"0f77da5bd4809927e15d1658fb4aa8f13ad890a5", GitTreeState:"clean", BuildDate:"2022-06-15T14:11:36Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/amd64"}
    

    Relevant log output

    {"level":"debug","routine":"suspender","namespace":"cogito","step":"3/3 - handle desiredState","resource":"statefulsets","time":"2022-12-14T17:26:00+01:00","message":"checking suspended Conformity"}
    {"level":"info","routine":"suspender","namespace":"cogito","statefulset":"keycloak","time":"2022-12-14T17:26:00+01:00","message":"scaling keycloak from 1 to 0 replicas"}
    {"level":"info","routine":"suspender","namespace":"cogito","deployment":"compute-manager","time":"2022-12-14T17:26:00+01:00","message":"scaling compute-manager from 1 to 0 replicas"}
    E1214 17:26:00.378244       1 runtime.go:78] Observed a panic: "assignment to entry in nil map" (assignment to entry in nil map)
    goroutine 61 [running]:
    k8s.io/apimachinery/pkg/util/runtime.logPanic({0x13287a0, 0x1661690})
    	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:74 +0x85
    k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc0001c0000})
    	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:48 +0x75
    panic({0x13287a0, 0x1661690})
    	/usr/local/go/src/runtime/panic.go:1038 +0x215
    github.com/govirtuo/kube-ns-suspender/engine.patchStatefulsetReplicas.func1()
    	/build/engine/statefulset.go:61 +0x186
    k8s.io/client-go/util/retry.OnError.func1()
    	/go/pkg/mod/k8s.io/[email protected]/util/retry/util.go:51 +0x33
    k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x40ce34, 0xc0000a1848})
    	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:217 +0x1b
    k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x168bb88, 0xc000046038}, 0xc0000a1950)
    	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:230 +0x7c
    k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0x13aa440)
    	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:223 +0x39
    k8s.io/apimachinery/pkg/util/wait.ExponentialBackoff({0x989680, 0x3ff0000000000000, 0x3fb999999999999a, 0x5, 0x0}, 0x40d187)
    	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:418 +0x5f
    k8s.io/client-go/util/retry.OnError({0x989680, 0x3ff0000000000000, 0x3fb999999999999a, 0x5, 0x0}, 0x15600d0, 0xc000604840)
    	/go/pkg/mod/k8s.io/[email protected]/util/retry/util.go:50 +0xf1
    k8s.io/client-go/util/retry.RetryOnConflict(...)
    	/go/pkg/mod/k8s.io/[email protected]/util/retry/util.go:104
    github.com/govirtuo/kube-ns-suspender/engine.patchStatefulsetReplicas({0x168bb88, 0xc000046030}, 0xc0003af4a0, {0xc00048acf0, 0x6}, {0xc0002de930, 0x8}, {0x14b63de, 0x12}, 0x0)
    	/build/engine/statefulset.go:53 +0x1d2
    github.com/govirtuo/kube-ns-suspender/engine.checkSuspendedStatefulsetsConformity({0x168bb88, 0xc000046030}, {{0x167daa8, 0xc00038bee0}, 0x0, {0x0, 0x0}, {0xc00042e600, 0x2b, 0x1f4}, ...}, ...)
    	/build/engine/statefulset.go:43 +0x1f1
    github.com/govirtuo/kube-ns-suspender/engine.(*Engine).Suspender.func6()
    	/build/engine/suspender.go:266 +0xa8
    created by github.com/govirtuo/kube-ns-suspender/engine.(*Engine).Suspender
    	/build/engine/suspender.go:265 +0x3a05
    panic: assignment to entry in nil map [recovered]
    	panic: assignment to entry in nil map
    
    goroutine 61 [running]:
    k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc0001c0000})
    	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:55 +0xd8
    panic({0x13287a0, 0x1661690})
    	/usr/local/go/src/runtime/panic.go:1038 +0x215
    github.com/govirtuo/kube-ns-suspender/engine.patchStatefulsetReplicas.func1()
    	/build/engine/statefulset.go:61 +0x186
    k8s.io/client-go/util/retry.OnError.func1()
    	/go/pkg/mod/k8s.io/[email protected]/util/retry/util.go:51 +0x33
    k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x40ce34, 0xc0000a1848})
    	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:217 +0x1b
    k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x168bb88, 0xc000046038}, 0xc0000a1950)
    	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:230 +0x7c
    k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0x13aa440)
    	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:223 +0x39
    k8s.io/apimachinery/pkg/util/wait.ExponentialBackoff({0x989680, 0x3ff0000000000000, 0x3fb999999999999a, 0x5, 0x0}, 0x40d187)
    	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:418 +0x5f
    k8s.io/client-go/util/retry.OnError({0x989680, 0x3ff0000000000000, 0x3fb999999999999a, 0x5, 0x0}, 0x15600d0, 0xc000604840)
    	/go/pkg/mod/k8s.io/[email protected]/util/retry/util.go:50 +0xf1
    k8s.io/client-go/util/retry.RetryOnConflict(...)
    	/go/pkg/mod/k8s.io/[email protected]/util/retry/util.go:104
    github.com/govirtuo/kube-ns-suspender/engine.patchStatefulsetReplicas({0x168bb88, 0xc000046030}, 0xc0003af4a0, {0xc00048acf0, 0x6}, {0xc0002de930, 0x8}, {0x14b63de, 0x12}, 0x0)
    	/build/engine/statefulset.go:53 +0x1d2
    github.com/govirtuo/kube-ns-suspender/engine.checkSuspendedStatefulsetsConformity({0x168bb88, 0xc000046030}, {{0x167daa8, 0xc00038bee0}, 0x0, {0x0, 0x0}, {0xc00042e600, 0x2b, 0x1f4}, ...}, ...)
    	/build/engine/statefulset.go:43 +0x1f1
    github.com/govirtuo/kube-ns-suspender/engine.(*Engine).Suspender.func6()
    	/build/engine/suspender.go:266 +0xa8
    created by github.com/govirtuo/kube-ns-suspender/engine.(*Engine).Suspender
    	/build/engine/suspender.go:265 +0x3a05
    

    Anything else?

    No response

  • [Bug]: No namespaces are detected in new version

    Version

    v2.1.0

    What happened?

    In the UI, no namespaces are detected (screenshot omitted).

    Configured variables:

            env:
            - name: "KUBE_NS_SUSPENDER_UI_EMBEDDED"
              value: "true"
            - name: "KUBE_NS_SUSPENDER_CONTROLLER_NAME"
              value: "kube-ns-suspender"
    
    ❯ kubectl get ns --show-labels
    NAME                STATUS   AGE     LABELS
    kube-system         Active   77m     kubernetes.io/metadata.name=kube-system
    default             Active   77m     kubernetes.io/metadata.name=default
    kube-public         Active   77m     kubernetes.io/metadata.name=kube-public
    kube-node-lease     Active   77m     kubernetes.io/metadata.name=kube-node-lease
    kube-ns-suspender   Active   74m     kube-ns-suspender/controllerName=kube-ns-suspender,kubernetes.io/metadata.name=kube-ns-suspender
    test                Active   5m42s   kube-ns-suspender/controllerName=kube-ns-suspender,kubernetes.io/metadata.name=test

    Relevant log output

    {"level":"info","time":"2022-06-14T11:54:55+02:00","message":"engine successfully created in 8.479µs"}
    {"level":"info","time":"2022-06-14T11:54:55+02:00","message":"kube-ns-suspender version 'ghcr.io/govirtuo/kube-ns-suspender:v2.1.0' (built 2022-06-13_12:28:13TUTC)"}
    {"level":"info","time":"2022-06-14T11:54:55+02:00","message":"web UI successfully created"}
    {"level":"debug","time":"2022-06-14T11:54:55+02:00","message":"timezone: Europe/Paris"}
    {"level":"debug","time":"2022-06-14T11:54:55+02:00","message":"watcher idle: 15s"}
    {"level":"debug","time":"2022-06-14T11:54:55+02:00","message":"running duration: 4h0m0s"}
    {"level":"debug","time":"2022-06-14T11:54:55+02:00","message":"log level: debug"}
    {"level":"debug","time":"2022-06-14T11:54:55+02:00","message":"json logging: true"}
    {"level":"debug","time":"2022-06-14T11:54:55+02:00","message":"controller name: kube-ns-suspender"}
    {"level":"debug","time":"2022-06-14T11:54:55+02:00","message":"annotations prefix: kube-ns-suspender/"}
    {"level":"info","time":"2022-06-14T11:54:55+02:00","message":"metrics server successfully created in 74.014µs"}
    {"level":"info","time":"2022-06-14T11:54:55+02:00","message":"in-cluster configuration successfully created in 113.159µs"}
    {"level":"info","time":"2022-06-14T11:54:55+02:00","message":"clientset successfully created in 1.209969ms"}
    {"level":"info","time":"2022-06-14T11:54:55+02:00","message":"starting 'Watcher' and 'Suspender' routines"}
    {"level":"info","routine":"suspender","time":"2022-06-14T11:54:55+02:00","message":"suspender started"}
    {"level":"info","routine":"watcher","time":"2022-06-14T11:54:55+02:00","message":"watcher started"}
    {"level":"debug","routine":"watcher","inventory_id":0,"time":"2022-06-14T11:54:55+02:00","message":"starting new namespaces inventory"}
    {"level":"debug","routine":"watcher","inventory_id":0,"time":"2022-06-14T11:54:55+02:00","message":"parsing namespaces list"}
    {"level":"debug","routine":"watcher","inventory_id":0,"time":"2022-06-14T11:54:55+02:00","message":"Metric - channel length: 0"}
    {"level":"debug","routine":"watcher","inventory_id":0,"time":"2022-06-14T11:54:55+02:00","message":"Metric - running namespaces: 0"}
    {"level":"debug","routine":"watcher","inventory_id":0,"time":"2022-06-14T11:54:55+02:00","message":"Metric - suspended namespaces: 0"}
    {"level":"debug","routine":"watcher","inventory_id":0,"time":"2022-06-14T11:54:55+02:00","message":"Metric - unknown namespaces: 0"}
    {"level":"debug","routine":"watcher","inventory_id":0,"time":"2022-06-14T11:54:55+02:00","message":"namespaces inventory ended"}
    
    
    
    ### Anything else?
    
    _No response_
  • Features for v0.1.0

    • [x] Suspend / Unsuspend "deployments" based on NS annotations (MVP)
    • [x] Implement namespace autostop (scheduled)
    • [x] Support CronJobs
    • [x] Support StatefulSets
    • [x] Code and comments refactoring
    • [x] Add GitHub actions to release the binary
    • [x] Review log levels
  • [Bug]: Ingress not working

    Version

    v2.0.11

    What happened?

    Using the standard kustomization.yaml manifest from `base/run`

    My Ingress:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: kube-ns-suspender-webui
      annotations:
        kubernetes.io/ingress.class: nginx
    spec:
      rules:
      - host: kube-ns-suspend.k3s.home
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kube-ns-suspender-webui
                port:
                  number: 8080
    

    Port-forward is also broken; maybe the current release is broken?

    Relevant log output

    19:54:13 [error] 1409#1409: *57135486 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.7.71, server: kube-ns-suspend.k3s.home, request: "GET /favicon.ico
    
    lars@Larss-MacBook-Air k8s-at-home % kubectl -n kube-ns-suspender port-forward svc/kube-ns-suspender-webui 8080
    Forwarding from 127.0.0.1:8080 -> 8080
    Forwarding from [::1]:8080 -> 8080
    Handling connection for 8080
    E0603 21:53:22.207123    6735 portforward.go:406] an error occurred forwarding 8080 -> 8080: error forwarding port 8080 to pod df5b06140a49690901253dfb0f0bd8da95121bdd0efbf5eaed06b7ff5897b552, uid : failed to execute portforward in network namespace "/var/run/netns/cni-110e7dcd-eefb-0043-6d6e-66300695a2be": failed to connect to localhost:8080 inside namespace "df5b06140a49690901253dfb0f0bd8da95121bdd0efbf5eaed06b7ff5897b552", IPv4: dial tcp4 127.0.0.1:8080: connect: connection refused IPv6 dial tcp6 [::1]:8080: connect: connection refused 
    E0603 21:53:22.207693    6735 portforward.go:234] lost connection to pod
    Handling connection for 8080
    E0603 21:53:22.208050    6735 portforward.go:346] error creating error stream for port 8080 -> 8080: EOF

    Anything else?

    No response

  • [Bug]: `nextSuspendTime` seems to be drifting when no `dailySuspendTime`

    Current Behavior:

    When the annotation dailySuspendTime is not present, it seems that nextSuspendTime is drifting, and in the end the namespace will never be suspended.

    Expected Behavior:

    nextSuspendTime alone should be sufficient to suspend a namespace.

    Steps To Reproduce:

    1. Create a namespace that is watched by kube-ns-suspender without the annotation dailySuspendTime
    2. Unsuspend it to have the annotation nextSuspendTime added
    3. Wait until the nextSuspendTime is reached
    4. Enjoy

    Anything else:

    n/a

  • [Feature]: Installation guide

    Is your feature request related to a problem?

    Hi, I have tried to install kube-ns-suspender on my K3d cluster to test it out, but it is very difficult to figure out how to do it. Additionally, I want to install kube-ns-suspender without cloning the repo and using Kustomize, and I think that is not possible yet?

    Describe the solution you'd like

    A clear and concise description of how to install kube-ns-suspender on the cluster

  • refactor: logs

    • chore: Rename watcher logger variable
    • chore: Minor logs update on 'main.go'
    • chore: Minor logs update on 'engine/watcher.go'
    • chore: Remove un-needed 'namespace' reference from logger in 'engine/suspender.go'
    • refactor: Add tons of logs on 'engine/suspender.go'
    • chore: Add 'inventory_id' on every log statement on 'engine/watcher.go'
  • feat: v2

    Features updates

    nextSuspendTime

    • [x] Rename auto_nextSuspendTime -> nextSuspendTime
    • [x] Use nextSuspendTime as a reference, drop the in-mem k/v store
    • [x] Support editable nextSuspendTime annotation (advanced users can define the nextSuspendTime value)

    dailySuspendTime

    • [x] Suspend namespace resources at dailySuspendTime, even if a user "unsuspended" the namespace after dailySuspendTime
    • [x] (optional) Make dailySuspendTime optional (to allow advanced users to remove the restriction)
  • [Bug]: dry run flag is not working

    Current Behavior:

    Even when using the -dry-run flag, the objects are downscaled.

    Expected Behavior:

    Do not downscale the objects

    Steps To Reproduce:

    Use the -dry-run flag and watch the objects.

  • feat: refactored code to avoid argoCD self heal issues

    The ArgoCD self-heal feature detected that the original manifests had changed, so the original annotations are not edited anymore.

    This PR also closes issues #4 and #11

  • [Bug]: Failed to unsuspend deployment scaled to 0 #90

    Bugfix for issue #90

    When a deployment or statefulset is scaled to 0, the originalReplicas annotation will not be set when the namespace is suspended, so we need to handle this annotation not being present on unsuspend. We also need to clear the originalReplicas annotation on unsuspend so that the replicas can be set to 0 again.

  • [Feature]: Disable running-duration

    Is your feature request related to a problem?

    I would like to leave a namespace running indefinitely until I manually suspend it. This is to support long-running tests of undetermined duration.

    Describe the solution you'd like

    I would like to be able to disable the running-duration feature by setting the value to 0 or -1. When the namespace is unsuspended, it should not set a nextSuspendTime.

    Additional context

    Add any other context or screenshots about the feature request here.

  • [Bug]: Failed to unsuspend deployment scaled to 0

    Version

    v2.1.1

    What happened?

    I have a deployment of pods that is scaled to 0 by Keda. When I suspend the namespace, the deployment isn't annotated with the originalReplicas, presumably because it is scaled to 0. When I unsuspend the namespace, I hit an error because the originalReplicas annotation is missing.

    Relevant log output

    {"level":"debug","routine":"suspender","namespace":"cogito-saas","step":"3/3 - handle desiredState","resource":"statefulsets","time":"2022-12-19T16:07:14+01:00","message":"checking running conformity"}
    {"level":"error","routine":"suspender","namespace":"cogito-saas","error":"strconv.Atoi: parsing \"\": invalid syntax","time":"2022-12-19T16:07:14+01:00","message":"running deployments conformity checks failed"}
    

    Anything else?

    No response

  • [Feature]: Support pausing Keda autoscaling

    Is your feature request related to a problem?

    I use Keda to autoscale some components. Keda runs in its own namespace. Unfortunately, when I suspend a namespace that includes Keda ScaledObjects, Keda will continue to try to scale up the deployment in the suspended namespace.

    Describe the solution you'd like

    In order to avoid this, the ScaledObject resources can be annotated to pause autoscaling. See: https://keda.sh/docs/2.8/concepts/scaling-deployments/#pause-autoscaling

    If a namespace is suspended, check if any apiVersion: keda.sh/v1alpha1, kind: ScaledObject resources exist in the namespace. If they do, add the autoscaling.keda.sh/paused-replicas: "0" annotation. When the namespace is unsuspended, remove this annotation.
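
    A hedged sketch (resource and target names are placeholders, triggers omitted) of what a ScaledObject would look like while its namespace is suspended under this proposal:

    apiVersion: keda.sh/v1alpha1
    kind: ScaledObject
    metadata:
      name: my-scaledobject                       # placeholder name
      annotations:
        autoscaling.keda.sh/paused-replicas: "0"  # added on suspend, removed on unsuspend
    spec:
      scaleTargetRef:
        name: my-app                              # placeholder deployment name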

    Additional context

    Add any other context or screenshots about the feature request here.

  • [Bug]: Web UI display issue with long list of namespaces

    Version

    v2.1.1

    What happened?

    Enable the web UI with 13 annotated namespaces. The namespace table overlaps the footer, making it hard to read (screenshot omitted).

    Relevant log output

    No response

    Anything else?

    No response
