A best practices checker for Kubernetes clusters. 🤠

Clusterlint


As clusters scale and become increasingly difficult to maintain, clusterlint helps operators conform to Kubernetes best practices around resources, security, and reliability so they can avoid common problems while operating or upgrading their clusters.

Clusterlint queries live Kubernetes clusters for resources, executes common and platform-specific checks against these resources, and provides actionable feedback to cluster operators. It is a non-invasive tool that runs externally and does not alter resource configurations.

Background

Kubernetes resources can be configured and applied in many ways. This flexibility often makes it difficult to identify problems across the cluster at configuration time. Clusterlint looks at live clusters to analyze all of their resources and report any problems.

There are some common best practices to follow while applying configurations to a cluster, such as:

  • Use namespaces to limit the scope of the Kubernetes resources created by multiple sets of users within a team. Even though there is a default namespace, dumping all created resources into one namespace is not recommended. It can lead to privilege escalation, resource name collisions, latency in operations as resources scale up, and mismanagement of Kubernetes objects. Having namespaces also ensures that resource quotas can be enabled to track node, CPU, and memory usage for individual teams.

  • Always specify resource requests and limits on pods: when containers have resource requests specified, the scheduler can make better decisions about which nodes to place pods on. When containers have limits specified, contention for resources on a node can be handled in a predictable manner (see the example below).
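
For example, a pod spec with requests and limits set might look like the following (a minimal sketch; names, image, and values are hypothetical and for illustration only):

apiVersion: v1
kind: Pod
metadata:
  name: web                 # hypothetical pod name
  namespace: team-a         # a dedicated namespace rather than default
spec:
  containers:
  - name: app
    image: nginx:1.21       # a pinned tag rather than latest
    resources:
      requests:             # used by the scheduler for placement decisions
        cpu: 100m
        memory: 128Mi
      limits:               # enforced at runtime to bound contention on the node
        cpu: 500m
        memory: 256Mi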

While some problems are common to clusters irrespective of the environment they run in, different Kubernetes configurations (VMs, managed solutions, etc.) have subtleties that affect how workloads run. Clusterlint provides platform-specific checks to identify resource issues that cluster operators can fix to run in a specific environment.

Some examples of such checks are:

  • On upgrade of a cluster on DOKS, the worker nodes' hostnames change. So, if a user's pod spec relies on the hostname to schedule pods on specific nodes, pod scheduling will fail after the upgrade, as illustrated below.
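
For illustration, the following pod spec shows the problematic pattern such a check looks for (a minimal sketch; the pod name and hostname value are hypothetical): the nodeSelector pins the pod to a specific node hostname, which will no longer exist after an upgrade.

apiVersion: v1
kind: Pod
metadata:
  name: pinned-pod                            # hypothetical pod name
spec:
  nodeSelector:
    kubernetes.io/hostname: pool-abc12-xyz34  # node hostnames change across DOKS upgrades
  containers:
  - name: app
    image: nginx:1.21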

Please refer to checks.md to get some background on every check that clusterlint performs.

Install

go get github.com/digitalocean/clusterlint/cmd/clusterlint

The above command creates the clusterlint binary in $GOPATH/bin.

Usage

clusterlint list [options]  // list all checks available
clusterlint run [options]  // run all or specific checks

Specific checks and groups

All checks that clusterlint performs are categorized into groups, and a check can belong to multiple groups. This framework allows you to run only specific checks on a cluster. For instance, if a cluster is running on DOKS, running AWS-specific checks does not make sense; clusterlint can exclude AWS-related checks, if any, when run against a DOKS cluster.

clusterlint run -g basic              // runs only checks that are part of the basic group
clusterlint run -G security           // runs all checks that are not part of the security group
clusterlint run -c default-namespace  // runs only the default-namespace check
clusterlint run -C default-namespace  // runs all checks except the default-namespace check

Disabling checks via Annotations

Clusterlint provides a way to exempt specific objects in the cluster from being checked. For example, resources in the kube-system namespace often use privileged containers, which can create a lot of noise in the output when a cluster operator is looking for feedback to improve the cluster configuration. To exempt such objects from specific checks, add the annotation clusterlint.digitalocean.com/disabled-checks to the resource configuration. The annotation takes a comma-separated list of check names that should be excluded while running clusterlint.

"metadata": {
  "annotations": {
    "clusterlint.digitalocean.com/disabled-checks" : "noop,bare-pods"
  }
}
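
The same annotation can also be applied to a live object with kubectl; for example, for a hypothetical pod named my-pod:

kubectl annotate pod my-pod clusterlint.digitalocean.com/disabled-checks=noop,bare-pods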

Building local checks

Some individuals and organizations have Kubernetes best practices that are not applicable to the general community, but which they would like to check with clusterlint. If your check may be useful for anyone else, we encourage you to submit it to clusterlint rather than keeping it local. However, if you have a truly specific check that is not appropriate for sharing with the broader community, you can implement it using Go plugins.

See the example plugin for documentation on how to build a plugin. Please be sure to read the caveats and consider whether you really want to maintain a plugin.

To use your plugin with clusterlint, pass its path on the command line:

$ clusterlint --plugins=/path/to/plugin.so list
$ clusterlint --plugins=/path/to/plugin.so run -c my-plugin-check

Contributing

Contributions are welcome, in the form of either issues or pull requests. Please see the contribution guidelines for details.

License

Copyright 2019 DigitalOcean

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at:

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Comments
  • Error on images from docker.pkg.github.com

    Resolves #113

    Since Kubernetes 1.20, containerd is used instead of Docker as the container runtime. Due to protocol version differences, containerd is unable to pull images hosted at docker.pkg.github.com. The new check in this commit errors when it finds an image from that registry and suggests using ghcr.io, which is also hosted and operated by GitHub and is the successor of docker.pkg.github.com.

    Refs:

    • https://github.com/containerd/containerd/issues/3291#issuecomment-683700425
    • https://docs.github.com/en/packages/guides/migrating-to-github-container-registry-for-docker-images#domain-changes
  • Missing doc on warn Pod referencing dobs volumes must be owned by statefulset

    Hi,

    I have a few warnings such as: Pod referencing dobs volumes must be owned by statefulset

    which links to: https://www.digitalocean.com/docs/kubernetes/resources/clusterlint-errors/#dobs-pod-owner

    However, the link doesn't seem correct and there doesn't seem to be any documentation about this issue.

  • Cluster upgrade issue with cert manager

    Anyone using cert manager currently will get this error when upgrading their cluster:

    There are issues that will cause your pods to stop working. We recommend you fix them before upgrading this cluster. Validating webhook is configured in such a way that it may be problematic during upgrades. Mutating webhook is configured in such a way that it may be problematic during upgrades.

    Should these be marked as errors since apiGroup rules are specified? https://github.com/jetstack/cert-manager/blob/87989dbfe35bed99a9e031c71ad3a7d49030a8bf/deploy/charts/cert-manager/templates/webhook-mutating-webhook.yaml#L26-L28 https://github.com/jetstack/cert-manager/blob/87989dbfe35bed99a9e031c71ad3a7d49030a8bf/deploy/charts/cert-manager/templates/webhook-validating-webhook.yaml#L36-L38

  • Disable checking for each container

    Currently, checks are disabled via metadata annotations. However, this method does not allow you to disable checks on a per-container basis. Is there a way to solve this?

  • Cluster linter messages point to missing docs section

    I have a cluster with two helm charts on it (almost default values):

    1. ingress-nginx/ingress-nginx
    2. jetstack/cert-manager

    The cluster linter from the dashboard is pointing out some problems for future upgrades (I know there are duplicates, but this is the exact output that I'm getting):

    All links point to missing anchors inside the target page, and I can't find much online about the given messages (I even checked the TimeoutSeconds values of my resources, but they seem to be set to 1).

    Do you have any suggestion?

    Thank you for your time.

  • Runtime error over latest tag on cluster

    I got the following error:

    $ clusterlint run
    
    panic: runtime error: invalid memory address or nil pointer dereference
    [signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x11862b5]
    
    goroutine 193 [running]:
    github.com/digitalocean/clusterlint/vendor/github.com/docker/distribution/reference.WithTag(0x0, 0x0, 0x13f96c7, 0x6, 0x0, 0x1340600, 0x1, 0xc0010ae000)
    	/home/joanne/go/src/github.com/digitalocean/clusterlint/vendor/github.com/docker/distribution/reference/reference.go:280 +0x3f5
    github.com/digitalocean/clusterlint/vendor/github.com/docker/distribution/reference.TagNameOnly(0x0, 0x0, 0x0, 0x0)
    	/home/joanne/go/src/github.com/digitalocean/clusterlint/vendor/github.com/docker/distribution/reference/normalize.go:130 +0xa5
    github.com/digitalocean/clusterlint/checks/basic.(*latestTagCheck).checkTags(0x2285958, 0xc000bf5e40, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0, 0xc000ca83e0, 0x18, ...)
    	/home/joanne/go/src/github.com/digitalocean/clusterlint/checks/basic/latest_tag.go:70 +0x164
    github.com/digitalocean/clusterlint/checks/basic.(*latestTagCheck).Run(0x2285958, 0xc000517280, 0x2268080, 0x1010000015c58c0, 0x226d180, 0xc0000e86e8, 0x44a86b)
    	/home/joanne/go/src/github.com/digitalocean/clusterlint/checks/basic/latest_tag.go:57 +0x14f
    github.com/digitalocean/clusterlint/checks.Run.func1(0x8, 0x148ace0)
    	/home/joanne/go/src/github.com/digitalocean/clusterlint/checks/run_checks.go:51 +0xc3
    github.com/digitalocean/clusterlint/vendor/golang.org/x/sync/errgroup.(*Group).Go.func1(0xc0004c9c20, 0xc000519180)
    	/home/joanne/go/src/github.com/digitalocean/clusterlint/vendor/golang.org/x/sync/errgroup/errgroup.go:57 +0x57
    created by github.com/digitalocean/clusterlint/vendor/golang.org/x/sync/errgroup.(*Group).Go
    	/home/joanne/go/src/github.com/digitalocean/clusterlint/vendor/golang.org/x/sync/errgroup/errgroup.go:54 +0x66
    

    It looks like clusterlint errored on a latest tag because clusterlint run ignore-checks latest-tag ran successfully.

    The problem looks like it occurs because of a pod on my cluster that refers to a latest tag:

    apiVersion: v1
    kind: Pod
    metadata:
      creationTimestamp: "2019-10-24T08:47:41Z"
      generateName: jaeger-698f8b8cf4-
      labels:
        app: jaeger
        app.kubernetes.io/component: all-in-one
        app.kubernetes.io/name: jaeger
        pod-template-hash: 698f8b8cf4
      name: jaeger-698f8b8cf4-nmjcg
    ...
    ...
    ...
    status:
      conditions:
      - lastProbeTime: null
        lastTransitionTime: "2019-10-24T08:47:41Z"
        status: "True"
        type: Initialized
      - lastProbeTime: null
        lastTransitionTime: "2019-10-24T08:47:59Z"
        status: "True"
        type: Ready
      - lastProbeTime: null
        lastTransitionTime: "2019-10-24T08:47:59Z"
        status: "True"
        type: ContainersReady
      - lastProbeTime: null
        lastTransitionTime: "2019-10-24T08:47:41Z"
        status: "True"
        type: PodScheduled
      containerStatuses:
      - containerID: docker://1e012a3f85a056a0674877ce93fdb4ad54bc6a6151e58611f7058739f270cab0
        image: jaegertracing/all-in-one:latest
        imageID: docker-pullable://jaegertracing/all-in-one@sha256:4cb2598b80d4f37b1d66fbe35b2f7488fa04f4d269e301919e8c45526f2d73c3
        lastState: {}
        name: jaeger
        ready: true
        restartCount: 0
        state:
          running:
            startedAt: "2019-10-24T08:47:45Z"
      hostIP: 192.168.176.31
      phase: Running
      podIP: 192.168.181.143
      qosClass: Burstable
      startTime: "2019-10-24T08:47:41Z"
    

    See status.containerStatuses[].image for where the problem is.

    $ kubectl version
    
    Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.8-eks-b7174d", GitCommit:"b7174db5ee0e30c94a0b9899c20ac980c0850fc8", GitTreeState:"clean", BuildDate:"2019-10-18T17:56:01Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
    
  • docs: fix anchor for resource-requirements

    When an error for resource-requirements is found, DO links to https://docs.digitalocean.com/products/kubernetes/resources/clusterlint-errors/#resource-requirements; however, that anchor is not found due to the heading of the section. This PR should fix the issue.

  • Add the ability to run in-cluster

    Following https://github.com/digitalocean/clusterlint/issues/128

    Until now, clusterlint was designed to be run locally, using a kubeconfig file to access the Kubernetes API. But some users may want to run it in-cluster, for example as a CronJob.

    Note to reviewers:

    • I'm a new contributor and it's 12am here 😛 so feel free to tell me if I missed anything, especially for testing and documentation
    • I was wondering if an in-cluster CLI flag is the right approach, or if we should just fall back to in-cluster config when the kubeconfig path is empty?
    • Should I add an example manifest with a ClusterRole for running clusterlint in-cluster?
    • I noticed this project does not have a Docker image; should I create one?
  • Admission control webhook check should check apiGroups

    The admission control webhook check in the doks group will currently throw an error for webhooks that apply only to CRDs, but such webhooks would never actually cause a problem for DOKS upgrades since they won't prevent pods from starting. The admission control webhook check should ignore any webhook configuration that doesn't apply to resources in the v1 or apps/v1 apiGroups.

  • Add node name check: Checks for pods which use node name in the node selector.

    Checks if the node selector in the pod spec uses the label key kubernetes.io/hostname.

    It will emit an error to the user and will be a DOKS-specific check.

    Context: On upgrade of a cluster on DOKS, the worker nodes' hostnames change. So, if a user's pod spec relies on the hostname to schedule pods on specific nodes, pod scheduling will fail after the upgrade.

    As discussed, the check can be parallelized later if iterating over the pod list turns out to be less performant.

  • Add resource requirements check to doks group

    The resource-requirements check is in the basic group right now. However, many DOKS users face resource contention issues because they may not have followed the best practice of setting resource limits on their pods. We currently run all the checks in the doks group before an upgrade. We can show this warning to users if this check is added to the doks group as well.

  • upgrade to 1.22.7-do.0, incompatible ingress not detected

    Dear DO, I upgraded my cluster on DO, and the ingress/load balancer failed after the upgrade.

    The incompatibility was not detected by clusterlint; I expect other customers will also run into the same issue.

    See https://stackoverflow.com/questions/70908774/nginx-ingress-controller-fails-to-start-after-aks-upgrade-to-v1-22/70974010 for background; my ingress was also 0.34.1.

    I also updated cert-manager as it had some problems too, but I do not have many details on it; the error was: error registering secret controller: no matches for kind "MutatingWebhookConfiguration" in version "admissionregistration.k8s.io/v1beta1"

  • Bump github.com/docker/distribution from 2.7.1+incompatible to 2.8.0+incompatible

    Bumps github.com/docker/distribution from 2.7.1+incompatible to 2.8.0+incompatible.

    Release notes

    Sourced from github.com/docker/distribution's releases.

    v2.8.0

    registry 2.8.0

    Welcome to the v2.8.0 release of registry!

    The 2.8.0 registry release has been a long time overdue. This is the first step towards the last 2.x release. No further active development will continue on the 2.x branch. Security vulnerability patches to 2.x might be considered, but all active development will be focused on the v3 release due in 2022. This release includes a security vulnerability fix along with a few minor bug fixes and improvements in documentation and CI.

    See changelog below for full list of changes.

    Bugfixes

    • Close the io.ReadCloser from storage driver #3370
    • Remove empty Content-Type header #3297
    • Make ipfilteredby not required in cloudfront storage middleware #3088

    Features

    • Add reference.ParseDockerRef utility function #3002

    CI build

    • First draft of actions based ci #3347
    • Fix vndr and check #3001
    • Improve code quality by adding linter checks #3385

    Documentation

    • Add redirect for old URL #3197
    • Fix broken table #3073
    • Adding deprecated schema v1 instructions #2987
    • Change should to must in v2 spec (#3495)

    Storage drivers

    • S3 Driver: add support for ceph radosgw #3119

    Security

    Changes

    • Prepare for v2.8.0 release (#3552)
      • d5d89a46 Make this releaes a beta release first.
      • 1ddad0ba Apply suggestions from code review

    ... (truncated)

    Commits
    • dcf6639 Update README so the release pipeline works properly.
    • 212b38e Merge pull request #3552 from milosgajdos/v2.8.0-release
    • 359b97a Merge pull request #3568 from crazy-max/2.8-artifacts
    • d5d89a4 Make this releaes a beta release first.
    • 6241e09 [2.8] Release artifacts
    • 1840415 Merge pull request #3565 from crazy-max/2.8-gha
    • 65ca39e release workflow
    • 1ddad0b Apply suggestions from code review
    • 3960a56 Prepare for v2.8.0 release
    • 3b7b534 Merge pull request from GHSA-qq97-vm5h-rrhg
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

  • admission-controller-webhook-replacement check needs more details

    This check just outputs that the webhook is configured incorrectly and can cause problems with DOKS cluster upgrades. But adding more details about how the webhook is configured wrong, or what config changes can be made to fix it, will help users more.

    Context: https://kubernetes.slack.com/archives/CCPETNUCA/p1598549516051000

    This check could also use more test cases that cause it to fail. These also serve as documentation for anyone trying to understand how their webhook config can cause the check to fail.

  • Admission control webhook check in the "basic" group

    We currently have a variety of webhook checks in the doks group, since various webhook configurations can be problematic for DOKS upgrades. However, there are also some generic best practices around admission control webhooks, documented in the upstream docs. For example, it's a generic best practice to set timeouts to small values (definitely less than 30 seconds, since that's the default apiserver request timeout).

    We should build some generic webhook best practice checks that can be included in the basic group as well as the doks group.
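
    As a rough sketch of the configuration such a check would favor (all names here are hypothetical, and this only illustrates the upstream guidance, not the check itself), a webhook would set a short timeout and scope its rules narrowly:

    apiVersion: admissionregistration.k8s.io/v1
    kind: ValidatingWebhookConfiguration
    metadata:
      name: example-webhook                # hypothetical name
    webhooks:
    - name: validate.example.com           # hypothetical webhook name
      timeoutSeconds: 5                    # well under the 30s apiserver default
      failurePolicy: Ignore                # fail open so a broken webhook does not block workloads
      rules:
      - apiGroups: ["apps"]                # scoped narrowly rather than "*"
        apiVersions: ["v1"]
        resources: ["deployments"]
        operations: ["CREATE", "UPDATE"]
      clientConfig:
        service:
          name: example-webhook-svc        # hypothetical service backing the webhook
          namespace: example
          path: /validate
      sideEffects: None
      admissionReviewVersions: ["v1"]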

  • Subcommand to lint manifests before they are deployed onto a cluster

    Right now, clusterlint analyzes the workloads after they have been deployed on a managed/self hosted platform. This is great because:

    • Users may not deploy everything into a cluster from one place, and actual deployments can diverge from manifests.
    • It can be used to identify problems that can occur on a cluster even if manifests were alright (example: not setting resource requests and limits)

    Adding a feature to lint manifests before attempting to deploy the workloads on a cluster can be useful to prevent bad configs. This will be particularly useful if there is CI/CD in place to automatically deploy the workloads after making sure that all the configs are fine. It can act as a sanity check before the config is merged into an SCM repository.

  • "Latest-tag" should be upgrade to "non-fixed-tag check"

    We just found that one of our users used the "nightly" tag in his manifest, so I think the check should cover any non-fixed image version, not just latest.
