Manage large fleets of Kubernetes clusters

Introduction

Fleet is GitOps at scale. Fleet is designed to manage up to a million clusters. It's also lightweight enough that it works great for a single cluster too, but it really shines when you get to a large scale. By large scale we mean either a lot of clusters, a lot of deployments, or a lot of teams in a single organization.

Fleet can manage deployments from Git of raw Kubernetes YAML, Helm charts, Kustomize, or any combination of the three. Regardless of the source, all resources are dynamically turned into Helm charts, and Helm is used as the engine to deploy everything in the cluster. This gives a high degree of control, consistency, and auditability. Fleet focuses not only on the ability to scale, but also on giving you a high degree of control and visibility into exactly what is installed on the cluster.
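As an illustration of how those sources are expressed, a fleet.yaml placed in a directory of the repo tells Fleet how to render that directory. Here is a minimal sketch with placeholder values (the chart repo, chart name, and kustomize path below are made up for illustration):

defaultNamespace: sample-app
helm:
  # Deploy a chart from an external repo; omit this block to deploy raw YAML from this directory
  repo: "https://charts.example.com"
  chart: sample-chart
  version: "1.0.0"
  values:
    replicaCount: 2
kustomize:
  # Optionally post-process the rendered output with a kustomization from the repo
  dir: ./overlays/prod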

Quick Start

Who needs documentation, let's just run this thing!

Install

Get Helm if you don't have it. Helm 3 is just a CLI and won't do bad, insecure things to your cluster.

brew install helm

Install the Fleet Helm charts (there are two because we separate out the CRDs for ultimate flexibility).

VERSION=0.3.3
helm -n fleet-system install --create-namespace --wait \
    fleet-crd https://github.com/rancher/fleet/releases/download/v${VERSION}/fleet-crd-${VERSION}.tgz
helm -n fleet-system install --create-namespace --wait \
    fleet https://github.com/rancher/fleet/releases/download/v${VERSION}/fleet-${VERSION}.tgz

Add a Git Repo to watch

Change spec.repo to your Git repo of choice. Kubernetes manifest files to be deployed should live in /manifests in your repo.

cat > example.yaml << "EOF"
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: sample
  # This namespace is special and auto-wired to deploy to the local cluster
  namespace: fleet-local
spec:
  # Everything from this repo will be run in this cluster. You trust me, right?
  repo: "https://github.com/fleet-demo/simple"
EOF

kubectl apply -f example.yaml

Get Status

Get the status of what Fleet is doing:

kubectl -n fleet-local get fleet

You should see something like this created in your cluster:

kubectl get deploy frontend
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
frontend   3/3     3            3           116m

Enjoy and read the docs.

Comments
  • fleet-agent cannot find secret in local cluster in Rancher single-install setup

    Running rancher:master-0f691dc70f86bbda3d6563af11779300a6191584-head in single-install mode.

    The following line floods the log of the pod fleet-agent-7dfdfd5846-xjw96

    time="2020-09-15T00:18:30Z" level=info msg="Waiting for secret fleet-clusters-system/c-09ea1d541bf704218ec6fc9ab2d60c0392543af636c1c3a90793946522685 for request-2vz49: secrets \"c-09ea1d541bf704218ec6fc9ab2d60c0392543af636c1c3a90793946522685\" not found"
    

    gz#14319

  • gitjob:v0.1.11 in 0.3.2

    The gitjob image v0.1.11 in 0.3.2 does not have a uid entry for 1000 in /etc/passwd. This results in an error when gitjob attempts to read the git repository.

    ~The workaround is to patch the gitjob deployment with a securityContext.runAsUser. It appears the nobody user works...~

      securityContext:
        runAsUser: 65534
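
    If useful, here is a sketch of applying that workaround with kubectl, assuming a default install where the gitjob deployment runs in the fleet-system namespace (kubectl patch accepts the patch in YAML or JSON; 65534 is the nobody user):

    kubectl -n fleet-system patch deployment gitjob --patch '
    spec:
      template:
        spec:
          securityContext:
            runAsUser: 65534'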
    
  • SOPS encrypted resources

    Hi,

    I'm having trouble using SOPS-encrypted resources and am getting this error from the fleet-agent container:

    time="2020-11-05T18:25:04Z" level=info msg="getting history for release test-manifests-repo-fleet"
    time="2020-11-05T18:25:04Z" level=error msg="error syncing 'cluster-stores-test-01-83b14484851f/test-manifests-repo-fleet': handler bundle-deploy: failed to decrypt with sops: Error getting data key: 0 successful groups required, got 0, requeuing"
    

    I have this project installed on the agent cluster: https://github.com/isindir/sops-secrets-operator. The idea is that the client/leaf cluster will decrypt (sops/KMS) and generate the real secrets.

    I take it Fleet is also trying to do this. Can you advise on how I can either configure the fleet-agent with the SOPS details, or bypass SOPS processing in Fleet so the client-side SOPS operator can handle it?

    Many Thanks

    gz#15522

  • disable sops check and decryption

    Fixes https://github.com/rancher/fleet/issues/144

    Ignores SOPS-related resources (allowing a downstream operator to handle them).

    Currently there is no way to configure SOPS via Fleet (e.g. to pass in cloud KMS details), so it would be good to allow downstream handling, e.g. with https://github.com/isindir/sops-secrets-operator

  • fleet-agent in state `ErrApplied` with the following reason: `another operation (install/upgrade/rollback) is in progress`

    I currently have my fleet-agent in state ErrApplied with the following reason:

    another operation (install/upgrade/rollback) is in progress.
    

    I had two fleet-agents under "Installed Apps": One in ns cattle-fleet-system and the other in fleet-system

    Not sure how I came to that state, but I recently upgraded from Rancher 2.5.11 to 2.6.2.

    The app in cattle-fleet-system was in status Pending-Upgrade.

    I deleted both apps and performed an "upgrade" of the fleet app on the local cluster. The fleet-agent now appears to be deployed properly.
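
    For what it's worth, the "another operation (install/upgrade/rollback) is in progress" message comes from Helm when a release is stuck in a pending state. A general Helm troubleshooting sketch, not a Fleet-specific fix (namespaces and release names will differ):

    # List Helm release secrets stuck in a pending upgrade
    kubectl get secrets -A -l owner=helm,status=pending-upgrade
    # Deleting the stuck release secret (or rolling the release back) lets the next upgrade proceed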

  • No matching host key type found. Their offer: ssh-rsa

    Hello,

    When we upgraded Rancher from 2.6.3 to 2.6.4 (which also upgrades Fleet to 0.3.9), we ran into an issue with Fleet. Everything was fine on 2.6.3, but since the upgrade we see the following message on every GitRepo:

    git ls-remote ssh://[email protected]:8888/PP0/mygit.git refs/heads/master error: exit status 128, detail: Unable to negotiate with x.x.x.x port 8888: no matching host key type found. Their offer: ssh-rsa
    fatal: Could not read from remote repository.
    
    Please make sure you have the correct access rights
    and the repository exists.
    

    Regards

  • fleet-agent cleanup continually deleting releases

    The cleanup loop in fleet-agent isn't able to properly match releases to bundles. For a given release, it looks for the bundle deployment associated with that release. If the release has a different name than fleet-agent expects (based on the bundle deployment), then fleet-agent deletes the release.

    The problem is that then a release with the exact same name is created and the next time the cleanup loop runs, fleet-agent will delete the release again.

    For example, a bundle deployment with name mcc-anupamafinalrcrke2-managed-system-upgrade creates a release with name mcc-anupamafinalrcrke2-managed-system-upgrade-c-407d2.

    It seems that this function should look at status.release as well when trying to determine the name of the release from a bundle deployment.

  • Helm Target Customization Repo/Version Override

    Fix #699 Fix #899

    This commit fixes an issue that stopped versions and repos in Helm specs from being overridden in target customizations. It applies root repos and versions to customizations where appropriate so that target customizations resolve the correct chart.

    Test

    To test this pull request, I used the following configurations: https://raw.githubusercontent.com/romejoe/fleet-test/main/multi-chart-test/fleet.yaml https://raw.githubusercontent.com/romejoe/fleet-test/main/version-test/fleet-config/fleet.yaml

    The first test verifies that if the entire Helm reference (repo, chart, and version) is specified in a target customization, the proper chart is deployed to those targets.

    The second test verifies that if only a version is specified, the updated version will be used. It also verifies that if the chart is overridden, the appropriate chart is still used.

    Additional Information

    Tradeoff

    The only potential trade-off is that if the user specifies a repo, chart, and version in the root Helm spec and then overrides only the chart property in a customization, the target customization will treat the chart as a relative path. The only time this could cause an issue is if the user changes the chart in the customization intending to select a different chart within the same repo. That is probably out of scope, though; the user can still do it, they just have to supply at least a repo or version in the customization as well.
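
    For illustration, a sketch of the kind of fleet.yaml this change enables; the repo URL, chart name, versions, and cluster labels are placeholders:

    helm:
      repo: "https://charts.example.com"
      chart: sample-chart
      version: "1.0.0"
    targetCustomizations:
    - name: canary
      clusterSelector:
        matchLabels:
          env: canary
      helm:
        # With this fix, overriding only the version still resolves against the root repo and chart
        version: "2.0.0"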

  • would it be possible to concatenate label value with global labels now?

    Now that rancher/fleet#325 and rancher/fleet#152 are marked as resolved, would this be possible to achieve for non-Helm-based resources?

    We are adding a series of ExternalSecrets on each new cluster that is spun up. Unfortunately we have not yet found a way to automatically register a new cluster in HashiCorp Vault (that remains a manual step), but we would like the ExternalSecrets to be formed properly, reflecting the cluster and role names that are based on the cluster name.

    So essentially, we need to concatenate a string with the value of the cluster-display-name label. The following is a pseudo-code example, not intended to work. If the cluster label is:

    global: 
      fleet: 
        clusterLabels.management.cattle.io/cluster-display-name: gke_some-cluster-external
    

    and I want to add something like "-role" to the cluster name as a value to a helm attribute:

    apiVersion: 'kubernetes-client.io/v1'
    kind: ExternalSecret
    metadata:
      name: jaeger-operator-jaeger
    spec:
      backendType: vault
      vaultMountPoint: $(global.fleet.clusterLabels.management.cattle.io/cluster-display-name)-external-secrets
      vaultRole: $(global.fleet.clusterLabels.management.cattle.io/cluster-display-name)-role
      kvVersion: 2
      data:
      - name: ES_USERNAME
        key: infra-secrets/data/elastic-cloud
        property: ES_USERNAME
    

    to achieve something like this as the final result that gets applied to the cluster (see the vaultMountPoint and vaultRole keys):

    apiVersion: 'kubernetes-client.io/v1'
    kind: ExternalSecret
    metadata:
      name: jaeger-operator-jaeger
    spec:
      backendType: vault
      vaultMountPoint: gke_some-cluster-external-secrets
      vaultRole: gke_some-cluster-role
      kvVersion: 2
      data:
      - name: ES_USERNAME
        key: infra-secrets/data/elastic-cloud
        property: ES_USERNAME
    
  • how to avoid  Modified(1) [Bundle xyz] statefulset.apps xyz extra;

    If I deploy the Helm chart https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack, the included Prometheus operator installs two StatefulSets. After installing we see this:

    cluster-79fd2a35037e   16/17           2/2           xyz      2021-04-07T08:36:12Z   Modified(1) [Bundle mo-lpg-stack]; statefulset.apps mo-logging-monitoring/alertmanager-mo-lpg-stack-kube-promethe-alertmanager extra; statefulset.apps mo-logging-monitoring/prometheus-mo-lpg-stack-kube-promethe-prometheus extra
    cluster-86aff2d3b822   16/17           2/2           xyz      2021-04-07T08:29:04Z   Modified(1) [Bundle mo-lpg-stack]; statefulset.apps mo-logging-monitoring/alertmanager-mo-lpg-stack-kube-promethe-alertmanager extra; statefulset.apps mo-logging-monitoring/prometheus-mo-lpg-stack-kube-promethe-prometheus extra
    

    We couldn't avoid this with a diff patch. The bundle state looks like this:

        - bundleState: Modified
          modifiedStatus:
          - apiVersion: apps/v1
            delete: true
            kind: StatefulSet
            name: alertmanager-mo-lpg-stack-kube-promethe-alertmanager
            namespace: mo-logging-monitoring
          - apiVersion: apps/v1
            delete: true
            kind: StatefulSet
            name: prometheus-mo-lpg-stack-kube-promethe-prometheus
            namespace: mo-logging-monitoring
          name: fleet-market-stage/cluster-f501e0eb409b
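
    For reference, the fleet.yaml knob usually tried for this is diff.comparePatches; a sketch using the names from the status above, although as this issue reports it did not suppress the operator-created "extra" StatefulSets here:

    diff:
      comparePatches:
      - apiVersion: apps/v1
        kind: StatefulSet
        namespace: mo-logging-monitoring
        name: alertmanager-mo-lpg-stack-kube-promethe-alertmanager
        jsonPointers:
        - /spec/replicas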
    
  • Sprig Templating for Helm Values with Inputs from Cluster Resource

    This PR introduces the ability for fleet.yaml's helm.values object to contain Go template strings in either the keys or the values. The template context also includes the cluster labels (for convenient migration for existing global.fleet.clusterLabels.FOO users).

    Cluster-specific template variables can be added to the Cluster CR under a new optional key, spec.templateContext.
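
    A rough sketch of what this could look like in fleet.yaml; the delimiters and context field names here are assumptions for illustration, and the authoritative syntax is whatever the merged PR documents:

    helm:
      values:
        vaultMountPoint: '${ index .ClusterLabels "management.cattle.io/cluster-display-name" }-external-secrets'
        vaultRole: '${ index .ClusterLabels "management.cattle.io/cluster-display-name" }-role'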

    Related issues/PRs

    • rancher/fleet#507 but adds the per-cluster custom template context
    • rancher/fleet#375
    • rancher/fleet#355
  • Bump bci/bci-base from 15.4.27.14.23 to 15.4.27.14.26 in /package

    Bumps bci/bci-base from 15.4.27.14.23 to 15.4.27.14.26.

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
  • Bump bci/golang from 1.19-18.50 to 1.19-19.8

    Bumps bci/golang from 1.19-18.50 to 1.19-19.8.

  • Bump github.com/go-git/go-billy/v5 from 5.3.1 to 5.4.0

    Bumps github.com/go-git/go-billy/v5 from 5.3.1 to 5.4.0.

    Release notes

    Sourced from github.com/go-git/go-billy/v5's releases.

    v5.4.0

    What's Changed

    Full Changelog: https://github.com/go-git/go-billy/compare/v5.3.1...v5.4.0

    Commits
    • 1b88f62 Merge pull request #26 from cuishuang/master
    • 4e5a841 Merge pull request #28 from pjbgf/fix-go-git-data-race
    • 38b02ce tests: Fix tests in windows
    • 0a54206 Fix go-git data races whilst running tests
    • 027fa5a build: Bump dependencies
    • 007675e build: Update GitHub workflows
    • a71b2d8 fix some typos
    • 7ab80d7 Merge pull request #17 from tjamet/feat/walk
    • 213e20d utils: Walk, use os.FileInfo
    • e0768be utils: Walk, minor style changes
    • Additional commits viewable in compare view

  • How to do force update at particular interval of time through fleet yaml file

    Is there an existing issue for this?

    • [X] I have searched the existing issues

    Current Behavior

    We are currently using the Fleet GitOps pipeline for our organization's project. I understand that Fleet syncs the Git repo with the cluster state at some interval during the day to keep the environments consistent. In our use case, however, we want to re-sync the Fleet-created resources with the cluster at a fixed interval, e.g. every 2 or 4 hours, defined in a YAML file. We create all Fleet resources by logging into the Rancher dashboard under the Continuous Delivery option, so we do not get an option to schedule a sync every 2 or 4 hours. A Force Update option is available after creation, but we want to trigger it via YAML and on an interval.

    Please help

    Is there a way to define a cron-like force update in the fleet YAML file?

    Expected Behavior

    There should be a YAML file definition to force an update.

    Steps To Reproduce

    n/a

    Environment

    - Architecture:
    - Fleet Version:
    - Cluster:
      - Provider:
      - Options:
      - Kubernetes Version:
    

    Logs

    n/a
    

    Anything else?

    n/a
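
    For context, two GitRepo spec fields relate to sync cadence, sketched below; whether they cover a timed force update is exactly the question in this issue, and the field names (pollingInterval, forceSyncGeneration) are assumed from the GitRepo CRD rather than taken from this report:

    apiVersion: fleet.cattle.io/v1alpha1
    kind: GitRepo
    metadata:
      name: sample
      namespace: fleet-local
    spec:
      repo: "https://github.com/fleet-demo/simple"
      # How often Fleet polls the repo for new commits
      pollingInterval: 2h
      # Bumping this integer forces a redeploy even without a new commit
      forceSyncGeneration: 1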

  • Bump sigs.k8s.io/cli-utils from 0.33.0 to 0.34.0

    Bumps sigs.k8s.io/cli-utils from 0.33.0 to 0.34.0.

    Release notes

    Sourced from sigs.k8s.io/cli-utils's releases.

    v0.34.0

    Changelog

    • 9d2cc31 chore: Bump go to v1.18
    • 31fa3de chore: Bump golangci-lint version
    • 49981f7 chore: Remove deprecated linters
    • 10ac028 chore: fix linter warnings
    • c494d01 chore: Fix linting issues found in golangci-lint v1.50.0
    • 7f1d7db chore: remove file
    • b07f05f chore: update dependencies to Kubernetes v1.25.3
    Commits
    • 135cc3f Merge pull request #606 from ash2k/ash2k/bump-deps
    • 7f1d7db chore: remove file
    • 10ac028 chore: fix linter warnings
    • b07f05f chore: update dependencies to Kubernetes v1.25.3
    • d2e7237 Merge pull request #609 from rquitales/bump-go
    • 9d2cc31 chore: Bump go to v1.18
    • 57ba470 Merge pull request #608 from rquitales/update-golangci-lint
    • 49981f7 chore: Remove deprecated linters
    • c494d01 refactor: Fix linting issues found in golangci-lint v1.50.0
    • 31fa3de chore: Bump golangci-lint version
    • See full diff in compare view

  • Bump sigs.k8s.io/controller-runtime from 0.12.3 to 0.14.1

    Bumps sigs.k8s.io/controller-runtime from 0.12.3 to 0.14.1.

    Release notes

    Sourced from sigs.k8s.io/controller-runtime's releases.

    v0.14.1

    Changes since v0.14.0

    :bug: Bug Fixes

    Full Changelog: https://github.com/kubernetes-sigs/controller-runtime/compare/v0.14.0...v0.14.1

    v0.14.0

    Changes since v0.13.1

    :warning: Breaking Changes

    • Add Get functionality to SubResourceClient (#2094)
    • Allow configuring RecoverPanic for controllers globally (#2093)
    • Add client.SubResourceWriter (#2072)
    • Support registration and removal of event handler (#2046)
    • Update Kubernetes dependencies to v0.26 (#2043, #2087)
    • Zap log: Default to RFC3339 time encoding (#2029)
    • cache.BuilderWithOptions inherit options from caller (#1980)

    :sparkles: New Features

    • Builder: Do not require For (#2091)
    • support disable deepcopy on list funcion (#2076)
    • Add cluster.NewClientFunc with options (#2054)
    • Tidy up startup logging of kindWithCache source (#2057)
    • Add function to get reconcileID from context (#2056)
    • feat: add NOT predicate (#2031)
    • Allow to provide a custom lock interface to manager (#2027)
    • Add tls options to manager.Options (#2023)
    • Update Go version to 1.19 (#1986)

    :bug: Bug Fixes

    • Prevent manager from getting started a second time (#2090)
    • Missing error log for in-cluster config (#2051)
    • Skip custom mutation handler when delete a CR (#2049)
    • fix: improve semantics of combining cache selectorsByObject (#2039)
    • Conversion webhook should not panic when conversion request is nil (#1970)

    :seedling: Others

    • Prepare for release 0.14 (#2100)
    • Generate files and update modules (#2096)
    • Bump github.com/onsi/ginkgo/v2 from 2.5.1 to 2.6.0 (#2097)
    • Bump golang.org/x/time (#2089)
    • Update OWNERS: remove inactive members, promote fillzpp sbueringer (#2088, #2092)
    • Default ENVTEST version to a working one (1.24.2) (#2081)
    • Update golangci-lint to v1.50.1 (#2080)
    • Bump go.uber.org/zap from 1.23.0 to 1.24.0 (#2077)
    • Bump golang.org/x/sys from 0.2.0 to 0.3.0 (#2078)
    • Ignore Kubernetes Dependencies in Dependabot (#2071)

    ... (truncated)

    Commits
    • 84c5c9f 🐛 controllers without For() fail to start (#2108)
    • ddcb99d Merge pull request #2100 from vincepri/release-0.14
    • 69f0938 Merge pull request #2094 from alvaroaleman/subresoruce-get
    • 8738e91 Merge pull request #2091 from alvaroaleman/no-for
    • ca4b4de Merge pull request #2096 from lucacome/generate
    • 5673341 Merge pull request #2097 from kubernetes-sigs/dependabot/go_modules/github.co...
    • 7333aed :seedling: Bump github.com/onsi/ginkgo/v2 from 2.5.1 to 2.6.0
    • d4f1e82 Generate files and update modules
    • a387bf4 Merge pull request #2093 from alvaroaleman/recover-panic-globally
    • da7dd5d :warning: Allow configuring RecoverPanic for controllers globally
    • Additional commits viewable in compare view
