Validation of best practices in your Kubernetes clusters

Polaris Logo

Best Practices for Kubernetes Workload Configuration

Fairwinds' Polaris keeps your clusters sailing smoothly. It runs a variety of checks to ensure that Kubernetes pods and controllers are configured using best practices, helping you avoid problems in the future.

Polaris can be run in three different modes:

  • As a dashboard, so you can audit what's running inside your cluster.
  • As an admission controller, so you can automatically reject workloads that don't adhere to your organization's policies.
  • As a command-line tool, so you can test local YAML files, e.g. as part of a CI/CD process.
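
For example, a minimal CI sketch (GitHub Actions syntax; the image tag, paths, and directory names here are illustrative and should be checked against the documentation):

    # .github/workflows/polaris.yaml (illustrative)
    name: polaris-audit
    on: [push]
    jobs:
      audit:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v3
          # Run the CLI against local manifests; failures are printed in the job log
          - name: Audit local YAML with Polaris
            run: |
              docker run --rm -v "$PWD:/workspace" \
                quay.io/fairwinds/polaris:3.2.1 polaris audit \
                  --audit-path /workspace/manifests \
                  -f pretty --only-show-failed-tests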

Polaris Architecture

Want to learn more? Reach out on the Slack channel (request invite), send an email to [email protected], or join us for office hours on Zoom.

Documentation

Check out the documentation at docs.fairwinds.com.

Integration with Fairwinds Insights

Fairwinds Insights

Fairwinds Insights is a platform for auditing Kubernetes clusters and enforcing policy. If you'd like to:

  • manage Polaris across a fleet of clusters
  • track findings over time
  • send results to services like Slack and Datadog
  • add additional checks from tools like Trivy, Goldilocks, and OPA

you can sign up for a free account here.

Contributing

PRs welcome! Check out the Contributing Guidelines and Code of Conduct for more information.

Further Information

A history of changes to this project can be viewed in the Changelog.

If you'd like to learn more about Polaris, or if you'd like to speak with a Kubernetes expert, you can contact [email protected] or visit our website.


Polaris Dashboard

  • Design updated configuration schema that will support all of v1

    An updated configuration schema should be designed that supports all of the existing validations along with all validations planned before v1, including:

    • Pull policy always (warning)
    • No host networking
    • No host port
    • No host IPC
    • No host pid
    • Restricting kernel capabilities by default like SYS_ADMIN (warning)
    • No privileged containers (warning)
    • No root user (warning)
    • Should use read only root filesystem (warning)
    • Don't mount /var/run/docker.sock

    As is likely obvious here, this will also need some way to differentiate between errors and warnings.
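
    A rough sketch of what such a schema might look like, with a per-check severity so errors and warnings can be told apart (the check names and values here are illustrative, not a final design):

    checks:
      hostNetworkSet: error
      hostPortSet: error
      hostIPCSet: error
      hostPIDSet: error
      dangerousCapabilities: warning      # e.g. SYS_ADMIN
      privilegedContainer: warning
      runAsRootAllowed: warning
      readOnlyRootFilesystemRequired: warning
      pullPolicyNotAlways: warning
      dockerSockMounted: error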

  • Check metadataAndNameMismatched not found

    Installation Process

    Docker

    Polaris Version

    3.2.1

    Expected Behavior

    Expected audit to run

    Actual Behavior

    WARN[0000] An error occurred validating controller:Check metadataAndNameMismatched not found 
    ERRO[0000] Error while running audit on resources: Check metadataAndNameMismatched not found
    

    Steps to Reproduce

    docker run -ti \
      -v "$PWD/pwd" -v ~/github/k8s:/k8s \
      -v ~/.kube/config:/opt/app/config:ro \
      quay.io/fairwinds/polaris:3.2.1 polaris audit \
        --kubeconfig /opt/app/config \
        --audit-path /pwd \
        --config /k8s/polaris-config.yaml \
        -f pretty --only-show-failed-tests
    WARN[0000] An error occurred validating controller:Check metadataAndNameMismatched not found 
    ERRO[0000] Error while running audit on resources: Check metadataAndNameMismatched not found
    

    The config file I am using is copied from here:

    https://raw.githubusercontent.com/FairwindsOps/polaris/master/examples/config.yaml
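
    For what it's worth, that config references checks by name, so a config written for a newer release can name checks that the 3.2.1 binary does not know about. A hedged sketch of the relevant shape (severity values follow the ignore/warning/danger convention; the check names are illustrative):

    checks:
      # present in newer configs, unknown to the 3.2.1 binary -> "Check ... not found"
      metadataAndNameMismatched: ignore
      # long-standing checks resolve fine
      hostNetworkSet: danger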

  • pkg/dashboard: setup basePath as a path prefix in routing

    Awesome project! It works for me with a port-forward, but not with -dashboard-path-prefix. After this change I can load the dashboard with a base path (/polaris/ in my case), but I haven't tested building a Docker image for my cluster yet.

    ~~Btw, I was unable to sign the CLA at https://cla-assistant.io/fairwinds/polaris (404)~~ With ?pullRequest=201 I could sign the CLA.

  • Ignore orphaned pods

    Installation Process

    EKS cluster, version 1.15, using helm.

    Command line:

    $ helm upgrade --install polaris fairwinds-stable/polaris --version 1.0.2 --namespace kube-system -f polaris-values.yaml
    

    polaris-values.yaml contents:

    # Override the tag provided in the chart as the unconstrained "1.0"
    # to be the more constrained "1.0.3" (a specific release).
    image:
      tag: "1.0.3"
    
    # Enable the dashboard over HTTP for internal users
    dashboard:
      enable: true
      ingress:
        enabled: true
        annotations:
          kubernetes.io/ingress.class: nginx-ingress-private
        hosts:
          - "polaris.internal.domain"
    

    Polaris Version

    Version 1.0.3 Docker Image

    $ k -n kube-system get deploy polaris-dashboard -o wide
    NAME                READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                            SELECTOR
    polaris-dashboard   1/1     1            1           15d   dashboard    quay.io/fairwinds/polaris:1.0.3   app=polaris,app.kubernetes.io/instance=polaris,app.kubernetes.io/name=polaris,component=dashboard
    
    

    Expected Behavior

    The dashboard should be working.

    Actual Behavior

    Webpage with contents:

    Error fetching Kubernetes resources
    

    Logs from container:

    $ k -n kube-system logs polaris-dashboard-dcb6c8b9c-9bpkw 
    time="2020-06-04T17:36:42Z" level=info msg="Starting Polaris dashboard server on port 8080"
    time="2020-06-04T17:37:04Z" level=error msg="Cache missed ReplicaSet/prod/telbot-gunicorn-7844c9db9d again"
    time="2020-06-04T17:37:04Z" level=error msg="Error loading controllers from pods: Could not retrieve parent object"
    time="2020-06-04T17:37:04Z" level=error msg="Error fetching Kubernetes resources Could not retrieve parent object"
    

    CURL output of same request:

    $ curl -v -L 'http://polaris.internal.domain/'
    * About to connect() to polaris.internal.domain port 80 (#0)
    *   Trying 10.100.25.113...
    * Connected to polaris.internal.domain (10.100.25.113) port 80 (#0)
    > GET / HTTP/1.1
    > User-Agent: curl/7.29.0
    > Host: polaris.internal.domain
    > Accept: */*
    > 
    < HTTP/1.1 500 Internal Server Error
    < Date: Thu, 04 Jun 2020 17:44:50 GMT
    < Content-Type: text/plain; charset=utf-8
    < Content-Length: 36
    < Connection: keep-alive
    < Server: nginx/1.17.8
    < X-Content-Type-Options: nosniff
    < 
    Error fetching Kubernetes resources
    * Connection #0 to host polaris.internal.domain left intact
    

    Steps to Reproduce

    1. Install with helm.
    2. Open dashboard web interface.
    3. Get the "Error fetching Kubernetes resources" error message

    Additional information

    I'm not sure what further information to provide to help diagnose this issue.

  • Show cluster name/host on dashboard

    Addresses https://github.com/reactiveops/polaris/issues/124

    Uses whatever the user specifies as --cluster-name, falling back to the host named in kubeconfig. Doesn't look like we can get the name field: https://github.com/kubernetes/client-go/issues/530

    Any potential issues with surfacing the host in the dashboard? E.g. could it have basic auth creds?

    Here's what it looks like:

    Screen Shot 2019-06-05 at 4 53 43 PM
  • static content is not loaded when using nginx-ingress with custom base path

    Hi,

    I installed Polaris 1.2 by applying the YAML file. I can access the page, but some of the content is not loaded.

    In a post I saw mention of the parameter --dashboard-base-path="/polaris/", but I did not see how to use it.
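
    For reference, the flag is an argument to the dashboard process itself; a minimal sketch of passing it through the Deployment's container args, assuming the installed version supports the flag (the container name and surrounding fields are illustrative and vary by install method):

    containers:
      - name: dashboard
        image: quay.io/fairwinds/polaris:1.2
        command:
          - polaris
          - dashboard
          - --dashboard-base-path=/polaris/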

    This is my ingress configuration:

    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    metadata:
      annotations:
        kubernetes.io/ingress.class: nginx
        nginx.ingress.kubernetes.io/rewrite-target: "/$1"
        nginx.ingress.kubernetes.io/configuration-snippet: |
          rewrite ^(/polaris)$ $1/ permanent;
      name: polaris
      namespace: polaris
    spec:
      rules:
        - host: hostname
          http:
            paths:
              - path: /polaris/?(.*)
                backend:
                  serviceName: polaris-dashboard
                  servicePort: 80

    Thanks for your help

  • Issue with runAsNonRoot

    Not entirely sure that your check for runAsNonRoot is working, or we misunderstand exactly what it's checking. We have a pod running whose pod-level securityContext is set as below, yet Polaris still says it shouldn't run as root... which it isn't. Since the securityContext at the container level only overrides what is set at the pod level, I would hope that setting it at the pod level is enough. Any ideas?

    securityContext:
      runAsNonRoot: true
      runAsUser: 5000
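
    For context, a minimal sketch of the inheritance being described (the pod name, image, and command are illustrative): a container that sets no securityContext of its own inherits the pod-level values.

    apiVersion: v1
    kind: Pod
    metadata:
      name: example
    spec:
      securityContext:          # pod-level defaults
        runAsNonRoot: true
        runAsUser: 5000
      containers:
        - name: app
          image: busybox
          command: ["sleep", "3600"]
          # no container-level securityContext, so the pod-level settings apply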

  • 🛠  Add GitHub Action

    I added a GitHub Action that makes polaris available as an executable on the GitHub Actions runners.

    The script downloads the specified polaris version (by tag) and links it to the runners' path.

    It is rather verbose and written in a startup-like code style, but it should be good enough for a first version. We already use my action in our CI (https://github.com/mambax/setup-polaris - a clone made before contribution) and it works as planned 🤝

    Some resources: https://docs.github.com/en/actions/creating-actions/metadata-syntax-for-github-actions#runs-for-docker-actions https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#jobsjob_idstepsuses

    Note: while working on this I also opened https://github.com/github/docs/pull/3468 against the GitHub Docs, because I nearly went crazy when my Dockerfile_Action did not work (just as a hint for later).

    Btw: once this is merged on your side and you're happy with it, you can also list it in the GitHub Marketplace.
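
    A sketch of how the action might be wired into a workflow; the action reference, tag, version, and input name below are assumptions based on the description above, not a confirmed interface:

    jobs:
      audit:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v3
          # hypothetical reference and "version" input; check the action's README
          - uses: mambax/setup-polaris@v1
            with:
              version: "4.2.0"
          # polaris is now on the runner's PATH
          - run: polaris audit --audit-path ./manifests -f pretty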

  • Polaris Dashboard and polaris cli output discrepancies

    Installation Process

    The Polaris dashboard was installed with Helm on a Kubernetes cluster. Chart version: "1.1.0"

    Polaris Version

    Polaris version:1.1.1
    

    Expected Behavior

    After setting up the following exemptions in the Polaris ConfigMap, through Helm and a values file:

        - controllerNames:
          - prometheus-prometheus-operator-prometheus
          rules:
          - readinessProbeMissing
          - livenessProbeMissing
        - controllerNames:
          - prometheus-prometheus-operator-prometheus
          rules:
          - notReadOnlyRootFilesystem
    
    

    I can correctly see my Polaris dashboard listing the component as green.

    I expect that running polaris audit -c config (where config contains the exemptions listed above) would give the same green status on all components.

    Actual Behavior

    However, when running polaris audit -c config I get:

      {
         "Name": "prometheus-operator-prometheus",
         "Namespace": "monitoring",
         "Kind": "Prometheus",
         "Results": {},
    ...
           "ContainerResults": [
             {
               "Name": "prometheus",
               "Results": {
    ...
                 "notReadOnlyRootFilesystem": {
                   "ID": "notReadOnlyRootFilesystem",
                   "Message": "Filesystem should be read only",
                   "Success": false,
                   "Severity": "warning",
                   "Category": "Security"
                 },
    ....
             {
               "Name": "prometheus-config-reloader",
               "Results": {
    ...
                 "livenessProbeMissing": {
                   "ID": "livenessProbeMissing",
                   "Message": "Liveness probe should be configured",
                   "Success": false,
                   "Severity": "warning",
                   "Category": "Health Checks"
                 },
    

    Namely, the Polaris dashboard lists the component correctly with the name it has on my cluster: prometheus-prometheus-operator-prometheus

    The audit, for some reason, lists it as prometheus-operator-prometheus. I think it might be due to the length of the controller name or a difference between the chart and CLI versions. Could you confirm?
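
    If it helps while this is investigated, a possible workaround (a sketch only, using the exemption format above under the config's exemptions key) is to list both observed controller names:

    exemptions:
      - controllerNames:
          - prometheus-prometheus-operator-prometheus
          - prometheus-operator-prometheus
        rules:
          - readinessProbeMissing
          - livenessProbeMissing
          - notReadOnlyRootFilesystem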

  • Linux binary doesn't handle top-level commands as documented

    Installation Process

    Downloaded and untarred the tarball and placed the binary on my PATH. Linux Mint 19.3.

    Polaris Version

    Polaris version 0.6.0
    

    Expected Behavior

    The documentation (Usage) describes running top-level commands without dashes, like this:

    polaris version
    

    Actual Behavior

    On Linux this just runs audit:

    polaris version
    # => Full audit output
    

    To run version (or help/dashboard etc...) you need to add dashes:

    polaris --version
    # =>Polaris version 0.6.0
    

    I am not sure how it behaves on Mac, but I think a build config may be missing from the Linux binary?

  • Please provide additional checks like Pod restartPolicy or maxRetries

    Polaris needs some additional validation checks, such as the Pod's restart policy or maxRetries, so that it can monitor the use of these parameters in Kubernetes deployments.

    Reason to add: containers are restarted whenever they fail, which might be due to system issues or other factors, and a pod may keep failing no matter how many times it retries. If we don't set maxRetries on our pod/deployment, the pod will retry forever without succeeding, and those retries consume time and resources.
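
    For context, a minimal sketch of where these knobs already live in Kubernetes objects (these are standard Kubernetes fields, not existing Polaris checks); a check like the one requested would flag specs that leave retries unbounded:

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: example-job
    spec:
      backoffLimit: 4            # cap retries instead of retrying indefinitely
      template:
        spec:
          restartPolicy: Never   # or OnFailure; Jobs do not allow Always
          containers:
            - name: worker
              image: busybox
              command: ["sh", "-c", "exit 1"]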

  • sc/rd 71 add plg link

    • Add persistentpostrun to root cmd and postrun to version cmd
    • Change PLG link
    • Add PLG link to dashboard

    This PR fixes #

    Checklist

    • [x] I have signed the CLA
    • [x] I have updated/added any relevant documentation

    Description

    What's the goal of this PR?

    What changes did you make?

    What alternative solution should we consider, if any?

  • Bump sigs.k8s.io/controller-runtime from 0.13.0 to 0.14.1

    Bumps sigs.k8s.io/controller-runtime from 0.13.0 to 0.14.1.

    Release notes

    Sourced from sigs.k8s.io/controller-runtime's releases.

    v0.14.1

    Changes since v0.14.0

    :bug: Bug Fixes

    Full Changelog: https://github.com/kubernetes-sigs/controller-runtime/compare/v0.14.0...v0.14.1

    v0.14.0

    Changes since v0.13.1

    :warning: Breaking Changes

    • Add Get functionality to SubResourceClient (#2094)
    • Allow configuring RecoverPanic for controllers globally (#2093)
    • Add client.SubResourceWriter (#2072)
    • Support registration and removal of event handler (#2046)
    • Update Kubernetes dependencies to v0.26 (#2043, #2087)
    • Zap log: Default to RFC3339 time encoding (#2029)
    • cache.BuilderWithOptions inherit options from caller (#1980)

    :sparkles: New Features

    • Builder: Do not require For (#2091)
    • support disable deepcopy on list funcion (#2076)
    • Add cluster.NewClientFunc with options (#2054)
    • Tidy up startup logging of kindWithCache source (#2057)
    • Add function to get reconcileID from context (#2056)
    • feat: add NOT predicate (#2031)
    • Allow to provide a custom lock interface to manager (#2027)
    • Add tls options to manager.Options (#2023)
    • Update Go version to 1.19 (#1986)

    :bug: Bug Fixes

    • Prevent manager from getting started a second time (#2090)
    • Missing error log for in-cluster config (#2051)
    • Skip custom mutation handler when delete a CR (#2049)
    • fix: improve semantics of combining cache selectorsByObject (#2039)
    • Conversion webhook should not panic when conversion request is nil (#1970)

    :seedling: Others

    • Prepare for release 0.14 (#2100)
    • Generate files and update modules (#2096)
    • Bump github.com/onsi/ginkgo/v2 from 2.5.1 to 2.6.0 (#2097)
    • Bump golang.org/x/time (#2089)
    • Update OWNERS: remove inactive members, promote fillzpp sbueringer (#2088, #2092)
    • Default ENVTEST version to a working one (1.24.2) (#2081)
    • Update golangci-lint to v1.50.1 (#2080)
    • Bump go.uber.org/zap from 1.23.0 to 1.24.0 (#2077)
    • Bump golang.org/x/sys from 0.2.0 to 0.3.0 (#2078)
    • Ignore Kubernetes Dependencies in Dependabot (#2071)

    ... (truncated)

    Commits
    • 84c5c9f 🐛 controllers without For() fail to start (#2108)
    • ddcb99d Merge pull request #2100 from vincepri/release-0.14
    • 69f0938 Merge pull request #2094 from alvaroaleman/subresoruce-get
    • 8738e91 Merge pull request #2091 from alvaroaleman/no-for
    • ca4b4de Merge pull request #2096 from lucacome/generate
    • 5673341 Merge pull request #2097 from kubernetes-sigs/dependabot/go_modules/github.co...
    • 7333aed :seedling: Bump github.com/onsi/ginkgo/v2 from 2.5.1 to 2.6.0
    • d4f1e82 Generate files and update modules
    • a387bf4 Merge pull request #2093 from alvaroaleman/recover-panic-globally
    • da7dd5d :warning: Allow configuring RecoverPanic for controllers globally
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
  • Error fetching Kubernetes resource with Dashboard when using k8s on docker desktop windows

    Hello all, I am on Windows 10 and I want to use the dashboard with Docker Desktop for Windows. I see the error: Error fetching Kubernetes resources

    I see in the logs

    time="2022-12-21T21:38:42Z" level=info msg="Starting Polaris dashboard server on port 8080" time="2022-12-21T21:38:46Z" level=error msg="Error fetching Cluster API version: Get https://kubernetes.docker.internal:6443/version?timeout=32s: dial tcp 192.168.65.4:6443: connect: connection refused" time="2022-12-21T21:38:46Z" level=error msg="Error fetching Kubernetes resources Get https://kubernetes.docker.internal:6443/version?timeout=32s: dial tcp 192.168.65.4:6443: connect: connection refused" time="2022-12-21T21:38:49Z" level=error msg="Error fetching Cluster API version: Get https://kubernetes.docker.internal:6443/version?timeout=32s: dial tcp 192.168.65.4:6443: connect: connection refused" time="2022-12-21T21:38:49Z" level=error msg="Error fetching Kubernetes resources Get https://kubernetes.docker.internal:6443/version?timeout=32s: dial tcp 192.168.65.4:6443: connect: connection refused" time="2022-12-23T09:15:25Z" level=error msg="Error fetching Cluster API version: Get https://kubernetes.docker.internal:6443/version?timeout=32s: dial tcp 192.168.65.4:6443: connect: connection refused" time="2022-12-23T09:15:25Z" level=error msg="Error fetching Kubernetes resources Get https://kubernetes.docker.internal:6443/version?timeout=32s: dial tcp 192.168.65.4:6443: connect: connection refused"

    I run the command:

    docker run -d -p 8082:8080 -v %USERPROFILE%/.kube/config:/opt/app/config:ro quay.io/fairwinds/polaris:1.2 polaris dashboard --kubeconfig /opt/app/config

    Thanks

  • proposal: nodeSelector should be set to linux by default

    If the cluster has a Windows node, there is a chance that Polaris won't start because of a lack of Windows images 🤷‍♂️


    workaround: kubectl -n polaris edit deployment polaris-dashboard and add

    nodeSelector:
      kubernetes.io/os: linux
    
  • Bump k8s.io/apimachinery from 0.25.3 to 0.26.0

    Bumps k8s.io/apimachinery from 0.25.3 to 0.26.0.

    Commits
    • 5d4cdd2 Merge remote-tracking branch 'origin/master' into release-1.26
    • 6cbc4a3 Update golang.org/x/net 1e63c2f
    • 6561235 Merge pull request #113699 from liggitt/manjusaka/fix-107415
    • dad8cd8 Update workload selector validation
    • fe82462 Add extra value validation for matchExpression field in LabelSelector
    • 067949d update k8s.io/utils to fix util tracing panic
    • 0ceff90 Merge pull request #112223 from astraw99/fix-ownerRef-validate
    • 9e85d3a Merge pull request #112649 from howardjohn/set/optimize-everything-nothing
    • 88a1448 Rename and comment on why sharing is safe
    • b03a432 Merge pull request #113367 from pohly/dep-ginkgo-gomega
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
  • Bump k8s.io/client-go from 0.25.3 to 0.26.0

    Bumps k8s.io/client-go from 0.25.3 to 0.26.0.

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

A best-practices Go source project with unit tests and integration tests, which also uses Skaffold and Helm to automate CI and CD locally and shorten the development cycle

Dependencies: Docker, Go 1.17, MySQL 8.0.25. Bootstrap: run chmod +x start.sh if the start.sh script does not have permission to run, then run ./start.sh --bootstrap

Apr 4, 2022
KinK is a helper CLI that facilitates to manage KinD clusters as Kubernetes pods. Designed to ease clusters up for fast testing with batteries included in mind.

kink A helper CLI that facilitates to manage KinD clusters as Kubernetes pods. Table of Contents kink (KinD in Kubernetes) Introduction How it works ?

Dec 10, 2022
PolarDB Stack is a DBaaS implementation for PolarDB-for-Postgres: an operator that creates and manages PolarDB/PostgreSQL clusters running in Kubernetes. It provides reconstruction, failover/switch-over, scale up/out, and high-availability capabilities for each cluster.

PolarDB Stack open-source edition lifecycle. 1 System overview: PolarDB is Alibaba Cloud's self-developed cloud-native relational database, which adopts a storage-compute separation architecture based on shared storage. The database has shifted from the traditional share-nothing architecture to shared storage, moving from N copies of compute + N copies of storage to N copies of compute + 1 copy of storage

Nov 8, 2022
vcluster - Create fully functional virtual Kubernetes clusters - Each cluster runs inside a Kubernetes namespace and can be started within seconds

Website • Quickstart • Documentation • Blog • Twitter • Slack vcluster - Virtual Clusters For Kubernetes Lightweight & Low-Overhead - Based on k3s, bu

Jan 4, 2023
Kubernetes IN Docker - local clusters for testing Kubernetes

kind is a tool for running local Kubernetes clusters using Docker container "nodes".

Jan 5, 2023
provider-kubernetes is a Crossplane Provider that enables deployment and management of arbitrary Kubernetes objects on clusters

provider-kubernetes provider-kubernetes is a Crossplane Provider that enables deployment and management of arbitrary Kubernetes objects on clusters ty

Dec 14, 2022
Crossplane provider to provision and manage Kubernetes objects on (remote) Kubernetes clusters.

provider-kubernetes provider-kubernetes is a Crossplane Provider that enables deployment and management of arbitrary Kubernetes objects on clusters ty

Jan 3, 2023
Kubernetes IN Docker - local clusters for testing Kubernetes

Please see Our Documentation for more in-depth installation etc. kind is a tool for running local Kubernetes clusters using Docker container "nodes".

Feb 14, 2022
🐶 Kubernetes CLI To Manage Your Clusters In Style!

K9s - Kubernetes CLI To Manage Your Clusters In Style! K9s provides a terminal UI to interact with your Kubernetes clusters. The aim of this project i

Jan 9, 2023
Client extension for interacting with Kubernetes clusters from your k6 tests.

⚠️ This is a proof of concept As this is a proof of concept, it won't be supported by the k6 team. It may also break in the future as xk6 evolves. USE

Jan 2, 2023
Managing your Kubernetes clusters (including public, private, edge, etc) as easily as visiting the Internet

Clusternet Managing Your Clusters (including public, private, hybrid, edge, etc) as easily as Visiting the Internet. Clusternet (Cluster Internet) is

Dec 30, 2022
A pain of glass between you and your Kubernetes clusters.

kube-lock A pain of glass between you and your Kubernetes clusters. Sits as a middle-man between you and kubectl, allowing you to lock and unlock cont

Oct 20, 2022
Hot-swap Kubernetes clusters while keeping your microservices up and running.

Okra Okra is a Kubernetes controller and a set of CRDs which provide advanced multi-cluster application rollout capabilities, such as canary deploymen

Nov 23, 2022
Kubernetes compliance validation pack for Probr

Probr Kubernetes Service Pack The Probr Kubernetes Service pack provides a variety of provider-agnostic compliance checks. Get the latest stable versi

Jul 21, 2022
kubequery is an Osquery extension that provides SQL-based analytics for Kubernetes clusters

kubequery powered by Osquery kubequery is an Osquery extension that provides SQL-based analytics for Kubernetes clusters kubequery will be packaged as

Dec 27, 2022
Manage large fleets of Kubernetes clusters

Introduction Fleet is GitOps at scale. Fleet is designed to manage up to a million clusters. It's also lightweight enough that it works great for a si

Dec 31, 2022
Kubernetes operator to autoscale Google's Cloud Bigtable clusters

Bigtable Autoscaler Operator Bigtable Autoscaler Operator is a Kubernetes Operator to autoscale the number of nodes of a Google Cloud Bigtable instanc

Nov 5, 2021
Nebula Operator manages NebulaGraph clusters on Kubernetes and automates tasks related to operating a NebulaGraph cluster

Nebula Operator manages NebulaGraph clusters on Kubernetes and automates tasks related to operating a NebulaGraph cluster. It evolved from NebulaGraph Cloud Service, makes NebulaGraph a truly cloud-native database.

Dec 31, 2022
Flux is a tool for keeping Kubernetes clusters in sync with sources of configuration, and automating updates to configuration when there is new code to deploy.

Flux is a tool for keeping Kubernetes clusters in sync with sources of configuration (like Git repositories), and automating updates to configuration when there is new code to deploy.

Jan 8, 2023