MARIN3R

Lightweight, CRD based Envoy control plane for Kubernetes:

  • Implemented as a Kubernetes Operator
  • Deploy and manage an Envoy xDS server using the DiscoveryService custom resource
  • Inject Envoy sidecar containers based on Pod annotations
  • Deploy Envoy as a Kubernetes Deployment using the EnvoyDeployment custom resource
  • Dynamic Envoy configuration using the EnvoyConfig custom resource
  • Use any secret of type kubernetes.io/tls as a certificate source
  • Syntactic validation of Envoy configurations
  • Self-healing
  • Controls Envoy connection draining and graceful shutdown whenever pods are terminated

Overview

MARIN3R is a Kubernetes operator to manage a fleet of Envoy proxies within a Kubernetes cluster. It takes care of the deployment of the proxies and manages their configuration, feeding it to them through a discovery service using Envoy's xDS protocol. This allows for dynamic reconfiguration of the proxies without any reloads or restarts, favoring the ability to perform configuration changes in a non-disruptive way.

Users can write their Envoy configurations by making use of Kubernetes Custom Resources that the operator will watch and make available to the proxies through the discovery service. Configurations are defined making direct use of Envoy's v2/v3 APIs so anything supported in the Envoy APIs is available in MARIN3R. See the configuration section or the API reference for more details.

A great way to use this project is to have your own operator generating the Envoy configurations that your platform/service requires by making use of MARIN3R APIs. This way you can just focus on developing the Envoy configurations you need and let MARIN3R take care of the rest.

Getting started

Installation

MARIN3R can be installed either by using kustomize or by using Operator Lifecycle Manager (OLM). We recommend using OLM installation whenever possible.

Install using OLM

OLM is installed by default in OpenShift 4.x clusters. For any other Kubernetes flavor, check if it is already installed in your cluster. If not, you can easily install it by following the OLM install guide.

Once OLM is installed in your cluster, you can proceed with the operator installation by applying the install manifests. This creates a namespaced install of MARIN3R that only watches resources in the default namespace, with the operator deployed in the marin3r-system namespace. Edit the field spec.targetNamespaces of the OperatorGroup resource in examples/quickstart/olm-install.yaml to change the namespaces that MARIN3R will watch (see the sketch after the install command below). A cluster scoped installation through OLM is currently not supported (check the kustomize based installation for a cluster scoped install of the operator).

kubectl apply -f examples/quickstart/olm-install.yaml
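
For reference, a minimal sketch of what the OperatorGroup in that manifest might look like after editing. The resource name and the extra namespace below are illustrative assumptions; only spec.targetNamespaces needs to change:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: marin3r
  namespace: marin3r-system
spec:
  targetNamespaces:
    - default
    - my-other-namespace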

Wait until you see the following Pods running:

▶ kubectl -n marin3r-system get pods | grep Running
marin3r-catalog-qsx9t                                             1/1     Running     0          103s
marin3r-controller-manager-5f97f86fc5-qbp6d                       2/2     Running     0          42s
marin3r-controller-webhook-5d4d855859-67zr6                       1/1     Running     0          42s
marin3r-controller-webhook-5d4d855859-6972h                       1/1     Running     0          42s

Install using kustomize

This method will install MARIN3R with cluster scope permissions in your cluster. It requires cert-manager to be present in the cluster.

To install cert-manager you can execute the following command in the root directory of this repository:

make deploy-cert-manager

You can also refer to the cert-manager install documentation.
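Alternatively, cert-manager can be installed from its upstream static manifests; a typical command looks like the following, substituting a current release version for vX.Y.Z (check the cert-manager releases page for the exact URL):

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/vX.Y.Z/cert-manager.yaml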

Once cert-manager is available in the cluster, you can install MARIN3R by issuing the following command:

kustomize build config/default | kubectl apply -f -

After a while you should see the following Pods running:

▶ kubectl -n marin3r-system get pods
NAME                                          READY   STATUS    RESTARTS   AGE
marin3r-controller-manager-6c45f7675f-cs6dq   2/2     Running   0          31s
marin3r-controller-webhook-684bf5bbfd-cp2x4   1/1     Running   0          31s
marin3r-controller-webhook-684bf5bbfd-zdvrk   1/1     Running   0          31s

Deploy a discovery service

A discovery service is a Pod that users deploy in a namespace to give that namespace the ability to configure Envoy proxies dynamically, using configurations loaded from Kubernetes Custom Resources. This Pod runs a couple of Kubernetes controllers as well as an Envoy xDS server. To deploy a discovery service, users make use of the DiscoveryService custom resource that MARIN3R provides. The DiscoveryService is a namespace scoped resource, so one is required for each namespace where Envoy proxies are going to be deployed.

Continuing with our example, we are going to deploy a DiscoveryService resource in the default namespace of our cluster:

cat <<'EOF' | kubectl apply -f -
apiVersion: operator.marin3r.3scale.net/v1alpha1
kind: DiscoveryService
metadata:
  name: discoveryservice
  namespace: default
EOF

After a while you should see the discovery service Pod running:

▶ kubectl -n default get pods
NAME                                READY   STATUS    RESTARTS   AGE
marin3r-discoveryservice-676b5cd7db-xk9rt   1/1     Running   0          4s

Next steps

After installing the operator and deploying a DiscoveryService into a namespace, you are ready to start deploying and configuring Envoy proxies within the namespace. You can review the different walkthroughs within this repo to learn more about MARIN3R and its capabilities.

Configuration

API reference

The full MARIN3R API reference can be found here

EnvoyConfig custom resource

MARIN3R's core functionality is to feed the Envoy configurations defined in EnvoyConfig custom resources to an Envoy discovery service. The discovery service then sends the resources contained in those configurations to the Envoy proxies that identify themselves with the same nodeID defined in the EnvoyConfig resource.

Commented example of an EnvoyConfig resource:

cat <<'EOF' | kubectl apply -f -
apiVersion: marin3r.3scale.net/v1alpha1
kind: EnvoyConfig
metadata:
  # name and namespace uniquely identify an EnvoyConfig but are
  # not relevant in any other way
  name: config
spec:
  # nodeID indicates that the resources defined in this EnvoyConfig are relevant
  # to Envoy proxies that identify themselves to the discovery service with the same
  # nodeID. The nodeID of an Envoy proxy can be specified using the "--service-node"
  # command line flag
  nodeID: proxy
  # Resources can be written either in json or in yaml, with json being the default
  # if not specified
  serialization: json
  # Resources can be written using either v2 Envoy API or v3 Envoy API. Mixing v2 and v3 resources
  # in the same EnvoyConfig is not allowed. Default is v2.
  envoyAPI: v3
  # envoyResources is where users can write the different type of resources supported by MARIN3R
  envoyResources:
    # the "secrets" field holds references to Kubernetes Secrets. Only Secrets of type
    # "kubernetes.io/tls" can be referenced. Any certificate referenced from another Envoy
    # resource (for example a listener or a cluster) needs to be present here so marin3r
    # knows where to get the certificate from.
    secrets:
        # name is the name of the kubernetes Secret that holds the certificate and by which it can be 
        # referenced from other resources
      - name: certificate
    # Endpoints is a list of the Envoy ClusterLoadAssignment resource type.
    # V2 reference: https://www.envoyproxy.io/docs/envoy/latest/api-v2/api/v2/endpoint.proto
    # V3 reference: https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/endpoint/v3/endpoint.proto
    endpoints:
      - name: endpoint1
        value: {"clusterName":"cluster1","endpoints":[{"lbEndpoints":[{"endpoint":{"address":{"socketAddress":{"address":"127.0.0.1","portValue":8080}}}}]}]}
    # Clusters is a list of the Envoy Cluster resource type.
    # V2 reference: https://www.envoyproxy.io/docs/envoy/latest/api-v2/api/v2/cluster.proto
    # V3 reference: https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/cluster/v3/cluster.proto
    clusters:
      - name: cluster1
        value: {"name":"cluster1","type":"STRICT_DNS","connectTimeout":"2s","loadAssignment":{"clusterName":"cluster1","endpoints":[]}}
    # Routes is a list of the Envoy Route resource type.
    # V2 reference: https://www.envoyproxy.io/docs/envoy/latest/api-v2/api/v2/route.proto
    # V3 reference: https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/route/v3/route.proto
    routes:
      - name: route1
        value: {"name":"route1","virtual_hosts":[{"name":"vhost","domains":["*"],"routes":[{"match":{"prefix":"/"},"direct_response":{"status":200}}]}]}
    # Listeners is a list of the Envoy Listener resource type.
    # V2 reference: https://www.envoyproxy.io/docs/envoy/latest/api-v2/api/v2/listener.proto
    # V3 reference: https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/listener/v3/listener.proto
    listeners:
      - name: listener1
        value: {"name":"listener1","address":{"socketAddress":{"address":"0.0.0.0","portValue":8443}}}
    # Runtimes is a list of the Envoy Runtime resource type.
    # V2 reference: https://www.envoyproxy.io/docs/envoy/latest/api-v2/service/discovery/v2/rtds.proto
    # V3 reference: https://www.envoyproxy.io/docs/envoy/latest/api-v3/service/runtime/v3/rtds.proto
    runtimes:
      - name: runtime1
        value: {"name":"runtime1","layer":{"static_layer_0":"value"}}

Secrets

Secrets are treated in a special way by MARIN3R as they contain sensitive information. Instead of directly declaring an Envoy API secret resource in the EnvoyConfig CR, you have to reference a Kubernetes Secret, which must exist in the same namespace. MARIN3R expects this Secret to be of type kubernetes.io/tls and will load it into an Envoy secret resource. This way you avoid having to insert sensitive data into EnvoyConfig resources and can keep using your regular Kubernetes Secret management workflow for sensitive data.
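For example, assuming you already have a certificate and key in local files (the file and Secret names here are illustrative), such a Secret can be created with:

kubectl create secret tls certificate --cert=cert.pem --key=key.pem -n default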

Another approach is to create certificates using cert-manager, since cert-manager also stores the certificates it generates in kubernetes.io/tls Secrets. You just need to point the references in your EnvoyConfig to the appropriate cert-manager generated Secret.
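As a sketch, assuming cert-manager is installed and an Issuer named selfsigned-issuer exists in the namespace (the issuer and all names below are illustrative), a Certificate like this would produce a kubernetes.io/tls Secret named certificate that the EnvoyConfig can reference:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: certificate
  namespace: default
spec:
  secretName: certificate
  dnsNames:
    - example.com
  issuerRef:
    name: selfsigned-issuer
    kind: Issuer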

To use a certificate from a Kubernetes Secret, reference it like this from an EnvoyConfig:

spec:
  envoyResources:
    secrets:
      - name: certificate

This certificate can then be referenced in an Envoy cluster/listener with the following snippet (check the kuard example):

transport_socket:
  name: envoy.transport_sockets.tls
  typed_config:
    "@type": "type.googleapis.com/envoy.api.v2.auth.DownstreamTlsContext"
    common_tls_context:
      tls_certificate_sds_secret_configs:
        - name: certificate
          sds_config:
            ads: {}
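Note that the snippet above uses the v2 @type. For an EnvoyConfig with envoyAPI: v3 (like the commented example earlier), the equivalent would use the v3 DownstreamTlsContext type; a sketch of that variant:

transport_socket:
  name: envoy.transport_sockets.tls
  typed_config:
    "@type": "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext"
    common_tls_context:
      tls_certificate_sds_secret_configs:
        - name: certificate
          sds_config:
            ads: {}
            resource_api_version: V3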

Sidecar injection configuration

The MARIN3R mutating admission webhook will inject Envoy containers in any Pod annotated with marin3r.3scale.net/node-id and labelled with marin3r.3scale.net/status=enabled. The following annotations can be used in Pods to control the behavior of the sidecar injection:

| annotation | description | default value |
| --- | --- | --- |
| marin3r.3scale.net/node-id | Envoy's node-id | N/A |
| marin3r.3scale.net/cluster-id | Envoy's cluster-id | same as node-id |
| marin3r.3scale.net/envoy-api-version | Envoy's API version (v2/v3) | v2 |
| marin3r.3scale.net/container-name | the name of the Envoy sidecar | envoy-sidecar |
| marin3r.3scale.net/ports | the exposed ports in the Envoy sidecar | N/A |
| marin3r.3scale.net/host-port-mappings | Envoy sidecar ports that will be mapped to the host. This is used for local development, not recommended for production use | N/A |
| marin3r.3scale.net/envoy-image | the Envoy image to be used in the injected sidecar container | envoyproxy/envoy:v1.14.1 |
| marin3r.3scale.net/config-volume | the Pod volume where the ads-configmap will be mounted | envoy-sidecar-bootstrap |
| marin3r.3scale.net/tls-volume | the Pod volume where the marin3r client certificate will be mounted | envoy-sidecar-tls |
| marin3r.3scale.net/client-certificate | the marin3r client certificate to use to authenticate to the marin3r control plane (marin3r uses mTLS) | envoy-sidecar-client-cert |
| marin3r.3scale.net/envoy-extra-args | extra command line arguments to pass to the Envoy sidecar container | "" |
| marin3r.3scale.net/admin.port | Envoy's admin api port | 9901 |
| marin3r.3scale.net/admin.bind-address | Envoy's admin api bind address | 0.0.0.0 |
| marin3r.3scale.net/admin.access-log-path | Envoy's admin api access logs path | /dev/null |
| marin3r.3scale.net/resources.limits.cpu | Envoy sidecar container resource cpu limits. See syntax format to specify the resource quantity | N/A |
| marin3r.3scale.net/resources.limits.memory | Envoy sidecar container resource memory limits. See syntax format to specify the resource quantity | N/A |
| marin3r.3scale.net/resources.requests.cpu | Envoy sidecar container resource cpu requests. See syntax format to specify the resource quantity | N/A |
| marin3r.3scale.net/resources.requests.memory | Envoy sidecar container resource memory requests. See syntax format to specify the resource quantity | N/A |
| marin3r.3scale.net/shutdown-manager.enabled | Enables or disables the Envoy shutdown manager for graceful shutdown of the Envoy server (true/false) | false |
| marin3r.3scale.net/shutdown-manager.port | Envoy's shutdown manager server port | 8090 |
| marin3r.3scale.net/shutdown-manager.image | Envoy's shutdown manager image | If unset, the operator will select the appropriate image |
| marin3r.3scale.net/init-manager.image | Envoy's init manager image | If unset, the operator will select the appropriate image |
| marin3r.3scale.net/shutdown-manager.extra-lifecycle-hooks | Comma separated list of container names whose stop should be coordinated with the shutdown-manager. You usually would want to add containers that act as upstream clusters for the Envoy sidecar | N/A |

marin3r.3scale.net/ports syntax

The port syntax is a comma-separated list of name:port[:protocol] as in "envoy-http:1080,envoy-https:1443".

marin3r.3scale.net/host-port-mappings syntax

The host-port-mappings syntax is a comma-separated list of container-port-name:host-port-number as in "envoy-http:1080,envoy-https:1443".
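Putting the label, annotations and port syntax together, a minimal sketch of a Deployment whose Pods get an Envoy sidecar injected might look like this (the application name, node-id and ports are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
        marin3r.3scale.net/status: "enabled"
      annotations:
        marin3r.3scale.net/node-id: my-app
        marin3r.3scale.net/ports: envoy-http:1080,envoy-https:1443
    spec:
      containers:
        - name: my-app
          image: example.com/my-app:latest
          ports:
            - containerPort: 8080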

Use cases

Ratelimit

Design docs

For an in-depth look at how MARIN3R works, check the design docs.

Discovery service

Sidecar injection

Operator

Development

You can find development documentation here.

Release

You can find release process documentation here.

Comments
  • Remove API v2 code and EnvoyBootstrap code

    This is a code cleanup PR:

    • Removes all the code related to Envoy V2 API. V3 was already the default API so this is the natural step in the deprecation of V2, which has already been removed from https://github.com/envoyproxy/go-control-plane, a dependency of this project.
    • Removes the EnvoyBootstrap controller, whose use was deprecated in 0.8.
    • Moves the discovery service command code under cmd/ so the structure of the code is the same for all subcommands.

    /kind cleanup /priority important-soon /assign

  • feat/autogenerate-proto-imports

    Context: There is a list of imports in pkg/envoy/serializer/v3/serializer.go that is required so the serialization/deserialization code is able to handle proto messages of the Any type. This is a dynamic list of imports as the files containing protobuffer definitions in go-control-plane can change from version to version. So far this list of imports was manually maintained, which is problematic as the list could end up being out of date.

    This PR automates the process of generating the list of imports:

    • The list is now maintained in a separate package pkg/envoy/protos/v3 that can be imported from other packages.
    • A generator has been written that performs the following tasks:
      • Inspects the project's go.mod to determine the go-control-plane release in use.
      • Clones the specific tag of the go-control-plane repository into memory and looks for the .pb.go files that belong to the v3 api version (though the generator is already able to look for other api versions).
      • Generates the file with all the imports within the pkg/envoy/protos/v3 package.
    • go generate is used to trigger the execution of the code generator from the Makefile when the binary is built.

    Currently the list of imports is out of date in marin3r-v0.9.0 so a new patch release will be required after this PR as some proto message definitions are missing, resulting in an error if a user tries to use them.

    /kind feature /kind bug /priority important-soon /assign

  • feat/upgrade-deps

    This PR upgrades libs and operator-sdk to the latest possible versions, within constraints. Operator SDK manifest generation has been refactored for better maintenance, making it clear where the manifests have been customized within the Kustomize resources. Most of the operator-sdk project scaffolding has been regenerated using the latest version.

    Operator SDK has been bumped just to 1.10 as there is an ongoing issue for higher versions, still unresolved even though the issue that was reported is already closed: https://github.com/operator-framework/operator-sdk/issues/5244

    /kind feature /priority important-soon /assign

  • feat/default-v3

    Given that the latest releases of Envoy have dropped support for the v2 API and that Envoy 1.18.3 is currently MARIN3R's default, set v3 as the default API to use.

    I also deleted some unused code that was still around.

    /kind feature /priority important-soon /assign

  • Add a name to the V3 file-based SDS tls secret response

    This applies only to the V3 protocol.

    This impacts the tls_certificate_sds_secret.json config file, not the response sent over the wire to xDS clients.

    Marin3r generates this file and adds it to a k8s secret which is mounted into Envoy pods. Envoy uses this file to retrieve the bootstrap certs that allow it to talk to Marin3r over TLS. If there's no "name" field then Envoy v1.17.0 can't process the cert, although older Envoy versions can.
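    As an illustrative sketch only (the resource name and file paths below are assumptions, not the exact content MARIN3R generates), a file-based SDS secret with a name set would look something like:

    {
      "resources": [
        {
          "@type": "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.Secret",
          "name": "xds_client_certificate",
          "tls_certificate": {
            "certificate_chain": { "filename": "/etc/envoy/tls/client/tls.crt" },
            "private_key": { "filename": "/etc/envoy/tls/client/tls.key" }
          }
        }
      ]
    }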

    [2021-02-18 23:20:51.419][1][critical][main] [source/server/server.cc:109] error initializing configuration '/etc/envoy/bootstrap/config.json': Proto constraint validation failed (UpstreamTlsContextValidationError.CommonTlsContext: ["embedded message failed validation"] | caused by CommonTlsContextValidationError.TlsCertificateSdsSecretConfigs[i]: ["embedded message failed validation"] | caused by SdsSecretConfigValidationError.Name: ["value length must be at least " '\x01' " runes"]): common_tls_context { tls_certificate_sds_secret_configs { sds_config { path: "/etc/envoy/bootstrap/tls_certificate_sds_secret.json" } } }

    [2021-02-18 23:20:51.419][1][info][main] [source/server/server.cc:782] exiting Proto constraint validation failed (UpstreamTlsContextValidationError.CommonTlsContext: ["embedded message failed validation"] | caused by CommonTlsContextValidationError.TlsCertificateSdsSecretConfigs[i]: ["embedded message failed validation"] | caused by SdsSecretConfigValidationError.Name: ["value length must be at least " '\x01' " runes"]): common_tls_context { tls_certificate_sds_secret_configs { sds_config { path: "/etc/envoy/bootstrap/tls_certificate_sds_secret.json" } } }

    Thank you @roivaz for pointing me to this fix!

  • Rename field 'podAffinity' to just 'affinity' in EnvoyDeployment resource

    'podAffinity' was a poor choice for naming the field because it's basically the affinity field of a Pod spec. It's best if the naming is the same. This field has not yet made it to a stable release, so there is no problem in changing it.

    /kind feature /priority important-soon /assign

  • Reimplement self-healing using internal statistics

    This PR includes the following:

    • Fixed a bug affecting reconcile of EnvoyConfigRevision status: d7687ea600d0293bb62e1146cd195e3e5d631f87
    • Implemented a mechanism to internally store statistics related to the xDS protocol messages interchanged between clients and the discovery service: 24e5b59e4636612f11adacf9b7ed5a471d2cc9af
    • Reimplemented the self-healing using the internal xDS stats: 1b17645cd666ac48be2d399189b56fe483b9e8d4
    • Implemented a backoff algorithm to avoid overloading the envoy clients with retries from the discovery service: 86d1b6dc8ff8aa8f3d19040cebf757f177a37269

    /kind feature /priority important-soon /assign

  • feat/shutdown-manager

    This PR adds graceful termination for Envoy containers, with connection draining of listeners.

    The shutdown manager can be enabled for EnvoyDeployment resources using:

    spec:
      shutdownManager: {}
    

    The shutdown manager can be enabled for envoy injected sidecars using the following annotation in Pods:

    metadata:
      annotations:
        marin3r.3scale.net/shutdown-manager.enabled: "true"
    

    The shutdown manager is a new command in the Marin3r image that runs a small server and is deployed as a sidecar container to the Envoy container. Container lifecycle hooks are used in both the shutdown manager container and the Envoy container to orchestrate graceful shutdown of the Envoy server, waiting until all the listeners are drained or the 300s timeout is reached.

    An example of how the Envoy and the shutdown manager containers are configured:

      containers:
        - name: envoy 
          args:
            - '-c'
            - /etc/envoy/bootstrap/config.json
            - '--service-node'
            - example
            - '--service-cluster'
            - example
            - '--component-log-level'
            - 'config:debug'
          command:
            - envoy
          image: 'envoyproxy/envoy:v1.16.0'
          lifecycle:
            preStop:
              httpGet:
                path: /shutdown
                port: 8090
                scheme: HTTP
          # rest of the container config is omitted
        - name: envoy-shtdn-mgr 
          args:
            - shutdown-manager
            - '--port'
            - '8090'
          image: 'quay.io/3scale/marin3r:v0.8.0-alpha.8'
          lifecycle:
            preStop:
              httpGet:
                path: /drain
                port: 8090
                scheme: HTTP
          # rest of the container config is omitted
    

    /kind feature /priority important-soon

  • Release/v0.10.0

    Release v0.10.0.

    A small fix has been added to the package generators to wipe the generated file contents before writing.

    /kind feature /priority important-soon /assign

  • feat/extra-container-lifecycle-hooks

    This PR allows extra containers within the Pod to be coordinated with the shutdown manager. A new annotation has been added that allows a user to specify other container names on which the shutdown manager lifecycle hook should also be configured. This is a feature that only makes sense for sidecars.

    An example of usage:

    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: kuard
      namespace: default
      labels:
        app: kuard
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: kuard
      template:
        metadata:
          labels:
            app: kuard
            marin3r.3scale.net/status: "enabled"
          annotations:
            marin3r.3scale.net/node-id: kuard
            marin3r.3scale.net/ports: envoy-https:8443
            marin3r.3scale.net/shutdown-manager.enabled: "true"
            marin3r.3scale.net/shutdown-manager.extra-lifecycle-hooks: kuard
        spec:
          containers:
            - name: kuard
              image: gcr.io/kuar-demo/kuard-amd64:blue
              ports:
                - containerPort: 8080
                  name: http
                  protocol: TCP
    

    /kind feature /priority important-soon /assign

  • Auto renew ca certificate

    Adds a new event handler in the DiscoveryServiceCertificate controller to trigger a reconcile of any certificate that has its issuer certificate modified. After this, it is safe to enable auto-renewal of the discovery service CA certificate.

    /kind feature /assign /priority important-longterm

    Note for reviewers: https://github.com/3scale-ops/marin3r/pull/149 needs to be merged first and then I will rebase.

  • add support for VHDS

    VirtualHost service discovery is available in envoy and go-control-plane already has support for it. Add, if possible, support for it in Marin3r.

    /kind feature /priority important-longterm /assign

  • Marin3r endpoint auto-discovery

    Why

    To use kubernetes endpoints instead of services to improve load balancing decisions.

    /kind feature /priority important-longterm /label size/xl /assign

  • Add liveness/readiness probes for the discovery service

    Similarly to #45, we need to add readiness/liveness probes to the discovery service Deployment. In this case the endpoints provided by controller-runtime are not sufficient, as we also need to assess the health of the discovery service server and somehow aggregate both results in the same endpoint.

  • HA for the discovery service server

    Right now the discovery service server runs in a single pod. This is not optimal: if new pods are created while the discovery service pod is down, they will fail. Already running pods are not affected though.

    The proposal would be to move the EnvoyConfig controller to the operator pod and leave just the EnvoyConfigRevision controller in the discovery service pod. This has some problems that would need to be solved:

    • The status would need to have more intelligence as we need to assess that all discovery service pods have synced their cache before declaring an EnvoyConfig cacheStatus as "InSync".
  • Operator to manage DiscoveryService certificates

    A solution is needed to manage the renewal of the DiscoveryService related certificates:

    • The CA
    • The server certificate
    • The client certificates

    Currently all these certificates are just created but never reconciled/renewed so manual action is required to renew them.
