Run Tor onion services on Kubernetes (actively maintained)

tor-controller

This project started as an exercise to update kragniz's original https://github.com/kragniz/tor-controller.

Important! This project is not backward compatible with kragniz's OnionService definitions; you will need to update your OnionService manifests.

Changes

  • Go updated to 1.17
  • Code ported to kubebuilder version 3
  • Domain moved from tor.k8s.io (protected) to k8s.torproject.org (see https://github.com/kubernetes/enhancements/pull/1111)
  • Added OnionBalancedService type
  • New OnionService version v1alpha2
  • Migrate clientset code to controller-runtime

Roadmap / TODO list

  • Implement OnionBalancedService resource (HA Onion Services)
  • Metrics exporters
  • Tor daemon management via socket (e.g., config reload)

Tor

Tor is an anonymity network that provides:

  • privacy
  • enhanced tamperproofing
  • freedom from network surveillance
  • NAT traversal

tor-controller allows you to create OnionService resources in Kubernetes. These services are used similarly to standard Kubernetes services, but they only serve traffic on the Tor network (available on .onion addresses).

See this page for more information about onion services.

tor-controller creates the following resources for each OnionService:

  • a service, which is used to send traffic to the application pods
  • a tor pod, which runs a tor daemon to serve incoming traffic from the Tor network, plus a management process that watches the Kubernetes API, generates the tor config, and signals the tor daemon when it changes
  • RBAC rules

Install

Install tor-controller:

$ kubectl apply -f hack/install.yaml
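To confirm the controller came up, you can list its pods; a quick sanity check, assuming the install manifest uses the tor-controller namespace:

$ kubectl -n tor-controller get pods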

Quickstart with random address

Create a deployment to test against; in this example we'll deploy an echoserver. You can find the definition at hack/sample/echoserver.yaml.

Apply it:

$ kubectl apply -f hack/sample/echoserver.yaml

For a fixed address, we need a private key. This should be kept safe, since someone can impersonate your onion service if it is leaked. tor-controller will generate an onion v3 key pair for you (stored as a secret) unless one already exists.
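If you already have a v3 key you want to reuse, store it in a secret whose data key matches the privateKeySecret.key you reference in the manifest; a sketch, with illustrative secret and file names:

$ kubectl create secret generic example-onion-key \
    --from-file=private_key=hs_ed25519_secret_key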

Create an onion service, hack/sample/onionservice.yaml (referencing an existing private key is optional):

apiVersion: tor.k8s.torproject.org/v1alpha2
kind: OnionService
metadata:
  name: example-onion-service
spec:
  version: 3
  rules:
    - port:
        number: 80
      backend:
        service:
          name: http-app
          port:
            number: 8080

Apply it:

$ kubectl apply -f hack/sample/onionservice.yaml

List active OnionServices:

$ kubectl get onionservices
NAME                    HOSTNAME                                                         TARGETCLUSTERIP   AGE
example-onion-service   cfoj4552cvq7fbge6k22qmkun3jl37oz273hndr7ktvoahnqg5kdnzqd.onion   10.43.252.41      1m
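If you only want the generated hostname (for scripting, say), it can be read from the resource's status; a sketch, assuming the hostname is exposed at .status.hostname:

$ kubectl get onionservice example-onion-service -o jsonpath='{.status.hostname}'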

This service should now be accessible from any Tor client, for example Tor Browser.

Random service names

If spec.privateKeySecret is not specified, tor-controller will start the service with a randomly generated key pair (and therefore a random .onion address). The key pair is stored in the same namespace as the tor daemon, in a secret named ONIONSERVICENAME-tor-secret.
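For the example above, the generated key pair could be inspected like this (a sketch; adjust the namespace if you deployed the OnionService elsewhere):

$ kubectl get secret example-onion-service-tor-secret -o yaml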

Onion service versions

The spec.version field specifies which onion protocol to use. Only v3 is supported.

tor-controller defaults to using v3 if spec.version is not specified.

Using with nginx-ingress

tor-controller on its own simply directs TCP traffic to a backend service. If you want to serve HTTP, you'll probably want to pair it with nginx-ingress or some other ingress controller.

To do this, first install nginx-ingress normally. Then point an onion service at the nginx-ingress-controller, for example:

apiVersion: tor.k8s.torproject.org/v1alpha2
kind: OnionService
metadata:
  name: example-onion-service
spec:
  version: 3
  rules:
    - port:
        number: 80
      backend:
        service:
          name: http-app
          port:
            number: 8080
  privateKeySecret:
    name: nginx-onion-key
    key: private_key

This can then be used in the same way as any other ingress. You can find a full example, with a default backend, at hack/sample/full-example.yaml.
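For reference, the ingress side could look something like the sketch below; the host pattern, ingress class, and backend names are illustrative and should be adapted to your setup:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: onion-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: '*.onion'
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: http-app
                port:
                  number: 8080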

Comments
  • [BUG] Manager pod failing to start for arm64 install

    Describe the bug I'm installing this package via Helm (and also directly) onto a cluster of Raspberry Pi 4s that use the arm64 architecture, but the manager pod is failing to start with a CrashLoopBackOff error. This normally indicates that the package being installed was built for the wrong architecture (i.e. amd64).

    To Reproduce Install the package via Helm.

    Expected behavior The pods should start successfully and I should be able to view the .onion address for the service.

    Additional information

    As per the conversation on #3, I have uninstalled, updated the repo and reinstalled the package, but the issue still persists.

    Here is the failing pod description:

    Name:         tor-controller-6977fc959f-hvb48
    Namespace:    tor-controller
    Priority:     0
    Node:        ---
    Start Time:   Tue, 01 Mar 2022 15:06:39 +0000
    Labels:       app.kubernetes.io/instance=tor-controller
                  app.kubernetes.io/name=tor-controller
                  pod-template-hash=6977fc959f
    Annotations:  <none>
    Status:       Running
    IP:           10.42.0.15
    IPs:
      IP:           10.42.0.15
    Controlled By:  ReplicaSet/tor-controller-6977fc959f
    Containers:
      manager:
        Container ID:  containerd://c63144efa6f93831c4217b145f9a8669ff3b691f8af16a972dd81bfa4f47d0ee
        Image:         quay.io/bugfest/tor-controller:0.5.0
        Image ID:      quay.io/bugfest/tor-controller@sha256:0f142060bba60d422c6c536de766ace73a0a00535fcffaba354260e54e59c1e6
        Port:          <none>
        Host Port:     <none>
        Command:
          /manager
        Args:
          --config=controller_manager_config.yaml
        State:          Waiting
          Reason:       CrashLoopBackOff
        Last State:     Terminated
          Reason:       Error
          Exit Code:    1
          Started:      Tue, 01 Mar 2022 15:10:06 +0000
          Finished:     Tue, 01 Mar 2022 15:10:06 +0000
        Ready:          False
        Restart Count:  5
        Liveness:       http-get http://:8081/healthz delay=15s timeout=1s period=20s #success=1 #failure=3
        Readiness:      http-get http://:8081/readyz delay=5s timeout=1s period=10s #success=1 #failure=3
        Environment:    <none>
        Mounts:
          /controller_manager_config.yaml from manager-config (rw,path="controller_manager_config.yaml")
          /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5gzzm (ro)
      kube-rbac-proxy:
        Container ID:  containerd://5eab9e63e587140e040ef3b804ac9bea7f1bdbf8c4d4cb89f09cde93e0811ccb
        Image:         gcr.io/kubebuilder/kube-rbac-proxy:v0.8.0
        Image ID:      gcr.io/kubebuilder/kube-rbac-proxy@sha256:db06cc4c084dd0253134f156dddaaf53ef1c3fb3cc809e5d81711baa4029ea4c
        Port:          8443/TCP
        Host Port:     0/TCP
        Args:
          --secure-listen-address=0.0.0.0:8443
          --upstream=http://127.0.0.1:8080/
          --logtostderr=true
          --v=10
        State:          Running
          Started:      Tue, 01 Mar 2022 15:06:48 +0000
        Ready:          True
        Restart Count:  0
        Environment:    <none>
        Mounts:
          /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5gzzm (ro)
    Conditions:
      Type              Status
      Initialized       True 
      Ready             False 
      ContainersReady   False 
      PodScheduled      True 
    Volumes:
      manager-config:
        Type:      ConfigMap (a volume populated by a ConfigMap)
        Name:      tor-controller-manager-config
        Optional:  false
      kube-api-access-5gzzm:
        Type:                    Projected (a volume that contains injected data from multiple sources)
        TokenExpirationSeconds:  3607
        ConfigMapName:           kube-root-ca.crt
        ConfigMapOptional:       <nil>
        DownwardAPI:             true
    QoS Class:                   BestEffort
    Node-Selectors:              <none>
    Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
    Events:
      Type     Reason       Age                    From               Message
      ----     ------       ----                   ----               -------
      Normal   Scheduled    4m58s                  default-scheduler  Successfully assigned tor-controller/tor-controller-6977fc959f-hvb48 to ---
      Warning  FailedMount  4m58s                  kubelet            MountVolume.SetUp failed for volume "manager-config" : failed to sync configmap cache: timed out waiting for the condition
      Normal   Pulled       4m53s                  kubelet            Successfully pulled image "quay.io/bugfest/tor-controller:0.5.0" in 748.656901ms
      Normal   Pulled       4m52s                  kubelet            Container image "gcr.io/kubebuilder/kube-rbac-proxy:v0.8.0" already present on machine
      Normal   Created      4m51s                  kubelet            Created container kube-rbac-proxy
      Normal   Started      4m50s                  kubelet            Started container kube-rbac-proxy
      Normal   Pulled       4m48s                  kubelet            Successfully pulled image "quay.io/bugfest/tor-controller:0.5.0" in 2.019106168s
      Normal   Pulled       4m25s                  kubelet            Successfully pulled image "quay.io/bugfest/tor-controller:0.5.0" in 700.418473ms
      Normal   Created      4m25s (x3 over 4m52s)  kubelet            Created container manager
      Normal   Started      4m25s (x3 over 4m52s)  kubelet            Started container manager
      Warning  BackOff      4m7s (x8 over 4m45s)   kubelet            Back-off restarting failed container
      Normal   Pulling      3m54s (x4 over 4m54s)  kubelet            Pulling image "quay.io/bugfest/tor-controller:0.5.0"
    

    System (please complete the following information):

    • Platform: Raspberry Pi 4 Kubernetes cluster - arm64
    • Version: Latest
  • No arm64 containers

    Please default to making multi-arch containers whenever you create a Docker project, even more so when creating a Kubernetes one.

    $ k logs -n tor example-onion-service-tor-daemon-f8f94c688-mgwp4

    standard_init_linux.go:228: exec user process caused: exec format error
    
  • [REQUEST] Force all traffic in the namespace where the controller is deployed through Tor

    Is your feature request related to a problem? Please describe. Traffic to the internet from the onion services leaks through the normal internet connection.

    Describe the solution you'd like I would like all traffic in that namespace to be routed through the Tor network.

    Describe alternatives you've considered N/A

    Additional context It would probably be a value that one could set.

  • [REQUEST] Support specifying various `PodSpec` properties on the OnionService pods

    Is your feature request related to a problem? Please describe. I need to be able to control some spec properties on the onion service pods. My immediate pain is that I want to ensure the service continues to run even when the cluster comes under memory or CPU pressure, which means I need to be able to specify a higher priorityClassName for the pods.

    It would also be nice to be able to:

    1. add tolerations
    2. set resource requests/limits
    3. specify affinity rules

    Additionally, I currently don't have any specific use-cases in mind, but I could envision other users wanting to set other pod properties (ex: labels, annotations, hostNetwork, topologySpreadConstraints, etc). See https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodSpec for a full list of PodSpec properties.

    Describe the solution you'd like Add a "template" property to the OnionService spec:

    apiVersion: tor.k8s.torproject.org/v1alpha2
    kind: OnionService
    metadata:
      name: example-onion-service
    spec:
      version: 3
      template:
        spec:
          priorityClassName: high-priority
          tolerations: []
          resources: {}
          affinity: {}
      rules: [...]
    

    Rather than manually creating this template spec for this project, it may be best to leverage the existing "PodTemplateSpec" (although this may introduce complications/confusion if users try to define the containers in the spec?).

    Describe alternatives you've considered I considered changing the default priorityClass to something higher, and setting all less-crucial workloads to a lower class. This does not work for me, because there are several other 3rd party projects that don't support controlling their workloads' priority, and the OnionService is the only one I would want to be considered a high-priority class.

    Additional context I would love to make Onion Services the primary ingress channel into my cluster (potentially even for access to the control-plane), so I am very interested in trying to make it more robust and reliable.

    I would be happy to start on a PR to support this, if you are happy with the strategy.

  • [BUG] trying to consume secret for private key fails

    Using a tor v3 private key, created via:

    kubectl create secret generic test-onion-key --from-file=hs_ed25519_secret_key

    and then referenced in the YAML:

    privateKeySecret:
        name: test-onion-key
        key: private_key
    

    as per the documentation. The pod fails to create with:

    Warning FailedMount 6s (x5 over 14s) kubelet MountVolume.SetUp failed for volume "private-key" : references non-existent secret key: privateKeyFile

    I suspect it's just a configuration error, but I can't seem to debug it and am sure it's just missing documentation. Please advise.

    FULL YAML:

    apiVersion: tor.k8s.torproject.org/v1alpha2
    kind: OnionService
    metadata:
      name: test-site-deployment-tor
    spec:
      version: 3
      rules:
        - port:
            number: 80
          backend:
            service:
              name: test-site-deployment
              port:
                number: 80
      privateKeySecret:
        name: test-onion-key
        key: private_key
    
  • Startup problem with ingress

    Hello, I don't understand how to make this work with an ingress.

    I set up an Ingress with two paths:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: http-app-ingress
      annotations:
        nginx.ingress.kubernetes.io/rewrite-target: /
    spec:
      defaultBackend:
        service:
          name: service1
          port:
            number: 80
      rules:
      - host: '*.onion'
        http:
          paths:
          - path: /foo/
            pathType: Prefix
            backend:
              service:
                name: service1
                port:
                  number: 80
          - path: /bar/
            pathType: Prefix
            backend:
              service:
                name: service2
                port:
                  number: 80
    

    And then try to set up the OnionService:

    apiVersion: tor.k8s.torproject.org/v1alpha2
    kind: OnionService
    metadata:
      name: example-onion-service
    spec:
      ... (secrets and other) ...
      rules:
        - port:
            number: 80
          backend:
            service:
              # name: service1  # working
              name: http-app-ingress  # not working. What am I doing wrong?
              port:
                number: 80
    

    I try to get the onion address from the CLI, but there is no result:

    kubectl get onion
    NAME                    HOSTNAME   AGE
    example-onion-service              10m
    

    After 10 minutes the tor pod is not available (and not working).

  • [REQUEST] x25519 auth client key generation

    Hello, can you please add an x25519 key generation feature? For example, Tim has an ebook that only a select group of people can access. Rather than needing to contact Tim for auth credentials, they could pay a fee and, upon payment confirmation, be given their x25519 key.

    The service must restart when a client's authorization is revoked, but not when one is created, so I'd imagine it's possible.

    Thanks, Kuberwear

  • [REQUEST] Add support for handling client authorization via secrets

    Currently, tor-controller does not support the client authorization functionality that onion services provide, so authorization configuration needs to be handled the good old-fashioned way, separately from manifest-based onion service configuration. This makes it tedious when you want to use authorization for multiple onion services. For example, say you want to grant certain clients access to both an onion service pointing to a Gitea instance and a second onion service pointing to a wiki.

    I would propose using secrets for storing client authorization public keys, and mounting all secrets linked in an onion service's manifest into its corresponding '/authorized_clients' directory, so the structure could look similar to the one shown in the image attached to this issue.

    Using this structure would also allow restricting or extending client access to specific services in a fairly administrator-friendly way.

    This is just a draft, so if you have a more suitable approach or an enhancement idea, please comment your ideas and thoughts below.

    Greetz pf0

  • [BUG] `0.6.1` image tag missing from image registry

    Describe the bug The 0.1.6 chart release successfully published the images to the image registry (quay.io) under the latest tag, but does not appear to have published the version-pinned tags (0.6.1).

    See: https://quay.io/repository/bugfest/tor-controller?tab=tags&tag=latest https://quay.io/repository/bugfest/tor-onionbalance-manager?tab=tags&tag=latest

    To Reproduce Install the latest version via helm (0.1.6). Observe images fail to be pulled.

    Check https://quay.io/repository/bugfest/tor-controller?tab=tags&tag=latest, and see that 0.6.1 is missing.

    Expected behavior Both latest and 0.6.1 image tags should be available on the image repository.

    Additional information

    Error during pod startup:

    Failed to pull image "quay.io/bugfest/tor-controller:0.6.1": rpc error: code = NotFound
    

    System (please complete the following information):

    • Platform: amd64
    • Version v1.23.8+k3s2

    Additional context Installed via helm.

  • [REQUEST] tor-controller as http proxy

    Dear maintainers, I would like to use tor-controller as an HTTP proxy to make HTTP requests on the web.

    I can't see (from reading your documentation) how to create a Kubernetes service (internal/external) bound to the Tor pod (tor launched with the option HTTPTunnelPort:XXX).

    Could you help me?

  • [BUG] echoserver is not multiarch

    Describe the bug The echoserver container used in the examples is not multi-arch; e.g., it fails on arm64.

    To Reproduce

    $ uname -m
    aarch64
    
    $ kubectl apply -f https://raw.githubusercontent.com/bugfest/tor-controller/master/hack/sample/echoserver.yaml
    kubectl get po
    NAME                                                READY   STATUS             RESTARTS       AGE
    http-app-688bc87b88-t67dm                           0/1     CrashLoopBackOff   10   1h
    http-app-688bc87b88-ljn9l                           0/1     CrashLoopBackOff   10   1h
    
    $ kubectl logs po/http-app-688bc87b88-ljn9l
    standard_init_linux.go:228: exec user process caused: exec format error
    

    Expected behavior echoserver pod is up

    Additional information n/a

    System (please complete the following information):

    • Platform: arm64
    • Version chart 0.1.3 / app version 0.5.0

    Additional context n/a

  • [BUG] OnionBalancedService periodically stops working, resulting in Onion Service not being found

    Describe the bug After running an OnionBalancedService for a period of time, eventually the onion address is no longer resolvable.

    Attempting to reach my onion service via the tor browser returns:

    Onionsite Not Found
    
    An error occurred during a connection to [redacted].onion. 
    
    Details: 0xF0 — The requested onion service descriptor can't be found on the hashring and therefore the service is not reachable by the client.
    

    All "obb" pods appear to be working as expected, but the "daemon" pod potentially has deadlocked after a restart (see below for details). Deleting the daemon pod, and allowing it to be recreated/restarted resolves the issue.

    To Reproduce I have not figured out specific steps to reproduce this yet, other than waiting long enough. Although, I have a suspicion it happens when the pod restarts itself (I will continue to try and narrow down more specific repro steps).

    Expected behavior The onion service should always be available as long as the daemon and obb pods are running.

    Additional information

    Logs from the onionbalance container of the daemon pod:

    time="2023-01-06T23:08:33Z" level=info msg="Listening for events"
    time="2023-01-06T23:08:33Z" level=info msg="Running event controller"
    time="2023-01-06T23:08:33Z" level=info msg="Starting controller"
    W0106 23:08:33.805173       1 shared_informer.go:372] The sharedIndexInformer has started, run more than once is not allowed
    time="2023-01-06T23:08:33Z" level=info msg="Added onionBalancedService: ingress/tor-service"
    time="2023-01-06T23:08:35Z" level=info msg="Getting key ingress/tor-service"
    

    NOTE: the actual time is now 8 hours later, so onionbalance has not logged any additional activity for quite some time (deadlock?).

    On a successful launch, I see something along the lines of:

    [...]
    time="2023-01-07T10:50:04Z" level=info msg="Getting key ingress/tor-service"
    time="2023-01-07T10:50:04Z" level=info msg="Updating onionbalance config for ingress/tor-service"
    reloading onionbalance...
    starting onionbalance...
    2023-01-07 10:50:15,789 [WARNING]: Initializing onionbalance (version: 0.2.2)...
    [...]
    

    System (please complete the following information):

    • Platform: amd64
    • Version: v1.25.5-k3s1

    Additional context This does not happen often, but it has occurred 4 or 5 times over the past ~3 months. Anecdotally, I believe the last few times this has happened was after/around performing system upgrades on my cluster (ex: upgrading Kubernetes, or restarting nodes), where lots of pods are bouncing around.

    The remedy is simple (manually restart the daemon pod), but an automated fix would be preferred. If actually resolving the deadlock (if that's truly the issue...) is overly complex to diagnose at this time, I wonder if an easier fix might be to simply add a probe that can properly detect this condition? Any thoughts on how I could do this?
