Hubble - Network, Service & Security Observability for Kubernetes using eBPF

Hubble logo

Network, Service & Security Observability for Kubernetes

What is Hubble?

Hubble is a fully distributed networking and security observability platform for cloud native workloads. It is built on top of Cilium and eBPF to enable deep visibility into the communication and behavior of services as well as the networking infrastructure in a completely transparent manner.

Hubble can answer questions such as:

Service dependencies & communication map:

  • What services are communicating with each other? How frequently? What does the service dependency graph look like?
  • What HTTP calls are being made? What Kafka topics does a service consume from or produce to?

Operational monitoring & alerting:

  • Is any network communication failing? Why is communication failing? Is it DNS? Is it an application or network problem? Is the communication broken on layer 4 (TCP) or layer 7 (HTTP)?
  • Which services have experienced DNS resolution problems in the last 5 minutes? Which services have recently experienced an interrupted TCP connection or seen connections timing out? What is the rate of unanswered TCP SYN requests?

Application monitoring:

  • What is the rate of 5xx or 4xx HTTP response codes for a particular service or across all clusters?
  • What are the 95th and 99th percentile latencies between HTTP requests and responses in my cluster? Which services are performing the worst? What is the latency between two services?

Security observability:

  • Which services had connections blocked due to network policy? What services have been accessed from outside the cluster? Which services have resolved a particular DNS name?
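
Many of these questions translate directly into hubble observe filters. As a minimal sketch (the time windows and filter values here are illustrative, not prescriptive):

hubble observe --since=5m -t l7 --protocol dns     # recent DNS traffic, including failing lookups
hubble observe --since=5m --verdict DROPPED        # flows denied by policy or dropped by the datapath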

Why Hubble?

The Linux kernel technology eBPF enables visibility into systems and applications at a granularity and efficiency that was not possible before. It does so in a completely transparent way, without requiring any changes to the application. By building on top of Cilium, Hubble can leverage eBPF for visibility. Because eBPF is programmable, Hubble can take a dynamic approach that minimizes overhead while providing deep and detailed insight where required. Hubble has been created and specifically designed to make the best use of these eBPF capabilities.

Releases

Version   Release Date           Supported Cilium Version   Artifacts
v0.7      2020-10-22 (v0.7.1)    Cilium 1.9, Cilium 1.8     GitHub Release
v0.6      2020-05-29 (v0.6.1)    Cilium 1.8                 GitHub Release
v0.5      2020-07-28 (v0.5.2)    Cilium 1.7                 GitHub Release

Component Stability

The Hubble project consists of several components (see the Architecture section).

While the core Hubble components have been running in production in multiple environments, new components continue to emerge as the project grows and expands in scope.

Some components, due to their relatively young age, are still considered beta and should be used with caution in critical production workloads.

Component        Area       State
Hubble CLI       Core       Stable
Hubble Server    Core       Stable
Hubble Metrics   Core       Stable
Hubble Relay     Multinode  Stable
Hubble UI        UI         Beta

Architecture

Hubble Architecture

Getting Started

Features

Service Dependency Graph

Troubleshooting connectivity in a microservices application is a challenging task. Simply looking at "kubectl get pods" does not reveal the dependencies between services, external APIs, or databases.

Hubble enables zero-effort, automatic discovery of the service dependency graph for Kubernetes clusters at L3/L4 and even L7, allowing user-friendly visualization and filtering of these data flows as a Service Map.
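
One low-effort way to explore the Service Map is to port-forward the Hubble UI service and open it in a browser. A sketch, assuming the UI was deployed under its default service name in kube-system (12000 is an arbitrary local port):

kubectl port-forward -n kube-system svc/hubble-ui 12000:80
# then browse to http://localhost:12000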

See Hubble Service Map Tutorial for more examples.

Service Map

Metrics & Monitoring

The metrics and monitoring functionality provides an overview of the state of your systems and allows you to recognize patterns indicating failure and other scenarios that require action. The following is a short list of example metrics; for a more detailed list, see the Metrics Documentation.
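
As a sketch of how these metrics are typically enabled with the standalone Hubble Helm chart of this era (the metric names in the comment are examples of what Hubble exposes in Prometheus format):

helm template hubble \
    --namespace kube-system \
    --set metrics.enabled="{dns,drop,tcp,flow,port-distribution,icmp,http}" \
    > hubble.yaml
# Hubble then exposes Prometheus metrics such as hubble_dns_queries_total,
# hubble_drop_total, and hubble_http_requests_total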

Networking Behavior

Networking

Network Policy Observation

Network Policy

HTTP Request/Response Rate & Latency

HTTP

DNS Request/Response Monitoring

DNS

Flow Visibility

Flow visibility exposes flow information at the network and application protocol level. This enables visibility into individual TCP connections, DNS queries, HTTP requests, Kafka communication, and much more.
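
Flows can be narrowed along many dimensions before inspection; a minimal sketch (the namespace and pod names below are placeholders):

hubble observe -f --namespace starwars             # follow all flows in a namespace
hubble observe -t l7 --pod starwars/x-wing         # only L7 flows involving one pod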

DNS Resolution

Identifying pods which have received a DNS response indicating failure:

hubble observe --since=1m -t l7 -j \
   | jq 'select(.l7.dns.rcode==3) | .destination.namespace + "/" + .destination.pod_name' \
   | sort | uniq -c | sort -r
  42 "starwars/jar-jar-binks-6f5847c97c-qmggv"

Successful query & response:

starwars/x-wing-bd86d75c5-njv8k            kube-system/coredns-5c98db65d4-twwdg      DNS Query deathstar.starwars.svc.cluster.local. A
kube-system/coredns-5c98db65d4-twwdg       starwars/x-wing-bd86d75c5-njv8k           DNS Answer "10.110.126.213" TTL: 3 (Query deathstar.starwars.svc.cluster.local. A)

Non-existent domain:

starwars/jar-jar-binks-789c4b695d-ltrzm    kube-system/coredns-5c98db65d4-f4m8n      DNS Query unknown-galaxy.svc.cluster.local. A
starwars/jar-jar-binks-789c4b695d-ltrzm    kube-system/coredns-5c98db65d4-f4m8n      DNS Query unknown-galaxy.svc.cluster.local. AAAA
kube-system/coredns-5c98db65d4-twwdg       starwars/jar-jar-binks-789c4b695d-ltrzm   DNS Answer RCode: Non-Existent Domain TTL: 4294967295 (Query unknown-galaxy.starwars.svc.cluster.local. A)
kube-system/coredns-5c98db65d4-twwdg       starwars/jar-jar-binks-789c4b695d-ltrzm   DNS Answer RCode: Non-Existent Domain TTL: 4294967295 (Query unknown-galaxy.starwars.svc.cluster.local. AAAA)

HTTP Protocol

Successful request & response with latency information:

starwars/x-wing-bd86d75c5-njv8k:53410      starwars/deathstar-695d8f7ddc-lvj84:80    HTTP/1.1 GET http://deathstar/
starwars/deathstar-695d8f7ddc-lvj84:80     starwars/x-wing-bd86d75c5-njv8k:53410     HTTP/1.1 200 1ms (GET http://deathstar/)
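
A sketch of the kind of command that surfaces such L7 HTTP flows, assuming HTTP visibility is enabled for the workloads involved:

hubble observe -t l7 --protocol http --since=1m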

TCP/UDP Packets

Successful TCP connection:

starwars/x-wing-bd86d75c5-njv8k:53410      starwars/deathstar-695d8f7ddc-lvj84:80    TCP Flags: SYN
deathstar.starwars.svc.cluster.local:80    starwars/x-wing-bd86d75c5-njv8k:53410     TCP Flags: SYN, ACK
starwars/x-wing-bd86d75c5-njv8k:53410      starwars/deathstar-695d8f7ddc-lvj84:80    TCP Flags: ACK, FIN
deathstar.starwars.svc.cluster.local:80    starwars/x-wing-bd86d75c5-njv8k:53410     TCP Flags: ACK, FIN

Connection timeout:

starwars/r2d2-6694d57947-xwhtz:60948   deathstar.starwars.svc.cluster.local:8080     TCP Flags: SYN
starwars/r2d2-6694d57947-xwhtz:60948   deathstar.starwars.svc.cluster.local:8080     TCP Flags: SYN
starwars/r2d2-6694d57947-xwhtz:60948   deathstar.starwars.svc.cluster.local:8080     TCP Flags: SYN
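
Repeated SYNs with no SYN-ACK in reply are the signature of a connection timeout. As a starting point (a sketch; further filtering is needed to exclude answered SYNs), such attempts can be surfaced by TCP flags:

hubble observe --since=1m --tcp-flags SYN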

Network Policy Behavior

Denied connection attempt:

starwars/enterprise-5775b56c4b-thtwl:37800   starwars/deathstar-695d8f7ddc-lvj84:80(http)   Policy denied (L3)   TCP Flags: SYN
starwars/enterprise-5775b56c4b-thtwl:37800   starwars/deathstar-695d8f7ddc-lvj84:80(http)   Policy denied (L3)   TCP Flags: SYN
starwars/enterprise-5775b56c4b-thtwl:37800   starwars/deathstar-695d8f7ddc-lvj84:80(http)   Policy denied (L3)   TCP Flags: SYN
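
Flows denied by network policy carry the DROPPED verdict, so they can be isolated directly (a sketch; the pod name is taken from the example above):

hubble observe --verdict DROPPED --pod starwars/deathstar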

Community

Join the Cilium Slack #hubble channel to chat with Cilium Hubble developers and other Cilium / Hubble users. This is a good place to learn about Hubble and Cilium, ask questions, and share your experiences.

Learn more about Cilium.

Authors

Hubble is an open source project licensed under the Apache License. Everybody is welcome to contribute. The project follows the Governance Rules of the Cilium project. See CONTRIBUTING for instructions on how to contribute and details of the Code of Conduct.

Comments
  • hubble status reports Max Flows 0/0 and Unavailable Nodes

    Trying to enable hubble ui in a cluster where cilium was installed with helm:

    cilium hubble enable --ui --create-ca --relay-version v1.10.3
    

    (The --relay-version is a workaround for https://github.com/cilium/cilium-cli/issues/456)

    After port-forwarding, hubble status reports Max Flows 0/0 and all nodes Unavailable, even though running cilium status in each cilium pod shows Max Flows 4095/4095.

    No known workaround.

    Is this another case of cilium-cli being incompatible with a helm-installed Cilium? We wouldn't have to blaze that trail if cilium-cli were able to install Cilium chained to eks-vpc-cni.

  • Unable to load UI. `Error: getaddrinfo EAI_AGAIN`

    When I port-forward the hubble-ui service and try to load the UI in a browser, the following happens:

    • the web page remains stuck on the "The application is loading, please wait..." page.
    • the logs of the hubble-ui pod show the following message:
    {
      "name": "frontend",
      "hostname": "hubble-ui-79b6c7c67-z4bs5",
      "pid": 19,
      "req_id": "101ee530-14a9-4580-868a-66fed7c6fd49",
      "user": "admin@localhost",
      "level": 50,
      "err": {
        "message": "Can't fetch namespaces via k8s api: Error: getaddrinfo EAI_AGAIN $ENTER_AKS_CLUSTER_DOMAIN_NAME",
        "locations": [
          {
            "line": 4,
            "column": 7
          }
        ],
        "path": [
          "viewer",
          "clusters"
        ],
        "extensions": {
          "code": "INTERNAL_SERVER_ERROR"
        }
      },
      "msg": "",
      "time": "2020-03-08T18:09:56.167Z",
      "v": 0
    }
    
  • Install Hubble from installation guide failing

    Hi, when trying to follow the instructions at https://github.com/cilium/hubble/blob/master/Documentation/installation.md, once you reach the Hubble step and run this command:

    helm template hubble \
        --namespace kube-system \
        --set metrics.enabled="{dns,drop,tcp,flow,port-distribution,icmp,http}" \
        > hubble.yaml
    

    it fails with:

    Error: rendering template failed: runtime error: invalid memory address or nil pointer dereference
    

    I also tried installing without any metrics and it still fails; it looks like the template here does not work. Can you please update the guidelines if anything else is expected?

  • Add endpoint workload filters

    This adds support to the Hubble CLI for filtering against endpoint workloads. The server side of this was implemented in https://github.com/cilium/cilium/pull/21296.

  • Flows don't show up on GKE

    Flows and arrows are not visible in the Hubble UI, yet flows for the "hubble" namespace are visible. Running on GKE.

    Running procedure:

    helm template cilium \
      --namespace cilium \
      --set global.nodeinit.enabled=true \
      --set nodeinit.reconfigureKubelet=true \
      --set nodeinit.removeCbrBridge=true \
      --set global.cni.binPath=/home/kubernetes/bin \
      --set global.tag=v1.7.0-rc1 \
      > cilium.yaml
    
    helm template hubble \
        --namespace hubble \
        --set metrics.enabled="{dns,drop,tcp,flow,port-distribution,icmp,http}" \
        --set ui.enabled=true \
        > hubble.yaml
    

    I can confirm that flows are visible in "cilium monitor", "hubble observe", and "kubectl get cep".

  • OpenTelemetry Support

    Dear Hubble Community,

    We are currently migrating to Cilium as our networking solution and are very excited to use Hubble for observability.

    However, one thing is missing for us to be fully happy: OpenTelemetry (OpenTracing) support. I can see it was mentioned in the roadmap around the Cilium 1.0 release:

    "The Roadmap Ahead: Integration with OpenTracing, Jaeger, and Zipkin: The minimal overhead of BPF makes it the ideal technology to provide tracing and telemetry functionality without imposing additional system load."

    However, I haven't found any code or issues connected to it. I thought that Cilium Go Extensions might be the right place to implement it. Then I checked Hubble, and it looks like all the required data is already in place. I could potentially contribute this if you can give some guidance on whether Hubble Relay is the right place for it.

  • network: unable to connect to Cilium daemon

    I would like to ask how to clean up the Cilium environment.

    I followed the official documentation:

    # install
    kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/v1.7.0/install/kubernetes/quick-install.yaml
    
    # delete
    kubectl delete -f https://raw.githubusercontent.com/cilium/cilium/v1.7.0/install/kubernetes/quick-install.yaml
    

    After that, I found that none of my pods can be created properly. I have already deleted the Cilium CRDs. Do I need to delete anything else?

    Error message

    # kubectl get pod  | grep httpd
    httpd-596db6fdc4-4r22k                                 0/1     ContainerCreating   0          15m
    httpd-596db6fdc4-5xldk                                 0/1     ContainerCreating   0          15m
    
    # kubectl describe pod
    Events:
      Type     Reason                  Age    From                             Message
      ----     ------                  ----   ----                             -------
      Normal   Scheduled               10m    default-scheduler                Successfully assigned default/httpd-596db6fdc4-5xldk to node001
      Warning  FailedCreatePodSandBox  9m17s  kubelet, node001  Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "ffcd455f1ab5483a17f87cdad35beaea980e61317dbe35b788cac7953e72c95f" network for pod "httpd-596db6fdc4-5xldk": NetworkPlugin cni failed to set up pod "httpd-596db6fdc4-5xldk_default" network: unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get http:///var/run/cilium/cilium.sock/v1/config: dial unix /var/run/cilium/cilium.sock: connect: no such file or directory
    Is the agent running?
    
  • Verdict events doubling

    Dear Hubble community,

    While logging traffic with Hubble:

    hubble observe -f --server hubble-relay:80 -o json --tcp-flags ACK --not --tcp-flags SYN
    

    Most events appear twice in the output; the only difference is the logging timestamp, e.g.:

    {"time":"2021-10-18T11:30:20.830417817Z","verdict":"FORWARDED","ethernet":{"source":"66:54:11:3e:bd:de","destination":"12:7a:c7:e0:b1:28"},"IP":{"source":"10.0.2.75","destination":"10.45.80.193","ipVersion":"IPv4"},"l4":{"TCP":{"source_port":49488,"destination_port":6443,"flags":{"ACK":true}}},"source":{"ID":140,"identity":6013,"namespace":"ingress-nginx","labels":["k8s:app.kubernetes.io/component=controller","k8s:app.kubernetes.io/instance=ingress-nginx","k8s:app.kubernetes.io/name=ingress-nginx","k8s:io.cilium.k8s.namespace.labels.field.cattle.io/projectId=p-8xjww","k8s:io.cilium.k8s.namespace.labels.name=ingress-nginx","k8s:io.cilium.k8s.policy.cluster=default","k8s:io.cilium.k8s.policy.serviceaccount=ingress-nginx","k8s:io.kubernetes.pod.namespace=ingress-nginx"],"pod_name":"ingress-nginx-controller-db9d9c7f4-gjllb"},"destination":{"identity":6,"labels":["reserved:remote-node"]},"Type":"L3_L4","node_name":"dev-wg-app1","event_type":{"type":4,"sub_type":3},"traffic_direction":"EGRESS","trace_observation_point":"TO_STACK","is_reply":false,"Summary":"TCP Flags: ACK"}
    {"time":"2021-10-18T11:30:26.853421611Z","verdict":"FORWARDED","ethernet":{"source":"66:54:11:3e:bd:de","destination":"12:7a:c7:e0:b1:28"},"IP":{"source":"10.0.2.75","destination":"10.45.80.193","ipVersion":"IPv4"},"l4":{"TCP":{"source_port":49488,"destination_port":6443,"flags":{"ACK":true}}},"source":{"ID":140,"identity":6013,"namespace":"ingress-nginx","labels":["k8s:app.kubernetes.io/component=controller","k8s:app.kubernetes.io/instance=ingress-nginx","k8s:app.kubernetes.io/name=ingress-nginx","k8s:io.cilium.k8s.namespace.labels.field.cattle.io/projectId=p-8xjww","k8s:io.cilium.k8s.namespace.labels.name=ingress-nginx","k8s:io.cilium.k8s.policy.cluster=default","k8s:io.cilium.k8s.policy.serviceaccount=ingress-nginx","k8s:io.kubernetes.pod.namespace=ingress-nginx"],"pod_name":"ingress-nginx-controller-db9d9c7f4-gjllb"},"destination":{"identity":6,"labels":["reserved:remote-node"]},"Type":"L3_L4","node_name":"dev-wg-app1","event_type":{"type":4,"sub_type":3},"traffic_direction":"EGRESS","trace_observation_point":"TO_STACK","is_reply":false,"Summary":"TCP Flags: ACK"}
    

    How can this be explained and avoided? Thanks!

  • cmd/node: Refactor & Test output methods

    This PR aims to achieve the following:

    • [x] Refactor, where applicable, to test output functions.
    • [x] Add table driven inputs for invoking certain output functionality.

    Signed-off-by: Simarpreet Singh [email protected]

  • Remove contrib/scripts/release.sh

    Rename the current release make target to local-release, and update the release target to generate release artifacts from inside Docker.

    Signed-off-by: Michi Mutsuzaki [email protected]

  • Hubble UI cannot render due to Error: unable to get issuer certificate

    We cannot render the hubble-ui due to the error message below:

    "message":"Can't fetch namespaces via k8s api: Error: unable to get issuer certificate","locations":[{"line":4,"column":7}],"path":["viewer","clusters"],"extensions":{"code":"INTERNAL_SERVER_ERROR"}}
    

    { name: 'inCluster', caFile: '/var/run/secrets/kubernetes.io/serviceaccount/ca.crt', server: 'https://10.110.121.43:443', skipTLSVerify: false }

  • build(deps): bump actions/upload-artifact from 3.1.0 to 3.1.2

    Bumps actions/upload-artifact from 3.1.0 to 3.1.2.

    Release notes

    Sourced from actions/upload-artifact's releases.

    v3.1.2

    • Update all @actions/* NPM packages to their latest versions (#374)
    • Update all dev dependencies to their most recent versions (#375)

    v3.1.1

    • Update actions/core package to latest version to remove the set-output deprecation warning (#351)
    You can trigger a rebase of this PR by commenting @dependabot rebase.


  • Can Hubble UI enable a user authentication mechanism?

    I am new to Hubble. When I installed Cilium and Hubble via Helm, I found that the hubble-ui does not seem to be secured by TLS and has no user authentication mechanism. Thus any pod or worker node can access hubble-ui-service-ip:80 and see the service maps of the whole cluster. Have I missed something? Looking forward to your reply! :)

  • Request for incremental release of 0.10.x to address Go security vulnerabilities

    The current Hubble 0.10.0 contains 16 Go-related CVEs. Updating Hubble to Go 1.18.9 will address the CVEs that have accumulated since the June 2022 release of 0.10.0. I am requesting an incremental release of 0.10.x with this issue. Has there been any thought of aligning the Hubble incremental release cadence with Cilium's (1.12.5 came out last week and updated to Go 1.18.9)?

  • ExternalName k8s Services - Hubble display

    Hi,

    I'm having some issues trying to display external services.

    Here are the details:

    I have an ExternalName service defined as follows:

    kind: Service
    apiVersion: v1
    metadata:
      name: "searchmaster"
      labels:
        ressourcetype: service-solr-cd
        env: cd
    spec:
      type: ExternalName
      externalName: searchmaster.mydomain.local 
    

    Now, I have a pod that calls this service and that also calls a MySQL URL (which is not defined as a Kubernetes service). So basically the configuration is:

    searchMasterUrl: http://searchmaster:8080
    mySqlUrl: np-mysql01.mydomain.local

    Here is what I see in Hubble: I can see that DNS resolution works, because the IP shows up in the Hubble log, but the flow is flagged as "World".

    Is there any way I can display my ExternalName service's name to identify these flows?

    I might be misunderstanding something, because this should arguably work out of the box: I use a DNS name, so it should be caught by the DNS rules.

    Cilium Version

    v1.12.4

    Kernel Version

    5.10.0-14-amd64

    Kubernetes Version

    v1.25.4

    Hubble version

    v1.12.4

    Thanks for your help !

    Regards

  • Support for filtering on HTTP headers

    HTTP flows contain headers, but Hubble doesn't support filtering flows based on HTTP headers. Using the CLI, we can already filter based on HTTP status codes, methods, and paths, but filtering on headers is still missing.
