Mizu: The API Traffic Viewer for Kubernetes

A simple-yet-powerful API traffic viewer for Kubernetes, enabling you to view all API communication between microservices to help you debug and troubleshoot regressions.

Think TCPDump and Wireshark re-invented for Kubernetes.

Simple UI

Features

  • Simple and powerful CLI
  • Monitoring network traffic in real time, with support for multiple protocols
  • Works with Kubernetes APIs; no installation or code instrumentation required
  • Rich filtering

Requirements

A Kubernetes server version of 1.16.0 or higher is required.

Download

Download Mizu for your platform and operating system

Latest Stable Release

  • For macOS (Intel)
curl -Lo mizu \
https://github.com/up9inc/mizu/releases/latest/download/mizu_darwin_amd64 \
&& chmod 755 mizu
  • For Linux (Intel 64-bit)
curl -Lo mizu \
https://github.com/up9inc/mizu/releases/latest/download/mizu_linux_amd64 \
&& chmod 755 mizu

SHA256 checksums are available on the Releases page
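The checksum verification step can be sketched as follows. To keep the example self-contained it uses a stand-in file and digest; in practice, compare the downloaded binary against the checksum published on the Releases page:

```shell
# Create a stand-in "binary" and record its SHA256 digest
# (this mimics the checksum file published alongside a release).
printf 'binary contents\n' > mizu
sha256sum mizu > mizu.sha256

# Verify: recompute the digest and compare; prints "mizu: OK" on success.
sha256sum -c mizu.sha256
```

On macOS, `shasum -a 256 -c` can be used in place of `sha256sum -c`.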

Development (unstable) Build

Pick one from the Releases page

How to Run

  1. Find the pods you'd like to tap in your Kubernetes cluster
  2. Run mizu tap or mizu tap PODNAME
  3. Open your browser at http://localhost:8899/mizu, or as instructed in the CLI
  4. Watch the API traffic flowing
  5. Type ^C to stop

Examples

Run mizu help for usage options

To tap all pods in the current namespace:

 $ kubectl get pods 
 NAME                            READY   STATUS    RESTARTS   AGE
 carts-66c77f5fbb-fq65r          2/2     Running   0          20m
 catalogue-5f4cb7cf5-7zrmn       2/2     Running   0          20m
 front-end-649fc5fd6-kqbtn       2/2     Running   0          20m
 ..

 $ mizu tap
 +carts-66c77f5fbb-fq65r
 +catalogue-5f4cb7cf5-7zrmn
 +front-end-649fc5fd6-kqbtn
 Web interface is now available at http://localhost:8899
 ^C

To tap a specific pod:

 $ kubectl get pods 
 NAME                            READY   STATUS    RESTARTS   AGE
 front-end-649fc5fd6-kqbtn       2/2     Running   0          7m
 ..

 $ mizu tap front-end-649fc5fd6-kqbtn
 +front-end-649fc5fd6-kqbtn
 Web interface is now available at http://localhost:8899
 ^C

To tap multiple pods using a regex:

 $ kubectl get pods 
 NAME                            READY   STATUS    RESTARTS   AGE
 carts-66c77f5fbb-fq65r          2/2     Running   0          20m
 catalogue-5f4cb7cf5-7zrmn       2/2     Running   0          20m
 front-end-649fc5fd6-kqbtn       2/2     Running   0          20m
 ..

 $ mizu tap "^ca.*"
 +carts-66c77f5fbb-fq65r
 +catalogue-5f4cb7cf5-7zrmn
 Web interface is now available at http://localhost:8899
 ^C

Configuration

Mizu can optionally work with a config file, provided as a CLI argument (using --set config-path=<PATH>). If no path is provided, the config file is stored at ${HOME}/.mizu/config.yaml. If only a partial configuration is defined, defaults are used for all other fields.
You can always override the defaults or the config file with CLI flags.

To get the default config params, run mizu config
To generate a new config file with default values, use mizu config -r
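As an illustration, a partial ${HOME}/.mizu/config.yaml might look like the sketch below. The key names mirror the --set options shown in this document (tap.namespaces, tap.ignored-user-agents, mizu-resources-namespace), but treat the exact layout as an assumption and verify it against the output of mizu config:

```yaml
# Hypothetical partial config; unspecified fields fall back to defaults.
tap:
  namespaces:
    - sock-shop
  ignored-user-agents:
    - kube-probe
    - prometheus
mizu-resources-namespace: mizu
```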

Advanced Usage

Kubeconfig

It is possible to change the kubeconfig path using the KUBECONFIG environment variable or the command-line flag --set kube-config-path=<PATH>.
If neither is set, Mizu assumes the configuration is at ${HOME}/.kube/config
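The two mechanisms above can be sketched like this (the kubeconfig path is a placeholder, not a real file):

```shell
# Option 1: point Mizu at a kubeconfig via the environment variable.
KUBECONFIG=$HOME/clusters/staging.yaml mizu tap

# Option 2: pass the path explicitly via the CLI flag.
mizu tap --set kube-config-path=$HOME/clusters/staging.yaml
```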

Namespace-Restricted Mode

Some users only have permission to manage resources in one particular namespace assigned to them. By default, mizu tap creates a new namespace, mizu, for all of its Kubernetes resources. To install Mizu in an existing namespace instead, set the mizu-resources-namespace config option

If mizu-resources-namespace is set to a value other than the default mizu, Mizu will operate in a Namespace-Restricted mode. It will only tap pods in mizu-resources-namespace. This way Mizu only requires permissions to the namespace set by mizu-resources-namespace. The user must set the tapped namespace to the same namespace by using the --namespace flag or by setting tap.namespaces in the config file

Setting mizu-resources-namespace=mizu resets Mizu to its default behavior
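Putting the above together, a Namespace-Restricted invocation might look like this (my-ns is a placeholder namespace; the --namespaces spelling follows the examples elsewhere in this document):

```shell
# Install Mizu's resources into an existing namespace and restrict
# tapping to that same namespace.
mizu tap --set mizu-resources-namespace=my-ns --namespaces=my-ns
```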

For a detailed list of Kubernetes permissions, see the PERMISSIONS document

User agent filtering

Traffic from specific user agents (such as health checks) can be filtered out using command-line options:

$ mizu tap "^ca.*" --set tap.ignored-user-agents=kube-probe --set tap.ignored-user-agents=prometheus
+carts-66c77f5fbb-fq65r
+catalogue-5f4cb7cf5-7zrmn
Web interface is now available at http://localhost:8899
^C

Any request whose User-Agent header contains one of the specified values (kube-probe or prometheus) will not be captured

Traffic validation rules

This feature allows you to define a set of simple rules and test the traffic against them. Such validation may check responses for specific JSON fields, headers, and more.

Please see TRAFFIC RULES page for more details and syntax.

OpenAPI Specification (OAS) Contract Monitoring

An OAS/Swagger file can contain schemas under its parameters and responses fields. With the --contract catalogue.yaml CLI option, you can pass your API description to Mizu, and the traffic will automatically be validated against the contracts.

Please see CONTRACT MONITORING page for more details and syntax.
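As a sketch, a minimal contract file passed via --contract might look like the fragment below. The endpoint and schema are invented for illustration; real files follow the OpenAPI 3 specification:

```yaml
# Hypothetical catalogue.yaml: responses for GET /catalogue must be a
# JSON array of objects with an "id" string field.
openapi: "3.0.0"
info:
  title: catalogue
  version: "1.0"
paths:
  /catalogue:
    get:
      responses:
        "200":
          description: list of catalogue items
          content:
            application/json:
              schema:
                type: array
                items:
                  type: object
                  properties:
                    id:
                      type: string
```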

Configure proxy host

By default, Mizu is accessible via localhost at http://localhost:8899/mizu/. The host can be changed, for instance to 0.0.0.0, which grants access via the machine's IP address. This setting can be changed via the command-line flag --set tap.proxy-host=<value> or via the config file (tap: proxy-host: 0.0.0.0).

Run in daemon mode

Mizu can be run detached from the CLI using the --daemon flag: mizu tap --daemon. This type of Mizu instance will run indefinitely in the cluster.

For more information please refer to DAEMON MODE

Comments
  •  Error while proxying request: context canceled

    When using mizu tap ".*", the following error occurs: E1021 21:34:46.539954 3777907 proxy_server.go:147] Error while proxying request: context canceled

    logs from mizu-tapper-daemon-set pod: panic: Error connecting to socket server at ws://mizu-api-server.mizu.svc.cluster.local/wsTapper dial tcp: lookup mizu-api-server.mizu.svc.cluster.local: Try again

  • Start the tapper after the API server is ready and watch, stream events to web UI through WebSocket

    NOTE: Toast messages are removed from this PR upon request. See the comments after https://github.com/up9inc/mizu/pull/304#issuecomment-927650086


    This PR:

    • Starts the tapper after the API server is ready.
    • Prints an error in the CLI if the API server couldn't be deployed because of insufficient resources:
    Tapping pods in namespaces "sock-shop"
    +carts-5db79fbddf-qw28s
    +carts-db-6c6c68b747-psx54
    +catalogue-5f4cb4f68b-9g9ks
    +catalogue-db-96f6f6b4c-2vm7k
    +front-end-5c89db9f57-5q6ss
    +orders-8458b7f5db-bfff9
    +orders-db-659949975f-dj765
    +payment-f58b8c445-6bxkl
    +queue-master-84bbb789b7-v6wnq
    +rabbitmq-5bcbb547d7-4cbrs
    +session-db-7cf97f8d4f-z7jn6
    +shipping-548f696b44-mvx6s
    +user-756b89d69c-tt7hc
    +user-db-6df7444fc-72vrg
    Cannot deploy the API server. Reason: "0/1 nodes are available: 1 Insufficient cpu."
    Error creating resources: Post "https://192.168.99.103:8443/api/v1/namespaces/mizu/services": context canceled
    
    Removing mizu resources
    
    • Prints a similar message for the tapper in case of the same insufficient resources condition:
    ...
    +user-756b89d69c-tt7hc
    +user-db-6df7444fc-72vrg
    Mizu is available at http://localhost:8899/mizu
    Cannot deploy the tapper. Reason: "0/1 nodes are available: 1 Insufficient cpu."
    
    Removing mizu resources
    
    • Removes the hard-coded 25-second timeout of the API server deployment (the timeout for pulling the Mizu Agent image and deploying it as the API server).

    • Prints an error in the CLI if an incorrect agent image is supplied:

    ...
    +user-756b89d69c-nsghq
    +user-db-6df7444fc-wddmp
    Cannot deploy the API server. (ErrImagePull) Reason: "rpc error: code = Unknown desc = Error response from daemon: pull access denied for mertyildiran/mizuagent-123, repository does not exist or may require 'docker login': denied: requested access to the resource is denied"
    
    Removing mizu resources
    
    • Prints an error in the CLI if there is a network error while pulling the image:
    ...
    +user-756b89d69c-nsghq
    +user-db-6df7444fc-wddmp
    Cannot deploy the API server. (ErrImagePull) Reason: "rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io: Temporary failure in name resolution"
    
    Removing mizu resources
    
    • Rest of the events that come from the tapper watch, are displayed as toast messages in the web UI:

    Screenshot from 2021-09-25 22-10-05

    Screenshot from 2021-09-25 22-10-09

    Screenshot from 2021-09-25 22-18-15

    Screenshot from 2021-09-25 22-18-18

    Screenshot from 2021-09-25 23-24-51

  • Mizu API server was not ready in time

    [root@k8s-master-xxx mizu]# ./mizu tap
    Mizu will store up to 200MB of traffic, old traffic will be cleared once the limit is reached.
    Tapping pods in namespaces "default"
    +kali-roll-df47956b5-5mdtp
    +my-cert-manager-cainjector-5955cd77f8-j5g5x
    +my-cert-manager-ff65454bf-q547x
    +my-cert-manager-webhook-5ff8499f89-jmrrj
    +vault-7594bfbc57-vc4fx
    Waiting for Mizu Agent to start...
    Mizu API server was not ready in time

    Removing mizu resources

  • Service Unavailable

    Hi, after installing mizu binary I run: mizu tap ".*" -A

    I can see the list of pods being tapped in the terminal. I get: Mizu is available at http://localhost:8899/mizu

    However when I try to open that I get:

    { "kind": "Status", "apiVersion": "v1", "metadata": {}, "status": "Failure", "message": "error trying to reach service: dial tcp 10.2.19.212:8899: i/o timeout", "reason": "ServiceUnavailable", "code": 503 }

    kubectl is correctly working on my box. I can also see the temporary mizu-collector pod running in the cluster.

    Is it a port forward issue?

  • Implement AMQP request-response matcher

    While I tried to run the front-end locally, I got a missing types error, so I ran npm install --save-dev @types/ace; that's why there is a diff in the front-end code.

    Screenshot from 2022-07-08 23-06-31 Screenshot from 2022-07-08 23-06-39 Screenshot from 2022-07-08 23-06-41 Screenshot from 2022-07-08 23-06-53 Screenshot from 2022-07-08 23-06-57

  • Add `AF_PACKET` support

    This PR adds AF_PACKET support as the capture source. AF_PACKET is supported since Linux kernel version 2.2

    Adds AF_XDP support as the capture source. AF_XDP is supported since Linux kernel version 4.18

    Adds a packet-capture config alongside the tls config to set the capture source. Its default value is libpcap, so Mizu uses libpcap by default, as before. The possible values of the packet-capture flag are:

    • af_xdp
    • af_packet
    • libpcap

    An invalid value falls back to libpcap without a panic.

    AF_PACKET has a lower drop rate compared to libpcap.

  • How to set Hub (formerly agent) image/registry?

    Contact Details

    Is your feature request related to a problem? Please describe.

    We have not whitelisted Docker Hub in our environment. All the images we need, we import from Docker Hub into our private ACR. Is there any way to configure Kubeshark to pull images from a private ACR?

    We are getting below error -

    E1129 12:17:27.638349 23008 proxy_server.go:147] Error while proxying request: context canceled

    Original Thread

    No response

    Describe the solution you'd like to see

    Configuration to set image pull location.

    Provide additional context

    No response

    Code of Conduct

    • [X] I agree to follow this project's Code of Conduct
  • No analysis button appears in the UI.

    Describe the bug

    As described on Analysis in Mizu, an Analysis button should appear in the upper right corner of the Mizu web UI. It never does, though. And yes, I've waited for several minutes and tried in Edge, Google Chrome, and Firefox, to no avail.

    To Reproduce Steps to reproduce the behavior:

    1. Run mizu tap -A --analysis or mizu tap --namespaces NAME_OF_THE_NAMESPACE --analysis
    2. Let mizu do its magic and sniff API traffic
    3. Wait and see that the Analysis button does NOT appear

    Expected behavior That the Analysis button appears in the upper right corner.

    Logs Uploaded to this issue.

    Screenshots image

    Desktop (please complete the following information):

    • OS: Windows 10
    • Web Browser: Edge. Chrome and Firefox
  • Add the ability to set the insertion filter into CLI

    Adds the tap.insertion-filter field to the CLI configuration, which allows you to set the insertion filter of Basenine.

    The value can be a string, which is the filter itself:

    $ ./cli/bin/mizu__ tap -n sock-shop --set tap.insertion-filter=http
    

    or a path to a BFL file:

    $ ./cli/bin/mizu__ tap -n sock-shop --set tap.insertion-filter=/tmp/example.bfl
    

    example.bfl can be any filter. For example;

    Only inserts HTTP or AMQP traffic:

    http or amqp
    

    Only inserts when it's HTTP and the response status is 202:

    http and response.status == 202
    

    Replaces the value of request.path field with [REDACTED] string before the insertion:

    redact("request.path")
    
  • Running mizu fails silently with no logs

    Describe the bug

    To Reproduce Steps to reproduce the behavior:

    1. Run mizu tap or any variant of it which does NOT include mizu-resources-namespace and --namespaces
    2. See mizu terminate instantly with no logs or error output
    3. Run mizu tap --set mizu-resources-namespace=my-ns --namespaces=my-ns
    4. Mizu executes as expected
    5. Run mizu tap --set mizu-resources-namespace=mizu --namespaces=mizu
    6. See mizu terminate instantly with no logs or error output

    I'm admin on this cluster, and have even gone so far as to create the mizu namespace ahead of time after it failed originally.

    Expected behavior

    mizu tap command works without mizu-resources-namespace and --namespaces command

    Logs

    WARNING: No zip logs generated, only CLI logs.

    Screenshots

    n/a

    Desktop (please complete the following information):

    • OS: Darwin Kernel Version 20.6.0: Mon Aug 30 06:12:21 PDT 2021; root:xnu-7195.141.6~3/RELEASE_X86_64
    • Web Browser: n/a

    Additional context

    We do have an OPA policy which enforces a naming scheme on our namespaces unless a certain label is added. As such, I tried adding the mizu namespace manually with the partner: core label.

    Additionally, the Mac install instructions don't add mizu to your PATH, so I manually copied it to /usr/local/bin/mizu, though the issue was happening even when executing in my home directory with ./mizu instead of just mizu

    ➜  ~ rm -rf .mizu
    
    ➜  ~ mizu tap --set mizu-resources-namespace=mizu --namespaces=mizu --set dump-logs=true
    Mizu will store up to 200MB of traffic, old traffic will be cleared once the limit is reached.
    Tapping pods in namespaces "mizu"
    +bash
    Waiting for Mizu Agent to start...
    
    ➜  ~ cat .mizu/mizu_cli.log
    [2022-01-27T02:46:07.633+0000] DEBUG ▶ Checking for newer version... ▶ [57108 versionCheck.go:47 CheckNewerVersion]
    [2022-01-27T02:46:07.633+0000] DEBUG ▶ Init config finished
     Final config: {
            "Tap": {
                    "UploadIntervalSec": 10,
                    "PodRegexStr": ".*",
                    "GuiPort": 8899,
                    "ProxyHost": "127.0.0.1",
                    "Namespaces": [
                            "mizu"
                    ],
                    "Analysis": false,
                    "AllNamespaces": false,
                    "PlainTextFilterRegexes": null,
                    "IgnoredUserAgents": null,
                    "DisableRedaction": false,
                    "HumanMaxEntriesDBSize": "200MB",
                    "DryRun": false,
                    "Workspace": "",
                    "EnforcePolicyFile": "",
                    "ContractFile": "",
                    "AskUploadConfirmation": true,
                    "ApiServerResources": {
                            "CpuLimit": "750m",
                            "MemoryLimit": "1Gi",
                            "CpuRequests": "50m",
                            "MemoryRequests": "50Mi"
                    },
                    "TapperResources": {
                            "CpuLimit": "750m",
                            "MemoryLimit": "1Gi",
                            "CpuRequests": "50m",
                            "MemoryRequests": "50Mi"
                    },
                    "ServiceMesh": false
            },
            "Version": {
                    "DebugInfo": false
            },
            "View": {
                    "GuiPort": 8899,
                    "Url": ""
            },
            "Logs": {
                    "FileStr": ""
            },
            "Auth": {
                    "EnvName": "up9.app",
                    "Token": ""
            },
            "Config": {
                    "Regenerate": false
            },
            "AgentImage": "gcr.io/up9-docker-hub/mizu/main:0.22.0",
            "ImagePullPolicyStr": "Always",
            "MizuResourcesNamespace": "mizu",
            "Telemetry": true,
            "DumpLogs": true,
            "KubeConfigPathStr": "",
            "ConfigFilePath": "/Users/peter.dolkens/.mizu/config.yaml",
            "HeadlessMode": false,
            "LogLevelStr": "INFO",
            "ServiceMap": false,
            "OAS": false
    }
     ▶ [57108 config.go:57 InitConfig]
    [2022-01-27T02:46:07.633+0000] INFO  ▶ Mizu will store up to 200MB of traffic, old traffic will be cleared once the limit is reached. ▶ [57108 tap.go:82 func8]
    [2022-01-27T02:46:07.633+0000] DEBUG ▶ Using kube config /Users/peter.dolkens/.kube/config ▶ [57108 provider.go:1055 loadKubernetesConfiguration]
    [2022-01-27T02:46:08.017+0000] DEBUG ▶ successfully reported telemetry for cmd tap ▶ [57108 telemetry.go:36 ReportRun]
    [2022-01-27T02:46:08.175+0000] INFO  ▶ Tapping pods in namespaces "mizu" ▶ [57108 tapRunner.go:116 RunMizuTap]
    [2022-01-27T02:46:08.310+0000] INFO  ▶ +bash ▶ [57108 tapRunner.go:179 printTappedPodsPreview]
    [2022-01-27T02:46:08.310+0000] DEBUG ▶ Finished version validation, github version 0.22.0, current version 0.22.0, took 676.551796ms ▶ [57108 versionCheck.go:95 CheckNewerVersion]
    [2022-01-27T02:46:08.310+0000] INFO  ▶ Waiting for Mizu Agent to start... ▶ [57108 tapRunner.go:126 RunMizuTap]
    
    ➜  ~ ls .mizu
    total 8
    -rw-r--r--  1 peter.dolkens  staff  2466 Jan 27 02:46 mizu_cli.log
    
    ➜  ~ mizu tap datasync
    Mizu will store up to 200MB of traffic, old traffic will be cleared once the limit is reached.
    Tapping pods in namespaces "my-namespace"
    +datasync-deploy-7d94dc6446-d9h5k
    Waiting for Mizu Agent to start...
    
    ➜  ~ mizu tap --set mizu-resources-namespace=my-namespace --namespaces=my-namespace datasync
    Mizu will store up to 200MB of traffic, old traffic will be cleared once the limit is reached.
    Tapping pods in namespaces "my-namespace"
    +datasync-deploy-5c65c9868c-c44pb
    Waiting for Mizu Agent to start...
    Mizu is available at http://localhost:8899
    
    ➜  ~ k get ns mizu -o yaml
    apiVersion: v1
    kind: Namespace
    metadata:
      annotations:
        kubectl.kubernetes.io/last-applied-configuration: |
          {"apiVersion":"v1","kind":"Namespace","metadata":{"annotations":{"name":"mizu"},"labels":{"partner":"core"},"name":"mizu"}}
        name: mizu
      creationTimestamp: "2022-01-27T02:04:06Z"
      labels:
        kubernetes.io/metadata.name: mizu
        partner: core
      name: mizu
      resourceVersion: "175991464"
      uid: 078c01bb-9ef6-4f63-98aa-caefed0d2401
    spec:
      finalizers:
      - kubernetes
    status:
      phase: Active
    
  • Mizu API server was not ready in time

    Describe the bug Mizu API server was not ready in time.

    To Reproduce Steps to reproduce the behavior:

    1. Run mizu tap kieserver-proxy-7b6c685f44-4hdpv -n rule-ns

    Expected behavior Can work normally.

    Screenshots image

  • [Feature Request:] Request kubeshark to capture UDP traffic

    Contact Details

    No response

    Is your feature request related to a problem? Please describe.

    I'm working on a telecom product, and the default protocol is SIP, which by default runs over UDP. Is it possible to make Kubeshark capture UDP traffic?

    Original Thread

    No response

    Describe the solution you'd like to see

    No response

    Provide additional context

    No response

    Code of Conduct

    • [X] I agree to follow this project's Code of Conduct
  • [Feature Request:] Set independent container image versions for all components

    Contact Details

    No response

    Is your feature request related to a problem? Please describe.

    The problem is our cluster has a policy not to allow latest tags. Setting tap.docker.tag: 37.0 produces an error, since it tries to apply this tag to all components:

    2023-01-09T12:57:02+01:00 ERR tapRunner.go:404 > Watching events. event=kubeshark-hub.1738a2095077dd7f kind=Pod name=kubeshark-hub note="Failed to pull image \"kubeshark/hub:37.0\": rpc error: code = NotFound desc = failed to pull and unpack image \"docker.io/kubeshark/hub:37.0\": failed to resolve reference \"docker.io/kubeshark/hub:37.0\": docker.io/kubeshark/hub:37.0: not found" pod=kubeshark-hub reason=Failed
    

    Original Thread

    No response

    Describe the solution you'd like to see

    The ideal solution is to be able to set the container image tag per component (hub, worker, ...) in configuration, with latest as the default, or to change the defaults to tagged versions of all components per release.

    Provide additional context

    No response

    Code of Conduct

    • [X] I agree to follow this project's Code of Conduct
  • kubeshark tap is not working - context deadline exceeded

    Describe the bug kubeshark tap is not working - context deadline exceeded

    To Reproduce Steps to reproduce the behavior:

    1. Run kubeshark tap. For some reason it looks like neither proxy nor port forward are working. The pods are there and running (the image is pulled) but they get killed after some time. I can even see the tapped pods at the beginning.
    Screenshot 2023-01-06 at 14 58 15

    kubectl proxy is working as expected

    Expected behavior WebUI opened

    Logs kubeshark_logs_2023_01_06__14_49_23.zip

    Screenshots Screenshot 2023-01-06 at 14 55 35

    Desktop (please complete the following information):

    • OS: macOS


  • kubeshark tap failing to pull image - toomanyrequests

    Downloaded the CLI, and when trying to tap into one of my namespaces, I'm getting this error from Docker.

    2023-01-05T12:59:49Z ERR tapRunner.go:404 > Watching events. event=kubeshark-hub.17376b240c74e222 kind=Pod name=kubeshark-hub note="Failed to pull image \"docker.io/kubeshark/hub:latest\": rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:10709232004862c65a2565c37715a0c498ab51050211316ae7062b170f1ad901 in docker.io/kubeshark/hub: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" pod=kubeshark-hub reason=Failed
    

    Thinking it was a matter of authentication, I did docker login and tried pulling the image manually myself, and it works:

    $~> docker pull kubeshark/hub
    Using default tag: latest
    latest: Pulling from kubeshark/hub
    ce2f991d7b89: Pull complete 
    9212b33af5f6: Pull complete 
    4f4fb700ef54: Pull complete 
    542fea208e5e: Pull complete 
    Digest: sha256:e5b468efcc9141d6c8be4bc005c8de56b6c3cd8ba9d65934abe1386cc7365a44
    Status: Downloaded newer image for kubeshark/hub:latest
    docker.io/kubeshark/hub:latest
    
    $~>  docker images | grep kubesh
    kubeshark/hub                       latest              9e924de048f2   6 days ago      24.8MB
    

    I'm not sure what to do; I couldn't find any similar issue anywhere, so I assume I'm the only one facing this, and I'm not sure how to work around it. Is there a way to configure Docker credentials for the pull? Apparently it is not using my already logged-in account.

  • kubeshark tap failed with "error in k8s watch"

    Describe the bug: kubeshark tap failed with "error in k8s watch"

    version: kubeshark: 38.1 k8s: 1.18.4

    While I was executing the command "./kubeshark tap --namespaces ## --configpath=/root/.kube/config", I always ran into the following error. Would anyone please help take a look? Thanks!

    2023-01-04T16:58:46+08:00 INF createResources.go:76 > Successfully created a service. service=kubeshark-front
    2023-01-04T16:58:46+08:00 ERR tapRunner.go:417 > While watching events. error="error in k8s watch: the server could not find the requested resource (get events.events.k8s.io)" pod=kubeshark-hub
    2023-01-04T16:58:46+08:00 INF tapRunner.go:297 > Added pod. pod=kubeshark-front
    2023-01-04T16:58:46+08:00 INF tapRunner.go:217 > Added pod. pod=kubeshark-hub
    2023-01-04T17:00:46+08:00 ERR tapRunner.go:347 > Pod was not ready in time. pod=kubeshark-front
    2023-01-04T17:00:46+08:00 WRN cleanResources.go:16 > Removing Kubeshark resources...
    2023-01-04T17:00:46+08:00 ERR tapRunner.go:268 > Pod was not ready in time. pod=kubeshark-hub

    2023-01-04T17:01:46+08:00 WRN cleanResources.go:88 > Timed out while deleting the namespace. namespace=kubeshark

  • API calls count

    Contact Details

    [email protected]

    Is your feature request related to a problem? Please describe.

    no

    Original Thread

    No response

    Describe the solution you'd like to see

    I'd love to get count of certain API calls, so that I can get the most hit endpoints.

    Provide additional context

    No response

    Code of Conduct

    • [X] I agree to follow this project's Code of Conduct