A kubectl plugin to ease sniffing on Kubernetes pods using tcpdump and Wireshark

ksniff

A kubectl plugin that utilizes tcpdump and Wireshark to start a remote capture on any pod in your Kubernetes cluster.

You get the full power of Wireshark with minimal impact on your running pods.

Intro

When working with microservices, it is often very helpful to capture the network activity between your microservice and its dependencies.

ksniff uses kubectl to upload a statically compiled tcpdump binary to your pod and redirect its output to your local Wireshark for a smooth network-debugging experience.

Production Readiness

Ksniff isn't production-ready yet; running ksniff against production workloads isn't recommended at this point.

Installation

Installation via krew (https://github.com/GoogleContainerTools/krew)

kubectl krew install sniff

For manual installation, download the latest release package, unzip it, and use the attached Makefile:

unzip ksniff.zip
make install

Build

Requirements:

  1. libpcap-dev: for tcpdump compilation (Ubuntu: sudo apt-get install libpcap-dev)
  2. go 1.11 or newer

Compiling:

linux:      make linux
windows:    make windows
mac:        make darwin

To compile a static tcpdump binary:

make static-tcpdump

Usage

kubectl < 1.12:
kubectl plugin sniff <POD_NAME> [-n <NAMESPACE_NAME>] [-c <CONTAINER_NAME>] [-i <INTERFACE_NAME>] [-f <CAPTURE_FILTER>] [-o OUTPUT_FILE] [-l LOCAL_TCPDUMP_FILE] [-r REMOTE_TCPDUMP_FILE]

kubectl >= 1.12:
kubectl sniff <POD_NAME> [-n <NAMESPACE_NAME>] [-c <CONTAINER_NAME>] [-i <INTERFACE_NAME>] [-f <CAPTURE_FILTER>] [-o OUTPUT_FILE] [-l LOCAL_TCPDUMP_FILE] [-r REMOTE_TCPDUMP_FILE]

POD_NAME: Required. The name of the Kubernetes pod whose traffic should be captured.
NAMESPACE_NAME: Optional. The target namespace to operate in.
CONTAINER_NAME: Optional. If omitted, the first container in the pod will be chosen.
INTERFACE_NAME: Optional. The pod interface to capture from. If omitted, all pod interfaces will be captured.
CAPTURE_FILTER: Optional. A tcpdump capture filter. If omitted, no filter will be used.
OUTPUT_FILE: Optional. If specified, ksniff will redirect tcpdump output to a local file instead of Wireshark. Use '-' for stdout.
LOCAL_TCPDUMP_FILE: Optional. If specified, ksniff will use this path as the local path of the static tcpdump binary.
REMOTE_TCPDUMP_FILE: Optional. If specified, ksniff will use this path as the remote path to upload the static tcpdump binary to.
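
For example, to capture only DNS traffic from a specific container and write it to a local file (the pod, namespace, and container names below are illustrative):

kubectl sniff my-pod -n my-namespace -c my-container -f "port 53" -o /tmp/dns-capture.pcap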

Non-Privileged and Scratch Pods

To reduce attack surface and keep containers small and lean, many production-ready containers run as a non-privileged user or even as a scratch container.

To support those containers as well, ksniff now ships with the "-p" (privileged) mode. When executed with the -p flag, ksniff will create a new pod on the remote Kubernetes cluster that has access to the node's Docker daemon.

ksniff will then use that pod to run a container attached to the target container's network namespace and perform the actual network capture.
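
For example, a privileged-mode capture is just a regular invocation plus the -p flag (pod and namespace names are illustrative):

kubectl sniff my-scratch-pod -n my-namespace -p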

Piping output to stdout

By default, ksniff will attempt to start a local instance of the Wireshark GUI. You can integrate with other tools by using the -o - flag to pipe packet capture data to stdout.

Example using tshark:

kubectl sniff pod-name -f "port 80" -o - | tshark -r -
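
You can also apply a tshark display filter on the piped stream, for example to watch only HTTP requests (the display filter here is illustrative):

kubectl sniff pod-name -f "port 80" -o - | tshark -r - -Y "http.request"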

Contribution

More than welcome! Please don't hesitate to open bugs, questions, or pull requests.

Future Work

  1. Instead of uploading a static tcpdump binary, use the upcoming "kubectl debug" feature (https://github.com/kubernetes/community/pull/649), which should be a much cleaner solution.
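
For reference, ephemeral-container debugging has since landed in kubectl as "kubectl debug"; a rough equivalent capture, not ksniff's actual mechanism, might look like the following (the image and names are illustrative):

kubectl debug my-pod -it --image=nicolaka/netshoot --target=my-container -- tcpdump -i any -U -w -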

Known Issues

Wireshark and TShark cannot read pcap

See issues #100 and #98.

Wireshark may show UNKNOWN in the Protocol column. TShark may report the following:

tshark: The standard input contains record data that TShark doesn't support.
(pcap: network type 276 unknown or unsupported)

This issue happens when an old version of Wireshark or TShark reads the pcap created by ksniff. Upgrade Wireshark or TShark to resolve it. Ubuntu LTS releases may hit this with the stock package versions, but installing Wireshark from the Wireshark PPA will help.
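
On Ubuntu, upgrading via the Wireshark stable PPA might look like the following (PPA name as commonly documented; verify it before use):

sudo add-apt-repository ppa:wireshark-dev/stable
sudo apt-get update
sudo apt-get install wireshark tshark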

Owner
Eldad Rudich
Director of Engineering @stackpulse
Comments
  • Error when running ksniff in privileged mode

    This happens after running kubectl sniff -p <POD> -c <CONTAINER_NAME> -n <NAMESPACE>. I'm using AKS, Kubernetes v1.17.11.

    ksniff version: sniff v1.5.0

    INFO[0000] waiting for pod successful startup           
    INFO[0004] pod: 'ksniff-5pbv6' created successfully on node: 'aks-d8sv3-38575711-vmss000001' 
    INFO[0004] spawning wireshark!                          
    INFO[0004] starting remote sniffing using privileged pod 
    INFO[0004] executing command: '[docker --host unix:///host/var/run/docker.sock run --rm --name=ksniff-container-fLuezHME --net=container:602167f3ac7de5f5156763f8ad765ea15e7b9ed1cfeb242392fe0330c6762aaa maintained/tcpdump -i any -U -w - ]' on container: 'ksniff-privileged', pod: 'ksniff-5pbv6', namespace: 'global' 
    INFO[0005] command: '[docker --host unix:///host/var/run/docker.sock run --rm --name=ksniff-container-fLuezHME --net=container:602167f3ac7de5f5156763f8ad765ea15e7b9ed1cfeb242392fe0330c6762aaa maintained/tcpdump -i any -U -w - ]' executing successfully exitCode: '125', stdErr :'docker: Cannot connect to the Docker daemon at unix:///host/var/run/docker.sock. Is the docker daemon running?.
    See 'docker run --help'.
    INFO[0005] remote sniffing using privileged pod completed
    
  • Unrecognized libpcap format or not libpcap data

    Experiment Environment

    • OS: macOS Mojave 10.14.1
    • Cluster:
      • Client: kubectl v1.12.3, installed from brew
      • Server: Kubernetes v1.10.11

    What did I do?

    I have configured my kubectl to control a remote cluster, so this operation runs on my laptop. I am trying to use sniff to dump the packet traffic going through a K8s Pod.

    🚀  kc -n epc1 get pods
    NAME          READY   STATUS    RESTARTS   AGE
    cassandra-0   1/1     Running   0          2h
    hss-0         1/1     Running   0          2h
    mme-0         1/1     Running   0          2h
    
    🚀  kc sniff mme-0 -n epc1
    INFO[0000] using tcpdump path at: '/Users/aweimeow/.krew/store/sniff/afb1a2e2cd093f1c8f8fff511f48cc5a290d2c6ecd18d9f51f9c66500710297b/static-tcpdump'
    INFO[0000] no container specified, taking first container we found in pod.
    INFO[0000] selected container: 'mme'
    INFO[0000] sniffing on pod: 'mme-0' [namespace: 'epc1', container: 'mme', filter: '']
    INFO[0000] checking for static tcpdump binary on: '/tmp/static-tcpdump'
    INFO[0000] couldn't find static tcpdump binary on: '/tmp/static-tcpdump', starting to upload
    INFO[0000] tcpdump uploaded successfully
    INFO[0000] spawning wireshark!
    

    And Wireshark shows the message from the title: "Unrecognized libpcap format or not libpcap data" (screenshot omitted).

    If you need more information for helping debug, please let me know :)

  • Mount only the docker.sock file instead of the whole root path of the host

    When the pod is created, it mounts the whole / host path under the /host path of the pod. https://github.com/eldadru/ksniff/blob/798b1c7dec735bd2cdc925e7c840ee623b9ffde1/kube/kubernetes_api_service.go#L149

    However, when the ServiceAccount admission controller comes into play, it mounts the token under /var/run/secrets/kubernetes.io/serviceaccount. Since both /var/run and /host/var/run are symlinks to /run, /host/var/run/docker.sock is gone and docker won't be able to connect to the Docker daemon.

    May I ask what's the purpose of mounting the whole / directory instead of just the .sock file? The current approach seems a bit dangerous and might have unexpected side effects.

  • Wireshark/TShark isn't reading output correctly

    What's the issue

    When I try to sniff traffic with Wireshark or tshark I get the error pcap: network type 276 unknown or unsupported, or I just get undecodable output (screenshot further down).

    How to reproduce

    $ kubectl sniff my-pod -c my-container -p -n my-namespace -o - | tshark -r -
    INFO[0000] sniffing method: privileged pod
    INFO[0000] sniffing on pod: 'my-pod' [namespace: 'my-namespace', container: 'my-container', filter: '', interface: 'any']
    INFO[0000] creating privileged pod on node: 'my-node'
    INFO[0000] pod created: &Pod{ObjectMeta:{ksniff-qxsxk ksniff- my-namespace /api/v1/namespaces/my-namespace/pods/ksniff-qxsxk 485504a2-a9be-4328-8f86-424a2b41c2e1 56758253 0 2021-02-15 15:58:08 +0100 CET <nil> <nil> map[app:ksniff] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:host,VolumeSource:VolumeSource{HostPath:&HostPathVolumeSource{Path:/,Type:*Directory,},EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},Volume{Name:container-socket,VolumeSource:VolumeSource{HostPath:&HostPathVolumeSource{Path:/var/run/docker.sock,Type:*Socket,},EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},Volume{Name:default-token-8h6p9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8h6p9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:ksniff-privileged,Image:docker,Command:[sh -c sleep 
10000000],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:container-socket,ReadOnly:true,MountPath:/var/run/docker.sock,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:default-token-8h6p9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Never,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:my-node,HostNetwork:false,HostPID:true,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    INFO[0000] waiting for pod successful startup
    INFO[0008] pod: 'ksniff-qxsxk' created successfully on node: 'my-node'
    INFO[0008] output file option specified, storing output in: '-'
    INFO[0008] starting remote sniffing using privileged pod
    INFO[0008] executing command: '[docker --host unix:///var/run/docker.sock run --rm --name=ksniff-container-fQJpKPcY --net=container:b696c45e35a5b9dfe0152685569fb35c6331c2d1e63648ed8987f52211ba0b5f maintained/tcpdump -i any -U -w - ]' on container: 'ksniff-privileged', pod: 'ksniff-qxsxk', namespace: 'my-namespace'
    tshark: The standard input contains record data that TShark doesn't support.
    (pcap: network type 276 unknown or unsupported)
    

    I get the same error if I save the output to a file and then try to open it with wireshark.

    However, if I run ksniff directly into Wireshark I get the traffic, but Wireshark is not able to decode it correctly.
    (Although if you look closely you see that in the raw data there's some HTTP traffic)

    $ kubectl sniff my-pod -c my-container -p -n my-namespace
    

    (screenshot omitted: Wireshark showing the undecoded traffic)

    Version

    ksniff is built from current master (https://github.com/eldadru/ksniff/commit/f253ce97ae6c3884c545080d9124aceb2f3b4263)

    $ wireshark --version
    Wireshark 3.2.7 (Git v3.2.7 packaged as 3.2.7-1)
    
    Copyright 1998-2020 Gerald Combs <[email protected]> and contributors.
    License GPLv2+: GNU GPL version 2 or later <https://www.gnu.org/licenses/gpl-2.0.html>
    This is free software; see the source for copying conditions. There is NO
    warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
    
    Compiled (64-bit) with Qt 5.14.2, with libpcap, with POSIX capabilities (Linux),
    with libnl 3, with GLib 2.66.0, with zlib 1.2.11, with SMI 0.4.8, with c-ares
    1.16.1, with Lua 5.2.4, with GnuTLS 3.6.15 and PKCS #11 support, with Gcrypt
    1.8.5, with MIT Kerberos, with MaxMind DB resolver, with nghttp2 1.41.0, with
    brotli, with LZ4, with Zstandard, with Snappy, with libxml2 2.9.10, with
    QtMultimedia, without automatic updates, with SpeexDSP (using system library),
    with SBC, with SpanDSP, without bcg729.
    
    Running on Linux 5.8.0-43-generic, with Intel(R) Core(TM) i7-8550U CPU @ 1.80GHz
    (with SSE4.2), with 15709 MB of physical memory, with locale
    LC_CTYPE=en_US.UTF-8, LC_NUMERIC=sv_SE.UTF-8, LC_TIME=sv_SE.UTF-8,
    LC_COLLATE=en_US.UTF-8, LC_MONETARY=sv_SE.UTF-8, LC_MESSAGES=en_US.UTF-8,
    LC_PAPER=sv_SE.UTF-8, LC_NAME=sv_SE.UTF-8, LC_ADDRESS=sv_SE.UTF-8,
    LC_TELEPHONE=sv_SE.UTF-8, LC_MEASUREMENT=sv_SE.UTF-8,
    LC_IDENTIFICATION=sv_SE.UTF-8, with libpcap version 1.9.1 (with TPACKET_V3),
    with GnuTLS 3.6.15, with Gcrypt 1.8.5, with brotli 1.0.9, with zlib 1.2.11,
    binary plugins supported (0 loaded).
    
    Built using gcc 10.2.0.
    
  • pod not found

    When I run k plugin sniff my-product-api-c97484d9b-bn6d6 -n my-product-api

    I get

    [+] Sniffing on pod: my-product-api-c97484d9b-bn6d6 container:  namespace: 
    [+] Verifying pod status
    Error from server (NotFound): pods "my-product-api-c97484d9b-bn6d6" not found
    [-] Pod is not existing or on different namespace
    error: exit status 1
    

    But, when I run k describe pod my-product-api-c97484d9b-bn6d6 -n my-product-api then I am getting the expected result.

    Versions

    Client Version: v1.10.4 Server Version: v1.11.1

  • Using ksniff on microk8s (containerd container runtime)

    Hi,

    I've had ksniff working with a minikube deployment with no issues. I've switched to microk8s and am now getting the following error.

    ERRO[0000] failed to create privileged pod on node: 'mudged-laptop' error="container runtime on node: 'mudged-laptop' isn't docker"

    From what I can see this is down to microk8s using containerd as the container runtime.

    Firstly, am I right in my assumption? And are there any plans to support microk8s/containerd in the future?

  • Optional socket path argument and fallback to default socket path

    Fixed #87, #82.

    Changes:

    1. Added a default socket path for each runtime (e.g. /var/run/docker.sock for Docker).
    2. Added a new socket argument for passing in the socket path, to support scenarios where the socket path differs from the default (e.g. DOCKER_SOCKET is changed to another path in the Docker conf).
    3. Changed DockerBridge and CrioBridge to pointer receivers. This also fixed BuildCleanupCommand in DockerBridge.
    4. Returned the previously overlooked errors in kube/ops.go and pkg/service/sniffer/privileged_pod_sniffer_service.go so that they are propagated back and trigger the cleanup as expected.

    Note: Some code formatting changes due to the auto gofmt on my side.
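
    For example, with this change a non-default containerd socket (k3s-style path, illustrative) can be passed explicitly:

    kubectl sniff my-pod -n my-namespace -p --socket /run/k3s/containerd/containerd.sock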

  • Adding support for CRI-O and flexibility for more.

    Resolves #36. Begins addressing #65. Opens the door for #74 (I think).

    This PR includes work to allow ksniff to work in an environment using CRI-O (e.g. OpenShift 4.x) but may open the door for supporting other container runtimes (by implementing ContainerRuntimeBridge).

    I know this wasn't on the roadmap, as you've mentioned in a few bugs @eldadru, but I still hope this is considered. If not, I'm happy to maintain a fork for CRI-O users!

  • [bugfix] Runtimes must use pointer receivers to be modifiable

    Hi! We noticed that the privileged pods were leaving behind docker containers on our hosts. E.g. docker ps would show something like this:

    550b70024582        maintained/tcpdump                                                          "/usr/sbin/tcpdump -…"   4 minutes ago        Up 4 minutes                            ksniff-container-CuGlMCMT
    9f8e9578143a        maintained/tcpdump                                                          "/usr/sbin/tcpdump -…"   7 days ago           Up 7 days                               ksniff-container-tgNktuLk
    8fd1c0b61a6d        maintained/tcpdump                                                          "/usr/sbin/tcpdump -…"   2 weeks ago          Up 2 weeks                              ksniff-container-JpQDcWWx
    

    Looking more fully at the logs from the kubectl sniff invocation, we can see the problem:

    INFO[0001] waiting for pod successful startup
    INFO[0011] pod: 'ksniff-rnbws' created successfully on node: 'XXXXXXXXXX.eu-west-1.compute.internal'
    INFO[0011] spawning wireshark!
    INFO[0011] starting remote sniffing using privileged pod
    INFO[0011] executing command: '[docker --host unix:///host/var/run/docker.sock run --rm --name=ksniff-container-iqDIKJFD --net=container:4e49ced6ccf34a30b4bd3b13706268c7f434de28716f097324fe1119da137a59 maintained/tcpdump -i any -U -w - ]' on container: 'ksniff-privileged', pod: 'ksniff-rnbws', namespace: 'demos'
    INFO[0054] starting sniffer cleanup
    INFO[0054] removing privileged container: ''
    INFO[0054] executing command: '[docker rm -f ]' on container: 'ksniff-privileged', pod: 'ksniff-rnbws', namespace: 'demos'
    INFO[0054] command: '[docker rm -f ]' executing successfully exitCode: '1', stdErr :'Container name cannot be empty
    '
    INFO[0054] privileged container: '' removed successfully
    INFO[0054] removing pod: 'ksniff-rnbws'
    INFO[0054] removing privileged pod: 'ksniff-rnbws'
    INFO[0054] privileged pod: 'ksniff-rnbws' removed
    INFO[0054] pod: 'ksniff-rnbws' removed successfully
    INFO[0054] sniffer cleanup completed successfully
    

    Most relevant is this line: INFO[0054] command: '[docker rm -f ]' executing successfully exitCode: '1', stdErr :'Container name cannot be empty ' -- there is no container name being removed.

    Digging into the code a bit, we found that the receivers for the DockerBridge type are value receivers, meaning that the assignment d.tcpdumpContainerName = "ksniff-container-" + utils.GenerateRandomString(8) made when building the tcpdump command is never propagated back to the caller's object, but only saved in the local copy (see https://golang.org/doc/effective_go.html#pointers_vs_values).

    This PR changes the receiver type for both DockerBridge and CrioBridge to use pointer receivers (even though this is not currently needed for the CrioBridge). I've added a test for the DockerBridge that shows the problem.

    After the change, we see this:

    INFO[0011] executing command: '[docker --host unix:///host/var/run/docker.sock rm -f ksniff-container-DMSaHhmy]' on container: 'ksniff-privileged', pod: 'ksniff-fv9bz', namespace: 'demos'
    INFO[0012] command: '[docker --host unix:///host/var/run/docker.sock rm -f ksniff-container-DMSaHhmy]' executing successfully exitCode: '0', stdErr :''
    
  • Minimum required RBAC for user to successfully sniff

    I am wondering if you've done any tuning to figure out what the minimum required RBAC permissions for a user would need to be to get a successful sniff.
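
    Not an authoritative answer, but based on what ksniff does per this README (exec into the target pod to upload and run the static tcpdump, plus create/delete an extra pod in -p mode), a starting-point Role sketch could be the following; the names are illustrative and the exact verbs should be verified against your ksniff version. Save it as ksniff-role.yaml, apply it with kubectl apply -f ksniff-role.yaml, and bind it to the user with a matching RoleBinding:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: ksniff-sniffer        # illustrative name
      namespace: my-namespace     # namespace you intend to sniff in
    rules:
    - apiGroups: [""]
      resources: ["pods"]
      verbs: ["get", "list", "watch", "create", "delete"]  # create/delete only matter for -p mode
    - apiGroups: [""]
      resources: ["pods/exec"]
      verbs: ["create"]  # exec is how ksniff uploads and runs the static tcpdump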

  • Update Krew Index with v1.6.1

    Hey @eldadru, thanks for your nice work here :)

    Would it be possible to update the plugin's version in the Krew Index so that https://github.com/eldadru/ksniff/issues/114 is mitigated?

  • Openshift 4.10 Mac M1 nsenter: can't execute 'tcpdump': No such file or directory

    Hello,

    kubectl sniff -p MY-POD -n MY-NS --image MY-PRIV-REPO:MY-PORT/docker --tcpdump-image MY-PRIV-REPO:MY-PORT/corfr/tcpdump

    INFO[0005] spawning wireshark!
    INFO[0005] starting remote sniffing using privileged pod
    INFO[0005] executing command: '[nsenter -n -t 2800874 -- tcpdump -i any -U -w - ]' on container: 'ksniff-privileged', pod: 'ksniff-dntqt', namespace: 'MY-NS'
    INFO[0005] command: '[nsenter -n -t 2800874 -- tcpdump -i any -U -w - ]' executing successfully exitCode: '127', stdErr :'nsenter: can't execute 'tcpdump': No such file or directory '

    macOS Monterey 12.0.1

    Any ideas how to make it work?

  • can't run in privileged mode

    Hi, I am trying to run sniff in privileged mode; it just fails right after starting the privileged container. It completes so fast that it doesn't even give me a chance to check what's happening in the privileged pod (-v is also not helpful). Any suggestions on how to debug this? I'm using the latest version of the sniff plugin.

    kubectl sniff ngnix -n default -p
    
    INFO[0000] no container specified, taking first container we found in pod. 
    INFO[0000] selected container: 'ngnix'                  
    INFO[0000] sniffing method: privileged pod              
    INFO[0000] sniffing on pod: 'ngnix' [namespace: 'default', container: 'ngnix', filter: '', interface: 'any'] 
    INFO[0000] creating privileged pod on node: 'node-1' 
    INFO[0000] pod: 'ksniff-w4mkm' created successfully in namespace: 'default' 
    INFO[0000] waiting for pod successful startup           
    INFO[0003] pod: 'ksniff-w4mkm' created successfully on node: 'node-1' 
    INFO[0003] spawning wireshark!                          
    INFO[0003] starting remote sniffing using privileged pod 
    INFO[0003] executing command: '[docker --host unix:///var/run/docker.sock run --rm --name=ksniff-container-hPDDCmXn --net=container:ce30c6e702526a19d748db87f6f29bb05987899bc9de03a98845f28091c6ccf4 maintained/tcpdump -i any -U -w - ]' on container: 'ksniff-privileged', pod: 'ksniff-w4mkm', namespace: 'default' 
    INFO[0003] starting sniffer cleanup                     
    INFO[0003] removing privileged container: 'ksniff-privileged' 
    INFO[0003] executing command: '[docker --host unix:///var/run/docker.sock rm -f ksniff-container-hPDDCmXn]' on container: 'ksniff-privileged', pod: 'ksniff-w4mkm', namespace: 'default' 
    INFO[0003] command: '[docker --host unix:///var/run/docker.sock rm -f ksniff-container-hPDDCmXn]' executing successfully exitCode: '0', stdErr :'Error: No such container: ksniff-container-hPDDCmXn'
    INFO[0003] privileged container: 'ksniff-privileged' removed successfully 
    INFO[0003] removing pod: 'ksniff-w4mkm'                 
    INFO[0003] removing privileged pod: 'ksniff-w4mkm'      
    INFO[0003] privileged pod: 'ksniff-w4mkm' removed       
    INFO[0003] pod: 'ksniff-w4mkm' removed successfully     
    INFO[0003] sniffer cleanup completed successfully       
    Error: exit status 1
    
  • can't execute 'ctr': No such file or directory

    containerd://1.6.8

    kubectl sniff rabbitmq-server-0 -p -n applianceshack -v
    INFO[0000] running in verbose mode                      
    DEBU[0000] pod 'rabbitmq-server-0' status: 'Running'    
    INFO[0000] no container specified, taking first container we found in pod. 
    INFO[0000] selected container: 'rabbitmq'               
    INFO[0000] sniffing method: privileged pod              
    INFO[0000] sniffing on pod: 'rabbitmq-server-0' [namespace: 'applianceshack', container: 'rabbitmq', filter: '', interface: 'any'] 
    INFO[0000] creating privileged pod on node: 'worker-1'  
    DEBU[0000] creating privileged pod on remote node       
    W1201 02:54:40.768551  191132 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostPID=true), privileged (container "ksniff-privileged" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "ksniff-privileged" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "ksniff-privileged" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "host", "container-socket" use restricted volume type "hostPath"), runAsNonRoot != true (pod or container "ksniff-privileged" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "ksniff-privileged" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
    INFO[0000] pod: 'ksniff-zcl7z' created successfully in namespace: 'applianceshack' 
    DEBU[0000] created pod details: &Pod{ObjectMeta:{ksniff-zcl7z ksniff- applianceshack  578d9a20-1987-4521-b38e-7da90d3da8c2 14886321 0 2022-12-01 02:54:40 +0000 UTC <nil> <nil> map[app:ksniff] map[] [] []  [{kubectl-sniff Update v1 2022-12-01 02:54:40 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"ksniff-privileged\"}":{".":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{".":{},"f:privileged":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/host\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/run/containerd/containerd.sock\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostPID":{},"f:nodeName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"container-socket\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}},"k:{\"name\":\"host\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:host,VolumeSource:VolumeSource{HostPath:&HostPathVolumeSource{Path:/,Type:*Directory,},EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},Volume{Name:container-socket,VolumeSource:VolumeSource{HostPath:&HostPathVolumeSource{Path:/run/containerd/containerd.sock,Type:*Socket,},EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},Volume{Name:kube-api-access-4ccqt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:ksniff-priv
ileged,Image:docker.io/hamravesh/ksniff-helper:v3,Command:[sh -c sleep 10000000],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:container-socket,ReadOnly:true,MountPath:/run/containerd/containerd.sock,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-4ccqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Never,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:worker-1,HostNetwork:false,HostPID:true,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} 
    INFO[0000] waiting for pod successful startup           
    INFO[0002] pod: 'ksniff-zcl7z' created successfully on node: 'worker-1' 
    INFO[0002] spawning wireshark!                          
    INFO[0002] starting remote sniffing using privileged pod 
    INFO[0002] executing command: '[/bin/sh -c 
        set -ex
        export CONTAINERD_SOCKET="/run/containerd/containerd.sock"
        export CONTAINERD_NAMESPACE="k8s.io"
        export CONTAINER_RUNTIME_ENDPOINT="unix:///host${CONTAINERD_SOCKET}"
        export IMAGE_SERVICE_ENDPOINT=${CONTAINER_RUNTIME_ENDPOINT}
        crictl pull docker.io/maintained/tcpdump:latest >/dev/null
        netns=$(crictl inspect 3d5503facffd72c58406eb812425f387e72c8b4a8fdcd8e72e0c44f9aac08b87 | jq '.info.runtimeSpec.linux.namespaces[] | select(.type == "network") | .path' | tr -d '"')
        exec chroot /host ctr -a ${CONTAINERD_SOCKET} run --rm --with-ns "network:${netns}" docker.io/maintained/tcpdump:latest ksniff-container-VDipuLFu tcpdump -i any -U -w -  
        ]' on container: 'ksniff-privileged', pod: 'ksniff-zcl7z', namespace: 'applianceshack' 
    INFO[0002] command: '[/bin/sh -c 
        set -ex
        export CONTAINERD_SOCKET="/run/containerd/containerd.sock"
        export CONTAINERD_NAMESPACE="k8s.io"
        export CONTAINER_RUNTIME_ENDPOINT="unix:///host${CONTAINERD_SOCKET}"
        export IMAGE_SERVICE_ENDPOINT=${CONTAINER_RUNTIME_ENDPOINT}
        crictl pull docker.io/maintained/tcpdump:latest >/dev/null
        netns=$(crictl inspect 3d5503facffd72c58406eb812425f387e72c8b4a8fdcd8e72e0c44f9aac08b87 | jq '.info.runtimeSpec.linux.namespaces[] | select(.type == "network") | .path' | tr -d '"')
        exec chroot /host ctr -a ${CONTAINERD_SOCKET} run --rm --with-ns "network:${netns}" docker.io/maintained/tcpdump:latest ksniff-container-VDipuLFu tcpdump -i any -U -w -  
        ]' executing successfully exitCode: '127', stdErr :'+ export 'CONTAINERD_SOCKET=/run/containerd/containerd.sock'
    + export 'CONTAINERD_NAMESPACE=k8s.io'
    + export 'CONTAINER_RUNTIME_ENDPOINT=unix:///host/run/containerd/containerd.sock'
    + export 'IMAGE_SERVICE_ENDPOINT=unix:///host/run/containerd/containerd.sock'
    + crictl pull docker.io/maintained/tcpdump:latest
    + crictl inspect 3d5503facffd72c58406eb812425f387e72c8b4a8fdcd8e72e0c44f9aac08b87
    + jq '.info.runtimeSpec.linux.namespaces[] | select(.type == "network") | .path'
    + tr -d '"'
    + netns=/proc/178883/ns/net
    + exec chroot /host ctr -a /run/containerd/containerd.sock run --rm --with-ns network:/proc/178883/ns/net docker.io/maintained/tcpdump:latest ksniff-container-VDipuLFu tcpdump -i any -U -w -
    chroot: can't execute 'ctr': No such file or directory
    ' 
    INFO[0002] remote sniffing using privileged pod completed 
    INFO[0003] starting sniffer cleanup                     
    INFO[0003] removing privileged container: 'ksniff-privileged' 
    INFO[0003] executing command: '[/bin/sh -c 
        set -ex
        export CONTAINERD_SOCKET="/run/containerd/containerd.sock"
        export CONTAINERD_NAMESPACE="k8s.io"
        export CONTAINER_ID="ksniff-container-VDipuLFu"
        chroot /host ctr -a ${CONTAINERD_SOCKET} task kill -s SIGKILL ${CONTAINER_ID}
        ]' on container: 'ksniff-privileged', pod: 'ksniff-zcl7z', namespace: 'applianceshack' 
    INFO[0003] command: '[/bin/sh -c 
        set -ex
        export CONTAINERD_SOCKET="/run/containerd/containerd.sock"
        export CONTAINERD_NAMESPACE="k8s.io"
        export CONTAINER_ID="ksniff-container-VDipuLFu"
        chroot /host ctr -a ${CONTAINERD_SOCKET} task kill -s SIGKILL ${CONTAINER_ID}
        ]' executing successfully exitCode: '127', stdErr :'+ export 'CONTAINERD_SOCKET=/run/containerd/containerd.sock'
    + export 'CONTAINERD_NAMESPACE=k8s.io'
    + export 'CONTAINER_ID=ksniff-container-VDipuLFu'
    + chroot /host ctr -a /run/containerd/containerd.sock task kill -s SIGKILL ksniff-container-VDipuLFu
    chroot: can't execute 'ctr': No such file or directory
    ' 
    INFO[0003] privileged container: 'ksniff-privileged' removed successfully 
    INFO[0003] removing pod: 'ksniff-zcl7z'                 
    INFO[0003] removing privileged pod: 'ksniff-zcl7z'      
    INFO[0003] privileged pod: 'ksniff-zcl7z' removed       
    INFO[0003] pod: 'ksniff-zcl7z' removed successfully     
    INFO[0003] sniffer cleanup completed successfully       
    Error: signal: aborted (core dumped)
    
    k get nodes -o wide
    NAME             STATUS   ROLES           AGE    VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE         KERNEL-VERSION   CONTAINER-RUNTIME
    controlplane-1   Ready    control-plane   29d    v1.25.2   10.188.0.11   <none>        Talos (v1.2.5)   5.15.72-talos    containerd://1.6.8
    controlplane-2   Ready    control-plane   28d    v1.25.2   10.188.0.14   <none>        Talos (v1.2.5)   5.15.72-talos    containerd://1.6.8
    controlplane-3   Ready    control-plane   25d    v1.25.2   10.188.0.15   <none>        Talos (v1.2.5)   5.15.72-talos    containerd://1.6.8
    worker-1         Ready    <none>          25d    v1.25.2   10.188.0.16   <none>        Talos (v1.2.5)   5.15.72-talos    containerd://1.6.8
    worker-2         Ready    <none>          25d    v1.25.2   10.188.0.17   <none>        Talos (v1.2.5)   5.15.72-talos    containerd://1.6.8
    worker-3         Ready    <none>          24d    v1.25.2   10.188.0.18   <none>        Talos (v1.2.5)   5.15.72-talos    containerd://1.6.8
    worker-4         Ready    <none>          2d9h   v1.25.2   10.188.0.21   <none>        Talos (v1.2.5)   5.15.72-talos    containerd://1.6.8
    
  • Add devcontainer and Linux arm64 support

    • Add a basic devcontainer configuration which allows this package to be built from a Docker image with all dependencies integrated.
    • Add a .gitignore to prevent built binaries from being picked up as new files.
    • Use YAML anchors to simplify the .krew.yaml configuration.
    • Add Linux ARM64 support.
    • Fix syntax error in .travis.yml.
  • netns returns an empty string so nothing is ever captured

    Hi, I'm really interested in testing ksniff. I just installed it on a standard AWS AMI with a local microk8s cluster, along with docker as the container runtime.

    I have a simple webserver that I launch ksniff against. But not a single packet ever arrives in Wireshark (or on stdout when I launch ksniff with -o -). Of course, I make plenty of HTTP requests in parallel to the simple webserver "bgd".

    After issuing a ps command, I can see how the ksniff container command has been run, and I think this may be the cause of my problem, but I don't know how to specify a net (like net=host? would that even work?).

    # Command was 
    kubectl sniff -n default bgd-xxx-yyy 
    
    INFO[0005] executing command: '[/bin/sh -c 
        set -ex
        export CONTAINERD_SOCKET="/run/containerd/containerd.sock"
        export CONTAINERD_NAMESPACE="k8s.io"
        export CONTAINER_RUNTIME_ENDPOINT="unix:///host${CONTAINERD_SOCKET}"
        export IMAGE_SERVICE_ENDPOINT=${CONTAINER_RUNTIME_ENDPOINT}
        crictl pull docker.io/maintained/tcpdump:latest >/dev/null
        netns=$(crictl inspect 5a0038d78596179fa055aa1395ae515f13011e06751b2271131a07f5a87ba27f | jq '.info.runtimeSpec.linux.namespaces[] | select(.type == "network") | .path' | tr -d '"')
        exec chroot /host ctr -a ${CONTAINERD_SOCKET} run --rm --with-ns "network:${netns}" docker.io/maintained/tcpdump:latest ksniff-container-sluEcVnE tcpdump -i any -U -w -  
        ]' on container: 'ksniff-privileged', pod: 'ksniff-t4r9t', namespace: 'default' 
    
    # ps gives
     4613 root      0:00 ctr -a /run/containerd/containerd.sock run --rm --with-ns network: docker.io/maintained/tcpdump:latest ksniff-container-sluEcVnE tcpdump -i any -U -w -
     4670 root      0:00 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id ksniff-container-sluEcVnE -address /run/containerd/containerd.sock
    

    Notice the --with-ns network: argument with an empty namespace path.

    Any help is welcome, and I can of course give more details; please ask.

    The "app" I'm using for this test (image: quay.io/redhatworkshops/bgd:latest) and yaml is :

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        app: bgd
      name: bgd
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: bgd
      strategy: {}
      template:
        metadata:
          creationTimestamp: null
          labels:
            app: bgd
        spec:
          containers:
          - image: quay.io/redhatworkshops/bgd:latest
            name: bgd
            env:
            - name: COLOR
              value: "green"
            resources: {}
    
  • Error: exit status 1

    Hi,

    I installed ksniff using krew and I am getting the following error:

    vagrant@rio-amazonas-jauarana:~/ksniff-master$ !1316
    kubectl sniff smf-f7d9788b5-pvh96 -f “port 8805” -n omec -p — socket /run/k3s/containerd/containerd.sock
    INFO[0000] no container specified, taking first container we found in pod.
    INFO[0000] selected container: 'smf'
    INFO[0000] sniffing method: privileged pod
    INFO[0000] sniffing on pod: 'smf-f7d9788b5-pvh96' [namespace: 'omec', container: 'smf', filter: '“port', interface: 'any']
    INFO[0000] creating privileged pod on node: 'rio-amazonas-jauarana'
    INFO[0000] pod: 'ksniff-znw2q' created successfully in namespace: 'omec'
    INFO[0000] waiting for pod successful startup
    INFO[0002] pod: 'ksniff-znw2q' created successfully on node: 'rio-amazonas-jauarana'
    INFO[0002] spawning wireshark!
    INFO[0002] starting remote sniffing using privileged pod
    INFO[0002] executing command: '[/bin/sh -c
        set -ex
        export CONTAINERD_SOCKET="/run/containerd/containerd.sock"
        export CONTAINERD_NAMESPACE="k8s.io"
        export CONTAINER_RUNTIME_ENDPOINT="unix:///host${CONTAINERD_SOCKET}"
        export IMAGE_SERVICE_ENDPOINT=${CONTAINER_RUNTIME_ENDPOINT}
        crictl pull docker.io/maintained/tcpdump:latest >/dev/null
        netns=$(crictl inspect 1febcb90c752074f2b5b6c655f46ee5839ab6b248fa66b9f470f7598558c398c | jq '.info.runtimeSpec.linux.namespaces[] | select(.type == "network") | .path' | tr -d '"')
        exec chroot /host ctr -a ${CONTAINERD_SOCKET} run --rm --with-ns "network:${netns}" docker.io/maintained/tcpdump:latest ksniff-container-EkUMLWmV tcpdump -i any -U -w - “port
        ]' on container: 'ksniff-privileged', pod: 'ksniff-znw2q', namespace: 'omec'
    INFO[0002] starting sniffer cleanup
    INFO[0002] removing privileged container: 'ksniff-privileged'
    INFO[0002] executing command: '[/bin/sh -c
        set -ex
        export CONTAINERD_SOCKET="/run/containerd/containerd.sock"
        export CONTAINERD_NAMESPACE="k8s.io"
        export CONTAINER_ID="ksniff-container-EkUMLWmV"
        chroot /host ctr -a ${CONTAINERD_SOCKET} task kill -s SIGKILL ${CONTAINER_ID}
        ]' on container: 'ksniff-privileged', pod: 'ksniff-znw2q', namespace: 'omec'
    INFO[0002] command: '[/bin/sh -c
        set -ex
        export CONTAINERD_SOCKET="/run/containerd/containerd.sock"
        export CONTAINERD_NAMESPACE="k8s.io"
        export CONTAINER_ID="ksniff-container-EkUMLWmV"
        chroot /host ctr -a ${CONTAINERD_SOCKET} task kill -s SIGKILL ${CONTAINER_ID}
        ]' executing successfully exitCode: '1', stdErr :'+ export 'CONTAINERD_SOCKET=/run/containerd/containerd.sock'
    + export 'CONTAINERD_NAMESPACE=k8s.io'
    + export 'CONTAINER_ID=ksniff-container-EkUMLWmV'
    + chroot /host ctr -a /run/containerd/containerd.sock task kill -s SIGKILL ksniff-container-EkUMLWmV
    ctr: container "ksniff-container-EkUMLWmV" in namespace "k8s.io": not found
    '
    INFO[0002] privileged container: 'ksniff-privileged' removed successfully
    INFO[0002] removing pod: 'ksniff-znw2q'
    INFO[0002] removing privileged pod: 'ksniff-znw2q'
    INFO[0002] privileged pod: 'ksniff-znw2q' removed
    INFO[0002] pod: 'ksniff-znw2q' removed successfully
    INFO[0002] sniffer cleanup completed successfully
    Error: exit status 1
    
    

    OS Version: 18.04 Wireshark: 3.6.5

    Am I doing something wrong? Can someone give me a hint?

    Thank you in advance!
