Local development against a remote Kubernetes or OpenShift cluster

Documentation - start here!

Build Status · Join the chat at https://d6e.co/slack · CII Best Practices

Note: Telepresence 1 is being replaced by our even better Telepresence 2. Please try Telepresence 2 first and report any issues; we expect it to become the default by Q2 2021.

Demo


Telepresence: fast, efficient local development for Kubernetes microservices

Telepresence gives developers infinite scale development environments for Kubernetes. With Telepresence:

  • You run one service locally, using your favorite IDE and other tools
  • You run the rest of your application in the cloud, where there is unlimited memory and compute

This gives developers:

  • a fast local dev loop, with no waiting for a container build / push / deploy
  • ability to use their favorite local tools (IDE, debugger, etc.)
  • ability to run large-scale applications that can't run locally

Quick Start

  1. Install locally with Homebrew, apt, or dnf.

  2. Run telepresence.

  3. You now have a shell that proxies connections to Kubernetes.
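
For example, on macOS the whole loop looks roughly like this (a sketch: the Homebrew tap is the one named in the install guide, and the service URL is a placeholder):

    brew install datawire/blackbird/telepresence
    telepresence
    # inside the proxied shell, cluster-internal names now resolve
    curl http://my-service.my-namespace:8080/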

For more about Telepresence, and the various options, read the documentation.

Usage Reporting

Telepresence collects some basic information about its users so it can send important client notices, such as new version availability and security bulletins. We also use the information to aggregate basic usage analytics anonymously. To disable this behavior, set the environment variable SCOUT_DISABLE:

export SCOUT_DISABLE=1

To learn more, see the documentation on usage reporting.

Get Involved

About Telepresence

Telepresence is an open source project hosted by the Cloud Native Computing Foundation and originally created by Ambassador Labs. Telepresence is licensed under the Apache 2.0 License. For information about recent releases, see https://www.telepresence.io/reference/changelog. Ambassador Labs also provides commercial support for a version of Telepresence that is designed for teams.

Comments
  • Kubernetes DNS Resolution Failing Spuriously on Mac


    I'm opening a telepresence session to our K8s cluster using method vpn-tcp without swapping out any deployments just to access K8s resources. Roughly half of the time I do this, it works perfectly. The other times, I get errors complaining that DNS resolution of the K8s addresses failed. However, when I run dig <K8s-IP>, it reports status NOERROR, so I know that there is no real issue with the addresses themselves. The only workaround I've found thus far is to restart my machine entirely. I've also tried the same setup on Linux and have not seen the issue after many runs. Specifically, this is on macOS Sierra 10.12.6. This seems like a particularly hairy issue that may be solved by the plans to run DNS locally, but I still wanted to document it.
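
    For reference, this is roughly the invocation and the check described above (a sketch; the hostname is a placeholder):

    telepresence --method vpn-tcp --run-shell
    dig my-service.my-namespace.svc.cluster.local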

    Thank you!

  • Homebrew Install fails: Error: sshfs has been disabled because it requires FUSE!


    Hi everyone,

    I tried installing Telepresence following the official installation guide at https://www.telepresence.io/reference/install and the brew command does not work for me. It fails with this output:

    brew install datawire/blackbird/telepresence
    redacted
    Updating Homebrew...
    ==> Auto-updated Homebrew!
    Updated 1 tap (homebrew/core).
    ==> Updated Formulae
    Updated 4 formulae.

    ==> Installing telepresence from datawire/blackbird
    ==> Downloading https://homebrew.bintray.com/bottles/gdbm-1.19.big_sur.bottle.tar.gz
    Already downloaded: /Users/redacted/Library/Caches/Homebrew/downloads/9d8f2b865b1f004ad8a1b27b468833da402e5feb31e88557175a25209660d595--gdbm-1.19.big_sur.bottle.tar.gz
    ==> Downloading https://homebrew.bintray.com/bottles/mpdecimal-2.5.1.big_sur.bottle.tar.gz
    Already downloaded: /Users/redacted/Library/Caches/Homebrew/downloads/29dd7202ebb6142202c80dec9080c1a24ef37a7039f92356a122061477efce21--mpdecimal-2.5.1.big_sur.bottle.tar.gz
    ==> Downloading https://homebrew.bintray.com/bottles/openssl%401.1-1.1.1k.big_sur.bottle.tar.gz
    Already downloaded: /Users/redacted/Library/Caches/Homebrew/downloads/fae83f761867e592ca8209f4cf851a681825ea9a2b5af870b9237aa35ca1ef0f--openssl@1.1-1.1.1k.big_sur.bottle.tar.gz
    ==> Downloading https://homebrew.bintray.com/bottles/readline-8.1.big_sur.bottle.tar.gz
    Already downloaded: /Users/redacted/Library/Caches/Homebrew/downloads/b03c5b80e59c91f05f4327bf3cb7a4dbab63a8902b76c0c53c6d36eb2e4331e9--readline-8.1.big_sur.bottle.tar.gz
    ==> Downloading https://homebrew.bintray.com/bottles/sqlite-3.35.4.big_sur.bottle.tar.gz
    Already downloaded: /Users/redacted/Library/Caches/Homebrew/downloads/c9a360f163a962e10c1beb105f0478500624eedbec866bda679b0347558375a0--sqlite-3.35.4.big_sur.bottle.tar.gz
    ==> Downloading https://homebrew.bintray.com/bottles/python%403.9-3.9.4.big_sur.bottle.tar.gz
    Already downloaded: /Users/redacted/Library/Caches/Homebrew/downloads/e85b235944ed11458dbc89ca1e57cc0577bd18b11fbd4c05e03d3e4a0850ad3d--python@3.9-3.9.4.big_sur.bottle.tar.gz
    ==> Downloading https://homebrew.bintray.com/bottles/torsocks-2.3.0.big_sur.bottle.1.tar.gz
    Already downloaded: /Users/redacted/Library/Caches/Homebrew/downloads/5da81126b71bef8bd6ce670c223f822dd875b993234c02ce45db3c778d3f0efa--torsocks-2.3.0.big_sur.bottle.1.tar.gz
    ==> Downloading https://homebrew.bintray.com/bottles/libffi-3.3_3.big_sur.bottle.tar.gz
    Already downloaded: /Users/redacted/Library/Caches/Homebrew/downloads/60b45c0f23d19cde24cfc8e6834288901010f39f7733d9b3312e759a58229193--libffi-3.3_3.big_sur.bottle.tar.gz
    ==> Downloading https://homebrew.bintray.com/bottles/python%403.9-3.9.4.big_sur.bottle.tar.gz
    Already downloaded: /Users/redacted/Library/Caches/Homebrew/downloads/e85b235944ed11458dbc89ca1e57cc0577bd18b11fbd4c05e03d3e4a0850ad3d--python@3.9-3.9.4.big_sur.bottle.tar.gz
    ==> Downloading https://homebrew.bintray.com/bottles/glib-2.68.0.big_sur.bottle.tar.gz
    Already downloaded: /Users/redacted/Library/Caches/Homebrew/downloads/34a93ed75ca6c68aa1dc16c348870beef563edd005994b4c0ccb88b8e3628fa0--glib-2.68.0.big_sur.bottle.tar.gz
    Error: sshfs has been disabled because it requires FUSE!

    This happens even after running "brew cask install osxfuse" beforehand, and even if I install sshfs and macFUSE manually. Could you please help? I am using macOS Big Sur 11.2.3 (Intel, not Apple Silicon).
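
    For what it's worth, "brew cask install" has since been removed from Homebrew and the osxfuse cask was renamed, so the modern spelling of that step would be the following (a sketch, not a confirmed fix for the disabled sshfs formula):

    brew install --cask macfuse
    brew install datawire/blackbird/telepresence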

    Thanks and best regards, Tarkleigh

  • Unable to intercept a service on AWS EKS


    I'm able to connect to the cluster in the current context, but when I try to intercept an existing service running in a namespace, I get an error and the intercept session cannot be established.

    To Reproduce

    1. deploy the traffic manager
    telepresence helm install

    2. connect to the cluster
    telepresence connect

    3. intercept a service
    telepresence intercept directory --port 8084:80 --namespace afe01 --env-file directory.env
    

    I see this error:

    Error: rpc error: code = DeadlineExceeded desc = request timed out while waiting for agent directory.afe01 to arrive
    telepresence: error: rpc error: code = DeadlineExceeded desc = request timed out while waiting for agent directory.afe01 to arrive
    

    telepresence_logs.zip

    Expected behavior: I expect the intercept session to be created.

    Versions

    • Output of telepresence version
    telepresence version
    Client: v2.7.1 (api v3)
    Root Daemon: v2.7.1 (api v3)
    User Daemon: v2.7.1 (api v3)
    
    • Operating system of workstation running telepresence commands
    macOS Monterey
    
    • Kubernetes environment and Version [e.g. Minikube, bare metal, Google Kubernetes Engine]
    Kubernetes AWS EKS
    

    VPN-related bugs: No VPN is installed

    Additional context: The Kubernetes API is reachable through an SSH tunnel.
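
    A quick check of whether the traffic-agent sidecar was ever injected into the directory workload (a sketch; the jsonpath just lists the container names of each pod in the namespace):

    telepresence list --namespace afe01
    kubectl -n afe01 get pods -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.containers[*].name}{"\n"}{end}'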

  • Is there a plan to support dns query on SRV record?


    I found this code in cmd/traffic/cmd/agent/client.go:120. It seems that telepresence's DNS queries only support A/AAAA records, because it uses the net.LookupHost function.

            go func() {
                    for ctx.Err() == nil {
                            lr, err := lrStream.Recv()
                            if err != nil {
                                    if ctx.Err() == nil {
                                            dlog.Debugf(ctx, "lookup request stream recv: %+v", err) // May be io.EOF
                                    }
                                    return
                            }
                            dlog.Debugf(ctx, "LookupRequest for %s", lr.Host)
                            addrs, err := net.LookupHost(lr.Host)
                            r := rpc.LookupHostResponse{}
                            if err == nil {
                                    ips := make(iputil.IPs, len(addrs))
                                    for i, addr := range addrs {
                                            ips[i] = iputil.Parse(addr)
                                    }
                                    dlog.Debugf(ctx, "Lookup response for %s -> %s", lr.Host, ips)
                                    r.Ips = ips.BytesSlice()
                            }
                            response := rpc.LookupHostAgentResponse{
                                    Session:  session,
                                    Request:  lr,
                                    Response: &r,
                            }
                            if _, err = manager.AgentLookupHostResponse(ctx, &response); err != nil {
                                    if ctx.Err() == nil {
                                            dlog.Debugf(ctx, "lookup response: %+v %v", err, &response)
                                    }
                                    return
                            }
                    }
            }()
    

    But we use the net.LookupSRV function in our code, since we want to use DNS SRV records to get the FQDN and port of a service from its K8s service name.
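
    A minimal sketch (not Telepresence code; the service, namespace, and port names are placeholders) of the kind of lookup we would like to work through the proxied DNS:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Kubernetes publishes SRV records as _<port>._<proto>.<svc>.<ns>.svc.cluster.local;
        // net.LookupSRV builds that query name from its three arguments.
        cname, srvs, err := net.LookupSRV("http", "tcp", "my-service.my-ns.svc.cluster.local")
        if err != nil {
            fmt.Println("SRV lookup failed:", err)
            return
        }
        fmt.Println("canonical name:", cname)
        for _, srv := range srvs {
            fmt.Printf("target=%s port=%d priority=%d weight=%d\n", srv.Target, srv.Port, srv.Priority, srv.Weight)
        }
    }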

    Is there a plan to support this SRV query?

  • Support for Non-Deployment Type Pods


    Are there any plans to implement support for pods launched via StatefulSets? The majority of our workloads are StatefulSets, so being able to run e.g. telepresence --swap-pod cassandra-2 --docker-run --hostname cassandra-2 -it cassandra:local-v1 would be very useful.

  • Telepresence doesn't work - SSH isn't starting


    What were you trying to do?

    Run the $ telepresence command. This was freshly installed on a new machine.

    What did you expect to happen?

    It would create a connection into the cluster

    What happened instead?

    I get the traceback below.

    Automatically included information

    Command line: ['/usr/local/bin/telepresence', '--swap-deployment', 'one-service', '--namespace=develop', '--docker-run', '--rm', '-e', 'MAVEN_OPTS=-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005', '-v/Users/eugene/kuber/one:/build', '-v', '/Users/eugene/.m2/repository:/m2', '-p', '9001:9001', '-p', '5005:5005', 'maven-build:jdk8', 'mvn', '-Dmaven.repo.local=/m2', '-f', '/build', 'spring-boot:run']
    Version: 0.95
    Python version: 3.7.1 (default, Nov 28 2018, 11:51:54) [Clang 10.0.0 (clang-1000.11.45.5)]
    kubectl version: Client Version: v1.13.0 // Server Version: v1.10.0
    oc version: (error: [Errno 2] No such file or directory: 'oc': 'oc')
    OS: Darwin home.local 17.7.0 Darwin Kernel Version 17.7.0: Wed Oct 10 23:06:14 PDT 2018; root:xnu-4570.71.13~1/RELEASE_X86_64 x86_64
    Traceback:

    Traceback (most recent call last):
      File "/usr/local/bin/telepresence/telepresence/cli.py", line 130, in crash_reporting
        yield
      File "/usr/local/bin/telepresence/telepresence/main.py", line 84, in main
        runner, remote_info, env, socks_port, ssh, mount_dir
      File "/usr/local/bin/telepresence/telepresence/outbound/setup.py", line 75, in launch
        args.also_proxy, env, ssh, mount_dir
      File "/usr/local/bin/telepresence/telepresence/outbound/container.py", line 122, in run_docker_command
        local_ssh.wait()
      File "/usr/local/bin/telepresence/telepresence/connect/ssh.py", line 82, in wait
        raise RuntimeError("SSH isn't starting.")
    RuntimeError: SSH isn't starting.
    
    

    Logs:

    xit 255 in 0.00 secs.
      32.9 TEL | [139] Running: ssh -F /dev/null -q -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -p 52795 root@localhost /bin/true
      32.9 TEL | [139] exit 255 in 0.02 secs.
      33.1  28 |   28.8 TEL | [116] Running: ssh -F /dev/null -q -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -p 38023 telepresence@localhost /bin/true
      33.1  28 |   28.8 TEL | [116] exit 255 in 0.00 secs.
      33.2 TEL | [140] Running: ssh -F /dev/null -q -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -p 52795 root@localhost /bin/true
      33.2 TEL | [140] exit 255 in 0.02 secs.
      33.4  28 |   29.0 TEL | [117] Running: ssh -F /dev/null -q -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -p 38023 telepresence@localhost /bin/true
      33.4  28 |   29.0 TEL | [117] exit 255 in 0.00 secs.
      33.4 TEL | [141] Running: ssh -F /dev/null -q -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -p 52795 root@localhost /bin/true
      33.5 TEL | [141] exit 255 in 0.02 secs.
    
    
  • Telepresence breaks ec2 machine's internet


    Describe the bug

    Hi, I am using telepresence to connect to my staging and CI EKS clusters via EC2 machines. Connecting to the CI clusters works amazingly well, but connecting to the staging clusters doesn't. Specifically, when I connect via sudo -E telepresence connect, the root/user daemon starts but then somehow interferes with my EC2 instance's internet access. In the worst case I am not able to SSH into the machine at all; in the best case, my EC2 machine can't even ping google.com unless I run telepresence quit.

    Orthogonal question: Is there a way to run telepresence in a more isolated way such that it doesn't interfere with my machine's internet?
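
    For context, the kind of isolation I'm imagining (a sketch using the telepresence.io kubeconfig extension that produces the "Adding never-proxy subnet" lines in the logs below; the cluster name, server URL, and subnet are placeholders) would be marking the ranges this instance must keep reaching directly as never-proxy:

    clusters:
    - name: staging
      cluster:
        server: https://<cluster>
        extensions:
        - name: telepresence.io
          extension:
            never-proxy:
            - <CIDR the EC2 instance itself needs to keep reaching directly>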


    To Reproduce
    Steps to reproduce the behavior:

    1. I run sudo -E telepresence connect.
    2. My EC2 instance's internet access vanishes until I reboot or disconnect telepresence.

    Expected behavior: Telepresence should not mess with my machine's internet.

    Versions

    • 2.4.10

    Additional context: Due to PII, I can't share all of the logs gathered from telepresence gather-logs, but if a specific log is required I can share it after redacting.

    Seeing logs like:

    Connector

    2022-02-13 01:26:39.3373 info    connector/background-manager : Existing Traffic Manager 2.4.11 not owned by cli or does not need upgrade, will not modify
    2022-02-13 01:29:06.6510 error   connector/server-grpc/conn=2 : Tunnel manager.Send() failed: EOF
    
    2022-02-13 13:59:48.8730 error   connector/server-grpc/conn=9/Uninstall-13 : Unable to look for existing helm release: Kubernetes cluster unreachable: Get "<cluster>": dial tcp: lookup <cluster> on 127.0.0.53:53: read udp 127.0.0.1:52514->127.0.0.53:53: i/o timeout. Assuming it's already gone...
    

    Daemon

    2022-02-13 13:23:35.0659 info    Logging at this level "info"
    2022-02-13 13:23:35.0661 info    ---
    2022-02-13 13:23:35.0661 info    Telepresence daemon v2.4.10 (api v3) starting...
    2022-02-13 13:23:35.0661 info    PID is 193
    2022-02-13 13:23:35.0661 info    
    2022-02-13 13:23:35.0869 info    daemon/server-grpc : gRPC server started
    2022-02-13 13:23:36.7385 info    daemon/server-grpc/conn=2 : Adding never-proxy subnet 50.18.23.135/32
    2022-02-13 13:23:36.7491 info    daemon/server-grpc/conn=2 : Adding never-proxy subnet 52.8.73.143/32
    2022-02-13 13:23:36.7535 info    daemon/watch-cluster-info : Adding service subnet 10.100.0.0/16
    2022-02-13 13:23:36.7536 info    daemon/watch-cluster-info : Adding pod subnet 172.31.0.0/18
    2022-02-13 13:23:36.7540 info    daemon/watch-cluster-info : started command ["ip" "a" "add" "10.100.0.0/16" "dev" "tel0"] : dexec.pid="228"
    2022-02-13 13:23:36.7541 info    daemon/watch-cluster-info :  : dexec.pid="228" dexec.stream="stdin" dexec.err="EOF"
    2022-02-13 13:23:36.7548 info    daemon/watch-cluster-info : finished successfully: exit status 0 : dexec.pid="228"
    2022-02-13 13:23:36.7551 info    daemon/watch-cluster-info : started command ["ip" "a" "add" "172.31.0.0/18" "dev" "tel0"] : dexec.pid="229"
    2022-02-13 13:23:36.7552 info    daemon/watch-cluster-info :  : dexec.pid="229" dexec.stream="stdin" dexec.err="EOF"
    2022-02-13 13:23:36.7561 info    daemon/watch-cluster-info : finished successfully: exit status 0 : dexec.pid="229"
    2022-02-13 13:23:36.7562 info    daemon/watch-cluster-info : Setting cluster DNS to 10.100.0.10
    2022-02-13 13:23:36.7562 info    daemon/watch-cluster-info : Setting cluster domain to "cluster.local."
    2022-02-13 13:23:36.7588 info    daemon/server-router/MGR stream : Connected to Manager 2.4.11
    2022-02-13 13:23:36.8048 info    daemon/server-dns/docker : Automatically set -dns=127.0.0.53
    2022-02-13 13:23:36.8085 info    daemon/server-dns/docker/NAT-redirect : started command ["iptables" "-t" "nat" "-D" "OUTPUT" "-j" "telepresence-dns"] : dexec.pid="230"
    2022-02-13 13:23:36.8086 info    daemon/server-dns/docker/NAT-redirect :  : dexec.pid="230" dexec.stream="stdin" dexec.err="EOF"
    2022-02-13 13:23:36.8134 info    daemon/server-dns/docker/NAT-redirect :  : dexec.pid="230" dexec.stream="stdout+stderr" dexec.data="iptables v1.8.4 (legacy): Couldn't load target `telepresence-dns':No such file or directory\n"
    2022-02-13 13:23:36.8135 info    daemon/server-dns/docker/NAT-redirect :  : dexec.pid="230" dexec.stream="stdout+stderr" dexec.data="\n"
    2022-02-13 13:23:36.8135 info    daemon/server-dns/docker/NAT-redirect :  : dexec.pid="230" dexec.stream="stdout+stderr" dexec.data="Try `iptables -h' or 'iptables --help' for more information.\n"
    2022-02-13 13:23:36.8136 info    daemon/server-dns/docker/NAT-redirect : finished with error: exit status 2 : dexec.pid="230"
    2022-02-13 13:23:36.8138 info    daemon/server-dns/docker/NAT-redirect : started command ["iptables" "-t" "nat" "-F" "telepresence-dns"] : dexec.pid="231"
    2022-02-13 13:23:36.8139 info    daemon/server-dns/docker/NAT-redirect :  : dexec.pid="231" dexec.stream="stdin" dexec.err="EOF"
    2022-02-13 13:23:36.8144 info    daemon/server-dns/docker/NAT-redirect :  : dexec.pid="231" dexec.stream="stdout+stderr" dexec.data="iptables: No chain/target/match by that name.\n"
    2022-02-13 13:23:36.8145 info    daemon/server-dns/docker/NAT-redirect : finished with error: exit status 1 : dexec.pid="231"
    2022-02-13 13:23:36.8147 info    daemon/server-dns/docker/NAT-redirect : started command ["iptables" "-t" "nat" "-X" "telepresence-dns"] : dexec.pid="232"
    2022-02-13 13:23:36.8148 info    daemon/server-dns/docker/NAT-redirect :  : dexec.pid="232" dexec.stream="stdin" dexec.err="EOF"
    2022-02-13 13:23:36.8153 info    daemon/server-dns/docker/NAT-redirect :  : dexec.pid="232" dexec.stream="stdout+stderr" dexec.data="iptables: No chain/target/match by that name.\n"
    2022-02-13 13:23:36.8154 info    daemon/server-dns/docker/NAT-redirect : finished with error: exit status 1 : dexec.pid="232"
    2022-02-13 13:23:36.8156 info    daemon/server-dns/docker/NAT-redirect : started command ["iptables" "-t" "nat" "-N" "telepresence-dns"] : dexec.pid="233"
    2022-02-13 13:23:36.8157 info    daemon/server-dns/docker/NAT-redirect :  : dexec.pid="233" dexec.stream="stdin" dexec.err="EOF"
    2022-02-13 13:23:36.8163 info    daemon/server-dns/docker/NAT-redirect : finished successfully: exit status 0 : dexec.pid="233"
    2022-02-13 13:23:36.8165 info    daemon/server-dns/docker/NAT-redirect : started command ["iptables" "-t" "nat" "-I" "OUTPUT" "1" "-j" "telepresence-dns"] : dexec.pid="234"
    2022-02-13 13:23:36.8166 info    daemon/server-dns/docker/NAT-redirect :  : dexec.pid="234" dexec.stream="stdin" dexec.err="EOF"
    2022-02-13 13:23:36.8180 info    daemon/server-dns/docker/NAT-redirect : finished successfully: exit status 0 : dexec.pid="234"
    2022-02-13 13:23:36.8184 info    daemon/server-dns/docker/NAT-redirect : started command ["iptables" "-t" "nat" "-A" "telepresence-dns" "-p" "udp" "--source" "127.0.0.1" "--sport" "39891" "-j" "RETURN"] : dexec.pid="235"
    2022-02-13 13:23:36.8185 info    daemon/server-dns/docker/NAT-redirect :  : dexec.pid="235" dexec.stream="stdin" dexec.err="EOF"
    2022-02-13 13:23:36.8211 info    daemon/server-dns/docker/NAT-redirect : finished successfully: exit status 0 : dexec.pid="235"
    2022-02-13 13:23:36.8213 info    daemon/server-dns/docker/NAT-redirect : started command ["iptables" "-t" "nat" "-A" "telepresence-dns" "-p" "udp" "--dest" "127.0.0.53/32" "--dport" "53" "-j" "REDIRECT" "--to-ports" "52912"] : dexec.pid="236"
    2022-02-13 13:23:36.8213 info    daemon/server-dns/docker/NAT-redirect :  : dexec.pid="236" dexec.stream="stdin" dexec.err="EOF"
    2022-02-13 13:23:36.8265 info    daemon/server-dns/docker/NAT-redirect : finished successfully: exit status 0 : dexec.pid="236"
    2022-02-13 13:24:21.0625 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:24:21.0626 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:24:39.0648 info    daemon/background-metriton : scout report "incluster_dns_query" failed: Post "https://metriton.datawire.io/scout": dial tcp: lookup metriton.datawire.io on 127.0.0.53:53: read udp 127.0.0.1:55046->127.0.0.53:53: i/o timeout
    2022-02-13 13:28:34.1052 info    daemon/server-dns/docker/Server : scout report "incluster_dns_query" discarded. Output buffer is full (or closed)
    [the same "read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout" errors and periodic scout-report failures repeat every few seconds until 2022-02-13 13:30:21]
    2022-02-13 13:30:21.2629 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:30:21.2627 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:30:26.3170 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:30:26.3171 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:30:26.3171 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:30:26.3170 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:30:29.1292 info    daemon/server-dns/docker/Server : scout report "incluster_dns_query" discarded. Output buffer is full (or closed)
    2022-02-13 13:30:29.2814 info    daemon/server-dns/docker/Server : scout report "incluster_dns_query" discarded. Output buffer is full (or closed)
    2022-02-13 13:30:31.2816 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:30:31.2817 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:30:31.2817 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:30:31.2817 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:30:34.1354 info    daemon/server-dns/docker/Server : scout report "incluster_dns_query" discarded. Output buffer is full (or closed)
    2022-02-13 13:30:36.1359 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:30:36.1359 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:30:38.2081 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:30:38.2081 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:30:39.1178 info    daemon/background-metriton : scout report "incluster_dns_query" failed: Post "https://metriton.datawire.io/scout": dial tcp: lookup metriton.datawire.io on 127.0.0.53:53: read udp 127.0.0.1:50489->127.0.0.53:53: i/o timeout
    2022-02-13 13:30:41.1182 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:30:41.1183 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:30:43.2130 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:30:43.2130 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:30:46.1186 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:30:46.1187 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:30:48.2351 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:30:48.2351 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:30:49.1290 info    daemon/server-dns/docker/Server : scout report "incluster_dns_query" discarded. Output buffer is full (or closed)
    2022-02-13 13:30:51.1296 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:30:51.1296 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:30:51.2304 info    daemon/server-dns/docker/Server : scout report "incluster_dns_query" discarded. Output buffer is full (or closed)
    2022-02-13 13:30:53.2308 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:30:53.2308 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:30:54.1388 info    daemon/server-dns/docker/Server : scout report "incluster_dns_query" discarded. Output buffer is full (or closed)
    2022-02-13 13:30:56.1391 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:30:56.1392 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:30:59.1206 info    daemon/background-metriton : scout report "incluster_dns_query" failed: Post "https://metriton.datawire.io/scout": dial tcp: lookup metriton.datawire.io on 127.0.0.53:53: read udp 127.0.0.1:59145->127.0.0.53:53: i/o timeout
    2022-02-13 13:31:01.9039 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:31:01.9039 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:31:01.9040 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:31:01.9040 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:31:06.9086 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:31:06.9086 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:31:06.9087 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:31:06.9087 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:31:09.9299 info    daemon/server-dns/docker/Server : scout report "incluster_dns_query" discarded. Output buffer is full (or closed)
    2022-02-13 13:31:11.9301 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:31:11.9302 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:31:11.9302 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:31:11.9302 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:31:14.1294 info    daemon/server-dns/docker/Server : scout report "incluster_dns_query" discarded. Output buffer is full (or closed)
    2022-02-13 13:31:14.9307 info    daemon/server-dns/docker/Server : scout report "incluster_dns_query" discarded. Output buffer is full (or closed)
    2022-02-13 13:31:16.9311 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:31:16.9311 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:31:16.9311 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:31:16.9311 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:31:19.1237 info    daemon/background-metriton : scout report "incluster_dns_query" failed: Post "https://metriton.datawire.io/scout": dial tcp: lookup metriton.datawire.io on 127.0.0.53:53: read udp 127.0.0.1:59946->127.0.0.53:53: i/o timeout
    2022-02-13 13:31:21.1243 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:31:21.1244 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:31:26.1253 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:31:26.1254 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:31:31.1540 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:31:31.1541 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:31:31.1541 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:31:31.1541 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:31:34.1429 info    daemon/server-dns/docker/Server : scout report "incluster_dns_query" discarded. Output buffer is full (or closed)
    2022-02-13 13:31:36.1438 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:31:36.1438 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:31:36.1439 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:31:36.1439 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:31:37.7106 info    daemon/server-dns/docker/Server : scout report "incluster_dns_query" discarded. Output buffer is full (or closed)
    2022-02-13 13:31:39.1265 info    daemon/background-metriton : scout report "incluster_dns_query" failed: Post "https://metriton.datawire.io/scout": dial tcp: lookup metriton.datawire.io on 127.0.0.53:53: read udp 127.0.0.1:53171->127.0.0.53:53: i/o timeout
    2022-02-13 13:31:41.1273 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:31:41.1273 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:31:41.1273 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:31:41.1273 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:31:46.1281 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:31:46.1281 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:31:46.1282 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:31:46.1282 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:31:49.1424 info    daemon/server-dns/docker/Server : scout report "incluster_dns_query" discarded. Output buffer is full (or closed)
    2022-02-13 13:31:50.9147 info    daemon/daemon-quit : Shutting down connector
    2022-02-13 13:31:50.9147 info    daemon/server-dns/docker:shutdown_logger : shutting down (gracefully)...
    2022-02-13 13:31:50.9147 info    daemon/server-dns/docker/Server:shutdown_logger : shutting down (gracefully)...
    2022-02-13 13:31:50.9147 info    daemon:shutdown_logger : shutting down (gracefully)...
    2022-02-13 13:31:51.1436 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    2022-02-13 13:31:51.1436 error   daemon/server-dns/docker/Server : read udp 127.0.0.1:39891->127.0.0.53:53: i/o timeout
    time="2022-02-13T13:31:51Z" level=info dexec.pid=280 msg="started command [\"iptables\" \"-t\" \"nat\" \"-D\" \"OUTPUT\" \"-j\" \"telepresence-dns\"]"
    time="2022-02-13T13:31:51Z" level=info dexec.pid=280 dexec.stream=stdin dexec.err=EOF
    time="2022-02-13T13:31:51Z" level=info dexec.pid=280 msg="finished successfully: exit status 0"
    time="2022-02-13T13:31:51Z" level=info dexec.pid=281 msg="started command [\"iptables\" \"-t\" \"nat\" \"-F\" \"telepresence-dns\"]"
    time="2022-02-13T13:31:51Z" level=info dexec.pid=281 dexec.stream=stdin dexec.err=EOF
    time="2022-02-13T13:31:51Z" level=info dexec.pid=281 msg="finished successfully: exit status 0"
    time="2022-02-13T13:31:51Z" level=info dexec.pid=282 msg="started command [\"iptables\" \"-t\" \"nat\" \"-X\" \"telepresence-dns\"]"
    time="2022-02-13T13:31:51Z" level=info dexec.pid=282 dexec.stream=stdin dexec.err=EOF
    time="2022-02-13T13:31:51Z" level=info dexec.pid=282 msg="finished successfully: exit status 0"
    2022-02-13 13:31:52.9155 info    daemon:shutdown_logger : shutting down (not-so-gracefully)...
    
  • Regression: Unable to download large files via intercept after telepresence 2.3

    Regression: Unable to download large files via intercept after telepresence 2.3

    Hello,

    I have been using telepresence v2.2.2 with a GKE cluster and an nginx ingress for a few weeks, using Telepresence to develop the frontend against the backend already running in the cluster.

    The frontend uses React and TypeScript, with yarn and a yarn.lock file to pin all dependencies explicitly. We use webpack dev server with hot reload for development, and that is how Telepresence was used: the deployment of the prod version of the app gets intercepted and all UI requests are routed to the webpack dev server on the developer's computer.

    The application in GKE sits behind nginx-ingress (version 0.32.0), and the ingress exposes the application over HTTPS.

    Things worked fine for a while, including the hot reload feature. Recently I updated to telepresence v2.3.2 and things stopped working: the UI never finishes loading. I can see some of the webpack messages, but nothing finishes loading.

    After a lot of trial and error with things that could possibly go wrong, I have ruled out:

    • updates to the backend code - by rolling back to a version that I am confident worked with telepresence and ensuring all backend components use a specific container image and settings (the project has a separate config repository for Kubernetes, so it is easy to roll back to a specific version)
    • updates to the frontend code - I rolled back to a commit I know works with telepresence; since all the npm packages are pinned in yarn.lock, it should be a reproducible setup
    • browser issues - I ran an older version of Chrome for a while to ensure the problem is not related to an update in the browser
    • generic networking issues - I replaced webpack dev server with nginx running on my machine and hosting a static directory; I was able to download small and large files without issues

    After ruling out all the issues that could be caused by changes on my end, I systematically tried all versions of telepresence and found that v2.2.2 works as expected while v2.3.0 does not. I have also tried v2.3.1 and v2.3.2, and all of them fail once a telepresence intercept is enabled.

    I suspect this is some edge case related to the many features webpack dev server relies on, such as the websockets it uses to tell the browser that the remote code has changed and a reload is needed.

    Could anyone help determine what could be the issue? Is this a known problem with v2.3?

    Are there some settings, additional configuration or changes to traffic-manager deployment I could make to get additional logs / information?
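
    For additional information without changing the deployment, one option (a sketch, assuming the traffic manager sits in the default ambassador namespace) is to pull the logs of the traffic manager and of the traffic-agent sidecar in the intercepted pod:

    # Traffic manager logs (namespace is an assumption; adjust to where it was installed)
    kubectl -n ambassador logs deploy/traffic-manager
    # Traffic-agent sidecar logs in the intercepted pod (pod name is a placeholder)
    kubectl logs <intercepted-frontend-pod> -c traffic-agent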

  • Unable to intercept even though it says it was successful in creating the intercept

    Unable to intercept even though it says it was successful in creating the intercept

    Describe the bug Unable to intercept a service. The local environment is macOS Catalina.

    telepresence intercept demo-app-v1 --namespace=dev --port 9082:9082 --env-file config.env
    Using Deployment demo-app-v1
    intercepted
        Intercept name         : demo-app-v1-dev
        State                  : ACTIVE
        Workload kind          : Deployment
        Destination            : 127.0.0.1:9082
        Service Port Identifier: 9082
        Volume Mount Error     : macFUSE 4.0.5 or higher is required on your local machine
        Intercepting           : all TCP connections
    
    telepresence list
    No Workloads (Deployments, StatefulSets, or ReplicaSets)
    

    In the traffic-agent sidecar logs I find this:

    "2022-02-03 20:09:24.9697 debug   client : Lookup response for macc02zv6ttmd6t.dev.svc.cluster.local -> NOT FOUND
    "
    

    The intercept is just not working. I wonder what needs to be done.
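
    The Volume Mount Error above points at the likely cause: the sshfs-based volume mounts need macFUSE on the workstation. A hedged first step, assuming Homebrew is available, is to install it and re-create the intercept (macFUSE may require approving its kernel extension and rebooting):

    # Install macFUSE, then leave and re-create the intercept
    brew install --cask macfuse
    telepresence leave demo-app-v1-dev
    telepresence intercept demo-app-v1 --namespace=dev --port 9082:9082 --env-file config.env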

  • [Enhancement] More resilient DNS handling with the traffic manager

    [Enhancement] More resilient DNS handling with the traffic manager

    Describe the bug Cannot use Kubernetes DNS.

    ❯ curl -sv https://kubernetes.default.svc
    * Could not resolve host: kubernetes.default.svc
    

    log files: telepresence_logs.zip

    To Reproduce Steps to reproduce the behavior:

    1. When I run 'telepresence connect'
    2. I see 'connected'
    3. So I look at 'curl https://kubernetes.default'
    4. See error could not resolve host

    Expected behavior Can resolve the hostname.

    Versions (please complete the following information):

    • Output of telepresence version Client: v2.7.1 (api v3) Root Daemon: v2.7.1 (api v3) User Daemon: v2.7.1 (api v3)

    • Operating system of workstation running telepresence commands macOS Monterey

    • Kubernetes environment and Version [e.g. Minikube, bare metal, Google Kubernetes Engine] Docker Desktop

    VPN-related bugs: telepresence test-vpn shows an empty result.

    • Which VPN client are you using? F5 Access
    • Which VPN server are you using? F5
    • How is your VPN pushing DNS configuration? It may be useful to add the contents of /etc/resolv.conf. Mac does not use it.

    Additional context I cannot reproduce it reliably, but it happens a lot. I can work around it by restarting the traffic manager, as @peakschris suggested, so this must be related to the traffic manager's initialization process. If DNS handling were retried periodically, could we get this fixed?
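
    A minimal sketch of the restart workaround mentioned above, assuming the traffic manager runs in the default ambassador namespace:

    # Restart the traffic manager, then quit the local daemons and reconnect
    kubectl -n ambassador rollout restart deployment/traffic-manager
    telepresence quit -s
    telepresence connect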

  • Downgrade Telepresence to version 0.99

    Downgrade Telepresence to version 0.99

    Hi, I have Telepresence version 0.101 and I want to downgrade to version 0.99. I tried to downgrade using a pre-built copy from the link below, but it looks like it still uses the latest version. Is there a way for me to keep both versions and choose which version I want to use? If not, should I just delete my existing telepresence and use the link below? Thanks!!

    Pre-built link: https://s3.amazonaws.com/datawire-static-files/telepresence/telepresence-0.99.tar.gz
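
    A hedged sketch of running 0.99 side by side with the installed version, assuming the pre-built tarball unpacks into a prefix containing bin/telepresence (layout not verified here):

    # Unpack 0.99 into its own directory so it does not shadow the installed version
    curl -fsSL -o /tmp/telepresence-0.99.tar.gz https://s3.amazonaws.com/datawire-static-files/telepresence/telepresence-0.99.tar.gz
    mkdir -p ~/telepresence-0.99
    tar -xzf /tmp/telepresence-0.99.tar.gz -C ~/telepresence-0.99 --strip-components=1
    # Invoke it by full path, or put its bin directory first on PATH for a single shell
    ~/telepresence-0.99/bin/telepresence --version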

  • Use "kubernetes.default" service when extracting cluster domain.

    Use "kubernetes.default" service when extracting cluster domain.

    Description

    The traffic-manager would use "kubernetes.default.svc" prior to this commit, and that often failed.
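
    A quick in-cluster check of which name actually resolves (hypothetical throwaway busybox pod, not part of this change):

    kubectl run dns-check --rm -it --restart=Never --image=busybox:1.36 -- sh -c 'nslookup kubernetes.default; nslookup kubernetes.default.svc'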

    Checklist

    • [x] I made sure to update ./CHANGELOG.md.
    • [ ] I made sure to add any docs changes required for my change (including release notes).
    • [x] My change is adequately tested.
    • [ ] I updated DEVELOPING.md with any special dev tricks I had to use to work on this code efficiently.
    • [ ] I updated TELEMETRY.md if I added, changed, or removed a metric name.
    • [ ] Once my PR is ready to have integration tests run, I posted the PR in #telepresence-dev in the datawire-oss slack so that the "ok to test" label can be applied.
  • Telepresence systemd service(s)

    Telepresence systemd service(s)

    Please describe your use case / problem. We'd like to use telepresence connect like any other VPN/networking tool, with systemd to monitor/restart it and avoid launching the root-level daemon by hand. It should also be possible to send logs/errors to stdout/stderr so they end up in systemd-journald.

    Describe the solution you'd like One systemd service/template to connect to one or multiple clusters.

    Describe alternatives you've considered I've made some services, one root & one user, to do it. They are basic and need improvements/suggestions; a usage sketch follows the two unit files below.

    Root daemon:

    # /etc/systemd/system/telepresence.service
    [Unit]
    Description=Telepresence daemon
    Documentation=https://www.telepresence.io/docs/latest/quick-start/
    After=network-online.target multi-user.target
    Wants=network-online.target
    
    [Service]
    # Ensure we're clean and no old socket is used
    ExecStartPre=rm -f /var/run/telepresence-daemon.socket
    StandardInput=null
    ExecStart=/usr/local/bin/telepresence daemon-foreground %L/telepresence %T
    ExecStop=/usr/local/bin/telepresence quit -s --no-report
    LockPersonality=yes
    MemoryDenyWriteExecute=yes
    NoNewPrivileges=yes
    ProtectProc=invisible
    ProtectClock=yes
    DeviceAllow=/dev/net/tun
    Environment=XDG_CACHE_HOME=%T
    Environment=XDG_CONFIG_HOME=%T
    ProtectControlGroups=yes
    ProtectHome=yes
    ProtectKernelLogs=yes
    ProtectKernelModules=yes
    ProtectSystem=full
    Restart=always
    RestartSec=0
    RestrictNamespaces=yes
    RestrictRealtime=yes
    RestrictSUIDSGID=yes
    
    [Install]
    WantedBy=multi-user.target
    

    User daemon:

    # $HOME/.config/systemd/user/[email protected]
    [Unit]
    Description=Telepresence connect service
    
    [Service]
    Type=forking
    ExecStart=/usr/local/bin/telepresence connect --cache-dir=%T/telepresence --context=%i --request-timeout=2m --no-report
    ExecStop=/usr/local/bin/telepresence quit --stop-daemons --no-report
    PrivateTmp=yes
    MemoryDenyWriteExecute=yes
    NoNewPrivileges=yes
    ProtectProc=invisible
    ProtectControlGroups=yes
    ProtectHome=read-only
    ProtectSystem=full
    RestrictNamespaces=yes
    RestrictRealtime=yes
    RestrictSUIDSGID=yes
    
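    A usage sketch for these units, assuming the root unit is installed as telepresence.service and the user template as telepresence-connect@.service (the template name is an assumption):

    # Root daemon (system-wide)
    sudo systemctl daemon-reload
    sudo systemctl enable --now telepresence.service

    # Per-context user connect service; %i expands to the kube context name
    systemctl --user daemon-reload
    systemctl --user enable --now telepresence-connect@my-context.service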

    Versions (please complete the following information)

    • telepresence version: v2.9.5
    • kubernetes version: 1.24.6

    Additional context

  • Update repo address in Install section

    Update repo address in Install section

    Description

    Update repo address in Install section to https://app.getambassador.io. The use of the previous one (https://getambassador.io) is incorrect:

    %> helm repo add datawire https://getambassador.io
    Error: looks like "https://getambassador.io" is not a valid chart repository or cannot be reached: Get "https://www.getambassador.io/index.yaml": dial tcp [2a05:d014:275:cb01::c8]:443: connect: no route to host
    
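    With the corrected address the command succeeds (repository URL taken from the proposed change above):

    helm repo add datawire https://app.getambassador.io
    helm repo update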

    Checklist

    Nop

  • Don't panic when --docker-run is combined with --name

    Don't panic when --docker-run is combined with --name

    Description

    The code that parses arguments after -- would panic on flags that used --flag value instead of --flag=value. This commit ensures that both variants work.

    Checklist

    • [x] I made sure to update ./CHANGELOG.md.
    • [ ] I made sure to add any docs changes required for my change (including release notes).
    • [x] My change is adequately tested.
    • [ ] I updated DEVELOPING.md with any special dev tricks I had to use to work on this code efficiently.
    • [ ] I updated TELEMETRY.md if I added, changed, or removed a metric name.
    • [ ] Once my PR is ready to have integration tests run, I posted the PR in #telepresence-dev in the datawire-oss slack so that the "ok to test" label can be applied.
  • intercept error with request timed out while waiting for agent

    intercept error with request timed out while waiting for agent

    When I run telepresence intercept erda-cluster-manager -n erda -p 8280:80 -p 8294:9094 -p 8295:9095 --env-file .vscode/go-debug.env to intercept the pod's traffic, telepresence reports the error telepresence intercept: error: request timed out while waiting for agent erda-cluster-manager.erda to arrive.

    The telepresence version is v2.9.5, the server system is Ubuntu 18.04, the client system is Arch Linux latest (just synced), and the Kubernetes version is 1.20.1.
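
    A hedged first check for this error is whether the traffic-agent sidecar actually reached the workload's pod (pod name below is a placeholder; adjust the namespace and names to your setup):

    # Confirm the pod restarted with a traffic-agent container and look at its events and logs
    kubectl -n erda get pods | grep erda-cluster-manager
    kubectl -n erda describe pod <erda-cluster-manager-pod>
    kubectl -n erda logs <erda-cluster-manager-pod> -c traffic-agent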

    Here is the connector.log:

    2022-12-27 14:23:21.5011 info    Telepresence Connector v2.9.5 (api v3) starting...
    2022-12-27 14:23:21.5011 info    PID is 73783
    2022-12-27 14:23:21.5012 info    
    2022-12-27 14:23:21.5043 info    connector/server-grpc : gRPC server started
    2022-12-27 14:23:21.6689 info    connector/session : -- Starting new session
    2022-12-27 14:23:21.6691 info    connector/session : Connecting to k8s cluster...
    2022-12-27 14:23:21.6955 info    connector/session : Server version v1.20.0
    2022-12-27 14:23:21.6957 info    connector/session : Context: kubernetes-admin@kubernetes
    2022-12-27 14:23:21.6958 info    connector/session : Server: https://192.168.1.132:6443
    2022-12-27 14:23:21.6962 info    connector/session : Connected to context kubernetes-admin@kubernetes (https://192.168.1.132:6443)
    2022-12-27 14:23:21.7007 info    connector/session : Connecting to traffic manager...
    2022-12-27 14:23:21.8471 info    connector/session : Connected to traffic-manager 2.9.5
    2022-12-27 14:23:21.8564 info    connector/session : Connecting to root daemon...
    2022-12-27 14:23:24.6941 info    connector/session : Configuration reloaded
    

    traffic-manager-xxxx.log

    2022-12-27 04:11:16.9962 info    Traffic Manager v2.9.5 [uid:1000,gid:0]
    2022-12-27 04:11:16.9979 info    unable to load license: error reading license: open /home/telepresence/license: no such file or directory
    2022-12-27 04:11:16.9983 info    starting cloud token watchers
    2022-12-27 04:11:22.0199 info    Using cluster domain "cluster.local."
    2022-12-27 04:11:22.0267 info    Extracting service subnet 10.96.0.0/12 from create service error message
    2022-12-27 04:11:22.0268 info    Using podCIDRStrategy: auto
    2022-12-27 04:11:22.0268 info    Using AlsoProxy: []
    2022-12-27 04:11:22.0268 info    Using NeverProxy: []
    2022-12-27 04:11:22.0268 info    ExcludeSuffixes: [.com .io .net .org .ru]
    2022-12-27 04:11:22.0269 info    IncludeSuffixes: []
    2022-12-27 04:11:22.1284 info    Scanning 4 nodes
    2022-12-27 04:11:22.1287 info    Found 4 subnets
    2022-12-27 04:11:22.1288 info    Deriving subnets from podCIRs of nodes
    2022-12-27 04:11:23.2881 info    Using traffic-agent image "docker.io/datawire/ambassador-telepresence-agent:1.13.5"
    2022-12-27 04:11:23.3425 info    prometheus : Prometheus metrics server not started
    2022-12-27 04:11:23.3426 info    agent-injector : Loading ConfigMaps from []
    2022-12-27 04:11:23.3428 info    agent-injector : Mutating webhook service is listening on :443
    2022-12-27 04:11:23.3427 info    cli-config : Started watcher for ConfigMap traffic-manager-clients
    2022-12-27 04:11:24.3431 info    agent-configs : Started watcher for Services cluster wide
    2022-12-27 04:11:24.3432 info    agent-configs : Started watcher for ConfigMap telepresence-agents cluster wide
    2022-12-27 04:11:24.3915 info    agent-configs : Successfully rolled out erda-cluster-manager.erda
    2022-12-27 04:11:26.5848 info    agent-injector/conn=10.244.2.126:443 : Injecting traffic-agent into pod erda-cluster-manager-57868cbb5-.erda
    2022-12-27 04:11:26.5879 info    agent-injector/conn=10.244.2.126:443 : Injecting 7 patches into pod erda-cluster-manager-57868cbb5-.erda
    2022-12-27 04:13:14.1054 info    agent-configs : Ended watcher for Services cluster wide
    2022-12-27 04:13:14.1054 info    agent-configs : Ended watcher for ConfigMap telepresence-agents cluster wide
    2022-12-27 04:13:14.1384 info    agent-injector/conn=10.244.2.126:443 : Successfully rolled out erda-cluster-manager.erda
    2022-12-27 04:20:12.7514 error   httpd/conn=127.0.0.1:8081 : LookupIP failed, trying LookupIP "tel2-recursion-check.kube-system.airdream." : session_id="457f6708-8e73-4263-972e-32b2965e1b6f"
    2022-12-27 05:40:07.0919 error   httpd/conn=127.0.0.1:8081 : LookupIP failed, trying LookupIP "archlinux-br.com.br." : session_id="457f6708-8e73-4263-972e-32b2965e1b6f"
    2022-12-27 05:40:07.0950 error   httpd/conn=127.0.0.1:8081 : LookupIP failed, trying LookupIP "archlinux-br.com.br." : session_id="457f6708-8e73-4263-972e-32b2965e1b6f"
    2022-12-27 05:40:07.5697 error   httpd/conn=127.0.0.1:8081 : LookupIP failed, trying LookupIP "archlinux-br.com.br.airdream." : session_id="457f6708-8e73-4263-972e-32b2965e1b6f"
    2022-12-27 05:40:07.5732 error   httpd/conn=127.0.0.1:8081 : LookupIP failed, trying LookupIP "archlinux-br.com.br.airdream." : session_id="457f6708-8e73-4263-972e-32b2965e1b6f"
    2022-12-27 05:43:12.7841 error   httpd/conn=127.0.0.1:8081 : LookupIP failed, trying LookupIP "metadata.google.internal." : session_id="457f6708-8e73-4263-972e-32b2965e1b6f"
    2022-12-27 05:43:12.8959 error   httpd/conn=127.0.0.1:8081 : LookupIP failed, trying LookupIP "metadata.google.internal." : session_id="457f6708-8e73-4263-972e-32b2965e1b6f"
    2022-12-27 05:43:12.9940 error   httpd/conn=127.0.0.1:8081 : LookupIP failed, trying LookupIP "metadata.google.internal.airdream." : session_id="457f6708-8e73-4263-972e-32b2965e1b6f"
    2022-12-27 05:43:12.9970 error   httpd/conn=127.0.0.1:8081 : LookupIP failed, trying LookupIP "metadata.google.internal.airdream." : session_id="457f6708-8e73-4263-972e-32b2965e1b6f"
    2022-12-27 05:43:18.7723 error   httpd/conn=127.0.0.1:8081 : LookupIP failed, trying LookupIP "metadata.google.internal." : session_id="457f6708-8e73-4263-972e-32b2965e1b6f"
    2022-12-27 05:43:18.7724 error   httpd/conn=127.0.0.1:8081 : LookupIP failed, trying LookupIP "metadata.google.internal." : session_id="457f6708-8e73-4263-972e-32b2965e1b6f"
    2022-12-27 05:51:52.5856 error   httpd/conn=127.0.0.1:8081 : LookupIP failed, trying LookupIP "tel2-recursion-check.kube-system.airdream." : session_id="457f6708-8e73-4263-972e-32b2965e1b6f"
    2022-12-27 06:03:42.8814 error   httpd/conn=127.0.0.1:8081 : LookupIP failed, trying LookupIP "archlinux-br.com.br." : session_id="457f6708-8e73-4263-972e-32b2965e1b6f"
    2022-12-27 06:03:43.7878 error   httpd/conn=127.0.0.1:8081 : LookupIP failed, trying LookupIP "archlinux-br.com.br." : session_id="457f6708-8e73-4263-972e-32b2965e1b6f"
    2022-12-27 06:03:44.0535 error   httpd/conn=127.0.0.1:8081 : LookupIP failed, trying LookupIP "archlinux-br.com.br.airdream." : session_id="457f6708-8e73-4263-972e-32b2965e1b6f"
    2022-12-27 06:03:44.0538 error   httpd/conn=127.0.0.1:8081 : LookupIP failed, trying LookupIP "archlinux-br.com.br.airdream." : session_id="457f6708-8e73-4263-972e-32b2965e1b6f"
    

    daemon.log

    2022-12-27 14:23:21.2779 info    ---
    2022-12-27 14:23:21.2780 info    Telepresence daemon v2.9.5 (api v3) starting...
    2022-12-27 14:23:21.2781 info    PID is 73767
    2022-12-27 14:23:21.2781 info    
    2022-12-27 14:23:21.2806 info    daemon/server-grpc : gRPC server started
    2022-12-27 14:23:21.8606 info    daemon/session : -- Starting new session
    2022-12-27 14:23:21.8677 info    daemon/session : Connected to Manager 2.9.5
    2022-12-27 14:23:21.9055 info    daemon/session : also-proxy subnets []
    2022-12-27 14:23:21.9057 info    daemon/session : never-proxy subnets [192.168.1.132/32]
    2022-12-27 14:23:21.9059 info    daemon/session : Configuration reloaded
    2022-12-27 14:23:22.4150 info    daemon/session : Starting Endpoint
    2022-12-27 14:23:22.4153 info    daemon/session/watch-cluster-info : Setting cluster DNS to 10.244.2.126
    2022-12-27 14:23:22.4154 info    daemon/session/watch-cluster-info : Setting cluster domain to "cluster.local."
    2022-12-27 14:23:22.4154 info    daemon/session/watch-cluster-info : also-proxy subnets []
    2022-12-27 14:23:22.4155 info    daemon/session/watch-cluster-info : never-proxy subnets [192.168.1.132/32]
    2022-12-27 14:23:22.4155 info    daemon/session/watch-cluster-info : Adding Service subnet 10.96.0.0/12
    2022-12-27 14:23:22.4156 info    daemon/session/watch-cluster-info : Adding pod subnet 10.244.0.0/24
    2022-12-27 14:23:22.4156 info    daemon/session/watch-cluster-info : Adding pod subnet 10.244.1.0/24
    2022-12-27 14:23:22.4156 info    daemon/session/watch-cluster-info : Adding pod subnet 10.244.2.0/24
    2022-12-27 14:23:22.4157 info    daemon/session/watch-cluster-info : Adding pod subnet 10.244.3.0/24
    2022-12-27 14:23:22.4172 info    daemon/session/watch-cluster-info : started command ["ip" "a" "add" "10.96.0.0/12" "dev" "tel0"] : dexec.pid="73838"
    2022-12-27 14:23:22.4187 info    daemon/session/watch-cluster-info :  : dexec.pid="73838" dexec.stream="stdin" dexec.err="EOF"
    2022-12-27 14:23:22.4245 info    daemon/session/watch-cluster-info : finished successfully: exit status 0 : dexec.pid="73838"
    2022-12-27 14:23:22.4277 info    daemon/session/watch-cluster-info : started command ["ip" "a" "add" "10.244.0.0/24" "dev" "tel0"] : dexec.pid="73840"
    2022-12-27 14:23:22.4278 info    daemon/session/watch-cluster-info :  : dexec.pid="73840" dexec.stream="stdin" dexec.err="EOF"
    2022-12-27 14:23:22.4307 info    daemon/session/watch-cluster-info : finished successfully: exit status 0 : dexec.pid="73840"
    2022-12-27 14:23:22.4320 info    daemon/session/watch-cluster-info : started command ["ip" "a" "add" "10.244.1.0/24" "dev" "tel0"] : dexec.pid="73841"
    2022-12-27 14:23:22.4322 info    daemon/session/watch-cluster-info :  : dexec.pid="73841" dexec.stream="stdin" dexec.err="EOF"
    2022-12-27 14:23:22.4370 info    daemon/session/watch-cluster-info : finished successfully: exit status 0 : dexec.pid="73841"
    2022-12-27 14:23:22.4384 info    daemon/session/watch-cluster-info : started command ["ip" "a" "add" "10.244.2.0/24" "dev" "tel0"] : dexec.pid="73843"
    2022-12-27 14:23:22.4385 info    daemon/session/watch-cluster-info :  : dexec.pid="73843" dexec.stream="stdin" dexec.err="EOF"
    2022-12-27 14:23:22.4435 info    daemon/session/watch-cluster-info : finished successfully: exit status 0 : dexec.pid="73843"
    2022-12-27 14:23:22.4448 info    daemon/session/watch-cluster-info : started command ["ip" "a" "add" "10.244.3.0/24" "dev" "tel0"] : dexec.pid="73846"
    2022-12-27 14:23:22.4453 info    daemon/session/watch-cluster-info :  : dexec.pid="73846" dexec.stream="stdin" dexec.err="EOF"
    2022-12-27 14:23:22.4510 info    daemon/session/watch-cluster-info : finished successfully: exit status 0 : dexec.pid="73846"
    2022-12-27 14:23:22.4567 info    daemon/session/dns/resolved : started command ["docker" "inspect" "bridge" "-f" "{{(index .IPAM.Config 0).Gateway}}"] : dexec.pid="73849"
    2022-12-27 14:23:22.4570 info    daemon/session/dns/resolved :  : dexec.pid="73849" dexec.stream="stdin" dexec.err="EOF"
    2022-12-27 14:23:22.5370 info    daemon/session/dns/resolved :  : dexec.pid="73849" dexec.stream="stdout" dexec.data="172.17.0.1\n"
    2022-12-27 14:23:22.5472 info    daemon/session/dns/resolved : finished successfully: exit status 0 : dexec.pid="73849"
    2022-12-27 14:23:22.5511 info    daemon/session/dns/resolved : listening to docker bridge at 172.17.0.1
    2022-12-27 14:23:22.5515 info    daemon/session/dns/resolved/Server : Configuring DNS IP 10.244.2.126
    2022-12-27 14:23:24.5631 error   daemon/session/dns/resolved/SanityCheck : resolver did not receive requests from systemd-resolved
    2022-12-27 14:23:24.5634 error   daemon/session/dns/resolved/SanityCheck : goroutine "/daemon/session/dns/resolved/SanityCheck" exited with error: resolved not configured
    2022-12-27 14:23:24.5635 info    daemon/session/dns/resolved/Server:shutdown_logger : shutting down (gracefully)...
    2022-12-27 14:23:24.5638 info    daemon/session/dns/resolved:shutdown_logger : shutting down (gracefully)...
    2022-12-27 14:23:24.5698 info    daemon/session/dns/resolved:shutdown_status :   final goroutine statuses:
    2022-12-27 14:23:24.5700 info    daemon/session/dns/resolved:shutdown_status :     /daemon/session/dns/resolved/SanityCheck: exited with error
    2022-12-27 14:23:24.5700 info    daemon/session/dns/resolved:shutdown_status :     /daemon/session/dns/resolved/Server     : exited without error
    2022-12-27 14:23:24.5701 info    daemon/session/dns : Unable to use systemd-resolved, falling back to local server
    2022-12-27 14:23:24.5702 info    daemon/session/dns/legacy : Automatically set -dns=202.106.0.20
    2022-12-27 14:23:24.5721 info    daemon/session/dns/legacy : started command ["docker" "inspect" "bridge" "-f" "{{(index .IPAM.Config 0).Gateway}}"] : dexec.pid="73951"
    2022-12-27 14:23:24.5725 info    daemon/session/dns/legacy :  : dexec.pid="73951" dexec.stream="stdin" dexec.err="EOF"
    2022-12-27 14:23:24.6566 info    daemon/session/dns/legacy :  : dexec.pid="73951" dexec.stream="stdout" dexec.data="172.17.0.1\n"
    2022-12-27 14:23:24.6667 info    daemon/session/dns/legacy : finished successfully: exit status 0 : dexec.pid="73951"
    2022-12-27 14:23:24.6692 info    daemon/session/dns/legacy : listening to docker bridge at 172.17.0.1
    2022-12-27 14:23:24.6735 info    daemon/session/dns/legacy/NAT-redirect : started command ["iptables" "-t" "nat" "-D" "OUTPUT" "-j" "TELEPRESENCE_DNS"] : dexec.pid="73963"
    2022-12-27 14:23:24.6737 info    daemon/session/dns/legacy/NAT-redirect :  : dexec.pid="73963" dexec.stream="stdin" dexec.err="EOF"
    2022-12-27 14:23:24.6770 info    daemon/session/dns/legacy/NAT-redirect :  : dexec.pid="73963" dexec.stream="stdout+stderr" dexec.data="iptables v1.8.8 (legacy): Couldn't load target `TELEPRESENCE_DNS':No such file or directory\n"
    2022-12-27 14:23:24.6771 info    daemon/session/dns/legacy/NAT-redirect :  : dexec.pid="73963" dexec.stream="stdout+stderr" dexec.data="\n"
    2022-12-27 14:23:24.6772 info    daemon/session/dns/legacy/NAT-redirect :  : dexec.pid="73963" dexec.stream="stdout+stderr" dexec.data="Try `iptables -h' or 'iptables --help' for more information.\n"
    2022-12-27 14:23:24.6777 info    daemon/session/dns/legacy/NAT-redirect : finished with error: exit status 2 : dexec.pid="73963"
    2022-12-27 14:23:24.6789 info    daemon/session/dns/legacy/NAT-redirect : started command ["iptables" "-t" "nat" "-F" "TELEPRESENCE_DNS"] : dexec.pid="73964"
    2022-12-27 14:23:24.6792 info    daemon/session/dns/legacy/NAT-redirect :  : dexec.pid="73964" dexec.stream="stdin" dexec.err="EOF"
    2022-12-27 14:23:24.6819 info    daemon/session/dns/legacy/NAT-redirect :  : dexec.pid="73964" dexec.stream="stdout+stderr" dexec.data="iptables: No chain/target/match by that name.\n"
    2022-12-27 14:23:24.6826 info    daemon/session/dns/legacy/NAT-redirect : finished with error: exit status 1 : dexec.pid="73964"
    2022-12-27 14:23:24.6838 info    daemon/session/dns/legacy/NAT-redirect : started command ["iptables" "-t" "nat" "-X" "TELEPRESENCE_DNS"] : dexec.pid="73965"
    2022-12-27 14:23:24.6840 info    daemon/session/dns/legacy/NAT-redirect :  : dexec.pid="73965" dexec.stream="stdin" dexec.err="EOF"
    2022-12-27 14:23:24.6874 info    daemon/session/dns/legacy/NAT-redirect :  : dexec.pid="73965" dexec.stream="stdout+stderr" dexec.data="iptables: No chain/target/match by that name.\n"
    2022-12-27 14:23:24.6882 info    daemon/session/dns/legacy/NAT-redirect : finished with error: exit status 1 : dexec.pid="73965"
    2022-12-27 14:23:24.6895 info    daemon/session/dns/legacy/NAT-redirect : started command ["iptables" "-t" "nat" "-N" "TELEPRESENCE_DNS"] : dexec.pid="73966"
    2022-12-27 14:23:24.6898 info    daemon/session/dns/legacy/NAT-redirect :  : dexec.pid="73966" dexec.stream="stdin" dexec.err="EOF"
    2022-12-27 14:23:24.6941 info    daemon/session/dns/legacy/NAT-redirect : finished successfully: exit status 0 : dexec.pid="73966"
    2022-12-27 14:23:24.6953 info    daemon/session/dns/legacy/NAT-redirect : started command ["iptables" "-t" "nat" "-A" "TELEPRESENCE_DNS" "-p" "udp" "--source" "192.168.1.130" "--sport" "57501" "-j" "RETURN"] : dexec.pid="73967"
    2022-12-27 14:23:24.6957 info    daemon/session/dns/legacy/NAT-redirect :  : dexec.pid="73967" dexec.stream="stdin" dexec.err="EOF"
    2022-12-27 14:23:24.7019 info    daemon/session/dns/legacy/NAT-redirect : finished successfully: exit status 0 : dexec.pid="73967"
    2022-12-27 14:23:24.7034 info    daemon/session/dns/legacy/NAT-redirect : started command ["iptables" "-t" "nat" "-A" "TELEPRESENCE_DNS" "-p" "udp" "--source" "192.168.1.130" "--sport" "47425" "-j" "RETURN"] : dexec.pid="73968"
    2022-12-27 14:23:24.7037 info    daemon/session/dns/legacy/NAT-redirect :  : dexec.pid="73968" dexec.stream="stdin" dexec.err="EOF"
    2022-12-27 14:23:24.7095 info    daemon/session/dns/legacy/NAT-redirect : finished successfully: exit status 0 : dexec.pid="73968"
    2022-12-27 14:23:24.7108 info    daemon/session/dns/legacy/NAT-redirect : started command ["iptables" "-t" "nat" "-A" "TELEPRESENCE_DNS" "-p" "udp" "--source" "192.168.1.130" "--sport" "37294" "-j" "RETURN"] : dexec.pid="73970"
    2022-12-27 14:23:24.7109 info    daemon/session/dns/legacy/NAT-redirect :  : dexec.pid="73970" dexec.stream="stdin" dexec.err="EOF"
    2022-12-27 14:23:24.7167 info    daemon/session/dns/legacy/NAT-redirect : finished successfully: exit status 0 : dexec.pid="73970"
    2022-12-27 14:23:24.7180 info    daemon/session/dns/legacy/NAT-redirect : started command ["iptables" "-t" "nat" "-A" "TELEPRESENCE_DNS" "-p" "udp" "--source" "192.168.1.130" "--sport" "46785" "-j" "RETURN"] : dexec.pid="73972"
    2022-12-27 14:23:24.7183 info    daemon/session/dns/legacy/NAT-redirect :  : dexec.pid="73972" dexec.stream="stdin" dexec.err="EOF"
    2022-12-27 14:23:24.7236 info    daemon/session/dns/legacy/NAT-redirect : finished successfully: exit status 0 : dexec.pid="73972"
    2022-12-27 14:23:24.7250 info    daemon/session/dns/legacy/NAT-redirect : started command ["iptables" "-t" "nat" "-A" "TELEPRESENCE_DNS" "-p" "udp" "--source" "192.168.1.130" "--sport" "45109" "-j" "RETURN"] : dexec.pid="73973"
    2022-12-27 14:23:24.7254 info    daemon/session/dns/legacy/NAT-redirect :  : dexec.pid="73973" dexec.stream="stdin" dexec.err="EOF"
    2022-12-27 14:23:24.7305 info    daemon/session/dns/legacy/NAT-redirect : finished successfully: exit status 0 : dexec.pid="73973"
    2022-12-27 14:23:24.7319 info    daemon/session/dns/legacy/NAT-redirect : started command ["iptables" "-t" "nat" "-A" "TELEPRESENCE_DNS" "-p" "udp" "--source" "192.168.1.130" "--sport" "36964" "-j" "RETURN"] : dexec.pid="73974"
    2022-12-27 14:23:24.7322 info    daemon/session/dns/legacy/NAT-redirect :  : dexec.pid="73974" dexec.stream="stdin" dexec.err="EOF"
    2022-12-27 14:23:24.7376 info    daemon/session/dns/legacy/NAT-redirect : finished successfully: exit status 0 : dexec.pid="73974"
    2022-12-27 14:23:24.7391 info    daemon/session/dns/legacy/NAT-redirect : started command ["iptables" "-t" "nat" "-A" "TELEPRESENCE_DNS" "-p" "udp" "--source" "192.168.1.130" "--sport" "36578" "-j" "RETURN"] : dexec.pid="73975"
    2022-12-27 14:23:24.7393 info    daemon/session/dns/legacy/NAT-redirect :  : dexec.pid="73975" dexec.stream="stdin" dexec.err="EOF"
    2022-12-27 14:23:24.7447 info    daemon/session/dns/legacy/NAT-redirect : finished successfully: exit status 0 : dexec.pid="73975"
    2022-12-27 14:23:24.7460 info    daemon/session/dns/legacy/NAT-redirect : started command ["iptables" "-t" "nat" "-A" "TELEPRESENCE_DNS" "-p" "udp" "--source" "192.168.1.130" "--sport" "37161" "-j" "RETURN"] : dexec.pid="73976"
    2022-12-27 14:23:24.7461 info    daemon/session/dns/legacy/NAT-redirect :  : dexec.pid="73976" dexec.stream="stdin" dexec.err="EOF"
    2022-12-27 14:23:24.7516 info    daemon/session/dns/legacy/NAT-redirect : finished successfully: exit status 0 : dexec.pid="73976"
    2022-12-27 14:23:24.7530 info    daemon/session/dns/legacy/NAT-redirect : started command ["iptables" "-t" "nat" "-A" "TELEPRESENCE_DNS" "-p" "udp" "--source" "192.168.1.130" "--sport" "40481" "-j" "RETURN"] : dexec.pid="73977"
    2022-12-27 14:23:24.7531 info    daemon/session/dns/legacy/NAT-redirect :  : dexec.pid="73977" dexec.stream="stdin" dexec.err="EOF"
    2022-12-27 14:23:24.7585 info    daemon/session/dns/legacy/NAT-redirect : finished successfully: exit status 0 : dexec.pid="73977"
    2022-12-27 14:23:24.7597 info    daemon/session/dns/legacy/NAT-redirect : started command ["iptables" "-t" "nat" "-A" "TELEPRESENCE_DNS" "-p" "udp" "--source" "192.168.1.130" "--sport" "48291" "-j" "RETURN"] : dexec.pid="73978"
    2022-12-27 14:23:24.7599 info    daemon/session/dns/legacy/NAT-redirect :  : dexec.pid="73978" dexec.stream="stdin" dexec.err="EOF"
    2022-12-27 14:23:24.7652 info    daemon/session/dns/legacy/NAT-redirect : finished successfully: exit status 0 : dexec.pid="73978"
    2022-12-27 14:23:24.7665 info    daemon/session/dns/legacy/NAT-redirect : started command ["iptables" "-t" "nat" "-A" "TELEPRESENCE_DNS" "-p" "udp" "--dest" "202.106.0.20/32" "--dport" "53" "-j" "DNAT" "--to-destination" "127.0.0.1:35635"] : dexec.pid="73979"
    2022-12-27 14:23:24.7666 info    daemon/session/dns/legacy/NAT-redirect :  : dexec.pid="73979" dexec.stream="stdin" dexec.err="EOF"
    2022-12-27 14:23:24.7723 info    daemon/session/dns/legacy/NAT-redirect : finished successfully: exit status 0 : dexec.pid="73979"
    2022-12-27 14:23:24.7735 info    daemon/session/dns/legacy/NAT-redirect : started command ["iptables" "-t" "nat" "-I" "OUTPUT" "1" "-j" "TELEPRESENCE_DNS"] : dexec.pid="73980"
    2022-12-27 14:23:24.7737 info    daemon/session/dns/legacy/NAT-redirect :  : dexec.pid="73980" dexec.stream="stdin" dexec.err="EOF"
    2022-12-27 14:23:24.7781 info    daemon/session/dns/legacy/NAT-redirect : finished successfully: exit status 0 : dexec.pid="73980"
    
    
  • Panic if --docker-run --name uses space not '=' to separate option name from value

    Panic if --docker-run --name uses space not '=' to separate option name from value

    Describe the bug

    Just migrating from legacy telepresence: dropping in telepresence 2 with our existing legacy telepresence invocation panicked.

    The stack trace led me to the startInDocker function in the https://github.com/telepresenceio/telepresence/blob/release/v2/pkg/client/cli/intercept/state.go file.

    Specifically the attempt to extract the value of the '--name' option:

    name, hasName := getArg("--name")

    fails and panics in the getArg function:

    if i+1 < len(args) {
    					return parts[i+1], true
    

    It panics because the parts array is assumed to have enough elements, which is not the case if the --name option is of the form

    --name the-name

    rather than:

    --name=the-name

    as the string is only split on '=':

       parts := strings.Split(arg, "=")
    

    To Reproduce

    Run telepresence v2 using legacy options and, in the --docker-run arguments, specify --name the-name rather than --name=the-name:

    telepresence --swap-deployment deployment-name --namespace blah --docker-run --name the-name ...
    

    Expected behavior Telepresence should not panic and should run the docker container as specified.
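
    Until a fix is released, a workaround is to use the = form, which the argument parser already handles:

    telepresence --swap-deployment deployment-name --namespace blah --docker-run --name=the-name ...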

    Versions (please complete the following information):

    • all v2.9.5
    • Linux Alma 8
    • Rancher k3s