
Kubedock

Kubedock is a minimal implementation of the Docker API that orchestrates containers on a Kubernetes cluster rather than running them locally. The main driver for this project is to run tests that require docker containers inside a container, without the need for docker-in-docker within resource-heavy containers. Containers orchestrated by kubedock are considered short-lived and ephemeral and are not intended to run production services. An example use case is running testcontainers-java enabled unit tests in a Tekton pipeline; in this use case, running kubedock as a sidecar helps orchestrate the containers inside the Kubernetes cluster instead of within the task container itself.

Quick start

Running this locally with a testcontainers-enabled unit test requires a running kubedock (kubedock server). After that, start the unit tests in another terminal with the environment variables below set, for example:

export TESTCONTAINERS_RYUK_DISABLED=true
export DOCKER_HOST=tcp://127.0.0.1:8999
mvn test

The default configuration for kubedock is to orchestrate in the namespace that has been set in the current context. This can be overruled with the -n argument (or via the NAMESPACE environment variable). The service requires permissions to create Deployments, Services and ConfigMaps in that namespace.
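
As a rough sketch, these permissions could be granted with kubectl as shown below. The service account, role and binding names and the namespace are placeholders, and the project's README documents the exact minimum RBAC rules, which may include more verbs or resources:

kubectl create serviceaccount kubedock
kubectl create role kubedock --verb=create,get,list,delete --resource=deployments,services,configmaps
kubectl create rolebinding kubedock --role=kubedock --serviceaccount=<namespace>:kubedock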

To see a complete list of available options and additional examples: kubedock --help.

Implementation

When kubedock is started with kubedock server, it starts an API server on port :8999, which can be used as a drop-in replacement for the default docker API server. Additionally, kubedock can also listen on a unix socket (docker.sock).
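
As a quick sanity check that the API server is up, it can be queried directly; /version is one of the endpoints kubedock serves (assuming the default port and a locally reachable server):

curl http://127.0.0.1:8999/version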

Containers

Container API calls are translated into Kubernetes Deployment resources. When a container is started, port-forwards are created for the ports that should be exposed (only tcp is supported). Starting a container is a blocking call that waits until the Deployment results in a running Pod. By default it waits for a maximum of 1 minute, but this is configurable with the --timeout argument. The logs API calls always return the complete history of logs and don't differentiate between stdout and stderr; all log output is sent as stdout. Executing commands in the containers is supported.
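
For example, on a slow or busy cluster the ready timeout can be raised when starting the server; a sketch, assuming the flag accepts the same duration format kubedock prints at startup (e.g. 1m0s):

kubedock server --timeout 3m0s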

Volumes

Volumes are implemented by copying the source content to the container by means of an init container that is started before the actual container. By default the kubedock image with the same version as the running kubedock is used as the init container, but this can be any image that has tar available and can be configured with the --initimage argument.
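
For example, to use a different init image (the image reference below is a placeholder; any image that provides tar should work):

kubedock server --initimage registry.example.com/tools/tar-image:latest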

Volumes are one-way copies and ephemeral. This means that any data written into the volume is not available locally. It also means that mounting devices or sockets is not supported (e.g. mounting a docker socket).

Copying data from a running container back to the client is not supported either. Also be aware that copying data to a container will implicitly start the container. This differs from a real docker API, where a container can be in an unstarted state. To work around this, use a volume instead.

Networking

Kubedock flattens all networking, which basically means that everything runs in the same namespace. This should be sufficient for most use cases. Network aliases are supported: when a network alias is present, kubedock creates a service exposing all ports that have been exposed by the container. If no ports are configured, kubedock is able to fetch the ports that are exposed in the container image; to do this, kubedock should be started with the --inspector argument.
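
For example, to let kubedock look up the exposed ports of an image when the client does not configure any:

kubedock server --inspector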

Images

Kubedock implements the images API by tracking which images are requested; it is not able to actually build images. If kubedock is started with --inspector, it will fetch configuration information about the image by calling external container registries. This configuration includes the ports that are exposed by the container image itself, which improves network alias support. The registries should be configured by the client (for example by doing a skopeo login).
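
For example, credentials for a private registry can be made available before the tests run (the registry host below is a placeholder):

skopeo login registry.example.com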

Namespace locking

If multiple kubedock instances are using the same namespace, collisions in network aliases are possible. Since networks are flattened (see Networking), every network alias results in a Service with the name of the given network alias. To ensure tests don't fail because of these name collisions, kubedock can lock the namespace while it's running. When this is enabled with the --lock argument, kubedock creates a ConfigMap called kubedock-lock in the namespace, in which it tracks the current ownership.
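
For example, when several CI jobs share a namespace, each can start kubedock with locking enabled:

kubedock server --lock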

Resources cleanup

Kubedock dynamically creates deployments and services in the configured namespace. If kubedock is requested to delete a container, it removes the deployment and related services. Before exiting, kubedock also deletes all the resources (Services and Deployments) it created during its run (identified with the kubedock.id label).

Automatic reaping

If a test fails and didn't clean up its started containers, these resources will remain in the namespace. To prevent unused deployments and services from lingering around, kubedock automatically deletes deployments and services that are older than 15 minutes (the default) if they are owned by the current process. If a deployment or service is not owned by the running process, it is deleted after 30 minutes, provided it has the label kubedock=true.

Forced cleaning

The reaping of resources can also be enforced at startup. When kubedock is started with the --prune-start argument, it deletes all resources that have the kubedock=true label before starting the API server. This includes resources created by other instances.
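
For example, a CI job that wants to start from a clean namespace can run:

kubedock server --prune-start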

Comments
  • Port from container are not exposed by kubedock

    Hello there,

    When I'm running e.g. mongodb as a testcontainers image, port 27017 is not exposed by kubedock; kubectl says:

        Container ID:  containerd://f08607808925b030a57c604f02904ce8f74c02fd9fdf43fb317281c21f6f06e0
        Image:         mongo:4.4.10
        Image ID:      docker.io/library/mongo@sha256:2821997cba3c26465b59cc2e863b940d21a58732434462100af10659fc0d164f
        Port:          27017/TCP
        Host Port:     0/TCP
        Args:
          --replSet
          docker-rs
    

    Testcontainers test suite reports log:

    org.springframework.dao.DataAccessResourceFailureException: Timed out after 30000 ms while waiting for a server that matches WritableServerSelector. Client view of cluster state is {type=UNKNOWN, servers=[{address=docker:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused}}]; nested exception is com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting for a server that matches WritableServerSelector. Client view of cluster state is {type=UNKNOWN, servers=[{address=docker:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused}}]
    

    docker is the container alias, port 2475 is exposed and docker API is available.

    Ports of target containers are not accessible from the container which uses kubedock. Does anyone know the reason for this?

    Cheers, W

  • Example for Mounting Files

    Hi,

    my colleagues and I are having trouble using TestContainers to mount files into our container.

    Could you provide an example that demonstrates how kubedock can be used to successfully copy (or even mount) files to a container?

    We tried using:

    .withCopyFileToContainer(MountableFile.forClasspathResource("/ignite.xml"), "/conf/ignite.xml")
    new GenericContainer(...)
            .withClasspathResourceMapping("redis.conf",
                                          "/etc/redis.conf",
                                          BindMode.READ_ONLY)
    .withFileSystemBind("./src/test/local/data-grid/conf", "/conf")
    

    but Ignite gives us a FileNotFoundException for /conf/ignite.xml (The config is needed for startup).

    This is using kubedock-0.4.0 with Kubernetes 1.21.1

    P.S. Thanks for creating kubedock! It's a great-looking solution for getting TestContainers to work nicely with Kubernetes.

  • Consider setting custom labels on managed resources

    Custom labels are translated to annotations on the managed resource (e.g. deployments, services). I wonder why not also set labels on the managed resource?

    Some platforms distinguish between annotations and labels on resources. I think it would make sense to set both on kubedock-managed resources.

  • --reverse-proxy not working as expected when running kubedock as a standalone service

    I have kubedock running as a standalone service in namespace A.

    I am running a test that uses kubedock to spin up a MySQL pod in namespace B.

    I can see that the MySQL pod has started up. When I exec into a pod in namespace A I can successfully nc MySQL using the pod IP (on port 3306) and cluster IP (on the randomly assigned port).

    However I can't connect to the kubedock pod using its pod IP and the randomly assigned MySQL service port.

    # nc -vvv 10.52.156.57:34615
    nc: 10.52.156.57:34615 (10.52.156.57:34615): Connection refused
    

    I am running kubedock with the following arguments:

          args:
            - "server"
            - "--image-pull-secrets=xxxx"
            - "--namespace=B"
            - "--reverse-proxy"
    

    Would you expect kubedock to work when run like this? Is there anything obvious I'm doing wrong?

    From the logs, it looks like the kubedock reverse proxy has started up, e.g.

    I0722 10:27:44.843072       1 deploy.go:190] reverse proxy for 34615 to 3306
    I0722 10:27:44.843079       1 tcpproxy.go:33] start reverse-proxy localhost:34615->172.20.86.237:3306
    I0722 10:27:44.852264       1 copy.go:36] copy 4096 bytes to 83710e5d2443:/
    
  • Waiting for running container times out

    The following simple Python script fails when running against kubedock, but works against docker:

        import docker
    
        client = docker.from_env(timeout=_DOCKER_CLIENT_TIMEOUT)
    
        container = client.containers.run(
            "busybox",
            entrypoint="echo",
            command="hey",
            detach=True,
            stdout=True,
            stderr=True,
            tty=False,
            labels={
                "com.joyrex2001.kubedock.deploy-as-job": "true"
            }
        )
        container.wait(timeout=_DOCKER_CLIENT_TIMEOUT)
    
        print(container.logs(stdout=True, stderr=True, tail=100))
    

    I can see the job starting and running successfully, however container.wait(timeout=_DOCKER_CLIENT_TIMEOUT) times out even though the pod has finished.

  • Testcontainers waiting for container output to contain expected content is not reliable

    Hello,

    First of all thank you very much for the awesome project!

    We've tried using kubedock for our testcontainers tests, but have hit an issue using the below pattern from the testcontainers docs:

    WaitingConsumer consumer = new WaitingConsumer();
    
    container.followOutput(consumer, STDOUT);
    
    consumer.waitUntil(frame -> 
        frame.getUtf8String().contains("STARTED"), 30, TimeUnit.SECONDS);
    

    About 1 out of 5 times, it will time out even though the logs do contain the expected string. Calling container.getLogs() just before the wait confirms that.

    Is this a known limitation? I am happy to help debug this, but I'm not sure where to start.

  • Use timeout config for the init container timeout

    In some environments (like our overloaded EKS cluster) it might take more than the hardcoded 30-second timeout for the setup init container to start.

    ~~This change makes it configurable with a default of 30s, which should match the current behaviour if the parameter is not overwritten.~~

    Simply use the timeout configuration parameter for the init timeout instead of the hardcoded 30s.

  • Reverse proxy with random ports

    Hi @joyrex2001, really enjoying kubedock so far!

    We are trying to move away from using --port-forward, replacing it with --reverse-proxy. Unfortunately, we have a bunch of TestContainers tests which need to communicate with the container via random ports. We're seemingly hitting a wall here with --reverse-proxy: the TestContainers tests end up failing with timeouts, whereas it works out of the box with --port-forward.

    Do you have any suggestions for this use case? It might simply be that I do not fully understand how --reverse-proxy is supposed to work, as there isn't really a lot of documentation on this flag, so feel free to correct me if it isn't designed for this. Alternatively, what makes --port-forward unreliable, and is it addressable?

    We would also like to host kubedock on our cluster while running the tests remotely on our CI platform; however, that requires an extra layer of proxying between kubedock and our CI with something like kubectl port-forward, which makes this problem even worse. Have you thought about this scenario as well?

  • Issue parsing environment variables where the value contains an `=` (equals) character

    Hi there,

    I have come across an issue while trying to set an environment variable that contains an equals character, e.g.

    Error from kubedock log:

    E1004 12:32:00.263399       1 container.go:74] could not parse env SOME_BASE_64_ENCODED_ENV_VARIABLE=MIIJKAIB...JsXVU2syw3EZ7Y=
    

    It seems that in container.go kubedock determines whether or not an env variable is valid by the presence of exactly one equals character:

    	for _, e := range co.Env {
    		f := strings.Split(e, "=")
    		if len(f) != 2 {
    			klog.Errorf("could not parse env %s", e)
    			continue
    		}
    		env = append(env, corev1.EnvVar{Name: f[0], Value: f[1]})
    	}
    

    I wonder if splitting only on the first equals character we encounter would work?

  • Allow specifying the User a container should run as

    • Added a CLI flag that sets a default RunAsUser on managed pods, which can be overridden by setting the User value when creating a container via the Docker API.
    • Implemented as a PodSecurityContext on the generated pod. This is useful for running Kubedock against K8S clusters that require pods to run as non-root
  • kubedock high CPU usage if pods stuck in CrashLoopBackOff

    Hi,

    Today I found this amazing project to close the gap between testcontainers and GitLab CI, which also runs on Kubernetes. Thanks for this awesome work!

    Before starting to integrate kubedock, I tested it locally. While the initial tests are fine, running https://github.com/rieckpil/blog-tutorials/tree/master/spring-boot-integration-tests-testcontainers results in high CPU usage.

    Logs:

    jkr@joe-nb ~ % ~/Downloads/kubedock server --port-forward
    I1027 19:35:11.748402   84001 main.go:26] kubedock 0.7.0 (20211008-105904)
    I1027 19:35:11.749336   84001 main.go:95] kubernetes config: namespace=vrp-testcontainers-kubernetes, initimage=joyrex2001/kubedock:0.7.0, ready timeout=1m0s
    I1027 19:35:11.749668   84001 main.go:117] reaper started with max container age 1h0m0s
    I1027 19:35:11.749770   84001 main.go:68] port-forwarding services to 127.0.0.1
    I1027 19:35:11.749885   84001 main.go:100] default image pull policy: ifnotpresent
    I1027 19:35:11.749926   84001 main.go:102] using namespace: vrp-testcontainers-kubernetes
    I1027 19:35:11.750065   84001 main.go:35] api server started listening on :2475
    [GIN] 2021/10/27 - 19:35:20 | 200 |     123.807µs |       127.0.0.1 | GET      "/info"
    [GIN] 2021/10/27 - 19:35:20 | 200 |      29.519µs |       127.0.0.1 | GET      "/info"
    [GIN] 2021/10/27 - 19:35:20 | 200 |      28.059µs |       127.0.0.1 | GET      "/version"
    [GIN] 2021/10/27 - 19:35:20 | 200 |      75.252µs |       127.0.0.1 | GET      "/images/json"
    [GIN] 2021/10/27 - 19:35:20 | 200 |        78.5µs |       127.0.0.1 | GET      "/images/jboss/keycloak:11.0.0/json"
    [GIN] 2021/10/27 - 19:35:20 | 201 |     394.528µs |       127.0.0.1 | POST     "/containers/create"
    [GIN] 2021/10/27 - 19:35:35 | 204 | 14.567361081s |       127.0.0.1 | POST     "/containers/815f8424ab54d5c94987109c3049e005dd620b1c2904f2f8aea3580ee2999369/start"
    I1027 19:35:35.422243   84001 portforward.go:42] start port-forward 34468->8080
    [GIN] 2021/10/27 - 19:35:35 | 200 |     120.223µs |       127.0.0.1 | GET      "/containers/815f8424ab54d5c94987109c3049e005dd620b1c2904f2f8aea3580ee2999369/json"
    [GIN] 2021/10/27 - 19:35:35 | 200 |     124.327µs |       127.0.0.1 | GET      "/containers/815f8424ab54d5c94987109c3049e005dd620b1c2904f2f8aea3580ee2999369/json"
    E1027 19:35:36.603462   84001 portforward.go:400] an error occurred forwarding 34468 -> 8080: error forwarding port 8080 to pod 97b9789ffbcc98354c10c90815c06897e999e1c69005fcfbec26c9e870795bb1, uid : exit status 1: 2021/10/27 19:35:36 socat[60751] E connect(5, AF=2 127.0.0.1:8080, 16): Connection refused
    E1027 19:35:36.676557   84001 portforward.go:400] an error occurred forwarding 34468 -> 8080: error forwarding port 8080 to pod 97b9789ffbcc98354c10c90815c06897e999e1c69005fcfbec26c9e870795bb1, uid : exit status 1: 2021/10/27 19:35:36 socat[60758] E connect(5, AF=2 127.0.0.1:8080, 16): Connection refused
    E1027 19:35:37.762316   84001 portforward.go:400] an error occurred forwarding 34468 -> 8080: error forwarding port 8080 to pod 97b9789ffbcc98354c10c90815c06897e999e1c69005fcfbec26c9e870795bb1, uid : exit status 1: 2021/10/27 19:35:37 socat[60924] E connect(5, AF=2 127.0.0.1:8080, 16): Connection refused
    E1027 19:35:37.832716   84001 portforward.go:400] an error occurred forwarding 34468 -> 8080: error forwarding port 8080 to pod 97b9789ffbcc98354c10c90815c06897e999e1c69005fcfbec26c9e870795bb1, uid : exit status 1: 2021/10/27 19:35:37 socat[60931] E connect(5, AF=2 127.0.0.1:8080, 16): Connection refused
    E1027 19:35:38.912798   84001 portforward.go:400] an error occurred forwarding 34468 -> 8080: error forwarding port 8080 to pod 97b9789ffbcc98354c10c90815c06897e999e1c69005fcfbec26c9e870795bb1, uid : exit status 1: 2021/10/27 19:35:38 socat[61039] E connect(5, AF=2 127.0.0.1:8080, 16): Connection refused
    E1027 19:35:38.987719   84001 portforward.go:400] an error occurred forwarding 34468 -> 8080: error forwarding port 8080 to pod 97b9789ffbcc98354c10c90815c06897e999e1c69005fcfbec26c9e870795bb1, uid : exit status 1: 2021/10/27 19:35:39 socat[61046] E connect(5, AF=2 127.0.0.1:8080, 16): Connection refused
    E1027 19:35:40.076636   84001 portforward.go:400] an error occurred forwarding 34468 -> 8080: error forwarding port 8080 to pod 97b9789ffbcc98354c10c90815c06897e999e1c69005fcfbec26c9e870795bb1, uid : exit status 1: 2021/10/27 19:35:40 socat[61095] E connect(5, AF=2 127.0.0.1:8080, 16): Connection refused
    E1027 19:35:40.158013   84001 portforward.go:400] an error occurred forwarding 34468 -> 8080: error forwarding port 8080 to pod 97b9789ffbcc98354c10c90815c06897e999e1c69005fcfbec26c9e870795bb1, uid : exit status 1: 2021/10/27 19:35:40 socat[61107] E connect(5, AF=2 127.0.0.1:8080, 16): Connection refused
    [GIN] 2021/10/27 - 19:35:53 | 201 |      84.187µs |       127.0.0.1 | POST     "/containers/815f8424ab54d5c94987109c3049e005dd620b1c2904f2f8aea3580ee2999369/exec"
    [GIN] 2021/10/27 - 19:35:54 | 200 |  317.038459ms |       127.0.0.1 | POST     "/exec/808a7da1789efc9f5e8a0b8bdf5b8ca44843e0dddcaeed5ab7e0a331870c2029/start"
    [GIN] 2021/10/27 - 19:35:54 | 200 |      68.388µs |       127.0.0.1 | GET      "/exec/808a7da1789efc9f5e8a0b8bdf5b8ca44843e0dddcaeed5ab7e0a331870c2029/json"
    [GIN] 2021/10/27 - 19:35:54 | 200 |      83.432µs |       127.0.0.1 | GET      "/containers/815f8424ab54d5c94987109c3049e005dd620b1c2904f2f8aea3580ee2999369/json"
    I1027 19:35:54.033603   84001 containers.go:217] ignoring signal
    [GIN] 2021/10/27 - 19:35:54 | 204 |      42.978µs |       127.0.0.1 | POST     "/containers/815f8424ab54d5c94987109c3049e005dd620b1c2904f2f8aea3580ee2999369/kill"
    [GIN] 2021/10/27 - 19:35:54 | 200 |      90.172µs |       127.0.0.1 | GET      "/containers/815f8424ab54d5c94987109c3049e005dd620b1c2904f2f8aea3580ee2999369/json"
    [GIN] 2021/10/27 - 19:35:54 | 204 |  253.744279ms |       127.0.0.1 | DELETE   "/containers/815f8424ab54d5c94987109c3049e005dd620b1c2904f2f8aea3580ee2999369?v=true&force=true"
    [GIN] 2021/10/27 - 19:35:54 | 200 |     105.337µs |       127.0.0.1 | GET      "/images/postgres:12/json"
    [GIN] 2021/10/27 - 19:35:54 | 201 |     175.279µs |       127.0.0.1 | POST     "/containers/create"
    I1027 19:36:19.621781   84001 portforward.go:42] start port-forward 47785->5432
    [GIN] 2021/10/27 - 19:36:19 | 204 | 25.308206574s |       127.0.0.1 | POST     "/containers/a1f2506d65208460cd20342ff35c4db80eea8752700e1055f123a18148856935/start"
    [GIN] 2021/10/27 - 19:36:19 | 200 |     129.701µs |       127.0.0.1 | GET      "/containers/a1f2506d65208460cd20342ff35c4db80eea8752700e1055f123a18148856935/json"
    [GIN] 2021/10/27 - 19:36:19 | 200 |      81.498µs |       127.0.0.1 | GET      "/containers/a1f2506d65208460cd20342ff35c4db80eea8752700e1055f123a18148856935/json"
    [GIN] 2021/10/27 - 19:37:19 | 200 |     317.497µs |       127.0.0.1 | GET      "/containers/a1f2506d65208460cd20342ff35c4db80eea8752700e1055f123a18148856935/json"
    [GIN] 2021/10/27 - 19:37:19 | 200 |   74.017123ms |       127.0.0.1 | GET      "/containers/a1f2506d65208460cd20342ff35c4db80eea8752700e1055f123a18148856935/logs?stdout=true&stderr=true&since=0"
    [GIN] 2021/10/27 - 19:37:19 | 200 |    58.57439ms |       127.0.0.1 | GET      "/containers/a1f2506d65208460cd20342ff35c4db80eea8752700e1055f123a18148856935/logs?stdout=true&stderr=true&since=0"
    [GIN] 2021/10/27 - 19:37:19 | 201 |     190.866µs |       127.0.0.1 | POST     "/containers/create"
    [GIN] 2021/10/27 - 19:37:23 | 204 |  3.277132909s |       127.0.0.1 | POST     "/containers/2d2f2db70a487e9b10f54bef0580359f9d69a39b99e0057a1ea4492a7e78a2e7/start"
    I1027 19:37:23.216054   84001 portforward.go:42] start port-forward 62630->5432
    [GIN] 2021/10/27 - 19:37:23 | 200 |      95.659µs |       127.0.0.1 | GET      "/containers/2d2f2db70a487e9b10f54bef0580359f9d69a39b99e0057a1ea4492a7e78a2e7/json"
    [GIN] 2021/10/27 - 19:37:23 | 200 |      86.154µs |       127.0.0.1 | GET      "/containers/2d2f2db70a487e9b10f54bef0580359f9d69a39b99e0057a1ea4492a7e78a2e7/json"
    [GIN] 2021/10/27 - 19:38:23 | 200 |     116.501µs |       127.0.0.1 | GET      "/containers/2d2f2db70a487e9b10f54bef0580359f9d69a39b99e0057a1ea4492a7e78a2e7/json"
    [GIN] 2021/10/27 - 19:38:23 | 200 |   69.953111ms |       127.0.0.1 | GET      "/containers/2d2f2db70a487e9b10f54bef0580359f9d69a39b99e0057a1ea4492a7e78a2e7/logs?stdout=true&stderr=true&since=0"
    [GIN] 2021/10/27 - 19:38:23 | 200 |   61.136457ms |       127.0.0.1 | GET      "/containers/2d2f2db70a487e9b10f54bef0580359f9d69a39b99e0057a1ea4492a7e78a2e7/logs?stdout=true&stderr=true&since=0"
    [GIN] 2021/10/27 - 19:38:23 | 201 |     285.397µs |       127.0.0.1 | POST     "/containers/create"
    [GIN] 2021/10/27 - 19:38:28 | 204 |  4.293655856s |       127.0.0.1 | POST     "/containers/169389ceb3c51085fc7b0c14a4f99972369a4fd039b98b8afe41da50b75830d6/start"
    I1027 19:38:28.117308   84001 portforward.go:42] start port-forward 62426->5432
    [GIN] 2021/10/27 - 19:38:28 | 200 |     203.466µs |       127.0.0.1 | GET      "/containers/169389ceb3c51085fc7b0c14a4f99972369a4fd039b98b8afe41da50b75830d6/json"
    [GIN] 2021/10/27 - 19:38:28 | 200 |     164.063µs |       127.0.0.1 | GET      "/containers/169389ceb3c51085fc7b0c14a4f99972369a4fd039b98b8afe41da50b75830d6/json"
    [GIN] 2021/10/27 - 19:39:28 | 200 |     146.367µs |       127.0.0.1 | GET      "/containers/169389ceb3c51085fc7b0c14a4f99972369a4fd039b98b8afe41da50b75830d6/json"
    [GIN] 2021/10/27 - 19:39:28 | 200 |   66.033642ms |       127.0.0.1 | GET      "/containers/169389ceb3c51085fc7b0c14a4f99972369a4fd039b98b8afe41da50b75830d6/logs?stdout=true&stderr=true&since=0"
    [GIN] 2021/10/27 - 19:39:28 | 200 |   62.444152ms |       127.0.0.1 | GET      "/containers/169389ceb3c51085fc7b0c14a4f99972369a4fd039b98b8afe41da50b75830d6/logs?stdout=true&stderr=true&since=0"
    

    Kubedock: 0.7.0, OS: macOS, Kubernetes: OpenShift 3.11

    jkr@joe-nb ~ % kubectl get all -l kubedock=true
    NAME                                READY   STATUS             RESTARTS   AGE
    pod/169389ceb3c5-85446cf64b-66qpb   0/1     CrashLoopBackOff   5          3m
    pod/2d2f2db70a48-65bdb6dbb7-67rj8   0/1     CrashLoopBackOff   5          4m
    pod/a1f2506d6520-c5c57c8b6-x5b65    0/1     CrashLoopBackOff   5          5m
    
    NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)              AGE
    service/kd-169389ceb3c5   ClusterIP   172.30.173.82   <none>        5432/TCP,62426/TCP   3m
    service/kd-2d2f2db70a48   ClusterIP   172.30.227.70   <none>        5432/TCP,62630/TCP   4m
    service/kd-a1f2506d6520   ClusterIP   172.30.71.160   <none>        5432/TCP,47785/TCP   5m
    
    NAME                           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/169389ceb3c5   1         1         1            0           3m
    deployment.apps/2d2f2db70a48   1         1         1            0           4m
    deployment.apps/a1f2506d6520   1         1         1            0           5m
    
    NAME                                      DESIRED   CURRENT   READY   AGE
    replicaset.apps/169389ceb3c5-85446cf64b   1         1         0       3m
    replicaset.apps/2d2f2db70a48-65bdb6dbb7   1         1         0       4m
    replicaset.apps/a1f2506d6520-c5c57c8b6    1         1         0       5m
    

    The reason for this could be that the pods are in CrashLoopBackOff. The reason the pods crash is known (file permission issues), but in such cases kubedock should not generate such a high load. The high load persists even after mvn clean verify is finished. Also, pressing CTRL+C takes some time to terminate the process.

    I'm able to reproduce this behavior. If you teach me, I can provide traces or profiling files. But before doing this, please ensure that such profiling files do not contain sensitive information like the kube credentials.

  • Kubedock does not remove pods after test finishes

    I am using java / junit / testcontainers with Kubedock to spin up a bunch of containers for a test.

    After the test has finished, the containers are not removed immediately, but they are eventually cleaned up after an hour or so.

    Is this expected behaviour?

    To get around this I have added an afterAll hook that explicitly removes the pods, which works fine, but I wonder whether something is misconfigured as from the docs it sounds like pod removal should happen automatically after the test has finished.

  • Port forward automatic retry

    Hi, I'm using your tool and I really like it. Currently I have a problem: some containers seem to open, close and reopen ports on startup, and this causes the port-forward feature to fail. Is it possible to change the code so that, when an aborted port-forward is detected, the tool automatically retries it? This would make the feature more robust. Thanks, Markus Ritter

    P.S. If there is anything I can do to help, please let me know.

  • ConfigMaps are fetched even if no option for them is given

    Hi @joyrex2001 ,

    According to the minimum RBAC provided in the README.md, it seems no calls for ConfigMaps should be made by default.

    But when running the image I get this kind of error:

    E0808 17:00:59.867275 1 main.go:83] error cleaning k8s containers: configmaps is forbidden: User "system:serviceaccount:XXXXX:YYYYYY" cannot list resource "configmaps" in API group "" in the namespace "XXXXX"
    

    Should I add this rule too? Or should kubedock change this?

    Thank you,

  • `OneShotStartupCheckStrategy` always returns `StartupStatus.NOT_YET_KNOWN`

    OneShotStartupCheckStrategy in testcontainers checks whether finishedAt is not equal to DOCKER_TIMESTAMP_ZERO ("0001-01-01T00:00:00Z"). Kubedock always returns DOCKER_TIMESTAMP_ZERO for finished containers.

  • Kubedock does not work with recent testcontainers-java kafka (1.16.+)

    Hi,

    First of all: thanks for making and maintaining this repo. Really useful! I have played around with this repo, especially with Kafka testcontainers on OpenShift with Tekton. I found out that your example works nicely on OpenShift but my project failed.

    Mainly because your examples use version 1.15.3 while my project was using 1.16.3. There have been some changes around the dynamic updating of the Kafka config.

    With version 1.16.3 the args of the deployed containers look like:

    args:
            - sh
            - '-c'
            - |
              #!/bin/bash
              echo 'clientPort=2181' > zookeeper.properties
              echo 'dataDir=/var/lib/zookeeper/data' >> zookeeper.properties
              echo 'dataLogDir=/var/lib/zookeeper/log' >> zookeeper.properties
              zookeeper-server-start zookeeper.properties &
              echo '' > /etc/confluent/docker/ensure 
              /etc/confluent/docker/run 
    

    while 1.15.3 creates:

    args:
            - sh
            - '-c'
            - >-
              while [ ! -f /testcontainers_start.sh ]; do sleep 0.1; done;
              /testcontainers_start.sh
    
  • 404 when starting a container

    Running a container built from the following Dockerfile:

                FROM alpine
                RUN echo \$RANDOM >> /tmp/test.txt
                CMD cat /tmp/test.txt && echo "DONE" && sleep 28800
    

    I get a 404 when calling container.start. When I look at the pods inside k8s, everything looks good, so I think this is a bug inside kubedock.

    In the logs I get the following, but I'm not sure if it is a red herring:

    [GIN-debug] [WARNING] Headers were already written. Wanted to override status code 404 with 204
    

    I am going to try and figure out a small runnable example to demonstrate this.
