runq

run regular Docker images in KVM/Qemu

runq is a hypervisor-based Docker runtime based on runc that runs regular Docker images as lightweight KVM/Qemu virtual machines. The focus is on solving real problems, not on the number of features.

Key differences to other hypervisor-based runtimes:

  • minimalistic design, small code base
  • no modification to existing Docker tools (dockerd, containerd, runc...)
  • coexistence of runq containers and regular runc containers
  • no extra state outside of Docker (no libvirt, no changes to /var/run/...)
  • small init program, no systemd
  • no custom guest kernel or custom qemu needed
  • runs on x86_64 and s390x (>= z13)

runc vs. runq

       runc container                   runq container
       +-------------------------+      +-------------------------+
       |                         |      |                     VM  |
       |                         |      | +---------------------+ |
       |                         |      | |                     | |
       |                         |      | |                     | |
       |                         |      | |                     | |
       |       application       |      | |     application     | |
       |                         |      | |                     | |
       |                         |      | |                     | |
       |                         |      | +---------------------+ |
       |                         |      | |     guest kernel    | |
       |                         |      | +---------------------+ |
       |                         |      |           qemu          |
       +-------------------------+      +-------------------------+
 ----------------------------------------------------------------------
                                host kernel

Installation

runq requires a host kernel >= 4.8 with KVM and VHOST_VSOCK support enabled. The easiest way to build runq and to pull all dependencies together is to use Docker. For fast development cycles a regular build environment might be more efficient; for this, refer to the section Developing runq.

# get the runq and runc source code
git clone --recurse-submodules https://github.com/gotoz/runq.git

# compile and create a release tar file in a Docker container
cd runq
make release

# install runq to `/var/lib/runq`
make release-install

Register runq as a Docker runtime with appropriate defaults. See daemon.json for more options.

/etc/docker/daemon.json
{
  "runtimes": {
    "runq": {
      "path": "/var/lib/runq/runq",
      "runtimeArgs": [
        "--cpu", "1",
        "--mem", "256",
        "--dns", "8.8.8.8,8.8.4.4",
        "--tmpfs", "/tmp"
      ]
    }
  }
}

reload Docker config

systemctl reload docker.service
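
Whether the new runtime has been registered can be verified with docker info, which lists all runtimes known to the daemon:

docker info | grep -i runtimes
# output should include runq, e.g.:  Runtimes: runc runq ...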

TLS certificates

runq-exec creates a secure connection between host and VM guests. Users of runq-exec are authenticated via a client certificate. Access to the client certificate must be limited to Docker users only.

The CA and server certificates must be installed in /var/lib/runq/qemu/certs. Access must be limited to the root user only.

Examples of server and client TLS certificates can be created with the script:

/var/lib/runq/qemu/mkcerts.sh
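
A minimal sketch of creating the example certificates and restricting access afterwards. The exact file names produced by mkcerts.sh may differ; the client certificate paths below are the runq-exec defaults shown later in this README, and the docker group is used as an example for "Docker users":

sudo /var/lib/runq/qemu/mkcerts.sh
# CA and server certificates: root only
sudo chmod -R go-rwx /var/lib/runq/qemu/certs
# client certificate and key (runq-exec defaults, assumed paths): Docker users only
sudo chown root:docker /var/lib/runq/cert.pem /var/lib/runq/key.pem
sudo chmod 640 /var/lib/runq/cert.pem /var/lib/runq/key.pem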

Note: On x86 and s390x < z14 the host must provide sufficient entropy to the VM guests via virtio-rng. If there is not enough entropy available on the host, booting of guests can fail with a timeout error. The entropy that is currently available can be checked with:

cat /proc/sys/kernel/random/entropy_avail

The number returned should always be greater than 1000. On s390x >= z14 random data is provided by the hardware-driven trng device (kernel module s390-trng).

Kernel module vhost_vsock

The kernel module vhost_vsock must be loaded on the host. This can be achieved by creating a config file for the systemd-modules-load service: /etc/modules-load.d/vhost-vsock.conf:

# Load vhost_vsock for runq
vhost_vsock
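
To load the module immediately without rebooting and to verify that it is present:

sudo modprobe vhost_vsock
lsmod | grep vhost_vsock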

Usage examples

the simplest example

docker run --runtime runq -ti busybox sh

custom VM with 512MiB memory and 2 CPUs

docker run --runtime runq -e RUNQ_MEM=512 -e RUNQ_CPU=2 -ti busybox sh

allow loading of extra kernel modules by adding the SYS_MODULE capability

docker run --runtime runq --cap-add sys_module -ti busybox sh -c "modprobe brd && lsmod"

full example PostgreSQL with custom storage

dd if=/dev/zero of=data.img bs=1M count=200
mkfs.ext4 -F data.img

docker run \
    --runtime runq \
    --name pgserver \
    -e RUNQ_CPU=2 \
    -e RUNQ_MEM=512 \
    -e POSTGRES_PASSWORD=mysecret \
    -v $PWD/data.img:/dev/runq/0001/none/ext4/var/lib/postgresql \
    -d postgres:alpine

sleep 10

docker run \
    --runtime runq \
    --link pgserver:postgres \
    --rm \
    -e PGPASSWORD=mysecret \
    postgres:alpine psql -h postgres -U postgres -c "select 42 as answer;"

#  answer
# --------
#      42
# (1 row)

Container with Systemd

For containers that use Systemd as the Docker entry point, the container exit code must be treated differently to ensure that poweroff and reboot executed inside the container work as expected.

with --restart on-failure:1
poweroff, halt -> SIGINT(2) -> want container restart       -> exit code 0 (forced)
reboot         -> SIGHUP(1) -> don't want container restart -> exit code 1

-e RUNQ_SYSTEMD=1 also prevents runq from mounting cgroups.

See test/examples/Dockerfile.systemd and test/examples/systemd.sh for an example.
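
A minimal sketch of starting a Systemd-based image with the restart policy described above; my-systemd-image is a hypothetical placeholder, e.g. for an image built from test/examples/Dockerfile.systemd:

# my-systemd-image is a placeholder image name
docker run --runtime runq --restart on-failure:1 -e RUNQ_SYSTEMD=1 -d my-systemd-image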

/.runqenv

runq can write the container environment variables to a file named /.runqenv in the root directory of the container. This can be useful for containers that run Systemd as the entry point. The feature can be enabled globally by configuring --runqenv in /etc/docker/daemon.json or for a single container by setting the environment variable RUNQ_RUNQENV to a true value.
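
For example, the feature can be enabled for a single container and the generated file inspected as follows (a minimal sketch):

docker run --runtime runq -e RUNQ_RUNQENV=1 --rm busybox cat /.runqenv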

runq Components

   docker cli
      dockerd engine
         docker-containerd-shim
               runq                                           container
              +--------------------------------------------------------+
              |                                                        |
  docker0     |                                                  VM    |
    `veth <------> veth                 +--------------------------+   |
              |        `<--- macvtap ---|-> eth0                   |   |
              |  proxy  <-----------------> init                   |   |
 runq-exec <-----------tls----------------> `vsockd                |   |
              |                         |+-------------namespace--+|   |
 overlayfs <-----9pfs-------------------||-> /                    ||   |
              |                         ||                        ||   |
 block dev <-----virtio-blk-------------||-> /dev/vdx             ||   |
              |                         ||                        ||   |
              |                         ||                        ||   |
              |                         ||                        ||   |
              |                         ||       application      ||   |
              |                         ||                        ||   |
              |                         |+------------------------+|   |
              |                         |       guest kernel       |   |
              |                         +--------------------------+   |
              |                                     qemu               |
              +--------------------------------------------------------+

 --------------------------------------------------------------------------
                                host kernel
  • cmd/runq

    • new docker runtime
  • cmd/proxy

    • new Docker entry point
    • first process in container (PID 1)
    • configures and starts Qemu (network, disks, ...)
    • forwards signals to VM init
    • receives application exit code
  • cmd/init

    • first process in VM (PID 1)
    • initializes the VM guest (network, disks, ...)
    • starts entry-point in PID and Mount namespace
    • sends signals to target application
    • forwards application exit code back to proxy
  • cmd/runq-exec

    • command line utility similar to docker exec
  • cmd/nsenter

    • enters the namespaces of entry-point for runq-exec
  • qemu

    • creates /var/lib/runq/qemu
    • read-only volume attached to every container
    • contains qemu rootfs (proxy, qemu, kernel and initrd)
  • initrd

    • prepares the initrd to boot the VM
  • pkg

    • helper packages

runq-exec

runq-exec (/var/lib/runq/runq-exec) is a command line utility similar to docker exec. It allows additional commands to be executed from the host in existing runq containers. It uses VirtioVsock for the communication between host and VMs. TLS is used for encryption and client authorization. Support for runq-exec can be disabled by setting the container environment variable RUNQ_NOEXEC to a true value or by --noexec in /etc/docker/daemon.json.

Usage:
  runq-exec [options] <container> command args

Run a command in a running runq container

Options:
  -c, --tlscert string    TLS certificate file (default "/var/lib/runq/cert.pem")
  -k, --tlskey string     TLS private key file (default "/var/lib/runq/key.pem")
  -e, --env stringArray   Set environment variables for command
  -h, --help              Print this help
  -i, --interactive       Keep STDIN open even if not attached
  -t, --tty               Allocate a pseudo-TTY
  -v, --version           Print version

Environment Variable:
  DOCKER_HOST    specifies the Docker daemon socket.

Example:
  runq-exec -ti a6c3b7c bash
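
Support for runq-exec can also be turned off for a single container via the environment variable mentioned above, e.g.:

docker run --runtime runq -e RUNQ_NOEXEC=1 -ti busybox sh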

Qemu and guest Kernel

runq runs Qemu and the Linux kernel from the /var/lib/runq/qemu directory on the host. This directory is populated by make -C qemu. For simplicity Qemu and the Linux kernel are taken from the Ubuntu 18.04 LTS Docker base image. See qemu/x86_64/Dockerfile for details. This makes runq independent of the Linux distribution on the host. Qemu does not need to be installed on the host.

The kernel modules directory (/var/lib/runq/qemu/lib/modules) is bind-mounted into every container to /lib/modules. This allows the loading of extra kernel modules in any container if needed. For this the SYS_MODULE capability is required (--cap-add sys_module).

Networking

runq uses Macvtap devices to connect Qemu VirtIO interfaces to Docker bridges. By default a single Ethernet interface is created. Multiple networks can be used by connecting a container to the additional networks before it is started. See test/integration/net.sh as an example.

runq containers can also be connected to one or more Docker networks of type Macvlan. This allows a direct connection between the VM and the physical host network without a bridge and without NAT. See https://docs.docker.com/network/macvlan/ for details.

For custom networks the Docker daemon implements an embedded DNS server which provides built-in service discovery for any container created with a valid container name. This Docker DNS server (listen address 127.0.0.11:53) is reachable only by runc containers and not by runq containers. A workaround is to run one or more DNS proxy containers in the custom network with runc and use the proxy IP address as the DNS address of the runq containers. See test/examples/dnsproxy.sh for details on how to set up a DNS proxy.

DNS configuration without proxy can be done globally via runtime options specified in '/etc/docker/daemon.json' (see example above) or via environment variables for each container at container start. The environment variables are RUNQ_DNS, RUNQ_DNS_OPT and RUNQ_DNS_SEARCH. Environment variables have priority over global options.

Setting the environment variable RUNQ_DNS_PRESERVE to "1" completely disables generation of /etc/resolv.conf by runq.
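
A sketch of setting DNS for a single container via environment variables; the comma-separated address format follows the --dns example in section Installation, and the search domain example.com is just an illustration:

docker run --runtime runq \
    -e RUNQ_DNS=8.8.8.8,8.8.4.4 \
    -e RUNQ_DNS_SEARCH=example.com \
    --rm busybox cat /etc/resolv.conf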

Storage

Extra storage can be added in the form of Qcow2 images, raw file images or regular block devices. Storage devices will be mounted automatically if a filesystem and a mount point have been specified. Supported filesystems are ext2, ext3, ext4, xfs and btrfs. The cache type must be writeback, writethrough, none or unsafe. Cache type "none" is recommended for filesystems that support O_DIRECT. See man qemu(1) for details about the different cache types.

Syntax:

--volume <image  name>:/dev/runq/<id>/<cache type>[/<filesystem type><mount point>]
--device <device name>:/dev/runq/<id>/<cache type>[/<filesystem type><mount point>]

<id> is used to create symbolic links inside the VM guest that point to the Qemu Virtio device files. The id can be any character string that matches the regex pattern "^[a-zA-Z0-9-_]{1,36}$" but it must be unique within a container.

/dev/disk/by-runq-id/0001 -> ../../vda

Storage examples

Mount the existing Qcow image /data.qcow2 with xfs filesystem to /mnt/data:

docker run -v /data.qcow2:/dev/runq/0001/none/xfs/mnt/data ...

Attach the host device /dev/sdb1 formatted with ext4 to /mnt/data2:

docker run --device /dev/sdb1:/dev/runq/0002/writethrough/ext4/mnt/data2 ...

Attach the host device /dev/sdb2 without mounting:

docker run --device /dev/sdb2:/dev/runq/0003/writethrough ...

Rootdisk

A block device or a raw file with an EXT2 or EXT4 filesystem can be used as the rootdisk of the VM. On first boot of the container the content of the Docker image is copied into the rootdisk. The block device or raw file will then be used as the root filesystem via virtio-blk instead of 9pfs. Be aware that changes to the root filesystem will not be reflected in the source Docker container filesystem (docker cp will no longer work as expected).

# existing block device with empty ext4 filesystem
docker run --runtime runq --device /dev/sdb1:/dev/runq/0001/none/ext4 -e RUNQ_ROOTDISK=0001 -ti alpine sh

# new raw file
fallocate -l 1G disk.raw
mkfs.ext4 disk.raw
docker run --runtime runq --volume $PWD/disk.raw:/dev/runq/0001/none/ext4 -e RUNQ_ROOTDISK=0001 -ti alpine sh

Directories can be excluded from being copied with the RUNQ_ROOTDISK_EXCLUDE environment variable. E.g. -e RUNQ_ROOTDISK_EXCLUDE="/foo,/bar"

See Dockerfile.rootdisk and rootdisk.sh as a further example.

Capabilities

By default runq drops all capabilities except those needed (the same as regular Docker does). The whitelist of the remaining capabilities is provided by the Docker engine.

AUDIT_WRITE CHOWN DAC_OVERRIDE FOWNER FSETID KILL MKNOD NET_BIND_SERVICE NET_RAW SETFCAP SETGID SETPCAP SETUID SYS_CHROOT

See man capabilities for a list of all available capabilities. Additional capabilities can be added to the whitelist at container start:

docker run --cap-add SYS_TIME --cap-add SYS_MODULE ...

Seccomp

runq supports the default Docker seccomp profile as well as custom profiles.

docker run --security-opt seccomp=<profile-file> ...

The default profile is defined by the Docker daemon and gets applied automatically. Note: Only the runq init binary is statically linked against libseccomp. Therefore libseccomp is needed only at compile time.

If the host operating system where runq is being built does not provide static libseccomp libraries, one can also simply build and install libseccomp from source.

Seccomp can be disabled at container start:

docker run --security-opt seccomp=unconfined ...

Note: Some Docker daemons don't support custom Seccomp profiles. Run docker info to verify that Seccomp is supported by your daemon. If it is supported, the output of docker info looks like this:

Security Options:
 seccomp
  Profile: default

AP adapter passthrough (s390x only)

AP devices provide cryptographic functions to all CPUs assigned to a Linux system running in an IBM Z system LPAR. AP devices can be made available to a runq container by passing a VFIO mediated device from the host through Qemu into the runq VM guest. VFIO mediated devices are enabled by the vfio_ap kernel module and allow for partitioning of AP devices and domains. The environment variable RUNQ_APUUID specifies the VFIO mediated device UUID. runq automatically loads the required zcrypt kernel modules inside the VM. E.g.:

docker run --runtime runq -e RUNQ_APUUID=b34543ee-496b-4769-8312-83707033e1de ...

For details on how to setup mediated devices on the host see https://www.kernel.org/doc/html/latest/s390/vfio-ap.html

Limitations

Most docker commands and options work as expected. However, because the target application runs inside a Qemu VM, which itself runs inside a Docker container, and because of runq's minimalistic design, some docker commands and options don't work, e.g.:

  • adding / removing networks and storage dynamically
  • docker exec (see runq-exec)
  • docker swarm
  • privileged mode
  • apparmor, selinux, ambient
  • docker HEALTHCHECK

The following common options of docker run are supported:

--attach                    --name
--cap-add                   --network
--cap-drop                  --publish
--cpus                      --restart
--cpuset-cpus               --rm
--detach                    --runtime
--entrypoint                --sysctl
--env                       --security-opt seccomp=unconfined
--env-file                  --security-opt no-new-privileges
--expose                    --security-opt seccomp=<filter-file>
--group-add                 --tmpfs
--help                      --tty
--hostname                  --ulimit
--init                      --user
--interactive               --volume
--ip                        --volumes-from
--link                      --workdir
--mount

Nested VM

A nested VM is a virtual machine that runs inside a virtual machine. In plain KVM this feature is considered working but not meant for production use. Running KVM guests inside guests of other hypervisors such as VMware might not work as expected or might not work at all. However, to try out runq in a VM guest, the (experimental) runq runtime configuration parameter --nestedvm can be used. It modifies the parameters of the Qemu process.
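
For example, the flag can be added to the runtime arguments in /etc/docker/daemon.json (a sketch based on the configuration shown in section Installation; other options omitted):

{
  "runtimes": {
    "runq": {
      "path": "/var/lib/runq/runq",
      "runtimeArgs": [
        "--nestedvm"
      ]
    }
  }
}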

Developing runq

For fast development cycles runq can be built on the host as follows:

  1. Prerequisites:
  • Docker >= 19.03.x-ce
  • Go >= 1.16
  • /var/lib/runq must be writable by the current user
  • Libseccomp static library. E.g. libseccomp-dev for Ubuntu or libseccomp-static for Fedora
  2. Download runq and runc source code
    git clone --recurse-submodules https://github.com/gotoz/runq.git
    
    
  3. Install Qemu and guest kernel to /var/lib/runq/qemu
    All files are taken from the Ubuntu 18.04 LTS Docker base image. (/var/lib/runq must be writeable by the current user.)
    cd runq
    make -C qemu all install
    
  4. Compile and install runq components to /var/lib/runq
    make install
    
  5. Create TLS certificates
    /var/lib/runq/qemu/mkcerts.sh
    
  6. Adjust file and directory permissions
    sudo chown -R root:root /var/lib/runq
    
  7. Register runq as a Docker runtime with appropriate defaults as shown in section Installation above.

Contributing

See CONTRIBUTING for details.

License

The code is licensed under the Apache License 2.0.
See LICENSE for further details.

Comments
  • Skip generating resolv.conf when DNS_RUNQ_PRESERVE is defined

    Kubernetes does not use the Docker builtin name resolver, and provides an appropriate resolv.conf file depending on cluster and pod configurations. A default DNS setting per container runtime does not work well with Kubernetes.

    This patch suppresses generation of resolv.conf by runq when DNS settings of /etc/docker/daemon.json or RUNQ_DNS are not specified.

  • Can runq be compiled & run on Mac OS X without QEMU?

    It wasn't clear to me from the README. If the answer is no, I'll stop trying to install it on my Mac :) If the answer is yes, I'll submit a bug report. Since it's runc based, I'm guessing the answer is no.

    EDIT: Changed title from Should runq be usable on Mac OS X? to Can runq be compiled & run on Mac OS X without QEMU?

  • labels instead of RUNQ_ envs

    What about using docker labels instead of RUNQ_ envs - this would allow docker images to specify requirements as

    LABEL runq.cpu=2
    LABEL runq.mem=2048
    
  • Running qcow2/raw disk images under runq

    Brief description: Hi, query - is it possible to run normal qcow2 images using runq? If so, please share some example commands to spawn a VM.

    Thanks

  • caching modes in examples

    The storage examples section contains examples referencing 'writeback' and 'writethrough' caching. Since runq is using IO threads and direct mapping with qemu, I believe the best practice is to set the caching mode to 'none'.

  • --tmpfs overwrites existing permissions on mountpoints

    Using --tmpfs /tmp seems to overwrite the permissions of an existing mount directory.

    We're getting this permission set on a container when running: drwxr-xr-x 2 root root 60 Jun 27 19:17 tmp

    The container template has: drwxrwxrwt 2 root root 60 Jun 27 18:41 tmp

  • Can we use a host mount directory?

    Hi, as I see in the architecture of runq, there is a 9pfs component

    Screenshot from 2021-08-30 14-54-16

    I wonder if we can mount a host folder into the VM, like -v /home/user/data:/data

    docker run --runtime runq -e RUNQ_MEM=512 -e RUNQ_CPU=2 -ti -v /home/user/data:/data  busybox sh
    
  • Support for aarch64 (Raspberry Pi4)

    Brief description: Since this runtime essentially uses QEMU to virtualise other environments, it would be nice if there were support for ARM, especially for Raspberry Pi. I don't see why it would not work, since the virtualisation itself works.

    Steps to reproduce the issue

    1. Get a Raspberry Pi 4 with Raspbian
    2. Set up KVM/QEMU:
    sudo apt update
    sudo apt full-upgrade
    sudo echo "arm_64bit=1" >> /boot/config.txt
    sudo apt install libvirt0 binfmt-support qemu-system qemu-user-static
    sudo usermod -aG libvirt-qemu $(whoami)
    sudo virsh net-start default
    sudo virsh net-autostart default
    
    3. Try installing runq:
    git clone --recurse-submodules https://github.com/gotoz/runq.git
    cd runq
    make release
    make release-install
    

    Expected behaviour: make does not fail and installs the runtime

    Actual behaviour

    pi@raspberrypi:~/runq $ make release && make release-install
    make -C qemu image
    make[1]: Entering directory '/home/pi/runq/qemu'
    cd aarch64 && \
    docker build -t runq-build-1804 .
    /bin/sh: 1: cd: can't cd to aarch64
    make[1]: *** [Makefile:6: image] Error 2
    make[1]: Leaving directory '/home/pi/runq/qemu'
    make: *** [Makefile:26: image] Error 2
    

    Content of section runtimes of /etc/docker/daemon.json: not yet available as the installation failed:

    pi@raspberrypi:~/runq $ cat /etc/docker/daemon.json
    cat: /etc/docker/daemon.json: No such file or directory
    

    Content of /var/lib/runq/qemu/proxy --version: not yet available as the installation failed:

    pi@raspberrypi:~/runq $ /var/lib/runq/qemu/proxy --version
    -bash: /var/lib/runq/qemu/proxy: No such file or directory
    

    Content of docker --version

    pi@raspberrypi:~/runq $ docker --version
    Docker version 20.10.6, build 370c289
    

    Additional information

    pi@raspberrypi:~/runq $ uname -a
    Linux raspberrypi 5.10.17-v8+ #1403 SMP PREEMPT Mon Feb 22 11:37:54 GMT 2021 aarch64 GNU/Linux
    
    pi@raspberrypi:~ $ /usr/bin/qemu-system-aarch64 --version
    QEMU emulator version 3.1.0 (Debian 1:3.1+dfsg-8+deb10u8)
    
    pi@raspberrypi:~ $ /usr/bin/qemu-system-x86_64 --version
    QEMU emulator version 3.1.0 (Debian 1:3.1+dfsg-8+deb10u8)
    
  • Multiarch support

    I would like to be able to run multi-arch images. With the normal runc backend this is possible by enabling qemu-binfmt and then simply running

    $ docker run -it arm64v8/debian
    

    Trying this with runq will result in exec error

    $ docker run -it --runtime runq arm64v8/debian
    [entrypoint(1) 09048dc] exec format error
    Exec() failed
    main.runEntrypoint
    	/runq/cmd/init/entrypoint.go:129
    main.mainEntrypoint
    	/runq/cmd/init/entrypoint.go:22
    main.main
    	/runq/cmd/init/main.go:40
    runtime.main
    	/usr/local/go/src/runtime/proc.go:204
    runtime.goexit
    	/usr/local/go/src/runtime/asm_amd64.s:1374
    

    Are there plans to support other architectures? Maybe add a flag that maps to the requested qemu variant? e.g. -e RUNQ_ARCH=aarch64

  • Support read-only root filesystem

    This patch adds support for the --read-only option of docker run. All mount points on the root filesystem need to be created in proxy in advance.

    This commit requires #11 to work correctly.

    In Kubernetes, a pod is created with an initial pause container, and the CRI plugin of containerd creates a pause container with a read-only root filesystem.

    Signed-off-by: Yohei Ueda [email protected]

  • Enable IPv6 support in VM when IPv6 address is assigned

    When we enable IPv6 support in Docker, a runq container fails to start. https://docs.docker.com/config/daemon/ipv6/

    # docker run --runtime runq --rm busybox ip addr show eth0
    [init(1) 7291815] permission denied
    main.setupNetwork
    	/runq/cmd/init/network.go:74
    main.runInit
    	/runq/cmd/init/main.go:140
    main.main
    	/runq/cmd/init/main.go:48
    runtime.main
    	/usr/local/go/src/runtime/proc.go:204
    runtime.goexit
    	/usr/local/go/src/runtime/asm_s390x.s:779
    

    This is because the default sysctl settings defined in cfg.go disable IPv6 support in the VM. https://github.com/gotoz/runq/blob/d013e878cc2f35d23b4e85f5ac60ff9a872f27c4/internal/cfg/cfg.go#L23-L32

    To enable IPv6 in runq, we explicitly need to specify the sysctl options as follows.

    # docker run --runtime runq --rm --sysctl net.ipv6.conf.all.disable_ipv6=0 --sysctl net.ipv6.conf.default.disable_ipv6=0 busybox ip addr show eth0
    2: eth0: <NO-CARRIER,BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
        link/ether ee:17:03:1a:3d:1a brd ff:ff:ff:ff:ff:ff
        inet 172.31.0.2/16 brd 172.31.255.255 scope global eth0
           valid_lft forever preferred_lft forever
        inet6 2001:db8:1::242:ac1f:2/64 scope global flags 02
           valid_lft forever preferred_lft forever
        inet6 fe80::42:acff:fe1f:2/64 scope link tentative
           valid_lft forever preferred_lft forever
    

    This behavior is inconvenient when IPv6 is enabled.

    This patch enables IPv6 support in runq when proxy detects an IPv6 address.

  • do standard Docker vulnerabilities reside in this engine too?

    I know that Docker has some bugs for RCE and other security issues. Does this engine end up being vulnerable to the same issues as stock Docker? I'm aware that it may introduce its own set of vulns, but I'm most concerned with finding a way to use Docker images without exposing myself to Docker's weaknesses.

    Thanks!
