
Accelerated Container Image

Accelerated Container Image is an open-source implementation of the paper "DADI: Block-Level Image Service for Agile and Elastic Application Deployment" (USENIX ATC '20).

DADI (Data Accelerator for Disaggregated Infrastructure) is a solution for container acceleration, including remote image support and other features. It has been widely used in Alibaba and Alibaba Cloud, and is already supported by Alibaba Cloud Registry (ACR).

At the heart of the acceleration is OverlayBD, which provides a merged view of a sequence of block-based layers as an iSCSI block device. It accelerates containers by fetching image data on demand, without downloading and unpacking the whole image before the container starts. With the OverlayBD image format, containers can cold-start almost instantly.

The key features are:

  • High Performance

    It is a block-device-based storage format for OCI images, with much lower complexity than filesystem-based implementations. For example, cross-layer hardlinks and metadata-only commands such as chown are very complex for filesystem-based images without copy-up, but are natively supported by OverlayBD. OverlayBD outperforms filesystem-based solutions in performance; evaluation data is presented in the DADI paper.

  • High Reliability

    OverlayBD exposes block devices through the iSCSI protocol, which is widely used and supported in most operating systems. The OverlayBD backing-store can recover from failures or crashes.

  • Native Support for Writable Layers

    OverlayBD can be used as the writable/container layer, so end users can build their OverlayBD images natively, without conversion.

Getting Started

  • See how to build and install the OverlayBD component at README.

  • See how to build the snapshotter and ctr plugin components at BUILDING.

  • See how to install at INSTALL.

  • After building or installing, see our examples at EXAMPLES.

  • Contributions are welcome! See CONTRIBUTING.

Overview

With the OCI image spec, an image layer blob is saved as a tarball on the registry, describing the changeset based on its previous layer. However, tarballs are not designed to be seekable, and random access is not supported, so all blobs must be completely downloaded before a container can be brought up.

An OverlayBD blob is a collection of modified data blocks beneath the filesystem, corresponding to the files added, modified, or deleted by the layer. The OverlayBD iSCSI backing-store provides the merged view of the layers and exposes a virtual block device through the iSCSI protocol. A filesystem is mounted on top of the device, so an OverlayBD blob can be accessed randomly and supports on-demand reading natively.

[Figure: image data flow]

The raw data of block differences, together with an index into that raw data, constitute the OverlayBD blob. When attaching and mounting an OverlayBD device, only the indexes of each layer are loaded from the remote and stored in memory. For a data read, overlaybd performs a range lookup in the index to find out where in the blob to read, and then performs a remote fetch. The blob itself is stored in ZFile format.
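
To make that lookup concrete, here is a minimal Go sketch of such an index (the Segment layout, field names, and lookup helper are illustrative assumptions, not the project's actual data structures): each index entry maps a range of the merged virtual device to an offset inside one layer's blob, and a read resolves through a binary search over entries sorted by virtual offset.

    package main

    import (
        "fmt"
        "sort"
    )

    // Segment is a hypothetical index entry: a virtual range -> blob location.
    type Segment struct {
        VirtOffset uint64 // offset in the merged virtual block device
        Length     uint64 // length of the segment in bytes
        Layer      int    // which layer blob holds the data
        BlobOffset uint64 // offset of the raw data inside that layer blob
    }

    // lookup finds the segment covering virtual offset off via binary search.
    func lookup(index []Segment, off uint64) (Segment, bool) {
        i := sort.Search(len(index), func(j int) bool {
            return index[j].VirtOffset > off
        })
        if i == 0 {
            return Segment{}, false
        }
        s := index[i-1]
        if off < s.VirtOffset+s.Length {
            return s, true // fetch from layer s.Layer at s.BlobOffset + (off - s.VirtOffset)
        }
        return Segment{}, false // hole: no layer modified this range
    }

    func main() {
        index := []Segment{
            {VirtOffset: 0, Length: 4096, Layer: 0, BlobOffset: 512},
            {VirtOffset: 8192, Length: 4096, Layer: 2, BlobOffset: 1024},
        }
        if s, ok := lookup(index, 9000); ok {
            fmt.Printf("read layer %d at blob offset %d\n", s.Layer, s.BlobOffset+(9000-s.VirtOffset))
        }
    }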

ZFile is a compression file format that supports seekable decompression, which reduces storage and transmission costs. It also stores checksum information to protect against data corruption during on-demand reads. To stay compatible with existing registries and container engines, a ZFile is wrapped in a tar archive that contains only the single ZFile.
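
The seekable-decompression idea can be sketched as follows: data is compressed chunk by chunk at a fixed uncompressed chunk size, and a jump table records where each compressed chunk starts, so a read at any offset decompresses only one small chunk instead of the whole blob. The Go sketch below uses zlib per chunk purely for illustration; the real ZFile on-disk layout, codec choices, and APIs differ.

    package zfilesketch

    import (
        "bytes"
        "compress/zlib"
        "io"
    )

    const chunkSize = 64 * 1024 // hypothetical fixed uncompressed chunk size

    // readAt serves a read at uncompressed offset off by decompressing only
    // the chunk that contains it. jumpTable[i] is the offset of compressed
    // chunk i inside blob; a trailing sentinel marks the end of the last chunk.
    func readAt(blob []byte, jumpTable []int64, off int64, p []byte) (int, error) {
        chunk := off / chunkSize
        start, end := jumpTable[chunk], jumpTable[chunk+1]

        zr, err := zlib.NewReader(bytes.NewReader(blob[start:end]))
        if err != nil {
            return 0, err
        }
        defer zr.Close()

        raw, err := io.ReadAll(zr) // decompress just this one chunk
        if err != nil {
            return 0, err
        }
        return copy(p, raw[off%chunkSize:]), nil
    }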

[Figure: I/O path]

OverlayBD connects with applications through a filesystem mounted on an iSCSI block device. OverlayBD is agnostic to the choice of filesystem, so users can select whichever best fits their needs. I/O requests go from applications to a regular filesystem such as ext4. From there they go to the iSCSI device and then to the user-space tgt, i.e. the OverlayBD backing-store. Backend read operations always target layer files. Some of the layer files may already have been downloaded, so those reads hit the local filesystem; other reads are directed to the registry. Write and trim operations are handled by the OverlayBD backing-store, which writes the data and index files of the writable layer to the local filesystem. For more details, see the paper.
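
A rough Go sketch of that backend read dispatch (function and parameter names are assumptions for illustration; the real backing-store is implemented in C++ inside tgt): serve the read from the local layer file when it is already downloaded, otherwise issue an HTTP Range request against the registry blob URL.

    package iopath

    import (
        "fmt"
        "io"
        "net/http"
        "os"
    )

    // readLayer reads length bytes at offset off from one layer.
    func readLayer(localPath, blobURL string, off, length int64) ([]byte, error) {
        // Fast path: the layer blob already exists on the local filesystem.
        if f, err := os.Open(localPath); err == nil {
            defer f.Close()
            buf := make([]byte, length)
            _, rerr := f.ReadAt(buf, off)
            return buf, rerr
        }

        // Slow path: fetch just this byte range from the registry on demand.
        req, err := http.NewRequest("GET", blobURL, nil)
        if err != nil {
            return nil, err
        }
        req.Header.Set("Range", fmt.Sprintf("bytes=%d-%d", off, off+length-1))
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusPartialContent {
            return nil, fmt.Errorf("registry did not honor range read: %s", resp.Status)
        }
        return io.ReadAll(resp.Body)
    }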

Components

  • OverlayBD

    OverlayBD provides a merged view of a block-based layer sequence as a third-party backing-store of tgt, a user-space iSCSI target framework.

  • OverlayBD-snapshotter

    The OverlayBD snapshotter is a containerd snapshotter plugin for OverlayBD images. The snapshotter is compatible with OCI images, as well as with the overlayfs snapshotter.

Licenses

  • Both the snapshotter and the containerd ctr plugin are released under the Apache License, Version 2.0.
Comments
  • fail to use 'record-trace'

    When I tried to record the trace of an image (obd format), it failed:

    sudo bin/ctr record-trace registry.hub.docker.com/overlaybd/redis:6.2.6_obd redis_trace

    ctr: failed to setup network for namespace: plugin type="loopback" failed (add): failed to find plugin "loopback" in path [/opt/cni/bin/]

    I followed the docs strictly to build the environment; I downloaded containerd-1.6.0-rc.1-linux-amd64.tar.gz from containerd.io. All related information I could find from Google was about k8s. What should I do to handle this problem?

  • ctr from release v0.5.2 failed to run on ubuntu 20.04

    Repro:

    • Install v0.5.2 on ubuntu 20.04
    • run /opt/overlaybd/snapshotter/ctr

    Error:

    /opt/overlaybd/snapshotter/ctr: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found (required by /opt/overlaybd/snapshotter/ctr)

  • Cannot run container

    I use the rpull subcommand to pull the overlaybd format image, but when I execute the following command:

    ctr run --net-host --snapshotter=overlaybd --rm -t registry.hub.docker.com/overlaybd/redis:6.2.1_obd demo

    I get this error:

    ctr: failed to attach and mount for snapshot 8: failed to enable target for /sys/kernel/config/target/core/user_999999999/dev_8, : unknown

    How can I solve this problem?

  • obdconverted image fails to run for me

    Hi,

    I have been following the documentation to convert an OCI image into an overlaybd-friendly image, based on https://github.com/alibaba/accelerated-container-image/blob/main/docs/EXAMPLES.md#convert-oci-image-into-overlaybd

    But I get the following error when trying to run it. Note that instead of localhost:5000/redis:6.2.1_obd, I use myreg.azurecr.io/test/redis:6.2.1. It probably shouldn't make any difference?

    ctr run --net-host --snapshotter=overlaybd --rm -t myreg.azurecr.io/test/redis:6.2.1 demo
    ctr: failed to prepare extraction snapshot "extract-164412284-SC8e sha256:23e0fe431efc04eba59e21e54ec38109f73b5b5df355234afca317c0b32f7b0e": failed to attach and mount for snapshot 33: failed to mount /dev/sdh to /var/lib/overlaybd/snapshots/33/block/mountpoint: read-only file system: unknown
    

    What should I check?

    Environment:

    root@agentpool1:/var/lib/waagent# ctr plugin ls | grep overlaybd
    io.containerd.snapshotter.v1    overlaybd                -              ok
    
    root@agentpool1:/var/lib/waagent# ctr snapshot --snapshotter overlaybd ls
    KEY PARENT KIND
    
    root@agentpool1:/var/lib/waagent# ctr images ls
    REF                                         TYPE                                                      DIGEST                                                                  SIZE     PLATFORMS                                                                                               LABELS
    myreg.azurecr.io/test/redis:6.2.1           application/vnd.docker.distribution.manifest.v2+json      sha256:d448b24bc45ae177ba279d04ea53ec09421dd5bee66b887d3106e0d380d6cc6b 65.0 MiB linux/amd64                                                                                             -
    registry.hub.docker.com/library/redis:6.2.1 application/vnd.docker.distribution.manifest.list.v2+json sha256:08e282682a708eb7f51b473516be222fff0251cdee5ef8f99f4441a795c335b6 36.9 MiB linux/386,linux/amd64,linux/arm/v5,linux/arm/v7,linux/arm64/v8,linux/mips64le,linux/ppc64le,linux/s390x -
    
  • Remote image access performance improvement, by peer-to-peer distribution system

    OverlayBD provides a block device with rootfs using a remote image, so performance is highly associated with registry throughput and latency. A large-scale container launch may cause a hot spot on the registry, and a P2P distribution system should help.

  • Error while running busybox obd image: "failed to attach and mount for snapshot 80: failed to enable target for"

    Hi! We have been able to run overlaybd containers (the redis server and WordPress ones in the examples) on AKS nodes, but I face an error when attempting to run a busybox obd image. It would be great if you could help identify the issue. Thanks!

    Steps to repro:

    .accelerated-container-image/script/performance/clean-env.sh

    sudo nerdctl run --net host --rm --pull=always docker.io/library/busybox # Works

    bin/ctr obdconv docker.io/library/busybox:latest docker.io/aganeshkumar/daditest:busybox_test_obd # Works

    Output:

    docker.io/aganeshkumar/daditest:busybox_test_obd:                                 resolved       |++++++++++++++++++++++++++++++++++++++| 
    manifest-sha256:dfcd0a2ff1e99bcb845919322698e1e7a0a11d517e812f8af75b2cc61e90fc11: exists         |++++++++++++++++++++++++++++++++++++++| 
    config-sha256:e85ab4f2f7c417565e4cf68c848b59e3d78e29c2fb96196d208180c2f3fb049f:   exists         |++++++++++++++++++++++++++++++++++++++| 
    elapsed: 0.6 s                                                                    total:   0.0 B (0.0 B/s)  
    

    nerdctl image push docker.io/aganeshkumar/daditest:busybox_obd # No error, can see image in this public dockerhub repo

    .accelerated-container-image/script/performance/clean-env.sh # To remove previously pulled images

    sudo nerdctl run -it --net host --rm --snapshotter=overlaybd docker.io/aganeshkumar/daditest:busybox_test_obd # Errors

    Error message:

    FATA[0000] failed to attach and mount for snapshot 80: failed to enable target for /sys/kernel/config/target/core/user_999999999/dev_80, failed:failed to open switch file `https://docker.io/v2/aganeshkumar/daditest/blobs/sha256:58e554736a9008721c7a0918428315cce2678f6440bb39dc9689ef22a809b7ac: unknown 
    

    For additional context:

    • This is the script to set up a node on AKS with the overlaybd snapshotter: https://github.com/ganeshkumar5699/container-acceleration. I would assume the same issue is hit even with a regular Linux VM.
    • If we don't remove the converted obd image locally, push the image and then attempt to run it, it works (presumably because the layers already exist locally)

    Potential cause of the issue: Not all the layers of the obd image are being converted locally or being pushed properly.

    It would be great to know how to make this work, as we want to run benchmarking tests with different converted obd images (starting with busybox). Thank you!

  • How to prefetch data using the snapshotter as a block device without mount?

    Currently, prefetching is based on a trace file in the top layer of an image. What if we use a block device without any filesystem mounted? Is there any possibility to prefetch the data?

  • Cannot lazy pull

    I can only pull the whole image, then run. /var/log/overlaybd.log prints:

    2022/01/26 20:21:11|ERROR|th=0000000002AA8F60|main.cpp:301|dev_open:create image file failed
    2022/01/26 20:21:11|INFO |th=0000000002AA8F60|main.cpp:291|dev_open:dev open /var/lib/containerd/io.containerd.snapshotter.v1.overlaybd/snapshots/116/block/config.v1.json
    2022/01/26 20:21:11|ERROR|th=0000000002AA8F60|config_util.h:53|ParseJSON:error open json file: /var/lib/containerd/io.containerd.snapshotter.v1.overlaybd/snapshots/116/block/config.v1.json
    2022/01/26 20:21:11|ERROR|th=0000000002AA8F60|image_service.cpp:273|create_image_file:error parse image config
    2022/01/26 20:21:11|ERROR|th=0000000002AA8F60|main.cpp:301|dev_open:create image file failed
    2022/01/26 20:21:11|INFO |th=0000000002AA8F60|main.cpp:291|dev_open:dev open /var/lib/containerd/io.containerd.snapshotter.v1.overlaybd/snapshots/117/block/config.v1.json
    2022/01/26 20:21:11|ERROR|th=0000000002AA8F60|config_util.h:53|ParseJSON:error open json file: /var/lib/containerd/io.containerd.snapshotter.v1.overlaybd/snapshots/117/block/config.v1.json
    2022/01/26 20:21:11|ERROR|th=0000000002AA8F60|image_service.cpp:273|create_image_file:error parse image config
    2022/01/26 20:21:11|ERROR|th=0000000002AA8F60|main.cpp:301|dev_open:create image file failed
    2022/01/26 20:21:11|INFO |th=0000000002AA8F60|main.cpp:291|dev_open:dev open /var/lib/containerd/io.containerd.snapshotter.v1.overlaybd/snapshots/118/block/config.v1.json
    2022/01/26 20:21:11|ERROR|th=0000000002AA8F60|config_util.h:53|ParseJSON:error open json file: /var/lib/containerd/io.containerd.snapshotter.v1.overlaybd/snapshots/118/block/config.v1.json
    2022/01/26 20:21:11|ERROR|th=0000000002AA8F60|image_service.cpp:273|create_image_file:error parse image config
    2022/01/26 20:21:11|ERROR|th=0000000002AA8F60|main.cpp:301|dev_open:create image file failed
    2022/01/26 20:23:07|INFO |th=00007F25202143C0|main.cpp:291|dev_open:dev open /var/lib/containerd/io.containerd.snapshotter.v1.overlaybd/snapshots/22/block/config.v1.json
    2022/01/26 20:23:07|INFO |th=00007F24DEA0D840|zfile.cpp:516|load_jump_table:read overwrite header. idx_offset: 50092870, idx_bytes: 94604, dict_size: 0, use_dict: 0
    2022/01/26 20:23:07|INFO |th=00007F24DE206C00|zfile.cpp:516|load_jump_table:read overwrite header. idx_offset: 8396, idx_bytes: 92, dict_size: 0, use_dict: 0
    2022/01/26 20:23:07|INFO |th=00007F24DD1FEC80|zfile.cpp:516|load_jump_table:read overwrite header. idx_offset: 14234, idx_bytes: 164, dict_size: 0, use_dict: 0
    2022/01/26 20:23:07|INFO |th=00007F24DC9F7BC0|zfile.cpp:516|load_jump_table:read overwrite header. idx_offset: 29786352, idx_bytes: 50648, dict_size: 0, use_dict: 0
    2022/01/26 20:23:07|INFO |th=00007F24DC1F67C0|zfile.cpp:516|load_jump_table:read overwrite header. idx_offset: 18883, idx_bytes: 176, dict_size: 0, use_dict: 0
    2022/01/26 20:23:07|INFO |th=00007F24DB9F4080|zfile.cpp:516|load_jump_table:read overwrite header. idx_offset: 17718, idx_bytes: 180, dict_size: 0, use_dict: 0
    2022/01/26 20:23:07|INFO |th=00007F24DB1EC780|zfile.cpp:516|load_jump_table:read overwrite header. idx_offset: 11272123, idx_bytes: 11888, dict_size: 0, use_dict: 0
    2022/01/26 20:23:07|INFO |th=00007F24DA1E5040|zfile.cpp:516|load_jump_table:read overwrite header. idx_offset: 23701322, idx_bytes: 62140, dict_size: 0, use_dict: 0
    2022/01/26 20:23:07|INFO |th=00007F24D81D2400|zfile.cpp:516|load_jump_table:read overwrite header. idx_offset: 28651120, idx_bytes: 55368, dict_size: 0, use_dict: 0
    2022/01/26 20:23:07|INFO |th=00007F24D71C8840|zfile.cpp:516|load_jump_table:read overwrite header. idx_offset: 10817, idx_bytes: 104, dict_size: 0, use_dict: 0
    2022/01/26 20:23:07|INFO |th=00007F24D69C1800|zfile.cpp:516|load_jump_table:read overwrite header. idx_offset: 10993, idx_bytes: 100, dict_size: 0, use_dict: 0
    2022/01/26 20:23:07|INFO |th=00007F24D61BDBC0|zfile.cpp:516|load_jump_table:read overwrite header. idx_offset: 68894, idx_bytes: 448, dict_size: 0, use_dict: 0
    2022/01/26 20:23:07|INFO |th=00007F24D59BCC40|zfile.cpp:516|load_jump_table:read overwrite header. idx_offset: 28635014, idx_bytes: 62144, dict_size: 0, use_dict: 0
    2022/01/26 20:23:07|INFO |th=00007F24D51B7C80|zfile.cpp:516|load_jump_table:read overwrite header. idx_offset: 13938, idx_bytes: 104, dict_size: 0, use_dict: 0
    2022/01/26 20:23:07|INFO |th=00007F24D49AF3C0|zfile.cpp:516|load_jump_table:read overwrite header. idx_offset: 11050, idx_bytes: 92, dict_size: 0, use_dict: 0
    2022/01/26 20:23:07|INFO |th=00007F24DF211400|zfile.cpp:509|load_jump_table:trailer_offset: 4737183, idx_offset: 4207947, idx_bytes: 529236, dict_size: 0, use_dict: 0
    2022/01/26 20:23:07|INFO |th=00007F24DDA03800|zfile.cpp:516|load_jump_table:read overwrite header. idx_offset: 150863644, idx_bytes: 284256, dict_size: 0, use_dict: 0
    2022/01/26 20:23:07|INFO |th=00007F24D79CE840|zfile.cpp:516|load_jump_table:read overwrite header. idx_offset: 19540623, idx_bytes: 45988, dict_size: 0, use_dict: 0
    2022/01/26 20:23:07|INFO |th=00007F24DA9EB040|zfile.cpp:516|load_jump_table:read overwrite header. idx_offset: 9392, idx_bytes: 92, dict_size: 0, use_dict: 0
    2022/01/26 20:23:07|INFO |th=00007F24D99DEC80|zfile.cpp:516|load_jump_table:read overwrite header. idx_offset: 13495, idx_bytes: 104, dict_size: 0, use_dict: 0
    2022/01/26 20:23:07|INFO |th=00007F24D91D6FC0|zfile.cpp:516|load_jump_table:read overwrite header. idx_offset: 10207, idx_bytes: 100, dict_size: 0, use_dict: 0
    2022/01/26 20:23:07|INFO |th=00007F24D89D4BC0|zfile.cpp:516|load_jump_table:read overwrite header. idx_offset: 10073, idx_bytes: 92, dict_size: 0, use_dict: 0
    2022/01/26 20:23:07|INFO |th=00007F25202143C0|image_file.cpp:262|open_lowers:LSMT::open_files_ro(files, 22) success
    2022/01/26 20:23:07|INFO |th=00007F25202143C0|image_file.cpp:362|init_image_file:RW layer path not set. return RO layers.
    2022/01/26 20:23:07|INFO |th=00007F25202143C0|image_file.cpp:148|start_bk_dl_thread:no need to download
    2022/01/26 20:23:07|INFO |th=00007F25202143C0|image_file.h:50|ImageFile:new imageFile, bs: 512, size: 68719476736

    I can't find the dir /var/lib/containerd/io.containerd.snapshotter.v1.overlaybd/snapshots/x from the log on the host. I can't tell which component got in trouble. Any help will be appreciated.

  • Unable to use rpull from container registry

    I followed the instructions at https://github.com/alibaba/accelerated-container-image/blob/main/docs/EXAMPLES.md

    However, I got this error:

    ➜  accelerated-container-image git:(main) sudo bin/ctr rpull staging-registry.yuri.moe/redis:6.2.1_obd && sudo ctr run --net-host --snapshotter=overlaybd --rm -t staging-registry.yuri.moe/redis:6.2.1_obd demo
    staging-registry.yuri.moe/redis:6.2.1_obd:                                        resolved       |++++++++++++++++++++++++++++++++++++++| 
    manifest-sha256:23d8acc1c468e678019c12784bac514b09908c0accc7bf2a56ae8fe7fea9e1d6: downloading    |--------------------------------------|    0.0 B/3.3 KiB 
    elapsed: 0.2 s                                                                    total:   0.0 B (0.0 B/s)                                         
    done
    ctr: failed to attach and mount for snapshot 7: failed to enable target for /sys/kernel/config/target/core/user_999999999/dev_7, failed:failed to open remote file https://staging-registry.yuri.moe/v2/redis/blobs/sha256:5b8ddc4be300c03f643ace1d74a62a3614224569b7d2ef46d69f4a3e96fcb856: unknown
    
    

    These are the commands I ran to get the OBD image and upload it to my own registry.

    sudo ctr content fetch registry.hub.docker.com/library/redis:6.2.1
    sudo bin/ctr obdconv registry.hub.docker.com/library/redis:6.2.1 localhost:5000/redis:6.2.1_obd
    sudo ctr i push  staging-registry.yuri.moe/redis:6.2.1_obd
    

    I am able to lazy-pull the image and run it using registry.hub.docker.com/overlaybd/redis:6.2.1_obd.

    The registry is running and open to the public in case you would like to test it out. It is the latest image from https://hub.docker.com/_/registry if you want to set up your own registry.

  • Why is overlaybd faster than overlay2 in the warm startup scenario?

    In my opinion, overlaybd is another lazy-pulling container image snapshotter for containerd. It's based on a block device and an iSCSI target driver. It redirects I/O from kernel virtual block devices to the user-mode overlaybd backend, which finally resends it to the kernel's local filesystem. I think overlaybd has a longer I/O path than overlayfs, since it switches twice between user mode and kernel mode when a container reads an image file (not in cache), while overlayfs only switches once. Theoretically, if container images are already downloaded, container file-read I/O should be slower with overlaybd than with overlayfs.

  • snapshotter: fix mkfs error for ext4 by adding the '-F' option

    Without the option, mkfs will fail with the error "failed to mkfs for dev /dev/sdm: mke2fs 1.42.9 (28-Dec-2013)\n/dev/sdm is entire device, not just one partition! \nProceed anyway? (y,n) : exit status 1". A sketch of the kind of change follows below.

    Signed-off-by: Wang Xingxing [email protected]
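
    A minimal Go sketch of this fix, assuming the snapshotter shells out to mkfs.ext4 (the surrounding function and package names are hypothetical, not the actual PR code):

    package snapshotter // illustrative package name

    import (
        "fmt"
        "os/exec"
    )

    // mkfsExt4 formats the overlaybd virtual device. The -F flag keeps
    // mkfs.ext4 from stopping at the interactive "Proceed anyway? (y,n)"
    // prompt when the target is an entire device rather than a partition.
    func mkfsExt4(dev string) error {
        out, err := exec.Command("mkfs.ext4", "-F", dev).CombinedOutput()
        if err != nil {
            return fmt.Errorf("failed to mkfs for dev %s: %s: %w", dev, out, err)
        }
        return nil
    }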

  • Add support to Prometheus

    Now we can provide a port (for example, by setting "monitorPort": 9099 in /etc/overlaybd-snapshotter/config.json) for Prometheus to monitor metrics such as gRPC API latency or error count. A sketch of serving such a metrics endpoint follows below.

    (from https://github.com/containerd/accelerated-container-image/issues/140)

    Signed-off-by: Haoqi Miao [email protected]
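
    A minimal sketch of serving metrics on the configured monitorPort with prometheus/client_golang (the Serve helper and package layout are assumptions, not the actual PR code):

    package monitor

    import (
        "fmt"
        "net/http"

        "github.com/prometheus/client_golang/prometheus/promhttp"
    )

    // Serve exposes /metrics on the port taken from config.json, e.g. 9099.
    func Serve(monitorPort int) error {
        mux := http.NewServeMux()
        mux.Handle("/metrics", promhttp.Handler())
        return http.ListenAndServe(fmt.Sprintf(":%d", monitorPort), mux)
    }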

  • Record and replay block traces at runtime

    The current trace-based prefetch happens at build time: it requires running a dummy container using the image, adding an accelerated layer, and pushing a new image. This approach makes it difficult to use in the following ways:

    1. It changes the semantics of an image to those of an application. Instead of many applications using a single image, each application needs its own image. The application-level image is quite intrusive to the runtime; e.g. we would need to ask different teams to update their workloads to use different images that are actually the same tar-gzip image, which seems unconventional in the container ecosystem.
    2. It is difficult to build an accurate trace at build time. In large organizations, the runtime environment can be quite complex (many dependencies on databases, cloud resources, internal services, different networking, firewalls); it's very hard and costly to reproduce such an environment at build time (which only has Docker etc., but no K8s or other dependencies).

    My thinking is that the trace belongs at the application level instead of the image level; it's better maintained by the workload/application owners than as part of an image. They decide when to record/replay. The interface could be:

    • To record:
    1. At runtime, the application owner puts a lock file in place or uses some other means (e.g. a new binary overlaybd-record {image}) to start recording; the input contains the trace filename
    2. Overlaybd starts to record the traces for the image once it receives such a signal
    3. Stopping the recording could be time- or signal-based; the output will be a trace file
    4. The application owner collects and stores the trace file
    • To replay
    1. The application owner simply puts the trace file in a configured location or calls a binary overlaybd-replay {image}, etc.

    I'm totally open to suggestions and discussion. I think trace-based prefetch is a super awesome feature, and I would love to adopt/contribute to make it even better and easier for adoption; a rough code sketch of this idea follows below. Thanks!

    cc @lihuiba @liulanzheng @BigVan
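
    A rough Go sketch of what a recorded trace entry and a replay loop could look like under this proposal (everything here is hypothetical, not existing code):

    package trace

    // Entry is one recorded read against the virtual block device.
    type Entry struct {
        Offset int64 // device offset that was read while recording
        Length int64 // number of bytes read
    }

    // Replay warms the device by re-issuing the recorded reads in order, so
    // the backing-store prefetches those ranges from the registry up front.
    func Replay(readAt func(off, length int64) error, entries []Entry) error {
        for _, e := range entries {
            if err := readAt(e.Offset, e.Length); err != nil {
                return err
            }
        }
        return nil
    }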

  • Cleanup of the overlaybd device after the container stops should be supported

    In some cases, like high-density pods deployed on a single host, overlaybd devices need to be cleaned up after all related containers stop, which can reduce the system load. Shall we add a boolean option in the config file, like "autoRemoveDev"?

  • How to configure p2p and cache

        > @bengbeng-pp Currently in Alibaba Cloud, only the Function Compute uses trace prefetching, because it's relatively easier for them to record trace. Some business are reluctant to do such a thing.
    

    I think what you need is Cache + P2P distribution. For each of them, DADI has an open-source implementation. By setting up a large-scale SSD cluster, you basically distribute/cache every hot piece of data in the network, and thus a mighty network filesystem is formed :-)

    Hello, is there any documentation on how to configure cache and P2P? When I pulled an obd format image from the registry, I could not see anything in /opt/overlaybd/registry_cache.

    Originally posted by @dbfancier in https://github.com/containerd/accelerated-container-image/issues/120#issuecomment-1291546382

  • Overlaybd observability support

    When using OverlayBD in production, we will need to monitor the health of OverlayBD components using popular cloud-native instrumentation tooling.

    A similar issue was brought up here: https://github.com/containerd/overlaybd/issues/101. There are certain things users could try, but it would be great if this were supported by the DADI service so it can be standardized and reused. I believe this is key to helping DADI adoption.

    The following metrics are a rough idea of what we'd like to monitor:

    • Overlaybd:

      1. Healthcheck ping for the Overlaybd daemon
      2. Number of failed blob reads, grouped by HTTP status (500 for registry error, 404 for blob not found, 403 for auth failure, etc.)
      3. Blob read latency for each block (e.g. 1M)
      4. Other unexpected errors, such as failures to write to the local cache or online decompression failures
      5. Virtual block device IO hang monitoring
      6. Virtual block device IO latency
    • Overlaybd-snapshotter:

      1. Healthcheck ping for the snapshotter daemon
      2. Error count of all GRPC APIs (prepare, commit etc.)
      3. Latency for all GRPC APIs

    Ideally, the above metrics can be exposed in Prometheus, so that it's easy to monitor DADI in cloud-native environments.

    Some similar monitoring support:

    • Docker daemon monitoring: https://docs.docker.com/config/daemon/prometheus/
    • kata-monitor: https://github.com/kata-containers/kata-containers/tree/main/src/runtime/pkg/kata-monitor

    Please let me know your thoughts. The metrics mentioned above are just some quick ideas (a small declaration sketch follows below); I would be happy to discuss, too.
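
    A sketch of how a couple of the metrics listed above could be declared with prometheus/client_golang (the metric names are invented for illustration):

    package metrics

    import "github.com/prometheus/client_golang/prometheus"

    var (
        // Failed blob reads, grouped by HTTP status (404, 403, 500, ...).
        blobReadFailures = prometheus.NewCounterVec(
            prometheus.CounterOpts{
                Name: "overlaybd_blob_read_failures_total",
                Help: "Failed remote blob reads, grouped by HTTP status.",
            },
            []string{"status"},
        )

        // Per-block remote blob read latency.
        blobReadLatency = prometheus.NewHistogram(
            prometheus.HistogramOpts{
                Name:    "overlaybd_blob_read_duration_seconds",
                Help:    "Latency of remote blob reads.",
                Buckets: prometheus.DefBuckets,
            },
        )
    )

    func init() {
        prometheus.MustRegister(blobReadFailures, blobReadLatency)
        // Usage: blobReadFailures.WithLabelValues("404").Inc()
    }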
