Fast Docker image distribution plugin for containerd, based on CRFS/stargz

[ ⬇️ Download] [ 📔 Browse images] [ Quick Start (Kubernetes)] [ 🤓 Quick Start (nerdctl)]

Stargz Snapshotter

Tests Status Benchmarking Nightly

See also the introductory blog post: Startup Containers in Lightning Speed with Lazy Image Distribution on Containerd

Pulling an image is one of the most time-consuming steps in the container lifecycle. Research shows that the pull operation accounts for 76% of container startup time [FAST '16]. Stargz Snapshotter is a snapshotter implementation that aims to solve this problem through lazy pulling. Lazy pulling means that a container can start without waiting for the image pull to complete; the necessary chunks of the image are fetched on-demand.

eStargz is a lazily-pullable image format proposed by this project. It is compatible with OCI/Docker images, so it can be pushed to standard container registries (e.g. ghcr.io), and it remains runnable even on eStargz-agnostic runtimes, including Docker. The eStargz format is based on the stargz image format by CRFS, but comes with additional features like runtime optimization and content verification.

The following histogram shows the benchmarking results for the startup time of several containers, measured on GitHub Actions using GitHub Container Registry.

The benchmarking result on ecdb227

legacy shows the startup performance with containerd's default snapshotter (overlayfs) and images copied from docker.io/library without optimization. In this configuration, containerd pulls the entire image contents, and the pull operation takes correspondingly long. When we use stargz snapshotter with eStargz-converted images but without any optimization (estargz-noopt), we see a performance improvement in the pull operation, because containerd can start the container without waiting for the pull to complete and fetches the necessary chunks of the image on-demand. But at the same time, we see a performance drawback in the run operation, because each file access takes extra time to fetch contents from the registry. When we use eStargz with optimization (estargz), we can mitigate the drawback observed with estargz-noopt images. This is because stargz snapshotter prefetches and caches the files that are likely to be accessed while running the container. On the first container creation, stargz snapshotter waits for the prefetch to complete, so create sometimes takes longer than for the other image types. But it's still shorter than waiting for all files of all layers to be downloaded.

The above histogram is the benchmarking result at commit ecdb227. We constantly measure the performance of this snapshotter, so you can get the latest result through the badge shown at the top of this doc. Please note that we sometimes see dispersion among the results because of network conditions on the internet, the location of the GitHub Actions instance, etc. Our benchmarking method is based on HelloBench.

Stargz Snapshotter is a non-core sub-project of containerd.

Quick Start with Kubernetes

To use stargz snapshotter on Kubernetes nodes, you need to apply the following configuration to containerd and run the stargz snapshotter daemon on each node. We assume that you are using containerd (> v1.4.2) as the CRI runtime.

version = 2

# Plug stargz snapshotter into containerd
# Containerd recognizes stargz snapshotter through specified socket address.
# The specified address below is the default which stargz snapshotter listens on.
[proxy_plugins]
  [proxy_plugins.stargz]
    type = "snapshot"
    address = "/run/containerd-stargz-grpc/containerd-stargz-grpc.sock"

# Use stargz snapshotter through CRI
[plugins."io.containerd.grpc.v1.cri".containerd]
  snapshotter = "stargz"
  disable_snapshot_annotations = false

Note that disable_snapshot_annotations = false is required for containerd versions newer than v1.4.2

This repo contains a Dockerfile for a KinD node image which includes the above configuration. You can use it with KinD as follows:

$ docker build -t stargz-kind-node https://github.com/containerd/stargz-snapshotter.git
$ kind create cluster --name stargz-demo --image stargz-kind-node

Then you can create eStargz pods on the cluster. In this example, we create a stargz-converted Node.js pod (ghcr.io/stargz-containers/node:13.13.0-esgz) as a demo.

apiVersion: v1
kind: Pod
metadata:
  name: nodejs
spec:
  containers:
  - name: nodejs-stargz
    image: ghcr.io/stargz-containers/node:13.13.0-esgz
    command: ["node"]
    args:
    - -e
    - var http = require('http');
      http.createServer(function(req, res) {
        res.writeHead(200);
        res.end('Hello World!\n');
      }).listen(80);
    ports:
    - containerPort: 80

The following command lazily pulls ghcr.io/stargz-containers/node:13.13.0-esgz from GitHub Container Registry and creates the pod, so it takes less time than with the original image library/node:13.13.

$ kubectl --context kind-stargz-demo apply -f stargz-pod.yaml && kubectl get po nodejs -w
$ kubectl --context kind-stargz-demo port-forward nodejs 8080:80 &
$ curl 127.0.0.1:8080
Hello World!

Stargz snapshotter also supports further configuration including private registry authentication, mirror registries, etc.

Creating eStargz images with optimization

For lazy pulling, you need to prepare eStargz images first. You can use the ctr-remote command to do this. You can also try our pre-converted images listed in Trying pre-converted images.

In this section, we introduce the ctr-remote command for converting images into eStargz with optimization for file reads. As shown in the benchmarking result above, on-demand lazy pulling improves pull performance but incurs a runtime penalty because reading files triggers remote downloads. To solve this, ctr-remote provides workload-based optimization for images.

For trying the examples described in this section, you can also use the docker-compose-based demo environment. You can set it up with the following commands (put this repo at ${GOPATH}/src/github.com/containerd/stargz-snapshotter). Note that this runs privileged containers on your host.

$ cd ${GOPATH}/src/github.com/containerd/stargz-snapshotter/script/demo
$ docker-compose build containerd_demo
$ docker-compose up -d
$ docker exec -it containerd_demo /bin/bash
(inside container) # ./script/demo/run.sh

Generally, container images are built for a specific purpose, and the workload is defined in the Dockerfile with some parameters (e.g. entrypoint, environment variables and user). By default, ctr-remote optimizes the performance of reading the files that are most likely accessed by the workload defined in the Dockerfile. You can also specify a custom workload using options if needed.

The following example converts the legacy library/ubuntu:18.04 image into eStargz. The command also optimizes the image for the workload of executing ls on /bin/bash. Under the hood, it runs the specified workload in a temporary container and profiles all file accesses, marking the accessed files as likely to be needed at runtime. The converted image is still Docker-compatible, so you can run it with eStargz-agnostic runtimes (e.g. Docker).

# ctr-remote image pull docker.io/library/ubuntu:18.04
# ctr-remote image optimize --entrypoint='[ "/bin/bash", "-c" ]' --args='[ "ls" ]' docker.io/library/ubuntu:18.04 registry2:5000/ubuntu:18.04
# ctr-remote image push --plain-http registry2:5000/ubuntu:18.04

Finally, the following commands clear the local cache and then lazily pull the eStargz image. Stargz snapshotter prefetches the files that are most likely accessed by the optimized workload, which hopefully increases the cache hit rate for that workload and mitigates the runtime overhead, as shown in the benchmarking result at the top of this doc.

# ctr-remote image rm --sync registry2:5000/ubuntu:18.04
# ctr-remote images rpull --plain-http registry2:5000/ubuntu:18.04
fetching sha256:728332a6... application/vnd.docker.distribution.manifest.v2+json
fetching sha256:80026893... application/vnd.docker.container.image.v1+json
# ctr-remote run --rm -t --snapshotter=stargz registry2:5000/ubuntu:18.04 test /bin/bash
root@8dab301bd68d:/# ls
bin  boot  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var

Importing Stargz Snapshotter as a Go module

Currently, the Stargz Snapshotter repository contains the following two Go modules, both of which need to be imported.

  • github.com/containerd/stargz-snapshotter
  • github.com/containerd/stargz-snapshotter/estargz

Please make sure you import both of them and that they point to the same commit.
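As a sketch (the version below is illustrative; use the actual release you depend on, and keep both modules at the same version), the require section of your go.mod would contain:

```
require (
	github.com/containerd/stargz-snapshotter v0.12.0
	github.com/containerd/stargz-snapshotter/estargz v0.12.0
)
```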

Project details

Stargz Snapshotter is a containerd non-core sub-project, licensed under the Apache 2.0 license. As a containerd non-core sub-project, you will find the relevant governance information in our containerd/project repository.

Comments
  • Estargz integrates Dragonfly P2P

    I am a maintainer of Dragonfly. Would it make sense for eStargz to use Dragonfly's P2P transfer capability to speed up image loading, integrating in the same way as Nydus and Dragonfly? See https://d7y.io/docs/setup/integration/nydus.

    If an eStargz and Dragonfly P2P integration is feasible, our two communities can work together on the image acceleration ecosystem.

    @ktock @AkihiroSuda

  • Add CI to automate building and pushing `estargz-kind-node` and pre-converted estargz images

  • snapshotter doesn't like paths in ko images

    In images built by github.com/google/ko, the paths we use within the tarball seem to be causing problems for the estargz snapshotter. These images run fine on a standard containerd, so it would be good to harden this snapshotter to support them:

    Dec 20 21:36:45 ubuntu containerd-stargz-grpc[1179]: {"error":"failed to cache prefetched layer: invalid child path \"/ko-app\"; must be child of \"\"","level":"debug","mountpoint":"/var/lib/containerd-stargz-grpc/snapshotter/snapshots/28/fs","msg":"failed to prefetched layer","time":"2020-12-20T21:36:45.919228015Z"}
    ...
    Dec 20 21:36:52 ubuntu containerd-stargz-grpc[1179]: {"error":"invalid child path \"/var\"; must be child of \"\"","level":"debug","mountpoint":"/var/lib/containerd-stargz-grpc/snapshotter/snapshots/27/fs","msg":"failed to fetch whole layer","time":"2020-12-20T21:36:52.422678402Z"}
    

    You can produce an image with ko + estargz by:

    1. clone github.com/google/ko
    2. go install ./cmd/ko
    3. GGCR_EXPERIMENT_ESTARGZ=1 KO_DOCKER_REPO=docker.io/{username} ko publish -B ./cmd.foo
    4. kubectl run foo --image={digest from above}
  • no state directory

    I am concerned about how much data has been pulled locally. According to https://github.com/containerd/stargz-snapshotter/blob/main/docs/overview.md, there should be a state directory under the container's root path, but I can't find any.

    Commands:

    ctr-remote images rpull --snapshotter=stargz iregistry.harbo.com/research/wordpress-estargz:latest
    ctr-remote run --rm -t --snapshotter=stargz iregistry.harbo.com/research/wordpress-estargz:latest test /bin/bash
    cat /.stargz-snapshotter/*

    Output:

    cat: '/.stargz-snapshotter/*': No such file or directory

    I have installed containerd, stargz-snapshotter and FUSE successfully, and lazy pulling works; the only problem is that there is no state directory.

    Any help would be appreciated!

  • Dockerfile: bump up components (Kubernetes 1.20, ...)

    ~- Go: 1.13 -> 1.15 (incurs s/go install/go get/g)~

    • containerd: 1.4.2 -> 1.4.3
    • CNI plugins: 0.8.6 -> 0.9.0
    • kind node: 1.19.0 -> 1.20.0
  • Store filesystem metadata on disk

    Currently, stargz snapshotter holds filesystem metadata in memory, but this ends up consuming a large amount of memory, and it isn't freed until the filesystem layer is unmounted. This commit solves the problem by changing the snapshotter to store the metadata on disk using bbolt.

    • TODO
      • [x] resolve conflict against main version (add metrics support)

    Performance comparison

    Summary:

    • PR improves memory consumption (about 5x smaller)
    • PR increases *node.Getattr latency (about 2x ~ 3x slower) but actual workloads (HelloBench, stat commands) don't show significant performance degradation
      • Any suggestions about benchmarks where this change would be problematic are welcome

    Memory consumption

    • host memory: ~16GB
    • command: nerdctl --snapshotter=stargz --insecure-registry run -it --rm registry2:5000/kdeneon/plasma:unstable-esgz echo hello
      • registry2 runs as a container on the same host as containerd.
      • size of registry2:5000/kdeneon/plasma:unstable-esgz is 1.1 GiB according to containerd.
    • config
      metrics_address = "127.0.0.1:8234"
      disable_verification = false
      [[resolver.host."registry2:5000".mirrors]]
      host = "registry2:5000"
      insecure = true
      [directory_cache]
      direct = true
      

    This change makes memory consumption 5x smaller.

    |main|PR|
    |---|---|
    |main-mem02|entriesdb-mem02|

    latency of *node.Getattr operation

    • image: ghcr.io/stargz-containers/postgres:13.1-esgz
    • command: time nerdctl exec -it $NAME find /usr/ -type d -exec /bin/bash -c "stat {}/*" \;
    • config: same as the above

    This change makes the *node.Getattr operation about 2x ~ 3x slower. Though this doesn't seem to cause significant performance degradation in actual workloads (HelloBench and stat commands), as shown in the following sections, we should continuously look for ways to optimize metadata read performance.

    |main|PR|
    |---|---|
    |main-getattr01|entriesdb-getattr01|

    Total time for find /usr/ -type d -exec /bin/bash -c "stat {}/*" \;: (Reading metadata doesn't seem to be a bottleneck of this command?)

    |main|PR|
    |---|---|
    |10.209s|10.112s|

    HelloBench

    https://github.com/ktock/stargz-snapshotter/runs/3351635146

    result

  • Unable to make it work on EKS with ECR

    Hi

    First of all, thanks for working on this project, it's a very interesting approach!

    I'm trying to use this project on an AWS EKS cluster using a private AWS ECR registry. It works for the example estargz images, but not for our own images in the ECR. These images were converted to estargz images using ctr-remote, I checked the contents of the images and there is a stargz.index.json in every layer.tar file.

    My environment:

    Kubernetes version:

    Server Version: version.Info{Major:"1", Minor:"21+", GitVersion:"v1.21.13-eks-84b4fe6", GitCommit:"e1318dce57b3e319a2e3fecf343677d1c4d4aa75", GitTreeState:"clean", BuildDate:"2022-06-09T18:22:07Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/amd64"}
    

    Node OS:

    # /etc/os-release
    NAME="Amazon Linux"
    VERSION="2"
    ID="amzn"
    ID_LIKE="centos rhel fedora"
    VERSION_ID="2"
    PRETTY_NAME="Amazon Linux 2"
    ANSI_COLOR="0;33"
    CPE_NAME="cpe:2.3:o:amazon:amazon_linux:2"
    HOME_URL="https://amazonlinux.com/"
    

    Containerd version: containerd github.com/containerd/containerd 1.4.13 9cc61520f4cd876b86e77edfeb88fbcd536d1f9d

    Stargz snapshotter installed using:

    wget -O stargz-snapshotter.tar.gz https://github.com/containerd/stargz-snapshotter/releases/download/v0.12.0/stargz-snapshotter-v0.12.0-linux-amd64.tar.gz
    tar -C /usr/local/bin -xvf stargz-snapshotter.tar.gz containerd-stargz-grpc ctr-remote
    rm -rf stargz-snapshotter.tar.gz 
    wget -O /etc/systemd/system/stargz-snapshotter.service https://raw.githubusercontent.com/containerd/stargz-snapshotter/main/script/config/etc/systemd/system/stargz-snapshotter.service
    

    Containerd config:

    # /etc/containerd/config.toml
    version = 2
    root = "/var/lib/containerd"
    state = "/run/containerd"
    
    [grpc]
    address = "/run/containerd/containerd.sock"
    
    [plugins."io.containerd.grpc.v1.cri".containerd]
    default_runtime_name = "runc"
    snapshotter = "stargz"
    disable_snapshot_annotations = false
    
    [plugins."io.containerd.grpc.v1.cri"]
    sandbox_image = "602401143452.dkr.ecr.eu-central-1.amazonaws.com/eks/pause:3.5"
    
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
    runtime_type = "io.containerd.runc.v2"
    
    [plugins."io.containerd.grpc.v1.cri".cni]
    bin_dir = "/opt/cni/bin"
    conf_dir = "/etc/cni/net.d"
    
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
        endpoint = ["https://common-docker-r..."]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"]
        endpoint = ["https://common-gcr-docker-r...."]
    
    [proxy_plugins]
      [proxy_plugins.stargz]
        type = "snapshot"
        address = "/run/containerd-stargz-grpc/containerd-stargz-grpc.sock"
    

    Stargz config:

    [cri_keychain]
      enable_keychain = true
      image_service_path = "/run/containerd/containerd.sock"
    

    Kubelet args:

    /usr/bin/kubelet --cloud-provider aws --config /etc/kubernetes/kubelet/kubelet-config.json --kubeconfig /var/lib/kubelet/kubeconfig --container-runtime remote --container-runtime-endpoint unix:///run/containerd/containerd.sock --image-service-endpoint=unix:///run/containerd-stargz-grpc/containerd-stargz-grpc.sock  --network-plugin cni --node-ip=10.17.168.201 --pod-infra-container-image=602401143452.dkr.eu-central-1.amazonaws.com/eks/pause:3.5 --v=2 --image-pull-progress-deadline=60m --pod-max-pids=4096 --allowed-unsafe-sysctls=net.ipv4.tcp_keepalive* --node-labels=default=true,node.kubernetes.io/lifecycle=on-demand --kube-reserved cpu=250m,memory=0.5Gi,ephemeral-storage=1Gi --system-reserved cpu=250m,memory=0.2Gi,ephemeral-storage=1Gi --eviction-hard memory.available<0.2Gi,nodefs.available<10% --v=2
    

    Stargz snapshotter logs:

    Aug 19 13:23:53 ip-10-17-168-241.eu-central-1.compute.internal containerd-stargz-grpc[4102]: {"error":"failed to resolve layer: failed to resolve layer \"sha256:2094364f418ff8f3acdb3ccfbdc8bb21dd665ee9f842a2f46e9819b027fb897b\" from \"123456789012.dkr.ecr.eu-central-1.amazonaws.com/myimage\": failed to resolve the blob: failed to resolve the source: cannot resolve layer: failed to redirect (host \"123456789012.dkr.ecr.eu-central-1.amazonaws.com\", ref:\"123456789012.dkr.ecr.eu-central-1.amazonaws.com/myimage\", digest:\"sha256:2094364f418ff8f3acdb3ccfbdc8bb21dd665ee9f842a2f46e9819b027fb897b\"): failed to access to the registry with code 401: failed to resolve: failed to resolve target","key":"k8s.io/213/extract-91530123-dWyn sha256:21866e8c6acebe0e473ee29d02a4c98f5bd3894e0b4eb4abd8f7b6d0293db61d","level":"warning","msg":"failed to prepare remote snapshot","parent":"k8s.io/212/sha256:fefe00319c6075c52ec7276feb2b2b7cfa72bd53cebb1fc1f50161b8fb53cda2","remote-snapshot-prepared":"false","time":"2022-08-19T13:23:53.109030760Z"}
    Aug 19 13:23:53 ip-10-17-168-241.eu-central-1.compute.internal containerd-stargz-grpc[4102]: {"error":"failed to resolve layer \"sha256:2094364f418ff8f3acdb3ccfbdc8bb21dd665ee9f842a2f46e9819b027fb897b\" from \"123456789012.dkr.ecr.eu-central-1.amazonaws.com/myimage\": failed to resolve the blob: failed to resolve the source: cannot resolve layer: failed to redirect (host \"123456789012.dkr.ecr.eu-central-1.amazonaws.com\", ref:\"123456789012.dkr.ecr.eu-central-1.amazonaws.com/myimage\", digest:\"sha256:2094364f418ff8f3acdb3ccfbdc8bb21dd665ee9f842a2f46e9819b027fb897b\"): failed to access to the registry with code 401: failed to resolve: failed to resolve target","key":"k8s.io/213/extract-91530123-dWyn sha256:21866e8c6acebe0e473ee29d02a4c98f5bd3894e0b4eb4abd8f7b6d0293db61d","level":"debug","mountpoint":"/var/lib/containerd-stargz-grpc/snapshotter/snapshots/116/fs","msg":"failed to resolve layer","parent":"k8s.io/212/sha256:fefe00319c6075c52ec7276feb2b2b7cfa72bd53cebb1fc1f50161b8fb53cda2","time":"2022-08-19T13:23:53.108984950Z"}
    Aug 19 13:23:53 ip-10-17-168-241.eu-central-1.compute.internal containerd-stargz-grpc[4102]: {"digest":"sha256:2094364f418ff8f3acdb3ccfbdc8bb21dd665ee9f842a2f46e9819b027fb897b","error":null,"key":"k8s.io/213/extract-91530123-dWyn sha256:21866e8c6acebe0e473ee29d02a4c98f5bd3894e0b4eb4abd8f7b6d0293db61d","level":"debug","mountpoint":"/var/lib/containerd-stargz-grpc/snapshotter/snapshots/116/fs","msg":"using default handler","parent":"k8s.io/212/sha256:fefe00319c6075c52ec7276feb2b2b7cfa72bd53cebb1fc1f50161b8fb53cda2","ref":"123456789012.dkr.ecr.eu-central-1.amazonaws.com/myimage","src":"123456789012.dkr.ecr.eu-central-1.amazonaws.com/myimage/sha256:2094364f418ff8f3acdb3ccfbdc8bb21dd665ee9f842a2f46e9819b027fb897b","time":"2022-08-19T13:23:53.099252918Z"}
    Aug 19 13:23:52 ip-10-17-168-241.eu-central-1.compute.internal containerd-stargz-grpc[4102]: {"key":"k8s.io/211/extract-602545280-mL0R sha256:fefe00319c6075c52ec7276feb2b2b7cfa72bd53cebb1fc1f50161b8fb53cda2","level":"info","mountpoint":"/var/lib/containerd-stargz-grpc/snapshotter/snapshots/115/fs","msg":"Received status code: 401 Unauthorized. Refreshing creds...","parent":"k8s.io/210/sha256:239d9e752d8a1d45724ba1dc66131e885a07c94b557db6ea31f253241747f5a7","src":"123456789012.dkr.ecr.eu-central-1.amazonaws.com/myimage/sha256:2094364f418ff8f3acdb3ccfbdc8bb21dd665ee9f842a2f46e9819b027fb897b","time":"2022-08-19T13:23:52.617569396Z"}
    

    It seems that stargz snapshotter is unable to access ECR. Do you have any advice on how to fix this?

  • Enable to run containers on IPFS

    See https://github.com/containerd/stargz-snapshotter/blob/main/docs/ipfs.md for the latest spec.

    This commit enables running containers on IPFS.

    The OCI image is extended in an OCI-compatible way. Stargz Snapshotter mounts the image from IPFS to the container's rootfs with lazy pulling support.

    The image must have the following OCI-compatible extension, which constructs a CID-based DAG of blobs in an OCI image.

    • Each descriptor in an image must have the following annotation
      • key: containerd.io/snapshot/remote/ipfs/cid
      • value: CID of the blob that the descriptor points to
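    As a minimal illustrative sketch (not the actual implementation; the Descriptor struct below is a local stand-in for the OCI descriptor type, and the digest/CID values are taken from the example later in this comment), attaching the CID annotation to a descriptor could look like:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Descriptor mirrors the OCI content-descriptor fields relevant here
// (a minimal local stand-in, not the real
// github.com/opencontainers/image-spec type).
type Descriptor struct {
	MediaType   string            `json:"mediaType"`
	Digest      string            `json:"digest"`
	Size        int64             `json:"size"`
	Annotations map[string]string `json:"annotations,omitempty"`
}

const ipfsCIDAnnotation = "containerd.io/snapshot/remote/ipfs/cid"

func main() {
	// Annotate a manifest descriptor with the CID of the blob it points to.
	desc := Descriptor{
		MediaType: "application/vnd.oci.image.manifest.v1+json",
		Digest:    "sha256:d8e862a13071692edbf4cffc3954591ca98954571615a0698a7ed59a09dc04df",
		Size:      342,
		Annotations: map[string]string{
			ipfsCIDAnnotation: "QmSUi34RNpoxY4zqZFGZnCQcH6odXBRHT6VTGAduANwTj6",
		},
	}
	out, _ := json.MarshalIndent(desc, "", "  ")
	fmt.Println(string(out))
}
```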

    This commit adds an ipfs library which includes the basic functionality to make containerd aware of containers on IPFS. These components are eStargz-agnostic and can be used for running non-eStargz images without lazy pulling (e.g. on the overlayfs snapshotter).

    • ipfs.IndexConvertFunc provides containerd's converter.ConvertFunc implementation which converts an image to the IPFS-enabled image format as described above. This also adds the image contents to IPFS.
    • ipfs.NewResolver provides containerd's remotes.Resolver implementation for IPFS. If a descriptor contains the CID annotation, it fetches the pointed content from IPFS.

    The following component enables lazy pulling of containers from IPFS.

    • (fs/remote/ipfs).Reader provides the way to read a range of a file on IPFS. This enables stargz snapshotter to mount the container's rootfs from IPFS with lazy pulling.

    Examples

    Storing image to IPFS and run with stargz snapshotter:

    # ipfs daemon
    # ctr-remote i pull ghcr.io/stargz-containers/python:3.9-org
    # ctr-remote i ipfs-add ghcr.io/stargz-containers/python:3.9-org test
    INFO[0098] Pushed                                        CID=QmfTVLXMG9TH7X523NytcXj35XtEDx4wgNWYepVumqpJZV
    # ipfs cat QmfTVLXMG9TH7X523NytcXj35XtEDx4wgNWYepVumqpJZV | jq
    {
      "mediaType": "application/vnd.oci.image.index.v1+json",
      "digest": "sha256:d8e862a13071692edbf4cffc3954591ca98954571615a0698a7ed59a09dc04df",
      "size": 342,
      "annotations": {
        "containerd.io/snapshot/remote/ipfs/cid": "QmSUi34RNpoxY4zqZFGZnCQcH6odXBRHT6VTGAduANwTj6"
      }
    }
    # # clear containerd cache here
    # time ( ctr-remote i rpull --ipfs QmfTVLXMG9TH7X523NytcXj35XtEDx4wgNWYepVumqpJZV && \
      ctr-remote run --snapshotter=stargz --rm -t QmfTVLXMG9TH7X523NytcXj35XtEDx4wgNWYepVumqpJZV foo python -c 'print("Hello, World!")' )
    fetching sha256:6fd287cf... application/vnd.oci.image.index.v1+json
    fetching sha256:5c13568a... application/vnd.oci.image.manifest.v1+json
    fetching sha256:236b4bd7... application/vnd.oci.image.config.v1+json
    Hello, World!
    
    real	0m1.609s
    user	0m0.047s
    sys	0m0.031s
    

    The container can also run with the overlayfs snapshotter without lazy pulling, but this is slower than with stargz snapshotter:

    # time ( ctr-remote i rpull --snapshotter=overlayfs --ipfs QmfTVLXMG9TH7X523NytcXj35XtEDx4wgNWYepVumqpJZV && \
      ctr-remote run --snapshotter=overlayfs --rm -t QmfTVLXMG9TH7X523NytcXj35XtEDx4wgNWYepVumqpJZV foo python -c 'print("Hello, World!")' )
    fetching sha256:d8e862a1... application/vnd.oci.image.index.v1+json
    fetching sha256:b8df0fee... application/vnd.oci.image.manifest.v1+json
    fetching sha256:236b4bd7... application/vnd.oci.image.config.v1+json
    fetching sha256:94584b60... application/vnd.oci.image.layer.v1.tar+gzip
    fetching sha256:cecf6f1a... application/vnd.oci.image.layer.v1.tar+gzip
    fetching sha256:cfd1c7a0... application/vnd.oci.image.layer.v1.tar+gzip
    fetching sha256:9c3d8238... application/vnd.oci.image.layer.v1.tar+gzip
    fetching sha256:cce4a0fe... application/vnd.oci.image.layer.v1.tar+gzip
    fetching sha256:3467b44a... application/vnd.oci.image.layer.v1.tar+gzip
    fetching sha256:8ff4c537... application/vnd.oci.image.layer.v1.tar+gzip
    fetching sha256:b60d5d60... application/vnd.oci.image.layer.v1.tar+gzip
    fetching sha256:68277180... application/vnd.oci.image.layer.v1.tar+gzip
    Hello, World!
    
    real	0m11.655s
    user	0m0.571s
    sys	0m0.309s
    
  • How to inspect state directory to monitor on-demand filesystem errors

    Hi All,

    The documentation here mentions the presence of hidden state directory at the root of the filesystem. Unfortunately I do not see any such directory at the root file system path.

    [iamsumee@ip-10-0-56-202 stargz-snapshotter]$ ls /.stargz-snapshotter/*
    ls: cannot access /.stargz-snapshotter/*: No such file or directory
    

    Can someone please help me with the steps to locate the state directory? It would be super useful to be able to monitor on-demand filesystem errors by inspecting the state directory contents.

  • Use single range request for registries which don't support multi range request

    Some cloud-provided registries, including GCR, don't support multi-range requests (e.g. Range: bytes=0-3,5-9,16-20). For example, when we pull an image from GCR,

    # ctr-remote image rpull gcr.io/stargz-275102/ubuntu:18.04
    fetching sha256:36d56a39... application/vnd.docker.distribution.manifest.v2+json
    fetching sha256:b993a6c1... application/vnd.docker.container.image.v1+json
    fetching sha256:ce77d9a4... application/vnd.docker.image.rootfs.diff.tar.gzip
    fetching sha256:cc65679c... application/vnd.docker.image.rootfs.diff.tar.gzip
    fetching sha256:dcc819e8... application/vnd.docker.image.rootfs.diff.tar.gzip
    fetching sha256:1b677c82... application/vnd.docker.image.rootfs.diff.tar.gzip
    

    We get a 400 response because of the multi-range request.

    DEBU[2020-04-23T02:27:15.607869311Z] failed to read layer                          digest="sha256:cc65679ceb37dd5d861fe76ced27bbca8897d3c73f9e5a4422491c5932837804" error="failed to parse stargz: error reading footer: unexpected status code on \"https://storage.googleapis.com/artifacts.stargz-275102.appspot.com/containers/images/sha256:cc65679ceb37dd5d861fe76ced27bbca8897d3c73f9e5a4422491c5932837804\": 400 Bad Request" mountpoint=/var/lib/containerd-stargz-grpc/snapshotter/snapshots/1/fs ref="gcr.io/stargz-275102/ubuntu:18.04"
    

    This commit solves the issue by using single range requests instead. We might need tests against cloud-provided registries; I'll add them in another PR.
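    To illustrate the workaround (a self-contained sketch, not the snapshotter's actual code; the httptest server below stands in for a registry blob endpoint), each region is fetched with its own single-range request, a form that any registry supporting range requests at all can serve:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"strings"
	"time"
)

func main() {
	// A registry stand-in serving a blob with HTTP range support.
	blob := "0123456789abcdefghij"
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		http.ServeContent(w, r, "blob", time.Time{}, strings.NewReader(blob))
	}))
	defer srv.Close()

	// Instead of one multi-range request ("bytes=0-3,5-9"), issue a
	// separate single-range request per region.
	for _, rng := range []string{"bytes=0-3", "bytes=5-9"} {
		req, _ := http.NewRequest("GET", srv.URL, nil)
		req.Header.Set("Range", rng)
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			panic(err)
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		// Each request gets 206 Partial Content with just the requested bytes.
		fmt.Printf("%s -> %d %q\n", rng, resp.StatusCode, body)
	}
}
```

    The trade-off is more round trips to the registry, but it avoids the 400 responses from registries that reject multi-range requests.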

  • [WIP] Buildkit integration

    The re-opened version of #52. Thread on buildkit is moby/buildkit#1396.

    This is an experimental integration with buildkit for speeding up fetching of base images.

    Our patched version of buildkit is here.

    This commit includes benchmark scripts to measure the time for building sample images. See this doc for manual testing.

    Though it seems good for lazy distribution of base images, we currently have two problems with exporting the output image.

    1. Archiving layers takes a long time. This is because of low read performance, including fetching contents from the registry.
    2. We only support the tar export type. This is because when we use remote snapshotters, containerd doesn't store the image contents in the content store, but buildkit needs those contents for exporting images.

    cc @AkihiroSuda @tonistiigi

  • IPFS: retry on repository is locked

    Avoid the following errors, which can occur when stargz-snapshotter tries to read from the IPFS repo while another read is ongoing and the repo is locked.

    {"error":"exit status 1","level":"debug","msg":"failed to wait for process exit: \"bafybeibb2buketnuvn7r4dwusypy6eskr3ujhhs6wjxijhfsleibv4i3ke\"(offset:2100000,length:50000)","stderr":"Error: lock /root/.ipfs/repo.lock: someone else has the lock\n","time":"2022-12-20T22:21:53.331722954Z"}
    {"error":"exit status 1","level":"debug","msg":"failed to wait for process exit: \"bafybeibb2buketnuvn7r4dwusypy6eskr3ujhhs6wjxijhfsleibv4i3ke\"(offset:4300000,length:1350000)","stderr":"Error: lock /root/.ipfs/repo.lock: someone else has the lock\n","time":"2022-12-20T22:21:53.365469640Z"}
    {"error":"exit status 1","level":"debug","msg":"failed to wait for process exit: \"bafybeibb2buketnuvn7r4dwusypy6eskr3ujhhs6wjxijhfsleibv4i3ke\"(offset:100000,length:50000)","stderr":"Error: lock /root/.ipfs/repo.lock: someone else has the lock\n","time":"2022-12-20T22:21:53.398679478Z"}
    {"error":"exit status 1","level":"debug","msg":"failed to wait for process exit: \"bafybeibb2buketnuvn7r4dwusypy6eskr3ujhhs6wjxijhfsleibv4i3ke\"(offset:3200000,length:100000)","stderr":"Error: lock /root/.ipfs/repo.lock: someone else has the lock\n","time":"2022-12-20T22:21:53.426643305Z"}
    {"error":"exit status 1","level":"debug","msg":"failed to wait for process exit: \"bafybeibb2buketnuvn7r4dwusypy6eskr3ujhhs6wjxijhfsleibv4i3ke\"(offset:3750000,length:50000)","stderr":"Error: lock /root/.ipfs/repo.lock: someone else has the lock\n","time":"2022-12-20T22:21:53.459329022Z"}
    {"error":"exit status 1","level":"debug","msg":"failed to wait for process exit: \"bafybeibb2buketnuvn7r4dwusypy6eskr3ujhhs6wjxijhfsleibv4i3ke\"(offset:7150000,length:100000)","stderr":"Error: lock /root/.ipfs/repo.lock: someone else has the lock\n","time":"2022-12-20T22:21:53.487148800Z"}
    {"layer_sha":"sha256:fd45374cc5b3a6c80fe4ec413d58aeedf76526b5ec2e71b5844c3daa1c93280b","level":"info","metrics":"latency","msg":"value=5379.045937 milliseconds","operation":"background_fetch_decompress","time":"2022-12-20T22:21:53.487233487Z"}
    
  • Bump up k8s.io to 0.26.0

    • https://github.com/containerd/stargz-snapshotter/pull/1028
    • https://github.com/containerd/stargz-snapshotter/pull/1027
    • https://github.com/containerd/stargz-snapshotter/pull/1026
  • Bump github.com/klauspost/compress from 1.15.12 to 1.15.13

    • https://github.com/containerd/stargz-snapshotter/pull/1033
    • https://github.com/containerd/stargz-snapshotter/pull/1032
    • https://github.com/containerd/stargz-snapshotter/pull/1031
  • Bump github.com/urfave/cli from 1.22.5 to 1.22.10 in /cmd

    Bumps github.com/urfave/cli from 1.22.5 to 1.22.10.

    Release notes

    Sourced from github.com/urfave/cli's releases.

    v1.22.10

    What's Changed

    Full Changelog: https://github.com/urfave/cli/compare/v1.22.9...v1.22.10

    v1.22.9

    What's Changed

    Full Changelog: https://github.com/urfave/cli/compare/v1.22.8...v1.22.9

    v1.22.8

    What's Changed

    Full Changelog: https://github.com/urfave/cli/compare/v1.22.7...v1.22.8

    Release 1.22.7

    What's Changed

    Full Changelog: https://github.com/urfave/cli/compare/v1.22.6...v1.22.7

    Release 1.22.6

    What's Changed

    Full Changelog: https://github.com/urfave/cli/compare/v1.22.5...v1.22.6

    Commits
    • c24c9f3 Fix:(issue_1094) Dont execute Before/After during shell completions (#1459)
    • 1eac782 Merge pull request #1428 from urfave/ignore-v2-ignored
    • 5083312 Ignore dirs that are ignored in v2
    • 575b8b4 Merge pull request #1383 from kolyshkin/v1-no-docs
    • 2930925 ci: test newly added tag
    • fc47b1a Add urfave_cli_no_docs build tag
    • ba801d7 Move some test helpers from docs_test to fish_test
    • 9810d12 Merge pull request #1384 from kolyshkin/v1-do-fix-ci
    • 22281d3 Fix CI
    • b370d04 Really fix TestApp_RunAsSubCommandIncorrectUsage
    • Additional commits viewable in compare view
