concurrent, cache-efficient, and Dockerfile-agnostic builder toolkit

asciinema example

BuildKit


BuildKit is a toolkit for converting source code to build artifacts in an efficient, expressive and repeatable manner.

Key features:

  • Automatic garbage collection
  • Extendable frontend formats
  • Concurrent dependency resolution
  • Efficient instruction caching
  • Build cache import/export
  • Nested build job invocations
  • Distributable workers
  • Multiple output formats
  • Pluggable architecture
  • Execution without root privileges

Read the proposal from https://github.com/moby/moby/issues/32925

Introductory blog post https://blog.mobyproject.org/introducing-buildkit-17e056cc5317

Join #buildkit channel on Docker Community Slack

ℹ️ If you are visiting this repo for the usage of BuildKit-only Dockerfile features like RUN --mount=type=(bind|cache|tmpfs|secret|ssh), please refer to frontend/dockerfile/docs/syntax.md.

ℹ️ BuildKit has been integrated into docker build since Docker 18.06. You don't need to read this document unless you want to use the full-featured standalone version of BuildKit.

Used by

BuildKit is used by the following projects:

Quick start

ℹ️ For Kubernetes deployments, see examples/kubernetes.

BuildKit is composed of the buildkitd daemon and the buildctl client. While the buildctl client is available for Linux, macOS, and Windows, the buildkitd daemon is only available for Linux currently.

The buildkitd daemon requires the following components to be installed:

The latest binaries of BuildKit are available here for Linux, macOS, and Windows.

A Homebrew package (unofficial) is available for macOS.

$ brew install buildkit

To build BuildKit from source, see .github/CONTRIBUTING.md.

Starting the buildkitd daemon:

You need to run buildkitd as the root user on the host.

$ sudo buildkitd

To run buildkitd as a non-root user, see docs/rootless.md.

The buildkitd daemon supports two worker backends: OCI (runc) and containerd.

By default, the OCI (runc) worker is used. You can set --oci-worker=false --containerd-worker=true to use the containerd worker.

We are open to adding more backends.

To start the buildkitd daemon using systemd socket activation, you can install the buildkit systemd unit files. See Systemd socket activation.

The buildkitd daemon listens for gRPC API connections on /run/buildkit/buildkitd.sock by default, but you can also use TCP sockets. See Expose BuildKit as a TCP service.

Exploring LLB

BuildKit builds are based on a binary intermediate format called LLB that is used for defining the dependency graph for processes running as part of your build. tl;dr: LLB is to Dockerfile what LLVM IR is to C.

  • Marshaled as Protobuf messages
  • Concurrently executable
  • Efficiently cacheable
  • Vendor-neutral (i.e. non-Dockerfile languages can be easily implemented)

See solver/pb/ops.proto for the format definition, and see ./examples/README.md for example LLB applications.

Currently, the following high-level languages have been implemented for LLB:

Exploring Dockerfiles

Frontends are components that run inside BuildKit and convert any build definition to LLB. There is a special frontend called gateway (gateway.v0) that allows using any image as a frontend.

During development, Dockerfile frontend (dockerfile.v0) is also part of the BuildKit repo. In the future, this will be moved out, and Dockerfiles can be built using an external image.

Building a Dockerfile with buildctl

buildctl build \
    --frontend=dockerfile.v0 \
    --local context=. \
    --local dockerfile=.
# or
buildctl build \
    --frontend=dockerfile.v0 \
    --local context=. \
    --local dockerfile=. \
    --opt target=foo \
    --opt build-arg:foo=bar

--local exposes local source files from the client to the builder. context and dockerfile are the names the Dockerfile frontend looks up for the build context and the Dockerfile location.
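
As a sketch, a minimal build context consumable by an invocation like the one above could be laid out as follows (file names and base image are illustrative):

```shell
# Hypothetical minimal build context. `--local context=.` exposes these files
# to COPY/ADD instructions; `--local dockerfile=.` is the directory that
# contains the Dockerfile.
mkdir -p myapp && cd myapp
cat > Dockerfile <<'EOF'
FROM alpine:3.18
COPY hello.txt /hello.txt
EOF
echo "hello from buildkit" > hello.txt
ls
```

From inside myapp, the buildctl build command shown above would then produce an image containing /hello.txt.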

Building a Dockerfile using external frontend:

External versions of the Dockerfile frontend are pushed to https://hub.docker.com/r/docker/dockerfile-upstream and https://hub.docker.com/r/docker/dockerfile and can be used with the gateway frontend. The source for the external frontend is currently located in ./frontend/dockerfile/cmd/dockerfile-frontend but will move out of this repository in the future (#163). For an automatic build from the master branch of this repository, the docker/dockerfile-upstream:master or docker/dockerfile-upstream:master-labs image can be used.

buildctl build \
    --frontend gateway.v0 \
    --opt source=docker/dockerfile \
    --local context=. \
    --local dockerfile=.
buildctl build \
    --frontend gateway.v0 \
    --opt source=docker/dockerfile \
    --opt context=git://github.com/moby/moby \
    --opt build-arg:APT_MIRROR=cdn-fastly.deb.debian.org

Building a Dockerfile with experimental features like RUN --mount=type=(bind|cache|tmpfs|secret|ssh)

See frontend/dockerfile/docs/experimental.md.

Output

By default, the build result and intermediate cache will only remain internally in BuildKit. An output needs to be specified to retrieve the result.

Image/Registry

buildctl build ... --output type=image,name=docker.io/username/image,push=true

To embed the cache with the image and push them to the registry together, specify --export-cache type=inline and --import-cache type=registry,ref=... (the registry type is required to import the cache). To export the cache to a local directory, specify --export-cache type=local. See Export cache for details.

buildctl build ...\
  --output type=image,name=docker.io/username/image,push=true \
  --export-cache type=inline \
  --import-cache type=registry,ref=docker.io/username/image

Keys supported by image output:

  • name=[value]: image name
  • push=true: push after creating the image
  • push-by-digest=true: push unnamed image
  • registry.insecure=true: push to insecure HTTP registry
  • oci-mediatypes=true: use OCI mediatypes in configuration JSON instead of Docker's
  • unpack=true: unpack image after creation (for use with containerd)
  • dangling-name-prefix=[value]: name image with prefix@<digest>, used for anonymous images
  • name-canonical=true: add additional canonical name name@<digest>
  • compression=[uncompressed,gzip,estargz,zstd]: choose compression type for layers newly created and cached, gzip is default value. estargz should be used with oci-mediatypes=true.
  • force-compression=true: forcefully apply compression option to all layers (including already existing layers).
  • buildinfo=[all,imageconfig,metadata,none]: choose build dependency version to export (default all).

If credentials are required, buildctl will attempt to read the Docker configuration file $DOCKER_CONFIG/config.json. $DOCKER_CONFIG defaults to ~/.docker.

Local directory

The local exporter copies the resulting files directly to the client. This is useful if BuildKit is being used for building something other than container images.

buildctl build ... --output type=local,dest=path/to/output-dir

To export specific files, use multi-stage builds with a scratch stage and copy the needed files into that stage with COPY --from.

...
FROM scratch as testresult

COPY --from=builder /usr/src/app/testresult.xml .
...
buildctl build ... --opt target=testresult --output type=local,dest=path/to/output-dir

The tar exporter is similar to the local exporter but transfers the files through a tarball.

buildctl build ... --output type=tar,dest=out.tar
buildctl build ... --output type=tar > out.tar

Docker tarball

# exported tarball is also compatible with OCI spec
buildctl build ... --output type=docker,name=myimage | docker load

OCI tarball

buildctl build ... --output type=oci,dest=path/to/output.tar
buildctl build ... --output type=oci > output.tar

containerd image store

The containerd worker needs to be used:

buildctl build ... --output type=image,name=docker.io/username/image
ctr --namespace=buildkit images ls

To change the containerd namespace, you need to set worker.containerd.namespace in /etc/buildkit/buildkitd.toml.
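
For example, a buildkitd.toml fragment selecting a custom namespace might look like this (a sketch; the namespace value is illustrative, and the default is buildkit):

```toml
# /etc/buildkit/buildkitd.toml (fragment)
[worker.containerd]
  # illustrative value; buildkit uses the "buildkit" namespace by default
  namespace = "my-namespace"
```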

Cache

To show local build cache (/var/lib/buildkit):

buildctl du -v

To prune local build cache:

buildctl prune

Garbage collection

See ./docs/buildkitd.toml.md.

Export cache

BuildKit supports the following cache exporters:

  • inline: embed the cache into the image, and push them to the registry together
  • registry: push the image and the cache separately
  • local: export to a local directory
  • gha: export to GitHub Actions cache

In most cases you want to use the inline cache exporter. However, note that the inline cache exporter only supports min cache mode. To enable max cache mode, push the image and the cache separately by using the registry cache exporter.

Inline (push image and cache together)

buildctl build ... \
  --output type=image,name=docker.io/username/image,push=true \
  --export-cache type=inline \
  --import-cache type=registry,ref=docker.io/username/image

Note that the inline cache is not imported unless --import-cache type=registry,ref=... is provided.

ℹ️ Docker-integrated BuildKit (DOCKER_BUILDKIT=1 docker build) and docker buildx require --build-arg BUILDKIT_INLINE_CACHE=1 to be specified to enable the inline cache exporter. However, the standalone buildctl does NOT require --opt build-arg:BUILDKIT_INLINE_CACHE=1; the build-arg is simply ignored.

Registry (push image and cache separately)

buildctl build ... \
  --output type=image,name=localhost:5000/myrepo:image,push=true \
  --export-cache type=registry,ref=localhost:5000/myrepo:buildcache \
  --import-cache type=registry,ref=localhost:5000/myrepo:buildcache

--export-cache options:

  • type=registry
  • mode=min (default): only export layers for the resulting image
  • mode=max: export all the layers of all intermediate steps.
  • ref=docker.io/user/image:tag: reference
  • oci-mediatypes=true|false: whether to use OCI mediatypes in exported manifests. Since BuildKit v0.8 defaults to true.

--import-cache options:

  • type=registry
  • ref=docker.io/user/image:tag: reference

Local directory

buildctl build ... --export-cache type=local,dest=path/to/output-dir
buildctl build ... --import-cache type=local,src=path/to/input-dir

The directory layout conforms to OCI Image Spec v1.0.
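
As a rough sketch, the exported cache directory therefore looks like a standard OCI image layout (digest names abbreviated):

```text
output-dir/
├── oci-layout
├── index.json
└── blobs/
    └── sha256/
        ├── <manifest list digest>
        ├── <manifest digests>
        └── <layer digests>
```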

--export-cache options:

  • type=local
  • mode=min (default): only export layers for the resulting image
  • mode=max: export all the layers of all intermediate steps.
  • dest=path/to/output-dir: destination directory for cache exporter
  • oci-mediatypes=true|false: whether to use OCI mediatypes in exported manifests. Since BuildKit v0.8 defaults to true.

--import-cache options:

  • type=local
  • src=path/to/input-dir: source directory for cache importer
  • digest=sha256:deadbeef: digest of the manifest list to import.
  • tag=customtag: custom tag of the image. Defaults to "latest"; the tag digest in index.json is used for digest, not for tag

GitHub Actions cache (experimental)

buildctl build ... \
  --output type=image,name=docker.io/username/image,push=true \
  --export-cache type=gha \
  --import-cache type=gha

The following attributes are required to authenticate against the GitHub Actions Cache service API:

  • url: Cache server URL (default $ACTIONS_CACHE_URL)
  • token: Access token (default $ACTIONS_RUNTIME_TOKEN)

ℹ️ This type of cache can be used with Docker Build Push Action where url and token will be automatically set. To use this backend in an inline run step, you have to include crazy-max/ghaction-github-runtime in your workflow to expose the runtime.

--export-cache options:

  • type=gha
  • mode=min (default): only export layers for the resulting image
  • mode=max: export all the layers of all intermediate steps.
  • scope=buildkit: which scope cache object belongs to (default buildkit)

--import-cache options:

  • type=gha
  • scope=buildkit: which scope cache object belongs to (default buildkit)

Consistent hashing

If you have multiple BuildKit daemon instances but you don't want to use a registry for sharing the cache across the cluster, consider client-side load balancing using consistent hashing.

See ./examples/kubernetes/consistenthash.

Metadata

To output build metadata such as the image digest, pass the --metadata-file flag. The metadata will be written as a JSON object to the specified file. The directory of the specified file must already exist and be writable.

buildctl build ... --metadata-file metadata.json
{"containerimage.digest": "sha256:ea0cfb27fd41ea0405d3095880c1efa45710f5bcdddb7d7d5a7317ad4825ae14",...}
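
If a script needs the digest, it can be extracted from the metadata file with standard tools; a minimal sketch using sed against a sample file (jq -r would be a sturdier choice):

```shell
# Sample metadata file mirroring the output shown above (hypothetical content)
cat > metadata.json <<'EOF'
{"containerimage.digest": "sha256:ea0cfb27fd41ea0405d3095880c1efa45710f5bcdddb7d7d5a7317ad4825ae14"}
EOF
# Extract the digest value from the JSON object
digest=$(sed -n 's/.*"containerimage.digest": *"\([^"]*\)".*/\1/p' metadata.json)
echo "$digest"
```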

Systemd socket activation

On Systemd based systems, you can communicate with the daemon via Systemd socket activation by using buildkitd --addr fd://. You can find examples of using Systemd socket activation with BuildKit and Systemd in ./examples/systemd.

Expose BuildKit as a TCP service

The buildkitd daemon can listen for gRPC API connections on a TCP socket.

It is highly recommended to create TLS certificates for both the daemon and the client (mTLS). Enabling TCP without mTLS is dangerous because the executor containers (aka Dockerfile RUN containers) can call BuildKit API as well.

buildkitd \
  --addr tcp://0.0.0.0:1234 \
  --tlscacert /path/to/ca.pem \
  --tlscert /path/to/cert.pem \
  --tlskey /path/to/key.pem
buildctl \
  --addr tcp://example.com:1234 \
  --tlscacert /path/to/ca.pem \
  --tlscert /path/to/clientcert.pem \
  --tlskey /path/to/clientkey.pem \
  build ...
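
For local testing, throwaway mTLS material can be generated with openssl along these lines (a sketch; paths and subject names are illustrative, and a client key/certificate pair is produced the same way from the same CA):

```shell
# Create a test CA and a daemon certificate signed by it (illustrative only;
# use a proper PKI for production deployments).
mkdir -p certs && cd certs
# CA private key and self-signed CA certificate
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca-key.pem -out ca.pem \
  -days 365 -subj "/CN=buildkit-test-ca"
# Daemon key and certificate signing request (subject is illustrative; real
# deployments need the daemon's hostname in the certificate, typically as a SAN)
openssl req -newkey rsa:2048 -nodes -keyout key.pem -out cert.csr \
  -subj "/CN=example.com"
# Sign the daemon CSR with the CA
openssl x509 -req -in cert.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -out cert.pem -days 365
# Verify the resulting chain
openssl verify -CAfile ca.pem cert.pem
```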

Load balancing

buildctl build can be called against randomly load-balanced buildkitd daemons.

See also Consistent hashing for client-side load balancing.

Containerizing BuildKit

BuildKit can also be used by running the buildkitd daemon inside a Docker container and accessing it remotely.

We provide the container images as moby/buildkit:

  • moby/buildkit:latest: built from the latest regular release
  • moby/buildkit:rootless: same as latest but runs as an unprivileged user, see docs/rootless.md
  • moby/buildkit:master: built from the master branch
  • moby/buildkit:master-rootless: same as master but runs as an unprivileged user, see docs/rootless.md

To run the daemon in a container:

docker run -d --name buildkitd --privileged moby/buildkit:latest
export BUILDKIT_HOST=docker-container://buildkitd
buildctl build --help

Podman

To connect to a BuildKit daemon running in a Podman container, use podman-container:// instead of docker-container://.

podman run -d --name buildkitd --privileged moby/buildkit:latest
buildctl --addr=podman-container://buildkitd build --frontend dockerfile.v0 --local context=. --local dockerfile=. --output type=oci | podman load foo

sudo is not required.

Kubernetes

For Kubernetes deployments, see examples/kubernetes.

Daemonless

To run the client and an ephemeral daemon in a single container ("daemonless mode"):

docker run \
    -it \
    --rm \
    --privileged \
    -v /path/to/dir:/tmp/work \
    --entrypoint buildctl-daemonless.sh \
    moby/buildkit:master \
        build \
        --frontend dockerfile.v0 \
        --local context=/tmp/work \
        --local dockerfile=/tmp/work

or

docker run \
    -it \
    --rm \
    --security-opt seccomp=unconfined \
    --security-opt apparmor=unconfined \
    -e BUILDKITD_FLAGS=--oci-worker-no-process-sandbox \
    -v /path/to/dir:/tmp/work \
    --entrypoint buildctl-daemonless.sh \
    moby/buildkit:master-rootless \
        build \
        --frontend \
        dockerfile.v0 \
        --local context=/tmp/work \
        --local dockerfile=/tmp/work

OpenTracing support

BuildKit supports OpenTracing for the buildkitd gRPC API and for buildctl commands. To capture the trace to Jaeger, set the JAEGER_TRACE environment variable to the collection address.

docker run -d -p6831:6831/udp -p16686:16686 jaegertracing/all-in-one:latest
export JAEGER_TRACE=0.0.0.0:6831
# restart buildkitd and buildctl so they know JAEGER_TRACE
# any buildctl command should be traced to http://127.0.0.1:16686/

Running BuildKit without root privileges

Please refer to docs/rootless.md.

Building multi-platform images

Please refer to docs/multi-platform.md.

Contributing

Want to contribute to BuildKit? Awesome! You can find information about contributing to this project in CONTRIBUTING.md.

Owner: Moby
Comments
  • Failed to compute cache key in newer version

    This is a docker issue but it seems to be related to BuildKit only. This is something that was still working in docker ~19.03.10 but stopped functioning in 20.10.0+. I managed to bring down my DockerFile to a minimal repro:

    This works (A.DockerFile):

    FROM php:7.4.13-cli
    
    COPY --from=composer:2.0.8 /usr/bin/composer /usr/local/bin/composer
    

    This also works (B.DockerFile):

    FROM php:7.4.13-cli
    
    COPY --from=mlocati/php-extension-installer /usr/bin/install-php-extensions /usr/bin/
    

    This no longer works (C.DockerFile):

    FROM php:7.4.13-cli
    
    COPY --from=mlocati/php-extension-installer /usr/bin/install-php-extensions /usr/bin/
    COPY --from=composer:2.0.8 /usr/bin/composer /usr/local/bin/composer
    

    Output from running A and C after each other:

    C:\Users\Test>set "DOCKER_BUILDKIT=1" & docker build -f A.Dockerfile .
    [+] Building 3.6s (7/7) FINISHED
     => [internal] load build definition from A.Dockerfile                                                                                                                                                           0.0s
     => => transferring dockerfile: 132B                                                                                                                                                                             0.0s
     => [internal] load .dockerignore                                                                                                                                                                                0.0s
     => => transferring context: 2B                                                                                                                                                                                  0.0s
     => [internal] load metadata for docker.io/library/php:7.4.13-cli                                                                                                                                                2.9s
     => CACHED FROM docker.io/library/composer:2.0.8                                                                                                                                                                 0.0s
     => => resolve docker.io/library/composer:2.0.8                                                                                                                                                                  0.5s
     => CACHED [stage-0 1/2] FROM docker.io/library/php:7.4.13-cli@sha256:c099060944167d20100140434ee13b7c134bc53ae8c0a72e81b8f01c07a1f49d                                                                           0.0s
     => [stage-0 2/2] COPY --from=composer:2.0.8 /usr/bin/composer /usr/local/bin/composer                                                                                                                           0.1s
     => exporting to image                                                                                                                                                                                           0.1s
     => => exporting layers                                                                                                                                                                                          0.0s
     => => writing image sha256:ea6d75bc9ad24e800c8083e9ea6b7774f2bd9610cb0e61b3640058c9c7fe34c6                                                                                                                     0.0s
    
    C:\Users\Test>set "DOCKER_BUILDKIT=1" & docker build -f C.Dockerfile .
    [+] Building 1.0s (8/8) FINISHED
     => [internal] load build definition from C.Dockerfile                                                                                                                                                           0.0s
     => => transferring dockerfile: 221B                                                                                                                                                                             0.0s
     => [internal] load .dockerignore                                                                                                                                                                                0.0s
     => => transferring context: 2B                                                                                                                                                                                  0.0s
     => [internal] load metadata for docker.io/library/php:7.4.13-cli                                                                                                                                                0.2s
     => FROM docker.io/mlocati/php-extension-installer:latest                                                                                                                                                        0.0s
     => => resolve docker.io/mlocati/php-extension-installer:latest                                                                                                                                                  0.0s
     => => sha256:ccf3a05d8241580ad9d2a6c884a735bb248e90942ab23e0f8197f851a999ddac 526B / 526B                                                                                                                       0.0s
     => CACHED FROM docker.io/library/composer:2.0.8                                                                                                                                                                 0.0s
     => [stage-0 1/3] FROM docker.io/library/php:7.4.13-cli@sha256:c099060944167d20100140434ee13b7c134bc53ae8c0a72e81b8f01c07a1f49d                                                                                  0.0s
     => CACHED [stage-0 2/3] COPY --from=mlocati/php-extension-installer /usr/bin/install-php-extensions /usr/bin/                                                                                                   0.0s
     => ERROR [stage-0 3/3] COPY --from=composer:2.0.8 /usr/bin/composer /usr/local/bin/composer                                                                                                                     0.0s
    ------
     > [stage-0 3/3] COPY --from=composer:2.0.8 /usr/bin/composer /usr/local/bin/composer:
    ------
    failed to compute cache key: "/usr/bin/composer" not found: not found
    

    This doesn't happen consistently in my build; sometimes everything builds fine and there are no issues. I'm using Windows 10 (20H2) and the latest version of Docker Desktop, which includes Docker version 20.10.2, build 2291f61, but I have also seen this happen on Linux with the same version.

  • Dockerfile heredocs

    relates to https://github.com/moby/moby/issues/34423

    As mentioned in #2121, I've been making progress towards implementing heredocs in Dockerfiles, and thought it might be time to open a PR for it :tada:

    I've essentially got all the functionality I think we'd need before wanting to merge, though I'm sure there's some fixes/tests to write before that.

    Things that definitely need resolving before a merge is really possible:

    • [x] Gate the feature behind a build tag, as suggested by @tonistiigi
    • [x] Warn/error/do something if a heredoc is used in a place it's not expected (e.g. an ENV command)
    • [x] Handle RUN heredocs on Windows more elegantly (doesn't look particularly doable with cmd, so the current hacky approach might be the best?)
    • [x] Tests! Currently only the parsing stages are tested, so we need some more complex integration tests.

    I'd really appreciate any feedback anyone has on the current design and implementation!

  • Add OCI source

    This is an early draft. As discussed with @tonistiigi and @sipsma, this extends source/containerimage/ so it can support "pulling" from 2 distinct places: an actual registry (as now) and an OCI layout. In addition to the docker-image: scheme, it also recognizes an oci-layout: scheme.

    It uses the same source/containerimage/ directory, just adding a typed constant so that it can choose if it is oci-layout from a directory, or from a registry. It then just selects which Pool.Resolver to use.

    Push() is disabled.

    Some things I haven't yet addressed:

    • buildkit often runs inside a container, so how is the provided directory mapped? This already had to be dealt with for local: scheme, so I will look there as well.
    • it is a little "conflicted" in that it expects a @sha256:<hash> but also knows how to read the OCI layout index.json. I will need to pick between supporting image names with optional hashes, or requiring hashes and eliminating name resolution.

    At this point, just ready for overall directional input.

  • buildkit + gcr.io private repos (credHelpers) do not stack

    Docker 18.09-ce here.

    I have FROM directive in my dockerfile pointing to a private registry:

    FROM gcr.io/...
    

    Running DOCKER_BUILDKIT=1 docker build . with this Dockerfile never finishes (after 5 minutes I hit CTRL-C). Without buildkit it builds fine in seconds.

    My ~/.docker/config.json is as follows:

    {
      "credHelpers": {
        "us.gcr.io": "gcloud",
        "staging-k8s.gcr.io": "gcloud",
        "asia.gcr.io": "gcloud",
        "gcr.io": "gcloud",
        "marketplace.gcr.io": "gcloud",
        "eu.gcr.io": "gcloud"
      }
    }
    

    After waiting a long time and pressing CTRL-C, the following error is printed (exact image names scrambled with ...):

    ------
     > [stage-1 1/4] FROM gcr.io/...:
    ------
    failed to copy: httpReaderSeeker: failed open: unexpected status code https://gcr.io/v2/...: 403 Forbidden
    

    Bug?

  • Support schema1 push for quay?

    Astonishingly Quay.io still does not support schema2: https://github.com/bazelbuild/rules_docker/issues/102

    DEBU[0011] do request                                    digest=sha256:eb300a827decea6de23bda3e4ec5a60dcb3fb59bd01792fe3b54c08c10f68214 mediatype="application/vnd.docker.distribution.manifest.v2+json" request.headers=map[Content-Type:[application/vnd.docker.distribution.m
    anifest.v2+json]] request.method=PUT size=1245 url="https://quay.io/v2/****/****/manifests/latest"
    DEBU[0012] fetch response received                       digest=sha256:eb300a827decea6de23bda3e4ec5a60dcb3fb59bd01792fe3b54c08c10f68214 mediatype="application/vnd.docker.distribution.manifest.v2+json" response.headers=map[Server:[nginx/1.13.12] Date:[Thu, 24 May 2018 03:1
    1:16 GMT] Content-Type:[application/json] Content-Length:[131]] size=1245 status="415 Unsupported Media Type" url="https://quay.io/v2/****/****/manifests/latest"
    ERRO[0012] /moby.buildkit.v1.Control/Solve returned error: unexpected status: 415 Unsupported Media Type
    

    Do we want to support pushing as schema1?

    I hesitate to add support for such a deprecated format, but probably we should do so if there are also other registry implementations that lack support for schema2.

    cc @alexellis cc @dmcgowan @stevvooe

  • always display image hashes

    It's tough to debug docker building when I can't just get into the previously successful intermediate build image and run the next command manually...

    docker run -it --rm hash_id bash
    # execute the next RUN line here manually.
    

    I would therefore argue that image hashes should always display, just like they do in the current docker.

  • Bridge network

    Adds support for bridge networking for the containerd & runc workers. Fixes #28. Needs review/suggestions on the temporary interface naming.

    NOTE: Still "docker0" is hard-coded, need to provide user input.

    Signed-off-by: Kunal Kushwaha [email protected]

  • Cache manifest lists can't be exported to gcr

    Related to https://github.com/moby/buildkit/issues/720 that currently has a gcr specific workaround in the pull code.

    When exporting a manifest list that contains the cache metadata for a build (e.g. with mode max), the upload fails in gcr with a 400 error.

    There was a report in slack that it also failed with 404 https://dockercommunity.slack.com/archives/C7S7A40MP/p1566224768282100 (hence similarities with #720) but couldn't repro that.

    #7 ERROR: error writing manifest blob: failed commit on ref "sha256:813b455d58cf597f96c8f20d04ae670127a94cd4786f14da09fef88e97bab090": unexpected status: 400 Bad Request
    ------
     > exporting cache:
    ------
    error: failed to solve: rpc error: code = Unknown desc = error writing manifest blob: failed commit on ref "sha256:813b455d58cf597f96c8f20d04ae670127a94cd4786f14da09fef88e97bab090": unexpected status: 400 Bad Request
    

    After pushing all the blobs manifest list push fails with:

    request:
    
    PUT https://gcr.io/v2/.../hello/manifests/cache HTTP/2.0
                            ← 400 application/json 190b 197ms
    
    :authority:       gcr.io
    content-type:     application/vnd.docker.distribution.manifest.list.v2+json
    content-length:   935
    accept-encoding:  gzip
    user-agent:       Go-http-client/2.0
    authorization:    Bearer ....
    
    {"schemaVersion":2,"mediaType":"application/vnd.docker.distribution.manifest.list.v2+json","manifests":[{"mediaType":"application/vnd.docker.image.rootfs.diff.tar.gzip","digest":"sha256:0503825856099e6adb39c8297af09547f69684b7016b7f3680ed801aa310baaa","size":2789742,"annotations":{"buildkit/createdat":"2019-08-14T17:52:26.789506223-07:00","containerd.io/uncompressed":"sha256:1bfeebd65323b8ddf5bd6a51cc7097b72788bc982e9ab3280d53d3c613adffa7"}},{"mediaType":"application/vnd.docker.image.rootfs.diff.tar.gzip","digest":"sha256:a8ce4bba72fb43cbaeb05eca5d6d682696b915c53a3519d0f00f5aec063c0ae9","size":219,"annotations":{"buildkit/createdat":"2019-08-19T14:39:20.018002388-07:00","containerd.io/uncompressed":"sha256:72e39bafb519d4c8b9d12597a538667085627a6b8c3c3f085d319d3ef6955b4f"}},{"mediaType":"application/vnd.buildkit.cacheconfig.v0","digest":"sha256:02656190e9941165d11c1ac96d683c3da0cec10714e01ed4de4b6ded7e8b7c49","size":565}]}
    
    
    response:
    
    docker-distribution-api-versio  registry/2.0
    n:
    content-type:                   application/json
    content-length:                 190
    content-encoding:               gzip
    date:                           Mon, 19 Aug 2019 21:39:36 GMT
    server:                         Docker Registry
    cache-control:                  private
    x-xss-protection:               0
    x-frame-options:                SAMEORIGIN
    alt-svc:                        quic=":443"; ma=2592000; v="46,43,39"
    [decoded gzip] JSON
    {
        "errors": [
            {
                "code": "MANIFEST_INVALID",
                "message": "Failed to parse manifest for request \"/v2/.../hello/manifests/cache\": Failed to deserialize application/vnd.docker.distribution.manifest.list.v2+json."
            }
        ]
    }
    

    @mattmoor @dmcgowan

  • DOCKER_BUILDKIT=1 prevents custom networks using docker build --network

    Not sure if this is a Docker CLI or BuildKit issue, but when using BuildKit by setting DOCKER_BUILDKIT=1 for a docker build on Docker v18.09, a custom Docker network is no longer recognized.

    So, for example, the command "docker build --network demo-net ." now returns:

    Error response from daemon: network mode "demo-net" not supported by buildkit

    Seemingly this means that only specific network modes are supported, not custom Docker networks.

  • CNI network for workers

    This PR enables networking for BuildKit workers using CNI plugins. The implementation uses the default CNI conf files from the standard directories.

    I'd like to hear feedback on this.

    NOTE: Options for providing custom folders (CNI binaries & conf) are not yet supported in the CLI.

  • Add IncludePatterns and ExcludePatterns options for Copy

    Allow include and exclude patterns to be specified for the "copy" op, similarly to "local".

    Depends on https://github.com/tonistiigi/fsutil/pull/101

    cc @hinshun

  • [remotecache/s3] fix: "updated_at" in http header can't pass some http gateway, "_" -> "-"

    The key updated_at in S3 metadata is converted to x-amz-meta-updated_at in the HTTP header, but headers containing "_" cannot pass through some HTTP gateways, such as the Tencent Cloud Object Storage gateway.

  • progress: fix clean context cancelling

    @sipsma I'm not sure if this is the best solution, so feel free to open an alternative, but I discovered that the current state is not quite correct.

    When case <-ctx.Done(): fires, the function doesn't return immediately; it sets onFinalStatus so that it returns after the iteration. But it then calls manager.Status(ctx), which is guaranteed to error because we already know the context is closed. Because the error handling is relaxed, the function doesn't fail, but it still logs an error message every time.

    Signed-off-by: Tonis Tiigi [email protected]

  • client: add extra debug to tests

    I was trying to debug #3401 but couldn't reproduce it or understand how the race is possible, as long as tests do not have inner parallelization. Adding more debug so that if it happens again we have some extra data.

    Signed-off-by: Tonis Tiigi [email protected]

  • [0.11 backport] vendor: docker and docker/cli v23.0.0-rc.1

    • backport of https://github.com/moby/buildkit/pull/3435

    vendor: github.com/containerd/containerd v1.6.14

    full diff: https://github.com/containerd/containerd/compare/v1.6.13...v1.6.14

    vendor: github.com/docker/docker v23.0.0-rc.1

    full diff: https://github.com/docker/docker/compare/v23.0.0-beta.1...v23.0.0-rc.1

    vendor: github.com/docker/cli v23.0.0-rc.1

    full diff: https://github.com/docker/cli/compare/v23.0.0-beta.1...v23.0.0-rc.1

  • [v0.8 backport] frontend: fix testMultiStageImplicitFrom to account for busybox changes

    backport of

    • https://github.com/moby/buildkit/pull/3269
    • https://github.com/moby/buildkit/pull/3436

    It looks like there are some changes between busybox:1.34.0 and later versions; version 1.34.0 of the image did not have a /usr/bin directory (only /usr/sbin):

    docker run --rm -it busybox:1.34.0 ls -al /usr/
    total 12
    drwxr-xr-x    3 root     root          4096 Sep 13  2021 .
    drwxr-xr-x    1 root     root          4096 Dec 27 14:45 ..
    drwxr-xr-x    2 daemon   daemon        4096 Sep 13  2021 sbin
    

    But 1.34.1 and up do:

    docker run --rm -it busybox:1.34.1 ls -al usr/
    total 16
    drwxr-xr-x    4 root     root          4096 Dec 21 18:28 .
    drwxr-xr-x    1 root     root          4096 Dec 27 14:44 ..
    drwxr-xr-x    2 root     root          4096 Dec 21 18:28 bin
    drwxr-xr-x    2 daemon   daemon        4096 Dec 21 18:28 sbin
    

    It's not immediately apparent what caused this change, or if it's in busybox itself, or in the official image only; https://github.com/mirror/busybox/compare/1_34_0...1_34_1

    But either way, this change caused a test to fail:

    sandbox.go:238: time="2022-12-27T13:45:25.294022820Z" level=debug msg="> creating 4gr5bno8rj7l3k7h9jxe3jhal [/bin/sh -c mkdir /usr/bin && echo -n foo > /usr/bin/go]" span="[golang 2/2] RUN mkdir /usr/bin && echo -n foo > /usr/bin/go"
    sandbox.go:238: time="2022-12-27T13:45:25.433886983Z" level=debug msg="sandbox set key processing took 70.062631ms for container 5b4o358g2ryquk4s6ami38gqo"
    sandbox.go:238: mkdir: can't create directory '/usr/bin': File exists
    

    (cherry picked from commit 34f9898f3112cac7c899e41d45ecdeb1502c3131)

  • vendor: docker and docker/cli v23.0.0-rc.1

    vendor: github.com/containerd/containerd v1.6.14

    full diff: https://github.com/containerd/containerd/compare/v1.6.13...v1.6.14

    vendor: github.com/docker/docker v23.0.0-rc.1

    full diff: https://github.com/docker/docker/compare/v23.0.0-beta.1...v23.0.0-rc.1

    vendor: github.com/docker/cli v23.0.0-rc.1

    full diff: https://github.com/docker/cli/compare/v23.0.0-beta.1...v23.0.0-rc.1
