🌍 Earthly is a build automation tool for the container era

Earthly


🐳 Build anything via containers - build images or standalone artifacts (binaries, packages, arbitrary files)

🛠 Programming language agnostic - allows use of language-specific build tooling

🔁 Repeatable builds - does not depend on user's local installation: runs the same locally, as in CI

⛓ Parallelism that just works - build in parallel without special considerations

🏘 Mono and Poly-repo friendly - ability to split the build definitions across vast project hierarchies

💾 Shared caching - share build cache between CI runners

🔀 Multi-platform - build for multiple platforms in parallel


🌍 Earthly is a build automation tool for the container era. It allows you to execute all your builds in containers. This makes them self-contained, repeatable, portable and parallel. You can use Earthly to create Docker images and artifacts (eg binaries, packages, arbitrary files).


Why Use Earthly?

🔁 Reproduce CI failures

Earthly builds are self-contained, isolated and repeatable. Regardless of whether Earthly runs in your CI or on your laptop, there is a degree of guarantee that the build will run the same way. This allows for faster iteration on the build scripts and easier debugging when something goes wrong. No more git commit -m "try again".

🤲 Builds that run the same for everyone

Repeatable builds also mean that your build will run the same on your colleagues' laptops without any additional project-specific or language-specific setup. This fosters better developer collaboration and mitigates works-for-me types of issues.

🚀 From zero to working build in minutes

Jump from project to project with ease, regardless of the language they are written in. Running the project's test suites is simply a matter of running an Earthly target (without fiddling with project configuration to make it compile and run on your system). Contribute across teams with confidence.

📦 Reusability

A simple, yet powerful import system allows for reusability of builds across directories or even across repositories. Importing other builds does not have hidden environment-specific implications - it just works.

❤️ It's like Makefile and Dockerfile had a baby

Taking some of the best ideas from Makefiles and Dockerfiles, Earthly combines two build specifications into one.



Where Does Earthly Fit?

Earthly fits between language-specific tooling and the CI

Earthly is meant to be used both on your development machine and in CI. It can run on top of popular CI systems (like Jenkins, Circle, GitHub Actions). It is typically the layer between language-specific tooling (like maven, gradle, npm, pip, go build) and the CI build spec.



How Does It Work?

In short: containers, layer caching and complex build graphs!

Earthly executes builds in containers, where execution is isolated. The dependencies of the build are explicitly specified in the build definition, thus making the build self-sufficient.

We use a target-based system to help users break-up complex builds into reusable parts. Nothing is shared between targets, other than clearly declared dependencies. Nothing shared means no unexpected race conditions. In fact, the build is executed in parallel whenever possible, without any need for the user to take care of any locking or unexpected environment interactions.
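
For illustration, here is a minimal Earthfile sketch (target names and commands are hypothetical, not taken from the Earthly docs) of two targets that share nothing beyond the base image, so Earthly is free to run them in parallel:

# Earthfile (illustrative sketch)
FROM alpine:3.13
WORKDIR /work

all:
  BUILD +unit-test
  BUILD +integration-test

unit-test:
  COPY . .
  # Placeholder for a real unit-test command.
  RUN echo "running unit tests"

integration-test:
  COPY . .
  # Placeholder for a real integration-test command.
  RUN echo "running integration tests"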

ℹ️ Note

Earthfiles might seem very similar to Dockerfile multi-stage builds. In fact, the same technology is used underneath. However, a key difference is that Earthly is designed to be a general purpose build system, not just a Docker image specification. Read more about how Earthly is different from Dockerfiles.


Installation

See installation instructions.

To build from source, check the contributing page.



Quick Start

Here are some resources to get you started with Earthly:

  • The full documentation
  • Reference pages

A simple example (for Go)

# Earthfile
FROM golang:1.15-alpine3.13
RUN apk --update --no-cache add git
WORKDIR /go-example

all:
  BUILD +lint
  BUILD +docker

build:
  COPY main.go .
  RUN go build -o build/go-example main.go
  SAVE ARTIFACT build/go-example AS LOCAL build/go-example

lint:
  RUN go get golang.org/x/lint/golint
  COPY main.go .
  RUN golint -set_exit_status ./...

docker:
  COPY +build/go-example .
  ENTRYPOINT ["/go-example/go-example"]
  SAVE IMAGE go-example:latest

// main.go
package main

import "fmt"

func main() {
  fmt.Println("hello world")
}

Invoke the build using earthly +all.

Demonstration of a simple Earthly build
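
The individual targets defined in the example above can also be invoked on their own, for instance:

earthly +lint     # run only the linter
earthly +build    # output build/go-example locally
earthly +docker   # produce the go-example:latest image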

Examples for other languages are available on the examples page.



Features

📦 Modern import system

Earthly can be used to reference and build targets from other directories or even other repositories. For example, if we wanted to build an example target from the github.com/earthly/earthly repository, we could issue

# Try it yourself! No need to clone.
earthly github.com/earthly/earthly/examples/go:main+docker
# Run the resulting image.
docker run --rm go-example:latest

🔨 Reference other targets using +

Use + to reference other targets and create complex build inter-dependencies.

Target and artifact reference syntax

Examples

  • Same directory (same Earthfile)

    BUILD +some-target
    FROM +some-target
    COPY +some-target/my-artifact ./
  • Other directories

    BUILD ./some/local/path+some-target
    FROM ./some/local/path+some-target
    COPY ./some/local/path+some-target/my-artifact ./
  • Other repositories

    BUILD github.com/someone/someproject:v1.2.3+some-target
    FROM github.com/someone/someproject:v1.2.3+some-target
    COPY github.com/someone/someproject:v1.2.3+some-target/my-artifact ./

💾 Caching that works the same as Docker builds

Demonstration of Earthly's caching

Cut down build times in CI through Shared Caching.
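
As a rough sketch, a CI invocation that reads from and writes to a shared remote cache might look like the following (the registry path here is only a placeholder):

earthly --ci --remote-cache=registry.example.com/myproject/cache:latest +build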

🛠 Multi-platform support

Build for multiple platforms in parallel.

all:
    BUILD \
        --platform=linux/amd64 \
        --platform=linux/arm64 \
        --platform=linux/arm/v7 \
        --platform=linux/arm/v6 \
        +build

build:
    FROM alpine:3.13
    CMD ["uname", "-m"]
    SAVE IMAGE multiplatform-image

⛓ Parallelization that just works

Whenever possible, Earthly automatically executes targets in parallel.

Demonstration of Earthly's parallelization

🤲 Make use of build tools that work everywhere

No need to ask your team to install protoc, a specific version of Python, Java 1.6, or the .NET Core ecosystem. You install the tooling only once, in your Earthfile, and it works for everyone. Or even better, you can just make use of the rich Docker Hub ecosystem.

FROM golang:1.15-alpine3.13
WORKDIR /proto-example

proto:
  FROM namely/protoc-all:1.29_4
  COPY api.proto /defs
  RUN --entrypoint -- -f api.proto -l go
  SAVE ARTIFACT ./gen/pb-go /pb AS LOCAL pb

build:
  COPY go.mod go.sum .
  RUN go mod download
  COPY +proto/pb pb
  COPY main.go ./
  RUN go build -o build/proto-example main.go
  SAVE ARTIFACT build/proto-example

See full example code.

🔑 Cloud secrets support built-in

Secrets are never stored within an image's layers and they are only available to the commands that need them.

earthly set /user/github/token 'shhh...'

release:
  RUN --push --secret GITHUB_TOKEN=+secrets/user/github/token github-release upload file.bin
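
As a usage sketch, the RUN --push command above belongs to the push phase, which is typically only executed when the build is invoked with the --push flag:

earthly --push +release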


FAQ

How is Earthly different from Dockerfiles?

Dockerfiles were designed for specifying the make-up of Docker images and that's where Dockerfiles stop. Earthly takes some key principles of Dockerfiles (like layer caching), but expands on the use-cases. For example, Earthly can output regular artifacts, run unit and integration tests and also create several Docker images at a time - all of which are outside the scope of Dockerfiles.

It is possible to use Dockerfiles in combination with other technologies (eg Makefiles or bash files) in order to solve such use-cases. However, these combinations are difficult to parallelize and difficult to scale across repositories, as they lack a robust import system, and they often vary in style from one team to another. Earthly does not have these limitations, as it was designed as a general-purpose build system.

As an example, Earthly introduces a richer target, artifact and image referencing system, which allows for better reuse in complex builds spanning a single large repository or multiple repositories. Because Dockerfiles are only meant to describe one image at a time, such features are outside the scope of applicability of Dockerfiles.
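
As an illustrative sketch of the point above (all target, file and image names are hypothetical), a single Earthfile can save a binary locally, run tests, and produce more than one image:

# Earthfile (illustrative sketch)
FROM golang:1.15-alpine3.13
WORKDIR /app

all:
  BUILD +test
  BUILD +server-image
  BUILD +cli-image

build:
  COPY . .
  RUN go build -o out/app .
  SAVE ARTIFACT out/app AS LOCAL out/app

test:
  COPY . .
  RUN go test ./...

server-image:
  COPY +build/app /usr/local/bin/app
  ENTRYPOINT ["/usr/local/bin/app", "serve"]
  SAVE IMAGE example-server:latest

cli-image:
  COPY +build/app /usr/local/bin/app
  ENTRYPOINT ["/usr/local/bin/app"]
  SAVE IMAGE example-cli:latest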

How do I tell apart classical Dockerfile commands from Earthly commands?

Check out the Earthfile reference doc page. It has all the commands there and it specifies which commands are the same as Dockerfile commands and which are new.

Can Earthly build Dockerfiles?

Yes! You can use the command FROM DOCKERFILE to inherit the commands in an existing Dockerfile.

build:
  FROM DOCKERFILE .
  SAVE IMAGE some-image:latest

You may also optionally port your Dockerfiles to Earthly entirely. Translating Dockerfiles to Earthfiles is usually a matter of copy-pasting and making small adjustments. See the getting started page for some Earthfile examples.
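
As a rough illustration (image and file names are hypothetical), a small Dockerfile usually maps almost line-for-line onto an Earthfile target, with a SAVE IMAGE added at the end:

# Earthfile target roughly equivalent to a Dockerfile consisting of
# FROM alpine:3.13 / COPY script.sh /script.sh / CMD ["/script.sh"]
docker:
  FROM alpine:3.13
  COPY script.sh /script.sh
  CMD ["/script.sh"]
  SAVE IMAGE my-image:latest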

How is Earthly different from Bazel?

Bazel is a build tool developed by Google for the purpose of optimizing speed, correctness and reproducibility of their internal monorepo codebase. Earthly draws inspiration from some of the principles of Bazel (mainly the idea of repeatable builds), but it is different in a few key ways:

  • Earthly does not replace language-specific tools, like Maven, Gradle, Webpack etc. Instead, it leverages and integrates with them. Adopting Bazel usually means that all build files need to be completely rewritten. This is not the case with Earthly as it mainly acts as the glue between builds.
  • The learning curve of Earthly is more accessible, especially if the user already has experience with Dockerfiles. Bazel, on the other hand, introduces some completely new concepts.
  • Bazel has a purely descriptive specification language. Earthly is a mix of descriptive and imperative language.
  • Bazel uses tight control of compiler toolchain to achieve true hermetic builds, whereas Earthly uses containers and well-defined inputs.

Overall, compared to Bazel, Earthly sacrifices some correctness and reproducibility in favor of significantly better usability and composability with existing open-source technologies.



Contributing

  • Please report bugs as GitHub issues.
  • Join us on Slack!
  • Questions via GitHub issues are welcome!
  • PRs welcome! But please give a heads-up in a GitHub issue before starting work. If there is no GitHub issue for what you want to do, please create one.
  • To build from source, check the contributing page.


Licensing

Earthly is licensed under the Business Source License 1.1. See licenses/BSL for more information.

Comments
  • Some DIND scenarios fail when using cgroups v2

    Notably, podman and kind have been documented to fail on systems using cgroups v2. Here is all the relevant information:

    • Docker updated to use cgroups v2 in Docker Desktop at version 4.3.0. I do not know which docker CLI version this correlates to.
    • Our unit tests that use podman appear to be trying to use cgroups v1. We may need to update podman for these unit tests since docker is on v2 (and podman is inside docker here, for our unit tests), or add compatibility flags.
    • To turn cgroups v2 off/on, I use the following kernel param: systemd.unified_cgroup_hierarchy=0 (Pop!_OS 21.10).
    • v2 is a unified, single-root hierarchy, versus the v1 multi-root approach.
    • You can examine currently running cgroups via systemd-cgtop.
    • kind also does not run when using v2 inside a WITH DOCKER. I haven't dug too much into regular docker, but I assume it's only partially functional in this case.
    • You can check which cgroups versions you have by running grep cgroup /proc/filesystems.
  • SSL/TLS trust issues: unable to add custom certificates or disable verification

    Hi, I'm working in a corporate environment where I need to specify the certificates which should be used for verification of remote connections. Unfortunately, right now I can't pull from internal registries since I get the following error:

    ...snip...
         r/r/ubi8:latest | --> Load metadata linux/amd64
         r/r/ubi8:latest | WARN: (Load metadata linux/amd64) failed to do request: Head https://REGISTRY/PATH: x509: certificate signed by unknown authority
    Error: failed to do request: Head https://REGISTRY/PATH: x509: certificate signed by unknown authority
    

    Since the registry is contacted every time regardless of local image presence (outlined further here in #345), it appears that I have a blocker to adopting Earthly.

    How can I provide more information to troubleshoot this? I would be happy to try some things out, as well. I really appreciate all of your time and efforts.

    Thanks, +Jonathan

  • SAVE IMAGE is slow, even when there's no work to be done

    I was observing that for highly optimized builds the slowest part can be saving images. For example, here’s a repo where earth +all takes 12s if everything is cached (like I run earth +all twice). Yet if I comment the SAVE IMAGE lines out the total time drops to 2s. This implies that SAVE IMAGE is doing a lot of work, even when nothing has changed.

    Is there anything I can do to speed up SAVE IMAGE in instances like this? I’m surprised that SAVE IMAGE does anything if the image hasn’t changed; is it possible for it to do some more sophisticated content negotiation with the layers?

    After talking with @agbell in Slack I hypothesized that it might not be possible for Earthly to know what images/layers the host has. This is all conjecture on my part, but:

    If Earth is running in a container then it doesn’t know the state of the registry on the host machine, and what layers it has. Its only option is to export the entire image to the host, which on a Mac could be slow because containers on a Mac are actually running in a VM.

    Maybe if Earth could mount/be aware of the host docker registry it could just do docker push? This reminds me of similar problems that are being solved in the Kubernetes local cluster space https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry

  • Cache export hangs on target with just `BUILD` commands, and no `FROM` in the base target

    I am using earthly with GitHub Actions and the following earthly command:

    earthly --ci --remote-cache=ghcr.io/notin-dev/cache:cache --push +build-all
    

    The entire build is taking about 8 minutes to complete, but the cache step is taking over 20 minutes, and I get the following output:

    cache | --> exporting cache
    cache | [          ]   0% preparing build cache for export
    cache | [██████████] 100% preparing build cache for export
    ongoing | cache (1 minute ago)
    ongoing | cache (2 minutes ago)
    ...
    ongoing | cache (17 minutes ago)
    

    I initially thought that the issue was with heavy usage of SAVE IMAGE --cache-hint, and so I removed every occurrence of that in the repository, but even then the cache step is taking forever to complete.

    I'm not sure if I'm doing something wrong here.

    This is my GitHub Actions file:

    name: Build Project
    on:
      pull_request:
        branches: [main]
      push:
        branches:
          - main
    concurrency:
      group: ${{ github.head_ref || github.ref }}
      cancel-in-progress: true
    jobs:
      build_project:
        runs-on: ubuntu-latest
        permissions:
          contents: read
          packages: write
        steps:
          - name: Checkout repository
            uses: actions/checkout@v2
          - name: Logging into Container Registry
            uses: docker/login-action@v2
            with:
              registry: ghcr.io
              username: ${{ github.actor }}
              password: ${{ secrets.GITHUB_TOKEN }}
          - name: Installing earthly
            run: sudo /bin/sh -c 'wget https://github.com/earthly/earthly/releases/download/v0.6.14/earthly-linux-amd64 -O /usr/local/bin/earthly && chmod +x /usr/local/bin/earthly'
          - name: Building project
            run: earthly --ci --remote-cache=ghcr.io/notin-dev/cache:cache --push +build-all
    
  • Earthly image --push to Artifactory fails with 401

    I'm using Earthly with the --push flag to push images to Artifactory and upon doing so Earthly returns the error:

                  output | [          ] pushing layers ... 0%
                 ongoing | output (15 seconds ago)
                  output | [██████████] pushing layers ... 100%
                  output | WARN: (exporting outputs) failed commit on ref "layer-sha256:b8c8466005032e78723c242ae920cf0eb59f27af010c8646dea7a16f4dda57db": unexpected status: 401 Unauthorized
    ================================ SUCCESS [main] ================================
    Error: failed commit on ref "layer-sha256:b8c8466005032e78723c242ae920cf0eb59f27af010c8646dea7a16f4dda57db": unexpected status: 401 Unauthorized
    

    and the earthly-buildkitd logs show:

    time="2021-05-28T10:12:32Z" level=error msg="/moby.buildkit.v1.Control/Solve returned error: rpc error: code = Unknown desc = failed commit on ref \"layer-sha256:b8c8466005032e78723c242ae920cf0eb59f27af010c8646dea7a16f4dda57db\": unexpected status: 401 Unauthorized\n"
    err EOF
    

    I issue a docker login before attempting the Earthly build.

    Using docker to push the image works, as does docker buildx build --push, so Artifactory is accepting image uploads from other sources; I just can't get it to work directly from Earthly.

    I have tried this with some other registries and they all succeed:

    • Nexus
    • Docker v2 registry (running locally)
    • JFrog Container Registry

    Would you have any suggestions for what could be wrong?

    This is a corporate Artifactory server so I don't currently have logs from it but I am trying to get them to see if they reveal anything.

    Thanks!

  • bug: docker authentication expires after a few hours causing 401 errors while building

    I have an Earthfile that extends a private image. After a few hours of using earthly, I start getting 401 errors. I'm still able to pull the image with docker pull, and running docker login doesn't have any effect.

    I am able to get it working again by restarting the earthly container manually, or by changing the arguments I run with.

    $ earthly -i +test
         buildkitd | Found buildkit daemon as docker container (earthly-buildkitd)
         quay.io/<org>/<image>:<tag> | --> Load metadata linux/amd64
         quay.io/<org>/<image>:<tag> | WARN: (Load metadata linux/amd64) unexpected status code [manifests ci]: 401 UNAUTHORIZED
    Error: unexpected status code [manifests ci]: 401 UNAUTHORIZED
    
    $ docker login quay.io                                              
    Authenticating with existing credentials...
    Login Succeeded
    
    $ docker pull quay.io/<org>/<image>:<tag>
    <tag>: Pulling from <org>/<image>
    Digest: sha256:fcacc41495f6a7655032299d0ea83de0a9e577b64eaaaab8e4bf53a8f81bddb3
    Status: Image is up to date for quay.io/<org>/<image>:<tag>
    quay.io/<org>/<image>:<tag>
    
    $ earthly -i +test
         buildkitd | Found buildkit daemon as docker container (earthly-buildkitd)
         quay.io/<org>/<image>:<tag> | --> Load metadata linux/amd64
         quay.io/<org>/<image>:<tag> | WARN: (Load metadata linux/amd64) unexpected status code [manifests ci]: 401 UNAUTHORIZED
    Error: unexpected status code [manifests ci]: 401 UNAUTHORIZED
    
    $ earthly +test
                       buildkitd | Found buildkit daemon as docker container (earthly-buildkitd)
                       buildkitd | Settings do not match. Restarting buildkit daemon with updated settings...
                       buildkitd | ...Done
     quay.io/<org>/<image>:<tag> | --> Load metadata linux/amd64
                           +base | --> FROM quay.io/<org>/<image>:<tag>
                         context | --> local context .
                            +base | [██████████] resolve quay.io/<org>/<image>:<tag>@sha256:fcacc41495f6a7655032299d0ea83de0a9e577b64eaaaab8e4bf53a8f81bddb3 ... 100%
                          context | [██████████] transferring .: ... 100%
    

    This documentation suggests that Earthly should be able to inherit credentials from the host. This page doesn't contain anything special for quay.

    Is it possible I'm missing something? Or is this unintended behavior?

  • panic: failed to get edge

    I've recently upgraded to Earthly 0.5.13 and our CI jobs that run multiple Earthly targets in parallel have started failing with errors such as this:

    # ...other yarn install logs...
    ./p/sometarget+dependencies | [5/5] Building fresh packages...
    ./p/sometarget+dependencies | success Saved lockfile.
    ./p/sometarget+dependencies | Done in 102.14s.
    ./p/sometarget+dependencies | yarn cache v1.22.0
    ./p/sometarget+dependencies | success Cleared cache.
    ./p/sometarget+dependencies | Done in 6.45s.
    =================================Buildkit Logs==================================
    Error: transport is closing
    It seems that buildkitd is shutting down or it has crashed. You can report crashes at https://github.com/earthly/earthly/issues/new.
    starting earthly-buildkit with EARTHLY_GIT_HASH=a7f082f8fade1badbee0a221c9a9da2317449e4c BUILDKIT_BASE_IMAGE=github.com/earthly/buildkit:c1749dff2545b0202fc15f33eaa3278b1aa8803e+build
    BUILDKIT_ROOT_DIR=/tmp/earthly/buildkit
    CACHE_SIZE_MB=150000
    EARTHLY_ADDITIONAL_BUILDKIT_CONFIG=
    CNI_MTU=1500
    ======== CNI config ==========
    {
    	"cniVersion": "0.3.0",
    	"name": "buildkitbuild",
    	"type": "bridge",
    	"bridge": "cni0",
    	"isGateway": true,
    	"ipMasq": true,
    	"mtu": 1500,
    	"ipam": {
    		"type": "host-local",
    		"subnet": "172.30.0.0/16",
    		"routes": [
    			{ "dst": "0.0.0.0/0" }
    		]
    	}
    }
    ======== End CNI config ==========
    ======== Buildkitd config ==========
    debug = false
    root = "/tmp/earthly/buildkit"
    insecure-entitlements = [ "security.insecure" ]
    [worker.oci]
      enabled = true
      snapshotter = "auto"
      gc = true
      networkMode = "cni"
      cniBinaryPath = "/usr/libexec/cni"
      cniConfigPath = "/etc/cni/cni-conf.json"
        # Please note the required indentation to fit in buildkit.toml.template accordingly.
      # 1/100 of total cache size.
      gckeepstorage = 1500000000
      [[worker.oci.gcpolicy]]
        # 1/10 of total cache size.
        keepBytes = 15000000000
        filters = [ "type==source.local", "type==source.git.checkout"]
      [[worker.oci.gcpolicy]]
        all = true
        # Cache size MB with 6 zeros, to turn it into bytes.
        keepBytes = 150000000000
    ======== End buildkitd config ==========
    Detected container architecture is x86_64
    starting shellrepeater
    time="2021-05-27T14:06:14Z" level=info msg="auto snapshotter: using overlayfs"
    time="2021-05-27T14:06:29Z" level=info msg="found worker \"7dl0od0h2nmn3awn6fgkr04er\", labels=map[org.mobyproject.buildkit.worker.executor:oci org.mobyproject.buildkit.worker.hostname:9a81ae26219d org.mobyproject.buildkit.worker.snapshotter:overlayfs], platforms=[linux/amd64 linux/386]"
    time="2021-05-27T14:06:29Z" level=warning msg="skipping containerd worker, as \"/run/containerd/containerd.sock\" does not exist"
    time="2021-05-27T14:06:29Z" level=info msg="found 1 workers, default=\"7dl0od0h2nmn3awn6fgkr04er\""
    time="2021-05-27T14:06:29Z" level=warning msg="currently, only the default worker can be used."
    time="2021-05-27T14:06:29Z" level=info msg="running server on /run/buildkit/buildkitd.sock"
    panic: failed to get edge
    goroutine 16 [running]:
    github.com/moby/buildkit/solver.(*pipeFactory).NewInputRequest(0xc00c90def8, 0x0, 0x1549b20, 0xc00c6d8700, 0xc01bed6400, 0xc001961928, 0xc00453d200)
    	/src/solver/scheduler.go:354 +0x1fc
    github.com/moby/buildkit/solver.(*edge).createInputRequests(0xc001960dc0, 0x2, 0xc00c90def8, 0x0, 0x1dff070)
    	/src/solver/edge.go:809 +0x31d
    github.com/moby/buildkit/solver.(*edge).unpark(0xc001960dc0, 0xc01bed8420, 0x1, 0x1, 0xc00c90de48, 0x0, 0x0, 0x1dff070, 0x0, 0x0, ...)
    	/src/solver/edge.go:360 +0x179
    github.com/moby/buildkit/solver.(*scheduler).dispatch(0xc00023d180, 0xc001960dc0)
    	/src/solver/scheduler.go:136 +0x430
    github.com/moby/buildkit/solver.(*scheduler).loop(0xc00023d180)
    	/src/solver/scheduler.go:104 +0x179
    created by github.com/moby/buildkit/solver.newScheduler
    	/src/solver/scheduler.go:35 +0x1ab
    

    Similar logs in the job that is in progress at the same time...

    ./p/someothertarget+dependencies | [5/5] Building fresh packages...
    ./p/someothertarget+dependencies | success Saved lockfile.
    ./p/someothertarget+dependencies | Done in 85.08s.
    ./p/someothertarget+dependencies | yarn cache v1.22.0
    ./p/someothertarget+dependencies | success Cleared cache.
    ./p/someothertarget+dependencies | Done in 7.32s.
    ./p/someothertarget+dependencies | package=someothertarget
    ./p/someothertarget+dependencies | *cached* --> SAVE ARTIFACT ./ ./packages/someothertarget+dependencies/app
    =================================Buildkit Logs==================================
    Error: transport is closing
    It seems that buildkitd is shutting down or it has crashed. You can report crashes at https://github.com/earthly/earthly/issues/new.
    starting earthly-buildkit with EARTHLY_GIT_HASH=a7f082f8fade1badbee0a221c9a9da2317449e4c BUILDKIT_BASE_IMAGE=github.com/earthly/buildkit:c1749dff2545b0202fc15f33eaa3278b1aa8803e+build
    BUILDKIT_ROOT_DIR=/tmp/earthly/buildkit
    CACHE_SIZE_MB=150000
    EARTHLY_ADDITIONAL_BUILDKIT_CONFIG=
    CNI_MTU=1500
    ======== CNI config ==========
    {
    	"cniVersion": "0.3.0",
    	"name": "buildkitbuild",
    	"type": "bridge",
    	"bridge": "cni0",
    	"isGateway": true,
    	"ipMasq": true,
    	"mtu": 1500,
    	"ipam": {
    		"type": "host-local",
    		"subnet": "172.30.0.0/16",
    		"routes": [
    			{ "dst": "0.0.0.0/0" }
    		]
    	}
    }
    ======== End CNI config ==========
    ======== Buildkitd config ==========
    debug = false
    root = "/tmp/earthly/buildkit"
    insecure-entitlements = [ "security.insecure" ]
    [worker.oci]
      enabled = true
      snapshotter = "auto"
      gc = true
      networkMode = "cni"
      cniBinaryPath = "/usr/libexec/cni"
      cniConfigPath = "/etc/cni/cni-conf.json"
        # Please note the required indentation to fit in buildkit.toml.template accordingly.
      # 1/100 of total cache size.
      gckeepstorage = 1500000000
      [[worker.oci.gcpolicy]]
        # 1/10 of total cache size.
        keepBytes = 15000000000
        filters = [ "type==source.local", "type==source.git.checkout"]
      [[worker.oci.gcpolicy]]
        all = true
        # Cache size MB with 6 zeros, to turn it into bytes.
        keepBytes = 150000000000
    ======== End buildkitd config ==========
    Detected container architecture is x86_64
    starting shellrepeater
    time="2021-05-27T14:06:14Z" level=info msg="auto snapshotter: using overlayfs"
    time="2021-05-27T14:06:29Z" level=info msg="found worker \"7dl0od0h2nmn3awn6fgkr04er\", labels=map[org.mobyproject.buildkit.worker.executor:oci org.mobyproject.buildkit.worker.hostname:9a81ae26219d org.mobyproject.buildkit.worker.snapshotter:overlayfs], platforms=[linux/amd64 linux/386]"
    time="2021-05-27T14:06:29Z" level=warning msg="skipping containerd worker, as \"/run/containerd/containerd.sock\" does not exist"
    time="2021-05-27T14:06:29Z" level=info msg="found 1 workers, default=\"7dl0od0h2nmn3awn6fgkr04er\""
    time="2021-05-27T14:06:29Z" level=warning msg="currently, only the default worker can be used."
    time="2021-05-27T14:06:29Z" level=info msg="running server on /run/buildkit/buildkitd.sock"
    panic: failed to get edge
    goroutine 16 [running]:
    github.com/moby/buildkit/solver.(*pipeFactory).NewInputRequest(0xc00c90def8, 0x0, 0x1549b20, 0xc00c6d8700, 0xc01bed6400, 0xc001961928, 0xc00453d200)
    	/src/solver/scheduler.go:354 +0x1fc
    github.com/moby/buildkit/solver.(*edge).createInputRequests(0xc001960dc0, 0x2, 0xc00c90def8, 0x0, 0x1dff070)
    	/src/solver/edge.go:809 +0x31d
    github.com/moby/buildkit/solver.(*edge).unpark(0xc001960dc0, 0xc01bed8420, 0x1, 0x1, 0xc00c90de48, 0x0, 0x0, 0x1dff070, 0x0, 0x0, ...)
    	/src/solver/edge.go:360 +0x179
    github.com/moby/buildkit/solver.(*scheduler).dispatch(0xc00023d180, 0xc001960dc0)
    	/src/solver/scheduler.go:136 +0x430
    github.com/moby/buildkit/solver.(*scheduler).loop(0xc00023d180)
    	/src/solver/scheduler.go:104 +0x179
    created by github.com/moby/buildkit/solver.newScheduler
    	/src/solver/scheduler.go:35 +0x1ab
    

    Should running jobs in parallel on the same machine be OK? They are both run in GitLab via Docker executors, but on the same Docker host, so the buildkitd instance is shared.

  • Ruby on rails example

    We need an example rails application added to the examples folder: https://github.com/earthly/earthly/tree/master/examples

    An example Earthfile should also be included which can build, test, and produce a Docker image of the example.

    The Go example can serve as a reference point for how to structure an example.

  • React example

    We need an example React JS app added to the examples folder: https://github.com/earthly/earthly/tree/master/examples

    An example Earthfile should also be included which can build and run tests.

    The Go example can serve as a reference point for how to structure an example.

  • expand autocomplete to work with user home directories

    Currently, if you type earth ~<tab><tab>, this auto-completes to earth ~/; instead it should auto-complete to list all available users (e.g. ~/, ~adam, ~alex, ~corey, ~vlad).

    Here's the relevant section of the code that needs to be expanded to list other users:

    https://github.com/earthly/earthly/blob/master/autocomplete/complete.go#L106-L109

    This will also have to be able to do autocompletion, so that when you type earth ~a<tab><tab>, it'll only display users starting with an a (e.g. ~adam, ~alex).

  • Support multi-platform builds

    Is it possible to support multi-platform (e.g. amd64, arm64, arm v7) image builds, as with the docker buildx feature? With the docker command, by using a compatible builder, we can now provide the --platform flag to specify the target platform.

    docker [buildx] build --platform linux/amd64,linux/arm64,linux/arm/v7 .

    https://docs.docker.com/buildx/working-with-buildx/

  • explicitly check docker login vars are set

    This is to help with sanity-checking that docker login is passed its variables correctly.

    For example, I had some tests that were failing on:

    WARN Earthfile line 369:12: The command 'RUN docker login $DOCKERHUB_MIRROR --username="$USERNAME" --***' failed: unauthorized
    

    which were due to upstream server issues, but I wanted to eliminate the possibility that secrets were not correctly passed.

  • support for `ARG --no-cache`

    I have a task whose output I'd like to cache for a time. I attempted to do that by having a cache_key argument whose value will change with time, but it seems the ARG gets cached, and is not recomputed.

    Is it expected that computed args are cached like this? Is there a supported way to cache a job for an arbitrary amount of time?

    VERSION 0.6
    
    build:
      FROM debian:bullseye-slim
      # I want to regularly re-download the input to stay up to date.
      #
      # In reality I want to bust the cache_key daily, but for testing purposes we
      # can bust it every second.
      # ARG cache_key=$(date +%Y-%m-%d)
      ARG cache_key=$(date +%s)
    
      COPY (+get-input/input_data.db --cache_key=${cache_key}) input_data.db
    
      ## Do some "expensive processing" with the input
      RUN ls -l input_data.db > processed_data.out
    
      SAVE ARTIFACT processed_data.out processed_data.out AS LOCAL processed_data_from_${cache_key}.out
    
    
    get-input:
      FROM debian:bullseye-slim
      RUN apt-get update && apt-get install -y curl
      ARG --required cache_key
      RUN curl https://example.com/latest-data.db > input_data.db
      SAVE ARTIFACT input_data.db input_data.db
    

    Subsequent runs show that (+get-input/input_data) was cached, not re-run.

              +get-input | *cached* --> SAVE ARTIFACT input_data.db +get-input/input_data.db
                  +build | *cached* --> COPY (cache_key=1671724751) +get-input/input_data.db input_data.db
    

    If I pass earthly --no-cache +build, the cache gets busted and get-input gets re-cached.

  • Proposal: Specify env file inside a target

    It would be nice if I could set the env file used inside the target.

    #.env.sta
    ROLE=sta
    #.env.prod
    ROLE=prod
    
    VERSION --use-cache-command --shell-out-anywhere 0.6
    ARG ROLE   
    # Commands
    # earthly -P +run
    sta: 
      ENV_FILE .env.sta
      FROM earthly/dind:alpine
      RUN --no-cache echo $ROLE
    
    prod: 
      ENV_FILE .env.prod
      FROM earthly/dind:alpine
      RUN --no-cache echo $ROLE
    

    A workaround, which is not so nice:

    
    VERSION --use-cache-command --shell-out-anywhere 0.6
    
    run-sta:
      ARG envfile=.env.sta   
      
      LOCALLY
      ENV EARTHLY_ENV_FILE=$envfile 
      RUN earthly +run
    
    run:  
      ARG --required ROLE  
      FROM earthly/dind:alpine
      RUN --no-cache echo $ROLE 
    
  • global args for import

    If I run a target from an imported Earthfile, I would expect to have the global ARG scope available. A use case for this is, e.g., that I have to define a different URL for the registry in every SAVE IMAGE.

    Given different registries for my environments, e.g. sta and prod: SAVE IMAGE --push my-sta-registry.com/another-image:latest vs. SAVE IMAGE --push my-prod-registry.com/another-image:latest

    I have a big setup with multiple Earthfiles and many SAVE IMAGE commands, and I have to hand over the registry URL from target to target. Is there a way to define the base registry URL globally somehow? If I call targets from another Earthfile, I don't have the ARGs from the first Earthfile available.

    # Earthfile
    VERSION --use-cache-command 0.6
    IMPORT ./sub 
    ARG ROLE=dev  
    
    run:    
      BUILD sub+run
     
    
    # sub/Earthfile
    VERSION --use-cache-command 0.6
    
    run:  
      ARG --required ROLE  
      FROM earthly/dind:alpine
      RUN --no-cache echo $ROLE 
    
    earthly +run
     1. Init 🚀
    ————————————————————————————————————————————————————————————————————————————
    
               buildkitd | Found buildkit daemon as docker container (earthly-buildkitd)
    
     2. Build 🔧
    ————————————————————————————————————————————————————————————————————————————
    
    Share your logs with an Earthly account (experimental)! Register for one at https://ci.earthly.dev.
    Error: build target: build main: failed to solve: async earthfile2llb for sub+run: sub/Earthfile line 5:2 apply ARG: value not supplied for required ARG: ROLE
    in              ./sub+run --ROLE=
    

    Proposal

    # Earthfile
    VERSION --use-cache-command 0.6
    IMPORT ./sub 
    ARG ROLE=dev  
    
    run:    
      BUILD sub+run # no need to add --ROLE=$ROLE
    
    # sub/Earthfile
    VERSION --use-cache-command 0.6
    ARG ROLE+=$ROLE # this would get the ROLE value from the caller's Earthfile. Not sure about the syntax, but probably it would help to have something like ARG inheritance which is defined in the child.
    
    run:  
      ARG --required ROLE  
      FROM earthly/dind:alpine
      RUN --no-cache echo $ROLE 
    
  • WITH DOCKER used inside LOCALLY does not clean up after itself when cancelled

    Given this Earthfile:

    VERSION 0.6
    
    example:
      FROM debian:bullseye-slim
      RUN echo "while true; do sleep 5 ; echo hello; done" > /run.sh
      RUN chmod +x /run.sh
      CMD /run.sh
    
    watch:
      LOCALLY
      # FROM earthly/dind:alpine
      WITH DOCKER --load="example=+example"
        RUN docker run --name="earthly-locally-docker-run-example" example
      END
    

    When running earthly +watch, the container correctly prints hello every five seconds. But when stopping the build using Ctrl + C, the Docker container is not cleaned up. It continues running in the background on the host.

    When swapping LOCALLY for FROM earthly/dind:alpine the problem does not occur. But obviously the build runs much slower because of the overhead of 'docker in docker'.
