Moby Project - a collaborative project for the container ecosystem to assemble container-based systems

The Moby Project

Moby Project logo

Moby is an open-source project created by Docker to enable and accelerate software containerization.

It provides a "Lego set" of toolkit components, the framework for assembling them into custom container-based systems, and a place for all container enthusiasts and professionals to experiment and exchange ideas. Components include container build tools, a container registry, orchestration tools, a runtime and more, and these can be used as building blocks in conjunction with other tools and projects.

Principles

Moby is an open project guided by strong principles, aiming to be modular, flexible and without too strong an opinion on user experience. It is open to the community to help set its direction.

  • Modular: the project includes lots of components that have well-defined functions and APIs that work together.
  • Batteries included but swappable: Moby includes enough components to build a fully featured container system, but its modular architecture ensures that most of the components can be swapped out for different implementations.
  • Usable security: Moby provides secure defaults without compromising usability.
  • Developer focused: The APIs are intended to be functional and useful to build powerful tools. They are not necessarily intended as end user tools but as components aimed at developers. Documentation and UX are aimed at developers, not end users.

Audience

The Moby Project is intended for engineers, integrators and enthusiasts looking to modify, hack, fix, experiment, invent and build systems based on containers. It is not for people looking for a commercially supported system, but for people who want to work and learn with open source code.

Relationship with Docker

The components and tools in the Moby Project are initially the open source components that Docker and the community have built for the Docker Project. New projects can be added if they fit with the community goals. Docker is committed to using Moby as the upstream for the Docker Product. However, other projects are also encouraged to use Moby as an upstream, and to reuse the components in diverse ways, and all these uses will be treated in the same way. External maintainers and contributors are welcomed.

The Moby project is not intended as a location for support or feature requests for Docker products, but as a place for contributors to work on open source code, fix bugs, and make the code more useful. The releases are supported by the maintainers, community and users, on a best efforts basis only, and are not intended for customers who want enterprise or commercial support; Docker EE is the appropriate product for these use cases.


Legal

Brought to you courtesy of our legal counsel. For more context, please see the NOTICE document in this repo.

Use and transfer of Moby may be subject to certain restrictions by the United States and other governments.

It is your responsibility to ensure that your use and/or transfer does not violate applicable laws.

For more information, please see https://www.bis.doc.gov

Licensing

Moby is licensed under the Apache License, Version 2.0. See LICENSE for the full license text.

Comments
  • docker fails to mount the block device for the container on devicemapper

    When running something like for i in {0..100}; do docker run busybox echo test; done with Docker running on devicemapper, errors are thrown and containers fail to run:

    2014/02/10 9:48:42 Error: start: Cannot start container 56bee8c4da5bd5641fc42405c742083b418ca14ddfb4a3e632955e236e23c284: Error getting container 56bee8c4da5bd5641fc42405c742083b418ca14ddfb4a3e632955e236e23c284 from driver devicemapper: Error mounting '/dev/mapper/docker-8:1-4980769-56bee8c4da5bd5641fc42405c742083b418ca14ddfb4a3e632955e236e23c284' on '/var/lib/docker/devicemapper/mnt/56bee8c4da5bd5641fc42405c742083b418ca14ddfb4a3e632955e236e23c284': no such file or directory
    2014/02/10 9:48:42 Error: start: Cannot start container b90b4385778142aab5251846460008e5c4eb9fe1e7ec82f07d06f1de823bd914: Error getting container b90b4385778142aab5251846460008e5c4eb9fe1e7ec82f07d06f1de823bd914 from driver devicemapper: Error mounting '/dev/mapper/docker-8:1-4980769-b90b4385778142aab5251846460008e5c4eb9fe1e7ec82f07d06f1de823bd914' on '/var/lib/docker/devicemapper/mnt/b90b4385778142aab5251846460008e5c4eb9fe1e7ec82f07d06f1de823bd914': no such file or directory
    2014/02/10 9:48:43 Error: start: Cannot start container ca53b3b21c92ffb17ad15c1088be293260ea240abdf25db7e5aadc11517cf93c: Error getting container ca53b3b21c92ffb17ad15c1088be293260ea240abdf25db7e5aadc11517cf93c from driver devicemapper: Error mounting '/dev/mapper/docker-8:1-4980769-ca53b3b21c92ffb17ad15c1088be293260ea240abdf25db7e5aadc11517cf93c' on '/var/lib/docker/devicemapper/mnt/ca53b3b21c92ffb17ad15c1088be293260ea240abdf25db7e5aadc11517cf93c': no such file or directory
    test
    2014/02/10 9:48:43 Error: start: Cannot start container 1e1e06044711e73cede8ede10547de7e270c33fac7ad5e60a8cb23246950adf3: Error getting container 1e1e06044711e73cede8ede10547de7e270c33fac7ad5e60a8cb23246950adf3 from driver devicemapper: Error mounting '/dev/mapper/docker-8:1-4980769-1e1e06044711e73cede8ede10547de7e270c33fac7ad5e60a8cb23246950adf3' on '/var/lib/docker/devicemapper/mnt/1e1e06044711e73cede8ede10547de7e270c33fac7ad5e60a8cb23246950adf3': no such file or directory
    

    Fedora 20 with kernel 3.12.9 doesn't seem to be affected.

    kernel version, distribution, docker info and docker version:

    3.11.0-15-generic #25~precise1-Ubuntu SMP Thu Jan 30 17:39:31 UTC 2014 x86_64 x86_64
    Ubuntu 12.04.4
     docker info
    Containers: 101
    Images: 44
    Driver: devicemapper
     Pool Name: docker-8:1-4980769-pool
     Data file: /var/lib/docker/devicemapper/devicemapper/data
     Metadata file: /var/lib/docker/devicemapper/devicemapper/metadata
     Data Space Used: 3234.9 Mb
     Data Space Total: 102400.0 Mb
     Metadata Space Used: 6.9 Mb
     Metadata Space Total: 2048.0 Mb
    
    Client version: 0.8.0-dev
    Go version (client): go1.2
    Git commit (client): 695719b
    Server version: 0.8.0-dev
    Git commit (server): 695719b
    Go version (server): go1.2
    Last stable version: 0.8.0
    

    The Docker binary is actually master with PR #4017 merged.

    /cc @alexlarsson

  • Proposal: Add support for build-time environment variables to the 'build' API

    A build-time environment variable is passed to the builder (as part of the build API) and made available to the Dockerfile primitives for use in variable expansion and in setting up the environment for the RUN primitive, without modifying the Dockerfile or persisting the variable as environment in the final image.

    The following simple example illustrates the feature:

    docker build --build-env usr=foo --build-env http_proxy="my.proxy.url" <<EOF
    From busybox
    USER ${usr:-bar}
    RUN git clone <my.private.repo.behind.a.proxy>
    EOF
    

    Some of the use cases this PR enables are listed below (captured from comments in the PR's thread).

    [Edit: 05/22/2015] ~~A build-time environment variable gets used only while processing the 'RUN' primitive of a DockerFile. Such a variable is only accessible during 'RUN' and is 'not' persisted with the intermediate and final docker images, thereby addressing the portability concerns of the images generated with 'build'.~~

    This addresses issue #4962

    +++++++++ Edit: 05/21/2015, 06/26/2015

    This PR's discussion thread has grown, bringing out use cases that this PR serves and doesn't serve well. Below I consolidate a list of those use cases that have emerged from the comments, for ease of reference.

    There are two broad use cases that this feature enables:

    • passing build-environment-specific variables without modifying the Dockerfile or persisting them in the final image. A common use case is the proxy URL (http_proxy, https_proxy...) ~~but this can be any other environment variable as well~~ but there are other cases as well, like this one: https://github.com/docker/docker/issues/14191#issuecomment-115672621.
      • related comments: https://github.com/docker/docker/pull/9176#issuecomment-72072046 https://github.com/docker/docker/pull/9176#issuecomment-104386863
    • parametrize builds.
      • related comments: https://github.com/docker/docker/pull/9176#issuecomment-99269827 https://github.com/docker/docker/pull/9176#issuecomment-75432026 https://github.com/docker/docker/issues/9731#issuecomment-77370381

    The following use case is not served well by this feature and hence is not recommended:

    • passing secrets with caching turned on:
      • related comments: https://github.com/docker/docker/pull/9176#issuecomment-101876406 https://github.com/docker/docker/pull/9176#issuecomment-99542089

    The following use cases might still be suitable with caching turned off: https://github.com/docker/docker/pull/9176#issuecomment-88278968 https://github.com/docker/docker/pull/9176#issuecomment-88377527
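
    For illustration, a minimal sketch of the proxy use case, written with the ARG / --build-arg syntax that build-time variables eventually shipped with (the image tag, URL and proxy value are placeholders):

    # The proxy value is visible to RUN at build time but is not persisted as
    # ENV in the resulting image.
    cat > Dockerfile <<'EOF'
    FROM busybox
    ARG http_proxy
    RUN wget -q -O /tmp/index.html http://example.com/
    EOF
    docker build --build-arg http_proxy=http://my.proxy.url:3128 -t proxied-build .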

  • docker build should support privileged operations

    Currently there seems to be no way to run privileged operations outside of docker run -privileged.

    That means that I cannot do the same things in a Dockerfile. My recent issue: I'd like to run fuse (for encfs) inside of a container. Installing fuse is already a mess of hacks and ugly workarounds (see [1] and [2]), because mknod fails/isn't supported without a privileged build step.

    The only workaround right now is to do the installation manually, using run -privileged, and to create a new 'fuse base image'. Which means that I cannot describe the whole container, from an official base image through to the finished image, in a single Dockerfile.

    I'd therefore suggest adding either (both options are sketched below):

    • a docker build -privileged
      this should do the same thing as run -privileged, i.e. removing all caps limitations

    or

    • a RUNP command in the Dockerfile
      this should .. well .. RUN, but with _P_rivileges
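
    A purely hypothetical sketch of what the two options above might look like; neither a -privileged flag for docker build nor a RUNP instruction exists in Docker:

    # Option 1 (hypothetical flag): run the whole build with privileges
    docker build -privileged -t fuse-base .

    # Option 2 (hypothetical instruction): only one step runs privileged
    cat > Dockerfile <<'EOF'
    FROM ubuntu
    RUN apt-get update && apt-get install -y fuse
    RUNP mknod /dev/fuse c 10 229
    EOF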

    I tried looking at the source, but I'm useless with go and couldn't find a decent entrypoint to attach a proof of concept, unfortunately. :(

    1: https://github.com/rogaha/docker-desktop/blob/master/Dockerfile#L40 2: https://github.com/dotcloud/docker/issues/514#issuecomment-22101217

  • New feature request: Selectively disable caching for specific RUN commands in Dockerfile

    branching off the discussion from #1384 :

    I understand -no-cache will disable caching for the entire Dockerfile, but it would be useful if I could disable the cache for a specific RUN command, for example when updating repos or downloading a remote file, etc. From my understanding, right now RUN apt-get update, if cached, wouldn't actually update the repo? This will cause the results to be different than from a VM?

    If disabling caching for specific commands in the Dockerfile were made possible, would the subsequent commands in the file then not use the cache? Or would they do something a bit more intelligent, e.g. use the cache if the previous command produced the same results (fs layer) as in a previous run?
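
    For what it's worth, a common workaround today (not the feature requested here) is to thread a build argument through the Dockerfile and change its value whenever the cache should be broken from a given step onwards:

    # Steps before the ARG keep using the cache; everything after it re-runs
    # whenever the CACHEBUST value changes.
    cat > Dockerfile <<'EOF'
    FROM ubuntu
    RUN echo "this step stays cached"
    ARG CACHEBUST=0
    RUN apt-get update
    EOF
    docker build --build-arg CACHEBUST=$(date +%s) -t fresh-apt .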

  • Document how to connect to Docker host from container

    I had some trouble figuring out how to connect to the docker host from the container. I couldn't find documentation, but did find IRC logs saying something about using 172.16.42.1, which works.

    It'd be nice if this behavior, and how it relates to docker0, were documented.
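
    A minimal sketch of how the host address can be discovered rather than hard-coded; for containers on the default bridge, the Docker host is reachable at the docker0 gateway:

    # From inside a container: the default route points at the docker0 gateway,
    # i.e. the Docker host.
    docker run --rm busybox ip route | awk '/^default/ {print $3}'

    # From the host: inspect the bridge network's gateway.
    docker network inspect bridge --format '{{range .IPAM.Config}}{{.Gateway}}{{end}}'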

  • Docker 1.9.1 hanging at build step "Setting up ca-certificates-java"

    A few of us within the office upgraded to the latest version of Docker Toolbox, backed by Docker 1.9.1, and builds are hanging as per the build output below.

    docker version:

    Client:
     Version:      1.9.1
     API version:  1.21
     Go version:   go1.4.3
     Git commit:   a34a1d5
     Built:        Fri Nov 20 17:56:04 UTC 2015
     OS/Arch:      darwin/amd64
    
    Server:
     Version:      1.9.1
     API version:  1.21
     Go version:   go1.4.3
     Git commit:   a34a1d5
     Built:        Fri Nov 20 17:56:04 UTC 2015
     OS/Arch:      linux/amd64
    

    docker info:

    Containers: 10
    Images: 57
    Server Version: 1.9.1
    Storage Driver: aufs
     Root Dir: /mnt/sda1/var/lib/docker/aufs
     Backing Filesystem: extfs
     Dirs: 77
     Dirperm1 Supported: true
    Execution Driver: native-0.2
    Logging Driver: json-file
    Kernel Version: 4.1.13-boot2docker
    Operating System: Boot2Docker 1.9.1 (TCL 6.4.1); master : cef800b - Fri Nov 20 19:33:59 UTC 2015
    CPUs: 1
    Total Memory: 1.956 GiB
    Name: vbootstrap-vm
    ID: LLM6:CASZ:KOD3:646A:XPRK:PIVB:VGJ5:JSDB:ZKAN:OUC4:E2AK:FFTC
    Debug mode (server): true
     File Descriptors: 13
     Goroutines: 18
     System Time: 2015-11-24T02:03:35.597772191Z
     EventsListeners: 0
     Init SHA1: 
     Init Path: /usr/local/bin/docker
     Docker Root Dir: /mnt/sda1/var/lib/docker
    Labels:
     provider=virtualbox
    

    uname -a:

    Darwin JRedl-MB-Pro.local 15.0.0 Darwin Kernel Version 15.0.0: Sat Sep 19 15:53:46 PDT 2015; root:xnu-3247.10.11~1/RELEASE_X86_64 x86_64
    

    Here is a snippet from the docker build output that hangs on the Setting up ca-certificates-java line. Something to do with the latest version of docker and openjdk?

    update-alternatives: using /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/tnameserv to provide /usr/bin/tnameserv (tnameserv) in auto mode
    update-alternatives: using /usr/lib/jvm/java-7-openjdk-amd64/jre/lib/jexec to provide /usr/bin/jexec (jexec) in auto mode
    Setting up ca-certificates-java (20140324) ...
    

    Docker file example:

    FROM gcr.io/google_appengine/base
    
    # Prepare the image.
    ENV DEBIAN_FRONTEND noninteractive
    RUN apt-get update && apt-get install -y -qq --no-install-recommends build-essential wget curl unzip python python-dev php5-mysql php5-cli php5-cgi openjdk-7-jre-headless openssh-client python-openssl && apt-get clean
    

    I can confirm that this is not an issue with Docker 1.9.0 or Docker Toolbox 1.9.0d. Let me know if I can provide any further information but this feels like a regression of some sort within the new version.

  • Swarm is having occasional network connection problems between nodes.

    A few times a day I have connection issues between nodes, and clients see occasional "Bad request" errors. My swarm setup (AWS) has the following services: nginx (global) and web (replicated=2), plus a separate overlay network. In nginx.conf I use proxy_pass http://web:5000 to route requests to the web service. Both services are running and marked as healthy, and haven't been restarted while these errors occur. The manager is a separate node (30sec-manager1).

    A few times a day, for a few requests, I receive errors that nginx couldn't connect to the upstream, and I always see the 10.0.0.6 IP address mentioned:

    Here are related nginx and docker logs. Both web services are replicated on 30sec-worker3 and 30sec-worker4 nodes.

    Nginx log:
    ----------
    2017/03/29 07:13:18 [error] 7#7: *44944 connect() failed (113: Host is unreachable) while connecting to upstream, client: 104.154.58.95, server: 30seconds.com, request: "GET / HTTP/1.1", upstream: "http://10.0.0.6:5000/", host: "30seconds.com"
    
    Around same time from docker logs (journalctl -u docker.service)
    
    on node 30sec-manager1:
    ---------------------------
    Mar 29 07:12:50 30sec-manager1 docker[30365]: time="2017-03-29T07:12:50.736935344Z" level=warning msg="memberlist: Refuting a suspect message (from: 30sec-worker3-054c94d39b58)"
    Mar 29 07:12:54 30sec-manager1 docker[30365]: time="2017-03-29T07:12:54.659229055Z" level=info msg="memberlist: Marking 30sec-worker3-054c94d39b58 as failed, suspect timeout reached"
    Mar 29 07:12:54 30sec-manager1 docker[30365]: time="2017-03-29T07:12:54Z" level=info msg="Firewalld running: false"
    Mar 29 07:12:54 30sec-manager1 docker[30365]: time="2017-03-29T07:12:54Z" level=info msg="Firewalld running: false"
    Mar 29 07:12:54 30sec-manager1 docker[30365]: time="2017-03-29T07:12:54Z" level=info msg="Firewalld running: false"
    Mar 29 07:12:54 30sec-manager1 docker[30365]: time="2017-03-29T07:12:54Z" level=info msg="Firewalld running: false"
    Mar 29 07:12:54 30sec-manager1 docker[30365]: time="2017-03-29T07:12:54Z" level=info msg="Firewalld running: false"
    Mar 29 07:12:54 30sec-manager1 docker[30365]: time="2017-03-29T07:12:54Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:10 30sec-manager1 docker[30365]: time="2017-03-29T07:13:10.302960985Z" level=info msg="memberlist: Suspect 30sec-worker3-054c94d39b58 has failed, no acks received"
    Mar 29 07:13:11 30sec-manager1 docker[30365]: time="2017-03-29T07:13:11.055187819Z" level=warning msg="memberlist: Refuting a suspect message (from: 30sec-worker3-054c94d39b58)"
    Mar 29 07:13:14 30sec-manager1 docker[30365]: time="2017-03-29T07:13:14Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:14 30sec-manager1 docker[30365]: time="2017-03-29T07:13:14Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:14 30sec-manager1 docker[30365]: time="2017-03-29T07:13:14Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:14 30sec-manager1 docker[30365]: time="2017-03-29T07:13:14Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:14 30sec-manager1 docker[30365]: time="2017-03-29T07:13:14Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:14 30sec-manager1 docker[30365]: time="2017-03-29T07:13:14Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:14 30sec-manager1 docker[30365]: time="2017-03-29T07:13:14Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-manager1 docker[30365]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-manager1 docker[30365]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-manager1 docker[30365]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-manager1 docker[30365]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-manager1 docker[30365]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-manager1 docker[30365]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-manager1 docker[30365]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-manager1 docker[30365]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-manager1 docker[30365]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-manager1 docker[30365]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-manager1 docker[30365]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-manager1 docker[30365]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:17 30sec-manager1 docker[30365]: time="2017-03-29T07:13:17Z" level=info msg="Firewalld running: false"
    
    on node 30sec-worker3:
    -------------------------
    Mar 29 07:12:50 30sec-worker3 docker[30362]: time="2017-03-29T07:12:50.613402284Z" level=info msg="memberlist: Suspect 30sec-manager1-b1cbc10665cc has failed, no acks received"
    Mar 29 07:12:55 30sec-worker3 docker[30362]: time="2017-03-29T07:12:55.614174704Z" level=warning msg="memberlist: Refuting a dead message (from: 30sec-worker4-4ca6b1dcaa42)"
    Mar 29 07:13:09 30sec-worker3 docker[30362]: time="2017-03-29T07:13:09.613368306Z" level=info msg="memberlist: Suspect 30sec-worker4-4ca6b1dcaa42 has failed, no acks received"
    Mar 29 07:13:10 30sec-worker3 docker[30362]: time="2017-03-29T07:13:10.613972658Z" level=info msg="memberlist: Suspect 30sec-manager1-b1cbc10665cc has failed, no acks received"
    Mar 29 07:13:11 30sec-worker3 docker[30362]: time="2017-03-29T07:13:11.042788976Z" level=warning msg="memberlist: Refuting a suspect message (from: 30sec-worker4-4ca6b1dcaa42)"
    Mar 29 07:13:14 30sec-worker3 docker[30362]: time="2017-03-29T07:13:14.613951134Z" level=info msg="memberlist: Marking 30sec-worker4-4ca6b1dcaa42 as failed, suspect timeout reached"
    Mar 29 07:13:25 30sec-worker3 docker[30362]: time="2017-03-29T07:13:25.615128313Z" level=error msg="Bulk sync to node 30sec-manager1-b1cbc10665cc timed out"
    
    on node 30sec-worker4:
    -------------------------
    Mar 29 07:12:49 30sec-worker4 docker[30376]: time="2017-03-29T07:12:49.658082975Z" level=info msg="memberlist: Suspect 30sec-worker3-054c94d39b58 has failed, no acks received"
    Mar 29 07:12:54 30sec-worker4 docker[30376]: time="2017-03-29T07:12:54.658737367Z" level=info msg="memberlist: Marking 30sec-worker3-054c94d39b58 as failed, suspect timeout reached"
    Mar 29 07:12:54 30sec-worker4 docker[30376]: time="2017-03-29T07:12:54Z" level=info msg="Firewalld running: false"
    Mar 29 07:12:54 30sec-worker4 docker[30376]: time="2017-03-29T07:12:54Z" level=info msg="Firewalld running: false"
    Mar 29 07:12:54 30sec-worker4 docker[30376]: time="2017-03-29T07:12:54Z" level=info msg="Firewalld running: false"
    Mar 29 07:12:54 30sec-worker4 docker[30376]: time="2017-03-29T07:12:54Z" level=info msg="Firewalld running: false"
    Mar 29 07:12:54 30sec-worker4 docker[30376]: time="2017-03-29T07:12:54Z" level=info msg="Firewalld running: false"
    Mar 29 07:12:54 30sec-worker4 docker[30376]: time="2017-03-29T07:12:54Z" level=info msg="Firewalld running: false"
    Mar 29 07:12:54 30sec-worker4 docker[30376]: time="2017-03-29T07:12:54Z" level=info msg="Firewalld running: false"
    Mar 29 07:12:54 30sec-worker4 docker[30376]: time="2017-03-29T07:12:54Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:09 30sec-worker4 docker[30376]: time="2017-03-29T07:13:09.658056735Z" level=info msg="memberlist: Suspect 30sec-worker3-054c94d39b58 has failed, no acks received"
    Mar 29 07:13:16 30sec-worker4 docker[30376]: time="2017-03-29T07:13:16.303689665Z" level=warning msg="memberlist: Refuting a suspect message (from: 30sec-worker4-4ca6b1dcaa42)"
    Mar 29 07:13:16 30sec-worker4 docker[30376]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-worker4 docker[30376]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-worker4 docker[30376]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-worker4 docker[30376]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-worker4 docker[30376]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    
    syslog on 30sec-worker4:
    --------------------------
    Mar 29 07:12:49 30sec-worker4 docker[30376]: time="2017-03-29T07:12:49.658082975Z" level=info msg="memberlist: Suspect 30sec-worker3-054c94d39b58 has failed, no acks received"
    Mar 29 07:12:54 30sec-worker4 docker[30376]: time="2017-03-29T07:12:54.658737367Z" level=info msg="memberlist: Marking 30sec-worker3-054c94d39b58 as failed, suspect timeout reached"
    Mar 29 07:12:54 30sec-worker4 kernel: [645679.048975] IPVS: __ip_vs_del_service: enter
    Mar 29 07:12:54 30sec-worker4 docker[30376]: time="2017-03-29T07:12:54Z" level=info msg="Firewalld running: false"
    Mar 29 07:12:54 30sec-worker4 kernel: [645679.100691] IPVS: __ip_vs_del_service: enter
    Mar 29 07:12:54 30sec-worker4 kernel: [645679.130069] IPVS: __ip_vs_del_service: enter
    Mar 29 07:12:54 30sec-worker4 kernel: [645679.155859] IPVS: __ip_vs_del_service: enter
    Mar 29 07:12:54 30sec-worker4 kernel: [645679.180461] IPVS: __ip_vs_del_service: enter
    Mar 29 07:12:54 30sec-worker4 kernel: [645679.205707] IPVS: __ip_vs_del_service: enter
    Mar 29 07:12:54 30sec-worker4 kernel: [645679.230326] IPVS: __ip_vs_del_service: enter
    Mar 29 07:12:54 30sec-worker4 kernel: [645679.255597] IPVS: __ip_vs_del_service: enter
    Mar 29 07:12:54 30sec-worker4 docker[30376]: message repeated 7 times: [ time="2017-03-29T07:12:54Z" level=info msg="Firewalld running: false"]
    Mar 29 07:13:09 30sec-worker4 docker[30376]: time="2017-03-29T07:13:09.658056735Z" level=info msg="memberlist: Suspect 30sec-worker3-054c94d39b58 has failed, no acks received"
    Mar 29 07:13:16 30sec-worker4 docker[30376]: time="2017-03-29T07:13:16.303689665Z" level=warning msg="memberlist: Refuting a suspect message (from: 30sec-worker4-4ca6b1dcaa42)"
    Mar 29 07:13:16 30sec-worker4 docker[30376]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-worker4 docker[30376]: message repeated 7 times: [ time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"]
    

    I checked other cases when nginx can't reach the upstream, and each time I find that these three lines appear most often in the docker logs around those times:

    level=info msg="memberlist:Suspect 30sec-worker3-054c94d39b58 has failed, no acks received"
    level=warning msg="memberlist: Refuting a suspect message (from: 30sec-worker3-054c94d39b58)"
    level=warning msg="memberlist: Refuting a dead message (from: 30sec-worker3-054c94d39b58)
    

    By searching other issues, found that these have similar errors, so it may be related: https://github.com/docker/docker/issues/28843 https://github.com/docker/docker/issues/25325

    Is there anything I should check or debug further to spot the problem, or is it a bug? Thank you.
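
    Not an answer, but a diagnostic sketch of things worth checking, since the memberlist messages above usually point at gossip/VXLAN connectivity problems between nodes (the port numbers are the standard swarm-mode requirements; service and network names may be prefixed with the stack name):

    # Swarm overlay networking needs TCP 2377 (management), TCP+UDP 7946 (gossip)
    # and UDP 4789 (VXLAN) open between all nodes, e.g. in the AWS security groups.
    nc -zv <other-node-ip> 7946        # quick TCP reachability check for gossip (IP is a placeholder)
    docker network inspect thirtysec   # confirm both web tasks stay attached
    docker service ps web              # look for task restarts around the error times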

    Output of docker version:

    Client:
     Version:      17.03.0-ce
     API version:  1.26
     Go version:   go1.7.5
     Git commit:   60ccb22
     Built:        Thu Feb 23 11:02:43 2017
     OS/Arch:      linux/amd64
    
    Server:
     Version:      17.03.0-ce
     API version:  1.26 (minimum version 1.12)
     Go version:   go1.7.5
     Git commit:   60ccb22
     Built:        Thu Feb 23 11:02:43 2017
     OS/Arch:      linux/amd64
     Experimental: false
    

    Output of docker info:

    Containers: 18
     Running: 3
     Paused: 0
     Stopped: 15
    Images: 16
    Server Version: 17.03.0-ce
    Storage Driver: aufs
     Root Dir: /var/lib/docker/aufs
     Backing Filesystem: extfs
     Dirs: 83
     Dirperm1 Supported: true
    Logging Driver: json-file
    Cgroup Driver: cgroupfs
    Plugins:
     Volume: local
     Network: bridge host macvlan null overlay
    Swarm: active
     NodeID: ck99cyhgydt8y1zn8ik2xmcdv
     Is Manager: true
     ClusterID: in0q54eh74ljazrprt0vza3wj
     Managers: 1
     Nodes: 5
     Orchestration:
      Task History Retention Limit: 5
     Raft:
      Snapshot Interval: 10000
      Number of Old Snapshots to Retain: 0
      Heartbeat Tick: 1
      Election Tick: 3
     Dispatcher:
      Heartbeat Period: 5 seconds
     CA Configuration:
      Expiry Duration: 3 months
     Node Address: 172.31.31.146
     Manager Addresses:
      172.31.31.146:2377
    Runtimes: runc
    Default Runtime: runc
    Init Binary: docker-init
    containerd version: 977c511eda0925a723debdc94d09459af49d082a
    runc version: a01dafd48bc1c7cc12bdb01206f9fea7dd6feb70
    init version: 949e6fa
    Security Options:
     apparmor
     seccomp
      Profile: default
    Kernel Version: 4.4.0-57-generic
    Operating System: Ubuntu 16.04.1 LTS
    OSType: linux
    Architecture: x86_64
    CPUs: 1
    Total Memory: 990.6 MiB
    Name: 30sec-manager1
    ID: 5IIF:RONB:Y27Q:5MKX:ENEE:HZWM:XYBV:O6KN:BKL6:AEUK:2VKB:MO5P
    Docker Root Dir: /var/lib/docker
    Debug Mode (client): false
    Debug Mode (server): false
    Registry: https://index.docker.io/v1/
    WARNING: No swap limit support
    Labels:
     provider=amazonec2
    Experimental: false
    Insecure Registries:
     127.0.0.0/8
    Live Restore Enabled: false
    

    Additional environment details (AWS, VirtualBox, physical, etc.): Amazon AWS (Manager - t2.micro, rest of nodes - t2.small)

    docker-compose.yml (there are more services and nodes in the setup, but I posted only the ones involved):

    version: "3"
    
    services:
    
      nginx:
        image: 333435094895.dkr.ecr.us-east-1.amazonaws.com/swarm/nginx:latest
        ports:
          - 80:80
          - 81:81
        networks:
          - thirtysec
        depends_on:
          - web
        deploy:
          mode: global
          update_config:
            delay: 2s
            monitor: 2s
    
      web:
        image: 333435094895.dkr.ecr.us-east-1.amazonaws.com/swarm/os:latest
        command: sh -c "python manage.py collectstatic --noinput && daphne thirtysec.asgi:channel_layer -b 0.0.0.0 -p 5000"
        ports:
          - 5000:5000
        networks:
          - thirtysec
        deploy:
          mode: replicated
          replicas: 2
          labels: [APP=THIRTYSEC]
          update_config:
            delay: 15s
            monitor: 15s
          placement:
            constraints: [node.labels.aws_type == t2.small]
    
        healthcheck:
          test: goss -g deploy/swarm/checks/web-goss.yaml validate
          interval: 2s
          timeout: 3s
          retries: 15
    
    networks:
        thirtysec:
    

    web-goss.yaml

    port:
      tcp:5000:
        listening: true
        ip:
        - 0.0.0.0
    
  • Phase 1 implementation of user namespaces as a remapped container root

    This pull request is an initial implementation of user namespace support in the Docker daemon. We are labeling it an initial "phase 1" milestone with limited scope/capability, which hopefully will be available for use in Docker 1.7.

    The code is designed to support full uid and gid maps, but this implementation limits the scope of usage to a remap of just the root uid/gid to a non-privileged user on the host. This remapping is scoped at the Docker daemon level, so all containers running on a Docker daemon will have the same remapped uid/gid as root. See PR #11253 for an initial discussion on the design.

    Discussion of future, possibly more complex, phases should be kept separate from specific design/code review of this "phase 1" implementation; see the above-mentioned PR for discussions of more advanced use cases, such as mapping complete uid/gid ranges per tenant in a multi-tenant environment.
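
    For context, the daemon-wide remapping described here is what later surfaced as the --userns-remap daemon flag; a rough sketch of the observable effect (the subordinate ID range is illustrative):

    # /etc/subuid and /etc/subgid give the remap target a range of host IDs,
    # e.g.  dockremap:165536:65536
    dockerd --userns-remap=default &

    docker run -d --name probe busybox sleep 600
    docker exec probe id                # uid=0 (root) inside the container
    ps -o user,pid,cmd -C sleep         # the same process runs as an unprivileged uid on the host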

  • flatten images - merge multiple layers into a single one

    There's no way to flatten images right now. When performing a build in multiple steps, a few images can be generated and a larger number of layers is produced. When these are pushed to the registry, a lot of data and a large number of layers have to be downloaded.

    There are some cases where one starts with a base image (or another image), changes some large files in one step, changes them again in the next and deletes them in the end. This means those files would be stored in 2 separate layers and deleted by whiteout files in the final image.

    These intermediary layers aren't necessarily useful to others or to the final deployment system.

    Image flattening should work like this (a workaround is sketched after this list):

    • the history of the build steps needs to be preserved
    • the flattening can be done up to a target image (for example, up to a base image)
    • the flattening should also be allowed to be done completely (as if exporting the image)
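
    Until something like the above exists, a commonly used (but lossy) workaround is to export a container's filesystem and re-import it as a single-layer image; build history, ENV/CMD metadata and shared base layers are lost (the image name is a placeholder):

    docker create --name flat-src myimage:latest
    docker export flat-src | docker import - myimage:flat
    docker rm flat-src
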
  • Device-mapper does not release free space from removed images

    Docker claims, via docker info, to have freed space after an image is deleted, but the data file retains its former size, and the sparse file allocated for the device-mapper storage backend will continue to grow without bound as more extents are allocated.

    I am using lxc-docker on Ubuntu 13.10:

    Linux ergodev-zed 3.11.0-14-generic #21-Ubuntu SMP Tue Nov 12 17:04:55 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
    

    This sequence of commands reveals the problem:

    Doing a docker pull stackbrew/ubuntu:13.10 increased the space usage reported by docker info. Before:

    Containers: 0
    Images: 0
    Driver: devicemapper
     Pool Name: docker-252:0-131308-pool
     Data file: /var/lib/docker/devicemapper/devicemapper/data
     Metadata file: /var/lib/docker/devicemapper/devicemapper/metadata
     Data Space Used: 291.5 Mb
     Data Space Total: 102400.0 Mb
     Metadata Space Used: 0.7 Mb
     Metadata Space Total: 2048.0 Mb
    WARNING: No swap limit support
    

    And after docker pull stackbrew/ubuntu:13.10:

    Containers: 0
    Images: 3
    Driver: devicemapper
     Pool Name: docker-252:0-131308-pool
     Data file: /var/lib/docker/devicemapper/devicemapper/data
     Metadata file: /var/lib/docker/devicemapper/devicemapper/metadata
     Data Space Used: 413.1 Mb
     Data Space Total: 102400.0 Mb
     Metadata Space Used: 0.8 Mb
     Metadata Space Total: 2048.0 Mb
    WARNING: No swap limit support
    

    And after docker rmi 8f71d74c8cfc, it returns:

    Containers: 0
    Images: 0
    Driver: devicemapper
     Pool Name: docker-252:0-131308-pool
     Data file: /var/lib/docker/devicemapper/devicemapper/data
     Metadata file: /var/lib/docker/devicemapper/devicemapper/metadata
     Data Space Used: 291.5 Mb
     Data Space Total: 102400.0 Mb
     Metadata Space Used: 0.7 Mb
     Metadata Space Total: 2048.0 Mb
    WARNING: No swap limit support
    

    The only problem is that the data file has expanded to 414 MiB (849016 512-byte sectors) per stat. Some of that space is properly reused after an image has been deleted, but the data file never shrinks. And under some mysterious condition (not yet reproducible) I have 291.5 MiB allocated that can't even be reused.

    My dmsetup ls looks like this when there are 0 images installed:

    # dmsetup ls
    docker-252:0-131308-pool        (252:2)
    ergodev--zed--vg-root   (252:0)
    cryptswap       (252:1)
    

    And a du of the data file shows this:

    # du /var/lib/docker/devicemapper/devicemapper/data -h
    656M    /var/lib/docker/devicemapper/devicemapper/data
    

    How can I have docker reclaim this space, and why doesn't docker automatically do so when images are removed?
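
    Not an answer to the shrinking question, but a way to see how much of the thin pool is actually in use, as opposed to the loopback file's apparent size (the pool name is taken from the dmsetup ls output above):

    ls -lsh /var/lib/docker/devicemapper/devicemapper/data   # allocated size vs apparent size
    dmsetup status docker-252:0-131308-pool                  # used/total metadata and data blocks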

  • Unable to remove a stopped container: `device or resource busy`

    Apologies if this is a duplicate issue; there seem to be several outstanding issues around a very similar error message but under different conditions. I initially added a comment on #21969 and was told to open a separate ticket, so here it is!


    BUG REPORT INFORMATION

    Output of docker version:

    Client:
     Version:      1.11.0
     API version:  1.23
     Go version:   go1.5.4
     Git commit:   4dc5990
     Built:        Wed Apr 13 18:34:23 2016
     OS/Arch:      linux/amd64
    
    Server:
     Version:      1.11.0
     API version:  1.23
     Go version:   go1.5.4
     Git commit:   4dc5990
     Built:        Wed Apr 13 18:34:23 2016
     OS/Arch:      linux/amd64
    

    Output of docker info:

    Containers: 2
     Running: 2
     Paused: 0
     Stopped: 0
    Images: 51
    Server Version: 1.11.0
    Storage Driver: aufs
     Root Dir: /var/lib/docker/aufs
     Backing Filesystem: extfs
     Dirs: 81
     Dirperm1 Supported: false
    Logging Driver: json-file
    Cgroup Driver: cgroupfs
    Plugins:
     Volume: local
     Network: bridge null host
    Kernel Version: 3.13.0-74-generic
    Operating System: Ubuntu 14.04.3 LTS
    OSType: linux
    Architecture: x86_64
    CPUs: 1
    Total Memory: 3.676 GiB
    Name: ip-10-1-49-110
    ID: 5GAP:SPRQ:UZS2:L5FP:Y4EL:RR54:R43L:JSST:ZGKB:6PBH:RQPO:PMQ5
    Docker Root Dir: /var/lib/docker
    Debug mode (client): false
    Debug mode (server): false
    Registry: https://index.docker.io/v1/
    WARNING: No swap limit support
    

    Additional environment details (AWS, VirtualBox, physical, etc.):

    Running on Ubuntu 14.04.3 LTS HVM in AWS on an m3.medium instance with an EBS root volume.

    Steps to reproduce the issue:

    1. $ docker run --restart on-failure --log-driver syslog --log-opt syslog-address=udp://localhost:514 -d -p 80:80 -e SOME_APP_ENV_VAR myimage
    2. Container keeps shutting down and restarting due to a bug in the runtime and exiting with an error
    3. Manually running docker stop container
    4. Container is successfully stopped
    5. Trying to rm container then throws the error: Error response from daemon: Driver aufs failed to remove root filesystem 88189a16be60761a2c04a455206650048e784d750533ce2858bcabe2f528c92e: rename /var/lib/docker/aufs/diff/a48629f102d282572bb5df964eeec7951057b50f21df7abe162f8de386e76dc0 /var/lib/docker/aufs/diff/a48629f102d282572bb5df964eeec7951057b50f21df7abe162f8de386e76dc0-removing: device or resource busy
    6. Restart docker engine: $ sudo service docker restart
    7. $ docker ps -a shows that the container no longer exists.
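
    A diagnostic sketch (not a fix) for the device or resource busy error in step 5: the usual cause is another process whose mount namespace still holds a mount under the container's filesystem, and the holder can be located by searching every process's mountinfo for the layer ID from the rename path in the error message:

    # run as root
    ID=a48629f102d282572bb5df964eeec7951057b50f21df7abe162f8de386e76dc0
    grep -l "$ID" /proc/*/mountinfo    # prints /proc/<pid>/mountinfo for each holder
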
  • [23.0 backport] Update delve version

    • backport of https://github.com/moby/moby/pull/44713
    • backport of https://github.com/moby/moby/pull/44600

    (cherry picked from commit ad8804885c3d2a1576b51e8aad92f061a3296299)

  • [20.10] Revert "seccomp: block socket calls to AF_VSOCK in default profile"

    As discussed in last week's maintainer call.


    This reverts commit 57b229012a5b5ff97889ae44c9b6fa77ba9b3a5c.

    This change, while favorable from a security standpoint, caused a regression for users of the 20.10 branch of Moby. As such, we are reverting it to ensure stability and compatibility for the affected users.

    However, users of AF_VSOCK in containers should recognize that this (special) address family is not currently namespaced in any version of the Linux kernel, and may result in unexpected behavior, like VMs communicating directly with host hypervisors.

    Future branches, including the 23.0 branch, will continue to filter AF_VSOCK. Users who need to allow containers to communicate over the unnamespaced AF_VSOCK will need to turn off seccomp confinement or set a custom seccomp profile.
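
    A sketch of those two options for users on the affected branches (the profile path is a placeholder; a custom profile would typically start from a copy of the default profile with the AF_VSOCK rule relaxed):

    # Option 1: disable seccomp confinement for the container entirely
    docker run --security-opt seccomp=unconfined myimage

    # Option 2: supply a custom profile that permits AF_VSOCK socket calls
    docker run --security-opt seccomp=/path/to/custom-profile.json myimage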

    It is our hope that future mechanisms will make this more ergonomic/maintainable for end users, and that future kernels will support namespacing of AF_VSOCK.

    Closes moby/moby#44670.

  • api/types/container: add RestartPolicyMode type and enum

    had this stashed locally

    • follow-up to https://github.com/moby/moby/pull/44379

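    For context, the modes such a type would enumerate correspond to the restart-policy values already accepted on the CLI and API; a quick sketch of the four values (not the PR's code):

    docker run -d --restart=no             busybox sleep 600
    docker run -d --restart=always         busybox sleep 600
    docker run -d --restart=on-failure:3   busybox sleep 600
    docker run -d --restart=unless-stopped busybox sleep 600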
