Moby Project - a collaborative project for the container ecosystem to assemble container-based systems

The Moby Project

Moby Project logo

Moby is an open-source project created by Docker to enable and accelerate software containerization.

It provides a "Lego set" of toolkit components, the framework for assembling them into custom container-based systems, and a place for all container enthusiasts and professionals to experiment and exchange ideas. Components include container build tools, a container registry, orchestration tools, a runtime and more, and these can be used as building blocks in conjunction with other tools and projects.

Principles

Moby is an open project guided by strong principles, aiming to be modular, flexible and without too strong an opinion on user experience. It is open to the community to help set its direction.

  • Modular: the project includes lots of components that have well-defined functions and APIs that work together.
  • Batteries included but swappable: Moby includes enough components to build a fully featured container system, but its modular architecture ensures that most of the components can be swapped out for different implementations.
  • Usable security: Moby provides secure defaults without compromising usability.
  • Developer focused: The APIs are intended to be functional and useful for building powerful tools. They are not necessarily intended as end-user tools but as components aimed at developers. Documentation and UX are aimed at developers, not end users.

Audience

The Moby Project is intended for engineers, integrators and enthusiasts looking to modify, hack, fix, experiment, invent and build systems based on containers. It is not for people looking for a commercially supported system, but for people who want to work and learn with open source code.

Relationship with Docker

The components and tools in the Moby Project are initially the open source components that Docker and the community have built for the Docker Project. New projects can be added if they fit with the community goals. Docker is committed to using Moby as the upstream for the Docker Product. However, other projects are also encouraged to use Moby as an upstream, and to reuse the components in diverse ways, and all these uses will be treated in the same way. External maintainers and contributors are welcomed.

The Moby project is not intended as a location for support or feature requests for Docker products, but as a place for contributors to work on open source code, fix bugs, and make the code more useful. The releases are supported by the maintainers, community and users, on a best efforts basis only, and are not intended for customers who want enterprise or commercial support; Docker EE is the appropriate product for these use cases.


Legal

Brought to you courtesy of our legal counsel. For more context, please see the NOTICE document in this repo.

Use and transfer of Moby may be subject to certain restrictions by the United States and other governments.

It is your responsibility to ensure that your use and/or transfer does not violate applicable laws.

For more information, please see https://www.bis.doc.gov

Licensing

Moby is licensed under the Apache License, Version 2.0. See LICENSE for the full license text.

Comments
  • docker fails to mount the block device for the container on devicemapper

    When running something like for i in {0..100}; do docker run busybox echo test; done with Docker running on devicemapper, errors are thrown and containers fail to run:

    2014/02/10 9:48:42 Error: start: Cannot start container 56bee8c4da5bd5641fc42405c742083b418ca14ddfb4a3e632955e236e23c284: Error getting container 56bee8c4da5bd5641fc42405c742083b418ca14ddfb4a3e632955e236e23c284 from driver devicemapper: Error mounting '/dev/mapper/docker-8:1-4980769-56bee8c4da5bd5641fc42405c742083b418ca14ddfb4a3e632955e236e23c284' on '/var/lib/docker/devicemapper/mnt/56bee8c4da5bd5641fc42405c742083b418ca14ddfb4a3e632955e236e23c284': no such file or directory
    2014/02/10 9:48:42 Error: start: Cannot start container b90b4385778142aab5251846460008e5c4eb9fe1e7ec82f07d06f1de823bd914: Error getting container b90b4385778142aab5251846460008e5c4eb9fe1e7ec82f07d06f1de823bd914 from driver devicemapper: Error mounting '/dev/mapper/docker-8:1-4980769-b90b4385778142aab5251846460008e5c4eb9fe1e7ec82f07d06f1de823bd914' on '/var/lib/docker/devicemapper/mnt/b90b4385778142aab5251846460008e5c4eb9fe1e7ec82f07d06f1de823bd914': no such file or directory
    2014/02/10 9:48:43 Error: start: Cannot start container ca53b3b21c92ffb17ad15c1088be293260ea240abdf25db7e5aadc11517cf93c: Error getting container ca53b3b21c92ffb17ad15c1088be293260ea240abdf25db7e5aadc11517cf93c from driver devicemapper: Error mounting '/dev/mapper/docker-8:1-4980769-ca53b3b21c92ffb17ad15c1088be293260ea240abdf25db7e5aadc11517cf93c' on '/var/lib/docker/devicemapper/mnt/ca53b3b21c92ffb17ad15c1088be293260ea240abdf25db7e5aadc11517cf93c': no such file or directory
    test
    2014/02/10 9:48:43 Error: start: Cannot start container 1e1e06044711e73cede8ede10547de7e270c33fac7ad5e60a8cb23246950adf3: Error getting container 1e1e06044711e73cede8ede10547de7e270c33fac7ad5e60a8cb23246950adf3 from driver devicemapper: Error mounting '/dev/mapper/docker-8:1-4980769-1e1e06044711e73cede8ede10547de7e270c33fac7ad5e60a8cb23246950adf3' on '/var/lib/docker/devicemapper/mnt/1e1e06044711e73cede8ede10547de7e270c33fac7ad5e60a8cb23246950adf3': no such file or directory
    

    Fedora 20 with kernel 3.12.9 doesn't seem to be affected.

    kernel version, distribution, docker info and docker version:

    3.11.0-15-generic #25~precise1-Ubuntu SMP Thu Jan 30 17:39:31 UTC 2014 x86_64 x86_64
    Ubuntu 12.04.4
     docker info
    Containers: 101
    Images: 44
    Driver: devicemapper
     Pool Name: docker-8:1-4980769-pool
     Data file: /var/lib/docker/devicemapper/devicemapper/data
     Metadata file: /var/lib/docker/devicemapper/devicemapper/metadata
     Data Space Used: 3234.9 Mb
     Data Space Total: 102400.0 Mb
     Metadata Space Used: 6.9 Mb
     Metadata Space Total: 2048.0 Mb
    
    Client version: 0.8.0-dev
    Go version (client): go1.2
    Git commit (client): 695719b
    Server version: 0.8.0-dev
    Git commit (server): 695719b
    Go version (server): go1.2
    Last stable version: 0.8.0
    

    The Docker binary is actually master with PR #4017 merged.

    /cc @alexlarsson

  • Proposal: Add support for build-time environment variables to the 'build' API

    A build-time environment variable is passed to the builder (as part of build API) and made available to the Dockerfile primitives for use in variable expansion and setting up the environment for the RUN primitive (without modifying the Dockerfile and persisting them as environment in the final image).

    The following simple example illustrates the feature:

    docker build --build-env usr=foo --build-env http_proxy="my.proxy.url" <<EOF
    FROM busybox
    USER ${usr:-bar}
    RUN git clone <my.private.repo.behind.a.proxy>
    EOF
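For context, a capability along these lines later shipped in Docker as the Dockerfile `ARG` instruction paired with `docker build --build-arg`. A minimal sketch of that form (the image and proxy values are placeholders, not from this proposal):

```dockerfile
FROM busybox
# Build-time variables: visible to RUN and variable expansion during the
# build, but not persisted as environment in the final image.
ARG usr=bar
ARG http_proxy
USER ${usr}
RUN git clone http://my.private.repo.behind.a.proxy/repo.git
```

Built with, e.g., `docker build --build-arg usr=foo --build-arg http_proxy="my.proxy.url" .` Note that build-arg values are still recorded in the image history (`docker history`), so this mechanism remains unsuitable for secrets, matching the caveat below.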
    

    Some of the use cases this PR enables are listed below (captured from comments in the PR's thread).

    [Edit: 05/22/2015] ~~A build-time environment variable gets used only while processing the 'RUN' primitive of a DockerFile. Such a variable is only accessible during 'RUN' and is 'not' persisted with the intermediate and final docker images, thereby addressing the portability concerns of the images generated with 'build'.~~

    This addresses issue #4962

    +++++++++ Edit: 05/21/2015, 06/26/2015

    This PR discussion thread has grown, bringing out the use cases that this PR serves well and those it doesn't. Below I consolidate a list of those use cases that have emerged from the comments, for ease of reference.

    There are two broad use cases that this feature enables:

    • passing build-environment-specific variables without modifying the Dockerfile or persisting them in the final image. A common use case is the proxy url (http_proxy, https_proxy...) ~~but this can be any other environment variable as well~~ but there are other cases as well, like this one: https://github.com/docker/docker/issues/14191#issuecomment-115672621.
      • related comments: https://github.com/docker/docker/pull/9176#issuecomment-72072046 https://github.com/docker/docker/pull/9176#issuecomment-104386863
    • parametrize builds.
      • related comments: https://github.com/docker/docker/pull/9176#issuecomment-99269827 https://github.com/docker/docker/pull/9176#issuecomment-75432026 https://github.com/docker/docker/issues/9731#issuecomment-77370381

    The following use case is not served well by this feature and hence is not recommended:

    • passing secrets with caching turned on:
      • related comments: https://github.com/docker/docker/pull/9176#issuecomment-101876406 https://github.com/docker/docker/pull/9176#issuecomment-99542089

    The following use cases might still be suitable with caching turned off: https://github.com/docker/docker/pull/9176#issuecomment-88278968 https://github.com/docker/docker/pull/9176#issuecomment-88377527

  • docker build should support privileged operations

    Currently there seems to be no way to run privileged operations outside of docker run -privileged.

    That means that I cannot do the same things in a Dockerfile. My recent issue: I'd like to run fuse (for encfs) inside of a container. Installing fuse is already a mess with hacks and ugly workarounds (see [1] and [2]), because mknod fails/isn't supported without a privileged build step.

    The only workaround right now is to do the installation manually, using run -privileged, and creating a new 'fuse base image'. This means that I cannot describe the whole container, from an official base image to the finished result, in a single Dockerfile.
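The two-step workaround described above can be sketched as follows (shown with the modern `--privileged` spelling; image and container names are illustrative):

```
# Run the steps that need extra capabilities in a one-off privileged container
docker run --privileged --name fuse-install ubuntu \
    sh -c 'apt-get update && apt-get install -y fuse'

# Freeze the result as a reusable 'fuse base image', then clean up
docker commit fuse-install fuse-base
docker rm fuse-install
```

Subsequent Dockerfiles can then start `FROM fuse-base`, at the cost of splitting the build description across a Dockerfile and an out-of-band script.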

    I'd therefore suggest adding either

    • a docker build -privileged
      this should do the same thing as run -privileged, i.e. removing all caps limitations

    or

    • a RUNP command in the Dockerfile
      this should .. well .. RUN, but with _P_rivileges

    I tried looking at the source, but I'm useless with go and couldn't find a decent entrypoint to attach a proof of concept, unfortunately. :(

    1: https://github.com/rogaha/docker-desktop/blob/master/Dockerfile#L40 2: https://github.com/dotcloud/docker/issues/514#issuecomment-22101217

  • New feature request: Selectively disable caching for specific RUN commands in Dockerfile

    branching off the discussion from #1384 :

    I understand -no-cache will disable caching for the entire Dockerfile. But it would be useful if I could disable the cache for a specific RUN command, for example when updating repos or downloading a remote file, etc. From my understanding, right now a cached RUN apt-get update wouldn't actually update the repo, which would cause the results to differ from those on a VM.

    If disabling caching for specific commands in the Dockerfile is made possible, would the subsequent commands in the file then not use the cache? Or would they do something a bit more intelligent - e.g. use the cache if the previous command produced the same results (fs layer) when compared to a previous run?
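For reference, a common workaround that emerged later uses a build argument purely as a cache-buster: changing its value invalidates the cache from that instruction onward (a sketch; `CACHEBUST` is just a conventional name, not a built-in):

```dockerfile
FROM ubuntu
# Layers above this line still use the cache.
ARG CACHEBUST=1
# Changing CACHEBUST at build time re-runs everything from here on.
RUN apt-get update
```

Invoked as, e.g., `docker build --build-arg CACHEBUST=$(date +%s) .` to force a fresh `apt-get update` on every build while keeping earlier layers cached.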

  • Document how to connect to Docker host from container

    I had some trouble figuring out how to connect to the Docker host from a container. I couldn't find documentation, but did find IRC logs saying something about using 172.16.42.1, which works.

    It'd be nice if this behavior and how it's related to docker0 was documented.
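Note that hardcoded addresses like 172.16.42.1 were specific to early Docker releases; the bridge address can instead be discovered from inside the container, since on the default bridge network the container's default gateway is the host side of docker0. A sketch of extracting it from a captured routing table (addresses are illustrative):

```shell
# Given a routing table as seen inside a container on the default bridge,
# the default gateway is the docker0 address on the host.
routes='default via 172.17.42.1 dev eth0
172.17.0.0/16 dev eth0  proto kernel  scope link  src 172.17.0.5'
printf '%s\n' "$routes" | awk '/^default/ {print $3; exit}'   # prints 172.17.42.1
```

In a live container the same extraction would run against `ip route` output (requires the iproute2 tools in the image).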

  • Docker 1.9.1 hanging at build step "Setting up ca-certificates-java"

    A few of us within the office upgraded to the latest version of docker toolbox backed by Docker 1.9.1 and builds are hanging as per the below build output.

    docker version:

     Version:      1.9.1
     API version:  1.21
     Go version:   go1.4.3
     Git commit:   a34a1d5
     Built:        Fri Nov 20 17:56:04 UTC 2015
     OS/Arch:      darwin/amd64
    
    Server:
     Version:      1.9.1
     API version:  1.21
     Go version:   go1.4.3
     Git commit:   a34a1d5
     Built:        Fri Nov 20 17:56:04 UTC 2015
     OS/Arch:      linux/amd64
    

    docker info:

    Containers: 10
    Images: 57
    Server Version: 1.9.1
    Storage Driver: aufs
     Root Dir: /mnt/sda1/var/lib/docker/aufs
     Backing Filesystem: extfs
     Dirs: 77
     Dirperm1 Supported: true
    Execution Driver: native-0.2
    Logging Driver: json-file
    Kernel Version: 4.1.13-boot2docker
    Operating System: Boot2Docker 1.9.1 (TCL 6.4.1); master : cef800b - Fri Nov 20 19:33:59 UTC 2015
    CPUs: 1
    Total Memory: 1.956 GiB
    Name: vbootstrap-vm
    ID: LLM6:CASZ:KOD3:646A:XPRK:PIVB:VGJ5:JSDB:ZKAN:OUC4:E2AK:FFTC
    Debug mode (server): true
     File Descriptors: 13
     Goroutines: 18
     System Time: 2015-11-24T02:03:35.597772191Z
     EventsListeners: 0
     Init SHA1: 
     Init Path: /usr/local/bin/docker
     Docker Root Dir: /mnt/sda1/var/lib/docker
    Labels:
     provider=virtualbox
    

    uname -a:

    Darwin JRedl-MB-Pro.local 15.0.0 Darwin Kernel Version 15.0.0: Sat Sep 19 15:53:46 PDT 2015; root:xnu-3247.10.11~1/RELEASE_X86_64 x86_64
    

    Here is a snippet from the docker build output that hangs on the Setting up ca-certificates-java line. Something to do with the latest version of docker and openjdk?

    update-alternatives: using /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/tnameserv to provide /usr/bin/tnameserv (tnameserv) in auto mode
    update-alternatives: using /usr/lib/jvm/java-7-openjdk-amd64/jre/lib/jexec to provide /usr/bin/jexec (jexec) in auto mode
    Setting up ca-certificates-java (20140324) ...
    

    Docker file example:

    FROM gcr.io/google_appengine/base
    
    # Prepare the image.
    ENV DEBIAN_FRONTEND noninteractive
    RUN apt-get update && apt-get install -y -qq --no-install-recommends build-essential wget curl unzip python python-dev php5-mysql php5-cli php5-cgi openjdk-7-jre-headless openssh-client python-openssl && apt-get clean
    

    I can confirm that this is not an issue with Docker 1.9.0 or Docker Toolbox 1.9.0d. Let me know if I can provide any further information but this feels like a regression of some sort within the new version.

  • Swarm is having occasional network connection problems between nodes.

    A few times a day I am having connection issues between nodes, and clients are seeing occasional "Bad request" errors. My swarm setup (aws) has the following services: nginx (global) and web (replicated=2), on a separate overlay network. In nginx.conf I am using proxy_pass http://web:5000 to route requests to the web service. Both services are running and marked as healthy, and haven't been restarted while having these errors. The manager is a separate node (30sec-manager1).

    A few times a day, for a few requests, I am receiving errors that nginx couldn't connect to the upstream, and I always see the 10.0.0.6 IP address mentioned:

    Here are related nginx and docker logs. Both web services are replicated on 30sec-worker3 and 30sec-worker4 nodes.

    Nginx log:
    ----------
    2017/03/29 07:13:18 [error] 7#7: *44944 connect() failed (113: Host is unreachable) while connecting to upstream, client: 104.154.58.95, server: 30seconds.com, request: "GET / HTTP/1.1", upstream: "http://10.0.0.6:5000/", host: "30seconds.com"
    
    Around same time from docker logs (journalctl -u docker.service)
    
    on node 30sec-manager1:
    ---------------------------
    Mar 29 07:12:50 30sec-manager1 docker[30365]: time="2017-03-29T07:12:50.736935344Z" level=warning msg="memberlist: Refuting a suspect message (from: 30sec-worker3-054c94d39b58)"
    Mar 29 07:12:54 30sec-manager1 docker[30365]: time="2017-03-29T07:12:54.659229055Z" level=info msg="memberlist: Marking 30sec-worker3-054c94d39b58 as failed, suspect timeout reached"
    Mar 29 07:12:54 30sec-manager1 docker[30365]: time="2017-03-29T07:12:54Z" level=info msg="Firewalld running: false"
    Mar 29 07:12:54 30sec-manager1 docker[30365]: time="2017-03-29T07:12:54Z" level=info msg="Firewalld running: false"
    Mar 29 07:12:54 30sec-manager1 docker[30365]: time="2017-03-29T07:12:54Z" level=info msg="Firewalld running: false"
    Mar 29 07:12:54 30sec-manager1 docker[30365]: time="2017-03-29T07:12:54Z" level=info msg="Firewalld running: false"
    Mar 29 07:12:54 30sec-manager1 docker[30365]: time="2017-03-29T07:12:54Z" level=info msg="Firewalld running: false"
    Mar 29 07:12:54 30sec-manager1 docker[30365]: time="2017-03-29T07:12:54Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:10 30sec-manager1 docker[30365]: time="2017-03-29T07:13:10.302960985Z" level=info msg="memberlist: Suspect 30sec-worker3-054c94d39b58 has failed, no acks received"
    Mar 29 07:13:11 30sec-manager1 docker[30365]: time="2017-03-29T07:13:11.055187819Z" level=warning msg="memberlist: Refuting a suspect message (from: 30sec-worker3-054c94d39b58)"
    Mar 29 07:13:14 30sec-manager1 docker[30365]: time="2017-03-29T07:13:14Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:14 30sec-manager1 docker[30365]: time="2017-03-29T07:13:14Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:14 30sec-manager1 docker[30365]: time="2017-03-29T07:13:14Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:14 30sec-manager1 docker[30365]: time="2017-03-29T07:13:14Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:14 30sec-manager1 docker[30365]: time="2017-03-29T07:13:14Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:14 30sec-manager1 docker[30365]: time="2017-03-29T07:13:14Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:14 30sec-manager1 docker[30365]: time="2017-03-29T07:13:14Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-manager1 docker[30365]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-manager1 docker[30365]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-manager1 docker[30365]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-manager1 docker[30365]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-manager1 docker[30365]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-manager1 docker[30365]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-manager1 docker[30365]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-manager1 docker[30365]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-manager1 docker[30365]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-manager1 docker[30365]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-manager1 docker[30365]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-manager1 docker[30365]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:17 30sec-manager1 docker[30365]: time="2017-03-29T07:13:17Z" level=info msg="Firewalld running: false"
    
    on node 30sec-worker3:
    -------------------------
    Mar 29 07:12:50 30sec-worker3 docker[30362]: time="2017-03-29T07:12:50.613402284Z" level=info msg="memberlist: Suspect 30sec-manager1-b1cbc10665cc has failed, no acks received"
    Mar 29 07:12:55 30sec-worker3 docker[30362]: time="2017-03-29T07:12:55.614174704Z" level=warning msg="memberlist: Refuting a dead message (from: 30sec-worker4-4ca6b1dcaa42)"
    Mar 29 07:13:09 30sec-worker3 docker[30362]: time="2017-03-29T07:13:09.613368306Z" level=info msg="memberlist: Suspect 30sec-worker4-4ca6b1dcaa42 has failed, no acks received"
    Mar 29 07:13:10 30sec-worker3 docker[30362]: time="2017-03-29T07:13:10.613972658Z" level=info msg="memberlist: Suspect 30sec-manager1-b1cbc10665cc has failed, no acks received"
    Mar 29 07:13:11 30sec-worker3 docker[30362]: time="2017-03-29T07:13:11.042788976Z" level=warning msg="memberlist: Refuting a suspect message (from: 30sec-worker4-4ca6b1dcaa42)"
    Mar 29 07:13:14 30sec-worker3 docker[30362]: time="2017-03-29T07:13:14.613951134Z" level=info msg="memberlist: Marking 30sec-worker4-4ca6b1dcaa42 as failed, suspect timeout reached"
    Mar 29 07:13:25 30sec-worker3 docker[30362]: time="2017-03-29T07:13:25.615128313Z" level=error msg="Bulk sync to node 30sec-manager1-b1cbc10665cc timed out"
    
    on node 30sec-worker4:
    -------------------------
    Mar 29 07:12:49 30sec-worker4 docker[30376]: time="2017-03-29T07:12:49.658082975Z" level=info msg="memberlist: Suspect 30sec-worker3-054c94d39b58 has failed, no acks received"
    Mar 29 07:12:54 30sec-worker4 docker[30376]: time="2017-03-29T07:12:54.658737367Z" level=info msg="memberlist: Marking 30sec-worker3-054c94d39b58 as failed, suspect timeout reached"
    Mar 29 07:12:54 30sec-worker4 docker[30376]: time="2017-03-29T07:12:54Z" level=info msg="Firewalld running: false"
    Mar 29 07:12:54 30sec-worker4 docker[30376]: time="2017-03-29T07:12:54Z" level=info msg="Firewalld running: false"
    Mar 29 07:12:54 30sec-worker4 docker[30376]: time="2017-03-29T07:12:54Z" level=info msg="Firewalld running: false"
    Mar 29 07:12:54 30sec-worker4 docker[30376]: time="2017-03-29T07:12:54Z" level=info msg="Firewalld running: false"
    Mar 29 07:12:54 30sec-worker4 docker[30376]: time="2017-03-29T07:12:54Z" level=info msg="Firewalld running: false"
    Mar 29 07:12:54 30sec-worker4 docker[30376]: time="2017-03-29T07:12:54Z" level=info msg="Firewalld running: false"
    Mar 29 07:12:54 30sec-worker4 docker[30376]: time="2017-03-29T07:12:54Z" level=info msg="Firewalld running: false"
    Mar 29 07:12:54 30sec-worker4 docker[30376]: time="2017-03-29T07:12:54Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:09 30sec-worker4 docker[30376]: time="2017-03-29T07:13:09.658056735Z" level=info msg="memberlist: Suspect 30sec-worker3-054c94d39b58 has failed, no acks received"
    Mar 29 07:13:16 30sec-worker4 docker[30376]: time="2017-03-29T07:13:16.303689665Z" level=warning msg="memberlist: Refuting a suspect message (from: 30sec-worker4-4ca6b1dcaa42)"
    Mar 29 07:13:16 30sec-worker4 docker[30376]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-worker4 docker[30376]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-worker4 docker[30376]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-worker4 docker[30376]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-worker4 docker[30376]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    
    syslog on 30sec-worker4:
    --------------------------
    Mar 29 07:12:49 30sec-worker4 docker[30376]: time="2017-03-29T07:12:49.658082975Z" level=info msg="memberlist: Suspect 30sec-worker3-054c94d39b58 has failed, no acks received"
    Mar 29 07:12:54 30sec-worker4 docker[30376]: time="2017-03-29T07:12:54.658737367Z" level=info msg="memberlist: Marking 30sec-worker3-054c94d39b58 as failed, suspect timeout reached"
    Mar 29 07:12:54 30sec-worker4 kernel: [645679.048975] IPVS: __ip_vs_del_service: enter
    Mar 29 07:12:54 30sec-worker4 docker[30376]: time="2017-03-29T07:12:54Z" level=info msg="Firewalld running: false"
    Mar 29 07:12:54 30sec-worker4 kernel: [645679.100691] IPVS: __ip_vs_del_service: enter
    Mar 29 07:12:54 30sec-worker4 kernel: [645679.130069] IPVS: __ip_vs_del_service: enter
    Mar 29 07:12:54 30sec-worker4 kernel: [645679.155859] IPVS: __ip_vs_del_service: enter
    Mar 29 07:12:54 30sec-worker4 kernel: [645679.180461] IPVS: __ip_vs_del_service: enter
    Mar 29 07:12:54 30sec-worker4 kernel: [645679.205707] IPVS: __ip_vs_del_service: enter
    Mar 29 07:12:54 30sec-worker4 kernel: [645679.230326] IPVS: __ip_vs_del_service: enter
    Mar 29 07:12:54 30sec-worker4 kernel: [645679.255597] IPVS: __ip_vs_del_service: enter
    Mar 29 07:12:54 30sec-worker4 docker[30376]: message repeated 7 times: [ time="2017-03-29T07:12:54Z" level=info msg="Firewalld running: false"]
    Mar 29 07:13:09 30sec-worker4 docker[30376]: time="2017-03-29T07:13:09.658056735Z" level=info msg="memberlist: Suspect 30sec-worker3-054c94d39b58 has failed, no acks received"
    Mar 29 07:13:16 30sec-worker4 docker[30376]: time="2017-03-29T07:13:16.303689665Z" level=warning msg="memberlist: Refuting a suspect message (from: 30sec-worker4-4ca6b1dcaa42)"
    Mar 29 07:13:16 30sec-worker4 docker[30376]: time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"
    Mar 29 07:13:16 30sec-worker4 docker[30376]: message repeated 7 times: [ time="2017-03-29T07:13:16Z" level=info msg="Firewalld running: false"]
    

    I checked other cases where nginx can't find the upstream, and these three lines consistently appear in the docker logs around those times:

    level=info msg="memberlist: Suspect 30sec-worker3-054c94d39b58 has failed, no acks received"
    level=warning msg="memberlist: Refuting a suspect message (from: 30sec-worker3-054c94d39b58)"
    level=warning msg="memberlist: Refuting a dead message (from: 30sec-worker3-054c94d39b58)"
    

    By searching other issues, found that these have similar errors, so it may be related: https://github.com/docker/docker/issues/28843 https://github.com/docker/docker/issues/25325

    Is there anything I should check or debug further to spot the problem, or is it a bug? Thank you.
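One rough way to quantify how often the gossip failure detector churns is to count memberlist events in a saved daemon log excerpt and correlate the spikes with the nginx upstream errors (a sketch; the sample lines mirror the journalctl output above, and the file path is illustrative):

```shell
# Count gossip failure-detector events in a daemon log excerpt.
cat > /tmp/docker-excerpt.log <<'EOF'
level=info msg="memberlist: Suspect 30sec-worker3-054c94d39b58 has failed, no acks received"
level=warning msg="memberlist: Refuting a suspect message (from: 30sec-worker3-054c94d39b58)"
level=warning msg="memberlist: Refuting a dead message (from: 30sec-worker4-4ca6b1dcaa42)"
level=info msg="Firewalld running: false"
EOF
grep -Ec 'memberlist: (Suspect|Refuting|Marking)' /tmp/docker-excerpt.log   # prints 3
```

On a live node the same grep would run over `journalctl -u docker.service` output instead of a saved excerpt.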

    Output of docker version:

    Client:
     Version:      17.03.0-ce
     API version:  1.26
     Go version:   go1.7.5
     Git commit:   60ccb22
     Built:        Thu Feb 23 11:02:43 2017
     OS/Arch:      linux/amd64
    
    Server:
     Version:      17.03.0-ce
     API version:  1.26 (minimum version 1.12)
     Go version:   go1.7.5
     Git commit:   60ccb22
     Built:        Thu Feb 23 11:02:43 2017
     OS/Arch:      linux/amd64
     Experimental: false
    

    Output of docker info:

    Containers: 18
     Running: 3
     Paused: 0
     Stopped: 15
    Images: 16
    Server Version: 17.03.0-ce
    Storage Driver: aufs
     Root Dir: /var/lib/docker/aufs
     Backing Filesystem: extfs
     Dirs: 83
     Dirperm1 Supported: true
    Logging Driver: json-file
    Cgroup Driver: cgroupfs
    Plugins:
     Volume: local
     Network: bridge host macvlan null overlay
    Swarm: active
     NodeID: ck99cyhgydt8y1zn8ik2xmcdv
     Is Manager: true
     ClusterID: in0q54eh74ljazrprt0vza3wj
     Managers: 1
     Nodes: 5
     Orchestration:
      Task History Retention Limit: 5
     Raft:
      Snapshot Interval: 10000
      Number of Old Snapshots to Retain: 0
      Heartbeat Tick: 1
      Election Tick: 3
     Dispatcher:
      Heartbeat Period: 5 seconds
     CA Configuration:
      Expiry Duration: 3 months
     Node Address: 172.31.31.146
     Manager Addresses:
      172.31.31.146:2377
    Runtimes: runc
    Default Runtime: runc
    Init Binary: docker-init
    containerd version: 977c511eda0925a723debdc94d09459af49d082a
    runc version: a01dafd48bc1c7cc12bdb01206f9fea7dd6feb70
    init version: 949e6fa
    Security Options:
     apparmor
     seccomp
      Profile: default
    Kernel Version: 4.4.0-57-generic
    Operating System: Ubuntu 16.04.1 LTS
    OSType: linux
    Architecture: x86_64
    CPUs: 1
    Total Memory: 990.6 MiB
    Name: 30sec-manager1
    ID: 5IIF:RONB:Y27Q:5MKX:ENEE:HZWM:XYBV:O6KN:BKL6:AEUK:2VKB:MO5P
    Docker Root Dir: /var/lib/docker
    Debug Mode (client): false
    Debug Mode (server): false
    Registry: https://index.docker.io/v1/
    WARNING: No swap limit support
    Labels:
     provider=amazonec2
    Experimental: false
    Insecure Registries:
     127.0.0.0/8
    Live Restore Enabled: false
    

    Additional environment details (AWS, VirtualBox, physical, etc.): Amazon AWS (Manager - t2.micro, rest of nodes - t2.small)

    docker-compose.yml (There are more services and nodes in setup, but I posted only involved ones)

    version: "3"
    
    services:
    
      nginx:
        image: 333435094895.dkr.ecr.us-east-1.amazonaws.com/swarm/nginx:latest
        ports:
          - 80:80
          - 81:81
        networks:
          - thirtysec
        depends_on:
          - web
        deploy:
          mode: global
          update_config:
            delay: 2s
            monitor: 2s
    
      web:
        image: 333435094895.dkr.ecr.us-east-1.amazonaws.com/swarm/os:latest
        command: sh -c "python manage.py collectstatic --noinput && daphne thirtysec.asgi:channel_layer -b 0.0.0.0 -p 5000"
        ports:
          - 5000:5000
        networks:
          - thirtysec
        deploy:
          mode: replicated
          replicas: 2
          labels: [APP=THIRTYSEC]
          update_config:
            delay: 15s
            monitor: 15s
          placement:
            constraints: [node.labels.aws_type == t2.small]
    
        healthcheck:
          test: goss -g deploy/swarm/checks/web-goss.yaml validate
          interval: 2s
          timeout: 3s
          retries: 15
    
    networks:
        thirtysec:
    

    web-goss.yaml

    port:
      tcp:5000:
        listening: true
        ip:
        - 0.0.0.0
    
  • Phase 1 implementation of user namespaces as a remapped container root

    This pull request is an initial implementation of user namespace support in the Docker daemon, which we are labeling a "phase 1" milestone with limited scope and capability; it will hopefully be available for use in Docker 1.7.

    The code is designed to support full uid and gid maps, but this implementation limits the scope of usage to a remap of just the root uid/gid to a non-privileged user on the host. This remapping is scoped at the Docker daemon level, so all containers running on a Docker daemon will have the same remapped uid/gid as root. See PR #11253 for an initial discussion on the design.

    Discussion of future, possibly more complex, phases should be separate from specific design/code review of this "phase 1" implementation--see the above-mentioned PR for discussions on more advanced use cases such as mapping complete uid/gid ranges per tenant in a multi-tenant environment.
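As a usage sketch, the daemon-level remap described here ultimately surfaced as the `--userns-remap` daemon flag (shipped in Docker 1.10; shown below with the modern `dockerd` binary name):

```
# Start the daemon with container root remapped to the unprivileged
# 'dockremap' user/group, allocated from /etc/subuid and /etc/subgid.
dockerd --userns-remap=default

# Inside a container, id still reports uid 0, but that uid maps to an
# unprivileged uid on the host, per the daemon-wide mapping.
docker run --rm busybox id -u
```

Because the remapping is per-daemon, every container on the daemon shares the same root mapping, exactly as this phase 1 design describes.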

  • flatten images - merge multiple layers into a single one

    There's no way to flatten images right now. When performing a build in multiple steps, a few images can be generated and a larger number of layers is produced. When these are pushed to the registry, a lot of data and a large number of layers have to be downloaded.

    There are some cases where one starts with a base image (or another image), changes some large files in one step, changes them again in the next and deletes them in the end. This means those files would be stored in 2 separate layers and deleted by whiteout files in the final image.

    These intermediary layers aren't necessarily useful to others or to the final deployment system.

    Image flattening should work like this:

    • the history of the build steps needs to be preserved
    • the flattening can be done up to a target image (for example, up to a base image)
    • the flattening should also be allowed to be done completely (as if exporting the image)
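    Until native support exists, a commonly suggested workaround approximates the "complete" flattening case by round-tripping a container's filesystem through export/import; note this loses build history and image metadata (CMD, ENV, ...), which is exactly the trade-off discussed above:

    ```shell
    # Workaround sketch: flatten an image into a single layer.
    # "myimage" is a placeholder for the image to flatten.
    docker create --name tmp myimage            # container from the image
    docker export tmp | docker import - myimage:flat
    docker rm tmp
    ```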
  • Device-mapper does not release free space from removed images

    Docker claims, via docker info, to have freed space after an image is deleted, but the data file retains its former size, and the sparse file allocated for the device-mapper storage backend will continue to grow without bound as more extents are allocated.

    I am using lxc-docker on Ubuntu 13.10:

    Linux ergodev-zed 3.11.0-14-generic #21-Ubuntu SMP Tue Nov 12 17:04:55 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
    

    This sequence of commands reveals the problem:

    Doing a docker pull stackbrew/ubuntu:13.10 increased the space usage reported by docker info. Before:

    Containers: 0
    Images: 0
    Driver: devicemapper
     Pool Name: docker-252:0-131308-pool
     Data file: /var/lib/docker/devicemapper/devicemapper/data
     Metadata file: /var/lib/docker/devicemapper/devicemapper/metadata
     Data Space Used: 291.5 Mb
     Data Space Total: 102400.0 Mb
     Metadata Space Used: 0.7 Mb
     Metadata Space Total: 2048.0 Mb
    WARNING: No swap limit support
    

    And after docker pull stackbrew/ubuntu:13.10:

    Containers: 0
    Images: 3
    Driver: devicemapper
     Pool Name: docker-252:0-131308-pool
     Data file: /var/lib/docker/devicemapper/devicemapper/data
     Metadata file: /var/lib/docker/devicemapper/devicemapper/metadata
     Data Space Used: 413.1 Mb
     Data Space Total: 102400.0 Mb
     Metadata Space Used: 0.8 Mb
     Metadata Space Total: 2048.0 Mb
    WARNING: No swap limit support
    

    And after docker rmi 8f71d74c8cfc, it returns:

    Containers: 0
    Images: 0
    Driver: devicemapper
     Pool Name: docker-252:0-131308-pool
     Data file: /var/lib/docker/devicemapper/devicemapper/data
     Metadata file: /var/lib/docker/devicemapper/devicemapper/metadata
     Data Space Used: 291.5 Mb
     Data Space Total: 102400.0 Mb
     Metadata Space Used: 0.7 Mb
     Metadata Space Total: 2048.0 Mb
    WARNING: No swap limit support
    

    The only problem is that the data file has expanded to 414 MiB (849016 512-byte sector blocks) per stat. Some of that space is properly reused after an image has been deleted, but the data file never shrinks. And under some mysterious condition (not yet reproducible) I have 291.5 MiB allocated that can't even be reused.
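    The non-shrinking data file is a property of sparse files in general: freed extents can be reused, but the file's apparent size never goes down. A quick stand-alone illustration, no Docker required (GNU coreutils assumed):

    ```shell
    # A sparse file reports a large apparent size while occupying
    # almost no disk blocks.
    truncate -s 100M sparse.img
    ls -lh sparse.img   # apparent size: 100M
    du -h sparse.img    # actual usage: ~0
    rm sparse.img
    ```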

    My dmsetup ls looks like this when there are 0 images installed:

    # dmsetup ls
    docker-252:0-131308-pool        (252:2)
    ergodev--zed--vg-root   (252:0)
    cryptswap       (252:1)
    

    And a du of the data file shows this:

    # du /var/lib/docker/devicemapper/devicemapper/data -h
    656M    /var/lib/docker/devicemapper/devicemapper/data
    

    How can I have docker reclaim space, and why doesn't docker automatically do this when images are removed?

  • Unable to remove a stopped container: `device or resource busy`

    Apologies if this is a duplicate issue, there seems to be several outstanding issues around a very similar error message but under different conditions. I initially added a comment on #21969 and was told to open a separate ticket, so here it is!


    BUG REPORT INFORMATION

    Output of docker version:

    Client:
     Version:      1.11.0
     API version:  1.23
     Go version:   go1.5.4
     Git commit:   4dc5990
     Built:        Wed Apr 13 18:34:23 2016
     OS/Arch:      linux/amd64
    
    Server:
     Version:      1.11.0
     API version:  1.23
     Go version:   go1.5.4
     Git commit:   4dc5990
     Built:        Wed Apr 13 18:34:23 2016
     OS/Arch:      linux/amd64
    

    Output of docker info:

    Containers: 2
     Running: 2
     Paused: 0
     Stopped: 0
    Images: 51
    Server Version: 1.11.0
    Storage Driver: aufs
     Root Dir: /var/lib/docker/aufs
     Backing Filesystem: extfs
     Dirs: 81
     Dirperm1 Supported: false
    Logging Driver: json-file
    Cgroup Driver: cgroupfs
    Plugins:
     Volume: local
     Network: bridge null host
    Kernel Version: 3.13.0-74-generic
    Operating System: Ubuntu 14.04.3 LTS
    OSType: linux
    Architecture: x86_64
    CPUs: 1
    Total Memory: 3.676 GiB
    Name: ip-10-1-49-110
    ID: 5GAP:SPRQ:UZS2:L5FP:Y4EL:RR54:R43L:JSST:ZGKB:6PBH:RQPO:PMQ5
    Docker Root Dir: /var/lib/docker
    Debug mode (client): false
    Debug mode (server): false
    Registry: https://index.docker.io/v1/
    WARNING: No swap limit support
    

    Additional environment details (AWS, VirtualBox, physical, etc.):

    Running on Ubuntu 14.04.3 LTS HVM in AWS on an m3.medium instance with an EBS root volume.

    Steps to reproduce the issue:

    1. $ docker run --restart on-failure --log-driver syslog --log-opt syslog-address=udp://localhost:514 -d -p 80:80 -e SOME_APP_ENV_VAR myimage
    2. The container keeps shutting down and restarting due to a bug in the runtime, exiting with an error
    3. Manually run docker stop container
    4. The container is successfully stopped
    5. Trying to rm the container then throws the error: Error response from daemon: Driver aufs failed to remove root filesystem 88189a16be60761a2c04a455206650048e784d750533ce2858bcabe2f528c92e: rename /var/lib/docker/aufs/diff/a48629f102d282572bb5df964eeec7951057b50f21df7abe162f8de386e76dc0 /var/lib/docker/aufs/diff/a48629f102d282572bb5df964eeec7951057b50f21df7abe162f8de386e76dc0-removing: device or resource busy
    6. Restart the docker engine: $ sudo service docker restart
    7. $ docker ps -a shows that the container no longer exists.
  • hack: restore copy_binaries func

    reported on community slack https://dockercommunity.slack.com/archives/C50QFMRC2/p1672827713572689

    - What I did

    The copy_binaries func was removed in https://github.com/moby/moby/pull/44546 (https://github.com/moby/moby/commit/8086f4012330d1c1058e07fc4e5e4522dd432c20#diff-04ff962b93ae3db9d1620183ecb03b37c366b4fa3cf189cf5ec2b646c44d432f) but is still useful for the dev environment.

    cc @akerouanton

    - How I did it

    - How to verify it

    - Description for the changelog

    - A picture of a cute animal (not mandatory but encouraged)

  • Updated outdated docker contributing guidelines link

    - What I did

    Updated the contributing guidelines link. I discovered the broken link when making a separate PR.

    - How I did it

    Found the equivalent link for the Docker contributing guidelines.

    - How to verify it

    Click the link.

    - Description for the changelog

    Updated contributing guidelines link in CONTRIBUTING.md since the old link was broken.

    - A picture of a cute animal (not mandatory but encouraged)

  • `GenerateRandomName` now panics when `size` is over 64. Fixes #44362

    Signed-off-by: Kirk Easterson [email protected]

    - What I did

    Added a check for size to GenerateRandomName and added a test for it.

    - How I did it

    Added a check for size to GenerateRandomName.

    - How to verify it

    Run the test TestGenerateRandomName in libnetwork/netutils/utils_linux_test.go, or TESTDIRS="./libnetwork/netutils/" make test-unit.

    - Description for the changelog

    netutils.GenerateRandomName now panics when the size argument is over 64
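    The guard described above can be sketched as follows. This is an illustrative stand-in, not the actual libnetwork/netutils code; the 64-character limit is assumed to come from the 64 hex characters a 32-byte random buffer can provide:

    ```go
    package main

    import (
    	"crypto/rand"
    	"encoding/hex"
    	"fmt"
    )

    // generateRandomName appends `size` random hex characters to prefix,
    // panicking when size exceeds the 64 characters available from the
    // 32-byte random buffer (sketch of the fix, not the real implementation).
    func generateRandomName(prefix string, size int) string {
    	if size > 64 {
    		panic(fmt.Sprintf("invalid size %d: maximum is 64", size))
    	}
    	b := make([]byte, 32) // 32 random bytes -> 64 hex characters
    	if _, err := rand.Read(b); err != nil {
    		panic(err)
    	}
    	return prefix + hex.EncodeToString(b)[:size]
    }

    func main() {
    	fmt.Println(len(generateRandomName("veth", 10))) // 4 + 10 = 14
    }
    ```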

    - A picture of a cute animal (not mandatory but encouraged)

  • Clear conntrack entries for published UDP ports

    - What I did

    Conntrack entries are created for UDP flows even if there's nowhere to route these packets (ie. no listening socket and no NAT rules to apply). Moreover, iptables NAT rules are evaluated by netfilter only when creating a new conntrack entry.

    When Docker adds NAT rules, netfilter ignores them for any packet matching a pre-existing conntrack entry. In that case, when dockerd runs with the userland proxy enabled, packets get routed to the proxy and the main symptom is a bad source IP address (as shown in #44688).

    If the publishing container runs under Docker Swarm, or in "standalone" Docker with the userland proxy disabled, affected packets are dropped (i.e. routed to nowhere).

    As such, Docker needs to flush all conntrack entries for published UDP ports to make sure NAT rules are correctly applied to all packets.
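    Conceptually, the flush is the equivalent of deleting conntrack entries scoped to the published UDP port whenever the port mapping is (re)created; done by hand it would look something like this (port 514 is an assumed example, not taken from the PR):

    ```shell
    # Delete stale UDP conntrack entries for a published port so the
    # freshly installed NAT rules are evaluated for subsequent packets.
    # Requires the conntrack(8) tool and root privileges.
    conntrack -D -p udp --dport 514
    ```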

    • Fixes #44688
    • Fixes #8795
    • Fixes #16720
    • Fixes #7540
    • Fixes moby/libnetwork#2423 and probably more.

    - How to verify it

    By running the repro case in #44688.

    As a side note: I tested the repro case with the current master branch, which includes #43409, but this fix doesn't work (at least not for that issue) as it's the equivalent of these commands:

    $ conntrack -D -f ipv4 --reply-src 172.17.0.2
    $ conntrack -D -f ipv4 --reply-dst 172.17.0.2
    

    - Description for the changelog

    Clear conntrack entries for published UDP ports.

  • Dockerfile: use default apt mirrors

    follow-up https://github.com/moby/moby/pull/44546#discussion_r1059776409

    - What I did

    Removes APT_MIRROR, added in https://github.com/moby/moby/pull/39537, as I don't think we need an alternative mirror anymore. Also removes BUILD_APT_MIRROR, added in https://github.com/moby/moby/pull/26375, which does not seem to be used.

    - How I did it

    - How to verify it

    - Description for the changelog

    - A picture of a cute animal (not mandatory but encouraged)

  • ci: build and push moby bundles on Docker Hub

    follow-up https://github.com/rumpl/moby/pull/24

    - What I did

    Adds a new workflow to build and push a non-runnable multi-platform image to Docker Hub (moby/moby-bin) that contains the bundles. This is useful if we want to try out the latest changes without building moby.

    https://hub.docker.com/r/crazymax/moby-bin/tags

    $ undock --rm-dist --all crazymax/moby-bin:latest ./moby-bin
    ...
    $ tree -nh ./moby-bin/
    [4.0K]  ./moby-bin/
    ├── [4.0K]  linux_amd64
    │   ├── [ 37M]  containerd
    │   ├── [8.1M]  containerd-shim-runc-v2
    │   ├── [ 18M]  ctr
    │   ├── [748K]  docker-init
    │   ├── [1.9M]  docker-proxy
    │   ├── [ 62M]  dockerd
    │   ├── [ 14K]  dockerd-rootless-setuptool.sh
    │   ├── [5.1K]  dockerd-rootless.sh
    │   ├── [ 11M]  rootlesskit
    │   ├── [6.9M]  rootlesskit-docker-proxy
    │   ├── [ 14M]  runc
    │   └── [ 32M]  vpnkit
    ├── [4.0K]  linux_arm64
    │   ├── [ 36M]  containerd
    │   ├── [7.9M]  containerd-shim-runc-v2
    │   ├── [ 17M]  ctr
    │   ├── [527K]  docker-init
    │   ├── [2.0M]  docker-proxy
    │   ├── [ 59M]  dockerd
    │   ├── [ 14K]  dockerd-rootless-setuptool.sh
    │   ├── [5.1K]  dockerd-rootless.sh
    │   ├── [ 10M]  rootlesskit
    │   ├── [6.6M]  rootlesskit-docker-proxy
    │   ├── [ 13M]  runc
    │   └── [ 38M]  vpnkit
    ├── [4.0K]  linux_armv5
    │   ├── [ 35M]  containerd
    │   ├── [7.8M]  containerd-shim-runc-v2
    │   ├── [ 17M]  ctr
    │   ├── [484K]  docker-init
    │   ├── [1.9M]  docker-proxy
    │   ├── [ 57M]  dockerd
    │   ├── [ 14K]  dockerd-rootless-setuptool.sh
    │   ├── [5.1K]  dockerd-rootless.sh
    │   ├── [ 10M]  rootlesskit
    │   ├── [6.8M]  rootlesskit-docker-proxy
    │   └── [ 13M]  runc
    ├── [4.0K]  linux_armv6
    │   ├── [ 35M]  containerd
    │   ├── [7.8M]  containerd-shim-runc-v2
    │   ├── [ 17M]  ctr
    │   ├── [484K]  docker-init
    │   ├── [1.9M]  docker-proxy
    │   ├── [ 57M]  dockerd
    │   ├── [ 14K]  dockerd-rootless-setuptool.sh
    │   ├── [5.1K]  dockerd-rootless.sh
    │   ├── [ 10M]  rootlesskit
    │   ├── [6.8M]  rootlesskit-docker-proxy
    │   └── [ 13M]  runc
    ├── [4.0K]  linux_armv7
    │   ├── [ 35M]  containerd
    │   ├── [7.8M]  containerd-shim-runc-v2
    │   ├── [ 17M]  ctr
    │   ├── [364K]  docker-init
    │   ├── [1.9M]  docker-proxy
    │   ├── [ 57M]  dockerd
    │   ├── [ 14K]  dockerd-rootless-setuptool.sh
    │   ├── [5.1K]  dockerd-rootless.sh
    │   ├── [ 10M]  rootlesskit
    │   ├── [6.8M]  rootlesskit-docker-proxy
    │   └── [ 12M]  runc
    ├── [4.0K]  linux_ppc64le
    │   ├── [ 36M]  containerd
    │   ├── [8.0M]  containerd-shim-runc-v2
    │   ├── [ 18M]  ctr
    │   ├── [843K]  docker-init
    │   ├── [1.9M]  docker-proxy
    │   ├── [ 60M]  dockerd
    │   ├── [ 14K]  dockerd-rootless-setuptool.sh
    │   ├── [5.1K]  dockerd-rootless.sh
    │   ├── [ 10M]  rootlesskit
    │   ├── [6.7M]  rootlesskit-docker-proxy
    │   └── [ 13M]  runc
    ├── [4.0K]  linux_s390x
    │   ├── [ 39M]  containerd
    │   ├── [8.6M]  containerd-shim-runc-v2
    │   ├── [ 19M]  ctr
    │   ├── [615K]  docker-init
    │   ├── [2.0M]  docker-proxy
    │   ├── [ 64M]  dockerd
    │   ├── [ 14K]  dockerd-rootless-setuptool.sh
    │   ├── [5.1K]  dockerd-rootless.sh
    │   ├── [ 11M]  rootlesskit
    │   ├── [7.1M]  rootlesskit-docker-proxy
    │   └── [ 14M]  runc
    └── [4.0K]  windows_amd64
        ├── [103K]  containerutility.exe
        ├── [2.1M]  docker-proxy.exe
        └── [ 57M]  dockerd.exe
    
    8 directories, 82 files
    

    - How I did it

    - How to verify it

    $ docker buildx bake bin-image-cross
    

    - Description for the changelog

    - A picture of a cute animal (not mandatory but encouraged)

    cc @rumpl @vvoland
