Docker Swarm Ingress service based on OpenResty with automatic Let's Encrypt SSL provisioning

Ingress Service for Docker Swarm


Swarm Ingress OpenResty is an ingress service for Docker in Swarm mode that makes deploying microservices easy. It configures itself automatically and dynamically using service labels.


Features

  • No external load balancer or config files needed, making for easy deployments
  • Integrated TLS decryption for services which provide a certificate and key
  • Automatic service discovery and load balancing handled by Docker
  • Scaled and maintained by the Swarm for high resilience and performance
  • On-the-fly SSL registration and renewal

SSL registration and renewal

This OpenResty plugin automatically and transparently issues SSL certificates from Let's Encrypt as requests are received, using the lua-resty-auto-ssl plugin. It works as follows:

  • An SSL request for an SNI hostname is received.
  • If the system already has a SSL certificate for that domain, it is immediately returned (with OCSP stapling).
  • If the system does not yet have an SSL certificate for this domain, it issues a new SSL certificate from Let's Encrypt. Domain validation is handled for you. After receiving the new certificate (usually within a few seconds), the new certificate is saved, cached, and returned to the client (without dropping the original request).
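You can observe this behaviour from a client with openssl (the hostname below is a placeholder for a domain already pointed at the ingress service):

```shell
# The first request for a new SNI hostname triggers certificate issuance
# (usually a few seconds); later requests return the cached certificate.
openssl s_client -connect my-service.company.tld:443 \
  -servername my-service.company.tld </dev/null 2>/dev/null \
  | openssl x509 -noout -issuer -dates
```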

Run the Service

The Ingress service acts as a reverse proxy in your cluster. It exposes ports 80 and 443 to the public and redirects all requests to the correct service in the background. It is important that the ingress service can reach the other services via the Swarm network (that means they must share a network).

docker service create --name ingress \
  --network ingress-routing \
  -p 80:80 \
  -p 443:443 \
  --mount type=bind,source=/var/run/docker.sock,destination=/var/run/docker.sock \
  --constraint node.role==manager \
  opcycle/swarm-ingress

It is important to mount the Docker socket; otherwise the service cannot update its configuration.

The ingress service should be scaled to multiple nodes to prevent short outages when a node running the ingress service becomes unresponsive (use --replicas X when creating the service).
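Assuming the service was created as shown above, scaling it out is a one-liner; both standard Docker CLI commands below are equivalent:

```shell
# Run three ingress replicas spread across the manager nodes:
docker service scale ingress=3

# Equivalent form:
docker service update --replicas 3 ingress
```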

Register a Service for Ingress

A service can easily be configured for ingress: simply provide the label ingress.host, which determines the hostname under which the service should be publicly available.

Configuration Labels

In addition to the hostname, you can also map a different port and path for your service. By default, a request is proxied to http://service-name:80/.

| Label | Required | Default | Description |
|-----------------------|----------|---------|-------------|
| ingress.host | yes | - | The hostname to map to the service; setting this label enables ingress. Multiple domains are supported via ingress.host0 .. ingress.hostN. |
| ingress.port | no | 80 | The port on which the service listens inside the cluster. |
| ingress.path | no | / | An optional path prefix applied when routing requests to the service. |
| ingress.ssl | no | - | Enable SSL provisioning for the host. |
| ingress.ssl_redirect | no | - | Enable automatic redirect from HTTP to HTTPS. |
| ingress.max_body_size | no | 10m | Maximum request body size. |
| ingress.proxy_timeout | no | 600 | Proxy timeout in seconds. |
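As a sketch of how these labels combine (image name, hostname, and path are placeholders):

```shell
# Expose a hypothetical API service on api.company.tld, proxying requests
# to port 8080 under the /v1 prefix and allowing request bodies up to 50m.
docker service create --name api \
  --network ingress-routing \
  --label ingress.host=api.company.tld \
  --label ingress.port=8080 \
  --label ingress.path=/v1 \
  --label ingress.max_body_size=50m \
  my-api-image
```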

Run a Service with Enabled Ingress

The service which should be used for ingress must share a network with the ingress service. A good way to do so is to create a common overlay network, ingress-routing (docker network create --driver overlay ingress-routing).
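The shared network only needs to be created once:

```shell
# Create the overlay network that the ingress service and all exposed
# services will attach to.
docker network create --driver overlay ingress-routing

# Verify it exists:
docker network ls --filter name=ingress-routing
```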

To start a service with ingress simply pass the required labels on creation.

docker service create --name my-service \
  --network ingress-routing \
  --label ingress.host=my-service.company.tld \
  --label ingress.ssl=yes \
  --label ingress.ssl_redirect=yes \
  nginx

It is also possible to add an existing service to ingress later using docker service update.

docker service update \
  --label-add ingress.host=my-service.company.tld \
  --label-add ingress.port=8080 \
  my-service

Contributing

We'd love for you to contribute to this container. You can request new features by creating an issue, or submit a pull request with your contribution.

Issues

If you encounter a problem running this container, you can file an issue. For us to provide better support, be sure to include the following information in your issue:

  • Host OS and version
  • Docker version
  • Output of docker info
  • Version of this container
  • The command you used to run the container, and any relevant output you saw (masking any sensitive information)
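The following standard Docker commands collect most of that information (the service name ingress matches the run example above):

```shell
docker version                           # Docker version
docker info                              # daemon and Swarm details
docker service inspect ingress --pretty  # how the service was started
docker service logs --tail 100 ingress   # recent output (mask secrets)
```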
Comments
  • Multiple replicas and Let's Encrypt

    When using multiple replicas of the ingress controller, it seems the well-known challenge data is not replicated between the containers, so Let's Encrypt gets an HTTP 404 when trying to validate the /.well-known/ path.

    Do the containers in fact try to replicate the files, or do I have to mount a shared volume between them?
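    One possible workaround, sketched below: lua-resty-auto-ssl keeps certificates and challenge files on disk, so sharing that directory between replicas (e.g. via an NFS-backed named volume) should let every replica answer the validation request. The path /etc/resty-auto-ssl and the NFS details are assumptions; check where this image actually stores its auto-ssl state.

```shell
# Hypothetical: share the auto-ssl state directory across replicas through
# an NFS-backed named volume (server address and export path are examples).
docker service create --name ingress \
  --network ingress-routing \
  -p 80:80 -p 443:443 \
  --mount type=bind,source=/var/run/docker.sock,destination=/var/run/docker.sock \
  --mount type=volume,source=autossl,destination=/etc/resty-auto-ssl,volume-driver=local,volume-opt=type=nfs,volume-opt=device=:/export/autossl,volume-opt=o=addr=10.0.0.10 \
  --replicas 2 \
  --constraint node.role==manager \
  opcycle/swarm-ingress
```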

  • host not found in upstream

    Sometimes the OpenResty container comes up before the upstream containers, which results in the following error:

    [root@1f9be2668aa7 conf.d]# nginx -t
    2022/07/21 01:43:42 [emerg] 524#524: host not found in upstream "myapp" in /usr/local/openresty/nginx/conf/conf.d/proxy.conf:146
    nginx: [emerg] host not found in upstream "myapp_portal" in /usr/local/openresty/nginx/conf/conf.d/proxy.conf:146
    nginx: configuration file /usr/local/openresty/nginx/conf/nginx.conf test failed
    

    Even after the upstream containers have started correctly, OpenResty continues to serve errors.

    The problem may lie in the resolver parameter of the host configuration:

      location / {
        resolver 127.0.0.11;
        
        proxy_send_timeout 600;
        proxy_read_timeout 600;
        proxy_connect_timeout 600;
    
        proxy_pass http://myapp_portal:80/;
      }
    

    Two possible solutions:

    1. Hard-code the resolver cache time, for example resolver 127.0.0.11 valid=30s;.
    2. Extend the code base so the valid parameter can be set from an environment variable.

    See also:

    • https://nginx.org/ru/docs/http/ngx_http_core_module.html#resolver