Build, share, and run your distributed applications.


What is sealer

Build distributed applications, share them with anyone, and run them anywhere!


sealer [ˈsiːlər] provides a way to package and deliver distributed applications based on kubernetes.

It solves the delivery problem of complex applications by packaging distributed applications and their dependencies (such as databases and middleware) together.

Concept

  • CloudImage: like a Docker image, but its rootfs is a kubernetes cluster, and it contains all the dependencies (docker images, yaml files, helm charts...) your application needs.
  • Kubefile: the file that describes how to build a CloudImage.
  • Clusterfile: the configuration for running a cluster from a CloudImage.


We can write a Kubefile and build a CloudImage, then use a Clusterfile to run a cluster.

For example, build a dashboard CloudImage:

Kubefile:

# The base CloudImage contains all the files needed to run a kubernetes cluster:
#    1. kubernetes components such as kubectl, kubeadm, kubelet, and the apiserver images ...
#    2. a docker engine and a private registry
#    3. config files, yaml manifests, static files, scripts ...
FROM registry.cn-qingdao.aliyuncs.com/sealer-io/cloudrootfs:v1.16.9-alpha.6
# download the kubernetes dashboard yaml file
RUN wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml
# when this CloudImage runs, apply the dashboard manifest
CMD kubectl apply -f recommended.yaml

Build the dashboard CloudImage:

sealer build -t registry.cn-qingdao.aliyuncs.com/sealer-io/dashboard:latest .

Run a kubernetes cluster with the dashboard:

# sealer will install kubernetes on host 192.168.0.2, then apply the dashboard manifest
sealer run registry.cn-qingdao.aliyuncs.com/sealer-io/dashboard:latest --masters 192.168.0.2 --passwd xxx
# check the pod
kubectl get pod -A|grep dashboard

Push the CloudImage to the registry

# you can push the CloudImage to Docker Hub, Alibaba ACR, or Harbor
sealer push registry.cn-qingdao.aliyuncs.com/sealer-io/dashboard:latest
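
On another host, the image can then be fetched and run; a minimal sketch, assuming a `sealer pull` counterpart to `push`:

# fetch the CloudImage on another machine
sealer pull registry.cn-qingdao.aliyuncs.com/sealer-io/dashboard:latest
# then run it exactly as above
sealer run registry.cn-qingdao.aliyuncs.com/sealer-io/dashboard:latest --masters 192.168.0.2 --passwd xxx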

Usage scenarios & features

  • An extremely simple way to install kubernetes and other software from the kubernetes ecosystem in a production or offline environment.
  • Through a Kubefile, you can easily customize a kubernetes CloudImage that packages the cluster and applications together, and push it to a registry.
  • Powerful lifecycle management: cluster upgrade, cluster backup and recovery, and node scale-out and scale-in, all in remarkably simple ways (see the sketch after this list).
  • Very fast: a complete cluster installation within 3 minutes.
  • Supports ARM and x86; v1.20 and above support containerd; compatible with almost all Linux distributions that use systemd.
  • No dependency on ansible, haproxy, or keepalived; high availability is achieved through ipvs, which uses fewer resources and is stable and reliable.
  • The official repository offers images for many ecosystem software packages that can be used directly, with all dependencies included, for one-click installation.
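
As a sketch of the lifecycle operations above, scaling a running cluster could look like the following; `sealer join` and `sealer delete` appear in the project's docs, but the exact flags here are an assumption:

# scale out: add two worker nodes to the running cluster
sealer join --nodes 192.168.0.5,192.168.0.6
# scale in: remove one of them again (flags assumed symmetric with join)
sealer delete --nodes 192.168.0.6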

Quick start

Install a kubernetes cluster

sealer run kubernetes:v1.19.2 --masters 192.168.0.2

If installing on the cloud:

export ACCESSKEYID=xxx
export ACCESSKEYSECRET=xxx
sealer run registry.cn-qingdao.aliyuncs.com/sealer-io/dashboard:latest

Or specify the number of nodes to run the cluster:

sealer run registry.cn-qingdao.aliyuncs.com/sealer-io/dashboard:latest \
  --masters 3 --nodes 3
[root@iZm5e42unzb79kod55hehvZ ~]# kubectl get node
NAME                      STATUS   ROLES    AGE   VERSION
izm5e42unzb79kod55hehvz   Ready    master   18h   v1.16.9
izm5ehdjw3kru84f0kq7r7z   Ready    master   18h   v1.16.9
izm5ehdjw3kru84f0kq7r8z   Ready    master   18h   v1.16.9
izm5ehdjw3kru84f0kq7r9z   Ready    <none>   18h   v1.16.9
izm5ehdjw3kru84f0kq7raz   Ready    <none>   18h   v1.16.9
izm5ehdjw3kru84f0kq7rbz   Ready    <none>   18h   v1.16.9

View the default startup configuration of the CloudImage:

sealer config registry.cn-qingdao.aliyuncs.com/sealer-io/dashboard:latest

Use a Clusterfile to set up a k8s cluster

Scenario 1. Install on existing servers; the provider type is BAREMETAL.

Clusterfile content:

apiVersion: sealer.aliyun.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster
spec:
  image: registry.cn-qingdao.aliyuncs.com/sealer-io/cloudrootfs:v1.16.9-alpha.5
  provider: BAREMETAL
  ssh:
    passwd:
    pk: xxx
    pkPasswd: xxx
    user: root
  network:
    interface: eth0
    cniName: calico
    podCIDR: 100.64.0.0/10
    svcCIDR: 10.96.0.0/22
    withoutCNI: false
  certSANS:
    - aliyun-inc.com
    - 10.0.0.2

  masters:
    ipList:
      - 172.20.125.234
      - 172.20.126.5
      - 172.20.126.6
  nodes:
    ipList:
      - 172.20.126.8
      - 172.20.126.9
      - 172.20.126.10
[root@iZm5e42unzb79kod55hehvZ ~]# sealer apply -f Clusterfile
[root@iZm5e42unzb79kod55hehvZ ~]# kubectl get node

Scenario 2. Automatically provision Alibaba Cloud servers for installation; the provider type is ALI_CLOUD. Clusterfile:

apiVersion: sealer.aliyun.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster
spec:
  image: registry.cn-qingdao.aliyuncs.com/sealer-io/cloudrootfs:v1.16.9-alpha.5
  provider: ALI_CLOUD
  ssh:
    passwd:
    pk: xxx
    pkPasswd: xxx
    user: root
  network:
    interface: eth0
    cniName: calico
    podCIDR: 100.64.0.0/10
    svcCIDR: 10.96.0.0/22
    withoutCNI: false
  certSANS:
    - aliyun-inc.com
    - 10.0.0.2

  masters:
    cpu: 4
    memory: 4
    count: 3
    systemDisk: 100
    dataDisks:
      - 100
  nodes:
    cpu: 4
    memory: 4
    count: 3
    systemDisk: 100
    dataDisks:
      - 100

Clean up the cluster

Basic cluster settings are written to the Clusterfile and stored at /root/.sealer/[cluster-name]/Clusterfile.

sealer delete -f /root/.sealer/my-cluster/Clusterfile
Comments
  • feat: support ipvs vip overwriting

    Describe what this PR does / why we need it

    We can overwrite the default ipvs VIP through the cluster env (see the sketch below).

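    A rough sketch of what this could look like in a Clusterfile; the env list follows sealer's Clusterfile conventions, but the IPvsVIP key name is a hypothetical placeholder, not a confirmed setting:

    spec:
      image: registry.cn-qingdao.aliyuncs.com/sealer-io/dashboard:latest
      env:
        - IPvsVIP=10.103.97.3   # hypothetical key; overrides the default ipvs VIP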

  • Sealer apply is not really an apply operation

    What happened?

    Re-executing sealer apply with a kube-installer image only checks the cluster nodes; the relevant parts of the application are not checked. Imagine a scenario:

    1. I run a kube-installer image that contains APP1 v1
    2. I run an app-installer image that contains APP1 v2
    3. I re-run the kube-installer image from step 1 again, what should happen?


    What version of Sealer are you using?

    sealer 0.9.0

  • Customize kubernetes manifests

    Is there any plan to support customizing the /etc/kubernetes/manifests/ files on the master nodes? The default output files contain unreasonable configurations; for example, the kube-controller-manager and kube-scheduler settings lead to monitoring metrics collection errors or TLS-related security issues. We currently fix these configurations by modifying them manually.
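
    For illustration, the kind of manual fix described above usually means editing the static pod manifests on each master; the --bind-address change below is a common fix for metrics collection, shown as an example rather than a confirmed sealer feature:

    # on each master node, edit the static pod manifest
    vi /etc/kubernetes/manifests/kube-scheduler.yaml
    #   change:  --bind-address=127.0.0.1
    #   to:      --bind-address=0.0.0.0   # exposes the metrics endpoint for scraping
    # kubelet watches this directory and restarts the static pod automatically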

  • Update readme

    Describe what this PR does / why we need it

    1. Update readme
    2. Modify the image name


  • [proposal] How to implement image storage for sealer using Skopeo

    Issue Description

    Background:

    Today we always pull application images over the network at build stage, even if the images already exist in the local docker-daemon. We also pull image blobs and manage all the files by hand, which is inelegant. As #1874 said, we need to load offline images at build stage.

    Skopeo is an image transporter: it can move images between a number of transport types, and its copy ability can help us implement loading offline images. Skopeo operates on the following image and repository types:

      containers-storage:docker-reference
      dir:path
      docker://docker-reference
      docker-archive:path[:docker-reference]
      docker-daemon:docker-reference
      oci:path:tag

    Skopeo can transport images between any two of the types above:

    skopeo copy containers-storage:docker.io/library/alpine:3.13 docker://sea.hub:5000/library/alpine:3.13
    skopeo copy docker-daemon:ubuntu:rolling docker://sea.hub:5000/library/ubuntu:rolling
    
    

    docker://docker-reference, docker-daemon:docker-reference, and oci:path:tag are the three image types that may match our needs. However, each type has a different structure; docker://docker-reference requires a running registry service.

    So, there are three solutions I can see to implement the image store:

    1. Run a registry at build stage, and use skopeo copy to transport images into it. This makes full use of Skopeo and completely offloads the image-loading capability to it. Obviously, this approach relies on Docker and a registry container, and brings complexity to the build stage (see the sketch at the end of this comment).
    2. Partly use skopeo copy to connect to docker-daemon and read local images, but still manage all the files by hand. This is hard to implement and still inelegant; in the long run, if the registry storage format changes, our maintenance cost will be high.
    3. Load images into an intermediate type at build stage using skopeo copy, then load the intermediate images into the registry at run stage, after the registry is running, again using skopeo copy. For example, with oci:path:tag as the intermediate type, there are two steps:
      skopeo copy docker-daemon:docker.io/library/alpine:3.13 oci:$rootfs/ociimages:library/alpine:3.13
      skopeo copy oci:$rootfs/ociimages:library/alpine:3.13 docker://sea.hub:5000/library/alpine:3.13

      In this way, we separate the registry from the image store; if users want to use their own registry (e.g. Harbor) to store images in the future, we retain the ability to do that. The additional cost is that we move images twice, which makes the run stage slower.

    Each solution has its inherent weaknesses; the design needs to determine which one to use, or whether there is a better one.

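    For comparison, a minimal sketch of what solution 1 could look like at build stage; the throwaway registry and the --dest-tls-verify flag are standard docker/skopeo usage, not sealer's confirmed implementation:

    # start a throwaway registry for the duration of the build
    docker run -d --name build-registry -p 5000:5000 registry:2
    # copy an image from the local docker daemon into it
    skopeo copy --dest-tls-verify=false \
      docker-daemon:docker.io/library/alpine:3.13 \
      docker://localhost:5000/library/alpine:3.13
    # tear the registry down once the CloudImage has been built
    docker rm -f build-registry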
