Microshift

A small form factor OpenShift/Kubernetes optimized for edge computing

Microshift is OpenShift[1] Kubernetes in a small form factor, optimized for edge computing.

Edge devices deployed out in the field pose very different operational, environmental, and business challenges from those of cloud computing. These motivate different engineering trade-offs for Kubernetes at the far edge than for cloud or near-edge scenarios. Microshift's design goals cater to this:

  • make frugal use of system resources (CPU, memory, network, storage, etc.),
  • tolerate severe networking constraints,
  • update (resp. roll back) securely, safely, speedily, and seamlessly (without disrupting workloads), and
  • build on and integrate cleanly with edge-optimized OSes like Fedora IoT and RHEL for Edge, while
  • providing a consistent development and management experience with standard OpenShift.

We believe these properties should also make Microshift a great tool for other use cases, such as Kubernetes application development on resource-constrained systems, scale testing, and provisioning of lightweight Kubernetes control planes.

Note: Microshift is still in its early days and moving fast. Features are missing. Things break. But you can help shape it, too.

[1] More precisely OKD, the Kubernetes distribution by the OpenShift community.

Using Microshift

To give Microshift a try, simply install a recent test version (we don't provide stable releases yet) on a Fedora-derived Linux distro (we've only tested Fedora, RHEL, and CentOS Stream so far) using:

curl -sfL https://raw.githubusercontent.com/redhat-et/microshift/main/install.sh | sh -

This will install Microshift's dependencies (CRI-O), install Microshift itself as a systemd service, and start the service.
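
To verify that the service came up, you can check its status and follow its logs (a quick sanity check; the unit name "microshift" is an assumption based on how the script sets up the service):

sudo systemctl status microshift
sudo journalctl -u microshift -f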

For convenience, the script will also add a new "microshift" context to your $HOME/.kube/config, so you'll be able to access your cluster using, e.g.:

kubectl get all -A --context microshift

or

kubectl config use-context microshift
kubectl get all -A

Note: When installing Microshift on a system that already has an older version installed, it is safest to remove the old data directories and start fresh:

sudo rm -rf /var/lib/microshift && rm -rf $HOME/.microshift

Developing Microshift

Building

You can build Microshift locally in one of two ways: either with a container build (recommended) on Podman or Docker:

make microshift

or directly on the host after installing the build-time dependencies:

sudo dnf install -y glibc-static
make

Running

Use install.sh to set up your system and install run-time dependencies for Microshift, then simply:

sudo microshift run

Microshift keeps all its state in its data dir, which defaults to /var/lib/microshift when running Microshift as a privileged user and to $HOME/.microshift otherwise. Note that running Microshift unprivileged currently only works without the node role (i.e. using --roles=controlplane instead of the default --roles=controlplane,node).
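
For example, an unprivileged, control-plane-only instance (which keeps its state under $HOME/.microshift) can be started with (a minimal sketch based on the --roles flag described above):

microshift run --roles=controlplane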

You can find the kubeadmin's kubeconfig under $DATADIR/resources/kubeadmin/kubeconfig.
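
For example, to access the cluster with that kubeconfig (assuming the default privileged data dir of /var/lib/microshift):

sudo kubectl get all -A --kubeconfig /var/lib/microshift/resources/kubeadmin/kubeconfig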

Owner
Red Hat Emerging Technologies
Comments
  • selinux configs and volume for microshift-containerized

    Signed-off-by: Parul Singh [email protected]

    Which issue(s) this PR addresses: For podman deployment:

    • systemd unit file for starting and managing microshift-containerized.

    Closes #434, #433, #432

  • USHIFT-535: Remove dns configurable option from MicroShift config

    Cluster DNS is set to the 10th IP of the Service CIDR.

    Signed-off-by: Vu Dinh [email protected]

    Which issue(s) this PR addresses:

    Closes #

  • Do not modify default logging parameters

    klog is a singleton library, and since we use it from all our services in a single process, setting different log files won't work; it would siphon all logs into the last log file we add.

    Keep the simple strategy of sending all output to stderr for now.

    Related-Issue: #493

    Signed-off-by: Miguel Angel Ajo [email protected]

  • USHIFT-607: Introduce a new config format for microshift config

    Separate component-specific config and microshift-specific config into two sections within the top-level config. The components config is modelled after the OpenShift config v1 API, while the microshift config contains fields that are specific to microshift usage.

    Signed-off-by: Vu Dinh [email protected]

    Which issue(s) this PR addresses:

    Closes #

  • USHIFT-233: move the version config map to match where it will be in OCP

    Closes: USHIFT-233

    This is related to https://github.com/openshift/enhancements/pull/1203, but we can go ahead and take it now to reduce the work of updating the test suite later.

  • logrus -> klog

    Signed-off-by: Parul [email protected]

    Which issue(s) this PR addresses:

    Closes https://github.com/redhat-et/microshift/issues/134

  • USHIFT-227: Cluster Policy Controller integration

    Which issue(s) this PR addresses:

    Closes USHIFT-227

    This PR carries the following items:

    • Enabling Cluster Policy Controller
    • Disabling resource-quota and cluster-quota-reconciliation controllers.
    • Creating the openshift-kube-controller-manager namespace (it's where the CreatedSCCRanges events happen)
    • Applying the csr-approver and namespace-security cluster roles and cluster role bindings (required by the Cluster Policy Controller).

  • [Enhancement]: MicroShift Health Check

    What would you like to be added:

    • Monitoring Platform/Cluster
    • Monitoring Nodes
    • Monitoring Pods
    • Containers status
    • Pods per Node
    • Services Health
    • Resources Utilization (CPU/Memory/Network)
    • etc.

    Why is this needed:

    Give end users visibility into the state and health of their applications and solutions. This is very important for mission-critical applications.

  • Allow MicroShift to join new worker nodes

    Allow MicroShift to join new worker nodes, according to the design in #498 (see individual commits for review):

    • 4523987f Add flags to allow TLS bootstrapping of nodes
    • 9d8e2ade Add bootstrap module and generate token file
    • 26f04a33 Add ClusterRoleBinding for bootstrap process
    • 1103d731 Generate bootstrap kubeconfig
    • 1fd2a3c9 Allow MicroShift to start node role standalone
    • 6c84f168 Apply CRB for bootstraping nodes
    • 1366626f Use bootstrap kubeconfig for kube-proxy
    • c2eacfd6 Use netcgo insted of netgo
    • 40bff393 Add vagrant env to test/devel/debug multi-worker

    Related PRs: #499, #500

  • [RFE] Multi-node Request for Enhancement

    This commit only describes the addition of new compute nodes to an existing MicroShift cluster. A highly available control plane will be described in later PRs.

    Signed-off-by: Ricardo Noriega [email protected]

    This Enhancement proposal addresses part of the #460 epic.

  • API-1433: Configure route host assignment admission plugin.

    $ cat<<EOF | oc apply --server-side -f-
    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      name: hello-microshift
    spec:
      to:
        kind: Service
        name: hello-microshift
    EOF
    
    route.route.openshift.io/hello-microshift serverside-applied
    
    $ oc get route hello-microshift -o yaml
    
    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      annotations:
        openshift.io/host.generated: "true"
      creationTimestamp: "2022-11-11T23:53:33Z"
      generation: 1
      name: hello-microshift
      namespace: default
      resourceVersion: "2659"
      uid: cd35cd20-b3fd-4d50-9912-f34b3935acfd
    spec:
      host: hello-microshift-default.cluster.local
      to:
        kind: Service
        name: hello-microshift
      wildcardPolicy: None
    
    $ cat<<EOF | oc apply --server-side -f-
    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      name: hello-microshift
    spec:
      to:
        kind: Service
        name: hello-microshift
      wildcardPolicy: ""
    EOF
    
    The Route "hello-microshift" is invalid: spec.wildcardPolicy: Invalid value: "": field is immutable
    
  • Add arch info for assets/release/release-x86_64.json base section

    As of now, the assets/release/release-aarch64.json file has arch info as part of its base section ("base": "4.12.0-0.nightly-arm64-2023-01-03-161334"), but it is not present in assets/release/release-x86_64.json. This PR puts the arch info in base, matching the aarch64 asset file.

  • WIP rebase.py

    Wraps rebase.sh with extra goodies for better interaction with PRs. It checks rebase.sh's exit code, so in case of error the log is committed along with any other staged changes, the PR's title gets a *FAILURE* prefix, and the PR's description gets a big (h1) message pointing to the committed rebase.sh log. If a PR already exists for the branch (whose name is the amd64 release tag), the PR description is updated. The PR description now includes the amd64 and arm64 release tags and the (deduced) Prow job link.

    Obsoletes create_pr.py

  • Store ovn databases to hostpath

    The OVN databases (nb/sb) were stored inside containers and recreated every time the ovnkube-master container restarted. This commit saves the databases to a host directory (/etc/ovn/), which avoids recreating them after ovnkube-master restarts. Based on snapshot metrics from metrics-server, this saves roughly 0-10 MB of memory per database container. It also adds a command to the cleanup script to remove the OVN databases.

    Related-Issue: https://issues.redhat.com/browse/NP-648

  • [BUG] Microshift won't pass - showing dracut-initqueue timeout RHEL 8.7

    What happened?

    Trying to install MicroShift in my lab and getting the error "dracut-initqueue timeout - starting timeout scripts". Following the guide https://github.com/openshift/microshift/blob/main/docs/getting_started.md.

    Just before the dracut timeout, the following line is shown:

    Tech Preview: NVMe/TCP may not be fully supported.

    Using the latest RHEL 8.7 image. Most likely an issue with RHEL.

    What did you expect to happen?

    Normal installation.

    How to reproduce it (as minimally and precisely as possible)?

    1. '...'
    2. '...'

    Anything else we need to know?

    Environment

    • MicroShift version (use microshift version):
    • Hardware configuration:
    • OS (e.g: cat /etc/os-release): RHEL 8.7 on Fedora 35
    • Kernel (e.g. uname -a): Linux 6.0.12-100.fc35.x86_64

    Relevant logs
