Enable dynamic and seamless Kubernetes multi-cluster topologies

Integration Pipeline Status

Liqo Logo

Enable dynamic and seamless Kubernetes multi-cluster topologies



Explore the docs »

View Demo · Report Bug · Request Feature

About the project

Liqo is a platform to enable dynamic and decentralized resource sharing across Kubernetes clusters, either on-prem or managed. Liqo allows you to run pods on a remote cluster seamlessly, without any modification of Kubernetes or of the applications. With Liqo it is possible to extend the control plane of a Kubernetes cluster across the cluster's boundaries, making multi-cluster deployments native and transparent: an entire remote cluster is collapsed into a virtual local node, enabling workload offloading and resource management compliant with the standard Kubernetes approach.



Table of Contents
  1. Main Features
  2. Quickstart
  3. Installation
  4. Roadmap
  5. Contributing
  6. Community
  7. License

Main features

  • Decentralized governance: peer-to-peer paradigm, without any centralized management entity.
  • Cluster discovery: leverage three different mechanisms to discover (and peer with) other clusters:
    • Manual configuration: through a custom API representing other clusters.
    • DNS: automatic discovery through DNS records.
    • LAN: automatic discovery of neighboring clusters available in the same LAN.
  • Transparent offloading: pods scheduled on the virtual node are offloaded to the remote cluster; they can be controlled by simply accessing the pod objects in the local one; the resources needed by the pods (services, endpoints, configmaps, etc.) are translated and replicated remotely. This enables inter-cluster pod-to-pod and pod-to-service communication.
  • Pod resilience: the offloaded pods' lifecycle is controlled by a remote ReplicaSet.
  • Inter-cluster networking: the interconnection between clusters is implemented through a WireGuard tunnel, which ensures encryption and reliability.
  • CNI independence: compatibility with many CNIs (Calico, Cilium, Flannel, etc.), even in heterogeneous scenarios (the two clusters can have different CNIs).

Quickstart

This quickstart lets you try Liqo in a playground environment composed of two KinD clusters.

Install liqoctl

First, set the variables corresponding to your set-up:

OS=linux # possible values: linux,windows,darwin
ARCH=amd64 # possible values: amd64,arm64 

Then, run the following commands to install the latest version of liqoctl:

curl --fail -LSO "https://get.liqo.io/liqoctl-${OS}-${ARCH}" && \
chmod +x "liqoctl-${OS}-${ARCH}" && \
sudo mv "liqoctl-${OS}-${ARCH}" /usr/local/bin/liqoctl

Alternatively, you can directly download liqoctl from the Liqo releases page on GitHub.
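
Once installed, you can quickly check that the binary is correctly available in your PATH (assuming the version subcommand is available in your release; liqoctl --help works as a fallback):

liqoctl version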

Provision two KinD clusters.

source <(curl -L https://get.liqo.io/clusters.sh)

Install Liqo on both clusters:

export KUBECONFIG=$KUBECONFIG_1
liqoctl install kind --cluster-name cluster1
export KUBECONFIG=$KUBECONFIG_2
liqoctl install kind --cluster-name cluster2
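
You can follow the progress of the installation by watching the Liqo pods (the liqo namespace is assumed here as the default installation target; adjust it if you customized the install):

kubectl get pods -n liqo --watch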

Wait until all containers are up and running. When the virtual kubelet pod appears, a new node modeling the remote cluster is present and ready to receive pods. Check it with:

kubectl get nodes
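
If you want to single out the virtual node, you can filter by the label that Liqo applies to it (the label below is an assumption and may change across versions):

kubectl get nodes -l liqo.io/type=virtual-node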

Use the resources

Create a new namespace and label it to tell Liqo that the pods created in that namespace can be offloaded to the remote cluster.

kubectl create namespace liqo-demo
kubectl label namespace liqo-demo liqo.io/enabled=true
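
You can verify that the label has been applied with a plain kubectl command:

kubectl get namespace liqo-demo --show-labels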

Deploy the Google microservices shop demo application:

kubectl apply -f https://get.liqo.io/app.yaml -n liqo-demo
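
The demo application is composed of several microservices, so before testing it you may want to wait for all the pods to become ready, for example:

kubectl wait --for=condition=Ready pods --all -n liqo-demo --timeout=300s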

You can observe that:

  • Your application is working correctly: expose the application frontend port and then connect with a browser to localhost:8080. To expose the frontend port:
kubectl port-forward -n liqo-demo service/frontend 8080:80
  • Your application is transparently deployed across two different clusters (see also the command right after this list):
kubectl get pods -n liqo-demo -o wide
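
To see at a glance how the pods are distributed between the local nodes and the virtual node, you can sort them by the node they are running on (plain kubectl, nothing Liqo-specific):

kubectl get pods -n liqo-demo -o wide --sort-by=.spec.nodeName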

Going Further

If you want to dig into Liqo internals, including how to inspect and interact with a service deployed with Liqo, check out the documentation website.

Roadmap

Planned features for the next release (v0.3, expected in early September 2021) are the following:

  • Support for deployments spanning across more than two clusters.
  • Support for a more balanced scheduling mechanism to distribute jobs across clusters.
  • Support for Amazon Elastic Kubernetes Service.
  • Support for more granular permission control over remote cluster resources.

Contributing

All contributors are warmly welcome. If you want to become a contributor, we would be delighted! Before getting started, please read the repository's guidelines presented on our documentation website.

Community

To get involved with the Liqo community, join the Slack channel.

Community Meeting
Liqo holds a weekly community meeting on Mondays at 5.30pm UTC (6.30pm CET, 9.30am PST). To join the community meeting, follow this link. Convert it to your timezone here.

License

This project includes code from the Virtual Kubelet project https://github.com/virtual-kubelet/virtual-kubelet, licensed under the Apache 2.0 license.

Liqo is distributed under the Apache-2.0 License. See License for more information.

FOSSA Status

Liqo is a project kicked off at Polytechnic of Turin (Italy) and actively maintained with ❤️ by all the Liqoers.

Comments
  • [INSTALL ERRO] Error initializing installer

    [INSTALL ERRO] Error initializing installer

    What happened

    When I install Liqo on my laptop, after running liqoctl install k3s --only-output-values --dump-values-path tmp.yaml (Helm) or liqoctl install k3s (liqoctl), the shell gives me this error:

    ERRO  Error initializing installer: The connection to the server 0.0.0.0:46601 was refused - did you specify the right host or port?
    

    What you expected to happen:

    Install liqo

    How to reproduce it (as minimally and precisely as possible):

    I don't know

    Anything else we need to know?:

    What is wrong, and how do I fix the error?

    Environment:

    • Liqo version: stable
    • Kubernetes version (use kubectl version):
    Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.4+k3s1", GitCommit:"c3f830e9b9ed8a4d9d0e2aa663b4591b923a296e", GitTreeState:"clean", BuildDate:"2022-08-25T03:45:26Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"linux/amd64"}
    Kustomize Version: v4.5.4
    Server Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.4+k3s1", GitCommit:"c3f830e9b9ed8a4d9d0e2aa663b4591b923a296e", GitTreeState:"clean", BuildDate:"2022-08-25T03:45:26Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"linux/amd64"}
    
    • Cloud provider or hardware configuration: Legion 5 Pro with AMD® Ryzen 7 5800H with Radeon Graphics × 16
    • Network plugin and version:
    • Install tools: liqoctl and helm
    • Others:
  • Custom resource client, informer, and lister generation

    Custom resource client, informer, and lister generation

    Description

    • Add makefile target to generate client, informer, and lister for custom resources.
    • Generate virtualkubelet group client, informer, and lister.

    Reference: https://cloud.redhat.com/blog/kubernetes-deep-dive-code-generation-customresources

    Notes

    It might look like a huge PR, but it's mostly autogenerated code. Indeed, all the code in pkg/client is autogenerated. My main contribution is the generate-groups target in the makefile (see the sketch at the end of this description).

    • I had to rename the apis\virtualkubelet package and make it all lowercase because the code generator/go fmt complained about a case-insensitive import collision.
    • I removed the docs, fmt, and vet sub-targets from the gen target after a short chat with @giorio94. It sounds reasonable not to run fmt and vet on autogenerated code. Regarding docs, it looks like there is a different workflow taking care of the documentation.
    • I called the target generate-groups rather than generate-client to align with the naming convention found in the documentation and the script that we are using.

    edit: it's annoying not to run fmt, so I have left it in.
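
    For reference, a minimal sketch of how such a generate-groups target typically invokes k8s.io/code-generator; the module version, output package, group/version, and boilerplate path below are illustrative assumptions, not taken from this PR:

    CODEGEN_PKG="$(go env GOPATH)/pkg/mod/k8s.io/code-generator@v0.21.0"  # assumed location/version of the code-generator module
    bash "${CODEGEN_PKG}/generate-groups.sh" \
      client,lister,informer \
      github.com/liqotech/liqo/pkg/client \
      github.com/liqotech/liqo/apis \
      "virtualkubelet:v1alpha1" \
      --go-header-file hack/boilerplate.go.txt  # assumed boilerplate file location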

  • LiqoNet: peer connectivity check

    LiqoNet: peer connectivity check

    This PR introduces a way to check the connectivity between two peered clusters.

    TODO:

    • [x] Add first externalCIDR IP to liqo.tunnel
    • [x] Exclude IP from usable IPs
    • [x] Add NAT rules on the destination cluster.
    • [x] Add ConnChecker
    • [x] Add periodic ping
    • [x] Expose connection status as prometheus metric
  • IPAM documentation

    IPAM documentation

    This PR adds a section on the Liqo Network Manager documentation page that describes the IPAM module and its important role in different situations: remapping of networks, translation of IP addresses of offloaded Pods and mapping of endpoint IP addresses during reflection.

  • Liqo Storage POC

    Liqo Storage POC

    Description

    Please include a summary of the change and which issue is fixed. Please also include relevant motivation and context. List any dependencies that are required for this change.

    Fixes #(issue)

    How Has This Been Tested?

    Please describe the tests that you ran to verify your changes. Please also note any relevant details for your test configuration.

    • [ ] Test A
    • [ ] Test B
  • Liqo Controller Manager: ShadowPod validating webhook

    Liqo Controller Manager: ShadowPod validating webhook

    Description

    In a "two-cluster" peering scenario, where one Cluster plays the role of "Customer" and the other one of "Cloud Provider", it is important for the two entities that the Resource plan agreed is as much as possible guaranteed and controlled.

    • The Cloud Provider has the need to control what happen in own cluster, that resource usage limits agreed are respected (also with a toleration) and that nothing exceeds the Offer provided

    in the other hand

    • The Customer wants that the ResourceOffer Plan "buyed" is always guaranteed (also with a toleration) respecting what has been agreed at the beginning, it has the need to be protected about what is defined in kind of "resource contract"

    The goal of a Resource Validator system is to manage all this requirements, defining a new optional way of peering between clusters

    What's needed

    This PR introduces some new elements to allow the management of "Customer" offloaded resources on host cluster (that ideally represents a Cloud Provider)

    • [x] Validating Webhook for incoming ShadowPods based on existing ResourceOffer
    • [x] Used/Free resources calculator after each Offloading Request / Deletion
    • [x] Cached information to decrease the overhead generated by each validation check after an Offloading Request
    • [x] Cache mutex for concurrent access
    • [x] Scheduled cache consistency refresh

    TODO

    • [x] improve cache data structure
    • [x] Improve logging using klog
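
    Once deployed, a quick sanity check is to verify that a validating webhook targeting ShadowPods is actually registered in the host cluster (the object name is an assumption and may differ):

    kubectl get validatingwebhookconfigurations | grep -i shadow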
  • Virtual kubelet: namespace mapper refactoring

    Virtual kubelet: namespace mapper refactoring

    Description

    • Refactor namespace mapper in virtualKubelet.
    • Refactor the reflection manager to get rid of the StartAllNamespaces() method call in the provider and to handle fallbackReflectors.

    How Has This Been Tested?

    • [x] Existing tests.
    • [x] Added unit tests for the namespace handler.
  • controller-manager: fix peering status reporting logic

    controller-manager: fix peering status reporting logic

    Description

    Wait for the ResourceOffer to be accepted before setting OutgoingPeeringStatus=Established (currently we set it to Established as soon as we receive the ResourceOffer).
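
    For a manual check, the outgoing peering phase can be observed on the ForeignCluster resources; the exact printed columns depend on the CRD version, so treat this as illustrative:

    kubectl get foreignclusters
    kubectl describe foreigncluster <remote-cluster-name>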

    How Has This Been Tested?

    Please describe the tests that you ran to verify your changes. Please also note any relevant details for your test configuration.

    • [x] Manual tests
    • [x] Automated tests
  • Virtual kubelet: reflection-based pod offloading

    Virtual kubelet: reflection-based pod offloading

    Description

    This PR implements pod offloading through the reflection logic, instead of leveraging the virtual kubelet provider abstraction. This unifies the outgoing (i.e., creation of the remote pod) and incoming (i.e., status realignment) flows, reducing code duplication and improving overall performance. At the same time, it moves from a remote resilience mechanism based on ReplicaSets to one leveraging a custom ShadowPod resource, for increased control, better performance, and naming consistency.
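
    As a side note, once this is in place the offloaded workloads can be inspected on the remote cluster through the ShadowPod resources themselves (the plural resource name used below is an assumption):

    kubectl get shadowpods --all-namespaces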

    Caveats:

    • pod translation still has most of the limitations of the previous version (in terms of translated fields)

    Fixes #721 Fixes #678 Fixes #604

    How Has This Been Tested?

    Please describe the tests that you ran to verify your changes. Please also note any relevant details for your test configuration.

    • [x] Unit testing (new + existing)
    • [x] E2E testing
    • [x] Manual
  • Broker PoC

    Broker PoC

    This PR adds a new Broker component based on the standard Broadcaster. This version simply replicates the first ResourceOffer it knows of to all clusters that start an incoming peering.

  • [WIP] Peering process monitoring

    [WIP] Peering process monitoring

    Description

    Please include a summary of the change and which issue is fixed. Please also include relevant motivation and context. List any dependencies that are required for this change.

    Fixes #(issue)

    How Has This Been Tested?

    Please describe the tests that you ran to verify your changes. Please also note any relevant details for your test configuration.

    • [ ] Test A
    • [ ] Test B
  • Remote pod offloading strategies are not working for some applications where PVCs are required

    Remote pod offloading strategies are not working for some applications where PVCs are required

    What happened:

    We have different clouds in our environment, such as AWS, Azure, and GCP, and we have some Terraform modules in our environment. With local offloading all the modules work, but with remote offloading the modules that depend on creating PVCs do not work (a minimal PVC example is sketched at the end of this report).

    What you expected to happen:

    As with local offloading, the modules should be provisioned with remote offloading strategies for same-cloud combinations like AKS-AKS and EKS-EKS, as well as cross-cloud combinations like AKS-EKS and EKS-AKS.

    Anything else we need to know?:

    Environment:

    • Liqo version:

    • Client version: v0.6.0 Server version: v0.6.0

    • Kubernetes version (use kubectl version): Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.0", GitCommit:"ab69524f795c42094a6630298ff53f3c3ebab7f4", GitTreeState:"clean", BuildDate:"2021-12-07T18:16:20Z", GoVersion:"go1.17.3", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.12", GitCommit:"f941a31f4515c5ac03f5fc7ccf9a330e3510b80d", GitTreeState:"clean", BuildDate:"2022-11-09T17:12:33Z", GoVersion:"go1.17.13", Compiler:"gc", Platform:"linux/amd64"}

    • Cloud provider or hardware configuration: PVCs are not getting created with remote offloading strategies for same-cloud and cross-cloud combinations

    • Others: Command used for remote pod offloading

    #liqoctl offload namespace ***** --namespace-mapping-strategy EnforceSameName --pod-offloading-strategy Remote --selector liqo.io/provider=eks
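
    A minimal, hypothetical way to reproduce the PVC part in isolation could look like the following; the namespace, PVC name, and the presence of a default StorageClass are all illustrative assumptions:

    # create a plain PVC in an already offloaded namespace and check whether it gets bound
    kubectl apply -n <offloaded-namespace> -f - <<EOF
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-offload-test
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
    EOF
    kubectl get pvc -n <offloaded-namespace>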

  • VK: enable retrieval of SA tokens through API (k8s >= 1.24)

    VK: enable retrieval of SA tokens through API (k8s >= 1.24)

    Description

    This PR modifies the virtual kubelet to introduce support for retrieving service account tokens from the dedicated API, in addition to the standard secret-based approach. This enables support for offloaded applications interacting with the local Kubernetes API server when hosted on Kubernetes 1.24 and above. Moreover, it includes a few related changes, allowing reflection events to be filtered by type and a given event to be enqueued after a certain interval has elapsed.
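
    For context, on Kubernetes 1.24 and above a service account token can be requested on demand through the TokenRequest API instead of being read from an auto-generated Secret; from the command line this corresponds to something like (service account and namespace are placeholders):

    kubectl create token default --namespace default --duration=1h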

    Fixes #1185

    How Has This Been Tested?

    Please describe the tests that you ran to verify your changes. Please also note any relevant details for your test configuration.

    • [x] Existing E2E tests (now enabled also on k8s 1.24+)
    • [x] Unit tests (new + existing)
    • [x] Manual testing on Kind
  • Fix AWS IAM User Creation

    Fix AWS IAM User Creation

    Description

    This PR fixes a bug that prevents two EKS clusters from re-establishing an in-band peering after unpeering.

    Fixes #(issue)

    How Has This Been Tested?

    Please describe the tests that you ran to verify your changes. Please also note any relevant details for your test configuration.

    • [ ] Test A
    • [ ] Test B
  • Liqo peering is not establishing in the 2nd attempt

    Liqo peering is not establishing in the 2nd attempt

    What happened:

    I have two clusters which are peered, and Liqo v0.6.0 is installed on both. I tried unpeering the clusters and then establishing the peering again, but the peering does not complete on the second attempt.

    What you expected to happen:

    After peering and unpeering, the second peering attempt should succeed and the clusters should be peered again, but this is not happening.

    How to reproduce it (as minimally and precisely as possible):

    Environment:

    • Liqo version: Client version: v0.6.0 Server version: v0.6.0

    • Kubernetes version (use kubectl version): Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.1", GitCommit:"5e58841cce77d4bc13713ad2b91fa0d961e69192", GitTreeState:"clean", BuildDate:"2021-05-12T14:18:45Z", GoVersion:"go1.16.4", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.12", GitCommit:"f941a31f4515c5ac03f5fc7ccf9a330e3510b80d", GitTreeState:"clean", BuildDate:"2022-11-09T17:12:33Z", GoVersion:"go1.17.13", Compiler:"gc", Platform:"linux/amd64"} WARNING: version difference between client (1.21) and server (1.23) exceeds the supported minor version skew of +/-1

    • Cloud provider or hardware configuration: getting an "authentication failed to remote cluster" error for AKS-AKS, AKS-EKS, and EKS-EKS combinations

    • Others:

    Command used for peering: #liqoctl peer in-band --kubeconfig "config-local" --remote-kubeconfig "config-eks" --bidirectional

    Logs and Error:

    INFO (local) cluster identity correctly retrieved
    INFO (local) network configuration correctly retrieved
    INFO (local) wireGuard configuration correctly retrieved
    INFO (local) authentication token correctly retrieved
    INFO (local) authentication endpoint correctly retrieved
    INFO (local) proxy endpoint correctly retrieved
    INFO (remote) cluster identity correctly retrieved
    INFO (remote) network configuration correctly retrieved
    INFO (remote) wireGuard configuration correctly retrieved
    INFO (remote) authentication token correctly retrieved
    INFO (remote) authentication endpoint correctly retrieved
    INFO (remote) proxy endpoint correctly retrieved
    INFO (local) foreign cluster for remote cluster "priiam-eks-dev" not found: marked for creation
    INFO (remote) foreign cluster for remote cluster "test-aks-dev" not found: marked for creation
    INFO (local) tenant namespace "liqo-tenant-priiam-eks-dev-da4494" created for remote cluster "priiam-eks-dev"
    INFO (remote) tenant namespace "liqo-tenant-test-aks-dev-8c2972" created for remote cluster "test-aks-dev"
    INFO (local) network configuration created in local cluster "test-aks-dev"
    INFO (local) network configuration created in remote cluster "priiam-eks-dev"
    INFO (local) network configuration status correctly reflected from cluster "priiam-eks-dev"
    INFO (remote) network configuration created in local cluster "priiam-eks-dev"
    INFO (remote) network configuration created in remote cluster "test-aks-dev"
    INFO (remote) network configuration status correctly reflected from cluster "test-aks-dev"
    INFO (local) IPAM service correctly port-forwarded "43825:6000"
    INFO (remote) IPAM service correctly port-forwarded "33211:6000"
    INFO (local) proxy address "10.0.177.16" remapped to "10.245.0.2" for remote cluster "priiam-eks-dev"
    INFO (remote) proxy address "10.100.95.177" remapped to "10.101.0.2" for remote cluster "test-aks-dev"
    INFO (local) auth address "10.0.81.183" remapped to "10.245.0.3" for remote cluster "priiam-eks-dev"
    INFO (remote) auth address "10.100.140.83" remapped to "10.101.0.3" for remote cluster "test-aks-dev"
    INFO (local) foreign cluster for remote cluster "priiam-eks-dev" correctly configured
    INFO (remote) foreign cluster for remote cluster "test-aks-dev" correctly configured
    INFO (local) Network established to the remote cluster "priiam-eks-dev"
    INFO (remote) Network established to the remote cluster "test-aks-dev"
    ERRO (local) Authentication to the remote cluster "priiam-eks-dev" failed: timed out waiting for the condition
    INFO (remote) IPAM service port-forward correctly stopped "33211:6000"
    INFO (local) IPAM service port-forward correctly stopped "43825:6000"

  • Bump the k8s libraries to the latest version

    Bump the k8s libraries to the latest version

    Description

    This PR bumps the k8s libraries to the latest version, updating at the same time the automatically generated manifests which caused the dependabot PRs to fail linting.

    How Has This Been Tested?

    Please describe the tests that you ran to verify your changes. Please also note any relevant details for your test configuration.

    • [ ] Existing tests