Kilo is a multi-cloud network overlay built on WireGuard and designed for Kubernetes (k8s + wg = kg)

Kilo

Kilo is a multi-cloud network overlay built on WireGuard and designed for Kubernetes.

Overview

Kilo connects nodes in a cluster by providing an encrypted layer 3 network that can span across data centers and public clouds. By allowing pools of nodes in different locations to communicate securely, Kilo enables the operation of multi-cloud clusters. Kilo's design allows clients to VPN to a cluster in order to securely access services running on the cluster. In addition to creating multi-cloud clusters, Kilo enables the creation of multi-cluster services, i.e. services that span across different Kubernetes clusters.

An introductory video about Kilo from KubeCon EU 2019 can be found on YouTube.

How it works

Kilo uses WireGuard, a performant and secure VPN, to create a mesh between the different nodes in a cluster. The Kilo agent, kg, runs on every node in the cluster, setting up the public and private keys for the VPN as well as the necessary rules to route packets between locations.

Kilo can operate either as a complete, independent networking provider or as an add-on complementing the cluster-networking solution currently installed on a cluster. This means that if a cluster uses, for example, Flannel for networking, Kilo can be installed on top to enable pools of nodes in different locations to join; Kilo will take care of the network between locations, while Flannel will take care of the network within locations.

Installing on Kubernetes

Kilo can be installed on any Kubernetes cluster either pre- or post-bring-up.

Step 1: get WireGuard

Kilo requires the WireGuard kernel module to be loaded on all nodes in the cluster. Starting with Linux 5.6, the kernel includes WireGuard in-tree; Linux distributions with older kernels will need to install WireGuard separately. For most Linux distributions, this can be done using the system package manager. See the WireGuard website for up-to-date installation instructions.
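
For example, on a Debian- or Ubuntu-based node, something like the following is usually enough. This is only a sketch; package names and steps vary by distribution, so prefer the instructions on the WireGuard website for your platform:

sudo apt install wireguard   # installs the module (where needed) and the wg tools
sudo modprobe wireguard      # load the module without rebooting
lsmod | grep wireguard       # verify that the module is loaded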

Clusters with nodes on which the WireGuard kernel module cannot be installed can use Kilo by leveraging a userspace WireGuard implementation.

Step 2: open WireGuard port

The nodes in the mesh will require an open UDP port in order to communicate. By default, Kilo uses UDP port 51820.
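
How the port is opened depends on the environment: on public clouds this usually means a security-group or firewall rule, while on bare-metal nodes a host firewall rule may be needed. A minimal sketch for hosts managed with iptables or ufw (adapt to whatever your nodes actually use):

sudo iptables -A INPUT -p udp --dport 51820 -j ACCEPT   # iptables-managed hosts
sudo ufw allow 51820/udp                                # ufw-managed hosts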

Step 3: specify topology

By default, Kilo creates a mesh between the different logical locations in the cluster, e.g. data-centers, cloud providers, etc. For this, Kilo needs to know which groups of nodes are in each location. If the cluster does not automatically set the topology.kubernetes.io/region node label, then the kilo.squat.ai/location annotation can be used. For example, the following snippet could be used to annotate all nodes with GCP in the name:

for node in $(kubectl get nodes | grep -i gcp | awk '{print $1}'); do kubectl annotate node $node kilo.squat.ai/location="gcp"; done

Kilo allows the topology of the encrypted network to be completely customized. See the topology docs for more details.

Step 4: ensure nodes have public IP

At least one node in each location must have an IP address that is routable from the other locations. If the locations are in different clouds or private networks, then this must be a public IP address. If this IP address is not automatically configured on the node's Ethernet device, it can be manually specified using the kilo.squat.ai/force-endpoint annotation.
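
For example, to pin a node's WireGuard endpoint to a routable address, the node could be annotated as below. The node name and 203.0.113.10 are placeholders, and the value is typically a host:port combination; check the annotations documentation for the exact format expected by your Kilo version:

kubectl annotate node $NODE kilo.squat.ai/force-endpoint=203.0.113.10:51820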

Step 5: install Kilo!

Kilo can be installed by deploying a DaemonSet to the cluster.

To run Kilo on kubeadm:

kubectl apply -f https://raw.githubusercontent.com/squat/kilo/main/manifests/kilo-kubeadm.yaml

To run Kilo on bootkube:

kubectl apply -f https://raw.githubusercontent.com/squat/kilo/main/manifests/kilo-bootkube.yaml

To run Kilo on Typhoon:

kubectl apply -f https://raw.githubusercontent.com/squat/kilo/main/manifests/kilo-typhoon.yaml

To run Kilo on k3s:

kubectl apply -f https://raw.githubusercontent.com/squat/kilo/main/manifests/kilo-k3s.yaml

Add-on Mode

Administrators of existing clusters who do not want to swap out the existing networking solution can run Kilo in add-on mode. In this mode, Kilo will add advanced features to the cluster, such as VPN and multi-cluster services, while delegating CNI management and local networking to the cluster's current networking provider. Kilo currently supports running on top of Flannel.

For example, to run Kilo on a Typhoon cluster running Flannel:

kubectl apply -f https://raw.githubusercontent.com/squat/kilo/main/manifests/kilo-typhoon-flannel.yaml

See the manifests directory for more examples.

VPN

Kilo also enables peers outside of a Kubernetes cluster to connect to the VPN, allowing cluster applications to securely access external services and permitting developers and support to securely debug cluster resources. In order to declare a peer, start by defining a Kilo peer resource:

cat <<'EOF' | kubectl apply -f -
apiVersion: kilo.squat.ai/v1alpha1
kind: Peer
metadata:
  name: squat
spec:
  allowedIPs:
  - 10.5.0.1/32
  publicKey: GY5aT1N9dTR/nJnT1N2f4ClZWVj0jOAld0r8ysWLyjg=
  persistentKeepalive: 10
EOF

This configuration can then be applied to a local WireGuard interface, e.g. wg0, to give it access to the cluster with the help of the kgctl tool:

kgctl showconf peer squat > peer.ini
sudo wg setconf wg0 peer.ini
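
For reference, a fuller sequence might look like the sketch below. The interface address must match the peer's allowedIPs, and the routed CIDR is an assumption: 10.4.0.0/16 is a common Kilo mesh subnet, but use whatever your cluster actually assigns.

sudo ip link add wg0 type wireguard       # requires the WireGuard kernel module
sudo ip address add 10.5.0.1/32 dev wg0   # must match the peer's allowedIPs above
kgctl showconf peer squat > peer.ini
sudo wg setconf wg0 peer.ini
sudo ip link set wg0 up
sudo ip route add 10.4.0.0/16 dev wg0     # route the Kilo mesh CIDR over the tunnel

Alternatively, a wg-quick configuration file can manage the interface address and routes for you.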

See the VPN docs for more details.

Multi-cluster Services

A logical application of Kilo's VPN is to connect two different Kubernetes clusters. This allows workloads running in one cluster to access services running in another. For example, if cluster1 is running a Kubernetes Service that we need to access from Pods running in cluster2, we could do the following:

# Register the nodes in cluster1 as peers of cluster2.
for n in $(kubectl --kubeconfig $KUBECONFIG1 get no -o name | cut -d'/' -f2); do
    kgctl --kubeconfig $KUBECONFIG1 showconf node $n --as-peer -o yaml --allowed-ips $SERVICECIDR1 | kubectl --kubeconfig $KUBECONFIG2 apply -f -
done
# Register the nodes in cluster2 as peers of cluster1.
for n in $(kubectl --kubeconfig $KUBECONFIG2 get no -o name | cut -d'/' -f2); do
    kgctl --kubeconfig $KUBECONFIG2 showconf node $n --as-peer -o yaml --allowed-ips $SERVICECIDR2 | kubectl --kubeconfig $KUBECONFIG1 apply -f -
done
# Create a Service in cluster2 to mirror the Service in cluster1.
cat <<EOF | kubectl --kubeconfig $KUBECONFIG2 apply -f -
apiVersion: v1
kind: Service
metadata:
  name: important-service
spec:
  ports:
    - port: 80
---
apiVersion: v1
kind: Endpoints
metadata:
  name: important-service
subsets:
  - addresses:
      - ip: $CLUSTERIP # The cluster IP of the important service on cluster1.
    ports:
      - port: 80
EOF

Now, important-service can be used on cluster2 just like any other Kubernetes Service.

See the multi-cluster services docs for more details.

Analysis

The topology and configuration of a Kilo network can be analyzed using the kgctl command line tool. For example, the graph command can be used to generate a graph of the network in Graphviz format:

kgctl graph | circo -Tsvg > cluster.svg

Owner
Lucas Servén Marín
working on Kubernetes, Prometheus, and Thanos
Comments
  • How to connect two clusters, networking and kubectl?

    Please let me know if this is inappropriate for here, and I'll close and research independently.

    I'm struggling with my use case, which I believe is possible with Kilo.

    I have a k3s cluster (with kilo) in kvm virtual machines using NAT. These are on a host with a public ip (local network can ping to cluster). I want to use the cluster to run backend services. (longhorn, postgresql etc). I then also want to run a single node cluster on the host machine, to connect to the backend service cluster and provide public facing services.

    I want to be able to kubectl to both clusters (I like octant to monitor/manipulate cluster status). I also want to share services between clusters (predominantly backend to single node cluster). I am currently using kubectl from the host to access the backend cluster. I want to be able to do this remotely as I could for a cluster with a public ip.

    It seems that the correct approach might be to set up the Kilo VPN between the two clusters, and then a VPN server to provide a public endpoint on the public-facing single-node instance.

    My head is going a bit fluffy with all the interconnected parts and what needs to be where (the order in which to connect the two clusters, applying MetalLB to the clusters, how to identify and access the independent clusters and kubectl to them).

    I'm assuming that I will need to install kgctl on the kvm host.

    Any guidance will be appreciated.

  • no internet connection within pod on K3s --no-flannel

    Installed k3s in a clean environment, disabling the default Flannel, then installed kilo-k3s.yaml.

    Between nodes I can ping, wg is properly configured, and there is connectivity between all 3 nodes.

    But it seems there is no internet connection inside the pods deployed on the workers.

    interface: kilo0
      public key: xxxxx
      private key: (hidden)
      listening port: 51820
    
    peer: xxxxxxxxxxxxxxxxx
      endpoint: xxxxxxxxxx
      allowed ips: 10.42.1.0/24, 172.17.0.1/32, 10.4.0.3/32
      latest handshake: 20 seconds ago
      transfer: 988 B received, 1.25 KiB sent
      persistent keepalive: every 5 seconds
    
    peer: xxxxxxxxxxxxxx
      endpoint: xxxxxxxx
      allowed ips: 10.42.2.0/24, 10.17.100.16/32, 10.4.0.2/32
      transfer: 0 B received, 592 B sent
      persistent keepalive: every 5 seconds
    
  • k3s kilo pods crashlooping

    sudo kubectl apply -f https://raw.githubusercontent.com/squat/kilo/master/manifests/kilo-k3s-flannel.yaml

    serviceaccount/kilo created
    clusterrole.rbac.authorization.k8s.io/kilo created
    clusterrolebinding.rbac.authorization.k8s.io/kilo created
    daemonset.apps/kilo created
    

    sudo kubectl logs -f kilo-cz64w -n kube-system

    failed to create Kubernetes config: Error loading config file "/etc/kubernetes/kubeconfig": read /etc/kubernetes/kubeconfig: is a directory

    I think the problem is with kilo-k3s-flannel.yaml:99.

  • Multi-cluster Services not working latest version?

    Hi, Multi-cluster Services doesn't seem to be working in the latest version?

    I had to change the image in the manifests to - name: kilo image: squat/kilo:amd64-298a772d686740b8d93979c013ce876592e7a7cf

    Will multi-cluster no longer be supported in new releases?

    Thanks :)

  • Can Kilo Work With Nodes Behind NAT?

    I've been looking into using Kubernetes in an at-edge setting. In this type of deployment I'd be setting up nodes behind other people's NATed networks. Kubernetes' API and CRDs make a lot of the things I need to do (daemonsets, service meshes, config management, etc.) very simple. WireGuard would provide a transparent security layer. In my application I don't mind the high latency of communication with the API server and other components. One thing that I don't control in my deployment is the router at each deployment location. I can guarantee there will be a network connection able to speak to my API server, but I cannot forward ports.

    I noticed in your documentation that you must provide at least one public IP in each region. Is there some way to use Kilo to avoid this constraint? Where does this constraint come from? Is it some inherent feature of WG?

  • kilo interrupts existing connections every 30s between a public master and a node behind NAT

    Hi, I have one master node in region A with a public ip and a worker node in region B behind a NAT (two separate networks).

    After deploying Kilo I annotated both nodes to force the external IP (the master with its own public IP and the worker with the NAT public IP) and to set the related location on each (master: region-a, worker: region-b).

    Checking the WireGuard peers on the master with the wg command, I can see the peer for the worker with the NAT public IP as the endpoint, but the port is different from the WireGuard listen port set on the worker node.

    I can also see that a handshake was made successfully, but after approximately 30s Kilo recreates the peer because it detects differences in the configuration (log: 'WireGuard configurations are different') due to the endpoint port, interrupting existing connections.

    How can I solve this? Thanks in advance.

  • Why does kilo get cluster config using kubeconfig (or API server URL flag) when it has a service account?

    While setting up kilo on a k3s cluster I noticed that it uses -kubeconfig, or -master to get the config that is used when interfacing with the cluster. This code can be seen here.

    This seems like a security problem - why should kilo require access to my kubeconfig, which contains credentials that have the power to do anything to the cluster? Moreover, it seems redundant: I looked through kilo-k3s-flannel.yaml (which is what I used to get it working) and noticed that a service account is created for kilo with all of the permissions it should need.

    This example (see main.go) uses this function to get the config. Can kilo not use this function instead?

    I'm new to interfacing applications with Kubernetes clusters, so my apologies if I'm missing something. If it would be welcome, I'd be happy to submit a pull request for this.

  • remote VPN client

    ok enlighten me again! docs are sketchy.... for a VPN, now that kilo is running on 3 nodes, I'd like to connect my remote laptop for administration and web access. Where are we deriving the keys from? I imagine this is the "client" public key? And I can get the server/cluster side with wg showconf? I do appreciate the assistance, but yes, I will say the docs are a bit lacking. That being said, point me in the right direction and I can help with documentation once I get the connection sorted.

    VPN

    Kilo also enables peers outside of a Kubernetes cluster to connect to the VPN, allowing cluster applications to securely access external services and permitting developers and support to securely debug cluster resources. In order to declare a peer, start by defining a Kilo peer resource:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: kilo.squat.ai/v1alpha1
    kind: Peer
    metadata:
      name: squat
    spec:
      allowedIPs:
      - 10.5.0.1/32
      publicKey: GY5aT1N9dTR/nJnT1N2f4ClZWVj0jOAld0r8ysWLyjg=
      persistentKeepalive: 10
    EOF
  • No IP on interface using kubeadm

    Disclaimer: I'm quite new to Kubernetes, so bear with me if something I'm saying doesn't make sense.

    Context: I've been trying out various ways of deploying Kubernetes behind NATs (e.g. spreading a cluster between my house, a friend's and a cloud provider), and kilo seems to perfectly address this use case as WireGuard has NAT traversal built in. Other networking solutions seem to lack as deep support for NAT (maybe with the exception of Weave).

    Issue: No matter what I do, I cannot seem to get kilo to give an IP address to the WireGuard interface if I deploy a cluster with kubeadm. This is even with a single server on a public IP (so there shouldn't be any NAT interference).

    I do see the annotations if I do kubectl get node -o yaml, but the config doesn't seem to be propagated to the interface, and kgctl tells me: did not find any valid Kilo nodes in the cluster. I've tried both standalone kilo and integrating with flannel, and neither seems to work.

    However, if I instead use k3s, everything works flawlessly as I would expect. I would prefer to use kubeadm to get the more bare-metal experience but I'm happy enough with k3s now.

    Is this a known issue or could I get some extra logs to help debug this? Thanks!

  • Unable to access local nodes when using topologies

    Hi Lucas!

    Thank you for this great project!

    I am trying to set up a multi provider k3s cluster using kilo. The machines roughly look like:

    1. oci location - 2 machines (both only have a local IP address assigned to the local interfaces; external IPs are managed via the internet gateway of the cloud provider)
    2. gcp location - 1 machine

    I haven't got to doing a multi provider setup yet. I am still trying to get the 2 machines in oci to talk to each other.

    I am trying to use kilo as the CNI directly. The network configuration is as follows (using placeholders for the external IPs):
    oci-master - internal IP 10.1.20.3, external <ext-master-ip>
    oci-worker - internal IP 10.1.20.2, external <ext-worker-ip>

    The machines can ping each other directly using the 10.1.20.x addresses.

    My issue is that, once they come up, I can't get the pods launched on each machine to talk to each other. I can ping a pod from the machine that runs it, but not from master -> worker and vice versa.

    on my laptop

    > kubectl get po -o wide
    NAME                        READY   STATUS    RESTARTS   AGE   IP          NODE         NOMINATED NODE   READINESS GATES
    my-nginx-74f94c7795-j7kzv   1/1     Running   0          99m   10.42.1.5   oci-worker   <none>           <none>
    

    but on oci-master

    > ping 10.42.1.5
    PING 10.42.1.5 (10.42.1.5): 56 data bytes
    ^C--- 10.42.1.5 ping statistics ---
    4 packets transmitted, 0 packets received, 100% packet loss
    

    I think I should be able to reach every pod from any node in the cluster (AFAIK).

    Please let me know if there is additional info that would be helpful to include!

    My setup details are below

    I provisioned the machines as follows: oci-master:

    k3sup install \
        --ip <ext-master-ip> \
        --k3s-version 'v1.17.0+k3s.1' \
        --k3s-extra-args '--no-flannel --no-deploy metrics-server --no-deploy servicelb --no-deploy traefik --default-local-storage-path /k3s-local-storage --node-name oci-master --node-external-ip <ext-master-ip> --node-ip 10.1.20.3'
    
    kubectl annotate node oci-master \
        kilo.squat.ai/force-external-ip="<ext-ip-master>/32" \
        kilo.squat.ai/force-internal-ip="10.1.20.3/24" \
        kilo.squat.ai/location="oci" \
        kilo.squat.ai/leader="true" 
    

    oci-worker:

    k3sup join \
        --ip <ext-worker-ip> \
        --server-ip <ext-master-ip> \
        --k3s-version 'v1.17.0+k3s.1' \
        --k3s-extra-args '--no-flannel --node-name oci-worker --node-external-ip <ext-worker-ip> --node-ip 10.1.20.2'
    
    kubectl annotate node oci-worker \
        kilo.squat.ai/force-external-ip="<ext-worker-ip>/32" \
        kilo.squat.ai/force-internal-ip="10.1.20.2/24" \
        kilo.squat.ai/location="oci" 
    

    Finally setting up kilo

    kubectl apply -f k3s-kilo.yaml
    

    I had to make the same changes suggested in #11 and #27 to make sure that the kilo pods have the correct permissions, but I was able to get the pods to come up correctly.

    I am able to see logs like these when taking pod logs (with log-level=debug) on oci-master

    {"caller":"mesh.go:410","component":"kilo","event":"update","level":"debug","msg":"syncing nodes","ts":"2020-02-09T09:12:46.095414595Z"}
    {"caller":"mesh.go:412","component":"kilo","event":"update","level":"debug","msg":"processing local node","node":{"ExternalIP":{"IP":"<ext-ip-master>","Mask":"/////w=="},"Key":"<key>","InternalIP":{"IP":"10.1.20.3","Mask":"////AA=="},"LastSeen":1581239566,"Leader":true,"Location":"oci","Name":"oci-master","Subnet":{"IP":"10.42.0.0","Mask":"////AA=="},"WireGuardIP":{"IP":"10.4.0.1","Mask":"//8AAA=="}},"ts":"2020-02-09T09:12:46.095454981Z"}
    

    on oci-worker

    {"caller":"mesh.go:410","component":"kilo","event":"update","level":"debug","msg":"syncing nodes","ts":"2020-02-09T10:44:48.564218597Z"}
    {"caller":"mesh.go:508","component":"kilo","level":"debug","msg":"successfully checked in local node in backend","ts":"2020-02-09T10:45:18.478913052Z"}
    {"caller":"mesh.go:675","component":"kilo","level":"debug","msg":"local node is not the leader","ts":"2020-02-09T10:45:18.4804814Z"}      
    {"caller":"mesh.go:410","component":"kilo","event":"update","level":"debug","msg":"syncing nodes","ts":"2020-02-09T10:45:18.481320232Z"}  
    {"caller":"mesh.go:412","component":"kilo","event":"update","level":"debug","msg":"processing local node","node":{"ExternalIP":{"IP":"<ext-ip-worker>","Mask":"/////w=="},"Key":"<key>","InternalIP":{"IP":"10.1.20.2","Mask":"////AA=="},"LastSeen":1581245118,"Leader":false,"Location":"oci","Name":"oci-worker","Subnet":{"IP":"10.42.1.0","Mask":"////AA=="},"WireGuardIP":null},"ts":"2020-02-09T10:45:18.481367592Z"}
    

    oci-master

    > ifconfig
    ens3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
            inet 10.1.20.3  netmask 255.255.255.0  broadcast 10.1.20.255
            inet6 fe80::200:17ff:fe02:2f31  prefixlen 64  scopeid 0x20<link>
            ether 00:00:17:02:2f:31  txqueuelen 1000  (Ethernet)
            RX packets 945623  bytes 2361330833 (2.3 GB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 851708  bytes 304538145 (304.5 MB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    kilo0: flags=209<UP,POINTOPOINT,RUNNING,NOARP>  mtu 1420
            inet 10.4.0.1  netmask 255.255.0.0  destination 10.4.0.1
            unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 1000  (UNSPEC)
            RX packets 0  bytes 0 (0.0 B)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 0  bytes 0 (0.0 B)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
            inet 127.0.0.1  netmask 255.0.0.0
            inet6 ::1  prefixlen 128  scopeid 0x10<host>
            loop  txqueuelen 1000  (Local Loopback)
            RX packets 1354843  bytes 457783326 (457.7 MB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 1354843  bytes 457783326 (457.7 MB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    tunl0: flags=193<UP,RUNNING,NOARP>  mtu 8980
            inet 10.42.0.1  netmask 255.255.255.255
            tunnel   txqueuelen 1000  (IPIP Tunnel)
            RX packets 0  bytes 0 (0.0 B)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 5  bytes 420 (420.0 B)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    > ip route
    default via 10.1.20.1 dev ens3
    default via 10.1.20.1 dev ens3 proto dhcp src 10.1.20.3 metric 100
    10.1.20.0/24 dev ens3 proto kernel scope link src 10.1.20.3
    10.4.0.0/16 dev kilo0 proto kernel scope link src 10.4.0.1
    10.42.1.0/24 via 10.1.20.2 dev tunl0 proto static onlink
    

    oci-worker

    > ifconfig
    ens3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
            inet 10.1.20.2  netmask 255.255.255.0  broadcast 10.1.20.255
            inet6 fe80::200:17ff:fe02:1682  prefixlen 64  scopeid 0x20<link>
            ether 00:00:17:02:16:82  txqueuelen 1000  (Ethernet)
            RX packets 231380  bytes 781401888 (781.4 MB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 221393  bytes 29979034 (29.9 MB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    kube-bridge: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 10.42.1.1  netmask 255.255.255.0  broadcast 0.0.0.0
            inet6 fe80::38f7:34ff:fed9:897e  prefixlen 64  scopeid 0x20<link>
            ether 26:d7:aa:ce:37:f8  txqueuelen 1000  (Ethernet)
            RX packets 21865  bytes 10732037 (10.7 MB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 19269  bytes 7046706 (7.0 MB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
            inet 127.0.0.1  netmask 255.0.0.0
            inet6 ::1  prefixlen 128  scopeid 0x10<host>
            loop  txqueuelen 1000  (Local Loopback)
            RX packets 78258  bytes 29977684 (29.9 MB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 78258  bytes 29977684 (29.9 MB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    tunl0: flags=193<UP,RUNNING,NOARP>  mtu 8980
            inet 10.42.1.1  netmask 255.255.255.255
            tunnel   txqueuelen 1000  (IPIP Tunnel)
            RX packets 0  bytes 0 (0.0 B)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 10  bytes 840 (840.0 B)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    veth5ee1a633: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet6 fe80::24d7:aaff:fece:37f8  prefixlen 64  scopeid 0x20<link>
            ether 26:d7:aa:ce:37:f8  txqueuelen 0  (Ethernet)
            RX packets 12748  bytes 10219673 (10.2 MB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 9890  bytes 4818258 (4.8 MB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    veth965708c2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet6 fe80::9cfc:9dff:fef1:dc7a  prefixlen 64  scopeid 0x20<link>
            ether 9e:fc:9d:f1:dc:7a  txqueuelen 0  (Ethernet)
            RX packets 22  bytes 1636 (1.6 KB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 21  bytes 1754 (1.7 KB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    vethd34408af: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet6 fe80::5077:76ff:fe3a:1b01  prefixlen 64  scopeid 0x20<link>
            ether 52:77:76:3a:1b:01  txqueuelen 0  (Ethernet)
            RX packets 9091  bytes 816526 (816.5 KB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 9442  bytes 2233086 (2.2 MB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    > ip route
    default via 10.1.20.1 dev ens3
    default via 10.1.20.1 dev ens3 proto dhcp src 10.1.20.2 metric 100
    10.1.20.0/24 dev ens3 proto kernel scope link src 10.1.20.2
    10.4.0.1 via 10.1.20.3 dev tunl0 proto static onlink
    10.42.0.0/24 via 10.1.20.3 dev tunl0 proto static onlink
    10.42.1.0/24 dev kube-bridge proto kernel scope link src 10.42.1.1
    169.254.0.0/16 dev ens3 proto dhcp scope link src 10.1.20.2 metric 100
    

    Other things that I've tried

    Interestingly, when setting up another machine in a different region, I was able to see that the WireGuard interfaces came up with the correct allowed-ips, and I was even able to ping 10.1.20.2 (oci-worker) directly over WireGuard. Presumably that traffic goes from gcp-worker -> oci-master (the leader for the oci location) -> oci-worker.

  • kilo question - informational

    Installing kilo on 4 nodes in 2 different public network spaces, does kilo encrypt comms between nodes?

    How does one validate this encryption? Meaning, if it "automagically" encrypts node communications, how can I verify it from node to node?

    Also, can my remote "workstation" (laptop) from far away be a client to the cluster, and also utilize the VPN for internet access along with management? Sorry, the docs weren't so clear to me. kilo is installed.

  • [Question]How Kilo works?

    Hello! I wanted to build a high availability Kubernetes cluster with WireGuard and found Kilo. I thought the ease of deployment of Kilo was great, but I am wondering what the actual steps are to build the network. I would like to know how the Kilo manifest and Kilo docker images work. My apologies if I missed the documentation for the explanation. Thanks.

    This text was translated by DeepL

  • build(deps): bump express from 4.17.1 to 4.18.2 in /website

    Bumps express from 4.17.1 to 4.18.2.

    Release notes

    Sourced from express's releases.

    4.18.2

    4.18.1

    • Fix hanging on large stack of sync routes

    4.18.0

    ... (truncated)

    Changelog

    Sourced from express's changelog.

    4.18.2 / 2022-10-08

    4.18.1 / 2022-04-29

    • Fix hanging on large stack of sync routes

    4.18.0 / 2022-04-25

    ... (truncated)

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

  • build(deps): bump decode-uri-component from 0.2.0 to 0.2.2 in /website

    Bumps decode-uri-component from 0.2.0 to 0.2.2.

    Release notes

    Sourced from decode-uri-component's releases.

    v0.2.2

    • Prevent overwriting previously decoded tokens 980e0bf

    https://github.com/SamVerschueren/decode-uri-component/compare/v0.2.1...v0.2.2

    v0.2.1

    • Switch to GitHub workflows 76abc93
    • Fix issue where decode throws - fixes #6 746ca5d
    • Update license (#1) 486d7e2
    • Tidelift tasks a650457
    • Meta tweaks 66e1c28

    https://github.com/SamVerschueren/decode-uri-component/compare/v0.2.0...v0.2.1

    Commits

    Dependabot compatibility score


  • Calico or Althea support

    Hi, previously this all worked well with flannel on k8s. However, we have migrated to StarlingX, which uses Calico for networking, and another deployment in the lab actually uses Althea. Is there any possibility this could be adapted to Calico and Althea?

  • build(deps): bump loader-utils from 1.4.0 to 1.4.2 in /website

    Bumps loader-utils from 1.4.0 to 1.4.2.

    Release notes

    Sourced from loader-utils's releases.

    v1.4.2

    1.4.2 (2022-11-11)

    Bug Fixes

    v1.4.1

    1.4.1 (2022-11-07)

    Bug Fixes

    Changelog

    Sourced from loader-utils's changelog.

    1.4.2 (2022-11-11)

    Bug Fixes

    1.4.1 (2022-11-07)

    Bug Fixes

    Commits

    Dependabot compatibility score


  • Peering clusters behind nat

    Hi, I have the following use case:

    • one main cluster is in AWS (with public IP); (cluster A);
    • multiple small clusters on the edge devices behind NAT (cluster B).

    Cluster A needs access to services in NATed clusters. I was hoping to use kilo to interconnect the clusters. Is this document [https://kilo.squat.ai/docs/multi-cluster-services/] a correct approach considering that clusters are behind NAT? Or do I need to use a different configuration?
