k3sup 🚀 (said 'ketchup')

k3sup logo

k3sup is a light-weight utility to get from zero to KUBECONFIG with k3s on any local or remote VM. All you need is ssh access and the k3sup binary to get kubectl access immediately.

The tool is written in Go and is cross-compiled for Linux, Windows, MacOS and even Raspberry Pi.

How do you say it? Ketchup, as in tomato.


What's this for? 💻

This tool uses ssh to install k3s to a remote Linux host. You can also use it to join existing Linux hosts into a k3s cluster as agents. First, k3s is installed using the utility script from Rancher, along with a flag for your host's public IP so that TLS works properly. The kubeconfig file on the server is then fetched and updated so that you can connect from your laptop using kubectl.
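
To make that flow concrete, here is a rough sketch of the manual steps that k3sup automates. This is illustrative only: the host 192.168.0.1 and user ubuntu are placeholders, and k3sup's actual implementation differs.

# Install k3s on the remote host with a TLS SAN for its IP
ssh ubuntu@192.168.0.1 \
  "curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC='server --tls-san 192.168.0.1' sh -"

# Fetch the kubeconfig and point it at the host's IP instead of 127.0.0.1
ssh ubuntu@192.168.0.1 "sudo cat /etc/rancher/k3s/k3s.yaml" \
  | sed "s/127.0.0.1/192.168.0.1/" > kubeconfig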

You may wonder why a tool like this needs to exist when you can do this sort of thing with bash.

k3sup was developed to automate what can be a very manual and confusing process for many developers, who are already short on time. Once you've provisioned a VM with your favourite tooling, k3sup means you are only 60 seconds away from running kubectl get pods on your own computer. If you are on your local computer, you can bypass SSH with k3sup install --local.

Uses

  • Bootstrap Kubernetes with k3s onto any VM with k3sup install - either manually, during CI or through cloud-init
  • Get from zero to kubectl with k3s on Raspberry Pi (RPi), VMs, AWS EC2, Packet bare-metal, DigitalOcean, Civo, Scaleway, and others
  • Build an HA, multi-master (server) cluster
  • Fetch the KUBECONFIG from an existing k3s cluster
  • Join nodes into an existing k3s cluster with k3sup join

Bootstrapping Kubernetes

Conceptual architecture, showing k3sup running locally against any VM such as AWS EC2 or a VPS such as DigitalOcean.

Do you love k3sup?

k3sup is free and open source, but requires time and effort to support users and build and test new features. Support this project via GitHub Sponsors.

Download k3sup (tl;dr)

k3sup is distributed as a static Go binary. You can use the installer on MacOS and Linux, or visit the Releases page to download the executable for Windows.

curl -sLS https://get.k3sup.dev | sh
sudo install k3sup /usr/local/bin/

k3sup --help

A note for Windows users

Windows users can use k3sup install and k3sup join with a normal "Windows command prompt".

Demo 📼

In the demo I install Kubernetes (k3s) onto two separate machines and get my kubeconfig downloaded to my laptop each time in around one minute.

  1. Ubuntu 18.04 VM created on DigitalOcean with ssh key copied automatically
  2. Raspberry Pi 4 with my ssh key copied over via ssh-copy-id

Watch the demo:

asciicast

Who is the author? 👏

k3sup is Open Source Software (OSS) and was created by Alex Ellis - the founder of OpenFaaS ® & inlets. Alex is also an active part of the Docker & Kubernetes community as a CNCF Ambassador.

If you've benefitted from his open source projects or blog posts in some way, then join dozens of other developers today by buying an Insiders Subscription 🏆 via GitHub Sponsors.

Usage ✅

The k3sup tool is a client application which you can run on your own computer. It uses SSH to connect to remote servers and creates a local KUBECONFIG file on your disk. Binaries are provided for MacOS, Windows, and Linux (including ARM).

Pre-requisites for k3sup servers and agents

Some Linux hosts are configured to allow sudo to run without having to repeat your password. For those which are not already configured that way, you'll need to make the following changes if you wish to use k3sup:

# sudo visudo

# Then add to the bottom of the file
# replace "alex" with your username i.e. "ubuntu"
alex ALL=(ALL) NOPASSWD: ALL

In most circumstances, cloud images for Ubuntu and other distributions will not require this step.
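
If you're not sure whether a host is already set up this way, a quick non-interactive check looks like this (a sketch; replace ubuntu and $IP with your own user and host):

# prints "ok" only if sudo works without prompting for a password
ssh ubuntu@$IP 'sudo -n true && echo ok'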

As an alternative, if you only need a single server you can log in interactively and run k3sup install --local instead of using SSH.

👑 Set up a Kubernetes server with k3sup

You can set up a server and stop here, or go on to use the join command to add some "agents" aka nodes or workers into the cluster to expand its compute capacity.

Provision a new VM running a compatible operating system such as Ubuntu, Debian, Raspbian, or something else. Make sure that you opt-in to copy your registered SSH keys over to the new VM or host automatically.

Note: You can copy ssh keys to a remote VM with ssh-copy-id user@IP.

Imagine the IP was 192.168.0.1 and the username was ubuntu, then you would run this:

  • Run k3sup:
export IP=192.168.0.1
k3sup install --ip $IP --user ubuntu

# Or use a hostname and SSH key for EC2
export HOST="ec2-3-250-131-77.eu-west-1.compute.amazonaws.com"
k3sup install --host $HOST --user ubuntu \
  --ssh-key $HOME/ec2-key.pem

Other options for install:

  • --cluster - start this server in clustering mode using embedded etcd (embedded HA)
  • --skip-install - if you already have k3s installed, you can just run this command to get the kubeconfig
  • --ssh-key - specify a specific path for the SSH key for remote login
  • --local-path - default is ./kubeconfig - set the file where you want to save your cluster's kubeconfig. By default this file will be overwritten.
  • --merge - Merge config into existing file instead of overwriting (e.g. to add config to the default kubectl config, use --local-path ~/.kube/config --merge).
  • --context - default is default - set the name of the kubeconfig context.
  • --ssh-port - default is 22, but you can specify an alternative port, e.g. 2222
  • --k3s-extra-args - Optional extra arguments to pass to the k3s installer, wrapped in quotes, e.g. --k3s-extra-args '--no-deploy traefik' or --k3s-extra-args '--docker'. For multiple args, combine them within single quotes: --k3s-extra-args '--no-deploy traefik --docker'.
  • --k3s-version - set a specific version of k3s, e.g. v0.9.1
  • --ipsec - Sets the optional k3s argument --flannel-backend to ipsec
  • --print-command - Prints out the command, sent over SSH to the remote computer
  • --datastore - used to pass a SQL connection-string to the --datastore-endpoint flag of k3s. You must use the format required by k3s in the Rancher docs.

See even more install options by running k3sup install --help.
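
As an illustration, several of these flags compose naturally in a single command; all values below are placeholders:

k3sup install --ip $IP --user ubuntu \
  --k3s-version v0.9.1 \
  --merge \
  --local-path $HOME/.kube/config \
  --context my-k3s \
  --k3s-extra-args '--no-deploy traefik' \
  --print-command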

  • Now try the access:
export KUBECONFIG=`pwd`/kubeconfig
kubectl get node

Note that you should always use `pwd`/ so that a full path is set, and you can change directory if you wish.

Merging clusters into your KUBECONFIG

You can also merge the remote config into your main KUBECONFIG file $HOME/.kube/config, then use kubectl config get-contexts or kubectx to manage it.

The default "context" name for the remote k3s cluster is default, however you can override this as below.

For example:

k3sup install \
  --ip $IP \
  --user $USER \
  --merge \
  --local-path $HOME/.kube/config \
  --context my-k3s

Here we set a context of my-k3s and also merge into our main local KUBECONFIG file, so we could run kubectl config use-context my-k3s or kubectx my-k3s.
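
For example, after merging you can list and switch contexts with kubectl alone:

kubectl config get-contexts
kubectl config use-context my-k3s
kubectl get node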

😸 Join some agents to your Kubernetes server

Let's say that you have a server, and have already run the following:

export SERVER_IP=192.168.0.100
export USER=root

k3sup install --ip $SERVER_IP --user $USER

Next join one or more agents to the cluster:

export AGENT_IP=192.168.0.101

export SERVER_IP=192.168.0.100
export USER=root

k3sup join --ip $AGENT_IP --server-ip $SERVER_IP --user $USER

That's all, so with the above command you can have a two-node cluster up and running, whether that's using VMs on-premises, using Raspberry Pis, 64-bit ARM or even cloud VMs on EC2.
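
If you have several agents to add, a simple shell loop is enough. A sketch, assuming the same user and key on every host; the IPs are placeholders:

export SERVER_IP=192.168.0.100

for AGENT_IP in 192.168.0.101 192.168.0.102 192.168.0.103; do
  k3sup join --ip $AGENT_IP --server-ip $SERVER_IP --user root
done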

Create a multi-master (HA) setup with external SQL

The easiest way to test out k3s' multi-master (HA) mode with external storage is to set up a MySQL server using DigitalOcean's managed service.

  • Get the connection string from your DigitalOcean dashboard, and adapt it

Before:

mysql://doadmin:80624d3936dfc8d2e80593@db-mysql-lon1-90578-do-user-6456202-0.a.db.ondigitalocean.com:25060/defaultdb?ssl-mode=REQUIRED

After:

mysql://doadmin:80624d3936dfc8d2e80593@tcp(db-mysql-lon1-90578-do-user-6456202-0.a.db.ondigitalocean.com:25060)/defaultdb

Note that we've removed ?ssl-mode=REQUIRED and wrapped the host/port in tcp().

export DATASTORE="mysql://doadmin:80624d3936dfc8d2e80593@tcp(db-mysql-lon1-90578-do-user-6456202-0.a.db.ondigitalocean.com:25060)/defaultdb"

You can prefix this command with a space to prevent it from being stored in your bash history (this relies on HISTCONTROL containing ignorespace or ignoreboth, which many distributions set by default).

  • Create three VMs

Imagine we have the following three VMs; two will be servers, and one will be an agent.

export SERVER1=104.248.135.109
export SERVER2=104.248.25.221
export AGENT1=104.248.137.25
  • Install the first server
k3sup install --user root --ip $SERVER1 --datastore="${DATASTORE}"
  • Install the second server
k3sup install --user root --ip $SERVER2 --datastore="${DATASTORE}"
  • Join the first agent

You can join the agent to either server; the datastore is not required for this step.

k3sup join --user root --server-ip $SERVER1 --ip $AGENT1
  • Additional steps

If you run kubectl get node, you'll now see two masters/servers and one agent. However, we joined the agent directly to the first server, so if that server goes down, the agent will effectively go offline with it.

kubectl get node

NAME              STATUS                        ROLES    AGE     VERSION
k3sup-1           Ready                         master   73s     v1.19.6+k3s1
k3sup-2           Ready                         master   2m31s   v1.19.6+k3s1
k3sup-3           Ready                         <none>   14s     v1.19.6+k3s1

There are two ways to prevent a dependency on the IP address of any one host. The first is to create a TCP load-balancer in the cloud of your choice; the second is to create a DNS round-robin record which contains the IPs of all of your servers.

In your DigitalOcean dashboard, go to the Networking menu and click "Load Balancer". Create one in the same region as your Droplets and SQL server. Select your two Droplets, e.g. 104.248.135.109 and 104.248.25.221, and use TCP with port 6443.

If you want to run k3sup join against the IP of the LB, then you should also add TCP port 22.

Make sure that the health-check setting is also set to TCP and port 6443. Wait for the LB's IP to be assigned; mine was 174.138.101.83.

Save the LB into an environment variable:

export LB=174.138.101.83
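
Before changing any configuration, you can sanity-check that the LB forwards to the API server. An anonymous request should get an HTTP 401 or 403 JSON response from Kubernetes rather than a connection error (a quick check, not an official health probe):

# expect 401 or 403 from the API server, not a timeout
curl -sk -o /dev/null -w '%{http_code}\n' https://$LB:6443/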

Now use ssh to log into both of your servers and edit their config files at /etc/systemd/system/k3s.service, updating the address that follows --tls-san to that of your LB:

ExecStart=/usr/local/bin/k3s \
    server \
        '--tls-san' \
        '104.248.135.109' \

Becomes:

ExecStart=/usr/local/bin/k3s \
    server \
        '--tls-san' \
        '174.138.101.83' \

Now run:

sudo systemctl daemon-reload && \
  sudo systemctl restart k3s

And repeat these steps on the other server.
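
If you'd rather script the edit than do it by hand, a sed one-liner can make the substitution. This is a sketch which assumes the server's original IP appears exactly once in the unit file, quoted as shown above:

# run on each server, substituting that server's own IP and your LB's IP
sudo sed -i "s/'104.248.135.109'/'174.138.101.83'/" /etc/systemd/system/k3s.service
sudo systemctl daemon-reload && sudo systemctl restart k3s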

You can update the agent manually via ssh by editing /etc/systemd/system/k3s-agent.service.env on the host, or run k3sup join again, but only if you added port 22 to your LB:

k3sup join --user root --server-ip $LB --ip $AGENT1

Finally, regenerate your KUBECONFIG file with the LB's IP, instead of one of the servers:

k3sup install --skip-install --ip $LB

Log into the first server and stop k3s (sudo systemctl stop k3s), then check that kubectl still functions as expected:

export KUBECONFIG=`pwd`/kubeconfig
kubectl get node -o wide

NAME              STATUS                        ROLES    AGE   VERSION
k3sup-1           NotReady                      master   23m   v1.19.6+k3s1
k3sup-2           Ready                         master   25m   v1.19.6+k3s1
k3sup-3           Ready                         <none>   22m   v1.19.6+k3s1

You've just simulated a failure of one of your masters/servers, and you can still access kubectl. Congratulations on building a resilient k3s cluster.

Create a multi-master (HA) setup with embedded etcd

In k3s v1.19.5+k3s1, an HA multi-master (multi-server in k3s terminology) configuration became available, called "embedded etcd". A quorum of servers is required, which means having an odd number of nodes, and at least three. See more

  • Initialize the cluster with the first server

Note the --cluster flag

export SERVER_IP=192.168.0.100
export USER=root

k3sup install \
  --ip $SERVER_IP \
  --user $USER \
  --cluster \
  --k3s-version v1.19.1+k3s1
  • Join each additional server

Note the new --server flag

export USER=root
export SERVER_IP=192.168.0.100
export NEXT_SERVER_IP=192.168.0.101

k3sup join \
  --ip $NEXT_SERVER_IP \
  --user $USER \
  --server-user $USER \
  --server-ip $SERVER_IP \
  --server \
  --k3s-version v1.19.1+k3s1

Now check kubectl get node:

kubectl get node
NAME              STATUS   ROLES    AGE     VERSION
paprika-gregory   Ready    master   8m27s   v1.19.2-k3s
cave-sensor       Ready    master   27m     v1.19.2-k3s

👨‍💻 Micro-tutorial for Raspberry Pi (2, 3, or 4) 🥧

In a few moments you will have Kubernetes up and running on your Raspberry Pi 2, 3 or 4. Stand by for the fastest possible install. At the end you will have a KUBECONFIG file on your local computer that you can use to access your cluster remotely.

Conceptual architecture, showing k3sup running locally against bare-metal ARM devices.

  • Download etcher.io for your OS

  • Flash an SD card using Raspbian Lite

  • Enable SSH by creating an empty file named ssh in the boot partition

  • Generate an ssh-key if you don't already have one with ssh-keygen (hit enter to all questions)

  • Find the RPi IP with ping -c 1 raspberrypi.local, then set export SERVER_IP="" with the IP

  • Enable container features in the kernel by editing /boot/cmdline.txt

  • Add the following to the end of the line: cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory (see the sketch at the end of this section)

  • Copy over your ssh key with: ssh-copy-id pi@raspberrypi.local

  • Run k3sup install --ip $SERVER_IP --user pi

  • Point at the config file and get the status of the node:

export KUBECONFIG=`pwd`/kubeconfig
kubectl get node -o wide

You now have kubectl access from your laptop to your Raspberry Pi running k3s.

If you want to join some nodes, run export IP="" for each additional RPi, followed by:

  • k3sup join --ip $IP --server-ip $SERVER_IP --user pi

Remember all these commands are run from your computer, not the RPi.
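
The one exception is the cmdline.txt edit, which happens on the Pi itself. A minimal sketch, assuming Raspbian with the boot partition mounted at /boot and a single-line cmdline.txt:

# run on the RPi; keep a backup, then append the cgroup flags to the one kernel line
sudo cp /boot/cmdline.txt /boot/cmdline.txt.bak
sudo sed -i '1 s/$/ cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory/' /boot/cmdline.txt
sudo reboot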

Now where next? I would recommend my detailed tutorial where I spend time looking at how to flash the SD card, deploy k3s, deploy OpenFaaS (for some useful microservices), and then get incoming HTTP traffic.

Try it now: Will it cluster? K3s on Raspbian

Caveats on security

If you are using public cloud, then make sure you see the notes from the Rancher team on setting up a Firewall or Security Group.

k3s docs: k3s configuration / open ports

If your ssh-key is password-protected

If the ssh-key is encrypted, the first step is to try to connect to the ssh-agent. If this works, it will be used to connect to the server. If the ssh-agent is not running, the user will be prompted for the password of the ssh-key.

On most Linux systems and MacOS, ssh-agent is automatically configured and executed at login. No additional actions are required to use it.

To start the ssh-agent manually and add your key run the following commands:

eval `ssh-agent`
ssh-add ~/.ssh/id_rsa

You can now just run k3sup as usual. No special parameters are necessary.

k3sup install --ip $IP --user user

Contributing

Sponsor on GitHub ☕️ 👏

k3sup is free and open source, but requires time and effort to support users and build and test new features. Support this project via GitHub Sponsors.

Blog posts & tweets

Blog posts, tutorials, and Tweets about k3sup (#k3sup) are appreciated. Please send a PR to the README.md file to add yours.

Contributing via GitHub

Before contributing code, please see the CONTRIBUTING guide. Note that k3sup uses the same guide as inlets.dev.

Both Issues and PRs have their own templates. Please fill out the whole template.

All commits must be signed-off as part of the Developer Certificate of Origin (DCO).

License

MIT

📢 What are people saying about k3sup?

Check out the announcement tweet

Similar tools & glossary

Glossary:

  • Kubernetes: master/slave
  • k3s: server/agent

Related tools:

  • k3s - Kubernetes as installed by k3sup. k3s is a compliant, light-weight, multi-architecture distribution of Kubernetes. It can be used to run Kubernetes locally or remotely for development, or in edge locations.
  • k3d - this tool runs a Docker container on your local laptop with k3s inside
  • kind - kind can run a Kubernetes cluster within a Docker container for local development. k3s is also suitable for this purpose through k3d. KinD is not suitable for running a remote cluster for development.
  • kubeadm - a tool to create fully-loaded, production-ready Kubernetes clusters with or without high-availability (HA). Tends to be heavier-weight and slower than k3s. It is aimed at cloud VMs or bare-metal computers which means it doesn't always work well with low-powered ARM devices.
  • k3v - "virtual kubernetes" - a very early PoC from the author of k3s aiming to slice up a single cluster for multiple tenants
  • k3sup-multipass - a helper to launch single node k3s cluster with one command using a multipass VM and optionally proxy the ingress to localhost for easier development.

Troubleshooting

If you're having issues, it's likely that this is a problem with K3s, and not with K3sup. How do we know that? Mostly from past issues.

Rancher provides support for K3s on their Slack in the #k3s channel. This should be your first port of call. Your second is to raise an issue with the K3s maintainers in the K3s repo.

Common issues:

  • Raspberry Pi - you haven't updated cmdline.txt to enable cgroups for CPU and memory

  • sudo: a terminal is required to read the password - see the Pre-requisites for k3sup agents and servers

  • K3s server didn't start. Log in and run sudo journalctl -u k3s

  • The K3s agent didn't start. Log in and run sudo journalctl -u k3s-agent

  • You tried to remove and re-add a server in an etcd cluster and it failed. This is a known issue, see the K3s issue tracker.

  • You tried to use an unsupported version of a database for HA. See this list from Rancher

Finally, if everything points to an issue that you can clearly reproduce with k3sup, feel free to open an issue here. To make sure you get a response, fill out the whole template and answer all the questions.

Getting access to your KUBECONFIG

You may have run into an issue where sudo access is required for kubectl access.

You should not run kubectl on your server or agent nodes. k3sup is designed to rewrite and/or merge your cluster's config to your local KUBECONFIG file. You should run kubectl on your laptop / client machine.

If you've lost your kubeconfig, you can use k3sup install --skip-install. See also the various flags for merging and setting a context name.
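
For example, to recover a lost kubeconfig from a running server and merge it into your main file under a fresh context name (all values are placeholders):

k3sup install --skip-install \
  --ip $SERVER_IP --user ubuntu \
  --merge \
  --local-path $HOME/.kube/config \
  --context k3s-recovered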

Smart cards and 2FA

Warning: issues requesting support for smart cards / 2FA will be closed immediately. The feature has been proven to work, and is provided as-is. We do not have time to debug your system.

You can use a smart card or 2FA security key such as a Yubikey. You must have your ssh-agent configured correctly, at that point k3sup will defer to the agent to make connections on MacOS and Linux. Find out more

Misc note on iptables

Note added by Eduardo Minguez Perez

Currently there is an issue in k3s involving iptables >= 1.8 that can affect network communication. See the k3s issue and the corresponding Kubernetes one for more information and workarounds. The issue has been observed on Debian Buster, but it can affect other distributions as well.

Owner
Alex Ellis
Founder @openfaas @inlets. CNCF Ambassador
Comments
  • SSH Error - handshake failed

    SSH Error - handshake failed

    I tried adding a server today with the following command and the resulting output:

    $ k3sup install --context k3s-dev --ip 163.172.147.187 --user kscarlett --ssh-key ~/.ssh/id_rsa
    Public IP: <ip>
    ssh -i /Users/kscarlett/.ssh/id_rsa kscarlett@<ip>
    Error: unable to connect to <ip>:22 over ssh: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
    

    The strange thing is that when I copy-paste the SSH command it prints, it logs me in just fine. Of note is that when I SSH into the server, it takes ~5 seconds, while k3sup fails immediately.

    Expected Behaviour

    Successful SSH authentication, just as I get manually.

    Current Behaviour

    Near-immediate failure of the SSH command.

    Possible Solution

    Steps to Reproduce (for bugs)

    Seems like normal workflow - environment issue?

    Context

    I am unable to create a new server.

    Your Environment

    Local

    • OS: macOS 10.14.6
    • SSH: OpenSSH_7.9p1, LibreSSL 2.7.3

    Server

    • OS: Ubuntu 18.04.3 LTS
    • SSH: OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017
    • Hosted at Scaleway (C2L)
  • error when sudo requires a password

    error when sudo requires a password

    k3sup appears to require root access or passwordless sudo.

    Expected Behaviour

    Either the documentation should contain a note that passwordless sudo is required when using a non-root user for SSH, or the software should allow the user to enter a sudo password. This could be done on the command line (like ansible) or by using "ssh -t" to create a proper terminal so that sudo can prompt the user that is running k3sup for a sudo password.

    Current Behaviour

    [INFO]  Using v0.9.1 as release
    [INFO]  Downloading hash https://github.com/rancher/k3s/releases/download/v0.9.1/sha256sum-amd64.txt
    [INFO]  Downloading binary https://github.com/rancher/k3s/releases/download/v0.9.1/k3s
    [INFO]  Verifying binary download
    [INFO]  Installing k3s to /usr/local/bin/k3s
    sudo: a terminal is required to read the password; either use the -S option to read from standard input or configure an askpass helper
    Error: Error received processing command: Process exited with status 1
    

    Possible Solution

    use "ssh -t" to open a tty between the remote and the local user, so that sudo can ask for a password

    Steps to Reproduce (for bugs)

    1. create a user on the remote system that requires a password for sudo (the default, really)
    2. use k3sup --user to install as that user

    Context

    Your Environment

    Linux (Arch) on both hosts, VM locally hosted using libvirt/KVM

  • Error when using ed25519 key

    Error when using ed25519 key

    k3sup install tries to load my SSH key at ~/.ssh/id_rsa but I do not have an RSA key (I only use ed25519 these days):

    $ k3sup --ip 192.168.100.10 --user chk install docker
    Public IP: 192.168.100.10
    ssh -i /home/chk/.ssh/id_rsa chk@192.168.100.10
    Error: unable to load the ssh key with path "/home/chk/.ssh/id_rsa": open /home/chk/.ssh/id_rsa: no such file or directory
    

    Expected Behaviour

    k3sup should not specify the SSH key file path unless explicitly requested to do so on the command line. The ssh command will default to using an ssh-agent if it is configured, and will default to using any default keys in ~/.ssh if they are available.

    Current Behaviour

    See summary above; k3sup defaults to trying to use a key file that does not exist.

    Possible Solution

    Steps to Reproduce (for bugs)

    Context

    Your Environment

    • What OS or type or VM are you using? Where is it hosted?

    • Operating System and version (e.g. Linux, Windows, MacOS):

  • Can not find tiller during app install of nginx-ingress

    Can not find tiller during app install of nginx-ingress

    Expected Behavior

    Install of ingress-nginx

    Current Behaviour

    $ k3sup app install nginx-ingress
    Using kubeconfig: /home/pi/.kube/config
    Using helm3
    Client: armv7l, Linux
    2020/02/14 08:13:02 User dir established as: /home/pi/.k3sup/
    "stable" has been added to your repositories
    Hang tight while we grab the latest from your chart repositories...
    ...Skip local chart repository
    ...Successfully got an update from the "stable" chart repository
    Update Complete.
    Node architecture: "arm"
    Chart path: /tmp/charts
    VALUES values.yaml
    Command: /home/pi/.k3sup/bin/helm3/helm [upgrade --install nginx-ingress stable/nginx-ingress --namespace default --values /tmp/charts/nginx-ingress/values.yaml --set defaultBackend.enabled=false --set controller.image.repository=quay.io/kubernetes-ingress-controller/nginx-ingress-controller-arm]
    Error: could not find tiller
    Error: exit code 1, stderr: Error: could not find tiller

    Steps to Reproduce (for bugs)

    1. k3sup install nginx-ingress

    Your Environment

    • What Kubernetes distribution are you using (for k3sup app)?

    1.17.2

    Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-01-18T23:30:10Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/arm"}
    Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-01-18T23:22:30Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/arm"}
    
    • Operating System and version (e.g. Linux, Windows, MacOS):
    $ cat /etc/os-release
    PRETTY_NAME="Raspbian GNU/Linux 10 (buster)"
    NAME="Raspbian GNU/Linux"
    VERSION_ID="10"
    VERSION="10 (buster)"
    VERSION_CODENAME=buster
    ID=raspbian
    ID_LIKE=debian
    HOME_URL="http://www.raspbian.org/"
    SUPPORT_URL="http://www.raspbian.org/RaspbianForums"
    BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs"
    
  • Help wanted: Test on Windows 10

    Help wanted: Test on Windows 10

    Expected Behaviour

    I'd like to know if k3sup works on Windows 10, and if not, what is needed to make that possible.

    Current Behaviour

    Users can download the code and rebuild on Windows using Go to find out.

    Context

    It would be good to offer a binary for the main OSes

  • [Feature Request] Krypton support

    [Feature Request] Krypton support

    Anyone using Krypton to store their SSH private key on their phone for 2FA is currently unable to use k3sup with that private key, because somehow the SSH implementation in k3sup bypasses the way Krypton reroutes SSH calls.

    Expected Behaviour

    Usually when I use SSH with Krypton and the kr CLI utility it's seamless. It redirects any calls to the OpenSSH client over to krssh, where it requests approval from the Krypton app on my phone to use the private key. It accomplishes this by adding the following to the ~/.ssh/config file:

    # Added by Krypton
    Host *
    	IdentityAgent ~/.kr/krd-agent.sock
    	ProxyCommand /usr/local/bin/krssh %h %p
    	IdentityFile ~/.ssh/id_krypton
    	IdentityFile ~/.ssh/id_ed25519
    	IdentityFile ~/.ssh/id_rsa
    	IdentityFile ~/.ssh/id_ecdsa
    	IdentityFile ~/.ssh/id_dsa%
    

    What is supposed to happen is something similar to the following.

    
    โฏ ssh me.krypt.co
    Krypton โ–ถ Requesting SSH authentication from phone
    Krypton โ–ถ Phone approval required. Respond using the Krypton app
    Krypton โ–ถ Success. Request Allowed โœ”
    
    
                    ''....''
              '.-:/++++++++++/:-.'
           '-/++++++++++++++++++++/-'
         ':+++++++++++++++++++++++++/:'
         :++++++++++/:----:/++++++++++:
         :+++++++/-'        '-/+++++++:     _                               _
         :++++++/   -/++++\-  '/++++++:    | | __   _ __   _   _    _ __   | |_       ___    ___
         :+++++/   /++++++++\  '/+++++:    | |/ /  | '__| | | | |  | '_ \  | __|     / __|  / _ \
         :+++++:  :++++++++++:  :+++++:    |   <   | |    | |_| |  | |_) | | |_   _ | (__  | (_) |
         :+++++/  \++++++++++/  /+++++:    |_|\_\  |_|     \__, |  | .__/   \__| (_) \___|  \___/
         :++++++-   \++++++/   -++++++:                    |___/   |_|
         :+++++++:   '----'   :+++++++:
         .+++++++++/-......-/+++++++++.
          ./++++++++++++++++++++++++/.
           ':/++++++++++++++++++++/:'
             '-:/++++++++++++++/:-'
                 .-://++++//:-.
                      '..'
    
    
    Hello $user!
    
    You have successfully authenticated to the KryptCo test server!
    Add your key to GitHub by typing 'kr github'. Type 'kr' to see all available commands.
    
    Connection to me.krypt.co closed.
    

    Though, in this case it would be an open SSH connection to the host and then k3sup would do whatever it needs to and close the connection.

    Current Behaviour

    Currently, however, this does not happen. Instead, k3sup returns the following error:

    โฏ k3sup install --ip $ip --local-path ~/.kube/config --print-config --user $user
    Running: k3sup install
    2021/03/25 01:36:45 $ip
    Public IP: $ip
    Error: unable to load the ssh key with path "/Users/$user/.ssh/id_rsa": unable to parse private key: ssh: no key found
    
    ### NOTE: I scrubbed the output, so the path to my SSH key isn't actually trying to use $user
    

    I'm not entirely sure why, but for some reason when k3sup calls the SSH agent, it's not getting redirected via the SSH configuration setup by Krypton. This may be that it's directly calling ssh-agent instead of just invoking the ssh command from the OS (have not tried to confirm this, but it seems plausible).

    Possible Solution

    I'm not entirely sure at this point, but it seems possible that whatever Go module k3sup uses to call SSH is calling ssh-agent directly and not the OS' ssh command, which seems to bypass the ~/.ssh/config file.

    Steps to Reproduce (for bugs)

    1. Setup Krypton w/ Developer mode (https://krypt.co/)
    2. Install the kr CLI util (instruction also on https://krypt.co/)
    3. Pair device with kr
    4. Run kr sshconfig to make sure Krypton is intercepting SSH calls
      • You'll want to make sure you don't have any private keys setup, by the way. Krypton is designed to function without needing any private keys on the local machine at all.
    5. Attempt to run k3sup install on a remote host
    6. You'll get the Error: unable to load the ssh key with path "/Users/p4rsec/.ssh/id_rsa": unable to parse private key: ssh: no key found

    Context

    I use Krypton to keep my SSH private key secured on my phone. This lets me have 2FA on my SSH key for things like Github, and also makes my SSH key portable since it's stored on my phone (I tend to switch around hosts somewhat randomly so being able to just pair the krutility to my Krypton app is quite a pleasant experience).

    It's not an absolute deal breaker if this isn't something worth dedicating time to, but it's extremely strange that it's having trouble.

    Your Environment

    • What Kubernetes distribution are you using?
    kubectl version
    

    v1.20.4+k3s1

    • What OS or type or VM are you using for your cluster? Where is it hosted? (for k3sup install/join): Ubuntu 20.04 LTS on a Hades Canyon NUC (Intel i7 8809-G, 16GB RAM)

    • Operating System and version (e.g. Linux, Windows, MacOS): Host: Ubuntu 20.04 LTS Dev/control machine: macOS 11.2

    uname -a
    
    cat /etc/os-release
    

    "Be part of the solution"

    Subject to approval, are you willing to work on a Pull Request for this issue or feature request?

    Yes

  • Support issue for getting Kubeconfig file on AWS EC2 with K3s 1.22

    Support issue for getting Kubeconfig file on AWS EC2 with K3s 1.22

    * /usr/src/app/users/20039871/aws/.kube = <custom path to .kube folder>

    After running:

    k3sup install --host <ip addy> --cluster --ssh-key <pem key> --user ubuntu --k3s-channel stable --local-path <custom path to .kube folder> --context k3s --k3s-extra-args '--no-deploy traefik --write-kubeconfig <custom path to .kube folder> --write-kubeconfig-mode 644'

    It returns:

    Saving file to: /usr/src/app/users/20039871/aws/.kube
    
    # Test your cluster with:
    export KUBECONFIG=/usr/src/app/users/20039871/aws/.kube
    kubectl config set-context k3s
    kubectl get node -o wide
    Error: open /usr/src/app/users/20039871/aws/.kube: is a directory
    

    My security group for the VMs are config'd as follows:

     IPv4 | Custom UDP | UDP | 8472 | 0.0.0.0/0 | –
     IPv4 | Custom TCP | TCP | 10250 | 0.0.0.0/0 | –
     IPv4 | Custom TCP | TCP | 6443 | 0.0.0.0/0 | –
     IPv4 | SSH | TCP | 22 | 0.0.0.0/0
    

    The above error persists and I am not sure how to fix. Ill gladly contrib money to project if you can help!

  • Question - how can I use k3sup when sudo requires a password

    Question - how can I use k3sup when sudo requires a password

    When running the command k3sup install --ip 192.168.1.182 --user stephen --cluster --ssh-key ~/.ssh/kube-master-1 get the following error: sudo: a terminal is required to read the password; either use the -S option to read from standard input or configure an askpass helper Error: error received processing command: Process exited with status 1

    Expected Behaviour

    Should operate as intended in documentation

    Steps to Reproduce (for bugs)

    Context

    Your Environment

    Multiple VM's on a Proxmox 6.3-3 host

    • What Kubernetes distribution are you using?
    kubectl version:
    
    Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0", GitCommit:"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38", GitTreeState:"clean", BuildDate:"2020-12-08T17:59:43Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7+k3s1", GitCommit:"5a00e38db4c198fb0725a6b709380aed8053d637", GitTreeState:"clean", BuildDate:"2021-01-14T23:09:21Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
    
    • What OS or type or VM are you using for your cluster? Where is it hosted? (for k3sup install/join): Ubuntu 20.04
    • Operating System and version (e.g. Linux, Windows, MacOS):
    uname -a:
    Linux kube-master-1 5.4.0-62-generic #70-Ubuntu SMP Tue Jan 12 12:45:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
    
    cat /etc/os-release:
    NAME="Ubuntu"
    VERSION="20.04.1 LTS (Focal Fossa)"
    ID=ubuntu
    ID_LIKE=debian
    PRETTY_NAME="Ubuntu 20.04.1 LTS"
    VERSION_ID="20.04"
    HOME_URL="https://www.ubuntu.com/"
    SUPPORT_URL="https://help.ubuntu.com/"
    BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
    PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
    VERSION_CODENAME=focal
    UBUNTU_CODENAME=focal
    

    "Be part of the solution"

    Subject to approval, are you willing to work on a Pull Request for this issue or feature request?

    Yes

  • Transparently use ssh-agent on linux/darwin

    Transparently use ssh-agent on linux/darwin

    Description

    This fixes #311.

    Attempt to create the ssh connection using a pre-existing ssh-agent first, this will allow users with pre-configured agents, yubikeys etc on Linux and Darwin to establish ssh connections without resorting to private keys.

    If this first connection attempt fails we fall through to the existing key based auth flow.

    Note that Windows is not supported in this PR as yubikeys do not currently use the ssh-agent.

    NOTE this does not include an update to the join command or docs pending an OK on the install implementation.

    Motivation and Context

    • [x] I have raised an issue to propose this change (required)

    How Has This Been Tested?

    • OSX
      • Using Yubikey, key present in ssh-agent, prompted for key pin, install completes successfully
      • Using private key file (existing flow), install completes successfully
    • Windows 10
      • Using private key file (existing flow), install completes successfully

    Types of changes

    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [x] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Checklist:

    • [x] My code follows the code style of this project.
    • [x] My change requires a change to the documentation.
      • Possibly not?
    • [ ] I have updated the documentation accordingly.
    • [x] I've read the CONTRIBUTION guide
    • [x] I have signed-off my commits with git commit -s
    • [ ] I have added tests to cover my changes.
    • [x] All new and existing tests passed.
  • Feature Request: Pass an optional url instead of server ip address to join nodes

    Feature Request: Pass an optional url instead of server ip address to join nodes

    I'm building out a k3s cluster in AWS with Terraform and using ip addresses to bootstrap the cluster is not optimal. I'd rather create internal load balancers and/or internal Route53 addresses to point to auto scaling instances (especially for the server ip for the join command).

    Given the current options, the best solution I've found is to use a cloud-init to provision an agent node to look up the current private ip address of a server in the server autoscaling group, and then use that ip on startup. I'm worried that this configuration might break if the server gets terminated.

    This arrangement could be simplified considerably by abstracting the ip address,

    Thanks!

  • Feature request: Support new-format of encrypted OpenSSH keys

    Feature request: Support new-format of encrypted OpenSSH keys

    The problem is that ssh-keygen now produces automatically a key in RFC4716-format. A few months ago, it was still the old "PEM" format. Go does not support SSH keys in the RFC4716-format. It does support keys in the PKCS#8 format but only unencrypted keys are supported currently.

    $ k3sup install --ip 127.0.0.1 --user kamikaze
    Running: k3sup install
    Public IP: 127.0.0.1
    ssh -i /home/kamikaze/.ssh/id_rsa -p 22 kamikaze@127.0.0.1
    Enter passphrase for '/home/kamikaze/.ssh/id_rsa':
    Error: unable to load the ssh key with path "/home/kamikaze/.ssh/id_rsa": ssh: cannot decode encrypted private keys
    
  • Add support for IPv6 addresses

    Add support for IPv6 addresses

    Why do you need this?

    My node only has a reachable IPv6 address, so I tried to use k3sup with the following command:

    k3sup install --context <cluster context> --user <vm user> --ip <IPv6>
    

    Expected Behaviour

    I expect this to work the same way as for an IPv4 address

    Current Behaviour

    I get the following output:

    Running: k3sup install
    2022/12/24 xx:xx:xx <IPv6>
    Public IP: <IPv6>
    Error: unable to connect to <IPv6> over ssh: dial tcp: address <IPv6>:22: too many colons in address
    

    Possible Solution

    Add a -6 flag for when the user wants to provide an IPv6 address. In such cases it might also be needed to specify the network adapter, like <IPv6>%eth0. This seems to be the case for ssh, but not for ping, so you'd have to check if you need this.

    Steps to Reproduce

    1. Download k3sup
    2. Run k3sup install --context <cluster context> --user <vm user> --ip <IPv6> (with everything between <> filled in ofc.)

    Your Environment

    • k3sup version:
    0.12.12
    
    • What Kubernetes distribution, client and server version are you using?
    N/A
    
    • What OS or type or VM are you using for your cluster? Where is it hosted? (for k3sup install/join):

    • Operating System and version (e.g. Linux, Windows, MacOS):

    Self-hosted. 
    Host OS: Windows 11 22H2 (build: 22621.963), using Hyper-V (Configuration version: 11.0) + multipass (for creating the VMs)
    
    VM OS instances: 
    Linux <vmname> 5.15.0-56-generic #62-Ubuntu SMP Tue Nov 22 19:54:14 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
    
    PRETTY_NAME="Ubuntu 22.04.1 LTS"
    NAME="Ubuntu"
    VERSION_ID="22.04"
    VERSION="22.04.1 LTS (Jammy Jellyfish)"
    VERSION_CODENAME=jammy
    ID=ubuntu
    ID_LIKE=debian
    HOME_URL="https://www.ubuntu.com/"
    SUPPORT_URL="https://help.ubuntu.com/"
    BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
    PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
    UBUNTU_CODENAME=jammy
    

    Do you want to work on this?

    Subject to design approval, are you willing to work on a Pull Request for this issue or feature request?

    I would be willing, but I don't have very much experience with GoLang. 
    
  • Add --data-dir as a flag instead of passing it via --k3s-extra-args

    Add --data-dir as a flag instead of passing it via --k3s-extra-args

    Expected Behaviour

    It would be convenient to have a flag for --data-dir in the k3sup install and join commands.

    Current Behaviour

    A longer flag is required:

    --k3s-extra-args="--data-dir=/mnt/ssd/k3s"

    Possible Solution

    You don't need an SSD to test this, the command just installs k3s into a custom path for its data.

    Make --data-dir output additional arguments to k3s-extra-args, as per other flags we already have that work this way.

    Bear in mind that there is a setupAdditionalServer and setupAgent function, both of which work in a very similar way.

    Context

    Thought of this whilst looking into #373

    There also appears to be a bug in the join command which means extra args are ignored due to the ordering of string concatenation?

    Screenshot from 2022-08-26 11-38-26

    Related to: https://github.com/alexellis/k3sup/pull/388

  • How to use k3sup join a node in control node server

    How to use k3sup join a node in control node server

    I used two cloud server, e.g. A is 1.1.1.1, B is 2.2.2.2, they are brand new to the environment, and can be connected to each other,

    # In server A
    ssh -i /root/.ssh/test_150_to_43_rsa root@2.2.2.2
    # In server B
    ssh -i /root/.ssh/test_43_to_150_rsa root@1.1.1.1
    

    In server A

    export SERVER_IP=1.1.1.1
    export USER=root
    
    # Get wrong
    k3sup install --ip $SERVER_IP --user $USER
    # output
    # Error: unable to load the ssh key with path "/root/.ssh/id_rsa": unable
    # to read file: /root/.ssh/id_rsa, open /root/.ssh/id_rsa: no such file or directory
    
    # It works
    k3sup install --ip $SERVER_IP --user $USER --local
    

    then I want to join server B to cluster

    In server A

    export AGENT_IP=2.2.2.2
    
    k3sup join --ip $AGENT_IP --server-ip $SERVER_IP --user $USER --ssh-key /root/.ssh/test_150_to_43_rsa
    
    # Output
    # Running: k3sup join
    # Server IP: 1.1.1.1
    # Error: unable to connect to (server) 1.1.1.1:22 over ssh: ssh: handshake failed: ssh: unable to
    # authenticate, attempted methods [none publickey], no supported methods remain
    

    In server B

    export AGENT_IP=2.2.2.2
    export SERVER_IP=1.1.1.1
    export USER=root
    k3sup join --ip $AGENT_IP --server-ip $SERVER_IP --user $USER --ssh-key /root/.ssh/test_43_to_150_rsa
    
    # Output
    # Running: k3sup join
    # Server IP: 1.1.1.1
    # K107d91f95f6bfc5a2d0153b2b17d60abd5d20db87ceb73bbdcc4577483f6a33541::server:xxxxxxxx
    # Error: unable to connect to 2.2.2.2:22 over ssh: ssh: handshake failed: ssh: unable to
    # authenticate, attempted methods [none publickey], no supported methods remain
    

    I don't know why 😭😭

    Are you a GitHub Sponsor (Yes/No?)

    • [ ] Yes
    • [X] No

    Context

    Your Environment

    • k3sup 0.12.0
    • kubectl v1.23.8+k3s2
    • OS Linux Centos 7.6
  • [Enhancement] Suggest join commands after successfully completing `k3sup install`

    [Enhancement] Suggest join commands after successfully completing `k3sup install`

    Expected Behaviour

    Once a user successfully completes a k3sup install command, offer a couple other helpful "next step" commands for joining other nodes (both control plane and worker). This is a minor change to the output but could be helpful, similar to what kubeadm provides, to speed the process should a user want to join other nodes. The suggested commands can also be adaptive to the initial input. For example, if k3sup install was run with the flag --k3s-extra-args then the suggested next commands could include this flag and its arguments.

    For example, one suggested output may look like this (including the current output as of 0.11.0):

    $ k3sup install --ip 192.168.1.143 --user chip --cluster --tls-san 192.168.1.140 --k3s-channel=stable --k3s-extra-args "--disable traefik --disable servicelb --flannel-backend none --disable-network-policy"
    Running: k3sup install
    2021/11/23 08:46:41 192.168.1.143
    Public IP: 192.168.1.143
    [INFO]  Finding release for channel stable
    [INFO]  Using v1.21.5+k3s2 as release
    [INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.21.5+k3s2/sha256sum-amd64.txt
    [INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.21.5+k3s2/k3s
    [INFO]  Verifying binary download
    [INFO]  Installing k3s to /usr/local/bin/k3s
    [INFO]  Skipping installation of SELinux RPM
    [INFO]  Creating /usr/local/bin/kubectl symlink to k3s
    [INFO]  Creating /usr/local/bin/crictl symlink to k3s
    [INFO]  Skipping /usr/local/bin/ctr symlink to k3s, command exists in PATH at /usr/bin/ctr
    [INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
    [INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
    [INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
    [INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
    [INFO]  systemd: Enabling k3s unit
    Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
    [INFO]  systemd: Starting k3s
    Result: [INFO]  Finding release for channel stable
    [INFO]  Using v1.21.5+k3s2 as release
    [INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.21.5+k3s2/sha256sum-amd64.txt
    [INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.21.5+k3s2/k3s
    [INFO]  Verifying binary download
    [INFO]  Installing k3s to /usr/local/bin/k3s
    [INFO]  Skipping installation of SELinux RPM
    [INFO]  Creating /usr/local/bin/kubectl symlink to k3s
    [INFO]  Creating /usr/local/bin/crictl symlink to k3s
    [INFO]  Skipping /usr/local/bin/ctr symlink to k3s, command exists in PATH at /usr/bin/ctr
    [INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
    [INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
    [INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
    [INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
    [INFO]  systemd: Enabling k3s unit
    [INFO]  systemd: Starting k3s
     Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
    
    Saving file to: /home/chip/k8s/kube-vip/kubeconfig
    
    # Test your cluster with:
    export KUBECONFIG=/home/chip/k8s/kube-vip/kubeconfig
    kubectl config set-context default
    kubectl get node -o wide
    
    ###################
    To join more control plane (server) nodes to this instance, run:
    
    k3sup join --user chip --server-ip 192.168.1.143 --ip <IP_of_next_server> --server --k3s-channel stable --k3s-extra-args "--disable traefik --disable servicelb --flannel-backend none --disable-network-policy"
    
    or, to start joining agents run:
    
    k3sup join --user chip --server-ip 192.168.1.143 --ip <IP_of_agent> --k3s-channel stable
    

    Current Behaviour

    Currently, k3sup does not print suggested next-step commands.

    Are you a GitHub Sponsor (Yes/No?)

    • [ ] Yes
    • [x] No

    Possible Solution

    Print some templated commands at the end of the block sent to stdout.

    Steps to Reproduce

    1. Run k3sup install and observe the output to stdout.

    Context

    As a new user to k3sup, I wanted to deploy a HA cluster to understand the process. As this requires multiple commands against multiple server instances, I had to inspect the help for k3sup each time in order to build a command line that would accomplish the use case. While this was not difficult, it occurred to me that perhaps this could be made a little simpler for new (or even existing) users by shortcuting some of that process.

    Your Environment

    • What Kubernetes distribution are you using? K3s 1.21
    Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.3", GitCommit:"ca643a4d1f7bfe34773c74f79527be4afd95bf39", GitTreeState:"clean", BuildDate:"2021-07-15T20:58:09Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.5+k3s2", GitCommit:"724ef700bab896ff252a75e2be996d5f4ff1b842", GitTreeState:"clean", BuildDate:"2021-10-05T19:59:14Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}
    
    • What OS or type or VM are you using for your cluster? Where is it hosted? (for k3sup install/join): vSphere, self-hosted

    • Operating System and version (e.g. Linux, Windows, MacOS): Ubuntu 18.04

    uname -a
    Linux k8s-0 4.15.0-162-generic #170-Ubuntu SMP Mon Oct 18 11:38:05 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
    
    cat /etc/os-release
    DISTRIB_ID=Ubuntu
    DISTRIB_RELEASE=18.04
    DISTRIB_CODENAME=bionic
    DISTRIB_DESCRIPTION="Ubuntu 18.04.6 LTS"
    NAME="Ubuntu"
    VERSION="18.04.6 LTS (Bionic Beaver)"
    ID=ubuntu
    ID_LIKE=debian
    PRETTY_NAME="Ubuntu 18.04.6 LTS"
    VERSION_ID="18.04"
    HOME_URL="https://www.ubuntu.com/"
    SUPPORT_URL="https://help.ubuntu.com/"
    BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
    PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
    VERSION_CODENAME=bionic
    UBUNTU_CODENAME=bionic
    

    "Be part of the solution"

    Subject to approval, are you willing to work on a Pull Request for this issue or feature request?

    • [x] Yes
    • [] No
  • Exclude windows compilation from all target on non x86_64

    Exclude windows compilation from all target on non x86_64

    Description

    Exclude windows compilation from all target on non x86_64

    Motivation and Context

    golang does not support windows on non x86_64 architectures which makes make all fail on e.g. aarch64

    How Has This Been Tested?

    I have compiled k3sup on x86_64 and aarch64.

    Types of changes

    • [x] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Checklist:

    • [x] My code follows the code style of this project.
    • [ ] My change requires a change to the documentation.
    • [ ] I have updated the documentation accordingly.
    • [x] I've read the CONTRIBUTION guide
    • [x] I have signed-off my commits with git commit -s
    • [ ] I have added tests to cover my changes.
    • [ ] All new and existing tests passed.
  • Return code for install command should be checked for all pipe commands

    Return code for install command should be checked for all pipe commands

    Currently the code just executes a command like

    curl -sfL https://get.k3s.io | K3S_URL='https://<ip>:6443' K3S_TOKEN='<token>' INSTALL_K3S_VERSION='v1.19.1+k3s1' INSTALL_K3S_EXEC='server --server https://<ip>:6443' sh -s -

    in https://github.com/alexellis/k3sup/blob/master/cmd/join.go#L319. The return code is checked afterwards. The problem is that the return code of this command is always the return code of the last command in the pipe, so if curl fails here, this will not be visible in the return code. This behaviour in bash can be changed by setting pipefail, see https://stackoverflow.com/questions/5934108/how-to-use-the-return-code-of-the-first-program-in-a-pipe-command-line. So the getscript in https://github.com/alexellis/k3sup/blob/master/cmd/install.go#L41 should prefix the command with set -o pipefail; so that the return code is != 0 if one of the commands in the pipe is != 0.
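
    To illustrate the difference (a sketch using a deliberately unreachable host, not k3sup's actual code):

    # without pipefail, the pipe's exit code comes from sh, so the curl failure is masked
    curl -sfL https://bad.host.invalid | sh -; echo $?   # prints 0

    # with pipefail, any failing command fails the whole pipe
    set -o pipefail
    curl -sfL https://bad.host.invalid | sh -; echo $?   # prints curl's non-zero exit code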

    Expected Behaviour

    If curl fails e.g. due to a network error or a DNS resolution issue an error should be returned.

    Current Behaviour

    If curl fails no feedback is given. The tool just stops without any feedback.

    Are you a GitHub Sponsor (Yes/No?)

    • [ ] Yes
    • [X] No

    Possible Solution

    I attached a patch as patch.txt

    Steps to Reproduce

    1. Blacklist one of the systems on which k3s should be installed in the router, so the internet access is blocked or tamper the /etc/resolv.conf file of a system so name resolution doesn't work properly.
    2. Run the installation on this modified machine via k3sup

    Context

    I tried to install on a machine that had a broken internet access and no proper error was returned as the curl failed which was not detected.

    Your Environment

    • What Kubernetes distribution are you using?
    Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.4", GitCommit:"3cce4a82b44f032d0cd1a1790e6d2f5a55d20aae", GitTreeState:"clean", BuildDate:"2021-08-11T18:16:05Z", GoVersion:"go1.16.7", Compiler:"gc", Platform:"darwin/amd64"}
    Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.1+k3s1", GitCommit:"b66760fccdddfc98fa107ec38c86c5e2814e559d", GitTreeState:"clean", BuildDate:"2020-09-17T17:17:18Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/arm"}
    
    • What OS or type or VM are you using for your cluster? Where is it hosted? (for k3sup install/join):

    Rasbian on RaspberryPi

    • Operating System and version (e.g. Linux, Windows, MacOS):
    uname -a
    Linux name 4.19.66-v7+ #1253 SMP Thu Aug 15 11:49:46 BST 2019 armv7l GNU/Linux
    
    cat /etc/os-release
    PRETTY_NAME="Raspbian GNU/Linux 9 (stretch)"
    NAME="Raspbian GNU/Linux"
    VERSION_ID="9"
    VERSION="9 (stretch)"
    VERSION_CODENAME=stretch
    ID=raspbian
    ID_LIKE=debian
    HOME_URL="http://www.raspbian.org/"
    SUPPORT_URL="http://www.raspbian.org/RaspbianForums"
    BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs"
    

    "Be part of the solution"

    Subject to approval, are you willing to work on a Pull Request for this issue or feature request?

    • [X] Yes
    • [ ] No