Ignite a Firecracker microVM

Weave Ignite


Weave Ignite is an open source Virtual Machine (VM) manager with a container UX and built-in GitOps management.

  • Combines Firecracker MicroVMs with Docker / OCI images to unify containers and VMs.
  • Works in a GitOps fashion and can manage VMs declaratively and automatically like Kubernetes and Terraform.

Ignite is fast and secure because of Firecracker. Firecracker is an open source virtualization technology from AWS, built on KVM and optimised for high security, isolation, speed and low resource consumption. AWS uses it as the foundation for their serverless offerings (AWS Lambda and Fargate), which need to load nearly instantly while also keeping users isolated (multi-tenancy). Firecracker has proven able to run 4,000 microVMs on the same host!

What is Ignite?

Read the announcement blog post here: https://www.weave.works/blog/fire-up-your-vms-with-weave-ignite

Ignite makes Firecracker easy to use by adopting the developer experience of containers. With Ignite, you pick an OCI-compliant image (Docker image) that you want to run as a VM, and then just execute ignite run instead of docker run. There's no need for VM-specific tools to build .vdi, .vmdk, or .qcow2 images; just run docker build from any base image you want (e.g. ubuntu:18.04 from Docker Hub) and add your preferred contents.
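For example, a base image can be prepared with nothing but a Dockerfile (a sketch; the image name and package list here are illustrative, not from the Ignite docs):

```shell
# Sketch: build a VM base image the same way you would build a container image.
# "my-vm-image" and the package list are illustrative assumptions.
cat > Dockerfile <<'EOF'
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y openssh-server sudo
EOF

# Plain docker build is all you need; no .vdi/.vmdk/.qcow2 tooling:
#   docker build -t my-vm-image .
#   ignite run my-vm-image
```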

When you run your OCI image using ignite run, Firecracker will boot a new VM in about 125 milliseconds (!) for you using a default 4.19 Linux kernel. If you want to use some other kernel, just specify the --kernel-image flag, pointing to another OCI image containing a kernel at /boot/vmlinux, and optionally your preferred modules. Next, the kernel executes /sbin/init in the VM, and it all starts up. After this, Ignite connects the VMs to any CNI network, integrating with e.g. Weave Net.

Ignite is a declarative Firecracker microVM administration tool, similar to how Docker manages runC containers. Ignite runs VMs from OCI images, spins them up and down at lightning speed, and can manage fleets of VMs efficiently using GitOps.

The idea is that Ignite makes Firecracker VMs look like Docker containers. Now we can deploy and manage full-blown VM systems just like e.g. Kubernetes workloads. The images used are OCI/Docker images, but instead of running them as containers, it executes their contents as a real VM with a dedicated kernel and /sbin/init as PID 1.

Networking is set up automatically: the VM gets an IP address the same way any container on the host would.

And Firecracker is fast! Building and starting a VM takes a fraction of a second, or at most a few seconds. With Ignite you can get started with Firecracker in no time!

Use-cases

With Ignite, Firecracker is now much more accessible to end users: the Docker-like UX provides an easy onboarding path that can bring the ecosystem to the next level of momentum.

Although Firecracker was designed with serverless workloads in mind, it can equally well boot a normal Linux OS, like Ubuntu, Debian or CentOS, running an init system like systemd.

Having a super-fast way of spinning up a new VM, with a kernel of your choice and an init system like systemd, allows you to run system-level applications such as the kubelet, which need to “own” the full system.

Example use-cases:

  • Set up many secure VMs lightning fast. It's great for testing, CI and ephemeral workloads.
  • Launch and manage entire “app ready” stacks from Git because Ignite supports GitOps!
  • Run even legacy or special-purpose apps in lightweight VMs (e.g. for multi-tenancy, or with unusual/edge kernels).

And - potentially - we can run a cloud of VMs ‘anywhere’ using Kubernetes for orchestration, Ignite for virtualization, GitOps for management, and supporting cloud native tools and APIs.

Scope

Ignite is different from Kata Containers and gVisor: they don't let you run real VMs, but instead wrap a container in a virtualization layer that provides a security boundary (a sandbox).

Ignite on the other hand lets you run a full-blown VM, easily and super-fast, but with the familiar container UX. This means you can “move down one layer” and start managing your fleet of VMs powering e.g. a Kubernetes cluster, but still package your VMs like containers.

Installing

Please check out the Releases Page.

How to install Ignite is covered in docs/installation.md or on Read the Docs.

Guidance on Cloud Providers' instances that can run Ignite is covered in docs/cloudprovider.md.

Getting Started

WARNING: In its v0.X series, Ignite is in alpha, which means that it might change in backwards-incompatible ways.

asciicast

Note: At the moment ignite and ignited need root privileges on the host to operate due to certain operations (e.g. mount). This will change in the future.

# Let's run the weaveworks/ignite-ubuntu OCI image as a VM
# Use 2 vCPUs and 1GB of RAM, enable automatic SSH access and name it my-vm
ignite run weaveworks/ignite-ubuntu \
    --cpus 2 \
    --memory 1GB \
    --ssh \
    --name my-vm

# List running VMs
ignite ps

# List Docker (OCI) and kernel images imported into Ignite
ignite images
ignite kernels

# Get the boot logs of the VM
ignite logs my-vm

# SSH into the VM
ignite ssh my-vm

# Inside the VM you can check that the kernel version is different, and the IP address came from the container
# Also the memory is limited to what you specify, as well as the vCPUs
> uname -a
> ip addr
> free -m
> cat /proc/cpuinfo

# Rebooting the VM tells Firecracker to shut it down
> reboot

# Cleanup
ignite rm my-vm

For a walkthrough of how to use Ignite, go to docs/usage.md.

Getting Started the GitOps way

Ignite is a “GitOps-first” project: GitOps is supported out of the box using the ignited gitops command. This functionality was previously integrated as ignite gitops, but has since moved to ignited, Ignite's upcoming daemon binary.

In Git you declaratively store the desired state of a set of VMs you want to manage. ignited gitops reconciles the state from Git, and applies the desired changes as state is updated in the repo. It also commits and pushes any local changes/additions to the managed VMs back to the repository.

This can then be automated, tracked for correctness, and managed at scale - just some of the benefits of GitOps.

The workflow is simply this:

  • Run ignited gitops [repo], where repo is an SSH URL to your Git repository
  • Create a file with the VM specification, declaring how many vCPUs and how much RAM, disk, etc. you’d like for the VM
  • Run git push and see your VM start on the host
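A minimal VM specification might look like this (a sketch; the file path and apiVersion are assumptions, see docs/gitops.md for the authoritative schema):

```shell
# Sketch: declare a VM in the repository watched by ignited gitops.
# The path and apiVersion are assumptions; check docs/gitops.md for the schema.
mkdir -p vms
cat > vms/my-vm.yaml <<'EOF'
apiVersion: ignite.weave.works/v1alpha2
kind: VM
metadata:
  name: my-vm
spec:
  image:
    oci: weaveworks/ignite-ubuntu
  cpus: 2
  memory: 1GB
  diskSize: 3GB
  ssh: true
EOF

# Commit and push; ignited gitops on the host reconciles the change:
#   git add vms/my-vm.yaml && git commit -m "Add my-vm" && git push
```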

See it in action! (Note: the screencast is from an older version and differs somewhat.)

asciicast

For the complete guide, see docs/gitops.md.

Awesome Ignite

Want to see how awesome Ignite is?

Take a look at the awesome-ignite page!

Documentation

Please refer to the following documents powered by Read the Docs:

Frequently Asked Questions

See the FAQ.md document.

Architecture

docs/architecture.png

Want to know how Ignite really works under the hood? Check out this TGIK session from Joe Beda about it:

TGIK 082

Base images and kernels

A base image is an OCI-compliant image containing some operating system (e.g. Ubuntu). You can follow normal docker build patterns for customizing your VM's rootfs.

A kernel image is an OCI-compliant image containing a /boot/vmlinux (an uncompressed kernel) executable (can be a symlink). You can also put supporting kernel modules in /lib/modules if needed. You can mix and match any kernel and any base image to create a VM.
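Packaging a kernel image can be sketched the same way (the vmlinux and modules/ paths below are placeholders for artifacts from your own kernel build):

```shell
# Sketch: wrap an uncompressed kernel build in an OCI image for Ignite.
# vmlinux and modules/ are placeholders for your own kernel build output.
cat > Dockerfile.kernel <<'EOF'
FROM scratch
COPY vmlinux /boot/vmlinux
COPY modules/ /lib/modules/
EOF

# Build and use it (image names are illustrative):
#   docker build -t my-kernel -f Dockerfile.kernel .
#   ignite run my-base-image --kernel-image my-kernel
```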

As the upstream centos:7 and ubuntu:18.04 images from Docker Hub don't have all the utilities and packages you'd expect in a VM (e.g. an init system), we have packaged some reference base images and a sample kernel image to get started quickly.

You can use the following pre-built images with Ignite. They are built on the normal Docker Hub images, but add systemd, openssh, and similar utilities.

Base Images

These prebuilt images can be given to ignite run directly.

Kernel Images

Tutorials

Contributing

Please see CONTRIBUTING.md and our Code Of Conduct.

Other interesting resources include:

Getting Help

If you have any questions about, feedback for, or problems with Ignite:

Your feedback is always welcome!

Maintainers

License

Apache 2.0

Comments
  • Bump CNI Plugins to v1.0.1


    CNI Plugins have officially graduated to stable with version 1.0.1 released yesterday. The biggest breaking change is the removal of the flannel plugin.

    Release notes here: https://github.com/containernetworking/plugins/releases/tag/v1.0.1

  • Enable multiple non-IP interface to be connected via tc redirect


    Hi @darkowlzz @stealthybox 👋 This is the PR that covers https://github.com/weaveworks/ignite/issues/832 and https://github.com/weaveworks/ignite/issues/831. At a high level it has the following impact:

    1. Introduced a new CLI argument --sandbox-env-vars accepting a comma-separated list of key=value pairs.

    These values are passed to the respective container runtimes and used as environment variables. I had a choice: either create a new API version and add env vars as a new spec field, or pass them around in the VM's annotations. I've opted for the second option to minimise the impact of this change. I'm not sure if it's a good idea, happy to change it if necessary.

    2. Introduced a new bool arg called wait to StartVM function - if set to false, this bypasses the waitForSpawn check.

    This flag defaults to true for all existing function invocations to preserve backwards compatibility. However, when used via API, users can set this to false and skip the check for ignite-spawn. The purpose is to get the container PID to configure additional interfaces before ignite-spawn is fully initialised.

    3. Ignite-spawn can wait for a number of interfaces to be connected before firing up the VM.

    This is controlled through an environment variable called IGNITE_INTFS. To preserve backwards compatibility it defaults to 1, so without any variables set the behaviour is the same as now. However, if this value is set to 1 or higher, SetupContainerNetworking will wait for that number of interfaces to be connected (up to a maximum timeout).

    4. Ignite-spawn will connect additional veth and tap interfaces via tc redirect.

    For backwards compatibility, the behaviour is to always use the current way of interconnecting interfaces (via bridge). However, if there's no IP on the interface, it will be interconnected with a VM via tc redirect.


    In general, all these changes strive to preserve the happy-path behavior of pre-existing code, so no major changes are expected for existing users.

  • where to get wireguard kernel with k3s ubuntu 20.04


    error: Module wireguard not found in directory /lib/modules/5.4.43

    $ ignite run weaveworks/ignite-ubuntu:20.04-amd64 --kernel-image weaveworks/ignite-kernel:5.4.43 --cpus 32 --ssh --memory 4GB --size 10GB --ssh
    
    $ apt install wireguard
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    wireguard is already the newest version (1.0.20200513-1~20.04.2).
    
    $ curl -sfL https://get.k3s.io | K3S_URL=https://master:6443 K3S_TOKEN=token  sh -
    
    $ journalctl -f -u k3s-agent
    

    failed to run command: export SUBNET_IP=$(echo $SUBNET | cut -d'/' -f 1); ip link del flannel.1 2>/dev/null; echo $PATH >&2; wg-add.sh flannel.1 && wg set flannel.1 listen-port 51820 private-key privatekey && ip addr add $SUBNET_IP/32 dev flannel.1 && ip link set flannel.1 up && ip route add $NETWORK dev flannel.1 Err: exit status 1 Output: /var/lib/rancher/k3s/data/986d5e8cf570f904598f9a5d531da2430e5a6171d22b7addb1e4a7c5b87a47d0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/rancher/k3s/data/986d5e8cf570f904598f9a5d531da2430e5a6171d22b7addb1e4a7c5b87a47d0/bin/aux\nmodprobe: FATAL: Module wireguard not found in directory /lib/modules/5.4.43\nError: Unknown device type.\n/var/lib/rancher/k3s/data/986d5e8cf570f904598f9a5d531da2430e5a6171d22b7addb1e4a7c5b87a47d0/bin/aux/wg-add.sh: line 26: boringtun: command not found\n/var/lib/rancher/k3s/data/986d5e8cf570f904598f9a5d531da2430e5a6171d22b7addb1e4a7c5b87a47d0/bin/aux/wg-add.sh: line 29: boringtun: command not found\n/var/lib/rancher/k3s/data/986d5e8cf570f904598f9a5d531da2430e5a6171d22b7addb1e4a7c5b87a47d0/bin/aux/wg-add.sh: line 32: wireguard-go: command not found"

  • Detect available containerd-shim versions defaulting to legacy linux runtime


    closes #390 /kind bug

    Docker ships with containerd, but even the newest versions of docker-ce ship a version of containerd.io-1.2.6 that is lacking the matching containerd-shim-runc-v1 binary for plugin.RuntimeRuncV1. This client creation code calculates the matching binary names for our supported runtimes and attempts to do a fallback to the newest supported runtime by using the existence of that shim binary in the ignite-host's PATH as a heuristic for that runtime actually working. It also adds support for the upcoming plugin.RuntimeRuncV2 which supports multiple containers per shim.

    This solves a bug where our previous hard-coded default of RuncV1 causes ignite to fail to start a vm when using containerd packages that do not have the matching shim binary:

    sudo ignite-0.6.0 run weaveworks/ignite-ubuntu
    INFO[0000] Created VM with ID "1dbc72beaced7e96" and name "delicate-firefly" 
    FATA[0000] failed to start container for VM "1dbc72beaced7e96": runtime "io.containerd.runc.v1" binary not installed "containerd-shim-runc-v1": file does not exist: unknown 
    

    When the heuristic fails, we consider this a non-fatal error -- containerd may be running with a different PATH and mount namespace. The U/X for that failure mode as of this patch looks like this:

    sudo ignite run weaveworks/ignite-ubuntu
    INFO[0000] Created VM with ID "ec8371f59d595017" and name "sparkling-wave" 
    INFO[0001] Networking is handled by "cni"               
    INFO[0001] Started Firecracker VM "ec8371f59d595017" in a container with ID "ignite-ec8371f59d595017" 
    
    sudo mv /usr/bin/containerd-shim{,.disabled}
    
    sudo ignite run weaveworks/ignite-ubuntu
    ERRO[0000] a containerd-shim could not be found for runtimes: [io.containerd.runc.v2 io.containerd.runc.v1], io.containerd.runtime.v1.linux 
    INFO[0000] Created VM with ID "5ee35502c3736f02" and name "dark-firefly" 
    FATA[0000] failed to start container for VM "5ee35502c3736f02": failed to start shim: exec: "containerd-shim": executable file not found in $PATH: unknown 
    

    Future Work:

    • Functions to check the runtimes should be added to containerd libraries to prevent coupling clients to containerd's filesystem and environment dependencies
    • A pre-flight check using code from #360 could wrap this error.
    • A user-facing config struct for the containerd runtime string and options could be added.
  • Cast to uint64 for Darwin platform


    I was futzing around with https://github.com/srl-labs/containerlab/ - just trying to build and run the unit tests on OS X to add a feature, and I got dragged down into some dependencies.

    stat.Rdev is a int32 on Darwin, so I just casted it to a uint64 to make the compiler happy

    https://cs.opensource.google/go/x/sys/+/master:unix/ztypes_darwin_arm64.go;l=73

    I'm not proficient with Go, so I don't know if this is wise or not, but I figured I'd submit it back

  • Add command ignite cp


    This implements the cp command on top of #495.

    Implements bidirectional copy between host and VM using sftp. The command syntax is similar to docker cp: source and destination can carry a VM reference (name or ID) separated from the file path by ":". A VM reference in the source means copying from VM to host; a VM reference in the destination means copying from host to VM.

    Example usage:

    $ ignite cp localfile.txt my-vm:remotefile.txt
    $ ignite cp my-vm:remotefile.txt localfile.txt
    

    File permissions and owners are also applied to the copied files. Symlinks are followed and the destination files are copied.

    Fixes #419

  • Upgrade kernel versions


    • Upgrade from kernel 4.14.182 to 4.14.223
    • Upgrade from kernel 4.19.125 to 4.19.178
    • Upgrade from kernel 5.4.43 to 5.4.102

    Source: https://www.kernel.org/

    • Note: Versions as of 2021-03-04

    • Future proposal: Instead of pinned patch versions, determine the latest patch version via CI. Add a scheduled github workflow that downloads the latest patch versions on a weekly basis.

  • Multi host ignite VMs networking based on WeaveNet not working as expected (reopening issue #628)


    Hello WeaveWorks team,

    I am reopening issue #628; the applied fix made things slightly better but did not fix the underlying issue. I have two hosts, and on both I run the WeaveWorks CNI Docker image as described in issue #628. I then installed Ignite on both hosts and started an Ignite VM on each host with the flag --network-plugin cni. On each host, the output of ifconfig is (I filtered out the other network interfaces that are not relevant to this issue):

    ignite0: flags=4419<UP,BROADCAST,RUNNING,PROMISC,MULTICAST>  mtu 1500
            inet 10.61.0.1  netmask 255.255.0.0  broadcast 10.61.255.255
            inet6 fe80::dca5:f0ff:fed9:7481  prefixlen 64  scopeid 0x20<link>
            ether de:a5:f0:d9:74:81  txqueuelen 1000  (Ethernet)
            RX packets 189  bytes 17839 (17.8 KB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 201  bytes 20068 (20.0 KB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    vethd0e92add: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet6 fe80::14e9:28ff:fed4:5cda  prefixlen 64  scopeid 0x20<link>
            ether 16:e9:28:d4:5c:da  txqueuelen 0  (Ethernet)
            RX packets 189  bytes 20485 (20.4 KB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 230  bytes 23476 (23.4 KB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    vethwe-bridge: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1376
            inet6 fe80::ac38:4ff:feac:c4f3  prefixlen 64  scopeid 0x20<link>
            ether ae:38:04:ac:c4:f3  txqueuelen 0  (Ethernet)
            RX packets 196  bytes 22470 (22.4 KB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 92  bytes 10557 (10.5 KB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    vethwe-datapath: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1376
            inet6 fe80::f8ae:8eff:feef:6077  prefixlen 64  scopeid 0x20<link>
            ether fa:ae:8e:ef:60:77  txqueuelen 0  (Ethernet)
            RX packets 92  bytes 10557 (10.5 KB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 196  bytes 22470 (22.4 KB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    vxlan-6784: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 65535
            inet6 fe80::f067:50ff:fe4f:45f8  prefixlen 64  scopeid 0x20<link>
            ether f2:67:50:4f:45:f8  txqueuelen 1000  (Ethernet)
            RX packets 240  bytes 163152 (163.1 KB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 174  bytes 155592 (155.5 KB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    weave: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1376
            inet 10.32.0.1  netmask 255.240.0.0  broadcast 10.47.255.255
            inet6 fe80::7cb0:f6ff:fe9e:1b0e  prefixlen 64  scopeid 0x20<link>
            ether 7e:b0:f6:9e:1b:0e  txqueuelen 1000  (Ethernet)
            RX packets 195  bytes 19650 (19.6 KB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 59  bytes 6808 (6.8 KB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    

    and

    ignite0: flags=4419<UP,BROADCAST,RUNNING,PROMISC,MULTICAST>  mtu 1500
            inet 10.61.0.1  netmask 255.255.0.0  broadcast 10.61.255.255
            inet6 fe80::d8aa:29ff:fe1c:2e35  prefixlen 64  scopeid 0x20<link>
            ether da:aa:29:1c:2e:35  txqueuelen 1000  (Ethernet)
            RX packets 294  bytes 28890 (28.8 KB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 300  bytes 33022 (33.0 KB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
            inet 127.0.0.1  netmask 255.0.0.0
            inet6 ::1  prefixlen 128  scopeid 0x10<host>
            loop  txqueuelen 1000  (Local Loopback)
            RX packets 552  bytes 48676 (48.6 KB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 552  bytes 48676 (48.6 KB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    veth5cf312db: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet6 fe80::39:f3ff:fe5e:2075  prefixlen 64  scopeid 0x20<link>
            ether 02:39:f3:5e:20:75  txqueuelen 0  (Ethernet)
            RX packets 294  bytes 33006 (33.0 KB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 330  bytes 36565 (36.5 KB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    vethwe-bridge: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1376
            inet6 fe80::d47a:82ff:fe1e:5807  prefixlen 64  scopeid 0x20<link>
            ether d6:7a:82:1e:58:07  txqueuelen 0  (Ethernet)
            RX packets 149  bytes 15746 (15.7 KB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 111  bytes 12216 (12.2 KB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    vethwe-datapath: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1376
            inet6 fe80::88f:cbff:fe49:a42b  prefixlen 64  scopeid 0x20<link>
            ether 0a:8f:cb:49:a4:2b  txqueuelen 0  (Ethernet)
            RX packets 111  bytes 12216 (12.2 KB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 149  bytes 15746 (15.7 KB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    vxlan-6784: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 65535
            inet6 fe80::a41b:37ff:fe63:e69e  prefixlen 64  scopeid 0x20<link>
            ether a6:1b:37:63:e6:9e  txqueuelen 1000  (Ethernet)
            RX packets 304  bytes 317620 (317.6 KB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 371  bytes 325090 (325.0 KB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    weave: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1376
            inet 10.40.0.0  netmask 255.240.0.0  broadcast 10.47.255.255
            inet6 fe80::d0e2:beff:fe0c:6c35  prefixlen 64  scopeid 0x20<link>
            ether d2:e2:be:0c:6c:35  txqueuelen 1000  (Ethernet)
            RX packets 148  bytes 13584 (13.5 KB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 77  bytes 8348 (8.3 KB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    

    I can ping 10.40.0.0 from 10.32.0.1 and vice versa, which tells me that the WeaveWorks CNI is working as expected. But when I SSH into each VM, they both have the same output for ifconfig:

    eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 10.61.0.2  netmask 255.255.0.0  broadcast 10.61.255.255
            inet6 fe80::3804:88ff:fec8:a6ef  prefixlen 64  scopeid 0x20<link>
            ether 3a:04:88:c8:a6:ef  txqueuelen 1000  (Ethernet)
            RX packets 303  bytes 34991 (34.9 KB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 215  bytes 25732 (25.7 KB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
            inet 127.0.0.1  netmask 255.0.0.0
            inet6 ::1  prefixlen 128  scopeid 0x10<host>
            loop  txqueuelen 1000  (Local Loopback)
            RX packets 0  bytes 0 (0.0 B)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 0  bytes 0 (0.0 B)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    

    As you can see, they both have the same IP on the WeaveWorks CNI network (obviously not what was expected), rather than each having a dedicated IP. There is no more hanging or delay, so that issue is gone now, but I still can't do multi-host Ignite VM networking using the WeaveNet CNI. I also removed the file /etc/cni/net.d/10-ignite.conflist before running the ignite run command. The content of the newly created /etc/cni/net.d/10-ignite.conflist file is:

    {
    	"cniVersion": "0.4.0",
    	"name": "ignite-cni-bridge",
    	"plugins": [
    		{
    			"type": "bridge",
    			"bridge": "ignite0",
    			"isGateway": true,
    			"isDefaultGateway": true,
    			"promiscMode": true,
    			"ipMasq": true,
    			"ipam": {
    				"type": "host-local",
    				"subnet": "10.61.0.0/16"
    			}
    		},
    		{
    			"type": "portmap",
    			"capabilities": {
    				"portMappings": true
    			}
    		},
    		{
    			"type": "firewall"
    		}
    	]
    }
    

    Either the documentation is lacking some instructions here, or there is a bug somewhere. Could someone please have a look and help me solve this issue?

    Thanks

  • Incorrect privileges on image root


    When running a VM with the weaveworks/ignite-ubuntu:latest image I ran into problems starting the systemd-resolved service (which is expected by kubeadm, at least when using kubespray). It turns out that this problem was caused by the non-root systemd-resolve user not being able to access the systemd-resolved binary.

    It turns out that this is because of incorrect privileges on the image's root directory:

    root@17b9791da8ac3e62:~# ls -la /
    total 36
    drwx------ 22 root root  1024 Jul 26 11:00 .
    

    A normal Linux system has drwxr-xr-x (0755) for /. Both the Ubuntu and CentOS images have this problem.

  • Container ID outputted at `start` time confusing on the first pull


    I may be doing something wrong?

    alex@nuc7:~$ sudo kvm-ok
    INFO: /dev/kvm exists
    KVM acceleration can be used
    alex@nuc7:~$ sudo ignite run weaveworks/ignite-ubuntu --cpus 2 --memory 1024 --ssh --name my-vm
    can't find kernel: no ID/name matches for "weaveworks/ignite-ubuntu"
    alex@nuc7:~$ 
    
    alex@nuc7:~$ cat /etc/os-release 
    NAME="Ubuntu"
    VERSION="18.04.2 LTS (Bionic Beaver)"
    
  • make tidy should run in a container by default


    make tidy currently runs directly on your host, relying on the tools and their respective versions installed there. While there is make tidy-in-docker, it is not documented. Let's either replace make tidy in the documentation with make tidy-in-docker or change the functionality of make tidy to be containerized.

  • Bump actions/setup-python from 3.1.0 to 4.4.0


    Bumps actions/setup-python from 3.1.0 to 4.4.0.

    Release notes

    Sourced from actions/setup-python's releases.

    Add support to install multiple python versions

    In scope of this release we added support to install multiple python versions. For this you can try to use this snippet:

        - uses: actions/setup-python@v4
          with:
            python-version: |
                3.8
                3.9
                3.10
    

    Besides, we changed logic with throwing the error for GHES if cache is unavailable to warn (actions/setup-python#566).

    Improve error handling and messages

    In scope of this release we added improved error message to put operating system and its version in the logs (actions/setup-python#559). Besides, the release

    v4.3.0

    • Update @​actions/core to 1.10.0 version #517
    • Update @​actions/cache to 3.0.4 version #499
    • Only use github.token on github.com #443
    • Improvement of documentation #477 #479 #491 #492

    Add check-latest input and bug fixes

    In scope of this release we add the check-latest input. If check-latest is set to true, the action first checks if the cached version is the latest one. If the locally cached version is not the most up-to-date, the version will then be downloaded from the python-versions repository. By default check-latest is set to false. For PyPy it will try to reach https://downloads.python.org/pypy/versions.json

    Example of usage:

    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: '3.9'
          check-latest: true
      - run: python --version
    

    Besides, it includes such changes as

    v4.1.0

    In scope of this pull request we updated actions/cache package as the new version contains fixes for caching error handling. Moreover, we added a new input update-environment. This option allows to specify if the action shall update environment variables (default) or not.

    Update-environment input

    ... (truncated)

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
  • Bump github.com/containerd/containerd from 1.5.0-beta.4 to 1.5.16


    Bumps github.com/containerd/containerd from 1.5.0-beta.4 to 1.5.16.

    Release notes

    Sourced from github.com/containerd/containerd's releases.

    containerd 1.5.16

    Welcome to the v1.5.16 release of containerd!

    The sixteenth patch release for containerd 1.5 contains a fix for CVE-2022-23471.

    Notable Updates

    See the changelog for complete list of changes

    Please try out the release binaries and report any issues at https://github.com/containerd/containerd/issues.

    Contributors

    • Derek McGowan
    • Danny Canter
    • Phil Estes
    • Sebastiaan van Stijn

    Changes

    • Github Security Advisory GHSA-2qjp-425j-52j9
      • Prepare release notes for v1.5.16
      • CRI stream server: Fix goroutine leak in Exec
    • [release/1.5] update to go1.18.9 (#7767)
      • [release/1.5] update to go1.18.9

    Dependency Changes

    This release has no dependency changes

    Previous release can be found at v1.5.15

    containerd 1.5.15

    Welcome to the v1.5.15 release of containerd!

    The fifteenth patch release for containerd 1.5 includes various fixes, including a fix for a long-standing CNI resource leak.

    Notable Updates

    • Fix CNI leaks by changing pod network setup order in CRI plugin (#7464)
    • Fix request retry on push (#7479)
    • Fix lease labels unexpectedly overwriting expiration (#7746)

    ... (truncated)

    Commits
    • 2e3140a Merge pull request from GHSA-2qjp-425j-52j9
    • 189c7c3 Prepare release notes for v1.5.16
    • 6cd1152 CRI stream server: Fix goroutine leak in Exec
    • 2f59a97 Merge pull request #7767 from thaJeztah/1.5_update_go_1.18.9
    • 46e2ef0 [release/1.5] update to go1.18.9
    • 99a380d Merge pull request #7759 from dmcgowan/prepare-1.5.15
    • 9ab22bf Prepare release notes for v1.5.15
    • a0a9a0e Merge pull request #7746 from austinvazquez/cherry-pick-c4dee237f57a7f7895aaa...
    • 1de818a Fix order of operations when setting lease labels
    • 7b7a9fb Merge pull request #7722 from thaJeztah/1.5_protobuf_extensions_fix
    • Additional commits viewable in compare view

    You can disable automated security fix PRs for this repo from the Security Alerts page.
  • Bump peter-evans/create-pull-request from 4.0.1 to 4.2.3

    Bumps peter-evans/create-pull-request from 4.0.1 to 4.2.3.

    Release notes

    Sourced from peter-evans/create-pull-request's releases.

    Create Pull Request v4.2.3

    What's Changed

    Full Changelog: https://github.com/peter-evans/create-pull-request/compare/v4.2.2...v4.2.3

    Create Pull Request v4.2.2

    What's Changed

    New Contributors

    Full Changelog: https://github.com/peter-evans/create-pull-request/compare/v4.2.1...v4.2.2

    Create Pull Request v4.2.1

    What's Changed

    Full Changelog: https://github.com/peter-evans/create-pull-request/compare/v4.2.0...v4.2.1

    Create Pull Request v4.2.0

    ⚙️ Improves the proxy implementation to properly support the standard environment variables http_proxy, https_proxy and no_proxy

    What's Changed

    Full Changelog: https://github.com/peter-evans/create-pull-request/compare/v4.1.4...v4.2.0

    Create Pull Request v4.1.4

    ⚙️ Bumps @actions/core to transition away from deprecated runner commands.

    What's Changed

    Full Changelog: https://github.com/peter-evans/create-pull-request/compare/v4.1.3...v4.1.4

    Create Pull Request v4.1.3

    What's Changed

    ... (truncated)

    Commits
    • 2b011fa fix: add check for missing token input (#1324)
    • 331d02c fix: support github server url for pushing to fork (#1318)
    • d7db273 fix: handle update after force pushing base to a new commit (#1307)
    • ee93d78 test: set default branch to main (#1310)
    • 6c704eb docs: clarify limitations of push-to-fork with restricted token
    • 88bf0de docs: correct examples
    • b38e8b0 docs: replace set-output in example
    • b4d5173 feat: switch proxy implementation (#1269)
    • ad43dcc build(deps): bump @actions/io from 1.1.1 to 1.1.2 (#1280)
    • c2f9cef build(deps): bump @actions/exec from 1.1.0 to 1.1.1 (#1279)
    • Additional commits viewable in compare view

  • Build Kernels with additional modules

    How can I build a kernel with some additional features and use it when I run ignite run? I built it from ignite/images/kernel via make, but I couldn't find where the compiled files are located.

    After that, how can I use the generated kernel with ignite run?

    Some examples would be useful
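    One possible workflow, sketched here under the assumption that the make target in images/kernel produces an OCI image in the local Docker daemon (the image tag below is hypothetical; substitute whatever your build actually tagged):

    ```shell
    # Hypothetical tag -- replace with the tag your `make` run in
    # ignite/images/kernel actually produced.
    KERNEL_IMG=my-registry/ignite-kernel:4.19-custom

    # Point ignite at the custom kernel image when booting a VM.
    # --kernel-image expects an OCI image with the kernel at /boot/vmlinux
    # (and, optionally, your extra modules).
    ignite run weaveworks/ignite-ubuntu \
        --kernel-image "$KERNEL_IMG" \
        --name custom-kernel-vm \
        --ssh
    ```

    This is only a sketch of the documented --kernel-image mechanism, not a confirmed recipe from the maintainers; the exact image name and flags should be checked against your ignite version's help output.
    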

  • ignited: allow running help, completion, version subcommands without root permission

    This is pretty much a copy and paste of https://github.com/weaveworks/ignite/pull/676 but for ignited command.

    I also did some minor extra things:

    • Move all GenericCheckErr calls to the PersistentPreRun stage of cobra. A couple of them are already there to begin with (this especially helps the ignited command, as Preload checks for MANIFEST_DIR and fails).
    • Remove the TODO on the isNonRootCommand() function (we already handle each subcommand).
    • Fix a likely typo in populating the ignited provider (it was ignite).
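    The root-permission guard this PR describes can be sketched outside of cobra roughly as follows (the allow-list contents come from the PR title; the function names here are illustrative, not ignited's actual code):

    ```go
    package main

    import (
    	"fmt"
    	"os"
    )

    // nonRootCommands lists subcommands that are safe to run unprivileged,
    // per the PR: help, completion, and version.
    var nonRootCommands = map[string]bool{
    	"help":       true,
    	"completion": true,
    	"version":    true,
    }

    // checkRoot mirrors the kind of guard the PR moves into cobra's
    // PersistentPreRun hook: allow the listed subcommands for any user,
    // and require effective UID 0 for everything else.
    func checkRoot(subcommand string) error {
    	if nonRootCommands[subcommand] {
    		return nil
    	}
    	if os.Geteuid() != 0 {
    		return fmt.Errorf("%q requires root privileges", subcommand)
    	}
    	return nil
    }

    func main() {
    	for _, cmd := range []string{"version", "run"} {
    		if err := checkRoot(cmd); err != nil {
    			fmt.Println("denied:", err)
    		} else {
    			fmt.Println("allowed:", cmd)
    		}
    	}
    }
    ```

    Running the check in PersistentPreRun (rather than in each subcommand) means it fires once, before any Run function, which is why moving the GenericCheckErr calls there also catches the early MANIFEST_DIR failure the PR mentions.
    
    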
  • Bump actions/checkout from 2 to 3.1.0

    Bumps actions/checkout from 2 to 3.1.0.

    Release notes

    Sourced from actions/checkout's releases.

    v3.1.0

    What's Changed

    New Contributors

    Full Changelog: https://github.com/actions/checkout/compare/v3.0.2...v3.1.0

    v3.0.2

    What's Changed

    Full Changelog: https://github.com/actions/checkout/compare/v3...v3.0.2

    v3.0.1

    v3.0.0

    • Updated to the node16 runtime by default
      • This requires a minimum Actions Runner version of v2.285.0, which is available by default in GHES 3.4 or later.

    v2.4.2

    What's Changed

    Full Changelog: https://github.com/actions/checkout/compare/v2...v2.4.2

    v2.4.1

    • Fixed an issue where checkout failed to run in container jobs due to the new git setting safe.directory

    v2.4.0

    • Convert SSH URLs like org-<ORG_ID>@github.com: to https://github.com/

    v2.3.5

    Update dependencies

    v2.3.4

    v2.3.3

    ... (truncated)

    Changelog

    Sourced from actions/checkout's changelog.

    v3.1.0

    v3.0.2

    v3.0.1

    v3.0.0

    v2.3.1

    v2.3.0

    v2.2.0

    v2.1.1

    • Changes to support GHES

    v2.1.0

    v2.0.0

    Commits
