Daemon based on liblxc offering a REST API to manage containers

LXD

LXD is a next generation system container and virtual machine manager.
It offers a unified user experience around full Linux systems running inside containers or virtual machines.

It's image-based, with pre-made images available for a wide range of Linux distributions, and is built around a very powerful, yet pretty simple, REST API.

To get a better idea of what LXD is and what it does, you can try it online!
Then if you want to run it locally, take a look at our getting started guide.

Release announcements can be found here: https://linuxcontainers.org/lxd/news/
And the release tarballs here: https://linuxcontainers.org/lxd/downloads/

Status

Type               Service             Status
CI (client)        GitHub              Build Status
CI (server)        Jenkins             Build Status
LXD documentation  ReadTheDocs         Read the Docs
Go documentation   Godoc               GoDoc
Static analysis    GoReport            Go Report Card
Translations       Weblate             Translation status
Project status     CII Best Practices  CII Best Practices

Installing LXD from packages

The LXD daemon only works on Linux but the client tool (lxc) is available on most platforms.

OS       Format      Command
Linux    Snap        snap install lxd
Windows  Chocolatey  choco install lxc
MacOS    Homebrew    brew install lxc

More instructions on installing LXD for a wide variety of Linux distributions and operating systems can be found on our website.

Installing LXD from source

We recommend having the latest version of liblxc (>= 3.0.0 required) available for LXD development. Additionally, LXD requires Go 1.13 or later to work. On Ubuntu, you can get those with:

sudo apt update
sudo apt install acl autoconf dnsmasq-base git golang libacl1-dev libcap-dev liblxc1 liblxc-dev libsqlite3-dev libtool libudev-dev libuv1-dev make pkg-config rsync squashfs-tools tar tcl xz-utils ebtables

Note that when building LXC yourself, make sure to build it with the appropriate security-related libraries installed, as these are exercised by our testsuite. Again, on Ubuntu, you can get those with:

sudo apt install libapparmor-dev libseccomp-dev libcap-dev

There are a few storage backends for LXD besides the default "directory" backend. Installing these tools adds a bit to the initramfs and may slow down your host boot, but they are needed if you'd like to use a particular backend:

sudo apt install lvm2 thin-provisioning-tools
sudo apt install btrfs-tools

To run the testsuite, you'll also need:

sudo apt install curl gettext jq sqlite3 uuid-runtime bzr socat

From Source: Building the latest version

These instructions for building from source are suitable for individual developers who want to build the latest version of LXD, or build a specific release of LXD which may not be offered by their Linux distribution. Source builds for integration into Linux distributions are not covered here and may be covered in detail in a separate document in the future.

When building from source, it is customary to configure a GOPATH which contains the to-be-built source code. When the sources are done building, the lxc and lxd binaries will be available at $GOPATH/bin, and with a little LD_LIBRARY_PATH magic (described later), these binaries can be run directly from the built source tree.

The following lines demonstrate how to configure a GOPATH with the most recent LXD sources from GitHub:

mkdir -p ~/go
export GOPATH=~/go
go get -d -v github.com/lxc/lxd/lxd
cd $GOPATH/src/github.com/lxc/lxd

When the build process starts, the Makefile will use go get and git clone to grab all dependencies needed for building.

From Source: Building a Release

To build an official release of LXD, download and extract a release tarball, then point GOPATH at the _dist directory inside it. That directory is pre-configured for use as a GOPATH and contains snapshots of all necessary sources, so LXD will build from those snapshots rather than grabbing 'live' sources using go get and git clone. Once the release tarball is downloaded and extracted, set the GOPATH as follows:

cd lxd-3.18
export GOPATH=$(pwd)/_dist

Starting the Build

Once the GOPATH is configured, either to build the latest GitHub version or an official release, the following steps can be used to build LXD.

The actual building is done by two separate invocations of the Makefile: make deps -- which builds libraries required by LXD -- and make, which builds LXD itself. At the end of make deps, a message will be displayed which will specify environment variables that should be set prior to invoking make. As new versions of LXD are released, these environment variable settings may change, so be sure to use the ones displayed at the end of the make deps process, as the ones below (shown for example purposes) may not exactly match what your version of LXD requires:

make deps
# Use the export statements printed in the output of 'make deps' -- these are examples: 
export CGO_CFLAGS="${CGO_CFLAGS} -I${GOPATH}/deps/dqlite/include/ -I${GOPATH}/deps/raft/include/"
export CGO_LDFLAGS="${CGO_LDFLAGS} -L${GOPATH}/deps/dqlite/.libs/ -L${GOPATH}/deps/raft/.libs/"
export LD_LIBRARY_PATH="${GOPATH}/deps/dqlite/.libs/:${GOPATH}/deps/raft/.libs/:${LD_LIBRARY_PATH}"
export CGO_LDFLAGS_ALLOW="-Wl,-wrap,pthread_create"
make

From Source: Installing

Once the build completes, you simply keep the source tree, add $GOPATH/bin to your shell path, and add the LD_LIBRARY_PATH value printed by make deps to your environment. This might look something like this for a ~/.bashrc file:

# No need to export GOPATH:
GOPATH=~/go
# But we need to export these:
export PATH="$PATH:$GOPATH/bin"
export LD_LIBRARY_PATH="${GOPATH}/deps/dqlite/.libs/:${GOPATH}/deps/raft/.libs/:${LD_LIBRARY_PATH}"

Now, the lxd and lxc binaries will be available to you and can be used to set up LXD. The binaries will automatically find and use the dependencies built in $GOPATH/deps thanks to the LD_LIBRARY_PATH environment variable.
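
A quick way to check that the freshly built binaries and the libraries in $GOPATH/deps are being picked up is to ask both tools for their version (output will vary):

lxd --version
lxc version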

Machine Setup

You'll need sub{u,g}ids for root, so that LXD can create the unprivileged containers:

echo "root:1000000:65536" | sudo tee -a /etc/subuid /etc/subgid

Now you can run the daemon (the --group sudo bit allows everyone in the sudo group to talk to LXD; you can create your own group if you want):

sudo -E PATH=$PATH LD_LIBRARY_PATH=$LD_LIBRARY_PATH $GOPATH/bin/lxd --group sudo
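
If you'd rather not grant access to everyone in the sudo group, you can create a dedicated group instead; the following sketch uses an arbitrary group name of "lxd":

sudo groupadd --system lxd
sudo usermod -aG lxd "$USER"
newgrp lxd  # or log out and back in for the group change to take effect
sudo -E PATH=$PATH LD_LIBRARY_PATH=$LD_LIBRARY_PATH $GOPATH/bin/lxd --group lxd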

Security

LXD, similar to other container and VM managers, provides a UNIX socket for local communication.

WARNING: Anyone with access to that socket can fully control LXD, which includes the ability to attach host devices and filesystems. It should therefore only be given to users who would be trusted with root access to the host.

When listening on the network, the same API is available on a TLS socket (HTTPS). Specific access to the remote API can be restricted through Canonical RBAC.

More details are available here.

Getting started with LXD

Now that you have LXD running on your system you can read the getting started guide or go through more examples and configurations in our documentation.
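
As a quick illustration, a typical first session might look like the following (this assumes the ubuntu: image remote is reachable; the container name is arbitrary):

lxd init
lxc launch ubuntu:18.04 first
lxc exec first -- bash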

Bug reports

Bug reports can be filed at: https://github.com/lxc/lxd/issues/new

Contributing

Fixes and new features are greatly appreciated but please read our contributing guidelines first.

Support and discussions

Forum

A discussion forum is available at: https://discuss.linuxcontainers.org

Mailing-lists

We use the LXC mailing-lists for developer and user discussions; you can find and subscribe to those at: https://lists.linuxcontainers.org

IRC

If you prefer live discussions, some of us also hang out in #lxcontainers on irc.freenode.net.

FAQ

How to enable LXD server for remote access?

By default, the LXD server is not accessible from the network, as it only listens on a local unix socket. You can make LXD available from the network by specifying additional addresses to listen on. This is done with the core.https_address config variable.

To see the current server configuration, run:

lxc config show

To set the address to listen on, find out what addresses are available and use the config set command on the server:

ip addr
lxc config set core.https_address 192.168.1.15
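
A port can also be appended, and a wildcard address can be used to listen on all interfaces; for example, using the conventional LXD port 8443:

lxc config set core.https_address "[::]:8443"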

When I do a lxc remote add over https, it asks for a password?

By default, LXD has no password for security reasons, so you can't do a remote add this way. In order to set a password, do:

lxc config set core.trust_password SECRET

on the host LXD is running on. This will set the remote password that you can then use to do lxc remote add.
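
From a client machine, the flow would then look roughly like this (remote name and address are placeholders); you will be prompted to confirm the server's certificate fingerprint and to enter the password:

lxc remote add myserver 192.168.1.15
lxc list myserver: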

You can also access the server without setting a password by copying the client certificate from ~/.config/lxc/client.crt to the server and adding it with:

lxc config trust add client.crt

How do I configure LXD storage?

LXD supports btrfs, ceph, directory, lvm and zfs based storage.

First make sure you have the relevant tools for your filesystem of choice installed on the machine (btrfs-progs, lvm2 or zfsutils-linux).

By default, LXD comes with no configured network or storage. You can get a basic configuration done with:

    lxd init

lxd init supports both directory-based storage and ZFS. If you want something else, you'll need to use the lxc storage command:

lxc storage create default BACKEND [OPTIONS...]
lxc profile device add default root disk path=/ pool=default

BACKEND is one of btrfs, ceph, dir, lvm or zfs.

Unless specified otherwise, LXD will set up loop-based storage with a sane default size.

For production environments, you should use block-backed storage instead, for both performance and reliability reasons.
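
For example, a ZFS pool backed by a dedicated disk or partition could be created like this (/dev/sdb is just a placeholder for an unused block device; anything on it will be erased):

lxc storage create default zfs source=/dev/sdb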

How can I live migrate a container using LXD?

Live migration requires a tool called CRIU to be installed on both hosts. On Ubuntu, it is available via:

sudo apt install criu

Then, launch your container with the following:

lxc launch ubuntu $somename
sleep 5s # let the container get to an interesting state
lxc move host1:$somename host2:$somename

And with luck you'll have migrated the container :). Migration is still in experimental stages and may not work for all workloads. Please report bugs on lxc-devel, and we can escalate to CRIU lists as necessary.

Can I bind mount my home directory in a container?

Yes. This can be done using a disk device:

lxc config device add container-name home disk source=/home/$USER path=/home/ubuntu

For unprivileged containers, you will also need one of:

  • Pass shift=true to the lxc config device add call. This depends on shiftfs being supported (see lxc info)
  • raw.idmap entry (see Idmaps for user namespace)
  • Recursive POSIX ACLs placed on your home directory

Any of those can be used to allow the user in the container to have working read/write permissions. When not setting one of those, everything will show up as the overflow uid/gid (65536:65536) and access to anything that's not world-readable will fail.
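
For example, with a hypothetical container named c1, the first two options could look like this (the raw.idmap example assumes the container user is uid/gid 1000 and takes effect on the next container start):

lxc config device add c1 home disk source=/home/$USER path=/home/ubuntu shift=true

printf "uid $(id -u) 1000\ngid $(id -g) 1000" | lxc config set c1 raw.idmap -
lxc restart c1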

Privileged containers do not have this issue as all uid/gid in the container are the same as outside. But that's also the cause of most of the security issues with such privileged containers.

How can I run docker inside a LXD container?

In order to run Docker inside a LXD container, the security.nesting property of the container should be set to true.

lxc config set <container> security.nesting true

Note that LXD containers cannot load kernel modules, so depending on your Docker configuration you may need to have the required extra kernel modules loaded by the host.

You can do so by setting a comma-separated list of kernel modules that your container needs with:

lxc config set <container> linux.kernel_modules <modules>
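
For example, Docker setups commonly need the overlay and nf_nat modules, so a hypothetical invocation might be:

lxc config set <container> linux.kernel_modules overlay,nf_nat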

We have also received some reports that creating a /.dockerenv file in your container can help Docker ignore some errors it's getting due to running in a nested environment.
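
One way to create that file from the host (container name is a placeholder):

lxc exec <container> -- touch /.dockerenv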

Hacking on LXD

Directly using the REST API

The LXD REST API can be used locally via an unauthenticated Unix socket or remotely via SSL-encapsulated TCP.

Via Unix socket

curl --unix-socket /var/lib/lxd/unix.socket \
    -H "Content-Type: application/json" \
    -X POST \
    -d @hello-ubuntu.json \
    lxd/1.0/containers

Via TCP

TCP requires some additional configuration and is not enabled by default.

lxc config set core.https_address "[::]:8443"
curl -k -L \
    --cert ~/.config/lxc/client.crt \
    --key ~/.config/lxc/client.key \
    -H "Content-Type: application/json" \
    -X POST \
    -d @hello-ubuntu.json \
    "https://127.0.0.1:8443/1.0/containers"

JSON payload

The hello-ubuntu.json file referenced above could contain something like:

{
    "name":"some-ubuntu",
    "ephemeral":true,
    "config":{
        "limits.cpu":"2"
    },
    "source": {
        "type":"image",
        "mode":"pull",
        "protocol":"simplestreams",
        "server":"https://cloud-images.ubuntu.com/releases",
        "alias":"18.04"
    }
}
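
The create call is asynchronous: the response contains an operation URL rather than the final container state. If you want to block until the container exists, you can wait on that operation; a sketch over the Unix socket, with the operation UUID taken from the previous response:

curl --unix-socket /var/lib/lxd/unix.socket lxd/1.0/operations/<operation-uuid>/wait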

Comments
  • Cannot delete LXD zfs backed containers: dataset is busy

    Minty fresh Ubuntu 18.04 system LXD v3.0.0 (latest from apt, how to get v3.0.1?)

    Started seeing this beginning last week crop up arbitrarily across my infrastructure. Out of ~10 delete operations, I have seen this happen to 3 containers on 2 different systems.

    ~# lxc delete test1
    Error: Failed to destroy ZFS filesystem: cannot destroy 'lxd/containers/test1': dataset is busy
    
    ~# lxc ls
    +-------+---------+---------------------+-------------------------------+------------+-----------+
    | NAME  |  STATE  |        IPV4         |             IPV6              |    TYPE    | SNAPSHOTS |
    +-------+---------+---------------------+-------------------------------+------------+-----------+
    | doxpl | RUNNING | 46.4.158.225 (eth0) | 2a01:4f8:221:1809::601 (eth0) | PERSISTENT | 0         |
    +-------+---------+---------------------+-------------------------------+------------+-----------+
    | test1 | STOPPED |                     |                               | PERSISTENT | 0         |
    +-------+---------+---------------------+-------------------------------+------------+-----------+
    

    Tried googling around a bit and I have tried the most common tips for figuring out what might be keeping the dataset busy: there are no snapshots or dependencies, and the dataset is unmounted, i.e. zfs list reports

    NAME                                                                          USED  AVAIL  REFER  MOUNTPOINT
    lxd                                                                          3.51G   458G    24K  none
    lxd/containers                                                               2.24G   458G    24K  none
    lxd/containers/doxpl                                                         1.04G   766M  2.25G  /var/lib/lxd/storage-pools/lxd/containers/doxpl
    lxd/containers/test1                                                         1.20G  6.80G  1.20G  none
    lxd/custom                                                                     24K   458G    24K  none
    lxd/deleted                                                                    24K   458G    24K  none
    lxd/images                                                                   1.27G   458G    24K  none
    lxd/images/7d4aa78fb18775e6c3aa2c8e5ffa6c88692791adda3e8735a835e0ba779204ec  1.27G   458G  1.27G  none
    lxd/snapshots                                                                  24K   458G    24K  none
    

    Could LXD still be holding the dataset? I see there are a number of zfs related fixes in v3.0.1 but I cannot do an apt upgrade to this version..?

    Edit: issuing systemctl restart lxd does not resolve the issue, so maybe not lxd after all. Strange...

  • lxc container not restarting

    Restarting a container is not working.

     root@inxovh-hy002 (node2):~# lxc info
    apistatus: stable
    apiversion: "1.0"
    auth: trusted
    environment:
      addresses:
      - 10.0.0.2:8443
      architectures:
      - x86_64
      - i686
      driver: lxc
      driverversion: 2.0.6
      kernel: Linux
      kernelarchitecture: x86_64
      kernelversion: 4.4.0-47-generic
      server: lxd
      serverpid: 21246
      serverversion: 2.6.2
      storage: zfs
      storageversion: "5"
    config:
      core.https_address: 10.0.0.2:8443
      core.trust_password: true
      storage.zfs_pool_name: zdata/lxd
      storage.zfs_use_refquota: "true"
    public: false
    

    Here the restart:

    root@inxovh-hy002 (node2):~# lxc restart inxovh-xmpp001
    
    error: Error calling 'lxd forkstart inxovh-xmpp001 /var/lib/lxd/containers /var/log/lxd/inxovh-xmpp001/lxc.conf': err='exit status 1'
      lxc 20161130165729.698 ERROR lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:234 - No such file or directory - failed to change apparmor profile to lxd-inxovh-xmpp001_</var/lib/lxd>//&:lxd-inxovh-xmpp001_<var-lib-lxd>:
      lxc 20161130165729.698 ERROR lxc_sync - sync.c:__sync_wait:57 - An error occurred in another process (expected sequence number 5)
      lxc 20161130165729.698 ERROR lxc_start - start.c:__lxc_start:1338 - Failed to spawn container "inxovh-xmpp001".
      lxc 20161130165730.215 ERROR lxc_conf - conf.c:run_buffer:347 - Script exited with status 1
      lxc 20161130165730.215 ERROR lxc_start - start.c:lxc_fini:546 - Failed to run lxc.hook.post-stop for container "inxovh-xmpp001".
    
    Try `lxc info --show-log inxovh-xmpp001` for more info
    

    Here the log

    root@inxovh-hy002 (node2):~# lxc info --show-log inxovh-xmpp001
    Name: inxovh-xmpp001
    Remote: unix:/var/lib/lxd/unix.socket
    Architecture: x86_64
    Created: 2016/11/04 22:15 UTC
    Status: Stopped
    Type: persistent
    Profiles: default
    
    Log:
    
            lxc 20161130165729.540 WARN     lxc_seccomp - seccomp.c:do_resolve_add_rule:265 - Seccomp: failed to resolve syscall: .
            lxc 20161130165729.540 WARN     lxc_seccomp - seccomp.c:do_resolve_add_rule:266 - This syscall will NOT be blacklisted.
            lxc 20161130165729.540 WARN     lxc_seccomp - seccomp.c:do_resolve_add_rule:265 - Seccomp: failed to resolve syscall: .
            lxc 20161130165729.540 WARN     lxc_seccomp - seccomp.c:do_resolve_add_rule:266 - This syscall will NOT be blacklisted.
            lxc 20161130165729.698 ERROR    lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:234 - No such file or directory - failed to change apparmor profile to lxd-inxovh-xmpp001_</var/lib/lxd>//&:lxd-inxovh-xmpp001_<var-lib-lxd>:
            lxc 20161130165729.698 ERROR    lxc_sync - sync.c:__sync_wait:57 - An error occurred in another process (expected sequence number 5)
            lxc 20161130165729.698 ERROR    lxc_start - start.c:__lxc_start:1338 - Failed to spawn container "inxovh-xmpp001".
            lxc 20161130165730.215 ERROR    lxc_conf - conf.c:run_buffer:347 - Script exited with status 1
            lxc 20161130165730.215 ERROR    lxc_start - start.c:lxc_fini:546 - Failed to run lxc.hook.post-stop for container "inxovh-xmpp001".
            lxc 20161130165730.215 WARN     lxc_commands - commands.c:lxc_cmd_rsp_recv:172 - command get_cgroup failed to receive response
            lxc 20161130165730.215 WARN     lxc_commands - commands.c:lxc_cmd_rsp_recv:172 - command get_cgroup failed to receive response
    

    Status is stopped:

    root@inxovh-hy002 (node2):~# lxc list  inxovh-xmpp001
    +----------------+---------+------+------+------------+-----------+
    |      NAME      |  STATE  | IPV4 | IPV6 |    TYPE    | SNAPSHOTS |
    +----------------+---------+------+------+------------+-----------+
    | inxovh-xmpp001 | STOPPED |      |      | PERSISTENT | 0         |
    +----------------+---------+------+------+------------+-----------+
    

    I have to manually start it:

    root@inxovh-hy002 (node2):~# lxc start  inxovh-xmpp001
    root@inxovh-hy002 (node2):~#
    
  • Networking does not work in fresh Bionic container

    Tried with LXD v2.21 on Ubuntu 16.04 and LXD v3.0.0 on 18.04 (system upgraded from 16.04)

    Networking does not come up and container does not get an Ip assigned on my network bridge.

    On both my 16.04 and 18.04 host system, a xenial image comes up just fine.

    I have tried provisioning from ubuntu:bionic as well as images:ubuntu/bionic/amd64 with identical results.

    /var/log/syslog on the host shows in all cases lines similar to

    Apr 29 20:25:15 krellide kernel: [6056886.886248] audit: type=1400 audit(1525026315.592:23530): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxd-bionic-template-xlemp72_</var/lib/lxd>" name="/sys/fs/cgroup/unified/" pid=19042 comm="systemd" fstype="cgroup2" srcname="cgroup" flags="rw, nosuid, nodev, noexec"
    Apr 29 20:25:15 krellide kernel: [6056886.886297] audit: type=1400 audit(1525026315.592:23531): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxd-bionic-template-xlemp72_</var/lib/lxd>" name="/sys/fs/cgroup/unified/" pid=19042 comm="systemd" fstype="cgroup2" srcname="cgroup" flags="rw, nosuid, nodev, noexec"
    Apr 29 20:25:16 krellide kernel: [6056887.323323] audit: type=1400 audit(1525026316.029:23532): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxd-bionic-template-xlemp72_</var/lib/lxd>" name="/run/systemd/unit-root/var/lib/lxcfs/" pid=19482 comm="(networkd)" flags="ro, nosuid, nodev, remount, bind"
    

    These lines are not present in syslog when provisioning other versions of Ubuntu (Xenial/Zesty). Interestingly upgrading an existing Xenial container to Bionic does not cause any networking issues.

    Without knowing much about apparmor, I am assuming that the DENIED ... networkd line is an indicator of the culprit here. Any assistance would be much appreciated :)

  • Container creation finishes but operation doesn't

    Information

    • Distribution: Ubuntu
    • Distribution version: 16.04
    • The output of "lxc info" or if that fails:
      • Kernel version: 4.4.0-65-generic
      • LXC version: 2.0.7
      • LXD version: 2.10.1
      • Storage backend in use: zfs 0.6.5.6-0ubuntu15

    Issue

    I'm working on an LXD API client to automatically create and start containers. I'm currently writing some automated tests, and sometimes the test will time out because the container creation operation won't ever finish.

    But the container seems to be successfully created when listing existing containers. For debugging purposes, instead of just waiting for the /wait to finish I also added a simple status-polling every 5 seconds – it also continuously returns that the operation is still running.

    It happens more often the more I run the same test; as soon as I restart the lxd service, it works properly for a while again.

    While running the service in a terminal and killing it with ^C, it suddenly decides to finish the operation before exiting.

    What the test does:

    • Create persistent container
    • Run command
    • Create snapshot 1
    • Run command
    • Create snapshot 2
    • Run command
    • Create another persistent container based on snapshot 1 (this is where it gets stuck)

    Logs

  • LXD Static IP configuration - clear + working documentation seems scarce

    The template below is mostly useful for bug reports and support questions. Feel free to remove anything which doesn't apply to you and add more information where it makes sense.

    Required information

    • Distribution: Ubuntu
    • Distribution version: 16.10
    • The output of "lxc info" or if that fails:
      • Kernel version: 4.8.0-22-generic
      • LXC version: 2.4.1
      • LXD version: 2.4.1
      • Storage backend in use: dir

    Issue description

    Goal is to have LXD containers with static IPs which can communicate with the host + other containers.

    Steps to reproduce

    Simplest approach seems to be setting LXD_CONFILE in /etc/default/lxd-bridge to a file of container,IP pairs + Ubuntu 16.10 seems to have removed this file.

    I have 100s of LXC container,IP pairs to port to LXD + prefer a solution that avoids the old iptables nat rule approach.

    None of the https://github.com/lxc/lxd/issues/2083 approaches seem to produce useful results.

    The

    echo -e "lxc.network.0.ipv4 = 144.217.33.224\nlxc.network.0.ipv4.gateway = 149.56.27.254\n" | lxc config set template-yakkety raw.lxc -

    comes close, as my test container does end up with the correct IP assigned.

    Maybe this is the correct approach, along with setting up the host base interface (eth2) in my case, to use br0, rather than eth2 + somehow bridging lxdbr0 to br0.

    Suggestions appreciated, as all the Ubuntu docs seem wrong + the LXD 2.0 Introduction series seems to be missing basic networking examples for large scale LXD deployments.

    Once I have a working approach, I'll publish all steps back here, so others can accomplish this easier.

    Thanks.

  • [Upgrade from mixed-storage LXD] LXD won't restart after upgrade to 2.10.1

    Hello,

    I believe that I have a variant of the problem seen in issue #3024 which I have been following with interest. After upgrade to 2.10.1 from 2.8.x, lxd cannot start up.

    Required information

    • Distribution: Ubuntu
    • Distribution version: 16.04LTS
    • The output of "lxc info" or if that fails:
      • Kernel version:
      • LXC version: 2.10.1
      • LXD version: 2.10.1
      • Storage backend in use: LVM (thinpools)

    Issue description

    I have two systems, sys1 and sys2. Sys1 is using dir storage, while sys2 is using LVM.

    With sys1, I migrated from 2.8.x to 2.9.x and then to 2.10.x. After resolving an issue with a change in profile inheritance of the disk device after the 2.9.x upgrade, sys1 seems to have upgraded to 2.10.x ok.

    With sys2, I migrated directly from 2.8.x to 2.10.x. This was inadvertent, as I had just sorted out the 2.9.x issue on sys1 and intended to move sys2 to 2.9.x. When lxd attempted to restart, the lxc command line client stopped responding.

    Checking /var/log/lxd/lxd.log, we see:

    lvl=info msg="LXD 2.10.1 is starting in normal mode" path=/var/lib/lxd t=2017-03-06T14:34:02-0500
    lvl=warn msg="CGroup memory swap accounting is disabled, swap limits will be ignored." t=2017-03-06T14:34:02-0500
    lvl=info msg="Kernel uid/gid map:" t=2017-03-06T14:34:02-0500
    lvl=info msg=" - u 0 0 4294967295" t=2017-03-06T14:34:02-0500
    lvl=info msg=" - g 0 0 4294967295" t=2017-03-06T14:34:02-0500
    lvl=info msg="Configured LXD uid/gid map:" t=2017-03-06T14:34:02-0500
    lvl=info msg=" - u 0 100000 65536" t=2017-03-06T14:34:02-0500
    lvl=info msg=" - g 0 100000 65536" t=2017-03-06T14:34:02-0500
    lvl=warn msg="Database already contains a valid entry for the storage pool: lxd." t=2017-03-06T14:34:03-0500
    lvl=warn msg="Storage volumes database already contains an entry for the container." t=2017-03-06T14:34:03-0500
    lvl=info msg="LXD 2.10.1 is starting in normal mode" path=/var/lib/lxd t=2017-03-06T14:44:02-0500
    lvl=warn msg="CGroup memory swap accounting is disabled, swap limits will be ignored." t=2017-03-06T14:44:02-0500
    lvl=info msg="Kernel uid/gid map:" t=2017-03-06T14:44:02-0500
    lvl=info msg=" - u 0 0 4294967295" t=2017-03-06T14:44:02-0500
    lvl=info msg=" - g 0 0 4294967295" t=2017-03-06T14:44:02-0500
    lvl=info msg="Configured LXD uid/gid map:" t=2017-03-06T14:44:02-0500
    lvl=info msg=" - u 0 100000 65536" t=2017-03-06T14:44:02-0500
    lvl=info msg=" - g 0 100000 65536" t=2017-03-06T14:44:02-0500
    lvl=warn msg="Database already contains a valid entry for the storage pool: lxd." t=2017-03-06T14:44:03-0500
    lvl=warn msg="Storage volumes database already contains an entry for the container." t=2017-03-06T14:44:03-0500
    

    journalctl -u lxd

    Mar 06 14:34:02 sys2 systemd[1]: Starting LXD - main daemon...
    Mar 06 14:34:02 sys2 lxd[4416]: lvl=warn msg="CGroup memory swap accounting is disabled, swap limits will be ignored." t=2017-03-06T14:34:02-0500
    Mar 06 14:34:03 sys2 lxd[4416]: lvl=warn msg="Database already contains a valid entry for the storage pool: lxd." t=2017-03-06T14:34:03-0500
    Mar 06 14:34:03 sys2 lxd[4416]: lvl=warn msg="Storage volumes database already contains an entry for the container." t=2017-03-06T14:34:03-0500
    Mar 06 14:34:13 sys2 lxd[4416]: error: device or resource busy
    Mar 06 14:34:13 sys2 systemd[1]: lxd.service: Main process exited, code=exited, status=1/FAILURE
    Mar 06 14:44:02 sys2 lxd[4417]: error: LXD still not running after 600s timeout.
    Mar 06 14:44:02 sys2 systemd[1]: lxd.service: Control process exited, code=exited status=1
    Mar 06 14:44:02 sys2 systemd[1]: Failed to start LXD - main daemon.
    Mar 06 14:44:02 sys2 systemd[1]: lxd.service: Unit entered failed state.
    Mar 06 14:44:02 sys2 systemd[1]: lxd.service: Failed with result 'exit-code'.
    Mar 06 14:44:02 sys2 systemd[1]: lxd.service: Service hold-off time over, scheduling restart.
    Mar 06 14:44:02 sys2 systemd[1]: Stopped LXD - main daemon.
    Mar 06 14:44:02 sys2 systemd[1]: Starting LXD - main daemon...
    Mar 06 14:44:02 sys2 lxd[8637]: lvl=warn msg="CGroup memory swap accounting is disabled, swap limits will be ignored." t=2017-03-06T14:44:02-0500
    Mar 06 14:44:03 sys2 lxd[8637]: lvl=warn msg="Database already contains a valid entry for the storage pool: lxd." t=2017-03-06T14:44:03-0500
    Mar 06 14:44:03 sys2 lxd[8637]: lvl=warn msg="Storage volumes database already contains an entry for the container." t=2017-03-06T14:44:03-0500
    Mar 06 14:44:13 sys2 lxd[8637]: error: device or resource busy
    Mar 06 14:44:13 sys2 systemd[1]: lxd.service: Main process exited, code=exited, status=1/FAILURE
    

    Sample of /var/lib/lxd/containers:

    drwx------+ 5 root   root    4096 Jan  2 15:59 astro3
    lrwxrwxrwx  1 root   root      16 Jan  2 11:35 astro3.lv -> /dev/lxd/astro3
    drwxr-xr-x+ 5 root   root    4096 Jan  9 12:13 vault
    lrwxrwxrwx  1 root   root      14 Jan 11 15:01 vault.lv -> /dev/lxd/vault
    drwx------  2 root   root    4096 Feb 15 13:48 vgate1
    lrwxrwxrwx  1 root   root      15 Feb 15 13:48 vgate1.lv -> /dev/lxd/vgate1
    drwxr-xr-x+ 5 root   root    4096 Jan 18 13:52 vpn1
    lrwxrwxrwx  1 root   root      13 Jan 25 16:22 vpn1.lv -> /dev/lxd/vpn1
    

    File tree listing of /var/lib/lxd/storage-pools:

    .
    └── lxd
        └── containers
    

    That is, the storage-pools area is empty. (Were the container rootfs links supposed to be migrated to storage-pools?)

    The images area seems untouched:

    root@sol2:/var/lib/lxd/containers# ls /var/lib/lxd/images/
    11fc1b1d39b9f9cd7e9491871f1421ac4278e1d599ecf5d180f2a6e2483bd172
    11fc1b1d39b9f9cd7e9491871f1421ac4278e1d599ecf5d180f2a6e2483bd172.lv
    11fc1b1d39b9f9cd7e9491871f1421ac4278e1d599ecf5d180f2a6e2483bd172.rootfs
    18e7ed74d0d653894f65343afbc35b92c6781933c273943d882c36a5c5535533
    18e7ed74d0d653894f65343afbc35b92c6781933c273943d882c36a5c5535533.lv
    457a80ea4720900b69e5542cea5351f58021331bc96e773e4855a3e2ce1e6595
    457a80ea4720900b69e5542cea5351f58021331bc96e773e4855a3e2ce1e6595.lv
    457a80ea4720900b69e5542cea5351f58021331bc96e773e4855a3e2ce1e6595.rootfs
    543e662b70958f5b87f68b20eb0a205d8c4b14c41f80699e9a98b3b851883d15
    543e662b70958f5b87f68b20eb0a205d8c4b14c41f80699e9a98b3b851883d15.lv
    543e662b70958f5b87f68b20eb0a205d8c4b14c41f80699e9a98b3b851883d15.rootfs
    a570ce23e1dae791e7b8b2f2bcb98c1404273e97c7a1fb972bf0f5835ac3e869
    a570ce23e1dae791e7b8b2f2bcb98c1404273e97c7a1fb972bf0f5835ac3e869.lv
    b5b03165de7c450f5f9793c8b2eb4a364fbd81124a01511f854dd379eef52abb
    b5b03165de7c450f5f9793c8b2eb4a364fbd81124a01511f854dd379eef52abb.rootfs
    bfd17410a8c7fe6397dba3e353a23001243bc43af87acf25544d6b0ab624f9f8
    bfd17410a8c7fe6397dba3e353a23001243bc43af87acf25544d6b0ab624f9f8.rootfs
    d7c16c4fedd3308b5bffdb91f491b8458610c6115d37ace8ba4bcf5c29b23cc6
    d7c16c4fedd3308b5bffdb91f491b8458610c6115d37ace8ba4bcf5c29b23cc6.lv
    d7c16c4fedd3308b5bffdb91f491b8458610c6115d37ace8ba4bcf5c29b23cc6.rootfs
    e12c3c1aed259ce62b4a5e8dc5fe8b92d14d36e611b3beae3f55c94df069eeed
    e12c3c1aed259ce62b4a5e8dc5fe8b92d14d36e611b3beae3f55c94df069eeed.lv
    ff52f536d2896f358bc913d592828ecf1b39fae45e4ee4825930091e8793ac28
    ff52f536d2896f358bc913d592828ecf1b39fae45e4ee4825930091e8793ac28.rootfs
    

    Output from pvs and vgs and -- highly edited for readability -- output from lvs:

      PV         VG   Fmt  Attr PSize   PFree  
      /dev/sda5  dat1 lvm2 a--  931.13g 181.13g
      /dev/sda6  lxd  lvm2 a--    2.56t      0 
    
      VG   #PV #LV #SN Attr   VSize   VFree  
      dat1   1   1   0 wz--n- 931.13g 181.13g
      lxd    1  42   0 wz--n-   2.56t      0 
    
     LV        VG   Attr       LSize   Pool   Origin   Data%  Meta%
     LXDPool  lxd  twi-aotz--   2.56t                   3.91   2.12
     astro3   lxd  Vwi-aotz--  10.00g LXDPool          20.69 
     vault    lxd  Vwi-aotz--  10.00g LXDPool          12.34
     vgate1   lxd  Vwi-a-tz-- 300.00g LXDPool           1.85
     vpn1     lxd  Vwi-aotz-- 300.00g LXDPool           1.88
    

    Data from lxd.db:

    sqlite> select * from storage_pools;
    1|lxd|lvm
    sqlite> select * from storage_pools_config;
    166|1|volume.size|300GB
    167|1|size|21GB
    168|1|source|lxd
    169|1|lvm.thinpool_name|LXDPool
    170|1|lvm.vg_name|lxd
    sqlite> select * from storage_volumes;
    1|astro3|1|0
    sqlite> select * from storage_volumes_config;
    67|1|block.filesystem|ext4
    68|1|size|300GB
    

    It looks somewhat odd to me that host astro3 has an entry in the storage_volumes tables when nothing else does. It does differ in being a privileged container.

    Any help you can provide to get regular access restored will be greatly appreciated. For the moment, the containers continue to provide their services. Let me know if I can provide any other useful data or perform any non-destructive tests.

  • "lxc exec" frequently runs into I/O timeouts

    Required information

    • Distribution: Ubuntu
    • Distribution version: 22.04 (in development)
    • The output of "lxc info":
    config:
      core.https_address: '[::]'
      images.auto_update_interval: "24"
      images.remote_cache_expiry: "60"
      storage.lvm_thinpool_name: LXDPool
    api_extensions:
    - storage_zfs_remove_snapshots
    - container_host_shutdown_timeout
    - container_stop_priority
    - container_syscall_filtering
    - auth_pki
    - container_last_used_at
    - etag
    - patch
    - usb_devices
    - https_allowed_credentials
    - image_compression_algorithm
    - directory_manipulation
    - container_cpu_time
    - storage_zfs_use_refquota
    - storage_lvm_mount_options
    - network
    - profile_usedby
    - container_push
    - container_exec_recording
    - certificate_update
    - container_exec_signal_handling
    - gpu_devices
    - container_image_properties
    - migration_progress
    - id_map
    - network_firewall_filtering
    - network_routes
    - storage
    - file_delete
    - file_append
    - network_dhcp_expiry
    - storage_lvm_vg_rename
    - storage_lvm_thinpool_rename
    - network_vlan
    - image_create_aliases
    - container_stateless_copy
    - container_only_migration
    - storage_zfs_clone_copy
    - unix_device_rename
    - storage_lvm_use_thinpool
    - storage_rsync_bwlimit
    - network_vxlan_interface
    - storage_btrfs_mount_options
    - entity_description
    - image_force_refresh
    - storage_lvm_lv_resizing
    - id_map_base
    - file_symlinks
    - container_push_target
    - network_vlan_physical
    - storage_images_delete
    - container_edit_metadata
    - container_snapshot_stateful_migration
    - storage_driver_ceph
    - storage_ceph_user_name
    - resource_limits
    - storage_volatile_initial_source
    - storage_ceph_force_osd_reuse
    - storage_block_filesystem_btrfs
    - resources
    - kernel_limits
    - storage_api_volume_rename
    - macaroon_authentication
    - network_sriov
    - console
    - restrict_devlxd
    - migration_pre_copy
    - infiniband
    - maas_network
    - devlxd_events
    - proxy
    - network_dhcp_gateway
    - file_get_symlink
    - network_leases
    - unix_device_hotplug
    - storage_api_local_volume_handling
    - operation_description
    - clustering
    - event_lifecycle
    - storage_api_remote_volume_handling
    - nvidia_runtime
    - container_mount_propagation
    - container_backup
    - devlxd_images
    - container_local_cross_pool_handling
    - proxy_unix
    - proxy_udp
    - clustering_join
    - proxy_tcp_udp_multi_port_handling
    - network_state
    - proxy_unix_dac_properties
    - container_protection_delete
    - unix_priv_drop
    - pprof_http
    - proxy_haproxy_protocol
    - network_hwaddr
    - proxy_nat
    - network_nat_order
    - container_full
    - candid_authentication
    - backup_compression
    - candid_config
    - nvidia_runtime_config
    - storage_api_volume_snapshots
    - storage_unmapped
    - projects
    - candid_config_key
    - network_vxlan_ttl
    - container_incremental_copy
    - usb_optional_vendorid
    - snapshot_scheduling
    - snapshot_schedule_aliases
    - container_copy_project
    - clustering_server_address
    - clustering_image_replication
    - container_protection_shift
    - snapshot_expiry
    - container_backup_override_pool
    - snapshot_expiry_creation
    - network_leases_location
    - resources_cpu_socket
    - resources_gpu
    - resources_numa
    - kernel_features
    - id_map_current
    - event_location
    - storage_api_remote_volume_snapshots
    - network_nat_address
    - container_nic_routes
    - rbac
    - cluster_internal_copy
    - seccomp_notify
    - lxc_features
    - container_nic_ipvlan
    - network_vlan_sriov
    - storage_cephfs
    - container_nic_ipfilter
    - resources_v2
    - container_exec_user_group_cwd
    - container_syscall_intercept
    - container_disk_shift
    - storage_shifted
    - resources_infiniband
    - daemon_storage
    - instances
    - image_types
    - resources_disk_sata
    - clustering_roles
    - images_expiry
    - resources_network_firmware
    - backup_compression_algorithm
    - ceph_data_pool_name
    - container_syscall_intercept_mount
    - compression_squashfs
    - container_raw_mount
    - container_nic_routed
    - container_syscall_intercept_mount_fuse
    - container_disk_ceph
    - virtual-machines
    - image_profiles
    - clustering_architecture
    - resources_disk_id
    - storage_lvm_stripes
    - vm_boot_priority
    - unix_hotplug_devices
    - api_filtering
    - instance_nic_network
    - clustering_sizing
    - firewall_driver
    - projects_limits
    - container_syscall_intercept_hugetlbfs
    - limits_hugepages
    - container_nic_routed_gateway
    - projects_restrictions
    - custom_volume_snapshot_expiry
    - volume_snapshot_scheduling
    - trust_ca_certificates
    - snapshot_disk_usage
    - clustering_edit_roles
    - container_nic_routed_host_address
    - container_nic_ipvlan_gateway
    - resources_usb_pci
    - resources_cpu_threads_numa
    - resources_cpu_core_die
    - api_os
    - container_nic_routed_host_table
    - container_nic_ipvlan_host_table
    - container_nic_ipvlan_mode
    - resources_system
    - images_push_relay
    - network_dns_search
    - container_nic_routed_limits
    - instance_nic_bridged_vlan
    - network_state_bond_bridge
    - usedby_consistency
    - custom_block_volumes
    - clustering_failure_domains
    - resources_gpu_mdev
    - console_vga_type
    - projects_limits_disk
    - network_type_macvlan
    - network_type_sriov
    - container_syscall_intercept_bpf_devices
    - network_type_ovn
    - projects_networks
    - projects_networks_restricted_uplinks
    - custom_volume_backup
    - backup_override_name
    - storage_rsync_compression
    - network_type_physical
    - network_ovn_external_subnets
    - network_ovn_nat
    - network_ovn_external_routes_remove
    - tpm_device_type
    - storage_zfs_clone_copy_rebase
    - gpu_mdev
    - resources_pci_iommu
    - resources_network_usb
    - resources_disk_address
    - network_physical_ovn_ingress_mode
    - network_ovn_dhcp
    - network_physical_routes_anycast
    - projects_limits_instances
    - network_state_vlan
    - instance_nic_bridged_port_isolation
    - instance_bulk_state_change
    - network_gvrp
    - instance_pool_move
    - gpu_sriov
    - pci_device_type
    - storage_volume_state
    - network_acl
    - migration_stateful
    - disk_state_quota
    - storage_ceph_features
    - projects_compression
    - projects_images_remote_cache_expiry
    - certificate_project
    - network_ovn_acl
    - projects_images_auto_update
    - projects_restricted_cluster_target
    - images_default_architecture
    - network_ovn_acl_defaults
    - gpu_mig
    - project_usage
    - network_bridge_acl
    - warnings
    - projects_restricted_backups_and_snapshots
    - clustering_join_token
    - clustering_description
    - server_trusted_proxy
    - clustering_update_cert
    - storage_api_project
    - server_instance_driver_operational
    - server_supported_storage_drivers
    - event_lifecycle_requestor_address
    - resources_gpu_usb
    - clustering_evacuation
    - network_ovn_nat_address
    - network_bgp
    - network_forward
    - custom_volume_refresh
    - network_counters_errors_dropped
    - metrics
    - image_source_project
    - clustering_config
    - network_peer
    - linux_sysctl
    - network_dns
    - ovn_nic_acceleration
    - certificate_self_renewal
    - instance_project_move
    - storage_volume_project_move
    - cloud_init
    - network_dns_nat
    - database_leader
    - instance_all_projects
    - clustering_groups
    - ceph_rbd_du
    - instance_get_full
    - qemu_metrics
    - gpu_mig_uuid
    - event_project
    - clustering_evacuation_live
    - instance_allow_inconsistent_copy
    - network_state_ovn
    - storage_volume_api_filtering
    - image_restrictions
    - storage_zfs_export
    - network_dns_records
    - storage_zfs_reserve_space
    - network_acl_log
    - storage_zfs_blocksize
    - metrics_cpu_seconds
    - instance_snapshot_never
    - certificate_token
    api_status: stable
    api_version: "1.0"
    auth: trusted
    public: false
    auth_methods:
    - tls
    environment:
      addresses:
      - 172.20.153.19:8443
      - '[2001:8b0:664:0:12d:8950:2717:8277]:8443'
      - 10.10.25.1:8443
      - 10.0.3.1:8443
      - 172.17.0.1:8443
      - 10.72.127.1:8443
      - 10.36.63.1:8443
      - 10.172.192.14:8443
      architectures:
      - x86_64
      - i686
      certificate: |
        -----BEGIN CERTIFICATE-----
        MIIGjDCCBHSgAwIBAgIQLf1ycWKsIfHRUbru8rWZujANBgkqhkiG9w0BAQsFADA2
        MRwwGgYDVQQKExNsaW51eGNvbnRhaW5lcnMub3JnMRYwFAYDVQQDDA1yb290QG5p
        ZWp3ZWluMB4XDTE2MDMyMjE1MTMwOVoXDTI2MDMyMDE1MTMwOVowNjEcMBoGA1UE
        ChMTbGludXhjb250YWluZXJzLm9yZzEWMBQGA1UEAwwNcm9vdEBuaWVqd2VpbjCC
        AiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAJ6TXPWcmkUAh2lg+tHbLgqw
        J47kyIUX760E7BrpRqPyrIP9wtAqjdpazcX83GwbukgKkfFr/FRNv0/iP5rkbqq3
        ss92+Z2eOuqvQictrIaFcknwPjFC7P4RDE/UmRhMrdMd1jWNSFo1rT7HUHPMe2q9
        W3vdT8znj7U3blXuGtPgD8y8eNznJjdnjtwgx4F/70z5N2F4zD4OixrSLp7cluLx
        NdlLDdN5uMBxp9byY1QtrjkKHfdL8qBOifeQS544QGZgUGLfa5W5/DQvOQmji+NC
        f5UU2j7hbcOYA8S4CopM5jFwpX3X2oy/2tt2/JlAAKKYQtmFh3u7MAC2ndhN4TvO
        ukzYU9l+xvjSukeUc6f9m3TOpcn6zw9pR0iwFKlQsfQgUt7tcHZYfcKoq0Tczl2D
        /pa6vJNLaQ6i/8uYWcCyXuqZKvCl/WCoYVuu4xZc2VBXUPGwDDw6ukJlslVqoUTO
        gUVpNvSjAPAh8o0Vdks5UR60NMEVnMKryeBNJJ4qi8du3iCa3h2I2EWb6nCcj33t
        XW4Llrz3U5jl+UZncWPXitpAORxbP8VjnJWJPruT0P5a4jvafG0cfUkTFTolUJ6y
        31mohIYOrfLG5NDIU4YRxSxXRq4REbS3rjyhIox4ugFVcryZj8StvbU5TfhKQZWi
        aBISFxCtz21EQJI4Gy45AgMBAAGjggGUMIIBkDAOBgNVHQ8BAf8EBAMCBaAwEwYD
        VR0lBAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADCCAVkGA1UdEQSCAVAwggFM
        gghuaWVqd2VpboIQMTcyLjIwLjE1My4xOS8yNYIpMjAwMTo4YjA6YmZmMjplYjE0
        OjQxZTI6NTQwZTo4MGYwOjFiZjIvNjSCKTIwMDE6OGIwOmJmZjI6ZWIxNDoyYzky
        OmY5MTc6OTJmZTpmYjc1LzY0gikyMDAxOjhiMDpiZmYyOmViMTQ6MTk3MjpmMWY2
        OjcxNTk6MmIxZC82NIIpMjAwMTo4YjA6YmZmMjplYjE0OjZhZjc6MjhmZjpmZWNj
        OmUzNDYvNjSCHGZlODA6OjZhZjc6MjhmZjpmZWNjOmUzNDYvNjSCCzEwLjAuMy4x
        LzI0ghxmZTgwOjpkMDg1OjIxZmY6ZmU5NTozMWI3LzY0ghtmZTgwOjpmYzE1OjRm
        ZjpmZThkOmE3NjkvNjSCHGZlODA6OmZjZTI6MjFmZjpmZTFlOjgwYzUvNjQwDQYJ
        KoZIhvcNAQELBQADggIBAJPdZjgcIIBT3UhuGjLWFda5/o9MSdEB2cl2ISo185D1
        tb7DGl3M7fUsu/N9VfMkz9QtP5R/sCYly3hyZLgKj5dz9c43BXwOMUdYaB+KShVB
        k7FE8s+V1VI2WCwXTtzHs5MgREe9TGMRg7BBzkat5m6gCIXhjO0jf2hdyuR/A4Z/
        RbAqh7jcDDHUZbdS/xBgE0eUfKsyAsDbru7JIBAfbrmUounwwLHzGycWpaxVBxqP
        3e3Zw6ousN9ELqvFs8nxz5UxUpmG3ynpwaZd3HULowrb+Fujjn+O+Ozwj7Uthgo7
        Hm+G8rVFPXxgK3mDkEAGfChPSga5QCfCOiyR7p3X4kLhZ2ONXFTHAWHIwvzMvmQm
        8nS233VygRb2+RFnzoFoIX9VWzGUtVzLm3kyNAw8esgGk7SKDGbhhGi6uQ5zK5q9
        7/zECXl6TFRKvm5CnIQW3maAA72mdLgfJBYsXecBpGqNtwKBHNvZ4BxQYoMHKu/i
        9CGuRyUNrAlACbWXFCcrl2dqZ/XfOXwXK9ln8xAWjYj1eQNks93YuBa7BDm3v2XH
        bYcD3BGs/ftUw2HMkWmwJG4BY3HKmT6QcayUGEWFT8oOA+BvNKakb0UYED4CKzh2
        DRGHayVJ5fsGw8Q5zni4YJTaGnhu7Clo5g3KhiNRL+FZX/r/u49WI5Or9xOPnOxL
        -----END CERTIFICATE-----
      certificate_fingerprint: 2a2f687296fb3e74ae352eb303843725690d16fd2b57fd373f44d46fbb8721d6
      driver: qemu | lxc
      driver_version: 6.1.1 | 4.0.12
      firewall: nftables
      kernel: Linux
      kernel_architecture: x86_64
      kernel_features:
        netnsid_getifaddrs: "true"
        seccomp_listener: "true"
        seccomp_listener_continue: "true"
        shiftfs: "false"
        uevent_injection: "true"
        unpriv_fscaps: "true"
      kernel_version: 5.15.0-18-generic
      lxc_features:
        cgroup2: "true"
        core_scheduling: "true"
        devpts_fd: "true"
        idmapped_mounts_v2: "true"
        mount_injection_file: "true"
        network_gateway_device_route: "true"
        network_ipvlan: "true"
        network_l2proxy: "true"
        network_phys_macvlan_mtu: "true"
        network_veth_router: "true"
        pidfd: "true"
        seccomp_allow_deny_syntax: "true"
        seccomp_notify: "true"
        seccomp_proxy_send_notify_fd: "true"
      os_name: Ubuntu
      os_version: "22.04"
      project: default
      server: lxd
      server_clustered: false
      server_name: niejwein
      server_pid: 1763865
      server_version: "4.23"
      storage: zfs | btrfs
      storage_version: 2.0.6-1ubuntu3 | 5.4.1
      storage_supported_drivers:
      - name: dir
        version: "1"
        remote: false
      - name: lvm
        version: 2.03.07(2) (2019-11-30) / 1.02.167 (2019-11-30) / 4.45.0
        remote: false
      - name: zfs
        version: 2.0.6-1ubuntu3
        remote: false
      - name: ceph
        version: 15.2.14
        remote: true
      - name: btrfs
        version: 5.4.1
        remote: false
      - name: cephfs
        version: 15.2.14
        remote: true
    

    Issue description

    I have lots of things that use lxc exec to run processes in a container, some of which run for a reasonably long time: for instance, I normally run Launchpad tests that way which can take anything up to hours, charmcraft uses it as part of preparing its build container, etc. Until perhaps a couple of months ago this was generally extremely reliable (unless the lxd snap was upgraded, but I know how to avoid that). Now, however, I frequently find that it emits this error after a while and then exits:

    Error: write unix @->/var/snap/lxd/common/lxd/unix.socket: i/o timeout
    

    This is naturally very frustrating when it happens in the middle of a long task and interrupts it so that I have to start again. Is there a missing retry loop here or something?

    Steps to reproduce

    This is quite intermittent and I'm afraid I don't have a reliable reproduction recipe, but it happens to me frequently (multiple times a day) during long lxc exec runs. Running charmcraft pack in a charm recipe seems a particularly good way to run into it.

    Information to attach

    The only thing I've been able to observe so far is that the main daemon log says:

    t=2022-03-11T13:12:30+0000 lvl=warn msg="Detected poll(POLLNVAL) event."
    t=2022-03-11T13:12:30+0000 lvl=warn msg="Detected poll(POLLNVAL) event: exiting."
    

    I found https://discuss.linuxcontainers.org/t/lxd-unix-socket-i-o-timeout/13188 with the same issue, but no response.

  • Ceph clean

    Addresses #6174. You can attach rbd with a command like: lxc config device add c1 ceph-rbd1 disk source=ceph:my-pool/my-volume ceph.user_name=admin ceph.cluster_name=ceph path=/ceph

    You can attach a fs with a command like: lxc config device add c1 ceph-fs1 disk source=cephfs:my-fs/some-path ceph.user_name=admin ceph.cluster_name=ceph path=/cephfs

  • How to access container from the LAN?

    Suppose you configured your LXD server for remote access and now can manage containers on a remote machine. How do you actually run a web server in your container and access it from the network?

    First, let's say that your container is already able to access the network through the lxcbr0 interface created automatically on the host by LXC. But this interface is allocated for NAT (which is for one-way connections), so to be able to listen for incoming connections, you need to create another interface like lxcbr0 (called a bridge) and link it to the network card (eth0) where you want to listen for incoming traffic.

    So the final setup should be:

    • lxcbr0 - mapped to eth0 on guest - NAT
    • lxcbr1 - mapped to eth1 on guest - LAN that gets address from LAN DHCP and listens for connection

    The target system is Ubuntu 15.10

  • How to re-import zfs containers after blitzing the lxd db?

    Required information

    • Distribution: 16.04 - 4.8.0-49-generic
    • Distribution version:
    • The output of "lxc info" or if that fails:

    root@opti-bram-srv01:/var/lib/lxd/storage-pools/mirr1tb/containers/strongswan# lxc info
    config: {}
    api_extensions:

    • storage_zfs_remove_snapshots
    • container_host_shutdown_timeout
    • container_syscall_filtering
    • auth_pki
    • container_last_used_at
    • etag
    • patch
    • usb_devices
    • https_allowed_credentials
    • image_compression_algorithm
    • directory_manipulation
    • container_cpu_time
    • storage_zfs_use_refquota
    • storage_lvm_mount_options
    • network
    • profile_usedby
    • container_push
    • container_exec_recording
    • certificate_update
    • container_exec_signal_handling
    • gpu_devices
    • container_image_properties
    • migration_progress
    • id_map
    • network_firewall_filtering
    • network_routes
    • storage
    • file_delete
    • file_append
    • network_dhcp_expiry
    • storage_lvm_vg_rename
    • storage_lvm_thinpool_rename
    • network_vlan
    • image_create_aliases
    • container_stateless_copy
    • container_only_migration
    • storage_zfs_clone_copy
    • unix_device_rename
    • storage_lvm_use_thinpool
    • storage_rsync_bwlimit
    • network_vxlan_interface api_status: stable api_version: "1.0" auth: trusted public: false environment: addresses: [] architectures:
      • x86_64
      • i686 certificate: | -----BEGIN CERTIFICATE----- MIIFhzCCA2+gAwIBAgIRAPlQ+Rn7SHqtur/gi1/NGpgwDQYJKoZIhvcNAQELBQAw PTEcMBoGA1UEChMTbGludXhjb250YWluZXJzLm9yZzEdMBsGA1UEAwwUcm9vdEBv cHRpLWJyYW0tc3J2MDEwHhcNMTcwNDMwMTYwMzM4WhcNMjcwNDI4MTYwMzM4WjA9 MRwwGgYDVQQKExNsaW51eGNvbnRhaW5lcnMub3JnMR0wGwYDVQQDDBRyb290QG9w dGktYnJhbS1zcnYwMTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAKjm dQPNTcmRHS6xWFFx6vnrDporneM9HFOnlv3t6JzuPOU+SYD6swlSiLolDhOHNoof kxLVy2zr5gYSJjuK6u3hJMS7Vkx8WCoeGX+pE/mqlENhsii3jwBK5fSXqiOXI6Ea nHG5bs0PY4jz1lPb6U2gO+lz4UaMRTtXeaylCwNC8u+z+Vu/DWq88K2xd1sruvJK WT348eSg8/yTBVr5HeXWfsr1jdYC2O+AEcni1rYn0V5j7HGZXOqNNR5VRavrzVfc C21uZOHTs5F3x1e29PpflC3eAq6Qpyh8jm9E3BpH35c4hNjUnUmSVlyWk1tqTIJ1 GZ2UtfHpvDl+2cqeanWameJSuNq0nZGlRpSYVhXJszIb9lFa9eH8ibBO3sL4Tn8t Sq1rPzgou3Za9lXnegt8TkypCx8mPSxIlNcHdgfdy0nbKImcnVnIObdf9R9h3hTe MdRedkTReHnEQ0cUMST4JWTo6GWbg11N+VZCCs7cqx5maEfpN+MDSbepP1YO3raO +RzjH7KJMWK2195wEzQvLXFNK/Ci0RBPXS71o4S2HOv8Ru8mM7EQ9jHt2pIpCpRM SFvPISgnlVXwBc2YySvOoHQVqxFIJ+OdQJY1bJchO+vyoQBDHhHZ02uE/ZPFFC6n 0C6ZsU8P3YwiDi7ANC3ioZ2/E9cZXKpz8PyesYznAgMBAAGjgYEwfzAOBgNVHQ8B Af8EBAMCBaAwEwYDVR0lBAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADBKBgNV HREEQzBBgg9vcHRpLWJyYW0tc3J2MDGHBAoKWAqHBKwREGSHBAoKCwqHBAoKxwGH BAo8yAqHBAoKYwGHBKwSAAGHBMCoegEwDQYJKoZIhvcNAQELBQADggIBADcAlg7U FF4qcxp0hh4vXePpsGgGZm+VCdqwNYvMdZ9+3340Lkmu0/Wyvgayj5XN57M1DpkC xbYHW5ElGOE8V2s2RHMYpM/lLSFQ51NhL5/lSs0ZZ5s++JK6mw6pCQoQ0EFLzaXH 5ibQElIo3ztMiDJSIp/QEDI+VXcnPF29Y49UCwd+mimUIdbaV/I0N6ZY3HM4ZnZo jmYg6Hssx22/CiWAoA4pEaCmzv/e2J6Y2a5qj4aAG2jYgYAJRl1BYNG0KY3zV8Cg hCuxKgNdsgsnzR5GYzCUXSy0csJqjcoA4EvUI1NIbDhFs4RJOCOt6dQx3Ta/5A8c D51tDPJTCB4ywGvmZVH4JxT+KmnZG5YlMlfArLd4eyT9GOcThAFjBgJZIgRKtSQS B3OGZEA5XSZnsnr2I2lPCpRmR0dC0coXlLjk9JwSWdcqzYjF3G0dN1Eou5K2m3Wi FBDZRkpv66LVAO/sOq0VWTvwQl5DRxh+9R2xrlaM4iJJE47hKpo3KVLMw1ZfSMKF MqqvOUm+8i7fDOmqHvtkN4p208qYtxS1wpiY6fTcRkbvOTd+2afCoyVvzJn1W8Ea nvz4djbNv7x8mexTht23zAiPYwYP4aaTbcHczkz8nfJoy55hDio11dx3qx9Im2Xs Bzr8cP+MSa/mAD1C+kgClGgmzBAOQhUn5L0f -----END CERTIFICATE----- certificate_fingerprint: 9997e229418451999ec250cf6a0e3bfd61a5c42a5c1c51222c3bc6c8312e4b16 driver: lxc driver_version: 2.0.7 kernel: Linux kernel_architecture: x86_64 kernel_version: 4.8.0-49-generic server: lxd server_pid: 2767 server_version: "2.13" storage: "" storage_version: ""
      • Storage backend in use: ZFS

    Issue description

    Hi,

    I mistakenly uninstalled LXD and I think destroyed the database; now lxd list is showing nothing.

    I still have all my containers zfs storage intact so wondering how to import them into a fresh LXD install.

    Tried a few things related to using "lxc storage" but struggling to actually get the containers back into LXD database.

    Bit of a n00b here, network engineer by trade, trying to dabble in Linux!!!

    Cheers! Jon.

    Steps to reproduce

     1. Step one

        root@opti-bram-srv01:/var/lib/lxd/storage-pools/mirr1tb/containers/strongswan# lxc list
        +------+-------+------+------+------+-----------+
        | NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
        +------+-------+------+------+------+-----------+

     2. Step two

        root@opti-bram-srv01:/var/lib/lxd/storage-pools/mirr1tb/containers/strongswan# zpool list
        NAME       SIZE  ALLOC   FREE  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
        large1-5  1.36T   682G   710G         -   38%  48%  1.00x  ONLINE  -
        mirr1tb    928G   122G   806G         -    6%  13%  1.00x  ONLINE  -
        store500   464G   221M   464G         -    0%   0%  1.00x  ONLINE  -

    3. Step three

    Output of zfs list showing the actual storage is still there:

     root@opti-bram-srv01:/var/lib/lxd/storage-pools/mirr1tb/containers/strongswan# zfs list
     NAME                                         USED  AVAIL  REFER  MOUNTPOINT
     large1-5                                     682G   667G   681G  /large1-5
     mirr1tb                                      122G   777G    96K  none
     mirr1tb/containers                           107G   777G    96K  none
     mirr1tb/containers/ansible                   459M  19.6G   860M  /var/lib/lxd/storage-pools/mirr1tb/containers/ansible
     mirr1tb/containers/backup-alpinehub         40.9M   777G  40.9M  /var/lib/lxd/storage-pools/mirr1tb/containers/backup-alpinehub
     mirr1tb/containers/backup-alpinespoke       40.6M   777G  40.6M  /var/lib/lxd/storage-pools/mirr1tb/containers/backup-alpinespoke
     mirr1tb/containers/backup-vmhost-hub1-virl  20.4G   777G  20.4G  /var/lib/lxd/storage-pools/mirr1tb/containers/backup-vmhost-hub1-virl
     mirr1tb/containers/backups                  8.85G  71.2G  9.42G  /var/lib/lxd/storage-pools/mirr1tb/containers/backups
     mirr1tb/containers/containers                 96K   777G    96K  none
     mirr1tb/containers/custom                    192K   777G    96K  none
     mirr1tb/containers/custom/lxdhome             96K   777G    96K  /var/lib/lxd/storage-pools/pool1/custom/lxdhome
     mirr1tb/containers/deleted                    96K   777G    96K  none
     mirr1tb/containers/dns                       313M  19.7G   821M  /var/lib/lxd/storage-pools/mirr1tb/containers/dns
     mirr1tb/containers/images                     96K   777G    96K  none
     mirr1tb/containers/nextcloud                9.79G  10.2G  10.4G  /var/lib/lxd/storage-pools/mirr1tb/containers/nextcloud
     mirr1tb/containers/nzb                      34.5G   777G  35.0G  /var/lib/lxd/storage-pools/mirr1tb/containers/nzb
     mirr1tb/containers/openstack2               4.62M   777G   755M  /var/lib/lxd/containers/openstack2.zfs
     mirr1tb/containers/ovpn                     22.7M  20.0G  27.9M  /var/lib/lxd/storage-pools/mirr1tb/containers/ovpn
     mirr1tb/containers/plex                     3.13G  16.9G  2.71G  /var/lib/lxd/storage-pools/mirr1tb/containers/plex
     mirr1tb/containers/pritunl                  1.99G  18.0G  2.09G  /var/lib/lxd/storage-pools/mirr1tb/containers/pritunl
     mirr1tb/containers/smokeping                 932M  19.1G  1.31G  /var/lib/lxd/storage-pools/mirr1tb/containers/smokeping
     mirr1tb/containers/strongswan                415M  19.6G  1.00G  /var/lib/lxd/storage-pools/mirr1tb/containers/strongswan
     mirr1tb/containers/unifi                    7.13G  12.9G  7.55G  /var/lib/lxd/storage-pools/mirr1tb/containers/unifi
     mirr1tb/containers/unimus                    489M  19.5G  1.05G  /var/lib/lxd/storage-pools/mirr1tb/containers/unimus
     mirr1tb/containers/vmhost                   16.9G  3.11G  17.3G  /var/lib/lxd/storage-pools/mirr1tb/containers/vmhost
     mirr1tb/containers/vpn-ras                  2.11G  7.89G  1.53G  /var/lib/lxd/storage-pools/mirr1tb/containers/vpn-ras

    Information to attach

    dmesg.txt

    • [ ] any relevant kernel output (dmesg)
    • [ ] container log (lxc info NAME --show-log)
    • [ ] main daemon log (/var/log/lxd.log)
    • [ ] output of the client with --debug
    • [ ] output of the daemon with --debug
  • macvlan NICs losing connectivity when LXD is reloaded

    macvlan NICs losing connectivity when LXD is reloaded

    Required information

    • Distribution: Ubuntu
    • Distribution version: 22.04 x86_64
    • Hardware/Virtual: Dedicated physical Dell R series server
    • The output of "lxc info" or if that fails:
    [lxd_11089_lxcinfo.txt](https://github.com/lxc/lxd/files/9933234/lxd_11089_lxcinfo.txt)
    

    Issue description

    All instances are using macvlan interfaces for now, until I get everything over to LXD.

    All VMs are dropping off the network. When I try to restart them, they just go into a stopped state, and when I then try to start them I get this error message:

    ~$ lxc start vmname
    Error: Failed to start device "eth0": Failed to set the MAC address: Failed to run: ip link set dev macd8b62eeb address 00:16:3e:87:19:1f: exit status 2 (RTNETLINK answers: Address already in use))
    

    I notice that the device name changes every time I try to start the VM…

    Failed to run: ip link set dev macd8b62eeb address 00:16:3e:87:19:1f
    Failed to run: ip link set dev macef515ed2 address 00:16:3e:87:19:1f
    Failed to run: ip link set dev mac99318f7d address 00:16:3e:87:19:1f
    ...
    

    I can manually delete the device and start the VM:

    ip link show | grep -B 1 '00:16:3e:87:19:1f'
        29: maca35b59f9@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 500
        link/ether 00:16:3e:87:19:1f brd ff:ff:ff:ff:ff:ff
    
    sudo ip link delete maca35b59f9
    lxc start vmname
    

    Interestingly, it doesn't matter whether the VM uses a Windows image I built myself or one of the Ubuntu Server & Desktop VM images downloaded from the default image repo images.linuxcontainers.org.
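
    Generalizing the manual cleanup above, a hedged sketch that looks up the stale host-side interface from the instance's volatile MAC before starting it (the VM name is a placeholder; this assumes the leftover macvlan device still carries the instance MAC, as in the output above):

    vm=vmname
    mac=$(lxc config get "${vm}" volatile.eth0.hwaddr)
    # Find the leftover macvlan host interface that still holds the instance MAC
    dev=$(ip -o link show | awk -v mac="${mac}" '$0 ~ mac {sub(":$", "", $2); split($2, a, "@"); print a[1]}')
    [ -n "${dev}" ] && sudo ip link delete "${dev}"
    lxc start "${vm}"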

    Steps to reproduce

    1. Create VM (images:ubuntu/22.04/cloud) with MACVLAN interface
    2. Wait about a week for VMs to start losing network connectivity
    3. Try to restart them

    Information to attach

    • [x] Any relevant kernel output (dmesg)
    • [x] Container log (lxc info NAME --show-log)
    • [x] Container configuration (lxc config show NAME --expanded)
    • [x] Main daemon log (at /var/log/lxd/lxd.log or /var/snap/lxd/common/lxd/logs/lxd.log)
    • [x] Output of the client with --debug
    • [x] Output of the daemon with --debug (alternatively output of lxc monitor while reproducing the issue)

    last reboot due to this issue

    $ last | grep reboot
    reboot   system boot  5.15.0-52-generi Thu Oct 27 17:18
    

    dmesg

    root@server1:/home/someadmin# dmesg | grep windowsvm1
    [  121.322993] audit: type=1400 audit(1666905560.115:61): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxd-windowsvm1_</var/snap/lxd/common/lxd>" pid=16353 comm="apparmor_parser"
    [  121.341129] audit: type=1400 audit(1666905560.131:62): apparmor="DENIED" operation="open" profile="lxd-windowsvm1_</var/snap/lxd/common/lxd>" name="/var/lib/snapd/hostfs/run/systemd/resolve/stub-resolv.conf" pid=16413 comm="lxd" requested_mask="r" denied_mask="r" fsuid=0 ouid=101
    [588092.580580] audit: type=1400 audit(1667493544.754:269): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="lxd-windowsvm1_</var/snap/lxd/common/lxd>" pid=2179130 comm="apparmor_parser"
    [588092.600952] audit: type=1400 audit(1667493544.774:270): apparmor="DENIED" operation="open" profile="lxd-windowsvm1_</var/snap/lxd/common/lxd>" name="/var/lib/snapd/hostfs/run/systemd/resolve/stub-resolv.conf" pid=2179134 comm="lxd" requested_mask="r" denied_mask="r" fsuid=0 ouid=101
    
    root@server1:/home/someadmin# dmesg | grep maca35b59f9
    
    

    Instance log: before running "ip link delete {device}", the qemu.log disappears even though the instance is running, and it only reappears once the instance has been started successfully.

    root@server1:~$ lxc info --show-log ubuntudesktopvm2
    Name: ubuntudesktopvm2
    Status: RUNNING
    Type: virtual-machine
    Architecture: x86_64
    Location: server1.domain.tld
    PID: 22501
    Created: 2022/08/24 04:29 EDT
    Last Used: 2022/10/27 17:19 EDT
    
    Resources:
      Processes: 132
      Disk usage:
        root: 7.12GiB
      CPU usage:
        CPU usage (in seconds): 35027
      Memory usage:
        Memory (current): 3.70GiB
        Memory (peak): 3.84GiB
      Network usage:
        enp5s0:
          Type: broadcast
          State: UP
          Host interface: mac08fe77dc
          MAC address: 00:16:3e:0c:57:c6
          MTU: 1500
          Bytes received: 529.02MB
          Bytes sent: 305.63MB
          Packets received: 3759880
          Packets sent: 1370752
          IP addresses:
            inet6: fe80::216:3eff:fe0c:57c6/64 (link)
        lo:
          Type: loopback
          State: UP
          MTU: 65536
          Bytes received: 6.63GB
          Bytes sent: 6.63GB
          Packets received: 49331802
          Packets sent: 49331802
          IP addresses:
            inet:  127.0.0.1/8 (local)
            inet6: ::1/128 (local)
    Error: open /var/snap/lxd/common/lxd/logs/ubuntudesktopvm2/qemu.log: no such file or directory
    

    container config #1: windowsvm1

    # lxc config show windowsvm1 --expanded
    architecture: x86_64
    config:
      cloud-init.user-data: |
        #cloud-config
        first_logon_behaviour: false
        set_timezone: America/New_York
        users:
          - name: someadmin
            passwd: someadminpassword
            primary_group: Administrators
        winrm_enable_basic_auth: true
        winrm_configure_https_listener: true
      image.architecture: amd64
      image.description: MS WS 2022 S,D,c (20220817-2048z)
      image.os: Windows
      image.release: "2022"
      image.serial: 20220817-2048z
      image.type: virtual-machine
      image.variant: Standard, Desktop Experience, cloudbase-init
      limits.cpu: "4"
      limits.memory: 6GiB
      security.syscalls.intercept.sysinfo: "true"
      volatile.base_image: c870435e5901d2f20f3bd0418a4945fbae4f38f4327eec8bf9b5343cfa4574f6
      volatile.cloud-init.instance-id: c821ad49-9fd6-4d9a-b14d-2a1746ce46c9
      volatile.eth0.host_name: mac0a7e5062
      volatile.eth0.hwaddr: 00:16:3e:87:19:1f
      volatile.eth0.last_state.created: "false"
      volatile.last_state.power: RUNNING
      volatile.uuid: 2c78483b-2a86-4cd5-9e4c-69f47e602f28
      volatile.vsock_id: "33"
    devices:
      eth0:
        name: eth0
        nictype: macvlan
        parent: eno1
        type: nic
      root:
        path: /
        pool: sp00
        type: disk
    ephemeral: false
    profiles:
    - default
    stateful: false
    description: ""
    

    container config #2: ubuntudesktopvm1

    lxc config show ubuntudesktopvm1 --expanded
    architecture: x86_64
    config:
      cloud-init.user-data: |
        #cloud-config
        packages:
          - apt-transport-https
          - gpg
        package_upgrade: true
        timezone: America/New_York
      image.architecture: amd64
      image.description: Ubuntu jammy amd64 (20220821_07:42)
      image.os: Ubuntu
      image.release: jammy
      image.serial: "20220821_07:42"
      image.type: disk-kvm.img
      image.variant: desktop
      limits.cpu: "6"
      limits.memory: 6GiB
      security.syscalls.intercept.sysinfo: "true"
      volatile.base_image: d7c196be900f47cbcc6167031bc1521ec31a11e6b117ebebbc6234f41fe57edf
      volatile.cloud-init.instance-id: add76d39-6ffa-43b9-8331-67b172686ff7
      volatile.eth0.host_name: mac22d6f498
      volatile.eth0.hwaddr: 00:16:3e:f9:d2:d5
      volatile.eth0.last_state.created: "false"
      volatile.last_state.power: RUNNING
      volatile.uuid: 092a4884-128c-4b05-b4b5-876d322f9df9
      volatile.vsock_id: "37"
    devices:
      eth0:
        name: eth0
        nictype: macvlan
        parent: eno1
        type: nic
      root:
        path: /
        pool: sp00
        size: 30GiB
        type: disk
    ephemeral: false
    profiles:
    - default
    stateful: false
    description: ""
    

    Main daemon log: cat /var/snap/lxd/common/lxd/logs/lxd.log

    cat /var/snap/lxd/common/lxd/logs/lxd.log
    time="2022-10-31T15:34:15-04:00" level=warning msg=" - Couldn't find the CGroup network priority controller, network priority will be ignored"
    time="2022-10-31T15:34:23-04:00" level=warning msg="Failed to delete operation" class=task description="Pruning leftover image files" err="Operation not found" operation=0579cc32-cea0-44a1-9bfb-614c4b0f7d11 project= status=Success
    time="2022-10-31T15:34:24-04:00" level=warning msg="Failed to delete operation" class=task description="Remove orphaned operations" err="Operation not found" operation=b36749d9-3590-45e2-9370-01beb5d5560b project= status=Success
    time="2022-10-31T15:34:24-04:00" level=warning msg="Failed to delete operation" class=task description="Cleaning up expired images" err="Operation not found" operation=2073d73d-a990-461c-b36c-025debcfb13d project= status=Success
    time="2022-11-03T12:18:18-04:00" level=error msg="Failed writing error for HTTP response" err="open /var/snap/lxd/common/lxd/logs/windowsvm1/qemu.log: no such file or directory" url="/1.0/instances/{name}/logs/{file}" writeErr="<nil>"
    

    Output of the client with --debug: n/a

    Main daemon log: cat /var/snap/lxd/common/lxd/logs/lxd.log.1

    cat /var/snap/lxd/common/lxd/logs/lxd.log.1
    time="2022-10-27T17:18:45-04:00" level=warning msg=" - Couldn't find the CGroup network priority controller, network priority will be ignored"
    time="2022-10-27T17:18:55-04:00" level=warning msg="Failed to delete operation" class=task description="Pruning leftover image files" err="Operation not found" operation=fcbca487-bcf3-47ad-94b1-4823f8082a10 project= status=Success
    time="2022-10-27T17:18:55-04:00" level=warning msg="Failed to delete operation" class=task description="Cleaning up expired images" err="Operation not found" operation=493db1a7-1e7c-4ec5-840d-f05e24bf0aec project= status=Success
    time="2022-10-27T17:18:55-04:00" level=warning msg="Failed to delete operation" class=task description="Remove orphaned operations" err="Operation not found" operation=9294261c-4769-429d-815d-f0b2c7e0a963 project= status=Success
    time="2022-10-27T17:18:56-04:00" level=warning msg="Failed to delete operation" class=task description="Synchronizing images" err="Operation not found" operation=c9944a20-8fa9-4365-85d1-e80a6cb29943 project= status=Success
    time="2022-10-27T17:19:21-04:00" level=warning msg="Starting VM without default firmware (-bios or -kernel in raw.qemu)" instance=citrixnetscaler1 instanceType=virtual-machine project=default
    time="2022-10-27T17:19:23-04:00" level=warning msg="Starting VM without default firmware (-bios or -kernel in raw.qemu)" instance=citrixnetscaler2 instanceType=virtual-machine project=default
    time="2022-10-27T17:20:18-04:00" level=error msg="Failed to advertise vsock address" err="Failed connecting to lxd-agent: Get \"https://custom.socket/1.0\": dial vsock vm(37):8443: connect: connection timed out" instance=ubuntudesktopvm1 instanceType=virtual-machine project=default
    time="2022-10-27T17:20:19-04:00" level=warning msg="Could not get VM state from agent" err="Failed connecting to agent: Get \"https://custom.socket/1.0\": dial vsock vm(37):8443: connect: connection timed out" instance=ubuntudesktopvm1 instanceType=virtual-machine project=default
    time="2022-10-27T17:20:19-04:00" level=error msg="Failed writing error for HTTP response" err="write unix /var/snap/lxd/common/lxd/unix.socket->@: write: broken pipe" url=/1.0/instances writeErr="write unix /var/snap/lxd/common/lxd/unix.socket->@: write: broken pipe"
    time="2022-10-27T17:20:34-04:00" level=error msg="Failed to advertise vsock address" err="Failed connecting to lxd-agent: Get \"https://custom.socket/1.0\": dial vsock vm(39):8443: connect: connection timed out" instance=ubuntudesktopvm2 instanceType=virtual-machine project=default
    time="2022-10-31T15:33:53-04:00" level=warning msg="Could not handover member's responsibilities" err="Failed to transfer leadership: No online voter found"
    

    lxc monitor

    [lxd_11089_lxcmonitor.txt](https://github.com/lxc/lxd/files/9933256/lxd_11089_lxcmonitor.txt)
    
    
  • Don't allow manual targeting of a member outside of the allowed groups of a restricted project.

    Don't allow manual targeting of a member outside of the allowed groups of a restricted project.

    When using a restricted project with restricted.cluster.groups set, it should not be possible to create an instance on a cluster member that is not part of those allowed groups. Currently this is possible by using the manual --target flag.

    ubuntu@kalmar:~$ lxc init ubuntu:j test0 --vm --target kalmar --project ci
    Creating test0
    ubuntu@kalmar:~$ lxc project show ci
    config:
      features.images: "true"
      features.networks: "true"
      features.profiles: "true"
      features.storage.buckets: "true"
      features.storage.volumes: "true"
      limits.containers: "0"
      limits.cpu: "80"
      limits.disk: 600GB
      limits.memory: 300GB
      limits.virtual-machines: "10"
      restricted: "true"
      restricted.cluster.groups: ci
      restricted.devices.nic: allow
      restricted.networks.access: br0
    description: ""
    name: ci
    used_by:
    - /1.0/instances/test0?project=ci
    - /1.0/profiles/default?project=ci
    - /1.0/images/ce64164ca818f565d3667c817b7659d38e279dadc24eb6be97ec95728a681de2?project=ci
    ubuntu@kalmar:~$ lxc cluster group show ci
    description: ""
    members:
    - tatanga
    name: ci
    ubuntu@kalmar:~$ lxc info test0
    Error: Instance not found
    ubuntu@kalmar:~$ lxc info test0 --project ci
    Name: test0
    Status: STOPPED
    Type: virtual-machine
    Architecture: aarch64
    Location: kalmar
    Created: 2023/01/06 19:16 UTC
    

    Reported by @morphis
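
    For context, a hedged usage note (not the fix itself): targeting the project's cluster group rather than a specific member keeps placement inside the allowed members, using the standard --target @<group> syntax:

    lxc init ubuntu:j test1 --vm --target @ci --project ci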

  • Instance: Fixes delete of ephemeral VM on stop

    Instance: Fixes delete of ephemeral VM on stop

    Moves the IsRunning check out of the internal delete() function and into the exported Delete() function. This avoids failing the delete() call made from onStop() when deleting an ephemeral VM on stop. The delete() function is only called from Delete() and onStop().

    Although containers were not affected (because their notion of "running", unlike that of VMs, doesn't include an ongoing stop operation), I've also moved the check in the LXC driver to the same place for consistency.
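
    A quick manual check of the new behaviour (a hedged sketch; the instance name is whatever lxc launch assigns):

    lxc launch ubuntu:jammy --vm --ephemeral
    # Stopping an ephemeral VM should now also delete it rather than erroring out
    lxc stop <name>
    lxc list <name>   # the instance should no longer be listed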

    Fixes #11261

    Tests: https://github.com/lxc/lxc-ci/pull/690

    Signed-off-by: Thomas Parrott [email protected]

  • LXD fails to stop/delete ephemeral VMs

    LXD fails to stop/delete ephemeral VMs

    Required information

    • Distribution: Ubuntu
    • Distribution version: Jammy / Lunar
    • The output of "lxc info" (pasted below)

    Issue description

    LXD fails to stop/delete ephemeral VMs.

    Steps to reproduce

    paride@diglett:~$ lxc launch ubuntu:focal --vm --ephemeral
    Creating the instance
    Instance name is: immune-goose
    Starting immune-goose
    
    paride@diglett:~$ lxc stop immune-goose
    Error: The instance is already running
    Try `lxc info --show-log immune-goose` for more info
    

    The output of the suggested command is:

    paride@diglett:~$ lxc info --show-log immune-goose
    Name: immune-goose
    Status: STOPPED
    Type: virtual-machine (ephemeral)
    Architecture: x86_64
    Created: 2023/01/08 15:52 UTC
    Last Used: 2023/01/08 15:52 UTC
    
    Log:
    
    warning: tap: open vhost char device failed: Permission denied
    warning: tap: open vhost char device failed: Permission denied
    qemu-system-x86_64: warning: 9p: degraded performance: a reasonable high msize should be chosen on client/guest side (chosen msize is <= 8192). See https://wiki.qemu.org/Documentation/9psetup#msize for details.
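
    Since lxc info already reports the instance as STOPPED, a possible manual cleanup (a hedged sketch, not a fix) is to remove the leftover ephemeral instance by hand:

    lxc delete immune-goose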
    

    Information to attach

    • [ ] Any relevant kernel output (dmesg)
    • [x] Container log (lxc info NAME --show-log)
    • [ ] Container configuration (lxc config show NAME --expanded)
    • [ ] Main daemon log (at /var/log/lxd/lxd.log or /var/snap/lxd/common/lxd/logs/lxd.log)
    • [ ] Output of the client with --debug
    • [ ] Output of the daemon with --debug (alternatively output of lxc monitor while reproducing the issue)
    paride@diglett:~$ lxc info
    config:
      core.https_address: '[::]'
      core.trust_password: true
    api_extensions:
    - storage_zfs_remove_snapshots
    - container_host_shutdown_timeout
    - container_stop_priority
    - container_syscall_filtering
    - auth_pki
    - container_last_used_at
    - etag
    - patch
    - usb_devices
    - https_allowed_credentials
    - image_compression_algorithm
    - directory_manipulation
    - container_cpu_time
    - storage_zfs_use_refquota
    - storage_lvm_mount_options
    - network
    - profile_usedby
    - container_push
    - container_exec_recording
    - certificate_update
    - container_exec_signal_handling
    - gpu_devices
    - container_image_properties
    - migration_progress
    - id_map
    - network_firewall_filtering
    - network_routes
    - storage
    - file_delete
    - file_append
    - network_dhcp_expiry
    - storage_lvm_vg_rename
    - storage_lvm_thinpool_rename
    - network_vlan
    - image_create_aliases
    - container_stateless_copy
    - container_only_migration
    - storage_zfs_clone_copy
    - unix_device_rename
    - storage_lvm_use_thinpool
    - storage_rsync_bwlimit
    - network_vxlan_interface
    - storage_btrfs_mount_options
    - entity_description
    - image_force_refresh
    - storage_lvm_lv_resizing
    - id_map_base
    - file_symlinks
    - container_push_target
    - network_vlan_physical
    - storage_images_delete
    - container_edit_metadata
    - container_snapshot_stateful_migration
    - storage_driver_ceph
    - storage_ceph_user_name
    - resource_limits
    - storage_volatile_initial_source
    - storage_ceph_force_osd_reuse
    - storage_block_filesystem_btrfs
    - resources
    - kernel_limits
    - storage_api_volume_rename
    - macaroon_authentication
    - network_sriov
    - console
    - restrict_devlxd
    - migration_pre_copy
    - infiniband
    - maas_network
    - devlxd_events
    - proxy
    - network_dhcp_gateway
    - file_get_symlink
    - network_leases
    - unix_device_hotplug
    - storage_api_local_volume_handling
    - operation_description
    - clustering
    - event_lifecycle
    - storage_api_remote_volume_handling
    - nvidia_runtime
    - container_mount_propagation
    - container_backup
    - devlxd_images
    - container_local_cross_pool_handling
    - proxy_unix
    - proxy_udp
    - clustering_join
    - proxy_tcp_udp_multi_port_handling
    - network_state
    - proxy_unix_dac_properties
    - container_protection_delete
    - unix_priv_drop
    - pprof_http
    - proxy_haproxy_protocol
    - network_hwaddr
    - proxy_nat
    - network_nat_order
    - container_full
    - candid_authentication
    - backup_compression
    - candid_config
    - nvidia_runtime_config
    - storage_api_volume_snapshots
    - storage_unmapped
    - projects
    - candid_config_key
    - network_vxlan_ttl
    - container_incremental_copy
    - usb_optional_vendorid
    - snapshot_scheduling
    - snapshot_schedule_aliases
    - container_copy_project
    - clustering_server_address
    - clustering_image_replication
    - container_protection_shift
    - snapshot_expiry
    - container_backup_override_pool
    - snapshot_expiry_creation
    - network_leases_location
    - resources_cpu_socket
    - resources_gpu
    - resources_numa
    - kernel_features
    - id_map_current
    - event_location
    - storage_api_remote_volume_snapshots
    - network_nat_address
    - container_nic_routes
    - rbac
    - cluster_internal_copy
    - seccomp_notify
    - lxc_features
    - container_nic_ipvlan
    - network_vlan_sriov
    - storage_cephfs
    - container_nic_ipfilter
    - resources_v2
    - container_exec_user_group_cwd
    - container_syscall_intercept
    - container_disk_shift
    - storage_shifted
    - resources_infiniband
    - daemon_storage
    - instances
    - image_types
    - resources_disk_sata
    - clustering_roles
    - images_expiry
    - resources_network_firmware
    - backup_compression_algorithm
    - ceph_data_pool_name
    - container_syscall_intercept_mount
    - compression_squashfs
    - container_raw_mount
    - container_nic_routed
    - container_syscall_intercept_mount_fuse
    - container_disk_ceph
    - virtual-machines
    - image_profiles
    - clustering_architecture
    - resources_disk_id
    - storage_lvm_stripes
    - vm_boot_priority
    - unix_hotplug_devices
    - api_filtering
    - instance_nic_network
    - clustering_sizing
    - firewall_driver
    - projects_limits
    - container_syscall_intercept_hugetlbfs
    - limits_hugepages
    - container_nic_routed_gateway
    - projects_restrictions
    - custom_volume_snapshot_expiry
    - volume_snapshot_scheduling
    - trust_ca_certificates
    - snapshot_disk_usage
    - clustering_edit_roles
    - container_nic_routed_host_address
    - container_nic_ipvlan_gateway
    - resources_usb_pci
    - resources_cpu_threads_numa
    - resources_cpu_core_die
    - api_os
    - container_nic_routed_host_table
    - container_nic_ipvlan_host_table
    - container_nic_ipvlan_mode
    - resources_system
    - images_push_relay
    - network_dns_search
    - container_nic_routed_limits
    - instance_nic_bridged_vlan
    - network_state_bond_bridge
    - usedby_consistency
    - custom_block_volumes
    - clustering_failure_domains
    - resources_gpu_mdev
    - console_vga_type
    - projects_limits_disk
    - network_type_macvlan
    - network_type_sriov
    - container_syscall_intercept_bpf_devices
    - network_type_ovn
    - projects_networks
    - projects_networks_restricted_uplinks
    - custom_volume_backup
    - backup_override_name
    - storage_rsync_compression
    - network_type_physical
    - network_ovn_external_subnets
    - network_ovn_nat
    - network_ovn_external_routes_remove
    - tpm_device_type
    - storage_zfs_clone_copy_rebase
    - gpu_mdev
    - resources_pci_iommu
    - resources_network_usb
    - resources_disk_address
    - network_physical_ovn_ingress_mode
    - network_ovn_dhcp
    - network_physical_routes_anycast
    - projects_limits_instances
    - network_state_vlan
    - instance_nic_bridged_port_isolation
    - instance_bulk_state_change
    - network_gvrp
    - instance_pool_move
    - gpu_sriov
    - pci_device_type
    - storage_volume_state
    - network_acl
    - migration_stateful
    - disk_state_quota
    - storage_ceph_features
    - projects_compression
    - projects_images_remote_cache_expiry
    - certificate_project
    - network_ovn_acl
    - projects_images_auto_update
    - projects_restricted_cluster_target
    - images_default_architecture
    - network_ovn_acl_defaults
    - gpu_mig
    - project_usage
    - network_bridge_acl
    - warnings
    - projects_restricted_backups_and_snapshots
    - clustering_join_token
    - clustering_description
    - server_trusted_proxy
    - clustering_update_cert
    - storage_api_project
    - server_instance_driver_operational
    - server_supported_storage_drivers
    - event_lifecycle_requestor_address
    - resources_gpu_usb
    - clustering_evacuation
    - network_ovn_nat_address
    - network_bgp
    - network_forward
    - custom_volume_refresh
    - network_counters_errors_dropped
    - metrics
    - image_source_project
    - clustering_config
    - network_peer
    - linux_sysctl
    - network_dns
    - ovn_nic_acceleration
    - certificate_self_renewal
    - instance_project_move
    - storage_volume_project_move
    - cloud_init
    - network_dns_nat
    - database_leader
    - instance_all_projects
    - clustering_groups
    - ceph_rbd_du
    - instance_get_full
    - qemu_metrics
    - gpu_mig_uuid
    - event_project
    - clustering_evacuation_live
    - instance_allow_inconsistent_copy
    - network_state_ovn
    - storage_volume_api_filtering
    - image_restrictions
    - storage_zfs_export
    - network_dns_records
    - storage_zfs_reserve_space
    - network_acl_log
    - storage_zfs_blocksize
    - metrics_cpu_seconds
    - instance_snapshot_never
    - certificate_token
    - instance_nic_routed_neighbor_probe
    - event_hub
    - agent_nic_config
    - projects_restricted_intercept
    - metrics_authentication
    - images_target_project
    - cluster_migration_inconsistent_copy
    - cluster_ovn_chassis
    - container_syscall_intercept_sched_setscheduler
    - storage_lvm_thinpool_metadata_size
    - storage_volume_state_total
    - instance_file_head
    - instances_nic_host_name
    - image_copy_profile
    - container_syscall_intercept_sysinfo
    - clustering_evacuation_mode
    - resources_pci_vpd
    - qemu_raw_conf
    - storage_cephfs_fscache
    - network_load_balancer
    - vsock_api
    - instance_ready_state
    - network_bgp_holdtime
    - storage_volumes_all_projects
    - metrics_memory_oom_total
    - storage_buckets
    - storage_buckets_create_credentials
    - metrics_cpu_effective_total
    - projects_networks_restricted_access
    - storage_buckets_local
    - loki
    - acme
    - internal_metrics
    - cluster_join_token_expiry
    - remote_token_expiry
    - init_preseed
    - storage_volumes_created_at
    - cpu_hotplug
    - projects_networks_zones
    api_status: stable
    api_version: "1.0"
    auth: trusted
    public: false
    auth_methods:
    - tls
    environment:
      addresses:
      - 10.245.168.20:8443
      - 10.0.1.1:8443
      - 10.0.19.1:8443
      - 10.0.18.1:8443
      - 192.168.122.1:8443
      - 10.109.225.1:8443
      - '[fd42:1b7e:739f:50c0::1]:8443'
      - 172.17.0.1:8443
      architectures:
      - x86_64
      - i686
      certificate: |
        -----BEGIN CERTIFICATE-----
        MIIFSjCCAzKgAwIBAgIRAIqNRgk6JiVdB08oha7hMlIwDQYJKoZIhvcNAQELBQAw
        NTEcMBoGA1UEChMTbGludXhjb250YWluZXJzLm9yZzEVMBMGA1UEAwwMcm9vdEBk
        aWdsZXR0MB4XDTE4MDYxNDE2MDY1NVoXDTI4MDYxMTE2MDY1NVowNTEcMBoGA1UE
        ChMTbGludXhjb250YWluZXJzLm9yZzEVMBMGA1UEAwwMcm9vdEBkaWdsZXR0MIIC
        IjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAqMMxb5zfQa0vXUoZ1DLRS3XK
        FJuuC8V6lAxKlcHw6riLDZDj4SJ/ucooEKtwHFvUXi/VyTWDNjle87lImwRML/Ud
        wOymRgb3kthgmoR22WhuhLdp+V2D5wEKicEcT/EVDcrIKtczz4NVkBsb7YXWn2vH
        YFaQTDR7DwOW25hZmll079GHAHLhldpO15YXI3GF5amGVhAHlGXRoa95CdEuWvZV
        nKt3Gb/CceMvactjCRffNK/Hn2XfO5m/HFk092yoTO+z6u5L0uxOnIYAxB5aQSKX
        4nSS62BOqduiiLysETsEYdgN5r4drsXZoU9DW0i8f4vOtMuQf4QHFE+Z/g/ldVpr
        9KyI3R6xMBnPbQ2EamUYsUleEleOV3272FzsTb9nJKl5+rHuRcVoAH3rmxJfWOZk
        fKm4ag/wkfYbT3Z3S3XDX2m1tguH2wCMNZMOwh8llrQlow3E3EE31HvzN7Ep9NaS
        ZOKet+o+jjT/PvvwZi97bAGAoL7/RGOoHREvIIWEeiczvGZmoPv3sY1f6Q5Sr6zT
        fH3x5xWSizmzDSJ2ydSbKEedbJqxh+KG8Lf0kEKRBADDnZTAfgy0VTtfYpugBSpL
        +ZMB6Dj42s0yLGFJBBomhlCFBy4B5fTQlnxROD9k+C2f0qKNCs+MBxGKF6v7zFq1
        5ZK/7q00CuS2sgb5XwsCAwEAAaNVMFMwDgYDVR0PAQH/BAQDAgWgMBMGA1UdJQQM
        MAoGCCsGAQUFBwMBMAwGA1UdEwEB/wQCMAAwHgYDVR0RBBcwFYIHZGlnbGV0dIcE
        CvWoFIcEwKh6ATANBgkqhkiG9w0BAQsFAAOCAgEAl2+HaoYsayRyqAx9ZBtLc9C4
        VZJofRRxrO2IzxYm2SRLugmwS3abU3J5xjk3WJUJjDcnmTVrwEurWnoxD/VljHt0
        YQTCWj6Fuk9X3TlsHjRmqo109limH0VOn//xZ3dtI3IFnOYzYjGMLX0QxkTJk95o
        CtzHhT5FXxO2FDibRLIUVlJv9ZXMdFM3mxGy/I16ktHMjI+HVad4uXwL/7ZcA9u3
        X8on8iun01YTtKozKPM6DSjTh0QR6kAqtroPeGPcxiCAVwQV5yB5wIuglf0tXZdU
        YTScUlbrYmvGKvhVj363sFLnnZO7/SN65564Rw1T8Mto5J9z7u5/3fa31UcDmvYf
        6v/QeHXegCoDANGWNL2ZuYlU5/xUSDa30LERJZFg12LS43e1VPikrwOomfWyf0At
        /saRlSop1/9E4Ez+LZh4pI5D0VjClJs5901SfSlumNEfOJHCnE6Eeg5MKU5YORLA
        Grjif62zmROqkcb4xNFz/jrTnoSECo4Ypbq1PSBW1n6bD26Ml6gaf2TGKrYMPaCd
        r8YZ/n/UOIZRuTqsHcuB4NbeWL11390gX0elDNNxEY1G+anLEFgfQh+TWVGMr6Qk
        ASvFnsLiqKakm94ust7i8P0qs1n8xAxrOGLNChtS7kjC2+y7a4plCyCVz59KQMa5
        ZOZsAgwmZ69aOySDxSE=
        -----END CERTIFICATE-----
      certificate_fingerprint: 8a9fd84e11b5d47d094948343c45b1875adccc1d48073c6d067019e963c224cf
      driver: lxc | qemu
      driver_version: 5.0.1 | 7.1.0
      firewall: nftables
      kernel: Linux
      kernel_architecture: x86_64
      kernel_features:
        idmapped_mounts: "true"
        netnsid_getifaddrs: "true"
        seccomp_listener: "true"
        seccomp_listener_continue: "true"
        shiftfs: "false"
        uevent_injection: "true"
        unpriv_fscaps: "true"
      kernel_version: 5.19.0-23-generic
      lxc_features:
        cgroup2: "true"
        core_scheduling: "true"
        devpts_fd: "true"
        idmapped_mounts_v2: "true"
        mount_injection_file: "true"
        network_gateway_device_route: "true"
        network_ipvlan: "true"
        network_l2proxy: "true"
        network_phys_macvlan_mtu: "true"
        network_veth_router: "true"
        pidfd: "true"
        seccomp_allow_deny_syntax: "true"
        seccomp_notify: "true"
        seccomp_proxy_send_notify_fd: "true"
      os_name: Ubuntu
      os_version: "23.04"
      project: default
      server: lxd
      server_clustered: false
      server_event_mode: full-mesh
      server_name: diglett
      server_pid: 2521301
      server_version: "5.9"
      storage: zfs
      storage_version: 2.1.5-1ubuntu6
      storage_supported_drivers:
      - name: cephfs
        version: 15.2.17
        remote: true
      - name: cephobject
        version: 15.2.17
        remote: true
      - name: dir
        version: "1"
        remote: false
      - name: lvm
        version: 2.03.07(2) (2019-11-30) / 1.02.167 (2019-11-30) / 4.47.0
        remote: false
      - name: zfs
        version: 2.1.5-1ubuntu6
        remote: false
      - name: btrfs
        version: 5.4.1
        remote: false
      - name: ceph
        version: 15.2.17
        remote: true
    
  • Work toward better OpenSSF Best Practices badge

    Work toward better OpenSSF Best Practices badge

    Currently, LXD meets all the criteria for the passing badge. Looking at the criteria for the silver badge, it seems that most of them only need to be filled in, since we already apply those best practices; the same goes for the gold badge.

    As of today, LXD's compliance level with the three badges is:

    Passing: 100%, Silver: 13%, Gold: 17%

  • VM: qemu-img convert Permission denied

    VM: qemu-img convert Permission denied

    I'm running a CI pipeline in an LXD container. Jobs run in a container or a VM, with up to 2 concurrent containers/VMs. The machine (a laptop) has 4 cores and 16G of RAM; the container has 8G. There's a job that runs 4 builds in VMs (1G per VM). The third build fails to launch with the error below.

    My relatively-uneducated guess points at a race condition or bug in the qemu-img apparmor profile application.
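
    A hedged way to check that guess while reproducing (it assumes the AppArmor denials show up in the host's kernel log, as in the audit lines quoted elsewhere on this page):

    # Watch for AppArmor denials mentioning qemu-img while the failing build runs
    sudo dmesg --follow | grep -E 'apparmor="DENIED".*qemu'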

    Required information

    The Debian host has LXD built from source; the Ubuntu container uses the snap build.

    • Distribution: host=debian, container=ubuntu
    • Distribution version: host=bullseye, container=jammy
    • The output of "lxc info" or if that fails:
    config:
      core.proxy_http: https://x.x.x.x:x
      core.proxy_https: https://x.x.x.x:x
      core.trust_password: true
    api_extensions:
    - storage_zfs_remove_snapshots
    - container_host_shutdown_timeout
    - container_stop_priority
    - container_syscall_filtering
    - auth_pki
    - container_last_used_at
    - etag
    - patch
    - usb_devices
    - https_allowed_credentials
    - image_compression_algorithm
    - directory_manipulation
    - container_cpu_time
    - storage_zfs_use_refquota
    - storage_lvm_mount_options
    - network
    - profile_usedby
    - container_push
    - container_exec_recording
    - certificate_update
    - container_exec_signal_handling
    - gpu_devices
    - container_image_properties
    - migration_progress
    - id_map
    - network_firewall_filtering
    - network_routes
    - storage
    - file_delete
    - file_append
    - network_dhcp_expiry
    - storage_lvm_vg_rename
    - storage_lvm_thinpool_rename
    - network_vlan
    - image_create_aliases
    - container_stateless_copy
    - container_only_migration
    - storage_zfs_clone_copy
    - unix_device_rename
    - storage_lvm_use_thinpool
    - storage_rsync_bwlimit
    - network_vxlan_interface
    - storage_btrfs_mount_options
    - entity_description
    - image_force_refresh
    - storage_lvm_lv_resizing
    - id_map_base
    - file_symlinks
    - container_push_target
    - network_vlan_physical
    - storage_images_delete
    - container_edit_metadata
    - container_snapshot_stateful_migration
    - storage_driver_ceph
    - storage_ceph_user_name
    - resource_limits
    - storage_volatile_initial_source
    - storage_ceph_force_osd_reuse
    - storage_block_filesystem_btrfs
    - resources
    - kernel_limits
    - storage_api_volume_rename
    - macaroon_authentication
    - network_sriov
    - console
    - restrict_devlxd
    - migration_pre_copy
    - infiniband
    - maas_network
    - devlxd_events
    - proxy
    - network_dhcp_gateway
    - file_get_symlink
    - network_leases
    - unix_device_hotplug
    - storage_api_local_volume_handling
    - operation_description
    - clustering
    - event_lifecycle
    - storage_api_remote_volume_handling
    - nvidia_runtime
    - container_mount_propagation
    - container_backup
    - devlxd_images
    - container_local_cross_pool_handling
    - proxy_unix
    - proxy_udp
    - clustering_join
    - proxy_tcp_udp_multi_port_handling
    - network_state
    - proxy_unix_dac_properties
    - container_protection_delete
    - unix_priv_drop
    - pprof_http
    - proxy_haproxy_protocol
    - network_hwaddr
    - proxy_nat
    - network_nat_order
    - container_full
    - candid_authentication
    - backup_compression
    - candid_config
    - nvidia_runtime_config
    - storage_api_volume_snapshots
    - storage_unmapped
    - projects
    - candid_config_key
    - network_vxlan_ttl
    - container_incremental_copy
    - usb_optional_vendorid
    - snapshot_scheduling
    - snapshot_schedule_aliases
    - container_copy_project
    - clustering_server_address
    - clustering_image_replication
    - container_protection_shift
    - snapshot_expiry
    - container_backup_override_pool
    - snapshot_expiry_creation
    - network_leases_location
    - resources_cpu_socket
    - resources_gpu
    - resources_numa
    - kernel_features
    - id_map_current
    - event_location
    - storage_api_remote_volume_snapshots
    - network_nat_address
    - container_nic_routes
    - rbac
    - cluster_internal_copy
    - seccomp_notify
    - lxc_features
    - container_nic_ipvlan
    - network_vlan_sriov
    - storage_cephfs
    - container_nic_ipfilter
    - resources_v2
    - container_exec_user_group_cwd
    - container_syscall_intercept
    - container_disk_shift
    - storage_shifted
    - resources_infiniband
    - daemon_storage
    - instances
    - image_types
    - resources_disk_sata
    - clustering_roles
    - images_expiry
    - resources_network_firmware
    - backup_compression_algorithm
    - ceph_data_pool_name
    - container_syscall_intercept_mount
    - compression_squashfs
    - container_raw_mount
    - container_nic_routed
    - container_syscall_intercept_mount_fuse
    - container_disk_ceph
    - virtual-machines
    - image_profiles
    - clustering_architecture
    - resources_disk_id
    - storage_lvm_stripes
    - vm_boot_priority
    - unix_hotplug_devices
    - api_filtering
    - instance_nic_network
    - clustering_sizing
    - firewall_driver
    - projects_limits
    - container_syscall_intercept_hugetlbfs
    - limits_hugepages
    - container_nic_routed_gateway
    - projects_restrictions
    - custom_volume_snapshot_expiry
    - volume_snapshot_scheduling
    - trust_ca_certificates
    - snapshot_disk_usage
    - clustering_edit_roles
    - container_nic_routed_host_address
    - container_nic_ipvlan_gateway
    - resources_usb_pci
    - resources_cpu_threads_numa
    - resources_cpu_core_die
    - api_os
    - container_nic_routed_host_table
    - container_nic_ipvlan_host_table
    - container_nic_ipvlan_mode
    - resources_system
    - images_push_relay
    - network_dns_search
    - container_nic_routed_limits
    - instance_nic_bridged_vlan
    - network_state_bond_bridge
    - usedby_consistency
    - custom_block_volumes
    - clustering_failure_domains
    - resources_gpu_mdev
    - console_vga_type
    - projects_limits_disk
    - network_type_macvlan
    - network_type_sriov
    - container_syscall_intercept_bpf_devices
    - network_type_ovn
    - projects_networks
    - projects_networks_restricted_uplinks
    - custom_volume_backup
    - backup_override_name
    - storage_rsync_compression
    - network_type_physical
    - network_ovn_external_subnets
    - network_ovn_nat
    - network_ovn_external_routes_remove
    - tpm_device_type
    - storage_zfs_clone_copy_rebase
    - gpu_mdev
    - resources_pci_iommu
    - resources_network_usb
    - resources_disk_address
    - network_physical_ovn_ingress_mode
    - network_ovn_dhcp
    - network_physical_routes_anycast
    - projects_limits_instances
    - network_state_vlan
    - instance_nic_bridged_port_isolation
    - instance_bulk_state_change
    - network_gvrp
    - instance_pool_move
    - gpu_sriov
    - pci_device_type
    - storage_volume_state
    - network_acl
    - migration_stateful
    - disk_state_quota
    - storage_ceph_features
    - projects_compression
    - projects_images_remote_cache_expiry
    - certificate_project
    - network_ovn_acl
    - projects_images_auto_update
    - projects_restricted_cluster_target
    - images_default_architecture
    - network_ovn_acl_defaults
    - gpu_mig
    - project_usage
    - network_bridge_acl
    - warnings
    - projects_restricted_backups_and_snapshots
    - clustering_join_token
    - clustering_description
    - server_trusted_proxy
    - clustering_update_cert
    - storage_api_project
    - server_instance_driver_operational
    - server_supported_storage_drivers
    - event_lifecycle_requestor_address
    - resources_gpu_usb
    - clustering_evacuation
    - network_ovn_nat_address
    - network_bgp
    - network_forward
    - custom_volume_refresh
    - network_counters_errors_dropped
    - metrics
    - image_source_project
    - clustering_config
    - network_peer
    - linux_sysctl
    - network_dns
    - ovn_nic_acceleration
    - certificate_self_renewal
    - instance_project_move
    - storage_volume_project_move
    - cloud_init
    - network_dns_nat
    - database_leader
    - instance_all_projects
    - clustering_groups
    - ceph_rbd_du
    - instance_get_full
    - qemu_metrics
    - gpu_mig_uuid
    - event_project
    - clustering_evacuation_live
    - instance_allow_inconsistent_copy
    - network_state_ovn
    - storage_volume_api_filtering
    - image_restrictions
    - storage_zfs_export
    - network_dns_records
    - storage_zfs_reserve_space
    - network_acl_log
    - storage_zfs_blocksize
    - metrics_cpu_seconds
    - instance_snapshot_never
    - certificate_token
    - instance_nic_routed_neighbor_probe
    - event_hub
    - agent_nic_config
    - projects_restricted_intercept
    - metrics_authentication
    - images_target_project
    - cluster_migration_inconsistent_copy
    - cluster_ovn_chassis
    - container_syscall_intercept_sched_setscheduler
    - storage_lvm_thinpool_metadata_size
    - storage_volume_state_total
    - instance_file_head
    - instances_nic_host_name
    - image_copy_profile
    - container_syscall_intercept_sysinfo
    - clustering_evacuation_mode
    - resources_pci_vpd
    - qemu_raw_conf
    - storage_cephfs_fscache
    - network_load_balancer
    - vsock_api
    - instance_ready_state
    - network_bgp_holdtime
    - storage_volumes_all_projects
    - metrics_memory_oom_total
    - storage_buckets
    - storage_buckets_create_credentials
    - metrics_cpu_effective_total
    - projects_networks_restricted_access
    - storage_buckets_local
    - loki
    - acme
    - internal_metrics
    - cluster_join_token_expiry
    - remote_token_expiry
    - init_preseed
    - storage_volumes_created_at
    - cpu_hotplug
    - projects_networks_zones
    api_status: stable
    api_version: "1.0"
    auth: trusted
    public: false
    auth_methods:
    - tls
    environment:
      addresses: []
      architectures:
      - x86_64
      - i686
      certificate: |
        -----BEGIN CERTIFICATE-----
        MIIB/zCCAYSgAwIBAgIRAJV1Z2VlfrMczscSfV9NWqAwCgYIKoZIzj0EAwMwMjEc
        MBoGA1UEChMTbGludXhjb250YWluZXJzLm9yZzESMBAGA1UEAwwJcm9vdEBjaTAx
        MB4XDTIzMDEwMjIxMTYzOFoXDTMyMTIzMDIxMTYzOFowMjEcMBoGA1UEChMTbGlu
        dXhjb250YWluZXJzLm9yZzESMBAGA1UEAwwJcm9vdEBjaTAxMHYwEAYHKoZIzj0C
        AQYFK4EEACIDYgAEIadtPceJQd4giTqrvFyHZ/SCEGN1bCJY+faZO0CBp6ok4dBF
        eAqyeEqQD0oYi4XabyOugRG3msRYqFcS2IxCUR9uGeXdNHd88fI+0nDNbbsnWqm2
        FSWi9j+PqDjX2y8+o14wXDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAwwCgYIKwYB
        BQUHAwEwDAYDVR0TAQH/BAIwADAnBgNVHREEIDAeggRjaTAxhwR/AAABhxAAAAAA
        AAAAAAAAAAAAAAABMAoGCCqGSM49BAMDA2kAMGYCMQCKOEU0jXyLkeZF1GWQX9hn
        Fx/QR8Oqm4OGpjN6jqGSiH3GA4ZqxdjpKGcde+AkwyoCMQDRsyvEi9zMXKA+pHts
        Ol2goBoFeQnYQWDr2bo2z4VAK8cC+v8n3akhA66z0EWumEk=
        -----END CERTIFICATE-----
      certificate_fingerprint: cc90ddac681b08353ea73cc2432a371bc170385e74b652997e3125c22df3d103
      driver: lxc | qemu
      driver_version: 5.0.1 | 7.1.0
      firewall: nftables
      kernel: Linux
      kernel_architecture: x86_64
      kernel_features:
        idmapped_mounts: "false"
        netnsid_getifaddrs: "true"
        seccomp_listener: "true"
        seccomp_listener_continue: "true"
        shiftfs: "false"
        uevent_injection: "true"
        unpriv_fscaps: "false"
      kernel_version: 5.10.0-9-amd64
      lxc_features:
        cgroup2: "true"
        core_scheduling: "true"
        devpts_fd: "true"
        idmapped_mounts_v2: "true"
        mount_injection_file: "true"
        network_gateway_device_route: "true"
        network_ipvlan: "true"
        network_l2proxy: "true"
        network_phys_macvlan_mtu: "true"
        network_veth_router: "true"
        pidfd: "true"
        seccomp_allow_deny_syntax: "true"
        seccomp_notify: "true"
        seccomp_proxy_send_notify_fd: "true"
      os_name: Ubuntu
      os_version: "22.04"
      project: default
      server: lxd
      server_clustered: false
      server_event_mode: full-mesh
      server_name: ci01
      server_pid: 638
      server_version: "5.9"
      storage: dir
      storage_version: "1"
      storage_supported_drivers:
      - name: cephfs
        version: 15.2.17
        remote: true
      - name: cephobject
        version: 15.2.17
        remote: true
      - name: dir
        version: "1"
        remote: false
      - name: lvm
        version: 2.03.07(2) (2019-11-30) / 1.02.167 (2019-11-30)
        remote: false
      - name: btrfs
        version: 5.4.1
        remote: false
      - name: ceph
        version: 15.2.17
        remote: true
    

    Issue description

    Creating a VM frequently fails when extracting the disk image. Example error:

    Error: Failed instance creation: Failed creating instance from image: Failed converting image to raw at "/var/snap/lxd/common/lxd/storage-pools/default/virtual-machines/ci-ccb13070-35ea-4681-8d2a-a6572176a350/root.img": Failed to run: nice -n19 qemu-img convert -f qcow2 -O raw -T none /var/snap/lxd/common/lxd/images/2cee477c19bb59c0c364a4f92ff910cf0fcdaf99f68232858fe4dd69a78ab8c4.rootfs /var/snap/lxd/common/lxd/storage-pools/default/virtual-machines/ci-ccb13070-35ea-4681-8d2a-a6572176a350/root.img: Process exited with non-zero value 1 (qemu-img: /var/snap/lxd/common/lxd/storage-pools/default/virtual-machines/ci-ccb13070-35ea-4681-8d2a-a6572176a350/root.img: error while converting raw: Could not create '/var/snap/lxd/common/lxd/storage-pools/default/virtual-machines/ci-ccb13070-35ea-4681-8d2a-a6572176a350/root.img': Permission denied)
    

    Steps to reproduce

    lxc launch --vm images:debian/bullseye
    
    # alternatively, high cpu load plus
    while :; do
            ctn=$(python3 -c 'print(str(__import__("uuid").uuid4()))')
            ctn="ci-${ctn}"
            lxc --debug init images:debian/bullseye --vm "${ctn}"
            lxc delete -f "${ctn}"
    done
    

    Information to attach

    • [ ] Any relevant kernel output (dmesg) none
    • [ ] Container log (lxc info NAME --show-log) container not created
    • [ ] Container configuration (lxc config show NAME --expanded) n/a
    • [ ] Main daemon log (at /var/log/lxd/lxd.log or /var/snap/lxd/common/lxd/logs/lxd.log)
    • [ ] Output of the client with --debug
    • [ ] Output of the daemon with --debug (alternatively output of lxc monitor while reproducing the issue)