Failed to create BTRFS subvolume: ERROR: target path already exists

Hey.

When I run

lxd init --auto --storage-backend btrfs

in an LXD container (LXD in LXD), I get the following error:

Apr 21 21:34:24 c lxd[2862]: lvl=eror msg="Failed to create BTRFS subvolume \"/var/lib/lxd/storage-pools/default/containers\": ERROR: target path already exists: /var/lib/lxd/storage-pools/default/containers\n." t=2018-04-21T21:34:24+0000

This is with LXD 2.21 from Ubuntu xenial-backports, both on the host and in the container, both with btrfs storage.

What would be the right way to initialize the nested lxd instance with btrfs?

lxd init --auto --storage-backend btrfs

This works fine on a system that’s not running on btrfs, so I suspect it’s related to /var/lib/lxd being on btrfs. Testing that next.
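
A quick way to check that is something like:

stat -f -c %T /var/lib/lxd    # prints the filesystem type backing the path, e.g. "btrfs"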

The same setup on btrfs on the host works fine too, so it’s likely specific to running inside a container. Trying that next.

In a nested container, lxd init --auto --storage-backend btrfs fails for me as well, which is somewhat expected given that the default is to use a loop device and those aren’t allowed in containers.

The interactive lxd init has additional logic for this particular case:

root@c1:~# lxd init
Do you want to configure a new storage pool (yes/no) [default=yes]? 
Name of the new storage pool [default=default]: 
Name of the storage backend to use (dir, btrfs) [default=btrfs]: 
Would you like to create a new btrfs subvolume under /var/lib/lxd (yes/no) [default=yes]: 

We detected that you are running inside an unprivileged container.
This means that unless you manually configured your host otherwise,
you will not have enough uid and gid to allocate to your containers.

LXD can re-use your container's own allocation to avoid the problem.
Doing so makes your nested containers slightly less safe as they could
in theory attack their parent container and gain more privileges than
they otherwise would.

Would you like to have your containers share their parent's allocation (yes/no) [default=yes]? 
Would you like LXD to be available over the network (yes/no) [default=no]? 
Would you like stale cached images to be updated automatically (yes/no) [default=yes]? 
Would you like to create a new network bridge (yes/no) [default=yes]? 
What should the new bridge be called [default=lxdbr0]? 
What IPv4 address should be used (CIDR subnet notation, “auto” or “none”) [default=auto]? 
What IPv6 address should be used (CIDR subnet notation, “auto” or “none”) [default=auto]? 
LXD has been successfully configured.
root@c1:~# lxc storage list
+---------+-------------+--------+------------------------------------+---------+
|  NAME   | DESCRIPTION | DRIVER |               SOURCE               | USED BY |
+---------+-------------+--------+------------------------------------+---------+
| default |             | btrfs  | /var/lib/lxd/storage-pools/default | 1       |
+---------+-------------+--------+------------------------------------+---------+

In your case, I’d suggest deleting /var/lib/lxd/storage-pools/default if it somehow exists without a storage pool being listed in lxc storage list, and then running the interactive lxd init, which has the special btrfs nesting handling.
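
Something along these lines should do it (a rough sketch; the leftover may itself be a btrfs subvolume, hence the extra delete):

lxc storage list                                                                    # confirm no pool references the path
btrfs subvolume delete /var/lib/lxd/storage-pools/default/containers 2>/dev/null   # in case it exists as a subvolume
rm -rf /var/lib/lxd/storage-pools/default                                          # remove the stale directory tree
lxd init                                                                            # interactive init, with the btrfs nesting handling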

Thanks @stgraber. A few more details:

It’s a privileged container launched like:

read -r -d '' raw_lxc <<RAW_LXC
lxc.aa_profile=unconfined
lxc.mount.auto=proc:rw sys:rw cgroup:rw
lxc.cgroup.devices.allow=a
lxc.cap.drop=
lxc.aa_allow_incomplete=1
RAW_LXC
sudo lxc launch \
  --config security.privileged=true \
  --config security.nesting=true \
  --config raw.lxc="${raw_lxc}" \
  ... \
  "ubuntu:16.04" "c"

I also create loop device files before init:

# Create the loop device nodes /dev/loop0 through /dev/loop8 if missing
# (block devices, major number 7, minor = loop index).
for i in {0..8}; do
  if ! test -e "/dev/loop${i}"; then
    mknod "/dev/loop${i}" b 7 "${i}"
  fi
done

Both /var/lib/lxd/storage-pools/ and the output of sudo lxc storage list are empty; they don’t show any existing pools.

Since this is for a CI setup, I’m looking for an automated way (with --auto or an alternative for scripted initialization).
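
One possible scripted alternative, assuming the LXD version in use supports it: lxd init --preseed reads a YAML configuration from stdin. A sketch that mirrors the interactive defaults shown above (the pool source, bridge name, and profile layout are just those defaults):

cat <<EOF | lxd init --preseed
storage_pools:
- name: default
  driver: btrfs
  config:
    source: /var/lib/lxd/storage-pools/default
networks:
- name: lxdbr0
  type: bridge
  config:
    ipv4.address: auto
    ipv6.address: auto
profiles:
- name: default
  devices:
    root:
      path: /
      pool: default
      type: disk
    eth0:
      name: eth0
      nictype: bridged
      parent: lxdbr0
      type: nic
EOF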

I’ve had good luck with simply running lxd init --auto inside the nested container. A subsequent lxc info will show that the btrfs storage driver is in use.
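
That is, roughly:

lxd init --auto
lxc info | grep 'storage:'    # expect "storage: btrfs" in the environment section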

For me it’s storage: dir without explicitly specifying --storage-backend btrfs.

You’re right! My bad; I didn’t realize that the behavior changed somewhere along the way. Running 2.0.x inside a container behaves as I described, but apparently that no longer works with 2.21 inside the container.

If automation is your goal and --auto can no longer do this for us… I discovered that running lxc storage create default btrfs source=/var/lib/lxd/storage-pools/default will create the storage pool successfully, provided you do it prior to lxd init; otherwise lxd init creates the default ‘dir’ pool, which would then be in the way.
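
In script form, that would be roughly (a sketch, untested beyond what’s described above):

lxc storage create default btrfs source=/var/lib/lxd/storage-pools/default   # create the btrfs pool first
lxd init --auto                                                              # then init; with the pool in place, no 'dir' pool should get in the way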

So it looks like

  • Either create and configure everything about LXD from the command line (don’t run lxd init),
  • or, more simply, create a differently named btrfs pool and adjust the default profile accordingly (see the sketch after this list),
  • or, less simply, recreate the default pool as btrfs, though I suspect you’d have to temporarily unlink it from the default profile.
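
A sketch of the middle option (the pool name btrfspool is just an example, and this assumes the default profile already has a root disk device):

lxc storage create btrfspool btrfs source=/var/lib/lxd/storage-pools/btrfspool   # differently named btrfs pool
lxc profile device set default root pool btrfspool                               # point the default profile's root disk at it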

I’ve been running 3.0 with nested 2.0.x in my CI, so I hadn’t noticed.

I just wanted to add that sometimes, when setting LXD up again or recreating storage pools for testing, I have had to take a deeper approach to resetting the storage pools. I typically use a raw block device, such as /dev/sdb, allocated entirely to the pool. Maybe there is a simpler or better way.

Maybe this is useful for someone else.

Wipe Block Device for New Storage Pool Creation
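
Roughly the following (a sketch; it assumes the pool lives on /dev/sdb as above, and it destroys everything on that device):

lxc storage delete default                     # remove the pool from LXD first, if it's still listed
wipefs --all /dev/sdb                          # clear the btrfs (and any other) filesystem signatures
dd if=/dev/zero of=/dev/sdb bs=1M count=10     # zero the first few MB for good measure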

It seems I have to do this level of reset when using btrfs pools, but not when using zfs pools.