Having trouble creating a container in a second storage pool (Permissions Error)

(Please note as a new user I can ONLY add one image to a post, so I’ve put all my images together in one large image at the bottom and referenced them by number where they need to be)

ok, so here’s the scenario:

I’m running Ubuntu Server 20.04. The primary drive, where my OS and the LXD/LXC snap package are installed, is almost full, so I’ve added a second 1 TB drive to the system.

I’ve split that drive into two partitions, each 500 GB in size.

I’ve created two dir-based storage pools on these two partitions as follows:


Creating a container in the default pool works, as it always has, with no problems.

With my newly created pools however, I keep hitting the same issue time and time again:


I’ve pored over quite a few posts and cries for help here and on Ask Ubuntu, among other places, which, while not quite the same problem, gave me a few ideas to try, namely that the partitions had “noexec” on them. They did originally, but I’ve since removed that flag, and exec is now allowed, yet container creation still keeps failing.
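For anyone checking their own mounts for this: here’s a minimal sketch of testing an options string for a given flag. `has_flag` is a hypothetical helper of mine, and the options string is a sample, not read from a live system; on a real box you’d feed it the output of `findmnt -no OPTIONS <mountpoint>`.

```shell
# has_flag FLAG OPTIONS: print yes/no depending on whether FLAG appears
# as a whole entry in a comma-separated mount options string.
has_flag() {
    case ",$2," in
        *",$1,"*) echo yes ;;
        *)        echo no  ;;
    esac
}

opts="rw,nosuid,nodev,noexec,relatime"   # sample; e.g. from: findmnt -no OPTIONS /mnt/storage1
has_flag noexec "$opts"                  # prints: yes
has_flag exec   "$opts"                  # prints: no ("noexec" does not match ",exec,")

# To clear the flag on a live mount without rebooting (assumes root):
#   mount -o remount,exec /mnt/storage1
```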


I also thought that maybe something was wrong with the way LXD had configured the storage pools, as it has created these folders in its snap folder:


Which appear to be empty (I suspect it may have wanted to make symlinks)

I have since ruled that out however, because when I look in the folders on the second disk I see that LXD has indeed created the root file systems for the containers:


Which I suspect it would not have done if it had been using these empty folders.

Right now, I’m totally out of ideas as to what I’m doing wrong.


Hi, I’m not sure what the problem is, but why did you mount those partitions? Instead, you can create the pool with the lxc command, like this:

```
lxc storage create tmppool <storage_type> source=/dev/sdb1
```

I have tested this with ZFS, but you can use btrfs or LVM as well.

Hi @cemzafer ,

I have mounted the partitions, because I set them up to use the “dir” storage type.

I need to use the “dir” storage type as this system is used to deploy software applications under development, and the continuous deployment engine being used needs to be able to write files directly into the containers file system after an application is built, but before the container is started for the first time.
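To illustrate why “dir” matters for my use case: with a dir-backed pool, the container’s root file system is (as far as I understand the dir driver’s layout) a plain directory under the pool’s `source=` path, so a CI step can write into it before first boot. The pool source below is from this thread; the container name is hypothetical.

```shell
# With the dir driver, a container's rootfs lives under the pool source as
# <source>/containers/<name>/rootfs (layout assumption based on the dir driver).
POOL_SRC=/mnt/storage1        # the pool's source= directory
CT=myapp                      # hypothetical container name
ROOTFS="$POOL_SRC/containers/$CT/rootfs"
echo "$ROOTFS"                # prints: /mnt/storage1/containers/myapp/rootfs

# A CI step could then drop build artifacts straight in, e.g.:
#   cp -r ./dist/. "$ROOTFS/opt/app/"
# after the application is built but before the container first starts.
```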

I was originally contemplating using btrfs, but once I found that my CI engine couldn’t write to that (ZFS and the others being the same), I had no choice but to use “dir”.

For what it’s worth, I suspect it’s definitely something to do with the partition mount, because if I create a container in “default” and verify it works, then move it into one of the new pools, it too fails to start with the same error.

(I’m still experimenting and trying different things at present to see if I can narrow down the source of the permissions error)

I see; if that is the case, you can use LVM instead of dir.

```
lxc storage create lvmpool lvm source=<device/partition>
```

@cemzafer thanks, I’ll try that again, but I did try LVM at one point and couldn’t find a way to give my CI access to the container file system.

I’m currently trying some experiments using btrfs, and “lxc exec” to get the deployment code onto the container file systems.
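The experiment amounts to pushing files through the LXD API instead of writing into the pool directory. This is a dry-run sketch only: the container name and paths are hypothetical, and the commands are echoed here rather than executed (`lxc file push` and `lxc exec` are the real LXD subcommands being tried).

```shell
# Dry-run: print the commands a CI deployment step might issue against a
# btrfs-backed container, rather than writing into its rootfs directly.
CT=myapp    # hypothetical container name

deploy_cmds() {
    echo "lxc file push -r ./dist $CT/opt/app"
    echo "lxc exec $CT -- chmod -R 0755 /opt/app"
}

deploy_cmds
```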


For future reference it would be even better to put each screen output as text rather than an image using the 3 backticks to wrap code blocks. That way the text is searchable and we don’t have to zoom in to see what it says :slight_smile:

Please show the output (as text) of `lxc storage show devcontainers` and `lxc storage show syscontainers`.


Unfortunately I was unable to do that (the backticks thing, as text) at the time I posted the original. I was actually sat at the other end of a Citrix WinFrame device, accessing it via an RDP redirect, and you don’t get a hope in hell’s chance of copy-and-paste out of those things :slight_smile: so what I did was better than nothing. (Windows Key + Shift + S)

I’ll have to do the output for `lxc storage show` later on; I’ve actually destroyed the “dir” partitions and re-created them using btrfs at the moment while I try an experiment on something. It will take me a few hours to finish said experiment, then I’ll re-create the two partitions as “dir”-based pools, re-test, and send the results.


Right then here goes:

I’ve recreated the file systems as I had them when I first posted this, and mounted them at exactly the same mount points… first step, create the pools:

```
root@cloudbox1node4nic1:~# lxc storage create syscontainers dir source=/mnt/storage1
Storage pool syscontainers created
root@cloudbox1node4nic1:~# lxc storage create devcontainers dir source=/mnt/storage2
Storage pool devcontainers created
```

The storage pools are created:

```
root@cloudbox1node4nic1:~# lxc storage list
+---------------+-------------+--------+------------------------------------------------+---------+
|     NAME      | DESCRIPTION | DRIVER |                     SOURCE                     | USED BY |
+---------------+-------------+--------+------------------------------------------------+---------+
| default       |             | dir    | /var/snap/lxd/common/lxd/storage-pools/default | 1       |
+---------------+-------------+--------+------------------------------------------------+---------+
| devcontainers |             | dir    | /mnt/storage2                                  | 0       |
+---------------+-------------+--------+------------------------------------------------+---------+
| syscontainers |             | dir    | /mnt/storage1                                  | 0       |
+---------------+-------------+--------+------------------------------------------------+---------+
```

The partitions are mounted as follows:

```
├─/mnt/storage1                       /dev/sdb1              ext4        rw,nosuid,nodev,relatime
├─/mnt/storage2                       /dev/sdb2              ext4        rw,nosuid,nodev,relatime
```

Created a new container in one of the new pools:

```
root@cloudbox1node4nic1:~# lxc launch ubuntu:20.04 test1 -s syscontainers
Creating test1
Starting test1
```


For completeness, however:

```
root@cloudbox1node4nic1:~# lxc storage show syscontainers
config:
  source: /mnt/storage1
description: ""
name: syscontainers
driver: dir
used_by:
- /1.0/instances/test1
status: Created
locations:
- none
root@cloudbox1node4nic1:~# lxc storage show devcontainers
config:
  source: /mnt/storage2
description: ""
name: devcontainers
driver: dir
used_by: []
status: Created
locations:
- none
```

I honestly have no idea what/why/how/who, all I can do is blame it on aliens!

It was something to do with the way the partitions had been created, however, because apart from not being able to get my CI/CD to access the file system, when I blew the partitions away, re-created them, and changed to btrfs, I had no problems.

I really did expect this to fail in the same way as the first attempt did, so I guess all I can say is there must have been something badly wrong with the original partitioning, but what…

well I ain’t got a scooby on that one!

@cemzafer , @tomp - Thanks for your input…


In my first incarnation of setting up the two partitions as “dir”-based pools, I had the “noexec” flag applied to the mount point. I had not actually applied this flag directly in the fstab file; it was put on as a default by Ubuntu, and I didn’t notice it was on the mount until I started digging (after I made this initial post).

I did at some point before I started my “btrfs” experiments, notice that flag was on the mount, and changed the “fstab” file to remove it… but that was all I removed.

I unmounted and remounted the partitions, expecting LXD to just see the change and be good with it, but it obviously didn’t; at the time, however, I DIDN’T KNOW THIS :slight_smile:

I just continued on the path of “Drat, that didn’t work either”

Fast forward: I created the btrfs partitions, destroyed them, uncommented the “fstab” file lines, recreated ext4 partitions, and re-added them to LXD, which at this point obviously paid attention to the new flags, and hence why it worked when I followed up on @tomp 's question.

Why am I so sure of this right now?

Well I was setting up some containers about an hour ago, and I needed one of the users to be able to “sudo” in one of them, and guess what…

Yup, I got a setuid error while inside the container.

This time, however, I understood the error, so I went straight to “fstab” and checked, and yes… “nosuid” is also on by default in Ubuntu and has to be explicitly removed by adding “suid” to the mount options list…

I unmounted and re-mounted the partitions, and nope… still got the error. So I thought: I know, I need to reboot the machine to change a BIOS setting anyway, so I’ll call in to the server room on my way past, set it away rebooting, and go get a cup of coffee while I think about things…

After I returned to my desk (fresh coffee in hand) and the server had finished rebooting, I logged in and quickly tried the sudo test I’d been doing… and lo and behold, it worked…

So the TL;DR here is:

If you’re using the “dir” backend via a separate drive mount under Ubuntu to create storage pools for LXD, make sure that you turn off the “nosuid” and “noexec” flags in your mount entries before you create the new storage pools.

In my case, my fstab entries look like this:

```
/dev/disk/by-uuid/dfb75c7c-2df3-4da4-a77e-8d8a387e3ab7 /mnt/storage1 ext4 rw,user,auto,exec,suid 0 0
/dev/disk/by-uuid/8da4edbf-9f4b-45d1-8bae-5f01ba5c795c /mnt/storage2 ext4 rw,user,auto,exec,suid 0 0
```
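And a quick way to sanity-check that the restrictive flags really are gone from the live mount. This is a sketch: `check_opts` is a hypothetical helper, and the sample string fed to it below is the mount options line from earlier in this thread; on a live system you’d feed it the output of `findmnt -no OPTIONS /mnt/storage1` instead.

```shell
# check_opts OPTIONS: warn if a comma-separated mount options string still
# carries noexec or nosuid, either of which breaks LXD dir-backed containers.
check_opts() {
    case ",$1," in
        *",noexec,"*|*",nosuid,"*) echo "restrictive flags present" ;;
        *)                         echo "exec and suid allowed" ;;
    esac
}

check_opts "rw,nosuid,nodev,relatime"   # the thread's original mount: prints "restrictive flags present"
check_opts "rw,user,exec,suid,nodev"    # after the fstab fix: prints "exec and suid allowed"
```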

Funnily enough, the mounts still have the “nodev” flag on, which is also on by default, but that doesn’t in any way seem to be causing any problems with the “/dev” folder in the root file system.

It took me two and a half days to work out that everything was just because I was being an idiot :joy:

oh well…
