If I attach a storage volume to a profile, it shows in the profile as a disk device.
Is a storage volume simply a shortcut to using disk devices plus raw.idmap?
Is there a way to list all storage volumes attached to a container?
"lxc config device show " almost does that, but it does not show the storage volume’s name. (I also wonder why it doesn’t show other devices, such as the root or nic devices).
I’ve been using what I think is equivalent functionality by defining disk devices in a profile and attaching the profile to one or more containers.
If I want the device to be writable by a user in the container, I also add a raw.idmap for that user in the profile (the same profile or a separate one).
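As a concrete sketch of that setup (the profile name shared-data, the host path /srv/data, the container name c1, and the uid/gid 1000 are all hypothetical):

$ lxc profile create shared-data
$ lxc profile device add shared-data data disk source=/srv/data path=/data
$ lxc profile set shared-data raw.idmap "both 1000 1000"   # host uid/gid 1000 <-> same ids in the container
$ lxc profile add c1 shared-data

(Mapping a host id with raw.idmap may also require allowing it in /etc/subuid and /etc/subgid.)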
The built-in storage volume capabilities of LXD make it very easy to use, but I also need to easily keep track of the containers that use a certain volume. If the volume is attached to a profile and the profile is then attached to containers, I can get this information from the profile, but that adds a level of indirection and makes things more complicated.
I found some answers:
A volume shows up as a disk device, with an additional “pool” parameter.
To find out which containers and profiles a volume is attached to, use:
lxc storage volume show <pool> <volume>
I can use this, together with "lxc config device show <container>", to keep track of which containers are attached to which volume, so I don't need to use profiles for tracking.
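Putting those together, here is a sketch assuming a pool named lxd, a custom volume named mydata, and a container named c1 (the output format is from memory and may differ slightly between LXD versions):

$ lxc storage volume show lxd mydata
config: {}
description: ""
name: mydata
type: custom
used_by:
- /1.0/containers/c1

$ lxc config device show c1
mydata:
  path: /data
  pool: lxd
  source: mydata
  type: disk

The used_by list answers "which containers use this volume", and the pool and source keys of the disk device answer "which volume does this device refer to".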
A storage volume is disk space that has been allocated from a storage pool, to be used by a container. Container images are also stored in storage volumes.
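For example, creating a custom volume in a pool and attaching it to a container looks like this (the names lxd, mydata and c1 are placeholders):

$ lxc storage volume create lxd mydata
$ lxc storage volume attach lxd mydata c1 /data    # mounts the volume at /data in the container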
To list your storage pools, run the following. Here, there is a single storage pool which happens to be called lxd.
$ lxc storage list
+------+-------------+--------+--------+---------+
| NAME | DESCRIPTION | DRIVER | SOURCE | USED BY |
+------+-------------+--------+--------+---------+
| lxd | | zfs | lxd | 6 |
+------+-------------+--------+--------+---------+
Let's list the storage volumes in the storage pool. I have not created a separate custom storage volume, so we are seeing the volumes that LXD created for my containers, plus the storage volumes for the cached container images. There are two containers and three container images, five in total. I do not know why the USED BY column of lxc storage list shows six instead of five; probably because sudo zfs list shows a deleted image, and that may account for the sixth storage volume.
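Roughly what that listing looks like here; the container names c1 and c2 are hypothetical, and the image volume names are abbreviated (LXD actually uses the full fingerprints):

$ lxc storage volume list lxd
+-----------+--------------+-------------+---------+
|   TYPE    |     NAME     | DESCRIPTION | USED BY |
+-----------+--------------+-------------+---------+
| container | c1           |             | 1       |
+-----------+--------------+-------------+---------+
| container | c2           |             | 1       |
+-----------+--------------+-------------+---------+
| image     | 2e4d679ce33b |             | 1       |
+-----------+--------------+-------------+---------+
| image     | 018d083aec13 |             | 1       |
+-----------+--------------+-------------+---------+
| image     | 8f9da4cd832b |             | 1       |
+-----------+--------------+-------------+---------+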
But what are these container images? Can we easily see what those hashes correspond to?
We do that with lxc image list. Indeed, the hashes match the names of the image storage volumes.
$ lxc image list
+--------------------------------------+--------------+--------+---------------------------------------------------+--------+----------+-------------------------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCH | SIZE | UPLOAD DATE |
+--------------------------------------+--------------+--------+---------------------------------------------------+--------+----------+-------------------------------+
| openwrt-chaos_calmer-20181120-212658 | 2e4d679ce33b | no | openwrt 15.05.1 x86_64 (default) (20181120_21:26) | x86_64 | 2.17MB | Nov 20, 2018 at 7:29pm (UTC) |
+--------------------------------------+--------------+--------+---------------------------------------------------+--------+----------+-------------------------------+
| | 018d083aec13 | no | ubuntu 16.04 LTS amd64 (release) (20181114) | x86_64 | 158.12MB | Nov 15, 2018 at 11:03pm (UTC) |
+--------------------------------------+--------------+--------+---------------------------------------------------+--------+----------+-------------------------------+
| | 8f9da4cd832b | no | ubuntu 18.04 LTS amd64 (release) (20181101) | x86_64 | 174.44MB | Nov 13, 2018 at 12:19am (UTC) |
+--------------------------------------+--------------+--------+---------------------------------------------------+--------+----------+-------------------------------+
In retrospect, a disk device is disk space from the host filesystem that has been shared to a container.
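For example (the container name and paths are hypothetical), sharing a host directory as a disk device looks like this:

$ lxc config device add c1 mydir disk source=/srv/data path=/data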
Is it possible to convert an existing disk device (directory, zfs dataset, etc.) into a custom storage volume, so I can manage it as a storage volume (e.g. attach it to containers)?
It seems not. For example, given an existing z1/t1 zfs dataset, I found no command that adopts it as a custom storage volume.
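The closest thing I can sketch is lxc storage volume create, but even if the z1 zpool were registered as an LXD storage pool, that command creates a fresh, empty dataset under the pool's custom/ namespace rather than adopting the existing one (this is my understanding of the zfs driver, not something tested here):

$ lxc storage volume create z1 t1   # would create z1/custom/t1, empty; z1/t1 itself is not adopted

The data would instead have to be copied into the new volume, e.g. with rsync or zfs send/receive.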
Is there a way to make a storage volume read-only? Perhaps for all containers, except one?
I can make a disk device effectively read-only by not mapping the user ids (as long as the directories don't have o+w permissions). Another way is to bind-mount it read-only to another directory and share that.
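A sketch of the bind-mount approach; the paths, container name, and device name are all hypothetical:

$ sudo mkdir -p /srv/data-ro
$ sudo mount --bind /srv/data /srv/data-ro         # second view of the same directory
$ sudo mount -o remount,bind,ro /srv/data-ro       # make that view read-only
$ lxc config device add c1 dataro disk source=/srv/data-ro path=/data

The container then sees /data read-only, regardless of how the user ids are mapped.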