When you run incus admin init, Incus runs incus storage create for you with the parameters you selected in the wizard. That means you can freely create additional storage pools, then get Incus
- to use the new storage pool by default,
- to move instances, etc. from one storage pool to another,
- to empty a storage pool and then safely delete it, because you are now using the new storage pool.
Therefore, your question boils down to this: what parameters should be passed to incus storage create in order to create the new storage pool?
The issue is creating the pool on a different disk. Normally I would just do incus storage create btrfspool2 btrfs. But this will create the pool and .img file on the system’s disk. I need to create the img file on a different disk.
Thanks for the link, but I can’t find the answer to my question there. What I’ve done is create the loop device myself and use it while creating the storage pool, via the source option. But then I have to manage the loop device myself, including creating a service to set up the loop device on boot. I was wondering if there’s an “official” recommended way to do this that would let Incus take care of this new pool as it does the “default” one. Managing it manually also prevents us from using the normal Incus tools to manage the pool, e.g. for resizing.
(actually, you are right. It’s a bit nuanced and we have to make it easier.)
We use Incus to launch a VM and, inside it, run incus admin init to set up a BTRFS loop device. This helps to verify the end goal. Later on, we try to create our own separate loop device that resembles the one created by incus admin init.
$ incus launch --vm images:ubuntu/24.04/cloud myincusserver
Launching myincusserver
$ incus exec myincusserver -- su -l ubuntu
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
ubuntu@myincusserver:~$ sudo apt install -y -qq incus btrfs-progs
...
ubuntu@myincusserver:~$ sudo usermod -a -G incus-admin ubuntu
ubuntu@myincusserver:~$ newgrp incus-admin
ubuntu@myincusserver:~$ incus admin init
Would you like to use clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (btrfs, dir) [default=btrfs]:
Create a new BTRFS pool? (yes/no) [default=yes]:
Would you like to use an existing empty block device (e.g. a disk or partition)? (yes/no) [default=no]:
Size in GiB of the new loop device (1GiB minimum) [default=5GiB]: 2GiB
Would you like to create a new local network bridge? (yes/no) [default=yes]:
...
ubuntu@myincusserver:~$ incus storage list
+---------+--------+----------------------------------+-------------+---------+---------+
| NAME | DRIVER | SOURCE | DESCRIPTION | USED BY | STATE |
+---------+--------+----------------------------------+-------------+---------+---------+
| default | btrfs | /var/lib/incus/disks/default.img | | 1 | CREATED |
+---------+--------+----------------------------------+-------------+---------+---------+
ubuntu@myincusserver:~$ incus storage show default
config:
size: 2GiB
source: /var/lib/incus/disks/default.img
description: ""
name: default
driver: btrfs
used_by:
- /1.0/profiles/default
status: Created
locations:
- none
ubuntu@myincusserver:~$
By looking through the Incus btrfs driver code, we see that Incus first creates a sparse file using the Go filesystem libraries (ensureSparseFile()), then uses losetup to attach it to a loop device, and finally uses mkfs.btrfs to create the btrfs filesystem (mkFSType() in there). It’s not clear, though, whether losetup is actually invoked here; it might not be.
Let’s give it a try. First, dd a file for the storage pool and attach it to a loop device with losetup.
ubuntu@myincusserver:~$ sudo dd if=/dev/zero of=/newpool.img bs=1k count=1000000
1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB, 977 MiB) copied, 2.49355 s, 411 MB/s
ubuntu@myincusserver:~$ sudo losetup /dev/loop1 /newpool.img
ubuntu@myincusserver:~$
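As an aside, dd if=/dev/zero allocates the full file up front, whereas Incus creates a sparse file via ensureSparseFile(). A minimal sketch of the sparse approach (using /tmp/sparse.img as a stand-in path, not anything Incus itself uses):

```shell
# Create a 2 GiB sparse file: the apparent size is 2 GiB,
# but almost no disk blocks are actually allocated.
truncate -s 2G /tmp/sparse.img
stat -c 'apparent_bytes=%s allocated_blocks=%b' /tmp/sparse.img
```

The sparse file only consumes real disk space as the filesystem inside it writes data, which is why the Incus-created default.img does not immediately eat 2 GiB of the host disk.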
Then, try to create the storage pool. Somehow, Incus does not allow us to use newpool.img directly as the source. This means that later the pool will not have the proper configuration, and it will require us to run losetup manually after every reboot, even though Incus should be able to run losetup for us.
But let’s see.
ubuntu@myincusserver:~$ incus storage create newpool btrfs source=/newpool.img source.wipe=true
Error: Provided path does not reside on a btrfs filesystem (detected ext4)
ubuntu@myincusserver:~$ incus storage create newpool btrfs source=/dev/loop1 source.wipe=true
Storage pool newpool created
ubuntu@myincusserver:~$ incus storage list
+---------+--------+--------------------------------------+-------------+---------+---------+
| NAME | DRIVER | SOURCE | DESCRIPTION | USED BY | STATE |
+---------+--------+--------------------------------------+-------------+---------+---------+
| default | btrfs | /var/lib/incus/disks/default.img | | 1 | CREATED |
+---------+--------+--------------------------------------+-------------+---------+---------+
| newpool | btrfs | 6eba2dc0-148c-4ce2-ad11-c3814824b191 | | 0 | CREATED |
+---------+--------+--------------------------------------+-------------+---------+---------+
ubuntu@myincusserver:~$ incus storage show newpool
config:
source: 6eba2dc0-148c-4ce2-ad11-c3814824b191
volatile.initial_source: /dev/loop1
description: ""
name: newpool
driver: btrfs
used_by: []
status: Created
locations:
- none
ubuntu@myincusserver:~$
When I use incus storage edit newpool, I cannot change the configuration so that the source becomes /newpool.img.
A bit of extra work is needed to figure this one out. We need to create BTRFS storage pools manually so that the source is the name of the loop file, not an ID.
Very interesting… I did something very similar to your approach. I created the img file using truncate. And I didn’t know about the source.wipe=true option for a second run (I was using wipefs for that). Like you, I also get a UUID as the source of the pool.
To reattach the loop device I use a systemd service that runs before Incus (Before=incus.service).
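For reference, such a unit could look roughly like this. This is a sketch, not the poster’s actual unit: the image path /newpool.img and device /dev/loop1 are taken from the transcript above, and the unit name is made up.

```ini
# /etc/systemd/system/newpool-loop.service (hypothetical name)
[Unit]
Description=Attach loop device for the newpool storage pool
Before=incus.service

[Service]
Type=oneshot
# Hard-coding /dev/loop1 assumes nothing else claims it first;
# "losetup -f --show" would pick a free device instead.
ExecStart=/usr/sbin/losetup /dev/loop1 /newpool.img
RemainAfterExit=true

[Install]
WantedBy=multi-user.target
```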
So my recipe to resize the storage is: run truncate -s +nnGB on the .img file, then resize the loop device with losetup -c, and finally run btrfs filesystem resize max on the mounted filesystem.
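The file-growing step of that recipe can be sketched as follows. Only the truncate part runs unprivileged; the loop and btrfs steps are shown as comments since they need root and a mounted pool, and /tmp/pool.img is a stand-in for the real image path.

```shell
# Grow the backing image by 1 GiB (apparent size only; the
# file stays sparse where nothing has been written yet).
truncate -s 1G /tmp/pool.img      # pretend this is the existing 1 GiB image
truncate -s +1G /tmp/pool.img     # grow it by 1 GiB
stat -c %s /tmp/pool.img          # now 2147483648 bytes
# With root, the remaining steps from the recipe would be:
#   losetup -c /dev/loopN                   # tell the kernel the size changed
#   btrfs filesystem resize max /mount/pt   # grow the fs to fill the device
```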
I’m still not sure if I need all these commands, but it’s working…
It just isn’t very elegant. And it’s sure to fail when, later on, you don’t remember any of this…
In the source, I see there is some magic for handling the incus-managed loopback volumes:
internal/server/storage/drivers/driver_btrfs.go
func (d *btrfs) Create() error {
	...
	loopPath := loopFilePath(d.name)
	if d.config["source"] == "" || d.config["source"] == loopPath {
		// Create a loop based pool.
		d.config["source"] = loopPath
internal/server/storage/drivers/utils.go
// loopFilePath returns the loop file path for a storage pool.
func loopFilePath(poolName string) string {
	return filepath.Join(internalUtil.VarPath("disks"), fmt.Sprintf("%s.img", poolName))
}
As far as I can see: to create a loopback-backed pool called “foo”, the path must be exactly /var/lib/incus/disks/foo.img, and this is hard-coded into the source.
Unfortunately, this doesn’t help the OP, who wants to place the loopback image somewhere else (unless they could get away with a symlink).
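If anyone wants to experiment with the symlink idea, the mechanics are just the following (using /tmp paths as stand-ins for /var/lib/incus/disks and the other disk’s mount point; whether Incus actually tolerates a symlinked image here is untested):

```shell
# "disks" plays the role of /var/lib/incus/disks,
# "otherdisk" the role of the second disk's mount point.
mkdir -p /tmp/otherdisk /tmp/disks
truncate -s 1G /tmp/otherdisk/foo.img     # the image on the other disk
ln -sfn /tmp/otherdisk/foo.img /tmp/disks/foo.img
readlink /tmp/disks/foo.img               # -> /tmp/otherdisk/foo.img
```

Since loopFilePath() only computes the path string, a symlink at /var/lib/incus/disks/foo.img would satisfy the hard-coded check while the data lives elsewhere, but that claim would need to be verified against a real Incus install.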