BTRFS OS Disk Subvolume for /var/lib/incus

I’m building a test box that will take snapshots of my root btrfs raid1 array during updates, plus scheduled timeline snapshots, using snapper. I will have separate btrfs arrays for Incus storage mounted under /mnt. My questions are about subvolume layouts for /var/lib/incus.

Should I make /var/lib/incus its own subvolume, or just certain directories in it, so it’s ignored by the root volume snapshots? I am planning on letting Incus manage its own scheduled snapshots on the /mnt arrays it will have added as storage.

Thank you in advance for the help. :slight_smile:

Having /var/lib/incus itself be a subvol should be fine.
If the bulk of your Incus storage is elsewhere, then you may want to create two custom volumes, one for images and one for backups, and then set storage.backups_volume and storage.images_volume to point to them.

By doing that, you’re effectively keeping all your Incus data on the Incus storage pool (/mnt), with /var/lib/incus only really holding the database and certificates. That gives a cleaner split between the two locations.
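
If it helps, a minimal sketch of the subvolume part, assuming a fresh setup where /var/lib/incus doesn’t exist yet (with an existing install you’d have to move its contents over first):

btrfs subvolume create /var/lib/incus
# btrfs snapshots are not recursive, so snapper’s snapshots of /
# will leave this child subvolume out automatically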

Thank you. Just to be sure I’m understanding, you’re talking about the directories under /var/lib/incus/{backups,images}?

I would set my default storage pool and profiles to point to /mnt/btrfs-mirror.

Is that the best way to tackle this? I didn’t notice any storage best practices in the docs either. Did I miss anything there? Thanks again for the help. :slight_smile:

Yeah, creating a couple of custom volumes and setting those two keys will empty those folders and move their content onto those volumes.

So from scratch, assuming your /mnt is also btrfs, it’d look like:

# create a btrfs pool backed by /mnt and use it as the root disk in the default profile
incus storage create default btrfs source=/mnt
incus profile device add default root disk pool=default path=/
# create custom volumes for backups and images, then point Incus at them
incus storage volume create default backups
incus storage volume create default images
incus config set storage.backups_volume=default/backups
incus config set storage.images_volume=default/images
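
To sanity-check, something like this should show the two keys set and both volumes on the pool:

# the server config should now list storage.backups_volume and storage.images_volume
incus config show | grep storage
# and the backups/images volumes should show up on the pool
incus storage volume list default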

That is what I was thinking and makes sense, but I wanted to make sure.

Ideally, I would just create a dedicated subvolume for Incus under /mnt/incus, move /var/lib/incus onto it, and not use /var/lib/incus at all if possible. That would give more of an appliance-style configuration, making the base OS a bit more resilient without being locked into specific OS packages when things break. Debian 13 is going to be my base OS, so breakage shouldn’t be an issue very often, and I will be using the zabbly repo for the latest stable Incus.

I’m just working on figuring out a few other features I need, like cloud backup for the data the containers will be accessing, etc. That is out of scope here.

This is obviously a test box so I can mess it up as much as I need/want to get it right. Thank you for pointing me in the right direction. :slight_smile:

Okay, you can do that but you’ll need to make sure to use a bind-mount.
Incus will not work correctly if /var/lib/incus is a symlink.
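
A rough sketch of that bind-mount, assuming a /mnt/incus subvolume and the incus.service/incus.socket units from the zabbly packages:

# stop Incus before touching its data directory
systemctl stop incus.service incus.socket
# copy the existing data onto the dedicated subvolume, then bind-mount over it
cp -a /var/lib/incus/. /mnt/incus/
mount --bind /mnt/incus /var/lib/incus
systemctl start incus.service

# /etc/fstab entry to make the bind-mount persistent across reboots
/mnt/incus  /var/lib/incus  none  bind  0  0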

That config above seems to work pretty well in my testing.

One thing I noticed in the docs was that btrfs isn’t the best with VMs. I wasn’t aware of that, since I haven’t used btrfs much; I’ve been on ZFS-based machines for a while. Not sure if that’s being worked on for future releases, but I’m guessing it still stands in kernel 6.12.x.

uname -r
6.12.35+deb13-amd64

I’m not anticipating using VMs; I’m pretty much only going to be deploying containers, whether OCI or Docker in Incus. I think my use case should be fine.

I guess if I need VM support, I can add a new pool with ZFS or ext4, unless there are better options.

Also, does Incus fare ok with bcache on btrfs? Just looking to give the spinning rust disks some additional caching and performance. Thank you again!

bcache under btrfs shouldn’t really matter to Incus.
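
For illustration, a typical bcache-under-btrfs setup looks something like this (the device names are hypothetical, and make-bcache wipes whatever is on them):

# format the cache (SSD) and backing (HDD) devices in one go so they attach automatically
make-bcache -C /dev/nvme0n1p1 -B /dev/sda
# the combined device shows up as /dev/bcache0; btrfs goes on top
mkfs.btrfs /dev/bcache0

Incus just sees a regular btrfs filesystem on /dev/bcache0.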

The comment about VMs running on btrfs is mostly because btrfs works best when it can do proper copy-on-write (CoW), and that in turn works best with many small files, some getting modified while others remain identical to the original image.

With VMs being backed by a single file that changes constantly, we instead have to mark that file as nocow in btrfs, effectively turning off the most useful part of btrfs. That makes things like snapshots use a rather less optimized codepath than what you’d get with containers.
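
At the filesystem level that marking is the C attribute; a quick illustration (not necessarily how Incus applies it internally):

# +C only takes effect on empty files, so it’s usually set on a directory
# so that new VM disk files created inside it inherit nocow
chattr +C /path/to/vm-images
lsattr -d /path/to/vm-images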

For those running a lot of VMs on local storage, your best options are ZFS or LVM.
For containers, ZFS and btrfs are mostly similar feature-wise, though ZFS has a bit of an edge on quota flexibility and more per-volume configuration options, while btrfs has the advantage of being directly in the mainline kernel.
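
If VMs do become necessary later, a dedicated pool can sit next to the btrfs one. A sketch, assuming a spare disk at /dev/sdb (hypothetical):

# create a ZFS pool on the spare disk and launch a VM onto it
incus storage create vm-pool zfs source=/dev/sdb
incus launch images:debian/13 test-vm --vm --storage vm-pool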

Great info! Thank you. If I absolutely need VMs, I can create a pool with a compatible filesystem backing the volume. I kind of want the checksumming for data verification that ZFS gives me in my current install. I have ECC memory running in this box as well; all useful for data correction and bitrot detection.

Since I’m planning on Docker (hoping to move to LXC exclusively with the advent of OCI support in Incus) and LXC containers for 99.9% of my workloads, I think I’ll be fine.

I’ve stopped using VMs as I don’t find a whole lot of value in them anymore, especially with how “fat” they are nowadays; containers, whether OCI or otherwise, are just better and “lighter”.