Storage Pools - Basic help creating a pool and questions

I’m a long-time Windows user trying to get up to speed with Incus and Linux.

The server being used here is an HPE DL-380 with 256 GB of RAM and two 480 GB SSDs in HBA mode, running a fresh Debian 12 install.

I’m watching videos and reading, and it takes me a while to grasp all the new material.

  1. Installed Incus
  2. Ran adduser bret incus-admin
  3. Ran incus admin init
    but screwed up the storage pool… trying to fix it
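If the pool created by the wizard came out wrong and nothing is referencing it yet, it can usually be inspected, removed, and recreated. A rough sketch, assuming the pool is unused (the name "bad-pool" is a placeholder):

```shell
# List existing pools and check what references the broken one
incus storage list
incus storage show bad-pool   # "bad-pool" is a placeholder name

# A pool can only be deleted once nothing uses it
incus storage delete bad-pool

# Recreate it, e.g. as a dir pool backed by an existing directory
incus storage create ssd-02 dir source=/data
```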

On creating an instance, I get this error:

root@debian:/home/bret# incus launch images:debian/12 db12-first
Launching db12-first
Error: Failed instance creation: Failed creating instance record: Failed initializing instance: Failed getting root disk: No root device could be found

Current state: no instances.

root@debian:/home/bret# incus list
+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+

Default Profile below

root@debian:/home/bret# incus profile show default
config: {}
description: Default Incus profile
devices: {}
name: default
used_by: []
project: default

I created a dir pool (shown below):

root@debian:/home/bret# incus storage list
+--------+--------+--------------------+---------+---------+
|  NAME  | DRIVER |    DESCRIPTION     | USED BY |  STATE  |
+--------+--------+--------------------+---------+---------+
| ssd-02 | dir    | SSD Drive - Slot 2 | 0       | CREATED |
+--------+--------+--------------------+---------+---------+

From the various threads on this topic, it seems the best approach is to add my ssd-02 storage pool to
my default profile so I can launch an instance.
Please help me with that.

I’m also seeking an explanation of this paragraph in the storage pool docs, for dir pools:
“The directory storage driver is a basic backend that stores its data in a standard file and directory structure. This driver is quick to set up and allows inspecting the files directly on the disk, which can be convenient for testing. However, Incus operations are not optimized for this driver.”

Incus operations not optimized…
I have access to, and am in the process of creating, more advanced (or maybe “preferred” is the right word) Incus setups, but would appreciate a comment on this.

…and this paragraph, also in the dir driver docs:
The dir driver in Incus is fully functional and provides the same set of features as other drivers. However, it is much slower than all the other drivers because it must unpack images and do instant copies of instances, snapshots and images.

Here I’m looking for the “preferred” driver type to use on my Debian server.

Regards

incus profile device add default root disk pool=ssd-02 path=/
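A quick way to confirm the device landed in the profile after running that command:

```shell
# Show the profile; the devices section should now contain a "root" disk
incus profile show default

# Or list just the device entries
incus profile device show default
```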

If there was one driver that handled everything great in all situations, we wouldn’t have a whole bunch of drivers to choose from 🙂

Everything is a compromise and varies based on what’s the main target.

In general dir is the worst option if you care about disk space as there is no copy on write and no real support for snapshots. On the upside, it works everywhere and doesn’t need any additional partitions (or loop devices) nor additional tools.

ZFS is pretty good overall; it supports all the copy-on-write stuff and it’s good for both containers and VMs. But it’s an out-of-tree kernel module and not under the GPL, which causes some folks to avoid it. It’s also known for requiring more memory than other options, though it also provides excellent performance through the use of that memory.

btrfs is a good option for those who can’t do ZFS and primarily care about containers. It’s not good for VMs and it has pretty weak quota support (compared to ZFS, anyway), but is otherwise pretty fully featured.

LVM is a good option for those who can’t do ZFS and primarily care about VMs.
It’s just not great for containers, as every container needs its own LV, and there’s no good support for backup/migration.

Then there are the remote storage drivers which are their own can of worms and compromises, but that doesn’t really apply here.

That did the trick. I have now created an instance.

root@debian:/data/images# incus list
+------------+---------+------+------+-----------+-----------+
|    NAME    |  STATE  | IPV4 | IPV6 |   TYPE    | SNAPSHOTS |
+------------+---------+------+------+-----------+-----------+
| db12-first | RUNNING |      |      | CONTAINER | 0         |
+------------+---------+------+------+-----------+-----------+

(When created, I was advised the instance has no network; I’ll address that in another post. I believe I know how to assign networks to instances, so I’m not panicking.)

root@debian:/data/images# incus profile show default
config: {}
description: Default Incus profile
devices:
  root:
    path: /
    pool: ssd-02
    type: disk
name: default
used_by:
- /1.0/instances/db12-first
project: default

The ssd-02 device is a 480 GB SSD.

Here is the storage info displayed by: incus storage show ssd-02

root@debian:/data/images# incus storage show ssd-02
config:
  source: /data/
description: SSD Drive - Slot 2
name: ssd-02
driver: dir
used_by:
- /1.0/instances/db12-first
- /1.0/profiles/default
status: Created
locations:
- none

My next question was how much space the new instance uses. So I consoled into
the instance; here is the listing:

root@debian:/data/images# incus exec db12-first -- /bin/bash
root@db12-first:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb2        23G  3.0G   19G  14% /
none            492K  4.0K  488K   1% /dev
udev            126G     0  126G   0% /dev/tty
tmpfs           100K     0  100K   0% /dev/incus
tmpfs           100K     0  100K   0% /dev/.incus-mounts
tmpfs           126G     0  126G   0% /dev/shm
tmpfs            51G   84K   51G   1% /run
tmpfs           5.0M     0  5.0M   0% /run/lock

Is this how to determine the size of the instance?

If I understand this correctly, I need to create a custom profile to use which has
disk sizes defined… or modify the default profile.

Is that correct?

Thanks for comments, suggestions, and patience.

df -h in your environment (dir storage driver on a filesystem that isn’t set up for project quotas) will always return the total consumption of the pool, not per-instance size information.

There is no way to get the instance size on the dir backend unless the underlying filesystem supports project quotas and has them enabled.
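Pool-level usage can still be checked from the host. A sketch, assuming the dir pool layout that follows from the source=/data shown earlier (the exact path on your system may differ):

```shell
# Total and used space for the pool (not per-instance)
incus storage info ssd-02

# On a dir pool, the instance's files live directly on the host
# filesystem, so du can approximate one instance's footprint
du -sh /data/containers/db12-first
```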

When you run incus admin init to perform the initial configuration of Incus, you are actually running a setup wizard that helps you get going. That means you can either replace that command with individual incus commands to set up the storage pool, the networking, etc., or you can make changes to your initial configuration and adapt as needed.

For the networking part, you would run incus network list to see what’s already there. Check whether you already have a managed (by Incus) network; if you do not, create one. Then add that network to your Incus profile.
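A minimal sketch of those steps (the bridge name incusbr0 is a common default, but yours may differ):

```shell
# See existing networks and whether any is managed by Incus
incus network list

# Create a managed bridge if none exists
incus network create incusbr0

# Attach it to the default profile as the instance's eth0
incus profile device add default eth0 nic network=incusbr0
```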

The dir storage driver has the benefit that space is not pre-allocated; Incus simply uses the free space of the disk. And you can poke into that space from the host and have a look.

For your learning journey and on your fast system, your initial experience should be fine. The next level would be to use a pre-allocated file (a loop file) with either ZFS or btrfs. Those pools are created with incus storage create and the appropriate parameters. I think there’s a caveat that these loop files are only created in the root filesystem of the host.
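Creating a loop-backed pool might look like this (pool names and sizes are examples, not recommendations):

```shell
# 30 GiB loop-backed ZFS pool (the loop file lives under Incus's own directory)
incus storage create fast zfs size=30GiB

# Equivalent with btrfs
incus storage create fastbtr btrfs size=30GiB
```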

You can have multiple storage pools and you can move instances between them. The incus move command has syntax that lets you specify where to move to.
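For example, moving the instance from the thread to another pool (the target pool name "fast" is hypothetical):

```shell
# Stop, move to another storage pool, then start again
incus stop db12-first
incus move db12-first --storage fast
incus start db12-first
```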

It’s also possible to use an empty partition to set up ZFS, btrfs, LVM, etc. Or, if you already have one of those, you can dedicate part of it as a storage pool for Incus. In the case of ZFS, you would use a ZFS dataset for Incus.
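As a sketch, pointing a pool at an existing block device or ZFS dataset (the device and dataset names here are assumptions for illustration):

```shell
# Use an empty disk or partition for a new ZFS pool
incus storage create tank zfs source=/dev/sdc

# Or reuse an existing ZFS dataset from an already-created pool
incus storage create tank zfs source=mypool/incus
```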

In a 30GiB storage pool (pre-allocated file or separate partition) you could easily fit more than a dozen Debian/Ubuntu instances at the same time. If you can furnish a partition, I suggest 100GiB or more to avoid space issues in the future.

Thanks for the responses. I believe I’m carrying some VM concepts into the container
model, and I need to adjust my thinking.
I’ll happily keep creating instances, reading and configuring different versions.

I’ll be back, thanks to you all