I have a container instance running PostgreSQL, and the database data is stored on a custom volume. Today my database crashed because it thought it was out of available disk space, which makes no sense to me.
incus storage volume info dpool custom/volume_postgresql17
Name: volume_postgresql17
Type: custom
Content type: filesystem
Usage: 9.73GiB
Created: 2025/11/20 10:54 CET
Yet, if I check this volume from inside the container:
Any kind of raidz won’t be able to use the raw disk space reported, as a certain amount of space is reserved for parity, checksums, etc. 17GB is about 85% of 20GB, which seems about right for a raidz2 pool; at least that is what my raidz2 pool on 6 disks uses/reports.
Another data point: you should never go over 80% of capacity usage, to leave room for the ZFS data optimizations required for optimum speed.
This is what I learned following ZFS best practices from docs, blog posts, etc.
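As a rough sanity check on that overhead, the raidz2 parity cost can be estimated directly. A minimal sketch, using the 6-disk and 20GB figures mentioned above; this counts parity only, so real pools will deviate (ashift, padding, slop space reservation all play in):

```shell
# Back-of-the-envelope sketch: a raidz2 vdev stores two disks' worth of
# parity, so usable space is roughly (N - 2) / N of raw capacity.
disks=6
raw_per_disk_gib=20
raw=$(( disks * raw_per_disk_gib ))
usable=$(( raw * (disks - 2) / disks ))
echo "raw: ${raw} GiB, approx usable (parity only): ${usable} GiB"
```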
It’s just a single NVMe drive, no mirror, raidz or anything like that. The database is also only around 500MB, nowhere near the sizes reported.
zpool status dpool
pool: dpool
state: ONLINE
scan: scrub repaired 0B in 00:07:25 with 0 errors on Mon Dec 1 00:52:44 2025
config:
NAME                                                                                 STATE  READ WRITE CKSUM
dpool                                                                                ONLINE    0     0     0
  nvme-nvme.c0a9-323333324538364145443545-43543130303050335053534438-00000001-part1  ONLINE    0     0     0
errors: No known data errors
And the db size:
[root@postgresql15:/var/lib/postgresql]# du -hsc 17/
485M 17/
485M total
Can you show zfs list -t all as well as zfs get all dpool/incus/custom/default_volume_postgresql17?
If the pool is getting pretty full, the total size will shrink, so an almost full pool would be my first guess. But there could also be some other ZFS property at play here on this dataset.
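Checking how full the pool is takes one command, since zpool list prints a capacity percentage directly. A hedged sketch, using -Hp for script-friendly output and a made-up sample line standing in for the real pool:

```shell
# On the live system you would run:
#   zpool list -Hp -o name,size,alloc,cap dpool
# The sample line below is fabricated, just to show parsing the cap column:
printf 'dpool\t984379391590\t227633266688\t23\n' |
  awk -F '\t' '{ printf "%s is %d%% full\n", $1, $4 }'
```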
I don’t think it’s the pool being full, but walking to work this morning I realized I hadn’t checked whether snapshots were the issue…
I should also add that I have three pools: rpool for my root file system, bulk for my HDD pool, and dpool, which I run Incus on. I’m using sanoid for snapshots (which are sent to the bulk pool).
Anyways, some data:
zfs list dpool
NAME USED AVAIL REFER MOUNTPOINT
dpool 212G 704G 96K /nvme
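To rule snapshots in or out, the usedbysnapshots property on the volume’s dataset is the quickest check. The dataset name below comes from the earlier question in the thread; the byte values piped into awk are made up, just to show the calculation:

```shell
# On the live system you would run:
#   zfs get -Hp -o value used,usedbysnapshots dpool/incus/custom/default_volume_postgresql17
# Sample byte counts stand in for that output:
printf '10447159296\n5368709120\n' | awk '
  NR == 1 { used = $1 }  # bytes used by the dataset in total
  NR == 2 { snap = $1 }  # bytes held only by snapshots
  END { printf "snapshots hold %.0f%% of used space\n", 100 * snap / used }'
```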
I have a somewhat similar issue, but I don’t think snapshots are the cause of my problem.
I’m using one pool with the TrueNAS driver and a single HDD. I have created two volumes to store my media content. However, the size of the volumes seems to be capped well below the actual limit of the pool.
Is that only affecting that existing volume or also new volumes created with a larger size from the beginning?
With TrueNAS, the volumes are created on the remote TrueNAS storage appliance and then exported to Incus over iSCSI. I don’t recall how the size reporting works though, whether it’s what Incus sees when accessing the volume over iSCSI or if it’s the size reported by the TrueNAS API instead.
If I do incus storage volume create truenas-storage test-incus and then increase the size by editing the volume, the update is not reflected in the instance, even after a restart.
However, if I do incus storage volume create truenas-storage test-incus2 size=2000GiB, then the instance sees 1.9TB available.
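As a side note on the numbers: 1.9T is roughly what one would expect a 2000GiB volume to report, since most tools show sizes in TiB (powers of 1024), and filesystem metadata shaves a bit more off. A quick conversion:

```shell
# 2000 GiB in TiB (1 TiB = 1024 GiB); filesystem overhead then brings the
# figure visible inside the instance down to roughly 1.9T.
awk 'BEGIN { printf "%.2f TiB\n", 2000 / 1024 }'
```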
No, it’s not. I’d expect a storage driver like TrueNAS to either support the live size change or fail with an error, not pretend it’s been done and then ignore it.