LXC copy runs out of disk space

Okay, I give up trying to figure this out.

I’ve got a container on a disk storage pool that I’m trying to move or copy to a Ceph storage pool.

When I do so it eventually errors with:

Error: Failed to run: rsync -a -HAX --sparse --devices --delete --checksum --numeric-ids --xattrs --bwlimit 0 -q /var/snap/lxd/common/lxd/storage-pools/homepool2/containers/nexus2/ /var/snap/lxd/common/lxd/storage-pools/lxd-pool1/containers/nexus: rsync: write failed on "/var/snap/lxd/common/lxd/storage-pools/lxd-pool1/containers/nexus/rootfs/root/repo/blobs/default/content/vol-05/chap-27/db8e241f-c71e-467c-a471-da12dc006722.bytes": No space left on device (28)

I’ve tried symlinking and bind mounting stuff all over the place but haven’t managed to make it go away.

a) /var/snap/lxd/common/lxd/storage-pools/homepool2/containers/nexus2/ doesn’t contain anything on my filesystem that root can see, at least, so how is rsync managing to copy anything?
b) Does anyone have any bright ideas about where or what to bind mount a larger disk into to get it to move?
c) If it’s going to Ceph, why is it writing anything to disk anyway?

That’s where the Ceph volume is mounted; my guess is simply that your default block volume size is too small.

lxc storage set lxd-pool1 volume.size 50GB (or some other suitable size) should fix this issue for you.

This is only a problem with block-based storage like LVM or Ceph, where each volume needs a block device of the right size. Unless this is manually tweaked, we default to 10GB, which is apparently not enough in this case.
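A minimal sketch of the full sequence, assuming the pool and container names from the error above (lxd-pool1, nexus2, nexus) and a snap-installed LXD; the exact size to use depends on how big the container’s rootfs actually is:

# If the target volume is still mounted, confirm that the path rsync was
# writing to is really the Ceph block device (not the host's root disk)
findmnt /var/snap/lxd/common/lxd/storage-pools/lxd-pool1/containers/nexus

# Check the pool's current default volume size (unset means the 10GB default)
lxc storage get lxd-pool1 volume.size

# Raise the default size used when new volumes are created on this pool
lxc storage set lxd-pool1 volume.size 50GB

# Retry the copy onto the Ceph pool
lxc copy nexus2 nexus --storage lxd-pool1

Note that volume.size only applies to volumes created after it is set, so it has to be raised before retrying the copy; existing volumes keep their original size.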


Aaaah, I mulled that over this afternoon when I was in another container that had a 10GB root disk. Now it makes sense!

Thanks!