Compressed snapshots?

lxd

(eva2000) #1

Now that I have a number of working LXD containers and snapshots, disk usage is going up, and a lot of it is due to snapshot sizes. Is there any support for compressed snapshot storage? gzip, bzip2, xz or zstd would be nice, especially if you could support multi-threaded variants like pigz, lbzip2, pbzip2 and pxz, or zstd's native multi-threading :slight_smile:


(Stéphane Graber) #2

I wouldn’t be opposed to a branch which adds support for that in the dir storage backend.
All other backends won’t need this as they support block-level copy-on-write and often compression too.

That would seriously slow down snapshots on the dir backend, which are already quite horribly slow and racy.


(eva2000) #3

Yeah, I am using the dir storage backend right now. Is it possible to migrate existing dir-backend containers to another storage backend?

Multi-threaded compression tools should provide much faster compression, though.

Question: does lxc/lxd interact in any way with snapshots at times other than snapshot creation and restore? If not, can I just manually compress the snapshot directory for a given snapshot, i.e. /var/snap/lxd/common/lxd/storage-pools/default/snapshots/container-name/snapshot-name, and uncompress it when needed for a restore?
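For what it's worth, the manual compress/uncompress idea could look something like the sketch below. The paths here are stand-ins (a throwaway temp directory instead of the real /var/snap/lxd/... snapshot path) so the sketch is self-contained; gzip is used for portability, but pigz or zstd could be dropped in for multi-threaded compression.

```shell
# Sketch of manually compressing a dir-backend snapshot directory.
# In production the directory would be something like:
#   /var/snap/lxd/common/lxd/storage-pools/default/snapshots/<container>/<snapshot>
# Here a throwaway directory stands in for it.
set -eu

pool=$(mktemp -d)                        # stand-in for .../snapshots/container-name
mkdir -p "$pool/snap0/rootfs"
echo "hello" > "$pool/snap0/rootfs/data.txt"

# Compress the snapshot directory and remove the original to reclaim space.
# Swap gzip for pigz or zstd here for multi-threaded compression.
tar -C "$pool" -czf "$pool/snap0.tar.gz" snap0
rm -rf "$pool/snap0"

# Before asking LXD to restore the snapshot, put the directory back:
tar -C "$pool" -xzf "$pool/snap0.tar.gz"
```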


(Stéphane Graber) #4

LXD 3.2 will have support for moving containers between storage pools.
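As a sketch of that workflow (pool name, driver and container name are made up for illustration), moving a container between pools has historically been done via a rename and rename-back with `lxc move`:

```shell
# Create a new pool on a copy-on-write backend (zfs here; btrfs/lvm also work).
lxc storage create newpool zfs

# Move the container to the new pool. Moving between pools involves a copy,
# so the container should be stopped first, and the move goes via a
# temporary name before renaming back.
lxc stop c1
lxc move c1 c1-temp --storage newpool
lxc move c1-temp c1
lxc start c1
```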

LXD doesn’t access the snapshot data after the snapshot is created and before it’s restored, so when using the dir backend your plan should work. Note, though, that there’s no guarantee we won’t change that behavior in future releases.

Parallel compression tools do tend to be faster, but at the cost of taking most of your CPU, making everything else slow. Some may find it annoying that an automatic snapshot makes their system unusable for minutes (on a large container).

Memory consumption is also an issue with parallel compression as this usually involves a copy of the memory for every thread/subprocess.


(eva2000) #5

cheers @stgraber