Hello,
I’m using LXD to create VMs based on a qcow2 image, but I noticed that each time I create a new VM, qemu-img runs to convert the qcow2 image to raw. This takes time, and the image is also quite large.
Is there a way to pre-convert the qcow2 image so that I don’t have to convert it every time I create a new VM based on that image? I’m looking for a way to speed up the VM creation process and reduce the storage space required for the raw image. Thank you in advance for your help.
We can run qemu-img convert ourselves (decompressing the image) and save the resulting raw image.
If I understand correctly, an LXD VM needs the raw image (the root filesystem?) and the metadata file.
Maybe we can somehow provide those files directly to skip the conversion stage?
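As a rough sketch of that idea (untested assumption on my side; file names and the alias are placeholders), the conversion could be done once up front and the result imported as a split image, i.e. a metadata tarball plus a disk image — whether LXD then skips any re-processing of the raw disk is exactly the open question here:

```shell
# One-time conversion from qcow2 to raw (placeholder paths).
qemu-img convert -O raw custom-vm.qcow2 disk.img

# Import the pre-converted disk together with its metadata tarball
# as a local VM image, so no qcow2 is involved at launch time.
lxc image import metadata.tar.xz disk.img --alias custom-vm
lxc launch custom-vm vm1 --vm
```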
+---------+--------+------------------------------------------------+-------------+---------+---------+
| NAME | DRIVER | SOURCE | DESCRIPTION | USED BY | STATE |
+---------+--------+------------------------------------------------+-------------+---------+---------+
| default | dir | /var/snap/lxd/common/lxd/storage-pools/default | | 3 | CREATED |
+---------+--------+------------------------------------------------+-------------+---------+---------+
I have a custom VM which is around 30 GB unpacked (raw) and 6 GB as qcow2.
I’m using the same image for all VMs, so there is no need to run qemu-img convert to turn the qcow2 image into raw every time I launch a new VM. Maybe it’s possible to use a cached raw image without calling qemu-img convert?
The more interesting question is why LXD doesn’t use the full power of qcow2 images. They allow you to create multilayer copy-on-write images where one image acts as a layer on top of another (similar to Docker). That would significantly reduce space usage, because there would be no need to completely copy the base image — just create a new “layer”. This technique is used by libvirt, for example. Moreover, qcow2 is a special format optimized for use with QEMU, which can increase overall VM performance.
QCOW2 is faster than a raw image when on spinning rust and when not dealing with an underlying storage layer that’s itself copy-on-write.
99% of LXD users are either using zfs or btrfs, both of which are copy on write filesystems, so that part of QCOW2 becomes redundant. Both of those also support compression, making that part of QCOW2 similarly redundant.
Then QCOW2 is a bit of a pain to work with when you need to do things like directly access its partitions, modify the GPT table on the disk, …
It can be done, but you usually need to use qemu-nbd to export the image as a network block device, which can then be mapped through the kernel nbd driver, to finally interact with the content of the QCOW2 image.
With a raw disk image, there’s none of that, you just tell the kernel to map it and you’re done.
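To illustrate the difference (a hedged sketch — device names and image paths are examples, and partition numbers depend on the image layout):

```shell
# QCOW2: export the image as a network block device first.
sudo modprobe nbd max_part=8
sudo qemu-nbd --connect=/dev/nbd0 image.qcow2
sudo fdisk -l /dev/nbd0            # inspect the partition table
sudo mount /dev/nbd0p2 /mnt        # work with a partition
sudo umount /mnt
sudo qemu-nbd --disconnect /dev/nbd0

# Raw: just ask the kernel to map it directly.
sudo losetup --partscan --find --show image.raw   # prints e.g. /dev/loop0
sudo mount /dev/loop0p2 /mnt
```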
The zfs, btrfs, lvm and ceph storage pool drivers will only convert an image from qcow2 to raw once and then subsequent instances created are from snapshots of that image volume created on the first instance.
I had a similar experience with large Windows Server images.
Once I had an image with all the necessary config & installs (Exchange Server …), I published the VM instance as an image.
From that image in local cache (zfs data set) it was fast enough to launch new instances.
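That workflow, sketched as commands (the instance and alias names are made up for the example):

```shell
# Configure a VM once, then publish it as a reusable image.
lxc stop win-exchange
lxc publish win-exchange --alias win-exchange-base

# Later launches come from the local image cache (e.g. a zfs dataset),
# which is much faster than importing from scratch each time.
lxc launch win-exchange-base exch-01 --vm
```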
The problem was that each project can either have its own image cache or use the globally shared cache for all projects.
I once proposed having a dedicated project serve as the image cache for all others, with the possibility of launching an instance using images across projects or from that special-purpose template project.
The image cache isn’t project specific; instead LXD stores all images based on their fingerprint. If you have the same image (according to fingerprint) in 5 projects, you only have one copy on disk.
The image still needs to be part of the project, so you’d need to do an lxc image copy 8eb253c8ed68 --project 102 --target-project 304 for it to be present there.
Or since those are normal upstream images, what would normally happen is that you do lxc launch images:alpine/edge --project 304 and LXD will get the fingerprint from the image server, see it has it already in the image store and just add a record for it in project 304 without ever downloading it or unpacking it again.
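The two options above as concrete commands (project names and fingerprint are taken from the example):

```shell
# Option 1: explicitly copy the image record into the target project.
lxc image copy 8eb253c8ed68 --project 102 --target-project 304

# Option 2: launch from the remote; LXD matches the fingerprint against
# its local store and only adds a record in project 304, without
# downloading or unpacking the image again.
lxc launch images:alpine/edge --project 304
```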
I’m creating with the default profile, so I’m not sure which storage is used, but as you can see qemu-img convert happens every time for each VM creation, and it’s painful for a 30 GB raw image.
Seems like I’m missing something. I used lxd init --auto to set up LXD on a server via a bash script.
AFAIK the default storage is zfs, but qemu-img is still getting called each time.
Use lxc profile show default and look for the root disk device’s pool property, then you can correlate that with the lxc storage list output and check what type of pool it is.
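For example (the pool name will differ per system):

```shell
lxc profile show default   # check devices -> root -> pool
lxc storage list           # match that pool name to its DRIVER column
```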