Pre-converting qcow2 image for faster LXD VM creation

Hello,
I’m using LXD to create VMs based on a qcow2 image, but I noticed that each time I create a new VM, qemu-img runs to convert the qcow2 image to raw. This takes time, and the resulting raw image is also quite large.

Is there a way to pre-convert the qcow2 image so that I don’t have to convert it every time I create a new VM based on that image? I’m looking for a way to speed up the VM creation process and reduce the storage space required for the raw image. Thank you in advance for your help.

lxc image import metadata.tar packer-qemu-image --alias my-vm-image

We can convert via qemu-img (decompress) ourselves and save the raw image.
If I understand correctly, an LXD VM needs the raw image (root fs?) and the metadata file.

Maybe we can somehow provide those files directly to skip the conversion stage?
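
Roughly what I have in mind is something like this (just a sketch; the file names and alias are placeholders from my setup, and I’m not sure whether LXD accepts a pre-converted raw disk here or still expects qcow2):

# convert once, outside of LXD
qemu-img convert -f qcow2 -O raw packer-qemu-image.qcow2 rootfs.img

# import the metadata tarball together with the pre-converted disk
lxc image import metadata.tar rootfs.img --alias my-vm-image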

Which lxc command are you running which is slow and which storage pool type are you using?

I’m running this:

lxc launch vm-image worker-x --vm -c limits.cpu=4 -c limits.memory=3GiB

I installed lxd via snap with all defaults

and which storage pool type are you using?

Please show lxc storage list and then lxc storage show <pool> for the storage pool name.

lxc storage list:

+---------+--------+------------------------------------------------+-------------+---------+---------+
|  NAME   | DRIVER |                     SOURCE                     | DESCRIPTION | USED BY |  STATE  |
+---------+--------+------------------------------------------------+-------------+---------+---------+
| default | dir    | /var/snap/lxd/common/lxd/storage-pools/default |             | 3       | CREATED |
+---------+--------+------------------------------------------------+-------------+---------+---------+

lxc storage show default

config:
  source: /var/snap/lxd/common/lxd/storage-pools/default
description: ""
name: default
driver: dir
used_by:
- /1.0/instances/worker-init
- /1.0/instances/worker-init/snapshots/snap
- /1.0/profiles/default
status: Created
locations:
- none

I have a custom VM image which is around 30 GB unpacked (raw) and 6 GB as qcow2.

I’m using the same image for all VMs, so there should be no need to run qemu-img convert from qcow2 to raw every time I launch a new VM. Maybe it’s possible to use a cached raw image instead of calling qemu-img convert each time?

The more interesting question is why LXD doesn’t use the full power of qcow2 images. They allow you to create multilayer copy-on-write images, where one image sits as a layer on top of another (similar to Docker). That would significantly reduce space usage, because there is no need to copy the base image in full; you just create a new “layer”. This technique is used by libvirt, for example. Moreover, qcow2 is a format optimized for use with QEMU, which can improve overall VM performance.
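
For example (a sketch with made-up file names), a qcow2 backing file chain looks like this:

# create a thin overlay on top of a read-only base image
qemu-img create -f qcow2 -b base.qcow2 -F qcow2 vm1-overlay.qcow2

# inspect the resulting layer chain
qemu-img info --backing-chain vm1-overlay.qcow2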

QCOW2 is faster than a raw image when on spinning rust and when not dealing with an underlying storage layer that’s itself copy-on-write.

99% of LXD users are either using zfs or btrfs, both of which are copy on write filesystems, so that part of QCOW2 becomes redundant. Both of those also support compression, making that part of QCOW2 similarly redundant.

Then QCOW2 is a bit of a pain to work with when you need to do things like directly access its partitions, modify the GPT table on the disk, …
It can be done, but you usually need qemu-nbd to expose the image as a network block device, which can then be mapped through the kernel nbd driver so you can finally interact with the content of the QCOW2 image.

With a raw disk image, there’s none of that, you just tell the kernel to map it and you’re done.
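
For example (a rough sketch; device names will differ on your system):

# QCOW2: needs the nbd kernel module and qemu-nbd
modprobe nbd max_part=8
qemu-nbd --connect=/dev/nbd0 disk.qcow2
# ... work with /dev/nbd0p1, /dev/nbd0p2, ...
qemu-nbd --disconnect /dev/nbd0

# raw: a plain loop device is enough
losetup --find --show --partscan disk.raw
# ... work with /dev/loopXpY, then detach
losetup -d /dev/loopX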

The zfs, btrfs, lvm and ceph storage pool drivers will only convert an image from qcow2 to raw once; subsequent instances are then created from snapshots of the image volume that was created for the first instance.
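
For example (a sketch only; the pool name, size and image alias are arbitrary), with a zfs pool the conversion cost is paid once:

lxc storage create fastpool zfs size=50GiB

# first launch unpacks/converts the image into the pool
lxc launch my-vm-image worker-1 --vm -s fastpool

# later launches clone a snapshot of that image volume, no qemu-img convert
lxc launch my-vm-image worker-2 --vm -s fastpool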

I had a similar experience with large Windows Server images.
Once I had an instance with all the necessary config & installs (Exchange Server, …), I published the VM instance as an image.
From that image in the local cache (a zfs dataset) it was fast enough to launch new instances.

The problem was that each project can either have its own image cache or use a globally shared cache for all projects.

I once proposed having a dedicated project act as an image cache for all others, with the possibility of launching an instance from images across projects or from that special-purpose “template” project:

lxc launch vm1 --project x image --from-project z
Launch an instance from image in different project · Issue #10089 · lxc/lxd (github.com)

The image cache isn’t project specific; instead LXD stores all images based on their fingerprint. If you have the same image (according to its fingerprint) in 5 projects, you only have one copy on disk.
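
For example, on a snap install the cached image files are stored by fingerprint (a sketch; the path is specific to the snap package):

ls /var/snap/lxd/common/lxd/images/
# one file per fingerprint, no matter how many projects reference that image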

Must the image be public?
Below I’m trying to launch an instance in project 304 using an image from project 102:

lxc image ls --project 102
+-------+--------------+--------+-------------------------------------+--------------+-----------------+----------+-------------------------------+
| ALIAS | FINGERPRINT  | PUBLIC |             DESCRIPTION             | ARCHITECTURE |      TYPE       |   SIZE   |          UPLOAD DATE          |
+-------+--------------+--------+-------------------------------------+--------------+-----------------+----------+-------------------------------+
|       | 8eb253c8ed68 | no     | Ubuntu jammy amd64 (20230320_07:43) | x86_64       | VIRTUAL-MACHINE | 264.26MB | Mar 20, 2023 at 8:57pm (UTC)  |
+-------+--------------+--------+-------------------------------------+--------------+-----------------+----------+-------------------------------+
|       | 783ce49586cc | no     | Alpine edge amd64 (20230110_23:28)  | x86_64       | CONTAINER       | 3.46MB   | Jan 12, 2023 at 11:37pm (UTC) |
+-------+--------------+--------+-------------------------------------+--------------+-----------------+----------+-------------------------------+
lxc image ls --project 304
+-------+--------------+--------+-------------------------------------+--------------+-----------+---------+------------------------------+
| ALIAS | FINGERPRINT  | PUBLIC |             DESCRIPTION             | ARCHITECTURE |   TYPE    |  SIZE   |         UPLOAD DATE          |
+-------+--------------+--------+-------------------------------------+--------------+-----------+---------+------------------------------+
|       | 359ce69f7224 | no     | Ubuntu jammy amd64 (20230221_12:58) | x86_64       | CONTAINER | 65.31MB | Feb 22, 2023 at 1:27pm (UTC) |
+-------+--------------+--------+-------------------------------------+--------------+-----------+---------+------------------------------+
lxc launch -p cardanoS 8eb253c8ed68 --project 304
Creating the instance
Error: Image not found

The image still needs to be part of the project, so you’d need to do an lxc image copy 8eb253c8ed68 --project 102 --target-project 304 for it to be present there.

Or since those are normal upstream images, what would normally happen is that you do lxc launch images:alpine/edge --project 304 and LXD will get the fingerprint from the image server, see it has it already in the image store and just add a record for it in project 304 without ever downloading it or unpacking it again.
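
For example (a sketch using the fingerprint from above; the alias and instance name are made up, local: is the default remote, and the profile is the one from your example):

lxc image copy 8eb253c8ed68 local: --project 102 --target-project 304 --alias jammy-vm
lxc launch jammy-vm v1 --project 304 --vm -p cardanoS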

I’m creating with the default profile and I’m not sure which storage is used, but as you can see, qemu-img convert happens every time a VM is created, and it’s painful for a 30 GB raw image.

Seems like I’m missing something. I used init --auto to set up LXD on a server via a bash script.
AFAIK the default storage is zfs, but I’m still getting qemu-img called each time.

Use lxc profile show default and look for the root disk device’s pool property, then you can correlate that with the lxc storage list output and check what type of pool it is.
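
For example (a sketch), as a shortcut:

lxc profile device get default root pool   # prints the pool name used by the default profile
lxc storage show <pool>                    # shows the driver (dir, zfs, ...) for that pool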

lxd init --auto always uses the dir pool type.

ZFS is the default suggested option only during interactive initialisation.

Oh, that makes sense. Can I set it up from a script to use zfs, maybe by specifying an option when running init?

I’m not sure if you can use lxd init --auto combined with the --storage-backend flag (the command’s help suggests you can, see lxd init --help).
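
Something along these lines might work (a sketch only; I haven’t verified the exact flag combination, and the loop size is arbitrary):

lxd init --auto --storage-backend zfs --storage-create-loop 30 --storage-pool default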

Alternatively you can use a preseed file piped to lxd init, see

https://linuxcontainers.org/lxd/docs/master/howto/initialize/#non-interactive-configuration
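
For example, a minimal preseed could look something like this (a sketch only; the pool size and bridge settings are placeholders, and on an already-initialised host the names would clash with your existing dir pool):

cat <<EOF | lxd init --preseed
storage_pools:
- name: default
  driver: zfs
  config:
    size: 50GiB
networks:
- name: lxdbr0
  type: bridge
  config:
    ipv4.address: auto
    ipv6.address: auto
profiles:
- name: default
  devices:
    root:
      path: /
      pool: default
      type: disk
    eth0:
      name: eth0
      network: lxdbr0
      type: nic
EOF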
