Running a regular VM image file under Incus

Excuse me if these are dumb questions, but I may be overlooking something obvious.

1. Importing existing VM images

What’s the simplest way to take an existing VM disk image file, say foo.qcow2 or foo.img (raw), and run it under Incus as a VM?

(1) Thanks to another user’s posting, I found the convoluted path of going via an incus “image”: create image, launch from image, delete image.

cat <<EOS >metadata.yaml
architecture: x86_64
creation_date: $(date +%s)
properties:
  description: custom virtual machine
  os: custom
  release: $(date +%Y%m%d)
EOS
tar -czf metadata.tar.gz metadata.yaml
incus image import metadata.tar.gz foo.qcow2 --alias custom1
incus create custom1 myvm -s default
incus image delete custom1
incus start --console myvm

Am I missing something simpler? Things I’ve tried:

(2) I can create an empty VM:

$ incus create myvm --empty --vm -s default --device root,size=30GiB
Creating myvm

…but I don’t see an official way to initialize the storage from an existing VM image file. I guess, depending on the storage type, I could overwrite it in place:

# ls -ls /var/lib/incus/storage-pools/default/virtual-machines/myvm/root.img
0 -rw------- 1 root root 32212254720 Mar 25 12:35 /var/lib/incus/storage-pools/default/virtual-machines/myvm/root.img
# qemu-img convert -O raw foo.qcow2 /var/lib/incus/storage-pools/default/virtual-machines/myvm/root.img
# incus start --console myvm

(Although the size of this root.img may no longer match what incus thinks it is.)
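
If that mismatch matters, I suppose the raw file could be padded back out to the size the device was created with (30GiB here) after the conversion. This is just a guess on my part, something like:

qemu-img resize -f raw /var/lib/incus/storage-pools/default/virtual-machines/myvm/root.img 30G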

(3) I see that it’s possible to export a VM instance and re-import it. However, the export gives a tarball containing a large bundle of files, and it seems rather hard to replicate that structure just so that a simple image file can be imported via incus import.

(4) Storage volumes can also be exported and imported. However, volume export/import only applies to “custom” volumes (which, for a VM, are the additional drives attached to an instance), not to the primary drive.

(5) There’s incus-migrate, which is not supplied with incus by default. It’s a very manual process:

  • It refuses to run unless it’s run as root (this should not be necessary as I’m just reading a VM file)

    $ ./bin.linux.incus-migrate.x86_64
    Error: This tool must be run as root
    
  • It won’t talk to the local incus server over a Unix socket:

    $ sudo ./bin.linux.incus-migrate.x86_64
    Please provide Incus server URL: unix://
    Error: Failed to get remote certificate: Get "unix://:8443": unsupported protocol scheme "unix"
    

    As a result you have to jump through the trust setup hoops.
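
For the record, the extra hoops were roughly the following (from memory, so the exact commands may differ for your version):

incus config set core.https_address :8443
incus config trust add migrate   # "migrate" is just an arbitrary client name; this prints a token to give to incus-migrate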

However, in the end I did manage to transfer an image this way, although it was not straightforward. This was the result:

# file /var/lib/incus/storage-pools/default/virtual-machines/myvm/root.img
/var/lib/incus/storage-pools/default/virtual-machines/myvm/root.img: QEMU QCOW2 Image (v3), 42949672960 bytes

(I note this has been imported as qcow2 rather than raw)

2. Booting non-UEFI images

Having imported an image via incus-migrate, I’m having trouble booting it:

BdsDxe: failed to load Boot0001 "UEFI QEMU QEMU HARDDISK " from PciRoot(0x0)/Pci(0x1,0x1)/Pci(0x0,0x0)/Scsi(0x0,0x1): Not Found
>>Start PXE over IPv4.
...

This image does boot just fine under libvirt, and also under GNS3. I’ve checked it has security.secureboot=false.

# incus config show -e myvm
architecture: x86_64
config:
  security.secureboot: "false"
  volatile.cloud-init.instance-id: c865bb8b-dae4-461c-9e3b-19c9b4061f6a
  volatile.eth0.host_name: tap182b5fca
  volatile.eth0.hwaddr: 00:16:3e:38:6c:71
  volatile.last_state.power: RUNNING
  volatile.uuid: f1c132af-8ef5-4f44-81de-bbe15eff94d9
  volatile.uuid.generation: f1c132af-8ef5-4f44-81de-bbe15eff94d9
  volatile.vsock_id: "3686203789"
devices:
  eth0:
    name: eth0
    network: incusbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

I’m wondering if I have to override some settings in qemu.conf, but I don’t know what those might be.

Oddly, I can’t see any reference to the /var/lib/incus/... storage directory in the qemu process. It’s not on the qemu command line:

incus     168039 27.6  0.1 1686292 109816 ?      Sl   13:26   5:37 /opt/incus/bin/qemu-system-x86_64 -S -name myvm -uuid f1c132af-8ef5-4f44-81de-bbe15eff94d9 -daemonize -cpu host,hv_passthrough -nographic -serial chardev:console -nodefaults -no-user-config -sandbox on,obsolete=deny,elevateprivileges=allow,spawn=allow,resourcecontrol=deny -readconfig /run/incus/myvm/qemu.conf -spice unix=on,disable-ticketing=on,addr=/run/incus/myvm/qemu.spice -pidfile /run/incus/myvm/qemu.pid -D /var/log/incus/myvm/qemu.log -smbios type=2,manufacturer=LinuxContainers,product=Incus -runas incus

And it’s not in qemu.conf:

# grep /var/lib /run/incus/myvm/qemu.conf
path = "/var/lib/incus/devices/myvm/config.mount"
# grep img /run/incus/myvm/qemu.conf
#

3. Converting a custom volume to an incus image or a VM

Related to question 1: I am trying to build VM images, using an incus VM as the builder, with a second disk attached (i.e. a custom volume, which appears as /dev/sdb) that I mount and build the image into.

Once I’ve done this, what’s the simplest way to:

  1. Convert this custom volume into either an incus image or an incus VM instance, so I can boot it for testing?
  2. Convert this custom volume into a qcow2 file that I can take elsewhere to run on a different VM platform?

I can’t see a direct way (from the incus command line) to publish a custom storage volume as an image; nor to launch a VM with a custom storage volume as its root.

option 1: Given dir storage, I guess I could pick up /var/lib/incus/storage-pools/default/custom/POOL_VOL/root.img directly. That’s very unofficial, though. It appears to be a raw image, rather than qcow2:

# file /var/lib/incus/storage-pools/default/custom/default_testzfs/root.img
/var/lib/incus/storage-pools/default/custom/default_testzfs/root.img: DOS/MBR boot sector; partition 1 : ID=0xee, start-CHS (0x0,0,2), end-CHS (0x3ff,255,63), startsector 1, 41943039 sectors, extended partition table (last)
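
If picking the file up like that is acceptable, then converting it for use elsewhere (my question 2) would presumably just be a qemu-img step, something like:

qemu-img convert -f raw -O qcow2 /var/lib/incus/storage-pools/default/custom/default_testzfs/root.img testzfs.qcow2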

option 2: I can do incus storage volume export, which gives me:

$ tar -tvzf default_testzfs.tgz
-rw-r--r-- root/root       325 2024-03-25 10:06 backup/index.yaml
-rw------- root/root 21474836480 2024-03-25 10:06 backup/volume.img

But when I untar it, it loses its sparseness:

$ tar -S -xvzf default_testzfs.tgz
backup/index.yaml
backup/volume.img
$ ls -ls backup/
total 20971548
       4 -rw-r--r-- 1 nsrc nsrc         325 Mar 25 10:06 index.yaml
20971544 -rw------- 1 nsrc nsrc 21474836480 Mar 25 10:06 volume.img
$ du -sch backup
21G	backup
21G	total

I can recover that by sending it through qemu-img convert, but it does require a potentially very large temporary file first. (tar is supposed to handle sparse files, so I wonder if that’s a bug?)
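
For reference, the conversion is something along these lines; since qcow2 only stores allocated clusters, the result comes out compact again:

qemu-img convert -f raw -O qcow2 backup/volume.img volume.qcow2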

Thanks,

Brian.

If you intend for the image to be used to create multiple instances, then generating a metadata tarball and importing both the metadata tarball and qcow2 image with incus image import is definitely the way to go.

Incus 0.7 will include a new incus-simplestreams tool which has a generate-metadata command you can use to generate that tarball by answering a few questions.

If this is just a one-time thing, then your best bet is to use qemu-img to convert it from qcow2 to a raw disk image, at which point you can use the incus-migrate tool to create a new instance from that raw disk image.
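
That is, roughly:

qemu-img convert -f qcow2 -O raw foo.qcow2 foo.img
incus-migrate   # give it foo.img when prompted for the disk, partition or raw image file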

For UEFI: what you did above only turned off UEFI SecureBoot; the firmware is still UEFI.
If you need a regular BIOS, you need to both turn off SecureBoot AND set security.csm=true to get the legacy compatibility firmware running.
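
So something along the lines of (taking effect on the next start of the VM):

incus config set myvm security.secureboot=false
incus config set myvm security.csm=true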

Thanks for that. It’s a one-time migration I’m thinking of here, not launching multiple clones from the same base (in which case an ‘image’ makes perfect sense).

Part of my problem was passing a qcow2 file to incus-migrate, which it happily accepted as if it were a raw image. I’ve raised an issue for that: incus-migrate should detect non-raw images · Issue #658 · lxc/incus · GitHub
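
In the meantime, a quick sanity check before handing a file to incus-migrate is to look at the “file format” line from qemu-img info, which should say raw:

qemu-img info foo.img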

Trying to use incus-migrate to import a raw img file into an instance and getting the following:

root@incus1:/home/user# incus-migrate
The local Incus server is the target [default=yes]:
Would you like to create a container (1) or virtual-machine (2)?: 2
Project to create the instance in [default=default]:
Name of the new instance: vmname
Please provide the path to a disk, partition, or raw image file: /home/user/system.img
Does the VM support UEFI booting? [default=yes]:
Does the VM support UEFI Secure Boot? [default=yes]:

Instance to be created:
  Name: vmname
  Project: default
  Type: virtual-machine
  Source: /home/user/system.img

Additional overrides can be applied at this stage:
1) Begin the migration with the above configuration
2) Override profile list
3) Set additional configuration options
4) Change instance storage pool or volume size
5) Change instance network

Please pick one of the options above [default=1]: 1
Error: Source instance name must be provided for cluster member move

Any ideas? Did a search on the error and found nothing.

I use standalone incus servers, not a cluster, so I’ve not come across this error.

In the source code:

func createFromMigration(ctx context.Context, s *state.State, r *http.Request, projectName string, profiles []api.Profile, req *api.InstancesPost) response.Response {
...
        // Decide if this is an internal cluster move request.
        var clusterMoveSourceName string
        if r != nil && isClusterNotification(r) {
                if req.Source.Source == "" {
                        return response.BadRequest(fmt.Errorf("Source instance name must be provided for cluster member move"))
                }

                clusterMoveSourceName = req.Source.Source
        }
func isClusterNotification(r *http.Request) bool {
        return r.Header.Get("User-Agent") == clusterRequest.UserAgentNotifier
}

I have also tried specifying the host with a URL when asked, instead of accepting that it is local (even though it is local), and the code then determines that it is local anyway and gives the same error.

Not sure why it thinks this is a cluster member move.

Guess I will open a ticket on GitHub.