KVM VM to LXD 4.0 VM or container

Hi all

Since LXD 4.0 is now live, what is the best way to convert KVM VMs into LXD VMs or LXC containers?

Is this still the recommended way or has it been superseded?

Interestingly enough, all the guides and Google hits I'm getting are up to 6 years old.

lxd-p2c should work fine to convert a VM or physical system into a container.

If you want to move an existing disk image into an LXD virtual machine, the way I’ve been doing it so far is by creating an empty VM with `lxc init blah --vm --empty`, then replacing its disk image with the one from the existing VM.
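A minimal sketch of that approach, assuming a dir-backed storage pool on a snap install; the VM name `blah` is a placeholder and the exact path and disk filename depend on your storage driver:

```shell
# Create an empty VM (no image) so LXD sets up its config and disk layout
lxc init blah --vm --empty

# On a dir-backed snap install, the VM's disk should live under this
# directory (path is an assumption; adjust for your storage pool driver)
ls /var/snap/lxd/common/lxd/storage-pools/default/virtual-machines/blah/
```

From there you overwrite the disk image LXD created with the (raw) image from your existing VM before starting the instance.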

Note however that LXD’s machine type may not match what you have and so may not boot. LXD only supports Q35 (recent Intel virtualized platform) and UEFI.

If your VM relies on older virtualized hardware (doesn’t support virtio for example) or if it’s using a legacy BIOS rather than UEFI, it will not boot under LXD.

Also, if your VM is UEFI but not secure boot capable, you’ll need to set security.secureboot=false so that it will boot.


Thanks Stephane.

Does security.secureboot=false go in the lxd vm profile?

Are any other changes required in VM profiles, similar to what we do on privileged containers running Docker or Kubernetes for example?

Either in a profile or directly on the instance.
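For example, both forms look like this (the profile name `vm` and instance name `myvm` are placeholders for your own):

```shell
# Option 1: set it on a profile shared by your VMs
lxc profile set vm security.secureboot=false

# Option 2: set it directly on a single instance
lxc config set myvm security.secureboot=false
```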

Most users won’t have a need for that vm profile from the initial announcement, as the images on our image server now work out of the box.

So if you use ubuntu:18.04 you still need the profile to get cloud-init to behave.
If you’re using the upcoming 20.04 image at ubuntu-daily:20.04, it will work out of the box and same goes for any images from images: like images:ubuntu/18.04.

In time, we expect all images on ubuntu: to also include LXD agent support, so they similarly won’t need any fancy vm profile anymore.

Hi Stephane

I'm trying to find the disk file path of the newly created LXD VM so I can move my qcow2 image to LXD.
Any idea where that's located?

Also wondering where the VM HDD config goes; my current KVM image has 2x qcow images, one is dynamic.qcow2 and the other is the main HDD.

LXD stores everything as raw images so you’ll need to use qemu-img to convert to raw.
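The conversion itself is a single qemu-img invocation; the filenames below are placeholders:

```shell
# Convert a qcow2 image to a raw image
# (the raw file will be as large as the virtual disk size)
qemu-img convert -f qcow2 -O raw main-disk.qcow2 main-disk.img

# Sanity-check the result
qemu-img info main-disk.img
```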

The location of the disks differs based on storage pool driver, what are you using?

$ lxc storage ls

Sorry, new to this. I managed to convert the KVM images but I'm not sure where to place them under the LXD snap install…

Have two images on KVM: one HDD is set as boot (1 MB or so), and the second HDD is the main one, 8 GB or so.

I spent some time googling, but couldn't find anything current on where to place the raw images on snap installs.

```
$ lxc storage ls
+---------+-------------+--------+------------------------------------------------+---------+
|  NAME   | DESCRIPTION | DRIVER |                     SOURCE                     | USED BY |
+---------+-------------+--------+------------------------------------------------+---------+
| default |             | dir    | /var/snap/lxd/common/lxd/storage-pools/default | 15      |
+---------+-------------+--------+------------------------------------------------+---------+
```

LXD expects a single disk for the instance so you may need to do some reshuffling within your images to get there.

The per-instance img file in your case will be at /var/snap/lxd/common/lxd/storage-pools/default/virtual-machines/NAME.

Create this first with `lxc init --empty --vm NAME`
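Putting the steps above together, a sketch for a dir-backed pool (the VM name `myvm` is a placeholder, and the `root.img` filename is an assumption based on how dir-backed VMs are stored on a snap install; adjust for your setup):

```shell
# Create the empty VM so LXD sets up its directory and config
lxc init --empty --vm myvm

# Convert the KVM qcow2 disk to raw, writing it over the VM's disk image
# (root.img under the per-instance directory is an assumption for the
# dir driver; other storage drivers keep disks elsewhere)
qemu-img convert -f qcow2 -O raw main-disk.qcow2 \
  /var/snap/lxd/common/lxd/storage-pools/default/virtual-machines/myvm/root.img

# If the guest doesn't support secure boot:
lxc config set myvm security.secureboot=false

lxc start myvm
```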

Ok thanks. Let me try again.

Do I need any changes in the config file, currently it looks like:

```yaml
config:
  user.user-data: |
    #cloud-config
    ssh_pwauth: yes

    users:
      - name: ubuntu
        passwd: "xxxx"
        lock_passwd: false
        groups: lxd
        shell: /bin/bash
        sudo: ALL=(ALL) NOPASSWD:ALL
description: LXD profile for virtual machines
devices:
  config:
    source: cloud-init:config
    type: disk
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: vm
used_by: []
```

Is your default storage pool ZFS? If so, can you run `zfs list` to show the volumes created by LXD?

Thanks