Running virtual machines with LXD 4.0

I suspect we’ll eventually revamp lxd-p2c and turn it into an lxd-importer capable of importing both systems into containers and disks into virtual machines.

But that’s not on our roadmap for the next 6 months, and it would realistically fail for most existing virtual machines anyway, since most solutions out there don’t use the combination of UEFI and virtio devices that we ourselves use.

@stgraber I tried to install VMware ESXi 6.5 into an LXD VM, but it ran into unknown network adapters because the default network adapter is virtio.

In fact, I had previously installed VMware ESXi 6.5 under KVM successfully with the e1000 network adapter model.

How can I change the default network adapter model of an LXD VM?

This isn’t supported. You could try to hack something together by using raw.qemu but I’m also not sure what exactly will happen after that. I doubt that ESXi is going to be very happy with running in nested virtualization.
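For what it's worth, a rough sketch of what that hack could look like (the eth0 device name, the rawnet0 id and the lxdbr0 bridge are all assumptions, and the QEMU bridge helper may not work from inside the snap's confinement):

# Remove the LXD-managed virtio NIC so only the raw one remains (assuming it's named eth0)
lxc config device remove esxi6 eth0
# Hand QEMU an e1000 NIC directly, attached to the lxdbr0 bridge
echo -n '-netdev bridge,id=rawnet0,br=lxdbr0 -device e1000,netdev=rawnet0' | lxc config set esxi6 raw.qemu -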

Given that the instance name is esxi6, /var/snap/lxd/common/lxd/logs/esxi6/qemu.conf defines the default network adapter.

After reading the LXD source code, I found that it is generated from a hard-coded template.

Hi,
thanks for these steps to install a Windows virtual machine.
I installed it successfully with the Windows 10 Pro for Workstations edition.
A small question: if I publish the image, is it available to others? And is there also a list of ready-to-use Windows VM images, like there is for the Linux distributions?

thanks

lxc publish only publishes to your local image store on your machine.
You can mark the image public, which then allows anyone who can access your system over the network to pull the image, though I wouldn’t recommend this with a Windows image.
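For example, assuming your instance is called win10 and is stopped (names here are placeholders):

lxc publish win10 --alias win10-base   # creates the image in your local store
lxc image show win10-base              # inspect it, including the "public" field
lxc image edit win10-base              # set "public: true" only if you really want to allow remote pulls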

There are no public Windows images ready for LXD consumption, and if there were, Windows licensing terms would not allow us to distribute them anyway.

Sorry if my question is annoying; I just don’t fully understand how LXD manages VMs.
I want to use LXD LTS on my new host in production, and since LXD can now also run VMs, can I avoid setting up KVM separately on that host?
I used KVM only for Windows on the previous host and I want to use only LXD on the new one; honestly I don’t have any issue with KVM, I just prefer to run a single piece of software.
I have to run SQL Server on a Windows Server 2016 VM.
Any suggestions?

Thanks

I am trying to boot the RockNSM Linux ISO, but I must be missing a step. I used your Windows example and did the following:

  • lxc init rocknsm --empty --vm -c security.secureboot=false -c limits.cpu=4 -c limits.memory=8GB
  • lxc config device override rocknsm root size=256GB
  • echo -n '-drive file=/path/to/rocknsm.iso,index=0,media=cdrom,if=ide' | lxc config set rocknsm raw.qemu -
  • lxc start rocknsm ; lxc console rocknsm
  • I spam ESC like mad, but it never seems to do anything and the VM starts going through the remote network boot steps. Eventually the UEFI shell comes up. Is there something like lxc config set rocknsm boot=bootdrive, or another way to avoid the remote boot timeout?
  • Hitting ENTER in the shell loads the BIOS emulator, and I can select the Boot Manager and choose UEFI QEMU DVD_ROM QM00001.
  • If I select that as the boot device, I am immediately in GRUB for the ISO. However, when I select the install option, I get a blank screen that never changes.

Watching iostat shows no I/O happening, and watching top shows no processor usage. When I strace the qemu-system-x86 process, it is constantly in ppoll.

Any ideas? Is there a better way to debug? I am not much of a GRUB person and don’t know what to look for in the shell if I break out of the menu with c.

I am really hoping to be able to use LXD instead of dragging in the X libraries and 172 packages that virt-install would require. Thanks!

Another question, about this feature:
Hardware passthrough support for GPU, USB, NIC, disks and more

Is this also possible for VMs, and also with a Windows guest?

Yes, but since it’s a virtual machine, things work a bit differently.

Any hardware passthrough requires your physical system to have properly segmented PCI devices and to support IOMMU at both the firmware and kernel level.
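To check whether that’s the case on a given host, a generic sketch (not LXD-specific):

# Confirm the kernel brought the IOMMU up (you may need intel_iommu=on or amd_iommu=on on the kernel command line)
dmesg | grep -i -e DMAR -e IOMMU
# List the IOMMU groups; devices sharing a group have to be passed through together
find /sys/kernel/iommu_groups/ -type l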

When that’s the case, you can then do physical passthrough of a GPU or NIC. However, keep in mind that unlike with containers, the hardware cannot be shared with a VM and so will disappear from the host; this can be a bit of an issue with GPUs at least.

Disks are quite a bit easier and should work pretty much like in containers by using 9p. This may not be easy for Windows to consume though.

USB isn’t something we’ve dealt with yet but is planned to be looked at in the next 6 months or so.
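For reference, the device syntax is the same as for containers. A sketch, assuming a VM called win2016, a spare NIC enp5s0 and a host directory /srv/data (all placeholder names):

# Full GPU passthrough; the card is detached from the host while the VM runs
lxc config device add win2016 gpu1 gpu
# Physical NIC passthrough
lxc config device add win2016 eth1 nic nictype=physical parent=enp5s0
# Share a host directory into the VM (exposed over 9p)
lxc config device add win2016 data disk source=/srv/data path=/mnt/data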

I’m interested in disk passthrough, but if I understand correctly, do you recommend using LXD’s storage configuration to assign a new disk to the VM, rather than something like RDM, especially for Windows?

I do the same thing but use a VNC connection, as for the Windows VM:
echo -n '-device virtio-vga -vnc :1 -drive file=/path/to/rear.iso,index=0,media=cdrom,if=ide' | lxc config set clonecharles raw.qemu -

and afterwards I can connect with VNC to localhost:1

Thank you, that worked perfectly!

Do I have to install any other packages on a new server besides LXD, for example libvirtd or qemu? On a new installation I can’t start the VNC connection.
What I want is to install an LXD VM remotely through a VNC connection, on another server where I don’t install any graphical environment, and after the installation to keep using the VNC connection like a remote console.

thanks

No extra packages needed; passing -vnc :1 through raw.qemu should work fine for now. We are working on a graphical remote lxc console integration, but it’s harder to do than it sounds :slight_smile:
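Since the target server has no graphical environment, one simple option is to tunnel the VNC port over SSH (a sketch, assuming -vnc :1, i.e. TCP port 5901, and SSH access as user@server):

# On your workstation: forward local port 5901 to the VNC display on the server
ssh -N -L 5901:localhost:5901 user@server
# Then point any VNC viewer at localhost:5901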

I’m really embarrassed, it was just a firewall problem!!
Sorry :frowning:

Hi, do you think it is possible/feasible to run pfSense inside an LXD VM?
Thank you.

I believe pfSense is FreeBSD-based, and at least one user has been struggling to get working install media with virtio-scsi support. I don’t know whether the pfSense install media is any different; if it is, then things should work fine.

I have had success running VMs in the past, but I’m on a new install of 20.04 and I’m unable to create VMs with any image. LXD is creating unbootable QEMU VMs.

snap-id:      J60k4JY0HppjwOjW8dZdYc8obXKxujRu
tracking:     latest/stable
installed:          4.2                    (15753) 74MB -

My storage pool is a dedicated btrfs partition:

$ lxc storage show default
config:
  size: 40GB
  source: /dev/disk/by-uuid/da43bcae-1497-45e7-b17c-512979097fcc
  volatile.initial_source: /dev/nvme0n1p2
description: ""
name: default
driver: btrfs

$ lxc launch images:ubuntu/focal/cloud ubvm --vm

Connecting to the spice console:

Full qemu conf:


# Machine
[machine]
graphics = "off"
type = "q35"
accel = "kvm"
usb = "off"
graphics = "off"

[global]
driver = "ICH9-LPC"
property = "disable_s3"
value = "1"

[global]
driver = "ICH9-LPC"
property = "disable_s4"
value = "1"
[boot-opts]
strict = "on"

# Console
[chardev "console"]
backend = "pty"

# Graphical console
[spice]
unix = "on"
addr = "/var/snap/lxd/common/lxd/logs/ubvm/qemu.spice"
disable-ticketing = "on"

# CPU
[smp-opts]
cpus = "1"
sockets = "1"
cores = "1"
threads = "1"






# Memory
[memory]
size = "1073741824B"

# Firmware (read only)
[drive]
file = "/snap/lxd/current/share/qemu/OVMF_CODE.fd"
if = "pflash"
format = "raw"
unit = "0"
readonly = "on"

# Firmware settings (writable)
[drive]
file = "/var/snap/lxd/common/lxd/virtual-machines/ubvm/qemu.nvram"
if = "pflash"
format = "raw"
unit = "1"

# Qemu control
[chardev "monitor"]
backend = "socket"
path = "/var/snap/lxd/common/lxd/logs/ubvm/qemu.monitor"
server = "on"
wait = "off"

[mon]
chardev = "monitor"
mode = "control"

[device "qemu_pcie0"]
driver = "pcie-root-port"
bus = "pcie.0"
addr = "1.0"
chassis = "0"
multifunction = "on"

# Balloon driver
[device "qemu_balloon"]
driver = "virtio-balloon-pci"
bus = "qemu_pcie0"
addr = "00.0"

multifunction = "on"

# Random number generator
[object "qemu_rng"]
qom-type = "rng-random"
filename = "/dev/urandom"

[device "dev-qemu_rng"]
driver = "virtio-rng-pci"
bus = "qemu_pcie0"
addr = "00.1"

rng = "qemu_rng"


# Input
[device "qemu_keyboard"]
driver = "virtio-keyboard-pci"
bus = "qemu_pcie0"
addr = "00.2"



# Input
[device "qemu_tablet"]
driver = "virtio-tablet-pci"
bus = "qemu_pcie0"
addr = "00.3"



# Vsock
[device "qemu_vsock"]
driver = "vhost-vsock-pci"
bus = "qemu_pcie0"
addr = "00.4"

guest-cid = "16"


# LXD serial identifier
[device "dev-qemu_serial"]
driver = "virtio-serial-pci"
bus = "qemu_pcie0"
addr = "00.5"



[chardev "qemu_serial-chardev"]
backend = "ringbuf"
size = "16B"

[device "qemu_serial"]
driver = "virtserialport"
name = "org.linuxcontainers.lxd"
chardev = "qemu_serial-chardev"
bus = "dev-qemu_serial.0"

[device "qemu_pcie1"]
driver = "pcie-root-port"
bus = "pcie.0"
addr = "1.1"
chassis = "1"


# SCSI controller
[device "qemu_scsi"]
driver = "virtio-scsi-pci"
bus = "qemu_pcie1"
addr = "00.0"



[device "qemu_pcie2"]
driver = "pcie-root-port"
bus = "pcie.0"
addr = "1.2"
chassis = "2"


# Config drive
[fsdev "qemu_config"]
fsdriver = "local"
security_model = "none"
readonly = "on"
path = "/var/snap/lxd/common/lxd/virtual-machines/ubvm/config"

[device "dev-qemu_config"]
driver = "virtio-9p-pci"
bus = "qemu_pcie2"
addr = "00.0"

mount_tag = "config"
fsdev = "qemu_config"
multifunction = "on"

[device "qemu_pcie3"]
driver = "pcie-root-port"
bus = "pcie.0"
addr = "1.3"
chassis = "3"


# GPU
[device "qemu_gpu"]
driver = "virtio-vga"
bus = "qemu_pcie3"
addr = "00.0"



[device "qemu_pcie4"]
driver = "pcie-root-port"
bus = "pcie.0"
addr = "1.4"
chassis = "4"


# Network card ("eth0" device)
[netdev "lxd_eth0"]
type = "tap"
vhost = "on"
ifname = "tapcb642aab"
script = "no"
downscript = "no"

[device "dev-lxd_eth0"]
driver = "virtio-net-pci"
bus = "qemu_pcie4"
addr = "00.0"

netdev = "lxd_eth0"
mac = "00:16:3e:9a:fc:0c"
bootindex = "1"


# root drive
[drive "lxd_root"]
file = "/var/snap/lxd/common/lxd/storage-pools/default/virtual-machines/ubvm/root.img"
format = "raw"
if = "none"
cache = "none"
aio = "native"
discard = "on"

[device "dev-lxd_root"]
driver = "scsi-hd"
bus = "qemu_scsi.0"
channel = "0"
scsi-id = "0"
lun = "1"
drive = "lxd_root"
bootindex = "0"

Thanks