Libvirtd inside LXC - Error: virtiofs requires shared memory

Hi, I’ve set up an LXC container where I can successfully run nested virtualization (libvirt).
But for some months now, following package updates, I can no longer start a VM with libvirt (Vagrant) inside the LXC container. The libvirtd configuration inside the container is the default.

I get this error:

Error while attaching new device to domain. Call to virDomainAttachDeviceFlags failed: unsupported configuration: 'virtiofs' requires shared memory

Do you have any solutions that could help me, please?

My Vagrantfile

# Molecule managed
Vagrant.configure('2') do |config|
  if Vagrant.has_plugin?('vagrant-cachier')
    config.cache.scope = 'machine'
  end
  config.vm.define "ansible-lvm-2375154-D11" do |c|
    ##
    # Box definition
    ##
    c.vm.box = "generic/debian11"
    ##
    # Config options
    ##
    c.vm.synced_folder ".", "/vagrant", disabled: true
    c.ssh.insert_key = true
    c.vm.hostname = "ansible-lvm-D11"
    ##
    # Network
    ##
    ##
    # instance_raw_config_args
    ##
    ##
    # Provider
    ##
    c.vm.provider "libvirt" do |libvirt, override|
      libvirt.memory = 512
      libvirt.driver = "kvm"
      libvirt.nic_model_type = "virtio"
      libvirt.qemu_use_session = false
      libvirt.storage :file, :size => '1G', :device => 'sdb'
      libvirt.storage :file, :size => '1G', :device => 'sdc'
      libvirt.channel :type => 'unix', :target_name => 'org.qemu.guest_agent.0', :target_type => 'virtio'
    end
  end
end

LXC container config

# lxc config show gitlab-runner-lxc-molecule-01
architecture: x86_64
config:
  image.architecture: amd64
  image.description: Ubuntu jammy amd64 (20220606_08:50)
  image.os: Ubuntu
  image.release: jammy
  image.serial: "20220606_08:50"
  image.type: squashfs
  image.variant: cloud
  limits.memory.swap: "false"
  linux.kernel_modules: ip_tables,ip6_tables,netlink_diag,nf_nat,overlay,br_netfilter
  volatile.base_image: b9bd1a4570080612164741f4b90200ff32762632b421cdb95dd9a44067774395
  volatile.cloud-init.instance-id: cadd34de-9176-41fd-9fd8-8bb9868fef09
  volatile.idmap.base: "0"
  volatile.idmap.current: '[]'
  volatile.idmap.next: '[]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: RUNNING
  volatile.uuid: fefaf272-81ce-4064-a3d6-c2813477451c
  raw.lxc: |
    lxc.apparmor.profile=unconfined
    lxc.mount.auto=proc:rw sys:rw
    lxc.cap.drop=
    lxc.cgroup.devices.allow=a
  security.nesting: "true"
  security.privileged: "true"
  security.syscalls.intercept.mknod: "true"
  security.syscalls.intercept.setxattr: "true"
devices:
  aadisable:
    path: /sys/module/apparmor/parameters/enabled
    source: /dev/null
    type: unix-char
  aadisable2:
    source: /dev/kmsg
    type: unix-char
  kvm:
    source: /dev/kvm
    type: unix-char
  root:
    path: /
    pool: default
    type: disk
  shared_etc_ssh_authorized_keys:
    path: /etc/ssh/authorized_keys
    source: /etc/ssh/authorized_keys
    type: disk
  shared_root_ssh:
    path: /root/.ssh
    source: /root/.ssh
    type: disk
  tun:
    source: /dev/net/tun
    type: unix-char
  vhost-net:
    source: /dev/vhost-net
    type: unix-char
  vhost-vsock:
    source: /dev/vhost-vsock
    type: unix-char
ephemeral: false
profiles:
- default
stateful: false
description: ""

Versions

Host

  • Ubuntu 20.04.5 LTS (LXC host)
  • libvirtd (libvirt) 6.0.0
  • QEMU emulator version 4.2.1 (Debian 1:4.2-3ubuntu6.23)
  • lxc --version: 5.7
  • snap list
Name    Version      Rev    Tracking       Publisher   Notes
core18  20221027     2620   latest/stable  canonical✓  base
core20  20221027     1695   latest/stable  canonical✓  base
lxd     5.7-c62733b  23889  latest/stable  canonical✓  -
snapd   2.57.5       17576  latest/stable  canonical✓  snapd
  • uname -a
    Linux gitlab-runners 5.15.0-52-generic #58~20.04.1-Ubuntu SMP Thu Oct 13 13:09:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

Guest

  • Ubuntu 22.04.1 LTS
  • libvirtd (libvirt) 8.0.0
  • QEMU emulator version 6.2.0 (Debian 1:6.2+dfsg-2ubuntu6.5)

Best regards

Hi!

My guess is that the problem is not directly connected to LXC. It looks like before the upgrade, 9pfs was used for the shared folder between the VM and your container, but after the upgrade virtiofs is used instead. Virtiofs requires a shared memory region between the VM and the host (in your case, the QEMU process inside the container).

Please refer to libvirt: Sharing files with Virtiofs
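
Since you’re using vagrant-libvirt, the usual way to satisfy this is to give the VM shared memory backing. Something like the following in the provider block should do it (a sketch; the memorybacking option needs a reasonably recent vagrant-libvirt release, so check what your plugin version supports):

c.vm.provider "libvirt" do |libvirt, override|
  # Back guest RAM with a shared memfd region so that virtiofs, which
  # maps guest memory from the host side, can use it.
  # memfd backing needs QEMU >= 4.0; your in-container QEMU is 6.2.
  libvirt.memorybacking :source, :type => 'memfd'
  libvirt.memorybacking :access, :mode => 'shared'
end

This should generate the equivalent of <memoryBacking><source type='memfd'/><access mode='shared'/></memoryBacking> in the domain XML, which is what that page describes.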

Feel free to ask if anything is unclear

Regards,
Alex

We still export the shares via 9p as well as virtiofs, so if you manually unmount the virtiofs mounts and remount them as 9p it should work.
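
For example, inside the guest (the mount tag and mount point here are illustrative; list the active virtiofs mounts with findmnt -t virtiofs to find yours):

# swap a virtiofs mount for its 9p equivalent, keeping the same mount tag
# (tag "vagrant" and mount point /vagrant are just placeholders)
umount /vagrant
mount -t 9p -o trans=virtio,version=9p2000.L vagrant /vagrant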

But I’m a bit confused. You talk of nested virt, but running a VM inside a container isn’t nested virt; there’s only one VM. And LXD doesn’t use 9p/virtiofs with containers.

No, I’m not talking about nested virtualization here. I’m talking about the interaction between vagrant <-> libvirt <-> qemu inside the container. It looks like some package update inside the container changed the default VM configuration, so it now uses virtiofs by default instead of 9pfs. There is no problem on the LXC/LXD side.
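
You can confirm this inside the container by dumping the domain XML that vagrant-libvirt generated and checking which filesystem driver it requests (the domain name below is just an example; find yours with virsh list --all):

# inspect the filesystem device of the generated domain
# (domain name is illustrative -- list yours with `virsh list --all`)
virsh dumpxml ansible-lvm-2375154-D11 | grep -B1 -A3 '<filesystem'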
