Root storage device missing from "lxc config show" command

My objective is to resize the operating system disk for a Linux virtual machine that I created using lxc.

When I run lxc config show myvm01, I cannot see any disk / block storage devices attached to the virtual machine.

architecture: x86_64
config:
  image.architecture: amd64
  image.description: ubuntu 22.04 LTS amd64 (release) (20230302)
  image.label: release
  image.os: ubuntu
  image.release: jammy
  image.serial: "20230302"
  image.type: disk-kvm.img
  image.version: "22.04"
  limits.cpu: "2"
  limits.memory: 4GB
  volatile.base_image: 8b150d2b1813da98b1fe2984a8ad7f9531c57c4a0689bbabf1f8322e853c4032
  volatile.cloud-init.instance-id: e34ad528-5252-468f-a35f-084bb94832d1
  volatile.eth1.hwaddr: 00:16:3e:32:cd:7a
  volatile.last_state.power: STOPPED
  volatile.last_state.ready: "false"
  volatile.uuid: 47debd12-9653-4fd9-b802-231575777ecb
  volatile.vsock_id: "22"
devices: {}
ephemeral: false
profiles:
- bridged
stateful: false
description: ""

When I run the lxc config device list myvm01 command, the output is completely empty.

However, I can run the lxc config device override myvm01 root size=15GB command, and it successfully resizes the disk.

Question: Why is the root storage device completely missing from the output of lxc config show and lxc config device list commands? This is incredibly confusing.

That’s because without the --expanded flag, lxc config show only displays the instance’s local config, not the config applied from its profile(s). The root disk device lives in the profile, which is why it doesn’t appear in the instance’s own device list.
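A quick way to see the difference (a sketch; myvm01 and the bridged profile name are taken from this thread):

```shell
# Local config only -- the root disk inherited from the profile is hidden:
lxc config show myvm01

# Expanded config -- profile-provided devices such as the root disk appear:
lxc config show --expanded myvm01

# The inherited root device is actually defined in the profile itself:
lxc profile show bridged
```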


What’s the default rootfs state size? This was unclear to me when I tried to create a stateful snapshot of a default LXD VM.

It’s effectively zero if you’re using a storage pool that has default size restrictions on volumes.


LXD VMs do have a small filesystem volume for storing configuration and other support files (such as the lxd-agent and its certificates), but when doing stateful snapshots or migrations it needs somewhere to store the saved state.

LXD won’t store it in the root filesystem of the host, as that could allow one to accidentally fill up the filesystem with potentially large state files. Instead, you need to set size.state on the root disk device to indicate the maximum size you’re willing to allow a stateful dump to consume.
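For example (5GiB is an illustrative value; pick something that fits your pool and leaves room for the full RAM contents):

```shell
# If the root disk still comes from a profile, override it onto the
# instance while setting the key:
lxc config device override myvm01 root size.state=5GiB

# If the instance already has a local root device, set the key directly:
lxc config device set myvm01 root size.state=5GiB
```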

Yeah, I encountered that when I tried to take a stateful snapshot. I was told, by the error message, that the rootfs state size had to be smaller than the allotted amount of RAM. Kinda confusing, since I didn’t know what the default / current state size is. I manually set it to a smaller size, and that’s when I ran into my other issue with taking stateful snapshots.

I think it has to be bigger than the RAM size. What specific errors did you get?

Let me try to repro.

# lxc launch --vm images:ubuntu/23.04 u01
# lxc snapshot --stateful u01
Error: Stateful snapshot requires migration.stateful to be set to true
# lxc stop u01
# lxc config edit u01 # Add config.migration.stateful: true
# lxc start u01
Error: Stateful start requires that the instance limits.memory is less than size.state on the root disk device
# lxc config show --expanded u01
architecture: x86_64
config:
  image.architecture: amd64
  image.description: Ubuntu lunar amd64 (20230619_07:42)
  image.os: Ubuntu
  image.release: lunar
  image.serial: "20230619_07:42"
  image.type: disk-kvm.img
  image.variant: default
  migration.stateful: "true"
  volatile.base_image: bba7d5687f728e476f38ffe611cf4de3aadfafde651eccffbb186af7d04457ab
  volatile.cloud-init.instance-id: 2b353a29-4f77-4d2c-ac70-f78a6fc72bf5
  volatile.eth0.hwaddr: 00:16:3e:8c:b5:15
  volatile.last_state.power: STOPPED
  volatile.last_state.ready: "false"
  volatile.uuid: 7efd6f92-1a95-4390-b0a4-3c6168fb7027
  volatile.uuid.generation: 7efd6f92-1a95-4390-b0a4-3c6168fb7027
  volatile.vsock_id: "32"
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

Yeah, that makes sense now; it’s saying the memory limit has to be less than the root disk’s size.state.

The default VM memory limit is 1GiB.
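Putting those together, the stateful start error in the repro above should clear with something like this (2GiB is an illustrative value; it just needs to exceed limits.memory, which defaults to 1GiB for VMs):

```shell
# Give the root disk a state volume larger than limits.memory,
# then start the VM and take the stateful snapshot:
lxc config device override u01 root size.state=2GiB
lxc start u01
lxc snapshot --stateful u01
```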


I guess I had that backwards, sorry.
