Error when launching VMs: phys-bits too low (36)

Hello!

When launching any virtual machine with Incus 6.14 on Alpine Linux edge, I get the following error message:

~ $ incus launch --vm images:debian/12 winbuilder -c security.secureboot=false
Launching winbuilder
Error: Failed instance creation: Failed to run: forklimits limit=memlock:unlimited:unlimited fd=3 fd=4 -- /usr/bin/qemu-system-x86_64 -S -name winbuilder -uuid abd74554-866a-459a-8d31-89498c66864a -daemonize -cpu host,hv_passthrough,migratable=no,+invtsc -nographic -serial chardev:console -nodefaults -no-user-config -sandbox on,obsolete=deny,elevateprivileges=allow,spawn=allow,resourcecontrol=deny -readconfig /run/incus/winbuilder/qemu.conf -spice unix=on,disable-ticketing=on,addr=/run/incus/winbuilder/qemu.spice -pidfile /run/incus/winbuilder/qemu.pid -D /var/log/incus/winbuilder/qemu.log -rtc base=2025-07-20T11:02:17 -smbios type=2,manufacturer=LinuxContainers,product=Incus -run-with user=nobody: : exit status 1

~ $ incus info winbuilder --show-log
Name: winbuilder
Description: 
Status: STOPPED
Type: virtual-machine
Architecture: x86_64
Created: 2025/07/20 13:02 CEST
Last Used: 1970/01/01 01:00 CET

Log:

qemu-system-x86_64: Address space limit 0xfffffffff < 0x12bfffffff phys-bits too low (36)

I did some searching and found reports that disabling memory hotplugging fixes this for some people, but I don’t know how to do that with Incus. Since this appears to be a hardware-related issue, here is the beginning of incus info --resources:

System:
  UUID: 3d6a4801-50b1-11cb-95c0-eae14eeaecfb
  Vendor: LENOVO
  Product: 4291WF5
  Family: ThinkPad X220
  Version: ThinkPad X220
  Serial: R9G76L7
  Type: unknown
  Chassis:
      Vendor: LENOVO
      Type: Notebook
      Version: Not Available
      Serial: R9G76L7
  Motherboard:
      Vendor: LENOVO
      Product: 4291WF5
      Serial: 1ZK6B18Z7TL
      Version: Not Available
  Firmware:
      Vendor: LENOVO
      Version: 8DET76WW (1.46 )
      Date: 06/21/2018

Load:
  Processes: 252
  Average: 1.33 0.37 0.13

CPU:
  Architecture: x86_64
  Vendor: GenuineIntel
  Name: Intel(R) Core(TM) i5-2520M CPU @ 2.50GHz
  Caches:
    - Level 1 (type: Data): 32KiB
    - Level 1 (type: Instruction): 32KiB
    - Level 2 (type: Unified): 256KiB
    - Level 3 (type: Unified): 3MiB
  Cores:
    - Core 0
      Frequency: 1796Mhz
      Threads:
        - 0 (id: 0, online: true, NUMA node: 0)
        - 1 (id: 1, online: true, NUMA node: 0)
    - Core 1
      Frequency: 1565Mhz
      Threads:
        - 0 (id: 2, online: true, NUMA node: 0)
        - 1 (id: 3, online: true, NUMA node: 0)
  Frequency: 1680Mhz (min: 800Mhz, max: 3200Mhz)

Memory:
  Free: 5.67GiB
  Used: 2.33GiB
  Total: 8.00GiB
~ $ grep -m 1 'address sizes' /proc/cpuinfo
address sizes	: 36 bits physical, 48 bits virtual

Any advice is much appreciated, I’d love to manage my VMs with Incus instead of plain QEMU (which works, by the way)!

I’ve found a solution. The issue appears to be that my system has 8GiB of memory, but Incus sets a QEMU maxmem of 32GiB by default. I was able to start the VM with the following addition to its config:

  raw.qemu.conf: |
    [memory]
    maxmem = "6144M"
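For anyone else hitting this, the override can also be applied from the command line instead of editing the YAML by hand. A sketch, assuming the instance is named winbuilder as above and is stopped (raw.qemu.conf is only read at VM startup):

```shell
# Store the maxmem override in the instance's raw.qemu.conf key.
# Pick a value below what your CPU's phys-bits can actually address.
incus config set winbuilder raw.qemu.conf "$(printf '[memory]\nmaxmem = "6144M"\n')"

# Verify the key was stored, then start the VM.
incus config get winbuilder raw.qemu.conf
incus start winbuilder
```

Putting the same snippet into a profile instead applies it to every VM on the host.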

What does incus profile show default say?

This is on an Ubuntu 22.04 system with 16GiB and incus 6.0.4:

# incus launch images:ubuntu/24.04/cloud --vm test
# incus exec test free
Error: VM agent isn't currently running
# incus exec test free
               total        used        free      shared  buff/cache   available
Mem:          964144      251024      744096       19852      100236      713120
Swap:              0           0           0

i.e. I get 1GiB by default, and I have no problem running on a system with <32GiB.

As for address sizes:

# grep -m 1 'address sizes' /proc/cpuinfo
address sizes	: 39 bits physical, 48 bits virtual
# grep -m 1 'model name' /proc/cpuinfo
model name	: Intel(R) N100

Yes, the memory size is 1GiB by default; however, the QEMU maxmem was 32GiB in my case, see /run/incus/<instance>/qemu.conf. But I found that 8GiB is not actually the limit on my machine either: the highest working maxmem is somewhere between 21000MiB and 22000MiB. I assume the phys-bits determine the amount of addressable memory; for my CPU that seems to be less than 32GiB, so QEMU complains about a maxmem of 32GiB for good reason.

# incus profile show default
config:
  raw.qemu.conf: |
    [memory]
    maxmem = "6144M"
  security.secureboot: "false"
description: Default Incus profile
devices:
  eth0:
    name: eth0
    network: incusbr0
    type: nic
  root:
    path: /
    pool: zfs
    type: disk
name: default
used_by:
- ...
~ $ grep -m 1 'model name' /proc/cpuinfo
model name	: Intel(R) Core(TM) i5-2520M CPU @ 2.50GHz
~ $ grep -m 1 'address sizes' /proc/cpuinfo
address sizes	: 36 bits physical, 48 bits virtual

Probably a difference between 6.0.4 and 6.14 then. On my 6.0.4 system /run/incus/test/qemu.conf does not have a maxmem parameter; it has:

# Memory
[memory]
size = "1024M"
...
[object "mem0"]
qom-type = "memory-backend-memfd"
size = "1024M"
share = "on"

Anyway, glad that you have a solution, and thanks for sharing it in case anyone else is affected.

Good to know, thanks for checking! Do you think this would warrant opening an issue, given that more systems may be affected? I’m sure the ThinkPad X220 (and older hardware) is not very rare among Linux users.

Checking in the 6.14 source for maxmem:

./internal/server/instance/drivers/driver_qemu_templates.go:    // Sets fixed values for slots and maxmem to support memory hotplug.
./internal/server/instance/drivers/driver_qemu_templates.go:                    "maxmem": fmt.Sprintf("%dM", opts.maxSizeMB),
./internal/server/instance/drivers/driver_qemu_templates.go:                    // That's even with maxmem capped at the total system memory.
./internal/server/instance/drivers/driver_qemu_templates.go:    if section.Entries["size"] == section.Entries["maxmem"] {
./internal/server/instance/drivers/driver_qemu_config_test.go:                  maxmem = "16384M"
./internal/server/instance/drivers/driver_qemu_config_test.go:                  maxmem = "16384M"

qemuMemoryOpts has two members:

type qemuMemoryOpts struct {
        memSizeMB int64
        maxSizeMB int64
}

It’s initialized in /internal/server/instance/drivers/driver_qemu.go in the addCPUMemoryConfig() function.

The relevant logic is:

                // Reduce the maximum by one bit to allow QEMU some headroom.
                cpuPhysBits--

                // Calculate the max memory limit.
                maxMemoryBytes = int64(math.Pow(2, float64(cpuPhysBits)))

                // Cap to 1TB.
                if maxMemoryBytes > 1024*1024*1024*1024 {
                        maxMemoryBytes = 1024 * 1024 * 1024 * 1024
                }
...
		*conf = append(*conf, qemuMemory(&qemuMemoryOpts{memSizeBytes / 1024 / 1024, maxMemoryBytes / 1024 / 1024})...)

Your system reports physBits of 36, which corresponds to 64GiB of addressable memory; reducing by one bit halves that to the 32GiB you observe.
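That derivation is short enough to reproduce directly; a quick sketch of my own (mirroring the quoted Go logic, not Incus code itself):

```python
# Reproduce Incus's maxmem derivation from the CPU's physical address bits.
def incus_maxmem_bytes(cpu_phys_bits: int) -> int:
    # Reduce the maximum by one bit to allow QEMU some headroom,
    # as in addCPUMemoryConfig().
    cpu_phys_bits -= 1
    max_memory_bytes = 2 ** cpu_phys_bits
    # Cap to 1TiB.
    return min(max_memory_bytes, 1024 ** 4)

# 36 phys-bits (the i5-2520M above) -> the 32GiB maxmem seen in qemu.conf.
print(incus_maxmem_bytes(36) // 1024 ** 3, "GiB")  # → 32 GiB
# 39 phys-bits (the N100 above) -> 256GiB, still under the 1TiB cap.
print(incus_maxmem_bytes(39) // 1024 ** 3, "GiB")  # → 256 GiB
```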

The question then is where the problem lies that prevents QEMU from starting unless maxmem is ~21GiB or less. Is this a quirk of your Sandy Bridge CPU (launched in 2011)? And if so, should Incus carry workarounds for old CPUs? Or is it a bug in QEMU? (0x12bfffffff is 75GiB; why does it need that much address space when maxmem is only 32GiB?) (*)

I think it’s worth reporting, as there have been recent changes in this area.

My feeling is that it’s at least worth a note in the documentation. It might also be worth a user-level configuration setting for maxmem, so that it can be more easily overridden than adding raw qemu config.


(*) I think this explains your result: QEMU needs 75GiB of address space but only 64GiB is addressable, an excess of 75 − 64 = 11GiB, so reducing maxmem by 11GiB to 32 − 11 = 21GiB fixes it for you.
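The footnote can be double-checked against the exact values in the QEMU error message; the "constant overhead above maxmem" assumption is mine:

```python
# Values from: "Address space limit 0xfffffffff < 0x12bfffffff phys-bits too low (36)"
GIB = 1024 ** 3

address_space_limit = 0xFFFFFFFFF + 1   # what 36 phys-bits can address
required = 0x12BFFFFFFF + 1             # what QEMU wants for maxmem = 32GiB
maxmem = 32 * GIB

print(address_space_limit // GIB)  # → 64
print(required // GIB)             # → 75

# QEMU's non-RAM overhead above maxmem:
overhead = required - maxmem
# Largest maxmem that fits under the limit, assuming the overhead is constant:
print((address_space_limit - overhead) // GIB)  # → 21
```

That lands right inside the 21000MiB–22000MiB window found experimentally above.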

We have an issue open to add direct control on hotplug amounts for those few weird systems, so you’ll soon be able to fully turn off hotplug support to avoid the issue.