Pinning more than 64 vCPUs to a VM is not reflected in the guest

Creating a VM with 64 pinned cores:

lxc config set sampleVM limits.cpu 0-63
lxc exec sampleVM -- bash -c "lscpu | egrep 'CPU\(s\)'"

shows the 64 cores inside the VM:

CPU(s):                          64
On-line CPU(s) list:             0-63
NUMA node0 CPU(s):               0-63

Now, after updating it to 128 cores (0-127), the guest does not show the additional resources:

lxc config set sampleVM limits.cpu 0-127
lxc exec sampleVM -- bash -c "lscpu | egrep 'CPU\(s\)'"
CPU(s):                          64
On-line CPU(s) list:             0-63
NUMA node0 CPU(s):               0-63
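For clarity, these range specs are inclusive, so 0-63 means 64 vCPUs and 0-127 means 128. A small illustrative shell sketch (the helper name is mine) that counts the vCPUs in a limits.cpu-style spec:

```shell
# Count the CPUs named by a range spec such as "0-63" or "0-19,40"
# (the format used by limits.cpu and by lscpu's CPU lists).
cpu_range_count() {
  local total=0 part lo hi
  for part in $(printf '%s' "$1" | tr ',' ' '); do
    case "$part" in
      *-*) lo=${part%-*}; hi=${part#*-}
           total=$((total + hi - lo + 1)) ;;  # inclusive range
      *)   total=$((total + 1)) ;;            # single CPU id
    esac
  done
  echo "$total"
}
```

So cpu_range_count "0-127" prints 128, which is what the guest should be reporting.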

Any idea why?

I am running snap LXD 4.24, host Ubuntu 20.04, guest Ubuntu 20.04, on an AMD EPYC 7742 64-Core Processor (128 vCPUs). Running the same command on the host shows all vCPUs:

lscpu | egrep 'CPU\(s\)'
CPU(s):                          128
On-line CPU(s) list:             0-127
NUMA node0 CPU(s):               0-127

Any ideas, @stgraber?

Hmm, and that’s a VM, not a container, correct?

If so, can you show cat /var/snap/lxd/common/lxd/sampleVM/logs/qemu.conf ?

That is correct, Mr. Graber, it's a VM. I was not able to find:
/var/snap/lxd/common/lxd/sampleVM/logs/qemu.conf

but maybe you meant:
cat /var/snap/lxd/common/lxd/logs/sampleVM/qemu.conf

Here is the content of it with limits.cpu 0-127:

cat /var/snap/lxd/common/lxd/logs/sampleVM/qemu.conf

# Machine
[machine]
graphics = "off"
type = "q35"
accel = "kvm"
usb = "off"

[global]
driver = "ICH9-LPC"
property = "disable_s3"
value = "1"

[global]
driver = "ICH9-LPC"
property = "disable_s4"
value = "1"
[boot-opts]
strict = "on"

# Console
[chardev "console"]
backend = "pty"

# Memory
[memory]
size = "1024M"

# CPU
[smp-opts]
cpus = "128"
sockets = "1"
cores = "64"
threads = "2"


[object "mem0"]

qom-type = "memory-backend-memfd"
size = "1024M"
policy = "bind"
host-nodes.0 = "0"

[numa]
type = "node"
nodeid = "0"
memdev = "mem0"




[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "0"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "0"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "1"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "1"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "2"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "2"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "3"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "3"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "4"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "4"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "5"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "5"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "6"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "6"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "7"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "7"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "8"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "8"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "9"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "9"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "10"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "10"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "11"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "11"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "12"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "12"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "13"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "13"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "14"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "14"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "15"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "15"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "16"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "16"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "17"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "17"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "18"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "18"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "19"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "19"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "20"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "20"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "21"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "21"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "22"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "22"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "23"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "23"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "24"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "24"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "25"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "25"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "26"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "26"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "27"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "27"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "28"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "28"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "29"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "29"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "30"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "30"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "31"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "31"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "32"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "32"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "33"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "33"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "34"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "34"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "35"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "35"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "36"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "36"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "37"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "37"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "38"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "38"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "39"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "39"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "40"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "40"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "41"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "41"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "42"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "42"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "43"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "43"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "44"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "44"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "45"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "45"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "46"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "46"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "47"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "47"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "48"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "48"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "49"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "49"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "50"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "50"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "51"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "51"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "52"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "52"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "53"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "53"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "54"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "54"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "55"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "55"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "56"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "56"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "57"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "57"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "58"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "58"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "59"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "59"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "60"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "60"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "61"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "61"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "62"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "62"
thread-id = "1"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "63"
thread-id = "0"

[numa]
type = "cpu"
node-id = "0"
socket-id = "0"
core-id = "63"
thread-id = "1"



# Firmware (read only)
[drive]
file = "/snap/lxd/current/share/qemu/OVMF_CODE.fd"
if = "pflash"
format = "raw"
unit = "0"
readonly = "on"

# Firmware settings (writable)
[drive]
file = "/var/snap/lxd/common/lxd/virtual-machines/sampleVM/qemu.nvram"
if = "pflash"
format = "raw"
unit = "1"

# Qemu control
[chardev "monitor"]
backend = "socket"
path = "/var/snap/lxd/common/lxd/logs/sampleVM/qemu.monitor"
server = "on"
wait = "off"

[mon]
chardev = "monitor"
mode = "control"

[device "qemu_pcie0"]
driver = "pcie-root-port"
bus = "pcie.0"
addr = "1.0"
chassis = "0"
multifunction = "on"

# Balloon driver
[device "qemu_balloon"]
driver = "virtio-balloon-pci"
bus = "qemu_pcie0"
addr = "00.0"

multifunction = "on"

# Random number generator
[object "qemu_rng"]
qom-type = "rng-random"
filename = "/dev/urandom"

[device "dev-qemu_rng"]
driver = "virtio-rng-pci"
bus = "qemu_pcie0"
addr = "00.1"

rng = "qemu_rng"


# Input
[device "qemu_keyboard"]
driver = "virtio-keyboard-pci"
bus = "qemu_pcie0"
addr = "00.2"



# Input
[device "qemu_tablet"]
driver = "virtio-tablet-pci"
bus = "qemu_pcie0"
addr = "00.3"



# Vsock
[device "qemu_vsock"]
driver = "vhost-vsock-pci"
bus = "qemu_pcie0"
addr = "00.4"

guest-cid = "95"


# Virtual serial bus
[device "dev-qemu_serial"]
driver = "virtio-serial-pci"
bus = "qemu_pcie0"
addr = "00.5"



# LXD serial identifier
[chardev "qemu_serial-chardev"]
backend = "ringbuf"
size = "16B"

[device "qemu_serial"]
driver = "virtserialport"
name = "org.linuxcontainers.lxd"
chardev = "qemu_serial-chardev"
bus = "dev-qemu_serial.0"

# Spice agent
[chardev "qemu_spice-chardev"]
backend = "spicevmc"
name = "vdagent"

[device "qemu_spice"]
driver = "virtserialport"
name = "com.redhat.spice.0"
chardev = "qemu_spice-chardev"
bus = "dev-qemu_serial.0"

# Spice folder
[chardev "qemu_spicedir-chardev"]
backend = "spiceport"
name = "org.spice-space.webdav.0"

[device "qemu_spicedir"]
driver = "virtserialport"
name = "org.spice-space.webdav.0"
chardev = "qemu_spicedir-chardev"
bus = "dev-qemu_serial.0"

# USB controller
[device "qemu_usb"]
driver = "qemu-xhci"
bus = "qemu_pcie0"
addr = "00.6"
p2 = "4"
p3 = "4"


[chardev "qemu_spice-usb-chardev1"]
  backend = "spicevmc"
  name = "usbredir"

[chardev "qemu_spice-usb-chardev2"]
  backend = "spicevmc"
  name = "usbredir"

[chardev "qemu_spice-usb-chardev3"]
  backend = "spicevmc"
  name = "usbredir"

[device "qemu_spice-usb1"]
  driver = "usb-redir"
  chardev = "qemu_spice-usb-chardev1"

[device "qemu_spice-usb2"]
  driver = "usb-redir"
  chardev = "qemu_spice-usb-chardev2"

[device "qemu_spice-usb3"]
  driver = "usb-redir"
  chardev = "qemu_spice-usb-chardev3"

[device "qemu_pcie1"]
driver = "pcie-root-port"
bus = "pcie.0"
addr = "1.1"
chassis = "1"


# SCSI controller
[device "qemu_scsi"]
driver = "virtio-scsi-pci"
bus = "qemu_pcie1"
addr = "00.0"



[device "qemu_pcie2"]
driver = "pcie-root-port"
bus = "pcie.0"
addr = "1.2"
chassis = "2"


# Config drive (9p)
[fsdev "qemu_config"]
fsdriver = "local"
security_model = "none"
readonly = "on"
path = "/var/snap/lxd/common/lxd/devices/sampleVM/config.mount"

[device "dev-qemu_config-drive-9p"]
driver = "virtio-9p-pci"
bus = "qemu_pcie2"
addr = "00.0"
mount_tag = "config"
fsdev = "qemu_config"
multifunction = "on"

# Config drive (virtio-fs)
[chardev "qemu_config"]
backend = "socket"
path = "/var/snap/lxd/common/lxd/logs/sampleVM/virtio-fs.config.sock"

[device "dev-qemu_config-drive-virtio-fs"]
driver = "vhost-user-fs-pci"
bus = "qemu_pcie2"
addr = "00.1"
chardev = "qemu_config"
tag = "config"


[device "qemu_pcie3"]
driver = "pcie-root-port"
bus = "pcie.0"
addr = "1.3"
chassis = "3"


# GPU
[device "qemu_gpu"]
driver = "virtio-vga"
bus = "qemu_pcie3"
addr = "00.0"



[device "qemu_pcie4"]
driver = "pcie-root-port"
bus = "pcie.0"
addr = "1.4"
chassis = "4"


# root drive
[drive "lxd_root"]
file = "/dev/zvol/nvme0n1/virtual-machines/sampleVM.block"
format = "raw"
if = "none"
cache = "none"
aio = "native"
discard = "on"
media = "disk"
file.locking = "off"
readonly = "off"

[device "dev-lxd_root"]
driver = "scsi-hd"
bus = "qemu_scsi.0"
channel = "0"
scsi-id = "0"
lun = "1"
drive = "lxd_root"
bootindex = "0"


[device "qemu_pcie5"]
driver = "pcie-root-port"
bus = "pcie.0"
addr = "1.5"
chassis = "5"


[device "qemu_pcie6"]
driver = "pcie-root-port"
bus = "pcie.0"
addr = "1.6"
chassis = "6"


[device "qemu_pcie7"]
driver = "pcie-root-port"
bus = "pcie.0"
addr = "1.7"
chassis = "7"


[device "qemu_pcie8"]
driver = "pcie-root-port"
bus = "pcie.0"
addr = "2.0"
chassis = "8"
multifunction = "on"

Ok, the config looks correct, it’s telling QEMU to set up 128 threads over 64 cores.
What OS are you running in the VM?

Host OS: Ubuntu 20.04

Additionally:

  • Guest OS: Ubuntu 20.04
  • CPU: AMD EPYC 7742 64-Core Processor (128 vCPUs)
  • Command used to check CPU count on host/guest: lscpu | egrep 'CPU\(s\)'

Does setting limits.cpu to 128 get you a different behavior?

Nope, here are the commands that I just ran:

lxc config set sampleVM limits.cpu 128
lxc start sampleVM
lxc config show sampleVM | grep limits.cpu
  limits.cpu: "128"

lxc exec sampleVM -- bash -c "lscpu | egrep 'CPU\(s\)'"
CPU(s):                          64
On-line CPU(s) list:             0-63
NUMA node0 CPU(s):               0-63

cat /var/snap/lxd/common/lxd/logs/sampleVM/qemu.conf
.....
# CPU
[smp-opts]
cpus = "128"
sockets = "1"
cores = "128"
threads = "1"
.....
root@v1:~# lscpu | egrep 'CPU\(s\)'
CPU(s):                          128
On-line CPU(s) list:             0-127
NUMA node0 CPU(s):               0-127

Can you post the full lxc config show --expanded sampleVM when running it with limits.cpu=128?

Here is the result:

lxc config show --expanded sampleVM
architecture: x86_64
config:
  image.architecture: amd64
  image.description: ubuntu 20.04 LTS amd64 (release) (20220321)
  image.label: release
  image.os: ubuntu
  image.release: focal
  image.serial: "20220321"
  image.type: disk-kvm.img
  image.version: "20.04"
  limits.cpu: "128"
  volatile.base_image: fa11d27d5042addf8a0c275c8903d313f896c01a2f68eb78d1742295dd3a6fb8
  volatile.eth0.host_name: tap33eb4bd9
  volatile.eth0.hwaddr: 00:16:3e:6b:b7:bc
  volatile.last_state.power: RUNNING
  volatile.uuid: e2577c88-a7e4-4265-92d9-15b3dc6a73d3
  volatile.vsock_id: "95"
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: nvme0n1
    type: disk
ephemeral: false
profiles:
- compute
stateful: false
description: ""

The output above seems correct. Could this flag on the host kernel be the one responsible for it?

cat /etc/default/grub
.........
GRUB_CMDLINE_LINUX_DEFAULT="maybe-ubiquity isolcpus=20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127"
.........
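One thing worth double-checking on a hand-written isolcpus= list this long is that it names exactly the CPUs you intend (the line as pasted above repeats 90,91, for instance). A small illustrative helper (the name is mine) to count the unique CPU ids in such a list:

```shell
# Count the unique CPU ids in a comma-separated isolcpus= list;
# if this differs from the raw comma count, the list has duplicates.
count_isolcpus() {
  printf '%s\n' "$1" | tr ',' '\n' | sort -n | uniq | wc -l
}
```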

It does not seem like it, because even trying to pin cores 20-127 still shows 64 vCPUs inside the OS.

Here is something interesting that I just found while checking the console during the boot of the VM:

........
Using ACPI (MADT) for SMP configuration information
ACPI: HPET id: 0x8086a201 base: 0xfed00000
TSC deadline timer available
smpboot: 108 Processors exceeds NR_CPUS limit of 64
smpboot: Allowing 64 CPUs, 0 hotplug CPUs
KVM setup pv sched yield
[mem 0x40000000-0xafffffff] available for PCI devices
Booting paravirtualized kernel on KVM
........

108 Processors exceeds NR_CPUS limit of 64

According to DuckDuckGo, NR_CPUS is the maximum number of processors that an SMP kernel can support.

But I am not setting it on the host. I will now try to find out where that flag can be set on the Linux kernel, but it seems like it is being set in the guest’s kernel and not on the host.
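A quick way to confirm where the limit comes from is to read the compiled-in value out of a kernel config file; inside the guest that file is usually /boot/config-$(uname -r). A sketch (the helper name is mine):

```shell
# Extract the compiled-in CONFIG_NR_CPUS value from a kernel config
# file; prints nothing if the option is not present.
nr_cpus_limit() {
  sed -n 's/^CONFIG_NR_CPUS=\([0-9]*\)$/\1/p' "$1"
}

# Inside the guest, something like:
#   nr_cpus_limit "/boot/config-$(uname -r)"
```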

What image did you use to create the VM?
lxc launch ubuntu sampleVM --vm
or another one?

I used lxc launch images:ubuntu/20.04/cloud in my case.
So it could just be a difference in kernel, as the ubuntu image, I believe, uses the linux-kvm kernel whereas our images use linux-virtual or linux-generic.

stgraber@dakara:~$ lxc launch images:ubuntu/20.04/cloud v1 -c limits.cpu=128 --vm
Creating v1
Starting v1
stgraber@dakara:~$ lxc exec v1 bash
root@v1:~# grep NR_CPUS /boot/config-5.4.0-105-generic 
CONFIG_NR_CPUS_RANGE_BEGIN=8192
CONFIG_NR_CPUS_RANGE_END=8192
CONFIG_NR_CPUS_DEFAULT=8192
CONFIG_NR_CPUS=8192
root@v1:~# uname -a
Linux v1 5.4.0-105-generic #119-Ubuntu SMP Mon Mar 7 18:49:24 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
root@v1:~# 

You may want to file a bug against Ubuntu’s linux-kvm kernel to get CONFIG_NR_CPUS increased.

Thanks, Mr. Graber.
Creating a VM using images:ubuntu/20.04/cloud instead of just ubuntu took care of the NR_CPUS limit of 64 in the guest kernel. The VM now sees all cores allocated to it, even though cores 20-127 show as offline.

lxc delete sampleVM
lxc launch images:ubuntu/20.04/cloud sampleVM -c limits.cpu=128 --vm
lxc exec sampleVM -- bash -c "lscpu | egrep 'CPU\(s\)'"
CPU(s):                          128
On-line CPU(s) list:             0-19
Off-line CPU(s) list:            20-127
NUMA node0 CPU(s):               0-127

PS. In my case, only 0-19 show as online because I am instructing the host kernel not to use cores 20-127 (the isolcpus setting above).
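If the guest kernel allows it, offline vCPUs can in principle be brought back online through sysfs. A hedged sketch (the helper name is mine; on a real guest the sysfs root is /sys/devices/system/cpu, the writes need root, and whether the kernel accepts them depends on the setup):

```shell
# Ask the kernel to online every CPU that exposes an "online" file
# under the given sysfs root (cpu0 typically has none and stays up).
online_all_cpus() {
  local sysroot="$1" f
  for f in "$sysroot"/cpu[0-9]*/online; do
    [ -e "$f" ] || continue
    echo 1 > "$f"   # 1 = request online, 0 = request offline
  done
}
```

Inside the VM that would be, for example: online_all_cpus /sys/devices/system/cpu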

@stgraber,
Apparently, because I am not allowing the host to schedule processes on cores 20-127, using images:ubuntu/20.04/cloud does not let me pin/consume cores 20-127. But with the image used when I just type
lxc launch ubuntu sampleVM -p compute --vm

the VM is able to consume the cores that I don’t let the host use. Is this something that I should also bring up while filing the kernel bug?

Cheers,
EE