limits.cpu across NUMA nodes

I’m using a quad-socket NUMA server, as shown below.
If I try to pin CPU cores from different NUMA nodes to an LXD container,
it only allocates a portion of those cores.
Restarting the container after configuring the CPU pinning does not help.
How can I assign CPU cores from multiple NUMA nodes to an LXD container?
I’d appreciate any comments.

FYI, here are my commands to pin the cores:
ubuntu@r9:~$ numactl -H
available: 4 nodes (0-3)
node 0 cpus: 0 4 8 12 16 20 24 28
node 0 size: 48278 MB
node 0 free: 26509 MB
node 1 cpus: 1 5 9 13 17 21 25 29
node 1 size: 48361 MB
node 1 free: 23128 MB
node 2 cpus: 2 6 10 14 18 22 26 30
node 2 size: 48382 MB
node 2 free: 30684 MB
node 3 cpus: 3 7 11 15 19 23 27 31
node 3 size: 48382 MB
node 3 free: 25055 MB
node distances:
node   0   1   2   3
  0:  10  21  21  21
  1:  21  10  21  21
  2:  21  21  10  21
  3:  21  21  21  10
ubuntu@r9:~$ lxc launch ubuntu:bionic bionic
ubuntu@r9:~$ lxc config set bionic limits.cpu 3,7,11,15,2,6,10,14,18,22,26,30
ubuntu@r9:~$ lxc exec bionic -- bash
root@bionic:~# top
Tasks: 28 total, 2 running, 26 sleeping, 0 stopped, 0 zombie
%Cpu0 : 8.4 us, 56.2 sy, 0.0 ni, 35.5 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu1 : 7.6 us, 2.7 sy, 0.0 ni, 89.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu2 : 4.0 us, 1.7 sy, 0.0 ni, 94.3 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu3 : 1.3 us, 2.0 sy, 0.0 ni, 96.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 15625000 total, 15291832 free, 293184 used, 39984 buff/cache
KiB Swap: 8388604 total, 8388604 free, 0 used. 15331816 avail Mem
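
To cross-check how many CPUs the container actually received, comparing the guest's view with the host-side cpuset should work; the cgroup path below is my guess for this LXD version and may differ on other setups.

ubuntu@r9:~$ lxc exec bionic -- nproc                                  # CPUs visible inside the container
ubuntu@r9:~$ lxc exec bionic -- grep Cpus_allowed_list /proc/1/status  # affinity of the container's init
ubuntu@r9:~$ cat /sys/fs/cgroup/cpuset/lxc/bionic/cpuset.cpus          # host-side cpuset (path may vary)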

Perhaps this affects the CPU pinning?
I booted the quad-socket server with isolated CPU cores, as shown below.
If LXD only assigns cores that are under the control of the Linux kernel scheduler, this situation makes sense.
(I.e., even though KVM can assign non-schedulable cores to a VM, LXD might not.)

ubuntu@r9:~$ cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-4.15.0-51-generic root=UUID=341b9db3-0131-4ac8-92ff-1f9acc544ca4 ro hugepagesz=2M hugepages=32768 isolcpus=1,5,9,13,17,21,25,29,2,6,10,14,18,22,26,30 nohz_full=1,5,9,13,17,21,25,29,2,6,10,14,18,22,26,30 rcu_nocbs=1,5,9,13,17,21,25,29,2,6,10,14,18,22,26,30 iommu=pt intel_iommu=on
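
As a sanity check, the isolated set can also be read back from sysfs (assuming /sys/devices/system/cpu/isolated is available, which I believe it is from kernel 4.15 on):

ubuntu@r9:~$ cat /sys/devices/system/cpu/isolated   # CPUs withheld from the scheduler by isolcpus=
ubuntu@r9:~$ cat /sys/devices/system/cpu/present    # all CPUs known to the kernel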

Well, I removed some isolated cores from the GRUB kernel boot parameters, and now everything works fine.
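
For reference, the change amounts to shrinking the isolcpus/nohz_full/rcu_nocbs lists in the GRUB config and regenerating it. The shortened list below is only an example, and I'm assuming the parameters live in GRUB_CMDLINE_LINUX in /etc/default/grub:

# /etc/default/grub (example: keep node-1 cores isolated, free up node-2 cores for LXD)
GRUB_CMDLINE_LINUX="hugepagesz=2M hugepages=32768 isolcpus=1,5,9,13,17,21,25,29 nohz_full=1,5,9,13,17,21,25,29 rcu_nocbs=1,5,9,13,17,21,25,29 iommu=pt intel_iommu=on"

ubuntu@r9:~$ sudo update-grub
ubuntu@r9:~$ sudo reboot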

In summary, my issue is not related to NUMA but to CPU isolation, which prevents the Linux kernel from scheduling processes onto the isolated CPUs. This likely stems from the container sharing the host kernel: if isolated CPUs are assigned to a container, the host kernel cannot schedule any process, including the container's processes, onto those CPUs, which may lock/freeze the container.

I think a mixed assignment of CPUs, where some are isolated and some are not, could enable various scenarios in the NFV domain, where CPU isolation and pinning are required to achieve high-performance packet processing.
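
As a hypothetical sketch of that mixed scenario (assuming LXD accepted isolated cores in an explicit limits.cpu list; my_pmd_app is a made-up packet-processing binary): the non-isolated cores would host the container's housekeeping processes under normal scheduling, while the application explicitly pins its data-plane threads to the isolated cores.

ubuntu@r9:~$ lxc config set bionic limits.cpu 0,4,2,6       # 0,4 non-isolated (housekeeping), 2,6 isolated
ubuntu@r9:~$ lxc exec bionic -- taskset -c 2,6 my_pmd_app   # pin the busy-poll threads to the isolated cores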