LXC/LXD 2.0.11 isolcpus + limits.cpu

Hello there!

I am relatively new to LXC/LXD. I am running an Ubuntu 16.04 dedicated server with LXC/LXD installed from the default xenial-updates repository, version 2.0.11 as stated in the title.

My server runs on an Intel i7-4790K quad-core CPU, giving me 8 logical CPUs to use. I want to reserve a certain logical core for a specific container without having to set CPU affinity for every task on the server. At first glance, a logical approach seems to be to use the 'isolcpus' kernel parameter to remove the desired core from load balancing, and then pin the container to the isolated CPU via 'lxc config set limits.cpu'.

This doesn't seem to work, however. After isolating half my logical cores from load balancing (cores 4, 5, 6 and 7), I see only 0, 1, 2 and 3 being used in htop. That itself wasn't really surprising, although I assumed kernel threads would still run on the isolated cores, which they don't according to htop. But I am also not able to assign the isolated cores to the desired container via limits.cpu: after setting limits.cpu, the container processes still run on cores 0, 1, 2 and 3. Judging by the htop output, cores 4, 5, 6 and 7 now seem to be completely ignored by the system.

What am I doing wrong? The 2.0.11 release notes mention support for the isolcpus kernel parameter, so I assumed pinning isolated CPUs via limits.cpu would therefore work. As far as I understand it, limits.cpu uses the cpuset controller from cgroups to set core affinity, and what I could find on Google so far seems to support using the isolcpus parameter together with cpuset. Could someone maybe give me a practical example of how reserving a logical CPU core for a container should look?

Cheers in advance.

P.S.: This is my first post in this forum. I admit I didn't really read the posting rules; if I broke any of them, I do apologize.

P.P.S.: The command line for CPU isolation in /etc/default/grub is GRUB_CMDLINE_LINUX_DEFAULT="quiet isolcpus=4,5,6,7"
Yes, I did run 'update-grub' with sudo after editing the file.

Command used to pin the container onto the last logical core: lxc config set prbf2-main limits.cpu 7-7
prbf2-main is the name of the container I want to pin to core no. 7.
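For what it's worth, after rebooting one can sanity-check that isolcpus actually took effect; these sysfs/proc paths exist on a stock Ubuntu 16.04 kernel:

```shell
# CPUs the kernel has isolated from the scheduler (empty if none).
# Should print 4-7 if isolcpus=4,5,6,7 took effect:
cat /sys/devices/system/cpu/isolated

# Confirm the parameter made it onto the running kernel's command line:
grep -o 'isolcpus=[0-9,-]*' /proc/cmdline
```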

Isolated CPUs do not take part in load balancing, which means that assigning them to a cgroup will do nothing, and also means that you need to manually assign tasks to them.

Thanks for the answer. I was afraid it would come down to that. I found this article, which gave me the impression both features could be used together; just to explain why I thought it could work.

How would I manually assign the LXC container's tasks to an isolated core, or what would I need to do to isolate a core in such a way that only the desired container runs on it?

The LXD syntax for pinning to a particular core would be 2-2 (a range made up of only CPU 2), but given how isolcpus works, that seems unlikely to do what you want.

I think with the way things currently work, you'd effectively need to set specific CPUs or ranges of CPUs on each of your containers. What we'd need to fully support your use case is a way to tell LXD not to balance containers onto any CPU that has been directly selected for another container.

Note that all of this only matters if you overcommit resources. If you don't, then the container scheduler in LXD will do what you want: pin the container you care about to that CPU and use the rest for the remaining containers. It's only when more CPUs are requested than physically exist that the scheduler will start placing two containers on the same CPU.

And of course, this all assumes that scheduling is used, which at the very least requires that you set limits.cpu to a number of CPUs for all your containers; otherwise any container without a limits.cpu set will be able to access all CPUs.
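Concretely, that setup might look like the following (the container names other than prbf2-main are hypothetical; adjust to your own setup):

```shell
# Pin the latency-sensitive container to CPU 7 only:
lxc config set prbf2-main limits.cpu 7-7

# Give every other container a CPU count (not a range), so the LXD
# scheduler balances them across the CPUs that are left:
lxc config set web limits.cpu 2
lxc config set db limits.cpu 2

# Check which CPUs a container's processes actually ended up on:
lxc exec prbf2-main -- grep Cpus_allowed_list /proc/self/status
```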

Yes, that would be great!

Yes, in a way that's the problem I am facing. It's not other containers but rather a quite busy host. I want to make sure a certain application within the container isn't disturbed by whatever the host is doing. Apparently, though, the more viable approach would be to move every application on the host into containers and then commit CPU resources per container, instead of having most applications on the host and just a few in containers as I do now. At least that's the conclusion I am drawing so far.

Or you’d want to create a cgroup for the host itself and move all the host processes into that, leaving the remaining CPUs free for container use.

I don't know if systemd offers an easy way to do that, but you could always write a small script which creates a cgroup under /sys/fs/cgroup/cpuset and moves all existing tasks into it. Their children will automatically inherit the same configuration, so the result should be that only LXD gets to use the remaining CPUs (make sure you don't put LXD itself in the restricted cgroup, though).
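A minimal sketch of such a script, assuming cgroup v1 with the cpuset controller mounted at /sys/fs/cgroup/cpuset (the Ubuntu 16.04 default) and run as root; the cgroup name "hosttasks" and the 0-3 CPU range are just examples:

```shell
#!/bin/sh
# Sketch only: confine all existing host tasks to CPUs 0-3,
# leaving 4-7 free for LXD containers. Must run as root.
set -e

CG=/sys/fs/cgroup/cpuset/hosttasks   # "hosttasks" is an arbitrary name
mkdir -p "$CG"

# Restrict the new cgroup to CPUs 0-3:
echo 0-3 > "$CG/cpuset.cpus"
# cpuset.mems must be populated before any task can join the cgroup:
cat /sys/fs/cgroup/cpuset/cpuset.mems > "$CG/cpuset.mems"

# Collect LXD's own PIDs so we don't restrict LXD (and its containers):
LXD_PIDS=$(pgrep -x lxd || true)

# Move every task currently in the root cpuset into the new cgroup.
# Children inherit the cgroup, so new host processes stay confined too.
for pid in $(cat /sys/fs/cgroup/cpuset/tasks); do
    echo "$LXD_PIDS" | grep -qx "$pid" && continue   # skip LXD itself
    echo "$pid" > "$CG/tasks" 2>/dev/null || true    # some kernel threads refuse to move
done
```

Note this uses the cgroup v1 "tasks" interface; on a newer system with cgroup v2 the file names and layout differ.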