Access to /sys and Grafana

Hi,

I assume there is still no option to hide info from /sys for containers.
Currently I can see all disks attached to the host from within the container.

Would that be a request for lxcfs?

If indeed there is no option yet, is there anything we can do in the Grafana template to only show relevant disk info per container?

Gr, Justin

Yeah, it’s a kernel thing. LXCFS could be used to hide some of that, but the more of /sys we mask, the slower everything gets, since we have to intercept all of those file accesses.

If you’re looking for container-specific metrics, I’d probably recommend using Incus’ own metrics endpoint rather than running something like a node exporter inside of the instance.
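
For example, a minimal setup could look something like this (the listen address and port are just example values, not anything mandated by Incus):

incus config set core.metrics_address "0.0.0.0:8444"
incus query /1.0/metrics | head
# Note: by default the metrics listener only answers scrapers presenting a
# trusted metrics certificate; see the Incus metrics documentation for how
# to add one to the trust store.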

I’m using the LXD/Incus metrics, but each container view contains disk metrics for all of the host’s disks.
That is what I was hoping to reduce to only the container-specific disk metrics.

Hmm, what do you see when you do incus query /1.0/metrics | grep NAME-OF-CONTAINER | grep disk?

@stgraber thanks for the suggestion to query the metrics directly.

I’m still in the process of migrating to Incus, so the example I have is unfortunately still LXD 5.20 based.

It seems not all containers show all host disks, even though none of them are privileged.

How is it possible that, on the same host, one container shows only its own LVM disk
(plus the corresponding dm devices) and swap, while the other shows all of the host’s disks?

lxc query /1.0/metrics | grep container1 | grep disk_read_bytes
lxd_disk_read_bytes_total{device="nvme10n1",name="container1",project="default",type="container"} 1.022779392e+09
lxd_disk_read_bytes_total{device="nvme8n1",name="container1",project="default",type="container"} 4.4703199232e+10
lxd_disk_read_bytes_total{device="dm-5",name="container1",project="default",type="container"} 4.4703199232e+10
lxd_disk_read_bytes_total{device="dm-20",name="container1",project="default",type="container"} 4.4703199232e+10
lxd_disk_read_bytes_total{device="dm-24",name="container1",project="default",type="container"} 4.4703199232e+10
lxc query /1.0/metrics | grep container2 | grep disk_read_bytes
lxd_disk_read_bytes_total{device="nvme1n1",name="container2",project="default",type="container"} 0
lxd_disk_read_bytes_total{device="dm-7",name="container2",project="default",type="container"} 0
lxd_disk_read_bytes_total{device="dm-9",name="container2",project="default",type="container"} 0
lxd_disk_read_bytes_total{device="dm-15",name="container2",project="default",type="container"} 0
lxd_disk_read_bytes_total{device="nvme10n1",name="container2",project="default",type="container"} 1.23076608e+08
lxd_disk_read_bytes_total{device="dm-16",name="container2",project="default",type="container"} 5.6799010816e+10
lxd_disk_read_bytes_total{device="nvme3n1",name="container2",project="default",type="container"} 0
lxd_disk_read_bytes_total{device="dm-28",name="container2",project="default",type="container"} 0
lxd_disk_read_bytes_total{device="dm-20",name="container2",project="default",type="container"} 0
lxd_disk_read_bytes_total{device="dm-1",name="container2",project="default",type="container"} 0
lxd_disk_read_bytes_total{device="nvme0n1",name="container2",project="default",type="container"} 0
lxd_disk_read_bytes_total{device="dm-13",name="container2",project="default",type="container"} 5.6799010816e+10
lxd_disk_read_bytes_total{device="dm-26",name="container2",project="default",type="container"} 0
lxd_disk_read_bytes_total{device="dm-18",name="container2",project="default",type="container"} 0
lxd_disk_read_bytes_total{device="dm-5",name="container2",project="default",type="container"} 0
lxd_disk_read_bytes_total{device="dm-29",name="container2",project="default",type="container"} 0
lxd_disk_read_bytes_total{device="nvme4n1",name="container2",project="default",type="container"} 0
lxd_disk_read_bytes_total{device="nvme9n1",name="container2",project="default",type="container"} 0
lxd_disk_read_bytes_total{device="nvme6n1",name="container2",project="default",type="container"} 0
lxd_disk_read_bytes_total{device="nvme5n1",name="container2",project="default",type="container"} 0
lxd_disk_read_bytes_total{device="nvme7n1",name="container2",project="default",type="container"} 0
lxd_disk_read_bytes_total{device="nvme2n1",name="container2",project="default",type="container"} 5.6799010816e+10
lxd_disk_read_bytes_total{device="nvme8n1",name="container2",project="default",type="container"} 0
lxd_disk_read_bytes_total{device="dm-2",name="container2",project="default",type="container"} 0
lxd_disk_read_bytes_total{device="dm-27",name="container2",project="default",type="container"} 0
lxd_disk_read_bytes_total{device="dm-22",name="container2",project="default",type="container"} 0
lxd_disk_read_bytes_total{device="dm-10",name="container2",project="default",type="container"} 0
lxd_disk_read_bytes_total{device="dm-25",name="container2",project="default",type="container"} 5.6799010816e+10
lxd_disk_read_bytes_total{device="dm-24",name="container2",project="default",type="container"} 0

I thought it might depend on the container OS, but that does not seem to be the case:

for x in `lxc ls -c name -f compact | grep -v NAME | cut -f3 -d" "`; do echo $x; lxc exec $x -- cat /etc/os-release | grep PRETTY_NAME; lxc query /1.0/metrics | grep $x | grep disk_read_bytes | wc -l; done
container1
PRETTY_NAME="Amazon Linux 2"
5
container2
PRETTY_NAME="AlmaLinux 9.3 (Shamrock Pampas Cat)"
29
container3
PRETTY_NAME="Amazon Linux 2"
5
container4
PRETTY_NAME="AlmaLinux 9.3 (Shamrock Pampas Cat)"
29
container5
PRETTY_NAME="AlmaLinux 9.3 (Shamrock Pampas Cat)"
29
container6
PRETTY_NAME="CentOS Linux 7 (Core)"
29
container7
PRETTY_NAME="CentOS Linux 7 (Core)"
29
container8
PRETTY_NAME="Amazon Linux 2"
29

Incus simply reports the resources as the Linux kernel sees them.
So the list of disks that you see is the list of block devices that the kernel says the container touched in one way or another.
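
If you want to see the raw per-device counters the kernel keeps for a container, something like this should show them (assuming a cgroup2 host; the lxc.payload.NAME path is the usual cgroup location for Incus/LXD containers, but may differ on your setup):

cat /sys/fs/cgroup/lxc.payload.container2/io.stat
# one line per MAJOR:MINOR block device the container has done I/O against;
# these are presumably the counters behind the per-device disk metrics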

Normally that should only be whatever block devices are behind the container’s root filesystem and then potential swap devices. But then any extra disk or shared path passed to the container would cause that list to expand to cover whatever is behind those disks and paths.

Getting a full cat /proc/self/mountinfo from within an affected container may help here.
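
For example (container and device names here are just placeholders):

lxc exec container2 -- cat /proc/self/mountinfo
# the third field of each line is the MAJOR:MINOR of the backing device;
# on the host, those numbers can be mapped back to device names with:
lsblk -o NAME,MAJ:MIN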

It’s also obviously possible that the kernel is incorrectly reporting block reads/writes, effectively misattributing I/O operations to the container when they in fact come from a host process.