How to hide the host root device from lsblk in a container

Hello.

Can you suggest how to hide or remove the host root device name from a container?
For example, I added a 25G device to the container as root, but the host's device is still present in the list.

root@btrfs-1804:~# lsblk
NAME    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop0     7:0    0  25.1G  0 loop /
vda     252:0    0 139.7G  0 disk 
├─vda1  252:1    0 139.6G  0 part 
└─vda15 252:15   0   100M  0 part 

Rebooting the container/host didn't help.
The root device was added with:
lxc config device add btrfs-1804 root disk pool=default path=/

What am I doing wrong?

It’s most likely finding it in /sys/class/block; there’s nothing you can really do to hide stuff in there, though being able to see physical storage on your host doesn’t mean the container can make any use of it.
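For illustration, the same device names from the lsblk output above are visible directly in sysfs from inside the container (output here mirrors the example listing and is illustrative):

root@btrfs-1804:~# ls /sys/class/block
loop0  vda  vda1  vda15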

Is it possible to hide it in any way? Maybe with additional AppArmor rules? We are migrating from OpenVZ and have run into some trouble at the moment.

You could try custom AppArmor rules denying specific paths under /sys, but /sys has symlinks all over the place, so properly preventing this information from being extracted is a bit of a losing game.
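A minimal sketch of that approach, using LXD's raw.apparmor key and the container name from the example above; the specific deny rules are assumptions and untested:

lxc config set btrfs-1804 raw.apparmor 'deny /sys/block/** rwklx,
deny /sys/class/block/** rwklx,'
lxc restart btrfs-1804

Note this only covers those two paths; the same attributes remain reachable through /sys/devices/... and other symlinked locations, which is the losing game mentioned above.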

Also, those block devices will still show up in some files in /proc, like /proc/partitions, and you can’t easily prevent access to those.
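For example, from inside the container (the block counts below are illustrative, derived from the sizes in the lsblk output above):

root@btrfs-1804:~# cat /proc/partitions
major minor  #blocks  name

   7        0   26320896 loop0
 252        0  146486272 vda
 252        1  146381824 vda1
 252       15     102400 vda15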

Is there any way to prevent these resources from being mounted inside the container, since hiding them with AppArmor is so difficult?

Those devices aren’t usable by the container, so nothing is mounted into it.
They’re just visible, like almost every other bit of hardware information is visible to containers.

I’m seconding this “feature request”: OpenVZ/Virtuozzo is more hosting-friendly in abstracting hardware from the Virtual Environment’s view. I wouldn’t want users of a VE to see my disk usage (they can easily read /sys/class/block/nvme0n1/stat, for example).
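A quick illustration of that read, assuming a host NVMe drive named nvme0n1 and a hypothetical container prompt; the counter values are made up, but the fields are the standard per-device I/O counters (reads completed, sectors read, writes completed, and so on):

root@ve:~# cat /sys/class/block/nvme0n1/stat
 8531462     2110 541113802   931552  4861374 ...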

As a side note, it confuses system monitoring and makes metrics collection redundant: if I collect metrics with Zabbix/Collectd and use automatic disk discovery, exposing the raw block devices triggers that discovery, doubling or tripling the data depending on how many VEs I have.

This topic and some others make me think the project/dev team’s main focus for LXD is “isolation, like Docker, but on steroids”, not “splitting hosting clients into smaller environments, where the client is not your friendly team member”.

There’s really nothing we can do at the LXC/LXD level. This isn’t something that userspace can do. It would need the Linux kernel to do it which is what OpenVZ and others have done through custom patches.

The upstream Linux kernel has so far had no interest whatsoever in so-called device namespaces, so we don’t expect this to change any time soon.

As I pointed out above, one may be able to make this less of an issue by either altering the permissions in the host’s /sys through udev rules or by setting up apparmor policies that would prevent access from the instance. Though with both of those you run the risk of blocking entries that software in the instance does access, causing crashes.
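A rough sketch of the udev variant, assuming a hypothetical rule file and that restricting the per-device stat attribute to host root is acceptable (an unprivileged container's root maps to a non-root host uid, so it would lose read access):

# /etc/udev/rules.d/90-hide-block-stats.rules (hypothetical path)
# Tighten the sysfs I/O stats file so only host root can read it.
ACTION=="add|change", SUBSYSTEM=="block", RUN+="/bin/chmod 0400 /sys%p/stat"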