Thanks, Stephane, for the quick reply.
The block device is not mounted on the host.
However, I am providing the same block device to all the containers. Would that be an issue?
Also, here is the device listing:
brw-rw---- 1 root root 259, 2 Mar 6 21:04 nvme2n1p1
brw-rw---- 1 root root 259, 3 Mar 6 21:04 nvme2n1p2
brw-rw---- 1 root root 259, 4 Mar 6 21:04 nvme2n1p3
brw-rw---- 1 root root 259, 5 Mar 6 21:04 nvme2n1p4
Also, nvme list doesn't show anything.
df -k
/dev/sda5 14842581 11797234 2273251 84% /dev/nvme2n1p1
but it doesn't show /dev/nvme2n1p2.
Does this mean it's treating this block device as sda (a SCSI disk)?
I also tried not adding the same device to multiple containers. I can successfully add it to a single container, but the application inside the container still cannot access it.
Using strace did lead me to a conclusion: when the process is started by systemd, it fails to open the device with a permission error. But if I run the application from the command line as root, it can successfully open the device.
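To narrow down whether the failure is a device-cgroup denial or a plain file-permission problem, a small probe like the one below can help. This is only a sketch; it assumes the cgroup v1 devices controller, where a cgroup denial on open() surfaces as EPERM, while an ordinary file-permission problem gives EACCES:

```python
import errno
import os

def probe_device(path):
    """Try to open a block device read-only and report the errno name on failure.

    EPERM typically indicates the device cgroup denied access (systemd
    DeviceAllow / lxc.cgroup.devices.allow), while EACCES points at
    plain file permissions on the device node.
    """
    try:
        fd = os.open(path, os.O_RDONLY)
        os.close(fd)
        return "ok"
    except OSError as e:
        return errno.errorcode.get(e.errno, str(e.errno))

if __name__ == "__main__":
    # /dev/nvme2n1p1 is the partition from this post; adjust as needed.
    print(probe_device("/dev/nvme2n1p1"))
```

Running this once from the systemd unit and once from a root shell should show whether the two contexts really see different errno values.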
Anything to do with cgroup settings?
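If it is the device cgroup, one thing worth checking is whether systemd inside the container is applying its own device filtering to the service. A hedged sketch of a drop-in override follows; the service name myapp is hypothetical, and DeviceAllow= is the systemd directive that whitelists device nodes for a unit:

```ini
# /etc/systemd/system/myapp.service.d/devices.conf  (inside the container)
[Service]
# Allow read/write/mknod access to the partition for this unit only.
DeviceAllow=/dev/nvme2n1p1 rwm
```

After adding the drop-in, a systemctl daemon-reload followed by restarting the service would apply it.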
Below is what I shared earlier:
printf 'lxc.cgroup.devices.allow = a\nlxc.mount.auto = proc:rw\nlxc.mount.auto = sys:rw\nlxc.mount.auto = cgroup-full:rw\nlxc.apparmor.profile = unconfined\n' | lxc config set mycontainer raw.lxc -
lxc config device add mycontainer nvme1 unix-block path=/dev/nvme2n1p1