Weird mountpoints/disks, harmless?

Hi,

I’m creating a ZVOL which I want to pass to a privileged container and then create an ext4 fs on it from inside the container. I’ve managed to do this, but looking at mount/lsblk I noticed something odd:

# host
administrator@lxd-test:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 14.9G 0 disk
├─sda1 8:1 0 14G 0 part /
├─sda2 8:2 0 1K 0 part
└─sda5 8:5 0 952M 0 part [SWAP]
sdb 8:16 0 111.8G 0 disk
├─sdb1 8:17 0 111.8G 0 part
└─sdb9 8:25 0 8M 0 part
sdc 8:32 0 111.8G 0 disk
├─sdc1 8:33 0 111.8G 0 part
└─sdc9 8:41 0 8M 0 part
zd0 230:0 0 1G 0 disk
└─zd0p1 230:1 0 1023M 0 part

# container
/dev/sda1 on /dev/zd0p1 type ext4 (rw,relatime,errors=remount-ro,data=ordered)
/dev/zd0p1 on /mnt type ext4 (rw,relatime,stripe=2,data=ordered)
root@spike-test1:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 14.9G 0 disk
├─sda1 8:1 0 14G 0 part /dev/zd0p1
├─sda2 8:2 0 1K 0 part
└─sda5 8:5 0 952M 0 part
sdb 8:16 0 111.8G 0 disk
├─sdb1 8:17 0 111.8G 0 part
└─sdb9 8:25 0 8M 0 part
sdc 8:32 0 111.8G 0 disk
├─sdc1 8:33 0 111.8G 0 part
└─sdc9 8:41 0 8M 0 part
zd0 230:0 0 1G 0 disk
└─zd0p1 230:1 0 1023M 0 part /mnt

notice how in the container zd0p1 is listed as the mount point for sda1… is that expected/safe, or is it a symptom of something that will eventually go wrong?

fwiw, in case anybody cares, here are the steps I had to take:

on host

  • sudo zfs create -V 1G data/testvol -> /dev/zd0
  • fdisk /dev/zd0 (initially I exported zd0 to the container and fdisk’ed it in there, but the partition device is created on the host and does not exist in the container)
  • lxc config device add spike-test1 testvol unix-block path=/dev/zd0p1

on the container

  • mkfs.ext4 /dev/zd0p1
  • mount /dev/zd0p1 /mnt

the container is set as privileged; the full sequence, consolidated, is sketched below.
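
roughly, with the same names as above (fdisk is interactive, so only the invocation is shown):

# on the host
administrator@lxd-test:~$ sudo zfs create -V 1G data/testvol    # -> /dev/zd0
administrator@lxd-test:~$ sudo fdisk /dev/zd0                   # create one partition -> /dev/zd0p1 (on the host only)
administrator@lxd-test:~$ lxc config device add spike-test1 testvol unix-block path=/dev/zd0p1

# in the container
root@spike-test1:~# mkfs.ext4 /dev/zd0p1
root@spike-test1:~# mount /dev/zd0p1 /mnt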

best,

Spike

some more info I just noticed…

if I look at df -h before attaching the device, the container looks “normal”:

root@spike-test1:~# df -h
Filesystem Size Used Avail Use% Mounted on
data/lxd/containers/spike-test1 212G 769M 212G 1% /
none 492K 0 492K 0% /dev
udev 7.8G 0 7.8G 0% /dev/fuse
tmpfs 100K 0 100K 0% /dev/lxd
tmpfs 100K 0 100K 0% /dev/.lxd-mounts
tmpfs 7.9G 0 7.9G 0% /dev/shm
tmpfs 7.9G 8.4M 7.9G 1% /run
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup

right after attaching…

administrator@lxd-test:~$ lxc config device add spike-test1 testvol unix-block path=/dev/zd0p1

a new line shows up in df -h’s output:

/dev/sda1 14G 2.1G 11G 17% /dev/zd0p1

notice that /dev/sda1 doesn’t even exist inside the container:

root@spike-test1:~# ls -l /dev/sd*
ls: cannot access ‘/dev/sd*’: No such file or directory

that device is actually the / on the host:

administrator@lxd-test:~$ df -h
Filesystem                       Size  Used Avail Use% Mounted on
...
/dev/sda1                         14G  2.1G   11G  17% /
...

how does that end up being exposed in the container?

thanks,

Spike

That’s normal.

The way LXD passes devices into a container is by bind-mounting the device node from the host’s /dev onto the device name in the container.

In this case /dev/zd0p1 from your host’s /dev is getting cloned at /var/lib/lxd/devices/… and then bind-mounted onto /dev/zd0p1 in the container. As far as the kernel is concerned this is therefore a bind-mount from a file on /dev/sda1 of the host onto /dev/zd0p1 in the container.

The mount table output is somewhat confusing but this is all correct.
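
If you want to double-check from inside the container: stat should report /dev/zd0p1 as a block special file with the zvol’s major:minor (230:1 in your lsblk output; stat prints them in hex, so e6:1), and findmnt will show the bind-mount itself. Roughly:

root@spike-test1:~# stat -c '%F major=%t minor=%T' /dev/zd0p1
root@spike-test1:~# findmnt /dev/zd0p1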

I just got a similar thing. /dev/zd16p1 is a 200GB partition on a 200GB ZVOL.

lxc config device add backup data unix-block path=/data source=/dev/zd16p1

In guest:

# df -h
/dev/sdc2                          117G   12G  100G  10% /data

It reports the storage space of the host’s / partition!

I stupidly did

# mkfs.ext4 /data

It seems to have done the job, no errors. Did I harm the host?

Nope, that’s how bind-mounts of files work. Your /data is in fact a device node equivalent to /dev/zd16p1; running stat against it should confirm that.
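
For example, inside the container this should report a “block special file” with the same major/minor as /dev/zd16p1 on the host:

# stat /data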

Thanks, but it still isn’t useful, because containers aren’t allowed to mount, right? The device should be added as a “disk” to be usable in the container, if I understand all that correctly.
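
i.e. something along these lines, if I read the docs right (LXD would then mount the device on the host side and pass the mount into the container):

lxc config device remove backup data
lxc config device add backup data disk source=/dev/zd16p1 path=/data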