Hi,
I’m trying to run a privileged container in such a way that it will have direct access to the disks available to the host system, but I can’t seem to figure out how to do this reliably.
So far, I’ve tried:

```
lxc config device add c1 xvdb unix-block source=/dev/xvdb
```

This works for exposing /dev/xvdb to the container, but when I then try to create a ZFS pool on it, I get an error because the two partitions that ZFS creates are not exposed inside the container.
For example:
```
(host)$ lxc launch dxos-dev c1
Creating c1
Starting c1
(host)$ ls -l /dev/xvdb*
brw-rw---- 1 root disk 202, 16 Apr 26 22:01 /dev/xvdb
(host)$ lxc config device add c1 xvdb unix-block source=/dev/xvdb
Device xvdb added to c1
(host)$ lxc exec c1 /bin/bash
(container)# ls -l /dev/xvdb*
brw-rw---- 1 root root 202, 16 Apr 26 22:02 /dev/xvdb
(container)# zpool create tank /dev/xvdb
cannot label 'xvdb': failed to detect device partitions on '/dev/xvdb1': 19
(container)# ls -l /dev/xvdb*
brw-rw---- 1 root root 202, 16 Apr 26 22:04 /dev/xvdb
(container)# exit
exit
(host)$ ls -l /dev/xvdb*
brw-rw---- 1 root disk 202, 16 Apr 26 22:04 /dev/xvdb
brw-rw---- 1 root disk 202, 17 Apr 26 22:04 /dev/xvdb1
brw-rw---- 1 root disk 202, 25 Apr 26 22:04 /dev/xvdb9
```
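One workaround I’m considering (an untested sketch, on the assumption that the partitions left behind on the host by the failed `zpool create` can simply be added as additional unix-block devices, and that a retry with `-f` will relabel the already-labeled disk):

```shell
# After the failed `zpool create`, /dev/xvdb1 and /dev/xvdb9 exist on the
# host, so expose them to the container the same way as the whole disk.
lxc config device add c1 xvdb1 unix-block source=/dev/xvdb1
lxc config device add c1 xvdb9 unix-block source=/dev/xvdb9

# Retry pool creation inside the container; -f is likely needed because
# the disk now already carries a ZFS label from the failed attempt.
lxc exec c1 -- zpool create -f tank /dev/xvdb
```

This feels fragile, though, since it has to be repeated for every disk and only works after a first failed attempt has created the partitions.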
The issue appears to be that the partitions created when `zpool create` runs do not automatically get exposed inside the container, which causes `zpool create` to fail. As the last command above (run on the host) shows, the disk itself was properly partitioned by `zpool create`.
Is there a way to make these disk devices from the host’s /dev directory automatically appear inside the container?
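Another idea I’ve been toying with (again an untested sketch): since the container is privileged and the profile allows all devices (`lxc.cgroup.devices.allow = a`), it should be possible to recreate the partition nodes by hand inside the container with `mknod`, using the major:minor numbers visible on the host (202:17 and 202:25 in the transcript above):

```shell
# Inside the container: manually create the partition device nodes that
# the host's udev created. The major/minor numbers must match the host's
# (here taken from the `ls -l /dev/xvdb*` output on the host).
mknod /dev/xvdb1 b 202 17
mknod /dev/xvdb9 b 202 25
```

But this is obviously not automatic, and the minor numbers could differ per disk, so I’d rather find a mechanism where the nodes show up on their own.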
In case it’s useful, I’ve configured my default profile like the following:
```
(host)$ lxc profile show default
config:
  raw.lxc: |
    lxc.apparmor.profile = unconfined
    lxc.cgroup.devices.allow = a
    lxc.mount.auto = proc:rw
    lxc.mount.auto = sys:rw
    lxc.mount.auto = cgroup-full:rw
  security.privileged: "true"
description: Default LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
  zfs:
    source: /dev/zfs
    type: unix-char
name: default
used_by:
- /1.0/containers/c1
```
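I also wondered whether I could sidestep LXD’s device management entirely by bind-mounting the host’s /dev into the container via `raw.lxc`. A sketch of what I mean (untested, and I suspect it may conflict with the devices LXD itself manages):

```shell
# Add an lxc.mount.entry to the profile's raw.lxc so the container sees
# the host's /dev directly (bind mount, created if missing).
lxc profile set default raw.lxc "$(lxc profile get default raw.lxc)
lxc.mount.entry = /dev dev none bind,create=dir 0 0"
```

Is that a sane approach, or is there a supported way to get this behavior?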
Both the host VM (running in AWS) and the container run the same OS (i.e. the same rootfs contents), based on a recent Ubuntu 18.04 beta release.