Hi,
I am running LXD version 4.21 on Ubuntu 20.04.
I would like to run udev inside my privileged container and let it manage the device nodes for loopback devices. My use case is the following: I prepare hard-drive images inside the container and need to access the partitions after formatting the image file. The process is illustrated by the following simplified script:
#!/bin/bash
# Name of the image file.
IMAGE=hd.img
# Create an empty 2 GiB image.
dd if=/dev/zero of=${IMAGE} bs=512 count=4194304
# Attach the image to a free loopback device; --show prints the
# chosen device, avoiding a racy round-trip through `losetup -a`.
DEVICE=$(sudo losetup -f --show ${IMAGE})
# Initialize the image with a GPT partition table.
sudo sgdisk -Z ${DEVICE}
sudo sgdisk -o ${DEVICE}
# Create a partition.
sudo sgdisk -n 1:2048:4096 -t 0:8300 -c 0:Linux ${DEVICE}
After the image has been created and formatted, I would like to reload the kernel's partition table by running
sudo partprobe /dev/loop20
so that the new partition appears as, e.g., /dev/loop20p1.
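As an aside, recent util-linux versions can also scan the partition table at attach time via losetup's -P flag, which would make the separate partprobe call unnecessary; a sketch (not what my script currently does):

```shell
# Attach the image and have the kernel scan its partitions in one step.
# -f picks a free loop device, -P triggers the partition scan,
# --show prints the device node that was chosen.
DEVICE=$(sudo losetup -fP --show hd.img)
echo "${DEVICE}"    # e.g. /dev/loop20
```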
Here is my container configuration, which already incorporates ideas from this thread: https://github.com/lxc/lxd/issues/1841
architecture: x86_64
config:
  image.description: Base image
  linux.kernel_modules: overlay
  raw.apparmor: mount,
  raw.lxc: |
    lxc.cgroup.devices.allow = c 4:* rwm
    lxc.cgroup.devices.allow = b 7:* rwm
    lxc.cgroup.devices.allow = b 8:* rwm
    lxc.cgroup.devices.allow = c 10:236 rwm
    lxc.cgroup.devices.allow = c 10:237 rwm
    lxc.cgroup.devices.allow = c 116:* rwm
    lxc.cgroup.devices.allow = c 188:* rwm
    lxc.cgroup.devices.allow = b 252:* rwm
    lxc.cgroup.devices.allow = b 253:* rwm
    lxc.mount.auto=sys:rw proc:mixed cgroup:mixed
  security.nesting: "true"
  security.privileged: "true"
  volatile.base_image: f5da84d0ffe30fb083d87d7990c08a5ee89dc58e148b43f5a6344d62899c3a71
  volatile.idmap.base: "0"
  volatile.idmap.current: '[]'
  volatile.idmap.next: '[]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: RUNNING
  volatile.net-bridge.host_name: veth-2320020504
  volatile.net-bridge.hwaddr: 00:16:3e:ac:17:c0
  volatile.uuid: 4ab7c28a-4420-43e9-8407-918d878eafd8
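For what it's worth, whether those lxc.cgroup.devices.allow rules actually took effect can be checked from inside the container; this assumes the devices controller is on cgroup v1, as on a stock Ubuntu 20.04 host:

```shell
# Show the device allowlist of the cgroup the shell is running in.
# Each line has the form "<type> <major>:<minor> <perms>", e.g. "b 7:* rwm".
cat /sys/fs/cgroup/devices/devices.list
```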
With the above configuration, udev starts fine:
$ systemctl status udev.service
● systemd-udevd.service - udev Kernel Device Manager
   Loaded: loaded (/lib/systemd/system/systemd-udevd.service; static; vendor preset: enabled)
   Active: active (running) since Mon 2022-02-07 13:26:31 UTC; 16min ago
     Docs: man:systemd-udevd.service(8)
           man:udev(7)
 Main PID: 58 (systemd-udevd)
   Status: "Processing with 40 children at max"
    Tasks: 1
   CGroup: /system.slice/systemd-udevd.service
           └─58 /lib/systemd/systemd-udevd
It also receives the relevant events:
$ udevadm monitor
...
KERNEL[4903.633614] change /devices/virtual/block/loop20 (block)
UDEV [4903.720529] change /devices/virtual/block/loop20 (block)
...
KERNEL[5041.497195] add /devices/virtual/block/loop20/loop20p1 (block)
UDEV [5041.507141] add /devices/virtual/block/loop20/loop20p1 (block)
But the respective device (/dev/loop20p1) does not appear inside my container, while it does appear on the host. Furthermore, I observed that partprobe takes very long to finish, a couple of minutes, which also seems strange, although it does not report any errors.
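As a manual fallback I could create the node myself from the numbers udev sees in sysfs; below is a sketch (mknod_from_sysfs is just an illustrative helper name, and the path assumes the partition is visible under /sys/block inside the container):

```shell
# Sketch of a manual fallback: derive major:minor from sysfs and mknod.
# mknod_from_sysfs is a hypothetical helper, not an existing tool.
mknod_from_sysfs() {
    local part=$1                  # partition name, e.g. loop20p1
    local parent=${part%p*}        # parent device, e.g. loop20
    local dev
    # sysfs reports the device numbers as "major:minor", e.g. "259:1".
    dev=$(cat "/sys/block/${parent}/${part}/dev") || return 1
    sudo mknod "/dev/${part}" b "${dev%%:*}" "${dev##*:}"
}
```

Calling, say, mknod_from_sysfs loop20p1 would then create /dev/loop20p1, though that obviously sidesteps udev rather than fixing it.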
Do you have any ideas?
Best,
Holger
EDIT: My container image is based on Ubuntu 18.04.