A few questions about zvols on LXD managed ZFS pools

Hi.

Some of my workloads don’t play well with ZFS, so I need a different file system within a container, which means using a zvol, and this answer works fine:

Question 1:
Are there any problems with this zvol sitting on an LXD-managed ZFS pool, in the same structure as LXD-created block volumes, e.g.
lxdpool00/custom/instance08_disk01
and then, for the instance > config > device > source:
/dev/zvol/lxdpool00/custom/instance08_disk01

Question 2:
Is there a difference between a block volume created via the lxc storage volume create ... --type block command and a zvol created directly via zfs?

Question 3:
I see devices created by both lxc and zfs commands under /dev/zvol; however, I’m not permitted to attach an LXD block volume to a container (only needed for non-ZFS-compatible workloads), yet I can attach the zvol created with zfs. Is this intended?

Thanks.

If manually created, it’d be better to put it at lxdpool00/my-zvol/something just to avoid potential name conflicts down the line.

Nope, we create a zvol internally for --type block.

You can attach it as a mounted disk using a disk device with pool=your-pool and source=your-volume. If you just want to see it exposed as /dev/sdX, that isn’t currently possible. We have https://github.com/lxc/lxd/issues/10077 for that.
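As a sketch of that (the pool, volume, instance, and mount-path names here are examples, not from the thread), attaching a custom volume to a container as a mounted disk looks like:

```shell
# Create a custom filesystem volume on an LXD-managed pool (names are examples)
lxc storage volume create lxdpool00 disk01

# Attach it to a container as a disk device backed by the pool;
# 'path' is where the volume appears inside the instance
lxc config device add instance08 disk01 disk pool=lxdpool00 source=disk01 path=/mnt/disk01
```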

Good morning and thank you for your quick and helpful response.

I’m going to mark your response above as the solution. I’ll add to #10077.

Just for the record, a good example of a use case is Docker, and there are others mentioned on this great site that also don’t work well with ZFS, or for which a different file system is recommended, so currently I have to:

  1. create a zvol via zfs,
  2. format it to XFS or ext4, mount it, and chown 1000000:1000000 it,
  3. add it to the container-type Ubuntu 22.04 instance, mounted at:
    /mnt-acme/disk01
  4. within the container, bind-mount the Docker directories from this disk to the default locations via fstab:
    /mnt-acme/disk01/docker/var/lib/docker /var/lib/docker...
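The steps above can be sketched as follows (the pool, dataset, device, and mount-path names reuse the placeholders from this thread; the zvol size and staging mountpoint are assumptions):

```shell
# 1. Create a zvol via zfs (10 GiB is an example size)
sudo zfs create -V 10G lxdpool00/my-zvol/instance08_disk01

# 2. Format to ext4, mount temporarily, and chown into the
#    unprivileged container's idmap range (1000000:1000000)
sudo mkfs.ext4 /dev/zvol/lxdpool00/my-zvol/instance08_disk01
sudo mount /dev/zvol/lxdpool00/my-zvol/instance08_disk01 /mnt/staging
sudo chown 1000000:1000000 /mnt/staging
sudo umount /mnt/staging

# 3. Add it to the container, mounted at /mnt-acme/disk01
lxc config device add instance08 disk01 disk \
    source=/dev/zvol/lxdpool00/my-zvol/instance08_disk01 path=/mnt-acme/disk01

# 4. Inside the container, bind-mount Docker's directories via /etc/fstab, e.g.:
#    /mnt-acme/disk01/docker/var/lib/docker  /var/lib/docker  none  bind  0  0
```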

Thanks!

Does this method still give you the message on the host system “overlayfs: upper fs missing required features” when running Docker in an unprivileged LXC?

Good morning

Sadly, I’ve abandoned Docker Swarm in unprivileged system containers in favour of VMs, because I couldn’t get it to work due to the restrictions, despite this solution:

Regarding your question, I don’t recall seeing that error, so I thought I’d give it another spin. But now, when trying to attach the block-type disk created via sudo zfs create... using the command lxc config device add {instanceName} {deviceName} disk source=/dev/zvol/{customDir}/{blockDeviceName}, I get a new error preventing me from attaching block devices to containers:

Error: Invalid device "{deviceName}" on container "{instanceName}" of project "{projectName}": Attaching disks not backed by a pool is forbidden

So the goalposts have moved, and we’re now further away from getting Docker Swarm working in an unprivileged system container backed by ZFS storage pools.

Host: Physical Dell server, x86_64 Ubuntu Server 22.04
LXD: 5.6-794016a

Can you show the output of lxc project show <projectName>? I suspect you have restrictions in your project.

Good morning @tomp

DUH, that’s it, I had switched to a restricted project.
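For anyone else hitting “Attaching disks not backed by a pool is forbidden”: in a restricted project, the restricted.devices.disk key (which defaults to managed) only allows disks backed by a managed pool. A sketch of checking and, if appropriate, relaxing it (the project name is a placeholder):

```shell
# Inspect the project's restriction keys
lxc project show myproject

# Permit arbitrary disk sources in this project (loosens the sandbox)
lxc project set myproject restricted.devices.disk=allow
```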

@chribro I’ve run out of time today to retest Swarm in LXD unprivileged containers in the default project with default restrictions, and to collect all the error messages that come up when launching a stack.


Good morning

On one of the LXD hosts:

  • OS: Ubuntu Server 22.04 x86_64, physical Dell server
  • LXD: 5.6-794016a

Started the new system container for Docker with:

  • a second block disk, EXT4-formatted, mounted at the default Docker paths (/etc/docker; /var/lib/docker/; ...)
  • Docker v20.10.18 installed
  • configured as a Swarm manager with a second Docker node on another host
  • no stacks nor any other configuration

The LXD instances for Docker have the settings from the article linked in previous threads:

  limits.memory.swap: "false"
  linux.kernel_modules: bridge,ip_tables,ip6_tables,iptable_nat,iptable_mangle,netlink_diag,nf_nat,overlay,br_netfilter,bonding,ip_vs,ip_vs_dh,ip_vs_ftp,ip_vs_lblc,ip_vs_lblcr,ip_vs_lc,ip_vs_nq,ip_vs_rr,ip_vs_sed,ip_vs_sh,ip_vs_wlc,ip_vs_wrr,xfrm_user,xt_conntrack,xt_MASQUERADE
  security.nesting: "true"
  security.syscalls.intercept.mknod: "true"
  security.syscalls.intercept.setxattr: "true"
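For reference, one way to apply the settings above is per-instance via lxc config set (the instance name is a placeholder); linux.kernel_modules takes the full comma-separated list shown above:

```shell
lxc config set instance08 security.nesting=true
lxc config set instance08 security.syscalls.intercept.mknod=true
lxc config set instance08 security.syscalls.intercept.setxattr=true
lxc config set instance08 limits.memory.swap=false
# lxc config set instance08 linux.kernel_modules=bridge,ip_tables,...
```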

Here are the LXD host’s /var/log/syslog entries from starting that container, including the entries matching your query:

Oct 12 05:31:27 {LXDHostName} systemd[2416332]: Started snap.lxd.lxc.c1e3d36e-3495-4957-948c-4a9f67f7b894.scope.
Oct 12 05:31:28 {LXDHostName} systemd[2416332]: Started snap.lxd.lxc.ac70b77a-9457-4767-8c44-7762f3083d43.scope.
Oct 12 05:31:48 {LXDHostName} systemd[2416332]: Started snap.lxd.lxc.e95d600b-6e4d-4083-ae35-5895eaa38050.scope.
Oct 12 05:31:48 {LXDHostName} systemd[2416332]: Started snap.lxd.lxc.f4dd8995-d1fd-4645-91f8-5faa9eaedac2.scope.
Oct 12 05:31:48 {LXDHostName} networkd-dispatcher[2849]: WARNING:Unknown index 59 seen, reloading interface list
Oct 12 05:31:48 {LXDHostName} systemd-udevd[750066]: Using default interface naming scheme 'v249'.
Oct 12 05:31:48 {LXDHostName} kernel: [1337627.774919] EXT4-fs (zd336): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Oct 12 05:31:49 {LXDHostName} kernel: [1337627.861538] audit: type=1400 audit(1665567109.059:368): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxd-{LXDInstanceName}_</var/snap/lxd/common/lxd>" pid=750105 comm="apparmor_parser"
Oct 12 05:31:49 {LXDHostName} kernel: [1337627.966895] physmEtQVl: renamed from macb53d2725
Oct 12 05:31:49 {LXDHostName} kernel: [1337628.004825] eth0: renamed from physmEtQVl
Oct 12 05:31:49 {LXDHostName} kernel: [1337628.418317] audit: type=1400 audit(1665567109.615:369): apparmor="STATUS" operation="profile_load" label="lxd-{LXDInstanceName}_</var/snap/lxd/common/lxd>//&:lxd-{LXDInstanceName}_<var-snap-lxd-common-lxd>:unconfined" name="lsb_release" pid=750338 comm="apparmor_parser"
Oct 12 05:31:49 {LXDHostName} kernel: [1337628.418945] audit: type=1400 audit(1665567109.619:370): apparmor="STATUS" operation="profile_load" label="lxd-{LXDInstanceName}_</var/snap/lxd/common/lxd>//&:lxd-{LXDInstanceName}_<var-snap-lxd-common-lxd>:unconfined" name="nvidia_modprobe" pid=750339 comm="apparmor_parser"
Oct 12 05:31:49 {LXDHostName} kernel: [1337628.418952] audit: type=1400 audit(1665567109.619:371): apparmor="STATUS" operation="profile_load" label="lxd-{LXDInstanceName}_</var/snap/lxd/common/lxd>//&:lxd-{LXDInstanceName}_<var-snap-lxd-common-lxd>:unconfined" name="nvidia_modprobe//kmod" pid=750339 comm="apparmor_parser"
Oct 12 05:31:49 {LXDHostName} kernel: [1337628.430100] audit: type=1400 audit(1665567109.627:372): apparmor="STATUS" operation="profile_load" label="lxd-{LXDInstanceName}_</var/snap/lxd/common/lxd>//&:lxd-{LXDInstanceName}_<var-snap-lxd-common-lxd>:unconfined" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=750340 comm="apparmor_parser"
Oct 12 05:31:49 {LXDHostName} kernel: [1337628.430108] audit: type=1400 audit(1665567109.627:373): apparmor="STATUS" operation="profile_load" label="lxd-{LXDInstanceName}_</var/snap/lxd/common/lxd>//&:lxd-{LXDInstanceName}_<var-snap-lxd-common-lxd>:unconfined" name="/usr/lib/NetworkManager/nm-dhcp-helper" pid=750340 comm="apparmor_parser"
Oct 12 05:31:49 {LXDHostName} kernel: [1337628.430113] audit: type=1400 audit(1665567109.627:374): apparmor="STATUS" operation="profile_load" label="lxd-{LXDInstanceName}_</var/snap/lxd/common/lxd>//&:lxd-{LXDInstanceName}_<var-snap-lxd-common-lxd>:unconfined" name="/usr/lib/connman/scripts/dhclient-script" pid=750340 comm="apparmor_parser"
Oct 12 05:31:49 {LXDHostName} kernel: [1337628.430117] audit: type=1400 audit(1665567109.627:375): apparmor="STATUS" operation="profile_load" label="lxd-{LXDInstanceName}_</var/snap/lxd/common/lxd>//&:lxd-{LXDInstanceName}_<var-snap-lxd-common-lxd>:unconfined" name="/{,usr/}sbin/dhclient" pid=750340 comm="apparmor_parser"
Oct 12 05:31:51 {LXDHostName} ModemManager[2974]: <info>  [base-manager] couldn't check support for device '/sys/devices/pci0000:00/0000:00:01.1/0000:01:00.0': not supported by any plugin
Oct 12 05:31:52 {LXDHostName} kernel: [1337631.017153] overlayfs: upper fs does not support RENAME_WHITEOUT.
Oct 12 05:31:52 {LXDHostName} kernel: [1337631.017188] overlayfs: upper fs missing required features.
Oct 12 05:31:52 {LXDHostName} kernel: [1337631.273197] audit: type=1400 audit(1665567112.471:376): apparmor="STATUS" operation="profile_load" label="lxd-{LXDInstanceName}_</var/snap/lxd/common/lxd>//&:lxd-{LXDInstanceName}_<var-snap-lxd-common-lxd>:unconfined" name="docker-default" pid=750452 comm="apparmor_parser"
Oct 12 05:31:52 {LXDHostName} kernel: [1337631.284753] overlayfs: upper fs does not support RENAME_WHITEOUT.
Oct 12 05:31:52 {LXDHostName} kernel: [1337631.284758] overlayfs: upper fs does not support xattr, falling back to xino=off.
Oct 12 05:31:52 {LXDHostName} kernel: [1337631.284759] overlayfs: upper fs missing required features.
Oct 12 05:31:52 {LXDHostName} kernel: [1337631.289928] overlayfs: upper fs does not support RENAME_WHITEOUT.
Oct 12 05:31:52 {LXDHostName} kernel: [1337631.289932] overlayfs: upper fs does not support xattr, falling back to xino=off.
Oct 12 05:31:52 {LXDHostName} kernel: [1337631.289933] overlayfs: upper fs missing required features

Then created a very simple stack in mrtest-dkr-httpd.yml:

version: "3.9"

services:
  web:
    image: httpd:latest
    ports:
      - "8089:80"

From /var/log/syslog inside the LXD system container for Docker, when starting the stack via docker stack deploy -c mrtest-dkr-httpd.yml httpd:

Oct 12 06:10:38 {LXDInstanceName} dockerd[184]: time="2022-10-12T06:10:38.519036406-04:00" level=info msg="initialized VXLAN UDP port to 4789 "
Oct 12 06:10:38 {LXDInstanceName} dockerd[184]: time="2022-10-12T06:10:38.944968720-04:00" level=error msg="error reading the kernel parameter net.ipv4.vs.conn_reuse_mode" error="open /proc/sys/net/ipv4/vs/conn_reuse_mode: no such file or directory"
Oct 12 06:10:38 {LXDInstanceName} dockerd[184]: time="2022-10-12T06:10:38.945051606-04:00" level=error msg="error reading the kernel parameter net.ipv4.vs.expire_nodest_conn" error="open /proc/sys/net/ipv4/vs/expire_nodest_conn: no such file or directory"
Oct 12 06:10:38 {LXDInstanceName} dockerd[184]: time="2022-10-12T06:10:38.945093947-04:00" level=error msg="error reading the kernel parameter net.ipv4.vs.expire_quiescent_template" error="open /proc/sys/net/ipv4/vs/expire_quiescent_template: no such file or directory"
Oct 12 06:10:38 {LXDInstanceName} dockerd[184]: time="2022-10-12T06:10:38.945146199-04:00" level=error msg="error reading the kernel parameter net.ipv4.vs.conn_reuse_mode" error="open /proc/sys/net/ipv4/vs/conn_reuse_mode: no such file or directory"
Oct 12 06:10:38 {LXDInstanceName} dockerd[184]: time="2022-10-12T06:10:38.945185878-04:00" level=error msg="error reading the kernel parameter net.ipv4.vs.expire_nodest_conn" error="open /proc/sys/net/ipv4/vs/expire_nodest_conn: no such file or directory"
Oct 12 06:10:38 {LXDInstanceName} dockerd[184]: time="2022-10-12T06:10:38.945400456-04:00" level=error msg="error reading the kernel parameter net.ipv4.vs.expire_quiescent_template" error="open /proc/sys/net/ipv4/vs/expire_quiescent_template: no such file or directory"
Oct 12 06:10:39 {LXDInstanceName} networkd-dispatcher[156]: WARNING:Unknown index 19 seen, reloading interface list 
Oct 12 06:10:39 {LXDInstanceName} systemd-udevd[2155]: Using default interface naming scheme 'v249'.
Oct 12 06:10:39 {LXDInstanceName} networkd-dispatcher[156]: ERROR:Unknown interface index 19 seen even after reload
Oct 12 06:10:39 {LXDInstanceName} networkd-dispatcher[156]: WARNING:Unknown index 19 seen, reloading interface list
Oct 12 06:10:39 {LXDInstanceName} networkd-dispatcher[156]: ERROR:Unknown interface index 19 seen even after reload
Oct 12 06:10:39 {LXDInstanceName} networkd-dispatcher[156]: WARNING:Unknown index 19 seen, reloading interface list
Oct 12 06:10:39 {LXDInstanceName} networkd-dispatcher[156]: ERROR:Unknown interface index 19 seen even after reload
Oct 12 06:10:39 {LXDInstanceName} networkd-dispatcher[156]: WARNING:Unknown index 19 seen, reloading interface list
Oct 12 06:10:39 {LXDInstanceName} networkd-dispatcher[156]: ERROR:Unknown interface index 19 seen even after reload
Oct 12 06:10:39 {LXDInstanceName} networkd-dispatcher[156]: WARNING:Unknown index 20 seen, reloading interface list
Oct 12 06:10:39 {LXDInstanceName} systemd-udevd[2158]: Using default interface naming scheme 'v249'.
Oct 12 06:10:39 {LXDInstanceName} networkd-dispatcher[156]: WARNING:Unknown index 21 seen, reloading interface list
Oct 12 06:10:39 {LXDInstanceName} networkd-dispatcher[156]: ERROR:Unknown interface index 21 seen even after reload
Oct 12 06:10:39 {LXDInstanceName} networkd-dispatcher[156]: WARNING:Unknown index 21 seen, reloading interface list
Oct 12 06:10:39 {LXDInstanceName} networkd-dispatcher[156]: ERROR:Unknown interface index 21 seen even after reload
Oct 12 06:10:39 {LXDInstanceName} networkd-dispatcher[156]: WARNING:Unknown index 21 seen, reloading interface list
Oct 12 06:10:39 {LXDInstanceName} networkd-dispatcher[156]: ERROR:Unknown interface index 21 seen even after reload
Oct 12 06:10:39 {LXDInstanceName} networkd-dispatcher[156]: WARNING:Unknown index 21 seen, reloading interface list
Oct 12 06:10:39 {LXDInstanceName} networkd-dispatcher[156]: ERROR:Unknown interface index 21 seen even after reload
Oct 12 06:10:39 {LXDInstanceName} networkd-dispatcher[156]: WARNING:Unknown index 21 seen, reloading interface list
Oct 12 06:10:39 {LXDInstanceName} networkd-dispatcher[156]: ERROR:Unknown interface index 21 seen even after reload
Oct 12 06:10:39 {LXDInstanceName} dockerd[184]: time="2022-10-12T06:10:39.684784278-04:00" level=warning msg="reference for unknown type: "digest="sha256:4400fb49c9d7d218d3c8109ef721e0ec1f3897028a3004b098af587d565f4ae5" remote="docker.io/library/httpd:latest@sha256:4400fb49c9d7d218d3c8109ef721e0ec1f3897028a3004b098af587d565f4ae5"
Oct 12 06:10:55 {LXDInstanceName} systemd[1]: var-lib-docker-overlay2-b08b7ef379351441d082ad8e300bc961f93bffd857c5a3508d439de2a4929496-merged.mount: Deactivated successfully.
Oct 12 06:10:58 {LXDInstanceName} systemd[1]: var-lib-docker-overlay2-f931646963ad214f7903d9d4e07437f796cd33239241d9f88480007c31cb741d-merged.mount: Deactivated successfully.
Oct 12 06:11:00 {LXDInstanceName} systemd[1]: var-lib-docker-overlay2-208ab045be3e69a15f79e7c1218534f59b82c04489a91962344c25ad829e7390-merged.mount: Deactivated successfully.
Oct 12 06:11:06 {LXDInstanceName} systemd[1]: var-lib-docker-overlay2-ee4f5c66a924ced0893fef118e6a5f7c6af855d79df095925cd5f9faf52c4d6b-merged.mount: Deactivated successfully.
Oct 12 06:11:06 {LXDInstanceName} systemd[1]: var-lib-docker-overlay2-45a6178ec4330488c444453fb71a8e8ac698fabe9d140a117bfa562c1050f349\x2dinit-merged.mount: Deactivated successfully.
Oct 12 06:11:07 {LXDInstanceName} systemd-udevd[2380]: Using default interface naming scheme 'v249'.
Oct 12 06:11:07 {LXDInstanceName} networkd-dispatcher[156]: WARNING:Unknown index 22 seen, reloading interface list
Oct 12 06:11:07 {LXDInstanceName} systemd-udevd[2381]: Using default interface naming scheme 'v249'.
Oct 12 06:11:07 {LXDInstanceName} networkd-dispatcher[156]: WARNING:Unknown index 24 seen, reloading interface list
Oct 12 06:11:07 {LXDInstanceName} systemd-networkd[136]: veth559b5aa: Link UP
Oct 12 06:11:07 {LXDInstanceName} systemd-udevd[2403]: Using default interface naming scheme 'v249'.
Oct 12 06:11:07 {LXDInstanceName} networkd-dispatcher[156]: WARNING:Unknown index 26 seen, reloading interface list
Oct 12 06:11:08 {LXDInstanceName} containerd[160]: time="2022-10-12T06:11:08.038601524-04:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 12 06:11:08 {LXDInstanceName} containerd[160]: time="2022-10-12T06:11:08.038814563-04:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 12 06:11:08 {LXDInstanceName} containerd[160]: time="2022-10-12T06:11:08.038861199-04:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 12 06:11:08 {LXDInstanceName} containerd[160]: time="2022-10-12T06:11:08.039853745-04:00" level=info msg="starting signal loop" namespace=moby path=/run/containerd/io.containerd.runtime.v2.task/moby/7f548e324d76cda3f6d28beb83425122d23569908d185605924e210b2497f345 pid=2444 runtime=io.containerd.runc.v2
Oct 12 06:11:08 {LXDInstanceName} systemd[1]: Started libcontainer container 7f548e324d76cda3f6d28beb83425122d23569908d185605924e210b2497f345.
Oct 12 06:11:08 {LXDInstanceName} systemd-networkd[136]: veth559b5aa: Gained carrier
Oct 12 06:11:09 {LXDInstanceName} dockerd[2653]: time="2022-10-12T06:11:09-04:00" level=error msg="Failed to write to /proc/sys/net/ipv4/vs/conntrack: open /proc/sys/net/ipv4/vs/conntrack: no such file or directory"
Oct 12 06:11:09 {LXDInstanceName} dockerd[184]: time="2022-10-12T06:11:09.350014048-04:00" level=error msg="Failed to add firewall mark rule in sbox lb_qb2h (lb-http): reexec failed: exit status 7"
Oct 12 06:11:09 {LXDInstanceName} dockerd[2682]: time="2022-10-12T06:11:09-04:00" level=error msg="Failed to write to /proc/sys/net/ipv4/vs/conntrack: open /proc/sys/net/ipv4/vs/conntrack: no such file or directory"
Oct 12 06:11:09 {LXDInstanceName} dockerd[184]: time="2022-10-12T06:11:09.471221660-04:00" level=error msg="Failed to add firewall mark rule in sbox ingress (ingress): reexec failed: exit status 7"
Oct 12 06:11:10 {LXDInstanceName} systemd-networkd[136]: veth559b5aa: Gained IPv6LL
 

The result: I cannot get a response on http://{LXDInstanceName}:8089 from another host on the network (the LXD instances use macvlan NICs). netstat -an on the LXD instances / Swarm nodes shows 8089 listening, and HTTP on port 80 inside the Docker task (the app container) serves the default httpd index.html fine.
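To narrow down where the published port breaks, a quick check (hostnames are the placeholders used above) is to compare a local request through the routing mesh with a remote one:

```shell
# On a Swarm node itself: does the ingress routing mesh answer locally?
curl -sI http://localhost:8089

# From another host on the network (note that with macvlan, the LXD host
# itself typically cannot reach its own macvlan instances)
curl -sI http://{LXDInstanceName}:8089
```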