How exactly does LXD put the files from the images in place?

Hi,

I'm seeing a very unwelcome effect here:

// Mounting a ZFS dataset

[root@node2 containers]# zfs mount lxc-vrtx-zfs-storage/lxc1752-zfs-0

// Checking the available space (50 GB free, 128 bytes used)

[root@node2 containers]# df
lxc-vrtx-zfs-storage/lxc1752-zfs-0 52428800 128 52428672 1% /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1752

// Checking the content of the directory (it's empty)

[root@node2 containers]# ls -la /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1752
total 5
drwxr-xr-x 2 root root 2 Dec 5 18:59 .
drwxr-xr-x 31 root root 4096 Dec 5 19:23 ..

// Running lxc init:

lxc init images:centos/6 lxc1752 -p lxc1752 --debug --verbose

DBUG[12-05|19:24:26] Connecting to a local LXD over a Unix socket 
DBUG[12-05|19:24:26] Sending request to LXD                   method=GET url=http://unix.socket/1.0 etag=
DBUG[12-05|19:24:26] Got response struct from LXD 
DBUG[12-05|19:24:26] 
	{
		"config": {
			"core.https_address": "[::]:8443",
			"images.auto_update_interval": "0"
		},
		"api_extensions": [
			"storage_zfs_remove_snapshots",
			"container_host_shutdown_timeout",
			"container_stop_priority",
			"container_syscall_filtering",
			"auth_pki",
			"container_last_used_at",
			"etag",
			"patch",
			"usb_devices",
			"https_allowed_credentials",
			"image_compression_algorithm",
			"directory_manipulation",
			"container_cpu_time",
			"storage_zfs_use_refquota",
			"storage_lvm_mount_options",
			"network",
			"profile_usedby",
			"container_push",
			"container_exec_recording",
			"certificate_update",
			"container_exec_signal_handling",
			"gpu_devices",
			"container_image_properties",
			"migration_progress",
			"id_map",
			"network_firewall_filtering",
			"network_routes",
			"storage",
			"file_delete",
			"file_append",
			"network_dhcp_expiry",
			"storage_lvm_vg_rename",
			"storage_lvm_thinpool_rename",
			"network_vlan",
			"image_create_aliases",
			"container_stateless_copy",
			"container_only_migration",
			"storage_zfs_clone_copy",
			"unix_device_rename",
			"storage_lvm_use_thinpool",
			"storage_rsync_bwlimit",
			"network_vxlan_interface",
			"storage_btrfs_mount_options",
			"entity_description",
			"image_force_refresh",
			"storage_lvm_lv_resizing",
			"id_map_base",
			"file_symlinks",
			"container_push_target",
			"network_vlan_physical",
			"storage_images_delete",
			"container_edit_metadata",
			"container_snapshot_stateful_migration",
			"storage_driver_ceph",
			"storage_ceph_user_name",
			"resource_limits",
			"storage_volatile_initial_source",
			"storage_ceph_force_osd_reuse",
			"storage_block_filesystem_btrfs",
			"resources",
			"kernel_limits",
			"storage_api_volume_rename",
			"macaroon_authentication",
			"network_sriov",
			"console",
			"restrict_devlxd",
			"migration_pre_copy",
			"infiniband",
			"maas_network",
			"devlxd_events",
			"proxy",
			"network_dhcp_gateway",
			"file_get_symlink",
			"network_leases",
			"unix_device_hotplug",
			"storage_api_local_volume_handling",
			"operation_description",
			"clustering",
			"event_lifecycle",
			"storage_api_remote_volume_handling",
			"nvidia_runtime",
			"container_mount_propagation",
			"container_backup",
			"devlxd_images",
			"container_local_cross_pool_handling",
			"proxy_unix",
			"proxy_udp",
			"clustering_join",
			"proxy_tcp_udp_multi_port_handling",
			"network_state",
			"proxy_unix_dac_properties",
			"container_protection_delete",
			"unix_priv_drop",
			"pprof_http",
			"proxy_haproxy_protocol",
			"network_hwaddr",
			"proxy_nat",
			"network_nat_order",
			"container_full",
			"candid_authentication",
			"backup_compression",
			"candid_config",
			"nvidia_runtime_config",
			"storage_api_volume_snapshots",
			"storage_unmapped",
			"projects",
			"candid_config_key",
			"network_vxlan_ttl",
			"container_incremental_copy",
			"usb_optional_vendorid",
			"snapshot_scheduling",
			"container_copy_project",
			"clustering_server_address",
			"clustering_image_replication",
			"container_protection_shift",
			"snapshot_expiry",
			"container_backup_override_pool",
			"snapshot_expiry_creation",
			"network_leases_location",
			"resources_cpu_socket",
			"resources_gpu",
			"resources_numa",
			"kernel_features",
			"id_map_current",
			"event_location",
			"storage_api_remote_volume_snapshots",
			"network_nat_address",
			"container_nic_routes",
			"rbac",
			"cluster_internal_copy",
			"seccomp_notify",
			"lxc_features",
			"container_nic_ipvlan",
			"network_vlan_sriov",
			"storage_cephfs",
			"container_nic_ipfilter",
			"resources_v2",
			"container_exec_user_group_cwd",
			"container_syscall_intercept",
			"container_disk_shift",
			"storage_shifted",
			"resources_infiniband",
			"daemon_storage",
			"instances",
			"image_types",
			"resources_disk_sata",
			"clustering_roles",
			"images_expiry"
		],
		"api_status": "stable",
		"api_version": "1.0",
		"auth": "trusted",
		"public": false,
		"auth_methods": [
			"tls"
		],
		"environment": {
			"addresses": [
			"10.2.3.4:8443"
			],
			"architectures": [
				"x86_64",
				"i686"
			],
			"certificate": "\n",
			"certificate_fingerprint": "46db67db4203fae6df89b3b27fb946635f31ba90eaaa2eb3b608be06b2a2dae7",
			"driver": "lxc",
			"driver_version": "3.2.1",
			"kernel": "Linux",
			"kernel_architecture": "x86_64",
			"kernel_features": {
				"netnsid_getifaddrs": "true",
				"seccomp_listener": "true",
				"shiftfs": "false",
				"uevent_injection": "true",
				"unpriv_fscaps": "true"
			},
			"kernel_version": "5.2.13-200.fc30.x86_64",
			"lxc_features": {
				"mount_injection_file": "true",
				"network_gateway_device_route": "true",
				"network_ipvlan": "true",
				"network_l2proxy": "true",
				"network_phys_macvlan_mtu": "true",
				"seccomp_notify": "true"
			},
			"project": "default",
			"server": "lxd",
			"server_clustered": false,
			"server_name": "node2",
			"server_pid": 48715,
			"server_version": "3.18",
			"storage": "dir",
			"storage_version": "1"
		}
	} 
Creating lxc1752
DBUG[12-05|19:24:26] Connecting to a remote simplestreams server 
DBUG[12-05|19:24:26] Connected to the websocket: ws://unix.socket/1.0/events 
DBUG[12-05|19:24:26] Sending request to LXD                   method=POST url=http://unix.socket/1.0/instances etag=
DBUG[12-05|19:24:26] 
	{
		"architecture": "",
		"config": {},
		"devices": {},
		"ephemeral": false,
		"profiles": [
			"lxc1752"
		],
		"stateful": false,
		"description": "",
		"name": "lxc1752",
		"source": {
			"type": "image",
			"certificate": "",
			"alias": "centos/6",
			"server": "https://images.linuxcontainers.org",
			"protocol": "simplestreams",
			"mode": "pull"
		},
		"instance_type": "",
		"type": ""
	} 
DBUG[12-05|19:24:26] Got operation from LXD 
DBUG[12-05|19:24:26] 
	{
		"id": "e3d8dc11-34cc-4b43-86b8-d75dee7cde3b",
		"class": "task",
		"description": "Creating container",
		"created_at": "2019-12-05T19:24:26.697858977+01:00",
		"updated_at": "2019-12-05T19:24:26.697858977+01:00",
		"status": "Running",
		"status_code": 103,
		"resources": {
			"containers": [
				"/1.0/containers/lxc1752"
			]
		},
		"metadata": null,
		"may_cancel": false,
		"err": "",
		"location": "none"
	} 
DBUG[12-05|19:24:26] Sending request to LXD                   method=GET url=http://unix.socket/1.0/operations/e3d8dc11-34cc-4b43-86b8-d75dee7cde3b etag=
DBUG[12-05|19:24:26] Got response struct from LXD 
DBUG[12-05|19:24:26] 
	{
		"id": "e3d8dc11-34cc-4b43-86b8-d75dee7cde3b",
		"class": "task",
		"description": "Creating container",
		"created_at": "2019-12-05T19:24:26.697858977+01:00",
		"updated_at": "2019-12-05T19:24:26.697858977+01:00",
		"status": "Running",
		"status_code": 103,
		"resources": {
			"containers": [
				"/1.0/containers/lxc1752"
			]
		},
		"metadata": null,
		"may_cancel": false,
		"err": "",
		"location": "none"
	} 
DBUG[12-05|19:24:27] Sending request to LXD                   method=GET url=http://unix.socket/1.0/instances/lxc1752 etag=
DBUG[12-05|19:24:27] Got response struct from LXD 
DBUG[12-05|19:24:27] 
	{
		"architecture": "x86_64",
		"config": {
			"image.architecture": "amd64",
			"image.description": "Centos 6 amd64 (20191205_07:08)",
			"image.os": "Centos",
			"image.release": "6",
			"image.serial": "20191205_07:08",
			"image.type": "squashfs",
			"volatile.apply_template": "create",
			"volatile.base_image": "f6df01ad636278ed6c15a3b424f9bc8b06dc30ebbdc6011a6f8e0fd497afa593",
			"volatile.idmap.base": "0",
			"volatile.idmap.next": "[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000}]",
			"volatile.last_state.idmap": "[]"
		},
		"devices": {},
		"ephemeral": false,
		"profiles": [
			"lxc1752"
		],
		"stateful": false,
		"description": "",
		"created_at": "2019-12-05T19:24:26.746658602+01:00",
		"expanded_config": {
			"image.architecture": "amd64",
			"image.description": "Centos 6 amd64 (20191205_07:08)",
			"image.os": "Centos",
			"image.release": "6",
			"image.serial": "20191205_07:08",
			"image.type": "squashfs",
			"volatile.apply_template": "create",
			"volatile.base_image": "f6df01ad636278ed6c15a3b424f9bc8b06dc30ebbdc6011a6f8e0fd497afa593",
			"volatile.idmap.base": "0",
			"volatile.idmap.next": "[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000}]",
			"volatile.last_state.idmap": "[]"
		},
		"expanded_devices": {
			"root": {
				"path": "/",
				"pool": "lxc-vrtx-zfs-storage",
				"type": "disk"
			}
		},
		"name": "lxc1752",
		"status": "Stopped",
		"status_code": 102,
		"last_used_at": "1970-01-01T01:00:00+01:00",
		"location": "none",
		"type": "container"
	} 

The container you are starting doesn't have any network attached to it.
  To create a new network, use: lxc network create
  To attach a network to a container, use: lxc network attach

// Checking the content of the directory again (it's still empty, even though we just created a container, which is startable and working)

[root@node2 containers]# ls -la /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1752
total 5
drwxr-xr-x 2 root root 2 Dec 5 18:59 .
drwxr-xr-x 31 root root 4096 Dec 5 19:23 ..

// Now we unmount the dataset and check the directory again, and voilà, here are the files:

[root@node2 containers]# zfs umount /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1752
[root@node2 containers]# ls -la /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1752
total 12
drwx--x--x 4 root root 77 Dec 5 19:24 .
drwxr-xr-x 31 root root 4096 Dec 5 19:23 ..
-r-------- 1 root root 1926 Dec 5 19:24 backup.yaml
-rw-r--r-- 1 root root 686 Dec 5 08:19 metadata.yaml
dr-xr-xr-x 22 root root 239 Dec 5 08:19 rootfs
drwxr-xr-x 2 root root 72 Dec 5 08:19 templates

And even if I let it install the files onto the disk, copy them to the ZFS dataset, and try to start the container, I get:

[root@node2 containers]# lxc start lxc1752

Error: Common start logic: No such file or directory: "/var/snap/lxd/common/lxd/storage-pools/lxc-vrtx-zfs-storage/containers/lxc1752/rootfs"
    Try `lxc info --show-log lxc1752` for more info
    [root@node2 containers]# lxc info --show-log lxc1752
    Name: lxc1752
    Location: none
    Remote: unix://
    Architecture: x86_64
    Created: 2019/12/05 18:59 UTC
    Status: Stopped
    Type: persistent
    Profiles: lxc1752

    Log:

    lxc 20191205185937.795 TRACE    commands - commands.c:lxc_cmd:303 - Connection refused - Command "get_state" failed to connect command socket
    lxc 20191205185937.805 TRACE    commands - commands.c:lxc_cmd:303 - Connection refused - Command "get_state" failed to connect command socket
    lxc 20191205190149.346 TRACE    commands - commands.c:lxc_cmd:303 - Connection refused - Command "get_state" failed to connect command socket
    lxc 20191205190149.346 TRACE    commands - commands.c:lxc_cmd:303 - Connection refused - Command "get_state" failed to connect command socket
    lxc 20191205190149.346 TRACE    commands - commands.c:lxc_cmd:303 - Connection refused - Command "get_state" failed to connect command socket
    lxc 20191205190154.993 TRACE    commands - commands.c:lxc_cmd:303 - Connection refused - Command "get_state" failed to connect command socket
    lxc 20191205190155.723 INFO     confile - confile.c:set_config_idmaps:1987 - Read uid map: type u nsid 0 hostid 1000000 range 1000000000
    lxc 20191205190155.726 INFO     confile - confile.c:set_config_idmaps:1987 - Read uid map: type g nsid 0 hostid 1000000 range 1000000000
    lxc 20191205190155.731 TRACE    commands - commands.c:lxc_cmd:303 - Connection refused - Command "get_state" failed to connect command socket
    lxc 20191205190235.882 TRACE    commands - commands.c:lxc_cmd:303 - Connection refused - Command "get_state" failed to connect command socket
    lxc 20191205190445.417 TRACE    commands - commands.c:lxc_cmd:303 - Connection refused - Command "get_state" failed to connect command socket
    lxc 20191205190445.430 INFO     confile - confile.c:set_config_idmaps:1987 - Read uid map: type u nsid 0 hostid 1000000 range 1000000000
    lxc 20191205190445.430 INFO     confile - confile.c:set_config_idmaps:1987 - Read uid map: type g nsid 0 hostid 1000000 range 1000000000
    lxc 20191205190445.430 TRACE    commands - commands.c:lxc_cmd:303 - Connection refused - Command "get_state" failed to connect command socket
    lxc 20191205190453.703 TRACE    commands - commands.c:lxc_cmd:303 - Connection refused - Command "get_state" failed to connect command socket
    lxc 20191205190453.709 TRACE    commands - commands.c:lxc_cmd:303 - Connection refused - Command "get_state" failed to connect command socket
    lxc 20191205190453.709 TRACE    commands - commands.c:lxc_cmd:303 - Connection refused - Command "get_state" failed to connect command socket

So somehow LXD suddenly ignores this mount. This was working before; now it suddenly doesn't work anymore (it's possible that this started when I ran the command "zpool import", without actually importing or doing anything).

The ZFS system works fine. I can write to the directory, and all the KVM servers have no issues, nor do the other LXD containers. So something happened, and I have no idea what or why.

My question now would be: what mechanism does LXD use to put the files in place? Then I might be able to understand why LXD does not put the files into the directory, which is perfectly writable for LXD, and instead puts them on the / partition of the system disk.

Thank you for any ideas or suggestions!

Greetings
Oliver

LXD will never mount ZFS datasets outside of /var/lib/lxd/storage-pools/… or, if using the snap, /var/snap/lxd/common/lxd/storage-pools/…

Container datasets must have their mountpoint property set to the path under the LXD directory and must be marked canmount=noauto.

The fact that you're accessing the data through /opt/storages/… on your system suggests something's pretty wrong with the mountpoint properties, causing LXD to think the dataset is mounted where it expects it when it's in fact mounted somewhere else.
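For reference, on a ZFS-backed pool the properties described above would look roughly like this (a sketch with placeholder pool and dataset names, not values from this system):

```shell
# Placeholder names (tank/lxd, pool "default", container c1): not from
# this system. LXD-managed container datasets point their mountpoint
# under the LXD directory and are set to canmount=noauto, so nothing
# but LXD itself mounts them.
zfs set canmount=noauto tank/lxd/containers/c1
zfs set mountpoint=/var/snap/lxd/common/lxd/storage-pools/default/containers/c1 \
    tank/lxd/containers/c1
# Verify both properties on the dataset:
zfs get -H -o property,value canmount,mountpoint tank/lxd/containers/c1
```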

Hi Stéphane,

thank you for your reply !

The LXD storage we use for this is a directory storage. It's not a ZFS type of storage.

So LXD does not (and should not) manage this ZFS zpool, and therefore also does not handle the ZFS mounts.

We create the ZFS datasets ourselves and mount them ourselves into the correct position inside the LXD storage.

+-----------------------+-------------+--------+------------------------------------------+---------+
|         NAME          | DESCRIPTION | DRIVER |                  SOURCE                  | USED BY |
+-----------------------+-------------+--------+------------------------------------------+---------+
| lxc-vrtx-zfs-storage  |             | dir    | /opt/storages/lxc-vrtx-zfs-storage       | 54      |
+-----------------------+-------------+--------+------------------------------------------+---------+

This was, until now, working perfectly, just as expected.

For some reason, LXD now ignores the ZFS mount in the system and does not write the data into the mountpoint, but to the position on the base OS where the data would end up if we hadn't created a ZFS dataset and mounted it at this position.

So I don't understand why LXD suddenly started ignoring this mountpoint.

If I write data to the mountpoint via the OS (dd, cp, mkdir, or whatever), it works perfectly fine.
But for LXD this mountpoint simply does not belong to the ZFS dataset.

With dir pools, LXD will bind-mount the source (/opt/storages/lxc-vrtx-zfs-storage) onto its own /var/lib/lxd/storage-pools/lxc-vrtx-zfs-storage on startup (slightly different path if using snap).

This isn't a recursive bind-mount and isn't a shared mount, meaning that any sub-mount on the source path will not be bind-mounted; only the source itself will.

So anything which is already mounted under the source, or gets mounted there at a later point, will not be visible to LXD.
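The non-recursive behaviour can be reproduced with plain mount commands (a root-only sketch using scratch paths under /tmp, nothing from this system):

```shell
# Requires root. A plain bind mount clones only the top-level
# filesystem; sub-mounts under the source are not carried along.
mkdir -p /tmp/src/sub /tmp/dst
mount -t tmpfs tmpfs /tmp/src/sub    # a sub-mount below the source
touch /tmp/src/sub/hello             # file lives on that tmpfs
mount --bind /tmp/src /tmp/dst       # non-recursive, like LXD's dir pool mount
ls /tmp/dst/sub                      # empty: the tmpfs did not come along
# mount --rbind (recursive) or shared propagation would have carried
# the sub-mount, but LXD uses neither for dir pool sources.
```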

Hi,

uhm, if I understand you right, and if this is the case, then:

# ls -la /var/snap/lxd/common/lxd/storage-pools/lxc-vrtx-zfs-storage

    total 0
    drwx--x--x 2 root root 6 Oct 24 02:32 .
    drwx--x--x 5 root root 84 Nov 4 17:20 ..

VS.

#ls -la /opt/storages/lxc-vrtx-zfs-storage/containers/

    total 402844
    drwxr-xr-x 32 root    root      4096 Dec  6 12:30 .
    drwx--x--x  3 root    root        24 Nov 30 11:20 ..
    drwx--x--x  4 root    root        77 Dec  1 02:36 centos7
    drwx--x--x  4 root    root        77 Dec  1 12:51 fedora30
    -rw-r--r--  1 root    root 412469152 Dec  5 17:25 lxc1090.tar.gz
    d--x------  5 1000000 root         8 Nov 19 23:21 lxc1101-1304
    d--x------  5 1000000 root         8 Nov 19 23:24 lxc1101-1305
    d--x------  5 1000000 root         8 Nov 19 23:26 lxc1101-1306
    d--x------  5 1000000 root         8 Nov 19 23:28 lxc1101-1307
    d--x------  5 1000000 root         8 Nov 19 23:31 lxc1102-1308
    d--x------  5 1000000 root         8 Nov 19 23:36 lxc1110-1316
    d--x------  5 1000000 root         8 Nov 19 23:56 lxc1110-1318
    d--x------  5 1000000 root         8 Nov 19 23:58 lxc1110-1319
    d--x------  5 1000000 root         8 Nov 20 00:02 lxc1110-1321
    d--x------  5 1000000 root         8 Nov 20 00:07 lxc1110-1325
    d--x------  5 root    root         8 Nov 20 00:11 lxc1122-1347
    d--x------  5 root    root         8 Nov 21 13:57 lxc1158-1383
    d--x------  5 1000000 root         8 Nov 19 22:54 lxc1184-1412
    d--x------  5 root    root         8 Nov 19 22:56 lxc1213-1452
    d--x------  5 1000000 root         8 Nov 19 23:00 lxc1214-1455
    d--x------  5 1000000 root         8 Nov 19 22:49 lxc1214-1458
    d--x------  5 1000000 root         8 Nov 19 22:45 lxc1214-1459
    d--x------  4 1000000 root         6 Nov 21 18:05 lxc1215-1463
    d--x------  5 1000000 root         8 Nov 19 22:30 lxc1216-1464
    d--x------  5 root    root         8 Nov 19 22:16 lxc1507
    d--x------  4 root    root         6 Nov 13 17:15 lxc1692
    drwxr-xr-x  2 root    root         6 Oct 24 02:52 lxc1693
    drwx--x--x  5 root    root         9 Oct 26 02:19 lxc1695
    drwxr-xr-x  2 root    root         6 Oct 26 02:35 lxc1696
    drwxr-xr-x  4 root    root         7 Dec  5 20:01 lxc1752
    drwxr-xr-x  2 root    root         6 Dec  5 19:57 lxc1752-zfs-0
    drwxr-xr-x  2 root    root         6 Nov 30 14:10 lxc202
    d--x------  4 1000000 root         6 Nov 30 14:14 lxc204
  1. How was it possible to get ~20 LXD containers up and running successfully until now (for months) with exactly this mechanic (create the ZFS dataset and mount it into the /opt/storages/lxc-vrtx-zfs-storage/containers/$container path)?

  2. Where is this bind-mount you are talking about?

cat /proc/mounts

sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
devtmpfs /dev devtmpfs rw,nosuid,size=132019204k,nr_inodes=33004801,mode=755 0 0
securityfs /sys/kernel/security securityfs rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,nosuid,nodev,mode=755 0 0
tmpfs /sys/fs/cgroup tmpfs ro,nosuid,nodev,noexec,mode=755 0 0
cgroup2 /sys/fs/cgroup/unified cgroup2 rw,nosuid,nodev,noexec,relatime 0 0
cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,xattr,name=systemd 0 0
pstore /sys/fs/pstore pstore rw,nosuid,nodev,noexec,relatime 0 0
efivarfs /sys/firmware/efi/efivars efivarfs rw,nosuid,nodev,noexec,relatime 0 0
bpf /sys/fs/bpf bpf rw,nosuid,nodev,noexec,relatime,mode=700 0 0
cgroup /sys/fs/cgroup/net_cls,net_prio cgroup rw,nosuid,nodev,noexec,relatime,net_cls,net_prio 0 0
cgroup /sys/fs/cgroup/pids cgroup rw,nosuid,nodev,noexec,relatime,pids 0 0
cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio 0 0
cgroup /sys/fs/cgroup/hugetlb cgroup rw,nosuid,nodev,noexec,relatime,hugetlb 0 0
cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,nosuid,nodev,noexec,relatime,cpu,cpuacct 0 0
cgroup /sys/fs/cgroup/freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer 0 0
cgroup /sys/fs/cgroup/devices cgroup rw,nosuid,nodev,noexec,relatime,devices 0 0
cgroup /sys/fs/cgroup/perf_event cgroup rw,nosuid,nodev,noexec,relatime,perf_event 0 0
cgroup /sys/fs/cgroup/memory cgroup rw,nosuid,nodev,noexec,relatime,memory 0 0
cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset,clone_children 0 0
configfs /sys/kernel/config configfs rw,relatime 0 0
/dev/mapper/fedora-root / xfs rw,relatime,attr2,inode64,noquota 0 0
systemd-1 /proc/sys/fs/binfmt_misc autofs rw,relatime,fd=45,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=23827 0 0
hugetlbfs /dev/hugepages hugetlbfs rw,relatime,pagesize=2M 0 0
mqueue /dev/mqueue mqueue rw,relatime 0 0
debugfs /sys/kernel/debug debugfs rw,relatime 0 0
tmpfs /tmp tmpfs rw,nosuid,nodev 0 0
/dev/sda2 /boot ext4 rw,relatime 0 0
/dev/sda1 /boot/efi vfat rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=ascii,shortname=winnt,errors=remount-ro 0 0
sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw,relatime 0 0
tmpfs /run/snapd/ns tmpfs rw,nosuid,nodev,mode=755 0 0
tmpfs /var/snap/lxd/common/ns tmpfs rw,relatime,size=1024k,mode=700 0 0
nsfs /var/snap/lxd/common/ns/shmounts nsfs rw 0 0
fusectl /sys/fs/fuse/connections fusectl rw,relatime 0 0
tmpfs /run/user/0 tmpfs rw,nosuid,nodev,relatime,size=26406844k,mode=700 0 0
binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0
lxc-vrtx-zfs-storage/lxc1695-zfs-0 /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1695 zfs rw,xattr,posixacl 0 0
/dev/loop7 /var/lib/snapd/snap/lxd/12317 squashfs ro,nodev,relatime 0 0
/dev/loop9 /var/lib/snapd/snap/core/8039 squashfs ro,nodev,relatime 0 0
lxc-vrtx-zfs-storage/lxc1692-zfs-0 /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1692 zfs rw,xattr,posixacl 0 0
lxc-vrtx-zfs-storage/lxc1507-zfs-0 /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1507 zfs rw,xattr,posixacl 0 0
lxc-vrtx-zfs-storage/lxc1216-1464-zfs-0 /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1216-1464 zfs rw,xattr,posixacl 0 0
lxc-vrtx-zfs-storage/lxc1214-1459-zfs-0 /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1214-1459 zfs rw,xattr,posixacl 0 0
lxc-vrtx-zfs-storage/lxc1214-1458-zfs-0 /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1214-1458 zfs rw,xattr,posixacl 0 0
lxc-vrtx-zfs-storage/lxc1184-1412-zfs-0 /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1184-1412 zfs rw,xattr,posixacl 0 0
lxc-vrtx-zfs-storage/lxc1213-1452-zfs-0 /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1213-1452 zfs rw,xattr,posixacl 0 0
lxc-vrtx-zfs-storage/lxc1214-1455-zfs-0 /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1214-1455 zfs rw,xattr,posixacl 0 0
lxc-vrtx-zfs-storage/lxc1101-1304-zfs-0 /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1101-1304 zfs rw,xattr,posixacl 0 0
lxc-vrtx-zfs-storage/lxc1101-1305-zfs-0 /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1101-1305 zfs rw,xattr,posixacl 0 0
lxc-vrtx-zfs-storage/lxc1101-1306-zfs-0 /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1101-1306 zfs rw,xattr,posixacl 0 0
lxc-vrtx-zfs-storage/lxc1101-1307-zfs-0 /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1101-1307 zfs rw,xattr,posixacl 0 0
lxc-vrtx-zfs-storage/lxc1102-1308-zfs-0 /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1102-1308 zfs rw,xattr,posixacl 0 0
lxc-vrtx-zfs-storage/lxc1110-1316-zfs-0 /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1110-1316 zfs rw,xattr,posixacl 0 0
lxc-vrtx-zfs-storage/lxc1110-1318-zfs-0 /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1110-1318 zfs rw,xattr,posixacl 0 0
lxc-vrtx-zfs-storage/lxc1110-1319-zfs-0 /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1110-1319 zfs rw,xattr,posixacl 0 0
lxc-vrtx-zfs-storage/lxc1110-1321-zfs-0 /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1110-1321 zfs rw,xattr,posixacl 0 0
lxc-vrtx-zfs-storage/lxc1110-1325-zfs-0 /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1110-1325 zfs rw,xattr,posixacl 0 0
lxc-vrtx-zfs-storage/lxc1122-1347-zfs-0 /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1122-1347 zfs rw,xattr,posixacl 0 0
lxc-vrtx-zfs-storage/lxc1158-1383-zfs-0 /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1158-1383 zfs rw,xattr,posixacl 0 0
lxc-vrtx-zfs-storage/lxc1215-1463-zfs-0 /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1215-1463 zfs rw,xattr,posixacl 0 0
lxc-vrtx-zfs-storage/lxc204-zfs-0 /opt/storages/lxc-vrtx-zfs-storage/containers/lxc204 zfs rw,xattr,posixacl 0 0
/dev/loop3 /var/lib/snapd/snap/lxd/12631 squashfs ro,nodev,relatime 0 0
nsfs /var/snap/lxd/common/ns/mntns nsfs rw 0 0
tracefs /sys/kernel/debug/tracing tracefs rw,relatime 0 0
/dev/loop10 /var/lib/snapd/snap/core/8213 squashfs ro,nodev,relatime 0 0
nsfs /run/snapd/ns/lxd.mnt nsfs rw 0 0
lxc-vrtx-zfs-storage/lxc1752-zfs-0 /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1752 zfs rw,xattr,posixacl 0 0

If there are bind-mounts, shouldn't I see them somehow? And if LXD cannot see them (which the empty ls of /var/snap/lxd/common/lxd/storage-pools/lxc-vrtx-zfs-storage indicates), how was LXD able to start all those LXD containers successfully (and use the files on the ZFS datasets)?
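If I understand the snap setup right, the daemon keeps its own mount namespace (I assume the nsfs entry at /run/snapd/ns/lxd.mnt in the /proc/mounts output above is it), so presumably one could inspect what LXD itself sees with something like:

```shell
# Assumption: the snap pins LXD's mount namespace at
# /run/snapd/ns/lxd.mnt (the nsfs entry visible in /proc/mounts).
# nsenter runs the given command inside that namespace, showing the
# filesystem as the LXD daemon sees it, not as the host shell sees it.
nsenter --mount=/run/snapd/ns/lxd.mnt -- \
    ls -la /var/snap/lxd/common/lxd/storage-pools/lxc-vrtx-zfs-storage/containers
```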

#zfs list

lxc-vrtx-zfs-storage                                                   15.4G   946G       24K  none
lxc-vrtx-zfs-storage/lxc1101-1304-zfs-0                                 557M   499G      557M  /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1101-1304
lxc-vrtx-zfs-storage/lxc1101-1305-zfs-0                                 574M   499G      574M  /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1101-1305
lxc-vrtx-zfs-storage/lxc1101-1306-zfs-0                                 546M   499G      546M  /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1101-1306
lxc-vrtx-zfs-storage/lxc1101-1307-zfs-0                                 543M   499G      543M  /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1101-1307
lxc-vrtx-zfs-storage/lxc1102-1308-zfs-0                                 548M   499G      548M  /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1102-1308
lxc-vrtx-zfs-storage/lxc1110-1316-zfs-0                                1.04G  49.0G     1.04G  /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1110-1316
lxc-vrtx-zfs-storage/lxc1110-1318-zfs-0                                 859M  49.2G      859M  /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1110-1318
lxc-vrtx-zfs-storage/lxc1110-1319-zfs-0                                 810M  49.2G      810M  /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1110-1319
lxc-vrtx-zfs-storage/lxc1110-1321-zfs-0                                 866M  49.2G      866M  /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1110-1321
lxc-vrtx-zfs-storage/lxc1110-1325-zfs-0                                 874M  49.1G      874M  /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1110-1325
lxc-vrtx-zfs-storage/lxc1122-1347-zfs-0                                 529M  49.5G      529M  /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1122-1347
lxc-vrtx-zfs-storage/lxc1158-1383-zfs-0                                1.28G  48.7G     1.28G  /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1158-1383
lxc-vrtx-zfs-storage/lxc1184-1412-zfs-0                                 550M  49.5G      550M  /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1184-1412
lxc-vrtx-zfs-storage/lxc1213-1452-zfs-0                                 313M  49.7G      313M  /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1213-1452
lxc-vrtx-zfs-storage/lxc1214-1455-zfs-0                                 889M  49.1G      889M  /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1214-1455
lxc-vrtx-zfs-storage/lxc1214-1458-zfs-0                                 889M  49.1G      889M  /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1214-1458
lxc-vrtx-zfs-storage/lxc1214-1459-zfs-0                                1.05G  49.0G     1.05G  /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1214-1459
lxc-vrtx-zfs-storage/lxc1215-1463-zfs-0                                 325M  49.7G      325M  /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1215-1463
lxc-vrtx-zfs-storage/lxc1216-1464-zfs-0                                 611M   499G      611M  /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1216-1464
lxc-vrtx-zfs-storage/lxc1507-zfs-0                                      988M   249G      988M  /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1507
lxc-vrtx-zfs-storage/lxc1692-zfs-0                                      284M  49.7G      284M  /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1692
lxc-vrtx-zfs-storage/lxc1695-zfs-0                                      305M  49.7G      305M  /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1695
lxc-vrtx-zfs-storage/lxc1752-zfs-0                                      192M  49.8G      192M  /opt/storages/lxc-vrtx-zfs-storage/containers/lxc1752
lxc-vrtx-zfs-storage/lxc204-zfs-0                                       229M  9.78G      229M  /opt/storages/lxc-vrtx-zfs-storage/containers/lxc204

In the very end, the question would be: what would be a proper way to take external mounts (coming from wherever) and make them usable with LXD?

Thank you for your time !

PS: Doing this with LVM works perfectly too… We create the LVs (cached & thin), mount them into the /containers directory, and run the lxc init command. And it does its job as expected.
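One route I could imagine (just a guess on my side; the device name extdata and the target path are made up) would be to keep the dataset mounted somewhere neutral on the host and hand it to the container as a disk device:

```shell
# Hypothetical: attach an externally managed host mount to a container
# as a disk device. LXD bind-mounts the source into the container at
# start time, so the host mount only has to exist before "lxc start".
lxc config device add lxc1752 extdata disk \
    source=/opt/storages/lxc-vrtx-zfs-storage/containers/lxc1752-zfs-0 \
    path=/mnt/data
```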

Hi,

just for a test I restarted the LXD daemon. All started containers stopped, which was expected.
And now all containers refuse to start, with the same error this topic is about.

Just as a try, I changed the mountpoint of the ZFS dataset right to the correct place within the snap LXD ecosystem and tried to start the container, resulting in:

[....]
    lxc lxc1101-1304 20191206172015.169 DEBUG    conf - conf.c:lxc_map_ids:2947 - No newuidmap and newgidmap binary found. Trying to write directly with euid 0
    lxc lxc1101-1304 20191206172015.169 TRACE    conf - conf.c:lxc_map_ids:3019 - Wrote mapping "0 1000000 1000000000
    1000000000 0 1
    "
    lxc lxc1101-1304 20191206172015.169 TRACE    conf - conf.c:lxc_map_ids:3019 - Wrote mapping "0 1000000 1000000000
    1000000000 0 1
    "
    lxc lxc1101-1304 20191206172015.169 TRACE    conf - conf.c:run_userns_fn:4163 - Calling function "chown_cgroup_wrapper"
    lxc lxc1101-1304 20191206172015.169 WARN     cgfsng - cgroups/cgfsng.c:chowmod:1525 - No such file or directory - Failed to chown(/sys/fs/cgroup/unified//lxc.payload/lxc1101-1304/memory.oom.group, 1000000000, 0)
    lxc lxc1101-1304 20191206172015.170 DEBUG    start - start.c:lxc_spawn:1836 - Preserved net namespace via fd 11
    lxc lxc1101-1304 20191206172015.170 TRACE    start - start.c:lxc_spawn:1843 - Allocated new network namespace id
    lxc lxc1101-1304 20191206172015.173 DEBUG    network - network.c:instantiate_phys:908 - Instantiated phys "vethe0a3bf91" with ifindex is "918"
    lxc lxc1101-1304 20191206172015.196 DEBUG    network - network.c:lxc_network_move_created_netdev_priv:3300 - Moved network device "vethe0a3bf91" with ifindex 918 to network namespace of 17242
    lxc lxc1101-1304 20191206172015.196 TRACE    network - network.c:lxc_network_send_to_child:3924 - Sent network device name "vethe0a3bf91" to child
    lxc lxc1101-1304 20191206172015.196 TRACE    network - network.c:lxc_network_recv_from_parent:3950 - Received network device name "vethe0a3bf91" from parent
    lxc lxc1101-1304 20191206172015.196 NOTICE   utils - utils.c:lxc_switch_uid_gid:1411 - Switched to gid 0
    lxc lxc1101-1304 20191206172015.196 NOTICE   utils - utils.c:lxc_switch_uid_gid:1420 - Switched to uid 0
    lxc lxc1101-1304 20191206172015.196 NOTICE   utils - utils.c:lxc_setgroups:1433 - Dropped additional groups
    lxc lxc1101-1304 20191206172015.197 INFO     start - start.c:do_start:1301 - Unshared CLONE_NEWCGROUP
    lxc lxc1101-1304 20191206172015.197 TRACE    conf - conf.c:remount_all_slave:3352 - Remounted all mount table entries as MS_SLAVE
    lxc lxc1101-1304 20191206172015.198 DEBUG    storage - storage/storage.c:get_storage_by_name:232 - Detected rootfs type "dir"
    lxc lxc1101-1304 20191206172015.198 ERROR    dir - storage/dir.c:dir_mount:198 - No such file or directory - Failed to mount "/var/snap/lxd/common/lxd/containers/lxc1101-1304/rootfs" on "/var/snap/lxd/common/lxc/"
    lxc lxc1101-1304 20191206172015.198 ERROR    conf - conf.c:lxc_mount_rootfs:1353 - Failed to mount rootfs "/var/snap/lxd/common/lxd/containers/lxc1101-1304/rootfs" onto "/var/snap/lxd/common/lxc/" with options "(null)"
    lxc lxc1101-1304 20191206172015.198 ERROR    conf - conf.c:lxc_setup_rootfs_prepare_root:3447 - Failed to setup rootfs for
    lxc lxc1101-1304 20191206172015.198 ERROR    conf - conf.c:lxc_setup:3550 - Failed to setup rootfs
    lxc lxc1101-1304 20191206172015.198 ERROR    start - start.c:do_start:1321 - Failed to setup container "lxc1101-1304"
    lxc lxc1101-1304 20191206172015.199 ERROR    sync - sync.c:__sync_wait:62 - An error occurred in another process (expected sequence number 5)
    lxc lxc1101-1304 20191206172015.199 WARN     network - network.c:lxc_delete_network_priv:3377 - Failed to rename interface with index 918 from "eth0" to its initial name "vethe0a3bf91"
    lxc lxc1101-1304 20191206172015.199 DEBUG    network - network.c:lxc_delete_network:4030 - Deleted network devices
    lxc lxc1101-1304 20191206172015.199 TRACE    start - start.c:lxc_serve_state_socket_pair:544 - Sent container state "ABORTING" to 5
    lxc lxc1101-1304 20191206172015.199 TRACE    start - start.c:lxc_serve_state_clients:474 - Set container state to ABORTING
    lxc lxc1101-1304 20191206172015.199 TRACE    start - start.c:lxc_serve_state_clients:477 - No state clients registered
    lxc lxc1101-1304 20191206172015.199 ERROR    start - start.c:lxc_abort:1122 - Function not implemented - Failed to send SIGKILL to 17242
    lxc lxc1101-1304 20191206172015.199 DEBUG    lxccontainer - lxccontainer.c:wait_on_daemonized_start:861 - First child 17221 exited
    lxc lxc1101-1304 20191206172015.199 ERROR    lxccontainer - lxccontainer.c:wait_on_daemonized_start:873 - Received container state "ABORTING" instead of "RUNNING"
    lxc lxc1101-1304 20191206172015.199 ERROR    start - start.c:__lxc_start:2039 - Failed to spawn container "lxc1101-1304"
    lxc lxc1101-1304 20191206172015.199 TRACE    start - start.c:lxc_serve_state_clients:474 - Set container state to STOPPING
    lxc lxc1101-1304 20191206172015.199 TRACE    start - start.c:lxc_serve_state_clients:477 - No state clients registered
    lxc lxc1101-1304 20191206172015.199 TRACE    start - start.c:lxc_fini:999 - Set environment variable LXC_USER_NS=/proc/17223/fd/16
    lxc lxc1101-1304 20191206172015.199 TRACE    start - start.c:lxc_fini:999 - Set environment variable LXC_MNT_NS=/proc/17223/fd/17
    lxc lxc1101-1304 20191206172015.199 TRACE    start - start.c:lxc_fini:999 - Set environment variable LXC_PID_NS=/proc/17223/fd/18
    lxc lxc1101-1304 20191206172015.199 TRACE    start - start.c:lxc_fini:999 - Set environment variable LXC_UTS_NS=/proc/17223/fd/19
    lxc lxc1101-1304 20191206172015.199 TRACE    start - start.c:lxc_fini:999 - Set environment variable LXC_IPC_NS=/proc/17223/fd/20
    lxc lxc1101-1304 20191206172015.199 TRACE    start - start.c:lxc_fini:999 - Set environment variable LXC_NET_NS=/proc/17223/fd/11
    lxc lxc1101-1304 20191206172015.199 INFO     conf - conf.c:run_script_argv:374 - Executing script "/snap/lxd/current/bin/lxd callhook /var/snap/lxd/common/lxd 98 stopns" for container "lxc1101-1304"
    lxc lxc1101-1304 20191206172015.199 TRACE    conf - conf.c:run_script_argv:421 - Set environment variable: LXC_HOOK_TYPE=stop
    lxc lxc1101-1304 20191206172015.199 TRACE    conf - conf.c:run_script_argv:429 - Set environment variable: LXC_HOOK_SECTION=lxc
    lxc lxc1101-1304 20191206172015.270 TRACE    conf - conf.c:get_minimal_idmap:4339 - Allocated minimal idmapping
    lxc lxc1101-1304 20191206172015.271 TRACE    conf - conf.c:userns_exec_1:4403 - Establishing uid mapping for "17298" in new user namespace: nsuid 0 - hostid 1000000 - range 1000000000
    lxc lxc1101-1304 20191206172015.271 TRACE    conf - conf.c:userns_exec_1:4403 - Establishing uid mapping for "17298" in new user namespace: nsuid 1000000000 - hostid 0 - range 1
    lxc lxc1101-1304 20191206172015.271 TRACE    conf - conf.c:userns_exec_1:4403 - Establishing gid mapping for "17298" in new user namespace: nsuid 0 - hostid 1000000 - range 1000000000
    lxc lxc1101-1304 20191206172015.271 TRACE    conf - conf.c:userns_exec_1:4403 - Establishing gid mapping for "17298" in new user namespace: nsuid 1000000000 - hostid 0 - range 1
    lxc lxc1101-1304 20191206172015.271 DEBUG    conf - conf.c:lxc_map_ids:2947 - No newuidmap and newgidmap binary found. Trying to write directly with euid 0
    lxc lxc1101-1304 20191206172015.271 TRACE    conf - conf.c:lxc_map_ids:3019 - Wrote mapping "0 1000000 1000000000
    1000000000 0 1
    "
    lxc lxc1101-1304 20191206172015.271 TRACE    conf - conf.c:lxc_map_ids:3019 - Wrote mapping "0 1000000 1000000000
    1000000000 0 1
    "
    lxc lxc1101-1304 20191206172015.271 TRACE    conf - conf.c:run_userns_fn:4163 - Calling function "cgroup_rmdir_wrapper"
    lxc lxc1101-1304 20191206172015.276 TRACE    cgfsng - cgroups/cgfsng.c:cg_legacy_filter_and_set_cpus:500 - No isolated or offline cpus present in cpuset
    lxc lxc1101-1304 20191206172015.276 TRACE    cgfsng - cgroups/cgfsng.c:cg_legacy_handle_cpuset_hierarchy:616 - "cgroup.clone_children" was already set to "1"
    lxc 20191206172015.277 WARN     commands - commands.c:lxc_cmd_rsp_recv:135 - Connection reset by peer - Failed to receive response for command "get_state"
    lxc lxc1101-1304 20191206172015.277 TRACE    start - start.c:lxc_fini:1043 - Closed command socket
    lxc lxc1101-1304 20191206172015.277 TRACE    start - start.c:lxc_fini:1054 - Set container state to "STOPPED"
    lxc lxc1101-1304 20191206172015.288 INFO     conf - conf.c:run_script_argv:374 - Executing script "/snap/lxd/current/lxcfs/lxc.reboot.hook" for container "lxc1101-1304"
    lxc lxc1101-1304 20191206172015.288 TRACE    conf - conf.c:run_script_argv:421 - Set environment variable: LXC_HOOK_TYPE=post-stop
    lxc lxc1101-1304 20191206172015.288 TRACE    conf - conf.c:run_script_argv:429 - Set environment variable: LXC_HOOK_SECTION=lxc
    lxc 20191206172015.628 TRACE    commands - commands.c:lxc_cmd:303 - Connection refused - Command "get_state" failed to connect command socket
    lxc lxc1101-1304 20191206172015.794 INFO     conf - conf.c:run_script_argv:374 - Executing script "/snap/lxd/current/bin/lxd callhook /var/snap/lxd/common/lxd 98 stop" for container "lxc1101-1304"
    lxc lxc1101-1304 20191206172015.794 TRACE    conf - conf.c:run_script_argv:421 - Set environment variable: LXC_HOOK_TYPE=post-stop
    lxc lxc1101-1304 20191206172015.794 TRACE    conf - conf.c:run_script_argv:429 - Set environment variable: LXC_HOOK_SECTION=lxc
    lxc 20191206172015.831 TRACE    commands - commands.c:lxc_cmd:303 - Connection refused - Command "get_state" failed to connect command socket
    lxc 20191206172015.862 TRACE    commands - commands.c:lxc_cmd:303 - Connection refused - Command "get_state" failed to connect command socket
    lxc 20191206172020.900 TRACE    commands - commands.c:lxc_cmd:303 - Connection refused - Command "get_state" failed to connect command socket
    lxc 20191206172026.166 TRACE    commands - commands.c:lxc_cmd:303 - Connection refused - Command "get_state" failed to connect command socket
    lxc 20191206172026.277 TRACE    commands - commands.c:lxc_cmd:303 - Connection refused - Command "get_state" failed to connect command socket
    lxc 20191206172026.282 TRACE    commands - commands.c:lxc_cmd:303 - Connection refused - Command "get_state" failed to connect command socket
    lxc 20191206172026.282 TRACE    commands - commands.c:lxc_cmd:303 - Connection refused - Command "get_state" failed to connect command socket

So if LXD bind-mounts the source directory ( which is populated with the mounted ZFS datasets ) on its startup, how come things still fail like this?

Normally I would assume that whatever is in the source directory ( /opt/storages/lxc-vrtx-zfs-storage ) would also be bind-mounted on the LXD daemon’s startup.

If that were the case, the containers should now be visible to LXD. But they are not.
By the way, we also tried this with qcow2 files mounted into the source directories, and that usually works fine as well.
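One way to check this assumption is to compare the host’s view of the path with what the snap-confined LXD actually sees, since snapd runs LXD in its own mount namespace. A sketch: `/run/snapd/ns/lxd.mnt` is the usual location where snapd keeps the preserved mount namespace, but treat that path (and the container path taken from the log above) as assumptions about your host.

```shell
# Where snapd usually preserves the LXD snap's mount namespace (assumption).
NS=/run/snapd/ns/lxd.mnt
# Path LXD failed to mount, taken from the error log above.
TARGET=/var/snap/lxd/common/lxd/containers/lxc1101-1304/rootfs

if [ -e "$NS" ]; then
    # What LXD itself sees -- this may differ from the host's view:
    nsenter --mount="$NS" ls -la "$TARGET"
else
    echo "no snap mount namespace at $NS; is LXD installed from snap?"
fi
```

If the directory is empty inside the namespace but populated on the host, the mount simply is not propagated into LXD’s namespace.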

Hi,

OK, a workaround for existing LXD servers seems to be:

unmount the container’s storage wherever it is located ( or make a backup, or do whatever else protects the data from LXD ), then

lxc delete $container

and then

lxd import $container

( after you have put it back in place at the correct path within your source storage folder ).

I don’t know what LXD is doing there, but there is definitely a bug somewhere, and it harms the reliability of the whole setup.

Hi,

OK, after removing all containers that lived on this ZFS pool from the LXD database with:

  1. zfs umount $dataset
  2. lxc delete $container

and re-adding them with:

  1. zfs mount $dataset
  2. lxd import $container

all containers are running AND new containers can also be added without issues.
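The whole procedure for one container can be sketched as follows. It is a dry run: the `run` wrapper only prints the commands ( remove it to execute them ), and the container/dataset names are examples from this thread, so substitute your own.

```shell
# Dry-run wrapper: prints each command instead of executing it.
run() { echo "+ $*"; CMDS="$CMDS $*"; }
CMDS=""

container=lxc1752                             # example name, adjust
dataset=lxc-vrtx-zfs-storage/lxc1752-zfs-0    # example name, adjust

run zfs umount "$dataset"       # 1. hide the rootfs from LXD first
run lxc delete "$container"     # 2. remove the now-stale database entry
run zfs mount "$dataset"        # 3. put the rootfs back in place
run lxd import "$container"     # 4. re-register the container in the database
```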

So it would be great if someone could explain what’s going on here.

I opened a new topic for this, to make it easier for others to find this “solution”, or rather workaround:

https://discuss.linuxcontainers.org/t/how-exactly-does-the-command-lxd-import-work/6303