LXD with custom ZFS pool: where are my container files?

This is my first time using ZFS as the storage pool for my LXD cluster.

I manually created the zpool `zfs-vda` and the storage pool `data`, then launched an instance:

[root@lxd-server-100-23 ~]# zpool list
NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zfs-vda  14.5G  1.01G  13.5G        -         -     0%     6%  1.00x    ONLINE  -

[root@lxd-server-100-23 containers]# lxc storage show data 
config: {}
description: ""
name: data
driver: zfs
used_by:
- /1.0/images/9a04aa57d48d12a3a82eb71587eeef726924c3088a84a3acc62d84f02c11f32e?target=lxd-server-100-23
- /1.0/instances/u1?target=lxd-server-100-23
status: Created
locations:
- lxd-server-100-23
- lxd-server-100-8

[root@lxd-server-100-23 ~]# lxc list u1 
+------+---------+------+------+-----------+-----------+-------------------+
| NAME |  STATE  | IPV4 | IPV6 |   TYPE    | SNAPSHOTS |     LOCATION      |
+------+---------+------+------+-----------+-----------+-------------------+
| u1   | RUNNING |      |      | CONTAINER | 0         | lxd-server-100-23 |
+------+---------+------+------+-----------+-----------+-------------------+


[root@lxd-server-100-23 ~]# zfs list 
NAME                                                                              USED  AVAIL     REFER  MOUNTPOINT
zfs-vda                                                                          1.01G  13.0G       24K  legacy
zfs-vda/containers                                                               35.2M  13.0G       24K  legacy
zfs-vda/containers/u1                                                            35.2M  13.0G     1.00G  legacy
zfs-vda/custom                                                                     24K  13.0G       24K  legacy
zfs-vda/deleted                                                                   120K  13.0G       24K  legacy
zfs-vda/deleted/containers                                                         24K  13.0G       24K  legacy
zfs-vda/deleted/custom                                                             24K  13.0G       24K  legacy
zfs-vda/deleted/images                                                             24K  13.0G       24K  legacy
zfs-vda/deleted/virtual-machines                                                   24K  13.0G       24K  legacy
zfs-vda/images                                                                   1001M  13.0G       24K  legacy
zfs-vda/images/9a04aa57d48d12a3a82eb71587eeef726924c3088a84a3acc62d84f02c11f32e  1001M  13.0G     1001M  legacy
zfs-vda/virtual-machines                                                           24K  13.0G       24K  legacy

I can see that the instance `u1` is using the zpool `zfs-vda`. I wanted to mount its dataset and look at the files, but that failed:

[root@lxd-server-100-23 ~]# zfs set mountpoint=/data/u1 zfs-vda/containers/u1
[root@lxd-server-100-23 ~]# zfs mount -a
[root@lxd-server-100-23 ~]# zfs list 
NAME                                                                              USED  AVAIL     REFER  MOUNTPOINT
zfs-vda                                                                          1.01G  13.0G       24K  legacy
zfs-vda/containers                                                               35.2M  13.0G       24K  legacy
zfs-vda/containers/u1                                                            35.2M  13.0G     1.00G  /data/u1
zfs-vda/custom                                                                     24K  13.0G       24K  legacy
zfs-vda/deleted                                                                   120K  13.0G       24K  legacy
zfs-vda/deleted/containers                                                         24K  13.0G       24K  legacy
zfs-vda/deleted/custom                                                             24K  13.0G       24K  legacy
zfs-vda/deleted/images                                                             24K  13.0G       24K  legacy
zfs-vda/deleted/virtual-machines                                                   24K  13.0G       24K  legacy
zfs-vda/images                                                                   1001M  13.0G       24K  legacy
zfs-vda/images/9a04aa57d48d12a3a82eb71587eeef726924c3088a84a3acc62d84f02c11f32e  1001M  13.0G     1001M  legacy
zfs-vda/virtual-machines                                                           24K  13.0G       24K  legacy

[root@lxd-server-100-23 ~]# ll /data/u1/
total 0
[root@lxd-server-100-23 ~]# 

How can I access the instance's files?

You should never alter the `mountpoint` property on a LXD-managed dataset; doing so can cause quite a lot of issues.

ZFS doesn’t understand mount namespaces, so it’s quite likely that your command above caused the dataset to be mounted at /data/u1 in some other mount namespace, maybe the snapd one, maybe one of your containers.

In general, you should not do this. If you need to see the container’s files, make sure the container is running and then access them either through /proc/PID/root/, where PID is the value shown by `lxc info u1`, or through /var/snap/lxd/common/mntns/var/snap/lxd/common/lxd/storage-pools/data/containers/u1
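For reference, a minimal sketch of the /proc/PID/root approach, assuming a running container named u1 (the name from this thread); the `get_pid` helper is purely illustrative:

```shell
# Helper (illustrative): pull the PID field out of `lxc info` output.
# The label casing ("PID:" vs "Pid:") has varied between LXD versions,
# so match it case-insensitively.
get_pid() {
  awk 'tolower($1) == "pid:" { print $2 }'
}

# With a running container named "u1":
#   PID=$(lxc info u1 | get_pid)
#   ls "/proc/${PID}/root/"   # the container's rootfs, as seen from the host
```

Because /proc/PID/root follows the container's own mount namespace, this shows the live rootfs without touching any ZFS `mountpoint` properties.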

Thanks for the advice.

You may also benefit from the `lxc file mount` command, available in LXD 4.24 and later.

Oh My God, that’s fantastic!

[root@lxd-server-100-23 ~]# yum install sshfs
Loaded plugins: fastestmirror
...
Running transaction
  Installing : fuse-sshfs-2.10-1.el7.x86_64                                                                                                                                                                     1/1 
  Verifying  : fuse-sshfs-2.10-1.el7.x86_64                                                                                                                                                                     1/1 

Installed:
  fuse-sshfs.x86_64 0:2.10-1.el7                                                                                                                                                                                    

Complete!

[root@lxd-server-100-23 ~]# lxc file mount local:u1/ /data/u1/
sshfs mounting "u1/" on "/var/lib/snapd/hostfs/data/u1"
Press ctrl+c to finish

[root@lxd-server-100-23 ~]# ll /data/u1/
total 80
lrwxrwxrwx. 1 root  root    7 Mar 22 05:40 bin -> usr/bin
drwxr-xr-x. 1 root  root    2 Mar 22 05:48 boot
drwxr-xr-x. 1 root  root  500 Mar 22 19:15 dev
drwxr-xr-x. 1 root  root  184 Mar 22 19:15 etc
drwxr-xr-x. 1 root  root    3 Mar 22 19:15 home
lrwxrwxrwx. 1 root  root    7 Mar 22 05:40 lib -> usr/lib
lrwxrwxrwx. 1 root  root    9 Mar 22 05:40 lib32 -> usr/lib32
lrwxrwxrwx. 1 root  root    9 Mar 22 05:40 lib64 -> usr/lib64
lrwxrwxrwx. 1 root  root   10 Mar 22 05:40 libx32 -> usr/libx32
drwxr-xr-x. 1 root  root    2 Mar 22 05:40 media
drwxr-xr-x. 1 root  root    2 Mar 22 05:40 mnt
drwxr-xr-x. 1 root  root    2 Mar 22 05:40 opt
dr-xr-xr-x. 1 65534 65534   0 Mar 22 19:15 proc
drwx------. 1 root  root    6 Mar 22 20:31 root
drwxr-xr-x. 1 root  root  660 Mar 22 19:16 run
lrwxrwxrwx. 1 root  root    8 Mar 22 05:40 sbin -> usr/sbin
drwxr-xr-x. 1 root  root    5 Mar 23 17:52 snap
drwxr-xr-x. 1 root  root    2 Mar 22 05:40 srv
dr-xr-xr-x. 1 65534 65534   0 Mar 22 11:21 sys
drwxrwxrwt. 1 root  root   10 Mar 23 17:52 tmp
drwxr-xr-x. 1 root  root   14 Mar 22 05:42 usr
drwxr-xr-x. 1 root  root   15 Mar 22 05:43 var

Thanks again!
