LXD ZFS mount empty

I have a fresh install of Ubuntu 16.04.3 LTS and installed the newest version of LXD via `snap install lxd`; LXD has its own partition on a software RAID 1.
Everything works as expected, but the specified mountpoint is empty, so I can't access the files from the container (www).

```
$> zfs list
NAME                                                                              USED  AVAIL  REFER  MOUNTPOINT
default                                                                           237M  1.68T    19K  none
default/containers                                                               30.8M  1.68T    19K  none
default/containers/www                                                           30.8M  1.68T   207M  /var/snap/lxd/common/lxd/storage-pools/default/containers/www
default/custom                                                                     19K  1.68T    19K  none
default/deleted                                                                    19K  1.68T    19K  none
default/images                                                                    206M  1.68T    19K  none
default/images/2ff59908824be6adf74d03f5fc184b7a1608efd695ce043f3244e0e6405917d4   199M  1.68T   199M  none
default/images/7b4538bb7aeb979195e5ea1585589745f5b32e0c1d6602530c688447768438dc  3.59M  1.68T  3.59M  none
default/images/d1c24ed541bd64e26f41b46fef656da1d55354a1f4c3e3677c654f6bf24d90cf  3.59M  1.68T  3.59M  none
default/snapshots                                                                  19K  1.68T    19K  none

$> ls -la /var/snap/lxd/common/lxd/storage-pools/default/containers/www
total 8
drwxr-xr-x 2 1000000 1000000 4096 Nov 22 22:22 .
drwxr-xr-x 3 root    root    4096 Nov 22 22:22 ..
```

[details=LXD info]
```
$> lxc info
config:
  images.auto_update_interval: "0"
api_extensions:
- storage_zfs_remove_snapshots
- container_host_shutdown_timeout
- container_syscall_filtering
- auth_pki
- container_last_used_at
- etag
- patch
- usb_devices
- https_allowed_credentials
- image_compression_algorithm
- directory_manipulation
- container_cpu_time
- storage_zfs_use_refquota
- storage_lvm_mount_options
- network
- profile_usedby
- container_push
- container_exec_recording
- certificate_update
- container_exec_signal_handling
- gpu_devices
- container_image_properties
- migration_progress
- id_map
- network_firewall_filtering
- network_routes
- storage
- file_delete
- file_append
- network_dhcp_expiry
- storage_lvm_vg_rename
- storage_lvm_thinpool_rename
- network_vlan
- image_create_aliases
- container_stateless_copy
- container_only_migration
- storage_zfs_clone_copy
- unix_device_rename
- storage_lvm_use_thinpool
- storage_rsync_bwlimit
- network_vxlan_interface
- storage_btrfs_mount_options
- entity_description
- image_force_refresh
- storage_lvm_lv_resizing
- id_map_base
- file_symlinks
- container_push_target
- network_vlan_physical
- storage_images_delete
- container_edit_metadata
- container_snapshot_stateful_migration
- storage_driver_ceph
- storage_ceph_user_name
- resource_limits
- storage_volatile_initial_source
- storage_ceph_force_osd_reuse
- storage_block_filesystem_btrfs
- resources
- kernel_limits
- storage_api_volume_rename
- macaroon_authentication
- network_sriov
- console
api_status: stable
api_version: "1.0"
auth: trusted
public: false
auth_methods:
- tls
environment:
  addresses: []
  architectures:
  - x86_64
  - i686
  certificate: |
    -----BEGIN CERTIFICATE-----

    -----END CERTIFICATE-----
  certificate_fingerprint: f79085244ecea53d2615b9cc9bc92ba4a7465ab7456dc252c2ebce9dad252ba0
  driver: lxc
  driver_version: 2.1.1
  kernel: Linux
  kernel_architecture: x86_64
  kernel_version: 4.4.0-101-generic
  server: lxd
  server_pid: 2540
  server_version: "2.20"
  storage: zfs
  storage_version: 0.6.5.6-0ubuntu16
```
[/details]

So that’s actually expected behavior. The snap package runs in its own mount namespace, shielding the host from any mount that occurs inside the snap.

This avoids a number of issues with the way the Linux kernel handles mounts and especially issues with ZFS’ own ideas of how mounts work on Linux.
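You can see the difference by comparing the two mount namespaces. A minimal sketch, assuming the LXD daemon's PID can be found with `pgrep -x lxd` (process names can differ between snap versions):

```
# On the host, the container dataset does not show up:
grep containers/www /proc/mounts || echo "not mounted in the host namespace"

# Inside the LXD daemon's mount namespace, it does:
sudo nsenter -t "$(pgrep -x lxd | head -n1)" -m grep containers/www /proc/mounts
```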

In general we recommend that you interact with the container's filesystem through the LXD API using the `lxc file` command, as shown below. If that's not enough, then you can get rsync to work over `lxc exec` as I described here:
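For example, with the `www` container from the output above (the file paths here are just illustrative):

```
# Pull a file from the container into the current directory:
lxc file pull www/etc/nginx/nginx.conf .

# Push a local file back into the container:
lxc file push ./nginx.conf www/etc/nginx/nginx.conf
```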

If you absolutely must access the container’s filesystem directly, you’ll have to jump through a few hoops:

  • First, note that modern LXD unmounts the container's filesystem when the container isn't running, so you'll need your container to be running first.
  • Second, locate the PID of the "[lxc monitor] …" process for the container that you want.
  • Third, access the filesystem through /proc/PID/root/var/snap/lxd/common/lxd/storage-pools/default/containers/www (see the sketch after this list).
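Putting those steps together, a minimal sketch for the `www` container (names as in the output above):

```
# Step 1: the container has to be running
lxc start www

# Step 2: find the PID of its "[lxc monitor]" process
PID=$(pgrep -f 'lxc monitor.*www' | head -n1)

# Step 3: reach the container's filesystem through that process's root
sudo ls /proc/$PID/root/var/snap/lxd/common/lxd/storage-pools/default/containers/www
```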

Thanks! I've been wondering why, when I do `zpool import -a` and then `zfs set mountpoint=/lxd_pool lxd_pool` followed by `df -hT`, I see 0M being used. So there is a failsafe mechanism in place. I guess I should back up my containers directly with lxc commands, not via the host's ZFS, right?
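Something like this, I suppose (the container and remote names are just examples, and the remote would have to be configured first):

```
# Take a point-in-time snapshot through LXD:
lxc snapshot www backup1

# Copy that snapshot to another LXD host registered as "myremote":
lxc copy www/backup1 myremote:www-backup
```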

You can use the script @stgraber mentioned to rsync directly into the container.
I have been using that method and it works pretty well.
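For reference, the technique amounts to a tiny wrapper script that rsync uses in place of ssh. A sketch of the idea (the script name is arbitrary, and rsync must be installed inside the container):

```
#!/bin/sh
# rsync-lxd.sh: rsync hands us the "host" (the container name) followed by
# the remote rsync command; run that command via lxc exec instead of ssh.
ctn="$1"
shift
exec lxc exec "$ctn" -- "$@"
```

```
chmod +x rsync-lxd.sh
rsync -av --rsh=./rsync-lxd.sh /local/dir/ www:/var/www/
```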

The /proc/PID/root method did not work well in my tests.

So I guess you could `zpool import` the pool to a local mountpoint, then rsync into your container. Have you tried that?
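I'd imagine something roughly like this (pool name and paths are guesses, importing read-only to be on the safe side, and reusing the wrapper script from above):

```
# Import the pool read-only under an alternate root:
sudo zpool import -o readonly=on -R /mnt/lxd_pool lxd_pool

# Rsync from the imported dataset into the running container:
rsync -av --rsh=./rsync-lxd.sh /mnt/lxd_pool/containers/www/rootfs/var/www/ www:/var/www/
```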