Possible to install and run LXCFS in tmpfs?

Just wondering, does LXCFS need to be installed and run on a hard drive, or can it be installed and run from tmpfs?

More specifically: normally an lxcfs (fuse.lxcfs) filesystem is mounted on /var/lib/lxcfs. Can that mountpoint live on tmpfs, or must it be on an on-disk filesystem?

My current lxc/lxd mounts, for reference:

├─/var rpool/local/var zfs
│ ├─/var/lib/lxcfs lxcfs fuse.lxcfs
│ ├─/var/lib/lxd/shmounts tmpfs tmpfs
│ ├─/var/lib/lxd/devlxd tmpfs tmpfs
│ └─/var/lib/lxd/storage-pools/lxdpool/containers/ae-test rpool/safe/lxd/containers/ae-test zfs

It doesn’t matter: LXCFS is just a FUSE filesystem, and its only requirement is an empty directory that it can mount its filesystem over.
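As a quick illustration (a sketch, assuming the lxcfs binary is on your PATH and you have the needed privileges; the demo path is made up), lxcfs can be pointed at any empty directory, including a tmpfs-backed one:

```shell
# lxcfs only needs an empty mountpoint; a tmpfs-backed directory works too.
mkdir -p /tmp/lxcfs-demo                    # /tmp is often tmpfs-backed
if command -v lxcfs >/dev/null 2>&1; then
    lxcfs /tmp/lxcfs-demo &                 # mounts the FUSE filesystem over the directory
    sleep 1
    findmnt -n -o FSTYPE /tmp/lxcfs-demo    # expect: fuse.lxcfs
    fusermount -u /tmp/lxcfs-demo           # clean up
fi
```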

Thanks. I ask because when I try to run my system with root on tmpfs, including /var on tmpfs, I get the following problem with lxcfs:

> ls -al /var/lib | grep lx
drwxr-xr-x - root root 2021-08-09 19:28 lxc/
drwxr-xr-x - root root 2021-08-09 19:31 lxcfs/
└── <Operation not supported (os error 95)>
drwx--x--x - root root 2021-08-09 19:28 lxd/
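For reference (context, not from the listing itself): os error 95 is EOPNOTSUPP, the generic "this filesystem doesn't support that operation" errno:

```shell
# os error 95 on Linux is EOPNOTSUPP ("Operation not supported"), what you get
# when the filesystem under a path rejects an operation — here, plain tmpfs
# sitting where a fuse.lxcfs mount was expected.
python3 -c 'import errno, os; print(errno.EOPNOTSUPP, os.strerror(errno.EOPNOTSUPP))'
# prints: 95 Operation not supported
```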

Running the system with root on tmpfs, but /var mounted to its own on-disk ZFS dataset, works fine, though.

I assume that if I want to keep both / and /var on tmpfs, I’ll need to explicitly mount a fuse.lxcfs filesystem on /var/lib/lxcfs, instead of letting the system automatically put it on tmpfs.
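On NixOS (which I’m running), the lxcfs service would normally claim that mountpoint. A sketch of what I mean; the option names here are from memory and should be double-checked against your nixpkgs version:

```nix
# configuration.nix sketch — hypothetical option names, verify before use
virtualisation.lxd.enable = true;
virtualisation.lxc.lxcfs.enable = true;  # runs `lxcfs /var/lib/lxcfs` as a service
```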

@brauner any reason lxcfs would be unhappy on a tmpfs?
I thought we were actually using tmpfs in our tests.

lxcfs seems fine when all of /var is on tmpfs.

But then the lxc/lxd config is lost between reboots. I have to rerun lxd init and then re-import my backed-up containers.

So I’m trying to figure out how to keep /var on tmpfs while putting whatever is necessary in /var/lib/{lxc,lxcfs,lxd} on disk, so as not to lose config between reboots.

But when I configure the system to explicitly put /var/lib/{lxc,lxd} on disk while not specifying anything for /var/lib/lxcfs (letting the system handle it), that’s when the error starts happening.

Update: After much experimentation, including putting /var/lib and /var/log on disk, putting /var/lib/{lxc,lxcfs,lxd} on disk, and several other variants, the only configuration that works consistently is tmpfs on / with all of /var on disk (its own full ZFS dataset, in this case).

With any other configuration there are problems: either containers can be created but fail to start, or all the config is lost between reboots, and so on. But / on tmpfs with all of /var on disk works with no problems.

One more thing I’ve tried: create a systemd tmpfiles rule that symlinks /var/lib and /var/log from tmpfs to a ZFS pool:

In /etc/tmpfiles.d/var.conf:

L /var/lib - - - - /persist/var/lib
L /var/log - - - - /persist/var/log

Reboot, then try to run lxd init and launch a container:

> sudo lxd init                                                           
[sudo] password for bgibson: 
Would you like to use LXD clustering? (yes/no) [default=no]: 
Do you want to configure a new storage pool? (yes/no) [default=yes]: 
Name of the new storage pool [default=default]: lxdpool
Name of the storage backend to use (btrfs, dir, lvm, zfs) [default=zfs]: 
Would you like to create a new zfs dataset under rpool/lxd? (yes/no) [default=yes]: no
Create a new ZFS pool? (yes/no) [default=yes]: no
Name of the existing ZFS pool or dataset: rpool/safe/lxd
Would you like to connect to a MAAS server? (yes/no) [default=no]: 
Would you like to create a new local network bridge? (yes/no) [default=yes]: 
What should the new bridge be called? [default=lxdbr0]: 
What IPv4 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]: 
What IPv6 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]: 
Would you like the LXD server to be available over the network? (yes/no) [default=no]: yes
Address to bind LXD to (not including port) [default=all]: 
Port to bind LXD to [default=8443]: 
Trust password for new clients: 
Would you like stale cached images to be updated automatically? (yes/no) [default=yes] 
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: 

> sudo lxc launch images:alpine/edge ae-test                              
Creating ae-test
Starting ae-test                            
Error: Failed to run: /nix/store/9kp22pvvgn376q6jqhvi8agqwqzbg3a2-lxd-4.14/bin/.lxd-wrapped forkstart ae-test /var/lib/lxd/containers /var/log/lxd/ae-test/lxc.conf: 
Try `lxc info --show-log local:ae-test` for more info

> sudo lxc info --show-log local:ae-test
Name: ae-test
Location: none
Remote: unix://
Architecture: x86_64
Created: 2021/08/12 03:03 UTC
Status: Stopped
Type: container
Profiles: default


lxc ae-test 20210812030302.480 WARN     conf - conf.c:lxc_map_ids:3007 - newuidmap binary is missing
lxc ae-test 20210812030302.483 WARN     conf - conf.c:lxc_map_ids:3013 - newgidmap binary is missing
lxc ae-test 20210812030302.507 WARN     conf - conf.c:lxc_map_ids:3007 - newuidmap binary is missing
lxc ae-test 20210812030302.509 WARN     conf - conf.c:lxc_map_ids:3013 - newgidmap binary is missing
lxc ae-test 20210812030302.514 WARN     cgfsng - cgroups/cgfsng.c:fchowmodat:1293 - No such file or directory - Failed to fchownat(43, memory.oom.group, 65536, 0, AT_EMPTY_PATH | AT_SYMLINK_NOFOLLOW )
lxc ae-test 20210812030302.868 ERROR    conf - conf.c:lxc_setup_rootfs_prepare_root:3437 - Failed to setup rootfs for
lxc ae-test 20210812030302.869 ERROR    conf - conf.c:lxc_setup:3600 - Failed to setup rootfs
lxc ae-test 20210812030302.870 ERROR    start - start.c:do_start:1265 - Failed to setup container "ae-test"
lxc ae-test 20210812030302.870 ERROR    sync - sync.c:sync_wait:36 - An error occurred in another process (expected sequence number 5)
lxc ae-test 20210812030302.940 WARN     network - network.c:lxc_delete_network_priv:3621 - Failed to rename interface with index 0 from "eth0" to its initial name "veth179532bb"
lxc ae-test 20210812030302.942 ERROR    start - start.c:__lxc_start:2073 - Failed to spawn container "ae-test"
lxc ae-test 20210812030302.942 ERROR    lxccontainer - lxccontainer.c:wait_on_daemonized_start:868 - Received container state "ABORTING" instead of "RUNNING"
lxc ae-test 20210812030302.942 WARN     start - start.c:lxc_abort:1016 - No such process - Failed to send SIGKILL via pidfd 44 for process 9138
lxc ae-test 20210812030302.241 WARN     conf - conf.c:lxc_map_ids:3007 - newuidmap binary is missing
lxc ae-test 20210812030302.241 WARN     conf - conf.c:lxc_map_ids:3013 - newgidmap binary is missing
lxc 20210812030302.264 ERROR    af_unix - af_unix.c:lxc_abstract_unix_recv_fds_iov:207 - Connection reset by peer - Failed to receive response
lxc 20210812030302.264 ERROR    commands - commands.c:lxc_cmd_rsp_recv_fds:129 - Failed to receive file descriptors
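For what it’s worth, the same persistence could also be expressed as bind mounts instead of symlinks, since some services dislike /var/lib itself being a symlink. An untested sketch in fstab syntax, reusing the /persist layout above:

```
# /etc/fstab sketch — bind-mount persistent directories over the tmpfs paths
/persist/var/lib  /var/lib  none  bind  0  0
/persist/var/log  /var/log  none  bind  0  0
```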

LXCFS shouldn’t care, unless FUSE somehow gets confused when tmpfs is in play.
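If you want to confirm what actually backs each path before lxcfs starts, something like this (plain coreutils, using the paths from your listing) prints the filesystem type:

```shell
# Show which filesystem backs each lxd-related path; prints "missing" if absent
for p in /var/lib/lxc /var/lib/lxcfs /var/lib/lxd; do
    printf '%s: ' "$p"
    stat -f -c %T "$p" 2>/dev/null || echo missing
done
```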