Managed to lose all my containers, help

This was the output of
zfs list -t all

zpooldata3/lxd                               117G   573G    96K  none
zpooldata3/lxd/containers                    116G   573G    96K  none
zpooldata3/lxd/containers/bionic-selenium    456M   573G   765M  /var/lib/lxd/storage-pools/default/containers/bionic-selenium
zpooldata3/lxd/containers/xenial-b          2.14G   573G  2.14G  /var/lib/lxd/storage-pools/default/containers/xenial-b
zpooldata3/lxd/containers/trusty-android    9.86G   573G  9.86G  /var/lib/lxd/storage-pools/default/containers/trusty-android

How can I point my LXD at the dataset zpooldata3/lxd for all of its storage? It happened because I had to rsync to roll back my root filesystem, and I forgot to back up my /var/lib/lxd…

Any tips…

What LXD version?

lxc --version
2.21

This is the output of df

Filesystem          Size  Used Avail Use% Mounted on
udev                7.8G     0  7.8G   0% /dev
tmpfs               1.6G  9.5M  1.6G   1% /run
/dev/sdc7            54G   20G   32G  38% /
tmpfs               7.9G  8.5M  7.9G   1% /dev/shm
tmpfs               5.0M  8.0K  5.0M   1% /run/lock
tmpfs               7.9G     0  7.9G   0% /sys/fs/cgroup
/dev/sdc8            62G   52M   59G   1% /tmp
/dev/sdc5           708G  401G  272G  60% /localbackup
zpooldata3/data3    1.2T  607G  574G  52% /data3
zpooldata3/home     834G  260G  574G  32% /home
zpooldata3          574G  128K  574G   1% /zpooldata3
cgmfs               100K     0  100K   0% /run/cgmanager/fs
tmpfs               1.6G   28K  1.6G   1% /run/user/1000
tmpfs               100K     0  100K   0% /var/lib/lxd/shmounts
tmpfs               100K     0  100K   0% /var/lib/lxd/devlxd

Ok, make sure your system has a working LXD by running lxc info.

If that works, then try:

  • zfs mount zpooldata3/lxd/containers/bionic-selenium
  • zfs mount zpooldata3/lxd/containers/xenial-b
  • zfs mount zpooldata3/lxd/containers/trusty-android
  • lxd import bionic-selenium
  • lxd import xenial-b
  • lxd import trusty-android
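Since the three containers follow the same pattern, the steps above can also be scripted as a loop. A dry-run sketch that only prints the command pairs (remove the echoes to run them for real; dataset names are taken from the zfs list output above):

```shell
# Dry run: print the mount/import command pair for each container.
for name in bionic-selenium xenial-b trusty-android; do
    echo "zfs mount zpooldata3/lxd/containers/$name"
    echo "lxd import $name"
done
```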

I meant that lxc info works:

  certificate_fingerprint: 323859f65465656566c67be46a6332c10d867c
  driver: lxc
  driver_version: 2.0.8
  kernel: Linux
  kernel_architecture: x86_64
  kernel_version: 4.13.0-43-generic
  server: lxd
  server_pid: 5762
  server_version: "2.21"
  storage: ""

Thanks for the quick reply.
The ZFS mount command was successful, and df shows the proper mount:
zpooldata3/lxd/containers/bionic-selenium 602012672 783232 601229440 1% /var/lib/lxd/storage-pools/default/containers/bionic-selenium

lxd import bionic-selenium
error: No root device could be found.

UPDATE: somehow USED BY is 0

lxc storage list
+---------+-------------+--------+----------------+---------+
|  NAME   | DESCRIPTION | DRIVER |     SOURCE     | USED BY |
+---------+-------------+--------+----------------+---------+
| default |             | zfs    | zpooldata3/lxd | 0       |
+---------+-------------+--------+----------------+---------+

Any other ideas? Thanks!

Ah, yeah, it’s a bug we fixed recently.

Try doing:

  • lxc profile device add default root disk path=/ pool=default

Then run lxd import bionic-selenium again.

If that works, the others should import without trouble.

Note that you’ll still need to manually recreate any networks you had on the system, as those aren’t part of the emergency backup.

If using the default setup, this should do the trick:

  • lxc network create lxdbr0
  • lxc profile device add default eth0 nic nictype=bridged parent=lxdbr0 name=eth0

OK, I spoke too soon. It did not resolve the issue.

lxc list
+-----------------+---------+------+------+------------+-----------+
|      NAME       |  STATE  | IPV4 | IPV6 |    TYPE    | SNAPSHOTS |
+-----------------+---------+------+------+------------+-----------+
| bionic-selenium | STOPPED |      |      | PERSISTENT | 0         |
+-----------------+---------+------+------+------------+-----------+


lxc profile edit default
config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: default
used_by:
- /1.0/containers/bionic-selenium
- /1.0/containers/trusty-android

Name: bionic-selenium
Remote: unix://
Architecture: x86_64
Created: 2018/11/06 09:35 UTC
Status: Stopped
Type: persistent
Profiles: default

lxc start bionic-selenium


Log:

            lxc 20181106093601.543 ERROR    lxc_conf - conf.c:mount_rootfs:798 - Permission denied - Failed to get real path for "/var/lib/lxd/containers/bionic-selenium/rootfs".
            lxc 20181106093601.543 ERROR    lxc_conf - conf.c:setup_rootfs:1220 - Failed to mount rootfs "/var/lib/lxd/containers/bionic-selenium/rootfs" onto "/usr/lib/x86_64-linux-gnu/lxc" with options "(null)".
            lxc 20181106093601.543 ERROR    lxc_conf - conf.c:do_rootfs_setup:3899 - failed to setup rootfs for 'bionic-selenium'
            lxc 20181106093601.543 ERROR    lxc_conf - conf.c:lxc_setup:3981 - Error setting up rootfs mount after spawn
            lxc 20181106093601.543 ERROR    lxc_start - start.c:do_start:811 - Failed to setup container "bionic-selenium".
            lxc 20181106093601.543 ERROR    lxc_sync - sync.c:__sync_wait:57 - An error occurred in another process (expected sequence number 3)
            lxc 20181106093601.586 ERROR    lxc_start - start.c:__lxc_start:1358 - Failed to spawn container "bionic-selenium".
            lxc 20181106093602.185 ERROR    lxc_conf - conf.c:run_buffer:416 - Script exited with status 1.
            lxc 20181106093602.185 ERROR    lxc_start - start.c:lxc_fini:546 - Failed to run lxc.hook.post-stop for container "bionic-selenium".
            lxc 20181106093602.185 WARN     lxc_commands - commands.c:lxc_cmd_rsp_recv:177 - Command get_cgroup failed to receive response: Connection reset by peer.
            lxc 20181106093602.185 WARN     lxc_commands - commands.c:lxc_cmd_rsp_recv:177 - Command get_cgroup failed to receive response: Connection reset by peer.

Even launching a new container shows the same issue.
Any other suggestions?

That suggests you have bad permissions preventing unprivileged containers from working on your system. Please show:

sudo stat -c "%n %a" / /var /var/lib /var/lib/lxd /var/lib/lxd/containers /var/lib/lxd/storage-pools /var/lib/lxd/storage-pools/default /var/lib/lxd/storage-pools/default/containers

Normal output should be something like:

/ 755
/var 755
/var/lib 755
/var/lib/lxd 711
/var/lib/lxd/containers 711
/var/lib/lxd/storage-pools 711
/var/lib/lxd/storage-pools/default 711
/var/lib/lxd/storage-pools/default/containers 711
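If the stat output differs from the above, the modes can be restored with chmod. A sketch, assuming the directories themselves exist and only their modes are wrong; the real commands (in the comment) need root, so this demonstrates on a throwaway tree instead of the live system:

```shell
# The real fix (as root) would be:
#   chmod 755 / /var /var/lib
#   chmod 711 /var/lib/lxd /var/lib/lxd/containers \
#             /var/lib/lxd/storage-pools /var/lib/lxd/storage-pools/default \
#             /var/lib/lxd/storage-pools/default/containers
# Demonstrated here on a temporary directory tree:
tmp=$(mktemp -d)
mkdir -p "$tmp/var/lib/lxd/containers" \
         "$tmp/var/lib/lxd/storage-pools/default/containers"
chmod 755 "$tmp" "$tmp/var" "$tmp/var/lib"
chmod 711 "$tmp/var/lib/lxd" "$tmp/var/lib/lxd/containers" \
          "$tmp/var/lib/lxd/storage-pools" \
          "$tmp/var/lib/lxd/storage-pools/default" \
          "$tmp/var/lib/lxd/storage-pools/default/containers"
stat -c "%n %a" "$tmp/var/lib/lxd"   # should report mode 711
rm -rf "$tmp"
```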

Thanks for the reply.

I was a bit in a hurry; even though the containers would not start, copying worked OK. So I managed to copy them to a remote server with lxc copy and move them back later.

zfs mount zpooldata3/lxd/containers/bionic-selenium
zfs mount zpooldata3/lxd/containers/xenial-b
zfs mount zpooldata3/lxd/containers/trusty-android
lxd import bionic-selenium
lxd import xenial-b
lxd import trusty-android

After this:

  • add a remote
  • copy the containers to the remote
  • purge my local LXD
  • install 3.0.1 from backports
  • copy everything back from the remote
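The round trip through the remote might look roughly like this; a dry-run sketch where "backup" is a placeholder remote name (added beforehand with something like lxc remote add backup <server-address>):

```shell
# Push each container to the remote before purging the local LXD.
for name in bionic-selenium xenial-b trusty-android; do
    echo "lxc copy $name backup:$name"
done
# ...purge and reinstall LXD locally, then pull everything back:
for name in bionic-selenium xenial-b trusty-android; do
    echo "lxc copy backup:$name $name"
done
```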

I finally did this so that next time I’ll save myself some time.

Thanks a lot…