Can no longer start containers after migrating to snap package

A colleague of mine noticed that the containers on one of our build machines no longer worked, so I investigated.

Oddly enough, I noticed that the lxd .deb package had disappeared. I have no recollection of removing it myself.

Either way, I checked the PPA and saw that I’m now supposed to install the snap package instead. So I ran snap install lxd, and all seemed fine at first.

Until, of course, I noticed there were no containers. So I found one of the threads here in the forum and bind-mounted the old location to the new one, and LXD seemed happy with that (lxc list returned the expected list).
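For reference, the bind mount described here was presumably the equivalent of mount --bind /var/lib/lxd /var/snap/lxd/common/lxd, made persistent with an /etc/fstab entry along these lines (my own sketch, not taken from the thread):

```
/var/lib/lxd  /var/snap/lxd/common/lxd  none  bind  0  0
```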

I couldn’t run lxd.migrate, because it complained about the sockets being unavailable (my guess here was that normally lxd.migrate is run while the .deb package is still installed and two LXD daemons are running):

=> Connecting to source server
error: Unable to connect to the source LXD: Get http://unix.socket/1.0: dial unix /var/lib/lxd/unix.socket: connect: no such file or directory
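As an aside for anyone hitting the same message: the error names the old socket path, so a quick diagnostic is to check which of the two sockets actually exists (paths as in the error above; just a sketch):

```shell
# Check for the deb-based and the snap-based LXD sockets (-S tests for a socket file)
for s in /var/lib/lxd/unix.socket /var/snap/lxd/common/lxd/unix.socket; do
  if [ -S "$s" ]; then
    echo "present: $s"
  else
    echo "missing: $s"
  fi
done
```

If only the snap socket is present, the .deb daemon is gone and lxd.migrate has no source to talk to.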

But bind-mounting seems to do the job, so it’s all fine, I reckoned.

Except it wasn’t fine. For starters, I can no longer start the containers that are bind-mounted into the new location. Autostart doesn’t work either, but that seems only natural given that starting containers fails in general.

$ lxc start sheep01
error: Failed to run: /snap/lxd/current/bin/lxd forkstart sheep01 /var/snap/lxd/common/lxd/containers /var/snap/lxd/common/lxd/logs/sheep01/lxc.conf:
Try `lxc info --show-log sheep01` for more info

So, looking at the output of the suggested lxc info --show-log sheep01, I got:

$ lxc info --show-log sheep01
Name: sheep01
Remote: unix://
Architecture: x86_64
Created: 2017/10/31 14:02 UTC
Status: Stopped
Type: persistent
Profiles: default


lxc sheep01 20180815095750.939 WARN     lxc_conf - conf.c:lxc_map_ids:2862 - newuidmap binary is missing
lxc sheep01 20180815095750.939 WARN     lxc_conf - conf.c:lxc_map_ids:2868 - newgidmap binary is missing
lxc sheep01 20180815095750.988 WARN     lxc_conf - conf.c:lxc_map_ids:2862 - newuidmap binary is missing
lxc sheep01 20180815095750.988 WARN     lxc_conf - conf.c:lxc_map_ids:2868 - newgidmap binary is missing
lxc sheep01 20180815095751.257 ERROR    dir - storage/dir.c:dir_mount:189 - No such file or directory - Failed to mount "/var/snap/lxd/common/lxd/containers/sheep01/rootfs" on "/var/snap/lxd/common/lxc/"
lxc sheep01 20180815095751.258 ERROR    lxc_conf - conf.c:lxc_setup_rootfs:1370 - Failed to mount rootfs "/var/snap/lxd/common/lxd/containers/sheep01/rootfs" onto "/var/snap/lxd/common/lxc/" with options "(null)"
lxc sheep01 20180815095751.258 ERROR    lxc_conf - conf.c:do_rootfs_setup:3318 - Failed to setup rootfs for
lxc sheep01 20180815095751.258 ERROR    lxc_conf - conf.c:lxc_setup:3382 - Failed to setup rootfs
lxc sheep01 20180815095751.258 ERROR    lxc_start - start.c:do_start:1219 - Failed to setup container "sheep01"
lxc sheep01 20180815095751.259 ERROR    lxc_sync - sync.c:__sync_wait:57 - An error occurred in another process (expected sequence number 5)
lxc sheep01 20180815095751.673 ERROR    lxc_start - start.c:__lxc_start:1887 - Failed to spawn container "sheep01"
lxc sheep01 20180815095751.673 ERROR    lxc_container - lxccontainer.c:wait_on_daemonized_start:834 - Received container state "ABORTING" instead of "RUNNING"
lxc sheep01 20180815095751.682 WARN     lxc_conf - conf.c:lxc_map_ids:2862 - newuidmap binary is missing
lxc sheep01 20180815095751.683 WARN     lxc_conf - conf.c:lxc_map_ids:2868 - newgidmap binary is missing
lxc 20180815095751.711 WARN     lxc_commands - commands.c:lxc_cmd_rsp_recv:130 - Connection reset by peer - Failed to receive response for command "get_state"

I have no idea what to make of all of that.

Since lxd.migrate isn’t merely a script, I am unsure what else it’s doing other than shoveling data from /var/lib/lxd into /var/snap/lxd/common/lxd.

What exactly am I doing wrong, other than the fact that the old (.deb-based) package somehow magically disappeared and left the LXD installation in a state that prevents me from using lxd.migrate?

# snap list && lsb_release -a
Name  Version    Rev   Tracking  Publisher  Notes
core  16-2.34.3  5145  stable    canonical  core
lxd   3.3        8011  stable    canonical  -
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 16.04.5 LTS
Release:        16.04
Codename:       xenial

By bind-mounting the lxd folder the permissions on that folder become:

drwxr-xr-x 15 lxd nogroup 4096 2018-08-15 08:41 lxd

lxd.migrate manually updates all paths inside the LXD database which is probably what’s making your manual migration fail here.

What does lxc storage list and lxc storage show (for any of the entries in list) show you?

Hi Stéphane, thanks for the response. And sorry for my delayed response.

$ lxc storage list
+------+-------------+--------+--------+---------+
| NAME | DESCRIPTION | DRIVER | SOURCE | USED BY |
+------+-------------+--------+--------+---------+
| lxd  |             | zfs    | lxd    | 26      |
+------+-------------+--------+--------+---------+

(truncated for brevity)

$ lxc storage show lxd
config:
  source: lxd
  zfs.pool_name: lxd
description: ""
name: lxd
driver: zfs
used_by:
- /1.0/containers/bldbase-2017-12-01
- /1.0/containers/sheep01
- /1.0/containers/sheep24
- /1.0/profiles/default
status: Created
locations:
- none

Looking at this output I am already starting to think that none for the locations might be an issue?! :wink:

Thanks for any insights you can provide.

Oh, btw, I was trying to find the code that corresponds to the lxd.migrate binary, but came up empty-handed. Any pointers there would also help, since they would help me help myself. I promise I will share the gained insights.

has the code for lxd.migrate.

Since your storage is ZFS-based, the database actually doesn’t get touched, but you do need to update the mountpoint property of all your ZFS datasets, switching them from /var/lib/lxd/storage-pools/… to /var/snap/lxd/common/lxd/storage-pools/…


First off: sorry for the delayed response.

Hmm, I was a bit puzzled by what you meant with your advice (I’m no expert in ZFS matters and chose it mainly based on the merits it seemed to offer in conjunction with LXD). So I read a bit of documentation and came up with this:

# zfs get mountpoint lxd
NAME  PROPERTY    VALUE  SOURCE
lxd   mountpoint  none   local

So I decided to attempt to set the property, which failed (my ZFS pool for lxd is named lxd):

# zfs set mountpoint=/var/snap/lxd/common/lxd/storage-pools/lxd lxd
cannot mount '/var/snap/lxd/common/lxd/storage-pools/lxd': directory is not empty
cannot mount '/var/snap/lxd/common/lxd/storage-pools/lxd/containers': directory is not empty
cannot mount '/var/snap/lxd/common/lxd/storage-pools/lxd/images': directory is not empty
property may be set but unable to remount filesystem

Anyway, I decided to attempt to export my existing base image (from which all the others were cloned) and start from scratch.

So I cleaned out all those containers named sheep* and the respective ZFS resources. Then I set the mountpoint property for my base image and attempted to start it. But no luck. It then turned out that there were still some remnants inside /var/snap/lxd/common from before, pointing back at /var/lib/lxd: namely, symlinks.

Sure enough find /var/snap/lxd/common -type l -lname '/var/lib/lxd/*' gave me the list of the stray symlinks.
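To illustrate what that find invocation matches (a throwaway demo, no root needed; -lname tests the symlink’s target, not its own path):

```shell
tmp=$(mktemp -d)
ln -s /var/lib/lxd/containers "$tmp/stray"   # target under /var/lib/lxd -> matches
ln -s "$tmp" "$tmp/harmless"                 # target elsewhere -> does not match
find "$tmp" -type l -lname '/var/lib/lxd/*'  # prints only .../stray
rm -rf "$tmp"
```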

And that turned out to be the actual issue in getting those containers to start.

However, I’ve got a kind of follow-up question here. Which of the ZFS volumes on the pool need to have their mountpoint property set? Am I right to assume that these would be containers, deleted and images, i.e. those that appear as folders underneath /var/snap/lxd/common/lxd/storage-pools/lxd? And if so, am I supposed to set the mountpoints as follows?

zfs set mountpoint=/var/snap/lxd/common/lxd/storage-pools/lxd/containers lxd/containers
zfs set mountpoint=/var/snap/lxd/common/lxd/storage-pools/lxd/deleted lxd/deleted
zfs set mountpoint=/var/snap/lxd/common/lxd/storage-pools/lxd/images lxd/images

… or by issuing:

zfs set mountpoint=/var/snap/lxd/common/lxd/storage-pools/lxd lxd

… the difference being which part of the (not yet mounted) folder structure needs to be empty up front …

I just tried installing the snap package (3.4) on a fresh Ubuntu 18.04 VM in order to see how the ZFS volumes relate to the folders …

# zfs list
NAME            USED  AVAIL  REFER  MOUNTPOINT
lxd              294K  30.8G    24K  none
lxd/containers    24K  30.8G    24K  none
lxd/custom        24K  30.8G    24K  none
lxd/deleted       24K  30.8G    24K  none
lxd/images        24K  30.8G    24K  none
lxd/snapshots     24K  30.8G    24K  none

Seems there are not just containers, deleted and images after all …

I guess my question boils down to this: which of these require a mountpoint property, and which ones don’t? Because evidently, from listing again after launch-ing a vanilla ubuntu:lts, I get:

# zfs list
NAME                                                                          USED  AVAIL  REFER  MOUNTPOINT
lxd                                                                           859M  29.9G    24K  none
lxd/containers                                                                219M  29.9G    24K  none
lxd/containers/bldbase                                                        219M  29.9G   839M  /var/snap/lxd/common/lxd/storage-pools/lxd/containers/bldbase
lxd/custom                                                                     24K  29.9G    24K  none
lxd/deleted                                                                    24K  29.9G    24K  none
lxd/images                                                                    640M  29.9G    24K  none
lxd/images/51d8adf6ba25c3f79963994a06cbfb55aa6eb2ebb2d67817bd0938b799ec0315   640M  29.9G   640M  none
lxd/snapshots                                                                  24K  29.9G    24K  none

Only containers need a mountpoint set; everything else should be left the way it is.
So effectively you run zfs set mountpoint=/var/snap/lxd/common/lxd/storage-pools/POOL/containers/NAME DATASET for each dataset matching the */containers/NAME pattern.
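Putting that together, the string-manipulation part of the job can be sketched like this (mountpoint_for is my own hypothetical helper; it assumes datasets named POOL/containers/NAME, as in the listings above):

```shell
# Derive the target mountpoint for a container dataset such as lxd/containers/bldbase
mountpoint_for() {
  ds=$1
  pool=${ds%%/*}    # everything before the first slash, e.g. "lxd"
  name=${ds##*/}    # everything after the last slash, e.g. "bldbase"
  echo "/var/snap/lxd/common/lxd/storage-pools/${pool}/containers/${name}"
}

mountpoint_for lxd/containers/bldbase
# -> /var/snap/lxd/common/lxd/storage-pools/lxd/containers/bldbase
# Then, for each matching dataset DS:  zfs set mountpoint="$(mountpoint_for DS)" DS
```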

Great, thank you very much!