Error: Get http://unix.socket/1.0: dial unix /var/snap/lxd/common/lxd/unix.socket: connect: no such file or directory

Hi

I rebooted my Debian 9 Proxmox server with LXD 3.0.0 yesterday, and now I can't connect to the local socket :(

jon@hub2-proxhub2-prox:~$ systemctl status snap.lxd.daemon.service
● snap.lxd.daemon.service - Service for snap application lxd.daemon
Loaded: loaded (/etc/systemd/system/snap.lxd.daemon.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2018-04-26 09:21:52 CEST; 3s ago
Process: 30883 ExecStop=/usr/bin/snap run --command=stop lxd.daemon (code=exited, status=0/SUCCESS)
Main PID: 32355 (daemon.start)
Tasks: 0 (limit: 4915)
Memory: 1.8M
CPU: 48ms
CGroup: /system.slice/snap.lxd.daemon.service
‣ 32355 /bin/sh /snap/lxd/6452/commands/daemon.start
jon@hub2-proxhub2-prox:~$ lxc list
Error: Get http://unix.socket/1.0: EOF

jon@hub2-proxhub2-prox:~$ lxc list
Error: Get http://unix.socket/1.0: dial unix /var/snap/lxd/common/lxd/unix.socket: connect: no such file or directory

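For context, the lxc client is just doing an HTTP GET against /1.0 over that unix socket. A quick way to reproduce the request by hand is sketched below (the socket path is the snap default quoted in the error message; curl's --unix-socket option needs curl >= 7.40):

```shell
# Reproduce the client's GET /1.0 by hand; the socket path is the snap
# default shown in the error message above.
SOCK=/var/snap/lxd/common/lxd/unix.socket

if [ -S "$SOCK" ]; then
    # HTTP over a unix domain socket, the same request the lxc client makes
    curl --silent --unix-socket "$SOCK" http://unix.socket/1.0
else
    # Matches the "no such file or directory" failure mode above
    echo "socket missing: $SOCK"
fi
```

If the socket file is missing entirely, the daemon never bound it (or removed it on shutdown), which points at a daemon startup problem rather than a client one.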
The ZFS datasets for the containers are still there:

disksdb3/proxmox_lxd_prod/containers 182G 976G 24K none
disksdb3/proxmox_lxd_prod/containers/ansible 1.89G 976G 1.08G none
disksdb3/proxmox_lxd_prod/containers/dns1 1.48G 976G 1.27G none
disksdb3/proxmox_lxd_prod/containers/gitlab 6.72G 976G 2.66G none
disksdb3/proxmox_lxd_prod/containers/jira-support 3.07G 976G 2.13G none
disksdb3/proxmox_lxd_prod/containers/netbox 1.73G 976G 1.26G none
disksdb3/proxmox_lxd_prod/containers/oxidized 16.6G 976G 16.9G /var/snap/lxd/common/lxd/storage-pools/local/containers/oxidized
disksdb3/proxmox_lxd_prod/containers/rundeck 949M 976G 1.46G /var/snap/lxd/common/lxd/storage-pools/local/containers/rundeck
disksdb3/proxmox_lxd_prod/containers/salt 9.29G 976G 7.23G /var/snap/lxd/common/lxd/storage-pools/local/containers/salt
disksdb3/proxmox_lxd_prod/containers/smokeping 910M 976G 776M none
disksdb3/proxmox_lxd_prod/containers/unms-vm 8.36G 976G 7.60G none
disksdb3/proxmox_lxd_prod/containers/wifi-controller-backup 1.38G 976G 1.38G none
disksdb3/proxmox_lxd_prod/containers/xeoma 24K 976G 24K /var/snap/lxd/common/lxd/storage-pools/local/containers/xeoma
disksdb3/proxmox_lxd_prod/containers/xeoma4 130G 976G 130G /var/snap/lxd/common/lxd/storage-pools/local/containers/xeoma4

root@hub2-proxhub2-prox /home/jon # systemctl | grep lxd
sys-devices-virtual-net-lxdbr300.device loaded active plugged /sys/devices/virtual/net/lxdbr300
sys-subsystem-net-devices-lxdbr300.device loaded active plugged /sys/subsystem/net/devices/lxdbr300
disksdb3-proxmox_lxd-ansible.mount loaded active mounted /disksdb3/proxmox_lxd/ansible
disksdb3-proxmox_lxd-dns1.mount loaded active mounted /disksdb3/proxmox_lxd/dns1
disksdb3-proxmox_lxd-gitlab.mount loaded active mounted /disksdb3/proxmox_lxd/gitlab
disksdb3-proxmox_lxd-jira\x2dsupport.mount loaded active mounted /disksdb3/proxmox_lxd/jira-support
disksdb3-proxmox_lxd-netbox.mount loaded active mounted /disksdb3/proxmox_lxd/netbox
disksdb3-proxmox_lxd-smokeping.mount loaded active mounted /disksdb3/proxmox_lxd/smokeping
disksdb3-proxmox_lxd-unms\x2dvm.mount loaded active mounted /disksdb3/proxmox_lxd/unms-vm
disksdb3-proxmox_lxd-wifi\x2dcontroller\x2dbackup.mount loaded active mounted /disksdb3/proxmox_lxd/wifi-controller-backup
disksdb3-proxmox_lxd-xeoma.mount loaded active mounted /disksdb3/proxmox_lxd/xeoma
disksdb3-proxmox_lxd.mount loaded active mounted /disksdb3/proxmox_lxd
run-snapd-ns-lxd.mnt.mount loaded active mounted /run/snapd/ns/lxd.mnt
snap-lxd-6452.mount loaded active mounted Mount unit for lxd
snap.lxd.daemon.service loaded active running Service for snap application lxd.daemon

Any ideas how to sort it?

Cheers,
Jon.

jon@hub2-proxhub2-prox:~$ lxd --version
3.0.0

Try:

systemctl stop snap.lxd.daemon
lxd --debug --group lxd

And report what that shows. If it appears to hang at the end of startup, the daemon is temporarily back online; in that case, run lxc list in another terminal.
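
Before restarting anything, it can also help to capture the service state and recent log lines. A sketch using standard systemd/snap tooling (the unit name and socket path are taken from the output earlier in the thread):

```shell
# Basic diagnostics for the snap-packaged LXD daemon before restarting it.
UNIT=snap.lxd.daemon
SOCK=/var/snap/lxd/common/lxd/unix.socket

if command -v systemctl >/dev/null 2>&1; then
    systemctl status "$UNIT" --no-pager || true      # current service state
    journalctl -u "$UNIT" -n 50 --no-pager || true   # recent daemon log lines
fi

# The client fails with "no such file or directory" while this socket is absent
if [ -S "$SOCK" ]; then
    echo "unix socket present: $SOCK"
else
    echo "unix socket missing: $SOCK"
fi
```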

root@hub2-proxhub2-prox /var/snap/lxd # /snap/bin/lxd --debug --group lxd
DBUG[04-26|17:58:42] Connecting to a local LXD over a Unix socket 
DBUG[04-26|17:58:42] Sending request to LXD                   etag= method=GET url=http://unix.socket/1.0
INFO[04-26|17:58:42] LXD 3.0.0 is starting in normal mode     path=/var/snap/lxd/common/lxd
INFO[04-26|17:58:42] Kernel uid/gid map: 
INFO[04-26|17:58:42]  - u 0 0 4294967295 
INFO[04-26|17:58:42]  - g 0 0 4294967295 
INFO[04-26|17:58:42] Configured LXD uid/gid map: 
INFO[04-26|17:58:42]  - u 0 1000000 1000000000 
INFO[04-26|17:58:42]  - g 0 1000000 1000000000 
INFO[04-26|17:58:42] Initializing database gateway
INFO[04-26|17:58:42] Start database node address=10.55.0.1:8443 id=1
INFO[04-26|17:58:42] Raft: Restored from snapshot 105-460100-1524764626147
INFO[04-26|17:58:42] Raft: Initial configuration (index=1): [{Suffrage:Voter ID:1 Address:0}]
INFO[04-26|17:58:42] Raft: Node at 10.55.0.1:8443 [Follower] entering Follower state (Leader: "")
INFO[04-26|17:58:42] LXD isn't socket activated
DBUG[04-26|17:58:42] Connecting to a local LXD over a Unix socket
DBUG[04-26|17:58:42] Sending request to LXD etag= method=GET url=http://unix.socket/1.0
DBUG[04-26|17:58:42] Detected stale unix socket, deleting
DBUG[04-26|17:58:42] Detected stale unix socket, deleting
INFO[04-26|17:58:42] Starting /dev/lxd handler:
INFO[04-26|17:58:42] - binding devlxd socket socket=/var/snap/lxd/common/lxd/devlxd/sock
INFO[04-26|17:58:42] REST API daemon:
INFO[04-26|17:58:42] - binding Unix socket socket=/var/snap/lxd/common/lxd/unix.socket
INFO[04-26|17:58:42] - binding TCP socket socket=10.55.0.1:8443
DBUG[04-26|17:58:42] Found cert k=0
DBUG[04-26|17:58:42] Failed to establish gRPC connection with 10.55.0.1:8443: 503 Service Unavailable
[... the "Found cert" / "503 Service Unavailable" pair repeats several times per second until 17:58:48 ...]
WARN[04-26|17:58:48] Raft: Heartbeat timeout from "" reached, starting election
INFO[04-26|17:58:48] Raft: Node at 10.55.0.1:8443 [Candidate] entering Candidate state in term 106
DBUG[04-26|17:58:48] Found cert k=0
DBUG[04-26|17:58:48] Failed to establish gRPC connection with 10.55.0.1:8443: 503 Service Unavailable
DBUG[04-26|17:58:48] Raft: Votes needed: 1
DBUG[04-26|17:58:48] Raft: Vote granted from 1 in term 106. Tally: 1
INFO[04-26|17:58:48] Raft: Election won. Tally: 1
INFO[04-26|17:58:48] Raft: Node at 10.55.0.1:8443 [Leader] entering Leader state
DBUG[04-26|17:58:48] Found cert k=0
DBUG[04-26|17:58:48] Found cert k=0
DBUG[04-26|17:58:48] Found cert k=0
DBUG[04-26|17:58:48] Found cert k=0
DBUG[04-26|17:58:54] Initializing and checking storage pool "local".
DBUG[04-26|17:58:54] Initializing a ZFS driver.
DBUG[04-26|17:58:54] Checking ZFS storage pool "local".
DBUG[04-26|17:58:54] Initializing a ZFS driver.
DBUG[04-26|17:58:54] Connecting to a remote simplestreams server
DBUG[04-26|17:58:54] Initialized inotify with file descriptor 59
INFO[04-26|17:58:54] Pruning expired images
INFO[04-26|17:58:54] Done pruning expired images
INFO[04-26|17:58:54] Expiring log files
DBUG[04-26|17:58:54] Starting heartbeat round
INFO[04-26|17:58:54] Updating instance types
DBUG[04-26|17:58:54] Heartbeat updating local raft nodes to [{ID:1 Address:10.55.0.1:8443}]
INFO[04-26|17:58:54] Updating images
INFO[04-26|17:58:54] Done expiring log files
DBUG[04-26|17:58:54] Processing image alias=17.10 fp=f7febb8cbebc6aa8a993eb1ce534963a6b288fde23b9594bb3ba4560704dd65c protocol=simplestreams server=https://cloud-images.ubuntu.com/releases
DBUG[04-26|17:58:54] Connecting to a remote simplestreams server
DBUG[04-26|17:58:54] Successful heartbeat for 10.55.0.1:8443
DBUG[04-26|17:58:55] Completed heartbeat round
DBUG[04-26|17:58:55] Initializing a ZFS driver.
WARN[04-26|17:58:55] Unable to update backup.yaml at this time. name=ansible
DBUG[04-26|17:58:55] Mounting ZFS storage volume for container "ansible" on storage pool "local".
DBUG[04-26|17:58:55] Mounted ZFS storage volume for container "ansible" on storage pool "local".
DBUG[04-26|17:58:55] Unmounting ZFS storage volume for container "ansible" on storage pool "local".
DBUG[04-26|17:58:55] Unmounted ZFS storage volume for container "ansible" on storage pool "local".
DBUG[04-26|17:58:55] Mounting ZFS storage volume for container "ansible" on storage pool "local".
DBUG[04-26|17:58:55] Mounted ZFS storage volume for container "ansible" on storage pool "local".
INFO[04-26|17:58:55] Starting container action=start created=2018-04-03T19:10:39+0000 ephemeral=false name=ansible stateful=false used=2018-04-26T07:56:31+0000
DBUG[04-26|17:58:55] handling ip=@ method=GET url=/1.0
DBUG[04-26|17:58:55] handling ip=@ method=GET url=/internal/containers/3/onstart
DBUG[04-26|17:58:55] Initializing a ZFS driver.
DBUG[04-26|17:58:55] Mounting ZFS storage volume for container "ansible" on storage pool "local".
DBUG[04-26|17:58:55] Mounted ZFS storage volume for container "ansible" on storage pool "local".
DBUG[04-26|17:58:55] Scheduler: container ansible started: re-balancing
INFO[04-26|17:58:55] Started container action=start created=2018-04-03T19:10:39+0000 ephemeral=false name=ansible stateful=false used=2018-04-26T07:56:31+0000
DBUG[04-26|17:58:55] Image already exists in the db image=f7febb8cbebc6aa8a993eb1ce534963a6b288fde23b9594bb3ba4560704dd65c
DBUG[04-26|17:58:55] Image already exists on storage pool "local".
DBUG[04-26|17:58:55] Already up to date fp=f7febb8cbebc6aa8a993eb1ce534963a6b288fde23b9594bb3ba4560704dd65c
INFO[04-26|17:58:55] Done updating images
DBUG[04-26|17:58:55] Initializing a ZFS driver.
WARN[04-26|17:58:55] Unable to update backup.yaml at this time. name=dns1
DBUG[04-26|17:58:55] Mounting ZFS storage volume for container "dns1" on storage pool "local".
DBUG[04-26|17:58:55] Mounted ZFS storage volume for container "dns1" on storage pool "local".
DBUG[04-26|17:58:55] Unmounting ZFS storage volume for container "dns1" on storage pool "local".
DBUG[04-26|17:58:55] Scheduler: network: vethCWAKEH has been added: updating network priorities
DBUG[04-26|17:58:55] Scheduler: network: veth2C5BBD has been added: updating network priorities
DBUG[04-26|17:58:55] Unmounted ZFS storage volume for container "dns1" on storage pool "local".
DBUG[04-26|17:58:56] Mounting ZFS storage volume for container "dns1" on storage pool "local".
DBUG[04-26|17:58:56] Mounted ZFS storage volume for container "dns1" on storage pool "local".
INFO[04-26|17:58:56] Starting container action=start created=2018-04-03T19:10:43+0000 ephemeral=false name=dns1 stateful=false used=2018-04-26T07:56:32+0000
DBUG[04-26|17:58:56] handling ip=@ method=GET url=/1.0
DBUG[04-26|17:58:56] handling ip=@ method=GET url=/internal/containers/4/onstart
DBUG[04-26|17:58:56] Initializing a ZFS driver.
DBUG[04-26|17:58:56] Mounting ZFS storage volume for container "dns1" on storage pool "local".
DBUG[04-26|17:58:56] Mounted ZFS storage volume for container "dns1" on storage pool "local".
DBUG[04-26|17:58:56] Scheduler: container dns1 started: re-balancing
INFO[04-26|17:58:56] Started container action=start created=2018-04-03T19:10:43+0000 ephemeral=false name=dns1 stateful=false used=2018-04-26T07:56:32+0000
DBUG[04-26|17:58:56] Initializing a ZFS driver.
INFO[04-26|17:58:56] Done updating instance types
WARN[04-26|17:58:56] Unable to update backup.yaml at this time. name=gitlab
DBUG[04-26|17:58:56] Mounting ZFS storage volume for container "gitlab" on storage pool "local".
DBUG[04-26|17:58:56] Mounted ZFS storage volume for container "gitlab" on storage pool "local".
DBUG[04-26|17:58:56] Unmounting ZFS storage volume for container "gitlab" on storage pool "local".
DBUG[04-26|17:58:56] Unmounted ZFS storage volume for container "gitlab" on storage pool "local".
DBUG[04-26|17:58:56] Mounting ZFS storage volume for container "gitlab" on storage pool "local".
DBUG[04-26|17:58:56] Mounted ZFS storage volume for container "gitlab" on storage pool "local".
INFO[04-26|17:58:56] Starting container action=start created=2018-04-03T19:10:47+0000 ephemeral=false name=gitlab stateful=false used=2018-04-26T07:56:36+0000
DBUG[04-26|17:58:56] handling ip=@ method=GET url=/1.0
DBUG[04-26|17:58:56] handling ip=@ method=GET url=/internal/containers/5/onstart
DBUG[04-26|17:58:56] Initializing a ZFS driver.
DBUG[04-26|17:58:56] Mounting ZFS storage volume for container "gitlab" on storage pool "local".
DBUG[04-26|17:58:56] Mounted ZFS storage volume for container "gitlab" on storage pool "local".
DBUG[04-26|17:58:56] Scheduler: container gitlab started: re-balancing
INFO[04-26|17:58:56] Started container action=start created=2018-04-03T19:10:47+0000 ephemeral=false name=gitlab stateful=false used=2018-04-26T07:56:36+0000
DBUG[04-26|17:58:56] Initializing a ZFS driver.
WARN[04-26|17:58:57] Unable to update backup.yaml at this time. name=jira-support
DBUG[04-26|17:58:57] Mounting ZFS storage volume for container "jira-support" on storage pool "local".
DBUG[04-26|17:58:57] Mounted ZFS storage volume for container "jira-support" on storage pool "local".
DBUG[04-26|17:58:57] Unmounting ZFS storage volume for container "jira-support" on storage pool "local".
DBUG[04-26|17:58:57] Unmounted ZFS storage volume for container "jira-support" on storage pool "local".
DBUG[04-26|17:58:57] Mounting ZFS storage volume for container "jira-support" on storage pool "local".
DBUG[04-26|17:58:57] Mounted ZFS storage volume for container "jira-support" on storage pool "local".
INFO[04-26|17:58:57] Starting container action=start created=2018-04-03T15:24:58+0000 ephemeral=false name=jira-support stateful=false used=2018-04-26T07:56:40+0000
DBUG[04-26|17:58:57] handling ip=@ method=GET url=/1.0
DBUG[04-26|17:58:57] handling ip=@ method=GET url=/internal/containers/1/onstart
DBUG[04-26|17:58:57] Initializing a ZFS driver.
DBUG[04-26|17:58:57] Mounting ZFS storage volume for container "jira-support" on storage pool "local".
DBUG[04-26|17:58:57] Mounted ZFS storage volume for container "jira-support" on storage pool "local".
DBUG[04-26|17:58:57] Scheduler: container jira-support started: re-balancing
INFO[04-26|17:58:57] Started container action=start created=2018-04-03T15:24:58+0000 ephemeral=false name=jira-support stateful=false used=2018-04-26T07:56:40+0000
DBUG[04-26|17:58:57] Initializing a ZFS driver.
WARN[04-26|17:58:57] Unable to update backup.yaml at this time. name=oxidized
DBUG[04-26|17:58:57] Mounting ZFS storage volume for container "oxidized" on storage pool "local".
DBUG[04-26|17:58:57] Mounted ZFS storage volume for container "oxidized" on storage pool "local".
DBUG[04-26|17:58:57] Unmounting ZFS storage volume for container "oxidized" on storage pool "local".
DBUG[04-26|17:58:58] Unmounted ZFS storage volume for container "oxidized" on storage pool "local".
DBUG[04-26|17:58:58] Mounting ZFS storage volume for container "oxidized" on storage pool "local".
DBUG[04-26|17:58:58] Mounted ZFS storage volume for container "oxidized" on storage pool "local".
INFO[04-26|17:58:58] Starting container action=start created=2018-04-08T17:50:54+0000 ephemeral=false name=oxidized stateful=false used=2018-04-26T07:56:43+0000
DBUG[04-26|17:58:58] handling ip=@ method=GET url=/1.0
DBUG[04-26|17:58:58] handling ip=@ method=GET url=/internal/containers/25/onstart
DBUG[04-26|17:58:58] Initializing a ZFS driver.
DBUG[04-26|17:58:58] Mounting ZFS storage volume for container "oxidized" on storage pool "local".
DBUG[04-26|17:58:58] Mounted ZFS storage volume for container "oxidized" on storage pool "local".
DBUG[04-26|17:58:58] Scheduler: container oxidized started: re-balancing
INFO[04-26|17:58:58] Started container action=start created=2018-04-08T17:50:54+0000 ephemeral=false name=oxidized stateful=false used=2018-04-26T07:56:43+0000
DBUG[04-26|17:58:58] Initializing a ZFS driver.
WARN[04-26|17:58:58] Unable to update backup.yaml at this time. name=rundeck
DBUG[04-26|17:58:58] Mounting ZFS storage volume for container "rundeck" on storage pool "local".
DBUG[04-26|17:58:58] Mounted ZFS storage volume for container "rundeck" on storage pool "local".
DBUG[04-26|17:58:58] Unmounting ZFS storage volume for container "rundeck" on storage pool "local".
DBUG[04-26|17:58:58] Unmounted ZFS storage volume for container "rundeck" on storage pool "local".
DBUG[04-26|17:58:58] Mounting ZFS storage volume for container "rundeck" on storage pool "local".
DBUG[04-26|17:58:58] Mounted ZFS storage volume for container "rundeck" on storage pool "local".
INFO[04-26|17:58:58] Starting container action=start created=2018-04-19T23:10:20+0000 ephemeral=false name=rundeck stateful=false used=2018-04-26T07:56:45+0000
DBUG[04-26|17:58:58] handling ip=@ method=GET url=/1.0
DBUG[04-26|17:58:58] handling ip=@ method=GET url=/internal/containers/41/onstart
DBUG[04-26|17:58:58] Initializing a ZFS driver.
DBUG[04-26|17:58:58] Mounting ZFS storage volume for container "rundeck" on storage pool "local".
DBUG[04-26|17:58:58] Mounted ZFS storage volume for container "rundeck" on storage pool "local".
DBUG[04-26|17:58:58] Scheduler: container rundeck started: re-balancing
DBUG[04-26|17:58:59] Starting heartbeat round
DBUG[04-26|17:58:59] Heartbeat updating local raft nodes to [{ID:1 Address:10.55.0.1:8443}]
DBUG[04-26|17:58:59] Successful heartbeat for 10.55.0.1:8443
INFO[04-26|17:58:59] Started container action=start created=2018-04-19T23:10:20+0000 ephemeral=false name=rundeck stateful=false used=2018-04-26T07:56:45+0000
DBUG[04-26|17:58:59] Completed heartbeat round
DBUG[04-26|17:58:59] Initializing a ZFS driver.
WARN[04-26|17:58:59] Unable to update backup.yaml at this time. name=salt
DBUG[04-26|17:58:59] Mounting ZFS storage volume for container "salt" on storage pool "local".
DBUG[04-26|17:58:59] Mounted ZFS storage volume for container "salt" on storage pool "local".
DBUG[04-26|17:58:59] Unmounting ZFS storage volume for container "salt" on storage pool "local".
DBUG[04-26|17:58:59] Unmounted ZFS storage volume for container "salt" on storage pool "local".
DBUG[04-26|17:58:59] Mounting ZFS storage volume for container "salt" on storage pool "local".
DBUG[04-26|17:58:59] Mounted ZFS storage volume for container "salt" on storage pool "local".
INFO[04-26|17:58:59] Starting container action=start created=2018-04-09T23:08:12+0000 ephemeral=false name=salt stateful=false used=2018-04-26T07:56:49+0000
DBUG[04-26|17:58:59] handling ip=@ method=GET url=/1.0
DBUG[04-26|17:58:59] handling ip=@ method=GET url=/internal/containers/33/onstart
DBUG[04-26|17:58:59] Initializing a ZFS driver.
DBUG[04-26|17:58:59] Mounting ZFS storage volume for container "salt" on storage pool "local".
DBUG[04-26|17:58:59] Mounted ZFS storage volume for container "salt" on storage pool "local".
DBUG[04-26|17:58:59] Scheduler: container salt started: re-balancing
INFO[04-26|17:59:00] Started container action=start created=2018-04-09T23:08:12+0000 ephemeral=false name=salt stateful=false used=2018-04-26T07:56:49+0000
DBUG[04-26|17:59:03] Starting heartbeat round
DBUG[04-26|17:59:03] Heartbeat updating local raft nodes to [{ID:1 Address:10.55.0.1:8443}]
DBUG[04-26|17:59:03] Successful heartbeat for 10.55.0.1:8443
DBUG[04-26|17:59:03] Completed heartbeat round
[... successful heartbeat rounds repeat every ~4 s through 17:59:36 ...]

jon@hub2-proxhub2-prox:~$ lxc list
+------------------------+---------+---------------------+-----------------------------------------------+------------+-----------+----------+
|          NAME          |  STATE  |        IPV4         |                     IPV6                      |    TYPE    | SNAPSHOTS | LOCATION |
+------------------------+---------+---------------------+-----------------------------------------------+------------+-----------+----------+
| ansible                | RUNNING | 10.55.99.154 (eth0) | fd42:7c6b:bdb2:6333:216:3eff:fe73:ce75 (eth0) | PERSISTENT | 0         | hetzner  |
+------------------------+---------+---------------------+-----------------------------------------------+------------+-----------+----------+
| dns1                   | RUNNING | 10.55.99.2 (eth0)   | fd42:7c6b:bdb2:6333:216:3eff:fec0:23b7 (eth0) | PERSISTENT | 0         | hetzner  |
+------------------------+---------+---------------------+-----------------------------------------------+------------+-----------+----------+
| gitlab                 | RUNNING | 10.55.99.224 (eth0) | fd42:7c6b:bdb2:6333:216:3eff:feb7:31a (eth0)  | PERSISTENT | 0         | hetzner  |
+------------------------+---------+---------------------+-----------------------------------------------+------------+-----------+----------+
| jira-support           | RUNNING | 10.55.99.138 (eth0) | fd42:7c6b:bdb2:6333:216:3eff:fe06:ae9f (eth0) | PERSISTENT | 0         | hetzner  |
+------------------------+---------+---------------------+-----------------------------------------------+------------+-----------+----------+
| netbox                 | STOPPED |                     |                                               | PERSISTENT | 0         | hetzner  |
+------------------------+---------+---------------------+-----------------------------------------------+------------+-----------+----------+
| oxidized               | RUNNING | 10.55.99.202 (eth0) | fd42:7c6b:bdb2:6333:216:3eff:fea1:ec22 (eth0) | PERSISTENT | 0         | hetzner  |
+------------------------+---------+---------------------+-----------------------------------------------+------------+-----------+----------+
| rundeck                | RUNNING | 10.55.99.209 (eth0) | fd42:7c6b:bdb2:6333:216:3eff:fe4f:1684 (eth0) | PERSISTENT | 0         | hetzner  |
+------------------------+---------+---------------------+-----------------------------------------------+------------+-----------+----------+
| salt                   | RUNNING | 10.55.99.19 (eth0)  | fd42:7c6b:bdb2:6333:216:3eff:fe62:cf45 (eth0) | PERSISTENT | 3         | hetzner  |
+------------------------+---------+---------------------+-----------------------------------------------+------------+-----------+----------+
| smokeping              | STOPPED |                     |                                               | PERSISTENT | 0         | hetzner  |
+------------------------+---------+---------------------+-----------------------------------------------+------------+-----------+----------+
| unms-vm                | STOPPED |                     |                                               | PERSISTENT | 0         | hetzner  |
+------------------------+---------+---------------------+-----------------------------------------------+------------+-----------+----------+
| wifi-controller-backup | STOPPED |                     |                                               | PERSISTENT | 0         | hetzner  |
+------------------------+---------+---------------------+-----------------------------------------------+------------+-----------+----------+
| xeoma                  | STOPPED |                     |                                               | PERSISTENT | 0         | hetzner  |
+------------------------+---------+---------------------+-----------------------------------------------+------------+-----------+----------+
| xeoma4                 | STOPPED |                     |                                               | PERSISTENT | 0         | hetzner  |
+------------------------+---------+---------------------+-----------------------------------------------+------------+-----------+----------+
jon@hub2-proxhub2-prox:~$

Ok, so that looks to be back online.
Can you show the output of lxc cluster list? It looks like this is an LXD cluster?

jon@hub2-proxhub2-prox:~$ lxc cluster list
+---------+------------------------+----------+--------+-------------------+
|  NAME   |          URL           | DATABASE | STATE  |      MESSAGE      |
+---------+------------------------+----------+--------+-------------------+
| hetzner | https://10.55.0.1:8443 | YES      | ONLINE | fully operational |
+---------+------------------------+----------+--------+-------------------+

It's a cluster master; no other members yet though.

Ok, so that part explains the extra chatter.
Can you send the output of ifconfig -a? I wonder if the cluster member's IP is somehow the problem (e.g. it's on an interface that only comes up after LXD is up).
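
One way to test that theory after the next reboot, before LXD comes up, is to check whether the cluster address is already assigned. A sketch using iproute2 (the address 10.55.0.1:8443 comes from the debug log above):

```shell
# LXD cannot bind its cluster address until some interface carries it.
ADDR=10.55.0.1

if ip -o -4 addr show 2>/dev/null | grep -q "inet $ADDR/"; then
    echo "address $ADDR is assigned"
else
    echo "address $ADDR not assigned yet -- LXD cannot bind $ADDR:8443"
fi
```

If the address only appears once a bridge or VPN interface comes up, the daemon will fail to bind its cluster socket on early boot, which would match the symptoms here.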

enp0s31f6: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::921b:eff:fe95:9de8  prefixlen 64  scopeid 0x20<link>
        ether 90:1b:0e:95:9d:e8  txqueuelen 1000  (Ethernet)
        RX packets 1170931  bytes 359975827 (343.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1968669  bytes 2033071713 (1.8 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device interrupt 16  memory 0xf7000000-f7020000  

gre0: flags=128<NOARP>  mtu 1476
        unspec 00-00-00-00-30-30-30-3A-00-00-00-00-00-00-00-00  txqueuelen 1000  (UNSPEC)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

gretap0: flags=4098<BROADCAST,MULTICAST>  mtu 1462
        ether 00:00:00:00:00:00  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 5155133  bytes 676539857 (645.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 5155133  bytes 676539857 (645.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lxdbr300: flags=4098<BROADCAST,MULTICAST>  mtu 1500
        ether c6:fc:cc:7f:a5:42  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 25  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ovs-system: flags=4098<BROADCAST,MULTICAST>  mtu 1500
        ether 32:e1:9f:d1:05:b3  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

tap193i0: flags=4419<UP,BROADCAST,RUNNING,PROMISC,MULTICAST>  mtu 1500
        ether 46:89:fa:9f:ff:87  txqueuelen 1000  (Ethernet)
        RX packets 192591  bytes 21998983 (20.9 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 273159  bytes 146666910 (139.8 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

tap193i1: flags=4419<UP,BROADCAST,RUNNING,PROMISC,MULTICAST>  mtu 1500
        ether 6e:1f:93:b9:3b:24  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1  bytes 130 (130.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

tap194i0: flags=4419<UP,BROADCAST,RUNNING,PROMISC,MULTICAST>  mtu 1500
        ether ea:79:f8:c3:7d:5d  txqueuelen 1000  (Ethernet)
        RX packets 9619  bytes 5435057 (5.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 85901  bytes 9276521 (8.8 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

tap201i0: flags=4419<UP,BROADCAST,RUNNING,PROMISC,MULTICAST>  mtu 1500
        ether 0a:b7:67:e7:0e:3b  txqueuelen 1000  (Ethernet)
        RX packets 1320  bytes 179377 (175.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 85365  bytes 4298774 (4.0 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth196i0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether fe:f5:55:75:4c:ec  txqueuelen 1000  (Ethernet)
        RX packets 93477  bytes 11376548 (10.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 155725  bytes 247719370 (236.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth197i0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether fe:e1:73:25:8c:6c  txqueuelen 1000  (Ethernet)
        RX packets 21158  bytes 10572302 (10.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 159211  bytes 17761360 (16.9 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth198i0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether fe:fb:6c:a6:84:f1  txqueuelen 1000  (Ethernet)
        RX packets 6365  bytes 512228 (500.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 158724  bytes 7939401 (7.5 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth199i0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether fe:30:a8:79:52:6d  txqueuelen 1000  (Ethernet)
        RX packets 4377  bytes 406235 (396.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 141813  bytes 7625667 (7.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth199i1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether fe:72:0a:0b:95:c6  txqueuelen 1000  (Ethernet)
        RX packets 41  bytes 3094 (3.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 191123  bytes 8557296 (8.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth2BR8PH: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::fcc4:cfff:feb0:a847  prefixlen 64  scopeid 0x20<link>
        ether fe:c4:cf:b0:a8:47  txqueuelen 1000  (Ethernet)
        RX packets 5945  bytes 444452 (434.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 12786  bytes 3335763 (3.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth2C5BBD: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::fcaa:17ff:fee8:5dcc  prefixlen 64  scopeid 0x20<link>
        ether fe:aa:17:e8:5d:cc  txqueuelen 1000  (Ethernet)
        RX packets 128  bytes 15202 (14.8 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2122  bytes 374597 (365.8 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth88X7NB: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::fc7c:deff:fe4a:f543  prefixlen 64  scopeid 0x20<link>
        ether fe:7c:de:4a:f5:43  txqueuelen 1000  (Ethernet)
        RX packets 109  bytes 9610 (9.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2174  bytes 384652 (375.6 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vethMXLAXN: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::fcf9:76ff:feef:5fcb  prefixlen 64  scopeid 0x20<link>
        ether fe:f9:76:ef:5f:cb  txqueuelen 1000  (Ethernet)
        RX packets 18263  bytes 1374627 (1.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 18882  bytes 2106303 (2.0 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vethOVG744: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::fc12:e6ff:fe27:53fc  prefixlen 64  scopeid 0x20<link>
        ether fe:12:e6:27:53:fc  txqueuelen 1000  (Ethernet)
        RX packets 55  bytes 5224 (5.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2050  bytes 112297 (109.6 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vethS50I26: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::fc94:95ff:feb5:e1d7  prefixlen 64  scopeid 0x20<link>
        ether fe:94:95:b5:e1:d7  txqueuelen 1000  (Ethernet)
        RX packets 122  bytes 11125 (10.8 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2112  bytes 378197 (369.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vethV8WKT7: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::fc06:56ff:fe3b:3c10  prefixlen 64  scopeid 0x20<link>
        ether fe:06:56:3b:3c:10  txqueuelen 1000  (Ethernet)
        RX packets 61  bytes 3454 (3.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2113  bytes 117454 (114.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vlan_10: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.55.10.1  netmask 255.255.255.0  broadcast 10.55.10.255
        inet6 fe80::7c5e:9fff:fe10:76e5  prefixlen 64  scopeid 0x20<link>
        ether 7e:5e:9f:10:76:e5  txqueuelen 1000  (Ethernet)
        RX packets 200276  bytes 19886724 (18.9 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 206397  bytes 143004046 (136.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vlan_11: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.55.11.1  netmask 255.255.255.0  broadcast 10.55.11.255
        inet6 fe80::586f:7fff:fe66:caca  prefixlen 64  scopeid 0x20<link>
        ether 5a:6f:7f:66:ca:ca  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 77589  bytes 3266892 (3.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vlan_88: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.55.88.1  netmask 255.255.255.0  broadcast 10.55.88.255
        inet6 fe80::84d0:4dff:fec1:5992  prefixlen 64  scopeid 0x20<link>
        ether 86:d0:4d:c1:59:92  txqueuelen 1000  (Ethernet)
        RX packets 119012  bytes 20688917 (19.7 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 174997  bytes 257848532 (245.9 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vlan_89: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.55.89.1  netmask 255.255.255.0  broadcast 10.55.89.255
        inet6 fe80::64ef:24ff:fe64:159c  prefixlen 64  scopeid 0x20<link>
        ether 66:ef:24:64:15:9c  txqueuelen 1000  (Ethernet)
        RX packets 41  bytes 2520 (2.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 191294  bytes 8567826 (8.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vlan_99: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.55.99.1  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fd42:7c6b:bdb2:6333::1  prefixlen 64  scopeid 0x0<global>
        inet6 fe80::a04a:eeff:fe3d:3845  prefixlen 64  scopeid 0x20<link>
        ether fe:aa:17:e8:5d:cc  txqueuelen 1000  (Ethernet)
        RX packets 24673  bytes 1517542 (1.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 30013  bytes 6147302 (5.8 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vlan_100: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::88eb:dfff:fefe:97a8  prefixlen 64  scopeid 0x20<link>
        ether 8a:eb:df:fe:97:a8  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 26  bytes 5054 (4.9 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vlan_154: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.55.154.1  netmask 255.255.255.248  broadcast 10.55.154.7
        inet6 fe80::8827:a4ff:fe66:6749  prefixlen 64  scopeid 0x20<link>
        ether 8a:27:a4:66:67:49  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 88023  bytes 3705140 (3.5 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vlan_190: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.55.190.1  netmask 255.255.255.0  broadcast 10.55.190.255
        inet6 fe80::70b3:bcff:fec3:2099  prefixlen 64  scopeid 0x20<link>
        ether 72:b3:bc:c3:20:99  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 78126  bytes 3289466 (3.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vlan_191: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.55.191.1  netmask 255.255.255.0  broadcast 10.55.191.255
        inet6 fe80::ac8d:d4ff:fef5:5f54  prefixlen 64  scopeid 0x20<link>
        ether ae:8d:d4:f5:5f:54  txqueuelen 1000  (Ethernet)
        RX packets 9619  bytes 5300391 (5.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 85997  bytes 9283139 (8.8 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vlan_254: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.55.254.1  netmask 255.255.255.0  broadcast 10.55.254.255
        inet6 fe80::9cbb:56ff:fe23:baec  prefixlen 64  scopeid 0x20<link>
        ether 9e:bb:56:23:ba:ec  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 87185  bytes 4247953 (4.0 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vlan_300: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::7001:6cff:fe82:741  prefixlen 64  scopeid 0x20<link>
        ether 72:01:6c:82:07:41  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 26  bytes 5054 (4.9 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vmbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::c4fc:ccff:fe7f:a542  prefixlen 64  scopeid 0x20<link>
        ether c6:fc:cc:7f:a5:42  txqueuelen 1000  (Ethernet)
        RX packets 898131  bytes 29320594 (27.9 MiB)
        RX errors 0  dropped 160  overruns 0  frame 0
        TX packets 26  bytes 5054 (4.9 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vmbr1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 138.201.yy.xx  netmask 255.255.255.192  broadcast 138.201.xx.xx
        inet6 fe80::921b:eff:fe95:9de8  prefixlen 64  scopeid 0x20<link>
        ether 90:1b:0e:95:9d:e8  txqueuelen 1000  (Ethernet)
        RX packets 1082288  bytes 335156797 (319.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 990644  bytes 1960490009 (1.8 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

zt0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 2800
        inet 10.55.0.1  netmask 255.255.255.0  broadcast 10.55.0.255
        inet6 fd17:d709:436c:8908:1d99:9331:fdd7:2e08  prefixlen 88  scopeid 0x0<global>
        inet6 fc7b:5e01:5e31:fdd7:2e08::1  prefixlen 40  scopeid 0x0<global>
        inet6 fe80::1c39:74ff:febb:6d01  prefixlen 64  scopeid 0x20<link>
        ether 1e:39:74:bb:6d:01  txqueuelen 1000  (Ethernet)
        RX packets 179420  bytes 35613687 (33.9 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 173587  bytes 27260053 (25.9 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Is that zt0 device a zerotier interface? If so, the problem might have been that zerotier wasn’t running and operational by the time LXD got started.

That's a good point, actually: 10.55.0.1 (the cluster's bound address) is a ZeroTier interface, basically like a TUN endpoint for a full-mesh DMVPN to other remote servers. It may have come up too late?

I was using that address because it's easily reachable by other servers that may need to connect over the WAN.

Yeah it is! :slight_smile:

So should I bind it to some other interface instead, maybe an OVS bridge or even a loopback?

OK, one thing which may help would be to set up a service override for the LXD daemon.

Run systemctl edit snap.lxd.daemon, then add:

[Unit]
After=zerotier-one.service

That may be enough to fix the ordering issue, giving the machine a chance to have the IP set up by the time LXD starts.
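For reference, ordering directives such as After= go in the [Unit] section of the drop-in file that systemctl edit creates. A sketch of the complete override (assuming the standard zerotier-one.service unit name shipped by the ZeroTier package):

```ini
# /etc/systemd/system/snap.lxd.daemon.service.d/override.conf
# (created automatically by `systemctl edit snap.lxd.daemon`)
[Unit]
# Start ZeroTier first, so the 10.55.0.1 address exists
# by the time LXD tries to bind it.
After=zerotier-one.service
```

After saving, run systemctl daemon-reload (systemctl edit does this for you) and restart snap.lxd.daemon for the ordering to take effect.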

OK I can try that for starters anyway thanks.

Well, what IP you use really depends on which network you expect the other cluster members to use to communicate, as LXD won't be binding any other address.

Awesome, it's working now, thanks!

:smiley:

Hi

I’ve got a new error now after a reboot :frowning:

Looks like something to do with the database, possibly?

INFO[04-29|17:03:29] Raft: Node at 10.55.0.1:8443 [Leader] entering Leader state 
DBUG[04-29|17:03:29] Found cert                               k=0
DBUG[04-29|17:03:29] Found cert                               k=0
DBUG[04-29|17:03:29] Found cert                               k=0
DBUG[04-29|17:03:29] Found cert                               k=0
DBUG[04-29|17:04:29] Database error: failed to begin transaction: gRPC BEGIN response error: rpc error: code = Unknown desc = failed to handle BEGIN request: FSM out of sync: timed out enqueuing operation 
INFO[04-29|17:04:29] Stopping REST API handler: 
INFO[04-29|17:04:29]  - closing socket                        socket=10.55.0.1:8443
INFO[04-29|17:04:29]  - closing socket                        socket=/var/snap/lxd/common/lxd/unix.socket
INFO[04-29|17:04:29] Stopping /dev/lxd handler 
INFO[04-29|17:04:29]  - closing socket                        socket=/var/snap/lxd/common/lxd/devlxd/sock
INFO[04-29|17:04:29] Stop database gateway 
INFO[04-29|17:04:29] Stop raft instance 
INFO[04-29|17:04:30] Stopping REST API handler: 
INFO[04-29|17:04:30] Stopping /dev/lxd handler 
INFO[04-29|17:04:30] Stopping REST API handler: 
INFO[04-29|17:04:30] Stopping /dev/lxd handler 
DBUG[04-29|17:04:30] Not unmounting temporary filesystems (containers are still running) 
INFO[04-29|17:04:30] Saving simplestreams cache 
INFO[04-29|17:04:30] Saved simplestreams cache 
Error: failed to open cluster database: failed to ensure schema: failed to begin transaction: gRPC BEGIN response error: rpc error: code = Unknown desc = failed to handle BEGIN request: FSM out of sync: timed out enqueuing operation
root@hub2-proxhub2-prox /home/jon #

@freeekanayaka any ideas?