Failed to start lxd.activate

Hi,

When I try:

:~# snap start lxd
error: cannot perform the following tasks:

  • start of [lxd.activate lxd.daemon] (# systemctl start snap.lxd.activate.service snap.lxd.daemon.service
    Job for snap.lxd.activate.service failed because the control process exited with error code.
    See "systemctl status snap.lxd.activate.service" and "journalctl -xe" for details.
    )
  • start of [lxd.activate lxd.daemon] (exit status 1)

Here is the log:

sudo cat /var/snap/lxd/common/lxd/logs/lxd.log
t=2019-12-10T10:34:35+0530 lvl=info msg="LXD 3.18 is starting in normal mode" path=/var/snap/lxd/common/lxd
t=2019-12-10T10:34:35+0530 lvl=info msg="Kernel uid/gid map:"
t=2019-12-10T10:34:35+0530 lvl=info msg=" - u 0 0 4294967295"
t=2019-12-10T10:34:35+0530 lvl=info msg=" - g 0 0 4294967295"
t=2019-12-10T10:34:35+0530 lvl=info msg="Configured LXD uid/gid map:"
t=2019-12-10T10:34:35+0530 lvl=info msg=" - u 0 1000000 1000000000"
t=2019-12-10T10:34:35+0530 lvl=info msg=" - g 0 1000000 1000000000"
t=2019-12-10T10:34:35+0530 lvl=warn msg="CGroup memory swap accounting is disabled, swap limits will be ignored."
t=2019-12-10T10:34:35+0530 lvl=info msg="Kernel features:"
t=2019-12-10T10:34:35+0530 lvl=info msg=" - netnsid-based network retrieval: no"
t=2019-12-10T10:34:35+0530 lvl=info msg=" - uevent injection: no"
t=2019-12-10T10:34:35+0530 lvl=info msg=" - seccomp listener: no"
t=2019-12-10T10:34:35+0530 lvl=info msg=" - unprivileged file capabilities: yes"
t=2019-12-10T10:34:35+0530 lvl=info msg=" - shiftfs support: no"
t=2019-12-10T10:34:35+0530 lvl=info msg="Initializing local database"
t=2019-12-10T10:34:36+0530 lvl=eror msg="Failed to start the daemon: Failed to start dqlite server: failed to start task"
t=2019-12-10T10:34:36+0530 lvl=info msg="Starting shutdown sequence"

Please help me sort this out.

Try running:

  • rm /var/snap/lxd/common/lxd/unix.socket
  • lxd --debug --group lxd

Maybe it will give us some more details on what’s going on.
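
Also, as a precaution (my suggestion, nothing the above strictly requires): back up the database directory before attempting any repair, so every step stays reversible. A minimal sketch, assuming the stock snap paths from this thread and an arbitrary backup location:

  # Stop LXD so nothing writes to the dqlite files while we copy them.
  snap stop lxd
  # Archive the raft/dqlite state; restore it with tar -x if a repair goes wrong.
  tar -C /var/snap/lxd/common/lxd/database -czf /root/lxd-db-backup.tar.gz global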

Thanks, the issue seems to be solved. The storage pool ran out of space and the server was rebooted, which appears to have corrupted the database. Deleting the last segment transaction file solved the issue, as you suggested in another post.

All the files are dated Dec 10th because the directory is a copy of the database directory from a previous attempt; in reality it was about 4 days old, and nothing was lost.

After this, snap start lxd worked just fine:

root@node33:~# rm /var/snap/lxd/common/lxd/unix.socket
root@node33:~# lxd --debug --group lxd
INFO[12-01|13:38:49] LXD 3.18 is starting in normal mode path=/var/snap/lxd/common/lxd
INFO[12-01|13:38:49] Kernel uid/gid map:
INFO[12-01|13:38:49] - u 0 0 4294967295
INFO[12-01|13:38:49] - g 0 0 4294967295
INFO[12-01|13:38:49] Configured LXD uid/gid map:
INFO[12-01|13:38:49] - u 0 1000000 1000000000
INFO[12-01|13:38:49] - g 0 1000000 1000000000
WARN[12-01|13:38:49] CGroup memory swap accounting is disabled, swap limits will be ignored.
INFO[12-01|13:38:49] Kernel features:
INFO[12-01|13:38:49] - netnsid-based network retrieval: no
INFO[12-01|13:38:49] - uevent injection: no
INFO[12-01|13:38:49] - seccomp listener: no
INFO[12-01|13:38:49] - unprivileged file capabilities: yes
INFO[12-01|13:38:49] - shiftfs support: no
INFO[12-01|13:38:49] Initializing local database
DBUG[12-01|13:38:49] Initializing database gateway
DBUG[12-01|13:38:49] Start database node id=1 address=
06:02:38.912 [DEBUG]: data dir: /var/snap/lxd/common/lxd/database/global
06:02:38.912 [DEBUG]: metadata1: version 193, term 29, voted for 1
06:02:38.912 [DEBUG]: metadata2: version 194, term 29, voted for 1
06:02:38.912 [DEBUG]: metadata: version 196, term 29, voted for 1
06:02:38.912 [DEBUG]: I/O: direct 0, async 0, block 4096
06:02:38.912 [INFO ]: starting
06:02:38.912 [DEBUG]: segment 633388-634109
06:02:38.912 [DEBUG]: segment 634110-634238
06:02:38.912 [DEBUG]: segment 634239-634318
06:02:38.912 [DEBUG]: segment 634319-634473
06:02:38.912 [DEBUG]: segment 634474-634540
06:02:38.912 [DEBUG]: segment 634541-634778
06:02:38.912 [DEBUG]: segment 634779-635594
06:02:38.912 [DEBUG]: segment 635595-635879
06:02:38.912 [DEBUG]: segment 635880-635996
06:02:38.912 [DEBUG]: segment 635997-636343
06:02:38.912 [DEBUG]: segment 636344-636562
06:02:38.912 [DEBUG]: segment 636563-637378
06:02:38.912 [DEBUG]: segment 637379-637399
06:02:38.912 [DEBUG]: segment 637400-637719
06:02:38.912 [DEBUG]: segment 637720-637945
06:02:38.912 [DEBUG]: segment 637946-638035
06:02:38.912 [DEBUG]: segment 638036-638396
06:02:38.912 [DEBUG]: segment 638397-639118
06:02:38.912 [DEBUG]: segment 639119-639625
06:02:38.912 [DEBUG]: segment 639626-640439
06:02:38.912 [DEBUG]: segment 640440-641253
06:02:38.912 [DEBUG]: segment 641254-641344
06:02:38.912 [DEBUG]: segment 641345-641628
06:02:38.912 [DEBUG]: segment 641629-641719
06:02:38.912 [DEBUG]: segment 641720-642006
06:02:38.912 [DEBUG]: ignore db.bin
06:02:38.912 [DEBUG]: ignore db.bin-shm
06:02:38.912 [DEBUG]: ignore db.bin-wal
06:02:38.912 [DEBUG]: ignore metadata1
06:02:38.912 [DEBUG]: ignore metadata2
06:02:38.912 [DEBUG]: ignore snapshot-26-640555-4145502219
06:02:38.912 [DEBUG]: snapshot snapshot-26-640555-4145502219.meta
06:02:38.912 [DEBUG]: ignore snapshot-27-641579-5534976152
06:02:38.912 [DEBUG]: snapshot snapshot-27-641579-5534976152.meta
06:02:38.912 [DEBUG]: most recent snapshot at 641579
06:02:38.912 [DEBUG]: most recent closed segment is 641720-642006
06:02:38.912 [DEBUG]: load segment 633388-634109
06:02:38.912 [DEBUG]: load segment 634110-634238
06:02:38.912 [DEBUG]: load segment 634239-634318
06:02:38.912 [DEBUG]: load segment 634319-634473
06:02:38.912 [DEBUG]: load segment 634474-634540
06:02:38.912 [DEBUG]: load segment 634541-634778
06:02:38.912 [DEBUG]: load segment 634779-635594
06:02:38.912 [DEBUG]: load segment 635595-635879
06:02:38.912 [DEBUG]: load segment 635880-635996
06:02:38.912 [DEBUG]: load segment 635997-636343
06:02:38.912 [DEBUG]: load segment 636344-636562
06:02:38.912 [DEBUG]: load segment 636563-637378
06:02:38.912 [DEBUG]: load segment 637379-637399
06:02:38.912 [DEBUG]: load segment 637400-637719
06:02:38.912 [DEBUG]: load segment 637720-637945
06:02:38.912 [DEBUG]: load segment 637946-638035
06:02:38.912 [DEBUG]: load segment 638036-638396
06:02:38.912 [DEBUG]: load segment 638397-639118
06:02:38.912 [DEBUG]: load segment 639119-639625
06:02:38.912 [DEBUG]: load segment 639626-640439
06:02:38.912 [DEBUG]: load segment 640440-641253
06:02:38.912 [DEBUG]: load segment 641254-641344
06:02:38.912 [DEBUG]: load segment 641345-641628
06:02:38.912 [DEBUG]: load segment 641629-641719
06:02:38.912 [DEBUG]: load segment 641720-642006
06:02:38.912 [ERROR]: batch has zero entries (preamble at 3041088)
EROR[12-01|13:38:49] Failed to start the daemon: Failed to start dqlite server: failed to start task
INFO[12-01|13:38:49] Starting shutdown sequence
DBUG[12-01|13:38:49] Not unmounting temporary filesystems (containers are still running)
Error: Failed to start dqlite server: failed to start task
root@node33:~# cd /var/snap/lxd/common/lxd/database/global
root@node33:/var/snap/lxd/common/lxd/database/global# ls -l
total 93850
-rw------- 1 root root 7414424 Dec 10 2019 633388-634109
-rw------- 1 root root 1306064 Dec 10 2019 634110-634238
-rw------- 1 root root 785408 Dec 10 2019 634239-634318
-rw------- 1 root root 1570592 Dec 10 2019 634319-634473
-rw------- 1 root root 661376 Dec 10 2019 634474-634540
-rw------- 1 root root 2417864 Dec 10 2019 634541-634778
-rw------- 1 root root 8385632 Dec 10 2019 634779-635594
-rw------- 1 root root 2938472 Dec 10 2019 635595-635879
-rw------- 1 root root 1354424 Dec 10 2019 635880-635996
-rw------- 1 root root 3533760 Dec 10 2019 635997-636343
-rw------- 1 root root 2231840 Dec 10 2019 636344-636562
-rw------- 1 root root 8385632 Dec 10 2019 636563-637378
-rw------- 1 root root 210824 Dec 10 2019 637379-637399
-rw------- 1 root root 3429312 Dec 10 2019 637400-637719
-rw------- 1 root root 2330816 Dec 10 2019 637720-637945
-rw------- 1 root root 913376 Dec 10 2019 637946-638035
-rw------- 1 root root 3731792 Dec 10 2019 638036-638396
-rw------- 1 root root 7418552 Dec 10 2019 638397-639118
-rw------- 1 root root 5195096 Dec 10 2019 639119-639625
-rw------- 1 root root 8385552 Dec 10 2019 639626-640439
-rw------- 1 root root 8385584 Dec 10 2019 640440-641253
-rw------- 1 root root 938168 Dec 10 2019 641254-641344
-rw------- 1 root root 2995800 Dec 10 2019 641345-641628
-rw------- 1 root root 1044840 Dec 10 2019 641629-641719
-rw------- 1 root root 3053472 Dec 10 2019 641720-642006
-rw------- 1 root root 536576 Dec 10 2019 db.bin
-rw------- 1 root root 32768 Dec 10 2019 db.bin-shm
-rw------- 1 root root 997072 Dec 10 2019 db.bin-wal
-rw------- 1 root root 32 Dec 1 2019 metadata1
-rw------- 1 root root 32 Dec 1 2019 metadata2
-rw------- 1 root root 2856208 Dec 10 2019 snapshot-26-640555-4145502219
-rw------- 1 root root 56 Dec 10 2019 snapshot-26-640555-4145502219.meta
-rw------- 1 root root 1064008 Dec 10 2019 snapshot-27-641579-5534976152
-rw------- 1 root root 56 Dec 10 2019 snapshot-27-641579-5534976152.meta
root@node33:/var/snap/lxd/common/lxd/database/global# rm 641720-642006
root@node33:/var/snap/lxd/common/lxd/database/global# lxd --debug --group lxd
INFO[12-01|07:32:08] LXD 3.18 is starting in normal mode path=/var/snap/lxd/common/lxd
INFO[12-01|07:32:08] Kernel uid/gid map:
INFO[12-01|07:32:08] - u 0 0 4294967295
INFO[12-01|07:32:08] - g 0 0 4294967295
INFO[12-01|07:32:08] Configured LXD uid/gid map:
INFO[12-01|07:32:08] - u 0 1000000 1000000000
INFO[12-01|07:32:08] - g 0 1000000 1000000000
WARN[12-01|07:32:08] CGroup memory swap accounting is disabled, swap limits will be ignored.
INFO[12-01|07:32:08] Kernel features:
INFO[12-01|07:32:08] - netnsid-based network retrieval: no
INFO[12-01|07:32:08] - uevent injection: no
INFO[12-01|07:32:08] - seccomp listener: no
INFO[12-01|07:32:08] - unprivileged file capabilities: yes
INFO[12-01|07:32:08] - shiftfs support: no
INFO[12-01|07:32:08] Initializing local database
DBUG[12-01|07:32:08] Initializing database gateway
DBUG[12-01|07:32:08] Start database node address= id=1
INFO[12-01|07:32:09] Starting /dev/lxd handler:
INFO[12-01|07:32:09] - binding devlxd socket socket=/var/snap/lxd/common/lxd/devlxd/sock
INFO[12-01|07:32:09] REST API daemon:
INFO[12-01|07:32:09] - binding Unix socket socket=/var/snap/lxd/common/lxd/unix.socket
INFO[12-01|07:32:09] - binding TCP socket socket=[::]:8443
INFO[12-01|07:32:09] Initializing global database
DBUG[12-01|07:32:09] Dqlite: connected address=1 id=1 attempt=0
INFO[12-01|07:32:09] Initializing storage pools
DBUG[12-01|07:32:09] Initializing and checking storage pool "default"
DBUG[12-01|07:32:09] Checking ZFS storage pool "default"
DBUG[12-01|07:32:09] Initializing and checking storage pool "data"
DBUG[12-01|07:32:09] Checking BTRFS storage pool "data"
INFO[12-01|07:32:09] Initializing networks
DBUG[12-01|07:32:09] New task Operation: 639176f8-331e-4e5c-9a9a-7ed2c03c2e0f
INFO[12-01|07:32:09] Pruning leftover image files
DBUG[12-01|07:32:09] Started task operation: 639176f8-331e-4e5c-9a9a-7ed2c03c2e0f
INFO[12-01|07:32:09] Done pruning leftover image files
INFO[12-01|07:32:09] Loading daemon configuration
DBUG[12-01|07:32:09] Success for task operation: 639176f8-331e-4e5c-9a9a-7ed2c03c2e0f
DBUG[12-01|07:32:09] Initialized inotify with file descriptor 23
DBUG[12-01|07:32:09] New task Operation: e2e034a3-309c-4ba0-b6cf-b0be236f4ccc
INFO[12-01|07:32:09] Pruning expired images
DBUG[12-01|07:32:09] Started task operation: e2e034a3-309c-4ba0-b6cf-b0be236f4ccc
INFO[12-01|07:32:09] Done pruning expired images
DBUG[12-01|07:32:09] New task Operation: f5fe9dbc-86f1-4d61-869a-cab750a1ee83
DBUG[12-01|07:32:09] Success for task operation: e2e034a3-309c-4ba0-b6cf-b0be236f4ccc
INFO[12-01|07:32:09] Pruning expired container backups
DBUG[12-01|07:32:09] Started task operation: f5fe9dbc-86f1-4d61-869a-cab750a1ee83
INFO[12-01|07:32:09] Done pruning expired container backups
DBUG[12-01|07:32:09] Success for task operation: f5fe9dbc-86f1-4d61-869a-cab750a1ee83
DBUG[12-01|07:32:09] New task Operation: 551c001f-f573-4a8d-a150-086b5e6089a9
DBUG[12-01|07:32:09] New task Operation: 2e3f7d83-36e3-4faf-9cf3-813da3d8ef0b
INFO[12-01|07:32:09] Updating instance types
DBUG[12-01|07:32:09] Started task operation: 2e3f7d83-36e3-4faf-9cf3-813da3d8ef0b
INFO[12-01|07:32:09] Done updating instance types
INFO[12-01|07:32:09] Expiring log files
DBUG[12-01|07:32:09] Started task operation: 551c001f-f573-4a8d-a150-086b5e6089a9
INFO[12-01|07:32:09] Done expiring log files
DBUG[12-01|07:32:09] New task Operation: 30d32bcc-ff03-4d53-9911-02d827f50505
DBUG[12-01|07:32:09] Success for task operation: 551c001f-f573-4a8d-a150-086b5e6089a9
INFO[12-01|07:32:09] Updating images
DBUG[12-01|07:32:09] Started task operation: 30d32bcc-ff03-4d53-9911-02d827f50505
INFO[12-01|07:32:10] Done updating images
DBUG[12-01|07:32:10] Success for task operation: 30d32bcc-ff03-4d53-9911-02d827f50505
DBUG[12-01|07:32:10] Scheduler: network: vethb9a8d89b has been added: updating network priorities
DBUG[12-01|04:28:09] Scheduler: network: veth10f6e5de has been added: updating network priorities
DBUG[12-01|04:28:09] Scheduler: network: veth901e6e35 has been added: updating network priorities
DBUG[12-01|04:28:09] Scheduler: network: veth38838223 has been added: updating network priorities
DBUG[12-01|04:28:09] Scheduler: network: veth5eb946a7 has been added: updating network priorities
DBUG[12-01|04:28:09] Scheduler: network: veth2a8dc423 has been added: updating network priorities
DBUG[12-01|04:28:09] Mounting ZFS storage volume for container "maas-icts" on storage pool "default"
WARN[12-01|04:28:09] Unable to connect to MAAS, trying again in a minute url=http://10.3.4.10:5240/MAAS err="unexpected: ServerError: 401 Unauthorized (Authorization Error: 'Expired timestamp: given 1575154689 and now 1575165730 has a greater difference than threshold 300')"
DBUG[12-01|04:28:09] Mounted ZFS storage volume for container "maas-icts" on storage pool "default"
DBUG[12-01|04:28:10] Mounting ZFS storage volume for container "maas-icts" on storage pool "default"
DBUG[12-01|04:28:10] Mounted ZFS storage volume for container "maas-icts" on storage pool "default"
INFO[12-01|04:28:10] Starting container created=2018-08-31T11:38:04+0530 ephemeral=false used=2019-12-06T15:42:30+0530 stateful=false project=default name=maas-icts action=start
DBUG[12-01|04:28:10] Handling method=GET url=/internal/containers/62/onstart ip=@ user=
DBUG[12-01|04:28:10] Mounting ZFS storage volume for container "maas-icts" on storage pool "default"
DBUG[12-01|04:28:10] Mounted ZFS storage volume for container "maas-icts" on storage pool "default"
DBUG[12-01|04:28:10] Scheduler: container maas-icts started: re-balancing
DBUG[12-01|04:28:10]
{
    "type": "sync",
    "status": "Success",
    "status_code": 200,
    "operation": "",
    "error_code": 0,
    "error": "",
    "metadata": {}
}
INFO[12-01|04:28:11] Started container project=default name=maas-icts action=start created=2018-08-31T11:38:04+0530 ephemeral=false used=2019-12-06T15:42:30+0530 stateful=false
DBUG[12-01|04:28:11] Scheduler: network: vetha4b16631 has been added: updating network priorities
DBUG[12-01|04:28:11] Scheduler: network: veth73329768 has been added: updating network priorities
DBUG[12-01|04:28:11] Scheduler: network: vethfbc36932 has been added: updating network priorities
DBUG[12-01|04:28:11] Scheduler: network: veth623dbc90 has been added: updating network priorities
DBUG[12-01|04:28:11] Scheduler: network: veth71ed1b81 has been added: updating network priorities
DBUG[12-01|04:28:11] Scheduler: network: veth0ea1166c has been added: updating network priorities
DBUG[12-01|04:28:11] Scheduler: network: veth477c2b84 has been added: updating network priorities
DBUG[12-01|04:28:11] Scheduler: network: veth560e984d has been added: updating network priorities
DBUG[12-01|04:28:11] Scheduler: network: vetha3268729 has been added: updating network priorities
DBUG[12-01|04:28:11] Scheduler: network: veth48c189ea has been added: updating network priorities
DBUG[12-01|04:28:11] Scheduler: network: veth9709d039 has been added: updating network priorities
DBUG[12-01|04:28:11] Scheduler: network: veth1381be08 has been added: updating network priorities
DBUG[12-01|04:28:11] Mounting ZFS storage volume for container "gateway-icts" on storage pool "default"
DBUG[12-01|04:28:11] Mounted ZFS storage volume for container "gateway-icts" on storage pool "default"
DBUG[12-01|04:28:11] Mounting ZFS storage volume for container "gateway-icts" on storage pool "default"
DBUG[12-01|04:28:11] Mounted ZFS storage volume for container "gateway-icts" on storage pool "default"
INFO[12-01|04:28:11] Starting container project=default name=gateway-icts action=start created=2018-08-31T11:43:17+0530 ephemeral=false used=2019-12-06T15:23:54+0530 stateful=false
DBUG[12-01|04:28:12] Handling ip=@ user= method=GET url=/internal/containers/69/onstart
DBUG[12-01|04:28:12] Mounting ZFS storage volume for container "gateway-icts" on storage pool "default"
DBUG[12-01|04:28:12] Mounted ZFS storage volume for container "gateway-icts" on storage pool "default"
DBUG[12-01|04:28:12] Scheduler: container gateway-icts started: re-balancing
DBUG[12-01|04:28:12]
{
    "type": "sync",
    "status": "Success",
    "status_code": 200,
    "operation": "",
    "error_code": 0,
    "error": "",
    "metadata": {}
}
INFO[12-01|04:28:13] Started container name=gateway-icts action=start created=2018-08-31T11:43:17+0530 ephemeral=false used=2019-12-06T15:23:54+0530 stateful=false project=default
DBUG[12-01|04:28:13] Scheduler: network: vethffe8cba7 has been added: updating network priorities
DBUG[12-01|04:28:13] Scheduler: network: vethdf5c1167 has been added: updating network priorities
DBUG[12-01|04:28:13] Mounting ZFS storage volume for container "aiotm-backend" on storage pool "default"
DBUG[12-01|04:28:13] Mounted ZFS storage volume for container "aiotm-backend" on storage pool "default"
DBUG[12-01|04:28:13] Mounting ZFS storage volume for container "aiotm-backend" on storage pool "default"
DBUG[12-01|04:28:13] Mounted ZFS storage volume for container "aiotm-backend" on storage pool "default"
INFO[12-01|04:28:13] Starting container stateful=false project=default name=aiotm-backend action=start created=2018-04-06T11:12:33+0530 ephemeral=false used=2019-12-06T15:23:56+0530
DBUG[12-01|04:28:13] Handling method=GET url=/internal/containers/4/onstart ip=@ user=
DBUG[12-01|04:28:13] Mounting ZFS storage volume for container "aiotm-backend" on storage pool "default"
DBUG[12-01|04:28:13] Mounted ZFS storage volume for container "aiotm-backend" on storage pool "default"
DBUG[12-01|04:28:13] Scheduler: container aiotm-backend started: re-balancing
DBUG[12-01|04:28:13]
{
    "type": "sync",
    "status": "Success",
    "status_code": 200,
    "operation": "",
    "error_code": 0,
    "error": "",
    "metadata": {}
}
INFO[12-01|04:28:14] Started container project=default name=aiotm-backend action=start created=2018-04-06T11:12:33+0530 ephemeral=false used=2019-12-06T15:23:56+0530 stateful=false
DBUG[12-01|04:28:14] Scheduler: network: veth91696506 has been added: updating network priorities
DBUG[12-01|04:28:14] Mounting ZFS storage volume for container "aiotm-besafe" on storage pool "default"
DBUG[12-01|04:28:14] Scheduler: network: vetha04847e6 has been added: updating network priorities
DBUG[12-01|04:28:14] Mounted ZFS storage volume for container "aiotm-besafe" on storage pool "default"
DBUG[12-01|04:28:14] Mounting ZFS storage volume for container "aiotm-besafe" on storage pool "default"
DBUG[12-01|04:28:14] Mounted ZFS storage volume for container "aiotm-besafe" on storage pool "default"
INFO[12-01|04:28:14] Starting container used=2019-12-06T15:23:59+0530 stateful=false project=default name=aiotm-besafe action=start created=2018-04-04T14:05:17+0530 ephemeral=false
DBUG[12-01|04:28:14] Handling method=GET url=/internal/containers/12/onstart ip=@ user=
DBUG[12-01|04:28:14] Mounting ZFS storage volume for container "aiotm-besafe" on storage pool "default"
DBUG[12-01|04:28:14] Mounted ZFS storage volume for container "aiotm-besafe" on storage pool "default"
DBUG[12-01|04:28:15] Scheduler: container aiotm-besafe started: re-balancing
DBUG[12-01|04:28:15]
{
    "type": "sync",
    "status": "Success",
    "status_code": 200,
    "operation": "",
    "error_code": 0,
    "error": "",
    "metadata": {}
}
INFO[12-01|04:28:16] Started container action=start created=2018-04-04T14:05:17+0530 ephemeral=false used=2019-12-06T15:23:59+0530 stateful=false project=default name=aiotm-besafe
DBUG[12-01|04:28:16] Mounting ZFS storage volume for container "aiotm-devi" on storage pool "default"
DBUG[12-01|04:28:16] Scheduler: network: vethaa01fadd has been added: updating network priorities
DBUG[12-01|04:28:16] Scheduler: network: veth8548335d has been added: updating network priorities
DBUG[12-01|04:28:16] Mounted ZFS storage volume for container "aiotm-devi" on storage pool "default"
DBUG[12-01|04:28:16] Mounting ZFS storage volume for container "aiotm-devi" on storage pool "default"
DBUG[12-01|04:28:16] Mounted ZFS storage volume for container "aiotm-devi" on storage pool "default"
INFO[12-01|04:28:16] Starting container project=default name=aiotm-devi action=start created=2018-06-20T12:24:51+0530 ephemeral=false used=2019-12-06T15:24:01+0530 stateful=false
DBUG[12-01|04:28:17] Handling ip=@ user= method=GET url=/internal/containers/21/onstart
DBUG[12-01|04:28:17] Mounting ZFS storage volume for container "aiotm-devi" on storage pool "default"
DBUG[12-01|04:28:17] Mounted ZFS storage volume for container "aiotm-devi" on storage pool "default"
DBUG[12-01|04:28:17] Scheduler: container aiotm-devi started: re-balancing
DBUG[12-01|04:28:17]
{
    "type": "sync",
    "status": "Success",
    "status_code": 200,
    "operation": "",
    "error_code": 0,
    "error": "",
    "metadata": {}
}
INFO[12-01|04:28:17] Started container action=start created=2018-06-20T12:24:51+0530 ephemeral=false used=2019-12-06T15:24:01+0530 stateful=false project=default name=aiotm-devi
DBUG[12-01|04:28:17] Mounting ZFS storage volume for container "aiotm-frontend" on storage pool "default"
DBUG[12-01|04:28:17] Scheduler: network: veth63544f52 has been added: updating network priorities
DBUG[12-01|04:28:17] Scheduler: network: vethbb62451f has been added: updating network priorities
DBUG[12-01|04:28:18] Mounted ZFS storage volume for container "aiotm-frontend" on storage pool "default"
DBUG[12-01|04:28:18] Mounting ZFS storage volume for container "aiotm-frontend" on storage pool "default"
DBUG[12-01|04:28:18] Mounted ZFS storage volume for container "aiotm-frontend" on storage pool "default"
INFO[12-01|04:28:18] Starting container name=aiotm-frontend action=start created=2018-04-06T11:12:33+0530 ephemeral=false used=2019-12-06T15:24:07+0530 stateful=false project=default
DBUG[12-01|04:28:18] Handling method=GET url=/internal/containers/6/onstart ip=@ user=
DBUG[12-01|04:28:18] Mounting ZFS storage volume for container "aiotm-frontend" on storage pool "default"
DBUG[12-01|04:28:18] Mounted ZFS storage volume for container "aiotm-frontend" on storage pool "default"
DBUG[12-01|04:28:18] Scheduler: container aiotm-frontend started: re-balancing
DBUG[12-01|04:28:18]
{
    "type": "sync",
    "status": "Success",
    "status_code": 200,
    "operation": "",
    "error_code": 0,
    "error": "",
    "metadata": {}
}
WARN[12-01|04:28:19] Failed to update instance types: Get https://images.linuxcontainers.org/meta/instance-types/.yaml: lookup images.linuxcontainers.org: Temporary failure in name resolution
DBUG[12-01|04:28:19] Failure for task operation: 2e3f7d83-36e3-4faf-9cf3-813da3d8ef0b: Get https://images.linuxcontainers.org/meta/instance-types/.yaml: lookup images.linuxcontainers.org: Temporary failure in name resolution
INFO[12-01|04:28:20] Started container ephemeral=false used=2019-12-06T15:24:07+0530 stateful=false project=default name=aiotm-frontend action=start created=2018-04-06T11:12:33+0530
DBUG[12-01|04:28:20] Mounting ZFS storage volume for container "aiotm-gps" on storage pool "default"
DBUG[12-01|04:28:20] Scheduler: network: veth3f33b150 has been added: updating network priorities
DBUG[12-01|04:28:20] Scheduler: network: veth7bb39e16 has been added: updating network priorities
DBUG[12-01|04:28:20] Mounted ZFS storage volume for container "aiotm-gps" on storage pool "default"
DBUG[12-01|04:28:21] Mounting ZFS storage volume for container "aiotm-gps" on storage pool "default"
DBUG[12-01|04:28:21] Mounted ZFS storage volume for container "aiotm-gps" on storage pool "default"
INFO[12-01|04:28:21] Starting container ephemeral=false used=2019-12-06T15:25:15+0530 stateful=false project=default name=aiotm-gps action=start created=2018-04-05T11:34:32+0530
DBUG[12-01|04:28:21] Handling method=GET url=/internal/containers/11/onstart ip=@ user=
DBUG[12-01|04:28:21] Mounting ZFS storage volume for container "aiotm-gps" on storage pool "default"
DBUG[12-01|04:28:21] Mounted ZFS storage volume for container "aiotm-gps" on storage pool "default"
DBUG[12-01|04:28:21] Scheduler: container aiotm-gps started: re-balancing
DBUG[12-01|04:28:21]
{
    "type": "sync",
    "status": "Success",
    "status_code": 200,
    "operation": "",
    "error_code": 0,
    "error": "",
    "metadata": {}
}
INFO[12-01|04:28:24] Started container action=start created=2018-04-05T11:34:32+0530 ephemeral=false used=2019-12-06T15:25:15+0530 stateful=false project=default name=aiotm-gps
DBUG[12-01|04:28:24] Mounting BTRFS storage volume "mysql-log" on storage pool "data"
DBUG[12-01|04:28:24] Mounting BTRFS storage pool "data"
DBUG[12-01|04:28:24] Mounted BTRFS storage pool "data"
DBUG[12-01|04:28:24] Mounted BTRFS storage volume "mysql-log" on storage pool "data"
DBUG[12-01|04:28:24] Mounting ZFS storage volume for container "aiotm-mysql" on storage pool "default"
DBUG[12-01|04:28:24] Scheduler: network: veth55e454c6 has been added: updating network priorities
DBUG[12-01|04:28:24] Scheduler: network: veth868b2aab has been added: updating network priorities
DBUG[12-01|04:28:24] Mounted ZFS storage volume for container "aiotm-mysql" on storage pool "default"
DBUG[12-01|04:28:25] Mounting ZFS storage volume for container "aiotm-mysql" on storage pool "default"
DBUG[12-01|04:28:25] Mounted ZFS storage volume for container "aiotm-mysql" on storage pool "default"
INFO[12-01|04:28:25] Starting container project=default name=aiotm-mysql action=start created=2018-04-06T11:12:33+0530 ephemeral=false used=2019-12-06T15:26:28+0530 stateful=false
DBUG[12-01|04:28:25] Handling method=GET url=/internal/containers/5/onstart ip=@ user=
DBUG[12-01|04:28:25] Mounting ZFS storage volume for container "aiotm-mysql" on storage pool "default"
DBUG[12-01|04:28:25] Mounted ZFS storage volume for container "aiotm-mysql" on storage pool "default"
DBUG[12-01|04:28:25] Scheduler: container aiotm-mysql started: re-balancing
DBUG[12-01|04:28:25]
{
    "type": "sync",
    "status": "Success",
    "status_code": 200,
    "operation": "",
    "error_code": 0,
    "error": "",
    "metadata": {}
}
INFO[12-01|04:28:29] Started container ephemeral=false used=2019-12-06T15:26:28+0530 stateful=false project=default name=aiotm-mysql action=start created=2018-04-06T11:12:33+0530
DBUG[12-01|04:28:29] Scheduler: network: veth57a61238 has been added: updating network priorities
DBUG[12-01|04:28:29] Scheduler: network: veth4518f287 has been added: updating network priorities
DBUG[12-01|04:28:29] Mounting ZFS storage volume for container "maas-1285-rackd" on storage pool "default"
DBUG[12-01|04:28:29] Mounted ZFS storage volume for container "maas-1285-rackd" on storage pool "default"
DBUG[12-01|04:28:30] Mounting ZFS storage volume for container "maas-1285-rackd" on storage pool "default"
DBUG[12-01|04:28:30] Mounted ZFS storage volume for container "maas-1285-rackd" on storage pool "default"
INFO[12-01|04:28:30] Starting container project=default name=maas-1285-rackd action=start created=2018-09-25T15:24:52+0530 ephemeral=false used=2019-12-06T15:27:27+0530 stateful=false
DBUG[12-01|04:28:30] Handling user= method=GET url=/internal/containers/87/onstart ip=@
DBUG[12-01|04:28:30] Mounting ZFS storage volume for container "maas-1285-rackd" on storage pool "default"
DBUG[12-01|04:28:30] Mounted ZFS storage volume for container "maas-1285-rackd" on storage pool "default"
DBUG[12-01|04:28:31] Scheduler: container maas-1285-rackd started: re-balancing
DBUG[12-01|04:28:31]
{
    "type": "sync",
    "status": "Success",
    "status_code": 200,
    "operation": "",
    "error_code": 0,
    "error": "",
    "metadata": {}
}
INFO[12-01|04:28:33] Started container project=default name=maas-1285-rackd action=start created=2018-09-25T15:24:52+0530 ephemeral=false used=2019-12-06T15:27:27+0530 stateful=false
INFO[12-01|04:29:10] Connected to MAAS controller url=http://10.3.4.10:5240/MAAS
^CINFO[12-01|04:35:57] Received 'interrupt signal', exiting
INFO[12-01|04:35:57] Starting shutdown sequence
INFO[12-01|04:35:57] Stopping REST API handler:
INFO[12-01|04:35:57] - closing socket socket=[::]:8443
INFO[12-01|04:35:57] - closing socket socket=/var/snap/lxd/common/lxd/unix.socket
INFO[12-01|04:35:57] Stopping /dev/lxd handler:
INFO[12-01|04:35:57] - closing socket socket=/var/snap/lxd/common/lxd/devlxd/sock
INFO[12-01|04:35:57] Closing the database
DBUG[12-01|04:35:57] Stop database gateway
DBUG[12-01|04:35:57] Not unmounting temporary filesystems (containers are still running)
root@node33:/var/snap/lxd/common/lxd/database/global#
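
For anyone repeating this, a small sketch to pick out the newest closed segment instead of eyeballing ls -l. The grep/sort pipeline is my own addition, not from the session above; keeping a copy first makes the deletion reversible:

  cd /var/snap/lxd/common/lxd/database/global
  # Closed segments are plain START-END files; sort numerically on the start index.
  newest=$(ls -1 | grep -E '^[0-9]+-[0-9]+$' | sort -t- -k1,1n | tail -n1)
  cp -a "$newest" "/root/$newest.bak"   # safety copy before deleting
  rm "$newest"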

Good. Note that we've added a whole bunch of hardening and testing around running out of disk space, which will be present in LXD 3.19, so hopefully such issues will go away entirely (the system will hang/error instead).

Thanks for the report. I stumbled upon a corrupted database on 3.18 (Fedora 31) after an unclean shutdown, and your description allowed me to solve the issue (deleting the last two segment files under database/global by hand).
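
To sum up the recovery that worked in this thread, here is a hedged sketch (snap paths as above; the segment name is an example from this thread and will differ on your system; remove one segment at a time, newest first, re-testing after each):

  snap stop lxd
  cp -a /var/snap/lxd/common/lxd/database/global /root/lxd-global.bak   # full safety copy
  cd /var/snap/lxd/common/lxd/database/global
  rm 641720-642006                  # the most recent closed segment (example name)
  rm -f /var/snap/lxd/common/lxd/unix.socket
  lxd --debug --group lxd           # if it starts cleanly, Ctrl+C, then:
  snap start lxd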