LXD 4.19 has been released

Introduction

The LXD team is very excited to announce the release of LXD 4.19!

This release is very busy on the bugfixing front, with a lot of improvements around clustering, including improved shutdown logic, easier disaster recovery, improved logging and better handling of a variety of network setups.

There are also a number of fixes and minor improvements to the recently added network forwards feature, which now integrates properly with BGP and gains a new lxc network forward get command.

The headline feature for this release is the addition of instance metrics: a new API endpoint (/1.0/metrics) which exposes metrics in the text OpenMetrics format, suitable for scraping with tools like Prometheus.

Enjoy!

New features and highlights

Instance metrics

A frequent request over the years has been for a better way to track instance resource usage. This becomes particularly critical on busy systems with many projects or even multiple clustered servers.

To handle this, LXD 4.19 introduces a new /1.0/metrics API endpoint which exposes metrics in the text OpenMetrics format, suitable for use with Prometheus and similar tools.

As it stands it provides a variety of metrics related to:

  • CPU
  • Memory
  • Disk
  • Network
  • Processes

In general, we’ve tried to keep the metric names aligned with those of node-exporter, which should make adapting existing dashboards and tooling pretty easy.

The endpoint is always available to authenticated users, but it can also be configured to listen on an additional address with core.metrics_address. Additional trusted certificates can be added that are restricted to the metrics interface only (lxc config trust add --type metrics).
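
For example, here is a minimal sketch of setting this up (the listen address, port and certificate file names are just placeholders for illustration, not values from the documentation):

# Expose the metrics API on an extra address (port 8444 chosen arbitrarily)
lxc config set core.metrics_address "[::]:8444"

# Trust a client certificate that is restricted to the metrics interface
lxc config trust add metrics.crt --type metrics

# Manually scrape the endpoint to check the output
curl -k --cert metrics.crt --key metrics.key https://127.0.0.1:8444/1.0/metrics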

Example output at: https://gist.github.com/stgraber/ab7f204fb4bf53dbe134f6460bf41470

Specification: [LXD] Metric exporter for instances
Documentation: Instance metrics exporter | LXD
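
On the Prometheus side, a scrape job along these lines should work against that endpoint (the target, port and certificate paths below are assumptions for illustration, not taken from the LXD documentation):

scrape_configs:
  - job_name: 'lxd'
    metrics_path: '/1.0/metrics'
    scheme: 'https'
    static_configs:
      - targets: ['lxd.example.net:8444']
    tls_config:
      cert_file: 'metrics.crt'
      key_file: 'metrics.key'
      # LXD uses a self-signed server certificate by default
      insecure_skip_verify: true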

Reworked output for lxc cluster list

The lxc cluster list output has changed: instead of a boolean YES/NO DATABASE column, it now shows a text list of roles.

Currently the roles are database or database-standby, but more will be added in the future. This makes it easier to understand exactly what each clustered server is doing.

stgraber@dakara:~$ lxc cluster list s-dcmtl-cluster:
+---------+-------------------------------------+----------+--------------+----------------+----------------------+--------+-------------------+
|  NAME   |                 URL                 |  ROLES   | ARCHITECTURE | FAILURE DOMAIN |     DESCRIPTION      | STATE  |      MESSAGE      |
+---------+-------------------------------------+----------+--------------+----------------+----------------------+--------+-------------------+
| abydos  | https://[2602:fd23:8:200::100]:8443 | database | x86_64       | default        | HIVE - top server    | ONLINE | Fully operational |
+---------+-------------------------------------+----------+--------------+----------------+----------------------+--------+-------------------+
| langara | https://[2602:fd23:8:200::101]:8443 | database | x86_64       | default        | HIVE - middle server | ONLINE | Fully operational |
+---------+-------------------------------------+----------+--------------+----------------+----------------------+--------+-------------------+
| orilla  | https://[2602:fd23:8:200::102]:8443 | database | x86_64       | default        | HIVE - bottom server | ONLINE | Fully operational |
+---------+-------------------------------------+----------+--------------+----------------+----------------------+--------+-------------------+

Export of block custom storage volumes

It’s now possible to export block custom storage volumes using lxc storage volume export just as it is for filesystem volumes.

Note however that block custom storage volumes tend to be significantly larger than filesystem ones, so exporting and importing them can consume quite a lot of resources.
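
As a quick sketch (the pool and volume names here are made up for the example), exporting and re-importing a block custom volume looks just like the filesystem case:

# Export a custom block volume to a tarball
lxc storage volume export default my-block-vol my-block-vol.tar.gz

# Import it back, optionally under a different volume name
lxc storage volume import default my-block-vol.tar.gz my-block-vol-restored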

Complete changelog

Here is a complete list of all changes in this release:

Full commit list
  • lxd/util/net: Update CanonicalNetworkAddress to return canconical IP
  • lxd/util/net: Update IsAddressCovered to use net.IP when comparing IP equality
  • lxd/endpoints/cluster: Improve error message in ClusterUpdateAddress
  • lxd/endpoints/network: Improve error message in NetworkUpdateAddress
  • lxd/util/net: Improve comment in CanonicalNetworkAddress
  • lxd/main/init/interactive: Use util.CanonicalNetworkAddress in askClustering
  • lxd/main/init: Use util.CanonicalNetworkAddress when constructing address from join token
  • lxd/main/init: Ensure config.Cluster.ServerAddress and config.Cluster.ClusterAddress are in canonical form
  • doc: Adds network forwards to left hand nav
  • doc/server: Fix incorrect default for routerid
  • lxd/endpoints/endpoints: require set network listener before checking coverage
  • test/suites/clustering: add enable clustering test on lxd reload
  • lxd/resources/network: send not-found error instead of internal error
  • shared/util: rename DefaultPort to HTTPSDefaultPort
  • lxd/util/net: specify default port to CanonicalNetworkAddress
  • lxd/util/net: specify default port to CanonicalNetworkAddressFromAddressAndPort
  • shared/util: add HTTPDefaultPort
  • lxd/endpoints/pprof: use HTTP port instead of HTTPS for debug address
  • lxd/node/config: Canonicalize core.debug_address
  • lxd/daemon: Move ahead startTime
  • lxd/warnings: Add ResolveWarningsOlderThan
  • lxd/daemon: Resolve warnings earlier than startTime
  • lxc: Fix aliases containing @ARGS@
  • lxd/db/raft: rename RemoteRaftNode to RemoveRaftNode
  • lxd/db/node/update: Add updateFromV41
  • lxd/db/node/schema: update schema
  • lxd/db/raft: add Name field to RaftNode
  • lxd/storage/driver/zfs: Fix ListVolumes with custom zpool
  • lxd/node/raft: use empty Name if not yet clustered
  • lxd/cluster: handle Name field for RaftNode
  • lxd/cluster/gateway: populate RaftNode Name from global database
  • lxd/api/cluster: add Name field to internalRaftNode struct
  • lxd/main/cluster: add name to ‘lxd cluster show/edit’
  • lxd/test: add Name field to RaftNode tests
  • lxd/cluster/recover: append to patch.global.sql if exists
  • lxd/main/cluster: make segmentID a comment instead of struct field
  • doc/clustering: update ‘lxd cluster edit’ docs
  • lxd: Fix swagger definitions to avoid conflicts
  • doc/rest-api: Refresh swagger YAML
  • doc/instances: Clarify default CPU/RAM for VMs
  • lxd/networks: Handle stateful DHCPv6 leases
  • lxd/networks: Add EUI64 records to leases
  • lxd/device/nic: ensure instance device IP is different from parent network
  • lxd/network/driver/common: Adds bgpNextHopAddress function
  • lxd/network/driver/common: Reduce duplication of logic in bgpSetupPrefixes and uses bgpNextHopAddress
  • lxd/network/driver/common: Removes unnecessary function n.bgpClearPrefixes
  • lxd/network/driver/common: Improve errors in bgpSetup
  • lxd/network/driver/common: Clear address forward BGP prefixes in bgpClear
  • lxd/network/driver/bridge: Setup BGP prefix export in forwardsSetup
  • lxd/daemon/storage: unmount all storage pools on shutdown
  • lxd/project: Change restrictions check function in CheckClusterTargetRestriction
  • lxd/network/network/interface: Adds clientType arg to Forward management functions
  • lxd/network/driver: Add clientType to Forward management functions
  • lxd/network/driver/common: Remove empty newline
  • lxd/network/forwards: Pass clientType into Forward management functions
  • lxd/network/driver/ovn: Update Forward management functions to only apply changes for ClientTypeNormal requests
  • lxd/network/forwards: Removes duplicate record check from networkForwardsPost
  • lxd/network/driver: Moves duplicate forward record check into drivers
  • lxd/network/driver/ovn: Adds cluster member notification to Forward management functions
  • lxd/network/driver/ovn: Refresh BGP prefixes on Forward management
  • lxd/network/driver/common: Include exporting forward addresses in bgpSetup
  • lxd/network/driver/bridge: Remove BGP forward address refresh from forwardSetup
  • lxd/network/driver/bridge: Rename forwardsSetup to forwardSetupFirewall
  • test: Adds BGP prefix export checks to forward tests
  • lxd/cluster/heartbeat: Adds Name field to APIHeartbeatMember
  • lxd/cluster/heartbeat: Preallocate raftNodeMap in Update
  • lxd/cluster/heartbeat: Populate Name in Update
  • lxd/cluster/gateway: Update currentRaftNodes to use a single query to get cluster member info
  • lxd/cluster/gateway: Preallocate raftNodes slice for efficiency
  • lxd/cluster/gateway: Do not query leader cluster DB to enrich raft member name in HandlerFuncs
  • lxd/cluster/recover: Preallocate nodes in Reconfigure
  • lxd/util: Respect modprobe configuration
  • shared/instance: don’t allow ‘limits.memory’ to be 0
  • lxd/cgroup: Add GetMemoryStats
  • lxd/cgroup: Add GetIOStats
  • lxd/cgroup: Add GetCPUAcctUsageAll
  • lxd/cgroup: Add GetTotalProcesses
  • lxd/response: Add SyncResponsePlain
  • lxd/storage/filesystem: Add FSTypeToName
  • lxd/network/openvswitch/ovn: Work around a bug in lr-nat-del in ovn-nbctl in LogicalRouterDNATSNATAdd
  • shared/api/network/forward: Fix api extension references
  • lxd/network/forwards: Use consistent terminology in network address forward swagger comments
  • doc/rest-api: Refresh swagger YAML
  • test: Remove restart tests that don’t use --force
  • lxd/daemon/storage: Skip unmounting LVM pools in daemonStorageUnmount
  • lxc: Cleanup LXD client imports
  • lxd: Cleanup LXD client imports
  • lxc-to-lxd: Cleanup LXD client imports
  • lxc/cluster: Show roles instead of database column
  • tests: Support for showing roles by
  • i18n: Update translation templates
  • doc: update link to rest-api.yaml
  • Typo
  • lxd/device/tpm: Require path only for containers
  • lxd/instance: Fix response for patch
  • swagger: Fix return code for operations
  • doc/rest-api: Refresh swagger YAML
  • lxd/endpoints/network: Specify protocol version for 0.0.0.0 address
  • doc: Document recently added architectures
  • seccomp: Add riscv64 syscall mappings
  • shared/api: Add CertificateTypeMetrics
  • lxd/db: Add CertificateTypeMetrics
  • lxd: Check metrics certificates
  • lxc/config_trust: Allow adding metrics certificates
  • lxd/metrics: Add API types
  • lxd/metrics: Add types
  • lxd/metrics: Add helper functions
  • lxd: Add metrics related fields to daemon
  • lxd: Add /1.0/metrics endpoint
  • lxd/instance/drivers: Add Metrics function
  • lxd-agent: Add metrics endpoint
  • api: Add metrics API extension
  • i18n: Update translation templates
  • doc/rest-api: Refresh swagger YAML
  • doc: Add metrics.md
  • doc: Mention core.metrics_address
  • test/suites: Add lxd/metrics to static analysis
  • shared/util: Add HTTPSMetricsDefaultPort
  • lxd/node: Add core.metrics_address config key
  • lxd/endpoints: Add metrics endpoint
  • lxd: Handle metrics server
  • test: Add metrics test
  • lxd/daemon/storage: Renames daemonStorageUnmount to daemonStorageVolumesUnmount
  • lxd/daemon: Rename numRunningContainers numRunningInstances
  • Fix documented HTTP return code in console POST
  • doc/rest-api: Refresh swagger YAML
  • lxd/main/daemon: Rework cmdDaemon shutdown process
  • lxd/storage/drivers/driver/lvm: Fix Unmount to be more reliable
  • lxd/storage/drivers/driver/lvm: Fix Mount to be more reliable
  • lxd/main/daemon: Removes LVM shutdown unmount workaround
  • doc/rest-api: Add missing entry for 112 (error)
  • lxd/instance/drivers: Move raw.lxc config load to separate function
  • lxd/instance/drivers: Fix raw.lxc handling for shutdown/stop
  • lxd/storage/filesystem: Removes duplicated constants from unix package
  • lxd/storage/filesystem/fs: Removes duplicated constants from unix package
  • lxd/storage/filesystem/fs: Update FSTypeToName to work on 32bit platforms
  • lxd/instance/drivers/driver/lxc: filesystem.FSTypeToName usage
  • lxd-agent/metrics: filesystem.FSTypeToName usage
  • lxd/storage/drivers/driver/lvm: Skip unmount
  • lxd/cgroup: Implement CPU usage for cgroup v2
  • shared/json: Removes DebugJson from shared
  • lxd/cgroup: Fix logging in cgroup init
  • lxd/util/http: Adds DebugJSON function
  • lxd/util/http: Adds debugLogger arg to WriteJSON
  • lxd/main: Set response debug mode based on --debug flag
  • lxd/response/response: Reworks syncResponse to use util.WriteJSON
  • lxd/response/response: Adds util.DebugJSON support to errorResponse
  • lxd/operations/response: Adds util.WriteJSON support to operationResponse
  • lxd/operations/response: Adds util.WriteJSON support to forwardedOperationResponse
  • lxd/endpoints/endpoints/test: util.WriteJSON usage
  • lxd/cluster/notify/test: util.WriteJSON usage
  • lxd/devlxd: Adds util.WriteJSON support to hoistReq
  • lxd-agent/devlxd: Add util.WriteJSON support to hoistReq
  • lxd-agent/server: util.DebugJSON usage
  • lxd/daemon: Clearer logging of API requests in createCmd
  • lxd/daemon: util.DebugJSON usage in createCmd
  • lxd/cluster/gateway: util.WriteJSON usage
  • lxd/response/response: Use api.ResponseRaw in error response
  • client/interfaces: Corrects typo in GetNetworkForward
  • lxd/db/network/forwards: Fix error handling in GetNetworkForward
  • lxd/instances: containerStopList → instanceStopList
  • lxd/instances: Handle VMs in instancesOnDisk
  • lxd/instances: s/containers/instances/
  • lxd/instances: Rename old container variables
  • lxd/instances: Check DB before calling VolatileSet
  • lxc/network/forward: Add lxc network forward get command
  • i18n: Update translation templates
  • lxd/util: Handle ‘:8443’ syntax in ListenAddresses
  • lxd/util/http: Improve comment on ListenAddresses
  • lxd/util/http: Improve argument name in configListenAddress
  • lxd/util/http: Use net.JoinHostPort in ListenAddresses rather than wrapping IPv6 addresses in []
  • lxd/util/http: Improve ListenAddresses by breaking the parsing into phases
  • lxd/util/http/test: Adds ExampleListenAddresses function
  • lxd: Remove public facing errors that mention cluster “node”
  • shared/api/url: Adds URL builder type and functions
  • lxd/network/network/utils: Updates UsedBy to use api.URLBuild
  • doc/metrics: typo fix
  • lxc/file: use flagMkdir to create dirs on lxc pull
  • lxc/file: add DirMode constant for ‘lxc file’
  • lxd/api/cluster: only change member role from leader
  • test/suites/clustering: wait for node shutdown to propagate to members
  • lxd/storage/drivers: Support generic custom block volume backup/restore
  • lxd/storage/drivers/zfs: Drop restriction on custom block volume backup/restore
  • lxd/storage/drivers/btrfs: Drop restriction on custom block volume backup/restore
  • lxd/main/shutdown: Updates cmdShutdown to handle /internal/shutdown being synchronous
  • lxd/api/internal: Updates shutdown request to wait for d.shutdownDoneCtx
  • lxd/main/daemon: Call d.shutdownDoneCancel when daemon function ends
  • lxd/daemon: Adds shutdownDoneCtx context to indicate shutdown has finished
  • lxd: d.shutdownCtx usage
  • lxd/main/daemon: d.shutdownCancel usage in daemon function
  • lxc/config_trust: Delete only works on fingerprints
  • i18n: Update translation templates
  • test: Log PID of process being killed
  • test: Require node removal to succeed in test_clustering_remove_leader
  • lxd/storage/drivers: Checks that mount refCount is zero in all drivers
  • lxd/storage/drivers/driver/cephfs/volumes: Adds mount ref counting
  • lxd/device/disk: Use errors.Is() when checking for storageDrivers.ErrInUse in Update
  • lxd/device/disk: Ignore storageDrivers.ErrInUse error from pool.UnmountCustomVolume in postStop
  • lxd/storage/drivers: Log volName in UnmountVolume
  • lxd/instance/drivers: Add instance type to metrics
  • lxd: add core scheduling support
  • lxd/response/response: Adds manualResponse type
  • lxd/api/cluster: Removes arbitrary 3s wait in clusterPutDisable which was causing test issues
  • test: Wait for daemons to exit in test_clustering_remove_leader
  • lxd/api/cluster: Add logging to clusterPutDisable
  • test: Detect if clustering network needs removing
  • lxd/qemu: Disable large decrementor on ppc64le
  • lxd/daemon: Reworks shutdown sequence
  • lxd/daemon: Reworks Stop
  • lxd/api/cluster: d.shutdownCtx.Err usage
  • lxd/api/internal: d.shutdownCtx.Err usage
  • lxd: daemon.Stop usage
  • lxd/operations: Updates waitForOperations to accept context
  • lxd/main/shutdown: Require valid response from /internal/shutdown in cmdShutdown
  • lxd: db.OpenCluster usage
  • lxd/cluster/membership: Update notifyNodesUpdate to wait until all heartbeats have been sent
  • lxd/db/db: Replace clusterMu and closing with closingCtx in OpenCluster
  • lxd/api/cluster: Improves logging
  • lxd/api/internal: Rework internalShutdown to return valid response as LXD is shutdown
  • lxd/daemon: db.OpenCluster usage in init
  • lxd/daemon: Improved logging and error handling in init
  • lxd/main/daemon: Reworks cmdDaemon to use d.shutdownDoneCh and call d.Stop()
  • test: Increase timeouts on ping tests
  • lxd/daemon: Adds daemon started log
  • lxd/daemon: Whitespace in NodeRefreshTask
  • lxd/api/cluster: Improve logging in handoverMemberRole
  • lxd/api/cluster: Adds cluster logging
  • test: Addition test logging
  • lxd/cluster/membership: Improve logging in Rebalance
  • lxd/daemon: Stop clustering tasks during Stop
  • lxd/api/cluster: Improve logging in clusterNodeDelete
  • test: Try and kill LXD daemon that fails to start
  • lxd/dameon: Removes unnecessary go routines in NodeRefreshTask
  • lxd/db/db: Use db.PingContext in OpenCluster
  • lxd/db/db: Rework logging and error handling in OpenCluster
  • lxc/file: Fix file push help message
  • lxd/storage/drivers: Handle symlinks when walking file tree
  • test/suites/backup: Add cephfs
  • test/suites/backup: Check file content for storage volume backups
  • i18n: Update translation templates
  • lxd/cgroup: Fix GetIOStats on cgroup2
  • lxd/endpoints/network/test: Test tcp4 interface and request via IPv6
  • lxd/endpoints/network/test: Test tcp4 connection with configured 0.0.0.0 network address
  • i18n: Update translations from weblate
  • gomod: Update dependencies

Try it for yourself

This new LXD release is already available for you to try on our demo service.

Downloads

The release tarballs can be found on our download page.

Binary builds are also available for:

  • Linux: snap install lxd
  • MacOS: brew install lxc
  • Windows: choco install lxc

https://youtu.be/Ic3Y4ziqT34


This is now rolling out to our stable snap users.

Note that this is the first LXD release where we’re using a phased rollout, primarily to avoid creating a lot of stress on the infrastructure as everyone updates. The full rollout is expected to take up to 48h, but we may speed it up if we see everything going smoothly.

Looks like I have a problem: last night one of the nodes in my cluster tried to update to LXD 4.19 (via snap). However, the other nodes did not update, so the node fell off the cluster. Since then I have not been able to bring it back. I reverted LXD to version 4.18, however LXD does not start, giving me the error:

lxd.daemon[200097]: Error: Failed to open cluster database: failed to ensure schema: this node's version is behind, please upgrade

Looks like snap still thinks that a new version is available:

$ snap refresh --list
Name  Version  Rev    Publisher   Notes
lxd   4.19     21624  canonical✓  -

However, version 4.19 is not currently listed in the stable channel which is being tracked:

$ snap info lxd
...
snap-id:      J60k4JY0HppjwOjW8dZdYc8obXKxujRu
tracking:     latest/stable/ubuntu-20.04
refresh-date: today at 16:18 UTC
channels:
  latest/stable:    4.18        2021-09-13 (21497) 75MB -
  latest/candidate: 4.19        2021-10-04 (21624) 76MB -
  latest/beta:      ↑                                   
  latest/edge:      git-e6523c3 2021-10-05 (21636) 76MB -
...
installed:          4.18                   (21497) 75MB in-cohort

Other nodes on version 4.18 show identical lxd info except for the final in-cohort. What does that mean? They do not list any updates in snap refresh --list.

Also, after reverting to 4.18, the node can no longer update to 4.19: refreshing gets stuck.

Any idea how to recover?

So the fact that you have servers that aren’t in the cohort would explain the different versions; it’s puzzling that they don’t have the cohort key though…

To recover, run on all servers:

  • snap switch lxd --cohort=+
  • snap refresh lxd

This should get them all on 4.19.


I just tried to restart lxd on one of the other nodes:

sudo snap disable lxd
sudo snap enable lxd

After that, this node also started seeing the update:

$ snap refresh --list
Name  Version  Rev    Publisher   Notes
lxd   4.19     21624  canonical✓  -

However, 4.19 is still listed in the latest/candidate channel. Confused…

It’s normal that you don’t see 4.19 in stable yet; that’s because of the phased rollout. The in-cohort line should however appear on all clustered servers, otherwise you can get into a situation where some get the new release and some don’t.

Can you show a journalctl -u snap.lxd.daemon -n 500 of a server which didn’t have the in-cohort listed?

Thanks! Adding --cohort=+ helped bring the update to 4.19 on 3 out of 4 servers. However, one server still does not see the update. I guess I need to wait?

Here you are (the hostname has been redacted):

-- Logs begin at Sat 2021-02-20 12:59:16 UTC, end at Tue 2021-10-05 17:07:19 UTC. --
Oct 05 16:36:13 server1 lxd.daemon[3386139]:   3: fd:   9: perf_event
Oct 05 16:36:13 server1 lxd.daemon[3386139]:   4: fd:  10: freezer
Oct 05 16:36:13 server1 lxd.daemon[3386139]:   5: fd:  11: rdma
Oct 05 16:36:13 server1 lxd.daemon[3386139]:   6: fd:  12: net_cls,net_prio
Oct 05 16:36:13 server1 lxd.daemon[3386139]:   7: fd:  13: pids
Oct 05 16:36:13 server1 lxd.daemon[3386139]:   8: fd:  14: cpu,cpuacct
Oct 05 16:36:13 server1 lxd.daemon[3386139]:   9: fd:  15: blkio
Oct 05 16:36:13 server1 lxd.daemon[3386139]:  10: fd:  16: hugetlb
Oct 05 16:36:13 server1 lxd.daemon[3386139]:  11: fd:  17: memory
Oct 05 16:36:13 server1 lxd.daemon[3386139]:  12: fd:  19: cpuset
Oct 05 16:36:13 server1 lxd.daemon[3386139]: Kernel supports pidfds
Oct 05 16:36:13 server1 lxd.daemon[3386139]: Kernel does not support swap accounting
Oct 05 16:36:13 server1 lxd.daemon[3386139]: api_extensions:
Oct 05 16:36:13 server1 lxd.daemon[3386139]: - cgroups
Oct 05 16:36:13 server1 lxd.daemon[3386139]: - sys_cpu_online
Oct 05 16:36:13 server1 lxd.daemon[3386139]: - proc_cpuinfo
Oct 05 16:36:13 server1 lxd.daemon[3386139]: - proc_diskstats
Oct 05 16:36:13 server1 lxd.daemon[3386139]: - proc_loadavg
Oct 05 16:36:13 server1 lxd.daemon[3386139]: - proc_meminfo
Oct 05 16:36:13 server1 lxd.daemon[3386139]: - proc_stat
Oct 05 16:36:13 server1 lxd.daemon[3386139]: - proc_swaps
Oct 05 16:36:13 server1 lxd.daemon[3386139]: - proc_uptime
Oct 05 16:36:13 server1 lxd.daemon[3386139]: - shared_pidns
Oct 05 16:36:13 server1 lxd.daemon[3386139]: - cpuview_daemon
Oct 05 16:36:13 server1 lxd.daemon[3386139]: - loadavg_daemon
Oct 05 16:36:13 server1 lxd.daemon[3386139]: - pidfds
Oct 05 16:36:13 server1 lxd.daemon[3386139]: Reloaded LXCFS
Oct 05 16:36:13 server1 lxd.daemon[3387820]: => Re-using existing LXCFS
Oct 05 16:36:13 server1 lxd.daemon[3387820]: ==> Setting snap cohort
Oct 05 16:36:13 server1 lxd.daemon[3387820]: => Starting LXD
Oct 05 16:36:13 server1 lxd.daemon[3387994]: t=2021-10-05T16:36:13+0000 lvl=warn msg=" - Couldn't find the CGroup blkio.weight, disk priority will be ignored"
Oct 05 16:36:13 server1 lxd.daemon[3387994]: t=2021-10-05T16:36:13+0000 lvl=warn msg=" - Couldn't find the CGroup memory swap accounting, swap limits will be ignored"
Oct 05 16:36:13 server1 lxd.daemon[3387994]: t=2021-10-05T16:36:13+0000 lvl=warn msg="Dqlite: attempt 1: server 134.60.40.196:8443: no known leader"
Oct 05 16:36:13 server1 lxd.daemon[3387994]: t=2021-10-05T16:36:13+0000 lvl=eror msg="Failed to start the daemon: Failed to open cluster database: failed to ensure schema: this node's version is behind, please upgrad>
Oct 05 16:36:13 server1 lxd.daemon[3387994]: Error: Failed to open cluster database: failed to ensure schema: this node's version is behind, please upgrade
Oct 05 16:36:14 server1 lxd.daemon[3387820]: => LXD failed to start
Oct 05 16:36:14 server1 systemd[1]: snap.lxd.daemon.service: Main process exited, code=exited, status=1/FAILURE
Oct 05 16:36:14 server1 systemd[1]: snap.lxd.daemon.service: Failed with result 'exit-code'.
Oct 05 16:36:14 server1 systemd[1]: snap.lxd.daemon.service: Scheduled restart job, restart counter is at 9.
Oct 05 16:36:14 server1 systemd[1]: Stopped Service for snap application lxd.daemon.
Oct 05 16:36:14 server1 systemd[1]: Started Service for snap application lxd.daemon.
Oct 05 16:36:14 server1 lxd.daemon[3388042]: => Preparing the system (21497)
Oct 05 16:36:14 server1 lxd.daemon[3388042]: ==> Setting snap cohort
Oct 05 16:36:14 server1 lxd.daemon[3388042]: ==> Loading snap configuration
Oct 05 16:36:14 server1 lxd.daemon[3388042]: ==> Setting up mntns symlink (mnt:[4026532610])
Oct 05 16:36:14 server1 lxd.daemon[3388042]: ==> Setting up kmod wrapper
Oct 05 16:36:14 server1 lxd.daemon[3388042]: ==> Preparing /boot
Oct 05 16:36:14 server1 lxd.daemon[3388042]: ==> Preparing a clean copy of /run
Oct 05 16:36:14 server1 lxd.daemon[3388042]: ==> Preparing /run/bin
Oct 05 16:36:14 server1 lxd.daemon[3388042]: ==> Preparing a clean copy of /etc
Oct 05 16:36:15 server1 lxd.daemon[3388042]: ==> Preparing a clean copy of /usr/share/misc
Oct 05 16:36:15 server1 lxd.daemon[3388042]: ==> Setting up ceph configuration
Oct 05 16:36:15 server1 lxd.daemon[3388042]: ==> Setting up LVM configuration
Oct 05 16:36:15 server1 lxd.daemon[3388042]: ==> Rotating logs
Oct 05 16:36:15 server1 lxd.daemon[3388042]: ==> Setting up ZFS (0.8)
Oct 05 16:36:15 server1 lxd.daemon[3388042]: ==> Escaping the systemd cgroups
Oct 05 16:36:15 server1 lxd.daemon[3388042]: ====> Detected cgroup V1
Oct 05 16:36:15 server1 lxd.daemon[3388042]: ==> Escaping the systemd process resource limits

Oh, looks like this server has meanwhile updated by itself. Now everything is back online. Many thanks!

Glad you’re back online. It’s a bit odd that some servers didn’t have the cohort key set. We attempt to set it every time LXD starts, so you’d think it would have caught it by now…

I’ll put some extra logic in to ensure that this is always set before an in-cluster refresh; hopefully that helps avoid this.

Maybe it is a coincidence, but I just noticed that the node that fell off the cluster has the role database-standby. The other three nodes have the role database. As far as I understand, this means that this node does not participate in the database quorum.

Should just be a coincidence. LXD itself has no interaction with snapd and those database roles dynamically move around the cluster. Most likely what happened is that since that one server was offline, the roles were reshuffled so the 3 that are online would act as database voters.

Looks like I again have problems with snap updates of LXD. This time the auto-refresh on one of the cluster’s nodes got stuck. Any hint on how to recover?

$ snap changes
ID   Status  Spawn                   Ready               Summary
149  Doing   yesterday at 21:36 UTC  -                   Auto-refresh snaps "core20", "lxd"
150  Done    today at 11:57 UTC      today at 11:57 UTC  Refresh all snaps: no updates
$ snap tasks 149
Status  Spawn                   Ready                   Summary
Done    yesterday at 21:36 UTC  yesterday at 21:36 UTC  Ensure prerequisites for "core20" are available
Done    yesterday at 21:36 UTC  yesterday at 21:36 UTC  Download snap "core20" (1242) from channel "latest/stable"
Done    yesterday at 21:36 UTC  yesterday at 21:36 UTC  Fetch and check assertions for snap "core20" (1242)
Done    yesterday at 21:36 UTC  yesterday at 21:36 UTC  Mount snap "core20" (1242)
Done    yesterday at 21:36 UTC  yesterday at 21:36 UTC  Run pre-refresh hook of "core20" snap if present
Done    yesterday at 21:36 UTC  yesterday at 21:36 UTC  Stop snap "core20" services
Done    yesterday at 21:36 UTC  yesterday at 21:36 UTC  Remove aliases for snap "core20"
Done    yesterday at 21:36 UTC  yesterday at 21:36 UTC  Make current revision for snap "core20" unavailable
Done    yesterday at 21:36 UTC  yesterday at 22:01 UTC  Copy snap "core20" data
Done    yesterday at 21:36 UTC  yesterday at 22:01 UTC  Setup snap "core20" (1242) security profiles
Done    yesterday at 21:36 UTC  yesterday at 22:01 UTC  Make snap "core20" (1242) available to the system
Done    yesterday at 21:36 UTC  yesterday at 22:01 UTC  Automatically connect eligible plugs and slots of snap "core20"
Done    yesterday at 21:36 UTC  yesterday at 22:01 UTC  Set automatic aliases for snap "core20"
Done    yesterday at 21:36 UTC  yesterday at 22:01 UTC  Setup snap "core20" aliases
Done    yesterday at 21:36 UTC  yesterday at 22:01 UTC  Run post-refresh hook of "core20" snap if present
Done    yesterday at 21:36 UTC  yesterday at 22:01 UTC  Start snap "core20" (1242) services
Done    yesterday at 21:36 UTC  yesterday at 22:01 UTC  Remove data for snap "core20" (1081)
Done    yesterday at 21:36 UTC  yesterday at 22:01 UTC  Remove snap "core20" (1081) from the system
Done    yesterday at 21:36 UTC  yesterday at 22:01 UTC  Clean up "core20" (1242) install
Done    yesterday at 21:36 UTC  yesterday at 22:01 UTC  Run health check of "core20" snap
Done    yesterday at 21:36 UTC  yesterday at 22:01 UTC  Ensure prerequisites for "lxd" are available
Done    yesterday at 21:36 UTC  yesterday at 22:01 UTC  Download snap "lxd" (21902) from channel "latest/stable/ubuntu-20.04"
Done    yesterday at 21:36 UTC  yesterday at 22:01 UTC  Fetch and check assertions for snap "lxd" (21902)
Done    yesterday at 21:36 UTC  yesterday at 22:01 UTC  Mount snap "lxd" (21902)
Done    yesterday at 21:36 UTC  yesterday at 22:01 UTC  Run pre-refresh hook of "lxd" snap if present
Done    yesterday at 21:36 UTC  yesterday at 22:06 UTC  Stop snap "lxd" services
Done    yesterday at 21:36 UTC  yesterday at 22:06 UTC  Remove aliases for snap "lxd"
Done    yesterday at 21:36 UTC  yesterday at 22:06 UTC  Make current revision for snap "lxd" unavailable
Doing   yesterday at 21:36 UTC  -                       Copy snap "lxd" data
Do      yesterday at 21:36 UTC  -                       Setup snap "lxd" (21902) security profiles
Do      yesterday at 21:36 UTC  -                       Make snap "lxd" (21902) available to the system
Do      yesterday at 21:36 UTC  -                       Automatically connect eligible plugs and slots of snap "lxd"
Do      yesterday at 21:36 UTC  -                       Set automatic aliases for snap "lxd"
Do      yesterday at 21:36 UTC  -                       Setup snap "lxd" aliases
Do      yesterday at 21:36 UTC  -                       Run post-refresh hook of "lxd" snap if present
Do      yesterday at 21:36 UTC  -                       Start snap "lxd" (21902) services
Do      yesterday at 21:36 UTC  -                       Remove data for snap "lxd" (21780)
Do      yesterday at 21:36 UTC  -                       Remove snap "lxd" (21780) from the system
Do      yesterday at 21:36 UTC  -                       Clean up "lxd" (21902) install
Do      yesterday at 21:36 UTC  -                       Run configure hook of "lxd" snap if present
Do      yesterday at 21:36 UTC  -                       Run health check of "lxd" snap
Doing   yesterday at 21:36 UTC  -                       Handling re-refresh of "core20", "lxd" as needed
$ snap info lxd
...
snap-id:  J60k4JY0HppjwOjW8dZdYc8obXKxujRu
tracking: latest/stable/ubuntu-20.04
channels:
  latest/stable:    4.20        2021-11-10 (21858) 76MB -
  latest/candidate: 4.20        2021-11-14 (21902) 76MB -
  latest/beta:      ↑                                   
  latest/edge:      git-50458e4 2021-11-17 (21912) 76MB -
  4.20/stable:      4.20        2021-11-10 (21858) 76MB -
  4.20/candidate:   4.20        2021-11-14 (21902) 76MB -
  4.20/beta:        ↑                                   
  4.20/edge:        ↑                                   
  4.19/stable:      4.19        2021-10-27 (21780) 76MB -
  4.19/candidate:   4.19        2021-11-04 (21836) 76MB -
  4.19/beta:        ↑                                   
  4.19/edge:        ↑                                   
  4.0/stable:       4.0.8       2021-11-06 (21835) 70MB -
  4.0/candidate:    4.0.8       2021-11-04 (21835) 70MB -
  4.0/beta:         ↑                                   
  4.0/edge:         git-79ea78f 2021-11-04 (21848) 70MB -
  3.0/stable:       3.0.4       2019-10-10 (11348) 55MB -
  3.0/candidate:    3.0.4       2019-10-10 (11348) 55MB -
  3.0/beta:         ↑                                   
  3.0/edge:         git-81b81b9 2019-10-10 (11362) 55MB -
  2.0/stable:       2.0.12      2020-08-18 (16879) 38MB -
  2.0/candidate:    2.0.12      2021-03-22 (19859) 39MB -
  2.0/beta:         ↑                                   
  2.0/edge:         git-82c7d62 2021-03-22 (19857) 39MB -
installed:          4.20                   (21858) 76MB disabled,in-cohort
$ snap info core20
name:      core20
summary:   Runtime environment based on Ubuntu 20.04
publisher: Canonical✓
store-url: https://snapcraft.io/core20
contact:   https://github.com/snapcore/core20/issues
license:   unset
description: |
  The base snap based on the Ubuntu 20.04 release.
type:         base
snap-id:      DLqre5XGLbDqg9jPtiAhRRjDuPVa5X1q
tracking:     latest/stable
refresh-date: yesterday at 22:01 UTC
channels:
  latest/stable:    20210928 2021-10-09 (1169) 64MB -
  latest/candidate: 20211115 2021-11-16 (1242) 64MB -
  latest/beta:      20211117 2021-11-17 (1245) 64MB -
  latest/edge:      20211117 2021-11-17 (1250) 64MB -
installed:          20211115            (1242) 64MB base

On another node of the cluster:

$ snap info lxd
...
snap-id:      J60k4JY0HppjwOjW8dZdYc8obXKxujRu
tracking:     latest/stable
refresh-date: yesterday at 17:05 UTC
channels:
  latest/stable:    4.20        2021-11-10 (21858) 76MB -
  latest/candidate: 4.20        2021-11-14 (21902) 76MB -
  latest/beta:      ↑                                   
  latest/edge:      git-50458e4 2021-11-17 (21912) 76MB -
  4.20/stable:      4.20        2021-11-10 (21858) 76MB -
  4.20/candidate:   4.20        2021-11-14 (21902) 76MB -
  4.20/beta:        ↑                                   
  4.20/edge:        ↑                                   
  4.19/stable:      4.19        2021-10-27 (21780) 76MB -
  4.19/candidate:   4.19        2021-11-04 (21836) 76MB -
  4.19/beta:        ↑                                   
  4.19/edge:        ↑                                   
  4.0/stable:       4.0.8       2021-11-06 (21835) 70MB -
  4.0/candidate:    4.0.8       2021-11-04 (21835) 70MB -
  4.0/beta:         ↑                                   
  4.0/edge:         git-79ea78f 2021-11-04 (21848) 70MB -
  3.0/stable:       3.0.4       2019-10-10 (11348) 55MB -
  3.0/candidate:    3.0.4       2019-10-10 (11348) 55MB -
  3.0/beta:         ↑                                   
  3.0/edge:         git-81b81b9 2019-10-10 (11362) 55MB -
  2.0/stable:       2.0.12      2020-08-18 (16879) 38MB -
  2.0/candidate:    2.0.12      2021-03-22 (19859) 39MB -
  2.0/beta:         ↑                                   
  2.0/edge:         git-82c7d62 2021-03-22 (19857) 39MB -
installed:          4.20                   (21902) 76MB in-cohort
$ snap info core20
name:      core20
summary:   Runtime environment based on Ubuntu 20.04
publisher: Canonical✓
store-url: https://snapcraft.io/core20
contact:   https://github.com/snapcore/core20/issues
license:   unset
description: |
  The base snap based on the Ubuntu 20.04 release.
type:         base
snap-id:      DLqre5XGLbDqg9jPtiAhRRjDuPVa5X1q
tracking:     latest/stable
refresh-date: 39 days ago, at 00:22 UTC
channels:
  latest/stable:    20210928 2021-10-09 (1169) 64MB -
  latest/candidate: 20211115 2021-11-16 (1242) 64MB -
  latest/beta:      20211117 2021-11-17 (1245) 64MB -
  latest/edge:      20211117 2021-11-17 (1250) 64MB -
installed:          20210928            (1169) 64MB base
$ snap refresh --list
All snaps up to date.

Could it be related to the in-cohort updates?

Here is the LXD log from the node that went offline (hostname and IP addresses have been redacted):

journalctl -u snap.lxd.daemon -n 100
-- Logs begin at Fri 2021-03-26 12:26:54 UTC, end at Wed 2021-11-17 12:59:13 UTC. --
Nov 16 11:03:24 server1 lxd.daemon[2148742]: github.com/lxc/lxd/lxd/storage/drivers.(*btrfs).getSubvolumes(0xc03422c1e0, {0xc02495d110, 0x62})
Nov 16 11:03:24 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/storage/drivers/driver_btrfs_utils.go:62 +0xee
Nov 16 11:03:24 server1 lxd.daemon[2148742]: github.com/lxc/lxd/lxd/storage/drivers.(*btrfs).deleteSubvolume(0xc03422c1e0, {0xc02495d110, 0x62}, 0x1)
Nov 16 11:03:24 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/storage/drivers/driver_btrfs_utils.go:162 +0xa7
Nov 16 11:03:24 server1 lxd.daemon[2148742]: github.com/lxc/lxd/lxd/storage/drivers.(*btrfs).DeleteVolumeSnapshot(0xc03422c1e0, {{0xc0c4619960, 0x20}, {0xc0e01e2ec5, 0x5}, 0xc071f83c80, {0x18a5d1b, 0xa}, {0x18a617b, 0xa}, ...}, ...)
Nov 16 11:03:24 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/storage/drivers/driver_btrfs_volumes.go:1319 +0xb1
Nov 16 11:03:24 server1 lxd.daemon[2148742]: github.com/lxc/lxd/lxd/storage.(*lxdBackend).DeleteInstanceSnapshot(0xc04f45fd40, {0x1bfb578, 0xc05e6de6e0}, 0xc0c5b62130)
Nov 16 11:03:24 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/storage/backend_lxd.go:2338 +0x7c3
Nov 16 11:03:24 server1 lxd.daemon[2148742]: github.com/lxc/lxd/lxd/instance/drivers.(*lxc).Delete(0xc05e6de6e0, 0x1)
Nov 16 11:03:24 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/instance/drivers/driver_lxc.go:3635 +0x484
Nov 16 11:03:24 server1 lxd.daemon[2148742]: main.pruneExpiredContainerSnapshots({0xc0a2a20500, 0x77ba6a}, 0x1bea470, {0xc024265100, 0x8, 0x10100000174bd80})
Nov 16 11:03:24 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/instance.go:612 +0x88
Nov 16 11:03:24 server1 lxd.daemon[2148742]: main.pruneExpiredContainerSnapshotsTask.func1.1(0x41a634)
Nov 16 11:03:24 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/instance.go:575 +0x31
Nov 16 11:03:24 server1 lxd.daemon[2148742]: github.com/lxc/lxd/lxd/operations.(*Operation).Run.func1(0xc049c71560, 0xc020525710)
Nov 16 11:03:24 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/operations/operations.go:270 +0x42
Nov 16 11:03:24 server1 lxd.daemon[2148742]: created by github.com/lxc/lxd/lxd/operations.(*Operation).Run
Nov 16 11:03:24 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/operations/operations.go:269 +0x128
Nov 16 11:03:24 server1 lxd.daemon[2148742]: goroutine 316834 [syscall, 346 minutes]:
Nov 16 11:03:24 server1 lxd.daemon[2148742]: syscall.Syscall6(0x106, 0xffffffffffffff9c, 0xc164b6c820, 0xc0a9d7ad38, 0x100, 0x0, 0x0)
Nov 16 11:03:24 server1 lxd.daemon[2148742]:         /snap/go/8489/src/syscall/asm_linux_amd64.s:43 +0x5
Nov 16 11:03:24 server1 lxd.daemon[2148742]: syscall.fstatat(0x0, {0xc164b6c750, 0xca}, 0xc0a9d7ad38, 0xca)
Nov 16 11:03:24 server1 lxd.daemon[2148742]:         /snap/go/8489/src/syscall/zsyscall_linux_amd64.go:1441 +0x10f
Nov 16 11:03:24 server1 lxd.daemon[2148742]: syscall.Lstat(...)
Nov 16 11:03:24 server1 lxd.daemon[2148742]:         /snap/go/8489/src/syscall/syscall_linux_amd64.go:74
Nov 16 11:03:24 server1 lxd.daemon[2148742]: os.lstatNolog.func1(...)
Nov 16 11:03:24 server1 lxd.daemon[2148742]:         /snap/go/8489/src/os/stat_unix.go:46
Nov 16 11:03:24 server1 lxd.daemon[2148742]: os.ignoringEINTR(...)
Nov 16 11:03:24 server1 lxd.daemon[2148742]:         /snap/go/8489/src/os/file_posix.go:246
Nov 16 11:03:24 server1 lxd.daemon[2148742]: os.lstatNolog({0xc164b6c750, 0x3})
Nov 16 11:03:24 server1 lxd.daemon[2148742]:         /snap/go/8489/src/os/stat_unix.go:45 +0x5b
Nov 16 11:03:24 server1 lxd.daemon[2148742]: os.Lstat({0xc164b6c750, 0xca})
Nov 16 11:03:24 server1 lxd.daemon[2148742]:         /snap/go/8489/src/os/stat.go:22 +0x34
Nov 16 11:03:24 server1 lxd.daemon[2148742]: path/filepath.walk({0xc002166360, 0x89}, {0x1b9f358, 0xc00eec2410}, 0xc002f2d430)
Nov 16 11:03:24 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:436 +0x1db
Nov 16 11:03:24 server1 lxd.daemon[2148742]: path/filepath.walk({0xc003600900, 0x7e}, {0x1b9f358, 0xc00eec2340}, 0xc002f2d430)
Nov 16 11:03:24 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:442 +0x28f
Nov 16 11:03:24 server1 lxd.daemon[2148742]: path/filepath.walk({0xc003600500, 0x78}, {0x1b9f358, 0xc00eec2270}, 0xc002f2d430)
Nov 16 11:03:24 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:442 +0x28f
Nov 16 13:41:08 server1 lxd.daemon[2148194]: => LXD failed with return code 2
Nov 16 13:41:08 server1 systemd[1]: snap.lxd.daemon.service: Main process exited, code=exited, status=1/FAILURE
Nov 16 13:41:08 server1 systemd[1]: snap.lxd.daemon.service: Failed with result 'exit-code'.
Nov 16 13:41:08 server1 systemd[1]: snap.lxd.daemon.service: Scheduled restart job, restart counter is at 1.
Nov 16 13:41:08 server1 systemd[1]: Stopped Service for snap application lxd.daemon.
Nov 16 13:41:08 server1 systemd[1]: Started Service for snap application lxd.daemon.
Nov 16 13:41:10 server1 lxd.daemon[2373891]: => Preparing the system (21858)
Nov 16 13:41:10 server1 lxd.daemon[2373891]: ==> Loading snap configuration
Nov 16 13:41:10 server1 lxd.daemon[2373891]: ==> Setting up mntns symlink (mnt:[4026533313])
Nov 16 13:41:10 server1 lxd.daemon[2373891]: ==> Setting up kmod wrapper
Nov 16 13:41:10 server1 lxd.daemon[2373891]: ==> Preparing /boot
Nov 16 13:41:10 server1 lxd.daemon[2373891]: ==> Preparing a clean copy of /run
Nov 16 13:41:10 server1 lxd.daemon[2373891]: ==> Preparing /run/bin
Nov 16 13:41:10 server1 lxd.daemon[2373891]: ==> Preparing a clean copy of /etc
Nov 16 13:41:11 server1 lxd.daemon[2373891]: ==> Preparing a clean copy of /usr/share/misc
Nov 16 13:41:11 server1 lxd.daemon[2373891]: ==> Setting up ceph configuration
Nov 16 13:41:11 server1 lxd.daemon[2373891]: ==> Setting up LVM configuration
Nov 16 13:41:11 server1 lxd.daemon[2373891]: ==> Rotating logs
Nov 16 13:41:11 server1 lxd.daemon[2373891]: ==> Setting up ZFS (0.8)
Nov 16 13:41:11 server1 lxd.daemon[2373891]: ==> Escaping the systemd cgroups
Nov 16 13:41:11 server1 lxd.daemon[2373891]: ====> Detected cgroup V1
Nov 16 13:41:11 server1 lxd.daemon[2373891]: ==> Escaping the systemd process resource limits
Nov 16 13:41:11 server1 lxd.daemon[2373891]: ==> Disabling shiftfs on this kernel (auto)
Nov 16 13:41:12 server1 lxd.daemon[2373891]: => Re-using existing LXCFS
Nov 16 13:41:12 server1 lxd.daemon[2373891]: ==> Cleaning up existing LXCFS namespace
Nov 16 13:41:12 server1 lxd.daemon[2373891]: => Starting LXD
Nov 16 13:41:13 server1 lxd.daemon[2374581]: t=2021-11-16T13:41:13+0000 lvl=warn msg=" - Couldn't find the CGroup blkio.weight, disk priority will be ignored"
Nov 16 13:41:13 server1 lxd.daemon[2374581]: t=2021-11-16T13:41:13+0000 lvl=warn msg=" - Couldn't find the CGroup memory swap accounting, swap limits will be ignored"
Nov 16 13:41:15 server1 lxd.daemon[2374581]: t=2021-11-16T13:41:15+0000 lvl=warn msg="Dqlite: attempt 1: server xxx.xx.xx.xx6:8443: no known leader"
Nov 16 13:41:18 server1 lxd.daemon[2374581]: t=2021-11-16T13:41:18+0000 lvl=warn msg="Failed to initialize fanotify, falling back on fsnotify" err="Failed to initialize fanotify: invalid argument"
Nov 16 13:41:20 server1 lxd.daemon[2373891]: => LXD is ready
Nov 16 13:41:27 server1 lxd.daemon[2374581]: t=2021-11-16T13:41:27+0000 lvl=warn msg="Excluding offline member from refresh" ID=4 address=xxx.xx.xx.xx6:8443 lastHeartbeat=2021-11-16T11:03:13+0000 raftID=4
Nov 16 15:37:02 server1 lxd.daemon[2374581]: 2021/11/16 15:37:02 http: TLS handshake error from 192.241.206.121:39970: tls: client used the legacy version field to negotiate TLS 1.3
Nov 16 16:22:40 server1 lxd.daemon[2374581]: 2021/11/16 16:22:40 http: TLS handshake error from 192.241.205.109:52792: tls: client offered only unsupported versions: [302 301]
Nov 16 17:05:41 server1 lxd.daemon[2374581]: t=2021-11-16T17:05:41+0000 lvl=warn msg="Failed to get events from member" address=xxx.xx.xx.xx7:8443 err="Unable to connect to: xxx.xx.xx.xx7:8443"
Nov 16 17:05:42 server1 lxd.daemon[2374581]: t=2021-11-16T17:05:42+0000 lvl=warn msg="Failed to get events from member" address=xxx.xx.xx.xx7:8443 err="Unable to connect to: xxx.xx.xx.xx7:8443"
Nov 16 17:05:43 server1 lxd.daemon[2374581]: t=2021-11-16T17:05:43+0000 lvl=warn msg="Failed to get events from member" address=xxx.xx.xx.xx7:8443 err="Unable to connect to: xxx.xx.xx.xx7:8443"
Nov 16 17:05:44 server1 lxd.daemon[2374581]: t=2021-11-16T17:05:44+0000 lvl=warn msg="Failed to get events from member" address=xxx.xx.xx.xx7:8443 err="Unable to connect to: xxx.xx.xx.xx7:8443"
Nov 16 17:05:45 server1 lxd.daemon[2374581]: t=2021-11-16T17:05:45+0000 lvl=warn msg="Failed to get events from member" address=xxx.xx.xx.xx7:8443 err="Unable to connect to: xxx.xx.xx.xx7:8443"
Nov 16 17:05:46 server1 lxd.daemon[2374581]: t=2021-11-16T17:05:46+0000 lvl=warn msg="Failed to get events from member" address=xxx.xx.xx.xx7:8443 err="Unable to connect to: xxx.xx.xx.xx7:8443"
Nov 16 17:05:47 server1 lxd.daemon[2374581]: t=2021-11-16T17:05:47+0000 lvl=warn msg="Failed to get events from member" address=xxx.xx.xx.xx7:8443 err="Unable to connect to: xxx.xx.xx.xx7:8443"
Nov 16 17:08:13 server1 lxd.daemon[2374581]: 2021/11/16 17:08:13 http: TLS handshake error from 192.241.209.114:50642: tls: client offered only unsupported versions: [301]
Nov 16 17:28:36 server1 lxd.daemon[2374581]: 2021/11/16 17:28:36 http: TLS handshake error from 192.241.199.52:45964: tls: client offered only unsupported versions: []
Nov 16 21:23:13 server1 lxd.daemon[2374581]: t=2021-11-16T21:23:13+0000 lvl=warn msg="Failed to get events from member" address=xxx.xx.xx.xx9:8443 err="Unable to connect to: xxx.xx.xx.xx9:8443"
Nov 16 21:23:14 server1 lxd.daemon[2374581]: t=2021-11-16T21:23:14+0000 lvl=warn msg="Failed to get events from member" address=xxx.xx.xx.xx9:8443 err="Unable to connect to: xxx.xx.xx.xx9:8443"
Nov 16 21:23:31 server1 lxd.daemon[2374581]: t=2021-11-16T21:23:31+0000 lvl=warn msg="Excluding offline member from refresh" ID=1 address=xxx.xx.xx.xx9:8443 lastHeartbeat=2021-11-16T21:22:55+0000 raftID=1
Nov 16 21:31:22 server1 lxd.daemon[2374581]: 2021/11/16 21:31:22 http: TLS handshake error from xxx.xx.xx.xx8:59938: EOF
Nov 16 21:31:53 server1 lxd.daemon[2374581]: 2021/11/16 21:31:53 http: TLS handshake error from xxx.xx.xx.xx8:60018: EOF
Nov 16 21:32:05 server1 lxd.daemon[2374581]: 2021/11/16 21:32:05 http: TLS handshake error from xxx.xx.xx.xx8:60040: EOF
Nov 16 21:32:08 server1 lxd.daemon[2374581]: 2021/11/16 21:32:08 http: TLS handshake error from xxx.xx.xx.xx8:60050: EOF
Nov 16 21:32:09 server1 lxd.daemon[2374581]: 2021/11/16 21:32:09 http: TLS handshake error from xxx.xx.xx.xx8:60052: EOF
Nov 16 21:32:10 server1 lxd.daemon[2374581]: 2021/11/16 21:32:10 http: TLS handshake error from xxx.xx.xx.xx8:60060: EOF
Nov 16 21:32:12 server1 lxd.daemon[2374581]: t=2021-11-16T21:32:12+0000 lvl=warn msg="Excluding offline member from refresh" ID=2 address=xxx.xx.xx.xx7:8443 lastHeartbeat=2021-11-16T21:31:45+0000 raftID=2
Nov 16 21:32:12 server1 lxd.daemon[2374581]: t=2021-11-16T21:32:12+0000 lvl=warn msg="Excluding offline member from refresh" ID=1 address=xxx.xx.xx.xx9:8443 lastHeartbeat=2021-11-16T21:31:49+0000 raftID=1
Nov 16 21:32:37 server1 lxd.daemon[2374581]: t=2021-11-16T21:32:37+0000 lvl=warn msg="Excluding offline member from refresh" ID=4 address=xxx.xx.xx.xx6:8443 lastHeartbeat=2021-11-16T21:32:12+0000 raftID=4
Nov 16 21:32:39 server1 lxd.daemon[2374581]: t=2021-11-16T21:32:39+0000 lvl=warn msg="Excluding offline member from refresh" ID=4 address=xxx.xx.xx.xx6:8443 lastHeartbeat=2021-11-16T21:32:12+0000 raftID=4
Nov 16 21:32:44 server1 lxd.daemon[2374581]: t=2021-11-16T21:32:44+0000 lvl=warn msg="Excluding offline member from refresh" ID=4 address=xxx.xx.xx.xx6:8443 lastHeartbeat=2021-11-16T21:32:12+0000 raftID=4
Nov 16 21:32:51 server1 lxd.daemon[2374581]: 2021/11/16 21:32:51 http: TLS handshake error from xxx.xx.xx.xx8:60140: EOF
Nov 16 21:32:56 server1 lxd.daemon[2374581]: t=2021-11-16T21:32:56+0000 lvl=warn msg="Transaction timed out. Retrying once" err="failed to begin transaction: context deadline exceeded" member=4
Nov 16 21:33:29 server1 lxd.daemon[2374581]: t=2021-11-16T21:33:29+0000 lvl=warn msg="Excluding offline member from refresh" ID=4 address=xxx.xx.xx.xx6:8443 lastHeartbeat=2021-11-16T21:32:54+0000 raftID=4
Nov 16 21:33:36 server1 lxd.daemon[2374581]: t=2021-11-16T21:33:36+0000 lvl=warn msg="Excluding offline member from refresh" ID=4 address=xxx.xx.xx.xx6:8443 lastHeartbeat=2021-11-16T21:32:54+0000 raftID=4
Nov 16 21:33:45 server1 lxd.daemon[2374581]: t=2021-11-16T21:33:45+0000 lvl=warn msg="Excluding offline member from refresh" ID=4 address=xxx.xx.xx.xx6:8443 lastHeartbeat=2021-11-16T21:32:54+0000 raftID=4
Nov 16 21:34:14 server1 lxd.daemon[2374581]: 2021/11/16 21:34:14 http: TLS handshake error from xxx.xx.xx.xx8:60268: EOF
Nov 16 21:34:30 server1 lxd.daemon[2374581]: 2021/11/16 21:34:30 http: TLS handshake error from xxx.xx.xx.xx8:60296: EOF
Nov 16 21:37:29 server1 lxd.daemon[2374581]: t=2021-11-16T21:37:29+0000 lvl=eror msg="Failed getting volumes for auto custom volume snapshot task" err="failed to begin transaction: context deadline exceeded"
Nov 16 21:38:22 server1 lxd.daemon[2374581]: 2021/11/16 21:38:22 http: TLS handshake error from xxx.xx.xx.xx8:60656: EOF
Nov 16 21:40:11 server1 lxd.daemon[2374581]: 2021/11/16 21:40:11 http: TLS handshake error from xxx.xx.xx.xx8:60812: EOF
Nov 16 21:41:18 server1 lxd.daemon[2374581]: 2021/11/16 21:41:18 http: TLS handshake error from xxx.xx.xx.xx8:60922: EOF
Nov 16 21:41:29 server1 lxd.daemon[2374581]: 2021/11/16 21:41:29 http: TLS handshake error from xxx.xx.xx.xx8:60946: EOF
Nov 16 21:42:33 server1 lxd.daemon[2374581]: 2021/11/16 21:42:33 http: TLS handshake error from xxx.xx.xx.xx8:32802: EOF
Nov 16 21:43:32 server1 lxd.daemon[2374581]: 2021/11/16 21:43:32 http: TLS handshake error from xxx.xx.xx.xx8:32886: EOF
Nov 16 21:44:00 server1 lxd.daemon[2374581]: t=2021-11-16T21:44:00+0000 lvl=warn msg="Excluding offline member from refresh" ID=4 address=xxx.xx.xx.xx6:8443 lastHeartbeat=2021-11-16T21:43:31+0000 raftID=4
Nov 16 21:44:11 server1 lxd.daemon[2374581]: t=2021-11-16T21:44:11+0000 lvl=warn msg="Excluding offline member from refresh" ID=4 address=xxx.xx.xx.xx6:8443 lastHeartbeat=2021-11-16T21:43:31+0000 raftID=4
Nov 16 21:44:22 server1 lxd.daemon[2374581]: t=2021-11-16T21:44:22+0000 lvl=warn msg="Failed to rollback transaction after error (Failed to fetch field Devices: Failed to fetch  ref for instance_snapshots: sql: transaction has already been committed or rolled back): sql: transaction has already been committed or rolled back"
Nov 16 21:44:22 server1 lxd.daemon[2374581]: t=2021-11-16T21:44:22+0000 lvl=eror msg="Failed to list instance snapshots" err="Failed to fetch field Devices: Failed to fetch  ref for instance_snapshots: sql: transaction has already been committed or rolled back" instance=kientran-server1 project=default
Nov 16 21:44:28 server1 lxd.daemon[2374581]: t=2021-11-16T21:44:28+0000 lvl=warn msg="Dqlite: attempt 1: server xxx.xx.xx.xx6:8443: no known leader"
Nov 16 21:44:29 server1 lxd.daemon[2374581]: t=2021-11-16T21:44:29+0000 lvl=warn msg="Transaction timed out. Retrying once" err="failed to begin transaction: context deadline exceeded" member=4
Nov 16 21:44:29 server1 lxd.daemon[2374581]: t=2021-11-16T21:44:29+0000 lvl=warn msg="Transaction timed out. Retrying once" err="failed to begin transaction: context deadline exceeded" member=4
Nov 16 21:44:29 server1 lxd.daemon[2374581]: t=2021-11-16T21:44:29+0000 lvl=warn msg="Transaction timed out. Retrying once" err="failed to begin transaction: context deadline exceeded" member=4
Nov 16 21:44:29 server1 lxd.daemon[2374581]: t=2021-11-16T21:44:29+0000 lvl=warn msg="Transaction timed out. Retrying once" err="failed to begin transaction: context deadline exceeded" member=4
Nov 16 21:44:30 server1 lxd.daemon[2374581]: t=2021-11-16T21:44:30+0000 lvl=warn msg="Transaction timed out. Retrying once" err="failed to begin transaction: context deadline exceeded" member=4
Nov 16 21:44:32 server1 lxd.daemon[2374581]: t=2021-11-16T21:44:32+0000 lvl=warn msg="Transaction timed out. Retrying once" err="failed to begin transaction: context deadline exceeded" member=4
Nov 16 21:44:32 server1 lxd.daemon[2374581]: 2021/11/16 21:44:32 http: TLS handshake error from xxx.xx.xx.xx8:32980: EOF
Nov 16 21:44:59 server1 lxd.daemon[2374581]: 2021/11/16 21:44:59 http: TLS handshake error from xxx.xx.xx.xx8:33022: EOF
Nov 16 21:46:42 server1 lxd.daemon[2374581]: 2021/11/16 21:46:42 http: TLS handshake error from xxx.xx.xx.xx8:33184: EOF
Nov 16 21:47:21 server1 lxd.daemon[2374581]: 2021/11/16 21:47:21 http: TLS handshake error from xxx.xx.xx.xx8:33240: EOF
Nov 16 21:48:33 server1 lxd.daemon[2374581]: 2021/11/16 21:48:33 http: TLS handshake error from xxx.xx.xx.xx8:33338: EOF
Nov 16 21:49:08 server1 lxd.daemon[2374581]: 2021/11/16 21:49:08 http: TLS handshake error from xxx.xx.xx.xx8:33380: EOF
Nov 16 21:49:17 server1 lxd.daemon[2374581]: t=2021-11-16T21:49:17+0000 lvl=warn msg="Excluding offline member from refresh" ID=4 address=xxx.xx.xx.xx6:8443 lastHeartbeat=2021-11-16T21:48:50+0000 raftID=4
Nov 16 21:49:32 server1 lxd.daemon[2374581]: t=2021-11-16T21:49:32+0000 lvl=warn msg="Transaction timed out. Retrying once" err="failed to begin transaction: context deadline exceeded" member=4
Nov 16 21:49:34 server1 lxd.daemon[2374581]: t=2021-11-16T21:49:34+0000 lvl=warn msg="Transaction timed out. Retrying once" err="failed to begin transaction: context deadline exceeded" member=4
Nov 16 21:49:34 server1 lxd.daemon[2374581]: t=2021-11-16T21:49:34+0000 lvl=warn msg="Dqlite: attempt 1: server xxx.xx.xx.xx6:8443: no known leader"
Nov 16 21:49:41 server1 lxd.daemon[2374581]: t=2021-11-16T21:49:41+0000 lvl=warn msg="Transaction timed out. Retrying once" err="failed to begin transaction: context deadline exceeded" member=4
Nov 16 21:49:42 server1 lxd.daemon[2374581]: 2021/11/16 21:49:42 http: TLS handshake error from xxx.xx.xx.xx8:33438: EOF
Nov 16 21:49:43 server1 lxd.daemon[2374581]: t=2021-11-16T21:49:43+0000 lvl=warn msg="Transaction timed out. Retrying once" err="failed to begin transaction: context deadline exceeded" member=4
Nov 16 21:49:59 server1 lxd.daemon[2374581]: t=2021-11-16T21:49:59+0000 lvl=warn msg="Transaction timed out. Retrying once" err="failed to begin transaction: context deadline exceeded" member=4
Nov 16 21:51:19 server1 lxd.daemon[2374581]: t=2021-11-16T21:51:19+0000 lvl=warn msg="Excluding offline member from refresh" ID=4 address=xxx.xx.xx.xx6:8443 lastHeartbeat=2021-11-16T21:50:51+0000 raftID=4
Nov 16 21:51:27 server1 lxd.daemon[2374581]: 2021/11/16 21:51:27 http: TLS handshake error from xxx.xx.xx.xx8:33608: EOF
Nov 16 21:51:31 server1 lxd.daemon[2374581]: t=2021-11-16T21:51:31+0000 lvl=warn msg="Transaction timed out. Retrying once" err="failed to begin transaction: context deadline exceeded" member=4
Nov 16 21:51:42 server1 lxd.daemon[2374581]: 2021/11/16 21:51:42 http: TLS handshake error from xxx.xx.xx.xx8:33636: EOF
Nov 16 21:52:42 server1 lxd.daemon[2374581]: t=2021-11-16T21:52:42+0000 lvl=warn msg="Excluding offline member from refresh" ID=4 address=xxx.xx.xx.xx6:8443 lastHeartbeat=2021-11-16T21:52:10+0000 raftID=4
Nov 16 21:52:44 server1 lxd.daemon[2374581]: 2021/11/16 21:52:44 http: TLS handshake error from xxx.xx.xx.xx8:33726: EOF
Nov 16 21:53:23 server1 lxd.daemon[2374581]: 2021/11/16 21:53:23 http: TLS handshake error from xxx.xx.xx.xx8:33782: EOF
Nov 16 21:54:28 server1 lxd.daemon[2374581]: t=2021-11-16T21:54:28+0000 lvl=warn msg="Excluding offline member from refresh" ID=4 address=xxx.xx.xx.xx6:8443 lastHeartbeat=2021-11-16T21:53:59+0000 raftID=4
Nov 16 21:54:40 server1 lxd.daemon[2374581]: 2021/11/16 21:54:40 http: TLS handshake error from xxx.xx.xx.xx8:33894: EOF
Nov 16 21:54:41 server1 lxd.daemon[2374581]: t=2021-11-16T21:54:41+0000 lvl=warn msg="Transaction timed out. Retrying once" err="failed to begin transaction: context deadline exceeded" member=4
Nov 16 21:55:22 server1 lxd.daemon[2374581]: 2021/11/16 21:55:22 http: TLS handshake error from xxx.xx.xx.xx8:33952: EOF
Nov 16 21:55:32 server1 lxd.daemon[2374581]: 2021/11/16 21:55:32 http: TLS handshake error from xxx.xx.xx.xx8:33966: EOF
Nov 16 21:56:24 server1 lxd.daemon[2374581]: 2021/11/16 21:56:24 http: TLS handshake error from 162.142.125.128:8370: read tcp xxx.xx.xx.xx6:8443->162.142.125.128:8370: read: connection reset by peer
Nov 16 21:56:40 server1 lxd.daemon[2374581]: t=2021-11-16T21:56:40+0000 lvl=warn msg="Excluding offline member from refresh" ID=4 address=xxx.xx.xx.xx6:8443 lastHeartbeat=2021-11-16T21:56:06+0000 raftID=4
Nov 16 21:57:29 server1 lxd.daemon[2374581]: t=2021-11-16T21:57:29+0000 lvl=warn msg="Excluding offline member from refresh" ID=4 address=xxx.xx.xx.xx6:8443 lastHeartbeat=2021-11-16T21:56:58+0000 raftID=4
Nov 16 21:57:40 server1 lxd.daemon[2374581]: t=2021-11-16T21:57:40+0000 lvl=eror msg="Error refreshing forkdns" err="failed to begin transaction: context deadline exceeded"
Nov 16 21:57:53 server1 lxd.daemon[2374581]: 2021/11/16 21:57:53 http: TLS handshake error from xxx.xx.xx.xx8:34202: EOF
Nov 16 21:58:50 server1 lxd.daemon[2374581]: t=2021-11-16T21:58:50+0000 lvl=warn msg="Excluding offline member from refresh" ID=4 address=xxx.xx.xx.xx6:8443 lastHeartbeat=2021-11-16T21:58:17+0000 raftID=4
Nov 16 21:59:00 server1 lxd.daemon[2374581]: t=2021-11-16T21:59:00+0000 lvl=warn msg="Excluding offline member from refresh" ID=4 address=xxx.xx.xx.xx6:8443 lastHeartbeat=2021-11-16T21:58:17+0000 raftID=4
Nov 16 21:59:42 server1 lxd.daemon[2374581]: t=2021-11-16T21:59:42+0000 lvl=warn msg="Excluding offline member from refresh" ID=4 address=xxx.xx.xx.xx6:8443 lastHeartbeat=2021-11-16T21:59:10+0000 raftID=4
Nov 16 22:00:00 server1 lxd.daemon[2374581]: 2021/11/16 22:00:00 http: TLS handshake error from xxx.xx.xx.xx8:34402: EOF
Nov 16 22:00:45 server1 lxd.daemon[2374581]: t=2021-11-16T22:00:45+0000 lvl=warn msg="Excluding offline member from refresh" ID=4 address=xxx.xx.xx.xx6:8443 lastHeartbeat=2021-11-16T22:00:19+0000 raftID=4
Nov 16 22:00:50 server1 lxd.daemon[2374581]: t=2021-11-16T22:00:50+0000 lvl=warn msg="Transaction timed out. Retrying once" err="failed to begin transaction: context deadline exceeded" member=4
Nov 16 22:01:01 server1 lxd.daemon[2374581]: t=2021-11-16T22:01:01+0000 lvl=warn msg="Transaction timed out. Retrying once" err="failed to begin transaction: context deadline exceeded" member=4
Nov 16 22:01:01 server1 lxd.daemon[2374581]: t=2021-11-16T22:01:01+0000 lvl=warn msg="Transaction timed out. Retrying once" err="failed to begin transaction: context deadline exceeded" member=4
Nov 16 22:01:03 server1 lxd.daemon[2374581]: t=2021-11-16T22:01:03+0000 lvl=warn msg="Transaction timed out. Retrying once" err="failed to begin transaction: context deadline exceeded" member=4
Nov 16 22:01:12 server1 systemd[1]: Stopping Service for snap application lxd.daemon...
Nov 16 22:01:13 server1 lxd.daemon[2394805]: => Stop reason is: snap refresh
Nov 16 22:01:13 server1 lxd.daemon[2394805]: => Stopping LXD
Nov 16 22:01:21 server1 lxd.daemon[2374581]: t=2021-11-16T22:01:21+0000 lvl=warn msg="Rejecting heartbeat request as shutting down"
Nov 16 22:01:27 server1 lxd.daemon[2374581]: t=2021-11-16T22:01:27+0000 lvl=eror msg="Failed to start expired instance snapshots operation" err="LXD is shutting down"
Nov 16 22:01:31 server1 lxd.daemon[2374581]: t=2021-11-16T22:01:31+0000 lvl=warn msg="Rejecting heartbeat request as shutting down"
Nov 16 22:01:39 server1 lxd.daemon[2374581]: t=2021-11-16T22:01:39+0000 lvl=warn msg="Rejecting heartbeat request as shutting down"
Nov 16 22:01:46 server1 lxd.daemon[2374581]: t=2021-11-16T22:01:46+0000 lvl=warn msg="Rejecting heartbeat request as shutting down"
Nov 16 22:01:55 server1 lxd.daemon[2374581]: t=2021-11-16T22:01:55+0000 lvl=warn msg="Rejecting heartbeat request as shutting down"
Nov 16 22:02:10 server1 lxd.daemon[2374581]: t=2021-11-16T22:02:10+0000 lvl=warn msg="Rejecting heartbeat request as shutting down"
Nov 16 22:02:19 server1 lxd.daemon[2374581]: t=2021-11-16T22:02:19+0000 lvl=warn msg="Rejecting heartbeat request as shutting down"
Nov 16 22:02:30 server1 lxd.daemon[2374581]: t=2021-11-16T22:02:30+0000 lvl=warn msg="Rejecting heartbeat request as shutting down"
Nov 16 22:02:38 server1 lxd.daemon[2374581]: t=2021-11-16T22:02:38+0000 lvl=warn msg="Rejecting heartbeat request as shutting down"
Nov 16 22:02:50 server1 lxd.daemon[2374581]: t=2021-11-16T22:02:50+0000 lvl=warn msg="Rejecting heartbeat request as shutting down"
Nov 16 22:02:57 server1 lxd.daemon[2374581]: t=2021-11-16T22:02:57+0000 lvl=warn msg="Rejecting heartbeat request as shutting down"
Nov 16 22:03:06 server1 lxd.daemon[2374581]: t=2021-11-16T22:03:06+0000 lvl=warn msg="Rejecting heartbeat request as shutting down"
Nov 16 22:03:17 server1 lxd.daemon[2374581]: t=2021-11-16T22:03:17+0000 lvl=warn msg="Rejecting heartbeat request as shutting down"
Nov 16 22:03:27 server1 lxd.daemon[2374581]: t=2021-11-16T22:03:27+0000 lvl=warn msg="Rejecting heartbeat request as shutting down"
Nov 16 22:03:38 server1 lxd.daemon[2374581]: t=2021-11-16T22:03:38+0000 lvl=warn msg="Rejecting heartbeat request as shutting down"
Nov 16 22:03:46 server1 lxd.daemon[2374581]: t=2021-11-16T22:03:46+0000 lvl=warn msg="Rejecting heartbeat request as shutting down"
Nov 16 22:04:00 server1 lxd.daemon[2374581]: t=2021-11-16T22:04:00+0000 lvl=warn msg="Rejecting heartbeat request as shutting down"
Nov 16 22:04:07 server1 lxd.daemon[2374581]: t=2021-11-16T22:04:07+0000 lvl=warn msg="Rejecting heartbeat request as shutting down"
Nov 16 22:04:21 server1 lxd.daemon[2374581]: t=2021-11-16T22:04:21+0000 lvl=warn msg="Rejecting heartbeat request as shutting down"
Nov 16 22:04:28 server1 lxd.daemon[2374581]: t=2021-11-16T22:04:28+0000 lvl=warn msg="Rejecting heartbeat request as shutting down"
Nov 16 22:04:34 server1 lxd.daemon[2374581]: t=2021-11-16T22:04:34+0000 lvl=warn msg="Rejecting heartbeat request as shutting down"
Nov 16 22:04:47 server1 lxd.daemon[2374581]: t=2021-11-16T22:04:47+0000 lvl=warn msg="Rejecting heartbeat request as shutting down"
Nov 16 22:04:58 server1 lxd.daemon[2374581]: t=2021-11-16T22:04:58+0000 lvl=warn msg="Rejecting heartbeat request as shutting down"
Nov 16 22:05:06 server1 lxd.daemon[2374581]: t=2021-11-16T22:05:06+0000 lvl=warn msg="Rejecting heartbeat request as shutting down"
Nov 16 22:05:17 server1 lxd.daemon[2374581]: t=2021-11-16T22:05:17+0000 lvl=warn msg="Rejecting heartbeat request as shutting down"
Nov 16 22:05:27 server1 lxd.daemon[2374581]: t=2021-11-16T22:05:27+0000 lvl=warn msg="Rejecting heartbeat request as shutting down"
Nov 16 22:05:36 server1 lxd.daemon[2374581]: t=2021-11-16T22:05:36+0000 lvl=warn msg="Rejecting heartbeat request as shutting down"
Nov 16 22:05:50 server1 lxd.daemon[2374581]: t=2021-11-16T22:05:50+0000 lvl=warn msg="Rejecting heartbeat request as shutting down"
Nov 16 22:05:56 server1 lxd.daemon[2374581]: t=2021-11-16T22:05:56+0000 lvl=warn msg="Rejecting heartbeat request as shutting down"
Nov 16 22:06:09 server1 lxd.daemon[2374581]: t=2021-11-16T22:06:09+0000 lvl=warn msg="Rejecting heartbeat request as shutting down"
Nov 16 22:06:17 server1 lxd.daemon[2374581]: t=2021-11-16T22:06:17+0000 lvl=warn msg="Rejecting heartbeat request as shutting down"
Nov 16 22:06:33 server1 lxd.daemon[2394805]: ==> Forcefully stopping LXD after 5 minutes wait
Nov 16 22:06:33 server1 lxd.daemon[2394805]: ==> Stopped LXD
Nov 16 22:06:33 server1 systemd[1]: snap.lxd.daemon.service: Succeeded.
Nov 16 22:06:33 server1 systemd[1]: Stopped Service for snap application lxd.daemon.

Looks like it is a known problem with snap refresh:

Can you provide a longer journalctl output? There’s an LXD crash in there that looks suspicious.
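Something like the following should capture enough context around the crash (a sketch; the unit name matches the snap service shown in your log, adjust the time window as needed):

    # dump the LXD snap daemon's journal for the day of the crash
    journalctl -u snap.lxd.daemon --since "2021-11-16" --no-pager > lxd-journal.log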

Indeed, something insane is going on. Here is the output of top:

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
2413759 root      20   0   15.6g 562948  21188 S  2023   0.4   1356:41 lxd
   3260 root      20   0       0      0      0 R  99.7   0.0   8467:31 btrfs-cleaner
    235 root      20   0       0      0      0 R  97.4   0.0 384:59.31 kswapd1

:flushed:

I think it could be related to btrfs, which occasionally gets locked up. I have several containers with automated daily snapshots. Maybe that is too much stress on the file system.
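If btrfs-cleaner is what is pegging a core, my understanding is that it is usually churning through subvolumes and snapshots that were deleted but not yet cleaned up. A rough way to check (a sketch, assuming the pool is mounted at the default snap path; adjust the path to your pool):

    # list deleted subvolumes that btrfs-cleaner still has to process
    sudo btrfs subvolume list -d /var/snap/lxd/common/lxd/storage-pools/default
    # optionally wait for the cleaner to finish (can take a very long time)
    sudo btrfs subvolume sync /var/snap/lxd/common/lxd/storage-pools/default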

Here are the first 400 lines of the crash from the above log:

Nov 16 11:03:21 server1 lxd.daemon[2148742]: runtime: program exceeds 10000-thread limit
Nov 16 11:03:21 server1 lxd.daemon[2148742]: fatal error: thread exhaustion
Nov 16 11:03:21 server1 lxd.daemon[2148742]: runtime stack:
Nov 16 11:03:21 server1 lxd.daemon[2148742]: runtime.throw({0x18bc952, 0x7f30c609ec50})
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /snap/go/8489/src/runtime/panic.go:1198 +0x71
Nov 16 11:03:21 server1 lxd.daemon[2148742]: runtime.checkmcount()
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /snap/go/8489/src/runtime/proc.go:760 +0x8c
Nov 16 11:03:21 server1 lxd.daemon[2148742]: runtime.mReserveID()
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /snap/go/8489/src/runtime/proc.go:776 +0x36
Nov 16 11:03:21 server1 lxd.daemon[2148742]: runtime.startm(0x0, 0x1)
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /snap/go/8489/src/runtime/proc.go:2477 +0x90
Nov 16 11:03:21 server1 lxd.daemon[2148742]: runtime.wakep()
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /snap/go/8489/src/runtime/proc.go:2584 +0x5a
Nov 16 11:03:21 server1 lxd.daemon[2148742]: runtime.resetspinning()
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /snap/go/8489/src/runtime/proc.go:3216 +0x45
Nov 16 11:03:21 server1 lxd.daemon[2148742]: runtime.schedule()
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /snap/go/8489/src/runtime/proc.go:3374 +0x25e
Nov 16 11:03:21 server1 lxd.daemon[2148742]: runtime.park_m(0xc0000e41a0)
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /snap/go/8489/src/runtime/proc.go:3516 +0x14d
Nov 16 11:03:21 server1 lxd.daemon[2148742]: runtime.mcall()
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /snap/go/8489/src/runtime/asm_amd64.s:307 +0x43
Nov 16 11:03:21 server1 lxd.daemon[2148742]: goroutine 1 [select, 11035 minutes]:
Nov 16 11:03:21 server1 lxd.daemon[2148742]: main.(*cmdDaemon).Run(0xc00050ef18, 0xc0006cfc58, {0xc00015b4c0, 0x0, 0x0})
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/main_daemon.go:84 +0x6b1
Nov 16 11:03:21 server1 lxd.daemon[2148742]: github.com/spf13/cobra.(*Command).execute(0xc000442780, {0xc00013e010, 0x4, 0x4})
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/.go/pkg/mod/github.com/spf13/cobra@v1.2.1/command.go:856 +0x60e
Nov 16 11:03:21 server1 lxd.daemon[2148742]: github.com/spf13/cobra.(*Command).ExecuteC(0xc000442780)
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/.go/pkg/mod/github.com/spf13/cobra@v1.2.1/command.go:974 +0x3bc
Nov 16 11:03:21 server1 lxd.daemon[2148742]: github.com/spf13/cobra.(*Command).Execute(...)
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/.go/pkg/mod/github.com/spf13/cobra@v1.2.1/command.go:902
Nov 16 11:03:21 server1 lxd.daemon[2148742]: main.main()
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/main.go:222 +0x1a58
Nov 16 11:03:21 server1 lxd.daemon[2148742]: goroutine 1840 [select, 56 minutes]:
Nov 16 11:03:21 server1 lxd.daemon[2148742]: github.com/lxc/lxd/lxd/task.(*Task).loop(0xc0004fe108, {0x1b90740, 0xc000990500})
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/task/task.go:66 +0x168
Nov 16 11:03:21 server1 lxd.daemon[2148742]: github.com/lxc/lxd/lxd/task.(*Group).Start.func1(0xc000564030)
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/task/group.go:60 +0x3d
Nov 16 11:03:21 server1 lxd.daemon[2148742]: created by github.com/lxc/lxd/lxd/task.(*Group).Start
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/task/group.go:59 +0x2f7
Nov 16 11:03:21 server1 lxd.daemon[2148742]: goroutine 77 [select, 11035 minutes]:
Nov 16 11:03:21 server1 lxd.daemon[2148742]: github.com/lxc/lxd/lxd/cluster.dqliteProxy({0x18a005f, 0x8}, 0xc0007cc600, {0x1bb2250, 0xc000184380}, {0x1bb4a30, 0xc00020c038})
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/cluster/gateway.go:1179 +0x790
Nov 16 11:03:21 server1 lxd.daemon[2148742]: created by github.com/lxc/lxd/lxd/cluster.(*Gateway).raftDial.func1
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/cluster/gateway.go:483 +0x1e5
Nov 16 11:03:21 server1 lxd.daemon[2148742]: goroutine 31 [syscall, 11035 minutes]:
Nov 16 11:03:21 server1 lxd.daemon[2148742]: os/signal.signal_recv()
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /snap/go/8489/src/runtime/sigqueue.go:169 +0x98
Nov 16 11:03:21 server1 lxd.daemon[2148742]: os/signal.loop()
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /snap/go/8489/src/os/signal/signal_unix.go:24 +0x19
Nov 16 11:03:21 server1 lxd.daemon[2148742]: created by os/signal.Notify.func1.1
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /snap/go/8489/src/os/signal/signal.go:151 +0x2c
Nov 16 11:03:21 server1 lxd.daemon[2148742]: goroutine 32 [select, 351 minutes]:
Nov 16 11:03:21 server1 lxd.daemon[2148742]: database/sql.(*DB).connectionOpener(0xc000730270, {0x1b90740, 0xc00052a040})
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /snap/go/8489/src/database/sql/sql.go:1196 +0x93
Nov 16 11:03:21 server1 lxd.daemon[2148742]: created by database/sql.OpenDB
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /snap/go/8489/src/database/sql/sql.go:794 +0x188
Nov 16 11:03:21 server1 lxd.daemon[2148742]: goroutine 295862 [syscall, 346 minutes]:
Nov 16 11:03:21 server1 lxd.daemon[2148742]: syscall.Syscall6(0x106, 0xffffffffffffff9c, 0xc06bd991e0, 0xc11f7326b8, 0x100, 0x0, 0x0)
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /snap/go/8489/src/syscall/asm_linux_amd64.s:43 +0x5
Nov 16 11:03:21 server1 lxd.daemon[2148742]: syscall.fstatat(0x0, {0xc06bd99110, 0xcf}, 0xc11f7326b8, 0xcf)
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /snap/go/8489/src/syscall/zsyscall_linux_amd64.go:1441 +0x10f
Nov 16 11:03:21 server1 lxd.daemon[2148742]: syscall.Lstat(...)
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /snap/go/8489/src/syscall/syscall_linux_amd64.go:74
Nov 16 11:03:21 server1 lxd.daemon[2148742]: os.lstatNolog.func1(...)
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /snap/go/8489/src/os/stat_unix.go:46
Nov 16 11:03:21 server1 lxd.daemon[2148742]: os.ignoringEINTR(...)
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /snap/go/8489/src/os/file_posix.go:246
Nov 16 11:03:21 server1 lxd.daemon[2148742]: os.lstatNolog({0xc06bd99110, 0x3})
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /snap/go/8489/src/os/stat_unix.go:45 +0x5b
Nov 16 11:03:21 server1 lxd.daemon[2148742]: os.Lstat({0xc06bd99110, 0xcf})
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /snap/go/8489/src/os/stat.go:22 +0x34
Nov 16 11:03:21 server1 lxd.daemon[2148742]: path/filepath.walk({0xc00108b290, 0x89}, {0x1b9f358, 0xc0145b3110}, 0xc010573430)
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:436 +0x1db
Nov 16 11:03:21 server1 lxd.daemon[2148742]: path/filepath.walk({0xc010f4ef00, 0x7e}, {0x1b9f358, 0xc0145b3040}, 0xc010573430)
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:442 +0x28f
Nov 16 11:03:21 server1 lxd.daemon[2148742]: path/filepath.walk({0xc010f4ec00, 0x78}, {0x1b9f358, 0xc0145b2f70}, 0xc010573430)
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:442 +0x28f
Nov 16 11:03:21 server1 lxd.daemon[2148742]: path/filepath.walk({0xc008ad0580, 0x71}, {0x1b9f358, 0xc00afc8b60}, 0xc010573430)
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:442 +0x28f
Nov 16 11:03:21 server1 lxd.daemon[2148742]: path/filepath.walk({0xc0013b9960, 0x6d}, {0x1b9f358, 0xc00afa3450}, 0xc010573430)
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:442 +0x28f
Nov 16 11:03:21 server1 lxd.daemon[2148742]: path/filepath.walk({0xc00682e000, 0x69}, {0x1b9f358, 0xc00680d860}, 0xc010573430)
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:442 +0x28f
Nov 16 11:03:21 server1 lxd.daemon[2148742]: path/filepath.walk({0xc0004dbb20, 0x63}, {0x1b9f358, 0xc00680d5f0}, 0xc010573430)
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:442 +0x28f
Nov 16 11:03:21 server1 lxd.daemon[2148742]: path/filepath.Walk({0xc0004dbb20, 0x63}, 0xc000573430)
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:505 +0x6c
Nov 16 11:03:21 server1 lxd.daemon[2148742]: github.com/lxc/lxd/lxd/storage/drivers.(*btrfs).getSubvolumes(0xc0030002d0, {0xc0117784d0, 0x62})
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/storage/drivers/driver_btrfs_utils.go:62 +0xee
Nov 16 11:03:21 server1 lxd.daemon[2148742]: github.com/lxc/lxd/lxd/storage/drivers.(*btrfs).deleteSubvolume(0xc0030002d0, {0xc0117784d0, 0x62}, 0x1)
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/storage/drivers/driver_btrfs_utils.go:162 +0xa7
Nov 16 11:03:21 server1 lxd.daemon[2148742]: github.com/lxc/lxd/lxd/storage/drivers.(*btrfs).DeleteVolumeSnapshot(0xc0030002d0, {{0xc013c3a980, 0x20}, {0xc00cceca40, 0x5}, 0xc001f0a8a0, {0x18a5d1b, 0xa}, {0x18a617b, 0xa}, ...}, ...)
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/storage/drivers/driver_btrfs_volumes.go:1319 +0xb1
Nov 16 11:03:21 server1 lxd.daemon[2148742]: github.com/lxc/lxd/lxd/storage.(*lxdBackend).DeleteInstanceSnapshot(0xc002fc8e40, {0x1bfb578, 0xc00b1f0f20}, 0xc01607da20)
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/storage/backend_lxd.go:2338 +0x7c3
Nov 16 11:03:21 server1 lxd.daemon[2148742]: github.com/lxc/lxd/lxd/instance/drivers.(*lxc).Delete(0xc00b1f0f20, 0x1)
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/instance/drivers/driver_lxc.go:3635 +0x484
Nov 16 11:03:21 server1 lxd.daemon[2148742]: main.pruneExpiredContainerSnapshots({0xc000ba24b8, 0x42fe2a}, 0x203000, {0xc00e9fa0e0, 0x2, 0xc000ba2528})
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/instance.go:612 +0x88
Nov 16 11:03:21 server1 lxd.daemon[2148742]: main.pruneExpiredContainerSnapshotsTask.func1.1(0x445900)
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/instance.go:575 +0x31
Nov 16 11:03:21 server1 lxd.daemon[2148742]: github.com/lxc/lxd/lxd/operations.(*Operation).Run.func1(0xc0026db680, 0xc010a82f80)
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/operations/operations.go:270 +0x42
Nov 16 11:03:21 server1 lxd.daemon[2148742]: created by github.com/lxc/lxd/lxd/operations.(*Operation).Run
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/operations/operations.go:269 +0x128
Nov 16 11:03:21 server1 lxd.daemon[2148742]: goroutine 343836 [syscall, 347 minutes]:
Nov 16 11:03:21 server1 lxd.daemon[2148742]: syscall.Syscall6(0x106, 0xffffffffffffff9c, 0xc071847450, 0xc0810d7a38, 0x100, 0x0, 0x0)
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /snap/go/8489/src/syscall/asm_linux_amd64.s:43 +0x5
Nov 16 11:03:21 server1 lxd.daemon[2148742]: syscall.fstatat(0x0, {0xc071846c30, 0xca}, 0xc0810d7a38, 0xca)
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /snap/go/8489/src/syscall/zsyscall_linux_amd64.go:1441 +0x10f
Nov 16 11:03:21 server1 lxd.daemon[2148742]: syscall.Lstat(...)
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /snap/go/8489/src/syscall/syscall_linux_amd64.go:74
Nov 16 11:03:21 server1 lxd.daemon[2148742]: os.lstatNolog.func1(...)
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /snap/go/8489/src/os/stat_unix.go:46
Nov 16 11:03:21 server1 lxd.daemon[2148742]: os.ignoringEINTR(...)
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /snap/go/8489/src/os/file_posix.go:246
Nov 16 11:03:21 server1 lxd.daemon[2148742]: os.lstatNolog({0xc071846c30, 0x3})
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /snap/go/8489/src/os/stat_unix.go:45 +0x5b
Nov 16 11:03:21 server1 lxd.daemon[2148742]: os.Lstat({0xc071846c30, 0xca})
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /snap/go/8489/src/os/stat.go:22 +0x34
Nov 16 11:03:21 server1 lxd.daemon[2148742]: path/filepath.walk({0xc012b681b0, 0x89}, {0x1b9f358, 0xc00b036a90}, 0xc002e07430)
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:436 +0x1db
Nov 16 11:03:21 server1 lxd.daemon[2148742]: path/filepath.walk({0xc01050e580, 0x7e}, {0x1b9f358, 0xc00b0369c0}, 0xc002e07430)
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:442 +0x28f
Nov 16 11:03:21 server1 lxd.daemon[2148742]: path/filepath.walk({0xc01050e380, 0x78}, {0x1b9f358, 0xc00b0368f0}, 0xc002e07430)
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:442 +0x28f
Nov 16 11:03:21 server1 lxd.daemon[2148742]: path/filepath.walk({0xc005202880, 0x71}, {0x1b9f358, 0xc004288b60}, 0xc002e07430)
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:442 +0x28f
Nov 16 11:03:21 server1 lxd.daemon[2148742]: path/filepath.walk({0xc0126b4f50, 0x6d}, {0x1b9f358, 0xc009a8ab60}, 0xc002e07430)
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:442 +0x28f
Nov 16 11:03:21 server1 lxd.daemon[2148742]: path/filepath.walk({0xc009a92540, 0x69}, {0x1b9f358, 0xc004782750}, 0xc002e07430)
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:442 +0x28f
Nov 16 11:03:21 server1 lxd.daemon[2148742]: path/filepath.walk({0xc004e9ea10, 0x63}, {0x1b9f358, 0xc00c987ee0}, 0xc002e07430)
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:442 +0x28f
Nov 16 11:03:21 server1 lxd.daemon[2148742]: path/filepath.Walk({0xc004e9ea10, 0x63}, 0xc002e07430)
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:505 +0x6c
Nov 16 11:03:21 server1 lxd.daemon[2148742]: github.com/lxc/lxd/lxd/storage/drivers.(*btrfs).getSubvolumes(0xc0026f4140, {0xc004e9f420, 0x62})
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/storage/drivers/driver_btrfs_utils.go:62 +0xee
Nov 16 11:03:21 server1 lxd.daemon[2148742]: github.com/lxc/lxd/lxd/storage/drivers.(*btrfs).deleteSubvolume(0xc0026f4140, {0xc004e9f420, 0x62}, 0x1)
Nov 16 11:03:21 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/storage/drivers/driver_btrfs_utils.go:162 +0xa7
Nov 16 11:03:21 server1 lxd.daemon[2148742]: github.com/lxc/lxd/lxd/storage/drivers.(*btrfs).DeleteVolumeSnapshot(0xc0026f4140, {{0xc019726760, 0x20}, {0xc0019b03a0, 0x5}, 0xc001477320, {0x18a5d1b, 0xa}, {0x18a617b, 0xa}, ...}, ...)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/storage/drivers/driver_btrfs_volumes.go:1319 +0xb1
Nov 16 11:03:22 server1 lxd.daemon[2148742]: github.com/lxc/lxd/lxd/storage.(*lxdBackend).DeleteInstanceSnapshot(0xc001507080, {0x1bfb578, 0xc0105da9a0}, 0xc0032699d0)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/storage/backend_lxd.go:2338 +0x7c3
Nov 16 11:03:22 server1 lxd.daemon[2148742]: github.com/lxc/lxd/lxd/instance/drivers.(*lxc).Delete(0xc0105da9a0, 0x1)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/instance/drivers/driver_lxc.go:3635 +0x484
Nov 16 11:03:22 server1 lxd.daemon[2148742]: main.pruneExpiredContainerSnapshots({0x476b47, 0x445912}, 0x4457e7, {0xc000432c20, 0x2, 0xc000bba340})
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/instance.go:612 +0x88
Nov 16 11:03:22 server1 lxd.daemon[2148742]: main.pruneExpiredContainerSnapshotsTask.func1.1(0xc0003b8cd8)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/instance.go:575 +0x31
Nov 16 11:03:22 server1 lxd.daemon[2148742]: github.com/lxc/lxd/lxd/operations.(*Operation).Run.func1(0xc001f98b40, 0xc000c70000)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/operations/operations.go:270 +0x42
Nov 16 11:03:22 server1 lxd.daemon[2148742]: created by github.com/lxc/lxd/lxd/operations.(*Operation).Run
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/operations/operations.go:269 +0x128
Nov 16 11:03:22 server1 lxd.daemon[2148742]: goroutine 1024332 [syscall, 346 minutes]:
Nov 16 11:03:22 server1 lxd.daemon[2148742]: syscall.Syscall6(0x106, 0xffffffffffffff9c, 0xc123cd6ea0, 0xc0c459ec68, 0x100, 0x0, 0x0)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/syscall/asm_linux_amd64.s:43 +0x5
Nov 16 11:03:22 server1 lxd.daemon[2148742]: syscall.fstatat(0x0, {0xc123cd6dd0, 0xca}, 0xc0c459ec68, 0xca)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/syscall/zsyscall_linux_amd64.go:1441 +0x10f
Nov 16 11:03:22 server1 lxd.daemon[2148742]: syscall.Lstat(...)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/syscall/syscall_linux_amd64.go:74
Nov 16 11:03:22 server1 lxd.daemon[2148742]: os.lstatNolog.func1(...)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/os/stat_unix.go:46
Nov 16 11:03:22 server1 lxd.daemon[2148742]: os.ignoringEINTR(...)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/os/file_posix.go:246
Nov 16 11:03:22 server1 lxd.daemon[2148742]: os.lstatNolog({0xc123cd6dd0, 0x3})
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/os/stat_unix.go:45 +0x5b
Nov 16 11:03:22 server1 lxd.daemon[2148742]: os.Lstat({0xc123cd6dd0, 0xca})
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/os/stat.go:22 +0x34
Nov 16 11:03:22 server1 lxd.daemon[2148742]: path/filepath.walk({0xc017b30120, 0x89}, {0x1b9f358, 0xc016af9ee0}, 0xc02478b430)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:436 +0x1db
Nov 16 11:03:22 server1 lxd.daemon[2148742]: path/filepath.walk({0xc007300500, 0x7e}, {0x1b9f358, 0xc016af9e10}, 0xc02478b430)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:442 +0x28f
Nov 16 11:03:22 server1 lxd.daemon[2148742]: path/filepath.walk({0xc007300300, 0x78}, {0x1b9f358, 0xc016af9d40}, 0xc02478b430)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:442 +0x28f
Nov 16 11:03:22 server1 lxd.daemon[2148742]: path/filepath.walk({0xc002ba4880, 0x71}, {0x1b9f358, 0xc015014a90}, 0xc02478b430)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:442 +0x28f
Nov 16 11:03:22 server1 lxd.daemon[2148742]: path/filepath.walk({0xc00e6b2c40, 0x6d}, {0x1b9f358, 0xc006f328f0}, 0xc02478b430)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:442 +0x28f
Nov 16 11:03:22 server1 lxd.daemon[2148742]: path/filepath.walk({0xc00c74acb0, 0x69}, {0x1b9f358, 0xc00c9712b0}, 0xc02478b430)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:442 +0x28f
Nov 16 11:03:22 server1 lxd.daemon[2148742]: path/filepath.walk({0xc00c74a380, 0x63}, {0x1b9f358, 0xc00c971040}, 0xc02478b430)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:442 +0x28f
Nov 16 11:03:22 server1 lxd.daemon[2148742]: path/filepath.Walk({0xc00c74a380, 0x63}, 0xc02478b430)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:505 +0x6c
Nov 16 11:03:22 server1 lxd.daemon[2148742]: github.com/lxc/lxd/lxd/storage/drivers.(*btrfs).getSubvolumes(0xc001dc8320, {0xc00dcc2fc0, 0x62})
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/storage/drivers/driver_btrfs_utils.go:62 +0xee
Nov 16 11:03:22 server1 lxd.daemon[2148742]: github.com/lxc/lxd/lxd/storage/drivers.(*btrfs).deleteSubvolume(0xc001dc8320, {0xc00dcc2fc0, 0x62}, 0x1)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/storage/drivers/driver_btrfs_utils.go:162 +0xa7
Nov 16 11:03:22 server1 lxd.daemon[2148742]: github.com/lxc/lxd/lxd/storage/drivers.(*btrfs).DeleteVolumeSnapshot(0xc001dc8320, {{0xc01b23eae0, 0x20}, {0xc006256525, 0x5}, 0xc0024d1e60, {0x18a5d1b, 0xa}, {0x18a617b, 0xa}, ...}, ...)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/storage/drivers/driver_btrfs_volumes.go:1319 +0xb1
Nov 16 11:03:22 server1 lxd.daemon[2148742]: github.com/lxc/lxd/lxd/storage.(*lxdBackend).DeleteInstanceSnapshot(0xc000051ec0, {0x1bfb578, 0xc010bc6580}, 0xc000e943b0)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/storage/backend_lxd.go:2338 +0x7c3
Nov 16 11:03:22 server1 lxd.daemon[2148742]: github.com/lxc/lxd/lxd/instance/drivers.(*lxc).Delete(0xc010bc6580, 0x1)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/instance/drivers/driver_lxc.go:3635 +0x484
Nov 16 11:03:22 server1 lxd.daemon[2148742]: main.pruneExpiredContainerSnapshots({0xc0040f4d00, 0x445ab2}, 0x203000, {0xc00920cf40, 0x3, 0xc000c70ab0})
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/instance.go:612 +0x88
Nov 16 11:03:22 server1 lxd.daemon[2148742]: main.pruneExpiredContainerSnapshotsTask.func1.1(0x16)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/instance.go:575 +0x31
Nov 16 11:03:22 server1 lxd.daemon[2148742]: github.com/lxc/lxd/lxd/operations.(*Operation).Run.func1(0xc02279d440, 0xc000c70c60)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/operations/operations.go:270 +0x42
Nov 16 11:03:22 server1 lxd.daemon[2148742]: created by github.com/lxc/lxd/lxd/operations.(*Operation).Run
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/operations/operations.go:269 +0x128
Nov 16 11:03:22 server1 lxd.daemon[2148742]: goroutine 58 [IO wait]:
Nov 16 11:03:22 server1 lxd.daemon[2148742]: internal/poll.runtime_pollWait(0x7f4749a01fe0, 0x72)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/runtime/netpoll.go:229 +0x89
Nov 16 11:03:22 server1 lxd.daemon[2148742]: internal/poll.(*pollDesc).wait(0xc000558a00, 0xc11bfa86b0, 0x0)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/internal/poll/fd_poll_runtime.go:84 +0x32
Nov 16 11:03:22 server1 lxd.daemon[2148742]: internal/poll.(*pollDesc).waitRead(...)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/internal/poll/fd_poll_runtime.go:89
Nov 16 11:03:22 server1 lxd.daemon[2148742]: internal/poll.(*FD).Accept(0xc000558a00)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/internal/poll/fd_unix.go:402 +0x22c
Nov 16 11:03:22 server1 lxd.daemon[2148742]: net.(*netFD).accept(0xc000558a00)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/net/fd_unix.go:173 +0x35
Nov 16 11:03:22 server1 lxd.daemon[2148742]: net.(*TCPListener).accept(0xc00012c7b0)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/net/tcpsock_posix.go:140 +0x28
Nov 16 11:03:22 server1 lxd.daemon[2148742]: net.(*TCPListener).Accept(0xc00012c7b0)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/net/tcpsock.go:262 +0x3d
Nov 16 11:03:22 server1 lxd.daemon[2148742]: github.com/lxc/lxd/lxd/endpoints.(*networkListener).Accept(0xc00052c000)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/endpoints/network.go:220 +0x5e
Nov 16 11:03:22 server1 lxd.daemon[2148742]: net/http.(*Server).Serve(0xc0003fa0e0, {0x1b732e0, 0xc00052c000})
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/net/http/server.go:3001 +0x394
Nov 16 11:03:22 server1 lxd.daemon[2148742]: github.com/lxc/lxd/lxd/endpoints.(*Endpoints).serve.func1()
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/endpoints/endpoints.go:389 +0x25
Nov 16 11:03:22 server1 lxd.daemon[2148742]: gopkg.in/tomb%2ev2.(*Tomb).run(0xc00052c4b0, 0xc000490480)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/.go/pkg/mod/gopkg.in/tomb.v2@v2.0.0-20161208151619-d5d1b5820637/tomb.go:163 +0x36
Nov 16 11:03:22 server1 lxd.daemon[2148742]: created by gopkg.in/tomb%2ev2.(*Tomb).Go
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/.go/pkg/mod/gopkg.in/tomb.v2@v2.0.0-20161208151619-d5d1b5820637/tomb.go:159 +0xf3
Nov 16 11:03:22 server1 lxd.daemon[2148742]: goroutine 59 [select, 338 minutes]:
Nov 16 11:03:22 server1 lxd.daemon[2148742]: database/sql.(*DB).connectionOpener(0xc0005204e0, {0x1b90740, 0xc000258ac0})
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/database/sql/sql.go:1196 +0x93
Nov 16 11:03:22 server1 lxd.daemon[2148742]: created by database/sql.OpenDB
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/database/sql/sql.go:794 +0x188
Nov 16 11:03:22 server1 lxd.daemon[2148742]: goroutine 17315827 [syscall, 348 minutes]:
Nov 16 11:03:22 server1 lxd.daemon[2148742]: syscall.Syscall6(0x106, 0xffffffffffffff9c, 0xc10a52ab60, 0xc160c0d7c8, 0x100, 0x0, 0x0)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/syscall/asm_linux_amd64.s:43 +0x5
Nov 16 11:03:22 server1 lxd.daemon[2148742]: syscall.fstatat(0x0, {0xc10a52aa90, 0xca}, 0xc160c0d7c8, 0xca)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/syscall/zsyscall_linux_amd64.go:1441 +0x10f
Nov 16 11:03:22 server1 lxd.daemon[2148742]: syscall.Lstat(...)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/syscall/syscall_linux_amd64.go:74
Nov 16 11:03:22 server1 lxd.daemon[2148742]: os.lstatNolog.func1(...)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/os/stat_unix.go:46
Nov 16 11:03:22 server1 lxd.daemon[2148742]: os.ignoringEINTR(...)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/os/file_posix.go:246
Nov 16 11:03:22 server1 lxd.daemon[2148742]: os.lstatNolog({0xc10a52aa90, 0x3})
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/os/stat_unix.go:45 +0x5b
Nov 16 11:03:22 server1 lxd.daemon[2148742]: os.Lstat({0xc10a52aa90, 0xca})
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/os/stat.go:22 +0x34
Nov 16 11:03:22 server1 lxd.daemon[2148742]: path/filepath.walk({0xc025e737a0, 0x89}, {0x1b9f358, 0xc028c9e5b0}, 0xc006589430)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:436 +0x1db
Nov 16 11:03:22 server1 lxd.daemon[2148742]: path/filepath.walk({0xc025e3b600, 0x7e}, {0x1b9f358, 0xc028c9e4e0}, 0xc006589430)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:442 +0x28f
Nov 16 11:03:22 server1 lxd.daemon[2148742]: path/filepath.walk({0xc025e3b400, 0x78}, {0x1b9f358, 0xc028c9e410}, 0xc006589430)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:442 +0x28f
Nov 16 11:03:22 server1 lxd.daemon[2148742]: path/filepath.walk({0xc025e07400, 0x71}, {0x1b9f358, 0xc025e11450}, 0xc006589430)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:442 +0x28f
Nov 16 11:03:22 server1 lxd.daemon[2148742]: path/filepath.walk({0xc003cf5ea0, 0x6d}, {0x1b9f358, 0xc0213192b0}, 0xc006589430)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:442 +0x28f
Nov 16 11:03:22 server1 lxd.daemon[2148742]: path/filepath.walk({0xc008eae3f0, 0x69}, {0x1b9f358, 0xc016dd6680}, 0xc006589430)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:442 +0x28f
Nov 16 11:03:22 server1 lxd.daemon[2148742]: path/filepath.walk({0xc023aa5f10, 0x63}, {0x1b9f358, 0xc016dd6410}, 0xc006589430)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:442 +0x28f
Nov 16 11:03:22 server1 lxd.daemon[2148742]: path/filepath.Walk({0xc023aa5f10, 0x63}, 0xc007955430)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:505 +0x6c
Nov 16 11:03:22 server1 lxd.daemon[2148742]: github.com/lxc/lxd/lxd/storage/drivers.(*btrfs).getSubvolumes(0xc017544af0, {0xc0128fed20, 0x62})
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/storage/drivers/driver_btrfs_utils.go:62 +0xee
Nov 16 11:03:22 server1 lxd.daemon[2148742]: github.com/lxc/lxd/lxd/storage/drivers.(*btrfs).deleteSubvolume(0xc017544af0, {0xc0128fed20, 0x62}, 0x1)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/storage/drivers/driver_btrfs_utils.go:162 +0xa7
Nov 16 11:03:22 server1 lxd.daemon[2148742]: github.com/lxc/lxd/lxd/storage/drivers.(*btrfs).DeleteVolumeSnapshot(0xc017544af0, {{0xc031d43960, 0x20}, {0xc035ab1a85, 0x5}, 0xc00e39def0, {0x18a5d1b, 0xa}, {0x18a617b, 0xa}, ...}, ...)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/storage/drivers/driver_btrfs_volumes.go:1319 +0xb1
Nov 16 11:03:22 server1 lxd.daemon[2148742]: github.com/lxc/lxd/lxd/storage.(*lxdBackend).DeleteInstanceSnapshot(0xc001c6b740, {0x1bfb578, 0xc00c962580}, 0xc012315940)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/storage/backend_lxd.go:2338 +0x7c3
Nov 16 11:03:22 server1 lxd.daemon[2148742]: github.com/lxc/lxd/lxd/instance/drivers.(*lxc).Delete(0xc00c962580, 0x1)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/instance/drivers/driver_lxc.go:3635 +0x484
Nov 16 11:03:22 server1 lxd.daemon[2148742]: main.pruneExpiredContainerSnapshots({0xc01b330500, 0x77ba6a}, 0x1bea470, {0xc0120c8e40, 0x3, 0x1010000000000aa})
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/instance.go:612 +0x88
Nov 16 11:03:22 server1 lxd.daemon[2148742]: main.pruneExpiredContainerSnapshotsTask.func1.1(0x41a634)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/instance.go:575 +0x31
Nov 16 11:03:22 server1 lxd.daemon[2148742]: github.com/lxc/lxd/lxd/operations.(*Operation).Run.func1(0xc010f070e0, 0xc017802b40)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/operations/operations.go:270 +0x42
Nov 16 11:03:22 server1 lxd.daemon[2148742]: created by github.com/lxc/lxd/lxd/operations.(*Operation).Run
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/operations/operations.go:269 +0x128
Nov 16 11:03:22 server1 lxd.daemon[2148742]: goroutine 57 [IO wait, 11035 minutes]:
Nov 16 11:03:22 server1 lxd.daemon[2148742]: internal/poll.runtime_pollWait(0x7f4749a020c8, 0x72)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/runtime/netpoll.go:229 +0x89
Nov 16 11:03:22 server1 lxd.daemon[2148742]: internal/poll.(*pollDesc).wait(0xc000558800, 0x20, 0x0)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/internal/poll/fd_poll_runtime.go:84 +0x32
Nov 16 11:03:22 server1 lxd.daemon[2148742]: internal/poll.(*pollDesc).waitRead(...)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/internal/poll/fd_poll_runtime.go:89
Nov 16 11:03:22 server1 lxd.daemon[2148742]: internal/poll.(*FD).Accept(0xc000558800)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/internal/poll/fd_unix.go:402 +0x22c
Nov 16 11:03:22 server1 lxd.daemon[2148742]: net.(*netFD).accept(0xc000558800)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/net/fd_unix.go:173 +0x35
Nov 16 11:03:22 server1 lxd.daemon[2148742]: net.(*UnixListener).accept(0xc0008ffdc8)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/net/unixsock_posix.go:167 +0x1c
Nov 16 11:03:22 server1 lxd.daemon[2148742]: net.(*UnixListener).Accept(0xc0009ff260)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/net/unixsock.go:260 +0x3d
Nov 16 11:03:22 server1 lxd.daemon[2148742]: net/http.(*Server).Serve(0xc0003fa0e0, {0x1b778a0, 0xc0009ff260})
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/net/http/server.go:3001 +0x394
Nov 16 11:03:22 server1 lxd.daemon[2148742]: github.com/lxc/lxd/lxd/endpoints.(*Endpoints).serve.func1()
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/endpoints/endpoints.go:389 +0x25
Nov 16 11:03:22 server1 lxd.daemon[2148742]: gopkg.in/tomb%2ev2.(*Tomb).run(0xc00052c4b0, 0xc00052b4c0)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/.go/pkg/mod/gopkg.in/tomb.v2@v2.0.0-20161208151619-d5d1b5820637/tomb.go:163 +0x36
Nov 16 11:03:22 server1 lxd.daemon[2148742]: created by gopkg.in/tomb%2ev2.(*Tomb).Go
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/.go/pkg/mod/gopkg.in/tomb.v2@v2.0.0-20161208151619-d5d1b5820637/tomb.go:159 +0xf3
Nov 16 11:03:22 server1 lxd.daemon[2148742]: goroutine 56 [IO wait, 11035 minutes]:
Nov 16 11:03:22 server1 lxd.daemon[2148742]: internal/poll.runtime_pollWait(0x7f4749a02298, 0x72)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/runtime/netpoll.go:229 +0x89
Nov 16 11:03:22 server1 lxd.daemon[2148742]: internal/poll.(*pollDesc).wait(0xc000558880, 0x0, 0x0)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/internal/poll/fd_poll_runtime.go:84 +0x32
Nov 16 11:03:22 server1 lxd.daemon[2148742]: internal/poll.(*pollDesc).waitRead(...)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/internal/poll/fd_poll_runtime.go:89
Nov 16 11:03:22 server1 lxd.daemon[2148742]: internal/poll.(*FD).Accept(0xc000558880)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/internal/poll/fd_unix.go:402 +0x22c
Nov 16 11:03:22 server1 lxd.daemon[2148742]: net.(*netFD).accept(0xc000558880)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/net/fd_unix.go:173 +0x35
Nov 16 11:03:22 server1 lxd.daemon[2148742]: net.(*UnixListener).accept(0x4a4526)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/net/unixsock_posix.go:167 +0x1c
Nov 16 11:03:22 server1 lxd.daemon[2148742]: net.(*UnixListener).Accept(0xc0009ff2f0)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/net/unixsock.go:260 +0x3d
Nov 16 11:03:22 server1 lxd.daemon[2148742]: net/http.(*Server).Serve(0xc0003fa1c0, {0x1b778a0, 0xc0009ff2f0})
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/net/http/server.go:3001 +0x394
Nov 16 11:03:22 server1 lxd.daemon[2148742]: github.com/lxc/lxd/lxd/endpoints.(*Endpoints).serve.func1()
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/endpoints/endpoints.go:389 +0x25
Nov 16 11:03:22 server1 lxd.daemon[2148742]: gopkg.in/tomb%2ev2.(*Tomb).run(0xc00052c4b0, 0xc000490e80)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/.go/pkg/mod/gopkg.in/tomb.v2@v2.0.0-20161208151619-d5d1b5820637/tomb.go:163 +0x36
Nov 16 11:03:22 server1 lxd.daemon[2148742]: created by gopkg.in/tomb%2ev2.(*Tomb).Go
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/.go/pkg/mod/gopkg.in/tomb.v2@v2.0.0-20161208151619-d5d1b5820637/tomb.go:159 +0xf3
Nov 16 11:03:22 server1 lxd.daemon[2148742]: goroutine 233540 [syscall, 347 minutes]:
Nov 16 11:03:22 server1 lxd.daemon[2148742]: syscall.Syscall6(0x106, 0xffffffffffffff9c, 0xc131859790, 0xc129f6ab98, 0x100, 0x0, 0x0)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/syscall/asm_linux_amd64.s:43 +0x5
Nov 16 11:03:22 server1 lxd.daemon[2148742]: syscall.fstatat(0x0, {0xc1318596c0, 0xca}, 0xc129f6ab98, 0xca)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/syscall/zsyscall_linux_amd64.go:1441 +0x10f
Nov 16 11:03:22 server1 lxd.daemon[2148742]: syscall.Lstat(...)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/syscall/syscall_linux_amd64.go:74
Nov 16 11:03:22 server1 lxd.daemon[2148742]: os.lstatNolog.func1(...)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/os/stat_unix.go:46
Nov 16 11:03:22 server1 lxd.daemon[2148742]: os.ignoringEINTR(...)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/os/file_posix.go:246
Nov 16 11:03:22 server1 lxd.daemon[2148742]: os.lstatNolog({0xc1318596c0, 0x3})
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/os/stat_unix.go:45 +0x5b
Nov 16 11:03:22 server1 lxd.daemon[2148742]: os.Lstat({0xc1318596c0, 0xca})
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/os/stat.go:22 +0x34
Nov 16 11:03:22 server1 lxd.daemon[2148742]: path/filepath.walk({0xc000d7d440, 0x89}, {0x1b9f358, 0xc0047b4680}, 0xc00ad21430)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:436 +0x1db
Nov 16 11:03:22 server1 lxd.daemon[2148742]: path/filepath.walk({0xc000396a00, 0x7e}, {0x1b9f358, 0xc0047b45b0}, 0xc00ad21430)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:442 +0x28f
Nov 16 11:03:22 server1 lxd.daemon[2148742]: path/filepath.walk({0xc000396600, 0x78}, {0x1b9f358, 0xc0047b44e0}, 0xc00ad21430)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:442 +0x28f
Nov 16 11:03:22 server1 lxd.daemon[2148742]: path/filepath.walk({0xc000ea5f80, 0x71}, {0x1b9f358, 0xc0054968f0}, 0xc00ad21430)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:442 +0x28f
Nov 16 11:03:22 server1 lxd.daemon[2148742]: path/filepath.walk({0xc0005c7570, 0x6d}, {0x1b9f358, 0xc006b62d00}, 0xc00ad21430)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:442 +0x28f
Nov 16 11:03:22 server1 lxd.daemon[2148742]: path/filepath.walk({0xc0004db960, 0x69}, {0x1b9f358, 0xc002f14dd0}, 0xc00ad21430)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:442 +0x28f
Nov 16 11:03:22 server1 lxd.daemon[2148742]: path/filepath.walk({0xc0004db0a0, 0x63}, {0x1b9f358, 0xc002f14b60}, 0xc00ad21430)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:442 +0x28f
Nov 16 11:03:22 server1 lxd.daemon[2148742]: path/filepath.Walk({0xc0004db0a0, 0x63}, 0xc000571430)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:505 +0x6c
Nov 16 11:03:22 server1 lxd.daemon[2148742]: github.com/lxc/lxd/lxd/storage/drivers.(*btrfs).getSubvolumes(0xc0026f4000, {0xc002808fc0, 0x62})
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/storage/drivers/driver_btrfs_utils.go:62 +0xee
Nov 16 11:03:22 server1 lxd.daemon[2148742]: github.com/lxc/lxd/lxd/storage/drivers.(*btrfs).deleteSubvolume(0xc0026f4000, {0xc002808fc0, 0x62}, 0x1)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/storage/drivers/driver_btrfs_utils.go:162 +0xa7
Nov 16 11:03:22 server1 lxd.daemon[2148742]: github.com/lxc/lxd/lxd/storage/drivers.(*btrfs).DeleteVolumeSnapshot(0xc0026f4000, {{0xc00ece5680, 0x20}, {0xc00ee1be80, 0x5}, 0xc000e52960, {0x18a5d1b, 0xa}, {0x18a617b, 0xa}, ...}, ...)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/storage/drivers/driver_btrfs_volumes.go:1319 +0xb1
Nov 16 11:03:22 server1 lxd.daemon[2148742]: github.com/lxc/lxd/lxd/storage.(*lxdBackend).DeleteInstanceSnapshot(0xc001dd2600, {0x1bfb578, 0xc0025e2dc0}, 0xc000e13390)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/storage/backend_lxd.go:2338 +0x7c3
Nov 16 11:03:22 server1 lxd.daemon[2148742]: github.com/lxc/lxd/lxd/instance/drivers.(*lxc).Delete(0xc0025e2dc0, 0x1)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/instance/drivers/driver_lxc.go:3635 +0x484
Nov 16 11:03:22 server1 lxd.daemon[2148742]: main.pruneExpiredContainerSnapshots({0x0, 0xc011880878}, 0x203000, {0xc002296c00, 0x2, 0xc00059c510})
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/instance.go:612 +0x88
Nov 16 11:03:22 server1 lxd.daemon[2148742]: main.pruneExpiredContainerSnapshotsTask.func1.1(0x16)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/instance.go:575 +0x31
Nov 16 11:03:22 server1 lxd.daemon[2148742]: github.com/lxc/lxd/lxd/operations.(*Operation).Run.func1(0xc002be57a0, 0x8)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/operations/operations.go:270 +0x42
Nov 16 11:03:22 server1 lxd.daemon[2148742]: created by github.com/lxc/lxd/lxd/operations.(*Operation).Run
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/operations/operations.go:269 +0x128
Nov 16 11:03:22 server1 lxd.daemon[2148742]: goroutine 140 [chan receive, 278 minutes]:
Nov 16 11:03:22 server1 lxd.daemon[2148742]: github.com/lxc/lxd/lxd/cluster.runDqliteProxy(0xc0007cc600, {0xc00068ef50, 0x6}, 0x0)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/cluster/gateway.go:1120 +0x46
Nov 16 11:03:22 server1 lxd.daemon[2148742]: created by github.com/lxc/lxd/lxd/cluster.(*Gateway).init
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/cluster/gateway.go:826 +0x5c5
Nov 16 11:03:22 server1 lxd.daemon[2148742]: goroutine 21168398 [syscall, 347 minutes]:
Nov 16 11:03:22 server1 lxd.daemon[2148742]: syscall.Syscall6(0x106, 0xffffffffffffff9c, 0xc085899110, 0xc09abefca8, 0x100, 0x0, 0x0)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/syscall/asm_linux_amd64.s:43 +0x5
Nov 16 11:03:22 server1 lxd.daemon[2148742]: syscall.fstatat(0x0, {0xc085899040, 0xca}, 0xc09abefca8, 0xca)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/syscall/zsyscall_linux_amd64.go:1441 +0x10f
Nov 16 11:03:22 server1 lxd.daemon[2148742]: syscall.Lstat(...)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/syscall/syscall_linux_amd64.go:74
Nov 16 11:03:22 server1 lxd.daemon[2148742]: os.lstatNolog.func1(...)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/os/stat_unix.go:46
Nov 16 11:03:22 server1 lxd.daemon[2148742]: os.ignoringEINTR(...)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/os/file_posix.go:246
Nov 16 11:03:22 server1 lxd.daemon[2148742]: os.lstatNolog({0xc085899040, 0x3})
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/os/stat_unix.go:45 +0x5b
Nov 16 11:03:22 server1 lxd.daemon[2148742]: os.Lstat({0xc085899040, 0xca})
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/os/stat.go:22 +0x34
Nov 16 11:03:22 server1 lxd.daemon[2148742]: path/filepath.walk({0xc00d612240, 0x89}, {0x1b9f358, 0xc006ae7ee0}, 0xc037c21430)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:436 +0x1db
Nov 16 11:03:22 server1 lxd.daemon[2148742]: path/filepath.walk({0xc00b913380, 0x7e}, {0x1b9f358, 0xc006ae7e10}, 0xc037c21430)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:442 +0x28f
Nov 16 11:03:22 server1 lxd.daemon[2148742]: path/filepath.walk({0xc00b913180, 0x78}, {0x1b9f358, 0xc006ae7d40}, 0xc037c21430)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:442 +0x28f
Nov 16 11:03:22 server1 lxd.daemon[2148742]: path/filepath.walk({0xc010d6ea80, 0x71}, {0x1b9f358, 0xc01c3475f0}, 0xc037c21430)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:442 +0x28f
Nov 16 11:03:22 server1 lxd.daemon[2148742]: path/filepath.walk({0xc0237e8070, 0x6d}, {0x1b9f358, 0xc009a991e0}, 0xc037c21430)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:442 +0x28f
Nov 16 11:03:22 server1 lxd.daemon[2148742]: path/filepath.walk({0xc014dcea80, 0x69}, {0x1b9f358, 0xc029966340}, 0xc037c21430)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:442 +0x28f
Nov 16 11:03:22 server1 lxd.daemon[2148742]: path/filepath.walk({0xc0235deaf0, 0x63}, {0x1b9f358, 0xc0171badd0}, 0xc037c21430)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:442 +0x28f
Nov 16 11:03:22 server1 lxd.daemon[2148742]: path/filepath.Walk({0xc0235deaf0, 0x63}, 0xc00648b430)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/path/filepath/path.go:505 +0x6c
Nov 16 11:03:22 server1 lxd.daemon[2148742]: github.com/lxc/lxd/lxd/storage/drivers.(*btrfs).getSubvolumes(0xc016f02050, {0xc0031fb030, 0x62})
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/storage/drivers/driver_btrfs_utils.go:62 +0xee
Nov 16 11:03:22 server1 lxd.daemon[2148742]: github.com/lxc/lxd/lxd/storage/drivers.(*btrfs).deleteSubvolume(0xc016f02050, {0xc0031fb030, 0x62}, 0x1)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/storage/drivers/driver_btrfs_utils.go:162 +0xa7
Nov 16 11:03:22 server1 lxd.daemon[2148742]: github.com/lxc/lxd/lxd/storage/drivers.(*btrfs).DeleteVolumeSnapshot(0xc016f02050, {{0xc022351400, 0x20}, {0xc007226fd5, 0x5}, 0xc015505170, {0x18a5d1b, 0xa}, {0x18a617b, 0xa}, ...}, ...)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/storage/drivers/driver_btrfs_volumes.go:1319 +0xb1
Nov 16 11:03:22 server1 lxd.daemon[2148742]: github.com/lxc/lxd/lxd/storage.(*lxdBackend).DeleteInstanceSnapshot(0xc0150e83c0, {0x1bfb578, 0xc011d28dc0}, 0xc0059c8b50)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/storage/backend_lxd.go:2338 +0x7c3
Nov 16 11:03:22 server1 lxd.daemon[2148742]: github.com/lxc/lxd/lxd/instance/drivers.(*lxc).Delete(0xc011d28dc0, 0x1)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/instance/drivers/driver_lxc.go:3635 +0x484
Nov 16 11:03:22 server1 lxd.daemon[2148742]: main.pruneExpiredContainerSnapshots({0x2737c60, 0x0}, 0xc02f6e7da0, {0xc016369c40, 0x3, 0x0})
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/instance.go:612 +0x88
Nov 16 11:03:22 server1 lxd.daemon[2148742]: main.pruneExpiredContainerSnapshotsTask.func1.1(0xc02616b200)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/instance.go:575 +0x31
Nov 16 11:03:22 server1 lxd.daemon[2148742]: github.com/lxc/lxd/lxd/operations.(*Operation).Run.func1(0xc00fec5320, 0xc0008aec60)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/operations/operations.go:270 +0x42
Nov 16 11:03:22 server1 lxd.daemon[2148742]: created by github.com/lxc/lxd/lxd/operations.(*Operation).Run
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /build/lxd/parts/lxd/src/lxd/operations/operations.go:269 +0x128
Nov 16 11:03:22 server1 lxd.daemon[2148742]: goroutine 9892444 [syscall, 348 minutes]:
Nov 16 11:03:22 server1 lxd.daemon[2148742]: syscall.Syscall6(0x106, 0xffffffffffffff9c, 0xc09e925790, 0xc138ef2b98, 0x100, 0x0, 0x0)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/syscall/asm_linux_amd64.s:43 +0x5
Nov 16 11:03:22 server1 lxd.daemon[2148742]: syscall.fstatat(0x0, {0xc09e9256c0, 0xca}, 0xc138ef2b98, 0xca)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/syscall/zsyscall_linux_amd64.go:1441 +0x10f
Nov 16 11:03:22 server1 lxd.daemon[2148742]: syscall.Lstat(...)
Nov 16 11:03:22 server1 lxd.daemon[2148742]:         /snap/go/8489/src/syscall/syscall_linux_amd64.go:74
Nov 16 11:03:22 server1 lxd.daemon[2148742]: os.lstatNolog.func1(...)

I suspect I know what is going on. I have a container within which Docker containers were created (and destroyed); it uses about 2 TB of space (compressed with lzo!). Docker used the native btrfs storage driver, so lots of subvolumes were created, and perhaps the space was never reclaimed. I have seen some reports about that. This container was running, perhaps with a few (mainly idle) Docker containers inside. Perhaps btrfs was performing some housekeeping, and 2 TB is a lot of data to process. I am now trying to stop and delete this container since it is no longer needed. It could take some time.

Is there any way to inspect the file system of the container from the host? E.g., how many btrfs subvolumes were created inside the container? It looks like they are mounted in separate namespaces, and I am not sure how I can find them.
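The only lead I have so far is to look under the storage pool’s mount point on the host, something like the following (paths assumed for a snap install with a btrfs pool called “default”, and I am not sure this catches subvolumes created inside the container’s own mount namespace):

    # the container's rootfs as seen from the host
    sudo ls /var/snap/lxd/common/lxd/storage-pools/default/containers/<container-name>/rootfs
    # count the btrfs subvolumes sitting below the container's directory
    sudo btrfs subvolume list -o /var/snap/lxd/common/lxd/storage-pools/default/containers/<container-name> | wc -l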