Useradd & subuid

I was testing some things on my 8-GPU server with LXD, but when I wanted to add a user I got this error:

useradd -m student

useradd: Can't get unique subordinate UID range
useradd: can't create subordinate user IDs

/home/jpe# cat /etc/subuid
lxd:100000:1000000000
root:100000:1000000000
jpe:165536:65536

So there must be a problem in here somewhere. Isn't that lxd count too high? Can I just modify it?

Jef

The jpe entry here already overlaps the LXD range; I don't know if that's what causes the issue.
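As a quick sanity check, overlapping entries can be spotted mechanically. This is a hypothetical helper (not part of the shadow tools), run here against a copy of the ranges shown above:

```shell
# Hypothetical overlap check for subordinate ID files: prints any pair
# of entries whose [start, start+count) ranges intersect.
check_overlaps() {
  awk -F: '{ name[NR]=$1; lo[NR]=$2; hi[NR]=$2+$3 }
           END { for (i=1;i<=NR;i++) for (j=i+1;j<=NR;j++)
                   if (lo[i] < hi[j] && lo[j] < hi[i])
                     printf "%s and %s overlap\n", name[i], name[j] }' "$1"
}

# The entries from the original /etc/subuid, copied to a demo file:
printf 'lxd:100000:1000000000\nroot:100000:1000000000\njpe:165536:65536\n' > /tmp/subuid.example
check_overlaps /tmp/subuid.example
```

With these ranges every pair overlaps: jpe's 165536-231072 sits entirely inside the billion-wide lxd/root window starting at 100000.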

Technically there are uid/gid ranges it could select after the LXD range, but it looks like the tool is getting confused.

I normally start the LXD range at 1000000 rather than 100000, which frees the first few ranges for use by the system; not sure if that makes it easier on useradd though.

Modifying this and restarting LXD seems to take forever…

Jef

Yeah, you need to be a bit careful with that. Changing the lxd/root ranges means every single file in all your containers will need modifying.

LXD will normally do that on container restart but this can be a very slow process.

Also make sure you update both /etc/subuid and /etc/subgid to match, or things will be problematic.
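Keeping the two files in sync is easy to verify. A sketch using demo copies rather than the real /etc/ files (on the live system the same `diff` would run against /etc/subuid and /etc/subgid):

```shell
# Demo copies of the uid and gid allocation files; they should be identical.
printf 'lxd:1000000:1000000000\nroot:1000000:1000000000\n' > /tmp/subuid.demo
printf 'lxd:1000000:1000000000\nroot:1000000:1000000000\n' > /tmp/subgid.demo

# diff exits 0 only when the files match, so the message prints only
# when the uid and gid ranges agree.
diff /tmp/subuid.demo /tmp/subgid.demo && echo "subuid and subgid match"
```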

OK, back online, all seems fine, and useradd is working again. Thanks Stephane.

Jef

There seems to be something wrong, though:
Error: Failed to run: /usr/lib/lxd/lxd forkstart walter /var/lib/lxd/containers /var/log/lxd/walter/lxc.conf:
Try lxc info --show-log walter for more info
root@srv2:/usr/local/bin# lxc info --show-log walter
Name: walter
Remote: unix://
Architecture: x86_64
Created: 2020/04/29 10:48 UTC
Status: Stopped
Type: persistent
Profiles: all_gpu_250GB

Log:

lxc walter 20200429120514.464 ERROR cgfsng - cgroups/cgfsng.c:mkdir_eexist_on_last:1219 - File exists - Failed to create directory "/sys/fs/cgroup/unified//lxc/walter"
lxc walter 20200429120514.464 ERROR cgfsng - cgroups/cgfsng.c:create_path_for_hierarchy:1243 - Failed to create cgroup "/sys/fs/cgroup/unified//lxc/walter"
lxc walter 20200429120514.464 ERROR cgfsng - cgroups/cgfsng.c:cgfsng_payload_create:1321 - Failed to create cgroup "/sys/fs/cgroup/unified//lxc/walter"
lxc walter 20200429120514.475 ERROR conf - conf.c:lxc_map_ids:2999 - newuidmap failed to write mapping "newuidmap: uid range [0-65536) -> [655360-720896) not allowed": newuidmap 30934 0 655360 65536
lxc walter 20200429120514.475 ERROR start - start.c:lxc_spawn:1708 - Failed to set up id mapping.
lxc walter 20200429120514.541 WARN network - network.c:lxc_delete_network_priv:2613 - Invalid argument - Failed to remove interface "veth37FWQT" from "lxdbr0"
lxc walter 20200429120514.541 ERROR lxccontainer - lxccontainer.c:wait_on_daemonized_start:842 - Received container state "ABORTING" instead of "RUNNING"
lxc walter 20200429120514.542 ERROR start - start.c:__lxc_start:1939 - Failed to spawn container "walter"
lxc walter 20200429120514.548 ERROR conf - conf.c:lxc_map_ids:2999 - newuidmap failed to write mapping "newuidmap: uid range [0-65536) -> [655360-720896) not allowed": newuidmap 30949 0 655360 65536 65536 0 1
lxc walter 20200429120514.548 ERROR conf - conf.c:userns_exec_1:4352 - Error setting up {g,u}id mappings for child process "30949"
lxc walter 20200429120514.549 WARN cgfsng - cgroups/cgfsng.c:cgfsng_payload_destroy:1122 - Failed to destroy cgroups
lxc 20200429120514.549 WARN commands - commands.c:lxc_cmd_rsp_recv:132 - Connection reset by peer - Failed to receive response for command "get_state"

What’s your current /etc/subuid and /etc/subgid?

root@srv2:/usr/local/bin# cat /etc/subuid
jpe:165536:65536
lxd:1000000:1000000000
root:1000000:1000000000
student:100000:65536
root@srv2:/usr/local/bin# cat /etc/subgid
jpe:165536:65536
lxd:1000000:1000000000
root:1000000:1000000000
student:100000:65536

Ok, that looks good, so I think LXD will just need a bit of a nudge to re-allocate a new range for the containers.

If you haven’t already, restart the LXD daemon with systemctl restart lxd.

Then you’ll want to temporarily mark your containers as privileged:

  • lxc config set NAME security.privileged true

And then mark them as unprivileged again:

  • lxc config unset NAME security.privileged

And finally try to start them. This should cause a new map to be calculated and then applied during that startup.

If that doesn’t work, then you’ll have to temporarily start the container in between marking them privileged and unsetting the privileged flag. But hopefully that’s not necessary as it would double the time needed to fix this.
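With many containers, the toggle above could be scripted. This is a dry-run sketch that only prints the commands; the names here are stand-ins, and on the real host the list would come from something like `lxc list -c n --format csv`:

```shell
# Dry run: emit the privileged on/off/start sequence for each container.
# Drop the "echo" prefixes to actually execute the commands.
containers="walter elotfi"   # stand-ins for the real container list
for c in $containers; do
  echo "lxc config set $c security.privileged true"
  echo "lxc config unset $c security.privileged"
  echo "lxc start $c"
done
```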

Setting privileged on and off didn't work; I'll try to start it in privileged mode now, but still an error:

root@srv2:/usr/local/bin# lxc config set jefcuda security.privileged true
root@srv2:/usr/local/bin# lxc start jefcuda
Error: Failed to run: /usr/lib/lxd/lxd forkstart jefcuda /var/lib/lxd/containers /var/log/lxd/jefcuda/lxc.conf:
Try lxc info --show-log jefcuda for more info
root@srv2:/usr/local/bin# lxc info --show-log jefcuda
Name: jefcuda
Remote: unix://
Architecture: x86_64
Created: 2020/04/23 09:27 UTC
Status: Stopped
Type: persistent
Profiles: all_gpu_250GB

Log:

lxc jefcuda 20200429122249.100 ERROR cgfsng - cgroups/cgfsng.c:mkdir_eexist_on_last:1219 - File exists - Failed to create directory "/sys/fs/cgroup/unified//lxc/jefcuda"
lxc jefcuda 20200429122249.101 ERROR cgfsng - cgroups/cgfsng.c:create_path_for_hierarchy:1243 - Failed to create cgroup "/sys/fs/cgroup/unified//lxc/jefcuda"
lxc jefcuda 20200429122249.101 ERROR cgfsng - cgroups/cgfsng.c:cgfsng_payload_create:1321 - Failed to create cgroup "/sys/fs/cgroup/unified//lxc/jefcuda"
lxc jefcuda 20200429122249.101 ERROR cgfsng - cgroups/cgfsng.c:mkdir_eexist_on_last:1219 - File exists - Failed to create directory "/sys/fs/cgroup/unified//lxc/jefcuda-1"
lxc jefcuda 20200429122249.101 ERROR cgfsng - cgroups/cgfsng.c:create_path_for_hierarchy:1243 - Failed to create cgroup "/sys/fs/cgroup/unified//lxc/jefcuda-1"
lxc jefcuda 20200429122249.101 ERROR cgfsng - cgroups/cgfsng.c:cgfsng_payload_create:1321 - Failed to create cgroup "/sys/fs/cgroup/unified//lxc/jefcuda-1"
lxc jefcuda 20200429122249.102 ERROR cgfsng - cgroups/cgfsng.c:mkdir_eexist_on_last:1219 - File exists - Failed to create directory "/sys/fs/cgroup/unified//lxc/jefcuda-2"
lxc jefcuda 20200429122249.102 ERROR cgfsng - cgroups/cgfsng.c:create_path_for_hierarchy:1243 - Failed to create cgroup "/sys/fs/cgroup/unified//lxc/jefcuda-2"
lxc jefcuda 20200429122249.102 ERROR cgfsng - cgroups/cgfsng.c:cgfsng_payload_create:1321 - Failed to create cgroup "/sys/fs/cgroup/unified//lxc/jefcuda-2"
lxc jefcuda 20200429122249.206 ERROR conf - conf.c:run_buffer:335 - Script exited with status 1
lxc jefcuda 20200429122249.206 ERROR conf - conf.c:lxc_setup:3589 - Failed to run mount hooks
lxc jefcuda 20200429122249.206 ERROR start - start.c:do_start:1263 - Failed to setup container "jefcuda"
lxc jefcuda 20200429122249.206 ERROR sync - sync.c:__sync_wait:62 - An error occurred in another process (expected sequence number 5)
lxc jefcuda 20200429122249.207 WARN network - network.c:lxc_delete_network_priv:2589 - Operation not permitted - Failed to remove interface "eth0" with index 41
lxc jefcuda 20200429122249.207 ERROR lxccontainer - lxccontainer.c:wait_on_daemonized_start:842 - Received container state "ABORTING" instead of "RUNNING"
lxc jefcuda 20200429122249.207 ERROR start - start.c:__lxc_start:1939 - Failed to spawn container "jefcuda"
lxc 20200429122249.210 WARN commands - commands.c:lxc_cmd_rsp_recv:132 - Connection reset by peer - Failed to receive response for command "get_state"

Anything useful in /var/log/lxd/lxd.log?

Not really, apart from the failed startup.

Jef

OK, unset privileged, try starting it again, and show that failure log; I just want to make sure it's still getting caught up on the uid/gid map.

Also, lxc config show --expanded jefcuda would be useful.

I already deleted jefcuda, but here is another one, elotfi:
root@srv2:/usr/local/bin# lxc start elotfi
Error: Failed to run: /usr/lib/lxd/lxd forkstart elotfi /var/lib/lxd/containers /var/log/lxd/elotfi/lxc.conf:
Try lxc info --show-log elotfi for more info
root@srv2:/usr/local/bin# lxc config show --expanded elotfi
architecture: x86_64
config:
  image.architecture: amd64
  image.description: ubuntu 18.04 LTS amd64 (release) (20200407)
  image.label: release
  image.os: ubuntu
  image.release: bionic
  image.serial: "20200407"
  image.version: "18.04"
  limits.cpu: "6"
  limits.memory: 26GB
  nvidia.runtime: "true"
  security.idmap.isolated: "true"
  volatile.base_image: 2cfc5a5567b8d74c0986f3d8a77a2a78e58fe22ea9abd2693112031f85afa1a1
  volatile.eth0.hwaddr: 00:16:3e:f1:00:c3
  volatile.idmap.base: "524288"
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":524288,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":524288,"Nsid":0,"Maprange":65536}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":524288,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":524288,"Nsid":0,"Maprange":65536}]'
  volatile.last_state.power: STOPPED
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  gpu:
    type: gpu
  root:
    path: /
    pool: cpool
    size: 250GB
    type: disk
ephemeral: false
profiles:
- all_gpu_250GB
stateful: false
description: ""
root@srv2:/usr/local/bin# lxc info --show-log elotfi
Name: elotfi
Remote: unix://
Architecture: x86_64
Created: 2020/04/27 08:41 UTC
Status: Stopped
Type: persistent
Profiles: all_gpu_250GB

Log:

lxc elotfi 20200429123029.716 ERROR cgfsng - cgroups/cgfsng.c:mkdir_eexist_on_last:1219 - File exists - Failed to create directory "/sys/fs/cgroup/unified//lxc/elotfi"
lxc elotfi 20200429123029.716 ERROR cgfsng - cgroups/cgfsng.c:create_path_for_hierarchy:1243 - Failed to create cgroup "/sys/fs/cgroup/unified//lxc/elotfi"
lxc elotfi 20200429123029.716 ERROR cgfsng - cgroups/cgfsng.c:cgfsng_payload_create:1321 - Failed to create cgroup "/sys/fs/cgroup/unified//lxc/elotfi"
lxc elotfi 20200429123029.716 ERROR cgfsng - cgroups/cgfsng.c:mkdir_eexist_on_last:1219 - File exists - Failed to create directory "/sys/fs/cgroup/unified//lxc/elotfi-1"
lxc elotfi 20200429123029.716 ERROR cgfsng - cgroups/cgfsng.c:create_path_for_hierarchy:1243 - Failed to create cgroup "/sys/fs/cgroup/unified//lxc/elotfi-1"
lxc elotfi 20200429123029.716 ERROR cgfsng - cgroups/cgfsng.c:cgfsng_payload_create:1321 - Failed to create cgroup "/sys/fs/cgroup/unified//lxc/elotfi-1"
lxc elotfi 20200429123029.716 ERROR cgfsng - cgroups/cgfsng.c:mkdir_eexist_on_last:1219 - File exists - Failed to create directory "/sys/fs/cgroup/unified//lxc/elotfi-2"
lxc elotfi 20200429123029.716 ERROR cgfsng - cgroups/cgfsng.c:create_path_for_hierarchy:1243 - Failed to create cgroup "/sys/fs/cgroup/unified//lxc/elotfi-2"
lxc elotfi 20200429123029.716 ERROR cgfsng - cgroups/cgfsng.c:cgfsng_payload_create:1321 - Failed to create cgroup "/sys/fs/cgroup/unified//lxc/elotfi-2"
lxc elotfi 20200429123029.728 ERROR conf - conf.c:lxc_map_ids:2999 - newuidmap failed to write mapping "newuidmap: uid range [0-65536) -> [524288-589824) not allowed": newuidmap 11967 0 524288 65536
lxc elotfi 20200429123029.729 ERROR start - start.c:lxc_spawn:1708 - Failed to set up id mapping.
lxc elotfi 20200429123029.808 WARN network - network.c:lxc_delete_network_priv:2613 - Invalid argument - Failed to remove interface "vethHJ9SNV" from "lxdbr0"
lxc elotfi 20200429123029.809 ERROR lxccontainer - lxccontainer.c:wait_on_daemonized_start:842 - Received container state "ABORTING" instead of "RUNNING"
lxc elotfi 20200429123029.810 ERROR start - start.c:__lxc_start:1939 - Failed to spawn container "elotfi"
lxc elotfi 20200429123029.816 ERROR conf - conf.c:lxc_map_ids:2999 - newuidmap failed to write mapping "newuidmap: uid range [0-65536) -> [524288-589824) not allowed": newuidmap 11982 0 524288 65536 65536 0 1
lxc elotfi 20200429123029.816 ERROR conf - conf.c:userns_exec_1:4352 - Error setting up {g,u}id mappings for child process "11982"
lxc elotfi 20200429123029.817 WARN cgfsng - cgroups/cgfsng.c:cgfsng_payload_destroy:1122 - Failed to destroy cgroups
lxc 20200429123029.817 WARN commands - commands.c:lxc_cmd_rsp_recv:132 - Connection reset by peer - Failed to receive response for command "get_state"

I also tried to recreate jefcuda, but that also gives an error:
root@srv2:/usr/local/bin# lxc launch ubuntu: -p all_gpu_250GB jefcuda
Creating jefcuda
Starting jefcuda
Error: Failed to run: /usr/lib/lxd/lxd forkstart jefcuda /var/lib/lxd/containers /var/log/lxd/jefcuda/lxc.conf:
Try lxc info --show-log local:jefcuda for more info
root@srv2:/usr/local/bin# lxc info --show-log local:jefcuda
Name: jefcuda
Remote: unix://
Architecture: x86_64
Created: 2020/04/29 12:32 UTC
Status: Stopped
Type: persistent
Profiles: all_gpu_250GB

Log:

lxc jefcuda 20200429123251.618 ERROR cgfsng - cgroups/cgfsng.c:mkdir_eexist_on_last:1219 - File exists - Failed to create directory "/sys/fs/cgroup/unified//lxc/jefcuda"
lxc jefcuda 20200429123251.618 ERROR cgfsng - cgroups/cgfsng.c:create_path_for_hierarchy:1243 - Failed to create cgroup "/sys/fs/cgroup/unified//lxc/jefcuda"
lxc jefcuda 20200429123251.618 ERROR cgfsng - cgroups/cgfsng.c:cgfsng_payload_create:1321 - Failed to create cgroup "/sys/fs/cgroup/unified//lxc/jefcuda"
lxc jefcuda 20200429123251.618 ERROR cgfsng - cgroups/cgfsng.c:mkdir_eexist_on_last:1219 - File exists - Failed to create directory "/sys/fs/cgroup/unified//lxc/jefcuda-1"
lxc jefcuda 20200429123251.618 ERROR cgfsng - cgroups/cgfsng.c:create_path_for_hierarchy:1243 - Failed to create cgroup "/sys/fs/cgroup/unified//lxc/jefcuda-1"
lxc jefcuda 20200429123251.618 ERROR cgfsng - cgroups/cgfsng.c:cgfsng_payload_create:1321 - Failed to create cgroup "/sys/fs/cgroup/unified//lxc/jefcuda-1"
lxc jefcuda 20200429123251.618 ERROR cgfsng - cgroups/cgfsng.c:mkdir_eexist_on_last:1219 - File exists - Failed to create directory "/sys/fs/cgroup/unified//lxc/jefcuda-2"
lxc jefcuda 20200429123251.618 ERROR cgfsng - cgroups/cgfsng.c:create_path_for_hierarchy:1243 - Failed to create cgroup "/sys/fs/cgroup/unified//lxc/jefcuda-2"
lxc jefcuda 20200429123251.618 ERROR cgfsng - cgroups/cgfsng.c:cgfsng_payload_create:1321 - Failed to create cgroup "/sys/fs/cgroup/unified//lxc/jefcuda-2"
lxc jefcuda 20200429123251.629 ERROR conf - conf.c:lxc_map_ids:2999 - newuidmap failed to write mapping "newuidmap: uid range [0-65536) -> [786432-851968) not allowed": newuidmap 14696 0 786432 65536
lxc jefcuda 20200429123251.629 ERROR start - start.c:lxc_spawn:1708 - Failed to set up id mapping.
lxc jefcuda 20200429123251.701 WARN network - network.c:lxc_delete_network_priv:2613 - Invalid argument - Failed to remove interface "vethNL1705" from "lxdbr0"
lxc jefcuda 20200429123251.701 ERROR lxccontainer - lxccontainer.c:wait_on_daemonized_start:842 - Received container state "ABORTING" instead of "RUNNING"
lxc jefcuda 20200429123251.702 ERROR start - start.c:__lxc_start:1939 - Failed to spawn container "jefcuda"
lxc jefcuda 20200429123251.709 ERROR conf - conf.c:lxc_map_ids:2999 - newuidmap failed to write mapping "newuidmap: uid range [0-65536) -> [786432-851968) not allowed": newuidmap 14713 0 786432 65536 65536 0 1
lxc jefcuda 20200429123251.709 ERROR conf - conf.c:userns_exec_1:4352 - Error setting up {g,u}id mappings for child process "14713"
lxc jefcuda 20200429123251.710 WARN cgfsng - cgroups/cgfsng.c:cgfsng_payload_destroy:1122 - Failed to destroy cgroups
lxc 20200429123251.710 WARN commands - commands.c:lxc_cmd_rsp_recv:132 - Connection reset by peer - Failed to receive response for command "get_state"
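The newuidmap refusals in these logs line up with the subuid change: the per-container bases (524288, 655360, 786432) all fall below root's new entry root:1000000:1000000000, so the setuid helper's check against /etc/subuid rejects them. A rough sketch of that check, using the numbers from the logs:

```shell
# Requested container map (from the failing newuidmap call) and the
# window root is now allowed to delegate (from /etc/subuid).
base=524288; count=65536
lo=1000000; span=1000000000

# newuidmap only accepts a mapping whose whole [base, base+count)
# interval fits inside [lo, lo+span).
if [ "$base" -ge "$lo" ] && [ $((base + count)) -le $((lo + span)) ]; then
  echo "range allowed"
else
  echo "range not allowed"
fi
```

With the old root:100000:1000000000 entry the same base would have been accepted, which may be why these containers worked before the edit.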

Hmm, sounds like LXD didn’t detect the new range.

Can you do:

  • systemctl stop lxd.service lxd.socket
  • lxd --debug --group lxd

That should show the detected maps.

root@srv2:/usr/local/bin# systemctl stop lxd.service lxd.socket
root@srv2:/usr/local/bin# lxd --debug --group lxd
DBUG[04-29|14:35:43] Connecting to a local LXD over a Unix socket
DBUG[04-29|14:35:43] Sending request to LXD method=GET url=http://unix.socket/1.0 etag=
INFO[04-29|14:35:43] LXD 3.0.3 is starting in normal mode path=/var/lib/lxd
INFO[04-29|14:35:43] Kernel uid/gid map:
INFO[04-29|14:35:43] - u 0 0 4294967295
INFO[04-29|14:35:43] - g 0 0 4294967295
INFO[04-29|14:35:43] Configured LXD uid/gid map:
INFO[04-29|14:35:43] - u 0 1000000 1000000000
INFO[04-29|14:35:43] - g 0 1000000 1000000000
WARN[04-29|14:35:43] CGroup memory swap accounting is disabled, swap limits will be ignored.
INFO[04-29|14:35:43] Kernel features:
INFO[04-29|14:35:43] - netnsid-based network retrieval: no
INFO[04-29|14:35:43] - unprivileged file capabilities: yes
INFO[04-29|14:35:43] Initializing local database
DBUG[04-29|14:35:43] Initializing database gateway
DBUG[04-29|14:35:43] Start database node address= id=1
DBUG[04-29|14:35:43] Raft: Restored from snapshot 1-4157-1587626671707
DBUG[04-29|14:35:43] Raft: Initial configuration (index=1): [{Suffrage:Voter ID:1 Address:0}]
DBUG[04-29|14:35:43] Raft: Node at 0 [Leader] entering Leader state
DBUG[04-29|14:35:43] Dqlite: starting event loop
DBUG[04-29|14:35:43] Dqlite: accepting connections
DBUG[04-29|14:35:43] Connecting to a local LXD over a Unix socket
DBUG[04-29|14:35:43] Sending request to LXD method=GET url=http://unix.socket/1.0 etag=
DBUG[04-29|14:35:43] Detected stale unix socket, deleting
INFO[04-29|14:35:43] Starting /dev/lxd handler:
INFO[04-29|14:35:43] - binding devlxd socket socket=/var/lib/lxd/devlxd/sock
INFO[04-29|14:35:43] REST API daemon:
INFO[04-29|14:35:43] - binding Unix socket socket=/var/lib/lxd/unix.socket
INFO[04-29|14:35:43] - binding TCP socket socket=[::]:8443
INFO[04-29|14:35:43] Initializing global database
DBUG[04-29|14:35:43] Dqlite: handling new connection (fd=19)
DBUG[04-29|14:35:43] Dqlite: connected address=0 attempt=0
INFO[04-29|14:35:43] Initializing storage pools
DBUG[04-29|14:35:43] Initializing and checking storage pool "cpool"
DBUG[04-29|14:35:43] Checking ZFS storage pool "cpool"
INFO[04-29|14:35:43] Initializing networks
DBUG[04-29|14:35:44] Connecting to a remote simplestreams server
DBUG[04-29|14:35:44] New task operation: 06b2fb5b-8322-4506-85a7-ef3d2e299e7e
INFO[04-29|14:35:44] Pruning leftover image files
DBUG[04-29|14:35:44] Started task operation: 06b2fb5b-8322-4506-85a7-ef3d2e299e7e
INFO[04-29|14:35:44] Done pruning leftover image files
INFO[04-29|14:35:44] Loading daemon configuration
DBUG[04-29|14:35:44] Success for task operation: 06b2fb5b-8322-4506-85a7-ef3d2e299e7e
DBUG[04-29|14:35:44] Initialized inotify with file descriptor 18
DBUG[04-29|14:35:44] New task operation: de322b28-9e19-454d-8263-25456a48b3db
INFO[04-29|14:35:44] Pruning expired images
DBUG[04-29|14:35:44] Started task operation: de322b28-9e19-454d-8263-25456a48b3db
INFO[04-29|14:35:44] Done pruning expired images
DBUG[04-29|14:35:44] New task operation: e2f9c541-78b1-4be7-9953-aa37bb43227c
INFO[04-29|14:35:44] Expiring log files
DBUG[04-29|14:35:44] Started task operation: e2f9c541-78b1-4be7-9953-aa37bb43227c
INFO[04-29|14:35:44] Done expiring log files
DBUG[04-29|14:35:44] Success for task operation: e2f9c541-78b1-4be7-9953-aa37bb43227c
DBUG[04-29|14:35:44] New task operation: 6a58640a-e1ea-41f8-a6ff-73832dd5ef16
INFO[04-29|14:35:44] Updating images
DBUG[04-29|14:35:44] Started task operation: 6a58640a-e1ea-41f8-a6ff-73832dd5ef16
INFO[04-29|14:35:44] Done updating images
DBUG[04-29|14:35:44] Success for task operation: de322b28-9e19-454d-8263-25456a48b3db
DBUG[04-29|14:35:44] New task operation: dde83630-9817-4990-906b-01d8546604d2
INFO[04-29|14:35:44] Updating instance types
DBUG[04-29|14:35:44] Started task operation: dde83630-9817-4990-906b-01d8546604d2
INFO[04-29|14:35:44] Done updating instance types
DBUG[04-29|14:35:44] Processing image fp=2cfc5a5567b8d74c0986f3d8a77a2a78e58fe22ea9abd2693112031f85afa1a1 server=https://cloud-images.ubuntu.com/releases protocol=simplestreams alias=default
DBUG[04-29|14:35:44] Connecting to a remote simplestreams server
DBUG[04-29|14:35:45] Image already exists in the db image=2cfc5a5567b8d74c0986f3d8a77a2a78e58fe22ea9abd2693112031f85afa1a1
DBUG[04-29|14:35:45] Image already exists on storage pool “cpool”
DBUG[04-29|14:35:45] Already up to date fp=2cfc5a5567b8d74c0986f3d8a77a2a78e58fe22ea9abd2693112031f85afa1a1
DBUG[04-29|14:35:45] Processing image protocol=simplestreams alias=16.04 fp=ec1cd72cb9d1c00f4163b3da9ad24fdeb372a782375fcdd9c6161c1086a3fbda server=https://cloud-images.ubuntu.com/releases
DBUG[04-29|14:35:45] Using SimpleStreams cache entry server=https://cloud-images.ubuntu.com/releases expiry=2020-04-29T15:35:45+0200
DBUG[04-29|14:35:45] Image already exists in the db image=ec1cd72cb9d1c00f4163b3da9ad24fdeb372a782375fcdd9c6161c1086a3fbda
DBUG[04-29|14:35:45] Already up to date fp=ec1cd72cb9d1c00f4163b3da9ad24fdeb372a782375fcdd9c6161c1086a3fbda
DBUG[04-29|14:35:45] Success for task operation: 6a58640a-e1ea-41f8-a6ff-73832dd5ef16
DBUG[04-29|14:35:46] Success for task operation: dde83630-9817-4990-906b-01d8546604d2
DBUG[04-29|14:36:02] Found cert k=2
DBUG[04-29|14:36:02] handling method=GET url=/1.0 ip=143.169.171.52:38160
DBUG[04-29|14:36:02] Found cert k=2
DBUG[04-29|14:36:02] Found cert k=2
DBUG[04-29|14:36:02] handling method=GET url=/1.0 ip=143.169.171.52:38160
DBUG[04-29|14:36:02] Found cert k=2
DBUG[04-29|14:36:02] Found cert k=2
DBUG[04-29|14:36:02] handling method=GET url=/1.0 ip=143.169.171.52:38168
DBUG[04-29|14:36:02] Found cert k=2
DBUG[04-29|14:36:02] Found cert k=2
DBUG[04-29|14:36:02] handling ip=143.169.171.52:38176 method=GET url=/1.0
DBUG[04-29|14:36:02] Found cert k=2
DBUG[04-29|14:36:04] Found cert k=2
DBUG[04-29|14:36:04] handling method=GET url=/1.0 ip=143.169.171.52:38182
DBUG[04-29|14:36:04] Found cert k=2

Ok, so it detected the right range. Is that lxc launch still failing?

I restarted, and now it gets stuck rebuilding the database:

t=2020-04-29T14:38:53+0200 lvl=info msg="Kernel features:"
t=2020-04-29T14:38:53+0200 lvl=info msg=" - netnsid-based network retrieval: no"
t=2020-04-29T14:38:53+0200 lvl=info msg=" - unprivileged file capabilities: yes"
t=2020-04-29T14:38:53+0200 lvl=info msg="Initializing local database"
t=2020-04-29T14:38:54+0200 lvl=info msg="Starting /dev/lxd handler:"
t=2020-04-29T14:38:54+0200 lvl=info msg=" - binding devlxd socket" socket=/var/lib/lxd/devlxd/sock
t=2020-04-29T14:38:54+0200 lvl=info msg="REST API daemon:"
t=2020-04-29T14:38:54+0200 lvl=info msg=" - binding Unix socket" inherited=true socket=/var/lib/lxd/unix.socket
t=2020-04-29T14:38:54+0200 lvl=info msg=" - binding TCP socket" socket=[::]:8443
t=2020-04-29T14:38:54+0200 lvl=info msg="Initializing global database"

You stopped the one you started by hand, right?

If you had two LXD daemons running, that would explain the hang: the second one blocks waiting for the database lock.
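A quick way to check for a stray daemon is to list processes named lxd; more than one line here would point at the locking problem (this sketch prints a placeholder when none are running at all):

```shell
# List any running lxd daemons with their full command lines;
# fall back to a message when pgrep finds nothing.
pgrep -ax lxd || echo "no lxd daemon found"
```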