Failed detecting root disk device

Hi,

When I create a new container, I get this error.

# lxc init images:ubuntu/focal/amd64 test04
Creating test04
Error: Failed instance creation: Failed creating instance record: Failed initialising instance: Invalid devices: Failed detecting root disk device: No root device could be found

I honestly haven’t changed anything in the configuration.
It’s a cluster with 6 nodes. Storage is LVM, on Ubuntu 20.04 with the LXD snap 4.8.
The last time I created a container must have been one or two weeks ago.

The (default) profile has a root disk defined.

# lxc profile show default
config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: local
    size: 20GB
    type: disk
name: default
used_by:
...
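For reference, the root disk device on the profile can be inspected (and, if it were missing, re-added) with the `lxc profile device` subcommands. This is a sketch using the pool name `local` from the profile above:

```shell
# List the devices defined on the default profile
lxc profile device show default

# If the root disk device were missing, it could be re-added like this
# (pool name "local" taken from the profile shown above)
lxc profile device add default root disk path=/ pool=local
```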

Does it occur if you add the -p default flag?

No, it does not.

Can you show the output of lxc storage ls please?

# lxc storage ls
+-------+-------------+--------+---------+---------+
| NAME  | DESCRIPTION | DRIVER |  STATE  | USED BY |
+-------+-------------+--------+---------+---------+
| local |             | lvm    | CREATED | 9       |
+-------+-------------+--------+---------+---------+

Can you create containers from other images?

Can you enable debug mode and then capture the logs for the lxc init command:

sudo snap set lxd daemon.debug=true; sudo systemctl reload snap.lxd.daemon
sudo tail -f /var/snap/lxd/common/lxd/logs/lxd.log

Also what about doing:

lxc init images:ubuntu/focal/amd64 test04 -s local

Yes, for example these succeeded without error:

lxc init ubuntu:18.04 u1
lxc init images:ubuntu/bionic/amd64 u2

Before getting the debug log, how do I undo snap set lxd daemon.debug=true after the test?

# lxc init images:ubuntu/focal/amd64 test04 -s local
Creating test04
The instance you are starting doesn't have any network attached to it.
  To create a new network, use: lxc network create
  To attach a network to an instance, use: lxc network attach

So that created the container (without the network).

sudo snap set lxd daemon.debug=false; sudo systemctl reload snap.lxd.daemon

Sounds like you’re not in the default project.

Can you show lxc project ls please?

# lxc project ls
+-------------------+--------+----------+-----------------+----------+---------+
|       NAME        | IMAGES | PROFILES | STORAGE VOLUMES | NETWORKS | USED BY |
+-------------------+--------+----------+-----------------+----------+---------+
| default (current) | YES    | YES      | YES             | YES      | 72      |
+-------------------+--------+----------+-----------------+----------+---------+
t=2020-12-01T19:16:45+0100 lvl=dbug msg="Found cert" name=0
t=2020-12-01T19:16:45+0100 lvl=dbug msg="Found cert" name=0
t=2020-12-01T19:16:46+0100 lvl=dbug msg="Replace current raft nodes with [{ID:1 Address:172.16.16.54:8443 Role:voter} {ID:2 Address:172.16.16.45:8443 Role:stand-by} {ID:3 Address:172.16.16.59:8443 Role:stand-by} {ID:4 Address:172.16.16.33:8443 Role:voter} {ID:5 Address:172.16.16.20:8443 Role:spare} {ID:6 Address:172.16.16.76:8443 Role:voter}]" 
t=2020-12-01T19:16:46+0100 lvl=dbug msg="Replace current raft nodes with [{ID:3 Address:172.16.16.59:8443 Role:stand-by} {ID:4 Address:172.16.16.33:8443 Role:voter} {ID:5 Address:172.16.16.20:8443 Role:spare} {ID:6 Address:172.16.16.76:8443 Role:voter} {ID:1 Address:172.16.16.54:8443 Role:voter} {ID:2 Address:172.16.16.45:8443 Role:stand-by}]" 
t=2020-12-01T19:16:46+0100 lvl=dbug msg="Replace current raft nodes with [{ID:1 Address:172.16.16.54:8443 Role:voter} {ID:2 Address:172.16.16.45:8443 Role:stand-by} {ID:3 Address:172.16.16.59:8443 Role:stand-by} {ID:4 Address:172.16.16.33:8443 Role:voter} {ID:5 Address:172.16.16.20:8443 Role:spare} {ID:6 Address:172.16.16.76:8443 Role:voter}]" 
t=2020-12-01T19:16:45+0100 lvl=dbug msg="Found cert" name=0
t=2020-12-01T19:16:46+0100 lvl=dbug msg="Replace current raft nodes with [{ID:1 Address:172.16.16.54:8443 Role:voter} {ID:2 Address:172.16.16.45:8443 Role:stand-by} {ID:3 Address:172.16.16.59:8443 Role:stand-by} {ID:4 Address:172.16.16.33:8443 Role:voter} {ID:5 Address:172.16.16.20:8443 Role:spare} {ID:6 Address:172.16.16.76:8443 Role:voter}]" 
t=2020-12-01T19:16:46+0100 lvl=dbug msg="Partial node list heartbeat received, skipping full update" 
t=2020-12-01T19:16:46+0100 lvl=dbug msg="Partial node list heartbeat received, skipping full update" 
t=2020-12-01T19:16:46+0100 lvl=dbug msg="Partial node list heartbeat received, skipping full update" 
t=2020-12-01T19:16:57+0100 lvl=dbug msg=Handling ip=@ method=GET protocol=unix url=/1.0 username=root
t=2020-12-01T19:16:57+0100 lvl=dbug msg=Handling ip=@ method=GET protocol=unix url=/1.0/events username=root
t=2020-12-01T19:16:57+0100 lvl=dbug msg="New event listener: ac549f04-c910-4e3a-81e6-52a1f195d12f" 
t=2020-12-01T19:16:57+0100 lvl=dbug msg=Handling ip=@ method=POST protocol=unix url=/1.0/instances username=root
t=2020-12-01T19:16:57+0100 lvl=dbug msg="\n\t{\n\t\t\"architecture\": \"\",\n\t\t\"config\": {},\n\t\t\"devices\": {},\n\t\t\"ephemeral\": false,\n\t\t\"profiles\": null,\n\t\t\"stateful\": false,\n\t\t\"description\": \"\",\n\t\t\"name\": \"test05\",\n\t\t\"source\": {\n\t\t\t\"type\": \"image\",\n\t\t\t\"certificate\": \"\",\n\t\t\t\"alias\": \"ubuntu/focal/amd64\",\n\t\t\t\"server\": \"https://images.linuxcontainers.org\",\n\t\t\t\"protocol\": \"simplestreams\",\n\t\t\t\"mode\": \"pull\"\n\t\t},\n\t\t\"instance_type\": \"\",\n\t\t\"type\": \"container\"\n\t}" 
t=2020-12-01T19:16:57+0100 lvl=dbug msg="Responding to instance create" 
t=2020-12-01T19:16:57+0100 lvl=dbug msg="Connecting to a remote simplestreams server" 
t=2020-12-01T19:16:57+0100 lvl=dbug msg="New task Operation: 17d5f1ea-e056-43d4-a98d-38cba499a372" 
t=2020-12-01T19:16:57+0100 lvl=dbug msg="Started task operation: 17d5f1ea-e056-43d4-a98d-38cba499a372" 
t=2020-12-01T19:16:57+0100 lvl=dbug msg="\n\t{\n\t\t\"type\": \"async\",\n\t\t\"status\": \"Operation created\",\n\t\t\"status_code\": 100,\n\t\t\"operation\": \"/1.0/operations/17d5f1ea-e056-43d4-a98d-38cba499a372\",\n\t\t\"error_code\": 0,\n\t\t\"error\": \"\",\n\t\t\"metadata\": {\n\t\t\t\"id\": \"17d5f1ea-e056-43d4-a98d-38cba499a372\",\n\t\t\t\"class\": \"task\",\n\t\t\t\"description\": \"Creating container\",\n\t\t\t\"created_at\": \"2020-12-01T19:16:57.550153125+01:00\",\n\t\t\t\"updated_at\": \"2020-12-01T19:16:57.550153125+01:00\",\n\t\t\t\"status\": \"Running\",\n\t\t\t\"status_code\": 103,\n\t\t\t\"resources\": {\n\t\t\t\t\"containers\": [\n\t\t\t\t\t\"/1.0/containers/test05\"\n\t\t\t\t],\n\t\t\t\t\"instances\": [\n\t\t\t\t\t\"/1.0/instances/test05\"\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"metadata\": null,\n\t\t\t\"may_cancel\": false,\n\t\t\t\"err\": \"\",\n\t\t\t\"location\": \"maas\"\n\t\t}\n\t}" 
t=2020-12-01T19:16:57+0100 lvl=dbug msg="Connecting to a remote simplestreams server" 
t=2020-12-01T19:16:57+0100 lvl=dbug msg=Handling ip=@ method=GET protocol=unix url=/1.0/operations/17d5f1ea-e056-43d4-a98d-38cba499a372 username=root
t=2020-12-01T19:16:57+0100 lvl=dbug msg="Image already exists in the DB" fingerprint=6a26c611488b9018dec0f2f84a36a69e16b27a4bcd4b08ff4a2bddb67b1a5fcb
t=2020-12-01T19:16:57+0100 lvl=info msg="Creating container" ephemeral=false name=test05 project=default
t=2020-12-01T19:16:57+0100 lvl=info msg="Deleting container" created=2020-12-01T19:16:57+0100 ephemeral=false name=test05 project=default used=1970-01-01T01:00:00+0100
t=2020-12-01T19:16:57+0100 lvl=dbug msg="Database error: &errors.errorString{s:\"No such object\"}" 
t=2020-12-01T19:16:57+0100 lvl=info msg="Deleted container" created=2020-12-01T19:16:57+0100 ephemeral=false name=test05 project=default used=1970-01-01T01:00:00+0100
t=2020-12-01T19:16:57+0100 lvl=eror msg="Failed initialising instance" err="Invalid devices: Failed detecting root disk device: No root device could be found" instance=test05 project=default type=container
t=2020-12-01T19:16:57+0100 lvl=dbug msg="Database error: &errors.errorString{s:\"Query deleted 0 rows instead of 1\"}" 
t=2020-12-01T19:16:57+0100 lvl=dbug msg="Failure for task operation: 17d5f1ea-e056-43d4-a98d-38cba499a372: Failed creating instance record: Failed initialising instance: Invalid devices: Failed detecting root disk device: No root device could be found" 
t=2020-12-01T19:16:57+0100 lvl=dbug msg="Event listener finished: ac549f04-c910-4e3a-81e6-52a1f195d12f" 
t=2020-12-01T19:16:57+0100 lvl=dbug msg="Disconnected event listener: ac549f04-c910-4e3a-81e6-52a1f195d12f" 
t=2020-12-01T19:16:59+0100 lvl=dbug msg="Found cert" name=0
t=2020-12-01T19:16:59+0100 lvl=dbug msg="Replace current raft nodes with [{ID:5 Address:172.16.16.20:8443 Role:spare} {ID:6 Address:172.16.16.76:8443 Role:voter} {ID:1 Address:172.16.16.54:8443 Role:voter} {ID:2 Address:172.16.16.45:8443 Role:stand-by} {ID:3 Address:172.16.16.59:8443 Role:stand-by} {ID:4 Address:172.16.16.33:8443 Role:voter}]" 
t=2020-12-01T19:17:03+0100 lvl=dbug msg="Found cert" name=0
t=2020-12-01T19:17:03+0100 lvl=dbug msg="Replace current raft nodes with [{ID:1 Address:172.16.16.54:8443 Role:voter} {ID:2 Address:172.16.16.45:8443 Role:stand-by} {ID:3 Address:172.16.16.59:8443 Role:stand-by} {ID:4 Address:172.16.16.33:8443 Role:voter} {ID:5 Address:172.16.16.20:8443 Role:spare} {ID:6 Address:172.16.16.76:8443 Role:voter}]" 

And does this work?

lxc init images:ubuntu/focal test04

Out of interest, are you specifying the arch for a reason?

# lxc init images:ubuntu/focal test05
Creating test05
Error: Failed instance creation: Failed creating instance record: Failed initialising instance: Invalid devices: Failed detecting root disk device: No root device could be found

BTW, there was no particular reason to specify the arch.

Can you show lxc image show for the relevant image in your lxc image list?

I suspect it’s that odd bug where images sometimes get dissociated from the default profile. If that’s the case, you can edit the image to add it back manually, or delete the image and have it get downloaded and loaded again.
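Both options can be sketched like this (`<fingerprint>` is a placeholder for the affected image's fingerprint):

```shell
# Option 1: re-associate the image with the default profile by editing
# its record (change "profiles: []" to "profiles: [default]")
lxc image edit <fingerprint>

# Option 2: delete the cached image; the next init re-downloads it
# with a fresh record
lxc image delete <fingerprint>
```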

# lxc image list
+----------------+--------------+--------+---------------------------------------------+--------------+-----------+----------+------------------------------+
|     ALIAS      | FINGERPRINT  | PUBLIC |                 DESCRIPTION                 | ARCHITECTURE |   TYPE    |   SIZE   |         UPLOAD DATE          |
+----------------+--------------+--------+---------------------------------------------+--------------+-----------+----------+------------------------------+
|                | 0dbe67ba5490 | no     | Ubuntu bionic amd64 (20201201_07:42)        | x86_64       | CONTAINER | 97.61MB  | Dec 1, 2020 at 6:01pm (UTC)  |
+----------------+--------------+--------+---------------------------------------------+--------------+-----------+----------+------------------------------+
|                | 6a26c611488b | no     | Ubuntu focal amd64 (20201201_07:42)         | x86_64       | CONTAINER | 99.97MB  | Dec 1, 2020 at 9:14am (UTC)  |
+----------------+--------------+--------+---------------------------------------------+--------------+-----------+----------+------------------------------+
|                | f42eab18aa24 | no     | ubuntu 18.04 LTS amd64 (release) (20201125) | x86_64       | CONTAINER | 189.81MB | Dec 1, 2020 at 6:00pm (UTC)  |
+----------------+--------------+--------+---------------------------------------------+--------------+-----------+----------+------------------------------+
# lxc image show 6a26c611488b
auto_update: true
properties:
  architecture: amd64
  description: Ubuntu focal amd64 (20201201_07:42)
  os: Ubuntu
  release: focal
  serial: "20201201_07:42"
  type: squashfs
  variant: default
public: false
expires_at: 1970-01-01T01:00:00+01:00
profiles: []

You mean this: profiles: []?

Deleting that image helped. Now I can create the container again.

Thanks for the quick response.