"The requested storage pool "default" already exists." when trying to initialize Incus

Hi,

Trying out Incus for our CI servers, as part of our internal evaluation on whether to continue using LXD or switch to Incus, but I’m running into issues on the server in question. The machine runs Ubuntu 22.04 and previously had LXD installed (as a snap), but that has been completely uninstalled (after manually deleting images, storage volumes and networks).

I’ve then installed the Zabbly-provided Incus packages (from the Zabbly /incus/stable/ package repository), and incus list works fine (I’ve added my own user to the incus group). However, when trying to initialize Incus for usage, I run into an error like this:

foo@some-host:~$ incus admin init
Would you like to use clustering? (yes/no) [default=no]: 
Do you want to configure a new storage pool? (yes/no) [default=yes]: 
Name of the new storage pool [default=default]: 
The requested storage pool "default" already exists. Please choose another name.
Name of the new storage pool [default=default]: ^C

I then try to list the available storage volumes, but this gives me another error:

foo@some-host:~$ incus storage list
Error: Certificate is restricted

Any ideas on how to resolve this? :thinking: This is with the 0.4-202312232115-ubuntu22.04 version of the package. Thanks in advance. :pray:

Hmm, I managed to launch an Ubuntu 22.04 container now without having run incus admin init. :thinking: How is this even possible? According to my notes, with LXD we could get errors saying Invalid devices: Failed detecting root disk device: No root device could be found if we hadn’t properly executed lxd init before launching containers.

Hmm… I seem to have managed to launch containers in a “user-restricted” project somehow. :see_no_evil: I guess there’s some fine manual reading to do…

foo@some-host:~$ incus project ls
+---------------------+--------+----------+-----------------+-----------------+----------+---------------+--------------------------------------------+---------+
|        NAME         | IMAGES | PROFILES | STORAGE VOLUMES | STORAGE BUCKETS | NETWORKS | NETWORK ZONES |                DESCRIPTION                 | USED BY |
+---------------------+--------+----------+-----------------+-----------------+----------+---------------+--------------------------------------------+---------+
| user-1000 (current) | YES    | YES      | YES             | YES             | NO       | YES           | User restricted project for "foo" (1000)   | 3       |
+---------------------+--------+----------+-----------------+-----------------+----------+---------------+--------------------------------------------+---------+
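
For the record, the configuration (and restrictions) of such an auto-created project can apparently be inspected with incus project show:

foo@some-host:~$ incus project show user-1000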

Hmm… I may be getting somewhere:

foo@some-host:~$ sudo incus storage list
+---------+--------+--------------------------------------+-------------+---------+---------+
|  NAME   | DRIVER |                SOURCE                | DESCRIPTION | USED BY |  STATE  |
+---------+--------+--------------------------------------+-------------+---------+---------+
| default | dir    | /var/lib/incus/storage-pools/default |             | 2       | CREATED |
+---------+--------+--------------------------------------+-------------+---------+---------+

However, I don’t seem to be able to delete it:

foo@some-host:~$ sudo incus storage delete default
Error: The storage pool is currently in use

It seems to be used by the default profile somehow (?)… but this profile can’t be deleted. :thinking:

foo@some-host:~$ sudo incus storage show default
config:
  source: /var/lib/incus/storage-pools/default
description: ""
name: default
driver: dir
used_by:
- /1.0/profiles/default
status: Created
locations:
- none

foo@some-host:~$ sudo incus profile delete default
Error: The "default" profile cannot be deleted

Ah, Error: The 'default' profile cannot be deleted - #3 by stgraber seems to have helped. :tada: After doing this config change, I managed to delete the default storage:

foo@some-host:~$ sudo incus storage delete default
Storage pool default deleted

…and I could now run sudo incus admin init successfully. :tada: :tada: :tada: Very happy about this!
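
For anyone else landing here: the config change from that linked post boils down to stripping the devices from the default profile (the profile itself can never be deleted, but it can be emptied). Roughly, assuming the root disk device is named root as in a stock setup:

foo@some-host:~$ sudo incus profile device remove default root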

@stgraber - what’s the feeling with incus in general: are you supposed to use sudo incus if you want to operate on the “global” level (in the default project), and never just incus as your own user? If so, this seems a bit different in philosophy vs. LXD, where you could just adduser <foo> lxd and run things in the “global” context even as your own user.

I skimmed your messages and I think the issue relates to how you removed LXD.
If you had used lxd-to-incus, it would have dealt with replacing LXD and removing it. Either before or after the migration, you would remove all images, etc. using the appropriate commands.

As an older post with the same theme shows, there are too many small things here and there that you can miss when you want to reinstall.

I envision a tutorial about how to fully remove LXD (snap package).
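
Until then, a rough sketch of the snap part, assuming you have already deleted instances, images, custom volumes and networks from within LXD:

$ sudo snap remove --purge lxd

Note that, depending on the setup, a ZFS pool created by LXD can live outside the snap’s data and survive this (as seen later in this thread).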


I think what got you here is that we now have two groups:

  • incus
  • incus-admin

If your user is only a member of incus, then they get a safe, isolated Incus experience.
That is, they get their own project, and within that project they can only do things that won’t allow a full takeover of the system.

If your user is a member of the incus-admin group, then they have full access to Incus: they can see all projects and have no restrictions placed on what they can do with their instances, meaning they’re the equivalent of root on the system.

That distinction didn’t exist with LXD and was the cause of many security reports, as being a member of the lxd group was effectively equivalent to password-less sudo to root.
We tried to do better out of the box with Incus by making it easy to decide what level of access to provide to a particular user.


TLDR: If you want the same behavior as with LXD, make sure your user is in the incus-admin group, as that’s the equivalent of LXD’s lxd group.
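
For example, assuming your user is named foo (a re-login, or newgrp, is needed for the new group membership to take effect):

$ sudo adduser foo incus-admin
$ newgrp incus-admin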

Yeah, it felt like there was something left over from the LXD installation. :thinking:

Sounds great! :+1:

Alright, thanks for this and the thorough explanation of the thinking behind the incus and incus-admin groups. I’ll try with incus-admin now and see if I can get our CI image built using that approach.

Hmm… trying now with the user in incus-admin (verified with id that it’s present in the list of effective groups), but I’m getting a new error:

hibox@some-server:~$ incus launch images:ubuntu/22.04 temp
Creating temp
Error: Failed instance creation: Failed loading project: Project not found

I get the feeling there’s some discrepancy now, probably because I ran sudo incus admin init previously (before I had added the user to the incus-admin group); I presume that should now have been just incus admin init, as my own user? The default project seems to be the current one for root, but not for my non-root account:

hibox@some-server:~$ incus project list
+---------+--------+----------+-----------------+-----------------+----------+---------------+-----------------------+---------+
|  NAME   | IMAGES | PROFILES | STORAGE VOLUMES | STORAGE BUCKETS | NETWORKS | NETWORK ZONES |      DESCRIPTION      | USED BY |
+---------+--------+----------+-----------------+-----------------+----------+---------------+-----------------------+---------+
| default | YES    | YES      | YES             | YES             | YES      | YES           | Default Incus project | 4       |
+---------+--------+----------+-----------------+-----------------+----------+---------------+-----------------------+---------+
hibox@some-server:~$ sudo incus project list
+-------------------+--------+----------+-----------------+-----------------+----------+---------------+-----------------------+---------+
|       NAME        | IMAGES | PROFILES | STORAGE VOLUMES | STORAGE BUCKETS | NETWORKS | NETWORK ZONES |      DESCRIPTION      | USED BY |
+-------------------+--------+----------+-----------------+-----------------+----------+---------------+-----------------------+---------+
| default (current) | YES    | YES      | YES             | YES             | YES      | YES           | Default Incus project | 4       |
+-------------------+--------+----------+-----------------+-----------------+----------+---------------+-----------------------+---------+

I get the feeling that I would be better off just purging all Incus data and restarting. :see_no_evil: I’ll see if I manage to do it…

Hmm… apt-get purge incus (which did remove /var/lib/incus properly; thanks for that, it’s a great-working .deb package!) didn’t seem to help. I re-ran incus admin init as non-root after reinstalling Incus, but I’m still getting the same error. :thinking:

Continuing the monologue with myself :grin:, but incus project switch default seems to have helped. :tada: Is this a minor glitch that should be documented somewhere? Perhaps it already is.
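
In case it helps someone else, the fix was just this (as far as I can tell, the current project is per-user client state, which would explain why root and my own user disagreed about it):

hibox@some-server:~$ incus project switch default
hibox@some-server:~$ incus launch images:ubuntu/22.04 temp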

I get the feeling that this was some local state that persisted for some reason. Now, after playing around and reinstalling Incus yet again, it seems to set the default project as current properly, as expected… :slightly_smiling_face:

For reference: the original “the requested storage pool already exists” error arose because both LXD and Incus were running on ZFS, and both presumed that they could use default as the name of the storage pool.

We ran into the same error on another server, and a colleague (@slovdahl, thank you :slightly_smiling_face:) helped me figure it out. :point_down: Posting it here in case it is useful to someone else.

hibox@some-server:~$ zfs list
NAME                                                                              USED  AVAIL     REFER  MOUNTPOINT
default                                                                          39.6G  44.2G       24K  legacy

The “solution” in this case was to use btrfs instead of ZFS as the storage backend, but it would also have been possible to just pick a different pool name than default. However, moving to btrfs was preferable for us.
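
For completeness, if someone wants to stay on ZFS instead: either give the new pool a different name when incus admin init asks, or get rid of the leftover pool first (destructive, so only if you are absolutely sure nothing uses it anymore):

hibox@some-server:~$ sudo zpool destroy default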