How to specify a user/password when creating a new VM

It’s very easy to start a new VM with, say, Ubuntu 22.04 or CentOS 9. It is also easy to connect to the console. That all works.

However, there is no default user with a password, is there? How do I specify a user and a password? I’m guessing I have to use cloud-init. Are there any examples floating around?

So far I have found this Stack Overflow answer [1], which helped me with the Ubuntu 22.04 VM.
For CentOS, however, I’m still searching.

[1] How do I set a custom password with Cloud-init on Ubuntu 20.04? - Stack Overflow
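For reference, the kind of user-data from that answer looks roughly like this (the user name and hash below are placeholders, not real values; the hash would be generated with something like openssl passwd -6):

```yaml
#cloud-config
# Placeholder example: user name and password hash are made up.
ssh_pwauth: true
users:
  - name: demo
    lock_passwd: false
    passwd: "$6$replace.this$with.a.real.hash"   # generate with: openssl passwd -6
    shell: /bin/bash
```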

I’m sure someone will correct me if I’m wrong, but I believe that you need to do it manually via the terminal. So you would get into a root shell on your machine by doing something like

incus exec mycentosvm bash

Then you would proceed to use the command line to add users, change passwords, etc.

However, it’s not so hard to do it once, then use that VM as a template and copy it to the new VMs you make.
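A sketch of that manual approach (instance and user names are placeholders; on CentOS the wheel group is what grants sudo):

```
$ incus exec mycentosvm bash
[root@mycentosvm ~]# useradd -m -s /bin/bash sysadm
[root@mycentosvm ~]# passwd sysadm
[root@mycentosvm ~]# usermod -aG wheel sysadm
[root@mycentosvm ~]# exit
$ incus stop mycentosvm
$ incus copy mycentosvm newvm
```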

(An even better answer submitted moments after mine)

I believe cloud-init should work with CentOS too, if you’re using the right image, which means one with a name ending in /cloud:

$ incus image list images: centos variant=cloud type=virtual-machine
|             ALIAS              | FINGERPRINT  | PUBLIC |              DESCRIPTION               | ARCHITECTURE |      TYPE       |   SIZE    |     UPLOAD DATE      |
| centos/7/cloud (1 more)        | 32b7938298f4 | yes    | Centos 7 amd64 (20240425_07:08)        | x86_64       | VIRTUAL-MACHINE | 417.66MiB | 2024/04/25 00:00 UTC |
| centos/8-Stream/cloud (1 more) | 82df195d5858 | yes    | Centos 8-Stream amd64 (20240425_07:08) | x86_64       | VIRTUAL-MACHINE | 859.29MiB | 2024/04/25 00:00 UTC |
| centos/9-Stream/cloud (1 more) | c8f256fac559 | yes    | Centos 9-Stream amd64 (20240425_07:08) | x86_64       | VIRTUAL-MACHINE | 741.19MiB | 2024/04/25 00:00 UTC |

Of course, if you’re not looking to automate the VM setup you can just use incus shell <foo> and create a user the traditional way using adduser/useradd. (This assumes the VM has a running incus agent, which should be true if you’re using any of the supplied images).

$ incus launch images:centos/9-Stream/cloud c9 --vm
Launching c9
Error: Failed instance creation: This virtual machine image requires an agent:config disk be added

BTW I’m on Fedora 39, if that matters at all.

This virtual machine image requires an agent:config disk be added

Indeed, that’s what provides the cloud-init data. See here:

incus config device add INSTANCE cloud-init disk source=cloud-init:config

… then set your cloud-init user-data config (if it’s not already in the profile you are using), and restart the VM.

To avoid the error you can use incus create instead of incus launch, then add the config device and set your cloud-init user-data (in either order), then incus start.

EDIT: I note that the documentation says it should work if incus agent is present in the VM image, and all the VM images from the images: remote have it. In practice, I find this is not the case, and I have to use the cloud-init disk. From what I understand, it’s a question of the agent not starting early enough in the boot process for it to be detected by cloud-init as a valid data source.

So far, no luck with CentOS or Rocky

$ incus init images:centos/9-Stream/cloud c9 --vm --profile default --profile keesb-centos
Creating c9
$ incus config device add c9 cloud-init disk source=cloud-init:config
Device cloud-init added to c9
$ incus start c9
Error: This virtual machine image requires an agent:config disk be added
$ incus init images:rockylinux/9/cloud r9 --vm --profile default --profile keesb-centos
Creating r9
$ incus config device add r9 cloud-init disk source=cloud-init:config
Device cloud-init added to r9
$ incus start r9
Error: This virtual machine image requires an agent:config disk be added

On the other hand, Alpine works. And Ubuntu too, which doesn’t need all these extra steps.

Here’s the code that gives that error message:

        // Ensure an agent drive is present if the image requires it.
        if util.IsTrue(d.localConfig["image.requirements.cdrom_agent"]) {
                found := false
                for _, dev := range d.expandedDevices {
                        if dev["type"] == "disk" && dev["source"] == "agent:config" {
                                found = true
                                break
                        }
                }

                if !found {
                        return fmt.Errorf("This virtual machine image requires an agent:config disk be added")
                }
        }
OK, this is another one of the special properties. I apologise: I am getting confused between agent:config and cloud-init:config disks, and to be honest I’m not clear on the difference.

The documentation seems to be limited, but these devices are mentioned under disk devices and API extensions.

I was able to get it to work like this:

$ incus profile show agenttest
config:
  cloud-init.user-data: |
    #cloud-config
    chpasswd:
      expire: False
    ssh_pwauth: True
    users:
      - name: sysadm
        gecos: Student System Administrator
        groups: [adm, audio, cdrom, dialout, dip, floppy, plugdev, sudo, video, incus-admin]
        lock_passwd: false
        passwd: $6$XqBb4pf3$rTN75u32r30VDbY252DwLLJ0rAuxIMvZceX02YFXK/WjAJ0FVjrUCQSkdPWA7nW0DoSNJrdu9w.PGOLbZmWlb/
        shell: /bin/bash
description: Connect eth0 to virbr0 and enable cloud-init
devices:
  agent:
    source: agent:config
    type: disk
  eth0:
    name: eth0
    nictype: bridged
    parent: virbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: agenttest
$ incus launch images:centos/9-Stream/cloud c9 --vm -p agenttest
Launching c9
$ incus console c9
To detach from the console, press: <ctrl>+a q

c9 login: sysadm
[sysadm@c9 ~]$
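Note that the passwd value in the profile above is a SHA-512 crypt hash, not a plain-text password. One way to generate your own (assuming openssl is available; mkpasswd -m sha-512 also works):

```shell
# Print a SHA-512 ("$6$") crypt hash suitable for cloud-init's "passwd" field.
# 'changeme' is a placeholder password; substitute your own.
openssl passwd -6 'changeme'
```

Paste the resulting $6$... string into the profile's passwd field.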

Note: it didn’t work if I had both agent:config and cloud-init:config disks. If the profile contained these:

  agent:
    source: agent:config
    type: disk
  cloudinit:
    source: cloud-init:config
    type: disk

then the VM would start, but the console showed it was not booting properly:

$ incus console c9
[    **] (1 of 2) A start job is running for Incus - agent (30s / no limit)
[  *** ] (2 of 2) A stop job is running for Disk Manager (49s / no limit)
... cycles around these

EDIT: I spoke too soon. I tried changing the storage pool in the profile to zfs (I have a zfs pool with this name), leaving just the agent disk, and it hangs at the same position.

[  OK  ] Stopped Generate network units from Kernel command line.
[    6.259400] iTCO_vendor_support: vendor-support=0
[    6.286532] iTCO_wdt Found a ICH9 TCO device (Version=2, TCOBASE=0x0660)
[    6.296474] iTCO_wdt initialized. heartbeat=30 sec (nowayout=0)
[    6.309525] virtio_net virtio10 enp5s0: renamed from eth0
[    6.323557] RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
[    6.376342] [drm] pci: virtio-vga detected at 0000:04:00.0
[    6.379413] Console: switching to colour dummy device 80x25
[    6.381387] virtio-pci 0000:04:00.0: vgaarb: deactivate vga console
[    6.381616] [drm] features: -virgl +edid -resource_blob -host_visible
[    6.381617] [drm] features: -context_init
[    6.389296] [drm] number of scanouts: 1
[    6.389302] [drm] number of cap sets: 0
[    6.393352] Error: Driver 'pcspkr' is already registered, aborting...
[    6.399458] [drm] Initialized virtio_gpu 0.1.0 0 for 0000:04:00.0 on minor 0
[    6.407972] fbcon: virtio_gpudrmfb (fb0) is primary device
[    6.408746] Console: switching to colour frame buffer device 160x50
[    6.420560] virtio-pci 0000:04:00.0: [drm] fb0: virtio_gpudrmfb frame buffer device
[ ***  ] (2 of 2) A start job is running for Incus - agent (1min 5s / no limit)

Then I changed back to my default pool, and it’s still like this. So I think there’s some sort of race condition going on.

When it’s in this state, the agent clearly isn’t running:

$ incus shell c9
Error: VM agent isn't currently running

EDIT 2: I tried it again with incus launch ... --console and this time it worked, although the machine rebooted itself:

[    4.043426] systemd[1]: Reached target Preparation for Network.
[    4.045222] fuse: init (API version 7.36)
[    4.056991] systemd[1]: Mounting Kernel Configuration File System...
         Starting Rebuild Hardware Database...[    4.059325] systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
[    4.071156] systemd[1]: Starting Rebuild Hardware Database...

[    4.091647] systemd[1]: Starting Load/Save OS Random Seed...
         Starting Load/Save OS Random Seed...
         Starting Apply Kernel Variables...[    4.101066] systemd[1]: Starting Apply Kernel Variables...

         Starting Create System Users...
[    4.117153] systemd[1]: Starting Create System Users...
[  OK  ] Started Journal Service.[    4.138524] systemd[1]: Started Journal Service.

[  OK  ] Finished Load Kernel Module fuse.
[  OK  ] Mounted Kernel Configuration File System.
[  OK  ] Finished Load/Save OS Random Seed.
[  OK  ] Finished Apply Kernel Variables.
         Mounting FUSE Control File System...
         Starting Flush Journal to Persistent Storage...
[  OK  ] Mounted FUSE Control File System.
[  OK  ] Finished Create System Users.
[    4.248106] systemd-journald[400]: Received client request to flush runtime journal.
         Starting Create Static Device Nodes in /dev...
[  OK  ] Finished Flush Journal to Persistent Storage.
[  OK  ] Finished Create Static Device Nodes in /dev.
[  OK  ] Reached target Preparation for Local File Systems.
[    4.303534] ACPI: bus type drm_connector registered
[  OK  ] Finished Load Kernel Module drm.
[  OK  ] Finished Coldplug All udev Devices.
[  OK  ] Finished Rebuild Hardware Database.
         Starting Rule-based Manage…for Device Events and Files...
[  OK  ] Started Rule-based Manager for Device Events and Files.
         Starting Load Kernel Module configfs...
[  OK  ] Finished Load Kernel Module configfs.
[    5.094237] sd 0:0:0:1: Attached scsi generic sg0 type 0
[    5.094277] sr 0:0:1:1: Attached scsi generic sg1 type 5
         Starting Incus - agent...
[    5.236179] virtio-fs: tag <config> not found
[    5.244060] input: QEMU Virtio Keyboard as /devices/pci0000:00/0000:00:01.0/0000:01:00.2/virtio2/input/input5
[    5.273163] NET: Registered PF_VSOCK protocol family
[    5.306022] input: QEMU Virtio Tablet as /devices/pci0000:00/0000:00:01.0/0000:01:00.3/virtio3/input/input6
         Mounting /boot/efi...
[  OK  ] Mounted /boot/efi.
[  OK  ] Reached target Local File Systems.
         Starting Rebuild Dynamic Linker Cache...
         Starting Mark the need to relabel after reboot...
         Starting Automatic Boot Loader Update...
         Starting Commit a transient machine-id on disk...
         Starting Create Volatile Files and Directories...
[  OK  ] Finished Mark the need to relabel after reboot.
[  OK  ] Finished Commit a transient machine-id on disk.
[  OK  ] Finished Automatic Boot Loader Update.
[  OK  ] Removed slice Slice /system/modprobe.
[  OK  ] Closed Process Core Dump Socket.
[    5.589789] lpc_ich 0000:00:1f.0: I/O space for GPIO uninitialized
[  OK  ] Stopped Commit a transient machine-id on disk.
         Stopping Load/Save OS Random Seed...
[  OK  ] Removed slice Slice /system/getty.
[  OK  ] Removed slice Slice /system/serial-getty.
[  OK  ] Removed slice Slice /system/sshd-keygen.
[  OK  ] Stopped target Preparation for Network.
[  OK  ] Stopped Generate network units from Kernel command line.
[  OK  ] Stopped target Remote File Systems.
[  OK  ] Stopped target Path Units.
[  OK  ] Stopped target Slice Units.
[  OK  ] Removed slice User and Session Slice.
[  OK  ] Unset automount Arbitrary …s File System Automount Point.
[  OK  ] Stopped target Local Encrypted Volumes.
[  OK  ] Stopped Dispatch Password …ts to Console Directory Watch.
[  OK  ] Stopped Forward Password R…uests to Wall Directory Watch.
[  OK  ] Stopped target Local Integrity Protected Volumes.
[  OK  ] Stopped target Local Verity Protected Volumes.
[  OK  ] Stopped Read and set NIS d…e from /etc/sysconfig/network.
[  OK  ] Stopped Mark the need to relabel after reboot.
[  OK  ] Stopped Apply Kernel Variables.
[  OK  ] Stopped Load Kernel Modules.
[  OK  ] Reached target System Shutdown.
[  OK  ] Reached target Late Shutdown Services.
[  OK  ] Finished System Reboot.
[  OK  ] Reached target System Reboot.

However, it rebooted successfully. On reconnecting with incus console it was fine, and I could log in.

Great. I finally got this to work too. Thanks a lot.

So indeed the essential part was to add the following disk device to my profile (it is a device, not part of cloud-init.user-data):

    source: agent:config
    type: disk
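Condensed, the sequence that worked for me with a CentOS cloud image (the instance, device, and file names are placeholders):

```
$ incus create images:centos/9-Stream/cloud c9 --vm
$ incus config device add c9 agent disk source=agent:config
$ incus config set c9 cloud-init.user-data "$(cat user-data.yaml)"
$ incus start c9
```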