Cloud Init not Working

I created a .yml file that is supposed to set up SSH on my container:

config:
  user.network-config: |
    version: 2
    ethernets:
        eth0:
            addresses:
            - STATIC_IP/32
            nameservers:
                addresses:
                - 8.8.8.8
                search: []
            routes:
            -   to: 0.0.0.0/0
                via: 169.254.0.1
                on-link: true
  cloud-init.user-data: |
    #cloud-config
    users:
      - name: ansible
        ssh-authorized-keys:
          - PUBKEY
        shell: /bin/bash
        sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_pwauth: True ## This line enables ssh password authentication

description: Default LXD profile
devices:
  eth0:
    ipv4.address: STATIC_IP
    nictype: routed
    parent: eno1
    type: nic
name: routed_STATIC_IP
used_by:

I start my instance with incus launch images:ubuntu/24.04/cloud $INSTANCENAME -c security.nesting=true -c security.syscalls.intercept.mknod=true -c security.syscalls.intercept.setxattr=true --profile default < ./config.yml, yet SSH does not seem to be set up correctly. Is the issue with my cloud-init config or with the way I am launching the instance?

Did you install the SSH server in the instance? It doesn’t look like you installed it as part of the cloud-init configuration.

If you already did, then make sure it's enabled with systemctl.
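
For example, on the Ubuntu images the SSH service unit is called "ssh", so something along these lines (using the $INSTANCENAME from your launch command) should tell you:

$ incus exec $INSTANCENAME -- systemctl status ssh
$ incus exec $INSTANCENAME -- systemctl enable --now ssh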

EDIT: I’m also not familiar with the routed NIC, so someone else will have to chime in, but if it was a bridge NIC then you would also need to add a proxy device to enable network connectivity to the SSH ports

You can look in the instance for the cloud-init logs; it should be quite clear from those what went wrong. They are under /var/log/ in the instance.
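
For example, to see what cloud-init did (instance name "docker" assumed here):

$ incus exec docker -- cloud-init status --long
$ incus exec docker -- less /var/log/cloud-init.log
$ incus exec docker -- less /var/log/cloud-init-output.log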


Which exact part of “SSH is not setup correctly”? Is the “ansible” user missing completely? Is the “ansible” user present, but without the authorized_keys? Is the networking not set up correctly?

For consistency I suggest you use cloud-init.network-config here. Whilst user.network-config and user.user-data are supported for backwards compatibility, I’m not sure if you can mix them.

You don’t need to set up the IP address twice. Set the IP address in “devices / eth0” if you are running on an incus-managed internal bridge, where incus acts as the upstream gateway, and you want Incus to give out the IP address via DHCP. Set the IP address in cloud-init network configuration if the container is going onto an unmanaged bridge.

A prefix length of /32 is going to cause problems for routing, are you sure that’s what you want?

@SirGiggles Thank you for catching that; I changed my config.
@simos I tried, but it seems like there is an issue with my networking section.
@candlerb I got rid of the cloud-init user part and replaced it all with incus. However, I'm not quite sure if this is the correct syntax for an incus-managed bridge:

config:
  cloud-init.user-data: |
    #cloud-config
    packages:
      - openssh-server
    users:
      - name: ubuntu
      - ssh_authorized_keys:
        - ssh-ed25519 PUBKEY

description: Default LXD profile
devices:
  eth0:
    ipv4.address: 192.168.68.92
    nictype: routed
    parent: eno1
    type: nic
name: routed_192.168.68.92
used_by:

Any thoughts what is wrong now?

For an incus managed bridge (e.g. incusbr0), this means that incus itself is acting as the default gateway for a private network, and performs NAT. This is what a default install gives you. Normally you would have the network device defined in the default profile, and it would look like this:

$ incus profile show default
config: {}
description: Default Incus profile
devices:
  eth0:
    name: eth0
    network: incusbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: default
...
project: default
$ incus network show incusbr0
config:
  ipv4.address: 10.136.163.1/24
  ipv4.nat: "true"
  ipv6.address: fd42:93af:f187:61fa::1/64
  ipv6.nat: "true"
description: ""
name: incusbr0
type: bridge
used_by:
- /1.0/profiles/default
managed: true
status: Created
locations:
- none
project: default

There’s no need to supply network details when creating a container. You would just incus launch ... and it would pick up the network from the default profile, with an address assigned by DHCP. If you want to get DHCP to give a different address, then:

devices:
  eth0:
    ipv4.address: 192.168.68.92
    network: incusbr0
    type: nic
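
If the instance already exists, the same thing can be done from the command line with incus config device override (instance name "myinstance" is just a placeholder; a restart is needed for the new DHCP lease to take effect):

$ incus config device override myinstance eth0 ipv4.address=192.168.68.92
$ incus restart myinstance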

If you don’t want this NAT, but instead want the instance to be connected directly to whatever network eno1 is plugged into (i.e. eno1 connects to the 192.168.68.x network), then normally you would use an unmanaged bridge for this.

On the host, you’d create a bridge, say br0. Optionally you can give the host itself an IP address on that network.

# in netplan
network:
  version: 2
  ethernets:
    eno1:
      dhcp4: false
      accept-ra: false
      link-local: []
  bridges:
    br0:
      interfaces: [eno1]
      parameters:
        stp: false
        forward-delay: 0
      dhcp4: false
      accept-ra: false
      # Include all the following if the host itself has an IP on the eno1 network
      # (i.e. configure IP on the bridge, not on eno1)
      addresses: [192.168.68.2]
      routes:
        - to: default
          via: 192.168.68.1
      nameservers:
        addresses: [1.1.1.1]
        search: [example.com]

incus network list would show this as an unmanaged bridge. Then you’d create a new profile, e.g.

$ incus profile show br0
config: {}
description: Bridge to br0
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
  root:
    path: /
    pool: zfs
    type: disk
name: br0
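
Roughly, a profile like this can also be built up from the command line (pool name assumed to match your setup):

$ incus profile create br0
$ incus profile device add br0 eth0 nic nictype=bridged parent=br0 name=eth0
$ incus profile device add br0 root disk path=/ pool=zfs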

Then incus launch -p br0 ... and everything is fine. You can run multiple containers bridged onto this same network. In this case, it's your upstream DHCP server which assigns the IP (incus has no knowledge of this). If you want to set a static IP address, do it in the cloud-init network configuration for the container.

However, you show that you are using nictype: routed which implies you are trying to do something much fancier, and you’ll need to explain your topology and what you’re trying to achieve.

@candlerb I think the routed nictype came from me copying old LXD docs; I'm not trying to do anything special, just a basic network bridge.

I set up a network bridge with netplan like this:

network:
  ethernets:
    eno1:
      dhcp4: false
      dhcp6: false
  version: 2

  bridges:
    incusbr0:
      interfaces: [eno1]
      addresses: [192.168.68.83/24]
      routes:
       - to: default
         via: 192.168.68.1
      nameservers:
        addresses: [8.8.8.8]
      mtu: 1500
      dhcp4: no
      dhcp6: no

And here is the default profile:

incus profile show default
config: {}
description: Default Incus profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: incusbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: default
used_by:
- /1.0/instances/sure-egret
- /1.0/instances/docker

Everything seemed good, so I tried launching an instance with my old command:
incus launch images:ubuntu/24.04/cloud docker -c security.nesting=true -c security.syscalls.intercept.mknod=true -c security.syscalls.intercept.setxattr=true < config.yml

I fixed my yml to use cloud-init to set a static IP:

config:
  cloud-init.user-data: |
    #cloud-config
    packages:
      - openssh-server
    users:
      - name: ubuntu
      - ssh_authorized_keys:
        - PUBKEY
    network:
      version: 2
      ethernets:
        eth0:
          dhcp4: no
          addresses: [192.168.68.101/24]
          routes:
            - to: default
              via: 192.168.68.1
          nameservers:
            addresses: [8.8.8.8,8.8.4.4]

And the SSH login works, but the IP is not 192.168.68.101, so it seems the network configuration is not being applied. Any idea how to fix that?

You need to separate cloud-init.network-config into its own section:

config:
  cloud-init.user-data: |
    #cloud-config
    packages:
      - openssh-server
    users:
      - name: ubuntu
        ssh_authorized_keys:        # Correction
          - PUBKEY
  cloud-init.network-config: |      # You were missing this line
    network:
      version: 2
      ethernets:
        eth0:
          dhcp4: no
          addresses: [192.168.68.101/24]
          routes:
            - to: default
              via: 192.168.68.1
          nameservers:
            addresses: [8.8.8.8,8.8.4.4]

Result:

# incus launch images:ubuntu/24.04/cloud testy <config.yml
Launching testy
# incus list testy
+-------+---------+-----------------------+---------------------------------------------+-----------+-----------+
| NAME  |  STATE  |         IPV4          |                    IPV6                     |   TYPE    | SNAPSHOTS |
+-------+---------+-----------------------+---------------------------------------------+-----------+-----------+
| testy | RUNNING | 192.168.68.101 (eth0) | XXXX:XXX:XX:XXXX:1266:6aff:fe7f:152d (eth0) | CONTAINER | 0         |
+-------+---------+-----------------------+---------------------------------------------+-----------+-----------+
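
And from a host on the same network, key-based SSH should now work (key path assumed):

$ ssh -i ~/.ssh/id_ed25519 ubuntu@192.168.68.101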

EDIT: You also had a spurious dash in the users: section, I’ve corrected that above too
