Netplan LXC profile problem

Good morning,

I have a problem with netplan.
When I declare a profile with a fixed IP, it works.
But it is then impossible to run a package installation, for example an SSH server or htop.

Here is my profile:

config:
  user.user-data: |
    #cloud-config
    hostname: openvpn
    packages:
      - python3
      - htop
    ssh_authorized_keys:
      - ssh-rsa mykey
    write_files:
      - content: |
          network:
              version: 2
              ethernets:
                  eth0:
                      dhcp4: false
                      addresses:
                      - 192.168.0.212/24
                      gateway4: 192.168.0.1
                      nameservers:
                           addresses:
                           - 192.168.0.1
        path: /etc/netplan/50-cloud-init.yaml
      - content: |
          network: {config: disabled}
        path: /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg
    package_update: true
    runcmd:
      - echo "192.168.0.212 openvpn" >> /etc/hosts
      - [netplan, apply]
description: ipvlan lxd profile
devices:
  eth0:
    nictype: ipvlan
    parent: eth0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: vpn
used_by:
- /1.0/instances/openvpn

The netplan config works, but it takes a long time, and in /var/log/cloud-init-output.log I have this:

Reading package lists...
W: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/dists/focal/InRelease  Temporary failure resolving 'ports.ubuntu.com'
W: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/dists/focal-updates/InRelease  Temporary failure resolving 'ports.ubuntu.com'
W: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/dists/focal-backports/InRelease  Temporary failure resolving 'ports.ubuntu.com'
W: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/dists/focal-security/InRelease  Temporary failure resolving 'ports.ubuntu.com'
W: Some index files failed to download. They have been ignored, or old ones used instead.
Reading package lists...
Building dependency tree...
Reading state information...
python3 is already the newest version (3.8.2-0ubuntu2).
Suggested packages:
  lsof strace
The following NEW packages will be installed:
  htop
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 76.1 kB of archives.
After this operation, 220 kB of additional disk space will be used.
Err:1 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 htop arm64 2.2.0-2build1
  Temporary failure resolving 'ports.ubuntu.com'
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/h/htop/htop_2.2.0-2build1_arm64.deb  Temporary failure resolving 'ports.ubuntu.com'
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
Cloud-init v. 21.4-0ubuntu1~20.04.1 running 'modules:final' at Mon, 07 Mar 2022 10:08:59 +0000. Up 131.24 seconds.
2022-03-07 10:09:03,955 - util.py[WARNING]: Failed to install packages: ['python3', 'htop']
2022-03-07 10:09:03,960 - cc_package_update_upgrade_install.py[WARNING]: 1 failed with exceptions, re-raising the last one
2022-03-07 10:09:03,962 - util.py[WARNING]: Running module package-update-upgrade-install (<module 'cloudinit.config.cc_package_update_upgrade_install' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_package_update_upgrade_install.py'>) failed

Thank you very much for your help.

You should use user.network-config for your netplan configuration so that cloud-init can apply the network correctly before it tries to run the other actions (such as the package installation that is failing on DNS above).
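
For example, the profile above could be restructured along these lines (a minimal sketch, assuming the same addressing and ipvlan device; the network part reuses the same netplan/v2 content as the original write_files entry):

config:
  user.network-config: |
    network:
      version: 2
      ethernets:
        eth0:
          dhcp4: false
          addresses:
            - 192.168.0.212/24
          gateway4: 192.168.0.1
          nameservers:
            addresses:
              - 192.168.0.1
  user.user-data: |
    #cloud-config
    hostname: openvpn
    package_update: true
    packages:
      - python3
      - htop
    ssh_authorized_keys:
      - ssh-rsa mykey

This removes the write_files/runcmd steps entirely: cloud-init applies user.network-config during its early network stage, so DNS already works by the time the package modules run.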

Hello, thank you for your answer.
I am not an expert.

Do you mean that I should go through an LXC network that I configure beforehand,

for example lxc network create ...?
OR
through a cloud-init function?

I’m not sure how to proceed
Thank you for your help
Sincerely

Hi tintin,
You can create a simple YAML file like the one below, called net.yaml. Then you can launch the container as:
lxc launch images:ubuntu/20.04/cloud testcloud -c user.network-config="$(cat net.yaml)"
Besides this, you can specify the network config and the user data separately (see the example after the YAML).
Regards.

network:
  version: 1
  config:
  - type: physical
    name: eth0
    subnets:
      - type: static
        ipv4: true
        address: 10.240.176.200
        netmask: 255.255.255.0
        gateway: 10.240.176.1
        control: auto
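
For instance, the two can be passed together at launch time (a sketch; user.yaml is a hypothetical cloud-config file holding the packages and SSH key from your profile):

lxc launch images:ubuntu/20.04/cloud testcloud \
  -c user.network-config="$(cat net.yaml)" \
  -c user.user-data="$(cat user.yaml)"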

Good morning,
Thank you, but in this case my container has an IP but it is totally isolated from my local network.
Cordially

Hi tintin,
Please change the address and gateway information to fit your needs; your network information may be different. That net.yaml file is just an example.
Regards.

Absolutely,
I have adapted it to my LAN configuration,
but nothing gets out.
Surprising.
Thanks for your help

The point here is just to give your container a static IP with cloud-init, nothing fancy. What do you want to achieve?
Regards.

What I'm looking to do:

  • launch the creation of my container (via Ansible, but currently I do it by hand)
  • set a fixed IP to connect it directly to my LAN network (192.168.0.0/24)
  • then install openssh-server and configure a service on it

In this case the machine starts fine with the correct IP, but it does not ping its gateway (192.168.0.1).
I have indeed adapted your configuration:

network:
  version: 1
  config:
  - type: physical
    parent: eth0
    name: eth0
    subnets:
      - type: static
        ipv4: true
        address: 192.168.0.220
        netmask: 255.255.255.0
        gateway: 192.168.0.1
        control: auto

The config of my container:

architecture: aarch64
config:
  image.architecture: arm64
  image.description: Ubuntu focal arm64 (20220305_07:43)
  image.os: Ubuntu
  image.release: focal
  image.serial: "20220305_07:43"
  image.type: squashfs
  image.variant: cloud
  user.network-config: |-
    network:
      version: 1
      config:
      - type: physical
        parent: eth0
        name: eth0
        subnets:
          - type: static
            ipv4: true
            address: 192.168.0.220
            netmask: 255.255.255.0
            gateway: 192.168.0.1
            control: auto
  volatile.base_image: 52d904c86d2626d2924b2e4c2dba5024cc19c9b04b2ee063555bd818cfb71124
  volatile.eth0.host_name: veth55ae45f0
  volatile.eth0.hwaddr: 00:16:3e:e7:1b:16
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
  volatile.uuid: 8454ca3a-8771-4d6f-97cf-188867f84363
devices: {}
ephemeral: false
profiles:
- default
stateful: false

Thank you for your advice.
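
Note that the config above shows devices: {} with only the default profile, so the NIC in use is inherited from that profile. One way to see the effective merged configuration (assuming the container is named test2, as in the outputs below):

lxc config show test2 --expanded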

Could you post the ip a and ip r command outputs of the container?
Regards.

hello,
ip a:
root@test2:~# ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
162: eth0@if163: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:8a:8a:a2 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.0.224/24 brd 192.168.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fd42:a714:7749:96fe:216:3eff:fe8a:8aa2/64 scope global dynamic mngtmpaddr noprefixroute
       valid_lft 3598sec preferred_lft 3598sec
    inet6 fe80::216:3eff:fe8a:8aa2/64 scope link
       valid_lft forever preferred_lft forever

and ip r :

root@test2:~# ip r
default via 192.168.0.1 dev eth0 proto static 
192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.224

thanks

Looks fine. Can you restart the systemd-networkd service? Or restarting the container should solve the case.
Regards.
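
For reference, that would be something like this (assuming the container is named test2, as in the outputs above):

root@test2:~# systemctl restart systemd-networkd

or, from the LXD host:

lxc restart test2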

I just did it, without any change.

I note that my container depends on the default profile, which puts it on a bridge at the network level. That can have an impact.
thanks
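
That may well be the cause: the default profile typically attaches eth0 to the lxdbr0 NAT bridge, and a container sitting on that bridge with a 192.168.0.x address would not be able to reach the LAN gateway. One way to check and work around it (a sketch; the device and interface names are assumptions based on the vpn profile above):

lxc profile show default                                          # see which network eth0 is attached to (typically lxdbr0)
lxc config device add test2 eth0 nic nictype=ipvlan parent=eth0   # instance-local eth0 overrides the profile's bridged NIC

An instance-local device with the same name takes precedence over the one inherited from the profile, so this puts the container directly on the host's eth0 via ipvlan, as in the original vpn profile.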

This worked for me.