[rockchip][lxd] Yet another "I cannot get IPv4 assigned to a container"

I installed Ubuntu Focal on my Rock Pi 4A board and decided to create a few containers on it. After following all the default steps, my container does not get an IPv4 address assigned. I'd appreciate help figuring out what is causing the problem.
I don't think the stock image has netplan or systemd-networkd.
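
A quick way to double-check that (commands assumed available on a stock Ubuntu Focal image):

networkctl status 2>/dev/null | head -n 3
ls /etc/netplan/ 2>/dev/null || echo "no netplan configs"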

rock@rockpi4a:~$ sudo ps -ef | grep -i net
root 37 2 0 00:18 ? 00:00:00 [netns]
root 403 1 0 00:18 ? 00:00:03 /usr/sbin/NetworkManager --no-daemon
lxd 1794 852 0 00:20 ? 00:00:01 dnsmasq --keep-in-foreground --strict-order --bind-interfaces --except-interface=lo --pid-file= --no-ping --interface=lxdbr0 --quiet-dhcp --quiet-dhcp6 --quiet-ra --listen-address=10.163.223.1 --dhcp-no-override --dhcp-authoritative --dhcp-leasefile=/var/snap/lxd/common/lxd/networks/lxdbr0/dnsmasq.leases --dhcp-hostsfile=/var/snap/lxd/common/lxd/networks/lxdbr0/dnsmasq.hosts --dhcp-range 10.163.223.2,10.163.223.254,1h -s lxd -S /lxd/ --conf-file=/var/snap/lxd/common/lxd/networks/lxdbr0/dnsmasq.raw -u lxd -g lxd

My container:

lxc list
+------------+---------+------+------+-----------+-----------+
|    NAME    |  STATE  | IPV4 | IPV6 |   TYPE    | SNAPSHOTS |
+------------+---------+------+------+-----------+-----------+
| postgresql | RUNNING |      |      | CONTAINER | 0         |
+------------+---------+------+------+-----------+-----------+

lxc config show postgresql --expanded
architecture: aarch64
config:
  image.architecture: arm64
  image.description: Alpine 3.10 arm64 (20201118_13:00)
  image.os: Alpine
  image.release: "3.10"
  image.serial: "20201118_13:00"
  image.type: squashfs
  image.variant: default
  user.network-config: |
    version: 1
    config:
      - type: physical
        name: eth0
        subnets:
          - type: dhcp
            ipv4: true
  volatile.base_image: 03dd52a03cdc01c2c0d61672cc4eb817a982a665cea3d56f88b53bf3e569847b
  volatile.eth0.host_name: veth9703bff4
  volatile.eth0.hwaddr: 00:16:3e:25:56:ef
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
  volatile.uuid: 9653378f-803d-410a-845b-0a855df67871
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: rock_pool
    type: disk
ephemeral: false
profiles:
- privatenetwork
stateful: false
description: ""

Profile:

lxc profile show privatenetwork
config:
  user.network-config: |
    version: 1
    config:
      - type: physical
        name: eth0
        subnets:
          - type: dhcp
            ipv4: true
description: Private network LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: rock_pool
    type: disk
name: privatenetwork
used_by:
- /1.0/instances/postgresql

My assumption is that my container should get an IPv4 address assigned so I can ping it from my host machine, and the container should also be able to access the internet via eth0. I have a feeling that lxdbr0 wasn't created correctly, but I am not sure.

ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether ea:68:26:39:e2:7c brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.2/24 brd 192.168.1.255 scope global dynamic noprefixroute eth0
       valid_lft 70838sec preferred_lft 70838sec
    inet6 fe80::fd8d:3a6f:96dc:37e1/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:c5:6e:59 brd ff:ff:ff:ff:ff:ff
    inet 10.163.223.1/24 scope global lxdbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::6659:1bc7:d27b:4dae/64 scope link
       valid_lft forever preferred_lft forever
5: veth9703bff4@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
    link/ether b2:3f:9b:7a:07:3c brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 169.254.123.208/16 brd 169.254.255.255 scope global noprefixroute veth9703bff4
       valid_lft forever preferred_lft forever

Please send the output of ip a and ip r from the LXD host and from the problem container.

Please also send the output of lxc config show <container> --expanded and lxc network show <network> for the bridged network in question.

Also please send the output of iptables-save and nft list ruleset.

Thanks

ip a

rock@rockpi4a:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether ea:68:26:39:e2:7c brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.2/24 brd 192.168.1.255 scope global dynamic noprefixroute eth0
       valid_lft 70450sec preferred_lft 70450sec
    inet6 fe80::fd8d:3a6f:96dc:37e1/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:c5:6e:59 brd ff:ff:ff:ff:ff:ff
    inet 10.163.223.1/24 scope global lxdbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::6659:1bc7:d27b:4dae/64 scope link
       valid_lft forever preferred_lft forever
5: veth9703bff4@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
    link/ether b2:3f:9b:7a:07:3c brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 169.254.123.208/16 brd 169.254.255.255 scope global noprefixroute veth9703bff4
       valid_lft forever preferred_lft forever

ip r

rock@rockpi4a:~$ ip r
default via 192.168.1.1 dev eth0 proto dhcp metric 100
default via 192.168.1.1 dev eth0 proto dhcp src 192.168.1.2 metric 202
10.163.223.0/24 dev lxdbr0 proto kernel scope link src 10.163.223.1
169.254.0.0/16 dev veth9703bff4 scope link src 169.254.123.208 metric 205
192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.2 metric 100
192.168.1.0/24 dev eth0 proto dhcp scope link src 192.168.1.2 metric 202

lxc config show postgresql --expanded

rock@rockpi4a:~$ lxc config show postgresql --expanded
architecture: aarch64
config:
  image.architecture: arm64
  image.description: Alpine 3.10 arm64 (20201118_13:00)
  image.os: Alpine
  image.release: "3.10"
  image.serial: "20201118_13:00"
  image.type: squashfs
  image.variant: default
  user.network-config: |
    version: 1
    config:
      - type: physical
        name: eth0
        subnets:
          - type: dhcp
            ipv4: true
  volatile.base_image: 03dd52a03cdc01c2c0d61672cc4eb817a982a665cea3d56f88b53bf3e569847b
  volatile.eth0.host_name: veth9703bff4
  volatile.eth0.hwaddr: 00:16:3e:25:56:ef
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
  volatile.uuid: 9653378f-803d-410a-845b-0a855df67871
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: rock_pool
    type: disk
ephemeral: false
profiles:
- privatenetwork
stateful: false
description: ""

lxc network show lxdbr0

rock@rockpi4a:~$ lxc network show lxdbr0
config:
  ipv4.address: 10.163.223.1/24
  ipv4.nat: "true"
  ipv6.address: none
description: ""
name: lxdbr0
type: bridge
used_by:
- /1.0/instances/postgresql
- /1.0/profiles/default
- /1.0/profiles/privatenetwork
managed: true
status: Created
locations:
- none

I don't think iptables and nft are present on that image. Would that be a problem?
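
(For reference, a quick way to check from the host, assuming the Ubuntu package names iptables/nftables:)

command -v iptables nft
dpkg -l iptables nftables 2>/dev/null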

Forgot the output for the problem container.

rock@rockpi4a:~$ lxc exec postgresql /bin/ash
~ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
4: eth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether 00:16:3e:25:56:ef brd ff:ff:ff:ff:ff:ff
    inet6 fe80::216:3eff:fe25:56ef/64 scope link
       valid_lft forever preferred_lft forever
~ # ip r
~ #

Thanks,

So neither iptables nor nftables is installed on the Rock Pi?

Also please show the output of sudo ss -ulpn on the LXD host.

If you configure an IP statically inside the container, can you ping the host? This will show whether it's a DHCP issue or a more general comms issue.

I don't see those packages anywhere. I will try a static IP a bit later and let you know how it goes.
ss -ulpn

rock@rockpi4a:~$ sudo ss -ulpn
State                     Recv-Q                    Send-Q                                                     Local Address:Port                                          Peer Address:Port                    Process
UNCONN                    0                         0                                                                0.0.0.0:51671                                              0.0.0.0:*                        users:(("avahi-daemon",pid=396,fd=14))
UNCONN                    0                         0                                                                0.0.0.0:5353                                               0.0.0.0:*                        users:(("avahi-daemon",pid=396,fd=12))
UNCONN                    0                         0                                                           10.163.223.1:53                                                 0.0.0.0:*                        users:(("dnsmasq",pid=1794,fd=6))
UNCONN                    0                         0                                                             127.0.0.53:53                                                 0.0.0.0:*                        users:(("systemd-resolve",pid=367,fd=12))
UNCONN                    0                         0                                                                0.0.0.0:67                                                 0.0.0.0:*                        users:(("dnsmasq",pid=1794,fd=4))
UNCONN                    0                         0                                                                0.0.0.0:68                                                 0.0.0.0:*                        users:(("dhcpcd",pid=440,fd=12))
UNCONN                    0                         0                                                        169.254.123.208:123                                                0.0.0.0:*                        users:(("ntpd",pid=2140,fd=21))
UNCONN                    0                         0                                                           10.163.223.1:123                                                0.0.0.0:*                        users:(("ntpd",pid=2140,fd=20))
UNCONN                    0                         0                                                            192.168.1.2:123                                                0.0.0.0:*                        users:(("ntpd",pid=2140,fd=19))
UNCONN                    0                         0                                                              127.0.0.1:123                                                0.0.0.0:*                        users:(("ntpd",pid=2140,fd=18))
UNCONN                    0                         0                                                                0.0.0.0:123                                                0.0.0.0:*                        users:(("ntpd",pid=2140,fd=17))
UNCONN                    0                         0                                                                      *:5353                                                     *:*                        users:(("avahi-daemon",pid=396,fd=13))
UNCONN                    0                         0                                            [fe80::6659:1bc7:d27b:4dae]:53                                                       *:*                        users:(("dnsmasq",pid=1794,fd=8))
UNCONN                    0                         0                                            [fe80::6659:1bc7:d27b:4dae]:123                                                      *:*                        users:(("ntpd",pid=2140,fd=24))
UNCONN                    0                         0                                            [fe80::fd8d:3a6f:96dc:37e1]:123                                                      *:*                        users:(("ntpd",pid=2140,fd=23))
UNCONN                    0                         0                                                                  [::1]:123                                                      *:*                        users:(("ntpd",pid=2140,fd=22))
UNCONN                    0                         0                                                                      *:123                                                      *:*                        users:(("ntpd",pid=2140,fd=16))
UNCONN                    0                         0                                                                      *:49293                                                    *:*                        users:(("avahi-daemon",pid=396,fd=15))

You have a DHCP server listening on all DHCP ports; this is likely interfering with LXD's dnsmasq DHCP service.
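
A quick way to narrow down which processes hold the DHCP ports (a rough sketch; the filter just matches ports 67 and 68 in the ss output):

sudo ss -ulpn | grep -E ':(67|68)\s'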

Sorry, I am a bit confused. I thought dhcpcd5 is a client that gets an IP address from the router, not a server. Or is it all in one? Anyway, I will try to use only dnsmasq and see if it helps.

You are probably correct. However, it appears to be listening on the DHCP ports and dnsmasq isn't.

Note that LXD runs its own dnsmasq, so avoid running your own, or make sure it's not listening on all interfaces.
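
If you do need a system-wide dnsmasq, something like this in /etc/dnsmasq.conf should keep it off the LXD bridge (a minimal sketch; the interface name is assumed):

# bind only to the LAN interface and stay away from lxdbr0
interface=eth0
bind-interfaces
except-interface=lxdbr0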

Hm, I've checked the RFC, and it says a DHCP client should live on port 68. That is what dhcpcd is doing there, so it doesn't look wrong to me. Wouldn't any Ubuntu/Debian system have some sort of DHCP client sitting on port 68 by default? So anyone who installs LXD with its built-in dnsmasq would have a conflict?

I stopped dhcpcd.service and restarted snap.lxd.daemon. dnsmasq was still sitting on port 67. I restarted the container and it didn't get its IP. So I assigned a static IP to the container and restarted it. It still didn't get an IP.
Does this look like a correct config with a static IP?

rock@rockpi4a:~$ lxc config show postgresql --expanded
architecture: aarch64
config:
  image.architecture: arm64
  image.description: Alpine 3.10 arm64 (20201118_13:00)
  image.os: Alpine
  image.release: "3.10"
  image.serial: "20201118_13:00"
  image.type: squashfs
  image.variant: default
  user.network-config: |
    version: 1
    config:
      - type: physical
        name: eth0
        subnets:
          - type: dhcp
            ipv4: true
  volatile.base_image: 03dd52a03cdc01c2c0d61672cc4eb817a982a665cea3d56f88b53bf3e569847b
  volatile.eth0.host_name: veth81f7ca1e
  volatile.eth0.hwaddr: 00:16:3e:25:56:ef
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
  volatile.uuid: 9653378f-803d-410a-845b-0a855df67871
devices:
  eth0:
    ipv4.address: 10.99.10.42
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: rock_pool
    type: disk
ephemeral: false
profiles:
- privatenetwork
stateful: false
description: ""

Assigning a static IP via LXD config isn't going to tell us whether the network itself works and only DHCP is broken, because LXD static IP assignments are implemented as static DHCP leases, i.e. they still need DHCP to work.

Use ip a add x.x.x.x/24 dev eth0 inside your container instead, and then try to ping the lxdbr0 address.
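
You can also confirm the static lease was actually written out by looking at the dnsmasq hosts file LXD uses (path taken from the dnsmasq command line above; depending on the LXD version this may be a single file or a per-instance directory, so a recursive grep covers both):

sudo grep -r 00:16:3e:25:56:ef /var/snap/lxd/common/lxd/networks/lxdbr0/dnsmasq.hosts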

As for dhcpcd, I double checked and you're right: it's not listening on port 67, only dnsmasq is, so that was my mistake; I read the wrong line when horizontally scrolling the output.

Although, for what it's worth, I see no persistent DHCP client process listening on port 68 on my Ubuntu desktop, but either way it shouldn't affect dnsmasq.

Also, have you tried running a manual DHCP client inside your container, just to check it's not a configuration issue caused by cloud-init in your profile?
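
On Alpine the BusyBox udhcpc client should already be available, so something like this would do (a minimal sketch):

lxc exec postgresql -- udhcpc -i eth0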

Should x.x.x.x/24 be in the same subnet as what I defined for lxdbr0 on the host machine? I assumed yes…
So now I have this in my container.

~ # ifconfig
eth0      Link encap:Ethernet  HWaddr 00:16:3E:25:56:EF
          inet addr:10.163.223.1  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::216:3eff:fe25:56ef/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:60 errors:0 dropped:0 overruns:0 frame:0
          TX packets:18 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:10008 (9.7 KiB)  TX bytes:1925 (1.8 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:528 (528.0 B)  TX bytes:528 (528.0 B)

Did you mean to ping lxdbr0 from the container or from the host?

rock@rockpi4a:~$ lxc list
+------------+---------+---------------------+------+-----------+-----------+
|    NAME    |  STATE  |        IPV4         | IPV6 |   TYPE    | SNAPSHOTS |
+------------+---------+---------------------+------+-----------+-----------+
| postgresql | RUNNING | 10.163.223.1 (eth0) |      | CONTAINER | 0         |
+------------+---------+---------------------+------+-----------+-----------+

I don't think Alpine has a DHCP client there… I can try another image actually, but my main goal is really Alpine.

No, it shouldn't be 10.163.223.1, as that is the same IP as the bridge; if you do that you will guarantee conflicts with the bridge address and break connectivity.

You should choose any IP apart from that in the same /24 subnet as lxdbr0.
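
For example, something like this (10.163.223.50 is just an arbitrary free address in that subnet; remove the conflicting address first):

lxc exec postgresql -- ip a del 10.163.223.1/24 dev eth0
lxc exec postgresql -- ip a add 10.163.223.50/24 dev eth0
lxc exec postgresql -- ping -c 3 10.163.223.1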

So I tried another IP:

rock@rockpi4a:~$ lxc list
+------------+---------+---------------------+------+-----------+-----------+
|    NAME    |  STATE  |        IPV4         | IPV6 |   TYPE    | SNAPSHOTS |
+------------+---------+---------------------+------+-----------+-----------+
| postgresql | RUNNING | 10.163.223.2 (eth0) |      | CONTAINER | 0         |
+------------+---------+---------------------+------+-----------+-----------+
rock@rockpi4a:~$ ping 10.163.223.2
PING 10.163.223.2 (10.163.223.2) 56(84) bytes of data.
From 10.163.223.1 icmp_seq=1 Destination Host Unreachable
From 10.163.223.1 icmp_seq=2 Destination Host Unreachable
From 10.163.223.1 icmp_seq=3 Destination Host Unreachable
From 10.163.223.1 icmp_seq=4 Destination Host Unreachable

Here's an example.

1. Simulate a DHCP fault on the host by disabling DHCP services on lxdbr0:

lxc network set lxdbr0 ipv4.dhcp false

2. Launch an Alpine container and check that it gets no IPv4:

lxc launch images:alpine/3.12 c1
lxc ls
+------+---------+------+-----------------------------------------------+-----------+-----------+
| NAME |  STATE  | IPV4 |                     IPV6                      |   TYPE    | SNAPSHOTS |
+------+---------+------+-----------------------------------------------+-----------+-----------+
| c1   | RUNNING |      | fd42:6433:2aa7:637d:216:3eff:fed7:e8b2 (eth0) | CONTAINER | 0         |
+------+---------+------+-----------------------------------------------+-----------+-----------+

3. Add a manual IP to the container and test a ping to the bridge:

lxc exec c1 -- ip a add 10.143.8.2/24 dev eth0
lxc exec c1 -- ping 10.143.8.1
PING 10.143.8.1 (10.143.8.1): 56 data bytes
64 bytes from 10.143.8.1: seq=0 ttl=64 time=0.104 ms
64 bytes from 10.143.8.1: seq=1 ttl=64 time=0.153 ms

4. Re-enable DHCP on the network and test a manual DHCP client:

lxc stop c1
lxc start c1
lxc network set lxdbr0 ipv4.dhcp true
lxc exec c1 -- udhcpc
lxc ls
+------+---------+--------------------+-----------------------------------------------+-----------+-----------+
| NAME |  STATE  |        IPV4        |                     IPV6                      |   TYPE    | SNAPSHOTS |
+------+---------+--------------------+-----------------------------------------------+-----------+-----------+
| c1   | RUNNING | 10.143.8.84 (eth0) | fd42:6433:2aa7:637d:216:3eff:fed7:e8b2 (eth0) | CONTAINER | 0         |
+------+---------+--------------------+-----------------------------------------------+-----------+-----------+

Can you show ip r inside the container once you have added the address, please? I want to check that you specified the subnet correctly.
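
For reference, with the address added as a /24 the container's routing table should contain a connected route for the bridge subnet, something like this (the .50 address is just illustrative):

10.163.223.0/24 dev eth0 proto kernel scope link src 10.163.223.50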