Containers not getting IPv4

I am using Ubuntu 20.04 with LXD 4.0.5.

The container is not getting an IPv4 address.

root@test24:/etc/netplan# lxc profile show br0profile
config: {}
description: Bridged networking LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
name: br0profile
used_by:
- /1.0/instances/testserver1
- /1.0/instances/testserver2
root@test24:/etc/netplan# lxc config show testserver2
architecture: x86_64
config:
  image.architecture: amd64
  image.description: ubuntu 20.04 LTS amd64 (release) (20210325)
  image.label: release
  image.os: ubuntu
  image.release: focal
  image.serial: "20210325"
  image.type: squashfs
  image.version: "20.04"
  volatile.base_image: 46701fa2d99c72583f858c50a25f9f965f06a266b997be7a57a8e66c72b5175b
  volatile.eth0.host_name: veth7ae01572
  volatile.eth0.hwaddr: 00:16:3e:11:d5:12
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
  volatile.uuid: d0177ed5-66cc-461f-8d2b-ca49df95009b
devices:
  eth0:
    ipv4.address: 172.17.5.22
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
ephemeral: false
profiles:
- default
- br0profile
stateful: false
description: ""
root@test24:/etc/netplan# lxc shell testserver2
root@testserver2:~# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
45: eth0@if46: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:11:d5:12 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::216:3eff:fe11:d512/64 scope link
       valid_lft forever preferred_lft forever

On the host, the bridge is set up like this:

root@test24:/etc/netplan# more 00-installer-config.yaml
# This is the network config written by 'subiquity'
network:
  version: 2
  bonds:
    bond0:
      interfaces:
        - eno1
        - eno2
      parameters:
        mode: active-backup
        primary: eno1
  ethernets:
    eno1: {}
    eno2: {}
  bridges:
    br0:
      addresses:
       - 172.17.1.24/16
      dhcp4: false
      gateway4: 172.17.1.1
      nameservers:
        addresses:
         - 8.8.8.8
         - 172.17.1.104
         - 172.17.1.106
        search:
         - xxxxx.com
      interfaces:
       - bond0
      parameters:
       stp: true
       forward-delay: 4
      dhcp6: false

The ipv4.address NIC setting of 172.17.5.22 will have no effect unless your instance is connected to a LXD-managed bridge, because that setting is used to create a static DHCP lease in the LXD-managed DHCP server. So I would suggest removing it as a starting point, to avoid confusion.
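Since eth0 with ipv4.address is defined directly on testserver2 (as shown in the config above), something like this should clear just that key (a sketch):

lxc config device unset testserver2 eth0 ipv4.address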

In this case I believe your instance NIC is connected to an unmanaged bridge br0, which on the host is connected (by way of a bond) to the host’s external network.

Thus I am assuming you’re expecting an external DHCP server to be giving your instances their IP config?

Have you checked your host’s firewall config (discussed in this thread) to see if it’s blocking DHCP packets?
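One quick way to check (a sketch; tool availability on the host is an assumption) is to watch for DHCP traffic on the bridge while the container retries, and to look for rules that could drop it:

# watch for DHCP requests/replies crossing the bridge
tcpdump -ni br0 udp port 67 or udp port 68

# list any REJECT/DROP rules that might catch them
iptables -S | grep -Ei 'reject|drop'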

Hi Thomas,
Yes, I am using a bridge, br0, created with netplan. Since we aren’t running a DHCP service, I was able to get it working by updating the container’s /etc/netplan/50-cloud-init.yaml to:
network:
  version: 2
  ethernets:
    eth0:
      addresses:
        - 172.17.5.22/16
      gateway4: 172.17.1.1
      nameservers:
        addresses:
          - 8.8.8.8
          - 172.17.1.104

netplan apply
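After applying, the new address can be confirmed from inside the container with, e.g.:

ip -4 addr show eth0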

Is there any way to get the LXD containers or VMs on the LAN using the LXD-managed bridge lxdbr0?

The problem will be that if you connect the lxdbr0 managed bridge to the external network, then the DHCP server LXD runs on lxdbr0 will also start issuing IPs to the other devices on your external network, and those devices will start routing traffic via the LXD host as their default gateway.
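You can inspect the subnet and DHCP settings LXD manages on that bridge with:

lxc network show lxdbr0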

You can instead run a DHCP server on the external network, and LXD’s instances will use it when connected to an unmanaged bridge.
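For example, a new instance can be attached to the unmanaged bridge by reusing the br0profile from earlier in this thread (a sketch; the instance name testserver3 is made up):

lxc launch ubuntu:20.04 testserver3 --profile default --profile br0profile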

I notice that br0 is getting the same MAC address as one of the VM tap interfaces,
72:1b:9a:60:0e:f2, i.e. the lowest one.

10: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 72:1b:9a:60:0e:f2 brd ff:ff:ff:ff:ff:ff
    inet 172.17.1.24/16 brd 172.17.255.255 scope global br0
       valid_lft forever preferred_lft forever
    inet6 fe80::8813:ff:fe08:75e5/64 scope link
       valid_lft forever preferred_lft forever
58: tapcf0c3fc3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master br0 state UP group default qlen 1000
    link/ether 72:1b:9a:60:0e:f2 brd ff:ff:ff:ff:ff:ff
67: tap7431a4b9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master br0 state UP group default qlen 1000
    link/ether da:bc:4f:4f:fb:29 brd ff:ff:ff:ff:ff:ff

Is that normal?

Yes, that’s normal unless you explicitly set the MAC address of your bridge, which is a good idea to avoid it fluctuating.

The worst part is that every time I create a new VM I lose my connection to the host for 4-5 minutes. I don’t recall this happening when using KVM.

That’s what happens when you bridge your physical ethernet device and don’t directly set a MAC address on the bridge. The bridge changes address every time something is added to or removed from it, requiring everything on your network to do new ARP queries to talk to your host.

Can I just add a made-up MAC address for br0 in the netplan config, or does it have to be an existing MAC from one of the interfaces on the host?

bridges:
  br0:
    macaddress: 00:0a:2e:c9:20:03  # just add this?
    addresses:
      - 172.17.1.24/16
    dhcp4: false
    dhcp6: false
    gateway4: 172.17.1.1
    nameservers:
      addresses:
        - 172.17.1.104
        - 172.17.1.106
      search:
        - ssss.com
    interfaces:
      - bond0
    parameters:
      stp: true
      forward-delay: 4


That should work fine
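Since a mistake in a bridge config can cut off remote access to the host, it may be safer to apply it with netplan try, which rolls the change back automatically if you don’t confirm within the timeout:

sudo netplan try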

Confirmed, that worked. I no longer lose the network connection and the br0 MAC doesn’t change.
Thank you very much!

This simple solution works for me:

lxc exec container_name -- dhclient

In my case my FPS server had a firewall running and it interfered with the LXD firewall. The solution was to turn off the LXD firewall. I did the following:

Turn off the LXD firewall:

lxc network set lxdbr0 ipv4.firewall false
lxc network set lxdbr0 ipv6.firewall false

Then move lxdbr0 into firewalld’s trusted zone:

firewall-cmd --zone=trusted --change-interface=lxdbr0 --permanent
firewall-cmd --reload
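You can verify that the interface ended up in the trusted zone with:

firewall-cmd --zone=trusted --list-interfaces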

Install firewalld first, in case it is not already available:

apt install firewalld

Now create your container again:

lxc launch images:ubuntu/22.10 containername

and it should get an IPv4 address.
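A quick check is the IPV4 column of:

lxc list containername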

Hello! I don’t know much about networking, but after upgrading Linux Mint from 20.3 to 21.1 I ran into this issue of no IPv4 (IPv6 was present) on my containers, for some reason. I noticed that disabling UFW solves the issue, but I need it enabled, so I started experimenting and found that adding a new rule to UFW (via Gufw) with the following settings helps (the equivalent ufw commands are sketched after the list):

  • Policy: Allow
  • Direction: In
  • Interface: lxdbr0
  • Protocol: UDP
  • Port: 67 (UPD: Also, it looks like adding the same rule for port 53 is required to get internet access from within containers)
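For reference, a sketch of the same rules as ufw commands (assuming the default lxdbr0 interface name; port 53 is left without a protocol so both UDP and TCP DNS are allowed):

sudo ufw allow in on lxdbr0 to any port 67 proto udp
sudo ufw allow in on lxdbr0 to any port 53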

Now I have 2 questions:

  1. Do you have any idea why it stopped getting IPv4 after the OS upgrade (UFW rules stayed the same, at least as I remember them)?
  2. Do you think my solution is correct for my case or is there a better one?

UPD 2: It turned out my solution doesn’t fully solve the no-internet-access-from-within-containers issue :frowning:

We have a section that may help in the docs:

https://linuxcontainers.org/lxd/docs/master/howto/network_bridge_firewalld/

I am just learning LXD and the Ubuntu container system. I have the same IPv4 issue and also see the

-A INPUT -j REJECT --reject-with icmp-host-prohibited

entry in my iptables.

My question is how did you remove it from the iptables?

Thank you

Same rule with -D instead of -A will remove it, though you probably want to figure out what put it there to begin with.
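That is:

iptables -D INPUT -j REJECT --reject-with icmp-host-prohibited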