Containers not getting IPv4

OK, so there is quite a lot going on there. The first thing I notice is that Docker is present, which is known to add rules that can interfere with LXD's networking (although I cannot see the specific problem it normally causes in your ruleset).

However, you are also missing the rules that LXD adds to allow inbound DHCP and DNS on lxdbr0 from the containers. This suggests that another firewall on your system is wiping the rules added by LXD.
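For reference, the rules LXD normally adds for a managed bridge have roughly this shape (a sketch of their form only, not LXD's exact output):

iptables -A INPUT -i lxdbr0 -p udp --dport 67 -j ACCEPT   # DHCP requests from containers
iptables -A INPUT -i lxdbr0 -p udp --dport 53 -j ACCEPT   # DNS (UDP)
iptables -A INPUT -i lxdbr0 -p tcp --dport 53 -j ACCEPT   # DNS (TCP)
iptables -A FORWARD -i lxdbr0 -j ACCEPT                   # traffic from containers
iptables -A FORWARD -o lxdbr0 -j ACCEPT                   # traffic to containers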

Can you reload LXD (without rebooting) and see if the lxdbr0-related rules are added? If they are, and DHCP then works, then it is likely an issue with the start order of LXD relative to the other applications that are modifying your firewall rules.
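Assuming the snap package, something along these lines will restart the daemon (running instances keep running) and let you check whether the rules reappear:

sudo systemctl restart snap.lxd.daemon
sudo iptables-save | grep lxdbr0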

Done. I’ve disabled ufw and restarted snap.lxd.daemon.service, but it seems there is nothing related to LXD in iptables:

# Generated by iptables-save v1.8.4 on Thu Mar  4 13:26:16 2021
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [1757:210562]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
:FORWARD_IN_ZONES - [0:0]
:FORWARD_OUT_ZONES - [0:0]
:FORWARD_direct - [0:0]
:FWDI_public - [0:0]
:FWDI_public_allow - [0:0]
:FWDI_public_deny - [0:0]
:FWDI_public_log - [0:0]
:FWDI_public_post - [0:0]
:FWDI_public_pre - [0:0]
:FWDO_public - [0:0]
:FWDO_public_allow - [0:0]
:FWDO_public_deny - [0:0]
:FWDO_public_log - [0:0]
:FWDO_public_post - [0:0]
:FWDO_public_pre - [0:0]
:INPUT_ZONES - [0:0]
:INPUT_direct - [0:0]
:IN_public - [0:0]
:IN_public_allow - [0:0]
:IN_public_deny - [0:0]
:IN_public_log - [0:0]
:IN_public_post - [0:0]
:IN_public_pre - [0:0]
:LIBVIRT_FWI - [0:0]
:LIBVIRT_FWO - [0:0]
:LIBVIRT_FWX - [0:0]
:LIBVIRT_INP - [0:0]
:LIBVIRT_OUT - [0:0]
:OUTPUT_direct - [0:0]
-A INPUT -i lxcbr0 -p tcp -m tcp --dport 53 -j ACCEPT
-A INPUT -i lxcbr0 -p udp -m udp --dport 53 -j ACCEPT
-A INPUT -i lxcbr0 -p tcp -m tcp --dport 67 -j ACCEPT
-A INPUT -i lxcbr0 -p udp -m udp --dport 67 -j ACCEPT
-A INPUT -j LIBVIRT_INP
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED,DNAT -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -j INPUT_direct
-A INPUT -j INPUT_ZONES
-A INPUT -m conntrack --ctstate INVALID -j DROP
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -o lxcbr0 -j ACCEPT
-A FORWARD -i lxcbr0 -j ACCEPT
-A FORWARD -j LIBVIRT_FWX
-A FORWARD -j LIBVIRT_FWI
-A FORWARD -j LIBVIRT_FWO
-A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED,DNAT -j ACCEPT
-A FORWARD -i lo -j ACCEPT
-A FORWARD -j FORWARD_direct
-A FORWARD -j FORWARD_IN_ZONES
-A FORWARD -j FORWARD_OUT_ZONES
-A FORWARD -m conntrack --ctstate INVALID -j DROP
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
-A OUTPUT -j LIBVIRT_OUT
-A OUTPUT -o lo -j ACCEPT
-A OUTPUT -j OUTPUT_direct
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
-A FORWARD_IN_ZONES -i veth5a3aa82d -g FWDI_public
-A FORWARD_IN_ZONES -i vethc67af6b2 -g FWDI_public
-A FORWARD_IN_ZONES -i wlp2s0 -g FWDI_public
-A FORWARD_IN_ZONES -i veth1c04a137 -g FWDI_public
-A FORWARD_IN_ZONES -g FWDI_public
-A FORWARD_OUT_ZONES -o veth5a3aa82d -g FWDO_public
-A FORWARD_OUT_ZONES -o vethc67af6b2 -g FWDO_public
-A FORWARD_OUT_ZONES -o wlp2s0 -g FWDO_public
-A FORWARD_OUT_ZONES -o veth1c04a137 -g FWDO_public
-A FORWARD_OUT_ZONES -g FWDO_public
-A FWDI_public -j FWDI_public_pre
-A FWDI_public -j FWDI_public_log
-A FWDI_public -j FWDI_public_deny
-A FWDI_public -j FWDI_public_allow
-A FWDI_public -j FWDI_public_post
-A FWDI_public -p icmp -j ACCEPT
-A FWDO_public -j FWDO_public_pre
-A FWDO_public -j FWDO_public_log
-A FWDO_public -j FWDO_public_deny
-A FWDO_public -j FWDO_public_allow
-A FWDO_public -j FWDO_public_post
-A INPUT_ZONES -i veth5a3aa82d -g IN_public
-A INPUT_ZONES -i vethc67af6b2 -g IN_public
-A INPUT_ZONES -i wlp2s0 -g IN_public
-A INPUT_ZONES -i veth1c04a137 -g IN_public
-A INPUT_ZONES -g IN_public
-A IN_public -j IN_public_pre
-A IN_public -j IN_public_log
-A IN_public -j IN_public_deny
-A IN_public -j IN_public_allow
-A IN_public -j IN_public_post
-A IN_public -p icmp -j ACCEPT
-A IN_public_allow -p tcp -m tcp --dport 22 -m conntrack --ctstate NEW,UNTRACKED -j ACCEPT
-A LIBVIRT_FWI -d 192.168.122.0/24 -o virbr0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A LIBVIRT_FWI -o virbr0 -j REJECT --reject-with icmp-port-unreachable
-A LIBVIRT_FWO -s 192.168.122.0/24 -i virbr0 -j ACCEPT
-A LIBVIRT_FWO -i virbr0 -j REJECT --reject-with icmp-port-unreachable
-A LIBVIRT_FWX -i virbr0 -o virbr0 -j ACCEPT
-A LIBVIRT_INP -i virbr0 -p udp -m udp --dport 53 -j ACCEPT
-A LIBVIRT_INP -i virbr0 -p tcp -m tcp --dport 53 -j ACCEPT
-A LIBVIRT_INP -i virbr0 -p udp -m udp --dport 67 -j ACCEPT
-A LIBVIRT_INP -i virbr0 -p tcp -m tcp --dport 67 -j ACCEPT
-A LIBVIRT_OUT -o virbr0 -p udp -m udp --dport 53 -j ACCEPT
-A LIBVIRT_OUT -o virbr0 -p tcp -m tcp --dport 53 -j ACCEPT
-A LIBVIRT_OUT -o virbr0 -p udp -m udp --dport 68 -j ACCEPT
-A LIBVIRT_OUT -o virbr0 -p tcp -m tcp --dport 68 -j ACCEPT
COMMIT
# Completed on Thu Mar  4 13:26:16 2021

Before this issue appeared I had installed cloud-init via apt. Everything worked fine until I rebooted the laptop.

Please show the output of lxc info | grep 'firewall:'

lxc info | grep 'firewall:'
  firewall: nftables

Ah, so you’re using nftables. Please run sudo apt install nftables -y and then show the output of sudo nft list ruleset.

Is it working with ufw disabled btw?

But this REJECT line is going to be causing problems. Any idea what’s adding that to your firewall? (It’s not LXD.)

Although LXD is using nftables, it’s likely that iptables on your system is actually using the nftables backend too, and any reject or drop rules added in a netfilter chain that LXD doesn’t know about will still be evaluated, even if LXD’s own rules accept the inbound DHCP/DNS packets. This is a rather unfortunate behaviour of nftables compared to iptables: a reject or drop in any other chain will cause the packet to be rejected/dropped, even if it has already been accepted by an earlier chain in a different netfilter hook.
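To illustrate the behaviour (a contrived sketch with made-up table names, not LXD's actual ruleset): an accept verdict only terminates evaluation in the chain that issued it, so a drop in another base chain registered on the same hook still discards the packet:

# table A accepts inbound DHCP requests...
nft add table inet demo_a
nft add chain inet demo_a input '{ type filter hook input priority 0; }'
nft add rule inet demo_a input udp dport 67 accept

# ...but table B, hooked at the same point, is still evaluated,
# and its drop rule wins.
nft add table inet demo_b
nft add chain inet demo_b input '{ type filter hook input priority 10; }'
nft add rule inet demo_b input udp dport 67 drop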

See Upgraded to Ubuntu 20.10, now no ipv4 - #7 by tomp

So you need to ensure that no rules generated by your other firewalls would cause LXD’s traffic to be dropped.

See Lxd bridge doesn't work with IPv4 and UFW with nftables - #17 by tomp for a way to instruct ufw to allow lxdbr0 traffic.
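From memory, the ufw commands in that post are along these lines (check the linked post for the exact rules):

sudo ufw allow in on lxdbr0
sudo ufw route allow in on lxdbr0
sudo ufw route allow out on lxdbr0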

Thank you!!!
I’ve removed this line from iptables, restarted snap.lxd.daemon.service, and it works!
So now I need to find out what puts this line into iptables, so that IPv4 keeps working across reboots, or install iptables-persistent to save the corrected ruleset.
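For anyone hitting the same thing, what I did boils down to roughly this (the rule spec is the REJECT line from the dump above; adjust to whatever your own dump shows):

sudo iptables -D INPUT -j REJECT --reject-with icmp-host-prohibited   # remove the offending rule
sudo systemctl restart snap.lxd.daemon                                # let LXD re-add its own rules
sudo apt install iptables-persistent                                  # optionally persist the cleaned ruleset
sudo netfilter-persistent save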

I had a similar problem: all my containers had lost network connectivity. After removing the ufw package and rebooting, everything is working again.

Thank you. I just solved the problem.

I am using VestaCP / CentOS 7.

I just realized that my containers’ IPv4 disappeared after changing firewall rules in VestaCP. Possibly VestaCP removed some iptables rules which were generated by LXD.

The temporary solution is to re-add the rules by restarting the LXD daemon and then bring the network back up inside the container:

service snap.lxd.daemon.service restart

lxc exec <container name> bash
ifup eth0

I am using Ubuntu 20.04 with LXD v4.0.5.

The container is not getting an IP:

root@test24:/etc/netplan# lxc profile show br0profile
config: {}
description: Bridged networking LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
name: br0profile
used_by:
- /1.0/instances/testserver1
- /1.0/instances/testserver2
root@test24:/etc/netplan# lxc config show testserver2
architecture: x86_64
config:
  image.architecture: amd64
  image.description: ubuntu 20.04 LTS amd64 (release) (20210325)
  image.label: release
  image.os: ubuntu
  image.release: focal
  image.serial: "20210325"
  image.type: squashfs
  image.version: "20.04"
  volatile.base_image: 46701fa2d99c72583f858c50a25f9f965f06a266b997be7a57a8e66c72b5175b
  volatile.eth0.host_name: veth7ae01572
  volatile.eth0.hwaddr: 00:16:3e:11:d5:12
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
  volatile.uuid: d0177ed5-66cc-461f-8d2b-ca49df95009b
devices:
  eth0:
    ipv4.address: 172.17.5.22
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
ephemeral: false
profiles:
- default
- br0profile
stateful: false
description: ""
root@test24:/etc/netplan# lxc shell testserver2
root@testserver2:~# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
45: eth0@if46: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:11:d5:12 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::216:3eff:fe11:d512/64 scope link
       valid_lft forever preferred_lft forever

on the host bridge is setup like this

root@test24:/etc/netplan# more 00-installer-config.yaml
# This is the network config written by 'subiquity'
network:
  version: 2
  bonds:
    bond0:
      interfaces:
        - eno1
        - eno2
      parameters:
        mode: active-backup
        primary: eno1
  ethernets:
    eno1: {}
    eno2: {}
  bridges:
    br0:
      addresses:
       - 172.17.1.24/16
      dhcp4: false
      gateway4: 172.17.1.1
      nameservers:
        addresses:
         - 8.8.8.8
         - 172.17.1.104
         - 172.17.1.106
        search:
         - xxxxx.com
      interfaces:
       - bond0
      parameters:
       stp: true
       forward-delay: 4
      dhcp4: false
      dhcp6: false

The ipv4.address NIC setting of 172.17.5.22 will have no effect unless your instance is connected to an LXD-managed bridge (because that setting is used to create a static DHCP lease in LXD's managed DHCP server). So I would suggest removing it as a starting point, to avoid confusion.
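If that setting was added as an instance-level device override, removing it would be something like this (instance and device names taken from your output; if your LXD doesn't have the unset subcommand, lxc config edit works too):

lxc config device unset testserver2 eth0 ipv4.address
lxc config show testserver2   # confirm the key is gone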

In this case I believe your instance NIC is connected to an unmanaged bridge br0, which on the host is connected (by way of a bond) to the host’s external network.

Thus I am assuming you’re expecting an external DHCP server to be giving your instances IP config?

Have you checked your host’s firewall config (discussed in this thread) to see if it’s blocking DHCP packets?
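A quick way to check (a sketch, using names from your config) is to watch for DHCP traffic on the bridge while the container asks for a lease:

sudo tcpdump -ni br0 port 67 or port 68    # on the host: watch DHCP requests/replies crossing br0
lxc exec testserver2 -- dhclient -v eth0   # in the container: trigger a fresh DHCP request (assumes dhclient is installed)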

Hi Thomas
Yes, I am using a bridge, br0, created with netplan. Since we aren’t running a DHCP service, I was able to get it working by just updating /etc/netplan/50-cloud-init.yaml to:
network:
  version: 2
  ethernets:
    eth0:
      addresses:
        - 172.17.5.22/16
      gateway4: 172.17.1.1
      nameservers:
        addresses:
          - 8.8.8.8
          - 172.17.1.104

netplan apply

Is there any way to get the LXD containers or VMs on the LAN using the LXD-managed bridge lxdbr0?

The problem is that if you connect the managed lxdbr0 bridge to the external network, the DHCP server LXD runs on lxdbr0 will also start issuing IPs to the other devices on your external network, and they will start routing traffic via the LXD host as their default gateway.

You can instead run a DHCP server on the external network, and LXD’s instances will then use it when connected to an unmanaged bridge.
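For example, a minimal dnsmasq configuration on a machine attached to that LAN could look like this (the interface name and address ranges are assumptions based on your netplan config, not a tested setup):

# hypothetical /etc/dnsmasq.d/lan-dhcp.conf
interface=br0                                    # whichever interface faces the 172.17.0.0/16 LAN
dhcp-range=172.17.5.10,172.17.5.250,12h
dhcp-option=option:router,172.17.1.1
dhcp-option=option:dns-server,172.17.1.104,172.17.1.106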

I notice that br0 is getting the same MAC address as one of the VM tap interfaces, 72:1b:9a:60:0e:f2, i.e. the lowest one:

10: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 72:1b:9a:60:0e:f2 brd ff:ff:ff:ff:ff:ff
    inet 172.17.1.24/16 brd 172.17.255.255 scope global br0
       valid_lft forever preferred_lft forever
    inet6 fe80::8813:ff:fe08:75e5/64 scope link
       valid_lft forever preferred_lft forever
58: tapcf0c3fc3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master br0 state UP group default qlen 1000
    link/ether 72:1b:9a:60:0e:f2 brd ff:ff:ff:ff:ff:ff
67: tap7431a4b9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master br0 state UP group default qlen 1000
    link/ether da:bc:4f:4f:fb:29 brd ff:ff:ff:ff:ff:ff

Is that normal?

Yes, that’s normal unless you explicitly set the MAC address of your bridge, which is a good idea to avoid it fluctuating.

The worst part is that every time I create a new VM I lose my connection to the host for 4-5 minutes. I don’t recall this happening when using KVM.

That’s what happens when you bridge your physical ethernet device and don’t explicitly set a MAC address on the bridge. The bridge changes address every time something is added to or removed from it, requiring everything on your network to do new ARP queries to talk to your host.

Can I just add a made-up MAC address for br0 in the netplan config, or does it have to be an existing MAC from one of the interfaces on the host?

bridges:
  br0:
    macaddress: 00:0a:2e:c9:20:03   # just add this?
    addresses:
      - 172.17.1.24/16
    dhcp4: false
    gateway4: 172.17.1.1
    nameservers:
      addresses:
        - 172.17.1.104
        - 172.17.1.106
      search:
        - ssss.com
    interfaces:
      - bond0
    parameters:
      stp: true
      forward-delay: 4
    dhcp4: false
    dhcp6: false


That should work fine

Confirmed, that worked. I no longer lose the network connection and the br0 MAC doesn’t change.
Thank you very much!