LXD instances on Ubuntu 22.04 on Oracle Cloud do not get IPv4 addresses

I’ve been trying to get LXD running on a fresh Oracle Cloud machine running Ubuntu 22.04 as a way to learn LXD.

However, I have been running into issues with my LXD instances not getting IPv4 addresses. It seems to be due to nftables not allowing DHCP (UDP port 67) and DNS (port 53) traffic on the bridge, and it looks like a conflict between the rules Oracle Cloud injects for the 169.254.0.0/16 range and the automatic LXD rules.

I note that the Oracle firewall rules come from /etc/iptables/rules.v4 via netfilter-persistent, but I can’t tell where the LXD rules are coming from.
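
As far as I can tell, the LXD rules are not persisted in any file; the LXD daemon writes them into its own nftables table when the network starts. This is how I have been inspecting them (assuming a snap-installed LXD):

# The LXD rules live in their own nftables table, written by the LXD daemon
sudo nft list table inet lxd

# Reloading LXD recreates them without touching the instances
sudo systemctl reload snap.lxd.daemon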

Many others seem to have had similar firewall issues (conflicts with UFW or with Docker), so I am surprised I can’t find anyone online who has run into this exact problem before.

Can someone on this forum point me in the right direction on how to resolve this?
Is the solution to disable LXD’s automatic firewall rules and add the equivalents myself to the Oracle tables, something like the sketch below?
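
If so, I assume it would look roughly like this (a sketch: ipv4.firewall and ipv6.firewall are documented LXD bridge options, but the exact iptables rules are my guess at what is needed):

# Stop LXD from managing firewall rules for lxdbr0
lxc network set lxdbr0 ipv4.firewall false
lxc network set lxdbr0 ipv6.firewall false

# Allow bridge traffic ahead of the Oracle reject rules, then persist
sudo iptables -I INPUT -i lxdbr0 -j ACCEPT
sudo iptables -I FORWARD -i lxdbr0 -j ACCEPT
sudo iptables -I FORWARD -o lxdbr0 -j ACCEPT
sudo netfilter-persistent save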

The MRE (minimal reproducible example) is very basic: the first container launched on a fresh install does not get an IPv4 address.

USER@ORACLE:~$ lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: 
Do you want to configure a new storage pool? (yes/no) [default=yes]: 
Name of the new storage pool [default=default]: 
Name of the storage backend to use (cephobject, dir, lvm, zfs, btrfs, ceph) [default=zfs]: 
Create a new ZFS pool? (yes/no) [default=yes]: 
Would you like to use an existing empty block device (e.g. a disk or partition)? (yes/no) [default=no]: 
Size in GiB of the new loop device (1GiB minimum) [default=8GiB]: 
Would you like to connect to a MAAS server? (yes/no) [default=no]: 
Would you like to create a new local network bridge? (yes/no) [default=yes]: 
What should the new bridge be called? [default=lxdbr0]: 
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 
Would you like the LXD server to be available over the network? (yes/no) [default=no]: 
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: 
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: 
USER@ORACLE:~$ lxc launch ubuntu:22.04 first
Creating first
Starting first                              
USER@ORACLE:~$ lxc list    
+-------+---------+------+-----------------------------------------------+-----------+-----------+
| NAME  |  STATE  | IPV4 |                     IPV6                      |   TYPE    | SNAPSHOTS |
+-------+---------+------+-----------------------------------------------+-----------+-----------+
| first | RUNNING |      | fd42:b915:76ff:9282:216:3eff:fe09:b203 (eth0) | CONTAINER | 0         |
+-------+---------+------+-----------------------------------------------+-----------+-----------+
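
To confirm it is DHCP that fails rather than something inside the container, one can check from the host (assuming the stock systemd-networkd setup of the Ubuntu image):

# DHCP state of the container's eth0
lxc exec first -- networkctl status eth0

# Recent networkd log lines
lxc exec first -- journalctl -u systemd-networkd --no-pager -n 20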

The IP settings for the host are:

USER@ORACLE:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp0s6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000
    link/ether 02:00:17:00:cc:9f brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.230/24 metric 100 brd 10.0.0.255 scope global enp0s6
       valid_lft forever preferred_lft forever
    inet6 fe80::17ff:fe00:cc9f/64 scope link 
       valid_lft forever preferred_lft forever
3: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:f2:14:8b brd ff:ff:ff:ff:ff:ff
    inet 10.117.179.1/24 scope global lxdbr0
       valid_lft forever preferred_lft forever
    inet6 fd42:b915:76ff:9282::1/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fef2:148b/64 scope link 
       valid_lft forever preferred_lft forever
5: veth3a9d62a8@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
    link/ether d2:9f:b7:f5:2c:c8 brd ff:ff:ff:ff:ff:ff link-netnsid 0
USER@ORACLE:~$ ip r
default via 10.0.0.1 dev enp0s6 
default via 10.0.0.1 dev enp0s6 proto dhcp src 10.0.0.230 metric 100 
10.0.0.0/24 dev enp0s6 proto kernel scope link src 10.0.0.230 metric 100 
10.0.0.1 dev enp0s6 proto dhcp scope link src 10.0.0.230 metric 100 
10.117.179.0/24 dev lxdbr0 proto kernel scope link src 10.117.179.1 
169.254.0.0/16 dev enp0s6 scope link 
169.254.0.0/16 dev enp0s6 proto dhcp scope link src 10.0.0.230 metric 100 
169.254.169.254 via 10.0.0.1 dev enp0s6 proto dhcp src 10.0.0.230 metric 100 

And the LXD config:

USER@ORACLE:~$ lxc config show first --expanded
architecture: aarch64
config:
  image.architecture: arm64
  image.description: ubuntu 22.04 LTS arm64 (release) (20240223)
  image.label: release
  image.os: ubuntu
  image.release: jammy
  image.serial: "20240223"
  image.type: squashfs
  image.version: "22.04"
  volatile.base_image: f7bdd63df04bbe88411ee9e114ead65bd19ab30d03797ba05c03dc5bb132f87b
  volatile.cloud-init.instance-id: 8cc37382-de54-4071-b42e-6c3b481c6376
  volatile.eth0.host_name: veth3a9d62a8
  volatile.eth0.hwaddr: 00:16:3e:09:b2:03
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
  volatile.uuid: d20ddb8b-caf9-40db-b523-5a27b2b65aef
  volatile.uuid.generation: d20ddb8b-caf9-40db-b523-5a27b2b65aef
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""
USER@ORACLE:~$ lxc network show lxdbr0 
config:
  ipv4.address: 10.117.179.1/24
  ipv4.nat: "true"
  ipv6.address: fd42:b915:76ff:9282::1/64
  ipv6.nat: "true"
description: ""
name: lxdbr0
type: bridge
used_by:
- /1.0/instances/first
- /1.0/profiles/default
managed: true
status: Created
locations:
- none
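
The bridge config looks like the defaults, so dnsmasq should be serving DHCP and DNS on lxdbr0 (a quick check, assuming the usual LXD-managed dnsmasq):

# LXD runs a dnsmasq instance per managed bridge for DHCP and DNS
ps aux | grep '[d]nsmasq.*lxdbr0'
sudo ss -ulpn | grep dnsmasq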

And finally, the firewall settings:

USER@ORACLE:~$ lxc info | grep firewall
- network_firewall_filtering
- firewall_driver
  firewall: nftables
USER@ORACLE:~$ sudo nft list ruleset
table ip filter {
	chain INPUT {
		type filter hook input priority filter; policy accept;
		ct state related,established counter packets 18032 bytes 400067863 accept
		meta l4proto icmp counter packets 0 bytes 0 accept
		iifname "lo" counter packets 38 bytes 3290 accept
		meta l4proto udp udp sport 123 counter packets 0 bytes 0 accept
		meta l4proto tcp ct state new tcp dport 22 counter packets 2 bytes 104 accept
		counter packets 11 bytes 3272 reject with icmp type host-prohibited
	}

	chain FORWARD {
		type filter hook forward priority filter; policy accept;
		counter packets 0 bytes 0 reject with icmp type host-prohibited
	}

	chain OUTPUT {
		type filter hook output priority filter; policy accept;
		ip daddr 169.254.0.0/16 counter packets 235 bytes 18899 jump InstanceServices
	}

	chain InstanceServices {
		meta l4proto tcp ip daddr 169.254.0.2 skuid 0 tcp dport 3260  counter packets 0 bytes 0 accept
		meta l4proto tcp ip daddr 169.254.2.0/24 skuid 0 tcp dport 3260  counter packets 0 bytes 0 accept
		meta l4proto tcp ip daddr 169.254.4.0/24 skuid 0 tcp dport 3260  counter packets 0 bytes 0 accept
		meta l4proto tcp ip daddr 169.254.5.0/24 skuid 0 tcp dport 3260  counter packets 0 bytes 0 accept
		meta l4proto tcp ip daddr 169.254.0.2 tcp dport 80  counter packets 0 bytes 0 accept
		meta l4proto udp ip daddr 169.254.169.254 udp dport 53  counter packets 21 bytes 1973 accept
		meta l4proto tcp ip daddr 169.254.169.254 tcp dport 53  counter packets 0 bytes 0 accept
		meta l4proto tcp ip daddr 169.254.0.3 skuid 0 tcp dport 80  counter packets 0 bytes 0 accept
		meta l4proto tcp ip daddr 169.254.0.4 tcp dport 80  counter packets 0 bytes 0 accept
		meta l4proto tcp ip daddr 169.254.169.254 tcp dport 80  counter packets 211 bytes 16698 accept
		meta l4proto udp ip daddr 169.254.169.254 udp dport 67  counter packets 0 bytes 0 accept
		meta l4proto udp ip daddr 169.254.169.254 udp dport 69  counter packets 0 bytes 0 accept
		meta l4proto udp ip daddr 169.254.169.254 udp dport 123  counter packets 3 bytes 228 accept
		meta l4proto tcp ip daddr 169.254.0.0/16   counter packets 0 bytes 0 reject with tcp reset
		meta l4proto udp ip daddr 169.254.0.0/16   counter packets 0 bytes 0 reject
	}
}
table ip6 filter {
	chain INPUT {
		type filter hook input priority filter; policy accept;
	}

	chain FORWARD {
		type filter hook forward priority filter; policy accept;
	}

	chain OUTPUT {
		type filter hook output priority filter; policy accept;
	}
}
table inet lxd {
	chain pstrt.lxdbr0 {
		type nat hook postrouting priority srcnat; policy accept;
		ip saddr 10.117.179.0/24 ip daddr != 10.117.179.0/24 masquerade
		ip6 saddr fd42:b915:76ff:9282::/64 ip6 daddr != fd42:b915:76ff:9282::/64 masquerade
	}

	chain fwd.lxdbr0 {
		type filter hook forward priority filter; policy accept;
		ip version 4 oifname "lxdbr0" accept
		ip version 4 iifname "lxdbr0" accept
		ip6 version 6 oifname "lxdbr0" accept
		ip6 version 6 iifname "lxdbr0" accept
	}

	chain in.lxdbr0 {
		type filter hook input priority filter; policy accept;
		iifname "lxdbr0" tcp dport 53 accept
		iifname "lxdbr0" udp dport 53 accept
		iifname "lxdbr0" icmp type { destination-unreachable, time-exceeded, parameter-problem } accept
		iifname "lxdbr0" udp dport 67 accept
		iifname "lxdbr0" icmpv6 type { destination-unreachable, packet-too-big, time-exceeded, parameter-problem, nd-router-solicit, nd-neighbor-solicit, nd-neighbor-advert, mld2-listener-report } accept
		iifname "lxdbr0" udp dport 547 accept
	}

	chain out.lxdbr0 {
		type filter hook output priority filter; policy accept;
		oifname "lxdbr0" tcp sport 53 accept
		oifname "lxdbr0" udp sport 53 accept
		oifname "lxdbr0" icmp type { destination-unreachable, time-exceeded, parameter-problem } accept
		oifname "lxdbr0" udp sport 67 accept
		oifname "lxdbr0" icmpv6 type { destination-unreachable, packet-too-big, time-exceeded, parameter-problem, echo-request, nd-router-advert, nd-neighbor-solicit, nd-neighbor-advert, mld2-listener-report } accept
		oifname "lxdbr0" udp sport 547 accept
	}
}
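
One way to confirm that those rejects are what is eating the bridge traffic would be to watch the counters while the container retries DHCP (a diagnostic sketch):

# Watch DHCP requests arriving on the bridge...
sudo tcpdump -ni lxdbr0 udp port 67

# ...and, in another shell, check whether the reject counters climb
sudo nft list chain ip filter INPUT
sudo nft list chain ip filter FORWARD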

Finally, my understanding is that nftables is doing the firewalling on my machine:

USER@ORACLE:~$ sudo systemctl status netfilter-persistent.service iptables.service ip6tables.service nftables.service ufw.service 
● netfilter-persistent.service - netfilter persistent configuration
     Loaded: loaded (/lib/systemd/system/netfilter-persistent.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/netfilter-persistent.service.d
             └─iptables.conf
     Active: active (exited) since Sun 2024-02-25 13:58:32 UTC; 22min ago
       Docs: man:netfilter-persistent(8)
    Process: 688 ExecStart=/usr/sbin/netfilter-persistent start (code=exited, status=0/SUCCESS)
   Main PID: 688 (code=exited, status=0/SUCCESS)
        CPU: 18ms

Feb 25 13:58:32 take-for systemd[1]: Starting netfilter persistent configuration...
Feb 25 13:58:32 take-for netfilter-persistent[694]: run-parts: executing /usr/share/netfilter-persistent/plugins.d/15-ip4tables start
Feb 25 13:58:32 take-for netfilter-persistent[694]: run-parts: executing /usr/share/netfilter-persistent/plugins.d/25-ip6tables start
Feb 25 13:58:32 take-for systemd[1]: Finished netfilter persistent configuration.

○ iptables.service - netfilter persistent configuration
     Loaded: loaded (/lib/systemd/system/iptables.service; alias)
     Active: inactive (dead)
       Docs: man:netfilter-persistent(8)

○ ip6tables.service - netfilter persistent configuration
     Loaded: loaded (/lib/systemd/system/ip6tables.service; alias)
     Active: inactive (dead)
       Docs: man:netfilter-persistent(8)

○ nftables.service - nftables
     Loaded: loaded (/lib/systemd/system/nftables.service; disabled; vendor preset: enabled)
     Active: inactive (dead)
       Docs: man:nft(8)
             http://wiki.nftables.org

● ufw.service - Uncomplicated firewall
     Loaded: loaded (/lib/systemd/system/ufw.service; enabled; vendor preset: enabled)
     Active: active (exited) since Sun 2024-02-25 13:58:32 UTC; 22min ago
       Docs: man:ufw(8)
    Process: 695 ExecStart=/lib/ufw/ufw-init start quiet (code=exited, status=0/SUCCESS)
   Main PID: 695 (code=exited, status=0/SUCCESS)
        CPU: 1ms

Feb 25 13:58:32 take-for systemd[1]: Starting Uncomplicated firewall...
Feb 25 13:58:32 take-for systemd[1]: Finished Uncomplicated firewall.
USER@ORACLE:~$ sudo ufw status
Status: inactive

This isn’t the right place for LXD support these days, but your nftables ruleset certainly has rules denying all ingress and forward traffic, so that’s the issue. An accept in LXD’s own inet lxd table only ends evaluation within that table; a reject in the ip filter table at the same hook still kills the packet.

No idea what wrote those rules, though; maybe some kind of cloud agent.
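
If you want to track down what wrote them, something like this might help (a sketch; I believe the Oracle agent ships as a snap on Ubuntu images, but that is an assumption):

# Check whether the reject rules are in the persisted config shipped with the image
grep -n 'host-prohibited' /etc/iptables/rules.v4

# Look for a cloud agent that might manage them
systemctl list-units --all | grep -i -e oracle -e oci
snap list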