No IPv4 address on Ubuntu 22

I recently upgraded my laptop to Ubuntu 22, and now I notice that my containers get an IPv6 address but no IPv4 address.

I can of course disable IPv6 with lxc network set lxdbr0 ipv6.address none, but then the containers won't have any IP address at all.

Is it possible to run LXD containers on Ubuntu 22 with IPv4 addresses? Ubuntu 22 seems to assume IPv6, and it is not clear how to switch back to IPv4.

Yes, you can. I have an Ubuntu 22.04 host with many containers running different OSes, all with IPv4.

I assume that during initialisation (lxd init) you disabled IPv4 and enabled only IPv6.

...
What IPv4 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]: 
What IPv6 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]:
...

Now, to check what configuration your LXD bridge has, run

lxc network list

+--------+----------+---------+-----------------+------+-------------+---------+---------+
|  NAME  |   TYPE   | MANAGED |      IPV4       | IPV6 | DESCRIPTION | USED BY |  STATE  |
+--------+----------+---------+-----------------+------+-------------+---------+---------+
| br0    | bridge   | NO      |                 |      |             | 5       |         |
+--------+----------+---------+-----------------+------+-------------+---------+---------+
| eno1   | physical | NO      |                 |      |             | 0       |         |
+--------+----------+---------+-----------------+------+-------------+---------+---------+
| lxdbr0 | bridge   | YES     | 10.252.104.1/24 | none |             | 2       | CREATED |
+--------+----------+---------+-----------------+------+-------------+---------+---------+

Then, check the lxdbr0 configuration:

lxc network show lxdbr0 

config:
  ipv4.address: 10.252.104.1/24
  ipv4.nat: "true"
  ipv6.address: none
description: ""
name: lxdbr0
type: bridge
used_by:
- /1.0/profiles/default
managed: true
status: Created
locations:
- none

The above is my configuration, and it has IPv4 enabled. To enable IPv4 in your setup, you can edit the configuration with

lxc network edit lxdbr0

Or by setting the parameters on the command line:

lxc network set lxdbr0 ipv4.address 10.252.104.1/24    # or the CIDR you prefer.
lxc network set lxdbr0 ipv4.nat true    # if you want to enable NAT
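
If a container was already running while the bridge had no IPv4 subnet, it may need a nudge to pick up a DHCP lease; restarting it is the simplest way (a sketch; c1 is a placeholder container name):

lxc restart c1    # placeholder name; forces the container to request a fresh lease
lxc list          # the IPv4 column should now show an address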

I removed LXD (snap remove lxd) and then ran lxd init again.

[:ansible-dev]└2 master(+201/-38)* ± lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: 
Do you want to configure a new storage pool? (yes/no) [default=yes]: 
Name of the new storage pool [default=default]: 
Name of the storage backend to use (btrfs, ceph, dir, lvm, zfs) [default=zfs]: 
Create a new ZFS pool? (yes/no) [default=yes]: 
Would you like to use an existing empty block device (e.g. a disk or partition)? (yes/no) [default=no]: 
Size in GiB of the new loop device (1GiB minimum) [default=30GiB]: 
Would you like to connect to a MAAS server? (yes/no) [default=no]: 
Would you like to create a new local network bridge? (yes/no) [default=yes]: 
What should the new bridge be called? [default=lxdbr0]: 
What IPv4 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]: 
What IPv6 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]: none
Would you like the LXD server to be available over the network? (yes/no) [default=no]: 
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: 
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: yes
config: {}
networks:
- config:
    ipv4.address: auto
    ipv6.address: none
  description: ""
  name: lxdbr0
  type: ""
  project: default
storage_pools:
- config:
    size: 30GiB
  description: ""
  name: default
  driver: zfs
profiles:
- config: {}
  description: ""
  devices:
    eth0:
      name: eth0
      network: lxdbr0
      type: nic
    root:
      path: /
      pool: default
      type: disk
  name: default
projects: []
cluster: null
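
As a side note, the preseed printed above can be fed back into lxd init to reproduce the same setup non-interactively (documented LXD behaviour; the file name is just an example):

lxd init --preseed < lxd-preseed.yaml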

After this, the same thing happens. The networks look okay:

[:ansible-dev]└2 master(+201/-38)* 1 ± lxc network ls
+-----------+----------+---------+-----------------+------+-------------+---------+---------+
|   NAME    |   TYPE   | MANAGED |      IPV4       | IPV6 | DESCRIPTION | USED BY |  STATE  |
+-----------+----------+---------+-----------------+------+-------------+---------+---------+
| crc       | bridge   | NO      |                 |      |             | 0       |         |
+-----------+----------+---------+-----------------+------+-------------+---------+---------+
| docker0   | bridge   | NO      |                 |      |             | 0       |         |
+-----------+----------+---------+-----------------+------+-------------+---------+---------+
| enp0s31f6 | physical | NO      |                 |      |             | 0       |         |
+-----------+----------+---------+-----------------+------+-------------+---------+---------+
| lxdbr0    | bridge   | YES     | 10.217.160.1/24 | none |             | 2       | CREATED |
+-----------+----------+---------+-----------------+------+-------------+---------+---------+
| lxdbr1    | bridge   | YES     | 1.1.4.1/24      | none |             | 2       | CREATED |
+-----------+----------+---------+-----------------+------+-------------+---------+---------+
| virbr0    | bridge   | NO      |                 |      |             | 0       |         |
+-----------+----------+---------+-----------------+------+-------------+---------+---------+
| wlp0s20f3 | physical | NO      |                 |      |             | 0       |         |
+-----------+----------+---------+-----------------+------+-------------+---------+---------+
| wwan0     | physical | NO      |                 |      |             | 0       |         |
+-----------+----------+---------+-----------------+------+-------------+---------+---------+

But my container does not get an IP address

[:ansible-dev]└2 master(+201/-38)* ± lxc ls
+-------------+---------+------+------+-----------+-----------+
|    NAME     |  STATE  | IPV4 | IPV6 |   TYPE    | SNAPSHOTS |
+-------------+---------+------+------+-----------+-----------+
| c2d-rproxy1 | RUNNING |      |      | CONTAINER | 0         |
+-------------+---------+------+------+-----------+-----------+

I can exec into the container, set a static IP, and run netplan apply. That works. But I don't get an IP automatically from the bridge. I can get IPv6 addresses, but as shown in the init above, I disabled IPv6.

[:ansible-dev]└2 master(+201/-38)* ± lxc network show lxdbr0
config:
  ipv4.address: 10.217.160.1/24
  ipv4.nat: "true"
  ipv6.address: none
description: ""
name: lxdbr0
type: bridge
used_by:
- /1.0/instances/c2d-rproxy1
- /1.0/profiles/default
managed: true
status: Created
locations:
- none

  • Can you please show the output of lxc network get lxdbr0 ipv4.dhcp? (The default value is true, but I thought of checking it just in case.)
  • What image do you use?
  • And does the same thing happen with different images/OSes?

Also, can you try launching new containers that use different networks and see if they get DHCP IPs?
e.g.

lxc launch images:ubuntu/20.04 -n virbr0 c1
lxc launch images:ubuntu/20.04 -n enp0s31f6 c2
lxc launch images:ubuntu/20.04 -n docker0 c3
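
Another quick host-side check: LXD starts a dnsmasq process per managed bridge to serve DHCP, so it is worth confirming it is actually running and listening (a sketch; output details vary):

ps aux | grep '[d]nsmasq' | grep lxdbr0    # LXD's dnsmasq instance for the bridge
sudo ss -ulpn | grep ':67'                 # a DHCP server should be listening on UDP 67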

lxc network get lxdbr0 ipv4.dhcp does not return anything.

[:ansible-dev]└2 master(+203/-40)* ± lxc ls
+------+---------+------------------------+------+-----------+-----------+
| NAME |  STATE  |          IPV4          | IPV6 |   TYPE    | SNAPSHOTS |
+------+---------+------------------------+------+-----------+-----------+
| c1   | RUNNING | 192.168.122.176 (eth0) |      | CONTAINER | 0         |
+------+---------+------------------------+------+-----------+-----------+
| c2   | RUNNING | 192.168.3.222 (eth0)   |      | CONTAINER | 0         |
+------+---------+------------------------+------+-----------+-----------+
| c3   | RUNNING |                        |      | CONTAINER | 0         |
+------+---------+------------------------+------+-----------+-----------+
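
About the empty ipv4.dhcp result: lxc network get prints nothing when a key is unset, and the documented default (true) then applies. Setting the key explicitly makes the value visible:

lxc network set lxdbr0 ipv4.dhcp true
lxc network get lxdbr0 ipv4.dhcp    # now prints "true"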

I have used CentOS and Ubuntu images, via Vagrant but also without Vagrant, with just lxc launch.

Then the last thing I would investigate is whether there is any IPv4 subnet overlap between lxdbr0 and the other networks. Also, to help others investigate with you, can you please run (on the host):

ip a

ip route
[:ansible-dev]└2 master(+204/-40)* 1 ± ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp0s31f6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether cc:96:e5:55:98:0b brd ff:ff:ff:ff:ff:ff
    inet 192.168.3.214/24 brd 192.168.3.255 scope global dynamic noprefixroute enp0s31f6
       valid_lft 6710sec preferred_lft 6710sec
    inet6 fe80::a7df:4bc7:502c:6f9b/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: wwan0: <POINTOPOINT,NOARP> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/[519] 
4: crc: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:fd:be:d0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.130.1/24 brd 192.168.130.255 scope global crc
       valid_lft forever preferred_lft forever
5: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:b4:56:10 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
6: wlp0s20f3: <BROADCAST,MULTICAST> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether e0:d0:45:c2:ae:f2 brd ff:ff:ff:ff:ff:ff
9: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:b7:cb:b9:aa brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:b7ff:fecb:b9aa/64 scope link 
       valid_lft forever preferred_lft forever
18: vboxnet0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 0a:00:27:00:00:00 brd ff:ff:ff:ff:ff:ff
102: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:5b:5e:05 brd ff:ff:ff:ff:ff:ff
    inet 10.36.170.1/24 scope global lxdbr0
       valid_lft forever preferred_lft forever
    inet6 fd42:9f73:8454:9db3::1/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fe5b:5e05/64 scope link 
       valid_lft forever preferred_lft forever
103: lxdbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:b3:aa:a1 brd ff:ff:ff:ff:ff:ff
    inet 1.1.4.1/24 scope global lxdbr1
       valid_lft forever preferred_lft forever
105: veth620abb02@if104: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
    link/ether e2:76:47:84:10:26 brd ff:ff:ff:ff:ff:ff link-netnsid 0
107: veth3b66d5e1@if106: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr1 state UP group default qlen 1000
    link/ether 92:9a:2c:47:28:b1 brd ff:ff:ff:ff:ff:ff link-netnsid 0
[:ansible-dev]└2 master(+204/-40)* ± ip route
default via 192.168.3.1 dev enp0s31f6 proto dhcp metric 100 
1.1.4.0/24 dev lxdbr1 proto kernel scope link src 1.1.4.1 
10.36.170.0/24 dev lxdbr0 proto kernel scope link src 10.36.170.1 
169.254.0.0/16 dev virbr0 scope link metric 1000 linkdown 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 
192.168.3.0/24 dev enp0s31f6 proto kernel scope link src 192.168.3.214 metric 100 
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown 
192.168.130.0/24 dev crc proto kernel scope link src 192.168.130.1 linkdown 

@sugarmoose you have Docker installed on the host, so you might want to check that it is not the one interfering with your IPv4 assignments. For reference: https://linuxcontainers.org/lxd/docs/master/howto/network_bridge_firewalld/#prevent-issues-with-lxd-and-docker
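
For reference, the workaround described on that page amounts to letting the bridge's traffic through Docker's DOCKER-USER chain; a sketch along those lines (interface names are examples, adjust to your setup):

sudo iptables -I DOCKER-USER -i lxdbr0 -j ACCEPT    # allow traffic coming from the LXD bridge
sudo iptables -I DOCKER-USER -o lxdbr0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT    # allow return traffic to it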

I uninstalled Docker, but that did not fix it.

Please show the output of sudo iptables-save and sudo nft list ruleset on the LXD host.

Please also show the output of sudo ss -ulpn.

Finally, please show lxc config show <instance> --expanded.

I can fix this issue with systemctl stop ufw.
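
If stopping ufw does the trick, a less drastic alternative is to allow traffic on the LXD bridge explicitly, which is what the LXD firewall documentation suggests (a sketch, assuming the bridge is named lxdbr0):

sudo ufw allow in on lxdbr0          # DHCP/DNS requests from containers to the host
sudo ufw route allow in on lxdbr0    # forwarded traffic coming from containers
sudo ufw route allow out on lxdbr0   # forwarded traffic going to containers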

I think the issue is caused by CodeReady Containers (CRC) / OpenShift on my laptop; see the LIBVIRT chains below. I don't think Docker is the issue, because when I uninstalled Docker and removed the Docker chains, it still was not working.

After stopping ufw, CRC still works, so that is a good enough workaround for me. CRC and LXD both seem to be working fine now.

65° [@io1:ansible-dev]└2 master ± sudo iptables -L | grep Chain
Chain INPUT (policy ACCEPT)
Chain FORWARD (policy ACCEPT)
Chain OUTPUT (policy ACCEPT)
Chain DOCKER (2 references)
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
Chain DOCKER-ISOLATION-STAGE-2 (2 references)
Chain DOCKER-USER (1 references)
Chain LIBVIRT_FWI (1 references)
Chain LIBVIRT_FWO (1 references)
Chain LIBVIRT_FWX (1 references)
Chain LIBVIRT_INP (1 references)
Chain LIBVIRT_OUT (1 references)
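
For anyone hitting the same symptom: ufw's default policies can silently drop the containers' DHCP and forwarded traffic, and they can be inspected without stopping the service (standard commands; a quick check, not a fix):

sudo ufw status verbose               # shows the default incoming/routed policies
sudo iptables -S FORWARD | head -1    # shows the effective FORWARD chain policy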