I recently upgraded my laptop to Ubuntu 22.04, and now my containers get an IPv6 address but no IPv4 address.
I can of course disable IPv6 with lxc network set lxdbr0 ipv6.address none, but then the containers have no IP address at all.
Is it possible to run LXD containers on Ubuntu 22.04 with IPv4 addresses? Ubuntu 22.04 seems to assume IPv6, and it is not clear to me how to switch back to IPv4.
Yes, you can. I have an Ubuntu 22.04 host with many containers running different OSes, all with IPv4.
I assume that during initialisation ("lxd init") you disabled IPv4 and enabled only IPv6:
...
What IPv4 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]:
...
Now, to check what configuration your LXD bridge has, run:
lxc network list
+--------+----------+---------+-----------------+------+-------------+---------+---------+
| NAME | TYPE | MANAGED | IPV4 | IPV6 | DESCRIPTION | USED BY | STATE |
+--------+----------+---------+-----------------+------+-------------+---------+---------+
| br0 | bridge | NO | | | | 5 | |
+--------+----------+---------+-----------------+------+-------------+---------+---------+
| eno1 | physical | NO | | | | 0 | |
+--------+----------+---------+-----------------+------+-------------+---------+---------+
| lxdbr0 | bridge | YES | 10.252.104.1/24 | none | | 2 | CREATED |
+--------+----------+---------+-----------------+------+-------------+---------+---------+
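If your lxdbr0 shows nothing in the IPV4 column, you should not even need to re-run lxd init; as far as I know you can re-enable IPv4 directly on the existing managed bridge (a sketch, assuming your bridge is called lxdbr0):

lxc network set lxdbr0 ipv4.address auto
lxc network set lxdbr0 ipv4.nat true
lxc network show lxdbr0

LXD will then pick a free private subnet and hand out IPv4 leases from its own dnsmasq; restart the containers so they request a new lease.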
I removed LXD ("snap remove lxd") and then ran "lxd init" again:
[:ansible-dev]└2 master(+201/-38)* ± lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (btrfs, ceph, dir, lvm, zfs) [default=zfs]:
Create a new ZFS pool? (yes/no) [default=yes]:
Would you like to use an existing empty block device (e.g. a disk or partition)? (yes/no) [default=no]:
Size in GiB of the new loop device (1GiB minimum) [default=30GiB]:
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]: none
Would you like the LXD server to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]:
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: yes
config: {}
networks:
- config:
    ipv4.address: auto
    ipv6.address: none
  description: ""
  name: lxdbr0
  type: ""
  project: default
storage_pools:
- config:
    size: 30GiB
  description: ""
  name: default
  driver: zfs
profiles:
- config: {}
  description: ""
  devices:
    eth0:
      name: eth0
      network: lxdbr0
      type: nic
    root:
      path: /
      pool: default
      type: disk
  name: default
projects: []
cluster: null
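As an aside, a printed preseed like the one above can be fed back into LXD to re-initialise it non-interactively; if I save it as preseed.yaml (a hypothetical filename), something like this should work:

cat preseed.yaml | lxd init --preseed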
[:ansible-dev]└2 master(+201/-38)* ± lxc ls
+-------------+---------+------+------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+-------------+---------+------+------+-----------+-----------+
| c2d-rproxy1 | RUNNING | | | CONTAINER | 0 |
+-------------+---------+------+------+-----------+-----------+
I can exec into the container, set a static IP, and run netplan apply; that works. But I don't get an IPv4 address from the bridge automatically. IPv6 addresses I could get, but as shown in the init above I disabled IPv6.
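One way to see whether DHCP requests even reach the bridge is to check on the host that LXD's dnsmasq is up and to watch the DHCP ports (a sketch; lxdbr0 is the bridge name from the init above):

ps aux | grep -v grep | grep dnsmasq
sudo ss -ulpn | grep ':67'
sudo tcpdump -ni lxdbr0 port 67 or port 68

If the container's DHCPDISCOVER shows up in tcpdump but no offer comes back, something on the host (typically a firewall rule) is dropping the traffic before it reaches dnsmasq.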
The last thing I would investigate is whether there is any IPv4 subnet overlap between lxdbr0 and your other networks. Also, to help others investigate with you, can you please run the following on the host:
[:ansible-dev]└2 master(+204/-40)* 1 ± ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s31f6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether cc:96:e5:55:98:0b brd ff:ff:ff:ff:ff:ff
    inet 192.168.3.214/24 brd 192.168.3.255 scope global dynamic noprefixroute enp0s31f6
       valid_lft 6710sec preferred_lft 6710sec
    inet6 fe80::a7df:4bc7:502c:6f9b/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: wwan0: <POINTOPOINT,NOARP> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/[519]
4: crc: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:fd:be:d0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.130.1/24 brd 192.168.130.255 scope global crc
       valid_lft forever preferred_lft forever
5: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:b4:56:10 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
6: wlp0s20f3: <BROADCAST,MULTICAST> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether e0:d0:45:c2:ae:f2 brd ff:ff:ff:ff:ff:ff
9: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:b7:cb:b9:aa brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:b7ff:fecb:b9aa/64 scope link
       valid_lft forever preferred_lft forever
18: vboxnet0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 0a:00:27:00:00:00 brd ff:ff:ff:ff:ff:ff
102: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:5b:5e:05 brd ff:ff:ff:ff:ff:ff
    inet 10.36.170.1/24 scope global lxdbr0
       valid_lft forever preferred_lft forever
    inet6 fd42:9f73:8454:9db3::1/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fe5b:5e05/64 scope link
       valid_lft forever preferred_lft forever
103: lxdbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:b3:aa:a1 brd ff:ff:ff:ff:ff:ff
    inet 1.1.4.1/24 scope global lxdbr1
       valid_lft forever preferred_lft forever
105: veth620abb02@if104: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
    link/ether e2:76:47:84:10:26 brd ff:ff:ff:ff:ff:ff link-netnsid 0
107: veth3b66d5e1@if106: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr1 state UP group default qlen 1000
    link/ether 92:9a:2c:47:28:b1 brd ff:ff:ff:ff:ff:ff link-netnsid 0
[:ansible-dev]└2 master(+204/-40)* ± ip route
default via 192.168.3.1 dev enp0s31f6 proto dhcp metric 100
1.1.4.0/24 dev lxdbr1 proto kernel scope link src 1.1.4.1
10.36.170.0/24 dev lxdbr0 proto kernel scope link src 10.36.170.1
169.254.0.0/16 dev virbr0 scope link metric 1000 linkdown
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.168.3.0/24 dev enp0s31f6 proto kernel scope link src 192.168.3.214 metric 100
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown
192.168.130.0/24 dev crc proto kernel scope link src 192.168.130.1 linkdown
There doesn't appear to be any IPv4 subnet overlap with lxdbr0's 10.36.170.0/24. I think the issue is caused by CodeReady Containers (CRC) / OpenShift on my laptop; see the LIBVIRT chains below. I don't think Docker is the issue, because when I uninstalled Docker and removed the Docker chains, it still wasn't working.
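(For reference, those chains can be listed on the host with, for example:

sudo iptables -L FORWARD -n -v

which should show the LIBVIRT_* and DOCKER chains hooked into the FORWARD chain.)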
After stopping ufw, CRC still works, so that is a good enough workaround for me. CRC and LXD are both working fine now, I think.
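For anyone who wants to keep the firewall enabled instead of stopping it, the LXD documentation suggests explicitly allowing the bridge traffic through ufw (adjust the bridge name if yours differs):

sudo ufw allow in on lxdbr0
sudo ufw route allow in on lxdbr0
sudo ufw route allow out on lxdbr0

The first rule lets containers reach the host's dnsmasq for DHCP and DNS; the two route rules let forwarded traffic pass across the bridge.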