LXC containers randomly stop getting IPv4 DHCP addresses

I want …

  • All LXC containers to be assigned an IPv4 address

Actual results

  • IPv4 addresses are assigned to roughly the first ~70 containers
  • beyond that point, newly created containers do not receive an IPv4 address

Environment

  • Host: Ubuntu 23.04
  • LXC version: 5.20
  • the host is also connected to the bridge
  • the host has an external IP address

I tried…

sudo lxc network set lxdbr0 ipv6.address none
sudo lxc network show lxdbr0
config:
  ipv4.address: a.b.c.d/24
  ipv4.nat: "true"
  ipv6.address: none
  ipv6.nat: "true"
description: ""
name: lxdbr0
type: bridge
used_by:
- /1.0/profiles/default
- /1.0/profiles/network-lxdbr0
managed: true
status: Created
locations:
- none
sudo lxc profile show network-lxdbr0
config: {}
description: 
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
name: network-lxdbr0
used_by:
  • IPv4 addresses are still not assigned once ~70 containers are running
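
DHCP on lxdbr0 is served by a dnsmasq instance that LXD spawns per managed network, so it is worth checking whether that dnsmasq is still healthy and how many leases it holds when the failures start. A minimal diagnostic sketch, assuming a snap-installed LXD (paths and unit names differ for other installs):

# Is the per-network dnsmasq process still running?
$ pgrep -af 'dnsmasq.*lxdbr0'
# How many DHCP leases has it handed out so far?
$ wc -l /var/snap/lxd/common/lxd/networks/lxdbr0/dnsmasq.leases
# Any dnsmasq errors around the time containers stop getting addresses?
$ journalctl -u snap.lxd.daemon | grep -i dnsmasq | tail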

Here is a snippet of the current state. Ignore node1; it was created before IPv6 was disabled. Note that the bridge is a /24, so the DHCP pool itself (roughly 250 addresses) is nowhere near exhausted at ~70 containers.

+--------+---------+-----------------------+----------------------------------------------+-----------+-----------+
|  NAME  |  STATE  |         IPV4          |                     IPV6                     |   TYPE    | SNAPSHOTS |
+--------+---------+-----------------------+----------------------------------------------+-----------+-----------+
| 100-56 | RUNNING | 10.140.233.146 (eth0) |                                              | CONTAINER | 0         |
+--------+---------+-----------------------+----------------------------------------------+-----------+-----------+
| 100-57 | RUNNING | 10.140.233.191 (eth0) |                                              | CONTAINER | 0         |
+--------+---------+-----------------------+----------------------------------------------+-----------+-----------+
| 100-58 | RUNNING | 10.140.233.91 (eth0)  |                                              | CONTAINER | 0         |
+--------+---------+-----------------------+----------------------------------------------+-----------+-----------+
| 100-59 | RUNNING | 10.140.233.57 (eth0)  |                                              | CONTAINER | 0         |
+--------+---------+-----------------------+----------------------------------------------+-----------+-----------+
| 100-60 | RUNNING | 10.140.233.201 (eth0) |                                              | CONTAINER | 0         |
+--------+---------+-----------------------+----------------------------------------------+-----------+-----------+
| 100-61 | RUNNING | 10.140.233.104 (eth0) |                                              | CONTAINER | 0         |
+--------+---------+-----------------------+----------------------------------------------+-----------+-----------+
| 100-62 | RUNNING | 10.140.233.48 (eth0)  |                                              | CONTAINER | 0         |
+--------+---------+-----------------------+----------------------------------------------+-----------+-----------+
| 100-63 | RUNNING | 10.140.233.194 (eth0) |                                              | CONTAINER | 0         |
+--------+---------+-----------------------+----------------------------------------------+-----------+-----------+
| 100-64 | RUNNING | 10.140.233.182 (eth0) |                                              | CONTAINER | 0         |
+--------+---------+-----------------------+----------------------------------------------+-----------+-----------+
| 100-65 | RUNNING | 10.140.233.96 (eth0)  |                                              | CONTAINER | 0         |
+--------+---------+-----------------------+----------------------------------------------+-----------+-----------+
| 100-66 | RUNNING | 10.140.233.168 (eth0) |                                              | CONTAINER | 0         |
+--------+---------+-----------------------+----------------------------------------------+-----------+-----------+
| 100-67 | RUNNING | 10.140.233.139 (eth0) |                                              | CONTAINER | 0         |
+--------+---------+-----------------------+----------------------------------------------+-----------+-----------+
| 100-68 | RUNNING | 10.140.233.211 (eth0) |                                              | CONTAINER | 0         |
+--------+---------+-----------------------+----------------------------------------------+-----------+-----------+
| 100-69 | RUNNING | 10.140.233.184 (eth0) |                                              | CONTAINER | 0         |
+--------+---------+-----------------------+----------------------------------------------+-----------+-----------+
| 100-70 | RUNNING |                       |                                              | CONTAINER | 0         |
+--------+---------+-----------------------+----------------------------------------------+-----------+-----------+
| 100-71 | RUNNING | 10.140.233.185 (eth0) |                                              | CONTAINER | 0         |
+--------+---------+-----------------------+----------------------------------------------+-----------+-----------+
| 100-72 | RUNNING |                       |                                              | CONTAINER | 0         |
+--------+---------+-----------------------+----------------------------------------------+-----------+-----------+
| 100-73 | RUNNING | 10.140.233.3 (eth0)   |                                              | CONTAINER | 0         |
+--------+---------+-----------------------+----------------------------------------------+-----------+-----------+
| 100-74 | RUNNING |                       |                                              | CONTAINER | 0         |
+--------+---------+-----------------------+----------------------------------------------+-----------+-----------+
| 100-75 | RUNNING | 10.140.233.176 (eth0) |                                              | CONTAINER | 0         |
+--------+---------+-----------------------+----------------------------------------------+-----------+-----------+
| 100-76 | RUNNING |                       |                                              | CONTAINER | 0         |
+--------+---------+-----------------------+----------------------------------------------+-----------+-----------+
| 100-77 | RUNNING |                       |                                              | CONTAINER | 0         |
+--------+---------+-----------------------+----------------------------------------------+-----------+-----------+
| 100-78 | RUNNING |                       |                                              | CONTAINER | 0         |
+--------+---------+-----------------------+----------------------------------------------+-----------+-----------+
| node1  | RUNNING | 10.140.233.138 (eth0) | fd42:f37c:ba7:614a:216:3eff:fef6:f9cd (eth0) | CONTAINER | 0         |
+--------+---------+-----------------------+----------------------------------------------+-----------+-----------+

I can help you with Incus (a continuation of LXD).

Here is how I launch 100 containers.

$ /opt/incus/bin/incus-benchmark launch --count 100 images:alpine/3.19/cloud
Test environment:
  Server backend: incus
  Server version: 0.5.1
  Kernel: Linux
  Kernel architecture: x86_64
  Kernel version: 6.5.0-15-generic
  Storage backend: zfs
  Storage version: 2.1.5-1ubuntu6~22.04.2
  Container backend: lxc | qemu
  Container version: 5.0.3 | 8.2.1

Test variables:
  Container count: 100
  Container mode: unprivileged
  Startup mode: normal startup
  Image: images:alpine/3.19/cloud
  Batches: 8
  Batch size: 12
  Remainder: 4

[Feb  3 00:17:50.038] Importing image into local store: 5fc72e9ed16cc3f3db367a3e97d7726d796a6450813446bde06f7f586b4de7d5
[Feb  3 00:17:54.611] Found image in local store: 5fc72e9ed16cc3f3db367a3e97d7726d796a6450813446bde06f7f586b4de7d5
[Feb  3 00:17:54.611] Batch processing start
[Feb  3 00:18:02.864] Processed 12 containers in 8.253s (1.454/s)
[Feb  3 00:18:10.331] Processed 24 containers in 15.720s (1.527/s)
[Feb  3 00:18:25.662] Processed 48 containers in 31.051s (1.546/s)
[Feb  3 00:18:57.019] Processed 96 containers in 62.408s (1.538/s)
[Feb  3 00:19:01.481] Batch processing completed in 66.871s

All containers are named automatically, with the format benchmark-%03d.

I list them with the following command. The benchmark- prefix acts as a filter; incus list matches whatever is given as its final argument. In my case, all containers got an IPv4 and an IPv6 address. So far, so good.

$ incus list benchmark-

I then delete the benchmark containers in one go: 100 containers gone in 24 seconds.
I hope you do not have any containers named benchmark-something 🙂

$ /opt/incus/bin/incus-benchmark delete
...
[Feb  3 00:23:14.040] Batch processing completed in 23.902s

We have seen how to start many containers for our testing.

Disabling IPv6 is tricky, and IPv6 is important. But if you insist:

  1. The GRUB method does not work for containers; you only want the containers themselves to lose IPv6, not the host.
  2. The sysctl method for IPv6 does not appear to work at first.
  3. What works is echo 1 > /proc/sys/net/ipv6/conf/all/disable_ipv6 inside the container.
  4. But the sysctl method and number 3 set the same kernel parameter, so what gives? The catch is that /etc/sysctl.d/ is only applied at boot if the sysctl service actually runs, and it is not enabled by default in this Alpine image; see the sketch after this list.
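
To see that the sysctl method and the /proc write are indeed the same knob, and where the boot-time difference comes from, here is a minimal sketch from a shell inside the container (Alpine commands; adapt for other distributions):

# Writing to /proc directly ...
alpine:~# echo 1 > /proc/sys/net/ipv6/conf/all/disable_ipv6
# ... sets the same kernel parameter as sysctl does:
alpine:~# sysctl -w net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.all.disable_ipv6 = 1
# But a file in /etc/sysctl.d/ is only applied at boot if the
# sysctl service runs; on Alpine, check whether it is enabled:
alpine:~# rc-status default | grep sysctl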

Here is how to disable IPv6 in images:alpine/3.19/cloud. You can automate this with cloud-init; a sketch follows the session below.

$ incus launch images:alpine/3.19/cloud alpine
Launching alpine
$ incus shell alpine
alpine:~# rc-update add sysctl default
 * service sysctl added to runlevel default
alpine:~# echo "net.ipv6.conf.all.disable_ipv6 = 1" | tee /etc/sysctl.d/10-disable-ipv6.conf
net.ipv6.conf.all.disable_ipv6 = 1
alpine:~# 
$ incus restart alpine
$ incus list alpine
+--------+---------+---------------------+------+-----------+-----------+
|  NAME  |  STATE  |        IPV4         | IPV6 |   TYPE    | SNAPSHOTS |
+--------+---------+---------------------+------+-----------+-----------+
| alpine | RUNNING | 10.10.10.44 (eth0)  |      | CONTAINER | 0         |
+--------+---------+---------------------+------+-----------+-----------+
$ 
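
To automate the same steps with cloud-init, here is a minimal sketch using an Incus profile. The profile name disable-ipv6 is just an example, and the keys assume the image ships cloud-init (the /cloud variants do):

$ incus profile create disable-ipv6
$ incus profile set disable-ipv6 cloud-init.user-data - <<'EOF'
#cloud-config
write_files:
  # Same sysctl file as in the manual session above
  - path: /etc/sysctl.d/10-disable-ipv6.conf
    content: |
      net.ipv6.conf.all.disable_ipv6 = 1
runcmd:
  # Make sure the sysctl service applies it on every boot
  - rc-update add sysctl default
  - sysctl -p /etc/sysctl.d/10-disable-ipv6.conf
EOF
$ incus launch images:alpine/3.19/cloud alpine --profile default --profile disable-ipv6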

Thanks @simos! I disabled IPv6 only because I was trying to isolate the IPv4 issue.

Do you mean that I should explore Incus as a replacement for LXD? If so, I will start examining it.
Also, is the API very different from LXD's? This will help me assess how much I need to modify my current implementation.

Currently, there are no API differences between LXD and Incus. I suggest switching (snap refresh --channel ...) to an LTS version of LXD so that it is easier to perform the migration.
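
For example, with a channel name that is illustrative only (check snap info lxd for the current LTS tracks):

$ snap refresh lxd --channel=5.0/stable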

In this forum we provide support for Incus.

I had the same problem and solved it using the settings in

Hope this will help you if you still have the problem.