Eth0 IP missing on container

I have a privileged LXD container on an Ubuntu server, and all of a sudden it has lost its IPv4 address on eth0.

I am not sure what the issue is here. This is the output of lxc list:

+----------------+---------+----------------------+------+-----------+-----------+
|      NAME      |  STATE  |         IPV4         | IPV6 |   TYPE    | SNAPSHOTS |
+----------------+---------+----------------------+------+-----------+-----------+
| qblxc-c9010341 | RUNNING | 172.17.0.1 (docker0) |      | CONTAINER | 0         |
+----------------+---------+----------------------+------+-----------+-----------+

Network attached to the container:

lxc network show qbn9fe8593a8a 
config:
  ipv4.address: 10.229.63.1/24
  ipv4.nat: "true"
  ipv6.address: fd42:b876:f0e7:7e4c::1/64
  ipv6.nat: "true"
description: ""
name: qbn9fe8593a8a
type: bridge
used_by:
- /1.0/instances/qblxc-c9010341
- /1.0/profiles/qbpro-c9010341
managed: true
status: Created
locations:
- none

What should I do here?

Here’s all that I have tried so far:

  • A full system reboot
  • Restarting the LXD snap daemon
  • Flushing all firewall rules and reloading the LXD daemon:
for ipt in iptables iptables-legacy ip6tables ip6tables-legacy; do
    $ipt --flush
    $ipt --flush -t nat
    $ipt --delete-chain
    $ipt --delete-chain -t nat
    $ipt -P FORWARD ACCEPT
    $ipt -P INPUT ACCEPT
    $ipt -P OUTPUT ACCEPT
done
systemctl reload snap.lxd.daemon

Nothing has worked so far: the container still does not get an IPv4 address, so I cannot reach it publicly or ping any site from inside it.

I would appreciate any help with this.

I’m sorry, but given the recent actions from Canonical regarding LXD, we really can’t provide support to LXD users on this forum anymore.

You may want to consider switching to Incus, or if you’d like to stay on LXD, you should reach out on the Canonical forum instead.

Sorry about that!

Oh, I see. Thanks, Stephane, for sharing this; I didn’t know about this development.

Could you give me a rough idea of how complex the migration from LXD to Incus would be for NVIDIA GPU servers that need privileged containers to run Docker inside them?

Also, at least for this particular issue, I still need to get it resolved on the current LXD-based server, so I’ll ask about it on the Canonical forum.

Your issue sounds like a firewall problem on the host, such as what happens when Docker gets installed on the host system, so I’d probably start looking around that.
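
As a rough sketch of what to check (not an exact fix; the bridge name below is taken from your lxc network show output), Docker is known to switch the iptables FORWARD policy to DROP, which cuts off traffic for LXD-managed bridges:

# Check whether the FORWARD chain policy has been switched to DROP.
iptables -S FORWARD | head -n1

# If it has, explicitly allow traffic in and out of the LXD bridge.
iptables -I FORWARD -i qbn9fe8593a8a -j ACCEPT
iptables -I FORWARD -o qbn9fe8593a8a -j ACCEPT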

As for Incus, anything that works with LXD will work there too.
In fact, unlike LXD, we have daily GPU tests running on our CI infrastructure.

For standalone (non-clustered) systems, the migration is basically just installing the Incus packages (GitHub - zabbly/incus: Incus package repository) and running lxd-to-incus to move your data over from LXD into Incus. That usually means just a few minutes during which your instances are stopped before they start back up under Incus.
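
Roughly, the standalone migration comes down to something like this (a sketch only; the exact apt repository setup is described in the zabbly/incus README):

# Install Incus from the Zabbly package repository
# (repository setup steps are in the zabbly/incus README).
apt install incus

# Move all instances, networks and storage from LXD over to Incus.
# Instances are stopped briefly and then started back up under Incus.
lxd-to-incus

# Confirm the containers are back up with their addresses.
incus list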

Thanks. I’ll check the firewall settings, although UFW seems to be disabled. Let’s see.

I’m inclined towards using Incus. I appreciate your effort and contributions to the community so far, and I will definitely explore Incus in the coming days. Thanks again! :)
