No IP address with "default" profile

Hi,
I am trying to learn how to use LXD.

I am using a Debian “bullseye” host and I need to run an Ubuntu/Xenial container.

I created it with plain:

mcon@Lenovo:~$ sudo /snap/bin/lxc launch images:ubuntu/xenial macinino
Creating macinino
Starting macinino
mcon@Lenovo:~$ sudo /snap/bin/lxc list
+----------+---------+------+------+-----------+-----------+
|   NAME   |  STATE  | IPV4 | IPV6 |   TYPE    | SNAPSHOTS |
+----------+---------+------+------+-----------+-----------+
| macinino | RUNNING |      |      | CONTAINER | 0         |
+----------+---------+------+------+-----------+-----------+
mcon@Lenovo:~$ sudo /snap/bin/lxc info --show-log macinino
Name: macinino
Status: RUNNING
Type: container
Architecture: x86_64
PID: 10223
Created: 2021/12/14 00:04 CET
Last Used: 2021/12/14 00:04 CET

Resources:
  Processes: 1
  CPU usage:
    CPU usage (in seconds): 0
  Memory usage:
    Memory (current): 6.86MiB
  Network usage:
    lo:
      Type: loopback
      State: UP
      MTU: 65536
      Bytes received: 0B
      Bytes sent: 0B
      Packets received: 0
      Packets sent: 0
      IP addresses:
        inet:  127.0.0.1/8 (local)
        inet6: ::1/128 (local)
    eth0:
      Type: broadcast
      State: UP
      Host interface: veth92bede54
      MAC address: 00:16:3e:eb:c4:f8
      MTU: 1500
      Bytes received: 2.02kB
      Bytes sent: 726B
      Packets received: 44
      Packets sent: 9
      IP addresses:
        inet6: fe80::216:3eff:feeb:c4f8/64 (link)

Log:

lxc macinino 20211213230410.128 WARN     conf - conf.c:lxc_map_ids:3579 - newuidmap binary is missing
lxc macinino 20211213230410.128 WARN     conf - conf.c:lxc_map_ids:3585 - newgidmap binary is missing
lxc macinino 20211213230410.129 WARN     conf - conf.c:lxc_map_ids:3579 - newuidmap binary is missing
lxc macinino 20211213230410.129 WARN     conf - conf.c:lxc_map_ids:3585 - newgidmap binary is missing

mcon@Lenovo:~$ 

My problem is I have no IP address in my container, as you can see.
Most likely I’m missing some configuration, but I’m unable to divine what.
Can someone help, please?

You need to post your LXD network configuration in here:

lxc network list

Is it a bridge? Is it a FAN?

Usual suspects are something interfering with networking on your system, most commonly firewalld (if running on Fedora) or having Docker installed on your system.
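For reference, a quick way to check for that kind of interference is to look for firewall rules touching the LXD bridge and to confirm LXD's dnsmasq is serving DHCP on it (a sketch; the bridge name lxdbr0 is the default from lxd init):

```shell
# Any legacy iptables rules mentioning the LXD bridge?
sudo iptables -L -n -v | grep -i lxdbr0

# Any nftables rules (the Debian bullseye default backend)?
sudo nft list ruleset | grep -i lxdbr0

# Is LXD's dnsmasq actually running and bound to the bridge?
ps aux | grep '[d]nsmasq.*lxdbr0'
```

If the dnsmasq process is missing, containers will come up without an IPv4 address even though the bridge itself looks fine.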

Here you go!

mcon@Lenovo:~/Downloads$ sudo /snap/bin/lxc network list
+--------+----------+---------+-----------------+------+-------------+---------+
|  NAME  |   TYPE   | MANAGED |      IPV4       | IPV6 | DESCRIPTION | USED BY |
+--------+----------+---------+-----------------+------+-------------+---------+
| enp9s0 | physical | NO      |                 |      |             | 0       |
+--------+----------+---------+-----------------+------+-------------+---------+
| lxcbr0 | bridge   | NO      |                 |      |             | 0       |
+--------+----------+---------+-----------------+------+-------------+---------+
| lxdbr0 | bridge   | YES     | 10.100.177.1/24 | none |             | 3       |
+--------+----------+---------+-----------------+------+-------------+---------+
| wlp8s0 | physical | NO      |                 |      |             | 0       |
+--------+----------+---------+-----------------+------+-------------+---------+

Thanks stgraber,
My host is a very recent and completely up-to-date Debian bullseye (not Fedora).
I do not have Docker installed, but I do have ZeroTier One, if that matters.

I do not think I have any strange firewalling active (I did not explicitly install any, that's for sure).

My full net config is:

mcon@Lenovo:~/Downloads$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp9s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000
    link/ether f0:76:1c:57:93:b1 brd ff:ff:ff:ff:ff:ff
3: wlp8s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether d0:7e:35:48:8c:21 brd ff:ff:ff:ff:ff:ff
    inet 10.73.246.55/21 brd 10.73.247.255 scope global dynamic noprefixroute wlp8s0
       valid_lft 1451sec preferred_lft 1451sec
    inet6 fe80::38aa:daea:f776:66d8/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
4: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:b9:85:f0 brd ff:ff:ff:ff:ff:ff
    inet 10.100.177.1/24 scope global lxdbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:feb9:85f0/64 scope link 
       valid_lft forever preferred_lft forever
7: lxcbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 00:16:3e:00:00:00 brd ff:ff:ff:ff:ff:ff
    inet 10.0.3.1/24 brd 10.0.3.255 scope global lxcbr0
       valid_lft forever preferred_lft forever
8: ztyqbt4opw: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 2800 qdisc pfifo_fast state UNKNOWN group default qlen 1000
    link/ether 22:bd:97:56:25:80 brd ff:ff:ff:ff:ff:ff
    inet 172.28.157.25/16 brd 172.28.255.255 scope global ztyqbt4opw
       valid_lft forever preferred_lft forever
    inet6 fc5d:f964:6fcb:bb80:6a92::1/40 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::20bd:97ff:fe56:2580/64 scope link 
       valid_lft forever preferred_lft forever
12: veth92bede54@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
    link/ether 72:aa:fe:02:26:97 brd ff:ff:ff:ff:ff:ff link-netnsid 0

Any hint about what to check/try?
TiA!

Just another data point: I have lxc installed from the standard Debian repos (not from snap).
Can that conflict? Should I purge it or is it somehow needed?

FYI: purging lxc via sudo apt purge lxc did not change the situation at all.

Are you using ufw firewall?

Sorry for the late comeback (I was on a business trip).
No, ufw is not even installed on the laptop.

I had this same problem after running lxd recover

I finally did a system update and rebooted and it seemed to fix the problem.

I managed to “somehow” overcome the problem, but in a highly unsatisfactory way:

  • completely remove the lxd snap
  • compile from sources (including hand-installing a recent version of Go)
  • run into the same problem discussed here
  • apply the solution there (boot with systemd.unified_cgroup_hierarchy=0)
  • add root:100000:65536 to /etc/subuid and /etc/subgid (apparently Debian’s “standard install” adds that only for “normal” users)
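The last two steps above can be sketched as follows (assuming the standard Debian GRUB and shadow-utils layout; adjust paths if your setup differs):

```shell
# Add the hybrid-cgroup boot parameter to the default kernel command line,
# then regenerate the GRUB configuration
sudo sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="/&systemd.unified_cgroup_hierarchy=0 /' /etc/default/grub
sudo update-grub

# Give root a subordinate uid/gid range for unprivileged containers
echo 'root:100000:65536' | sudo tee -a /etc/subuid /etc/subgid

# The kernel command line change only takes effect after a reboot
sudo reboot
```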

This actually works (including networking), but (IMHO, of course) it is an indication that something is “not aligned” in cgroup2 handling.
I know too little about the whole matter to venture a guess about the root cause, but I strongly believe it should be investigated, perhaps on both sides.

I am willing to help in testing, if deemed useful.
Please advise.

It’s likely that the container’s operating system doesn’t support a pure cgroup2 environment and needs a hybrid environment.

Thanks Thomas.
As stated, the laptop where I’m testing is running pure Debian 11 “bullseye” (stable), not exactly an “unheard of” distribution.
I would like to get to the bottom of this, if at all possible, before filing a bug report against Debian, LXD, or both.
I am aware they are actively working on the issue, but I would like to have a “stable” install without waiting for them to finish their work.

How should I proceed?

Booting with systemd.unified_cgroup_hierarchy=0 is the solution for running older container OSes on newer systems that use pure cgroupv2 by default.

Thanks Thomas,
but I wasn’t clear enough, evidently.

I am not trying to run old containers.
I currently have no containers.
The errors I see appear either on freshly created containers or at LXD daemon start (before lxd init).

I am unsure if the transition to “pure cgroupv2” on Debian bullseye is really complete or if the problem is somewhere else.

That’s why I thought you were trying to run an old container OS that doesn’t support cgroupv2.

My bad.
I actually have a build environment needing Ubuntu/xenial (don’t ask me why!).
I wanted to use a container to avoid tying a machine to that ancient piece of software, while also avoiding the inefficiencies of a full virtual machine (being able to use all cores/threads in my hardware is useful).
That’s where I started getting problems.
Thanks for your help.
I suppose converting the host to a recent Ubuntu might be the solution, if I insist on using LXD.

LXD supports VMs using the --vm instance creation flag, if you decide to go that route.
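For example (a sketch; whether a VM variant of the xenial image is actually published depends on the image server):

```shell
# Launch the same image as a full virtual machine instead of a container
lxc launch images:ubuntu/xenial macinino --vm

# Resource limits work the same way for VMs, e.g. give it several host cores
lxc config set macinino limits.cpu 8
```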