LXC container cannot get an IP address

I want …

  • the LXC container to get an IP address from my router.

Actual results

  • The NIC on the LXC container that is connected to the router does not get an IP address.
  • After manually assigning an IP, the container can access the internet and SSH works.

Environment

  • Host OS: Fedora 38
  • The container’s NIC is connected to a bridge managed by NetworkManager.
  • The host is also connected to that bridge.
  • The host gets its IP address from my router.

I tried…

  • Setting the following sysctls:

net.bridge.bridge-nf-call-ip6tables=0
net.bridge.bridge-nf-call-iptables=0
net.bridge.bridge-nf-call-arptables=0
net.ipv4.ip_forward=1

  • tcpdump: nothing goes out from the veth interface that is created when the container starts (see the capture sketch below).
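
For reference, here is a minimal sketch of how one could verify those sysctls and watch for outgoing DHCP traffic. The veth name vethXXXXXXXX is a placeholder (use the real name from ip a on the host), and note that the net.bridge.* keys only exist while the br_netfilter module is loaded:

# confirm the bridge-netfilter sysctls actually took effect
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-arptables

# watch for DHCP client traffic on the host side of the container's veth
tcpdump -ni vethXXXXXXXX port 67 or port 68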

Config of the FedoraLxd container

#  lxc config show FedoraLxd --expanded
architecture: aarch64
config:
  boot.autostart: "true"
  image.architecture: arm64
  image.description: Fedora 38 arm64 (20230527_03:00)
  image.os: Fedora
  image.release: "38"
  image.serial: "20230527_03:00"
  image.type: squashfs
  image.variant: default
  linux.kernel_modules: wireguard,ip_tables
  raw.idmap: |
    both 0-999 0-999
    both 4261 4261
  security.nesting: "true"
  security.syscalls.intercept.mknod: "true"
  security.syscalls.intercept.mount.allowed: cifs,smb,nfs,nfsv4
  security.syscalls.intercept.setxattr: "true"
  security.syscalls.intercept.sysinfo: "true"
  volatile.base_image: fdccb6cc3f5ecffe3e7346b1a53970a3d3dc7fadbe255a1a50d8455209e1b797
  volatile.cloud-init.instance-id: b8f28851-4fc4-4ca5-bd19-8e5c5cd5898b
  volatile.ext.host_name: veth76a73317
  volatile.ext.hwaddr: 00:16:3e:bf:b8:82
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":true,"Hostid":0,"Nsid":0,"Maprange":1000},{"Isuid":true,"Isgid":false,"Hostid":1001000,"Nsid":1000,"Maprange":3261},{"Isuid":true,"Isgid":true,"Hostid":4261,"Nsid":4261,"Maprange":1},{"Isuid":true,"Isgid":false,"Hostid":1004262,"Nsid":4262,"Maprange":999995738},{"Isuid":true,"Isgid":true,"Hostid":0,"Nsid":0,"Maprange":1000},{"Isuid":false,"Isgid":true,"Hostid":1001000,"Nsid":1000,"Maprange":3261},{"Isuid":true,"Isgid":true,"Hostid":4261,"Nsid":4261,"Maprange":1},{"Isuid":false,"Isgid":true,"Hostid":1004262,"Nsid":4262,"Maprange":999995738}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":true,"Hostid":0,"Nsid":0,"Maprange":1000},{"Isuid":true,"Isgid":false,"Hostid":1001000,"Nsid":1000,"Maprange":3261},{"Isuid":true,"Isgid":true,"Hostid":4261,"Nsid":4261,"Maprange":1},{"Isuid":true,"Isgid":false,"Hostid":1004262,"Nsid":4262,"Maprange":999995738},{"Isuid":true,"Isgid":true,"Hostid":0,"Nsid":0,"Maprange":1000},{"Isuid":false,"Isgid":true,"Hostid":1001000,"Nsid":1000,"Maprange":3261},{"Isuid":true,"Isgid":true,"Hostid":4261,"Nsid":4261,"Maprange":1},{"Isuid":false,"Isgid":true,"Hostid":1004262,"Nsid":4262,"Maprange":999995738}]'
  volatile.int.host_name: veth2f87e77a
  volatile.int.hwaddr: "16:10:01:06:00:06"
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: RUNNING
  volatile.last_state.ready: "false"
  volatile.uuid: c08b2fef-3630-467f-9c74-c177e1590487
  volatile.uuid.generation: c08b2fef-3630-467f-9c74-c177e1590487
devices:
  USERDATA:
    path: /_USERDATA
    propagation: rshared
    recursive: "true"
    source: /_USERDATA/
    type: disk
  ext:
    name: ext
    nictype: bridged
    parent: lxdbr0
    type: nic
  int:
    name: int
    nictype: bridged
    parent: nmbr-local
    type: nic
  root:
    path: /
    pool: lxd_pool
    type: disk
ephemeral: false
profiles:
- dockerProfile
stateful: false
description: ""
# lxc profile show dockerProfile
config:
  boot.autostart: "true"
  linux.kernel_modules: wireguard,ip_tables
  raw.idmap: |
    both 0-999 0-999
    both 4261 4261
  security.nesting: "true"
  security.syscalls.intercept.mknod: "true"
  security.syscalls.intercept.mount.allowed: cifs,smb,nfs,nfsv4
  security.syscalls.intercept.setxattr: "true"
  security.syscalls.intercept.sysinfo: "true"
description: ""
devices:
  USERDATA:
    path: /_USERDATA
    propagation: rshared
    recursive: "true"
    source: /_USERDATA/
    type: disk
  int:
    name: int
    nictype: bridged
    parent: nmbr-local
    type: nic
  root:
    path: /
    pool: lxd_pool
    type: disk
name: dockerProfile
used_by:
- /1.0/instances/FedoraLxd

What can I check and what can I try?

Edit: I don’t have Docker on my host.

What does ip a and ip r show on the host and inside the container?

Also, have you disabled all firewalls on the host to confirm that is not an issue?

I disabled firewalld on the host (systemctl disable firewalld); it is not installed in the LXC container.
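
One caveat worth noting (an assumption about the setup, not something confirmed above): systemctl disable only prevents firewalld from starting at the next boot, so a running instance keeps running until stopped. To stop it immediately and check for leftover rules:

# stop firewalld now and keep it disabled across reboots
systemctl disable --now firewalld

# confirm no nftables rules are still filtering bridged traffic
nft list ruleset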

On the host

ip a; ip r
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enu2c2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master nmbr-local state UP group default qlen 1000
    link/ether a0:ce:c8:fa:95:76 brd ff:ff:ff:ff:ff:ff
3: enabcm6e4ei0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether dc:a6:32:76:fc:d8 brd ff:ff:ff:ff:ff:ff
4: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether dc:a6:32:76:fc:d9 brd ff:ff:ff:ff:ff:ff
    inet 10.1.5.223/14 brd 10.3.255.255 scope global dynamic noprefixroute wlan0
       valid_lft 252774sec preferred_lft 252774sec
    inet6 fdb9:f7d3:d2c0:418b:40a1:5f1d:8d28:168/64 scope global dynamic noprefixroute
       valid_lft 1674sec preferred_lft 1674sec
    inet6 fe80::7a6d:8e9c:1840:3bd0/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
5: nmbr-local: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 62:3a:4f:99:08:f6 brd ff:ff:ff:ff:ff:ff
    inet 10.1.5.222/14 brd 10.3.255.255 scope global dynamic noprefixroute nmbr-local
       valid_lft 252771sec preferred_lft 252771sec
    inet6 fdb9:f7d3:d2c0:418b:ff12:b0d0:3478:117a/64 scope global dynamic noprefixroute
       valid_lft 1674sec preferred_lft 1674sec
    inet6 fe80::1408:1db8:b3cc:b38b/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
35: podman0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 5e:b6:bd:7c:9d:d1 brd ff:ff:ff:ff:ff:ff
    inet 10.88.0.1/16 brd 10.88.255.255 scope global podman0
       valid_lft forever preferred_lft forever
    inet6 fe80::a8a6:93ff:fe30:adfe/64 scope link
       valid_lft forever preferred_lft forever
37: podman2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 32:3e:38:a9:9f:4e brd ff:ff:ff:ff:ff:ff
    inet 172.23.0.1/16 brd 172.23.255.255 scope global podman2
       valid_lft forever preferred_lft forever
    inet6 fe80::9080:4bff:feda:db91/64 scope link
       valid_lft forever preferred_lft forever
41: veth2@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master podman0 state UP group default qlen 1000
    link/ether 5e:b6:bd:7c:9d:d1 brd ff:ff:ff:ff:ff:ff link-netns netns-689c7075-2c90-7166-1f35-79ba3e1b680d
    inet6 fe80::c444:e9ff:fe97:c75f/64 scope link
       valid_lft forever preferred_lft forever
42: veth3@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master podman2 state UP group default qlen 1000
    link/ether 32:3e:38:a9:9f:4e brd ff:ff:ff:ff:ff:ff link-netns netns-cf46f911-afe0-1fa8-9f82-12a1dfcd7330
    inet6 fe80::303e:38ff:fea9:9f4e/64 scope link
       valid_lft forever preferred_lft forever
46: veth4@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master podman0 state UP group default qlen 1000
    link/ether 6a:64:c0:da:1d:55 brd ff:ff:ff:ff:ff:ff link-netns netns-80e4d092-e6b1-f431-e744-6db2321cf1ab
    inet6 fe80::10a7:ffff:fe60:d0ae/64 scope link
       valid_lft forever preferred_lft forever
47: veth5@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master podman2 state UP group default qlen 1000
    link/ether 7e:74:fe:fd:f9:31 brd ff:ff:ff:ff:ff:ff link-netns netns-8e5646ff-9de5-4aca-ed8d-8958ab2b7866
    inet6 fe80::9439:bff:fe92:226d/64 scope link
       valid_lft forever preferred_lft forever
48: veth6@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master podman2 state UP group default qlen 1000
    link/ether b6:36:57:f4:60:8e brd ff:ff:ff:ff:ff:ff link-netns netns-8b94a42e-a854-b192-f8f6-e7b6dcef0f53
    inet6 fe80::b436:57ff:fef4:608e/64 scope link
       valid_lft forever preferred_lft forever
49: veth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master podman0 state UP group default qlen 1000
    link/ether ca:ad:29:53:cf:e7 brd ff:ff:ff:ff:ff:ff link-netns netns-7a65782d-2ba4-1117-b580-5b6f8a4002b3
    inet6 fe80::7845:4dff:feb4:7c80/64 scope link
       valid_lft forever preferred_lft forever
50: veth1@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master podman2 state UP group default qlen 1000
    link/ether ba:fd:08:48:50:60 brd ff:ff:ff:ff:ff:ff link-netns netns-03f8e968-768a-69ea-d0ca-b1a98dcc2a0b
    inet6 fe80::447f:caff:fea6:eb37/64 scope link
       valid_lft forever preferred_lft forever
51: lxdbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 00:16:3e:6b:70:7c brd ff:ff:ff:ff:ff:ff
    inet 10.167.250.1/24 scope global lxdbr0
       valid_lft forever preferred_lft forever
    inet6 fd42:f6fd:6862:f4ac::1/64 scope global
       valid_lft forever preferred_lft forever
52: nmbr-public: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether e6:fd:7e:73:3f:f5 brd ff:ff:ff:ff:ff:ff
54: veth6ade2eec@if53: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master nmbr-public state UP group default qlen 1000
    link/ether e6:fd:7e:73:3f:f5 brd ff:ff:ff:ff:ff:ff link-netnsid 2
56: veth48e578e1@if55: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master nmbr-local state UP group default qlen 1000
    link/ether 62:3a:4f:99:08:f6 brd ff:ff:ff:ff:ff:ff link-netnsid 2
default via 10.0.0.1 dev nmbr-local proto dhcp src 10.1.5.222 metric 425
default via 10.0.0.1 dev wlan0 proto dhcp src 10.1.5.223 metric 600
10.0.0.0/14 dev nmbr-local proto kernel scope link src 10.1.5.222 metric 425
10.0.0.0/14 dev wlan0 proto kernel scope link src 10.1.5.223 metric 600
10.88.0.0/16 dev podman0 proto kernel scope link src 10.88.0.1
10.167.250.0/24 dev lxdbr0 proto kernel scope link src 10.167.250.1 linkdown
172.23.0.0/16 dev podman2 proto kernel scope link src 172.23.0.1

In the container

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
53: ext@if54: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 52:54:00:ff:cd:76 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::5054:ff:feff:cd76/64 scope link
       valid_lft forever preferred_lft forever
55: int@if56: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 16:10:01:06:00:06 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.1.6.6/14 scope global int
       valid_lft forever preferred_lft forever
    inet6 fdb9:f7d3:d2c0:418b:1410:1ff:fe06:6/64 scope global dynamic mngtmpaddr
       valid_lft 1797sec preferred_lft 1797sec
    inet6 fe80::1410:1ff:fe06:6/64 scope link
       valid_lft forever preferred_lft forever
default via 10.0.0.1 dev int
10.0.0.0/14 dev int proto kernel scope link src 10.1.6.6

10.1.6.6/14 was added manually with the command /usr/sbin/ip addr add 10.1.6.6/14 dev int.
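
As an aside, a one-shot DHCP test from inside the container could look like the sketch below, assuming a standalone client such as dhclient is available (it may not be on a minimal Fedora image):

# request a lease on the int interface with verbose output
dhclient -v int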

Thank you very much.

Right, so can I check I’m understanding correctly?
You’re saying that the container works OK when connected to the nmbr-local bridge and configured manually, but doing DHCP on it doesn’t work?

Yes, that’s right!

And have you configured your instance to request DHCP on that interface inside the instance?

No, I didn’t request DHCP manually.
In the past, when I was using Rocky Linux, I requested it through nmcli, but I can’t here because the interface doesn’t appear in nmcli inside the Fedora container.

# nmcli con show    (inside the LXC container)
NAME             UUID                                  TYPE      DEVICE
docker_gwbridge  9084c55c-3f41-4768-8234-c41788a4aa4e  bridge    docker_gwbridge
lo               d30f239e-e003-4a10-af01-6aa9c53cff38  loopback  lo
br-a8f409a04963  0033f77e-c457-4ea3-8fd8-e8039d5e419e  bridge    br-a8f409a04963
br-c695aaa016c6  885c747e-95d6-42b7-a48c-1b5387efa64a  bridge    br-c695aaa016c6
docker0          450b29c4-e62f-4405-acb0-42e81e6ff6be  bridge    docker0

Is it correct that I should request DHCP manually?
If so, how can I control the NIC from NetworkManager, or
how can I request DHCP with software other than NetworkManager?

Yes, as the interface isn’t called eth0 or enp5s0 (which LXD images are configured to perform DHCP on automatically), you will need to configure your guest OS to perform DHCP on the relevant interface(s).
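
For example, a minimal sketch using nmcli inside the guest (the connection name int-dhcp is arbitrary, and the dev set step is only needed if the device shows as unmanaged):

# let NetworkManager manage the int interface if it currently refuses to
nmcli dev set int managed yes

# create and bring up a DHCP-enabled connection profile for int
nmcli con add type ethernet ifname int con-name int-dhcp ipv4.method auto
nmcli con up int-dhcp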

Thank you.
Then can only those two NICs be used as DHCP clients?

I don’t understand the question.

You said that LXD images perform DHCP only on eth0 or enp5s0, didn’t you?

Yes, by default, LXD images only do DHCP on the primary interface (which is eth0 for containers or enp5s0 for VMs).

Default?
Can I add another interface?

You have already :slight_smile:

I apologize. My wording seems to have caused a misunderstanding.

I meant: can I add more interfaces that perform DHCP via LXD options?

Inside your guest you can configure the guest OS to perform DHCP on any interface.

How specifically to do this depends on the guest OS being used.
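
For instance, if NetworkManager will not manage the interface, one alternative sketch (assuming the image ships systemd-networkd) is a small .network unit:

# /etc/systemd/network/int.network
[Match]
Name=int

[Network]
DHCP=yes

# then enable the service
systemctl enable --now systemd-networkd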

Thank you.
I’ll have to ask the Fedora community to find out how to do this.