LXD container doesn't pick up an IP

We are using a virbr0 bridge to assign IPs to our LXC containers. We have a cluster of 7 machines; on 6 of them every container gets an IP that is pingable from the outside. One node doesn't provide an IP:

I created a container named c15:

lxc list | grep c15
| c15                    | RUNNING |                     |      | CONTAINER | 0         | lxd-05 |

No IP is listed.

But after manually running the command below, an IP is leased and the container becomes accessible. How do I automate the IP leasing? If there is any configuration that could achieve this, kindly advise.

  lxc exec c15 -- sh -c "/sbin/dhclient -1 -q -lf /var/lib/dhclient/dhclient--eth0.lease -pf /var/run/dhclient-eth0.pid -H c15 eth0"
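
One stopgap I'm considering is wrapping that same dhclient call in a systemd unit inside the container so it runs on every boot. A minimal sketch; the unit name and placement are my own invention, everything else is taken from the command above:

    # /etc/systemd/system/dhclient-eth0.service (inside the container; unit name is hypothetical)
    [Unit]
    Description=Workaround: lease an IPv4 address on eth0 via dhclient
    After=network.target

    [Service]
    Type=forking
    PIDFile=/var/run/dhclient-eth0.pid
    ExecStart=/sbin/dhclient -1 -q -lf /var/lib/dhclient/dhclient--eth0.lease -pf /var/run/dhclient-eth0.pid -H c15 eth0

    [Install]
    WantedBy=multi-user.target

It would be enabled once with lxc exec c15 -- systemctl enable dhclient-eth0.service, but I'd rather fix the underlying problem.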

lxc list | grep c15
| c15                    | RUNNING | 17x.x.x.x2 (eth0)  |      | CONTAINER | 0         | lxd-05 |

What image is used for those containers?

The CentOS image!

Hi,

Please can you show output of:

lxc network ls

lxc config show c15 --expanded

Thanks

The output of ps fauxww may also be useful.

Knowing which CentOS image you are using would also be good.

Thanks for your time, Thomas. The output is below:

lxc network ls
+----------+----------+---------+-------------+---------+-------+
|   NAME   |   TYPE   | MANAGED | DESCRIPTION | USED BY | STATE |
+----------+----------+---------+-------------+---------+-------+
| docker0  | bridge   | NO      |             | 0       |       |
+----------+----------+---------+-------------+---------+-------+
| enp5s0f0 | physical | NO      |             | 0       |       |
+----------+----------+---------+-------------+---------+-------+
| enp5s0f1 | physical | NO      |             | 0       |       |
+----------+----------+---------+-------------+---------+-------+
| virbr0   | bridge   | NO      |             | 25      |       |
+----------+----------+---------+-------------+---------+-------+

lxc config show c15 --expanded
architecture: x86_64
config:
  volatile.base_image: 573a9f08a3d5d0529d6b498b83d3943df472cd7949b74e1cac1ba87962ae64cf
  volatile.eth0.host_name: veth5c6c5075
  volatile.eth0.hwaddr: 00:16:3e:14:07:e8
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: virbr0
    type: nic
  root:
    path: /
    pool: lxd_storage
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

Thanks for your time, Stéphane. Output for the command:

lxc exec c15 -- sh -c "ps fauxww"
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 296 0.0 0.0 155340 1716 ? Rs+ 14:27 0:00 ps fauxww
root 1 0.0 0.0 42696 1896 ? Ss Mar22 0:00 /sbin/init
root 253 0.0 0.0 102896 2772 ? Ss Mar22 0:00 /sbin/dhclient -1 -q -lf /var/lib/dhclient/dhclient--eth0.lease -pf /var/run/dhclient-eth0.pid -H c15 eth0

It's the CentOS 7 x86_64 image that's available in the images repo!

So I've checked the CentOS 7 image, and it's working OK with the managed LXD bridge.

Can you show me the output of ip a on the host, and tell me a bit more about virbr0 and how it is connected to the wider network, and where the DHCP server is running?

Thanks
Tom

Which version of CentOS are you using?

It works fine on the other 6 nodes in the cluster; only one node doesn't provide the IP automatically, and I have to run the dhclient command manually for it to pick one up.

ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp5s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master virbr0 state UP group default qlen 1000
link/ether 0c:x4:xa:de:x5:0x xxx xf:fx:xf:xf:xf:xf
inet6 fxx0::ec4:7xxf:xxde:f506/64 scope link
valid_lft forever preferred_lft forever
3: enp5s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 0c:x4:xa:de:x5:0x xxx xf:fx:xf:xf:xf:xf
4: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 0c:x4:xa:de:x5:0x xxx xf:fx:xf:xf:xf:xf
inet 1xx.xx.0.x0x/xx brd 1x2.xx.3.xx5 scope global dynamic virbr0
valid_lft 65187sec preferred_lft 65187sec
inet6 xxx0::xx4:xxxf:fxxe:fxx6/64 scope link
valid_lft forever preferred_lft forever
5: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:94:0e:b6:65 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:94ff:fe0e:b665/xx scope link
valid_lft forever preferred_lft forever
7: veth1ae94a8@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether x6:7x:ax:cx:x8:x5 brd xf:xf:xf:xf:xf:xf link-netnsid 2
inet6 xxx0::xxx0:axxf:fxxe:xxx5/x4 scope link
valid_lft forever preferred_lft forever
9: veth3e8326a@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether xxe:x2:xx:xx:xa:xc brd xf:xf:xf:xf:fx:xf link-netnsid 1
inet6 fe80::6xx2:xxff:fexx:xxx6c/x4 scope link
valid_lft forever preferred_lft forever
11: veth78deaf6@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether 6x:x8:6x:xx:xx:0b brd xf:xf:f:xf:xf:xf link-netnsid 0
inet6 xxx0::xxx0:axxf:fxxe:xxx5/x4 scope link
valid_lft forever preferred_lft forever
13: vethe0d08fa2@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master virbr0 state UP group default qlen 1000
link/ether xxe:x2:xx:xx:xa:xc brd xf:xf:xf:xf:fx:xf link-netnsid 3
inet6 xxx0::xxx0:axxf:fxxe:xxx5/x4 scope link
valid_lft forever preferred_lft forever
15: vethd70ac677@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master virbr0 state UP group default qlen 1000
link/ether xxe:x2:xx:xx:xa:xc brd xf:xf:xf:xf:fx:xf link-netnsid 4
inet6 xxx0::xxx0:axxf:fxxe:xxx5/x4 scope link
valid_lft forever preferred_lft forever
17: veth123d979b@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master virbr0 state UP group default qlen 1000
link/ether xxe:x2:xx:xx:xa:xc brd xf:xf:xf:xf:fx:xf link-netnsid 5
inet6 xxx0::xxx0:axxf:fxxe:xxx5/x4 scope link
valid_lft forever preferred_lft forever
19: vethe530604d@if18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master virbr0 state UP group default qlen 1000
link/ether xxe:x2:xx:xx:xa:xc brd xf:xf:xf:xf:fx:xf link-netnsid 6
inet6 xxx0::xxx0:axxf:fxxe:xxx5/x4 scope link
valid_lft forever preferred_lft forever
25: vethb41eb9fa@if24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master virbr0 state UP group default qlen 1000
link/ether xxe:x2:xx:xx:xa:xc brd xf:xf:xf:xf:fx:xf link-netnsid 7
inet6 xxx0::xxx0:axxf:fxxe:xxx5/x4 scope link
valid_lft forever preferred_lft forever
27: veth44d0dfcb@if26: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master virbr0 state UP group default qlen 1000
link/ether xxe:x2:xx:xx:xa:xc brd xf:xf:xf:xf:fx:xf link-netnsid 8
inet6 xxx0::xxx0:axxf:fxxe:xxx5/x4 scope link
valid_lft forever preferred_lft forever
29: veth2d8010cb@if28: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master virbr0 state UP group default qlen 1000
link/ether xxe:x2:xx:xx:xa:xc brd xf:xf:xf:xf:fx:xf link-netnsid 9
inet6 xxx0::xxx0:axxf:fxxe:xxx5/x4 scope link
valid_lft forever preferred_lft forever

release: CentOS Linux release 7.7.1908 (Core)

We have set up our DHCP server (it's common to all components) in such a way that any container or VM created on the network gets automatically assigned to the domain and is accessible from the outside world.

Hi,

I'm not sure what you mean by "we have coupled our DHCP server". What I meant was: where is the DHCP server? Is it running on the LXD host, or on another machine on the wider network?

Also, which external port (if any) does the virbr0 connect to?

I would suggest trying to change the MAC address (remove the old one, and a new one will be generated on the next start) and see if it helps:

lxc stop c15
lxc config unset c15 volatile.eth0.hwaddr
lxc start c15
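
After the restart, you can confirm that a fresh MAC was generated with:

    lxc config show c15 | grep volatile.eth0.hwaddr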

Sorry for the confusion. Yes, the DHCP server sits on the wider network; there is no external port used. I did try running the commands you provided, but the stop itself hasn't completed; it's taking a very long time on the container. Thanks

Try lxc stop -f c15 to force it to stop.

Yep, I could force-stop it, and I ran the subsequent commands, but the container still doesn't have an IP. It looks like the container isn't even trying to get an IP address, unless I force it with:
lxc exec c15 -- sh -c "/sbin/dhclient -1 -q -lf /var/lib/dhclient/dhclient--eth0.lease -pf /var/run/dhclient-eth0.pid -H c15 eth0"

Can you reboot the container and then, once it starts, run lxc exec c15 -- ps aux to see if dhclient is running?

lxc exec c15 -- ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 190780 3508 ? Ss 15:52 0:00 /sbin/init
root 584 0.0 0.0 44628 1700 ? Ss 15:52 0:00 /usr/lib/systemd/sys
root 3749 0.0 0.0 255528 5004 ? Ssl 15:52 0:00 /usr/sbin/rsyslogd -
root 3839 0.0 0.0 126288 1568 ? Ss 15:52 0:00 /usr/sbin/crond -n
root 3924 0.0 0.0 112920 4328 ? Ss 15:52 0:00 /usr/sbin/sshd -D
root 4005 0.0 0.0 155344 1728 ? Rs+ 15:54 0:00 ps aux

No dhclient process after the reboot. Thanks

Were you able to resolve the issue? I've had various similar issues with DHCP when using libvirt's virbr0 for LXD containers.

Thanks

Try lxc exec container-name -- sh -c "dhclient" and check if it gets an IP; otherwise try dhclient -r eth0, restart LXD, and check again.
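
Spelled out (the daemon unit name below is an assumption; snap installs of LXD use snap.lxd.daemon instead):

    lxc exec c15 -- sh -c "dhclient"    # ask for a lease manually
    # if that fails, release any stale lease, then restart LXD on the host:
    lxc exec c15 -- dhclient -r eth0
    systemctl restart lxd               # or: systemctl restart snap.lxd.daemon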

Can you advise what networking config changes you have made inside the container?

Also, worth testing, if you modify the network config inside the container to assign a static IP and then restart the container, does that come up properly?

It looks like the network config isn’t being applied on boot.
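
For reference, a minimal static configuration for CentOS 7 could look like the sketch below (the addresses are placeholders; substitute your own subnet):

    # /etc/sysconfig/network-scripts/ifcfg-eth0 -- static sketch, placeholder addresses
    DEVICE=eth0
    BOOTPROTO=none
    ONBOOT=yes
    IPADDR=192.0.2.15
    PREFIX=24
    GATEWAY=192.0.2.1
    TYPE=Ethernet
    NM_CONTROLLED=no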

Hi, actually there are no changes to the network config file:

DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
HOSTNAME=c77.x.com
NM_CONTROLLED=no
TYPE=Ethernet
MTU=
DHCP_HOSTNAME=hostname

Yes, so I did a dhclient -r and restarted the LXD daemon, and the containers started getting an IP. It looks like things change once the host machine is rebooted, but I'm not certain what really happened; it was sort of magic, actually.