LXD unable to access its network

Hi all,

I’m unable to ping the container or resolve its name.

It looks like the interface is down, and for some reason I cannot bring it up.

30: lxdbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 00:16:3e:55:d6:cb brd ff:ff:ff:ff:ff:ff
inet 10.16.118.1/24 scope global lxdbr0

Please can you show the output of lxc config show test --expanded?

Also show the output of ip a and ip l on the host.

Finally, have you tried reloading LXD, as that will cause lxdbr0 to be brought up?

sudo systemctl reload snap.lxd.daemon
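
If the reload doesn’t help, you can also try bringing the bridge up by hand as a quick check (a sketch; note that a Linux bridge stays in NO-CARRIER until at least one attached port is up, so this alone may not be enough):

sudo ip link set lxdbr0 up   # set the bridge administratively up
ip link show lxdbr0          # re-check its state afterwards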

Hi, reload didn’t help.

architecture: x86_64
config:
  image.architecture: amd64
  image.description: ubuntu 20.04 LTS amd64 (release) (20211118)
  image.label: release
  image.os: ubuntu
  image.release: focal
  image.serial: "20211118"
  image.type: squashfs
  image.version: "20.04"
  volatile.base_image: 39bdbf191acd49807930a11b46f76d3d3b31f01efa1af5f26c40402f33b11426
  volatile.eth0.host_name: veth8ba20abe
  volatile.eth0.hwaddr: 00:16:3e:3c:7e:07
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
  volatile.uuid: 099ee077-1756-4959-a9e4-63492daa74a7
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: zpool1
    size: 30GB
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

Firewall is most likely the issue then.

Can you show the output of the other two commands I asked for, plus the output of sudo iptables-save and sudo nft list ruleset, please?

The cluster is made up of 3 machines; the interface only comes up on the second one. There are no iptables rules, and nftables is not installed.

iptables -L -vn
Chain INPUT (policy ACCEPT 17598 packets, 4320K bytes)
pkts bytes target prot opt in out source destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination

Chain OUTPUT (policy ACCEPT 21260 packets, 2723K bytes)
pkts bytes target prot opt in out source destination

I rebooted the machines and this did not help.

Please show the output of lxc info | grep firewall:

It reports nftables, but nftables is NOT installed.

It’ll be in the snap package.

Please run sudo apt install nftables and then show the output of sudo nft list ruleset.

It still shows the interface down, and I’m still unable to ping the container.

table inet lxd {
chain pstrt.lxdbr0 {
	type nat hook postrouting priority srcnat; policy accept;
	@nh,96,24 659574 @nh,128,24 != 659574 masquerade
	@nh,64,64 18249411622945604703 @nh,192,64 != 18249411622945604703 masquerade
}

chain fwd.lxdbr0 {
	type filter hook forward priority filter; policy accept;
	ip version 4 oifname "lxdbr0" accept
	ip version 4 iifname "lxdbr0" accept
	ip6 version 6 oifname "lxdbr0" accept
	ip6 version 6 iifname "lxdbr0" accept
}

chain in.lxdbr0 {
	type filter hook input priority filter; policy accept;
	iifname "lxdbr0" tcp dport 53 accept
	iifname "lxdbr0" udp dport 53 accept
	iifname "lxdbr0" icmp type { destination-unreachable, time-exceeded, parameter-problem } accept
	iifname "lxdbr0" udp dport 67 accept
	iifname "lxdbr0" icmpv6 type { destination-unreachable, packet-too-big, time-exceeded, parameter-problem, nd-router-solicit, nd-neighbor-solicit, nd-neighbor-advert, mld2-listener-report } accept
	iifname "lxdbr0" udp dport 547 accept
}

chain out.lxdbr0 {
	type filter hook output priority filter; policy accept;
	oifname "lxdbr0" tcp sport 53 accept
	oifname "lxdbr0" udp sport 53 accept
	oifname "lxdbr0" icmp type { destination-unreachable, time-exceeded, parameter-problem } accept
	oifname "lxdbr0" udp sport 67 accept
	oifname "lxdbr0" icmpv6 type { destination-unreachable, packet-too-big, time-exceeded, parameter-problem, echo-request, nd-router-advert, nd-neighbor-solicit, nd-neighbor-advert, mld2-listener-report } accept
	oifname "lxdbr0" udp sport 547 accept
}

}

Please start the container and then show output of:

bridge link

on the host. If it shows a veth device connected to the lxdbr0 interface, and lxdbr0 is still down, then it means something on your system is trying to manage the lxdbr0 interface and is setting it down.
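
To check whether some other tool is managing lxdbr0 (a sketch; which of these apply depends on what is installed on your host):

networkctl status lxdbr0                  # systemd-networkd's view of the interface
nmcli device status | grep lxdbr0         # is NetworkManager claiming it?
grep -r lxdbr0 /etc/netplan/ 2>/dev/null  # any netplan definition referencing it?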

I’m sorry, I forgot to mention that the container is in another project, not the default one.

6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 9000 master br0 state forwarding priority 32 cost 2
10: vethbb6ff189@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 master br0 state forwarding priority 32 cost 2
12: veth2cab55c2@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 master br0 state forwarding priority 32 cost 2
14: veth80bb0ea1@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 master br0 state forwarding priority 32 cost 2
16: veth0319bbd2@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 master br0 state forwarding priority 32 cost 2

Please show lxc config show <instance> --expanded
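
Since the container is in a non-default project, remember to pass the --project flag to the lxc commands, for example (the project name here is hypothetical):

lxc list --project myproject
lxc config show test --expanded --project myproject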

Also please explain in detail what the issue is, as it looks like your containers are connected to br0 and not lxdbr0, so I’m not sure why lxdbr0 is relevant at all in this case.

Where are you trying to ping to/from that isn’t working?

It looks like it is connected to lxdbr0.

architecture: x86_64
config:
  image.architecture: amd64
  image.description: ubuntu 20.04 LTS amd64 (release) (20211118)
  image.label: release
  image.os: ubuntu
  image.release: focal
  image.serial: "20211118"
  image.type: squashfs
  image.version: "20.04"
  volatile.base_image: 39bdbf191acd49807930a11b46f76d3d3b31f01efa1af5f26c40402f33b11426
  volatile.eth0.host_name: veth8ba20abe
  volatile.eth0.hwaddr: 00:16:3e:3c:7e:07
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
  volatile.uuid: 099ee077-1756-4959-a9e4-63492daa74a7
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: zpool1
    size: 30GB
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

So what are you trying to achieve?

I need to be able to ping the container and resolve its name from the cluster.

What do you mean by “the cluster”?

Within the LXD cluster. When I execute host/dig/ping <container_name> from any cluster member, I want to be able to resolve and ping it.

If you’re using lxdbr0 you’ll only be able to ping the container from the host running it.

You need to connect the container to an external network, or use a fan or OVN overlay network, for it to be reachable from other cluster members.
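
For example, a fan overlay network can be created once for the whole cluster and the container re-pointed at it (a minimal sketch, not tested against your setup; check the fan underlay/subnet defaults for your environment first):

# Create a fan-mode bridge spanning the cluster members
lxc network create lxdfan0 bridge.mode=fan
# Re-point the container's eth0 at it (if eth0 is inherited from the
# default profile, run `lxc config device override test eth0` first;
# add --project if the container is in a non-default project)
lxc config device set test eth0 parent=lxdfan0
lxc restart test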