LXC setup bridge

Hello,
I am here because, after reading all the tutorials for creating a bridge, it is still not working.
What I have is an ISP box, so my LAN is 192.168.1…
My server needs to have a static IP, for example 192.168.1.160.
But when I try to build the bridge, lxc-net erases the network/interface options and I lose the broadcast address.
And from my container I have no connectivity.
My configuration is this:
/etc/network/interfaces

allow-hotplug enp1s0
auto lxcbr0
iface lxcbr0 inet dhcp
        bridge_ports enp1s0
        bridge_fd 0
        bridge_maxwait 0
        bridge_stp off

/etc/default/lxc-net

USE_LXC_BRIDGE="true"
LXC_DHCP_RANGE="192.168.1.161,192.168.1.250"
LXC_ADDR="192.168.1.161"
LXC_NETMASK="255.255.255.0"
LXC_NETWORK="192.168.1.0/24"
LXC_DHCP_CONFILE=/etc/lxc/dhcp.conf
LXC_BRIDGE="lxcbr0"

The result of ifconfig is:
lxcbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.1.160 netmask 255.255.255.0 broadcast 0.0.0.0

The config of my container is:

 #network config
 lxc.net.0.type = veth
 lxc.net.0.flags = up
 lxc.net.0.link = lxcbr0
 lxc.net.0.hwaddr = 70:85:c2:8a:00:ef
 lxc.net.0.ipv4.address = 192.168.1.163
 lxc.net.0.ipv4.gateway = 192.168.1.1
 lxc.sysctl.net.ipv6.conf.eth0.disable_ipv6=1
 lxc.sysctl.net.ipv6.conf.all.disable_ipv6=1

What is wrong? Where is my mistake?
Thanks

So I think the problem here is that you don't actually want to use the lxc-net helper script, because you are not intending to use a private bridge with dnsmasq providing DHCP and DNS.

So you should set USE_LXC_BRIDGE="false" and instead configure the IP, subnet, gateway and DNS settings for a different bridge name (e.g. br0) on your LXC host in /etc/network/interfaces, along with the existing bridge_ports setting that will connect your host's physical NIC to the external network.

Then make sure that connectivity is working OK on your LXC host; at that point you should be able to use your existing container config, just changing the bridge link setting to:

lxc.net.0.link = br0

Otherwise, if you use the lxc-net script (which starts its own DHCP server) and combine that with an /etc/network/interfaces bridge that is connected to an external port, you'll end up exposing dnsmasq's DHCP service to the rest of the network. This is likely something you do not want to do, as it may cause issues on your wider network.
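Putting that together, the host-side /etc/network/interfaces entry would look something like this sketch (addresses taken from your LAN; adjust to your own setup):

```
# /etc/network/interfaces on the LXC host (sketch, addressing assumed)
auto br0
iface br0 inet static
        bridge_ports enp1s0   # attach the physical NIC to the bridge
        bridge_fd 0
        bridge_stp off
        address 192.168.1.160
        netmask 255.255.255.0
        gateway 192.168.1.1
        dns-nameservers 1.1.1.1
```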

I don't understand: if the bridge is named lxcbr0, am I not supposed to use the same name in lxc.net.0.link = lxcbr0?
I think I already tried this kind of configuration, but it was not working. In that case, how do I find out where it is going wrong?

Yes, you can use whatever bridge you like. My main point was that you shouldn't use the lxc-net script if the bridge is also being created by /etc/network/interfaces. My suggestion to rename your bridge from lxcbr0 to br0 was really to avoid confusion with the bridge that lxc-net creates (which is called lxcbr0 by default), if the bridge you're using is not created by lxc-net. But that's just a nice-to-have and certainly not essential.

But yes, the bridge name must match what is in lxc.net.0.link, which is why I suggested renaming both the bridge and the lxc.net.0.link line.

I did it, but it is not working.
The /etc/network/interfaces:

allow-hotplug enp1s0
auto br0
iface br0 inet static
        bridge_ports enp1s0
        bridge_fd 0
        bridge_maxwait 0
        bridge_stp off
        address 192.168.1.160
        network 192.168.1.0
        netmask 255.255.255.0
        gateway 192.168.1.1
        dns-nameserver 1.1.1.1

ifconfig:

br0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.160  netmask 255.255.255.0  broadcast 192.168.1.255

Config of the container:

lxc.net.0.type = veth
lxc.net.0.flags = up
lxc.net.0.link = br0
lxc.net.0.hwaddr = 00:FF:C0:B8:13:62
lxc.net.0.ipv4.address = 192.168.1.162/24
lxc.net.0.ipv4.gateway = 192.168.1.1
lxc.sysctl.net.ipv6.conf.eth0.disable_ipv6=1
lxc.sysctl.net.ipv6.conf.all.disable_ipv6=1

When I run lxc-ls -f, I get for my container:

NAME  STATE    AUTOSTART  GROUPS  IPV4           IPV6  UNPRIVILEGED
test  RUNNING  0          -       192.168.1.162  -     false

In ifconfig I get a new line:

veth2G8VLQ: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether fe:10:48:9a:1e:2f  txqueuelen 1000  (Ethernet)

Inside the container, ifconfig shows:

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.162  netmask 255.255.255.0  broadcast 192.168.1.255
        ether 00:ff:c0:b8:13:62  txqueuelen 1000  (Ethernet)

When I try apt update:

Err:1 http://security.debian.org buster/updates InRelease
  Temporary failure resolving 'security.debian.org'

The resolv.conf file is this one:

domain home
search home
nameserver 192.168.1.1
nameserver 1.1.1.1

OK, so there are some networking diagnostic steps you can take to track down the problem:

  1. Confirm your host is pingable on its IP from the external network, and that there are no IPs configured on enp1s0.

  2. With the container running, check that the host-side veth interface is connected to the bridge by running bridge link show; you would expect to see something like:

sudo bridge link show
24: veth7e905796@if23: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master lxdbr0 state forwarding priority 32 cost 2

  3. Inside the container, check whether you can ping the host IP 192.168.1.160.

  4. If not, run tcpdump -n -i br0 and check whether you see the ICMP packets arriving at the bridge.

  5. Check that your host isn't running a firewall that may be blocking traffic.
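As an alternative to bridge link show, bridge membership can also be read straight from sysfs. A minimal sketch, assuming a Linux host, POSIX sh, and the br0 bridge name used in this thread:

```shell
# Sketch: list the ports attached to a bridge via sysfs.
# The kernel exposes bridge membership as /sys/class/net/<bridge>/brif/<port>.
list_bridge_ports() {
    bridge=$1
    if [ -d "/sys/class/net/$bridge/brif" ]; then
        ls "/sys/class/net/$bridge/brif"
    else
        echo "bridge $bridge not found"
    fi
}

# On the host you would expect the physical NIC (enp1s0) plus one
# vethXXXXXX entry per running container attached to br0.
list_bridge_ports br0
```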

ifconfig:

enp1s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether xx:xx:xx:xx:xx:xx  txqueuelen 1000  (Ethernet)

bridge link show
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br0 state forwarding priority 32 cost 4 
10: veth2G8VLQ@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br0 state forwarding priority 32 cost 2 
14: vethSXFL3T@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br0 state forwarding priority 32 cost 2 

From my laptop on the same LAN:

ping 192.168.1.160
PING 192.168.1.160 (192.168.1.160) 56(84) bytes of data.
64 bytes from 192.168.1.160: icmp_seq=1 ttl=64 time=3.27 ms

ping 192.168.1.162
PING 192.168.1.162 (192.168.1.162) 56(84) bytes of data.
64 bytes from 192.168.1.162: icmp_seq=1 ttl=64 time=3.95 ms

ping 192.168.1.163
PING 192.168.1.163 (192.168.1.163) 56(84) bytes of data.
From 192.168.1.160: icmp_seq=2 Redirect Host(New nexthop: 192.168.1.163)

From the container 192.168.1.162:

ping 192.168.1.160
PING 192.168.1.160 (192.168.1.160) 56(84) bytes of data.
64 bytes from 192.168.1.160: icmp_seq=1 ttl=64 time=0.045 ms

From the container 192.168.1.163:

ping 192.168.1.160
PING 192.168.1.160 (192.168.1.160) 56(84) bytes of data.
^C
--- 192.168.1.160 ping statistics ---
5 packets transmitted, 0 received, 100% packet loss, time 108ms

But if I run tcpdump -n -i br0 on the NAS, I get this:

18:05:25.500219 IP 192.168.1.160.22 > 192.168.1.14.40944: Flags [P.], seq 1488544218:1488544406, ack 3934575503, win 501, options [nop,nop,TS val 424511679 ecr 3669901604], length 188

So is container 192.168.1.162 working properly now? It is reachable from your laptop.

This would prove the bridge is working.

As for container 192.168.1.163, it looks like there is a routing issue. Please can you post the output of ip r and ip a from inside the container?

I fixed the problem with the ping from container 192.168.1.163:
I copied the config from the first container to the other.
I can ping in both directions. And yes, I can reach the GitLab install on 192.168.1.163.
But it is still impossible to perform apt update.
So basically, if I need to update the system, it is impossible.

ip r
default via 192.168.1.1 dev eth0 
192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.163
ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
15: eth0@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:ff:c0:b8:13:63 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.1.163/24 brd 192.168.1.255 scope global eth0
       valid_lft forever preferred_lft forever

I should add that when I ping 8.8.8.8 I get an answer, but when I ping google.com the cursor just sits there waiting; nothing happens.

Here is the complete config file of the container:

lxc.net.0.type = empty
lxc.apparmor.profile = generated
lxc.apparmor.allow_nesting = 1
lxc.rootfs.path = dir:/var/lib/lxc/gitlab/rootfs

 #shared folder
#lxc.mount.entry = /home/gitlab/bdd mnt/bdd none bind 0 0

# Common configuration
lxc.include = /usr/share/lxc/config/debian.common.conf

# Container specific configuration
lxc.tty.max = 4
lxc.uts.name = gitlab
lxc.arch = amd64
lxc.pty.max = 1024

#configuration internet
lxc.net.0.type = veth
lxc.net.0.flags = up
lxc.net.0.link = br0
lxc.net.0.hwaddr = 00:FF:C0:B8:13:63
lxc.net.0.ipv4.address = 192.168.1.163/24
lxc.net.0.ipv4.gateway = 192.168.1.1
lxc.sysctl.net.ipv6.conf.eth0.disable_ipv6=1
lxc.sysctl.net.ipv6.conf.all.disable_ipv6=1

So it sounds to me like your DNS isn't working. How have you configured that inside your container?

I found part of the solution. It is linked to the firewall on the NAS: when it is activated, it is not working.
I opened port 53, but still no result. I am still looking!

I found the answer here.
I added these rules on the host:

iptables -I ufw-user-input 1 -i br0 -j ACCEPT
iptables -I ufw-user-output 1 -o br0 -j ACCEPT
iptables -I ufw-user-forward 1 -i br0 -j ACCEPT
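For reference, the same effect can likely be achieved through ufw's own front end rather than raw iptables. A hedged sketch (the route rules require a ufw version with routed-rule support):

```
ufw allow in on br0
ufw allow out on br0
ufw route allow in on br0
ufw route allow out on br0
```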

Thanks for the help in configuring the bridge properly without lxc-net.