Cannot get bridge networking (LXC 1.0.11 on CentOS 7.8)

Using LXC 1.0.11 installed from @epel on CentOS 7.8.

LXC NAT networking fully works. The goal is bridged networking, with the LXC container on the same subnet as the host OS.

However, pinging the started container from the host OS (and vice versa) does not work.

This is my setup:

“lxcbr0” on the host OS is bridged to enp0s8, which has an IP on the host subnet.
The container gets an IP on the same subnet.

The container config has:

lxc.utsname = bnode091
lxc.network.type = macvlan
lxc.network.flags = up
lxc.network.link = lxcbr0
lxc.network.name = eth0
lxc.network.hwaddr = fe:42:d3:10:b5:66
lxc.network.ipv4 =
lxc.network.ipv4.gateway =

In the container I have:

[root@bnode091 ~]# ifconfig -a
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet  netmask  broadcast

[root@bnode091 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0 

[root@bnode091 ~]# netstat -nr
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
                                                UG        0 0          0 eth0
                                                U         0 0          0 eth0
                                                U         0 0          0 eth0

What am I missing to get this working?

First, insert the line below into the config file:

lxc.network.macvlan.mode = bridge

macvlan has three mode types: private, vepa, bridge.
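In LXC 1.x the mode is selected with a single key in the container config. A minimal sketch of applying it (the container name bnode091 is taken from this thread; the config path is the usual default and may differ on your system):

```shell
# Append the macvlan mode to the container config (LXC 1.x key syntax),
# then restart the container so the new mode takes effect.
echo 'lxc.network.macvlan.mode = bridge' >> /var/lib/lxc/bnode091/config
lxc-stop -n bnode091
lxc-start -n bnode091 -d
```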

I usually configure the IP inside the container.

Check whether it is a macvlan interface inside the container with the command shown below:

ip -d link show eth0

The host OS runs in a VM under VirtualBox. After setting the VM’s network adapter in VirtualBox to allow promiscuous mode and making the changes below, I can

  • ping/ssh other VMs in the same subnet and vice versa

I cannot

  • ping the host OS where the container resides, and vice versa (Internet access does not work either)

Iptables are not configured/activated, and neither is firewalld.
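For the record, this can be double-checked on the host with stock commands (nothing here is specific to this thread):

```shell
# With nothing configured, firewalld should be inactive (or absent)
# and the iptables ruleset should contain only the default ACCEPT policies.
systemctl is-active firewalld
iptables -S
```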

In the container I have:

# cat /etc/sysconfig/network-scripts/ifcfg-eth0

LXC config for network

lxc.utsname = bnode091
lxc.network.type = macvlan
lxc.network.macvlan.mode = bridge
lxc.network.flags = up
lxc.network.link = lxcbr0
lxc.network.name = eth0
lxc.network.ipv4.gateway = auto
lxc.network.ipv4 =

In container:

# ip -d link show eth0
9: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether de:29:e0:39:34:65 brd ff:ff:ff:ff:ff:ff link-netnsid 0 promiscuity 0 
    macvlan mode bridge addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 

On the host OS:

[root@node09 ~]# ip -d link show enp0s8
3: enp0s8: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master lxcbr0 state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:d4:a3:77 brd ff:ff:ff:ff:ff:ff promiscuity 2 
    bridge_slave state forwarding priority 32 cost 4 hairpin off guard off root_block off fastleave off learning on flood on port_id 0x8001 port_no 0x1 designated_port 32769 designated_cost 0 designated_bridge 8000.8:0:27:d4:a3:77 designated_root 8000.8:0:27:d4:a3:77 hold_timer    0.00 message_age_timer    0.00 forward_delay_timer    0.00 topology_change_ack 0 config_pending 0 proxy_arp off proxy_arp_wifi off mcast_router 1 mcast_fast_leave off mcast_flood on addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 
[root@node09 ~]# ip -d link show lxcbr0
6: lxcbr0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:d4:a3:77 brd ff:ff:ff:ff:ff:ff promiscuity 2 
    bridge forward_delay 0 hello_time 200 max_age 2000 ageing_time 30000 stp_state 0 priority 32768 vlan_filtering 0 vlan_protocol 802.1Q bridge_id 8000.8:0:27:d4:a3:77 designated_root 8000.8:0:27:d4:a3:77 root_port 0 root_path_cost 0 topology_change 0 topology_change_detected 0 hello_timer    0.00 tcn_timer    0.00 topology_change_timer    0.00 gc_timer  225.51 vlan_default_pvid 1 vlan_stats_enabled 0 group_fwd_mask 0 group_address 01:80:c2:00:00:00 mcast_snooping 1 mcast_router 1 mcast_query_use_ifaddr 0 mcast_querier 0 mcast_hash_elasticity 4 mcast_hash_max 512 mcast_last_member_count 2 mcast_startup_query_count 2 mcast_last_member_interval 100 mcast_membership_interval 26000 mcast_querier_interval 25500 mcast_query_interval 12500 mcast_query_response_interval 1000 mcast_startup_query_interval 3125 mcast_stats_enabled 0 mcast_igmp_version 2 mcast_mld_version 1 nf_call_iptables 0 nf_call_ip6tables 0 nf_call_arptables 0 addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 

What is the missing piece?

Thanks. me too. ^^

I wonder if I understood your system stack correctly:
PC (or notebook or server) - VirtualBox VM (CentOS) - container (CentOS)

  1. My thoughts are as follows:
    Set all IP settings in one place, either in the container configuration file or inside the container.

  2. Test it by modifying your current settings as follows:
    Delete the gateway and IP settings from the lxc configuration file.

  3. In the container’s ifcfg-eth0 file, specify the gateway and DNS settings as I showed you at first. Also specify the gateway as the IP of lxcbr0 (as seen in ifconfig or ip a s on the host), not auto.
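For illustration, a hypothetical ifcfg-eth0 along those lines (all addresses are placeholders, not the real ones from this thread; GATEWAY would be lxcbr0’s IP on the host):

```
# /etc/sysconfig/network-scripts/ifcfg-eth0 (inside the container)
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.56.91     # placeholder: container IP on the host subnet
NETMASK=255.255.255.0    # placeholder
GATEWAY=192.168.56.1     # placeholder: the IP of lxcbr0 on the host
DNS1=192.168.56.1        # placeholder: any reachable resolver
```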

In a nested virtualized configuration like yours, pinging from the container to the outside should work. Here, “outside” means IPs in the same band (subnet) as the host.

In promiscuous mode, the nested VM (or container) may not be able to ping the host IP itself. It can ping other nodes (IPs in the same network band as the host), just not the host.

I have tested an OVN gateway in a nested VM environment before. At that time, communication was confirmed up to the host network band, but communication with the outside was not.

Promiscuous mode means that the NIC device driver does not discard Ethernet frames addressed to MAC addresses other than its own, and instead delivers them to the upper layer.

For this reason, pinging and communicating from the container to the host band should work.
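If in doubt, the promiscuous flag on the bridge’s physical uplink can be forced and checked from the host; a sketch assuming enp0s8 is the bridged NIC, as in this thread:

```shell
# Force promiscuous mode on the bridge's physical uplink, then confirm
# that PROMISC appears in the interface flags.
ip link set enp0s8 promisc on
ip link show enp0s8 | grep PROMISC
```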

The environment where I tested lxc is:

notebook (Ubuntu 20.04) - lxd vm (CentOS) - lxc container (CentOS)
In this environment, it was confirmed that pinging from the container works well.

The test results seem to differ slightly depending on how the nested VM is constructed.

When configuring lxc on the VM, I installed bridge-utils and configured virbr0 manually. In other words, the VM network and the container network were in the same IP band, even without using macvlan.

So the ping was successful in my setup; beyond that, I don’t know.

Both the firewall and SELinux are disabled here, too.

If you are okay, please share your test results.

Yes, you understand my system stack correctly.

I added and removed the following lines from the container config:

lxc.network.ipv4 =
lxc.network.ipv4.gateway =

(Setting the gateway to “auto” makes no difference either.)

And I added/removed the GATEWAY= line from ifcfg-eth0 in the container.

The result is always the same: from within the container I can reach other VMs on the laptop, but not the host OS, and thus no Internet from the container.

As I’m on CentOS 7.8 and the @epel repository only has a fairly old version of lxc (1.0.11), I’m wondering whether I’m running into an issue with that version - either a bug, or further settings needed on the bridge interface of the host OS.

I also installed the epel repo with yum install -y epel-release

With the macvlan configuration, the IP-related settings can go in one place: either the container configuration file or ifcfg-eth0.

In the container configuration file, delete all IP-setting lines (the lxc.network.ipv4.gateway = auto line and the lxc.network.ipv4 line).
Change the promiscuous mode of the VirtualBox adapter to “Allow All”.
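The VirtualBox promiscuous-mode policy can also be set from the laptop’s CLI; a sketch, assuming the VM is named node09 and the bridged NIC is adapter 2 (both are assumptions; adjust to your VM):

```shell
# Run on the laptop (the VirtualBox host), with the VM powered off.
# "node09" and adapter index 2 are assumed names - adjust to your setup.
VBoxManage modifyvm "node09" --nicpromisc2 allow-all
# Verify the new policy:
VBoxManage showvminfo "node09" | grep -i promisc
```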