LXC container networking issues

I have been messing with this for days, trying several different ways to get a bridged network working with an LXC container. I am brand new to LXC and this is the first thing I wanted to do. I have tried with the bridge I currently use for KVM/QEMU; the VMs on it work without any problems at all. I am trying to run a Debian 10 container on a Debian 10 host. I have even tried uninstalling and reinstalling LXC, and still no luck. Any help at all would be greatly appreciated.
Also, this is my first post on my first ever forum, so if I am doing this wrong please let me know 🙂
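
For reference, the container was created and started roughly like this (reconstructed from the template parameters in the config dump below, so the exact invocation may have differed):

    lxc-create -n deb -t debian -- -r buster
    lxc-start -n deb
    lxc-attach -n deb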

cat /etc/default/lxc
# LXC_AUTO - whether or not to start containers at boot
LXC_AUTO="true"

# BOOTGROUPS - What groups should start on bootup?
#	Comma separated list of groups.
#	Leading comma, trailing comma or embedded double
#	comma indicates when the NULL group should be run.
# Example (default): boot the onboot group first then the NULL group
BOOTGROUPS="onboot,"

# SHUTDOWNDELAY - Wait time for a container to shut down.
#	Container shutdown can result in lengthy system
#	shutdown times.  Even 5 seconds per container can be
#	too long.
SHUTDOWNDELAY=5

# OPTIONS can be used for anything else.
#	If you want to boot everything then
#	options can be "-a" or "-a -A".
OPTIONS=

# STOPOPTS are stop options.  They can be used to stop anything else.
#	If you want to kill containers fast, use -k
STOPOPTS="-a -A -s"

USE_LXC_BRIDGE="true"  # overridden in lxc-net

[ ! -f /etc/default/lxc-net ] || . /etc/default/lxc-net

cat /etc/lxc/default.conf
lxc.net.0.type = veth
lxc.net.0.link = virbr0
lxc.net.0.flags = up

lxc.apparmor.profile = generated
lxc.apparmor.allow_nesting = 1

cat /var/lib/lxc/deb/config
# Template used to create this container: /usr/share/lxc/templates/lxc-debian
# Parameters passed to the template: -r buster
# Template script checksum (SHA-1): d5aa397522e36a17c64c014dd63c70d8607c9873
# For additional config options, please look at lxc.container.conf(5)

# Uncomment the following line to support nesting containers:
#lxc.include = /usr/share/lxc/config/nesting.conf
# (Be aware this has security implications)

lxc.net.0.type = veth
lxc.net.0.hwaddr = 00:16:3e:e6:32:6b
lxc.net.0.link = virbr0
lxc.net.0.flags = up
lxc.apparmor.profile = generated
lxc.apparmor.allow_nesting = 1
lxc.rootfs.path = dir:/var/lib/lxc/deb/rootfs

# Common configuration
lxc.include = /usr/share/lxc/config/debian.common.conf
lxc.apparmor.profile = unconfined
# Container specific configuration
lxc.tty.max = 4
lxc.uts.name = deb
lxc.arch = amd64
lxc.pty.max = 1024
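
For comparison, lxc-info shows what LXC actually wired up (output fields vary slightly by version):

    lxc-info -n deb
    # "Link:" names the host-side veth device; "IP:" lines list the
    # addresses the container currently holds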

Inside container:

ip ad
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
92: eth0@if93: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:e6:32:6b brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::216:3eff:fee6:326b/64 scope link
       valid_lft forever preferred_lft forever
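
So eth0 is up but only ever gets a link-local IPv6 address; no IPv4 arrives. One way to narrow this down (a sketch; 10.20.8.50 is just an example unused address on the bridge's 10.20.8.0/24 subnet shown below, and dhclient assumes something on that segment serves DHCP):

    # inside the container: try DHCP by hand (isc-dhcp-client)
    dhclient -v eth0
    # or assign a static address and ping the host's bridge address
    ip addr add 10.20.8.50/24 dev eth0
    ping -c 3 10.20.8.2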

Can you show the output of ip a and ip l on the host?

    ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master virbr0 state UP group default qlen 1000
        link/ether 18:66:da:e7:cd:2c brd ff:ff:ff:ff:ff:ff
    3: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
        link/ether 18:66:da:e7:cd:2d brd ff:ff:ff:ff:ff:ff
    4: eno3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
        link/ether 18:66:da:e7:cd:2e brd ff:ff:ff:ff:ff:ff
    5: eno4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
        link/ether 18:66:da:e7:cd:2f brd ff:ff:ff:ff:ff:ff
    6: enp5s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
        link/ether a0:36:9f:0e:a2:e8 brd ff:ff:ff:ff:ff:ff
    7: enp5s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
        link/ether a0:36:9f:0e:a2:ea brd ff:ff:ff:ff:ff:ff
    8: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
        link/ether 18:66:da:e7:cd:2c brd ff:ff:ff:ff:ff:ff
        inet 10.20.8.2/24 brd 10.20.8.255 scope global virbr0
           valid_lft forever preferred_lft forever
        inet6 fe80::1a66:daff:fee7:cd2c/64 scope link
           valid_lft forever preferred_lft forever
    94: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
        link/ether 02:42:3f:77:10:38 brd ff:ff:ff:ff:ff:ff
        inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
           valid_lft forever preferred_lft forever

    ip l
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master virbr0 state UP mode DEFAULT group default qlen 1000
        link/ether 18:66:da:e7:cd:2c brd ff:ff:ff:ff:ff:ff
    3: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
        link/ether 18:66:da:e7:cd:2d brd ff:ff:ff:ff:ff:ff
    4: eno3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
        link/ether 18:66:da:e7:cd:2e brd ff:ff:ff:ff:ff:ff
    5: eno4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
        link/ether 18:66:da:e7:cd:2f brd ff:ff:ff:ff:ff:ff
    6: enp5s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
        link/ether a0:36:9f:0e:a2:e8 brd ff:ff:ff:ff:ff:ff
    7: enp5s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
        link/ether a0:36:9f:0e:a2:ea brd ff:ff:ff:ff:ff:ff
    8: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
        link/ether 18:66:da:e7:cd:2c brd ff:ff:ff:ff:ff:ff
    94: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
        link/ether 02:42:3f:77:10:38 brd ff:ff:ff:ff:ff:ff

Hmm, and the container was running at that time?

I’m surprised not to see a vethXYZ type device on the host.
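
Two quick ways to check for that, both plain iproute2:

    ip link show type veth    # lists any veth pairs on the host
    bridge link show          # lists which ports are attached to which bridge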

The container was running. I added another network card on a different VLAN for it, and now it is working. Is it impossible to share a bridge between KVM and LXC? Docker gave slightly more verbose errors, though not by much. This 'virbr0' bridge was created for and used by KVM, and wouldn't work for anything else.

Solution for me: create a bridge separate from the KVM one and assign that for LXC containers to use.
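
For anyone hitting the same thing, a minimal ifupdown sketch of that workaround (the interface name eno2, the bridge name br-lxc, and the subnet are examples; adjust to your hardware):

    # /etc/network/interfaces.d/br-lxc -- requires the bridge-utils package
    auto br-lxc
    iface br-lxc inet static
        address 10.20.9.2/24
        bridge_ports eno2
        bridge_stp off
        bridge_fd 0

Then point new containers at it in /etc/lxc/default.conf:

    lxc.net.0.link = br-lxc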

It should work fine, though libvirt can in some cases be configured to do MAC filtering. If that's active, only the devices libvirt itself attaches to the bridge are added to the MAC table, and everything else fails to send traffic through it.
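
One way to check whether that is in play (a sketch; "default" is whatever libvirt network owns virbr0 on your host):

    virsh net-dumpxml default | grep -i macTableManager
    # macTableManager='libvirt' disables kernel MAC learning on the bridge;
    # libvirt then adds forwarding entries only for its own tap devices, so
    # ports attached by anything else cannot pass traffic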