Unable to ping container after starting OpenVPN client inside it

Hello. I am having a networking issue that I cannot seem to solve.

Starting an OpenVPN client inside a container makes the container stop responding to outside pings. Without OpenVPN, I am able to ping the container from another computer on the same LAN using its lxdbr0 IP. Once the VPN connection is made, I can no longer reach the container from another computer, which means I can no longer access services running in that container from the LAN. I can still ping the container from the LXD host and from other containers on the same host. Of note, I am using the update-systemd-resolved script provided with OpenVPN.
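
For reference, the script is hooked into the client config in the usual way; the path below is just the typical install location, not necessarily the one in use here:

# update-systemd-resolved hooks in the OpenVPN client config (example path)
script-security 2
up /etc/openvpn/scripts/update-systemd-resolved
down /etc/openvpn/scripts/update-systemd-resolved
down-pre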

Here is how things are set up:

LXD host is Ubuntu 20.04.2 LTS on LAN IP 192.168.1.30
LXD v4.19 via snap
Container is on lxdbr0 IP 10.1.1.54

I have a static route on the router that reads: 10.1.1.0/24 next hop 192.168.1.30.
So from a computer at 192.168.1.22, I can run $ ping 10.1.1.54 and it will respond.
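
For anyone reproducing this without access to the router, the equivalent per-host route on a Linux machine on the LAN would be something like:

$ sudo ip route add 10.1.1.0/24 via 192.168.1.30   # point the lxdbr0 subnet at the LXD host
$ ping -c 3 10.1.1.54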

$ lxc profile show default
...
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic

$ lxc network show lxdbr0 
config:
  ipv4.address: 10.1.1.1/24
  ipv4.nat: "true"
  ipv6.address: none
description: ""
name: lxdbr0
type: bridge

$ lxc config show test
  ...
  user.network-config: |
    version: 2
    ethernets:
      eth0:
        addresses: [10.1.1.54/24]
        gateway4: 10.1.1.1
        nameservers:
          addresses: [10.1.1.1]

ubuntu@test:~$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
16: eth0@if17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:02:4e:a7 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.1.1.54/24 brd 10.1.1.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fe02:4ea7/64 scope link 
       valid_lft forever preferred_lft forever

ubuntu@test:~$ resolvectl status
Global
       LLMNR setting: no                  
MulticastDNS setting: no                  
  DNSOverTLS setting: no                  
      DNSSEC setting: no                  
    DNSSEC supported: no                  
          DNSSEC NTA: 10.in-addr.arpa     
                      16.172.in-addr.arpa 
                      168.192.in-addr.arpa
                      17.172.in-addr.arpa 
                      18.172.in-addr.arpa 
                      19.172.in-addr.arpa 
                      20.172.in-addr.arpa 
                      21.172.in-addr.arpa 
                      22.172.in-addr.arpa 
                      23.172.in-addr.arpa 
                      24.172.in-addr.arpa 
                      25.172.in-addr.arpa 
                      26.172.in-addr.arpa 
                      27.172.in-addr.arpa 
                      28.172.in-addr.arpa 
                      29.172.in-addr.arpa 
                      30.172.in-addr.arpa 
                      31.172.in-addr.arpa 
                      corp                
                      d.f.ip6.arpa        
                      home                
                      internal            
                      intranet            
                      lan                 
                      local               
                      private             
                      test                

Link 16 (eth0)
      Current Scopes: DNS     
DefaultRoute setting: yes     
       LLMNR setting: yes     
MulticastDNS setting: no      
  DNSOverTLS setting: no      
      DNSSEC setting: no      
    DNSSEC supported: no      
  Current DNS Server: 10.1.1.1
         DNS Servers: 10.1.1.1

After the OpenVPN client starts…

ubuntu@test:~$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
4: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 500
    link/none 
    inet 10.10.66.67/26 brd 10.10.66.127 scope global tun0
       valid_lft forever preferred_lft forever
16: eth0@if17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:02:4e:a7 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.10.66.69/26 brd 10.10.66.127 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fe02:4ea7/64 scope link 
       valid_lft forever preferred_lft forever

ubuntu@test:~$ resolvectl status
...
Link 2 (tun0)
      Current Scopes: DNS        
DefaultRoute setting: yes        
       LLMNR setting: yes        
MulticastDNS setting: no         
  DNSOverTLS setting: no         
      DNSSEC setting: no         
    DNSSEC supported: no         
         DNS Servers: 10.10.66.65
                      80.67.14.78
          DNS Domain: ~.

Hello

It has been 4 days, so you have probably solved it yourself by now. Anyway, when dealing with several network interfaces and intractable routing problems, one motto has been imprinted in my head: beware of martians! When nothing works although logically everything should be fine, always, always enable martian logging and look at the kernel logs. The last time I forgot this rule I wasted yet more hours, and it was… with OpenVPN! Hope this helps. The usual fix for martian problems: add an explicit routing rule that makes no sense, since routing already works. Yet it makes all the difference to the kernel.
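
By "an explicit routing rule" I mean something along these lines (purely hypothetical, just reusing the addresses from this thread): spell out the LAN path through the original gateway even though the pre-VPN routing table already covers it:

$ sudo ip route add 192.168.1.0/24 via 10.1.1.1 dev eth0   # hypothetical example: pin the LAN subnet to the original gateway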

Unfortunately, I still haven’t solved this. There are no martians in the logs.

Well, what makes them nasty is that there is nothing in syslog (at least on Ubuntu) unless you enable logging for them:

sudo sysctl net.ipv4.conf.all.log_martians=1
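
That sysctl is not persistent across reboots; to keep it, and to actually watch for the entries, something like:

$ echo 'net.ipv4.conf.all.log_martians = 1' | sudo tee /etc/sysctl.d/90-log-martians.conf
$ sudo sysctl --system
$ journalctl -k | grep -i martian   # martian packets show up in the kernel log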