Help in routing LXD container network

Hi all,

I have a container whose network I’ve routed using instructions from the How to get LXD containers get IP from the LAN with routed network tutorial. Here’s the profile I used to launch it:

name: routed-ubuntu
description: Default LXD profile
config:
  user.network-config: |
    version: 2
    ethernets:
        eth0:
            addresses:
            - 192.168.233.15/24
            nameservers:
                addresses:
                - 8.8.8.8
                search: []
            routes:
            -   to: 0.0.0.0/0
                via: 169.254.0.1
                on-link: true
devices:
  eth0:
    ipv4.address: 192.168.233.15
    nictype: routed
    parent: enp5s0f0
    type: nic
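
For anyone reproducing this, a profile like the above can be applied along these lines (the image alias `ubuntu:22.04`, the container name `c1`, and the file name `routed-ubuntu.yaml` are placeholders, not from my actual setup):

```shell
# Create an empty profile, load the YAML above into it, then launch with it.
lxc profile create routed-ubuntu
lxc profile edit routed-ubuntu < routed-ubuntu.yaml   # the profile shown above
lxc launch ubuntu:22.04 c1 --profile default --profile routed-ubuntu
```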

The host node has the following network:

# ip -brief address
enp5s0f0          UP             192.168.233.12/24 
veth443017e8@if78 UP             169.254.0.1/32 fe80::30a0:69ff:fe6d:5005/64 

For reference, here’s the setup in question (the physical servers are connected to a switch with configured VLANs):

  • Container: 192.168.233.15/24
  • Host server: 192.168.233.12/24
  • Other servers: 192.168.233.11/24 and 192.168.233.101/24
  • Laptop (via LAN): 172.16.0.xxx
  • Laptop (via OpenVPN): 192.168.192.xxx

The host and container can reach each other (via ping and ssh). The container and the laptop (both via LAN and via OpenVPN) can also reach each other.

The funky thing is that the other servers, which sit in the same subnet as the host and the container, cannot reach the container, and vice versa. The other servers do know to route traffic for the container via the host’s IP; the packets just get stuck at the host for some reason:

# traceroute 192.168.233.15
traceroute to 192.168.233.15 (192.168.233.15), 30 hops max, 60 byte packets
 1  192.168.233.12  0.356 ms  0.324 ms  0.310 ms
 2  * * *
 ...
30  * * *

The host is running CentOS 9 Stream with firewalld disabled.

Can anyone help me understand how to fix this? Thank you very much.

I think I may have found the solution:

Inside the container, the routing table is:

# ip route show
default via 169.254.0.1 dev eth0 proto static onlink
192.168.233.0/24 dev eth0 proto kernel scope link src 192.168.233.15
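
That on-link /24 route turns out to be the culprit: the kernel always picks the most specific matching prefix, so traffic to a LAN peer such as 192.168.233.11 never goes via the host at 169.254.0.1. Instead the container tries to ARP for the peer directly on eth0, and with a routed NIC that ARP apparently goes unanswered. Here is a toy longest-prefix-match sketch in Python (illustrative only, not the kernel’s actual FIB code; the routes are the two from the table above):

```python
import ipaddress

# The container's two routes, as shown by `ip route show`.
routes = [
    (ipaddress.ip_network("192.168.233.0/24"), "on-link: ARP directly on eth0"),
    (ipaddress.ip_network("0.0.0.0/0"), "via 169.254.0.1 (the LXD host)"),
]

def next_hop(dst: str) -> str:
    addr = ipaddress.ip_address(dst)
    candidates = [(net, hop) for net, hop in routes if addr in net]
    # The most specific (longest) matching prefix wins, as in the kernel's lookup.
    net, hop = max(candidates, key=lambda item: item[0].prefixlen)
    return hop

print(next_hop("192.168.233.11"))  # another server on the LAN: matched by the /24
print(next_hop("8.8.8.8"))         # off-subnet: falls through to the default route
```

So with the /24 route in place, only off-subnet destinations ever traverse the host; same-subnet peers are dead-ends.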

I replaced the 192.168.233.0/24 route with a host route towards the subnet’s gateway (192.168.233.1) or its router (192.168.233.2); either one works, and now the other servers in the subnet can reach the container:

# ip route delete 192.168.233.0/24
# ip route add 192.168.233.1 via 192.168.233.15 dev eth0
# ip route show
default via 169.254.0.1 dev eth0 proto static onlink
192.168.233.1 via 192.168.233.15 dev eth0

Simply deleting the 192.168.233.0/24 route works too:

# ip route delete 192.168.233.0/24
# ip route show
default via 169.254.0.1 dev eth0 proto static onlink
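
To make this survive a reboot, one option (an untested sketch, assuming the same profile as above) is to give eth0 a /32 address in `user.network-config`; the kernel then never installs the /24 on-link route in the first place, and everything goes via 169.254.0.1:

```yaml
user.network-config: |
  version: 2
  ethernets:
      eth0:
          addresses:
          - 192.168.233.15/32   # /32 instead of /24: no on-link subnet route
          nameservers:
              addresses:
              - 8.8.8.8
              search: []
          routes:
          -   to: 0.0.0.0/0
              via: 169.254.0.1
              on-link: true
```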
