Problem with a route between host and LXD container

Good morning,
I have a problem with a route that is systematically added at startup.
This breaks the network connection between the host and the LXD container.

The two machines are unable to communicate with each other, and I found the origin: a wrong route.
I delete it manually, but it comes back after each boot.
I confess I don’t know how to fix this permanently.

My Container

Here is the netplan config of my container:

> network:
>     version: 2
>     ethernets:
>         eth0:
>             dhcp4: false
>             addresses:
>             - 192.168.0.212/24
>             gateway4: 192.168.0.1
>             nameservers:
>                  addresses:
>                  - 192.168.0.1
>             routes:
>             - to: 192.168.0.0/24
>               via: 192.168.0.1

Here are the routes of my container:

> default via 192.168.0.1 dev eth0 proto static 
> 10.8.0.0/24 via 10.8.0.2 dev tun0 
> 10.8.0.2 dev tun0 proto kernel scope link src 10.8.0.1 
> 192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.212 
> 192.168.0.0/24 via 192.168.0.1 dev eth0 proto static

The route that causes me trouble and that comes back after each boot is:

**> 192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.212**

The IP config of my container:

> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
>     inet 127.0.0.1/8 scope host lo
>        valid_lft forever preferred_lft forever
> 2: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 500
>     inet 10.8.0.1 peer 10.8.0.2/32 scope global tun0
>        valid_lft forever preferred_lft forever
> 7: eth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000 link-netnsid 0
>     inet 192.168.0.212/24 brd 192.168.0.255 scope global eth0
>        valid_lft forever preferred_lft forever

HOST Machine

Here is the netplan config of my host machine:

> network:
>     ethernets:
>         eth0:
>             dhcp4: true
>             optional: true
>     version: 2

Here are the routes of my host machine:

> default via 192.168.0.1 dev eth0 proto dhcp src 192.168.0.201 metric 100 
> 10.20.68.0/24 dev lxdbr0 proto kernel scope link src 10.20.68.1 linkdown 
> 192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.201 
> 192.168.0.1 dev eth0 proto dhcp scope link src 192.168.0.201 metric 100

The route that causes me trouble and that comes back after each boot is:

**> 192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.201** 

The IP config of my host machine:

> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
>     inet 127.0.0.1/8 scope host lo
>        valid_lft forever preferred_lft forever
> 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
>     inet 192.168.0.201/24 brd 192.168.0.255 scope global dynamic eth0
>        valid_lft 84213sec preferred_lft 84213sec
> 4: lxdbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
>     inet 10.20.68.1/24 scope global lxdbr0
>        valid_lft forever preferred_lft forever

Thank you in advance for your advice.

The reason the route 192.168.0.0/24 dev eth0 is being added is that your container’s netplan specifies an IP address of 192.168.0.212/24. The /24 part is saying “this NIC is in the same network as the other hosts in the /24, so you don’t need to use a router gateway to get to them”.

You’ve not explained why this is a problem, as it normally isn’t. What is your network setup?

You’ve also not explained what sort of networking mode you’re using with your container.

Please can you provide the output of lxc config show <instance> --expanded and lxc network show lxdbr0 if the instance NIC is using this managed network.

However, generally speaking, if you change your netplan config to specify the IP as 192.168.0.212/32, this will remove the automatically generated route. But this alone won’t solve your problem, because netplan will then be unable to add the route to 192.168.0.1, as it won’t know how to reach it. You will then also need to add a device (on-link) route to 192.168.0.1/32.
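For illustration only (untested, reusing the addresses from this thread), the container netplan would then look something like:

```yaml
network:
    version: 2
    ethernets:
        eth0:
            dhcp4: false
            addresses:
            - 192.168.0.212/32           # /32 suppresses the auto-generated 192.168.0.0/24 route
            nameservers:
                addresses:
                - 192.168.0.1
            routes:
            - to: 192.168.0.1/32         # device (on-link) route so the gateway is reachable
              scope: link
            - to: 0.0.0.0/0              # default route via the now-reachable gateway
              via: 192.168.0.1
              on-link: true
```

Note that gateway4 is replaced here by an explicit default route, since the gateway is no longer in a directly connected subnet.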

Good morning,
Thank you for your answer.
I use this container to host an OpenVPN server.
I use a macvlan profile.

Here is the exported configuration of my container:

> architecture: aarch64
> config:
>   image.architecture: arm64
>   image.description: Ubuntu focal arm64 (20220323_20:07)
>   image.os: Ubuntu
>   image.release: focal
>   image.serial: "20220323_20:07"
>   image.type: squashfs
>   image.variant: cloud
>   user.user-data: |
>     #cloud-config
>     hostname: openvpn
>     ssh_authorized_keys:
>       - ssh-rsa    rsa keyroot@ansible-deploy
>     packages:
>       - openssh-server
>     write_files:
>         - content: |
>             network:
>                 version: 2
>                 ethernets:
>                     eth0:
>                         dhcp4: false
>                         addresses:
>                         - 192.168.0.212/24
>                         gateway4: 192.168.0.1
>                         nameservers:
>                              addresses:
>                              - 192.168.0.1
>           path : /etc/netplan/50-cloud-init.yaml
>     runcmd:
>        - echo "192.168.0.212 openvpn" >> /etc/hosts
>        - [netplan, apply]
>        - [timedatectl, set-timezone, Europe/Paris]
>        - echo "rsa key root@ansible-deploy" > /root/.ssh/authorized_keys
>   volatile.base_image: e37abbd3eeee9c08ce7ad76ab6796d655a001fb9f56dc018a4cf2d1ca02041d3
>   volatile.eth0.host_name: mace0b322cc
>   volatile.eth0.hwaddr: 00:16:3e:fc:49:d2
>   volatile.eth0.last_state.created: "false"
>   volatile.eth0.name: eth0
>   volatile.idmap.base: "0"
>   volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
>   volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
>   volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
>   volatile.last_state.power: RUNNING
>   volatile.uuid: c71cd78f-6593-43fc-b0b0-0ddac6725018
> devices:
>   eth0:
>     nictype: macvlan
>     parent: eth0
>     type: nic
>   root:
>     path: /
>     pool: default
>     type: disk
> ephemeral: false
> profiles:
> - vpn-macvlan
> stateful: false
> description: ""

my lxdbr0:

root@ubuntuserv:~# lxc network show lxdbr0
config:
  ipv4.address: 10.20.68.1/24
  ipv4.nat: "true"
  ipv6.address: fd42:a714:7749:96fe::1/64
  ipv6.nat: "true"
description: ""
name: lxdbr0
type: bridge
used_by:
- /1.0/instances/test2
- /1.0/profiles/default
- /1.0/profiles/lan
- /1.0/profiles/myprofile2
managed: true
status: Created
locations:
- none

The profile used for my container:

config:
  user.user-data: |
    #cloud-config
    hostname: openvpn
    ssh_authorized_keys:
      - rsa key
    packages:
      - openssh-server
    write_files:
        - content: |
            network:
                version: 2
                ethernets:
                    eth0:
                        dhcp4: false
                        addresses:
                        - 192.168.0.212/24
                        gateway4: 192.168.0.1
                        nameservers:
                             addresses:
                             - 192.168.0.1
          path : /etc/netplan/50-cloud-init.yaml
    runcmd:
       - echo "192.168.0.212 openvpn" >> /etc/hosts
       - [netplan, apply]
       - [timedatectl, set-timezone, Europe/Paris]
       - echo "rsa key" > /root/.ssh/authorized_keys
description: mac vlan lxd profile
devices:
  eth0:
    nictype: macvlan
    parent: eth0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: vpn-macvlan
used_by:
- /1.0/instances/openvpn

Thank you again for your advice.

OK so all that looks fine.
I think the actual issue here is that you’re using macvlan, and that device type, by design, does not allow the instance to communicate with the host. So it’s nothing to do with routing; it’s just not allowed by macvlan.

You could look at using the routed NIC type, since you’re using static IPs rather than DHCP inside the container. This also avoids the need for a bridge (lxdbr0 isn’t involved here).
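As a sketch (assuming the same parent interface and the static IP already used in this thread; not a definitive config), the NIC device would change from macvlan to something like:

```yaml
devices:
  eth0:
    nictype: routed
    parent: eth0                 # host interface the container's traffic is routed through
    ipv4.address: 192.168.0.212  # static IP that LXD will route to the container
    type: nic
```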

See


Indeed, that is precisely the blog I used!
At the beginning I had tried ipvlan.
But when I set the IP address (which I need for my OpenVPN server), the container did not load my profile.
It was impossible to install packages (the SSH server) from the lxc profile, because the network was down for too long.

In the macvlan case, apart from deleting the problematic routes after each reboot, I have not found another solution for the moment.


ipvlan has the same restrictions as macvlan with regard to not being able to communicate with the host, whereas routed does not.

You’ll need to use cloud-init to configure your netplan profile inside the container with routed NIC (as well as specifying it on the LXD NIC device config).

Please re-create what you used with routed so I can see the specific setup you used originally.

Good morning,
I’m sorry, maybe I don’t understand your request.
But all my configurations are based on this profile:
root@ubuntuserv:~# lxc profile show vpn-macvlan

> config:
>   user.user-data: |
>     #cloud-config
>     hostname: openvpn
>     ssh_authorized_keys:
>       - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDsW6QzrRlo8E96yBqnR/yvdXl4jOdT+JqNXW5Z3o0KSEMqPfjv+iqBer6hqVGygSG4Y+YeKf9dotskGfKcKAWghwz38Nc8uOpktIANDfka66YCJ531ezbEGJRPZmqDeGCDuUUESgFE78XNvdVcPhyknMYYY26XGZ6+xh5wGJBvthe0gbHgY8LeM2N9sKYfrj7yR1e8yLJDE8C981ZYGlLUU9xYUkVVrfbc6TipkpwNgx8+Y1nSBi1hj7oBW1vQhpNcuuzCNg2ILxylvfKfwDNVbmSDeU+I10P+NTjU3drp0Q4rWuC15lUk9332Io2hWflVBn0gkqgQSwC9jHOD/+40+YEdLXp9gRonHSGVIVyaX+r+W57XZkS9CW7a8u3SYZzoVcyMuTADKikwYAjQd3uB7mAbn2nsRMUUK6NH2+s97xMI7vsNmYNsllgoXl+pM2NWQSN0iG2Te0ON6ZKdtRtOKbgDpYZJXOp4WII/PJ6577uW52eHfIay7yKAl82mpSk= root@ansible-deploy
>     packages:
>       - openssh-server
>     write_files:
>         - content: |
>             network:
>                 version: 2
>                 ethernets:
>                     eth0:
>                         dhcp4: false
>                         addresses:
>                         - 192.168.0.212/24
>                         gateway4: 192.168.0.1
>                         nameservers:
>                              addresses:
>                              - 192.168.0.1
>           path : /etc/netplan/50-cloud-init.yaml
>     runcmd:
>        - echo "192.168.0.212 openvpn" >> /etc/hosts
>        - [netplan, apply]
>        - [timedatectl, set-timezone, Europe/Paris]
>        - echo "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDsW6QzrRlo8E96yBqnR/yvdXl4jOdT+JqNXW5Z3o0KSEMqPfjv+iqBer6hqVGygSG4Y+YeKf9dotskGfKcKAWghwz38Nc8uOpktIANDfka66YCJ531ezbEGJRPZmqDeGCDuUUESgFE78XNvdVcPhyknMYYY26XGZ6+xh5wGJBvthe0gbHgY8LeM2N9sKYfrj7yR1e8yLJDE8C981ZYGlLUU9xYUkVVrfbc6TipkpwNgx8+Y1nSBi1hj7oBW1vQhpNcuuzCNg2ILxylvfKfwDNVbmSDeU+I10P+NTjU3drp0Q4rWuC15lUk9332Io2hWflVBn0gkqgQSwC9jHOD/+40+YEdLXp9gRonHSGVIVyaX+r+W57XZkS9CW7a8u3SYZzoVcyMuTADKikwYAjQd3uB7mAbn2nsRMUUK6NH2+s97xMI7vsNmYNsllgoXl+pM2NWQSN0iG2Te0ON6ZKdtRtOKbgDpYZJXOp4WII/PJ6577uW52eHfIay7yKAl82mpSk= root@ansible-deploy" > /root/.ssh/authorized_keys
> description: mac vlan lxd profile
> devices:
>   eth0:
>     nictype: macvlan
>     parent: eth0
>     type: nic
>   root:
>     path: /
>     pool: default
>     type: disk
> name: vpn-macvlan
> used_by:
> - /1.0/instances/openvpn

Thanks for your advice.

That blog uses the routed NIC rather than ipvlan or macvlan, so if you had previously used that blog, I was expecting to see your config using the routed NIC type.

Have you tried that?

Indeed, you are right; I had not understood your explanation.
I have just tested it, and in this case the route is gone.
I no longer have the line:
192.168.0.0/24 dev eth0 proto kernel scope link src

On the other hand, it is impossible for me to ping an IP address outside the container.

Here is my routed container configuration:

architecture: aarch64
config:
  image.architecture: arm64
  image.description: Ubuntu focal arm64 (20220323_20:07)
  image.os: Ubuntu
  image.release: focal
  image.serial: "20220323_20:07"
  image.type: squashfs
  image.variant: cloud
  user.user-data: |
    #cloud-config
    hostname: openvpn
    ssh_authorized_keys:
      - ssh-rsa key
    packages:
      - openssh-server
    write_files:
        - content: |
            network:
                version: 2
                ethernets:
                    eth0:
                        addresses:
                        - 192.168.0.213/32
                        nameservers:
                             addresses:
                             - 192.168.0.1
                        routes:
                        - to: 0.0.0.0/0
                          via: 192.168.0.1
                          on-link: true
          path : /etc/netplan/50-cloud-init.yaml
    runcmd:
       - echo "192.168.0.213 openvpn" >> /etc/hosts
       - [netplan, apply]
       - [timedatectl, set-timezone, Europe/Paris]
       - echo "ssh-rsa key" > /root/.ssh/authorized_keys
  volatile.base_image: e37abbd3eeee9c08ce7ad76ab6796d655a001fb9f56dc018a4cf2d1ca02041d3
  volatile.eth0.host_name: veth965f00c5
  volatile.eth0.hwaddr: 00:16:3e:01:1a:1a
  volatile.eth0.name: eth0
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
  volatile.uuid: 108f7bd1-c64a-4d2f-b97e-cc1482656842
devices:
  eth0:
    ipv4.address: 192.168.1.213
    nictype: routed
    parent: eth0
    type: nic
  root:
    path: /
    pool: default
    type: disk
ephemeral: false
profiles:
- vpn-routed
stateful: false
description: ""

thank you for your help

OK, we are getting somewhere. But you cannot use your default gateway of 192.168.0.1 when using routed; you need to use the special link-local gateway address that allows packets to reach the LXD host from the instance.

For containers it uses a veth pair, and for VMs it uses a TAP device. It then configures the following link-local gateway IPs on the host end, which are then set as the default gateways in the instance: 169.254.0.1 (IPv4) and fe80::1 (IPv6).

So you need to use 169.254.0.1 for the container’s default 0.0.0.0/0 route.
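Sketching the corresponding container netplan (untested; addresses taken from the config above):

```yaml
network:
    version: 2
    ethernets:
        eth0:
            addresses:
            - 192.168.0.213/32
            nameservers:
                addresses:
                - 192.168.0.1
            routes:
            - to: 0.0.0.0/0
              via: 169.254.0.1   # LXD's link-local gateway on the host end of the veth pair
              on-link: true      # 169.254.0.1 is outside our subnet, so it must be on-link
```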

See:

Indeed, you are right.
I have just modified the route to use 169.254.0.1,
but from my container I can’t ping 169.254.0.1.

From my host I can ping my container,
but from my container I can’t get out.

I read in your article that I have to activate forwarding on the host.
I did it (via sysctl), but it doesn’t change anything; I still can’t get out of my container.
Sincerely
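For reference, a persistent way to enable forwarding is a sysctl drop-in like the following (a sketch; the file name is hypothetical, and LXD’s routed NIC may additionally require per-interface forwarding on the parent interface, e.g. net.ipv4.conf.eth0.forwarding, on the host):

```
# /etc/sysctl.d/99-forwarding.conf (hypothetical file name)
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
```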

Thank you,
I can now correctly deploy my container in routed mode!
The only problem I have is that during the execution of the profile, it is not able to install openssh-server.
I see in the logs that the network interface is down for too long, which prevents the profile from installing a package directly. Surprising and annoying, because I really need it for Ansible afterwards.
Here are the logs for information:

> Err:5 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 openssh-server arm64 1:8.2p1-4ubuntu0.4
>   Temporary failure resolving 'ports.ubuntu.com'
> Err:6 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 python3-distro all 1.4.0-1
>   Temporary failure resolving 'ports.ubuntu.com'
> Err:7 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 ssh-import-id all 5.10-0ubuntu1
>   Temporary failure resolving 'ports.ubuntu.com'
> E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/w/wget/wget_1.20.3-1ubuntu2_arm64.deb  Temporary failure resolving 'ports.ubuntu.com'
> E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/t/tcp-wrappers/libwrap0_7.6.q-30_arm64.deb  Temporary failure resolving 'ports.ubuntu.com'
> E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/n/ncurses/ncurses-term_6.2-0ubuntu2_all.deb  Temporary failure resolving 'ports.ubuntu.com'
> E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/o/openssh/openssh-sftp-server_8.2p1-4ubuntu0.4_arm64.deb  Temporary failure resolving 'ports.ubuntu.com'
> E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/o/openssh/openssh-server_8.2p1-4ubuntu0.4_arm64.deb  Temporary failure resolving 'ports.ubuntu.com'
> E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/p/python-distro/python3-distro_1.4.0-1_all.deb  Temporary failure resolving 'ports.ubuntu.com'
> E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/s/ssh-import-id/ssh-import-id_5.10-0ubuntu1_all.deb  Temporary failure resolving 'ports.ubuntu.com'
> E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
> Cloud-init v. 22.1-14-g2e17a0d6-0ubuntu1~20.04.2 running 'modules:final' at Mon, 28 Mar 2022 15:06:14 +0000. Up 130.84 seconds.
> 2022-03-28 15:06:19,116 - util.py[WARNING]: Failed to install packages: ['openssh-server']
> 2022-03-28 15:06:19,123 - cc_package_update_upgrade_install.py[WARNING]: 1 failed with exceptions, re-raising the last one
> 2022-03-28 15:06:19,124 - util.py[WARNING]: Running module package-update-upgrade-install (<module 'cloudinit.config.cc_package_update_upgrade_install' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_package_update_upgrade_install.py'>) failed
> Cloud-init v. 22.1-14-g2e17a0d6-0ubuntu1~20.04.2 finished at Mon, 28 Mar 2022 15:06:20 +0000. Datasource DataSourceNoCloud [seed=/var/lib/cloud/seed/nocloud-net][dsmode=net].  Up 137.67 seconds

I really thank you for your help and your patience

Hello,
Thank you very much, I managed to get everything working with your advice.
There were some principles that I did not know well.
Thank you very much for your help and your advice.


Hey, could you paste your setup / routed profile please? I’m running into odd problems myself with OpenVPN that I am trying to decipher. Once I start the VPN, the container stops being reachable from the LAN / stops being able to ping the LAN.