Routed network — IP address in use on parent network

Hi,

I have a dedicated server with a public IP (e.g. 1.2.3.174), and have the range 1.2.3.174-179 allocated to me.

I’m using the routed method to assign external IPs to each container:

lxc profile device add default eth0 nic nictype=routed parent=enp34s0
lxc config device override mta eth0 ipv4.address=1.2.3.175

When starting the container, I get:

Error: Failed to start device "eth0": IP address "1.2.3.175" in use on parent network "enp34s0"

The address 1.2.3.175 is not actually in use anywhere, but I suspect the error appears because it falls in the same /24 as the host address. ip a shows:

    inet 1.2.3.174/24 brd 1.2.3.255 scope global enp34s0

How could I work around this? Thanks in advance!

No, it's because something is responding to an ARP solicitation for that address on the parent network (which suggests there is another device on the network that already has that IP).

You can set ipv4.neighbor_probe=false on the routed NIC; see the routed NIC device documentation for the details.

However, if there is another device also claiming that IP, then when LXD adds a proxy ARP entry onto the parent interface it may cause connectivity problems for both that device and your container, as they will be competing for the same address.
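If you want to check this yourself from the host, an ARP probe for the address should show whether anything answers (this assumes the arping utility from iputils is available; the interface and address are the ones from your post):

arping -I enp34s0 -c 3 1.2.3.175

If something replies, another machine on the provider's network holds that address, and disabling the probe will only hide the error rather than resolve the conflict.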

Thanks! Unfortunately, this never worked out.

lxc profile show default

devices:
  eth0:
    ipv4.neighbor_probe: "false"
    nictype: routed
    parent: enp34s0
    type: nic

lxc start c1
Error: Failed to start device "eth0": IP address "1.2.3.177" in use on parent network "enp34s0"

$ ip a
2: enp34s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 2c:f0:5d:a6:99:8a brd ff:ff:ff:ff:ff:ff
inet 1.2.3.174/24 brd 1.2.3.255 scope global enp34s0

Would you have any other suggestions on how to get routed working?

What does lxc config show <instance> --expanded show for the affected instance?

Thanks again for the reply.

# lxc config show c1 --expanded

config:
  image.architecture: amd64
  image.description: ubuntu 18.04 LTS amd64 (release) (20230112)
  image.label: release
  image.os: ubuntu
  image.release: bionic
  image.serial: "20230112"
  image.type: squashfs
  image.version: "18.04"
  security.nesting: "true"
  volatile.apply_template: create
  volatile.base_image: d78ff45458cdb18817988a9abb8c9ce7ebb81b87282cc0aebf66872321d77dba
  volatile.cloud-init.instance-id: 9b3e384f-29e8-421a-a8bc-583fcb029b02
  volatile.eth0.hwaddr: 00:16:3e:ef:86:fa
  volatile.eth0.name: eth0
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.uuid: 48c377b0-6e11-432e-9bee-a4bb0017d973
devices:
  eth0:
    ipv4.address: 1.2.3.177
    nictype: routed
    parent: enp34s0
    type: nic
  root:
    path: /
    pool: default
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

(I changed the IP to be prefixed with 1.2.3 instead of the actual external IP)

Ah OK, I think I know what is happening, but to confirm, can you show the output of lxc config show c1 (without the --expanded bit)?

I expect what's happened is that you've defined the routed eth0 in the profile with ipv4.neighbor_probe: "false", but your actual container has its own routed eth0 device with different settings (it needs its own device because it needs ipv4.address: 1.2.3.177 set), and the per-instance device takes precedence.

So you need to apply the ipv4.neighbor_probe directly to the instance using:

lxc config device set c1 eth0 ipv4.neighbor_probe=false

Effectively the eth0 in the profile isn’t being used and could be removed entirely.
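If you want to double-check that the setting ended up on the instance device rather than only on the profile, something like the following should print it back (lxc config device get reads a single device key; the value should also appear under eth0 in lxc config show c1 --expanded):

lxc config device get c1 eth0 ipv4.neighbor_probe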

I never got it to work on that network, and ended up just not using that particular provider. On other networks, routed works as expected.

Thank you for all your help, though!