Failed to start proxy device: connect IP must be one of the instance's static IPv4 addresses

  • lxd version: 5.4
  • lxc version: 5.4
  • description: I added a proxy device to my virtual machine. If the virtual machine is connected to the bridge network lxdbr0, it starts successfully, but if it is connected to the OVN network, startup fails. I have already set a static IPv4 address on eth0 of the virtual machine. The device was added roughly like the example below.
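
Something along these lines (the ports and the connect address below are placeholders, not my exact values):

# hypothetical proxy device added to the VM, forwarding a host port to the VM's address
lxc config device add vm-Ni4cUWUO85 web proxy \
    listen=tcp:0.0.0.0:8080 \
    connect=tcp:10.98.30.5:80 \
    nat=true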

Here are the results of lxc ls, lxc config device show vm-Ni4cUWUO85, lxc network list and lxc network show net-0vSsiypl2B (screenshots):

This problem does not exist if I use bridged networks.

Right, I would expect this because the OVN networks are private and are not reachable from the LXD host itself. And when using the proxy device in nat=true mode it uses DNAT rules to forward packets received by the LXD host to the connect IP, which must be reachable from the LXD host.
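
For comparison, a rough sketch of the case that does work, i.e. a VM on a host-reachable bridge like lxdbr0 with a static address pinned on its NIC (the addresses and ports below are only illustrative):

# pin a static IPv4 address on the VM's bridged NIC
# (assuming eth0 is defined directly on the instance; use "lxc config device override" if it comes from a profile)
lxc config device set vm-Ni4cUWUO85 eth0 ipv4.address=10.125.231.10

# nat=true installs DNAT rules forwarding packets received by the host to that static, host-reachable address
lxc config device add vm-Ni4cUWUO85 web proxy \
    listen=tcp:0.0.0.0:8080 \
    connect=tcp:10.125.231.10:80 \
    nat=true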

Oh, I see. Well, I have two further questions.
I have tested that the proxy device works normally when a container is connected to the OVN network, because the proxy device of a container is not in nat=true mode. So:

Question1:
Since the OVN network is private, what technology does the container use to expose services to the external network? I'm curious what the proxy device of the container actually does (when the proxy device is not in nat=true mode).

Question2:
I need a dedicated network to deploy a piece of distributed software, and I have to expose its gateway to the external network so that my users can access the software.

At present, if I use containers joined to the same OVN network and create one proxy device for the gateway container, that plan works without any problems.

But if I have to use virtual machines instead of containers, is there any other way to expose a virtual machine connected to the OVN network to the external network?

Thank you.

In non-NAT mode the proxy device starts a small proxy process that listens in the host (or container) network namespace and then connects to the target address in the other (container or host) network namespace.

In this way it can proxy packets between host and container without needing network connectivity between the host and the container. However, as it is a proxy (rather than a DNAT forward), the packets are recreated, and so connections will appear to come from the IP address of the proxy rather than from the original client IP.

The proxy device does have support for the PROXY protocol, so if the target application understands that then it is still possible to have the original client IP transmitted as well.
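
For example (a sketch only; the container name c1 is made up, and the listening application must have PROXY protocol support enabled for this to work):

# non-NAT proxy with the HAProxy PROXY protocol header enabled,
# so the backend application can still see the original client IP
lxc config device add c1 web proxy \
    listen=tcp:0.0.0.0:80 \
    connect=tcp:127.0.0.1:80 \
    proxy_protocol=true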

As the proxy process in non-NAT mode runs in userspace, it is not as performant as a kernel-level DNAT forward. But, as you have seen, it is more flexible.

Because LXD cannot easily run arbitrary processes inside a VM (unlike a container), the proxy device doesn’t currently support non-NAT mode for VMs.

This is theoretically possible in the future if we use the lxd-agent running inside the VM to proxy packets over the vsock connection, but not something we support today.

Thank you for explaining everything so thoroughly.
I have learned a lot from you.

I just thought of a fairly wacky way of achieving this, although it may not be suitable for you.

You could have a container connected to the OVN network with a proxy device that has its connect setting set to the internal IP of a VM running on the OVN network.

E.g.

devices:
  p1:
    connect: tcp:10.21.203.10:80
    listen: tcp:127.0.0.1:80
    type: proxy

Where tcp:127.0.0.1:80 is the listen address on the LXD host, and 10.21.203.10:80 is the internal OVN IP of a VM on the same OVN network as the container running the proxy device.

So effectively you would have a “proxy” container.
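
For completeness, that device could be added from the CLI with something like the following (the container name proxy-container is just an example):

# "proxy" container attached to the OVN network, relaying a port on the LXD host
# to the internal OVN address of the VM
lxc config device add proxy-container p1 proxy \
    listen=tcp:127.0.0.1:80 \
    connect=tcp:10.21.203.10:80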

This approach should solve the problem I’m encountering in my current production environment. I will try it now.
