Failed to start proxy device because the connect IP must be one of the instance's static IPv4 addresses

  • lxd version: 5.4
  • lxc version: 5.4
  • description: I added a proxy device to my virtual machine. If the virtual machine is connected to the bridge network lxdbr0, it starts successfully, but if it is connected to the OVN network, the virtual machine fails to start. I have already set a static IPv4 address for eth0 of the virtual machine.

(Screenshots were attached showing the output of lxc ls, lxc config device show vm-Ni4cUWUO85, lxc network list, and lxc network show net-0vSsiypl2B.)

This problem does not exist if I use bridged networks.

Right, I would expect this because OVN networks are private and are not reachable from the LXD host itself. When using the proxy device in nat=true mode, it uses DNAT rules to forward packets received by the LXD host to the connect IP, which must be reachable from the LXD host.
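For example, a nat=true proxy for a VM on a bridged network ends up looking something like this (the instance name, device name and addresses below are placeholders, not taken from your setup):

# Give the NIC a static IP first, then add the NAT proxy pointing at that IP
lxc config device override vm1 eth0 ipv4.address=10.166.11.10
lxc config device add vm1 web proxy nat=true listen=tcp:0.0.0.0:80 connect=tcp:10.166.11.10:80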


Oh, I see. Well, I have two further questions.
I have tested that the proxy device works normally when a container is connected to the OVN network, because the container's proxy device is not in nat=true mode. So:

Question 1:
Since the OVN network is private, what technology does the container use to expose services to the external network? I'm curious what the container's proxy device actually does when it is not in nat=true mode.

Question 2:
I need a dedicated network to deploy a distributed application, and I have to expose its gateway to the external network so that my users can access the software.

At present, if I connect containers to the same OVN network and create a proxy device for the gateway container, that plan works without problems.

But if I have to use virtual machines instead of containers, is there any other way to expose a virtual machine connected to the OVN network to the external network?

Thank you.

In non-NAT mode the proxy device starts a small proxy process that listens in either the host or the container network namespace and then connects to the target address in the other namespace.

In this way it can proxy packets between host and container without needing network connectivity between the host and the container. However, as it is a proxy (rather than a DNAT forward), the packets are recreated and so will appear to come from the IP address of the proxy (rather than the original client IP).
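As a rough sketch, a plain (non-NAT) proxy device for a container can be added like this; the container name, ports and addresses are just examples:

# Listens on the host, connects to port 80 on the container's loopback
lxc config device add c1 web proxy listen=tcp:0.0.0.0:8080 connect=tcp:127.0.0.1:80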

The proxy device does have support for the PROXY protocol, so if the target application understands that then it is still possible to have the original client IP transmitted as well.
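That is the proxy_protocol option on the proxy device, enabled with something like this (again, the names are placeholders):

# Send a PROXY protocol header carrying the original client address
lxc config device set c1 web proxy_protocol=true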

As the proxy process in non-NAT mode runs in userspace, it is not as performant as a kernel-level DNAT forward. But as you have seen, it is more flexible.


Because LXD cannot easily run arbitrary processes inside a VM (unlike a container), the proxy device doesn’t currently support non-NAT mode for VMs.

This is theoretically possible in the future if we use the lxd-agent running inside the VM to proxy packets over the vsock connection, but not something we support today.


Thank you for explaining things so thoroughly.
I have learned a lot from you.


I just thought of a fairly wacky way of achieving this, although it may not be suitable for you.

You could have a container connected to the OVN network with a proxy device that has its connect setting set to the internal IP of a VM running on the OVN network.

E.g.

devices:
  p1:
    connect: tcp:10.21.203.10:80
    listen: tcp:127.0.0.1:80
    type: proxy

Where 127.0.0.1:80 is the IP on the LXD host, and 10.21.203.10:80 is the internal OVN IP of a VM on the same OVN network as the container running the proxy device.

So effectively you would have a “proxy” container.
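The equivalent of that device as a single CLI command, assuming the proxy container is called proxy-c1, would be roughly:

lxc config device add proxy-c1 p1 proxy listen=tcp:127.0.0.1:80 connect=tcp:10.21.203.10:80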


This approach could solve the problems I am encountering in my current production environment. I will try it now.


Hey there, it has been 3 months since I asked this question, and you explained very clearly how proxy devices work.
I have since learned how to use the load balancer feature for OVN, but suddenly I found that I still don't understand why the OVN network is private.

About the load balancer: first, I configured the ipv4.routes attribute on my uplink network, like this:

lxc network set UPLINK ipv4.routes 172.31.30.168/29

and here is lxc network show UPLINK:

root@lxdserver3:~# lxc network show UPLINK
config:
  dns.nameservers: 8.8.8.8
  ipv4.gateway: 172.31.30.1/24
  ipv4.ovn.ranges: 172.31.30.151-172.31.30.158
  ipv4.routes: 172.31.30.168/29
  volatile.last_state.created: "false"

151-158 is my vrouter IP range (ipv4.ovn.ranges).
168-175 is my load balancer public listen-address range (ipv4.routes).
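The load balancer itself was created along these lines (the values match the lxc network load-balancer show output further down):

lxc network load-balancer create net-LiPTyTDoM5 172.31.30.169
lxc network load-balancer backend add net-LiPTyTDoM5 172.31.30.169 c1-http 10.79.122.3 22
lxc network load-balancer port add net-LiPTyTDoM5 172.31.30.169 tcp 22 c1-http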

After I created the load balancer, I found that I can access not only containers in the OVN network through the listen address, but also VMs. So it does prove that my load balancer is working.

Currently, the VM in the OVN network cannot be accessed through a proxy device (nat=true), but I can access it through the load balancer.

Question 3:
Can you help me understand in what sense the OVN network is private? I even thought that there was no way to access a virtual machine in the OVN network at all, so clearly I don't understand what private means here.

Question 4:
The host can access the VM in the OVN network through the load balancer's listen address. Can the load balancer therefore be seen as if there were two virtual routers between the uplink network and the OVN network: one whose public IP is volatile.network.ipv4.address, and another whose public IP is in the ipv4.routes range? One virtual router allows DNAT and the other does not. Is that the case?

Please show lxc network show <ovn network> and lxc network load-balancer show <ovn network> <listen-address>?

root@lxdserver1:~# lxc network show net-LiPTyTDoM5
config:
  bridge.mtu: "1442"
  ipv4.address: 10.79.122.1/24
  ipv4.nat: "true"
  ipv6.address: fd42:4c46:dbc3:e5b::1/64
  ipv6.nat: "true"
  network: UPLINK
  volatile.network.ipv4.address: 172.31.30.152
description: ""
name: net-LiPTyTDoM5
type: ovn
used_by:
- /1.0/instances/vm-ufIU7BvGR2
managed: true
status: Created
locations:
- lxdserver2
- lxdserver3
- lxdserver1

root@lxdserver1:~# lxc network load-balancer show net-LiPTyTDoM5 172.31.30.169
description: My public IP load balancer
config:
  user.mykey: foo
backends:
- name: c1-http
  description: C1 webserver
  target_port: "22"
  target_address: 10.79.122.3
ports:
- description: My web server load balancer
  protocol: tcp
  listen_port: "22"
  target_backend:
  - c1-http
listen_address: 172.31.30.169
location: ""
root@lxdserver1:~#

NAT proxy devices require that there is direct connectivity between the LXD host (where the DNAT rule is added) and the instance’s target address.

When using OVN this isn’t the case by default because OVN networks are isolated from the host behind a virtual router.

It is possible to disable NAT (ipv4.nat: "false") on the OVN network and set up routes on the LXD host towards the OVN network’s subnet via the OVN network’s virtual router external address on the uplink, such that the OVN network’s IPs are then directly addressable from the LXD host, at which point a nat=true proxy would work.
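Using the values from your output, that would be roughly (a sketch only; the route needs adding on each LXD host, or on the upstream router):

# Disable SNAT on the OVN network and route its subnet via the virtual router's uplink address
lxc network set net-LiPTyTDoM5 ipv4.nat=false
ip route add 10.79.122.0/24 via 172.31.30.152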

Yes, this is what the OVN load balancer (and equally the lxc network forward) features do:
they create listeners on the uplink network that forward traffic into the OVN network through the virtual router.
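For a single instance, a network forward towards its OVN address would be set up in much the same way, e.g. (sketch only; the listen address here is just an unused one from your ipv4.routes range):

lxc network forward create net-LiPTyTDoM5 172.31.30.170
lxc network forward port add net-LiPTyTDoM5 172.31.30.170 tcp 22 10.79.122.3 22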

It means that, by default anyway, the OVN network’s subnet (10.79.122.0/24 in this case) is behind a virtual OVN router that is connected to the uplink network. This virtual router has SNAT enabled by default (ipv4.nat: "true") and so allows egress traffic from the OVN network to make it out onto the wider external network, but all traffic appears to come from the virtual router’s IP address on the uplink (volatile.network.ipv4.address).

But if you set up a static route on the host(s) or upstream routers towards the OVN network’s subnet, going via the OVN virtual router’s address (volatile.network.ipv4.address), then the instances inside that OVN network will be directly reachable. They are not firewalled off by default.

You can also disable the SNAT setting so that egress traffic uses the instance’s own IP rather than the virtual router’s one.

I don’t understand this question.