Difference between network forward and proxy device

Whilst a proxy device can define the source protocol and the target protocol separately (and they can differ), there is only one protocol key in a network forward. I assume its value applies to both source and target.
Network forward:
tcp <-> tcp
udp <-> udp
Proxy device:
tcp <-> tcp
udp <-> udp
udp <-> tcp
tcp <-> udp
How to achieve this in a network forward:
source:53 UDP → target:53 UDP
source:53 TCP → target:53 TCP
Do I need to create two network forwards with the same listen address, or only one forward on the listen address with multiple port items as children?

Are the strings “listen_port”: “80,81,8080-8090” and “target_port”: “80,81,8080-8090” mapped positionally, comma by comma?

Correct, as it is network (DNAT) based rather than using a proxy process to translate connection types.
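
As an illustration of that DNAT approach, the rule such a forward installs boils down to roughly this shape (a hand-written sketch, not LXD's actual chains; 192.0.2.1 as the listen address and 10.0.0.10 as the instance address are placeholders of mine):

```shell
# Sketch only: LXD manages its own firewall chains, this is just the equivalent rule.
# 192.0.2.1 = example listen address, 10.0.0.10 = example instance address.
iptables -t nat -A PREROUTING -d 192.0.2.1/32 -p udp --dport 53 \
    -j DNAT --to-destination 10.0.0.10:53
```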

To create a port-based network forward, let's imagine we have an IP on the LXD host of and an instance connected to the lxdbr0 network with a static IP of

lxc network show lxdbr0
  ipv4.nat: "true"
  ipv6.nat: "true"
description: ""
name: lxdbr0
type: bridge
lxc init images:ubuntu/focal c1
lxc config device override c1 eth0 ipv4.address=
lxc start c1
lxc ls
| NAME |  STATE  |        IPV4         |                     IPV6                     |   TYPE    | SNAPSHOTS |
| c1   | RUNNING | (eth0) | fd42:ab1f:7a24:8e73:216:3eff:fe70:a44 (eth0) | CONTAINER | 0         |

Now set up the forward, listening on and forwarding to

lxc network forward create lxdbr0
lxc network forward port add lxdbr0 udp 53 53
lxc network forward port add lxdbr0 tcp 53 53

lxc network forward show lxdbr0
description: ""
config: {}
ports:
- description: ""
  protocol: udp
  listen_port: "53"
  target_port: "53"
- description: ""
  protocol: tcp
  listen_port: "53"
  target_port: "53"
location: none

You can also do multi port forwards:

lxc network forward port add lxdbr0 tcp 80,81,8080-8090

or multi ports to a different set of multi ports:

lxc network forward port add lxdbr0 tcp 80,81,8080-8090 90,91,9080-9090

or multi ports to a single port:

lxc network forward port add lxdbr0 tcp 80,81,8080-8090 80
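
To make the positional semantics of the multi-port syntax concrete, here is a rough bash sketch (my own illustration, not LXD code) that expands two port specs and pairs them up index by index:

```shell
# Sketch: expand a port spec like "80,81,8080-8090" into individual ports,
# then pair listen and target ports positionally, as the forward does.
expand_ports() {
    local part parts out=()
    IFS=',' read -ra parts <<< "$1"
    for part in "${parts[@]}"; do
        if [[ "$part" == *-* ]]; then
            # A range like "8080-8090" expands to every port in the range.
            local start="${part%-*}" end="${part#*-}"
            for ((p = start; p <= end; p++)); do out+=("$p"); done
        else
            out+=("$part")
        fi
    done
    printf '%s\n' "${out[@]}"
}

mapfile -t listen < <(expand_ports "80,81,8080-8090")
mapfile -t target < <(expand_ports "90,91,9080-9090")

for i in "${!listen[@]}"; do
    echo "${listen[$i]} -> ${target[$i]}"
done
```

So 80→90, 81→91 and 8080-8090→9080-9090 pairwise, which is why a multi-port listen spec must map either to an equally sized target spec or to a single target port.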

Both the proxy device and the network forward are very interesting features.
I am trying to find out which to apply, as in many cases they are redundant when aiming at an IP/port/protocol forward.

Whilst a network forward offers centralized organisation and overview, proxies are hidden in the container config and need to be fetched from each container.

On the other hand, as the proxy is part of the container config, on a sudden change of the container IP (new lease, or dynamic to static …) it adapts to the new situation when the target was an interface.
Also, on stopping or deleting the container, the proxy disappears and doesn't hang around as a zombie, as it would in a network forward.

Yes there is certainly some overlap, especially with bridged networks.

Network forwards can be useful with OVN networks in restricted projects where proxy devices are not allowed, as the administrator can delegate which IPs can be used for network-level forwards, and then the project user can set up their own forwards to their instances using only the allowed external IPs.

I would point out that when using proxy device in the more performant nat=true mode it will also require the instance’s NIC IP to be statically configured in LXD as it uses the same underlying mechanism (firewall dnat) to forward traffic.

In proxy, there is a bind option, how does it work?

What is recommended as target wildcard? or

The proxy docs say:

“Which side to bind on (host/instance)”

Meaning do you want the listen socket opened on the LXD host side or inside the container?
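
For example (the device names and addresses here are placeholders of my own, not from this thread), the two directions look like this:

```shell
# bind=host (the default): the socket listens on the LXD host,
# traffic is delivered to an address inside the instance.
lxc config device add c1 web proxy \
    listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:80 bind=host

# bind=instance: the socket listens inside the instance,
# traffic is delivered to an address on the host side.
lxc config device add c1 hostsvc proxy \
    listen=tcp:127.0.0.1:3128 connect=tcp:127.0.0.1:3128 bind=instance
```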

I don’t quite understand what you mean, one address you’ve specified is a wildcard address, and another is the local loopback address?

lxc config device add container port80 proxy listen=tcp: connect=tcp:
lxc config device add container port80 proxy listen=tcp: connect=tcp:

As the target wildcard, both and work fine and seem to be interchangeable.

As you mentioned already, a proxy device in NAT mode and a network forward are largely the same. Though:

  • a proxy is defined in the instance config as a device; naming is flexible and of course it only applies to that container. Not sure whether it works in a profile as well.
  • a network forward is named by the source IP only, but can handle all sorts of forwards across all containers as sets of port definitions.

A non-NAT proxy produces an lxd forkproxy main process and at least 7 to 12 child processes per idle device (I haven't tested whether this increases with ingress/egress). Having numerous containers, each with multiple proxy devices, will be quite CPU- and memory-consuming on the host.
Certainly an L7 or application-level proxy is expected to consume resources, but I'm not sure about the minimum number of child processes when idle.

A network forward, on the other hand, is, as expected for L3, very humble and performant, but it is limited to OVN and bridges, and to my surprise only managed bridges:

Error: Invalid devices: Device validation failed for "eth1": Cannot use manually specified ipv4.address when using unmanaged parent bridge
The iptables entries made by LXD are only based on the source IP and the container IP, which are available on an unmanaged bridge as well.

  1. Is there a way to make forwards work for unmanaged bridges as well?
  2. Is it necessary to have a static IP for the container? iptables can just as well forward to any bridge address (including a DHCP-pulled one).
  3. Is it possible to forward all IP traffic (all ports) from the source to the container and vice versa per network forward? This would be an alternative to ipvlan/macvlan/routed/physical.
    Something like:

-A PREROUTING -d sourceIP/32 -j DNAT --to-destination containerIP
-A POSTROUTING -s containerIP/32 -j SNAT --to-source sourceIP

These two setups are the same, they both connect to

That is not true; it produces 1 process per non-NAT proxy device. What you may be seeing in your process list is threads.

The forkproxy process is very lightweight and consumes very few resources.

The error you are seeing is because when specifying a static IP address on a container’s NIC device, LXD does not actually configure the NIC interface inside the container, but rather creates a static DHCP allocation on its local (managed) DHCP server for the NIC’s MAC address, so it always gets the same IP.

For unmanaged bridges, as the name suggests, LXD does not manage the network and so does not provide DHCP or DNS services to it, and without DHCP it cannot create static DHCP allocations, and so specifying a static IP address in LXD config would be meaningless and misleading.
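
For reference, such a static allocation ends up as a dnsmasq dhcp-host entry of this form (the MAC and IP below are made-up placeholders):

```
# Example static lease: always hand this MAC the same address.
dhcp-host=00:16:3e:aa:bb:cc,10.0.0.10
```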

  1. Currently network forwarding does not work for unmanaged bridges because there is no “network” concept in the database for an unmanaged parent bridge (as by definition they are unmanaged) and so there is nowhere to store the config for it.
  2. The instance NIC requires a static endpoint IP, otherwise if it changes the forward may go elsewhere.
  3. Network forwards can allow all inbound traffic to be forwarded, and for outbound traffic on managed bridges you can set ipv4.nat.address on the managed bridge to specify the SNAT address.
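
For the outbound side of item 3, that might look like this (a sketch; the address is a placeholder, and ipv4.nat.address pins the SNAT source for the whole bridge, not per instance):

```shell
# Use 192.0.2.1 (example address) as the SNAT source for all outbound lxdbr0 traffic.
lxc network set lxdbr0 ipv4.nat.address=192.0.2.1
```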

However to actually have the container ‘own’ the IP you can use routed in many scenarios.

That’s right, it might indeed need a managed network. The reason I still don’t use a managed bridge is partly the interference with my own iptables rules, and partly the complex way of dealing with the dnsmasq config. Some options are implemented, but they get passed on the same line as -c when dnsmasq is started.
A lot of other central dnsmasq config options (files, hosts, resolve, logs, leases) are missing, which would be an easier way to centrally manage hosts, leases, static leases etc., rather than passing them as config to each single container.

I didn’t really understand this. How do I achieve passing all traffic of a source IP to a target IP (container) through network forwards?

Source based forwarding is not available at the moment.
But as the forward in bridged network is a firewall DNAT entry, you can still add additional rules to prohibit certain traffic from using the forward.

Any suggestion on the preferred way to forward ports from both a public IPv4 and a public IPv6 address to a static IP 10.x.x.x, on the lxdbr0, of an LXD container running the conferencing app Jitsi? The ports it says it needs are:
| Purpose | Protocol | Port | Service | Access |
| --- | --- | --- | --- | --- |
| Media Traffic | UDP | 10000 | JVB | Public |
| Media Traffic in Restricted Firewalls | TCP | 4443 | JVB | Public |
| For XMPP components (e.g. Jicofo)* | TCP | 5347 | Prosody | Private |
| For external XMPP clients (e.g. JVB, Jibri)** | TCP | 5222 | Prosody | Private |

You cannot forward IPv6 packets to IPv4 addresses using nat=true mode, so you would need to use a proxy in non-NAT mode. Or add an IPv6 address to the container and then set up multiple proxy devices on the container using nat=true mode, for the IPv4 and IPv6 addresses respectively.
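
A non-NAT proxy crossing protocol versions might be sketched like this (the addresses and the device name are placeholders of mine):

```shell
# Listens on the host's public IPv6 address and connects to the container's IPv4 address.
# This works because forkproxy terminates the connection rather than rewriting packets.
lxc config device add c1 jvb6 proxy \
    listen=tcp:[2001:db8::1]:4443 connect=tcp:10.0.0.10:4443
```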

Ah OK. Should’ve been more specific, not trying to cross over IP protocol versions 4 and 6. What about, four network forwards, one for each of the 4 ports, listening on host public IPv6 address, to the Jitsi app LXD container’s unique local IPv6 address FDxx? Plus, four network forwards, one for each of the 4 ports, from host public IPv4 address, to Jitsi app LXD container’s static IPv4 address 10.x.x.x ?

Yes, using network forwards will work too. This uses the same underlying firewall DNAT rules as LXD’s proxy device does with nat=true set. When using either approach it is necessary to set up a static internal IP in the instance so the firewall rules have somewhere static to forward to.

When using a proxy in nat=true mode it will require that the instance’s NIC config has a static ipv{n}.address set, whereas network-level forwards will just expect you to provide IPs for an instance.

This difference is because the proxy device allows the firewall forward rules to follow the instance (if its stopped/started/moved), whereas network forward rules remain linked to the network and host.