Difference between network forward and proxy device

It seems both network forward and the proxy device can achieve the same thing: forwarding a source IP/port/protocol to a target container/port.
What exactly is the use case for each?
Whilst the proxy device is described sufficiently, network forward lacks a CLI command pattern or examples.

The design doc may help illuminate the rationale behind this feature:

The proxy device is instance-specific and can forward connections in either direction, between protocols, and, when operating in non-NAT mode, doesn’t require a network connection between host and instance.

A forward, on the other hand, is more like a proxy device operating in NAT mode. But it’s defined at the network level rather than the instance level, and it allows sharing an IP between multiple instances because it can forward different ports to different IPs.

That was helpful. Just to find the CLI syntax I needed to skip to the end of it, as a lot of options were discussed, dropped or modified.
The CLI seems to take a lot of parameters, so editing the YAML or using the API would be easier for that.
Is network forward translating the input into iptables rules?
I’ve tried the proxy device. It worked fine. I just can’t determine where on the system those rules end up, as iptables hasn’t shown any additional entries.

You can use lxc network forward edit <network> <listen_address> if you prefer.
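For example, an entire forward can be piped in as YAML (the addresses here are hypothetical placeholders; adjust them to your environment):

```shell
# Hypothetical: 192.0.2.10 is the listen address on the host,
# 10.0.3.2 the instance's static IP on lxdbr0.
lxc network forward edit lxdbr0 192.0.2.10 <<'EOF'
description: "DNS forward"
config: {}
ports:
- protocol: udp
  listen_port: "53"
  target_address: 10.0.3.2
  target_port: "53"
- protocol: tcp
  listen_port: "53"
  target_address: 10.0.3.2
  target_port: "53"
EOF
```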

For bridge networks it adds iptables or nftables rules, depending on the firewall driver being used (lxc info | grep -i firewall):

On OVN networks it uses OVN’s built-in forwarding feature.
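Note that DNAT rules do not show up in iptables’ default filter table, which may be why no extra entries were visible. To inspect them (the exact table and chain names vary by LXD version and firewall driver):

```shell
# nftables driver: LXD keeps its rules in its own tables.
nft list ruleset | grep -B2 -A2 -i dnat

# xtables driver: forwards are DNAT rules in the "nat" table's
# PREROUTING chain, not in the default "filter" table.
iptables -t nat -vnL PREROUTING
```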

Whilst the proxy device can define the source protocol and target protocol separately (which can differ), there is only one protocol key in network forward. I guess its value applies to both source and target.
Network forward:
tcp <-> tcp
udp <-> udp
Proxy device:
tcp <-> tcp
udp <-> udp
udp <-> tcp
tcp <-> udp
How to achieve this in network forward:
source:53 UDP → target:53 UDP
source:53 TCP → target:53 TCP
Do I need to create two network forwards with the same listen address, or only one forward per listen address with multiple ports items as children?

Are the strings “listen_port”: “80,81,8080-8090” and “target_port”: “80,81,8080-8090” mapped by comma position?

Correct, as it is network (DNAT) based rather than using a proxy process to translate connection types.
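The positional pairing can be sketched in bash: expand each range, then match the two lists element by element (an illustration of the behaviour, not LXD’s actual implementation):

```shell
# Expand a port spec like "80,81,8080-8090" into one port per line.
expand_ports() {
  local part
  IFS=',' read -ra parts <<< "$1"
  for part in "${parts[@]}"; do
    if [[ $part == *-* ]]; then
      seq "${part%-*}" "${part#*-}"   # expand a range like "8080-8090"
    else
      echo "$part"                    # single port
    fi
  done
}

# Pair listen ports with target ports by position:
paste -d: <(expand_ports "80,81,8080-8090") <(expand_ports "90,91,9080-9090")
```

So 80 maps to 90, 81 to 91, and 8080-8090 to 9080-9090; both specs must expand to the same number of ports (or the target must be a single port).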

To create a port-based network forward, let’s imagine we have an IP on the LXD host of and an instance connected to the lxdbr0 network with a static IP of

lxc network show lxdbr0
config:
  ipv4.nat: "true"
  ipv6.nat: "true"
description: ""
name: lxdbr0
type: bridge
lxc init images:ubuntu/focal c1
lxc config device override c1 eth0 ipv4.address=
lxc start c1
lxc ls
| NAME |  STATE  |        IPV4         |                     IPV6                     |   TYPE    | SNAPSHOTS |
| c1   | RUNNING | (eth0) | fd42:ab1f:7a24:8e73:216:3eff:fe70:a44 (eth0) | CONTAINER | 0         |

Now set up the forward listening on forwarding to

lxc network forward create lxdbr0
lxc network forward port add lxdbr0 udp 53 53
lxc network forward port add lxdbr0 tcp 53 53

lxc network forward show lxdbr0
description: ""
config: {}
ports:
- description: ""
  protocol: udp
  listen_port: "53"
  target_port: "53"
- description: ""
  protocol: tcp
  listen_port: "53"
  target_port: "53"
location: none

You can also do multi port forwards:

lxc network forward port add lxdbr0 tcp 80,81,8080-8090

or multi ports to a different set of multi ports:

lxc network forward port add lxdbr0 tcp 80,81,8080-8090 90,91,9080-9090

or multi ports to a single port:

lxc network forward port add lxdbr0 tcp 80,81,8080-8090 80

Both the proxy device and network forward are very interesting features.
I am trying to find out which to apply, as in many cases they are redundant when aiming for an IP/port/protocol forward.

Whilst network forward offers a centralized organisation and overview, proxies are hidden in container config and need to be fetched from containers.

On the other hand, as the proxy is part of the container config, after a sudden change of the container IP (new lease, or dynamic to static …) it adapts to the new situation when the target is an interface.
Also, when stopping or deleting the container, the proxy disappears and doesn’t hang around as a zombie the way a network forward would.

Yes there is certainly some overlap, especially with bridged networks.

Network forwards can be useful with OVN networks in restricted projects where proxy devices are not allowed, as the administrator can delegate which IPs can be used for network-level forwards, and then the project user can set up their own forwards to their instances using only the allowed external IPs.

I would point out that when using a proxy device in the more performant nat=true mode, it also requires the instance’s NIC IP to be statically configured in LXD, as it uses the same underlying mechanism (firewall DNAT) to forward traffic.
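In that mode the setup would look something like this (addresses and device names are hypothetical):

```shell
# nat=true needs the NIC's IP statically configured first:
lxc config device override c1 eth0 ipv4.address=10.0.3.2

# The wildcard connect address resolves to the instance's static IP:
lxc config device add c1 fwd443 proxy nat=true \
    listen=tcp:192.0.2.10:443 connect=tcp:0.0.0.0:443
```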

In proxy, there is a bind option, how does it work?

What is recommended as target wildcard? or

The proxy docs say:

“Which side to bind on (host/instance)”

Meaning do you want the listen socket opened on the LXD host side or inside the container?
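For example (device names and addresses are hypothetical):

```shell
# bind=host (the default): the listen socket opens on the LXD host
# and traffic is proxied into the instance.
lxc config device add c1 fwd80 proxy \
    listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:80 bind=host

# bind=instance: the listen socket opens inside the instance and
# traffic is proxied out to the host side.
lxc config device add c1 rev80 proxy \
    listen=tcp:0.0.0.0:8080 connect=tcp:127.0.0.1:8080 bind=instance
```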

I don’t quite understand what you mean, one address you’ve specified is a wildcard address, and another is the local loopback address?

lxc config device add container port80 proxy listen=tcp: connect=tcp:
lxc config device add container port80 proxy listen=tcp: connect=tcp:

As target wildcard, both and work fine and seem to be interchangeable.

As you mentioned already, a proxy device in NAT mode and a network forward are largely the same. Though:

  • proxy is defined in the instance config as a device; its naming is flexible and of course it only applies to that container. Not sure whether it works in a profile as well.
  • a network forward is named by the listen IP only, but can handle all sorts of forwards across all containers as sets of port definitions.

A non-NAT proxy produces an lxd forkproxy main process and at least 7 to 12 child processes per idle device (I haven’t tested whether this increases with ingress/egress). Having numerous containers, each with multiple proxy devices, will be quite CPU- and memory-consuming on the host.
Certainly an L7 or application-level proxy is expected to consume resources, but I’m not sure about the minimum number of child processes when idle.

Network forward, on the other hand, is as expected for L3 very humble and performant, just very limited: to OVN and bridges, and to my surprise only managed bridges:

Error: Invalid devices: Device validation failed for "eth1": Cannot use manually specified ipv4.address when using unmanaged parent bridge
The iptables entries made by LXD are based only on the source IP and container IP, which are available on an unmanaged bridge as well.

  1. Is there a way to make forwards work for unmanaged bridges as well?
  2. Is it necessary to have a static IP for the container? iptables can just as well forward to any bridge address (including DHCP-assigned ones).
  3. Is it possible to forward all IP traffic (all ports) from source to container and vice versa per network forward? This would be an alternative to ipvlan/macvlan/routed/physical.
    Something like:

-A PREROUTING -d sourceIP/32 -j DNAT --to-destination containerIP
-A POSTROUTING -s containerIP/32 -j SNAT --to-source sourceIP

These two setups are the same, they both connect to

That is not true, it produces 1 process per non-nat proxy device. What you may be seeing in your process list is threads.

The forkproxy process is very lightweight and consumes few resources.

The error you are seeing is because when specifying a static IP address on a container’s NIC device, LXD does not actually configure the NIC interface inside the container, but rather creates a static DHCP allocation on its local (managed) DHCP server for the NIC’s MAC address, so it always gets the same IP.

For unmanaged bridges, as the name suggests, LXD does not manage the network and so does not provide DHCP or DNS services to it, and without DHCP it cannot create static DHCP allocations, and so specifying a static IP address in LXD config would be meaningless and misleading.

  1. Currently network forwarding does not work for unmanaged bridges because there is no “network” concept in the database for an unmanaged parent bridge (as by definition they are unmanaged) and so there is nowhere to store the config for it.
  2. The instance NIC requires a static endpoint IP otherwise if it changes the forward may go elsewhere.
  3. Network forwards can allow all inbound traffic to be forwarded, and for outbound traffic on managed bridges you can set ipv4.nat.address on the managed bridge to specify the SNAT address for outbound traffic.
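That SNAT setting is a single network config key (the address here is a hypothetical placeholder):

```shell
# Use 192.0.2.10 as the source address for outbound NAT traffic
# leaving lxdbr0:
lxc network set lxdbr0 ipv4.nat.address=192.0.2.10
```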

However to actually have the container ‘own’ the IP you can use routed in many scenarios.

That’s right. Indeed it might need a managed network. The reasons I don’t use a managed bridge are its partial interference with my own iptables rules, and the complex way of dealing with the dnsmasq config. Some options are implemented, but get passed on the same line as -c when starting dnsmasq.
A lot of other central dnsmasq config options/files (hosts/resolv/logs/leases) are missing, which would be an easier way to centrally manage hosts, leases, static leases etc. rather than passing them as config to a single container.

I didn’t really understand this. How do I achieve passing all traffic of a source IP to a target IP (container) through network forwards?

Source-based forwarding is not available at the moment.
But as the forward in a bridged network is a firewall DNAT entry, you can still add additional rules to prohibit certain traffic from using the forward.
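For instance, with the xtables driver you could insert a rule that is evaluated before LXD’s DNAT entry (addresses are hypothetical; use an equivalent nft rule with the nftables driver):

```shell
# Drop traffic from 203.0.113.0/24 aimed at the forward's listen
# address before the nat PREROUTING DNAT rule can match it.
iptables -t raw -I PREROUTING -s 203.0.113.0/24 -d 192.0.2.10 -j DROP
```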