Macvlan vs network bridge

Bridged networking (lxdbr0 by default) seems to be the most common way to make a service running in a container accessible to the outside world.

In a different context I picked up a statement implying that macvlan is the preferable solution:

I tend to avoid the default bridge mode for containers. If you do use bridge mode, you can make services in the containers reachable by running a proxy on the host, setting up port forwarding, and the like.
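For illustration, host-side forwarding of the kind described above can be done with an LXD proxy device. This is only a sketch; the container name `c1` and the port numbers are placeholders:

```shell
# Forward TCP port 8080 on the host to port 80 inside container c1.
# "c1", "web", and the ports are example values; adjust to your setup.
lxc config device add c1 web proxy \
    listen=tcp:0.0.0.0:8080 \
    connect=tcp:127.0.0.1:80
```

The proxy process runs on the host and relays traffic into the container, so it works regardless of the bridge's addressing.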

For container networking I personally prefer macvlan, but it’s hard to find good information about it.
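For reference, switching a container to macvlan is a one-line device change in LXD. A minimal sketch, assuming a container named `c1` and a host physical interface `eth0` (both placeholders):

```shell
# Give container c1 a macvlan NIC whose parent is the host's
# physical interface eth0. The container then appears on the
# physical network with its own MAC address.
lxc config device add c1 eth0 nic nictype=macvlan parent=eth0
```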

That comment implies (at least as I understand it) that macvlan is somewhat ‘more secure’ by design. I’m just curious whether anybody here shares that belief, or what the general take on macvlan vs bridged networking is.

There is no real security difference between the two: root on the host has kernel access, so you can pretty much always assume it will have access to your container’s network traffic one way or another.

The main points to consider with macvlan are:

  • macvlan tends to be faster than bridging as it skips a whole bunch of kernel code
  • when using macvlan, the host will not be able to communicate with the container over the macvlan interface
  • the number of macvlan devices may be restricted based on hardware limitations of your physical NIC
  • it can be very difficult to debug macvlan-related issues, as it may not behave the same across kernel drivers and physical cards
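On the host-to-container limitation in particular, a commonly cited workaround is to give the host its own macvlan interface on the same parent, so host and container talk macvlan-to-macvlan. A sketch using iproute2, with interface names and the address as placeholder assumptions:

```shell
# Create a macvlan interface on the host, attached to the same
# parent (eth0) the container's macvlan uses, in bridge mode so
# sibling macvlan interfaces can reach each other.
ip link add macvlan0 link eth0 type macvlan mode bridge

# Give it an address on the LAN (placeholder) and bring it up.
ip addr add 192.168.1.250/24 dev macvlan0
ip link set macvlan0 up
```

Traffic between the host's `macvlan0` and the container then stays within the macvlan driver rather than going out through the physical NIC.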

If none of those limitations affect your setup, then go for it. If you need more flexibility, debuggability or a more reproducible environment, bridging may be preferable (with Open vSwitch being one way to optimize performance there).
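As a sketch of the Open vSwitch option mentioned above: LXD-managed bridges can be backed by Open vSwitch instead of the native Linux bridge via the `bridge.driver` key. The network name `ovsbr0` is a placeholder:

```shell
# Create an LXD-managed bridge backed by Open vSwitch
# rather than the native Linux bridge driver.
lxc network create ovsbr0 bridge.driver=openvswitch
```

This assumes Open vSwitch is installed and running on the host.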


Is it possible to set up an internal bridge lxdbr1 on the host and have the container communicate through it via another card, eth1? Something like an internal LAN?
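For what it's worth, a setup like that could be sketched as follows. This is untested and the names, addresses, and container `c1` are all placeholder assumptions:

```shell
# Create a second, internal bridge with no NAT, acting as a
# private LAN between the host and its containers.
lxc network create lxdbr1 \
    ipv4.address=10.10.10.1/24 \
    ipv4.nat=false \
    ipv6.address=none

# Attach it to container c1 as a second NIC named eth1.
lxc network attach lxdbr1 c1 eth1 eth1
```

The host can then reach the container over the 10.10.10.0/24 network via lxdbr1, independent of the container's primary interface.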