Hi, I’m asking this as a separate question, but it relates in various ways to the network ACLs thread.
There are a lot of options for networking with LXD, but all have various limitations and it’s not clear to me what the recommended default is for new installs. I think I have read and absorbed the videos on the YouTube channel, the LXD documentation website and the blog series at http://stgraber.org on “creating an inexpensive cluster”. However, I’m struggling to condense all of this and make a decision…
What I have are two sites, one with a /24 IPv4 allocation, the other with a /28 allocation.
I’m starting with individual machines running LXD, and I’m content to develop a solution to back up and move instances between individual host machines. However, the direction of travel is towards a cluster plus some extra standalone machines in each location.
The /24 site broadly faces a switch, and multiple machines can simply start answering requests on any IP to claim it. The /28 is more restrictive and has a firewall in front, currently doing proxy ARP to push individual IPs to individual servers. Probably only 2–4 IPs are available for use; they are currently contiguous, but might not be in the future (proxy ARP avoids wasting IP space on subnetting).
The host machine has a bond over multiple NICs; that bond is part of a bridge (br0), which is the main interface the physical machine uses for internet access. (There is another bond which makes up a private network between machines as well, but that’s not relevant for now.)
I will be setting up (only) around 10–20 containers, mostly running very simple services, e.g. exposing a few ports (mail/HTTP and similar), and I would prefer to limit outbound traffic to a few select services as well. While some could survive with forwarded/proxied traffic, for the machines that I want to give a public IPv4 address I’ve experimented with “bridged”, “routed” and “ipvlan” devices. None seem perfect (or, more likely, I don’t understand how to use them?):
- Bridged means the networking is configured inside the container. I would prefer the container not to have the option to choose its own IP addresses, and for this to be set on the host if possible. It also seems more complex to prevent machines having unrestricted access to each other, though I’m not so worried about that.
- Routed seems better: I can create a device in the instance config which sets up the public IP, and networking between containers is easily controlled.
- ipvlan didn’t seem to offer any benefits to me over routed?
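For reference, the routed setup I’ve been testing looks roughly like this (the container name, parent interface and IP are placeholders for my actual values):

```shell
# Give container c1 a public IP via a routed NIC. The address is set
# host-side in the device config, so the container can't pick its own IP.
lxc config device add c1 eth0 nic \
    nictype=routed \
    parent=br0 \
    ipv4.address=192.0.2.10
```

The attraction for me is exactly that `ipv4.address` lives in the instance/device config on the host rather than inside the container.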
I liked the idea of using the ACLs feature for networking. However, I don’t see that I can use ACLs for outbound traffic in any of the above cases? This does seem to massively reduce the utility of the feature; in fact I’m not totally sure what you can use it for apart from controlling inter-instance traffic. I guess it could be used on an internal managed bridge which NATs to the outside world? What do people actually use it for?
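To make the managed-bridge case concrete, this is the sort of thing I’m imagining (ACL name, ports and bridge name are made up for illustration):

```shell
# Create an ACL that only allows outbound HTTP/HTTPS from instances...
lxc network acl create web-egress
lxc network acl rule add web-egress egress \
    action=allow protocol=tcp destination_port="80,443"

# ...and apply it to a managed NAT bridge so attached instances inherit it.
lxc network set lxdbr0 security.acls=web-egress
```

But as far as I can tell this only works because lxdbr0 is a managed network, which doesn’t fit my bridged/routed-onto-br0 setup.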
However, I get the impression that OVN networking can make use of ACLs for outbound networking control? There are a lot of docs on it, but I’m struggling to grok the big picture of what it really is and how devices work. Q: Is OVN the solution to what I’m looking for here? Am I right in understanding it’s something like the simple internal NAT bridge, except that I can also route external public IPv4 traffic into it? I’m not quite getting the big picture, to be honest…
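If I’ve understood the docs, the shape of it would be something like the following sketch (assuming an existing uplink network and a previously created ACL; all names here are placeholders, and I may well have the prerequisites wrong):

```shell
# Create an OVN network that uses an existing managed network as its uplink.
lxc network create ovn0 --type=ovn network=UPLINK

# ACLs can then be applied to the OVN network, covering egress too.
lxc network set ovn0 security.acls=my-acl
```

Is that roughly the model, with public IPv4 traffic then routed in via the uplink?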
For now I’m just planning to do the same as I did with my old linux-vserver installs and run iptables rules on the host. However, this is mildly inconvenient in that it’s out of sync with creating the container instances, and it’s not so simple to apply traffic profiles to classes of servers (sure, I can create chains with traffic profiles).
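By “chains with traffic profiles” I mean something like this host-side sketch (chain name and addresses are hypothetical):

```shell
# A reusable "web server" traffic profile as an iptables chain...
iptables -N WEB_PROFILE
iptables -A WEB_PROFILE -p tcp --dport 80 -j ACCEPT
iptables -A WEB_PROFILE -p tcp --dport 443 -j ACCEPT
iptables -A WEB_PROFILE -j DROP

# ...applied per container by matching its routed public IP in FORWARD.
iptables -A FORWARD -d 192.0.2.10 -j WEB_PROFILE
```

It works, but it lives entirely outside LXD, which is the part that bothers me.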
It seems like a useful feature would be the ability, for unmanaged networks, to apply ACLs to the devices in the instances (feature request?). So, e.g., in the case of a bridged or routed device in an instance attached to an unmanaged device on the host, to be able to apply profiles/config to the instance device. This seems possible with OVN, hence wondering whether the correct solution is to persevere with understanding that and migrate everything to it?
If the answer is OVN, can I use it across both a cluster and a few independent machines? Is it a good solution overall?
Alternatively, would I be better to look at “forwards” for the entire solution? (None of the instances are high traffic, so load isn’t a concern.) What I’m unclear about with forwards is how they interact with the outbound IP and NAT. I don’t see anything in the documentation on this, so I presume a forward doesn’t affect outbound traffic from the instance? With Let’s Encrypt (or FTP) you have challenges matching outbound and incoming traffic, and instances work better if they are aware of their external IP, etc. So I don’t think forwarding will work as a general solution, as I’d still need to handle my outbound traffic through some other mechanism?
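For completeness, the forward setup I have in mind would look something like this (network name and addresses are placeholders); as far as I can tell it only does inbound DNAT and says nothing about the source address of outbound traffic:

```shell
# Claim a public listen address on a managed network...
lxc network forward create lxdbr0 192.0.2.50

# ...and DNAT public port 80 to the container's private address,
# rewriting the port to 3080 on the way in.
lxc network forward port add lxdbr0 192.0.2.50 tcp 80 10.0.0.10 3080
```

The port rewrite (80 → 3080) at least seems straightforward here, without hand-written DNAT rules.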
So far “routed” appears best for my needs (mainly because I can configure the IP address outside the container). Is there another option I should research, though? Any hidden issues? Can I combine forwards and “routed”? E.g., to avoid writing iptables DNAT port-rewriting rules (port 80 → 3080), can I write these using a forward and push the IP via “routed”?
Why do others go with “bridged”? Or, in fact, what do people generally go with, and why?