I bought this computer to serve both as a router and as a server running LXD, but it’ll take a while to arrive.
The main issue now is the planning, and I see two options.
The first option is to run OpenWRT on the system and run lxd inside it. This would probably be the easiest approach if lxd runs inside OpenWRT. I didn’t find anyone doing this, so I wanted to know if this is possible. Does anyone know?
The second option is to run Ubuntu Server on the system, run OpenWRT as an LXD container, and manage the connections from inside the container. This has its advantages as well, but seems more complicated to manage. How can I let the OpenWRT container manage the port connections while the host Ubuntu server still has network access, so it can serve the other containers?
I guess there’s also a third option, which is to get a recommendation for an alternative I have not thought about. Can someone recommend a better alternative that lets me run LXD and also administer the router through an easy web interface?
I’m not particularly familiar with OpenWRT, so I’m not sure what you mean by “manage the port connections”. But in principle you could use a physical NIC device to move the physical ports into the instance; that way they should be fully accessible to the OpenWRT container and won’t appear on the host.
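For reference, attaching a physical port to an instance is a one-liner; something like the following sketch (the instance name `openwrt`, the device name `lan1`, and the interface `enp2s0` are placeholders for your setup):

```shell
# Move a physical NIC into the "openwrt" instance (names are examples):
lxc config device add openwrt lan1 nic nictype=physical parent=enp2s0 name=eth1
```

While the instance is running, `enp2s0` disappears from the host and shows up inside the container as `eth1`; it returns to the host when the instance stops.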
One port receives input from the WAN network. This port would probably need to be shared between the server and the OpenWRT container, or otherwise the server would have no internet connection. Then, since OpenWRT would manage my LAN, it would probably manage every device connected to any port which is not the WAN port. This is what I meant by “manage the port connections”. I just read the documentation you sent, and this is indeed probably what I need for every port except the WAN port.
Ideally, the WAN port should also be managed by OpenWRT in the same way. But if I do that, then every physical port will be passed through to the OpenWRT instance and become unavailable to the host server, and I’m not sure how the host will connect to the internet. Is there a way for the host to create a virtual connection through which it reaches the internet via a container running OpenWRT?
I’ve seen this video and it’s not really what I want. By the way, this also exposes the confusion in my original message more clearly, as I’m talking about physical RJ45 port connections and not network ports. In the previous messages I was only talking about physical RJ45 ports.
I guess I should have been talking about passing through devices instead of ports. So my question should have been the following:
Is there a way for the host to create a virtual device, shared with a container, through which the host server connects to the internet via the OpenWRT container, with the container handing the host an address through DHCP (say)?
Well if you truly want your OpenWRT container to be a router then, yes, you would need to pass the WAN interface into the container too.
At that point your host will be effectively cut off from the network.
Then you would need to add a link between your host and your router container (such as a connection to the lxdbr0 bridge) so the host can use the container as its default gateway.
This does introduce a chicken-and-egg scenario for start-up connectivity, which may cause some issues: effectively, your host won’t have an internet connection until LXD has started your router container.
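As a rough sketch (device names and the gateway address are examples, not a tested recipe), that link could look like:

```shell
# Attach the router container to lxdbr0 as a second interface:
lxc config device add openwrt lanbr nic nictype=bridged parent=lxdbr0 name=eth2

# On the host, use the container's (static) lxdbr0 address as default gateway:
ip route replace default via 10.10.10.1 dev lxdbr0
```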
What connectivity protocol does your internet access use, and how many global IPs do you have?
It’s currently behind NAT and my current router gets a local IP through DHCP. In the future I’ll ask the internet company to set their modem to bridge mode; I’m still not sure what the connection will look like then, but my modem only has one global IP.
Btw, I don’t really want this. I’m mostly considering what my options are to plan what I should do exactly.
Maybe the best solution would be to simply run OpenWRT directly, install snapd and then the lxd snap, but I haven’t seen anywhere whether this is possible. In fact, this was talked about here and I got the impression it’s not easy.
I thought that the dangling veth pair could be handled by OpenWRT as if it were another Ethernet port. In that case, OpenWRT would treat it like any other connection through an RJ45 port and include it in its networks.
I’m not sure how to use the lxdbr0 bridge as, from my understanding, it’s managed by lxd and not by OpenWRT. I’m not sure if this would be an issue.
It makes sense. But the downside is that you’ll have to have something configure the host-side IP address of the dangling veth interface each time the instance starts.
But if you use a bridged NIC type connected to the lxdbr0 interface, then your host will always be reachable via the lxdbr0 IP address from the instance, and you can configure the instance to have a static IP so it’s reachable from the host.
You’ll most likely need to disable DHCP on lxdbr0 so it doesn’t conflict with the instance itself.
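Assuming current LXD network configuration keys, disabling the bridge’s DHCP would be something like:

```shell
# Let the OpenWRT container, not LXD's dnsmasq, hand out addresses on lxdbr0:
lxc network set lxdbr0 ipv4.dhcp false
lxc network set lxdbr0 ipv6.dhcp false
```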
Interesting timing, since running openwrt in a container is exactly what I’ve spent the last month implementing (in a secure way). Today I finally got my setup into a final state and pushed most of my configuration to GitHub. The openwrt part is missing because they mix configuration and secrets (e.g. for ppp and wireguard) in the same configs, so it’s hard to share properly. I can post snippets, though, if you’re interested.
In the future I’ll also improve the readme of the repo, but here’s the basic breakdown for now:
NanoPi R4S (4GB RAM, 64GB microSD)
a 500GB SSD with: 16GB /var, 16GB swap, LXD ZFS storage for the rest
I created my own alpine-based distro for the host OS. You can use ubuntu, but
alpine needs less resources
it has better support for aarch64 hardware that is not the Raspberry Pi
I can generate the image from configurations which I can publish on GitHub
it’s easier to make the OS read-only without losing updateability.
the network interface setup:
wan0 gets moved into the container
lan0 stays on the host but all IP configuration is removed
a macvlan interface on top of lan0 is created for openwrt
a macvlan interface on top of lan0 with a static IP is created for the host so I can reach it when openwrt is down. This also gives the host internet access via openwrt since the host is basically like another physical device inside your home network.
an unused macvlan interface on top of lan0 without any IPs is created for the host, because linux bridges don’t allow communication with the outside world when there’s only one device on them (which is the case when openwrt is down)
an additional bridge lxdpriv0, which is managed not by LXD but by the host OS, is created. Untrusted software which should not be able to communicate with the home network goes here (the other way around still works). I basically put everything there that doesn’t need to communicate with other containers, since I don’t trust any software. It has port_isolation and mac_filtering enabled. No internet access by default.
a veth on top of lxdpriv0 gets created for openwrt. openwrt is the only port with port_isolation disabled so everything has to go through it. openwrt provides DNS, DHCP and RA for that bridge. That way I can see and manage container addresses inside openwrt. I can also allow internet access to selected instances using their static MAC addresses.
I put a simple nftables firewall on the host to allow ssh and prevent forwarding
I recommend disabling ip forwarding inside containers and/or giving them their own firewall - especially if you want to add more macvlan devices rather than lxdpriv0 devices (e.g. for performance reasons).
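To make the host-side part of that breakdown concrete, here is a sketch in plain `ip` commands (interface names match the list above, the address is an example; my actual setup does this through ifupdown-ng):

```shell
# lan0 stays up but carries no IP config itself:
ip link set lan0 up

# macvlan for the host, with a static IP, reachable even when openwrt is down:
ip link add link lan0 lan0_host type macvlan mode bridge
ip addr add 192.168.1.2/24 dev lan0_host
ip link set lan0_host up

# the extra, unused macvlan mentioned above:
ip link add link lan0 lan0_spare type macvlan mode bridge
ip link set lan0_spare up

# the bridge for untrusted containers, managed by the host OS rather than LXD:
ip link add lxdpriv0 type bridge
ip link set lxdpriv0 up

# openwrt's own macvlan and its veth on lxdpriv0 are created by LXD at start-up.
```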
This is getting long so I’ll stop and let you ask questions instead
This is the part I have the biggest question about. How are those macvlan interfaces created? Through lxd or through the host OS? Who manages each one? What commands do you use to create them? I guess I have the same questions about the bridge and the veth on top of the bridge.
Sorry for the questions, but I’m learning all of this just now.
No need to be sorry for asking questions, I like answering them
LXD can create macvlan interfaces for containers itself. The reason why I’m removing all addresses from lan0 and using the macvlan lan0_host instead is that macvlan devices cannot communicate with the native interface, meaning containers like openwrt would not be able to communicate with the LXD host at lan0. Since I wanted the LXD host to be a DHCP client on openwrt’s network, I needed that, and connecting the LXD host through its own macvlan interface instead is an efficient workaround.
I create the LXD host’s macvlans through ifupdown-ng (/etc/network/interfaces) because that happens to be one of the most commonly used network managers for alpine linux. If you’re using ubuntu, it’d also be very easy to do with systemd-networkd. You can find my ifupdown-ng config on GitHub. You may also wanna look at the other network-related files generated by that script.
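For illustration, a minimal /etc/network/interfaces fragment in that spirit could look like this (names and addresses are examples; I’m using plain `pre-up`/`post-down` commands here rather than my exact config):

```
auto lan0
iface lan0
    # native interface stays up, no addresses

auto lan0_host
iface lan0_host
    pre-up ip link add link lan0 lan0_host type macvlan mode bridge
    post-down ip link del lan0_host
    address 192.168.1.2/24
    gateway 192.168.1.1
```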
I forgot to answer the veth and lxdpriv0 parts:
If you configure a container to join a bridge, LXD does everything that’s needed for that automatically: creating a veth pair, moving one side into the container as e.g. eth0, and connecting the other side to the bridge.
And that works as-is with the default bridge lxdbr0. I just needed to disable LXD’s firewall with ipv4.firewall: false and ipv6.firewall: false, since I’m using an nftables script to make the whole setup more secure. Replacing what LXD did is just 5 simple lines, so that’s not much of a loss.
If you also want a more secure bridge like what I did with lxdpriv0, here’s how that works:
First, what do I want?
a bridge without any IP config because I need no host-communication
openwrt should be connected to that bridge and provide DNS and DHCP.
openwrt can then also define firewall rules for forwarding between hosts on that bridge, to a different network or to the internet.
openwrt’s DNS server can also provide DNS names for hosts on that network (e.g. CONTAINERNAME.home.arpa)
even on unmanaged LXD bridges, LXD can provide the following functionality:
security.mac_filtering: to prevent spoofing MAC addresses to bypass openwrt’s firewall. This is implemented using nftables and does not conflict with my nftables rules.
security.port_isolation: this sets a flag on the bridge port to prevent communication with all other hosts that have that flag.
While you can disable LXD’s dnsmasq for a single bridge, I still created lxdpriv0 through ifupdown-ng because I didn’t find a way to set the net.ipv6.conf.lxdpriv0.* sysctls at the right time otherwise. Creating the bridge outside of LXD doesn’t have any disadvantages anyway if you don’t want its dnsmasq service.
All that’s left is to allow forwarding between ports on lxdpriv0 in the LXD host’s nftables firewall, connect openwrt to that bridge, and configure openwrt to provide its services on that interface.
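A hedged sketch of that forwarding rule (the table and chain names are examples, not my actual ruleset, and it assumes a drop-by-default forward chain):

```
table inet filter {
    chain forward {
        type filter hook forward priority filter; policy drop;
        # allow traffic to flow between ports of the lxdpriv0 bridge
        iifname "lxdpriv0" oifname "lxdpriv0" accept
    }
}
```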
I’ve also created a LXD profile named private to simplify creating containers that are connected to that bridge.
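Such a profile can be sketched like this (the profile and device names are mine, the image is just an example):

```shell
# A NIC on lxdpriv0 with the security options mentioned above:
lxc profile create private
lxc profile device add private eth0 nic \
    nictype=bridged parent=lxdpriv0 name=eth0 \
    security.mac_filtering=true security.port_isolation=true

# Containers created with this profile land on the isolated bridge:
lxc launch images:alpine/edge testct -p default -p private
```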
If I assumed knowledge that you don’t have, just ask more questions or say what you don’t know, so I can explain it or provide links to sites with more information about these topics.
I moved my setup from macvlan to a bridge because it was too limited for my use-case.
macvlan doesn’t support routing multiple devices through one end (so you couldn’t, e.g., bridge eth0 and wlan0 on the openwrt side)
macvlan doesn’t support nftables bridge filtering
The bridge itself doesn’t seem to have any noticeable performance impact, but the veth that connects openwrt to the bridge does, depending on NIC and traffic direction. For me it’s around 930Mbit/s rx/tx on the bridge itself, 930Mbit/s rx to openwrt, and 830Mbit/s tx from openwrt. My cheap Realtek NICs definitely play a huge role here, since throughput between my router and other devices differs a lot between e.g. Intel and Realtek NICs.
The reason why most container-related sites describe bridges as slow is that they usually consider the case where you NAT between the bridge and eth0, not the one where eth0 itself is a port on the bridge, which eliminates the NAT step.