Awesome news. What about the software firewall? Is this automatically in-place with UFW?
I’m configuring UFW with rules to allow Incus on Ubuntu, but I would of course like to do the same with IncusOS. What is the recommendation for e.g. public VPS machines?
For me it’s essential to DNAT ports into containers/VMs. Like the OP, we have dedicated servers or VPSs with Incus-managed containers, and in the current real world, where IP addresses are super expensive, IncusOS holds the IP and should be able to DNAT its ports into subservices located in containers.
And you can still use an Incus proxy device or a network forward to forward specific ports from that static IP to your containers; you don’t need a manual firewall script for that.
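For example, the two approaches look roughly like this (the container name `web`, the listen address `203.0.113.10`, and the target address `10.0.0.5` are made-up placeholders for illustration):

```shell
# Option 1: a proxy device attached to the instance.
# Incus listens on the host address and relays traffic into the container.
incus config device add web http proxy \
    listen=tcp:203.0.113.10:80 connect=tcp:127.0.0.1:80

# Option 2: a network forward on the managed bridge.
# Create a forward on the external address, then map tcp/80 to the container's IP.
incus network forward create incusbr0 203.0.113.10
incus network forward port add incusbr0 203.0.113.10 tcp 80 10.0.0.5
```

The proxy device is per-instance, while a network forward operates at the network level and can fan out many ports to different instances from one listen address.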
stgraber@castiana:~ (incus:dev/default)$ incus admin os system network show
WARNING: The IncusOS API and configuration is subject to change
config:
interfaces:
- addresses:
- dhcp4
- slaac
hwaddr: 10:66:6a:16:e4:a1
lldp: false
name: enp5s0
required_for_online: "no"
state:
interfaces:
enp5s0:
addresses:
- 10.244.64.67
- 2602:fc62:c:251:1266:6aff:fe16:e4a1
hwaddr: 10:66:6a:16:e4:a1
mtu: 1500
roles:
- management
- cluster
routes:
- to: default
via: 10.244.64.1
speed: "-1"
state: routable
stats:
rx_bytes: 534714
rx_errors: 0
tx_bytes: 94713
tx_errors: 0
type: interface
stgraber@castiana:~ (incus:dev/default)$ incus list
+----------+---------+-----------------------+------------------------------------------------+-----------------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+----------+---------+-----------------------+------------------------------------------------+-----------------+-----------+
| c1 | RUNNING | 10.242.142.210 (eth0) | fd42:2937:7446:71dc:1266:6aff:fec4:2adf (eth0) | CONTAINER | 0 |
+----------+---------+-----------------------+------------------------------------------------+-----------------+-----------+
| my-nginx | RUNNING | 10.242.142.84 (eth0) | fd42:2937:7446:71dc:1266:6aff:fe93:b8f8 (eth0) | CONTAINER (APP) | 0 |
+----------+---------+-----------------------+------------------------------------------------+-----------------+-----------+
| v1 | RUNNING | 10.242.142.146 (eth0) | fd42:2937:7446:71dc:1266:6aff:fe2b:6ef3 (eth0) | VIRTUAL-MACHINE | 0 |
+----------+---------+-----------------------+------------------------------------------------+-----------------+-----------+
stgraber@castiana:~ (incus:dev/default)$ incus network forward create incusbr0 10.244.64.67
Network forward 10.244.64.67 created
stgraber@castiana:~ (incus:dev/default)$ incus network forward port add incusbr0 10.244.64.67 tcp 80 10.242.142.84
stgraber@castiana:~ (incus:dev/default)$ curl http://10.244.64.67
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
That’s an IncusOS box with external IP address 10.244.64.67, running 3 instances, one of which runs nginx on tcp/80; the box then gets configured to forward tcp/80 on the external address over to the nginx container’s port 80.
Yes, but it’s not as flexible as firewalling. From real-world experience, you must set up and maintain your ipsets separately in every container, or create a separate container that acts as a proxy to the other containers.
We’re most likely to do this directly through nft.
The ufw syntax is convenient, but it’s a python script and we’re trying pretty hard to keep the base image as small as possible. So far we’ve managed to keep it to basically just shell scripts and binary stuff: no python in there, and we’ve stripped most of the perl stuff.
I also don’t know if ufw actually works these days on a system that only has nft: no xtables legacy commands, no legacy xtables kernel support, and no xtables-to-nft command wrappers. Maybe @jdstrand still lurks around here and can answer that one.
Yeah, we’d probably go with something very simple, input only, not touching forward as that can (and should) be handled through Incus ACLs instead.
IncusOS itself doesn’t listen on any port, so it’s only Incus that’s listening, on its default port 8443. That can be changed, and some extra ports can be opened (BGP endpoint, DNS endpoint, debug endpoint, …).
We’d do just fine with an empty firewall allowing everything, with the presence of any firewall rule causing the default action to become reject. We’d handle the usual conntrack stuff to allow established/related, and also allow the minimal set of ICMP needed to keep a machine functional (echo and fragmentation handling, basically).
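As a hypothetical sketch (not the actual IncusOS ruleset), that minimal input-only policy could look something like this in nftables, with tcp/8443 being the Incus API port mentioned above:

```nft
table inet filter {
  chain input {
    type filter hook input priority filter; policy drop;

    # allow loopback and established/related traffic
    iifname "lo" accept
    ct state established,related accept
    ct state invalid drop

    # minimal ICMP/ICMPv6 to keep the machine functional (echo, errors, PMTU, ND)
    icmp type { echo-request, destination-unreachable, time-exceeded } accept
    icmpv6 type { echo-request, destination-unreachable, time-exceeded, packet-too-big, nd-neighbor-solicit, nd-neighbor-advert } accept

    # Incus API
    tcp dport 8443 accept

    # default action: reject rather than silently drop
    reject with icmpx type admin-prohibited
  }
}
```

Note that nft chain policies only support accept/drop, so the “default reject” behavior comes from the final `reject` rule, with the drop policy as a backstop.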
ufw is and will remain a python application for the foreseeable future. It currently is a wrapper around iptables, which (as you’re aware) can work just fine on a netfilter system assuming the system has iptables 1.8 or higher and the system is configured for iptables to use the nft backend rather than the legacy xtables backend. Based on this thread, to use ufw in IncusOS, you’d need to pull in at least the python interpreter (it doesn’t have external dependencies) and a non-ancient iptables.
I have for a long time wanted to implement a nftables backend for ufw to remove the dependency on iptables, but haven’t done it yet (since iptables-nft has (mostly) been sufficient). This work has largely been blocked behind a desire to modernize the code base and testsuite. I can’t commit to when I’ll implement the nftables backend, but can say that I’ve recently been working on modernizing things. Regardless, even when done, you’d still require the python interpreter and the nftables command to use ufw on a pure nft system….
Separately, you mentioned “We’re most likely to do this directly through nft”, which makes a lot of sense from a developer POV. However, while I don’t know who the target audience for IncusOS is, I’ll remind you that mixing nft and xtables causes the kernel to do weird, unexpected things that totally break networking (which I know you’re well aware of). Recently, on 2 systems running Ubuntu 24.04 with Incus, I attempted to go the ‘pure nft’ route (one running a container with docker, and one running container workloads migrated from older distros with xtables) and broke networking on the host, because the containers were loading xtables firewall rules. I fixed it by adjusting the Ubuntu host to use iptables-legacy, which forced ufw on the host to use xtables; Incus detected this and fell back to xtables, thus matching all the guests and restoring networking. If you suspect users will run older workloads that load firewall rules themselves, they may face similar problems, but if you only support nft, they won’t be able to run their unmodified workloads (i.e., they’ll have to convert to a VM, adjust the container to not load firewall rules (which may not be possible with docker in the container, not sure), etc.).
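For anyone hitting the same thing, checking and switching the backend on a Debian/Ubuntu host looks roughly like this (the version string shown is just an example):

```shell
# Check which backend the iptables command is currently using:
iptables --version
# prints e.g. "iptables v1.8.10 (nf_tables)" or "... (legacy)"

# Force the legacy xtables backend host-wide (what was done above):
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
```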
Hey @jdstrand, nice to see you around and great to hear that you’re still working on ufw!
Indeed, the mix of xtables and nft at the kernel level is quite messy, with basically non-deterministic results. For IncusOS, we’d be nft-only at the host level. Containers could, for now, still directly play with xtables, which hopefully shouldn’t really conflict with things given that they’ll be tied to the network namespace.
But with Linux 6.17 we’ve seen another push toward disabling a bunch of the kernel xtables interface, basically moving more of it behind legacy/deprecated kernel configs. So once we start building IncusOS-optimized kernels (rather than generic Zabbly kernels), we may end up ripping out xtables kernel support entirely.