Hello, I’m trying to get LXD working on Groovy Desktop.
root@HOLODECK:~# snap install lxd
lxd 4.10 from Canonical✓ installed
root@HOLODECK:~# lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]: lxdpool
Name of the storage backend to use (dir, lvm, zfs, ceph, btrfs) [default=zfs]:
Would you like to create a new zfs dataset under rpool/lxd? (yes/no) [default=yes]: no
Create a new ZFS pool? (yes/no) [default=yes]:
Would you like to use an existing empty block device (e.g. a disk or partition)? (yes/no) [default=no]:
Size in GB of the new loop device (1GB minimum) [default=30GB]: 10GB
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: none
Would you like the LXD server to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]:
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: yes
config: {}
networks:
- config:
ipv4.address: auto
ipv6.address: none
description: ""
name: lxdbr0
type: ""
project: default
storage_pools:
- config:
size: 10GB
description: ""
name: lxdpool
driver: zfs
profiles:
- config: {}
description: ""
devices:
eth0:
name: eth0
network: lxdbr0
type: nic
root:
path: /
pool: lxdpool
type: disk
name: default
cluster: null
root@HOLODECK:~# ip addr show lxdbr0
8: lxdbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 00:16:3e:07:49:7b brd ff:ff:ff:ff:ff:ff
inet 10.52.135.1/24 scope global lxdbr0
valid_lft forever preferred_lft forever
For some reason lxdbr0 is always down and I can’t get it working.
root@HOLODECK:~# ip link set lxdbr0 up
root@HOLODECK:~# ip addr show lxdbr0
8: lxdbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 00:16:3e:07:49:7b brd ff:ff:ff:ff:ff:ff
inet 10.52.135.1/24 scope global lxdbr0
valid_lft forever preferred_lft forever
root@HOLODECK:~# lxc network list
+---------+----------+---------+----------------+------+-------------+---------+
| NAME | TYPE | MANAGED | IPV4 | IPV6 | DESCRIPTION | USED BY |
+---------+----------+---------+----------------+------+-------------+---------+
| enp0s25 | physical | NO | | | | 0 |
+---------+----------+---------+----------------+------+-------------+---------+
| lxdbr0 | bridge | YES | 10.52.135.1/24 | none | | 2 |
+---------+----------+---------+----------------+------+-------------+---------+
| virbr0 | bridge | NO | | | | 0 |
+---------+----------+---------+----------------+------+-------------+---------+
| wlp3s0 | physical | NO | | | | 0 |
+---------+----------+---------+----------------+------+-------------+---------+
I don’t really know how to diagnose it further. Any ideas, please?
It changes to UP, but no address gets attached to the container:
root@HOLODECK:~# lxc launch ubuntu:20.04 test
Creating test
Starting test
root@HOLODECK:~# lxc list
+------+---------+------+------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+---------+------+------+-----------+-----------+
| test | RUNNING | | | CONTAINER | 0 |
+------+---------+------+------+-----------+-----------+
root@HOLODECK:~# ip addr show lxdbr0
8: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:16:3e:07:49:7b brd ff:ff:ff:ff:ff:ff
inet 10.52.135.1/24 scope global lxdbr0
valid_lft forever preferred_lft forever
inet6 fe80::216:3eff:fe07:497b/64 scope link
valid_lft forever preferred_lft forever
Then, when I jump into the container:
root@HOLODECK:~# lxc shell test
root@test:~# ip -br a
lo UNKNOWN 127.0.0.1/8 ::1/128
eth0@if10 UP fe80::216:3eff:feb3:15db/64
root@test:~# ip addr show eth0@if10
Device "eth0@if10" does not exist.
root@test:~# ip addr show eth0
9: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:16:3e:b3:15:db brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::216:3eff:feb3:15db/64 scope link
valid_lft forever preferred_lft forever
So there are no firewall rules to allow DNS on the LXD bridge. Should they be added automatically on initialization? I’m using LXD with default-deny firewalls on other hosts (Focal instead of Groovy like this one) and I can confirm that appropriate rules are generated automatically at some point.
So it looks like UFW or libvirt has removed the LXD rules added for lxdbr0.
I suggest restarting LXD with sudo systemctl reload snap.lxd.daemon, comparing the iptables rules before and after, and then adding equivalent rules to your firewall so they are restored on boot.
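For example (a sketch; this assumes the xtables-based driver is in use, so LXD’s rules would show up via iptables-save):

```
sudo systemctl reload snap.lxd.daemon
# Then look for the chains/rules LXD re-creates for its bridge:
sudo iptables-save | grep -i lxd
sudo ip6tables-save | grep -i lxd
```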
root@HOLODECK:~# lxc info | grep 'firewall:'
firewall: nftables
root@HOLODECK:~# nft list ruleset
Command 'nft' not found, but can be installed with:
apt install nftables
root@HOLODECK:~#
It’s a minimal desktop installation of Groovy, I didn’t touch anything firewall related other than:
ufw enable
As far as I understand it’s still the default way to interact with firewall, regardless of underlying technology.
If there were no iptables rules active when LXD started, then it prefers to use nftables if the kernel is recent enough (and the tool is bundled in the snap).
If additional iptables rules are added after that, you can end up in a mixed environment.
LXD tries to detect the various combinations and make the ‘best’ decision at start-up, and after that the rules it adds itself can influence its driver choice on subsequent start-ups.
To further complicate matters, nftables provides an iptables shim compatibility layer, except it’s not fully compatible with iptables or ebtables, so in that scenario we would prefer nftables too.
The logic is here:
Can you install nftables (sudo apt install nftables) and then try listing the ruleset again?
So on Groovy, installing ufw and using its default configuration will use the iptables shim to insert nftables rules.
So LXD then uses nftables.
Installing the nftables package allows you to see both the rules added by UFW and those added by LXD.
However, in its default configuration, ufw drops all incoming traffic.
Although LXD has added allow rules for DHCP and DNS, these are still blocked because, as per nftables documentation: Configuring chains - nftables wiki
NOTE: If a packet is accepted and there is another chain, bearing the same hook type and with a later priority, then the packet will subsequently traverse this other chain. Hence, an accept verdict - be it by way of a rule or the default chain policy - isn’t necessarily final. However, the same is not true of packets that are subjected to a drop verdict. Instead, drops take immediate effect, with no further rules or chains being evaluated.
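To illustrate, here is a minimal ruleset (a sketch; the table and chain names are made up) where an accept in one base chain does not save the packet from a drop policy in a second base chain on the same hook:

```
table inet app_a {
    chain input {
        type filter hook input priority 0; policy accept;
        udp dport 53 accept    # accepted here...
    }
}
table inet app_b {
    chain input {
        type filter hook input priority 10; policy drop;
        # ...but the packet still traverses this chain, runs into
        # the drop policy, and is discarded for good
    }
}
```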
So you can get this working by allowing incoming traffic (which rather defeats the purpose of installing ufw):
ufw default allow incoming
Or you could add rules equivalent to those added in the lxd chains, to ufw to allow inbound DNS and DHCP.
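Something along these lines should work (a sketch, untested; LXD’s dnsmasq serves DNS on UDP/TCP 53 and DHCP on UDP 67 on the bridge):

```
sudo ufw allow in on lxdbr0 to any port 53
sudo ufw allow in on lxdbr0 to any port 67 proto udp
```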
@stgraber because of the way nftables allows multiple base chains to operate on a packet (those that have a netfilter hook in them), even if one of them accepts the packet, it may still be dropped later due to another chain’s rules and policies.
We create our own lxd chains with netfilter hooks. This means that if an application using the iptables nftables shim creates its own base chains with netfilter hooks and a default drop policy, then our rules will not be final and packets will be dropped.
This makes it pretty tricky for two applications that create their own hook chains into netfilter to coexist as neither one can selectively accept traffic if the other has dropped all traffic.
One way around this would be for us to create another non-base chain inside a well-known table (like the one used by the iptables shim) and then add jump rules to the well-known chains it uses to give our rules a chance of accepting the packets before their policy kicks in.
We would need to add a rule to the filter table in the INPUT and FORWARD chains to jump into our own chains to stand a chance of the packets being allowed. However, nftables doesn’t allow you to jump into a base chain (one that has a netfilter hook in it), and I’m not sure you can jump across tables either.
So to get compatibility with the iptables shim we’d need to add our custom chains to the filter table and then add rules to the INPUT and FORWARD chains to get our rules applied before the DROP policy kicks in.
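Roughly, that workaround would look like this (hypothetical chain name, and it assumes the shim’s ip filter table already exists):

```
# Create a regular (non-base) chain inside the shim's filter table
nft add chain ip filter lxd-accept
nft add rule ip filter lxd-accept iifname "lxdbr0" udp dport { 53, 67 } accept
# Jump into it early from the shim's base chains,
# before their DROP policy gets a chance to apply
nft insert rule ip filter INPUT jump lxd-accept
nft insert rule ip filter FORWARD jump lxd-accept
```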
This would only work with applications that use the filter table, and any other application that creates its own tables and sets up drop policies would cause the same problem again.
Yeah, I’m honestly not sure what’s the right thing to do here…
I’m not super optimistic about us putting workarounds in place to handle the xtables compatibility tooling. By definition this will cause issues, as it’s trying to pretend that nft is xtables and so comes with the same issues around rule ordering…
I believe ufw was natively ported to nft recently so it may instead be better to see how we handle the rules generated by that version.
Cooperating properly with other native nft users and pushing distros to ship the native nft support in those tools when available feels like a more future proof way to handle this.
As you can see getting the various firewall implementations to play nicely together is a non-trivial task.
In the meantime these commands seem to suffice to allow traffic from the lxdbr0 interface to the LXD host, and for traffic from lxdbr0 to be routed to the external network, without allowing all external inbound traffic:
sudo ufw allow in on lxdbr0
sudo ufw route allow in on lxdbr0
sudo ufw route allow out on lxdbr0
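After adding those rules, restarting the container should let it pick up an address from the bridge again:

```
lxc restart test
lxc list test
```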
@tomp Thank you for getting to the bottom of that issue; I would never have solved it myself.
Not quite. In my case, it makes the container get an IPv4 address. But then, inside the container, I still have networking issues.
root@test:~# apt update
Err:1 http://archive.ubuntu.com/ubuntu focal InRelease
  Cannot initiate the connection to archive.ubuntu.com:80 (2001:67c:1360:8001::24). - connect (101: Network is unreachable)
  Cannot initiate the connection to archive.ubuntu.com:80 (2001:67c:1360:8001::23). - connect (101: Network is unreachable)
  Could not connect to archive.ubuntu.com:80 (91.189.88.142), connection timed out
  Could not connect to archive.ubuntu.com:80 (91.189.88.152), connection timed out
[...snip...]
root@test:~# ip -br a
lo UNKNOWN 127.0.0.1/8 ::1/128
eth0@if14 UP 10.52.135.83/24 fe80::216:3eff:feb3:15db/64
root@test:~# ping archive.ubuntu.com
PING archive.ubuntu.com (91.189.88.142) 56(84) bytes of data.
64 bytes from aerodent.canonical.com (91.189.88.142): icmp_seq=1 ttl=50 time=58.3 ms
64 bytes from aerodent.canonical.com (91.189.88.142): icmp_seq=2 ttl=50 time=66.4 ms
^C
root@test:~# curl -m 30 archive.ubuntu.com
curl: (28) Connection timed out after 30000 milliseconds