Thank you Stéphane.
The machine running LXD is a fresh install of Ubuntu 20.04 for the sole purpose of running containers, so I hope that there isn’t too much fuss happening on the host level.
I have tried switching security.privileged to false on the offending containers, since the unprivileged containers showed no issues. As expected their filesystems got remapped, but it had no effect on IPv4 connectivity.
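For reference, this is roughly how I toggled the flag (using `bad-1` as a placeholder for the actual container name):

```shell
# Switch the container to unprivileged; LXD remaps the filesystem on next start
lxc config set bad-1 security.privileged false
lxc restart bad-1

# Confirm the effective value
lxc config get bad-1 security.privileged
```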
`iptables` reports no rules in place:
sudo iptables -L -n -v
Chain INPUT (policy ACCEPT 726K packets, 84M bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 60875 packets, 19M bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 697K packets, 7909M bytes)
pkts bytes target prot opt in out source destination
`ufw` is installed but inactive.
sudo nft list ruleset
table inet lxd {
  chain pstrt.lxdbr0 {
    type nat hook postrouting priority srcnat; policy accept;
    @nh,96,24 704899 @nh,128,24 != 704899 masquerade
  }

  chain fwd.lxdbr0 {
    type filter hook forward priority filter; policy accept;
    ip version 4 oifname "lxdbr0" accept
    ip version 4 iifname "lxdbr0" accept
  }

  chain in.lxdbr0 {
    type filter hook input priority filter; policy accept;
    iifname "lxdbr0" tcp dport 53 accept
    iifname "lxdbr0" udp dport 53 accept
    iifname "lxdbr0" udp dport 67 accept
  }

  chain out.lxdbr0 {
    type filter hook output priority filter; policy accept;
    oifname "lxdbr0" tcp sport 53 accept
    oifname "lxdbr0" udp sport 53 accept
    oifname "lxdbr0" udp sport 67 accept
  }
}
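The chains above do allow DHCP (udp/67) in and out, so as a sanity check it might be worth watching the bridge while restarting a bad container, to see whether the DHCP requests reach dnsmasq at all (assuming tcpdump is installed on the host):

```shell
# Watch DHCP traffic on the LXD bridge while a "bad" container boots
sudo tcpdump -ni lxdbr0 'port 67 or port 68'
```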
For DNS, I set up `resolved` so that container names can be used on the host.
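In case it matters, the resolved integration was done roughly like this (on 20.04 the `resolvectl` commands should work; the DNS address and domain match my lxdbr0 config):

```shell
# Point systemd-resolved at the bridge's dnsmasq for the "lxd" zone
resolvectl dns lxdbr0 10.193.131.1
resolvectl domain lxdbr0 '~lxd'
```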
In the netstat output I replaced the names of containers that receive an IPv4 address with `good-n`, and the ones that do not with `bad-n`:
sudo netstat -lnp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 10.193.131.1:53 0.0.0.0:* LISTEN 39594/dnsmasq
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 947/systemd-resolve
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1247/sshd: /usr/sbi
tcp6 0 0 fe80::d872:8cff:fe88:53 :::* LISTEN 39594/dnsmasq
tcp6 0 0 :::22 :::* LISTEN 1247/sshd: /usr/sbi
tcp6 0 0 :::8443 :::* LISTEN 1563/lxd
udp 0 0 10.193.131.1:53 0.0.0.0:* 39594/dnsmasq
udp 0 0 127.0.0.53:53 0.0.0.0:* 947/systemd-resolve
udp 0 0 0.0.0.0:67 0.0.0.0:* 39594/dnsmasq
udp 0 0 10.150.0.7:68 0.0.0.0:* 944/systemd-network
udp 0 0 127.0.0.1:323 0.0.0.0:* 995/chronyd
udp6 0 0 fe80::d872:8cff:fe88:53 :::* 39594/dnsmasq
udp6 0 0 ::1:323 :::* 995/chronyd
raw6 0 0 :::58 :::* 7 944/systemd-network
raw6 0 0 :::58 :::* 7 944/systemd-network
Active UNIX domain sockets (only servers)
Proto RefCnt Flags Type State I-Node PID/Program name Path
unix 2 [ ACC ] STREAM LISTENING 27672 1563/lxd /var/snap/lxd/common/lxd/devlxd/sock
unix 2 [ ACC ] STREAM LISTENING 4655965 1469234/systemd /run/user/1000/systemd/private
unix 2 [ ACC ] STREAM LISTENING 4655970 1469234/systemd /run/user/1000/bus
unix 2 [ ACC ] STREAM LISTENING 4655971 1469234/systemd /run/user/1000/gnupg/S.dirmngr
unix 2 [ ACC ] STREAM LISTENING 4655972 1469234/systemd /run/user/1000/gnupg/S.gpg-agent.browser
unix 2 [ ACC ] STREAM LISTENING 4655973 1469234/systemd /run/user/1000/gnupg/S.gpg-agent.extra
unix 2 [ ACC ] STREAM LISTENING 2407 1/init @/org/kernel/linux/storage/multipathd
unix 2 [ ACC ] STREAM LISTENING 4655974 1469234/systemd /run/user/1000/gnupg/S.gpg-agent.ssh
unix 2 [ ACC ] STREAM LISTENING 4655975 1469234/systemd /run/user/1000/gnupg/S.gpg-agent
unix 2 [ ACC ] STREAM LISTENING 4655976 1469234/systemd /run/user/1000/pk-debconf-socket
unix 2 [ ACC ] STREAM LISTENING 4655978 1469234/systemd /run/user/1000/snapd-session-agent.socket
unix 2 [ ACC ] STREAM LISTENING 30314 3899/[lxc monitor] @/var/snap/lxd/common/lxd/containers/good-1/command
unix 2 [ ACC ] STREAM LISTENING 20997 1/init /run/dbus/system_bus_socket
unix 2 [ ACC ] STREAM LISTENING 19962 1/init /run/snapd.socket
unix 2 [ ACC ] STREAM LISTENING 19964 1/init /run/snapd-snap.socket
unix 2 [ ACC ] STREAM LISTENING 19966 1/init /run/uuidd/request
unix 2 [ ACC ] STREAM LISTENING 33216 5029/[lxc monitor] @/var/snap/lxd/common/lxd/containers/good-2/command
unix 2 [ ACC ] STREAM LISTENING 2394 1/init /run/systemd/private
unix 2 [ ACC ] STREAM LISTENING 2396 1/init /run/systemd/userdb/io.systemd.DynamicUser
unix 2 [ ACC ] STREAM LISTENING 3289299 583201/[lxc monitor @/var/snap/lxd/common/lxd/containers/good-3/command
unix 2 [ ACC ] STREAM LISTENING 19960 1/init /var/snap/lxd/common/lxd/unix.socket
unix 2 [ ACC ] STREAM LISTENING 2405 1/init /run/lvm/lvmpolld.socket
unix 2 [ ACC ] STREAM LISTENING 2410 1/init /run/systemd/fsck.progress
unix 2 [ ACC ] STREAM LISTENING 3319120 594706/[lxc monitor @/var/snap/lxd/common/lxd/containers/good-4/command
unix 2 [ ACC ] STREAM LISTENING 2420 1/init /run/systemd/journal/stdout
unix 2 [ ACC ] SEQPACKET LISTENING 2425 1/init /run/udev/control
unix 2 [ ACC ] STREAM LISTENING 15528 186/systemd-journal /run/systemd/journal/io.systemd.journal
unix 2 [ ACC ] STREAM LISTENING 3401324 639599/[lxc monitor @/var/snap/lxd/common/lxd/containers/good-5/command
unix 2 [ ACC ] STREAM LISTENING 4692658 1490069/[lxc monito @/var/snap/lxd/common/lxd/containers/bad-1/command
unix 2 [ ACC ] STREAM LISTENING 4697285 1492963/[lxc monito @/var/snap/lxd/common/lxd/containers/bad-2/command
unix 2 [ ACC ] STREAM LISTENING 26845 1705/[lxc monitor] @/var/snap/lxd/common/lxd/containers/good-6/command
unix 2 [ ACC ] STREAM LISTENING 35279 7831/[lxc monitor] @/var/snap/lxd/common/lxd/containers/bad-3/command
unix 2 [ ACC ] STREAM LISTENING 19959 1/init @ISCSIADM_ABSTRACT_NAMESPACE
unix 2 [ ACC ] SEQPACKET LISTENING 29179 1563/lxd /var/snap/lxd/common/lxd/seccomp.socket
unix 2 [ ACC ] STREAM LISTENING 3350487 609875/[lxc monitor @/var/snap/lxd/common/lxd/containers/good-7/command
unix 2 [ ACC ] STREAM LISTENING 26648 1563/lxd @00010
From inside a “bad” container, eth0 has only a link-local IPv6 address:
ifconfig
eth0 Link encap:Ethernet HWaddr 00:16:3e:1c:c4:52
inet6 addr: fe80::216:3eff:fe1c:c452/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:108 errors:0 dropped:0 overruns:0 frame:0
TX packets:28 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:17616 (17.6 KB) TX bytes:6680 (6.6 KB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:4 errors:0 dropped:0 overruns:0 frame:0
TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:308 (308.0 B) TX bytes:308 (308.0 B)
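To rule out the client side, one can try requesting a lease by hand from inside a bad container; I'm assuming `dhclient` is available on this 12.04 image:

```shell
# Run the DHCP client in the foreground with verbose output on eth0
dhclient -v eth0
```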
The container seems to have the device configured correctly:
lxc config show collective-access --expanded
architecture: x86_64
config:
  image.architecture: amd64
  image.description: ubuntu 12.04 LTS amd64 (release) (20170502)
  image.label: release
  image.os: ubuntu
  image.release: precise
  image.serial: "20170502"
  image.type: root.tar.xz
  image.version: "12.04"
  security.privileged: "false"
  volatile.base_image: be4aa8e56eab681fac6553b48ce19d7f34833accc2c8ae65f140a603b8369a1d
  volatile.eth0.host_name: veth91e363a6
  volatile.eth0.hwaddr: 00:16:3e:1c:c4:52
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
  volatile.uuid: d5451aec-2e03-4b75-8bb0-3892d4887447
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: zfspool
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""
The managed network config looks like this:
lxc network show lxdbr0
config:
  ipv4.address: 10.193.131.1/24
  ipv4.nat: "true"
  ipv6.address: none
  raw.dnsmasq: |-
    auth-zone=lxd
    dns-loop-detect
description: ""
name: lxdbr0
type: bridge
used_by:
- /1.0/instances/good-1
- /1.0/instances/bad-1
- /1.0/instances/bad-2
- /1.0/instances/good-2
- /1.0/instances/good-3
- /1.0/instances/good-4
- /1.0/instances/good-5
- /1.0/instances/good-6
- /1.0/instances/good-7
- /1.0/instances/good-8
- /1.0/profiles/default
managed: true
status: Created
locations:
- none
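One more thing that may be worth checking is what dnsmasq thinks it has handed out. Assuming a snap-installed LXD, the leases can be inspected either via the CLI or from the leases file:

```shell
# Leases as LXD sees them
lxc network list-leases lxdbr0

# Raw dnsmasq lease file (path used by the snap package)
sudo cat /var/snap/lxd/common/lxd/networks/lxdbr0/dnsmasq.leases
```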