Can't get IPv6 Netmask 64 to work (no NAT, should be end to end)

Check that your netplan setup isn’t removing the gateway route that LXD adds. Inside the container, run:

ip -6 r

You should expect to see a default route via fe80::1.

The output is:

1111:aaaa:3004:9978::2 dev eth0 proto kernel metric 256 pref medium
fe80::/64 dev eth0 proto kernel metric 256 pref medium
default via fe80::1 dev eth0 proto static metric 1024 pref medium

But I cannot resolve any domain names. Resolving IPv4 domain names should also be possible, as I will want to install programs and such.
(I hence included gateway6: fe80::1 in the netplan config.)
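
My netplan config looks roughly like this (a sketch; the address is the one from the route output above, and your prefix length may differ):

network:
    version: 2
    ethernets:
        eth0:
            addresses:
             - 1111:aaaa:3004:9978::2/128
            gateway6: fe80::1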

What are the contents of /etc/resolv.conf and the output of systemd-resolve --status?

Can you still ping outside IPs?

As for IPv4 domains: once you get DNS working you will be able to resolve them, but they will be unreachable, as you’ve not configured any IPv4 address (not even a private one that could be NATed by your host).

Before the reset I just did (for the 1000th time), I could ping outside IPv6 addresses, yes.
I assigned the container the address 192.168.1.2, but pinging outside domain names still didn’t work.

I had put the following contents in /etc/resolv.conf:

nameserver 1.1.1.1
nameserver 2606:4700:4700::1111

systemd-resolve --status had shown me the nameservers in one of the earlier setups, but there was still no name resolution; I have no idea why not.

Right now I am trying to have a normal managed bridge on eth0 of the containers and to add an eth1 with the static IPv6 address, in the hope that the managed bridge will handle DNS.

I also allowed port 53 in ufw on the host, but that does not seem to have been the issue either.

Ah, OK, so you’ve introduced 2 interfaces inside the container, and the requirement to access IPv4 nameservers (before, you said you didn’t need IPv4, only IPv6), which changes things a fair bit.

If you have a 2nd interface in the container connected to the managed bridge, it’s likely that the SLAAC autoconfiguration for IPv6 on that interface will wipe out the default IPv6 gateway route for the routed interface and replace it with a default route out of the managed bridge interface.

So either disable IPv6 on the managed bridge (ipv6.dhcp=false and ipv6.address=none) so that your 2nd interface is just for IPv4, or add a static private IPv4 address to the routed NIC:

lxc config device set c1 eth0 ipv4.address 192.168.0.n

Then you need to add an outbound masquerade firewall rule so that outbound traffic is translated to your host’s external IP. That way you’d only need 1 interface inside the container.
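
For example, a minimal sketch of such a rule using iptables, assuming 192.168.0.0/24 for the routed NICs and eth0 as the host’s external interface (adjust both to your setup):

iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o eth0 -j MASQUERADE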

Here’s an example using 2 interfaces: routed for IPv6 (eth1) and bridged for IPv4 using LXD’s NAT (eth0).

Disable IPv6 on lxdbr0:

lxc network set lxdbr0 ipv6.address none
lxc network set lxdbr0 ipv6.dhcp false
lxc init ubuntu:18.04 c1
lxc config device add c1 eth1 nic nictype=routed ipv6.address=2a02:nnn:76f4:1::1234 parent=wlp0s20f3
sysctl net.ipv6.conf.all.proxy_ndp=1
sysctl net.ipv6.conf.wlp0s20f3.proxy_ndp=1
lxc start c1
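
(The proxy_ndp sysctls won’t survive a reboot; optionally, to persist them, you could put them in a sysctl config file, e.g. a hypothetical /etc/sysctl.d/99-proxy-ndp.conf containing:)

net.ipv6.conf.all.proxy_ndp=1
net.ipv6.conf.wlp0s20f3.proxy_ndp=1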

Then use this netplan config inside the container:

network:
    version: 2
    ethernets:
        eth0:
            dhcp4: true
        eth1:
            addresses:
             - 2a02:nnn:76f4:1::1234/128
            gateway6: fe80::1

Resulting config:

ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth1@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 82:a6:92:ed:12:67 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 2a02:nnn:76f4:1::1234/128 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::80a6:92ff:feed:1267/64 scope link 
       valid_lft forever preferred_lft forever
9: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:b0:fb:9c brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.87.57.6/24 brd 10.87.57.255 scope global dynamic eth0
       valid_lft 3274sec preferred_lft 3274sec
    inet6 fe80::216:3eff:feb0:fb9c/64 scope link 
       valid_lft forever preferred_lft forever
ip -4 r
default via 10.87.57.1 dev eth0 proto dhcp src 10.87.57.6 metric 100 
10.87.57.0/24 dev eth0 proto kernel scope link src 10.87.57.6 
10.87.57.1 dev eth0 proto dhcp scope link src 10.87.57.6 metric 100 
ip -6 r
2a02:nnn:76f4:1::1234 dev eth1 proto kernel metric 256 pref medium
fe80::/64 dev eth0 proto kernel metric 256 pref medium
fe80::/64 dev eth1 proto kernel metric 256 pref medium
default via fe80::1 dev eth1 proto static metric 1024 pref medium
systemd-resolve --status
Global
          DNSSEC NTA: 10.in-addr.arpa
                      16.172.in-addr.arpa
                      168.192.in-addr.arpa
                      17.172.in-addr.arpa
                      18.172.in-addr.arpa
                      19.172.in-addr.arpa
                      20.172.in-addr.arpa
                      21.172.in-addr.arpa
                      22.172.in-addr.arpa
                      23.172.in-addr.arpa
                      24.172.in-addr.arpa
                      25.172.in-addr.arpa
                      26.172.in-addr.arpa
                      27.172.in-addr.arpa
                      28.172.in-addr.arpa
                      29.172.in-addr.arpa
                      30.172.in-addr.arpa
                      31.172.in-addr.arpa
                      corp
                      d.f.ip6.arpa
                      home
                      internal
                      intranet
                      lan
                      local
                      private
                      test

Link 9 (eth0)
      Current Scopes: DNS
       LLMNR setting: yes
MulticastDNS setting: no
      DNSSEC setting: no
    DNSSEC supported: no
         DNS Servers: 10.87.57.1
          DNS Domain: lxd
lxc ls
+------+---------+-------------------+------------------------------+-----------+-----------+
| NAME |  STATE  |       IPV4        |             IPV6             |   TYPE    | SNAPSHOTS |
+------+---------+-------------------+------------------------------+-----------+-----------+
| c1   | RUNNING | 10.87.57.6 (eth0) | 2a02:nnn:76f4:1::1234 (eth1) | CONTAINER | 0         |
+------+---------+-------------------+------------------------------+-----------+-----------+

I didn’t think I needed IPv4 in the container, but DNS would not work. And of course I need DNS so I can install things in the container. I will try the setup adjustments you have given me; thanks a lot!

I thought my config was fine, as pinging the IPv6 address from the outside worked. But of course I wouldn’t want any conflicts, and trying to connect to the containers on ports 80 and 443 from the outside is also proving difficult…

Thank you very much for all the provided help!

You could also try using an IPv6 resolver address rather than an IPv4 one if you would like to keep a pure IPv6-only environment.

How do I do that?
(I will try to use Cloudflare to host a site inside the container and provide the IPv4 connectivity.)
PS: The config works for me without setting the netplan config inside the container.

Well, say you were using Google’s 8.8.8.8 as your DNS resolver IP. Instead, you could use their equivalent IPv6 resolver IP:

https://developers.google.com/speed/public-dns/docs/using
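
For example, using Google’s IPv6 resolver addresses documented on that page, /etc/resolv.conf would contain:

nameserver 2001:4860:4860::8888
nameserver 2001:4860:4860::8844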

Your ISP may provide an IPv6 resolver address too.

I did try Cloudflare’s IPv6 resolvers:
nameserver 2606:4700:4700::1111
nameserver 2606:4700:4700::1001

But it didn’t work; DNS simply wouldn’t resolve… Must have been something else.

Try this:

ping 2606:4700:4700::1111
dig @2606:4700:4700::1111 www.google.com

If I don’t attach a network to the container (I initialized LXD without lxdbr0), the container does not do DNS. Maybe we can set up a network for each IPv6 address? And one network that will let the containers do DNS?

So here is my output again.

root@c1:~# ip -6 r
1111:aaaa:3004:9978::2 dev eth1 proto kernel metric 256 pref medium
fe80::/64 dev eth1 proto kernel metric 256 pref medium
default via fe80::1 dev eth1 metric 1024 pref medium

Pinging Cloudflare’s DNS works. Pinging the address with an online tool also works.

root@c1:~# ping 2606:4700:4700::1111
PING 2606:4700:4700::1111(2606:4700:4700::1111) 56 data bytes
64 bytes from 2606:4700:4700::1111: icmp_seq=1 ttl=61 time=7.44 ms
64 bytes from 2606:4700:4700::1111: icmp_seq=2 ttl=61 time=7.15 ms
64 bytes from 2606:4700:4700::1111: icmp_seq=3 ttl=61 time=7.20 ms
64 bytes from 2606:4700:4700::1111: icmp_seq=4 ttl=61 time=7.18 ms
64 bytes from 2606:4700:4700::1111: icmp_seq=5 ttl=61 time=7.16 ms
64 bytes from 2606:4700:4700::1111: icmp_seq=6 ttl=61 time=7.14 ms

--- 2606:4700:4700::1111 ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 5007ms
rtt min/avg/max/mdev = 7.144/7.216/7.440/0.141 ms

But dig does not work:

root@c1:~# dig @2606:4700:4700::1111 www.google.com

; <<>> DiG 9.11.3-1ubuntu1.11-Ubuntu <<>> @2606:4700:4700::1111 www.google.com
; (1 server found)
;; global options: +cmd
;; connection timed out; no servers could be reached

Good. So now you should use tcpdump or similar to check the outgoing traffic from your host, see where it is going, and whether you are getting DNS response packets back or not. It may be a firewall somewhere.
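
For example, on the host (a sketch; replace eth0 with your host’s external interface name):

tcpdump -l -nn -i eth0 port 53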

I’ve been at it quite some time and can’t quite figure out how to use tcpdump correctly.
I went into the container and ran this:

root@c1:/etc# dnsmasq

dnsmasq: failed to create listening socket for port 53: Address already in use

And this:

root@c1:/etc# netstat -anlp | grep -w LISTEN
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      161/systemd-resolve 
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      205/sshd            
tcp6       0      0 :::22                   :::*                    LISTEN      205/sshd

Is that to be expected?

Here’s how I would do it:

First, a refresh of my test config:

I’m using lxdbr1 for IPv4 only:

lxc network show lxdbr1
config:
  ipv4.address: 10.0.171.1/24
  ipv4.nat: "true"
  ipv6.address: none
  ipv6.dhcp: "false"
  ipv6.nat: "true"
description: ""
name: lxdbr1
type: bridge

I’ve created a test container called crouted and added 2 NICs, one (eth0) bridged to lxdbr1 (for IPv4), and the other (eth1) routed to parent enp3s0 (my external interface on the host).

architecture: x86_64
config:
  image.architecture: amd64
  image.description: ubuntu 18.04 LTS amd64 (release) (20200317)
  image.label: release
  image.os: ubuntu
  image.release: bionic
  image.serial: "20200317"
  image.type: squashfs
  image.version: "18.04"
  volatile.base_image: 98e43d99d83ef1e4d0b28a31fc98e01dd98a2dbace3870e51c5cb03ce908144b
  volatile.eth0.hwaddr: 00:16:3e:ec:e2:b5
  volatile.eth1.hwaddr: 00:16:3e:83:d2:60
  volatile.eth1.name: eth1
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000}]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: STOPPED
devices:
  eth0:
    name: eth0
    network: lxdbr1
    type: nic
  eth1:
    ipv6.address: 2a02:nnn:76f4:1::1234
    nictype: routed
    parent: enp3s0
    type: nic
ephemeral: false
profiles:
- default
stateful: false
description: ""

I start the container and check that the IPs and routes have taken effect:

lxc start crouted
Wait a couple of seconds.
lxc ls crouted
+---------+---------+--------------------+------------------------------+-----------+-----------+
|  NAME   |  STATE  |        IPV4        |             IPV6             |   TYPE    | SNAPSHOTS |
+---------+---------+--------------------+------------------------------+-----------+-----------+
| crouted | RUNNING | 10.0.171.52 (eth0) | 2a02:nnn:76f4:1::1234 (eth1) | CONTAINER | 0         |
+---------+---------+--------------------+------------------------------+-----------+-----------+

Check routes inside container:

lxc exec crouted -- ip r
default via 10.0.171.1 dev eth0 proto dhcp src 10.0.171.52 metric 100 
10.0.171.0/24 dev eth0 proto kernel scope link src 10.0.171.52 
10.0.171.1 dev eth0 proto dhcp scope link src 10.0.171.52 metric 100 
lxc exec crouted -- ip -6 r
2a02:nnn:76f4:1::1234 dev eth1 proto kernel metric 256 pref medium
fe80::/64 dev eth0 proto kernel metric 256 pref medium
fe80::/64 dev eth1 proto kernel metric 256 pref medium
default via fe80::1 dev eth1 metric 1024 pref medium

Check ping to external addresses:

 lxc exec crouted -- ping 8.8.8.8 -c 5
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=57 time=23.9 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=57 time=23.8 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=57 time=23.8 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=57 time=24.0 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=57 time=23.7 ms

--- 8.8.8.8 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4007ms
rtt min/avg/max/mdev = 23.725/23.885/24.045/0.141 ms
lxc exec crouted -- ping 2606:4700:4700::1111 -c 5
PING 2606:4700:4700::1111(2606:4700:4700::1111) 56 data bytes
64 bytes from 2606:4700:4700::1111: icmp_seq=1 ttl=59 time=23.4 ms
64 bytes from 2606:4700:4700::1111: icmp_seq=2 ttl=59 time=23.4 ms
64 bytes from 2606:4700:4700::1111: icmp_seq=3 ttl=59 time=23.3 ms
64 bytes from 2606:4700:4700::1111: icmp_seq=4 ttl=59 time=23.3 ms
64 bytes from 2606:4700:4700::1111: icmp_seq=5 ttl=59 time=23.3 ms

--- 2606:4700:4700::1111 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4005ms
rtt min/avg/max/mdev = 23.325/23.378/23.465/0.200 ms

Now, check DNS resolution manually using the dig tool rather than relying on the systemd resolver (which you have shown is listening on 127.0.0.53:53 in your container):

Test using external IPv6 resolver:

lxc exec crouted -- dig @2606:4700:4700::1111 www.linuxcontainers.org

; <<>> DiG 9.11.3-1ubuntu1.11-Ubuntu <<>> @2606:4700:4700::1111 www.linuxcontainers.org
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 6433
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1452
;; QUESTION SECTION:
;www.linuxcontainers.org.	IN	A

;; ANSWER SECTION:
www.linuxcontainers.org. 884	IN	CNAME	rproxy.stgraber.org.
rproxy.stgraber.org.	884	IN	A	149.56.148.5

;; Query time: 23 msec
;; SERVER: 2606:4700:4700::1111#53(2606:4700:4700::1111)
;; WHEN: Thu Mar 19 08:54:12 UTC 2020
;; MSG SIZE  rcvd: 98

Test using external IPv4 resolver:

lxc exec crouted -- dig @8.8.8.8 www.linuxcontainers.org

; <<>> DiG 9.11.3-1ubuntu1.11-Ubuntu <<>> @8.8.8.8 www.linuxcontainers.org
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 10
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;www.linuxcontainers.org.	IN	A

;; ANSWER SECTION:
www.linuxcontainers.org. 899	IN	CNAME	rproxy.stgraber.org.
rproxy.stgraber.org.	899	IN	A	149.56.148.5

;; Query time: 430 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Thu Mar 19 08:54:58 UTC 2020
;; MSG SIZE  rcvd: 98

If these didn’t work, I would then need to check where the packets are getting lost using tcpdump.

First, let’s review how the packets traverse from the container to the host, and how the response comes back:

  1. User runs dig to 2606:4700:4700::1111 inside the container. This means UDP packets with a destination port of 53 will leave the container via eth1, destined for the default gateway address fe80::1.
  2. Both bridged and routed NIC types make use of a Linux network concept called “veth pairs”, where a pair of virtual Ethernet devices is created by the OS and any packets that go in one end come out of the other. We leave one end of the pair on the host and move the other end into the container. In this way, eth0 and eth1 in the container are connected to their respective pair ends on the host. You can see which are the respective host-side veth interfaces by running lxc info crouted

e.g.

 lxc info crouted
Name: crouted
Location: none
Remote: unix://
Architecture: x86_64
Created: 2020/03/19 08:43 UTC
Status: Running
Type: container
Profiles: default
Pid: 14160
Ips:
  lo:	inet	127.0.0.1
  lo:	inet6	::1
  eth0:	inet	10.0.171.52	veth575cd614
  eth0:	inet6	fe80::216:3eff:feec:e2b5	veth575cd614
  eth1:	inet6	2a02:nnn:76f4:1::1234	veth9142f1c9
  eth1:	inet6	fe80::f813:d4ff:fe89:c57f	veth9142f1c9

You can see that eth0 has a host-side end called veth575cd614 and eth1 has a host-side end called veth9142f1c9.

  3. For bridged NIC types, LXD ‘connects’ the host-side veth interface to the parent LXD bridge (in the case of my container’s eth0 this is lxdbr1). For routed NIC types, LXD does not connect the host-side veth interface to anything, and just leaves it ‘connected’ to the host like any other interface.

We can see this by running on the host:

ip a show dev veth9142f1c9
30: veth9142f1c9@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fe:d4:d1:5e:1c:82 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::1/128 scope link 
       valid_lft forever preferred_lft forever
    inet6 fe80::fcd4:d1ff:fe5e:1c82/64 scope link 
       valid_lft forever preferred_lft forever

LXD has added an IPv6 address of fe80::1/128 to the host-side interface of the veth pair.

  4. So packets leaving eth1 in the container destined for fe80::1 will arrive at the host end of the veth pair. After that it is up to the host to “route” the packets where they need to go.
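
(Note: for the host to route packets on behalf of the container, IPv6 forwarding must be enabled on the host. If it isn’t already, something like this would enable it:)

sysctl net.ipv6.conf.all.forwarding=1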

So an expected packet flow for these UDP packets would be:

  • Leave container’s eth1 for fe80::1
  • Arrive at host’s veth interface.
  • Host routes packets out to the Internet via your host’s external interface, in my case enp3s0 (check ip r on host to see your default gateway).
  • Response packets arrive from Internet back at your host’s external interface destined for the container’s IPv6 address.
  • Your host sees the static route LXD adds for the container’s IPv6 address and sends the response packets down the host-side veth interface (you can verify this route as shown just after this list).
  • Response packets appear in the container at eth1.
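
You can confirm the static route LXD adds on the host with something like this (using this example’s address; your veth interface name will differ):

ip -6 route | grep 2a02:nnn:76f4:1::1234

It should show the container’s address routed via the host-side veth interface (in this example, dev veth9142f1c9).
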
  5. So we can see there are several places we can ‘attach’ a tcpdump session to track the flow of these packets in and out. I would suggest:
  • enp3s0 on the host (your external interface).
  • veth9142f1c9 on the host (the host-side end of the veth pair).
  • eth1 inside the container.

Let’s set up tcpdump on enp3s0 and then, in a separate window, run dig inside the container again.

On the host:

tcpdump -l -nn -i enp3s0 host 2a02:nnn:76f4:1::1234 and port 53

Now run the dig command:

lxc exec crouted -- dig @2606:4700:4700::1111 www.linuxcontainers.org

The tcpdump results should show:

sudo tcpdump -l -nn -i enp3s0 host 2a02:nnn:76f4:1::1234 and port 53
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp3s0, link-type EN10MB (Ethernet), capture size 262144 bytes
09:13:06.114853 IP6 2a02:nnn:76f4:1::1234.54264 > 2606:4700:4700::1111.53: 6205+ [1au] A? www.linuxcontainers.org. (64)
09:13:06.138527 IP6 2606:4700:4700::1111.53 > 2a02:nnn:76f4:1::1234.54264: 6205$ 2/0/1 CNAME rproxy.stgraber.org., A 149.56.148.5 (98)

This shows outbound DNS request packets leaving with a source address of 2a02:nnn:76f4:1::1234 going to 2606:4700:4700::1111 and the query for A? www.linuxcontainers.org..

Then the response packet coming back with an answer of CNAME rproxy.stgraber.org., A 149.56.148.5.

But this only shows us that the response packets made it back to the host’s external interface.

Let’s re-run the test, but now with tcpdump running on veth9142f1c9:

sudo tcpdump -l -nn -i veth9142f1c9 host 2a02:nnn:76f4:1::1234 and port 53
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on veth9142f1c9, link-type EN10MB (Ethernet), capture size 262144 bytes
09:15:57.342239 IP6 2a02:nnn:76f4:1::1234.45415 > 2606:4700:4700::1111.53: 17864+ [1au] A? www.linuxcontainers.org. (64)
09:15:57.366240 IP6 2606:4700:4700::1111.53 > 2a02:nnn:76f4:1::1234.45415: 17864$ 2/0/1 CNAME rproxy.stgraber.org., A 149.56.148.5 (98)

Great, so we can see that the host is correctly routing the response packets that are coming in on enp3s0 down veth9142f1c9.

Finally, let’s check that the packets are arriving at the container’s eth1 interface:

sudo lxc exec crouted -- tcpdump -l -nn -i eth1 host 2a02:nnn:76f4:1::1234 and port 53
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes
09:18:39.547749 IP6 2a02:nnn:76f4:1::1234.33122 > 2606:4700:4700::1111.53: 4365+ [1au] A? www.linuxcontainers.org. (64)
09:18:39.571926 IP6 2606:4700:4700::1111.53 > 2a02:nnn:76f4:1::1234.33122: 4365$ 2/0/1 CNAME rproxy.stgraber.org., A 149.56.148.5 (98)

Success! The packets have now been confirmed arriving at the container.

So this is how you can break down the problem to see where the issue lies. Hope that helps.

Thanks to your steps and instructions I was able to get it working.

It turns out enabling port 53 in ufw was not enough, as it still blocked the packets. Completely disabling ufw did the trick. Later I figured out that packet forwarding also needs to be enabled in ufw: https://help.ubuntu.com/lts/serverguide/firewall.html

sudo nano /etc/default/ufw

Make this change:

DEFAULT_FORWARD_POLICY="ACCEPT"
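
Then reload ufw so the change takes effect:

sudo ufw reload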

Thank you very much for all the help!
Can I at least buy you a coffee or something?

Excellent, glad you got it working 🙂

@stgraber has posted some info on donating to Ubuntu’s community fund if you would like to make a donation.

Hey @tomp, thanks again for everything!!

If I wanted to limit the ingress and egress of a container, I have since learned that you need to use a bridge for that? So how about a managed bridge: is it impossible to have the IPv6 /64 on a managed bridge? Or how about one bridge for IPv4 (leaving the IPv4 /32 address on eth0 of the host) and another for IPv6 that gives each container its IPv6 address automatically?
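
(The limits I mean are the ones set via LXD’s NIC device options, e.g. something like this on a bridged eth0 of container c1:)

lxc config device set c1 eth0 limits.ingress 10Mbit
lxc config device set c1 eth0 limits.egress 10Mbit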

Should I make a new thread for this for better search engine visibility?