How to use a second IP with a container and routed NIC

My case is that I have a root server on netcup.eu, with ubuntu:20.04, and inside it I want to install BigBlueButton in a container (which requires ubuntu:18.04). I have also purchased a second IP which I want to use for the BBB container, and netcup routes the second IP to the primary one.

1. Install LXD

apt install snapd
snap install lxd --channel=4.0/stable
snap list
lxc list
lxd init

The last command asks a series of questions; the answers look like this:

Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (btrfs, dir, lvm, zfs, ceph) [default=zfs]: btrfs
Create a new BTRFS pool? (yes/no) [default=yes]:
Would you like to use an existing empty block device (e.g. a disk or partition)? (yes/no) [default=no]:
Size in GB of the new loop device (1GB minimum) [default=30GB]: 70
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
Would you like the LXD server to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:

For the storage backend I am using btrfs because I need to install Docker inside the LXC containers, and btrfs supports running Docker nested in a container (the default zfs backend does not work well for this).
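
To double-check which backend the storage pool actually uses:

lxc storage list
lxc storage show default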

2. Remove the second IP from the host

The default configuration assigns the second IP to eth0 on the host. We can remove it like this:

ip addr del 37.121.182.6/32 dev eth0

To remove it permanently, edit /etc/netplan/50-cloud-init.yaml and comment out the line with the second IP. Then run: netplan apply
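
As a rough sketch, the edited /etc/netplan/50-cloud-init.yaml would look something like this (the primary address and gateway below are placeholders; only the commented-out second IP is from this example, and the real file generated by cloud-init may be laid out differently):

network:
    version: 2
    ethernets:
        eth0:
            addresses:
            - 203.0.113.10/22      # primary IP of the host (placeholder)
            # - 37.121.182.6/32    # second IP, commented out so the container can use it
            gateway4: 203.0.113.1  # placeholder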

3. Create a profile for the container

lxc profile create bbb
lxc profile edit bbb
lxc profile list

Set the content of the bbb profile to something like this:

config:
  security.nesting: "true"
  user.network-config: |
    version: 2
    ethernets:
        eth0:
            addresses:
            - 37.121.182.6/32
            nameservers:
                addresses:
                - 8.8.8.8
                search: []
            routes:
            -   to: 0.0.0.0/0
                via: 169.254.0.1
                on-link: true
description: Routed LXD profile
devices:
  eth0:
    ipv4.address: 37.121.182.6
    nictype: routed
    parent: eth0
    host_name: veth-bbb
    type: nic
name: bbb
used_by:

The setting security.nesting: "true" is needed in order to run Docker inside the container. For an unprivileged container it does not really have any security implications.

We are using the second public IP that we deleted from the host interface (37.121.182.6/32). Because this IP is hard-coded, the profile can be used for only one container; to build other containers like this, we should make a copy of the profile and modify it.

The user.network-config part under config is the network configuration of the container itself, applied through cloud-init. The default gateway is the link-local address 169.254.0.1, which LXD sets on the host side of the veth pair for routed NICs.

Notice that devices.eth0.nictype is routed. We could have used the ipvlan type as well, and most of the configuration would be almost the same; however, with ipvlan it seems that the container cannot ping the public IP of the host.

The field devices.eth0.host_name sets the name of the virtual interface that will be created on the host. If we don't specify it, a random name is used each time the container is started, which would make it difficult to write the firewall rules (which we will see later).

The field devices.eth0.parent is the name of the host interface that this virtual interface is attached to. In our case it is not really necessary and can be left out or commented out.
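
After editing, we can print the profile back to make sure the YAML was accepted as intended:

lxc profile show bbb
lxc profile device show bbb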

4. Launch the container

The latest stable version of BBB requires ubuntu:18.04.

lxc launch ubuntu:18.04 bbb --profile default --profile bbb
lxc list
lxc list -c ns4t
lxc info bbb

With ip addr notice that a new interface named veth-bbb has been created, with IP 169.254.0.1/32. With ip ro notice that a route like this has been added:

37.121.182.6 dev veth-bbb scope link

Try also these commands:

lxc exec bbb -- ip addr
lxc exec bbb -- ip ro

Notice that the interface inside the container has IP 37.121.182.6/32 and the default gateway is 169.254.0.1.

We can also ping from the host to 37.121.182.6; however, from the container we cannot ping the host or anything outside (the Internet):

ping 37.121.182.6
lxc exec bbb -- ping 169.254.0.1
lxc exec bbb -- ping 8.8.8.8

5. Fix networking

The problem is that I have installed firewalld on the host and it blocks these connections. To fix this problem we can add the interface that is connected to the container to the trusted zone of the firewall:

firewall-cmd --add-interface=veth-bbb --zone=trusted --permanent
firewall-cmd --reload
firewall-cmd --list-all --zone=trusted

Note: If we had not specified the name of the veth interface in the profile, it would get a random name each time the container is started, and the firewall configuration above would not work.
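
To double-check that the interface really ended up in the trusted zone (the container must be running so that veth-bbb exists):

firewall-cmd --get-zone-of-interface=veth-bbb
firewall-cmd --zone=trusted --list-interfaces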

Now connections should work:

lxc exec bbb -- ping 169.254.0.1
lxc exec bbb -- ping 8.8.8.8

However, there is still something that does not work: from outside the host we cannot ping the container. The problem is that this traffic goes through the FORWARD chain of iptables, and the firewall currently blocks it. To fix this, we add rules that allow forwarding of all traffic going out to the interface veth-bbb:

firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 0 -o veth-bbb -j ACCEPT
firewall-cmd --permanent --direct --add-rule ipv6 filter FORWARD 0 -o veth-bbb -j ACCEPT
firewall-cmd --reload

We can test that everything works with netcat. On the server run lxc exec bbb -- nc -l 443. Outside the server run nc 37.121.182.6 443. Every line typed outside the server should then be displayed inside the container.
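
The same kind of check can be done for a UDP port, which BBB will also need for media; the port below is just an example, and the exact nc flags may differ between netcat variants:

# inside the container: listen on an arbitrary UDP port
lxc exec bbb -- nc -u -l 16384

# outside the server: send datagrams to the container's public IP
nc -u 37.121.182.6 16384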

Note: If we had used the NIC type ipvlan instead of routed, the relevant iptables chains would have been INPUT and OUTPUT rather than FORWARD, and the filter rules above would have been a bit different.

6. Install BBB inside the container

For the sake of completeness, let’s also see how to install BBB inside the container.

lxc exec bbb -- bash
wget http://ubuntu.bigbluebutton.org/repo/bigbluebutton.asc -O- | apt-key add -
wget -q https://ubuntu.bigbluebutton.org/bbb-install.sh
chmod +x bbb-install.sh
./bbb-install.sh -v bionic-240 -s bbb.example.org -e email@example.org  -g -w
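
Once the installer finishes, the built-in checks can be used to see whether the setup looks sane (still inside the container):

bbb-conf --check
bbb-conf --status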

Add admins and users:

docker exec greenlight-v2 \
    bundle exec rake \
    admin:create["Full Name 1","email1@example.org","passw1","username1"]
docker exec greenlight-v2 \
    bundle exec rake \
    user:create["Full Name 2","email2@example.org","passw2","username2"]

Fix html5 services

If you run bbb-conf --status you will notice that html5 services are not working. They can be fixed like this:

# Override /lib/systemd/system/freeswitch.service
mkdir /etc/systemd/system/freeswitch.service.d
cat <<EOF | tee /etc/systemd/system/freeswitch.service.d/override.conf
[Service]
CPUSchedulingPolicy=other
EOF

# override /usr/lib/systemd/system/bbb-html5-frontend@.service
mkdir /etc/systemd/system/bbb-html5-frontend@.service.d
cat <<EOF | tee /etc/systemd/system/bbb-html5-frontend@.service.d/override.conf
[Service]
CPUSchedulingPolicy=other
EOF

# override /usr/lib/systemd/system/bbb-html5-backend@.service
mkdir /etc/systemd/system/bbb-html5-backend@.service.d
cat <<EOF | tee /etc/systemd/system/bbb-html5-backend@.service.d/override.conf
[Service]
CPUSchedulingPolicy=other
EOF

systemctl daemon-reload
bbb-conf --restart
bbb-conf --status

(Thanks to this post)

7. Install a TURN server in another container

In some network-restricted sites or development environments, such as those behind NAT or a firewall that restricts outgoing UDP connections, users may be unable to make outgoing UDP connections to your BigBlueButton server.

The TURN protocol is designed to allow UDP-based communication flows like WebRTC to bypass NAT or firewalls by having the client connect to the TURN server, and then have the TURN server connect to the destination on their behalf.

The TURN server also implements the STUN protocol, which is used to allow direct UDP connections through certain types of firewalls that otherwise might not allow them.

Using a TURN server under your control improves the success of connections to BigBlueButton and also improves user privacy, since they will no longer be sending IP address information to a public STUN server.

Because the TURN protocol is not CPU- or memory-intensive, and because the TURN server needs to use port 443, it makes sense to give it another public IP and to install it in a container with a routed NIC.

Create the profile

We can copy and modify the profile for BBB:

lxc profile copy bbb turn
lxc profile ls
lxc profile edit turn

Change the public IP and the host_name. It should look like this:

config:
  security.nesting: "true"
  user.network-config: |
    version: 2
    ethernets:
        eth0:
            addresses:
            - 37.121.183.102/32
            nameservers:
                addresses:
                - 8.8.8.8
                search: []
            routes:
            -   to: 0.0.0.0/0
                via: 169.254.0.1
                on-link: true
description: Routed LXD profile
devices:
  eth0:
    host_name: veth-turn
    ipv4.address: 37.121.183.102
    nictype: routed
    parent: eth0
    type: nic
name: turn
used_by:

The modifications are these:

config:
    ethernets:
        eth0:
            addresses:
            - 37.121.183.102/32

devices:
  eth0:
    host_name: veth-turn
    ipv4.address: 37.121.183.102

The setting security.nesting: "true" is not actually needed here, because we don't run Docker inside this container, but it does no harm.
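
If we prefer to drop it anyway, the standard profile commands can remove it:

lxc profile unset turn security.nesting
lxc profile show turn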

Launch the container

lxc launch ubuntu:20.04 turn --profile default --profile turn
lxc list
lxc info turn

ip addr show veth-turn
ip ro | grep veth

lxc exec turn -- ip addr
lxc exec turn -- ip ro

Fix networking

firewall-cmd --permanent --zone=trusted --add-interface=veth-turn 

firewall-cmd --permanent --direct --add-rule \
        ipv4 filter FORWARD 0 -o veth-turn -j ACCEPT
firewall-cmd --permanent --direct --add-rule \
        ipv6 filter FORWARD 0 -o veth-turn -j ACCEPT

firewall-cmd --reload

firewall-cmd --zone=trusted --list-all
iptables-save | grep veth

lxc exec turn -- ping 169.254.0.1
lxc exec turn -- ping 8.8.8.8

Use netcat as well to make sure that you can reach any TCP or UDP port inside the container.
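
For example (3478 is the standard STUN/TURN port and 443 will be used for TURN over TLS; the exact nc flags may differ between netcat variants):

# on the server: listen on TCP port 443 inside the container
lxc exec turn -- nc -l 443
# from outside the server: connect to it
nc 37.121.183.102 443

# repeat the check for a UDP port, e.g. the STUN/TURN port
lxc exec turn -- nc -u -l 3478
nc -u 37.121.183.102 3478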

Install coturn inside the container

lxc exec turn -- bash

wget -qO- https://ubuntu.bigbluebutton.org/bbb-install.sh \
    | bash -s -- -c turn.example.com:1234abcd -e info@example.com
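
Assuming the installer sets up the standard coturn package, we can check that the service is running and listening (the daemon's process name is normally turnserver):

systemctl status coturn
ss -tulpn | grep turnserver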

For more details see: https://github.com/bigbluebutton/bbb-install#install-a-turn-server

Now reinstall BBB, adding the option -c turn.example.com:1234abcd to the installation command, like this:

lxc exec bbb -- bash

./bbb-install.sh -g -w \
        -v bionic-240 \
        -s bbb.example.org \
        -e email@example.org  \
        -c turn.example.com:1234abcd

For testing that everything works as expected, see: BigBlueButton : Configure TURN
