Another 'networking issue' or 'how to connect containers to more than one network using a bridge or macvlan'

A little different from a recent “Networking Issue” posted by @rayj00.
I had thought of piggybacking my question there, as the last answer proffered would seem to ‘help’, but I didn’t want to hijack the thread, so I am starting another.

Running on Debian Stretch, using the snap package of LXD, and running predominantly Debian containers.

I would like to be able to access the containers not only from the host but also from at least my private network, and possibly, in some future iteration, from outside over the web. A number of the documents that give guidance are out of date, so I was looking for documents from roughly the last 18 months.
https://bayton.org/docs/linux/lxd/lxd-zfs-and-bridged-networking-on-ubuntu-16-04-lts/
This one seemed straightforward and easy to understand.
https://bayton.org/docs/linux/lxd/lxd-zfs-and-bridged-networking-on-ubuntu-16-04-lts/
This one didn’t seem to address my question. There is a paragraph about adding another bridge, where part of the process is editing /etc/network/interfaces.d/eth0.cfg.
This may be where Debian and Ubuntu differ: I have an interfaces.d directory, but there is nothing in it (most definitely no eth0.cfg!).
I also looked at a thread started by @wzhyuan, but I don’t understand what’s the same and what’s different.

So I went with the clearest, most straightforward explanation (at least I could follow it - sorry, I’m quite new to the under-the-hood stuff!).
I have 10 veth* items that show up in ifconfig, yet now when I use $ lxc list, none of the containers is listed with an IP address (neither IPv4 nor IPv6).

For the moment I am going to revert the /etc/network/interfaces file to its previous state.

Hi,
I think these instructions will work for you to get bridged mode with LXD:
In the host:

$ sudo apt-get update
$ sudo apt-get install bridge-utils

Create and configure the bridge:

$ sudo nano /etc/network/interfaces
auto lo
iface lo inet loopback
#auto enp0s3            # enp0s3 is the real NIC in my computer
#iface enp0s3 inet static
auto br0                # br0 is the name I use for the bridge interface
iface br0 inet static
       address 192.168.1.150
       netmask 255.255.255.0
       gateway 192.168.1.1
       dns-nameservers 8.8.8.8 8.8.4.4
       bridge_ports enp0s3   # the real NIC is connected to the bridge
       bridge_stp off
       bridge_fd 0
       bridge_maxwait 0
$ sudo reboot

Now, the host has a new interface br0 working like a switch, which the containers can connect to so that each behaves like a real computer on your LAN.
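
(If you want to check the bridge before going further, a quick sanity check might look like the following; brctl comes from the bridge-utils package installed above, and enp0s3 / 192.168.1.150 are the example names and addresses used in this post.)

$ ip addr show br0     # br0 should carry the host's LAN address (192.168.1.150 here)
$ brctl show br0       # enp0s3 should appear in the interfaces column as a bridge port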
Now, you have to create a new profile:

$ lxc profile copy default bridge
$ lxc profile edit bridge 
config: {}
description: bridge profile
devices:
  eth0:
    nictype: bridged
    parent: br0      # replace lxdbr0 with br0 in order to use the 'switch' br0
    type: nic
  root:
    path: /
    pool: lxd
    type: disk
name: bridge
used_by: []
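
(After saving, you can double-check what the profile contains; this is just a verification step, using the profile name created above.)

$ lxc profile show bridge    # should print the devices block with nictype: bridged and parent: br0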

Lastly, you must use the new profile:

$ lxc launch ubuntu:x c1 -p bridge    # to create a new container in bridge mode
$ lxc profile assign c1 bridge        # to assign the profile to an existing container

The container must be correctly configured with IP, netmask, gateway and DNS to work in your LAN. If it’s all correct, the container will be another computer in your LAN.
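
(As an illustration of that last point, and assuming the container runs Debian/Ubuntu with the classic ifupdown config and the same 192.168.1.0/24 LAN as in the host example above, a static setup inside the container could look roughly like this; DHCP from your router is the simpler alternative, and 192.168.1.151 is just an assumed free address.)

$ lxc exec c1 -- nano /etc/network/interfaces
auto eth0
iface eth0 inet static
       address 192.168.1.151         # any free address on the LAN
       netmask 255.255.255.0
       gateway 192.168.1.1           # the LAN router, same as on the host
       dns-nameservers 8.8.8.8
$ lxc restart c1                     # restart so the new config takes effect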

If your host is a virtual machine in VirtualBox, you have to configure the network adapter in VirtualBox as bridged mode with Promiscuous Mode set to Allow All.


MACVLAN:
On the host, you don’t have to change anything in the network config, but you do have to create a new profile:

$ lxc profile copy default macvlan
$ lxc profile edit macvlan
description:  macvlan profile
devices:
  eth0:
    nictype: macvlan     # replace bridged with macvlan
    parent: enp0s3
    type: nic
  root:
    path: /
    pool: lxd
    type: disk
name: macvlan
used_by: []

Now the same as before, you must use the new profile:

$ lxc launch ubuntu:x c1 -p macvlan    # to create a new container in macvlan mode
$ lxc profile assign c1 macvlan        # to assign the profile to an existing container

The container must be correctly configured with IP, netmask, gateway and DNS to work in your LAN. If it’s all correct, the container will be another computer in your LAN; nevertheless, the container and the host will not be able to talk to each other.
Macvlan mode did not work for me in VirtualBox (I tried it a few months ago).
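
(If host-to-container traffic matters in the macvlan case, one common workaround - not from this thread, just a sketch - is to give the host its own macvlan interface on the same parent NIC; enp0s3 and 192.168.1.160 are assumed example values.)

$ sudo ip link add mvlan0 link enp0s3 type macvlan mode bridge
$ sudo ip addr add 192.168.1.160/24 dev mvlan0    # a free address on the LAN
$ sudo ip link set mvlan0 up                      # the host can now reach the macvlan containers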

I suppose that you are not working with wifi. If you are working with wifi, neither the bridge nor macvlan will work.

Regards


I haven’t tried your proposed solution yet but have some questions.
(A very minor one first - - - Debian now allows the use of just ‘apt’ rather than ‘apt-get’, with ‘apt’ seeming a bit neater in its output - - - doesn’t Ubuntu do the same?)

After the creation and configuration of the bridge - - - why is it necessary to reboot the machine?

Your final point is to use the new profile by creating a new container.
What is done for existing containers?

I am using wifi for some machines here - - - why does the bridge not work for that?

TIA

Hi,

  • You can use apt without problem.

  • Reboot in order to use the new network config. It’s possible to do it without rebooting, using ifdown and ifup (see the sketch after this list).

  • For new container:
    $ lxc launch ubuntu:x c1 -p macvlan    # to create a new container in macvlan mode

  • For existing containers:

$ lxc profile assign c1 macvlan    # to assign the profile to an existing container

  • Problems with wifi arise only if the host is using wifi. The AP expects one MAC address for the wifi connection from your host, but you would have several MAC addresses (one for the host and one for each container). If your host is wired, the containers will access the local net without problems (via bridge or macvlan) and will communicate with any other computer on the LAN (wired or wireless).
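
(Regarding the reboot point above, a sketch of applying the new config without rebooting - run it from the console rather than over SSH, since the NIC goes down briefly; enp0s3 and br0 are the example names from the first answer.)

$ sudo ifdown enp0s3     # release the old config on the physical NIC (if it was up)
$ sudo ifup br0          # bring up the bridge with the static address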

Regards

Wow - - - thank you for a very quick response!
For existing containers what do I use if I would prefer to use a bridge instead of macvlan?

Assuming that the above is possible, then a solution is presented for, I think, all the networking possibilities currently available for containers in LXD - - - wonderful!

Hi,

Configure the bridge on the host, create a new profile and apply it (see my first answer):
$ lxc profile assign c1 bridge    # assign the profile to an existing container, where c1 is the container's name and bridge the profile's name

Believe me, it works and works very well. It is messy the first time, but once done, it works and the containers will have a network connection.
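
(A quick way to confirm the reassignment worked, with c1 being the example container name from above.)

$ lxc restart c1    # pick up the new profile
$ lxc list c1       # the IPv4 column should now show an address from your LAN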

You can switch an existing container between the private network, a bridge, or macvlan.
The way to do this is:

  1. Stop the container.

    lxc stop mycontainer

  2. Apply the appropriate new profile

    lxc profile assign mycontainer default # switch to private network, OR
    lxc profile assign mycontainer default,bridgeprofile # switch to bridge, OR
    lxc profile assign mycontainer default,macvlanprofile # switch to macvlan

  3. Start the container

    lxc start mycontainer
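
(To confirm which profiles a container currently carries after step 2, something like the following should do; mycontainer is the example name used in the steps.)

    $ lxc config show mycontainer | grep -A 2 profiles    # lists the applied profiles
    $ lxc list mycontainer                                # check the IPv4 column after starting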

OK - - - was able to complete the operation and can report that it was a success !!! (Yah!!)

So - - - which gives me greater security (makes it more difficult to get into the container)?
Running the container on a private network or running the container using bridgeprofile?

As a secondary question - - - if I want to have some containers on the private network and some using bridgeprofile - - - is all that’s necessary to:

  1. stop the container
  2. assign the correct profile (default or bridge)
  3. start the container

(I am assuming that, for someone using macvlan, it would be included in #2 just above.)

TIA

You can easily switch between all three: private network, bridge, and macvlan. That is what I wrote earlier. The container gets its network configuration over DHCP; therefore, a container can adapt to your network requirements.

Private network or bridge? They become equivalent when you add a firewall to the bridged container. That is, when you configure the firewall to allow exactly the same network services for the bridged container, there are no security differences. You would, though, get usability differences: how easy it is to set up the firewall or iptables to allow a new network service in the respective container.
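
(As a rough illustration of “allow the exact same network services”, assuming the bridged container runs Debian/Ubuntu and you are happy to use ufw - neither is stated in this thread - locking it down to, say, SSH only could look like this.)

$ lxc exec c1 -- apt install ufw
$ lxc exec c1 -- ufw default deny incoming    # drop everything unsolicited
$ lxc exec c1 -- ufw allow 22/tcp             # then open only the services you need
$ lxc exec c1 -- ufw enable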

In practice, when there is a cooperative environment and you have containers talking with other computers on the LAN, it is simpler to use bridged containers.

Thank you for your explanation.

It would seem that it is time to learn a LOT more about firewalls.

Combining (by quoting) the posts that together comprise the solution. Please note - - - this is not ‘my’ solution; rather, the solution is the product of two other people’s postings!!!

@mgregal

should be profile apply

LXD 2.0.11 and LXD 2.21 differ on this option. One version wants ‘apply’, the other ‘assign’.

apply works on both as we didn’t want to break scripts when people upgrade from 2.0.
For 3.0, I’m likely to keep apply as an alias of assign as the new command line parser makes such aliases pretty trivial to handle.
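
(So, in practice, both spellings below do the same thing; c1 and bridge are the example names from earlier in the thread.)

$ lxc profile assign c1 bridge    # the newer spelling (LXD 2.21 / 3.0)
$ lxc profile apply c1 bridge     # the 2.0-era spelling, kept as an alias so old scripts keep working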