LAN IP address for containers using a host bridge with Docker

Hello lxd community,
I’m new to LXC containers and have been playing with them for a while. Now I need to set up some reliable containers for my home server.
I used LXD v4 for some time and followed this guide, How to get LXD containers get IP from the LAN with routed network – Mi blog lah!, to run some containers for testing, and it worked fine.
Now I have an old laptop connected to the router via Ethernet, on which I have installed Ubuntu Server 22.04, and found LXD v5 already installed. I tried the same tutorial as above, but it didn’t work, so I looked for an alternative solution and found this: https://thenewstack.io/how-to-create-a-bridged-network-for-lxd-containers/
I managed to get the br0 bridge up (after replacing gateway4 with routes, as recommended here: netplan generate: `gateway4` has been deprecated, use default routes instead - Unix & Linux Stack Exchange).
But I still don’t have LAN IPs for the containers; now they don’t have any IP at all.
What’s the best way to get it working? Thanks

Please show ip a and ip r on the host and inside a container?
Also, please show lxc network show <network> and lxc config show <instance> --expanded?

On the host

jarod@ubuntuacerserver:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br0 state UP group default qlen 1000
    link/ether 1c:75:08:df:ee:63 brd ff:ff:ff:ff:ff:ff
3: wlp2s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 90:00:4e:58:c9:ad brd ff:ff:ff:ff:ff:ff
4: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether d2:56:2b:37:8d:f6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.25/24 brd 192.168.1.255 scope global br0
       valid_lft forever preferred_lft forever
    inet6 fe80::d056:2bff:fe37:8df6/64 scope link 
       valid_lft forever preferred_lft forever
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:51:55:9e:aa brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
6: lxdbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 00:16:3e:e6:76:d0 brd ff:ff:ff:ff:ff:ff
    inet 10.150.204.1/24 scope global lxdbr0
       valid_lft forever preferred_lft forever
    inet6 fd42:4c5c:f9a9:de00::1/64 scope global 
       valid_lft forever preferred_lft forever
8: vethe77fded2@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP group default qlen 1000
    link/ether c6:1f:f7:5e:c7:f6 brd ff:ff:ff:ff:ff:ff link-netnsid 0
10: veth9d3dffb1@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP group default qlen 1000
    link/ether 02:7d:d5:69:b6:65 brd ff:ff:ff:ff:ff:ff link-netnsid 1
12: veth6f24f4e1@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP group default qlen 1000
    link/ether 0a:09:59:38:8c:5a brd ff:ff:ff:ff:ff:ff link-netnsid 2
jarod@ubuntuacerserver:~$ ip r
default via 192.168.1.1 dev br0 proto static 
10.150.204.0/24 dev lxdbr0 proto kernel scope link src 10.150.204.1 linkdown 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 
192.168.1.0/24 dev br0 proto kernel scope link src 192.168.1.25
jarod@ubuntuacerserver:~$ lxc network show br0
config: {}
description: ""
name: br0
type: bridge
used_by:
- /1.0/instances/azura
- /1.0/instances/lxd-dashboard
- /1.0/instances/ubuntu
- /1.0/profiles/bridgeprofile
- /1.0/profiles/default
managed: false
status: ""
locations: []
jarod@ubuntuacerserver:~$ lxc config show azura
architecture: x86_64
config:
  image.architecture: amd64
  image.description: Debian bullseye amd64 (20221230_05:24)
  image.os: Debian
  image.release: bullseye
  image.serial: "20221230_05:24"
  image.type: squashfs
  image.variant: default
  volatile.base_image: aff620fd83925b23bd4ba5929e1b93ac3a05275d068f11022c77e9216f8c091d
  volatile.cloud-init.instance-id: 0ee78fcb-ee2e-4f17-ba5f-28113e06b7e8
  volatile.eth0.host_name: vethe77fded2
  volatile.eth0.hwaddr: 00:16:3e:03:94:4f
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
  volatile.uuid: 27afb2f7-09bd-496c-9cd2-2c9749c23b57
devices: {}
ephemeral: false
profiles:
- bridgeprofile
stateful: false
description: ""
jarod@ubuntuacerserver:~$ lxc config show ubuntu
architecture: x86_64
config:
  image.architecture: amd64
  image.description: Ubuntu jammy amd64 (20221230_07:42)
  image.os: Ubuntu
  image.release: jammy
  image.serial: "20221230_07:42"
  image.type: squashfs
  image.variant: default
  volatile.base_image: c63127ad1a9f826091d4903720d7e9c29feb94ea90fe58b808570c94787c5454
  volatile.cloud-init.instance-id: 90ccdd55-e6c0-4f8c-8f6d-bef892e58348
  volatile.eth0.host_name: veth6f24f4e1
  volatile.eth0.hwaddr: 00:16:3e:fd:59:48
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
  volatile.uuid: de09e685-2ffc-4a8c-bbdb-05a58f5d230a
devices: {}
ephemeral: false
profiles:
- bridgeprofile
stateful: false
description: ""
jarod@ubuntuacerserver:~$ cat /etc/netplan/00-installer-config.yaml 
# This is the network config written by 'subiquity'
network:
  ethernets:
    enp1s0:
      dhcp4: no
      dhcp6: no
  version: 2
  renderer: networkd

  bridges:
   br0:
    interfaces: [enp1s0]
    addresses: [192.168.1.25/24]
    routes:
                - to: default
                  via: 192.168.1.1
#    gateway4: 192.168.1.1
    mtu: 1500
    nameservers:
     addresses: [8.8.8.8]
    parameters:
     stp: true
     forward-delay: 4
jarod@ubuntuacerserver:~$ lxc profile show bridgeprofile
config: {}
description: Bridged networking LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: bridgeprofile
used_by:
- /1.0/instances/ubuntu
- /1.0/instances/azura

On the containers

root@azura:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
7: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:03:94:4f brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::216:3eff:fe03:944f/64 scope link 
       valid_lft forever preferred_lft forever
root@ubuntu:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
11: eth0@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:fd:59:48 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::216:3eff:fefd:5948/64 scope link 
       valid_lft forever preferred_lft forever

ip r gives no output in either container (azura is the Debian 11 container).

Thanks a lot for the help.

Not exactly an answer to your question, but maybe an alternative solution.

Set your LXD host back to default settings (no bridge).
“Default” netplan on the LXD host:

# This is the network config written by 'subiquity'
network:
  ethernets:
    enp1s0:
      dhcp4: true
  version: 2

New test container:
lxc launch ubuntu:22.04 c1

Add a macvlan device (since you use Ethernet and not WLAN, this should work):
lxc config device add c1 eth1 nic nictype=macvlan parent=enp1s0

Netplan inside container c1:

root@proxy:~# cat /etc/netplan/50-cloud-init.yaml 
# This file is generated from information provided by the datasource.  Changes
# to it will not persist across an instance reboot.  To disable cloud-init's
# network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
    version: 2
    ethernets:
        eth0:
            dhcp4: true
            dhcp4-overrides:
             route-metric: 400
        eth1:
            dhcp4: true
            dhcp4-overrides:
             route-metric: 100

netplan generate && netplan apply

Now you have two interfaces: one directly on your home network (192.168.1.0/24) and one on the default lxdbr0 (10.x.x.x).

For inter-container communication, you can use the 10.x network or the container.lxd names (like ping c1.lxd); for communication with other devices, the container uses the DHCP address from your router/gateway (or you set a static one). With the overridden route metrics, the (faster) macvlan interface is preferred.

If you don’t need container-to-container and container-to-host connectivity, you can remove eth0 from the containers. After that, the container can’t reach the LXD host through a network device.
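
If eth0 comes from the default profile, it can’t be dropped with a plain lxc config device remove; a sketch of one way to mask it, assuming a reasonably recent LXD that supports the none device type:

# Block the eth0 NIC inherited from the profile (none acts as an inheritance blocker)
lxc config device add c1 eth0 none
# Verify the effective devices afterwards
lxc config show c1 --expanded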


You’re using Docker on the host; see How to configure your firewall - LXD documentation
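
In short, Docker sets the iptables FORWARD policy to DROP, which can also drop the traffic of bridged LXD containers. A sketch of the kind of rules described in the linked documentation, assuming br0 is the bridge in question (adjust the interface name to your setup):

# Let bridged container traffic bypass Docker's FORWARD DROP policy
iptables -I DOCKER-USER -i br0 -j ACCEPT
iptables -I DOCKER-USER -o br0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT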

Thanks a lot :heart_eyes:. It worked for the Ubuntu containers, but the Debian one doesn’t have netplan, and there is no /etc/networks/interfaces, so I don’t know how to configure it.
By the way, I was in a bit of a hurry, so I didn’t remove the br0 bridge and added the device to br0 instead.

Ah yes, during the Ubuntu Server installation I ticked Docker. Since I have two Docker containers that I would like to move to the new server, I installed it at the same time. I have read that it’s not recommended to install Docker inside an LXC container; I don’t know if that recommendation still stands with the new version of LXD.

This may be useful:

https://www.youtube.com/watch?v=_fCSSEyiGro
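
If the goal is to run Docker inside an LXD container, the usual prerequisite is enabling nesting on the instance. A minimal sketch, assuming a new container named docker1; storage-driver details are beyond this sketch:

# Allow nested container runtimes such as Docker inside this LXD container
lxc launch ubuntu:22.04 docker1 -c security.nesting=true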


On Debian, you could just create /etc/network/interfaces and configure eth1 in the usual way.
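
A minimal sketch of what that file could look like for DHCP on eth1, assuming the ifupdown package is available in the container:

# /etc/network/interfaces – classic Debian ifupdown configuration
auto eth1
iface eth1 inet dhcp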

I tried to do so, but I got an error saying that I can’t create a new directory for networks:

root@azura:~# cd /etc
root@azura:/etc# mkdir networks
mkdir: cannot create directory ‘networks’: File exists
root@azura:/etc# cd networks
bash: cd: networks: Not a directory
root@azura:/etc# cat networks
default         0.0.0.0
loopback        127.0.0.0
link-local      169.254.0.0

After some googling, I understood that this container is using systemd-networkd, which I’m not familiar with.

So I looked in

root@azura:/etc/systemd/network# ls
eth0.network

And inside

root@azura:/etc/systemd/network# cat eth0.network
[Match]
Name=eth0
[Network]
DHCP=true
[DHCPv4]
UseDomains=true

So I added a new config file for eth1 (using cat, as I’m not comfortable with vim :sweat_smile:):

root@azura:/etc/systemd/network# cat <<EOF> eth1.network
[Match]
Name=eth1
[Network]
DHCP=true
[DHCPv4]
UseDomains=true
EOF

Then I applied the changes by restarting systemd-networkd:
root@azura:/etc/systemd/network# systemctl restart systemd-networkd

And it worked.
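
A quick way to confirm the interface really picked up a lease is networkctl; a small check, for reference:

# Show systemd-networkd's view of eth1, including its DHCP address
networkctl status eth1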

Thank you @qupfer and @tomp for the help. I still have some questions about filesystem and storage that I will ask in a dedicated thread.


Yes, the legacy Debian file is /etc/network/interfaces, without an “s” on “network”.
The file /etc/networks exists for other reasons.
However, using systemd-networkd is also a good and “correct” way to configure your network.
Some people will say it’s the better/modern way; others say it’s not the preferred “Debian” way (mostly people who dislike systemd but haven’t switched to Devuan).
A third group will use NetworkManager/nmcli :smiley:
