Access to container from LAN and WAN

On an Ubuntu 20.04 host, my Debian 12 container is up.
I can ping the container from the Ubuntu host.
I can ping the Ubuntu host from the container.
I cannot ping the container from the LAN outside the host.

My first wish is to connect to the container using PuTTY from another host
on the LAN.

I've been reading the Incus networking and security documentation, but I need help.

bret@idempiere-erp:~$ incus profile show default
config: {}
description: Default Incus profile
devices:
  eth0:
    name: eth0
    network: id
    type: nic
  root:
    path: /
    pool: id
    type: disk
name: default
used_by:
- /1.0/instances/bret
project: default

You’re using the id network here; what does incus network show id show you?

To have a container be available from your LAN, you typically need one of:

  • Routing your bridge’s subnet from your router to your host
  • Attaching the instance directly to your network interface using macvlan (cannot work on Wi-Fi and will prevent host-to-container communication; see the sketch after this list)
  • Moving your LAN interface into a bridge and then bridging the container into it (cannot work on Wi-Fi)
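
For the macvlan option, a rough sketch (the interface eno1 and instance name mm-erp are illustrative; substitute your own, and note that a local eth0 device takes precedence over the eth0 coming from the profile):

incus config device add mm-erp eth0 nic nictype=macvlan parent=eno1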

The alternative to this would be to use a proxy device to forward only some specific ports from your host’s IP address to your instance.
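
For example, a sketch of the proxy approach (the instance name mm-erp and port 2222 are illustrative; this forwards the host’s port 2222 to port 22 inside the instance):

incus config device add mm-erp ssh-proxy proxy listen=tcp:0.0.0.0:2222 connect=tcp:127.0.0.1:22

PuTTY on another LAN host would then connect to the host’s LAN address on port 2222 and reach the container’s SSH server.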

bret@idempiere-erp:~$ incus network show id
config:
  ipv4.address: 10.1.60.110/24
  ipv4.nat: "true"
  ipv6.address: fd42:5129:77c1:1dc0::1/64
  ipv6.nat: "true"
description: ""
name: id
type: bridge
used_by:
- /1.0/instances/mm-erp
- /1.0/profiles/default
managed: true
status: Created
locations:
- none
project: default
bret@idempiere-erp:~$

Here is my current ip address listing of interfaces. My current subnet is
192.168.60.0/24, with 192.168.60.110 on interface eno1 being the Ubuntu host.

For the third option, you suggested moving my LAN interface into a bridge and then bridging the container into it.

Can you provide or point to a sample showing how I accomplish that, and at the same time compare the sample to the Incus documentation?

Thanks in advance.

bret@idempiere-erp:~$ ip address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 28:80:23:a7:1a:68 brd ff:ff:ff:ff:ff:ff
altname enp3s0f0
inet 192.168.60.110/24 brd 192.168.60.255 scope global noprefixroute eno1
valid_lft forever preferred_lft forever
inet6 fe80::3855:7f78:a15e:e922/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: eno2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether 28:80:23:a7:1a:69 brd ff:ff:ff:ff:ff:ff
altname enp3s0f1
4: eno3: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether 28:80:23:a7:1a:6a brd ff:ff:ff:ff:ff:ff
altname enp3s0f2
5: eno4: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether 28:80:23:a7:1a:6b brd ff:ff:ff:ff:ff:ff
altname enp3s0f3
6: lxcbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 00:16:3e:00:00:00 brd ff:ff:ff:ff:ff:ff
inet 10.0.3.1/24 brd 10.0.3.255 scope global lxcbr0
valid_lft forever preferred_lft forever
7: id: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:16:3e:69:39:2e brd ff:ff:ff:ff:ff:ff
inet 10.1.60.110/24 scope global id
valid_lft forever preferred_lft forever
inet6 fd42:5129:77c1:1dc0::1/64 scope global
valid_lft forever preferred_lft forever
inet6 fe80::216:3eff:fe69:392e/64 scope link
valid_lft forever preferred_lft forever
8: incusbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 00:16:3e:d0:fd:66 brd ff:ff:ff:ff:ff:ff
inet 10.176.150.1/24 scope global incusbr0
valid_lft forever preferred_lft forever
inet6 fd42:1e8d:25e1:5d58::1/64 scope global
valid_lft forever preferred_lft forever
11: bret-network: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 00:16:3e:85:03:da brd ff:ff:ff:ff:ff:ff
inet 10.231.70.1/24 scope global bret-network
valid_lft forever preferred_lft forever
inet6 fd42:e095:b6ec:edf4::1/64 scope global
valid_lft forever preferred_lft forever
15: vethc3c4eb1d@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master id state UP group default qlen 1000
link/ether 7a:de:23:0b:4b:61 brd ff:ff:ff:ff:ff:ff link-netnsid 0

That’s not done through Incus but through your Linux distribution’s networking configuration tool.
On Ubuntu servers that’d normally be done through Netplan; on Ubuntu desktop you’d probably need to do it through Network Manager instead.

If using netplan, share what you have in /etc/netplan; that should make it easy enough to suggest how to turn it into a bridge.

This is my Ubuntu info. So it sounds like I need to put a netplan config in and change the network to be managed by netplan instead of Network Manager.
Please let me know if my thinking is on the right track.

bret@idempiere-erp:~$ cat /etc/os-release
NAME="Ubuntu"
VERSION="20.04.6 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.6 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
bret@idempiere-erp:~$

This is currently in /etc/netplan.
cat /etc/netplan/01-network-manager-all.yaml

# Let NetworkManager manage all devices on this system
network:
  version: 2
  renderer: NetworkManager

The Incus container usage wasn’t in the cards when the server was set up.
The server is in a data center. I’ll have to figure out how to put a netplan configuration in and hopefully keep my ability to connect to the server.

I’ve played with netplan a little; here’s one from an Incus step-by-step article (I’ll need to rename things and change some numbers).
Is this what you are referring to?

network:
  version: 2

  ethernets:
    enp1s0:
      dhcp4: false
      dhcp6: false

  bridges:
    bridge0:
      interfaces: [enp1s0]
      addresses: [172.16.1.91/16]
      routes:
        - to: default
          via: 172.16.0.1
      nameservers:
        addresses:
          - 1.1.1.1
          - 1.0.0.1
      parameters:
        stp: true
        forward-delay: 4
      dhcp4: no

Hello Bret,

Is your server in a DC using Network Manager?
Maybe you are using Cockpit, or something like that?

If you set up a bridge, you will need to allocate an IP to your host and to your container.
In a “homelab” situation, it would be pretty straightforward:

1 - Set up a bridge containing your network interface, e.g.:

bridges:
  br_lan:
    dhcp4: true
    dhcp6: true
    interfaces:
      - eno1

Then add a bridged NIC device to your container within Incus:

devices:
  eth0:
    nictype: bridged
    parent: br_lan
    type: nic

e.g.: incus config device add container eth1 nic nictype=bridged parent=br_lan

Then just retrieve an IP from that bridge with your container by defining a config for eth1 in its netplan / Network Manager config files.
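
For the container side, a minimal sketch (assuming the container also uses netplan and the new NIC shows up as eth1 inside it):

network:
  version: 2
  ethernets:
    eth1:
      dhcp4: true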

If you’re running this container on a virtual host / hosted VPS and don’t have multiple IPv4 addresses, you can still apply this; you can generally get multiple IPv6 addresses even on the lowest-end plans.

If it’s too much of a hassle, you may be able to use incus network forward, which is made for your use case: you can forward host ports to managed networks. In this case, stick to incusbr0 for host networking and explore the options in network forward: How to configure network forwards - Incus documentation
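
As a sketch of the forward approach (the listen address should be your host’s LAN IP and the target address whatever IP the container has on the managed bridge; both values here are illustrative):

incus network forward create incusbr0 192.168.60.110
incus network forward port add incusbr0 192.168.60.110 tcp 2222 10.176.150.10 22

That exposes port 2222 on the host and sends it to port 22 in the container, without touching your LAN interface.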

Hope it helps,


Thanks for trying to help; I don’t mean to ambush the forum.
The goal is to have the container get its IP address from the router.
Is that possible?

I would like to use the router firewall to allow access to the container. If that isn’t possible, I can tweak iptables in the container for access.

Here’s my /etc/netplan config.

network:
  version: 2
  renderer: NetworkManager
  ethernets:
    eno1:
      dhcp4: no

  bridges:
    br0:
      dhcp4: yes
      interfaces:
        - eno1

I feel like I’m getting close, but I need to assign/modify the “id” network, or create a new network, then assign it to the container.


current network assigned to container


bret@idempiere-erp:~$ incus network show id
config:
  ipv4.address: 10.1.60.110/24
  ipv4.nat: "true"
  ipv6.address: fd42:5129:77c1:1dc0::1/64
  ipv6.nat: "true"
description: ""
name: id
type: bridge
used_by:
- /1.0/instances/mm-erp
- /1.0/profiles/default
managed: true
status: Created
locations:
- none
project: default
bret@idempiere-erp:~$

current interfaces on server


bret@idempiere-erp:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master br0 state UP group default qlen 1000
link/ether 28:80:23:a7:1a:68 brd ff:ff:ff:ff:ff:ff
altname enp3s0f0
3: eno2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether 28:80:23:a7:1a:69 brd ff:ff:ff:ff:ff:ff
altname enp3s0f1
4: eno3: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether 28:80:23:a7:1a:6a brd ff:ff:ff:ff:ff:ff
altname enp3s0f2
5: eno4: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether 28:80:23:a7:1a:6b brd ff:ff:ff:ff:ff:ff
altname enp3s0f3
6: lxcbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 00:16:3e:00:00:00 brd ff:ff:ff:ff:ff:ff
inet 10.0.3.1/24 brd 10.0.3.255 scope global lxcbr0
valid_lft forever preferred_lft forever
7: bret-network: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 00:16:3e:85:03:da brd ff:ff:ff:ff:ff:ff
inet 10.231.70.1/24 scope global bret-network
valid_lft forever preferred_lft forever
inet6 fd42:e095:b6ec:edf4::1/64 scope global
valid_lft forever preferred_lft forever
8: id: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:16:3e:69:39:2e brd ff:ff:ff:ff:ff:ff
inet 10.1.60.110/24 scope global id
valid_lft forever preferred_lft forever
inet6 fd42:5129:77c1:1dc0::1/64 scope global
valid_lft forever preferred_lft forever
inet6 fe80::216:3eff:fe69:392e/64 scope link
valid_lft forever preferred_lft forever
9: incusbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 00:16:3e:d0:fd:66 brd ff:ff:ff:ff:ff:ff
inet 10.176.150.1/24 scope global incusbr0
valid_lft forever preferred_lft forever
inet6 fd42:1e8d:25e1:5d58::1/64 scope global
valid_lft forever preferred_lft forever
10: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 28:80:23:a7:1a:68 brd ff:ff:ff:ff:ff:ff
inet 192.168.60.2/24 brd 192.168.60.255 scope global dynamic noprefixroute br0
valid_lft 165157sec preferred_lft 165157sec
inet6 fe80::2a80:23ff:fea7:1a68/64 scope link
valid_lft forever preferred_lft forever
12: veth51309bff@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master id state UP group default qlen 1000
link/ether 6e:67:42:c8:51:25 brd ff:ff:ff:ff:ff:ff link-netnsid 0

Any suggestions welcome
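
Following on from the device-add pattern earlier in the thread, one way to attach the container to br0 (a sketch; the instance name mm-erp and NIC name eth1 are illustrative):

incus config device add mm-erp eth1 nic nictype=bridged parent=br0
incus restart mm-erp

The container then needs its own DHCP config for eth1, at which point it should pick up an address from the router through br0.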

I have been successful to a degree. I can now ping the container from other devices on my LAN.
Here is my router’s DHCP-issued address list (pic).
Next I will attempt to get some ports open and hopefully have some more fun.

I know ALL of you who visit and provide help to us rookies have work to do.
Thanks for your eyes and comments.

Time for a break.
