Containers behind VPN, with limited LAN access

I want to host some websites and game servers publicly, using a VPS to keep my ISP’s public IP hidden. For security, and to help stay hidden, I do not want any of these containers to use the ISP’s gateway for egress traffic.

My first thought was to run OpenWrt as a container and put the other containers on their own bridge, with that bridge set as OpenWrt’s LAN interface.

I started to think about using network ACLs to block egress, only allowing traffic to the VPS WireGuard endpoint and select LAN addresses; each container would then connect to the VPS with WireGuard itself.
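
For reference, a minimal sketch of that ACL idea using Incus network ACLs, assuming the containers sit on incusbr0, and using placeholder addresses for the VPS WireGuard endpoint (203.0.113.10:51820) and the LAN range (192.168.1.0/24):

# ACL permitting egress only to the VPS WireGuard endpoint and select LAN addresses
incus network acl create vpn-only
incus network acl rule add vpn-only egress action=allow protocol=udp destination=203.0.113.10/32 destination_port=51820
incus network acl rule add vpn-only egress action=allow destination=192.168.1.0/24
# Apply to the bridge; egress not matching any rule gets rejected
incus network set incusbr0 security.acls=vpn-only
incus network set incusbr0 security.acls.default.egress.action=reject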

I know one would usually use a VLAN; however, I am trying to keep it all self-contained inside the host so that I can move networks without having to re-configure the ISP router for VLANs, if that is even possible.

What would you suggest for this implementation?

I’ve gone ahead and set up an OpenWrt container to see what it would be like.

I think what I’ve got is satisfactory now: three network interfaces, one for the WAN (incusbr0), one for VPN (br1), and one for LAN (br0). This makes it easy to put containers on the VPN or off it, and I can always create another interface if I need something more specific, like limited LAN access. I don’t know why I didn’t think of this beforehand.

Note that the WAN is really the Incus network (incusbr0), so this is triple NATed (ISP router → Incus → OpenWrt). However, to keep all the containers/VMs self-contained without worrying about the true WAN, I felt this was the best setup.
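
Choosing VPN or non-VPN per container is then just a matter of which bridge the NIC device points at; a quick sketch, with illustrative container names:

# VPN-side container: NIC parented to the VPN bridge
incus config device add game-server eth0 nic nictype=bridged parent=br1 name=eth0
# LAN-side container: NIC parented to the LAN bridge
incus config device add web-frontend eth0 nic nictype=bridged parent=br0 name=eth0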

In Incus I add proxy devices to the OpenWrt container and port-forward to the containers that I want to reach from the true LAN. I suppose I could add proxy devices to those containers directly, but going through the router felt more typical.
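
As an example, a forward through the OpenWrt container to an inner web container might look like this (the listen port and the 192.168.10.50 address are placeholders):

# Listen on the host; the connect side runs in OpenWrt's network namespace,
# so it can reach the container on the LAN bridge
incus config device add openwrt-test web80 proxy listen=tcp:0.0.0.0:8081 connect=tcp:192.168.10.50:80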


Hi, your timing with this post is perfect: just this weekend I set up an OpenWrt container, trying to create a VPN bridge network (vpn1) and a non-VPN one (br0) for Incus containers; however, I was not successful.

I am hoping you can provide some details on your setup. I already have a br0 bridge installed that gives me 192.168.x.x addresses on my local network; I typically attach Incus containers to br0 to get those addresses.

Here is what I did:

  1. Installed bridge-utils and created a new bridge called vpn1 with brctl addbr vpn1
  2. Created an Incus OpenWrt container and attached the following devices:
    eth1:
    name: eth1
    nictype: bridged
    parent: vpn1
    type: nic
    eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
  3. From within the container, edited the etc/network file to allow the 192 address assigned to the container to be reached on port 80 (by default this is rejected and cannot be reached on port 80)
  4. From the web GUI, added a new network interface called lan using the eth1 device, and set up a new 10.50.50.1 network with DHCP and the IPv4 gateway as 192.168.1.1 (wan)
  5. The interface came up as green in the web GUI, and I can see both interfaces as 192.168.1.158 (eth0) and 10.50.50.1 (eth1) with the incus list command
  6. Created a test Debian 12 container and attached the device below:
    eth0:
    name: eth0
    nictype: bridged
    parent: vpn1
    type: nic

I was hoping this container would get a new 10.50.x.x address from the vpn1 bridge and that I would have internet access. If this worked correctly, I was going to install and configure the VPN software.

This did not work.

Do I need to do anything on the host to activate the vpn1 bridge? What am I doing wrong?

Thanks for any help.

I created a container to test this. Note that I am using incusbr0 on the WAN side, so access goes through a proxy device rather than the WAN IP. I am also using Debian 12 as the host.
These are the steps I took to set up OpenWrt with an interface for the containers to use as LAN.

  1. Install bridge-utils: apt install bridge-utils
  2. In /etc/network/interfaces, I added:
auto br1
iface br1 inet manual
  bridge-ports none
  bridge-stp off
  bridge-fd 0
  3. Bring up the bridge: ifup br1
  4. Create the OpenWrt container: incus create images:openwrt/23.05 openwrt-test
    Add br1 to the container as eth1: incus config device add openwrt-test eth1 nic nictype=bridged parent=br1 name=eth1
  5. I am using a proxy device to access the LuCI interface, so I add one: incus config device add openwrt-test luci proxy listen=tcp:127.0.0.1:8080 connect=tcp:127.0.0.1:80
  6. Start the OpenWrt container: incus start openwrt-test
  7. Access the shell to add the firewall rule: incus exec openwrt-test sh
  8. Add a firewall rule to allow access to LuCI over the WAN interface:
uci add firewall rule
uci set firewall.@rule[-1].name='Allow-LuCI'
uci set firewall.@rule[-1].src='wan'
uci set firewall.@rule[-1].dest_port='80'
uci set firewall.@rule[-1].proto='tcp'
uci set firewall.@rule[-1].target='ACCEPT'
  9. Apply the changes: uci commit
  10. Restart firewall: service firewall restart
  11. Access LuCI via the proxy, 127.0.0.1:8080. Navigate to Interfaces and apply the changes that it makes on first visit.
  12. Create new interface:
Name: lan
Protocol: Static address
Device: eth1

  13. Set IPv4 address to 192.168.10.1 and IPv4 netmask to 255.255.255.0
  14. Save, Save & Apply.

OpenWrt should provide IPs via DHCP on the br1 interface now.
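
If the new interface does not hand out leases, the DHCP server can also be enabled from the OpenWrt shell; a minimal sketch, assuming the interface created above is named lan:

# Define a DHCP pool on the lan interface (addresses .100 through .249)
uci set dhcp.lan=dhcp
uci set dhcp.lan.interface='lan'
uci set dhcp.lan.start='100'
uci set dhcp.lan.limit='150'
uci set dhcp.lan.leasetime='12h'
uci commit dhcp
service dnsmasq restart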

  15. Create new container: incus create images:debian/12 debian-test
  16. Add bridged interface: incus config device add debian-test eth0 nic nictype=bridged parent=br1 name=eth0
  17. Start the container: incus start debian-test
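
As a quick check, the test container should pick up a 192.168.10.x lease and be able to reach the OpenWrt gateway:

# Confirm the DHCP lease and connectivity to the OpenWrt LAN address
incus exec debian-test -- ip -4 addr show eth0
incus exec debian-test -- ping -c 1 192.168.10.1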

Hi, I apologize for not acknowledging your very thoughtful response. Thanks!