Remove ipv4 but keep WAN

I came across the following post: How to isolate bridges against each other? - #5 by idef1x

I set ipv4.address to none, enabled security.ipv4_filtering, and assigned a static IP inside the container with:

ip addr add 10.8.204.3/32 dev eth0

But what else is needed so the container still has internet access, without the host routing its traffic to other containers?

I figure it has to be some other “routing” container that’s allowed to populate the routing table. But what would become the gateway inside the other containers, since the bridge now neither exposes an IP on the host nor, I assume, holds one inside the container?
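If I understand correctly, with only a /32 address the container has no on-link subnet at all, so presumably it would also need routes along these lines (the gateway address 10.8.204.1 here is just a guess on my part, not something from my config):

```shell
# With a /32 there is no on-link subnet, so the gateway must first be
# made reachable via an explicit host route before it can be used as
# the default gateway.
ip route add 10.8.204.1/32 dev eth0
ip route add default via 10.8.204.1 dev eth0
```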

Thanks!

Can you explain your setup in more detail please?

Please include full network and container configs using lxc network show <network> and lxc config show <instance> --expanded.

@tomp

lxc network show lxdbr0
config:
  ipv4.address: 10.167.8.1/24
  ipv4.nat: "true"
  ipv6.address: fd42:80a6:9b9d:5b7c::1/64
  ipv6.nat: "true"
description: ""
name: lxdbr0
type: bridge
used_by:
- /1.0/instances/test2
- /1.0/profiles/default
managed: true
status: Created
locations:
- none
lxc network show secondary
config:
  ipv4.address: none
  ipv4.nat: "true"
  ipv6.address: none
  ipv6.nat: "true"
description: ""
name: secondary
type: bridge
used_by:
- /1.0/instances/test
managed: true
status: Created
locations:
- none
lxc config show test --expanded
architecture: x86_64
config:
  image.architecture: amd64
  image.description: ubuntu 22.04 LTS amd64 (release) (20220712)
  image.label: release
  image.os: ubuntu
  image.release: jammy
  image.serial: "20220712"
  image.type: squashfs
  image.version: "22.04"
  security.nesting: "true"
  volatile.base_image: 49261351a3dea3e8176138640e0a45a70e84c1aaa963bbbde232ea6ea5efdae9
  volatile.cloud-init.instance-id: bb168346-c6bd-43ad-814c-a59adee52cad
  volatile.eth0.hwaddr: 00:16:3e:42:11:cb
  volatile.eth0.name: eth0
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: STOPPED
  volatile.uuid: 366f2276-66ba-4e46-a5ee-db77b8427357
devices:
  eth0:
    network: secondary
    security.ipv4_filtering: "true"
    type: nic
  root:
    path: /
    pool: default
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""
lxc config show test2 --expanded
architecture: x86_64
config:
  image.architecture: amd64
  image.description: ubuntu 22.04 LTS amd64 (release) (20220712)
  image.label: release
  image.os: ubuntu
  image.release: jammy
  image.serial: "20220712"
  image.type: squashfs
  image.version: "22.04"
  security.nesting: "true"
  volatile.base_image: 49261351a3dea3e8176138640e0a45a70e84c1aaa963bbbde232ea6ea5efdae9
  volatile.cloud-init.instance-id: ce1ea6fb-ef6e-4217-babc-9323ba489b5d
  volatile.eth0.hwaddr: 00:16:3e:0f:0e:33
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: STOPPED
  volatile.uuid: a34a723b-dd40-4609-960f-768758112665
devices:
  eth0:
    name: eth0
    network: lxdbr0
    security.port_isolation: "true"
    type: nic
  root:
    path: /
    pool: default
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

My setup has n networks (currently testing with two), each with its own set of containers, and containers on one bridge shouldn’t be able to communicate with those outside their bridge. The post linked above seems to remove the inter-bridge routing from the host, so it can’t route bridge to bridge, which would avoid having to create hundreds of iptables rules.
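As a sketch of what I mean (bridge names, subnets, and the container name are just examples, not my real config):

```shell
# One managed bridge per group of containers, each with its own subnet.
lxc network create groupA ipv4.address=10.10.1.1/24 ipv4.nat=true ipv6.address=none
lxc network create groupB ipv4.address=10.10.2.1/24 ipv4.nat=true ipv6.address=none

# Attach a container's NIC to its group's bridge.
lxc config device add c1 eth0 nic network=groupA
```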

Do you still expect the managed bridge networks to provide DHCP/DNS etc?

It would be nice to have a list on the host that assigns the static IPs (without the host suddenly routing them), but I could live without it, including DNS, since I can just paste the addresses into a template container that I’d keep copying.
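(Aside: on a managed bridge, LXD can pin a static DHCP lease per NIC, something like the sketch below. Note this relies on the bridge running a DHCP server, i.e. it must have an ipv4.address set, which may conflict with the ipv4.address: none setup above; the address here is just an example from lxdbr0’s subnet.)

```shell
# Pin a static DHCP lease for the instance's NIC on a managed bridge.
lxc config device set test2 eth0 ipv4.address=10.167.8.10
```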

I’m not really following, to be honest. Are you saying it’s working? What is the actual problem?

I’m not sure how I can implement what was mentioned here: How to isolate bridges against each other? - #5 by idef1x

while still keeping internet access inside those containers and a static IP assigned (one that doesn’t leak to the host and become routable).

Is this a continuation/duplicate of your earlier thread?

It’s the same situation, but a different approach, hence the new topic.

That’s what I figured the forum etiquette is nowadays, after other websites told me to keep things like this separate, even if the base case is similar.

Supposedly people “coming from Google can find it easier”, and in this case, people who might not be able to help fix it with ufw might know how to fix it by removing or adding routing-table entries.

Was I supposed to just continue it there instead? It’s confusing how thread separation is handled from site to site.

I don’t think port isolation or IP filtering are going to prevent inter-bridge routing.
I do think the ACL feature should be able to do it without needing a separate ACL for every combination of networks. I’ll try to create a proof of concept for you on that original thread.
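For context, a rough sketch of the direction I have in mind (untested, and the ACL name is made up):

```shell
# One reusable ACL: allow egress via the uplink (@external) but not to
# other bridges. Once an ACL is assigned to a network, traffic that
# matches no rule is rejected by default.
lxc network acl create wan-only
lxc network acl rule add wan-only egress action=allow destination=@external
lxc network set lxdbr0 security.acls=wan-only
lxc network set secondary security.acls=wan-only
```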
