Ubuntu + ZFS + LXC

Morning everyone,

I hope you are all well.

So I am a bit of a curious noob and set up my test environment in the way I “think” is best, given the equipment I have.

If you could weigh in and give advice/criticism, that would be just peachy.

I have the following kit in use:

4x R420s
2x Juniper EX2300s
2x Fortigate 100Fs in an HA active-passive cluster

Each R420 has its storage card flashed to IT mode so I can get that ZFS goodness on Ubuntu Server 18.04 (this mimics our prod environment for right now; we need to bump to 22.04).
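For illustration, a pool like the one in the listing further down could be created along these lines (the raidz2 layout and disk names are placeholders, not my exact vdevs; only the pool name lxd matches the listing):

# illustrative only: layout and disk names are made up
zpool create lxd raidz2 sda sdb sdc sdd
zpool status lxd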

Each R420 has one Ethernet cable plugged into each switch, with a bond set up for active-backup (a quick sanity check is sketched below).
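To confirm the failover behaviour I just read the bonding state from the kernel (bond0 is the name I use here; yours may differ):

# shows mode, currently active slave, and per-link MII status
cat /proc/net/bonding/bond0
# -d prints bond details such as mode and carrier state
ip -d link show bond0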

Each switch has LAGGs to the firewalls.

My question is more around ZFS, LXD/LXC containers, and the subnetting I have set up, but here is some more info.

So I typically have 4x /26 subnets per “estate”, labelled as follows (a made-up addressing sketch comes after the list):

int.estate (international traffic permitted inbound)

za.estate (local South African traffic permitted inbound)

insidea.estate (DBs and their relevant apps)

insideb.estate (slave DBs and their relevant apps)
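To make that concrete, here is how one estate’s /24 could be carved into those four /26s (the 10.10.1.0/24 range is purely illustrative, not my real addressing):

# hypothetical addressing plan for a single estate
# 10.10.1.0/26    int.estate      (international inbound)
# 10.10.1.64/26   za.estate       (local ZA inbound)
# 10.10.1.128/26  insidea.estate  (DBs + apps)
# 10.10.1.192/26  insideb.estate  (slave DBs + apps)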

On each server I have a VLAN interface set up per subnet (plus a VLAN profile per container), and each container is assigned to the relevant interface/profile (a sketch of the plumbing follows below). So if any container wants to connect to another, the traffic has to traverse the FW, and the ACLs in place either permit or deny it.
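Roughly, the per-subnet plumbing looks like this (VLAN ID 110, the profile name za-estate, and the container name are illustrative; I am assuming a macvlan NIC on a pre-created VLAN interface):

# create the per-subnet VLAN interface on top of the bond (ID is made up)
ip link add link bond0 name bond0.110 type vlan id 110
ip link set bond0.110 up
# a profile that drops containers onto that VLAN via macvlan
lxc profile create za-estate
lxc profile device add za-estate eth0 nic nictype=macvlan parent=bond0.110
lxc launch ubuntu:18.04 app2-fnb -p default -p za-estate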

On the ZFS side of things I have one large ZFS pool, and each container gets its own dataset (example below).

So now, after waffling through all my build-up, my questions are as follows:

1 - Is the way I have done things sort of in the right direction?

2 - Are there things I should be concerned about by doing it this way?

3 - Do you have any suggestions on how to improve on this setup?

4 - Since the containers are unable to view information on the host (they are not privileged containers), is there a way to break out from the container → host?

Ty for any and all advice/criticism

NAME                        USED   AVAIL  REFER  MOUNTPOINT
lxd                         2.21T  1.15T    96K  none
lxd/containers               177G  1.15T    96K  none
lxd/containers/app1-shell   22.1G   178G  22.3G  legacy
lxd/containers/app2-engen   1.64G  17.0G  1.80G  legacy
lxd/containers/app2-fnb     2.17G  16.5G  2.33G  legacy
lxd/containers/app3-xlc     1.74G  16.9G  1.90G  legacy
lxd/containers/app4-xlc     1.74G  16.9G  1.90G  legacy
lxd/containers/db1-desk     8.91G  9.72G  9.08G  legacy
lxd/containers/db1-sasol    13.9G  4.75G  14.1G  legacy
lxd/containers/db1-shell    36.3G  13.7G  36.5G  legacy
lxd/containers/db1-totem    17.5G  72.5G  17.7G  legacy
lxd/containers/db2-engen    9.93G  8.70G  10.1G  legacy
lxd/containers/db2-fnb      2.35G  16.3G  2.51G  legacy
lxd/containers/db2-namp     2.36G  16.3G  2.53G  legacy
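For reference, the listing is just the output of zfs list -r lxd. The differing AVAIL values per container suggest per-dataset quotas, which can be driven through LXD or through raw ZFS (the 50G size below is a made-up example):

# preferred: let LXD manage the quota via the root disk device
# ("override" copies the profile-inherited device onto the container
# first; available in recent LXD releases)
lxc config device override db1-shell root size=50GB
# raw ZFS equivalent, if managing the dataset directly
zfs set quota=50G lxd/containers/db1-shell
zfs get quota lxd/containers/db1-shell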

So you have asked LXD to set up a single ZFS storage pool that consumes an entire zpool or physical device? Can you please share the output of lxc storage show <pool> so I can see?

But it sounds fine to me. It’s normal for each LXD instance to use its own dataset, so there doesn’t seem to be any concern there.

If the containers are running as unprivileged (the default in LXD), then they will be using Linux’s user namespace feature, so that root inside the container is not real root on the host. Should there be a container break-out, the processes are not privileged.
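You can verify the uid shift from the host (container name taken from your listing):

# empty or "false" means the container is unprivileged
lxc config get app1-shell security.privileged
# host-side uid/gid ranges that container root is mapped into,
# e.g. "lxd:100000:65536"
grep lxd /etc/subuid /etc/subgid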
