IncusOS to run Kubernetes for fun or profit

Purpose: I’m prepping for a Kubernetes cert and I need a cheap lab to break. I’ve run out of free credits with the main cloud providers and need a place for my lab to call home while I cycle through different configurations.

Environment: I have a default-config IncusOS cluster of three physical nodes, all with internal storage. (I think each node is configured similarly, but now that it’s a cluster I haven’t checked to ensure they are all using their internal second drives.)

The cluster only has the default bridge network, and I’m not sure what the best next step is for my situation. I have a managed switch they are all connected to, with a VLAN just for Incus (10.123.0.0/16), which should be fine, I guess, if I have all the Kubernetes VMs live on the physical network?

I am just looking for the least obtrusive setup, since I don’t want some stupid red-herring issue cropping up when working through the cert troubleshooting stuff.

I’m a CLI-first kinda guy in the DevOps world, so love me some Terraform/Ansible, which is what I rely on for environment (re)creation… but that is all for the instances inside the IncusOS cluster.

I’ve checked out the WebUI and think that’s handy for sure; it also seems alluring for things like setting up networking and such?

Questions:

  • Where is a good guide to get a solid base IncusOS cluster for turning instances into a k8s cluster?
  • Else, what might be the bare minimum I should shoot for as an IncusOS cluster config before I start creating VMs for k8s to use?
    • physical network vs. physical uplink + OVN
    • internal storage per node good enough vs. external NFS
    • any gotchas I need to know for k8s on IncusOS

Thank you, I will share whatever failures or successes I might have.

Heck yeah, go for it.

Internal storage is actually good, because etcd prefers really, really low write latencies.
Just make sure not to use BTRFS :smiley:. From the networking point of view, it doesn’t really matter which type you use.
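If you want to double-check what each node actually ended up with, the CLI will tell you; a quick sketch (the pool name `local` is just a common default, adjust to yours):

```
# List cluster members and storage pools to confirm each node's setup.
incus cluster list
incus storage list

# Show the pool's driver (e.g. zfs) and, per member, its source disk.
incus storage show local --target <member-name>
```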
If you want to use OVN with VLANs in a home network: just make sure to reduce the MTU to something fairly low like 1390 or similar. Usually ISPs carry your traffic with an MTU of 1480; subtract something like your VLAN overhead and then the OVN overhead (Geneve), and you quickly run into cases where the MTU must be set that low.
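As a minimal sketch, assuming an OVN network named `ovn0` (both the name and the exact value are illustrative):

```
# Lower the MTU handed to instances to leave headroom for VLAN + Geneve.
incus network set ovn0 bridge.mtu 1390

# Verify it took effect.
incus network get ovn0 bridge.mtu
```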

Working with Ansible and Terraform is actually the approach I am taking as well. Just spawn the VMs with an interface on your Incus network bridge and you should be able to access them via your VLAN. Make sure not to use Linux containers but real VMs.
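For example, a single node by hand (image, name, bridge name, and sizes are all just assumptions for the sketch):

```
# Launch a Kubernetes node as a real VM (--vm) attached to the default bridge.
incus launch images:ubuntu/24.04 k8s-cp1 --vm \
    --network incusbr0 \
    -c limits.cpu=2 -c limits.memory=4GiB
```

The Terraform provider for Incus wraps these same operations, so the manual command translates straight into your (re)creation pipeline.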

As soon as you are there, use one of the many Ansible Galaxy roles to set up a Kubernetes cluster, e.g. with kubeadm: Ansible Galaxy
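For instance (the role name below is one well-known community option, not necessarily the one linked above, and the inventory/playbook file names are placeholders):

```
# Pull a community role that drives kubeadm-style cluster setup.
ansible-galaxy role install geerlingguy.kubernetes

# Run it against an inventory listing your Incus VMs.
ansible-playbook -i inventory.ini k8s.yml
```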

Since you are working with a bridge network, nothing special should cause trouble on the networking side; the VMs should be able to reach each other.
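A quick sanity check once two VMs are up (the instance name and address are placeholders):

```
# Confirm the VMs got addresses on the bridge, then ping between them.
incus list
incus exec k8s-cp1 -- ping -c 3 <ip-of-other-vm>
```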

Besides that, Incus just spawns KVM VMs; there is nothing special to it from that point of view.
I hope I could give you some confidence to try it, but I cannot give you a step-by-step guide, since every setup is highly different and every layer you add brings more knobs you can tweak (IncusOS, the VM image you choose, the Kubernetes installation method, etc.).

Tip: For profit, you should definitely take more time to dig around in Incus. E.g. do not use the default project to spawn workloads, as the default project is inherited by all other projects and making changes to it can cause problems later. But for getting things started, it does not matter :wink:
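Creating and switching to a dedicated project is a two-liner; a sketch (project name and feature flags are illustrative):

```
# Create a lab project with its own images and profiles, then switch to it.
incus project create k8s-lab -c features.images=true -c features.profiles=true
incus project switch k8s-lab
```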

If there are more questions, feel free to ask.

Thank you for the reply!

At this point I think the “profit” part might come more from self-edification and less from collecting monies for workloads running on this ancient hardware (have you seen RAM prices these days?)…

I’ve created production edge nodes using Ubuntu+LXD+BTRFS and know that stack well.

This cluster at the moment is to serve one single purpose: the rinse-and-repeat of an educational lab.

I like the idea of using OVN, I guess, but maybe my assumptions are wrong. Is OVN the “right way” to:

  1. let the containers within a cluster communicate
  2. allow same-named containers in different tenants/profiles to not collide
  3. keep DNS and DHCP contained within each tenant/profile so they’re not clogging up my primary services

I have got a UDMPro that everything is connected to, so I can manage most all the things, but it kinda sucks as DNS/DHCP when you are creating and destroying containers all the time with the same names and bla bla bla…

What I’m really looking to get ahead of is this: if I just put the VMs running k8s on the physical network (UDMPro with the native VLAN on the ports set to the /16 network meant just for this), then I will be relying on k8s networking. Would it be better or worse to have Incus-aware OVN first, and have the k8s cluster exist inside that virtual lab world? “Better” in this sense is, I guess, a balance of simple but valuable for knowledge growth (read: I don’t wanna burn time configuring something that won’t be needed).

Please don’t feel like I’m asking you to architect my environment; I’m both asking and rubber-ducking a little, since it feels like just adding the VMs to the physical network so my laptop can hit each one, and letting k8s networking do the rest, is maybe the right idea. Then I’d tackle the OVN multi-tenant thing when I need more production-grade configs, if that is even needed. (For years the city I’m in has been working toward fiber internet up to 5 Gbps symmetrical, but so far it’s not on my street; then again, hosting a cluster for others seems interesting.)

If I put together a step-by-step for my environment I’ll share it for anyone else. But maybe by then you can just say “Hey Alexa, make me a multi-tenant private cloud on the spare hardware in my garage, use the DevOps/Network/SysAdmin MCPs and just let me know when it’s all configured and working” :man_shrugging:

For simplicity I would set up the k8s VMs’ network interfaces to share the host’s interface using the desired VLAN. That has the advantage of reusing what you already have in place (the UDMPro) and gives direct access from your laptop.
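A minimal sketch of that, assuming the hosts’ NIC is `enp5s0` and the Incus VLAN ID is 123 (both placeholders for whatever your hardware uses):

```
# Profile whose NIC shares the host's interface on the Incus VLAN.
incus profile create k8s-lan
incus profile device add k8s-lan eth0 nic \
    nictype=macvlan parent=enp5s0 vlan=123

# Launch a node VM with that profile stacked on top of default.
incus launch images:ubuntu/24.04 k8s-w1 --vm -p default -p k8s-lan
```

One macvlan caveat: the Incus host itself cannot reach the VM over that shared interface, but your laptop and the other VMs can, which matches the lab use case.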

In a second step you can work on using OVN and Incus projects for a multi-tenant setup. Each project will have its own OVN network to separate and secure the traffic, and it allows configuring separate admin access etc. Have a look at this Layman’s walk through of open virtual network setup (on a single host); it should be close to what you are after?
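Roughly, and with every name and address below being a placeholder rather than a recommendation, the per-tenant shape could look like this:

```
# Uplink that OVN networks use to reach the physical VLAN
# (parent interface, gateway, and ranges are placeholders).
incus network create UPLINK --type=physical parent=enp5s0.123 \
    ipv4.gateway=10.123.0.1/16 \
    ipv4.ovn.ranges=10.123.200.1-10.123.200.254

# One project per tenant, each with its own OVN network, so each tenant
# gets its own DHCP/DNS scope and instance names can repeat safely.
incus project create tenant-a -c features.networks=true
incus network create ovn-a --type=ovn network=UPLINK --project tenant-a
```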
