Exporting DB (internal cowsql) access into containers

Would it be feasible to extend the API so containers can use the internal cowsql database (their own tables, not Incus's internal ones) over a socket, by extending the REST API with something similar to ClickHouse's SQL-over-HTTP interface?
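To illustrate what that interface might look like: ClickHouse's real HTTP interface takes a SQL statement as the request body, and a guest inside an Incus container already has the `/dev/incus/sock` socket available. The `/1.0/sql` endpoint and payload shape below are purely hypothetical, sketched only to show the shape of the proposal:

```shell
# Real ClickHouse usage today: SQL over plain HTTP.
curl 'http://localhost:8123/' --data-binary 'SELECT 1'

# Hypothetical Incus equivalent over the guest socket
# (endpoint name and payload shape are assumptions, not an existing API):
curl --unix-socket /dev/incus/sock \
  -X POST http://incus/1.0/sql \
  --data-binary '{"database": "myapp", "query": "SELECT v FROM kv WHERE k = ?", "args": ["foo"]}'
```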

The use case is to leverage the “replicated and fault tolerant SQL engine” that is already built in, and possibly integrate it with k3s-io/kine so we would have an etcd replacement.
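For context, this is roughly how Kine is used as an etcd shim today, pointed at an ordinary SQL backend; the last line shows what this proposal would amount to conceptually (the cowsql endpoint is hypothetical, no such Kine driver exists):

```shell
# Kine today: translates the etcd API onto a SQL backend.
kine --endpoint "mysql://user:pass@tcp(db:3306)/kine"

# k3s can point straight at a SQL datastore the same way:
k3s server --datastore-endpoint "mysql://user:pass@tcp(db:3306)/kine"

# What this proposal would amount to, conceptually
# (hypothetical; no cowsql driver exists in Kine):
kine --endpoint "cowsql:///dev/incus/sock"
```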

Hmm, I don’t know that this would be a good thing for us to provide.

The reason why Incus can scale to pretty large clusters, with tens of thousands of instances across 50+ servers, is that our database is extremely low throughput. We have designed everything so that database writes effectively only happen when an object is edited, and reads only happen as strictly needed.

The moment we allow for instances to cause database load of any kind, it will become trivial to take down the entire Incus database.

The way Kubernetes uses its database is particularly problematic for that. Back at Canonical we had a patch that would use Kine with Dqlite as a replacement for etcd, though as its own thing (within microk8s), not by using LXD’s database. We ran into all kinds of performance issues when doing that, because Kubernetes constantly writes to the database, sometimes rather huge amounts of data, which is a bit of a problem when Dqlite/Cowsql requires the entire database to be loaded into memory.

@stgraber In your opinion, what can we do to integrate k8s better into Incus?
I mean, sure, it can be run in containers, but it feels like more integration could be beneficial.

I think a good first step would be to reduce friction around deploying Kubernetes on top of Incus, so having a good way to automate throwing a k8s cluster at Incus and having that be up and running and easy to scale up/down.

Last I checked, a good way to make things better in that regard would be to get a cluster API plugin for Incus.

I am willing to work on that (CAPI). It won’t be fast since it’s in my free time, but yeah, I am interested.

Could we collaborate on this? And how should we start?

My Kubernetes experience is extremely limited, but I think for something like this the best approach would be to start with a rough spec, basically covering what CAPI provides and what it needs/expects, and then we can figure out how to best have it interact with Incus.

Ideally you wouldn’t want to give it access to the entire Incus deployment. So you’d probably want to create a project, potentially apply resource limits to that project, then create a token on the Incus side which is restricted to it. Feed that token to CAPI, have it set up a connection to Incus with it, and it can then manage its own instances through that connection to scale the cluster.
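A sketch of that workflow with today's CLI. The `limits.*` keys are standard Incus project options; the exact trust/token flags should be double-checked against the current `incus config trust add --help` output:

```shell
# Create a dedicated project for the k8s cluster and cap its resources:
incus project create k8s-cluster
incus project set k8s-cluster limits.containers 20
incus project set k8s-cluster limits.memory 64GiB

# Issue a client token restricted to that single project
# (flag names should be verified against the current incus CLI):
incus config trust add capi-incus --projects k8s-cluster --restricted

# The CAPI provider would then connect with that token and only ever
# see and manage instances inside the k8s-cluster project.
```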