Tried the new edge snap and got further this time: I managed to create the S3 storage pool, but the radosgw-admin bucket link command failed:
root@v1:~# lxc storage bucket create s3 foo
Error: Failed creating storage bucket: Failed linking bucket to user: Failed to run: radosgw-admin --cluster ceph --id admin bucket link --bucket foo --uid foo: exit status 5 (2022-08-16T11:15:08.743+0000 7f884a158b40 0 ERROR: could not decode buffer info, caught buffer::error
2022-08-16T11:15:08.743+0000 7f884a158b40 -1 ERROR: do_read_bucket_instance_info failed: -5
failure: (5) Input/output error: failed to fetch bucket info for bucket=foo)
To get to this point some of the radosgw-admin commands must have succeeded.
But this sounds like a version mismatch between the radosgw-admin in the snap and what my local Ceph cluster is running (Quincy).
I also tried setting snap set lxd ceph.external=true on each cluster member and reloading LXD, but this didn't help.
Is the radosgw-admin command covered by snap set lxd ceph.external=true?
An alternative option is to use the radosgw admin operations API (Admin Operations – Ceph Documentation) via a Go REST client, but it'll require us to implement the S3 authentication process ourselves, as that API uses S3 auth but is not part of the S3 protocol itself.
@stgraber shall we have this as core.objects_address (plural) to align with core.metrics_address?
Also, as the entity type managed in LXD is "buckets", should this be core.buckets_address instead, or core.storage_buckets_address?
One thing I've realised now that I'm looking at the MinIO implementation in more detail is that we will need to parse the incoming credentials on a request in order to implement the ListAllMyBuckets request. Because each key is only associated with a single bucket, it'll only ever return one result, but we'll need to authenticate the request before we know who it is from.
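Pulling the access key ID out of an incoming SigV4 Authorization header is straightforward, since the Credential scope leads with it. A minimal sketch (the header value in main is a made-up example):

```go
package main

import (
	"fmt"
	"strings"
)

// accessKeyFromV4Header extracts the access key ID from an AWS SigV4
// Authorization header, so a request can be mapped back to its key pair
// before we know which bucket it belongs to. Abridged header format:
// AWS4-HMAC-SHA256 Credential=<AKID>/<date>/<region>/s3/aws4_request,
//   SignedHeaders=..., Signature=...
func accessKeyFromV4Header(auth string) (string, error) {
	const prefix = "AWS4-HMAC-SHA256 "
	if !strings.HasPrefix(auth, prefix) {
		return "", fmt.Errorf("not a SigV4 Authorization header")
	}

	for _, part := range strings.Split(auth[len(prefix):], ",") {
		part = strings.TrimSpace(part)
		if strings.HasPrefix(part, "Credential=") {
			scope := strings.TrimPrefix(part, "Credential=")
			fields := strings.Split(scope, "/")
			if fields[0] == "" {
				return "", fmt.Errorf("malformed Credential scope")
			}
			return fields[0], nil
		}
	}

	return "", fmt.Errorf("no Credential field in header")
}

func main() {
	auth := "AWS4-HMAC-SHA256 Credential=AKIDEXAMPLE/20220816/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-date, Signature=abc123"
	key, err := accessKeyFromV4Header(auth)
	fmt.Println(key, err) // AKIDEXAMPLE <nil>
}
```

Note this only identifies the caller; verifying the Signature still requires recomputing the full SigV4 canonical request.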
Hrm, another reason why we are going to need to understand the S3 auth header is that the requested host is used in the signature, meaning that when we reverse proxy the request to MinIO the signature no longer matches. I have tried setting the outbound Host header to the original LXD-side listener address, to no avail.
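The reason rewriting the host breaks things can be seen from the first step of SigV4 verification: the host header is part of the hashed canonical request, so the same request seen under two different Host values can never produce the same signature. A simplified sketch (the header set and UNSIGNED-PAYLOAD value are assumptions for illustration):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// canonicalRequestHash mimics (in simplified form) the hash of a SigV4
// canonical request where only the host header is signed. Because the
// host is folded into this hash, proxying to a different Host changes
// the value the signature was computed over.
func canonicalRequestHash(method, path, host string) string {
	canonical := method + "\n" +
		path + "\n" +
		"\n" + // empty query string
		"host:" + host + "\n" +
		"\n" + // end of canonical headers
		"host\n" +
		"UNSIGNED-PAYLOAD"
	sum := sha256.Sum256([]byte(canonical))
	return hex.EncodeToString(sum[:])
}

func main() {
	// Hypothetical LXD-side listener vs backend MinIO addresses.
	lxdSide := canonicalRequestHash("GET", "/", "lxd.example:8555")
	minioSide := canonicalRequestHash("GET", "/", "127.0.0.1:9000")
	fmt.Println(lxdSide != minioSide) // true: the signatures cannot match
}
```

So any transparent proxying either has to preserve the exact host the client signed against end to end, or terminate the S3 auth at the proxy and re-sign towards the backend.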
@stgraber so as well as MinIO not being able to run on tmpfs (which complicates our automated testing when using the dir driver), it also doesn't support quotas when used on filesystem/single disk setups:
NOTE: Bucket quotas are not supported under gateway or standalone single disk deployments.
This shouldn't be too much of an issue for most storage pool types, which support their own volume-level quotas, but for the dir pool type the only quota mechanism supported is project quotas, and these currently use the volume DB ID as the project quota ID. For buckets we aren't going to have a volume DB record, and thus no ID and no support for project quotas.
This means we can't really do quotas for buckets on dir pools.
My company is developing an internal tool to run multi-node Kubernetes inside of LXD containers. The approach is a rewrite of kubedee (https://github.com/schu/kubedee) in Go, quite different from other attempts.
When this feature was announced, I was excited, because a built-in object store should be a good place to store the numerous assets Kubernetes requires: TLS certs, YAML manifests, etc. I was actually considering having our tool provision an LXD container running MinIO! First-class support is much better.
Is this on track for 5.5? We don't require multi-tenancy or quotas.
Yes, it's on track for LXD 5.6. The MinIO support will only be for local storage pools, meaning that each bucket will be stored locally, and the bucket S3 listen address will be configurable using core.storage_buckets_address.