[LXD] Object storage (S3 API)

Sounds good

We’re going to need radosgw-admin added to the snap. Is this something you can add? Thanks!

Done

That was quick, thanks!

Looks like there is a dependency issue on radosgw-admin inside the snap:

root@v1:~# lxc storage create s3 cephobject --target=v1
Error: Failed to run: radosgw-admin --version: exit status 127 (radosgw-admin: error while loading shared libraries: librabbitmq.so.4: cannot open shared object file: No such file or directory)

I tried the new edge snap and got further this time. I managed to create the S3 storage pool, but the radosgw-admin bucket link command failed:

root@v1:~# lxc storage bucket create s3 foo
Error: Failed creating storage bucket: Failed linking bucket to user: Failed to run: radosgw-admin --cluster ceph --id admin bucket link --bucket foo --uid foo: exit status 5 (2022-08-16T11:15:08.743+0000 7f884a158b40  0 ERROR: could not decode buffer info, caught buffer::error
2022-08-16T11:15:08.743+0000 7f884a158b40 -1 ERROR: do_read_bucket_instance_info failed: -5
failure: (5) Input/output error: failed to fetch bucket info for bucket=foo)

To get to this point, some of the radosgw-admin commands must have succeeded.
But this sounds like a version mismatch between what is in the snap and what my local Ceph cluster is running (Quincy).

I also tried running snap set lxd ceph.external=true on each cluster member and reloading LXD, but this didn’t help.

Is the radosgw-admin command covered by snap set lxd ceph.external=true?

An alternative option would be to use the radosgw Admin Operations API (Admin Operations — Ceph Documentation) via a Go REST client, but that would require us to implement the S3 authentication process ourselves, as the API uses S3-style auth but is not part of the S3 protocol itself.
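A rough sketch of what that Go client signing could look like, assuming the admin ops endpoint accepts AWS signature v2 (the endpoint, keys and path below are placeholders, and sub-resource/header canonicalisation is left out):

```go
package main

import (
	"crypto/hmac"
	"crypto/sha1"
	"encoding/base64"
	"fmt"
	"net/http"
	"time"
)

// signV2 adds a simplified AWS signature v2 Authorization header to a request.
func signV2(req *http.Request, accessKey string, secretKey string) {
	date := time.Now().UTC().Format(http.TimeFormat)
	req.Header.Set("Date", date)

	// StringToSign for AWS v2: method, content-md5, content-type, date, resource.
	// Sub-resource canonicalisation is omitted for brevity.
	stringToSign := req.Method + "\n" +
		req.Header.Get("Content-MD5") + "\n" +
		req.Header.Get("Content-Type") + "\n" +
		date + "\n" +
		req.URL.Path

	mac := hmac.New(sha1.New, []byte(secretKey))
	mac.Write([]byte(stringToSign))
	signature := base64.StdEncoding.EncodeToString(mac.Sum(nil))

	req.Header.Set("Authorization", fmt.Sprintf("AWS %s:%s", accessKey, signature))
}

func main() {
	// Hypothetical admin ops call fetching bucket info for bucket "foo".
	req, err := http.NewRequest("GET", "http://rgw.example.com/admin/bucket?bucket=foo&format=json", nil)
	if err != nil {
		panic(err)
	}

	signV2(req, "ACCESS_KEY", "SECRET_KEY")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	fmt.Println(resp.Status)
}
```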

Ah looks like we need radosgw-admin covered by:

@stgraber thanks, it’s working from the snap now using snap set lxd ceph.external=true

@stgraber shall we have this as core.objects_address (plural) to align with core.metrics_address?
Also, as the entity type managed in LXD is “buckets”, should this be core.buckets_address instead, or core.storage_buckets_address?

Also, @stgraber, any preference on the default port for this new listener?

Currently we have:

const HTTPSDefaultPort = 8443
const HTTPDefaultPort = 8080
const HTTPSMetricsDefaultPort = 9100

By default the MinIO API listens on port 9000; shall we use that?
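If we go with 9000, that would just mean adding something like the line below alongside the constants above (the name here is only a placeholder, since the config key naming is still being decided):

const HTTPSStorageBucketsDefaultPort = 9000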

We can use 9000, that’s fine. And I guess we can do core.storage_buckets_address for that one.
Just “buckets” on its own may be a bit confusing.

Thanks, will do.

One thing I’ve realised, now that I’m looking at the MinIO implementation in more detail, is that we will need to parse the incoming credentials on a request in order to implement the ListAllMyBuckets request. Because each key is only associated with a single bucket, it will only ever return one result, but we will need to authenticate the request before we know who the caller is.
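A rough sketch of the kind of parsing I mean, assuming v4-signed requests (the helper name is made up, and no signature verification happens here; this only recovers the access key ID so we can look up the matching bucket/user):

```go
package main

import (
	"fmt"
	"strings"
)

// accessKeyFromAuthHeader pulls the access key ID out of an AWS signature v4
// Authorization header of the form:
// "AWS4-HMAC-SHA256 Credential=<key>/<date>/<region>/s3/aws4_request, SignedHeaders=..., Signature=..."
func accessKeyFromAuthHeader(authHeader string) (string, error) {
	const prefix = "AWS4-HMAC-SHA256 "
	if !strings.HasPrefix(authHeader, prefix) {
		return "", fmt.Errorf("unsupported authorization scheme")
	}

	for _, part := range strings.Split(strings.TrimPrefix(authHeader, prefix), ",") {
		part = strings.TrimSpace(part)
		if !strings.HasPrefix(part, "Credential=") {
			continue
		}

		// The credential scope is <key>/<date>/<region>/<service>/aws4_request.
		fields := strings.Split(strings.TrimPrefix(part, "Credential="), "/")
		if fields[0] != "" {
			return fields[0], nil
		}
	}

	return "", fmt.Errorf("credential not found in authorization header")
}

func main() {
	header := "AWS4-HMAC-SHA256 Credential=AKIDEXAMPLE/20220816/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-date, Signature=abc123"

	key, err := accessKeyFromAuthHeader(header)
	if err != nil {
		panic(err)
	}

	fmt.Println(key) // Prints: AKIDEXAMPLE
}
```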

Hrm, another reason we are going to need to understand the S3 auth header is that the requested host is used in the signature, meaning that when we reverse proxy the request to MinIO the signature no longer matches. I have tried setting the outbound Host header to the original LXD-side listener address, to no avail.

Actually, it seems to work OK using httputil.NewSingleHostReverseProxy(), so I think it was a URL/header mismatch.
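For anyone following along, a minimal sketch of that approach with placeholder addresses; the default Director installed by NewSingleHostReverseProxy rewrites req.URL to point at the backend but leaves req.Host alone, so the Host value the client signed is what MinIO sees and the signature still validates:

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Placeholder MinIO backend address.
	target, err := url.Parse("http://127.0.0.1:9001")
	if err != nil {
		log.Fatal(err)
	}

	// The default Director rewrites req.URL.Scheme/Host but not req.Host,
	// so the original Host header (which is part of the S3 signature) is
	// forwarded unchanged.
	proxy := httputil.NewSingleHostReverseProxy(target)

	// Placeholder LXD-side buckets listener address.
	log.Fatal(http.ListenAndServe(":9000", proxy))
}
```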

@stgraber so as well as MinIO not being able to run on tmpfs (which complicates our automated testing when using the dir driver), it also doesn’t support quotas when used in filesystem/single-disk setups:

NOTE: Bucket quotas are not supported under gateway or standalone single disk deployments.

This shouldn’t be too much of an issue for most storage pool types, which support their own volume-level quotas, but for the dir pool type the only quota mechanism supported is project quotas, and these currently use the volume DB ID as the project quota ID. For buckets we aren’t going to have a volume DB record, and thus no ID and no support for project quotas.

This means we can’t really do quotas for buckets on dir pools.

My company is developing an internal tool to run multi-node Kubernetes inside LXD containers. The approach is a rewrite of kubedee (https://github.com/schu/kubedee) in Go, quite different from other attempts.

When this feature was announced I was excited, because a built-in object store should be a good place to store the numerous assets Kubernetes requires: TLS certs, YAML manifests, etc. I was actually considering having our tool provision an LXD container running MinIO! First-class support is much better.

Is this on track for 5.5? We don’t require multi-tenancy or quotas.

omg I’m a goofball; 5.5 is out: LXD 5.5 has been released - #4 by despens

Update: I’m still a goofball. Okay, we use BTRFS, so we’ll have to wait for 5.6 :slight_smile:

Yes, it’s on track for LXD 5.6. The MinIO support will only be for local storage pools, meaning that each bucket will be stored locally and the bucket S3 listen address will be configurable using core.storage_buckets_address.
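As a rough sketch of how that could be set programmatically once it lands (the equivalent of lxc config set core.storage_buckets_address :9000), using the LXD Go client; the module path and address below are assumptions based on the discussion above:

```go
package main

import (
	"log"

	lxd "github.com/lxc/lxd/client"
)

func main() {
	// Connect over the local unix socket.
	c, err := lxd.ConnectLXDUnix("", nil)
	if err != nil {
		log.Fatal(err)
	}

	// Fetch the current server config along with its ETag.
	server, etag, err := c.GetServer()
	if err != nil {
		log.Fatal(err)
	}

	// Enable the S3 buckets listener on the assumed default port.
	newServer := server.Writable()
	newServer.Config["core.storage_buckets_address"] = ":9000"

	if err := c.UpdateServer(newServer, etag); err != nil {
		log.Fatal(err)
	}
}
```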

Local bucket support:
