[LXD] Object storage (S3 API)

LGTM, approved


Shall we enable it only for existing projects that have features.storage.volumes enabled?

Yeah


@stgraber updated the spec to change the key's read-only flag to a role field.
The allowed values would be admin or read-only, and the lxc tool will default to the read-only role for new keys if one is not specified.
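For illustration only, here is a rough sketch of what the key struct and the CLI-side defaulting could look like; the StorageBucketKey type name, its fields, and the helper are assumptions rather than the actual API.

```go
// Hypothetical sketch of a bucket key struct with a role field and the
// CLI defaulting described above; names and fields are assumptions.
package api

// StorageBucketKey represents an S3 access key scoped to a bucket.
type StorageBucketKey struct {
	Name        string `json:"name" yaml:"name"`
	Description string `json:"description" yaml:"description"`

	// Role is either "admin" or "read-only".
	Role string `json:"role" yaml:"role"`

	AccessKey string `json:"access-key" yaml:"access-key"`
	SecretKey string `json:"secret-key" yaml:"secret-key"`
}

// DefaultBucketKeyRole shows the CLI-side default when no role was given.
func DefaultBucketKeyRole(role string) string {
	if role == "" {
		return "read-only"
	}

	return role
}
```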

@stgraber I’ve added these settings to the cephobject storage pool type. They will be used to store automatically generated credentials for the lxd-admin radosgw user (created by the radosgw-admin command). It is important that non-admins can’t see these settings though.

We should probably blanket hide all config on storage pools and networks for non-admin users.

That would line up with what we do on the server config.

Stéphane

Sounds good, thanks.

Added:

@stgraber I’ve removed those now, actually, as in the end it occurred to me that we can just load the lxd-admin user’s S3 keys as we need them using the radosgw-admin user info command and avoid storing them in LXD.
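Roughly, the keys could be loaded on demand by shelling out to radosgw-admin; the JSON field names and the helper below are assumptions based on typical radosgw-admin user info output, not LXD's actual implementation.

```go
// Hypothetical helper that fetches the lxd-admin user's S3 keys on demand
// instead of persisting them in the LXD database.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// radosgwUserInfo mirrors the subset of `radosgw-admin user info` JSON
// output we care about (field names assumed from typical radosgw output).
type radosgwUserInfo struct {
	Keys []struct {
		User      string `json:"user"`
		AccessKey string `json:"access_key"`
		SecretKey string `json:"secret_key"`
	} `json:"keys"`
}

// adminS3Keys returns the first S3 key pair of the given radosgw user.
func adminS3Keys(user string) (string, string, error) {
	out, err := exec.Command("radosgw-admin", "user", "info", "--uid", user).Output()
	if err != nil {
		return "", "", fmt.Errorf("Failed running radosgw-admin: %w", err)
	}

	info := radosgwUserInfo{}
	err = json.Unmarshal(out, &info)
	if err != nil {
		return "", "", fmt.Errorf("Failed parsing radosgw-admin output: %w", err)
	}

	if len(info.Keys) == 0 {
		return "", "", fmt.Errorf("User %q has no S3 keys", user)
	}

	return info.Keys[0].AccessKey, info.Keys[0].SecretKey, nil
}

func main() {
	access, secret, err := adminS3Keys("lxd-admin")
	if err != nil {
		fmt.Println("Error:", err)
		return
	}

	fmt.Println("Access key:", access, "secret key:", secret)
}
```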

Thinking about this more, would it be more conventional to use lxc storage bucket key <add|remove> rather than <create|delete> as we are adding/removing keys from an existing entity (the bucket)?

It’s a little bit confusing: it is its own entity type in the DB, so it could be “create”, but for Ceph radosgw at least it’s a property of the user/sub-user, so it could be “add”.

On the other hand, there are no existing “Add” functions in the LXD Go client interface, which suggests an “add” CLI command should only be used where a separate client function isn’t needed.
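To make the comparison concrete, here is a hypothetical sketch of the client methods if they follow the existing Create/Delete convention; the type and method names are assumptions, not the final interface.

```go
// Hypothetical sketch of client interface methods following the existing
// Create/Delete naming convention rather than Add/Remove.
package client

// StorageBucketKeyPut is a stand-in for the key's modifiable fields.
type StorageBucketKeyPut struct {
	Description string
	Role        string // "admin" or "read-only"
}

// StorageBucketKeysServer shows how the operations could be named if the
// key is treated as its own entity, matching the rest of the client API.
type StorageBucketKeysServer interface {
	CreateStoragePoolBucketKey(pool string, bucket string, key StorageBucketKeyPut) error
	DeleteStoragePoolBucketKey(pool string, bucket string, keyName string) error
}
```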

LXD 4.5

Is this a typo? Curious which release you all are targeting for this.

Yes, it should be 5.5, will correct.


@stgraber I’m thinking of turning this setting:

into a more general (non-driver-specific) setting, such as object.bucket.name_prefix, for the following reasons:

  1. It might be useful for other storage driver types.
  2. Because it can be changed, we need to store any buckets created in LXD with the current prefix applied, so that if the prefix is changed in the future we don’t lose track of previously created buckets. As such we either need to make it a driver-agnostic setting (so we can generate the full bucket name, including prefix, to pass to both the storage driver and the DB), or we need a per-driver name generation function which applies the driver-specific logic and tells backendLXD what the full name should be (see the sketch below).
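A minimal sketch of the driver-agnostic option, with the helper below invented for illustration:

```go
// Hypothetical sketch of generating the full bucket name from a
// driver-agnostic prefix setting at create time. The helper is an
// assumption; the point is that the full name (prefix included) is what
// gets stored in the DB and passed to the driver.
package main

import "fmt"

// fullBucketName applies the pool's current prefix to the client-supplied
// bucket name. The result is recorded in the DB so that later prefix
// changes don't orphan existing buckets.
func fullBucketName(poolConfig map[string]string, bucketName string) string {
	return poolConfig["object.bucket.name_prefix"] + bucketName
}

func main() {
	poolConfig := map[string]string{"object.bucket.name_prefix": "lxd-"}

	// Both the storage driver and the DB record get the same full name.
	fmt.Println(fullBucketName(poolConfig, "mybucket")) // lxd-mybucket
}
```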

What do you think?

Storing the create-time bucket prefix in the DB is a little odd though, as it means the bucket created would be named differently from the bucket name passed in the client’s POST request. This would mean one effectively wouldn’t be able to read their own writes (as the resource name would be different).

Either way seems to be problematic.

@stgraber I’ve changed the URL field for a bucket to S3URL because:

  • This way it won’t conflict with the URL() function we add to API entity structs to generate their resource URL.
  • It makes clear this URL is for the S3 protocol (in case we add support for other object protocols in the future).
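For illustration, a hypothetical sketch of how the renamed field and the URL() helper could coexist on the struct; the field names and URL shape are assumptions.

```go
// Hypothetical sketch showing why a plain "URL" field would collide with
// the URL() helper that API entity structs grow for their resource URL.
package api

import "fmt"

// StorageBucket is a stand-in for the bucket API struct.
type StorageBucket struct {
	Name string `json:"name" yaml:"name"`

	// S3URL is the S3 protocol endpoint for the bucket. Naming it S3URL
	// keeps it distinct from the URL() method below and leaves room for
	// other object protocols later.
	S3URL string `json:"s3_url" yaml:"s3_url"`
}

// URL returns the bucket's LXD API resource URL (shape assumed).
func (b *StorageBucket) URL(apiVersion string, poolName string) string {
	return fmt.Sprintf("/%s/storage-pools/%s/buckets/%s", apiVersion, poolName, b.Name)
}
```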

Sounds good

We’re going to need radosgw-admin added to the snap. Is this something you can add? Thanks

Done

That was quick, thanks!

Looks like there is a dependency issue on radosgw-admin inside the snap:

```
root@v1:~# lxc storage create s3 cephobject --target=v1
Error: Failed to run: radosgw-admin --version: exit status 127 (radosgw-admin: error while loading shared libraries: librabbitmq.so.4: cannot open shared object file: No such file or directory)
```