S3 endpoint addressing and encryption

Dear colleagues,

I am running a few containers with legacy applications that make use of an S3 API. Setting up the API endpoint, creating buckets with incus, and transferring data into the buckets using rclone was all straightforward; this feature seems incredibly useful.

My legacy applications use a variety of programming languages and libraries to access S3, and it would make things easier if

  1. it were possible to access the incus S3 endpoint via a stable DNS name (outside the containers, localhost:8555 works, but how would I address it from inside a container); and
  2. it were possible to use HTTP instead of HTTPS (is there a way to turn off HTTPS, or to run an HTTP endpoint on another port)?

Any insights appreciated!

_gateway.incus may work as a DNS record assuming the DNS domain is set to incus.
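
A quick way to check this from inside a container is to resolve the name directly; a minimal sketch, assuming the container runs systemd-resolved and is attached to a bridge named incusbr0 (both assumptions, adjust to your setup):

# inside the container: check that the synthesized gateway name resolves
getent hosts _gateway.incus

# on the host: confirm (or set) the DNS domain used by the bridge network
incus network get incusbr0 dns.domain
incus network set incusbr0 dns.domain incus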

For HTTP, we’ve usually tried VERY hard not to implement any cleartext endpoints, so I’d prefer to keep it that way. You have a few options though:

  • Run a basic haproxy/nginx type thing to still give you a cleartext endpoint
  • If the issue is the certificate’s validity, you can swap the certificate for one of your liking (a sketch follows below):
    • On standalone systems, just replace server.crt and server.key in /var/lib/incus/
    • On clustered systems, you can upload a new cluster-wide certificate through incus cluster update-certificate
  • Rather than manually putting in your certificate, we also natively support ACME/Let’s Encrypt, so you can use that to automatically get a certificate, though it’s worth mentioning that this also needs something to handle the HTTP traffic used during validation (see “Remote API authentication” in the Incus documentation)
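
A minimal sketch of these options, assuming a standalone install under /var/lib/incus and a systemd service named incus; the certificate file names here are placeholders for whatever your CA issues:

# standalone: swap in your own certificate, then restart the daemon
cp my-server.crt /var/lib/incus/server.crt
cp my-server.key /var/lib/incus/server.key
systemctl restart incus

# clustered: push a cluster-wide certificate instead
incus cluster update-certificate my-server.crt my-server.key

# ACME/Let's Encrypt: have incus fetch a certificate automatically
incus config set acme.domain=s3.example.com acme.email=admin@example.com acme.agree_tos=true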

Thank you Stéphane! The _gateway.incus DNS name is great, and your suggestion of proxying the endpoint to plain HTTP seems like a good idea.

The issue I have with encryption is that I probably won’t find a certificate and TLS configuration that every client accepts. Some older OSes can’t handle newer TLS versions and ciphers, and newer tools refuse to use the older ones. The software I need to run spans about two decades :spider_web:

I’ll work on the proxy solution and report back here.

Together with a colleague, I was able to figure out an nginx reverse proxy that provides a cleartext S3 endpoint in front of the TLS endpoint that incus provides:

server {
    listen 8556; # some random port chosen for the cleartext endpoint
    location / {
        proxy_set_header    Host $http_host;
        proxy_ssl_protocols TLSv1.3;
        proxy_ssl_verify    off; # incus uses a self-signed certificate by default
        proxy_pass          https://_gateway.incus:8555; # S3 API port set up with incus
    }
}
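
For a quick smoke test before pointing any clients at it, an unsigned request through the proxy should come back with an S3-style XML error rather than a connection failure; curl is assumed to be available:

# unsigned request; an XML error response (e.g. AccessDenied) means the
# cleartext proxy is reaching the S3 endpoint
curl -i http://localhost:8556/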

Confirmed working with this rclone remote configuration:

[cleartext]
type = s3
provider = Other
access_key_id = XXX
secret_access_key = XXX
endpoint = http://localhost:8556
acl = public-read
bucket_acl = public-read

(rclone complains that _gateway.incus is an invalid domain name, so the rclone configuration needed to be localhost.)
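
With the remote in place, a round trip can be checked like this; the pool name default and the names demo-bucket and demo-key are just examples:

# create a bucket and an access key through incus
incus storage bucket create default demo-bucket
incus storage bucket key create default demo-bucket demo-key

# list buckets through the cleartext endpoint
rclone lsd cleartext: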


@stgraber The choice of _gateway.incus as a DNS name, while very unlikely to lead to any conflicts, is a bit problematic because of the underscore. AFAIK that’s an invalid character in a hostname (even though DNS itself allows it in labels), and tools that validate URLs, in particular some S3 libraries I have to deal with, as well as rclone, will refuse to connect purely because of this formal error. Maybe it would be good to have the option to set that DNS name to something else? My workaround is to set up a mapping in /etc/hosts within the containers and VMs that need access to the gateway, but that seems a bit hacky.
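
For reference, the /etc/hosts workaround is a one-line entry; the name s3.gateway.internal and the address 10.158.0.1 are made-up examples, substitute your bridge’s actual gateway address:

# /etc/hosts inside the container or VM
10.158.0.1   s3.gateway.internal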

Is there a way to configure that gateway name? Would you consider setting another default?

We’re not actually the ones defining this; it’s something that’s synthesized by networkd/resolved on the client side.

Thanks @stgraber, I’ll check whether these tools can perhaps be configured to tolerate the _. :+1: