Can't get SSL certificates to work on my containers

Dear devs,
first of all, congrats on your fantastic work on LXD. I find it much more intuitive for my server than virtual hosts. I’m trying to run 11 websites as containers on my DigitalOcean VPS (“droplet”), all of them from Xenial images. I’m using LXD 3.0 installed via snap.

I’ve followed @simos’s tutorials (thanks Simos!) to install LXD and then to get SSL certificates on two containers I’m using as tests. First, I had to abandon the recommended block storage because I couldn’t find where my containers were mounted, which is a required step in the SSL tutorial. I tried again with dir as the storage backend and no block storage; now I can find my containers, but things are still not working.

Would anybody please take a look at my haproxy config file to spot the trouble? I’ve simplified it to show only my test containers “web1” and “web9”. Thanks!

global
        log /dev/log    local0
        log /dev/log    local1 notice
        chroot /var/lib/haproxy
        stats socket /run/haproxy/admin.sock mode 660 level admin
        stats timeout 30s
        user haproxy
        group haproxy

        # Default SSL material locations
        ca-base /etc/ssl/certs
        crt-base /etc/ssl/private

        # Default ciphers to use on SSL-enabled listening sockets.
        # For more information, see ciphers(1SSL). This list is from:
        ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:EC$
        ssl-default-bind-options no-sslv3

        # Minimum DH ephemeral key size. Otherwise, this size would drop to 1024.
        # @link:$
        tune.ssl.default-dh-param 2048

defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        option  forwardfor
        option  http-server-close
        timeout connect 5000
        timeout client  50000
        timeout server  50000
        errorfile 400 /etc/haproxy/errors/400.http
        errorfile 403 /etc/haproxy/errors/403.http
        errorfile 408 /etc/haproxy/errors/408.http
        errorfile 500 /etc/haproxy/errors/500.http
        errorfile 502 /etc/haproxy/errors/502.http
        errorfile 503 /etc/haproxy/errors/503.http
        errorfile 504 /etc/haproxy/errors/504.http

frontend www_frontend
    bind *:80     # Bind to port 80 (www) on the container
    bind *:443 ssl crt /etc/haproxy/certs/
    redirect scheme https if !{ ssl_fc }
    reqadd X-Forwarded-Proto:\ https
    acl secure dst_port eq 443
    rsprep ^Set-Cookie:\ (.*) Set-Cookie:\ \1;\ Secure if secure
    rspadd Strict-Transport-Security:\ max-age=31536000 if secure

    acl host_web1 hdr(host) -i
    acl host_web9 hdr(host) -i

    use_backend web1_cluster if host_web1
    use_backend web9_cluster if host_web9
backend web1_cluster
    balance leastconn
    http-request set-header X-Client-IP %[src]
    server web1 web1.lxd:80 check

backend web9_cluster
    balance leastconn
    http-request set-header X-Client-IP %[src]
    server web9 web9.lxd:80 check


I suppose that you followed the guide at

Can you tell us at what step do you get an error, and what error do you get. Paste the message here.

This guide talks about using Let’s Encrypt with --authenticator webroot. That method needs access to the website files, hence it’s messy and error-prone. The alternative is to use --standalone --preferred-challenges http-01 instead. In this case, you need to briefly allow Let’s Encrypt to accept connections on port 80 so that it can perform the authentication. That is, you run the Let’s Encrypt command in a way that it sets up a temporary web server on port 80 itself and is able to respond appropriately to the incoming verification connection from the Let’s Encrypt servers.
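A sketch of that standalone flow (example.com is a placeholder for your real domain; the exact service to stop depends on what is holding port 80 on your setup):

```shell
# Free port 80 so certbot's temporary web server can bind to it
sudo systemctl stop haproxy

# Obtain a certificate via the http-01 challenge;
# example.com is a placeholder for your real domain
sudo certbot certonly --standalone --preferred-challenges http-01 -d example.com

# Put the proxy back in service
sudo systemctl start haproxy
```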

Thanks for the quick answer, Simos.

I don’t get any error message per se, but I get a warning when, from inside the haproxy container, I run

/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg

[WARNING] 141/212752 (4272) : parsing [/etc/haproxy/haproxy.cfg:49] : a ‘reqadd’ rule placed after a ‘redirect’ rule will still be processed before.
Configuration file is valid

I also tested whether it was the Ubuntu firewall blocking access to port 443, but with a rule allowing both 80 and 443, both inside and outside the container, nothing changes. Worse, even plain HTTP access is broken now, while it was working before.
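The firewall rules I mean are along these lines (a sketch with ufw, from memory; adjust if you manage iptables directly):

```shell
# Allow HTTP and HTTPS through ufw (run on the host and,
# if ufw is enabled there, inside the container too)
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw status verbose
```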

The warning says that even though these two lines are placed in this order, HAProxy will still process the reqadd line before the redirect line. You can silence the warning by simply swapping the order of the two lines.
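A sketch of the reordered fragment (the rest of the frontend stays as in your paste):

```haproxy
frontend www_frontend
    bind *:80
    bind *:443 ssl crt /etc/haproxy/certs/
    # reqadd now comes before the redirect, matching the order
    # in which HAProxy actually processes them
    reqadd X-Forwarded-Proto:\ https
    redirect scheme https if !{ ssl_fc }
```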

If you look closely at my post, it says

frontend www_frontend
# We bind on port 80 (http) but (see below) get HAProxy to force-switch to HTTPS.
bind *:80
# We bind on port 443 (https) and specify a directory with the certificates.
####    bind *:443 ssl crt /etc/haproxy/certs/
# We get HAProxy to force-switch to HTTPS, if the connection was just HTTP.
####    redirect scheme https if !{ ssl_fc }

The lines with #### should be uncommented only once you have set up HTTPS properly. If you uncomment them earlier, all HTTP connections will be auto-upgraded to HTTPS connections even though no certificates are available yet.
When you add a new website, you need to comment out these two lines again for the short time it takes to obtain the certificates.

Hi Simos,
I did it again as you recommended. The haproxy config file is declared valid, with no warning this time. I generated the certificates in the containers’ web roots and then placed them in /etc/haproxy/certs in the haproxy container, following the instructions in your tutorial.
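(For anyone following along: HAProxy expects each file in the crt directory to contain the certificate chain and the private key concatenated into one PEM. A sketch, assuming certbot’s default layout; example.com is a placeholder:)

```shell
# HAProxy wants chain + private key in a single PEM file.
# Paths assume certbot's default layout; example.com is a placeholder.
sudo cat /etc/letsencrypt/live/example.com/fullchain.pem \
         /etc/letsencrypt/live/example.com/privkey.pem \
  | sudo tee /etc/haproxy/certs/example.com.pem > /dev/null

# Keep the private key readable only by root
sudo chmod 600 /etc/haproxy/certs/example.com.pem
```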

But still it’s not working. I can see that when I type the domains on my browser they are redirected to https. However, no connection is established.

For both my test domains it says:

Assessment failed: Unable to connect to the server

When you try

$ curl --verbose
*   Trying

$ openssl s_client -connect

This shows that the HTTPS (port 443) connection cannot be established.
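For reference, a quick way to test the TLS endpoint directly (example.com is a placeholder for your real domain):

```shell
# Test the raw TLS handshake; SNI is sent via -servername.
# If the connection is refused, the problem is below TLS
# (firewall, DNAT, or nothing listening on 443).
openssl s_client -connect example.com:443 -servername example.com
```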

Things to check

  1. Do you have an iptables rule (on the host) for the HTTPS connections? You have one for HTTP; verify that you have one for HTTPS as well.

  2. When you connect to the HAProxy container (with lxc exec ...), can you run the command

    $ curl --verbose web1.lxd
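To check point 1, you can list the existing NAT rules on the host (the comment text will differ on your machine):

```shell
# List PREROUTING DNAT rules with line numbers;
# there should be one entry for dpt:80 and one for dpt:443
sudo iptables -t nat -L PREROUTING -n -v --line-numbers
```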

Good point. It appears I missed that part. I tried this (on the host) for web1:

PORT=443 PUBLIC_IP=my_server_ip CONTAINER_IP=my_container_ip \
sudo -E bash -c 'iptables -t nat -I PREROUTING -i eth0 -p TCP -d $PUBLIC_IP --dport $PORT -j DNAT --to-destination $CONTAINER_IP:$PORT -m comment --comment "forward to the Nginx container"'

Still not working, unfortunately. I simply took this from your tutorial on DigitalOcean and replaced port 80 with 443. (The IP addresses are also correct.)

Here’s the output:

root@web1:~# curl --verbose web1.lxd
* Rebuilt URL to: web1.lxd/
*   Trying fd42:fa9b:bfc5:c6dc:216:3eff:feae:7c39...
* Connected to web1.lxd (fd42:fa9b:bfc5:c6dc:216:3eff:feae:7c39) port 80 (#0)
> GET / HTTP/1.1
> Host: web1.lxd
> User-Agent: curl/7.47.0
> Accept: */*
< HTTP/1.1 200 OK
< Server: nginx/1.10.3 (Ubuntu)
< Date: Thu, 24 May 2018 14:04:40 GMT
< Content-Type: text/html
< Content-Length: 654
< Last-Modified: Sun, 20 May 2018 19:14:55 GMT
< Connection: keep-alive
< ETag: "5b01c92f-28e"
< Accept-Ranges: bytes
<!DOCTYPE html>
<title>Welcome to nginx on LXD container web1!</title>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
<h1>Welcome to nginx on LXD container web1!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href=""></a>.<br/>
Commercial support is available at
<a href=""></a>.</p>

<p><em>Thank you for using nginx.</em></p>
* Connection #0 to host web1.lxd left intact

Looks better. Now the task is to verify each link in the connection chain.

I see that you ran the curl command from within the web1 container. Can you try it from the haproxy container as well?

Here’s curl’s output from the haproxy container:

root@haproxy:~# curl --verbose web1.lxd

* Rebuilt URL to: web1.lxd/
*   Trying fd42:fa9b:bfc5:c6dc:216:3eff:feae:7c39...
* Connected to web1.lxd (fd42:fa9b:bfc5:c6dc:216:3eff:feae:7c39) port 80 (#0)
> GET / HTTP/1.1
> Host: web1.lxd
> User-Agent: curl/7.47.0
> Accept: */*

< HTTP/1.1 200 OK
< Server: nginx/1.10.3 (Ubuntu)
< Date: Fri, 25 May 2018 15:35:00 GMT
< Content-Type: text/html
< Content-Length: 654
< Last-Modified: Sun, 20 May 2018 19:14:55 GMT
< Connection: keep-alive
< ETag: "5b01c92f-28e"
< Accept-Ranges: bytes

(…) [omitted for brevity]

* Connection #0 to host web1.lxd left intact

Thanks for keeping on trying to find a solution with me.


Now let’s check that the proxy is able to forward to the correct website.

From the host (the one that has LXD, not from inside a container), run

curl --verbose --header "Host:" ip_address_of_proxy_container

If all goes well, you should get some output.

You have already performed the final step, running curl from your PC, and it did not work.
I noticed that the containers show their IPv6 addresses. Can you check that they have IPv4 addresses as well? The iptables command was only for IPv4.

Yes, lxc list shows all of the containers with both IPv4 and IPv6 addresses. IPv6 was not intended (as per your DigitalOcean tutorial), but was provided automatically by LXD 3.0.

Here’s curl’s output from outside the containers, as you asked:

[Error addressed.]

And part of the lxc list:

lxc list
|  NAME   |  STATE  |         IPV4          |                     IPV6                      |    TYPE    | SNAPSHOTS |
| haproxy | RUNNING | (eth0)  | fd42:fa9b:bfc5:c6dc:216:3eff:fe13:5c57 (eth0) | PERSISTENT | 0         |
| web1    | RUNNING | (eth0)  | fd42:fa9b:bfc5:c6dc:216:3eff:feae:7c39 (eth0) | PERSISTENT | 0         |

Try the curl command again. If you pasted the command, it picked up a UTF-8 character, and that shows in the output you wrote above. That is, type the hostname again; the error was with the .com part.

Oops! My bad. I was using Bash on Ubuntu on Windows, now I’m back to my Ubuntu machine.

eli@imburana:~$ curl --verbose --header "Host:"
* Rebuilt URL to:
*   Trying
* Connected to ( port 80 (#0)
> GET / HTTP/1.1
> Host:
> User-Agent: curl/7.47.0
> Accept: */*
< HTTP/1.1 302 Found
< Cache-Control: no-cache
< Content-length: 0
< Location:
< Connection: close
* Closing connection 0

I couldn’t solve the problem. It seems I’ll have to publish my websites without SSL and hope I can solve this later.

If I may make a suggestion: the non-expert container community needs an updated tutorial on how to install and use LXD 3.0 containers with HTTPS. My main problems were:

Choosing a filesystem

  • The standard zfs filesystem clashes with popular applications like Discourse. I found a suggestion to use btrfs instead, on account of some memory problems brought about by zfs, but I didn’t find it trivial to follow. I eventually gave up trying to run Discourse inside an LXD container and bought a ready-made DigitalOcean Droplet (VPS) running it.

TLS/SSL certificates

  • The elusive problem outlined in this topic remains. It may well be my fault, but I followed Simos’ tutorials more than twice. Maybe the problem is the version of LXD these tutorials referred to? I couldn’t find anything as didactic as Simos’ instructions for the newest versions of LXD. Anyhow, better integration with Let’s Encrypt seems necessary for the average user who wants to use LXD/LXC to host multiple websites.

I still think containers are great and will recommend them to fellow webmasters in the future.

P.S.: I’d gladly pay a commission to whoever would have a go at solving the problem for me.