Dear devs,
first of all, congrats on your fantastic work on LXD. I find it much more intuitive for my server than virtual hosts. I’m trying to run 11 websites as containers on my DigitalOcean VPS (“droplet”), all of them with Xenial images. I’m using LXD 3.0 installed via snap.
I’ve followed @simos’s tutorials (thanks Simos!) both to install LXD and then to get SSL certificates on the two containers I’m using as tests. First, I had to abandon the recommended block storage because I couldn’t find where my containers were mounted, which is a required step in the SSL tutorial. I tried again leaving dir as the file system and no block storage; now I can find my containers, but things are still not working.
Would anybody please take a look at my haproxy config file to spot trouble? I’ve simplified it by leaving only info about my test containers “web1” and “web9”. Thanks!
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

    # Default ciphers to use on SSL-enabled listening sockets.
    # For more information, see ciphers(1SSL). This list is from:
    # https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
    ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:EC$
    ssl-default-bind-options no-sslv3

    # Minimum DH ephemeral key size. Otherwise, this size would drop to 1024.
    # @link: https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#$
    tune.ssl.default-dh-param 2048

defaults
    log global
    mode http
    option httplog
    option dontlognull
    option forwardfor
    option http-server-close
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

frontend www_frontend
    bind *:80 # Bind to port 80 (www) on the container
    bind *:443 ssl crt /etc/haproxy/certs/
    redirect scheme https if !{ ssl_fc }
    reqadd X-Forwarded-Proto:\ https
    acl secure dst_port eq 443
    rsprep ^Set-Cookie:\ (.*) Set-Cookie:\ \1;\ Secure if secure
    rspadd Strict-Transport-Security:\ max-age=31536000 if secure
    acl host_web1 hdr(host) -i imburana.elivieira.com www.imburana.elivieira.com
    acl host_web9 hdr(host) -i justica.social www.justica.social
    use_backend web1_cluster if host_web1
    use_backend web9_cluster if host_web9

backend web1_cluster
    balance leastconn
    http-request set-header X-Client-IP %[src]
    server web1 web1.lxd:80 check

backend web9_cluster
    balance leastconn
    http-request set-header X-Client-IP %[src]
    server web9 web9.lxd:80 check
Can you tell us at which step you get an error, and what error you get? Please paste the message here.
This guide talks about using Let’s Encrypt with --authenticator webroot. That method needs access to the website files, so it’s messy and error-prone. The alternative is to use --standalone --preferred-challenges http-01 instead. In this case, you need to briefly allow Let’s Encrypt to accept connections on port 80 so that it can perform the authentication. That is, you run the Let’s Encrypt command in a way that sets up a temporary web server on port 80, which can then respond appropriately to the incoming verification connection from the Let’s Encrypt servers.
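For reference, the standalone method could look roughly like this (a sketch only; example.com is a placeholder domain, and whatever is currently bound to port 80, such as HAProxy, must be stopped for the few seconds certbot runs):

```shell
# Free up port 80 so certbot's temporary web server can bind to it.
sudo systemctl stop haproxy

# Request a certificate using the standalone HTTP-01 challenge.
# example.com / www.example.com are placeholder domains.
sudo certbot certonly --standalone --preferred-challenges http-01 \
    -d example.com -d www.example.com

# Bring the proxy back up.
sudo systemctl start haproxy
```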
I don’t get any error message per se, but I get a warning when, from inside the haproxy container, I run
/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg
[WARNING] 141/212752 (4272) : parsing [/etc/haproxy/haproxy.cfg:49] : a ‘reqadd’ rule placed after a ‘redirect’ rule will still be processed before.
Configuration file is valid
I also tested whether it was the Ubuntu firewall blocking access to port 443, but with rules allowing both 80 and 443, both inside and outside the container, nothing changes. What’s more, even HTTP access is broken now, while it was working before.
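(For reference, assuming ufw is the firewall in use, the rules described above would look something like this sketch:)

```shell
# Allow HTTP and HTTPS through the Ubuntu firewall, then confirm.
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw status verbose
```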
The warning says that even though these two lines have been placed in this order, HAProxy will still process the reqadd line first and the redirect line afterwards. You can silence this warning by simply swapping the order of the two lines.
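That is, the frontend can be written in the order HAProxy actually processes the rules (a sketch based on the config pasted above):

```
frontend www_frontend
    bind *:80
    bind *:443 ssl crt /etc/haproxy/certs/
    # 'reqadd' written before 'redirect', matching HAProxy's
    # actual processing order, so the warning goes away.
    reqadd X-Forwarded-Proto:\ https
    redirect scheme https if !{ ssl_fc }
```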
If you look at my post, it says
frontend www_frontend
    # We bind on port 80 (http) but (see below) get HAProxy to force-switch to HTTPS.
    bind *:80
    # We bind on port 443 (https) and specify a directory with the certificates.
    #### bind *:443 ssl crt /etc/haproxy/certs/
    # We get HAProxy to force-switch to HTTPS, if the connection was just HTTP.
    #### redirect scheme https if !{ ssl_fc }
The lines with #### should be uncommented only once you have set up HTTPS properly. If you uncomment them earlier, all HTTP connections will be auto-upgraded to HTTPS connections even when no certificates are available.
When you add a new website, you need to comment these two lines out again for the short duration of obtaining the certificates.
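So the cycle for adding a new website would be roughly this (a sketch; editing haproxy.cfg by hand is exactly what is meant, the commands only cover the reloads):

```shell
# 1. Comment out the '#### bind *:443 ...' and '#### redirect scheme https ...'
#    lines in /etc/haproxy/haproxy.cfg, then reload so port 80 serves plain HTTP.
sudo systemctl reload haproxy

# 2. Obtain the certificate for the new site (webroot or standalone method).

# 3. Uncomment the two lines again and reload once more.
sudo systemctl reload haproxy
```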
Hi Simos,
I did it again as you recommended. The haproxy config file is declared valid, no warning this time. I generated the certificates in the containers’ web roots and then placed them in /etc/haproxy/certs in the haproxy container, following the instructions in your tutorial.
But it’s still not working. I can see that when I type the domains into my browser they are redirected to https; however, no connection is established.
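(One detail worth double-checking at this step: HAProxy expects each file in /etc/haproxy/certs/ to contain the certificate chain and the private key concatenated into a single .pem. A sketch, with example.com as a placeholder domain and the default Let’s Encrypt paths assumed:)

```shell
DOMAIN=example.com
# Concatenate the Let's Encrypt chain and private key into one .pem for HAProxy.
sudo cat /etc/letsencrypt/live/$DOMAIN/fullchain.pem \
         /etc/letsencrypt/live/$DOMAIN/privkey.pem \
    | sudo tee /etc/haproxy/certs/$DOMAIN.pem > /dev/null
# Keep the private key unreadable to other users, then reload HAProxy.
sudo chmod 600 /etc/haproxy/certs/$DOMAIN.pem
sudo systemctl reload haproxy
```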
Still not working, unfortunately. I simply took this from your tutorial on DigitalOcean and replaced port 80 with 443. (The IP addresses are also correct.)
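(For reference, the port-443 forwarding rule mirroring the tutorial’s port-80 one would look something like this sketch; eth0 and the container address are placeholders:)

```shell
# Forward incoming HTTPS traffic on the host to the haproxy container.
# eth0 is the host's public interface; 10.10.10.2 is a placeholder
# for the container's IPv4 address.
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 \
    -j DNAT --to-destination 10.10.10.2:443
```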
Here’s the output:
root@web1:~# curl --verbose web1.lxd
* Rebuilt URL to: web1.lxd/
* Trying fd42:fa9b:bfc5:c6dc:216:3eff:feae:7c39...
* Connected to web1.lxd (fd42:fa9b:bfc5:c6dc:216:3eff:feae:7c39) port 80 (#0)
> GET / HTTP/1.1
> Host: web1.lxd
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.10.3 (Ubuntu)
< Date: Thu, 24 May 2018 14:04:40 GMT
< Content-Type: text/html
< Content-Length: 654
< Last-Modified: Sun, 20 May 2018 19:14:55 GMT
< Connection: keep-alive
< ETag: "5b01c92f-28e"
< Accept-Ranges: bytes
<
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx on LXD container web1!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx on LXD container web1!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
* Connection #0 to host web1.lxd left intact
You have already performed the final step of running curl from your PC, and it did not work.
I noticed that the containers show up with their IPv6 addresses. Can you check that they have IPv4 addresses as well? The iptables command was only for IPv4.
Yes, lxc list shows all of the containers with both IPv4 and IPv6 addresses. IPv6 was not intended (as per your DigitalOcean tutorial), but was provided automatically by LXD 3.0.
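(For reference, a compact way to check this is to select only the relevant columns, where 4 and 6 are the IPv4/IPv6 address columns per lxc list --help:)

```shell
# Show container name, state, IPv4 and IPv6 addresses only.
lxc list -c ns46
```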
Here’s curl’s output outside containers as you asked:
Try the curl command again. If you pasted the command, it picked up a UTF-8 character, which shows in the output you posted above. That is, type the hostname again; the error was with the .com part.
I couldn’t solve the problem. It seems I’ll have to publish my websites without SSL and hope I can solve this later.
If I may make a suggestion: the non-expert containers community needs an updated tutorial on how to install and use LXD 3.0 containers with HTTPS. My main problems trying to do it were:
Choosing a filesystem
The standard zfs filesystem clashes with popular applications like Discourse. I found the suggestion to use btrfs as the filesystem, on account of some memory problems brought about by zfs, but I didn’t find it trivial to follow. I eventually gave up trying to run Discourse inside an LXC container and bought a ready-made DigitalOcean Droplet (VPS) running it.
TLS/SSL certificates
The elusive problem outlined in this topic remains. It may well be my fault, but I followed Simos’ tutorials more than twice. Maybe the problem is the version of LXD these tutorials referred to? I couldn’t find anything as didactic as Simos’ instructions for the newest versions of LXD. In any case, better integration with Let’s Encrypt seems necessary for the average user who wants to use LXD/LXC to host multiple websites.
I still think containers are great and will recommend them to fellow webmasters in the future.
P.S.: I’d gladly pay a commission to whoever would have a go at solving the problem for me.