How to dynamically route HTTPS traffic to LXD instances

The number of containers is increasing every day

and you can’t keep adding each newly created container to nginx.conf by hand.

Can’t this be automated?

Regards.

This can be automated with a simple script.
Can you show the nginx configuration?
Regards.


Nginx has a “resolver” directive (Module ngx_http_core_module).

What will happen if I point nginx at the lxdbr0 DNS server in nginx.conf?

It is also used here (https://sleeplessbeastie.eu/2021/05/24/how-to-dynamically-route-https-traffic-to-lxd-instances/)

Example

resolver 10.97.179.1;

Can it help us?
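As far as I understand, the resolver only matters when the proxy_pass target contains a variable: a literal hostname is resolved once when nginx starts, while a variable hostname is resolved per request through the configured resolver, so new containers are picked up without a reload. A minimal sketch of that idea, assuming containers resolve as <name>.lxd via the lxdbr0 DNS and a wildcard DNS name like *.example.com pointing at the host (both are just examples):

server {
    listen 80;
    # DNS server on lxdbr0 (address is an example)
    resolver 10.97.179.1;
    # the first label of the request host picks the container, e.g. web1.example.com -> web1
    server_name ~^(?<container>\w+)\.example\.com$;

    location / {
        # variable hostname, so it is resolved per request via the resolver above
        proxy_pass http://$container.lxd:80$request_uri;
    }
}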

The problem is that such a script may not be reliable when containers are migrated to another host.

Regards.

PS: I am using the nginx.conf you sent me

No, you have to keep a separate configuration file for each site. You can define the container IP addresses statically in nginx with proxy_pass http://<container_IP>:80;.
Please set up a simple test environment and try it that way.
Put every configuration in its own file in the /etc/nginx/conf.d directory.
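For example, a per-container file such as /etc/nginx/conf.d/app1.conf could look roughly like this (the container IP address and server name are placeholders for your own values):

server {
    listen 80;
    server_name app1.example.com;

    location / {
        # static IP of one LXD container on lxdbr0 (placeholder)
        proxy_pass http://10.97.179.10:80;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}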

Regards.

I have it already configured both in the test environment and in production (using apache2)

but I need something like Traefik for LXD,

so that it can automatically discover my containers through the LXD API and reverse proxy to them.
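As far as I understand, the information such a tool needs is already available from the LXD API over its local unix socket. For example (this assumes the pre-snap socket path; older LXD versions expose /1.0/containers instead of /1.0/instances):

# list the instances LXD knows about, straight from the API socket
curl -s --unix-socket /var/lib/lxd/unix.socket lxd/1.0/instances

# the same query via the lxc client
lxc query /1.0/instances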

Isn’t there a good solution for this? I think this is a very interesting topic and it would help a lot of people.

Regards.

PS: You have been helping me for many days, thank you very much, I don’t know how to thank you

Gobetween supports LXD out of the box for the use case you’re asking about (GitHub - GoBetween, a load balancer for the cloud); it supports both local & remote LXD servers.

Gobetween is now in maintenance mode though, so someone should probably write a generic program that does the same for haproxy / nginx / apache / whatever else is in use these days.


Can this help us?

For example, writing this in nginx.conf:

unix://var/lib/lxd/unix.socket

It is very interesting! (I think this is what I need!)

Can Gobetween be used on Alpine Linux?

And is it possible to use the LXD API + Gobetween for a dynamic reverse proxy? (If so, how?)

Probably, look at the docs

Yes, that’s what it’s for; look at the docs. I don’t have a config available, I’m afraid.


Excellent!

I’m still trying to figure out this solution.

If I get something, I’ll be sure to post it!

Hi @Ibragim_Ganizade, sorry for the late response. For the setup in your diagram, the best and easiest solution looks like haproxy. You can reorganize it for your needs.
https://stackoverflow.com/questions/20606544/haproxy-url-based-routing-with-load-balancing
Regards.


Hi! @cemzafer I am glad to hear from you at any time!

How can I solve this problem?

I’ll post the solution with detailed documentation (if I can integrate with lxd)

PS: it may take several days

Regards.

Hello everyone!

I tried the solution, but there are problems.

The installation did not require much effort

Install gobetween

Stable release

  1. Download the latest stable release for your host platform
  2. Add the gobetween binary to your $PATH.
  3. Run gobetween to get going.
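Roughly, on a Linux host those three steps amount to something like the following (the release file name here is a placeholder; check the releases page for the exact file for your platform):

# 1. download and unpack a release build (file name is a placeholder)
wget https://github.com/yyyar/gobetween/releases/download/<version>/gobetween_<version>_linux_amd64.tar.gz
tar -xzf gobetween_<version>_linux_amd64.tar.gz

# 2. put the binary on $PATH
install -m 0755 gobetween /usr/local/bin/gobetween

# 3. run it against a config file
gobetween -c /etc/gobetween/gobetween.toml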

I used this documentation for configuring the proxy to my LXD instances (https://gobetween.io/documentation.html#lxd),

but I’m sure I’m doing something wrong.

  1. The left terminal shows gobetween running.
  2. The middle terminal shows the LXD container.
  3. The right terminal shows the gobetween config file.


I think that we need to join forces and solve the problem, as this will be more effective! Thank you very much for your attention!

Regards.

PS: I really liked gobetween and I want to try it with LXD

Hi @Ibragim_Ganizade, I haven’t investigated your configuration in detail, but I have put together a simple configuration for your needs as explained in your diagram.
Here is the config.

  1. Install the haproxy on the host.
  2. Create lxd instances/containers.
  3. For test purposes, install nginx or any service to test. (Here I used nginx.)

Put this in the host haproxy configuration (/etc/haproxy/haproxy.cfg), and restart the service.

defaults
   mode http
   timeout connect 5s
   timeout client  60m
   timeout server  60m

listen stats
   bind *:8181
   mode http
   stats enable
   stats uri /stats
   stats realm Haproxy\ Statistics
   stats auth admin:admin

frontend haentry
   bind <host ip address>:80
   mode http

   acl app1 path_end -i /app1/123
   acl app2 path_end -i /app2/123
   acl app3 path_end -i /app3/123

   use_backend srvs_app1 if app1
   use_backend srvs_app2 if app2
   use_backend srvs_app3 if app3

   default_backend srvs_app3

backend srvs_app1
   http-request set-path %[path,regsub(^/app1/123,/)]
   server c1 <container_one_IP_address:80> check

backend srvs_app2
   http-request set-path %[path,regsub(^/app2/123,/)]
   server c2 <container_two_IP_address:80> check

backend srvs_app3
   http-request set-path %[path,regsub(^/app3/123,/)]
   server c3 <container_three_IP_address:80> check

In your case Alpine Linux, or any other container:
apk add nginx
rc-update add nginx default
rc-service nginx start
mkdir -p /var/www/app3
cd /var/www/app3
echo "This is a test page. APP3." > index.html

Then, in the container’s nginx configuration, change the default configuration a little bit in the /etc/nginx/http.d directory as below. After the configuration changes, reload with nginx -s reload and test it.

# This is a default site configuration which will simply return 404, preventing
# chance access to any other virtualhost.

server {
        listen 80 default_server;
        listen [::]:80 default_server;

        root /var/www/app3;

        # Everything is a 404
        location / {
                index index.html;
#               return 404;
        }

        # You may need this to prevent return 404 recursion.
        location = /404.html {
                internal;
        }
}
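To check the path-based routing end to end, a request against the host should now land on the app3 container (the host address is the same placeholder used in the frontend section above):

curl http://<host ip address>/app3/123
# expected output: This is a test page. APP3.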

P.S. You can watch your requests on haproxy as stated in the configuration on port 8181.
Regards.


Hi! @cemzafer

Wow!

You actually wrote a cool configuration for HAProxy

But there is a problem.

You still have to add containers to the configuration manually every time.

I believe that gobetween is what we need (https://gobetween.io/index.html).

Shall we try this new solution? (I think it will be good experience for building modern microservice architectures.)

If you help me, we will do it together!
Thank you very much for sticking with me!
I don’t know how to thank you

Regards.

Actually no. :wink: As I mentioned before, when you launch a container you can trigger a small script that adds/organizes the configuration automagically; a rough sketch follows below. The concept is the same. If I follow you correctly, the backend runs small Python-like services, right?
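Something along these lines, for example. It is only a rough sketch that regenerates nginx vhosts from what LXD reports, assuming the lxc client, jq and nginx live on the host, that the containers use eth0, and that the domain and output path are adjusted to your environment:

#!/bin/sh
# Regenerate one nginx vhost per running LXD container, then reload nginx.
# Container names and IPv4 addresses are read from the LXD API via `lxc list`.
OUT=/etc/nginx/conf.d/lxd-containers.conf

lxc list --format json |
  jq -r '.[]
         | select(.status == "Running")
         | .name + " " + (.state.network.eth0.addresses[] | select(.family == "inet") | .address)' |
while read -r name ip; do
  cat <<EOF
server {
    listen 80;
    server_name ${name}.example.com;
    location / {
        proxy_pass http://${ip}:80;
    }
}
EOF
done > "$OUT"

nginx -t && nginx -s reload

You could run it from cron or right after lxc launch; the same idea works for haproxy, only the generated block changes.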
Regards.


No, I don’t use Python; actually I’m not a developer.

Could you show me a more complete script that could solve this problem?

Thanks :slight_smile:

Regards.

PS: I still like gobetween and want to use it with your help of course :slight_smile:

I found gobetween ui (https://github.com/yyyar/gobetween-ui)

I’m wondering how to install it.

I didn’t find any instructions

I wrote a question in their Telegram group.

Hi,

the first link that you posted is the right solution for you; you just need to get the nginx config right. I tried it and it was working well.
I was looking at gobetween; however, from the documentation, the LXD configuration is for load balancing on a specific port (e.g. 80, or 12345 in the default configuration), and not for the solution that you expect.

server {
    listen 80;
    listen 443 ssl;

    resolver 10.243.93.1;

    ssl_certificate_key /etc/nginx/ssl/nginx.key;
    ssl_certificate /etc/nginx/ssl/nginx.crt;

    server_name ~^(?<container>\w+).localhost$;

    #if ($scheme != "https") {
    #    rewrite ^ https:// permanent;
    #}

    location / {
        proxy_pass http://$container.lxd:80$request_uri;
        #proxy_set_header Host $container.fishsilentcruise.space;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

Then I tried it with curl from the LXD host:

curl web1.localhost

Welcome to nginx! at WEB1

curl web2.localhost

Welcome to nginx! at WEB2


Hi! @Miso-K

I will write back once I have checked it.

Many thanks for the help!

Regards