I have a domain name https://example.com that points to a VPS on Amazon Lightsail. I have several applications I want to run. Some of the apps are written in Vue.js and some in Spring, and I am using nginx as the web server.
The landing page is an app running on port 3000, served through a reverse proxy so that it appears at the root of example.com on port 80.
I would like to run the other apps like this:
example.com/one, example.com/two and example.com/three, where one, two and three are applications each running inside its own Docker container.
How would I go about configuring my apps this way, keeping in mind that the apps run separately inside Docker?
I highly suggest using Caddy for this type of setup.
Nginx is awesome and you could use that for the same purpose.
But for what you want to do, Caddy will work perfectly.
Just make sure to run each container on a different port.
Then use Caddy as a reverse proxy to each container:
https://medium.com/bumps-from-a-little-front-end-programmer/caddy-reverse-proxy-tutorial-faa2ce22a9c6
Let's say you have containers running on ports 5000, 8800 and 9000.
Then you could do:
example.com {
    reverse_proxy /one* localhost:5000
    reverse_proxy /two* localhost:8800
    reverse_proxy /three* localhost:9000
}
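For example, publishing each container on its own host port could look like this (image names and internal ports are just placeholders):

docker run -d -p 5000:80 app-one    # host 5000 -> container 80
docker run -d -p 8800:80 app-two    # host 8800 -> container 80
docker run -d -p 9000:80 app-three  # host 9000 -> container 80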
Caddy is cool because it will also set up SSL via Let's Encrypt.
I didn't have time or a server to test this now, but let me know if it works.
God bless :)
Docker itself can only map ports. It cannot route to a container based on an HTTP path.
You need a reverse proxy (RP).
You have two options:
Install RP on host
You can install the RP on your host machine. There are many pros: for example, you can use certbot for automatic Let's Encrypt certs, and you can still add more Docker containers later.
For this you have to publish the containers' ports to your host machine.
Use your dockerized nginx as RP
You can also use your frontend nginx as the RP. Just put your Docker containers in a Docker network and add the RP config to your nginx.
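A minimal sketch of what that path-based config could look like, assuming the app containers are named one, two and three, share a Docker network with nginx, and listen on port 80 internally:

location /one/ {
    proxy_pass http://one:80/;   # the trailing slash strips the /one/ prefix before proxying
}
location /two/ {
    proxy_pass http://two:80/;
}
location /three/ {
    proxy_pass http://three:80/;
}

Note that single-page apps usually also need to be built with a matching base path, otherwise their asset URLs will still point at the site root.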
I want to deploy an API service (ASP.NET) to a VPS.
What I have at the moment:
A VPS running Ubuntu 22.10.
A container with the API service, with an open HTTP port.
A MongoDB container.
A bridge network for communication between these containers.
A volume for storing the MongoDB collections.
A DNS subdomain configured to resolve to the VPS IP.
What I want:
To add nginx.
To add SSL (Let's Encrypt with certbot).
I don't want to use docker compose, because I want to understand how things work.
I'm not strong on terminology, but perhaps what I want to set up is called an nginx reverse proxy.
Please tell me if I understand correctly what I need to do.
Nginx:
To run a separate nginx container.
To put the nginx configuration in a Docker volume.
To add nginx to the bridge network (close the published ports on the API container, publish ports only on the nginx container).
To set up the nginx location configs so traffic is routed to the API internally over the bridge network.
SSL:
To install and run certbot on the VPS machine (not in a Docker container).
To enable automatic certificate renewal.
I'm not sure where I need to run certbot: on the VPS machine or in the nginx Docker container.
I also don't know how to configure nginx to work through the bridge network.
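To make the question concrete, my current guess looks like this (untested; the names and the API port are just what I would pick):

# the API joins the existing bridge network and publishes nothing to the host
docker run -d --name api --network my-bridge my-api-image

# nginx joins the same network and is the only container publishing ports
docker run -d --name nginx --network my-bridge -p 80:80 -p 443:443 -v nginx-conf:/etc/nginx/conf.d nginx

# and in the nginx config, proxy to the API by container name:
#   location / { proxy_pass http://api:8080; }

Is that roughly right?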
I have a Google Cloud VM which runs a Docker image. The Docker image runs a specific Java app on port 1024. I have pointed my domain's DNS to the VM's public IP.
This works, as I can go to mydomain.com:1024 and access my app, since Google Cloud directly exposes the Docker port as a public port. However, I want to access the app through https://mydomain.com (port 443), so basically map port 443 to port 1024 on my VM.
Note that my Docker image starts an nginx service. Previously I configured the Java app to run on port 443; the nginx service then listened on 443, Google Cloud exposed this HTTPS port, and everything worked fine. But I cannot use port 443 for my app anymore, for specific reasons.
Any ideas? Can I configure nginx somehow to map to this port? Or do I set up a load balancer to proxy the traffic (which seems rather complex, as this is all pretty new to me)?
PS: in Google Cloud you cannot use "docker run -p 443:1024 ...", which would basically do the same thing if I understand correctly, but the containerized VMs do not allow this.
Container Optimized OS maps ports one to one. Port 1000 in the container is mapped to 1000 on the public interface. I am not aware of a method to change that.
For your case, use Compute Engine with Docker or a load balancer to proxy connections.
Note: if you use a load balancer, your app does not need to manage SSL/TLS. Offload SSL/TLS to the load balancer and just publish HTTP within your application. Google can then manage your SSL certificate issuance and renewal for you. You will find that managing SSL certificates for containers is a deployment pain.
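For example, on a regular Compute Engine VM where you install Docker yourself, the mapping from your PS works as expected (the image name is a placeholder):

docker run -d -p 443:1024 my-java-app   # host port 443 -> container port 1024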
I use docker-compose stacks to run things on my personal VPS. Right now, I have a stack that's composed of:
Nginx (exposed port 443)
Ghost (blogging)
MySQL (for Ghost)
Adminer (for MySQL, exposed port 8080)
I wanted to try out Matomo analytics software, but I didn't want to add that to my current stack until I was happy with it, so I decided to create a second docker-compose stack for just the Matomo analytics:
Nginx (exposed port 444)
Matomo
MariaDB (for Matomo)
Adminer (for MariaDB, exposed port 8081)
With both stacks running, I can access everything at its appropriate port, but only by IP address. If I try to use my domain, it can only connect to the first Nginx, the one exposing port 443. If I try https://www.example.com:444 in my browser, it isn't able to connect. If I try https://myip:444 in my browser, it connects to the second Nginx instance exposing port 444, warning me that the SSL certificate has issues (since I'm connecting to my IP, not my domain), and then lets me through.
I was wondering if anyone knew why this behavior was happening. I'm admittedly new to setting up VPSs, using Nginx to proxy to other hosted services, and so on. If it turns out Nginx cannot be used this way, I'd love to hear recommendations on how else I could arrange this. (Do I have to run only one Nginx instance at a time and proxy through it to everything else?)
Thank you!
I was able to fix this by troubleshooting my Cloudflare setup. I posted this question while waiting for my domain to switch from Cloudflare's name servers to ones pointing at my VPS. Once that change went through, I tested again and the request did get through at https://example.com:444, which proved it was Cloudflare blocking me.
I found this page, which explained that the free Cloudflare plan only supports a small set of ports, and port 444 is not one of them. If I upgraded to a Pro plan, I would have that option.
Therefore, I can conclude that the solution to my problem is either to upgrade my Cloudflare plan or to merge the two docker-compose stacks so that everything accepts requests on just port 443.
Is it possible to have two Docker containers serve on port 80 but on different subdomains or hostnames?
Something like:
api.example.com goes to a node application
app.example.com goes to a Java application
Yes, you can, using a proxy.
There is a project, jwilder/nginx-proxy, which lets you specify the hostname for each container via an environment variable and then routes requests to the appropriate container.
A good example of this implemented is given here: https://blog.florianlopes.io/host-multiple-websites-on-single-host-docker/
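Roughly, that setup could look like this (image names are placeholders; nginx-proxy watches the Docker socket and generates its config automatically):

docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
docker run -d -e VIRTUAL_HOST=api.example.com my-node-api
docker run -d -e VIRTUAL_HOST=app.example.com my-java-app

If a container exposes more than one port, set VIRTUAL_PORT to tell the proxy which one to use.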
Not directly, no. The first container you publish on port 80 will have exclusive access to that host port, and if you try to start a second container on the same port it will fail.
Instead, use a load balancer such as Nginx or Traefik to handle the incoming traffic to port 80 and proxy it on to your two app containers based on host headers.
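A rough sketch of that host-based routing with plain Nginx (container names and internal ports are assumptions):

server {
    listen 80;
    server_name api.example.com;
    location / {
        proxy_pass http://node-app:3000;   # Node container on a shared Docker network
        proxy_set_header Host $host;
    }
}

server {
    listen 80;
    server_name app.example.com;
    location / {
        proxy_pass http://java-app:8080;   # Java container on the same network
        proxy_set_header Host $host;
    }
}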
What do you think is best practice for running multiple websites on the same machine? Would you have a separate set of containers for each website/domain, or put all the sites in one set of containers?
Website 1, Website 2, Website 3:
nginx
phpfpm
mysql
or
Website 1:
nginx_1
phpfpm_1
mysql_1
Website 2:
nginx_2
phpfpm_2
mysql_2
Website 3:
nginx_3
phpfpm_3
mysql_3
I prefer to use separate containers for the individual websites but a single web server as a proxy. This allows you to serve all the websites, on their different domains, from the same host ports (80/443). Additionally, you don't necessarily need to run multiple nginx containers.
Structure:
Proxy
nginx (listens to port 80/443)
Website 1
phpfpm_1
mysql_1
Website 2
phpfpm_2
mysql_2
...
You can use automated config generation for the proxy service, such as jwilder/nginx-proxy, which also opens the way to convenient SSL certificate handling with e.g. jrcs/letsencrypt-nginx-proxy-companion.
The proxy service can then look like this:
Proxy
nginx (listens to port 80/443)
docker-gen (creates an nginx config based on the running containers and services)
letsencrypt (creates SSL certificates if needed)
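As a sketch of how that proxy stack is commonly wired up (volume names are placeholders; check both projects' READMEs for the exact volume layout):

# the proxy publishes 80/443 and watches the Docker socket
docker run -d --name nginx-proxy -p 80:80 -p 443:443 \
  -v certs:/etc/nginx/certs -v vhost:/etc/nginx/vhost.d -v html:/usr/share/nginx/html \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  jwilder/nginx-proxy

# the companion watches for containers with LETSENCRYPT_HOST and issues/renews certificates
docker run -d --name letsencrypt-companion \
  --volumes-from nginx-proxy \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  jrcs/letsencrypt-nginx-proxy-companion

# each website container only needs environment variables
docker run -d --name website1 \
  -e VIRTUAL_HOST=site1.example.com \
  -e LETSENCRYPT_HOST=site1.example.com \
  -e LETSENCRYPT_EMAIL=admin@example.com \
  my-website1-image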
Well, if they are not related, I would definitely not put them in the same machine...
Even if you have just one website with nginx and MySQL, I would pull those two apart into separate containers. It simply gives you more flexibility, and that's what Docker is all about.
Good luck and have fun with Docker!
I would definitely isolate each of the applications.
Why?
If they don't rely on each other at all, making changes in one application can affect the others simply because they're running in the same environment, and you wouldn't want all of them to run into problems at once.