Running Docker on an Ubuntu 20 web server

I have a web server running Ubuntu 20 with nginx installed, and nginx is running well.
Now I want to install Docker and use the nginx on the host as a reverse proxy to forward subdomains to Docker containers.
At this point, I am not sure whether this has to be handled by the host-side nginx or inside a Docker container. When I try it with Docker, I always get a 401 response.
What is the best way to handle multiple domains on one server (pointed there by A records) with Docker?
Do I really need the host-side nginx, or is it not necessary?
What is the best practice for handling this?

Related

Building a dockerized reverse proxy for an environment I'm working on

I currently have a small cluster of WordPress services implemented with Docker, accessible through an nginx that uses vhosts, and the services are also reachable over the internet via duckdns.org. The nginx server is not in Docker but is installed on the machine, and I would like to know two things:
Is it advisable to move the server from the host into Docker and keep the whole architecture "dockerized"?
How can I implement this with the nginx server in Docker and get the same result?

Running multiple containerized applications using one IP address

I have a domain name, https://example.com, that points to a VPS on Amazon Lightsail. I have several applications I want to run. The apps are written in Vue.js and Spring, and I am using nginx as the web server.
The landing page is an app running on port 3000, served through a reverse proxy at the root of example.com on port 80.
I would like to run more apps at:
example.com/one, example.com/two and example.com/three, where one, two and three are applications each running inside a Docker container.
How would I go about configuring my apps this way, keeping in mind that the apps run separately inside Docker?
I highly suggest using Caddy for this type of setup.
Nginx is awesome and you could use that for the same purpose.
But for what you want to do, Caddy will work perfectly.
Just make sure to run each container on a different port.
Then use Caddy as a reverse proxy to each container:
https://medium.com/bumps-from-a-little-front-end-programmer/caddy-reverse-proxy-tutorial-faa2ce22a9c6
Let's say you have containers listening on ports 5000, 8800 and 9000; then your Caddyfile could look like this:

example.com {
    reverse_proxy /one* localhost:5000
    reverse_proxy /two* localhost:8800
    reverse_proxy /three* localhost:9000
}

(The matched path, e.g. /one/..., is passed through to the app; if an app expects to be served from /, use a handle_path block instead, which strips the prefix.)
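For completeness, a minimal sketch of publishing each app on its own host port (the image names and the internal port 3000 are made-up placeholders, not from the question):

docker run -d --name one -p 5000:3000 my-app-one      # app one on host port 5000
docker run -d --name two -p 8800:3000 my-app-two      # app two on host port 8800
docker run -d --name three -p 9000:3000 my-app-three  # app three on host port 9000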
Caddy is cool because it will also set up TLS via Let's Encrypt for you.
I didn't have time or a server to test this right now, but let me know if it works.
God bless :)
Docker itself can only map container ports to host ports; it cannot pick a container based on an HTTP path.
You need a reverse proxy (RP) for that.
You have two options:
Install the RP on the host
You can install the RP on your host machine. This has several advantages; for example, you can use certbot for automatic Let's Encrypt certificates, and you can still run as many Docker containers behind it as you like.
For this to work, you have to publish the containers' ports to your host machine.
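As an illustration, a hedged sketch of a host-side nginx server block for this option (the subdomain and port are placeholders; the container would be started with something like docker run -p 5001:80 ...):

server {
    listen 80;
    server_name app1.example.com;            # placeholder subdomain

    location / {
        proxy_pass http://127.0.0.1:5001;    # port the container publishes on the host
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

Running certbot --nginx afterwards can add the Let's Encrypt certificate to this server block.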
Use your dockerized nginx as the RP
You can also make your frontend nginx container the RP. Just put your Docker containers on a shared Docker network and add the RP config to that nginx.
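A minimal sketch of that RP config inside the nginx container, assuming a shared network created with docker network create web and an app container named my-app listening on port 3000 (the names and port are assumptions):

# /etc/nginx/conf.d/default.conf inside the nginx container
server {
    listen 80;
    server_name app.example.com;      # placeholder domain

    location / {
        # Docker's embedded DNS resolves container names on a shared user-defined network
        proxy_pass http://my-app:3000;
        proxy_set_header Host $host;
    }
}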

Remote HTTP Endpoint to Docker Application

I have a demo application running perfectly in my local environment. However, I would like to run the same application remotely by giving it an HTTP endpoint. My goal is to test the performance of the application.
How do I give an HTTP endpoint to a multi-container Docker application?
The following is the Github repository link for the demo application
https://github.com/LonareAman/BankCQRS.git
Use docker-compose and manage the containers based on what you need.
One of your containers should be a web server like nginx. Then bind a port on your machine to nginx, e.g. 80:80.
Then handle requests in nginx and proxy them to your other containers.
You can find some samples at https://testdriven.io/blog/dockerizing-django-with-postgres-gunicorn-and-nginx/
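A hedged docker-compose sketch of that layout (the service names, image and app port are assumptions, not taken from the linked repository or tutorial):

# docker-compose.yml
services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"                # bind the machine port to nginx
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on:
      - app
  app:
    build: .                   # the application itself; no port published to the host

The nginx.conf then proxies to the app service by name, e.g. proxy_pass http://app:8000; (or whatever port the app listens on).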

Docker best practices for managing websites

What do you think is best practice for running multiple websites on the same machine: would you have a separate container set for each website/domain, or all the sites in one container set?
Website 1, Website 2, Website 3:
nginx
phpfpm
mysql
or
Website 1:
nginx_1
phpfpm_1
mysql_1
Website 2:
nginx_2
phpfpm_2
mysql_2
Website 3:
nginx_3
phpfpm_3
mysql_3
I prefer to use separate containers for the individual websites but a single web server as a proxy. This allows you to reach all the websites, on their different domains, through the same host ports (80/443). Additionally, you don't need to run multiple nginx containers.
Structure:
Proxy
nginx (listens to port 80/443)
Website 1
phpfpm_1
mysql_1
Website 2
phpfpm_2
mysql_2
...
You can use automated config generation for the proxy service, such as jwilder/nginx-proxy, which also opens the way to convenient SSL certificate handling with e.g. jrcs/letsencrypt-nginx-proxy-companion.
The proxy service can then look like this:
Proxy
nginx (listens to port 80/443)
docker-gen (creates an nginx config based on the running containers and services)
letsencrypt (creates SSL certificates if needed)
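Put together with docker-compose, such a proxy setup could look roughly like this (the website service, image and domain are placeholders; VIRTUAL_HOST is the variable nginx-proxy actually reads):

# docker-compose.yml
services:
  proxy:
    image: nginxproxy/nginx-proxy                  # current home of jwilder/nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro   # lets the proxy discover running containers

  website1:
    image: my-website-1                            # placeholder; any container serving HTTP
    environment:
      - VIRTUAL_HOST=site1.example.com             # nginx-proxy generates a vhost from this

The Let's Encrypt companion is then added as one more service watching the same Docker socket, with a LETSENCRYPT_HOST variable on each website service.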
Well, if they are not related, I would definitely not put them in the same container...
Even if you have one website with nginx and mysql, I would pull those two apart. It simply gives you more flexibility, and that's what Docker is all about.
Good luck and have fun with Docker!
I would definitely isolate each of the applications.
Why?
If they don't rely on each other at all, changes made to one application can still affect the others simply because they run in the same environment, and you wouldn't want all of them to run into problems at once.

Run nginx in one container to proxy pass Gitlab in another container

I want to run multiple services, such as GitLab and Racktables, on the same host with HTTPS enabled, each in a different container. How can I achieve this?
You achieve this by running a reverse proxy (nginx or Apache) that forwards traffic to the different containers based on virtual hosts:
gitlab.foo.bar -> gitlab container
racktables.foo.bar -> racktables container
etc
The reverse proxy container maps ports 80 and 443 to the host. None of the other containers need port mappings, as all traffic goes through the reverse proxy.
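Done manually, the reverse proxy's nginx config holds one server block per hostname; a rough sketch (the container names and internal ports are assumptions, and the containers must share a Docker network with the proxy):

server {
    listen 80;
    server_name gitlab.foo.bar;
    location / {
        proxy_pass http://gitlab:80;        # GitLab container
        proxy_set_header Host $host;
    }
}
server {
    listen 80;
    server_name racktables.foo.bar;
    location / {
        proxy_pass http://racktables:80;    # Racktables container
        proxy_set_header Host $host;
    }
}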
I think the quickest way to get this working is to use jwilder/nginx-proxy. It's extremely newbie-friendly, as it automates almost everything for you. You can also learn a lot by looking at the generated config files inside the container. Even getting TLS to work is not that complicated, and you get a setup with an A+ rating from SSL Labs by default.
I've used this for my hobby projects for almost a year and it works great (with Let's Encrypt).
You can of course also manually configure everything, but it's a lot of work with so many pitfalls.
The really bad way to do this is to run the reverse proxy on the host and map lots of ports from all the containers to the host. Please don't do that.
