I want to know if it's possible (or even good practice) to run a Rails app and Nginx in different Docker containers.
My intention is to eventually use one Nginx instance to serve more than one application, each running in its own container.
I ask because I will have to configure Nginx to access the root path of an application running in another container (my nginx.conf will contain: root /home/user/public_html/railsapp/public/;).
How can I set up my Rails Docker container so that the Nginx container is able to access the railsapp root path?
The question is whether your Rails application and Nginx will be two different processes or one.
If they are two, you will have the Rails app served by some application server and Nginx proxying it, and it is normal to run those in two different containers.
If Nginx itself will be serving your Rails app, there is no need to create a separate container: you can add the files to the Nginx container, or share them using volumes or data containers.
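A minimal sketch of the two-container variant, assuming the Rails image keeps its public/ folder at /app/public and the app server listens on port 3000 (both are assumptions, adjust to your image), could share that folder with the Nginx container through a named volume:

    version: "3"
    services:
      railsapp:
        build: .                          # assumed: your Rails image, app server listening on 3000
        volumes:
          - rails_public:/app/public      # the image's public/ folder seeds this named volume
      nginx:
        image: nginx:alpine
        ports:
          - "80:80"
        volumes:
          - rails_public:/var/www/railsapp/public:ro          # use this path as root in nginx.conf
          - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro    # proxy_pass dynamic requests to http://railsapp:3000
    volumes:
      rails_public:

Nginx then serves static assets straight from the shared volume and proxies everything else to the railsapp container over the compose network.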
Related
For example, to run Django in production I can use nginx, uwsgi, and supervisor.
I can have a single Dockerfile which installs all of them and runs supervisor,
or
I can probably have 3 Dockerfiles (nginx, uwsgi, supervisor) and one docker-compose file.
I've been using the first option and wonder if there's any benefit to using the second form.
I am not sure about the need for a supervisor container, but for uwsgi and Nginx the rule of thumb for containers is:
"Single process per container"
dockerfile_best-practices
So it is better to have three containers:
Nginx
uwsgi
Supervisor
If you want to keep supervisor just for the sake of managing the Nginx process, then it is better to drop it, as "updating the Docker image and launching a new container is better than restarting a process".
Both Nginx and uwsgi will run as the root process of their container; when there is an update, updating the image and launching a new container is common practice, and the health checks remain manageable.
Plus, you can run one Nginx alongside two application containers, since scaling and flexibility are greater when you have one process per container.
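A rough compose sketch of that layout (the service names, build paths and port are assumptions for illustration):

    version: "3"
    services:
      nginx:
        image: nginx:alpine
        ports:
          - "80:80"
        volumes:
          - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro   # proxy_pass to app1:8000 and app2:8000
        depends_on:
          - app1
          - app2
      app1:
        build: ./app1        # assumed uwsgi image, listening on 8000
      app2:
        build: ./app2        # second application container behind the same Nginx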
Given that you have nginx and uwsgi serving Django, I would recommend having two services in docker-compose:
uwsgi + supervisor
Nginx + supervisor
How does this help?
Given that uwsgi and nginx are the two major processes that determine the availability of your solution, splitting them this way ensures the following:
Separation of concerns, and the flexibility to use nginx for other purposes or solutions
Per-service healthchecks (by Docker) to surface precisely where the issue is in case of any failure
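For instance, per-service healthchecks could look roughly like this (the nginx check relies on busybox wget being present in nginx:alpine; the uwsgi build path, port and curl availability are assumptions):

    version: "3"
    services:
      nginx:
        image: nginx:alpine
        healthcheck:
          test: ["CMD-SHELL", "wget --spider -q http://localhost/ || exit 1"]
          interval: 30s
          timeout: 5s
          retries: 3
      uwsgi:
        build: ./app
        healthcheck:
          test: ["CMD-SHELL", "curl -f http://localhost:8000/ || exit 1"]   # assumes curl and port 8000 in the app image
          interval: 30s
          timeout: 5s
          retries: 3

Docker then marks each service healthy or unhealthy independently, so a failing uwsgi check does not mask an nginx problem and vice versa.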
I have two separate sites, each behind its own nginx, hosted on separate VPSes using Docker.
When I tried to run both nginx instances on the same VPS as separate Docker containers, it didn't work: the running container was overwritten by the newer one.
How can I host both nginx instances on the same Docker machine? Each one proxies (proxy_pass) to a separate app, but the nginx ports are the same, i.e. 80 & 443.
If you want to have two nginx containers, both listening to the same port, you can use Docker in swarm mode. It has a built-in load balancer which distributes the load between them. (Note that in this case, both nginx instances must come from the same image.)
Just use your current docker-compose file, but deploy it in swarm mode.
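A minimal sketch (the stack name and image are placeholders): with deploy.replicas, the swarm ingress routing mesh accepts connections on ports 80/443 on the host and balances them across the nginx replicas.

    # docker swarm init && docker stack deploy -c docker-compose.yml sites
    version: "3.3"
    services:
      nginx:
        image: nginx:alpine      # both replicas run from this same image
        ports:
          - "80:80"
          - "443:443"
        deploy:
          replicas: 2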
I have a docker-compose setup, where an nginx container is being used as a reverse-proxy and load balancer for the rest of the containers that make up my application.
I can spin up the application using docker-compose up -d and everything works great. Then, I can scale up one of my services using docker-compose up -d --scale auth=3, and everything continues to work fine.
The only issue is that nginx is not yet aware of the two new instances, so I need to manually restart the nginx process inside the running container using docker exec revproxy nginx -s reload, "revproxy" being the name of the nginx container.
That's fine and dandy; I don't mind running an extra command when I decide to scale out one of my services. The real issue, though, is when there is a container failure somewhere... nginx needs to know as soon as this happens so it stops sending traffic to the failed instance until the Docker engine is able to replace it with a healthy one.
With all that said, essentially I would like to accomplish what they are doing in the Traefik quickstart tutorial, except I would like to stick with nginx as my reverse-proxy.
While I personally think Traefik would be a real time saver in your case, there is another project which does what you want with nginx: jwilder/nginx-proxy.
It works by listening to Docker engine events: when containers are added or removed, it updates an nginx config using a template.
You could either use the jwilder/nginx-proxy Docker image as is, or make your own flavor using the jwilder/docker-gen project, which is the part that produces a file from a template and Docker engine events.
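A hedged sketch of the usual nginx-proxy wiring, reusing the auth service from your example (the VIRTUAL_HOST value and port are assumptions): the proxy watches the Docker socket, so containers started by docker-compose up -d --scale auth=3 (or removed on failure) are added to or dropped from the generated config automatically.

    version: "3"
    services:
      revproxy:
        image: jwilder/nginx-proxy
        ports:
          - "80:80"
        volumes:
          - /var/run/docker.sock:/tmp/docker.sock:ro   # lets the proxy react to engine events
      auth:
        build: ./auth
        environment:
          - VIRTUAL_HOST=auth.example.com    # assumed hostname routed to this service
          - VIRTUAL_PORT=3000                # assumed port the auth service listens on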
But again, I would recommend Traefik, for the time and trouble saved and for all the features that come with it (different load balancing strategies, healthchecks, circuit breakers, automatic SSL certificate setup with ACME/Let's Encrypt, ...).
You just need to write a service discovery script that looks for the updated list of containers every X interval and updates the nginx config accordingly.
What do you think is best practice for running multiple websites on the same machine: would you have a separate container set for each website/domain, or have all the sites in one container set?
Website 1, Website 2, Website 3:
nginx
phpfpm
mysql
or
Website 1:
nginx_1
phpfpm_1
mysql_1
Website 2:
nginx_2
phpfpm_2
mysql_2
Website 3:
nginx_3
phpfpm_3
mysql_3
I prefer to use separate containers for the individual websites but a single web server as a proxy. This allows you to reach all websites of the different domains on the same host ports (80/443). Additionally, you don't necessarily need to run multiple nginx containers.
Structure:
Proxy
nginx (listens to port 80/443)
Website 1
phpfpm_1
mysql_1
Website 2
phpfpm_2
mysql_2
...
You can use automated config generation for the proxy service, such as jwilder/nginx-proxy, which also opens the way to convenient SSL certificate handling with e.g. jrcs/letsencrypt-nginx-proxy-companion.
The proxy service can then look like this (a compose sketch follows the outline):
Proxy
nginx (listens to port 80/443)
docker-gen (creates an nginx config based on the running containers and services)
letsencrypt (creates SSL certificates if needed)
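A hedged compose sketch of that proxy together with one website; it uses the combined jwilder/nginx-proxy image rather than a separate docker-gen container, and all domain names, image tags and credentials are placeholders:

    version: "3"
    services:
      proxy:
        image: jwilder/nginx-proxy
        container_name: proxy
        ports:
          - "80:80"
          - "443:443"
        volumes:
          - /var/run/docker.sock:/tmp/docker.sock:ro
          - certs:/etc/nginx/certs
          - vhost:/etc/nginx/vhost.d
          - html:/usr/share/nginx/html
      letsencrypt:
        image: jrcs/letsencrypt-nginx-proxy-companion
        environment:
          - NGINX_PROXY_CONTAINER=proxy        # points the companion at the proxy container
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock:ro
          - certs:/etc/nginx/certs
          - vhost:/etc/nginx/vhost.d
          - html:/usr/share/nginx/html
      website_1:
        image: php:apache                      # placeholder: the site must answer HTTP for the proxy
        environment:
          - VIRTUAL_HOST=site1.example.com     # placeholder domain
          - LETSENCRYPT_HOST=site1.example.com
          - LETSENCRYPT_EMAIL=admin@example.com
      mysql_1:
        image: mysql:5.7
        environment:
          - MYSQL_ROOT_PASSWORD=changeme       # placeholder credentials
        volumes:
          - db1:/var/lib/mysql
    volumes:
      certs:
      vhost:
      html:
      db1:

Each additional website is just another pair of application and database services with its own VIRTUAL_HOST/LETSENCRYPT_HOST values; a bare php-fpm container speaks FastCGI and would need its own web server in front before the proxy can route to it.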
Well, if they are not related, I would definitely not put them in the same container set...
Even if you have one website with nginx and mysql, I would pull these two apart into separate containers. It simply gives you more flexibility, and that's what Docker is all about.
Good luck and have fun with Docker!
I would definitely isolate each of the applications.
Why?
If they don't rely on each other at all, making changes in one application can affect the other simply because they're running in the same environment, and you wouldn't want all of them to run into problems all at once.
I have a few Django apps that I want to host on a single Docker host running CentOS. I want to have 3 layers:
network
application
database
network: I want to have an nginx container in the network layer routing requests to the different containers in the application layer. I plan to use a 1:1 port mapping on this Docker container to expose port 80 on the host. Nginx will direct requests to the appropriate app in the application layer, running on ports 8001-8010.
application: I'll have several containers, each running a separate Django app using Gunicorn on a port in the 8001-8010 range.
database: one container running MySQL with a different database for each app. The MySQL container will have a data volume linked to it for persistence.
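Roughly, the layout I have in mind, written as a docker-compose style sketch (the image names, ports and credentials here are only placeholders):

    version: "2"
    services:
      nginx:                       # network layer
        image: nginx:alpine
        ports:
          - "80:80"                # 1:1 mapping of host port 80
        volumes:
          - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro   # proxies to app1:8001, app2:8002, ...
      app1:                        # application layer
        build: ./app1              # Django app served by Gunicorn on 8001
      app2:
        build: ./app2              # Django app served by Gunicorn on 8002
      db:                          # database layer
        image: mysql:5.7
        environment:
          - MYSQL_ROOT_PASSWORD=changeme   # placeholder credentials
        volumes:
          - dbdata:/var/lib/mysql          # data volume for persistence
    volumes:
      dbdata: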
I understand you can link containers, but as I understand it, linking relies on the order in which the containers are started, i.e., how can you link nginx to several containers when they haven't been started yet?
So my questions are:
How do I connect the network layer to the application layer when the number/names of containers in the application layer are always changing? I.e., I might bring a new application online/offline. How would I update the nginx config, and what would the addressing look like?
How do I connect the application layer to the database layer? Do I have to use Docker linking? In my Django application code I need the hostname of the database to connect to. What would I put as the hostname of the Docker container? Would it be able to resolve?
Is there a reference architecture I could leverage?
1.) Docker does not support dynamic linking, but there are some tools that can do this for you; see this SO question.
2.) You could start your database container first and then link all application containers to the database container. Docker creates the hosts file at boot up (statically; if your database container reboots and gets another IP, you need dynamic links, see above). When you link a container like this:
--link db:db
you can access the container with the hostname db.
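As a sketch (the app service and image tag are placeholders), with docker-compose the same idea looks like this; the Django settings would then simply use db as the database host:

    version: "2"
    services:
      db:
        image: mysql:5.7
        environment:
          - MYSQL_ROOT_PASSWORD=changeme   # placeholder credentials
      app:
        build: ./app
        links:
          - db:db              # legacy link; inside "app" the hostname "db" resolves to the db container
        environment:
          - DATABASE_HOST=db   # assumed variable read by the Django settings

On a user-defined or compose network the service name db already resolves via Docker's built-in DNS, so the explicit link is only needed for the legacy linking behaviour described above.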
I ended up using this solution:
https://github.com/blalor/docker-hosts
It allows you to refer to other containers on the same host by hostname. It is also dynamic, as the /etc/hosts file on each container gets updated as containers go up and down.