Basically, I'm looking for something like nginx -t: a check where nothing can break.
I have an NGINX proxy running in Docker that is attached to several other Docker networks (web applications). Sometimes I need to restart the proxy, but if one of the connected networks is no longer reachable from the proxy, the proxy fails to restart, even though it was running fine before the restart.
Is there a way to restart Docker containers without risking downtime caused by this?
A dry-run feature for docker-compose is, for the time being, an open feature request.
I think a better approach is to make the Nginx configuration more robust:
Ref: make nginx ignore site config when its upstream cannot be reached
Do not hand tasks that belong to nginx over to Docker; fix this in your nginx.conf instead!
The right way to resolve your "issue" is to use upstreams.
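The linked answer has the details; for reference, a related pattern that is often used in Docker setups (a sketch of the general idea with placeholder names, not necessarily the exact config from that answer) is to resolve the backend at request time through Docker's embedded DNS, so nginx can start even while one backend network is unreachable:

server {
    listen 80;
    server_name app.example.com;          # placeholder vhost

    # Docker's embedded DNS; re-resolve every 30s instead of once at startup
    resolver 127.0.0.11 valid=30s;
    set $upstream_app http://app:8080;    # "app" is a placeholder service name

    location / {
        proxy_pass $upstream_app;         # a variable here defers resolution to request time
        proxy_set_header Host $host;
    }
}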
I'm trying to dockerize two .NET console applications, where one depends on the other.
When I run the first container, I need it to start another container on the host, pass a parameter to its stdin, and wait for it to finish the job and exit.
What will be the best solution to achieve this?
Running a container inside a container seems like a bad solution to me.
I've also thought about running another process with a web server (nginx or something) on the host that receives the request over HTTP and executes a docker run command on the host, but I'm sure there is a better solution than this (that way the web server would just run on the host and not inside a container).
There is also this solution but it seems to have major security issues.
I've also tried using the Docker.DotNet library, but it does not help with my problem.
Any ideas for the best solution?
Thanks in advance.
EDIT:
I will be using Docker Compose, but the problem is that the second container is not running and listening at all times; like the hello-world container, it is called, performs its job, and exits.
EDIT2: FIX
I've implemented Redis as a message broker to communicate between the different services. While this changed the requirements a little (the containers now always run and listen to Redis), it helped me solve the issue.
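For anyone hitting the same problem, the resulting layout looks roughly like this (a sketch only; service and image names are made up):

services:
  redis:
    image: redis:7
  producer:
    image: my-producer        # pushes jobs onto a Redis list/queue
    depends_on:
      - redis
  worker:
    image: my-worker          # stays up, blocks on the queue (e.g. BLPOP), processes jobs
    depends_on:
      - redis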
I am not sure I understand correctly what you need to do, but if you simply need to start two containers in parallel, the simplest way I can think of is docker-compose.
Another way is with a Python or bash/bat script that launches the containers independently (in Python you can either use the Docker API or do it manually with the subprocess module). This also lets you do other things, like writing to the stdin of one container, as you mentioned.
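For instance, writing to the container's stdin and waiting for it to exit can be done from a plain shell script, since docker run -i keeps stdin open and docker run only returns when the container stops (the image name here is a placeholder):

#!/bin/sh
# Feed one parameter to the worker's stdin and block until the container exits.
PARAM="$1"
echo "$PARAM" | docker run --rm -i worker-image
echo "worker exited with status $?"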
I am looking for a way to get one of my docker containers to re-start with a delay (after a localhost restart).
What I currently have is:
Computer which runs docker desktop.
Docker has two containers: Webserver and MySQL (which serves data to the Webserver).
Both of the containers have the --restart=always option, which allows them to restart when I restart the computer.
My issue: after a computer restart, Webserver does not seem to work properly, unless I specifically manually restart it.
My guess is that I need to give MySQL some time to boot up before I start the Webserver.
I was thinking of maybe setting up a bash script or looking into Compose (https://docs.docker.com/compose/startup-order/), but since I am quite new to this, I wanted to double-check whether I missed something and whether there is a more elegant way to approach this.
You should use Compose and specify that your webserver depends_on MySQL, so that the webserver container starts after the DB is up.
Ideally, you should also make your webserver resilient to unavailability of its dependencies.
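A minimal sketch of the Compose side, assuming a Compose version recent enough to support depends_on with a condition, and with the healthcheck command/credentials adjusted for your MySQL image:

services:
  db:
    image: mysql:8
    restart: always
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]   # adjust for your setup
      interval: 5s
      retries: 10
  webserver:
    image: my-webserver        # placeholder image name
    restart: always
    depends_on:
      db:
        condition: service_healthy    # wait until the DB healthcheck passes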
I have a docker-compose setup, where an nginx container is being used as a reverse-proxy and load balancer for the rest of the containers that make up my application.
I can spin up the application using docker-compose up -d and everything works great. Then, I can scale up one of my services using docker-compose up -d --scale auth=3, and everything continues to work fine.
The only issue is that nginx is not yet aware of the two new instances, so I need to manually reload the nginx configuration inside the running container using docker exec revproxy nginx -s reload ("revproxy" being the name of the nginx container).
That's fine and dandy, I don't mind running an extra command when I decide to scale out one of my services. The real issue though is when there is a container failure somewhere... nginx needs to know as soon as this happens to stop sending traffic to the failed instance until the Docker engine is able to replace it with a healthy one.
With all that said, essentially I would like to accomplish what they are doing in the Traefik quickstart tutorial, except I would like to stick with nginx as my reverse-proxy.
While I personally think Traefik would be a real time saver in your case, there is another project which does what you want with nginx: jwilder/nginx-proxy.
It works by listening to Docker engine events: when containers are added or removed, it regenerates an nginx config from a template.
You can either use the jwilder/nginx-proxy Docker image as is, or make your own flavor with the jwilder/docker-gen project, which is the part that produces a file from a template and Docker engine events.
But again, I would recommend Traefik, for the time and trouble saved and for all the features that come with it (different load balancing strategies, health checks, circuit breakers, automatic SSL certificate setup with ACME/Let's Encrypt, ...).
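For illustration, applied to your setup the nginx-proxy route would look roughly like this (a sketch with placeholder names; the project's README is the authoritative reference). nginx-proxy load-balances across all containers sharing a VIRTUAL_HOST, so docker-compose up -d --scale auth=3 is picked up without a manual reload:

services:
  revproxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      # read-only access to the Docker socket, so it can watch container events
      - /var/run/docker.sock:/tmp/docker.sock:ro
  auth:
    image: my-auth-service     # placeholder
    environment:
      - VIRTUAL_HOST=auth.example.com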
You just need to write a service-discovery script that checks for an updated list of containers every X interval and updates the nginx config accordingly.
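A rough shell sketch of that idea (the paths, the "auth"/"revproxy" names, and the rendering step are all placeholders):

#!/bin/sh
# Poll the Docker engine, and reload nginx whenever the set of "auth" containers changes.
while true; do
  docker ps --filter "name=auth" --format '{{.Names}}' | sort > /tmp/upstreams.new
  if ! cmp -s /tmp/upstreams.new /tmp/upstreams.current 2>/dev/null; then
    mv /tmp/upstreams.new /tmp/upstreams.current
    # ...render the nginx upstream config from /tmp/upstreams.current here...
    docker exec revproxy nginx -s reload
  fi
  sleep 10
done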
I want to run multiple services, such as GitLab and Racktables, on the same host with HTTPS enabled, each in its own container. How can I achieve this?
You achieve this by running a reverse proxy (nginx or apache) that forwards traffic to the different containers using different virtualhosts.
gitlab.foo.bar -> gitlab container
racktables.foo.bar -> racktables container
etc
The reverse proxy container will map ports 80 and 443 to the host. None of the other containers need port mappings, as all traffic goes through the reverse proxy.
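As a minimal nginx sketch of those virtual hosts (assuming the proxy container shares a Docker network with the app containers and can reach them by service name; TLS config omitted):

# gitlab.foo.bar -> gitlab container
server {
    listen 80;
    server_name gitlab.foo.bar;
    location / {
        proxy_pass http://gitlab:80;        # container/service name on the shared network
        proxy_set_header Host $host;
    }
}

# racktables.foo.bar -> racktables container
server {
    listen 80;
    server_name racktables.foo.bar;
    location / {
        proxy_pass http://racktables:80;
        proxy_set_header Host $host;
    }
}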
I think the quickest way to get this working is to use jwilder/nginx-proxy. It's at least extremely newbie-friendly, as it automates almost everything for you. You can also learn a lot by looking at the generated config files in the container. Even getting TLS to work is not that complicated, and you get set up with an A+ rating from SSL Labs by default.
I've used this for my hobby projects for almost a year and it works great (with Let's Encrypt).
You can of course also manually configure everything, but it's a lot of work with so many pitfalls.
The really bad way to do this is to run the reverse proxy on the host and map lots of ports from all the containers to the host. Please don't do that.
I have wildcard dns pointed to my server e.g. *.domain.com
I'd like to route each subdomain to its own Docker container.
So that box1.domain.com goes to the appropriate docker container.
This should work for any traffic, primarily HTTP and SSH.
Or perhaps the port can be part of the subdomain e.g. 80.box1.domain.com.
I will have lots of docker containers so the solution should be dynamic not hard-coded for every container.
Another solution would be to use https://github.com/jwilder/nginx-proxy.
This tool automatically forwards requests to the appropriate container (based on subdomain via the VIRTUAL_HOST container environment variable).
For instance, if you want to redirect box1.domain.com to a container, simply set the VIRTUAL_HOST container environment variable to "box1.domain.com".
Here is a detailed tutorial I wrote about it: http://blog.florianlopes.io/host-multiple-websites-on-single-host-docker.
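In practice that boils down to something like this (image names are placeholders; check the project's README for the exact options):

# Start the proxy with read-only access to the Docker socket so it can watch container events
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy

# Start a backend; nginx-proxy picks it up and routes box1.domain.com to it
docker run -d -e VIRTUAL_HOST=box1.domain.com my-box1-image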
I went with interlock to route http traffic using the nginx plugin.
I settled on using a random port for each SSH connection, as I couldn't get it to work using the subdomain alone.
The easiest solution would be to use the Apache mod_rewrite RewriteMap method. It's very performant when used against a text file, but it can call a script if desired. There is another StackOverflow answer that covers the script variant pretty well.
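A rough sketch of the RewriteMap approach (the map path and backend addresses are made up; this belongs in server or vhost config, and mod_rewrite, mod_proxy and mod_proxy_http must be enabled):

# containers.map: one "hostname backend" pair per line, e.g.
#   box1.domain.com 172.17.0.2:8080
#   box2.domain.com 172.17.0.3:8080
RewriteEngine On
RewriteMap containers "txt:/etc/apache2/containers.map"
# Only proxy requests whose Host header is present in the map
RewriteCond ${containers:%{HTTP_HOST}|NONE} !=NONE
RewriteRule ^/(.*)$ http://${containers:%{HTTP_HOST}}/$1 [P,L]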
If you want to avoid Apache, the good folks over at dotCloud created Hipache to do the routing for their PaaS services. They even documented the different things they tried before building their own solution. I found a reference to tsuru.io using hipache exactly for routing to docker containers, so that definitely validates it for this purpose.
My answer may come too late, but when you use Docker you don't really need SSH to connect to your containers: with the docker exec command, you can run shell commands directly in a running container.
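For example:

# Open an interactive shell inside a running container
docker exec -it my-container /bin/sh     # or /bin/bash if the image ships it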
Here is my advice: use the nginx proxy container mentioned at the beginning to configure the sub-domains, and run Portainer on your host to get a visual overview of your containers, images, and logs, and even to execute commands in them, all through the Portainer GUI.
I used Apache's ProxyPreserveHost:
ProxyPreserveHost On
ProxyPass "/" "http://localhost:4533/"
ProxyPassReverse "/" "http://localhost:4533/"
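For these directives to work, mod_proxy and mod_proxy_http need to be enabled; on Debian/Ubuntu with Apache 2.4 that is typically:

sudo a2enmod proxy proxy_http
sudo systemctl reload apache2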