I am looking for a way to get one of my docker containers to re-start with a delay (after a localhost restart).
What I currently have is:
Computer which runs docker desktop.
Docker has two containers: Webserver and MySQL (which serves data to the Webserver).
Both containers have the --restart=always option, which lets them restart when I restart the computer.
My issue: after a computer restart, the Webserver does not seem to work properly unless I manually restart it.
My guess is that I need to give MySQL some time to boot up before I start the Webserver.
I was thinking of maybe setting up a bash script or looking into Compose (https://docs.docker.com/compose/startup-order/), but since I am quite new to this, I wanted to double-check whether I have missed something and whether there is a more elegant way to approach this.
You should use Compose and specify that your webserver depends_on MySQL, so that your webserver container starts only after the DB is up.
You should also ideally make your webserver resilient to the unavailability of its dependencies.
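A minimal docker-compose.yml sketch of that setup, assuming the stock mysql image and a placeholder web image (the healthcheck makes the webserver wait until MySQL actually answers, not just until its container has started):

# minimal sketch -- image names and credentials are placeholders
services:
  mysql:
    image: mysql:8
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: example            # change this
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 5s
      timeout: 3s
      retries: 10
  webserver:
    image: my-webserver                       # placeholder for your web image
    restart: always
    depends_on:
      mysql:
        condition: service_healthy            # start only once MySQL passes its healthcheck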
I'm trying to dockerize 2 dotnet console applications where one depends on the other.
When I run the first container, I need it to run another container on the host, pass a parameter to its stdin, and wait for it to finish the job and exit.
What will be the best solution to achieve this?
Running a container inside a container seems like a bad solution to me.
I've also thought about managing another process with a webserver (nginx or something) on the host that receives an HTTP request and executes a docker run command on the host, but I'm sure there is a better solution than this (that way, the webserver would just run on the host and not inside a container).
There is also this solution, but it seems to have major security issues.
I've tried also using the Docker.DotNet library but it does not help with my problem.
Any ideas for the best solution?
Thanks in advance.
EDIT:
I will be using docker compose, but the problem is that the 2nd container is not running and listening at all times; much like the Hello-World container, it is called, performs its job, and exits.
EDIT2: FIX
I've implemented redis as a message broker to communicate between the different services. While it changed the requirements a little (the containers now always run and listen to redis), it helped me solve the issue.
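For anyone landing on the same problem, a rough compose sketch of that final layout; service and image names here are placeholders:

# both console apps stay up and exchange work through redis
services:
  redis:
    image: redis:7
  dispatcher:
    image: my-first-console-app         # the app that used to start the other container
    depends_on:
      - redis
  worker:
    image: my-second-console-app        # the app that used to run once and exit
    depends_on:
      - redis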
I am not sure I understand correctly what you need to do, but if you simply need to start two containers in parallel the simplest way I can think of is docker-compose.
Another way is with a python or bash/bat script that launches the containers independently (in python you can either use the Docker API or do it manually with the subprocess module). This also allows you to do other things, like writing to the stdin of one container, as you stated.
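If the second container only needs a single parameter on stdin and should exit when the job is done, even a one-liner from such a script covers the "run, feed stdin, wait" part; my-worker-image below is a placeholder:

# start the worker, feed it one parameter on stdin, and block until it exits;
# --rm removes the finished container, $? carries its exit code
echo "some-parameter" | docker run --rm -i my-worker-image
echo "worker finished with exit code $?"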
I have an NGINX proxy running in Docker which is networked to several other Docker networks (web applications). Sometimes I need to restart the proxy, but if one of the connected networks is no longer reachable, the proxy fails to restart, even though it was running fine before the restart.
Is there a way to restart Docker containers without risking downtime caused by this? Basically, I am looking for something like nginx -t: a dry run where nothing can break.
A dry-run feature for docker-compose is an open feature request for the time being.
I think a better approach is to make Nginx configuration more robust:
Ref: make nginx ignore site config when its upstream cannot be reached
Do not hand tasks that belong to nginx over to Docker; more specifically, fix this in your nginx.conf!
The right way to resolve your "issue" is to use upstreams.
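One common pattern is to let nginx resolve the backend at request time instead of at startup, using Docker's embedded DNS and a variable in proxy_pass; this is a sketch, and the service name app1 and port 8080 are assumptions:

# with a variable, nginx no longer resolves "app1" while loading the config,
# so an unreachable backend doesn't stop the proxy from (re)starting
resolver 127.0.0.11 valid=30s;           # Docker's embedded DNS server

server {
    listen 80;
    server_name app1.example.com;        # placeholder hostname

    location / {
        set $app1_upstream http://app1:8080;
        proxy_pass $app1_upstream;
    }
}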
I have a docker-compose setup, where an nginx container is being used as a reverse-proxy and load balancer for the rest of the containers that make up my application.
I can spin up the application using docker-compose up -d and everything works great. Then, I can scale up one of my services using docker-compose up -d --scale auth=3, and everything continues to work fine.
The only issue is that nginx is not yet aware of the two new instances, so I need to manually restart the nginx process inside the running container using docker exec revproxy nginx -s reload, "revproxy" being the name of the nginx container.
That's fine and dandy, I don't mind running an extra command when I decide to scale out one of my services. The real issue though is when there is a container failure somewhere... nginx needs to know as soon as this happens to stop sending traffic to the failed instance until the Docker engine is able to replace it with a healthy one.
With all that said, essentially I would like to accomplish what they are doing in the Traefik quickstart tutorial, except I would like to stick with nginx as my reverse-proxy.
While I personally think Traefik would be a real time saver in your case, there is another project which does what you want with nginx: jwilder/nginx-proxy.
It works by listening to Docker engine events: when containers are added or removed, it regenerates an nginx config from a template.
You could either use the jwilder/nginx-proxy image as it is, or make your own flavor using the jwilder/docker-gen project, which is the part that produces a file from a template and Docker engine events.
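The basic setup looks roughly like this (the hostname and image name in the second command are placeholders):

# run the proxy with read-only access to the Docker socket so it can watch events
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy

# any container started with a VIRTUAL_HOST variable is picked up automatically
docker run -d -e VIRTUAL_HOST=auth.example.local my-auth-image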
But again, I would recommend Traefik, for the time and trouble saved and for all the features that come with it (different load balancing strategies, health checks, circuit breakers, automatic SSL certificate setup with ACME/Let's Encrypt, ...).
You just need to write a service discovery script that looks for the updated list of containers every X interval and updates the nginx config accordingly.
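A rough sketch of such a script, run on the host; it assumes the compose service is called auth, the app listens on port 8080 inside the containers, and ./nginx/conf.d is bind-mounted into the revproxy container:

#!/bin/sh
# naive polling loop: rebuild an nginx upstream block from the current
# containers of the "auth" service, then reload the proxy
while true; do
  ips=$(docker ps -q --filter "label=com.docker.compose.service=auth" \
        | xargs -r docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}')
  if [ -n "$ips" ]; then
    {
      echo "upstream auth_upstream {"
      for ip in $ips; do echo "    server $ip:8080;"; done
      echo "}"
    } > ./nginx/conf.d/auth_upstream.conf       # bind-mounted into revproxy
    docker exec revproxy nginx -s reload
  fi
  sleep 10
done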
I am setting up a series of Linux command line challenges (for internal use/training), similar to those at OverTheWire.org's Bandit. From some reading I have done about their infrastructure, they set things up as follows:
All ssh-based games on OverTheWire run in Docker containers. When you
login with SSH to one of the games, a fresh Docker container is
created just for you. Noone else is logged in into your container, nor
are there any files from other players lying around. We opted for this
setup to provide each player with a clean environment to experiment
and learn in, which is automatically cleaned up when you log out.
This seems like an ideal solution, since everyone who logs in gets a completely clean environment (destroyed on logout) so that simultaneous players do not interfere with each other.
I am very new to Docker and understand it in principle, but I am unsure how to set up a similar system, particularly how to spawn new Docker instances on SSH login to a server and then destroy them on logout/disconnection.
I'd appreciate any advice on how to design/implement this kind of setup.
It seems to me there are two main goals here. First, understand what Docker really does and how it works. Second, build the system that orchestrates the whole thing.
Let me give a brief introduction. I won't go into detail, but essentially Docker is a platform that works like system virtualization and lets you isolate a process, an operating system, or a whole application without any kind of hypervisor. A container shares the kernel of the host system, and everything it contains is isolated from the host and from the rest of the containers.
So the basic principle you are looking for is a system that orchestrates containers running an SSH server with port 22 open. Although there are many ways to reach this goal, one way is with this docker sshd server image:
docker run -itd --rm rastasheep/ubuntu-sshd bash
Docker needs a process to keep the container alive. By using -it you create an interactive session with the bash interpreter; this keeps the container alive and also gives you a bash terminal inside an isolated virtual Ubuntu server.
--rm: removes the container once you exit from it.
rastasheep/ubuntu-sshd: the Docker image to use.
As you can see, there is still a missing piece: something that connects your application to this Docker platform. One approach is the Python library that drives the Docker client programmatically. As a piece of advice, I would recommend installing Docker on your computer and trying to create a couple of Ubuntu servers with an SSH server, then connecting to them from your host. That will help you see whether an sshd server is really necessary and, if so, what networking you will need to route all the clients into the containers. Read the official Docker networking documentation.
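For example, to actually run that image as an SSH server and log in from the host (the port 2222 and the container name are arbitrary choices here):

docker run -d -p 2222:22 --name challenge01 rastasheep/ubuntu-sshd
ssh root@localhost -p 2222        # log in with the image's documented default root password
docker rm -f challenge01          # throw the environment away when the session is over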
With the example I described, a fresh terminal is started and there is no need to connect to the container via ssh. This way you won't need to route traffic, find free host ports to reach the containers, or check for and shut down each container once the connection has finished; otherwise the container would stay alive.
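If you do want one fresh container per SSH login to the host rather than SSH access into the containers themselves, one rough sketch is to force the login of a dedicated account straight into a throwaway container; the user name challenge and the image name training-image are only examples, and that account needs permission to talk to the Docker daemon:

# /etc/ssh/sshd_config on the host (reload sshd after editing)
Match User challenge
    ForceCommand docker run --rm -it training-image bash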
There are many ways your system could be built, and I would strongly recommend starting by creating some containers with the docker tool to understand how it works.
I'm just getting started with Docker, and I read a ton of documentation and tutorials yesterday, but I can't find where I read about replacing an external service using a linked container, and I'm not even sure which terminology to search for.
Say there is an apache container and a mysql container, where apache was run with a link to mysql, and has access to its ports and such. Now instead of MySQL running on the container instance, we move it to AWS RDS, for example. How do you modify the mysql container so that apache continues to run as expected? To clarify, apache would still be run with a link to a container with the alias mysql, but the mysql container would take care of getting traffic on that port sent to AWS.
Alternatively, maybe there is a container running a MySQL service, but that container is on another host. I have a vague feeling that the pattern I'm referring to would be able to handle that scenario as well. Does this sound familiar to anyone?
If the container is on another host, why not just hit that host directly and have Docker transparently forward requests on 3306 (or whatever port you're running mysql on) to the container? I can't think of any reason you'd want to link containers unless they're actually on the same host. Docker is great at being transparent, so clients on another machine can run things against a service in Docker as if the service were running directly on that machine without Docker.
If you really have to have both containers on the same machine (even though the mysql container is just calling out to RDS or another host), you should be able to make a new, simple mysql image that just has mysql_client installed and forwards incoming requests on to RDS.
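One concrete way to build such a pass-through container is a plain TCP forwarder rather than the mysql client itself, for example with socat; the RDS endpoint below is a placeholder:

# a container aliased "mysql" that simply forwards port 3306 to RDS
docker run -d --name mysql alpine/socat \
  tcp-listen:3306,fork,reuseaddr tcp:mydb.example.rds.amazonaws.com:3306

The apache container can then keep its link to the alias mysql and talk to port 3306 as before, while the traffic actually ends up at RDS.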