We've found that if you have two containers linked, with network connections established between them, and the receiving container is restarted, the other one keeps the old connections alive, resulting in failures.
Our question is: Is it possible to restart containers in cascade?
e.g.:
Container_A -link-> Container_B
Container_B is restarted
Container_A is restarted in cascade because Container_B was restarted
Thank you in advance!
Regards.
You will need to write a script which either watches docker events and restarts Container_A when needed, or notes the ID of Container_B and restarts Container_A when that ID no longer shows up in docker ps -q.
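For example, a minimal sketch of the docker events approach, assuming the containers are literally named container_a and container_b:

#!/bin/sh
# Watch for restarts of container_b and restart container_a in response.
docker events \
    --filter 'type=container' \
    --filter 'container=container_b' \
    --filter 'event=restart' |
while read -r _event; do
    docker restart container_a
done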
If you use systemd with service units, you can make one service depend on another.
For example, I have my custom web application with nginx in front of it, so when I define nginx.service:
[Unit]
After=vcl.service
Requires=vcl.service
...
This way, when my vcl service is restarted, systemd restarts nginx too. You can probably set up something similar with Upstart. So the answer to your question is: use an init system.
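For completeness, a fuller version of such a unit might look like the sketch below; the [Service] section and the nginx paths are assumptions for a typical setup, not my exact config:

[Unit]
Description=nginx in front of the vcl service
After=vcl.service
Requires=vcl.service

[Service]
# Assumed invocation; keep nginx in the foreground so systemd can supervise it
ExecStart=/usr/sbin/nginx -g 'daemon off;'
Restart=on-failure

[Install]
WantedBy=multi-user.target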
I am setting up a series of Linux command-line challenges (for internal use/training), similar to those at OverTheWire.org's Bandit. From some reading I have done about their infrastructure, they set things up as follows:
All ssh-based games on OverTheWire run in Docker containers. When you
login with SSH to one of the games, a fresh Docker container is
created just for you. No one else is logged into your container, nor
are there any files from other players lying around. We opted for this
setup to provide each player with a clean environment to experiment
and learn in, which is automatically cleaned up when you log out.
This seems like an ideal solution, since everyone who logs in gets a completely clean environment (destroyed on logout) so that simultaneous players do not interfere with each other.
I am very new to Docker and understand it in principle, but am unsure how to set up a similar system - particularly how to spawn new Docker instances on SSH login to a server and then destroy them on logout/disconnection.
I'd appreciate any advice on how to design/implement this kind of setup.
It seems to me there are two main goals here: first, understand what Docker really does and how it works; second, build the system that orchestrates the whole thing.
Let me give a brief introduction. I won't go into details, but essentially Docker is a platform that works like system virtualization and lets you isolate a process, an operating system, or a whole application without any kind of hypervisor. The container shares the kernel of the host system, and everything it contains is isolated from the host and from the rest of the containers.
So the basic building block you are looking for is a system that orchestrates containers, each running an SSH server with port 22 open. Although there are many ways to reach this goal, one of them is this Docker sshd server image:
docker run -itd --rm rastasheep/ubuntu-sshd bash
Docker needs a foreground process to keep the container alive. By using -it you create an interactive session with the bash interpreter; this keeps the container alive and also gives you a bash terminal inside an isolated virtual Ubuntu server.
--rm: removes the container once you exit from it.
rastasheep/ubuntu-sshd: the Docker image name.
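To actually SSH into such a container from the host, the image's README suggests roughly the following (the -P published port and the default root/root credentials come from that README, so double-check them there):

docker run -d -P --name test_sshd rastasheep/ubuntu-sshd
docker port test_sshd 22    # shows which host port maps to the container's port 22
ssh root@localhost -p <host_port_from_above>    # default password is "root"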
As you can see, what is still missing is the system that connects your application to the Docker platform. One approach would be Python's docker library, which drives the Docker client programmatically. As a piece of advice, I would recommend installing Docker on your computer and trying to create a couple of Ubuntu servers with an SSH server, then connecting to them from your host. That will help you see whether you really need an sshd server at all and, if so, which network requirements you will have to meet to route all the clients into their containers. Read the official Docker networking documentation.
With the example I described, a fresh terminal is started and there is no need to connect to the container via SSH. This way you won't need to route traffic, find free host ports to map to the containers, or check for and shut down the container once the connection has finished; otherwise the container would stay alive.
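If you did want every SSH login on the host to land in its own throwaway container, one possible sketch (my own, not from the image docs) is to give the challenge account a login shell that execs docker run; the script path, image, and user are assumptions, and note that this effectively gives the account Docker-level privileges:

#!/bin/sh
# /usr/local/bin/container-shell (hypothetical path; list it in /etc/shells)
# Set it as the login shell of the challenge user, e.g.:
#   chsh -s /usr/local/bin/container-shell player
# Each login then gets a fresh container, and --rm destroys it on logout.
exec docker run --rm -it ubuntu bash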
There are many ways your system could be built, and I would strongly recommend starting out by creating some containers with the docker tool to understand how it all works.
I have a bunch of docker containers running in swarm mode (services). If the whole server restarts, the containers come up one by one after the reboot. Is there a way to set the order in which they are created and run?
P.S. I can't use docker-compose, as these services were created dynamically through the Docker Remote API.
You can try to set a shorter restart delay (with --restart-delay) on the services you want to start first, and a longer one on the next ones, etc.
But I am not sure that works.
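If you want to try it, the flag can be applied to existing services; the service names below are hypothetical:

docker service update --restart-delay 5s db_service     # should come up first
docker service update --restart-delay 30s web_service   # give its dependency a head start

Note that this only biases the timing; it does not enforce a strict ordering.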
Is there any way to disable/leave docker's swarm mode when starting the daemon manually, e.g. dockerd --leave-swarm, instead of starting the daemon and leaving swarm mode afterwards, e.g. using docker swarm leave?
Many thanks in advance,
Aljoscha
I don't think this has been anticipated by the Docker developers. When a node leaves the swarm, it needs to notify the swarm managers that it will not be available anymore.
Leaving the swarm is a one-time action, and passing it as a configuration option to the daemon would be odd. You could suggest it on Docker's GitHub, but I don't think it would find much support.
Perhaps a more intuitive option would be the ability to start dockerd in a way that suspends communication with the swarm manager - so your dockerd runs only locally - while starting it without that flag (--local?) would reconnect it to the swarm it was attached to before.
I am running a docker container which contains a node server. I want to attach to the container, kill the running server, and restart it (for development). However, when I kill the node server it kills the entire container (presumably because I am killing the process the container was started with).
Is this possible? This answer helped, but it doesn't explain how to kill the container's default process without killing the container (if possible).
If what I am trying to do isn't possible, what is the best way around this problem? Adding command: bash -c "while true; do echo 'Hit CTRL+C'; sleep 1; done" to each image in my docker-compose file, as suggested in the comments of the linked answer, doesn't seem like the ideal solution, since it forces me to attach to my containers after they are up and run the command manually.
This is by design in Docker. Each container is supposed to be a stateless instance of a service: if that service is interrupted, the container is destroyed; if that service is requested/started, a container is created. At least, that is the model if you're using an orchestration platform like k8s, swarm, mesos, cattle, etc.
There are applications that exist to act as PID 1 in place of the service itself, though this goes against the design philosophy of microservices and containers. Here is an example of an init system that can run as PID 1 and lets you kill and spawn processes within your container at will: https://github.com/Yelp/dumb-init
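For instance, dumb-init's README wires it in as the Dockerfile ENTRYPOINT roughly like this (the release version and the node command are placeholders to adapt):

# Dockerfile snippet: run dumb-init as PID 1, with the real service as its child
RUN wget -O /usr/local/bin/dumb-init \
    https://github.com/Yelp/dumb-init/releases/download/v1.2.5/dumb-init_1.2.5_x86_64 \
 && chmod +x /usr/local/bin/dumb-init
ENTRYPOINT ["/usr/local/bin/dumb-init", "--"]
CMD ["node", "server.js"]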
Why do you want to restart the node server? To apply changes from a config file or something? If so, you're looking for a solution in the wrong place. You should instead define a persistent volume, so that when the container respawns the service rereads said config file.
https://docs.docker.com/engine/admin/volumes/volumes/
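A quick sketch of that idea (the volume and path names are made up):

docker volume create app-config
docker run -d --name node-app -v app-config:/usr/src/app/config my-node-image

The config then lives in the app-config volume and survives the container being destroyed and recreated.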
If you need to restart the process that's running the container, then simply run:
docker restart $container_name_or_id
Exec'ing into a container shouldn't be needed for normal operations; consider it a debugging tool.
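When you do need that debugging shell:

docker exec -it $container_name_or_id bash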
Rather than changing the script that gets run so that it restarts automatically, I'd move that responsibility out to the docker engine, so it's visible when your container is crashing:
docker run --restart=unless-stopped ...
When a container is run with the above option, docker will restart it for you, unless you intentionally run a docker stop on the container.
As for why killing PID 1 in the container shuts it down: it's the same as killing PID 1 on a Linux server. If you kill init/systemd, the box goes down. Inside the container's namespace, similar rules apply and cannot be changed.
By default, the IP of a Docker container changes after you restart it. I am confused about why Docker was designed this way. Wouldn't it be more reasonable to retain the IP across a simple restart? This should be distinguished from creating a new container.
The IP of a docker container does change after a restart as of now, but the community is working on this highly requested feature. Meanwhile, I am using pipework to assign a specific IP to my docker container:
pipework docker-bridge-name-here docker-container-name 10.1.1.110/24@10.1.1.1
The only drawback is that you'll have to do it every time you restart docker.
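You could fold both steps into a small wrapper so the IP is re-applied together with every restart (a sketch reusing the names from the example above):

#!/bin/sh
# Restart the container, then re-attach its fixed IP with pipework
docker restart docker-container-name
pipework docker-bridge-name-here docker-container-name 10.1.1.110/24@10.1.1.1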