Best way to restart a docker service in swarm - docker

If I want to restart a docker service cleanly, I do this:
docker service update service_name --force
But this recreates the service's tasks, leaving orphaned containers behind that then need to be removed with a prune. Any other way?
Scaling down and back up is one way I know of, but it doesn't seem like the cleanest approach.
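By scaling down and up I mean something like this (the replica count of 1 is just an example):

docker service scale service_name=0
docker service scale service_name=1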

Related

How to recreate a Docker Service with same env variables and volumes but under a new name?

I have some Docker Services and I would ideally like to have a command that creates a "cloned" service from an existing one.
Something like command_to_copy_a_service_definition old_service new_service should therefore lead to the creation of another service new_service with a similar config to old_service.
I know that rekcod is able to recreate Docker commands from docker inspect but it works on "regular" containers and not on Swarm Service definitions.
Is there something similar for Docker Swarm or any other approach?
PS: I would also accept an answer on how to rename a Docker Service, but there's another SO question to cover that.
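For what it's worth, the raw material such a command would have to work from is the service spec, which can be dumped like this (old_service as above):

docker service inspect old_service --pretty
# or the raw JSON spec that a cloning tool would have to translate back into docker service create flags:
docker service inspect --format '{{json .Spec}}' old_service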

What is the purpose of creating a volume between two docker.sock files in Docker?

I see this in many docker apps that rely heavily on networking, but I can't understand it.
https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/

Let’s take a step back here. Do you really want Docker-in-Docker? Or do you just want to be able to run Docker (specifically: build, run, sometimes push containers and images) from your CI system, while this CI system itself is in a container?

I’m going to bet that most people want the latter. All you want is a solution so that your CI system like Jenkins can start containers. And the simplest way is to just expose the Docker socket to your CI container, by bind-mounting it with the -v flag.

Simply put, when you start your CI container (Jenkins or other), instead of hacking something together with Docker-in-Docker, start it with:

docker run -v /var/run/docker.sock:/var/run/docker.sock

Now this container will have access to the Docker socket, and will therefore be able to start containers. Except that instead of starting “child” containers, it will start “sibling” containers.
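To see the "sibling" behaviour concretely, a quick check is something like this (docker:cli here is just one convenient image that happens to ship the Docker CLI):

docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker:cli docker ps
# this lists the host's containers, including the one you just started,
# because the CLI inside the container talks to the host daemon through the mounted socket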

Docker Swarm for managing headless containers, and keeping them updated (or watchtower?)

I've been trying to devise a strategy for using Docker Swarm to manage a bunch of headless containers - I don't need a load balancer, exposed ports, or auto-scaling.
The only thing I want is the ability to update all of the containers (on all nodes) if any of the images are updated. Each running container will need a specific --hostname.
Is running docker service even viable for this? Or should I just do a normal docker run targeting specific nodes to specify the --hostname I want? The reason I'm even asking about docker service is that it allows you to do an update (forcing an update for all containers if there are updated images).
I was also thinking that Docker Swarm would make it a bit easier to keep an eye on all the containers (i.e. manage them from a central location).
The other option I was looking at was watchtower, running on each server that hosts one of the containers, as an alternative to swarm. My only issue with this is that it doesn't provide any orchestration or centralized management.
Anyone have any ideas of what would be a better option given the scenario?
Docker Swarm does not give you any advantage regarding rolling updates apart from the docker service command. Swarm mainly provides horizontal scaling and puts a load balancer (the "service") in front of those replicas, along with some other goodies such as replicating docker events across the swarm nodes.
docker service update --force would work as expected.
However, you should probably use both: Docker Swarm for orchestration and watchtower for rolling updates.
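A rough sketch of the docker service side of that (workers and my-image:latest are placeholders; --hostname accepts templates such as {{.Node.Hostname}}):

docker service create --name workers --mode global \
  --hostname "worker-{{.Node.Hostname}}" my-image:latest

# later, force a rolling restart of every task so the updated image is pulled:
docker service update --force --image my-image:latest workers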

docker restart on cascade

We've found that if you have two linked containers with network connections established between them, and the receiving container is restarted, the other one keeps the old connections alive, resulting in failures.
Our question is: Is it possible to restart containers in cascade?
eg:
Container_A -link-> Container_B
Container_B is restarted
Container_A is restarted in cascade because Container_B was restarted
Thank you in advance!
Regards.
You will need to write a script which either watches docker events and restarts Container_A when needed, or notes the ID of Container_B and, when it no longer shows up in docker ps -q, restarts Container_A.
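A minimal sketch of the docker events variant (container names as in your example):

docker events --filter 'container=Container_B' --filter 'event=restart' |
while read -r event; do
    docker restart Container_A
done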
If you use systemd with service units, you can make one service dependent on another.
For example, I have a custom web application with nginx in front, so when I define nginx.service:
[Unit]
After=vcl.service
Requires=vcl.service
...
This way, when my vcl service is restarted, systemd restarts nginx. You can probably set up Upstart to do something similar. So the answer to your question is: use some init system.
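Applied to the containers from the question, a sketch of a unit for Container_A (container-b.service is a hypothetical unit managing Container_B) could be:

[Unit]
Description=Container_A
After=docker.service container-b.service
Requires=docker.service container-b.service

[Service]
ExecStart=/usr/bin/docker start -a Container_A
ExecStop=/usr/bin/docker stop Container_A
Restart=always

[Install]
WantedBy=multi-user.target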

Docker Container management in production environment

Maybe I missed something in the Docker documentation, but I'm curious and can't find an answer:
What mechanism is used to restart docker containers if they should error/close/etc?
Also, if many functions have to be done via a docker run command, say for instance volume mounting or linking, how does one bring up an entire hive of containers which complete an application without using docker compose? (as they say it is not production ready)
What mechanism is used to restart docker containers if they should error/close/etc?
Docker restart policies, as set with the --restart option to docker run. From the docker-run(1) man page:
--restart=""
Restart policy to apply when a container exits (no, on-fail‐
ure[:max-retry], always)
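For example (not from the man page), a policy that retries a failing container up to five times would be applied like this (web and nginx are just placeholders):

docker run -d --restart=on-failure:5 --name web nginx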
Also, if many functions have to be done via a docker run command, say for instance volume mounting or linking, how does one bring up an entire hive of containers which complete an application without using docker compose?
Well, you can of course use docker-compose if that is the best match for your requirements, even if it is not labelled as "production ready".
You can investigate larger container management solutions like Kubernetes or even OpenStack (although I would not recommend the latter unless you are already familiar with OpenStack).
You could craft individual systemd unit files for each container.
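To illustrate the docker-compose option, a minimal sketch (service names, image, and paths are placeholders) combining restart policies with volumes and links might look like:

version: "2"
services:
  web:
    image: nginx
    restart: always
    links:
      - app
    volumes:
      - ./html:/usr/share/nginx/html:ro
  app:
    image: my-app:latest
    restart: on-failure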
