I've got a docker-compose service that needs to be restarted only when Docker or the system restarts. The service should not restart when an error occurs or when the service completes. The flags --restart unless-stopped and --restart always don't work for me, because with these flags the service also restarts when an error occurs.
I have the same question. I tried using the docker-compose restart_policy and found that it did not work:
services:
  hello:
    deploy:
      restart_policy:
        condition: ...
WARNING: Some services (hello) use the 'deploy' key, which will be ignored. Compose does not support 'deploy' configuration - use docker stack deploy to deploy to a swarm.
See the answer here: Docker: Restart Container only on reboot?
So I then considered doing something in the Dockerfile, but the docs suggest setting up an external process to restart containers, using the same command we use to start them normally.
See https://docs.docker.com/config/containers/start-containers-automatically/
If restart policies don’t suit your needs, such as when processes outside Docker depend on Docker containers, you can use a process manager such as upstart, systemd, or supervisor instead.
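Following that advice, one way to get "restart only on reboot" is a systemd unit that brings the stack up at boot while the containers themselves keep the default restart policy of "no". A minimal sketch, assuming a hypothetical project at /opt/myapp and the Compose v2 plugin (adjust paths and names to your setup):

# /etc/systemd/system/myapp.service (hypothetical unit name)
[Unit]
Description=Bring up the compose stack at boot
Requires=docker.service
After=docker.service
# PartOf propagates restarts of the Docker daemon to this unit
PartOf=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/opt/myapp
ExecStart=/usr/bin/docker compose up -d
ExecStop=/usr/bin/docker compose down

[Install]
WantedBy=multi-user.target

Enable it with systemctl enable myapp.service. Since the containers carry no restart policy, Docker leaves them alone when they fail or complete; only a reboot (or a restart of docker.service, via PartOf) brings them back up.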
On my Docker Swarm cluster, when I perform a docker stack deploy with a new version of my service's image, or do a docker service update --force, the old containers of the service(s) get the desired state Shutdown but remain in the current state Running.
However, they don't seem to be actually running; I can't do anything with them: docker logs, docker inspect, docker exec, ... nothing works.
The only way to get rid of them is to restart the docker daemon.
What would you look at to try to understand and fix this recurring issue?
We faced the same issue a few days ago. It turned out we had a logging driver configured, but the logging server was not available. We had stopped using it anyway but forgot to remove the configuration from the service:
logging:
  driver: fluentd
  options:
    fluentd-address: localhost:24224
    fluentd-async-connect: "true"
Removing this configuration fixed the issue for future containers. Old instances were still hanging around, but restarting Docker helped.
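If you want to check whether a running container still carries a stale logging configuration, the configured driver can be read via docker inspect (the container name below is a placeholder):

docker inspect --format '{{.HostConfig.LogConfig.Type}}' my_container

On a stock installation this prints json-file; a lingering fluentd here points to the same problem.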
I'm trying to run my docker-compose setup on the local Kubernetes cluster that comes by default with Docker for Desktop.
I run the following command and it just hangs:
> docker stack deploy --orchestrator=kubernetes -c docker-compose.yml hornet
Ignoring unsupported options: build
Ignoring deprecated options:
container_name: Setting the container name is not supported.
expose: Exposing ports is unnecessary - services on the same network can access each other's containers on any port.
top-level network "backend" is ignored
top-level network "frontend" is ignored
service "website.public": network "frontend" is ignored
service "website.public": container_name is deprecated
service "website.public": build is ignored
service "website.public": depends_on are ignored
....
<snip> heaps of services 'ignored'
....
Waiting for the stack to be stable and running...
The docker-compose up command works great when I run it locally.
Is there any way I can see what's going on under the hood and find out why this hangs?
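No resolution was recorded for this one, but assuming kubectl is pointed at the Docker Desktop cluster, you can watch the Kubernetes side directly while the deploy hangs. The Stack resource below comes from the compose-on-kubernetes controller that Docker Desktop bundles, and <pod-name> is a placeholder:

kubectl get stacks                  # the Stack custom resource the deploy creates
kubectl get pods --all-namespaces   # check whether any pods were created at all
kubectl describe pod <pod-name>     # the Events section often shows image-pull or scheduling failures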
I'm new to Docker and trying to get started by deploying a hello-world Flask app locally on Docker Swarm.
So far I have my Flask app, a Dockerfile, and a docker-compose.yml file.
version: "3"
services:
webapp:
build: .
ports:
- "5000:5000"
docker-compose up works fine and deploys my Flask app.
I have started a Docker Swarm with docker swarm init, which I understand created a swarm with a single node:
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
efcs0tef4eny6472eiffiugqp * moby Ready Active Leader
Now, I don't want workers or anything else, just a single node (the manager node created by default), and deploy my image there.
Looking at these instructions https://docs.docker.com/get-started/part4/#create-a-cluster it seems like I have to create a VM, scp my files there, and ssh in to run docker-compose up. Is that the normal way of working? Why do I need a VM? Can't I just run docker-compose up on the swarm manager? I didn't find a way to do so, so I'm guessing I'm missing something.
Running docker-compose up will create individual containers directly on the host.
With swarm mode, all the commands to manage containers have shifted to docker stack and docker service which manage containers across multiple hosts. The docker stack deploy command accepts a compose file with the -c arg, so you would run the following on a manager node:
docker stack deploy -c docker-compose.yml stack_name
to create a stack named "stack_name" based on the version 3 yml file. This command works the same regardless of whether you have one node or a large cluster managed by your swarm.
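One caveat worth noting: docker stack deploy ignores the build: key, so the image has to exist before you deploy (build it with docker-compose build, and push it to a registry if the swarm ever grows beyond one node). Once deployed, the stack can be checked with:

docker stack services stack_name   # one line per service, with replica counts
docker stack ps stack_name         # the individual tasks (containers) backing each service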
Does anyone know how (if possible) to run docker-compose commands against a swarm using the new docker 1.12 'swarm mode' swarm?
I know that with the previous 'Docker Swarm' you could run docker-compose commands directly against the swarm by updating DOCKER_HOST to point to the swarm master:
export DOCKER_HOST="tcp://123.123.123.123:3375"
and then simply execute commands as if you were running them against a single instance of Docker engine.
OR is this functionality something that docker-compose bundle is replacing?
I realized my question was vaguely worded and actually has two parts to it. Eventually, however, I was able to figure out solutions to both issues.
1) Can you run commands directly 'against' a swarm / swarm-mode in Docker 1.12 running on a remote machine?
While you can't really run commands 'against' a swarm, you CAN run docker service commands on the master node of a swarm in order to run services on that swarm.
You can also configure the Docker daemon (the daemon running on the swarm's master node) to listen on a TCP port in order to expose the Docker API externally.
2) Can you still use docker-compose files to start services in Docker 1.12 swarm-mode?
Yes, although these features are currently part of Docker's "experimental" features. This means you must download/install a version that includes the experimental features (check the GitHub repository).
You essentially follow these instructions https://github.com/docker/docker/blob/master/experimental/docker-stacks-and-bundles.md
to go from the docker-compose.yml file to a distributed application bundle and then to an application stack (this is when your services are actually run).
$ docker-compose bundle
$ docker deploy [OPTIONS] STACK
Here's what I did:
On my remote swarm manager node I started docker with the following options:
docker daemon -D -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375 &
This configures the Docker daemon to listen on the standard Docker socket unix:///var/run/docker.sock AND on TCP port 2375 on all interfaces (tcp://0.0.0.0:2375).
WARNING: I'm not enabling TLS here, just for simplicity.
On my local machine I update the Docker host environment variable to point at my swarm master node. Note that the port must match the one the daemon was told to listen on above; 2377 is swarm's cluster-management port, not the API port.
$ export DOCKER_HOST="tcp://XX.XX.XX.XX:2375" (populate with your IP)
Navigate to the directory of my docker-compose.yml file
Create a bundle file from my docker-compose.yml file. Make sure to include the .dab extension.
docker-compose bundle --fetch-digests -o myNewBundleFile.dab
Create an application stack from the bundle file. Do not specify the .dab extension here.
$ docker deploy myNewBundleFile
Now I'm still experiencing some networking-related issues, but I have successfully gotten my service up and running from my unmodified docker-compose.yml files. The network issues I'm experiencing are documented here: https://github.com/docker/docker/issues/23901
While official support for Swarm mode in Docker Compose is still in progress, I've created a simple script that takes a docker-compose.yml file and runs docker service commands for you. See https://github.com/ddrozdov/docker-compose-swarm-mode for details.
It is not possible. Compose uses containers to create a client-side concept of a service. Docker 1.12 Swarm mode introduces a new server-side concept of a service.
You are correct that docker-compose bundle; docker stack deploy is the way to get a Compose file running in Swarm Mode.
I've been using the Ansible docker module to install several containers on a server. I have containers with services running on them, like MySQL or MongoDB.
But sometimes my containers stop running, so I have to run the playbook again to get them back up.
I've been trying to use supervisord, writing the docker command that runs each container into the supervisor configuration. But doing it that way leaves no role for the Ansible docker module, and I'd love to keep using it, since it makes the Docker configuration cleaner and less tedious.
Is there a better way to achieve this with the Ansible docker module? What's the right way?
The docker module has a restart_policy option, which translates to the --restart parameter of the docker command.
You get the desired behavior by applying this to your task:
restart_policy: on-failure
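For illustration, a minimal sketch of such a task, assuming the newer docker_container module; the container name, image, and password are placeholders:

- name: Run MySQL with an on-failure restart policy
  docker_container:
    name: mysql
    image: mysql:5.7
    env:
      MYSQL_ROOT_PASSWORD: "change-me"   # required by the mysql image
    restart_policy: on-failure
    restart_retries: 5                   # optional cap on restart attempts

With on-failure, Docker itself revives the container when it exits with a non-zero status, so the playbook no longer has to be re-run just to bring crashed services back.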