I have a docker-compose file for some services, among them an airflow-webserver. I realized that I can add both restart and deploy.restart_policy to the compose file. I tried searching for the difference between the two, but could only find posts discussing the individual settings (like on-failure or always).
What is the difference between these two settings?
Which should I use?
Is it a versioning issue, e.g. restart is from older versions and deploy.restart_policy is the newer one?
Example docker-compose.yml:
version: "3"
services:
airflow-webserver:
container_name: airflow_container
image: puckel/docker-airflow
ports:
- '8080:8080'
networks:
- dataworld
volumes:
- ./airflow/dags:/usr/local/airflow/dags
- ./airflow/logs:/usr/local/airflow/logs
deploy:
restart_policy:
condition: on-failure
restart: on-failure
The restart and deploy.restart_policy options configure the same thing, but which one takes effect depends on how you run your containers:
restart is used by Docker Compose
deploy.restart_policy is used by Docker Swarm
The deploy option is used for Docker Swarm only and is ignored by Docker Compose.
From the documentation on deploy.restart_policy:
Configures if and how to restart containers when they exit. Replaces restart.
And here about restart:
The restart option is ignored when deploying a stack in swarm mode.
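If the same file needs to work both with docker-compose locally and with docker stack deploy on a swarm, one option (a minimal sketch based on the file above) is to set both keys; each tool simply ignores the one it doesn't use:
version: "3"
services:
  airflow-webserver:
    image: puckel/docker-airflow
    restart: on-failure            # used by docker-compose
    deploy:
      restart_policy:
        condition: on-failure      # used by docker stack deploy (Swarm)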
Related
I have a docker compose file:
x11:
  network_mode: host
  container_name: x11
  image: dorowu/ubuntu-desktop-lxde-vnc
  restart: on-failure
  environment:
    - RESOLUTION=1920x1080
my_app:
  network_mode: host
  container_name: my_app
  image: my-image-path
  restart: on-failure
  depends_on:
    - x11
  environment:
    - LANG=ru_RU.UTF-8
    - DISPLAY=:1
  entrypoint:
    ./start_script
Sometimes my application crashes during my tests, and because I have the option restart: on-failure it restarts automatically. The problem is that the state of the application is preserved at the time of the crash; for example, some windows remain open. Is there a way to delete the data in the container on restart, or to have a new container created each time?
So far, it is not possible to do that automatically from docker-compose.yaml alone. You will need to rely on something external to enforce it, such as a bash script that automatically removes containers, as shown here. However, you still need to tie that script to your app's failures.
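If you want to experiment with enforcing this from outside Compose, a rough watchdog sketch could look like the following. This is an assumption rather than a tested recipe: it presumes restart: on-failure has been removed from the compose file so the script is the only thing restarting the service, and it reuses the my_app container name from the question.
#!/bin/bash
# recreate my_app with a fresh container whenever it exits with a non-zero status
while true; do
  docker-compose up -d --force-recreate my_app   # new container, no leftover state
  status=$(docker wait my_app)                   # block until the container exits; prints its exit code
  [ "$status" -eq 0 ] && break                   # clean exit: stop restarting
done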
Hi guys, and excuse my English. I'm using Docker Swarm. When I attempt to deploy a Docker application with this command
docker stack deploy -c docker-compose.yml -c docker-compose.prod.yml chatappapi
it shows the following error: services.chat-app-api Additional property pull_policy is not allowed
Why does this happen?
How do I solve it?
docker-compose.yml
version: "3.9"
services:
nginx:
image: nginx:stable-alpine
ports:
- "5000:80"
volumes:
- ./nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
chat-app-api:
build: .
image: username/myapp
pull_policy: always
volumes:
- ./:/app
- /app/node_modules
environment:
- PORT= 5000
- MAIL_USERNAME=${MAIL_USERNAME}
- MAIL_PASSWORD=${MAIL_PASSWORD}
- CLIENT_ID=${CLIENT_ID}
- CLIENT_SECRET=${CLIENT_SECRET}
- REDIRECT_URI=${REDIRECT_URI}
- REFRESH_TOKEN=${REFRESH_TOKEN}
depends_on:
- mongo-db
mongo-db:
image: mongo
environment:
MONGO_INITDB_ROOT_USERNAME: 'username'
MONGO_INITDB_ROOT_PASSWORD: 'password'
ports:
- "27017:27017"
volumes:
- mongo-db:/data/db
volumes:
mongo-db:
docker-compose.prod.yml
version: "3.9"
services:
nginx:
ports:
- "80:80"
chat-app-api:
deploy:
mode: replicated
replicas: 8
restart_policy:
condition: any
update_config:
parallelism: 2
delay: 15s
build:
context: .
args:
NODE_ENV: production
environment:
- NODE_ENV=production
- MONGO_USER=${MONGO_USER}
- MONGO_PASSWORD=${MONGO_PASSWORD}
- MONGO_IP=${MONGO_IP}
command: node index.js
mongo-db:
environment:
MONGO_INITDB_ROOT_USERNAME: ${MONGO_USER}
MONGO_INITDB_ROOT_PASSWORD: ${MONGO_PASSWORD}
Information
docker-compose version 1.29.2
Docker version 20.10.8
Ubuntu 20.04.2 LTS
Thanks in advance.
Your problem line is in docker-compose.yml
chat-app-api:
  build: .
  image: username/myapp
  pull_policy: always    # <== this is the bad line, delete it
The docker compose file reference doesn't list any pull_policy in its API, because:
If the image does not exist, Compose attempts to pull it, unless you have also specified build, in which case it builds it using the specified options and tags it with the specified tag.
I think pull_policy used to be a thing for compose? It's worth keeping the latest API documentation open to refer to and search through while you're developing (things can and do change fairly frequently with compose).
If you want to ensure that the most recent version of an image is pulled onto all servers in a swarm then run docker compose -f ./docker-compose.yml pull on each server in turn (docker stack doesn't have functionality to run this over an entire swarm yet).
As an aside: I wouldn't combine two .yml files with a single docker stack command without a very good reason to do so.
You are mixing up docker-compose and docker swarm ideas in the same files.
It is probably worth breaking your project up into 3 files:
docker-compose.yml
This would contain just the basic service definitions common to both compose and swarm.
docker-compose.override.yml
Conveniently, docker-compose and docker compose both should read this file automatically. This file should contain any "port:", "depends_on:", "build:" directives, and any convenience volumes use for development.
stack.production.yml
The override file to be used in stack deployments should contain (a) everything understood by swarm but not by compose, and (b) everything required for production.
Here you would use configs: or even secrets:, rather than volume mappings to local folders, to inject content into containers. Rather than relying on ports: directives, you would install an ingress router on the swarm such as Traefik, and so on.
With this arrangement, docker compose can be used to develop and build your compose stack locally, and docker stack deploy won't have to be exposed to compose syntax it doesn't understand.
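With file names following the layout above, the day-to-day commands could look roughly like this (chatappapi is the stack name from the question):
# local development: docker compose reads docker-compose.yml plus docker-compose.override.yml automatically
docker compose up --build
# production: explicitly combine the base file with the swarm-only override
docker stack deploy -c docker-compose.yml -c stack.production.yml chatappapi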
pull_policy is in the latest version of docker-compose.
To upgrade your docker-compose refer to:
https://docs.docker.com/compose/install/
The spec for more info:
https://github.com/compose-spec/compose-spec/blob/master/spec.md#pull_policy
I have two different Docker containers and each has a different image. Each app in the containers uses non-conflicting ports. See the docker-compose.yml:
version: "2"
services:
service_a:
container_name: service_a.dev
image: service_a.dev
ports:
- "6473:6473"
- "6474:6474"
- "1812:1812"
depends_on:
- postgres
volumes:
- ../configs/service_a/var/conf:/opt/services/service_a/var/conf
postgres:
container_name: postgres.dev
hostname: postgres.dev
image: postgres:9.6
ports:
- "5432:5432"
volumes:
- ../configs/postgres/scripts:/docker-entrypoint-initdb.d/
I can curl each service successfully from the host machine (macOS), e.g. curl -k https://localhost:6473/service_a/api/version works. What I'd like to do is to be able to refer to the postgres container from the service_a container via localhost, as if these two containers were one and shared the same localhost. I know that it's possible if I use the hostname postgres.dev from inside the service_a container, but I'd like to be able to use localhost. Is this possible? Please note that I am not very well versed in networking or Docker.
Mac version: 10.12.4
Docker version: Docker version 17.03.0-ce, build 60ccb22
I have done quite some prior research, but couldn't find a solution.
Relevant: https://forums.docker.com/t/localhost-and-docker-compose-networking-issue/23100/2
The right way: don't use localhost. Instead use Docker's built-in DNS and reference the containers by their service name. You shouldn't even be setting the container name, since that breaks scaling.
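As a minimal sketch of that, applied to the question's services (DB_HOST is just a hypothetical variable the application would read; the point is that the app connects to the host name postgres instead of localhost):
version: "2"
services:
  service_a:
    image: service_a.dev
    depends_on:
      - postgres
    environment:
      - DB_HOST=postgres   # hypothetical variable; "postgres" resolves through Docker's DNS
  postgres:
    image: postgres:9.6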
The bad way: if you don't want to use the Docker networking feature, you can switch to host networking, but that turns off a very key feature, and other Docker capabilities, like the option to connect containers together in their own isolated networks, will no longer work. With that disclaimer, the result would look like:
version: "2"
services:
service_a:
container_name: service_a.dev
image: service_a.dev
network_mode: "host"
depends_on:
- postgres
volumes:
- ../configs/service_a/var/conf:/opt/services/service_a/var/conf
postgres:
container_name: postgres.dev
image: postgres:9.6
network_mode: "host"
volumes:
- ../configs/postgres/scripts:/docker-entrypoint-initdb.d/
Note that I removed port publishing from the container to the host, since you're no longer in a container network. And I removed the hostname setting since you shouldn't change the hostname of the host itself from a docker container.
The linked forum posts you reference show how when this is a VM, the host cannot communicate with the containers as localhost. This is an expected limitation, but the containers themselves will be able to talk to each other as localhost. If you use a VirtualBox based install with docker-toolbox, you should be able to talk to the containers by the virtualbox IP.
The really wrong way: abuse the container network mode. The mode is available for debugging container networking issues and specialized use cases and really shouldn't be used to avoid reconfiguring an application to use DNS. And when you stop the database, you'll break your other container since it will lose its network namespace.
For this, you'll likely need to run two separate docker-compose.yml files because docker-compose will check for the existence of the network before taking any action. Start with the postgres container:
version: "2"
services:
postgres:
container_name: postgres.dev
image: postgres:9.6
ports:
- "5432:5432"
volumes:
- ../configs/postgres/scripts:/docker-entrypoint-initdb.d/
Then you can make a second service in that same network namespace:
version: "2"
services:
service_a:
container_name: service_a.dev
image: service_a.dev
network_mode: "container:postgres.dev"
ports:
- "6473:6473"
- "6474:6474"
- "1812:1812"
volumes:
- ../configs/service_a/var/conf:/opt/services/service_a/var/conf
Specifically for Mac and during local testing, I managed to get multiple containers working using the docker.for.mac.localhost approach. I documented it at http://nileshgule.blogspot.sg/2017/12/docker-tip-workaround-for-accessing.html
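As a rough sketch of that workaround (assuming, as in the question, that postgres publishes 5432 to the host, and using a hypothetical DB_HOST variable for the application):
version: "2"
services:
  service_a:
    image: service_a.dev
    environment:
      # docker.for.mac.localhost resolves to the host from inside containers on Docker for Mac,
      # so this reaches postgres through its port published on the host
      - DB_HOST=docker.for.mac.localhost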
I need to set the service mode to global while using compose files.
Is there any way to do this in a compose file?
I have a requirement that for a given service there should be exactly one container on every node/host.
This doesn't happen with swarm's "spread strategy": if a node goes down and comes back up, swarm just evens out the number of containers across hosts, irrespective of services.
https://github.com/docker/compose/issues/3743
We can do this easily now with compose file version 3, under the deploy section using mode.
Prerequisites -
Docker Compose version should be 1.10.0+
Docker Engine version should be 1.13.0+
Example compose file -
version: "3"
services:
nginx:
image: nexus3.example.com/prd-nginx-sm:v1
ports:
- "80:80"
networks:
- cheers
volumes:
- logs:/rest/out/
deploy:
mode: global
labels:
feature.description: "Frontend"
update_config:
parallelism: 1
delay: 10s
restart_policy:
condition: any
command: "/usr/sbin/nginx"
networks:
cheers:
volumes:
logs:
data:
Deploy the compose file -
$ docker stack deploy -c sm-deploy-compose.yml --with-registry-auth CHEERS
This will deploy an nginx container on every node participating in the cluster.
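To verify that there is exactly one task per node, you can list the service's tasks (stack deploy names the service <stack>_<service>, here CHEERS_nginx):
$ docker service ps CHEERS_nginx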
I want a container to restart automatically if it crashes. I am not sure how to go about doing this. I have a file docker-compose-deps.yml that has elasticsearch, redis, nats, and mongo. I run this in the terminal to set it up: docker-compose -f docker-compose-deps.yml up -d. After this I set up my containers by running docker-compose up -d. Is there a way to make these containers restart if they crash? I noticed that Docker has a built-in restart, but I don't know how to implement it.
After some feedback I added restart: always to my docker-compose file and my docker-compose-deps.yml file. Does this look correct? Is this how you would implement restart: always?
docker-compose sample
myproject-server:
  build: "../myproject-server"
  dockerfile: Dockerfile-dev
  restart: always
  ports:
    - 5880:5880
    - 6971:6971
  volumes:
    - "../myproject-server/src:/src"
  working_dir: "/src"
  external_links:
    - nats
    - mongo
    - elasticsearch
    - redis
myproject-associate:
  build: "../myproject-associate"
  dockerfile: Dockerfile-dev
  restart: always
  ports:
    - 5870:5870
  volumes:
    - "../myproject-associate/src:/src"
  working_dir: "/src"
  external_links:
    - nats
    - mongo
    - elasticsearch
    - redis
docker-compose-deps.yml sample
nats:
  image: nats
  container_name: nats
  restart: always
  ports:
    - 4222:4222
mongo:
  image: mongo
  container_name: mongo
  restart: always
  volumes:
    - "./data:/data"
  ports:
    - 27017:27017
If you're using compose, it has a restart option analogous to the one in the docker run command, so you can use that. Here is a link to the documentation about this part:
https://docs.docker.com/compose/compose-file/
Where you deploy to also matters. Most container clusters like Kubernetes, Mesos, or ECS have some configuration you can use to auto-restart your containers. If you don't use any of these tools, you are probably starting your containers manually and can then just use the restart option as you would locally.
Looks good to me. What you want to understand when working with Docker restart policies is what each one means. The always policy means: if the container stops for any reason, automatically restart it.
So why would you ever want to use always as opposed to say on-failure?
In some cases, you might have a container that you always want to ensure is running such as a web server. If you are running a public web application chances are you want that server to be available 100% of the time.
So for a web application I expect you want to use always. On the other hand, if you are running a worker process that processes a file and then naturally exits, that would be a good use case for the on-failure policy: the worker container finishes processing the file, and you probably want to let it exit and not have it restart.
That's where I would expect to use the on-failure policy. So it's not just about knowing the syntax, but about when to apply which policy and what each one means.
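As a small sketch of that distinction (the worker image name is made up for illustration):
services:
  web:
    image: nginx
    restart: always        # public-facing server: bring it back no matter how it stopped
  worker:
    image: my-worker       # hypothetical one-shot job
    restart: on-failure    # restart only on a non-zero exit; a clean exit stays stopped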