I want to automatically restart a container if it crashes, and I am not sure how to go about doing this. I have a file docker-compose-deps.yml that defines elasticsearch, redis, nats, and mongo. I run this in the terminal to set it up: docker-compose -f docker-compose-deps.yml up -d. After this I set up my containers by running: docker-compose up -d. Is there a way to make these containers restart if they crash? I noticed that Docker has a built-in restart policy, but I don't know how to implement it.
After some feedback I added restart: always to my docker-compose file and my docker-compose-deps.yml file. Does this look correct, or is this how you would implement restart: always?
docker-compose.yml sample
myproject-server:
  build: "../myproject-server"
  dockerfile: Dockerfile-dev
  restart: always
  ports:
    - 5880:5880
    - 6971:6971
  volumes:
    - "../myproject-server/src:/src"
  working_dir: "/src"
  external_links:
    - nats
    - mongo
    - elasticsearch
    - redis
myproject-associate:
  build: "../myproject-associate"
  dockerfile: Dockerfile-dev
  restart: always
  ports:
    - 5870:5870
  volumes:
    - "../myproject-associate/src:/src"
  working_dir: "/src"
  external_links:
    - nats
    - mongo
    - elasticsearch
    - redis
docker-compose-deps.yml sample
nats:
  image: nats
  container_name: nats
  restart: always
  ports:
    - 4222:4222
mongo:
  image: mongo
  container_name: mongo
  restart: always
  volumes:
    - "./data:/data"
  ports:
    - 27017:27017
If you're using Compose, it has a restart option analogous to the one in the docker run command, so you can use that. Here is a link to the documentation on this part:
https://docs.docker.com/compose/compose-file/
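For reference, the restart option accepts four values:
restart: "no"            # the default; never restart automatically
restart: always          # restart whenever the container stops
restart: on-failure      # restart only on a non-zero exit code
restart: unless-stopped  # like always, unless the container was manually stopped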
When you deploy out, it depends where you deploy to. Most container clusters like Kubernetes, Mesos, or ECS have some configuration you can use to auto-restart your containers. If you don't use any of these tools, you are probably starting your containers manually and can then use the restart flag just as you would locally.
Looks good to me. What you want to understand when working with Docker restart policies is what each one means. The always policy means that if the container stops for any reason, Docker automatically restarts it.
So why would you ever want to use always as opposed to, say, on-failure?
In some cases you have a container that you always want to ensure is running, such as a web server. If you are running a public web application, chances are you want that server to be available 100% of the time.
So for a web application I expect you want always. On the other hand, if you are running a worker process that handles a file and then exits naturally, that is a good use case for the on-failure policy: the worker container finished processing the file, and you probably want to let it close out rather than restart.
That's where I would expect to use the on-failure policy. So it's not just knowing the syntax, but knowing when to apply which policy and what each one means.
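If you also want to cap the retries, the plain docker run equivalent accepts a maximum retry count. A quick example (the container and image names here are placeholders):
docker run -d --restart=on-failure:5 --name my-worker my-worker-image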
Related
I have docker compose file:
x11:
  network_mode: host
  container_name: x11
  image: dorowu/ubuntu-desktop-lxde-vnc
  restart: on-failure
  environment:
    - RESOLUTION=1920x1080
my_app:
  network_mode: host
  container_name: my_app
  image: my-image-path
  restart: on-failure
  depends_on:
    - x11
  environment:
    - LANG=ru_RU.UTF-8
    - DISPLAY=:1
  entrypoint:
    ./start_script
Sometimes my application crashes during my tests, and because I have restart: on-failure set, it automatically restarts. The problem is that the application's state at the time of the crash is preserved; for example, some windows remain open. Is there a way to delete the data in the container on restart, or to make it so that a new container is created each time?
So far, it is not possible to do that automatically (I mean within docker-compose.yaml alone). You will need to rely on something external to enforce it, such as a bash script that automatically removes containers, as shown here. However, you still need to hook it up to your app's failures.
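A minimal sketch of such a script, assuming the service is named my_app as in your file; it runs the service in the foreground and recreates the container from scratch each time it exits, so no crashed state survives:
#!/bin/sh
# Recreate my_app each time it exits, discarding the previous container's state
while true; do
  docker-compose up --force-recreate my_app
done
You would then remove restart: on-failure from the service, so that the script rather than Docker handles restarts.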
I have docker-compose.yml on my local machine like below:
version: "3.3"
services:
api:
build: ./api
volumes:
- ./api:/api
ports:
- 3000:3000
links:
- mysql
depends_on:
- mysql
app:
build: ./app
volumes:
- ./app:/app
ports:
- 80:80
mysql:
image: mysql:8.0.27
volumes:
- ./mysql:/var/lib/mysql
tty: true
restart: always
environment:
MYSQL_DATABASE: db
MYSQL_ROOT_PASSWORD: qwerty
MYSQL_USER: db
MYSQL_PASSWORD: qwerty
ports:
- '3306:3306'
The api is a NestJS app; app and mysql are Angular and MySQL respectively. I need to work with this locally.
How can I make it so that my changes are applied without rebuilding the containers every time?
You don't have to build an image with your sources in it for a development environment. For NestJS, and since you're using Docker (I mention this deliberately, because other container runtimes exist), you can simply run a Node.js image from the main Docker registry: https://hub.docker.com/_/node.
You could run it with:
docker run -d -v "$(pwd)/app:/app" node:12-alpine node /app/index.js
N.B.: I chose 12-alpine for the example, and I'm assuming the file that starts your app is index.js; replace it with yours. Note that docker run requires an absolute host path for a bind mount, hence $(pwd).
You must also install the Node dependencies yourself, and they must be in the ./app directory.
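One way to do that, sketched here with the same paths as above so native modules are built against the container's environment rather than your host's, is to run the install step with the same image:
docker run --rm -v "$(pwd)/app:/app" -w /app node:12-alpine npm install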
For docker-compose, it could look like this:
version: "3.3"
services:
app:
image: node:12-alpine
command: /app/index.js
volumes:
- ./app:/app
ports:
- "80:80"
Same way for your API project.
For a production image, it is still suggested to build the image with the sources in it.
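A minimal sketch of such a production image, assuming an index.js entrypoint and a standard npm project layout:
FROM node:12-alpine
WORKDIR /app
# install only production dependencies first, so this layer is cached across source changes
COPY package*.json ./
RUN npm ci --only=production
# then bake the sources into the image
COPY . .
CMD ["node", "index.js"]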
Say you're working on your front-end application (app). This needs to make calls out to the other components, especially api. So you can start the things it depends on, but not the application itself:
docker-compose up -d api
Update your application configuration for this different environment; if you would have proxied to http://api:3000 before, for example, you need to change this to http://localhost:3000 to connect to the container's published ports:.
Now you can develop your application totally normally, without doing anything Docker-specific.
# outside Docker, on your normal development workstation
yarn run dev
$EDITOR src/components/Foo.tsx
You might find it convenient to use environment variables for these settings that will, well, differ per environment. If you're developing the back-end code but want to attach a live UI to it, you'll either need to rebuild the container or update the front-end's back-end URL to point at the host system.
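A sketch of that, assuming a create-react-app-style variable (the name REACT_APP_API_URL is an assumption; use whatever your build tooling actually reads):
# outside Docker, pointing the dev UI at the containerized API
export REACT_APP_API_URL=http://localhost:3000
yarn run dev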
This approach also means you do not need to bind-mount your application's code into the container, and I'd recommend removing those volumes: blocks.
version: "3.9"
services:
service_1: #This server emulates Google Pubsub locally
build:
dockerfile: <dockerfile_path>
context: ./
ports:
- "8074:8074" # port 8074 is used inside CMD in the Dockerfile
restart: always
service_2: #This service creates necessary topics and subscriptions for the other services
build:
dockerfile: <dockerfile_path>
context: ./
environment:
PUBSUB_EMULATOR_HOST: service_1:8074
depends_on:
- emulator
service_3: #database
image: postgres:13.1
environment:
- POSTGRES_USER=<USER>
- POSTGRES_PASSWORD=<PASSWORD>
- APP_DB_USER=<USER>
- APP_DB_PASS=<PASSWORD>
- APP_DB_NAME=test
volumes:
- ./db:/docker-entrypoint-initdb.d/
ports:
- "5432:5432"
service_4: #this service orchestrates the three services below by receiving and sending messages from/to pubsub
build:
dockerfile: <dockerfile_path>
context: ./
ports:
- "8083:8083"
environment:
PUBSUB_EMULATOR_HOST: service_1:8074
depends_on:
- postgres
restart: always
service_5:
build:
dockerfile: <dockerfile_path>
context: ./
ports:
- "8090:8090"
environment:
PUBSUB_EMULATOR_HOST: service_1:8074
restart: always
service_6:
build:
dockerfile: <dockerfile_path>
context: ./
ports:
- "8096:8096"
environment:
PUBSUB_EMULATOR_HOST: service_1:8074
restart: always
service_7:
build:
dockerfile: <dockerfile_path>
context: ./
ports:
- "8080:8080"
environment:
PUBSUB_EMULATOR_HOST: service_1:8074
restart: always
This is what I currently have in my docker-compose.yml. It seems there is something crucial I don't understand about how containers are run, because I get random results every time I run docker-compose up.
Even using depends_on doesn't guarantee that one service is started after another. For some reason this breaks how services interact with the local Pub/Sub emulator. I noticed that whenever I change the ports inside the services and restart, all the services may start working properly. But then after docker-compose down and docker-compose up, some services report not being able to subscribe and don't even retry, despite restart: always being set.
I guess this might be due to a misunderstanding on my side of how this configuration is supposed to work.
Why is the output so nondeterministic?
Is it just a coincidence that changing the ports used by the web apps somehow makes it work?
How do I fix this behavior?
According to the documentation, we specify ports as "HOST_PORT:CONTAINER_PORT", and the latter is what services use internally. It's not even required to set the host port, but whether I set it or not changes nothing.
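Concretely, for the emulator service above:
ports:
  - "8074:8074" # "HOST_PORT:CONTAINER_PORT"; other services reach it at service_1:8074 via the container port, regardless of the host port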
I think the nondeterministic behaviour is caused by the readiness order of your services, which is not guaranteed by depends_on. The Docker documentation has a good explanation of this problem:
You can control the order of service startup and shutdown with the depends_on option. Compose always starts and stops containers in dependency order, where dependencies are determined by depends_on, links, volumes_from, and network_mode: "service:...".
However, for startup Compose does not wait until a container is “ready” (whatever that means for your particular application) - only until it’s running. There’s a good reason for this.
The problem of waiting for a database (for example) to be ready is really just a subset of a much larger problem of distributed systems. In production, your database could become unavailable or move hosts at any time. Your application needs to be resilient to these types of failures.
To handle this, design your application to attempt to re-establish a connection to the database after a failure. If the application retries the connection, it can eventually connect to the database.
The best solution is to perform this check in your application code, both at startup and whenever a connection is lost for any reason. However, if you don’t need this level of resilience, you can work around the problem with a wrapper script: ...
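A minimal sketch of such a wrapper script, assuming the dependent service only needs service_1's TCP port 8074 to accept connections and that nc is available in the image:
#!/bin/sh
# wait-for.sh: block until host:port accepts TCP connections, then exec the real command
host="$1"; port="$2"; shift 2
until nc -z "$host" "$port"; do
  echo "waiting for $host:$port ..."
  sleep 1
done
exec "$@"
It would be wired in through the service's command, e.g. command: ["./wait-for.sh", "service_1", "8074", "your-real-command"] (the final command is a placeholder).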
Now that links are deprecated in docker-compose.yml (and we're able to use the new networking feature to communicate between containers), we've lost a way to explicitly define dependencies between containers. How can we now tell our mysql container to come up first, before our api-server container starts up (which connects to mysql via the DNS entry myapp_mysql_1 in docker-compose.yml)?
One possibility is to use volumes_from as a workaround until the depends_on feature (discussed below) is introduced. Assuming you have an nginx container depending on a php container, you could do the following:
nginx:
  image: nginx
  ports:
    - "42080:80"
  volumes:
    - ./config/docker/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
  volumes_from:
    - php
php:
  build: config/docker/php
  ports:
    - "42022:22"
  volumes:
    - .:/var/www/html
  env_file: config/docker/php/.env.development
mongo:
  image: mongo
  ports:
    - "42017:27017"
  volumes:
    - /var/mongodata/wa-api:/data/db
  command: --smallfiles
One big caveat in the above approach is that the volumes of php are exposed to nginx, which is not desired. But at the moment this is one docker specific workaround that could be used.
depends_on feature
This is a forward-looking answer, because the functionality is not yet implemented in Docker (as of Compose 1.9). There is a proposal to introduce depends_on in the new networking feature introduced by Docker, but there is a long-running debate about it at https://github.com/docker/compose/issues/374. Hence, once it is implemented, depends_on could be used to order container start-up, but at the moment you would have to resort to the above approach.
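For reference, the syntax as it eventually shipped looks like this, using the service names from the question:
api-server:
  build: .
  depends_on:
    - mysql
mysql:
  image: mysql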
I have a Docker-based system that comprises three containers:
1. The official PHP container, modified with some additional PEAR libs
2. mysql:5.7
3. alterrebe/postfix-relay (a Postfix container)
The official PHP container has a volume that is linked to the host system's code repository, which should in theory allow me to work on this application the same as I would if it were hosted "locally".
However, every time the system is brought up, I have to run
docker-compose stop && docker-compose up -d
in order to see the changes that I just made to the system. It's possible that I don't understand Docker correctly and this is by design, but stopping and starting the container after every code change slows down development substantially. Can anyone tell me what I am doing wrong (if anything)? Thanks in advance.
My docker-compose.yml is below (with variables and what not hidden of course)
web:
  build: .
  links:
    - mysql
    - mailrelay
  environment:
    - HIDDEN_VAR=placeholder
    - ABC_ENV=development
  volumes:
    - ./html/:/var/www/html/
  ports:
    - "0.0.0.0:80:80"
mysql:
  image: mysql:5.7
  environment:
    - MYSQL_ROOT_PASSWORD=abcdefg
    - MYSQL_DATABASE=thedatabase
  volumes:
    - .:/db/:ro
mailrelay:
  hostname: mailrelay
  image: alterrebe/postfix-relay
  ports:
    - "25:25"
  environment:
    - EXT_RELAY_HOST=relay.relay.com
    - EXT_RELAY_PORT=25
    - SMTP_LOGIN=CLASSIFIED
    - SMTP_PASSWORD=ABCDEFGHIK
    - ACCEPTED_NETWORKS=172.0.0.0/8
Eventually I just started running
docker stop {{ container name }} && docker start {{ container name }}
every time instead of docker-compose. Using Docker directly instead of docker-compose is super fast (< 1 second as opposed to over a minute), so it stopped being a big deal.
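For what it's worth, docker restart does the stop and start in one step (the container name placeholder is kept from above):
docker restart {{ container name }}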