docker-compose.yml
version: '2'
services:
  app:
    build:
      context: .
    command: python src/app.py
    restart: on-failure
    depends_on:
      - db
    environment:
      - TJBOT_DB_HOST=db
      - TJBOT_API_KEY
      - TJBOT_AUTO_QUESTION_TIME
    env_file:
      - .env
  db:
    image: mongo:3.0.14
    volumes:
      - mongodbdata:/data/db
volumes:
  mongodbdata:
If I change the .env file, how could I reload the container to use the new environment variables with minimum downtime?
If you are running the YAML with docker-compose, you can just run docker-compose up -d again and it will recreate any containers whose configuration has changed and leave all unchanged services untouched.
$ cat docker-compose.env2.yml
version: '2'
services:
  test:
    image: busybox
    # command: env
    command: tail -f /dev/null
    environment:
      - MY_VAR=hello
      - MY_VAR2=world
  test2:
    image: busybox
    command: tail -f /dev/null
    environment:
      - MY_VAR=same ole same ole
$ docker-compose -f docker-compose.env2.yml up -d
Creating network "test_default" with the default driver
Creating test_test_1
Creating test_test2_1
$ vi docker-compose.env2.yml # edit the file to change MY_VAR
$ docker-compose -f docker-compose.env2.yml up -d
Recreating test_test_1
test_test2_1 is up-to-date
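To confirm the recreated container picked up the new value, you can inspect its environment (a quick check; exec needs -T when its output is piped):
$ docker-compose -f docker-compose.env2.yml exec -T test env | grep MY_VAR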
If you run the containers via docker stack deploy -c docker-compose.yml with a version 3 file format, you can do a rolling update of the service, which prevents downtime as long as you have multiple instances of the service running. This functionality is still very new; you'll want Docker 1.13.1 to fix some of the issues with updates, and as with anything this new, bugs are still being worked out. See the sketch below.
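A minimal sketch of that setup, assuming a swarm has already been initialized with docker swarm init; the stack name mystack, the service name app, and the image tags are placeholders:
version: '3'
services:
  app:
    image: my_app:v1
    deploy:
      replicas: 2
      update_config:
        parallelism: 1  # replace one task at a time
        delay: 10s      # wait between replacements
$ docker stack deploy -c docker-compose.yml mystack
$ docker service update --image my_app:v2 mystack_app  # rolling update, one replica at a time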
Related
I am deploying an API to 3 different environments (test, stage & production).
I usually deploy with docker-compose, so I wrote 2 services (1 for my API and 1 for a database) as follows:
# file docker-compose.yml
version: '3.3'
services:
  api:
    build:
      context: ..
      dockerfile: Dockerfile
    image: my_api:${TAG}
    ports:
      - "${API_PORT_FROM_ENV}:8000"
    env_file: .env
    depends_on:
      - db
  db:
    image: whatever:v0.0.0
    ports:
      - "${DB_PORT_FROM_ENV}:5000"
    env_file:
      - .env
In the file above, you can find the parent services.
Then, I wrote 2 files that explain my strategy for deploying my containers on the same virtual machine:
-> staging environment below
# docker-compose.stage.yml
version: "3.3
services:
api:
container_name: api_stage
environment:
- environment="staging"
db:
container_name: db_stage
environment:
- environment="staging"
volumes:
- /I/Mount/a/local/volume/stage:/container/volume
-> production environment below
# docker-compose.prod.yml
version: "3.3
services:
api:
container_name: api_prod
environment:
- environment="production"
db:
container_name: db_prod
environment:
- environment="production"
volumes:
- /I/Mount/a/local/volume/prod:/container/volume
My problem:
Production is currently running.
I deploy my containers with the following command:
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up --build
I want to deploy a staging environment on the same virtual machine. I want my api_prod + db_prod running in parallel with api_stage + db_stage.
Unfortunately, when I run the command:
docker-compose -f docker-compose.yml -f docker-compose.stage.yml up --build
My containers api_prod and db_prod stop. Why?
I found the solution:
I need to specify the --project-name option, which lets the stage and production containers run side by side without clashing.
Below are the 2 commands:
# Stage
docker-compose --project-name stage -f docker-compose.yml -f docker-compose.stage.yml up --build
# Production
docker-compose --project-name prod -f docker-compose.yml -f docker-compose.prod.yml up --build
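To see the isolation this gives you, note that Compose prefixes default resources such as the project network with the project name; a quick check (commands only, names follow Compose's <project>_default convention):
$ docker-compose --project-name stage -f docker-compose.yml -f docker-compose.stage.yml ps
$ docker network ls --filter name=stage
# the stage project gets its own default network (stage_default), separate from prod_default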
I am also open to other solutions.
You need to specify different host port bindings as well, since two containers cannot publish the same host port:
# docker-compose.stage.yml
version: "3.3
services:
api:
container_name: api_stage
ports:
- "8001:8000"
environment:
- environment="staging"
db:
container_name: db_stage
ports:
- "xxxY:xxxx"
environment:
- environment="staging"
volumes:
- /I/Mount/a/local/volume/stage:/container/volume
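You can then confirm which host ports each container actually publishes (docker port lists a container's port mappings):
$ docker port api_stage
$ docker port api_prod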
I want to execute a command using a docker-compose file, and the code sometimes fails because of connection timeouts. I thought that adding restart: on-failure would automatically restart the container if it failed.
The command looks like this:
docker-compose run --rm \
  -e VAR1=value1 \
  [...] \
  web flask tasks my_failing_task
My docker-compose.yml looks like this:
version: "3"
services:
web:
user: root
image: my-image
network_mode: "host"
environment:
APPLICATION: "web"
GOOGLE_APPLICATION_CREDENTIALS: "mysecret.json"
volumes:
- ../../../stuff/:/stuff/
restart: on-failure:3
I have noticed that the container does not restart when I use docker-compose run.
I have then tried to move the command inside the docker-compose.yml, like this:
version: "3"
services:
web:
user: root
image: my-image
network_mode: "host"
command: flask tasks my_failing_task
environment:
APPLICATION: "web"
GOOGLE_APPLICATION_CREDENTIALS: "mysecret.json"
VAR1: value1
volumes:
- ../../../stuff/:/stuff/
restart: on-failure:3
And executed docker-compose up, but got the same result.
It seems that restart only works with docker-compose up when I add another container, for example a redis:
version: "3"
services:
web:
user: root
image: my-image
network_mode: "host"
command: flask tasks my_failing_task
environment:
APPLICATION: "web"
GOOGLE_APPLICATION_CREDENTIALS: "mysecret.json"
VAR1: value1
volumes:
- ../../../stuff/:/stuff/
restart: on-failure:3
redis:
hostname: redis
image: redis
ports:
- "6379:6379"
Then it actually restarts up to 3 times if it fails.
So my questions are:
Why doesn't restart work with run?
Why does restart only work with up if there is more than 1 container in the docker-compose.yml file?
Thanks!
In the code, docker-compose run always implies restart: no. The GitHub issue docker/compose#6302 describes this a little bit further.
docker-compose run is used to run a "one-off" container, running a single alternate command with mostly the same specification as what's described in docker-compose.yml. Imagine Compose didn't have this override. If you did docker-compose run a command that failed, it would restart, potentially forever. If it ran in the background, you'd leak a restarting container; if it ran in the foreground, you'd have to go to another terminal to kill off the container. That's a harsh penalty for typing docker-compose run web bsah by accident.
Otherwise, most of the options, including restart:, get passed through directly to the Docker API. It shouldn't make a difference whether docker-compose up runs one container or several.
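You can verify this yourself by inspecting the restart policy Docker actually recorded; a sketch using the compose file above (the exact JSON shape may vary by Docker version):
$ docker-compose up -d web
$ docker inspect -f '{{json .HostConfig.RestartPolicy}}' $(docker-compose ps -q web)
# -> on-failure with MaximumRetryCount 3
$ cid=$(docker-compose run -d web flask tasks my_failing_task)
$ docker inspect -f '{{json .HostConfig.RestartPolicy}}' $cid
# -> no restart policy, because run overrides it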
I'm fairly new to Docker. I was trying to containerize a project for development and production versions. I came up with a very basic docker-compose configuration and then tried the override feature, which doesn't seem to work.
I added volume overrides to the web and celery services, but they do not actually get mounted into the containers; I can confirm this by looking at the inspect output of both containers.
Contents of the compose files:
docker-compose.yml
version: '3'
services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    depends_on:
      - redis
  redis:
    image: redis:5.0.9-alpine
  celery:
    build: .
    command: celery worker -A facedetect.celeryapp -l INFO --concurrency=1 --without-gossip --without-heartbeat
    depends_on:
      - redis
    environment:
      - C_FORCE_ROOT=true
docker-compose.override.yml
version: '3'
services:
  web:
    volumes:
      - .:/code
    ports:
      - "8000:8000"
  celery:
    volumes:
      - .:/code
I use Docker with Pycharm on Windows 10.
Command executed to deploy the compose configuration:
"C:\Program Files\Docker Toolbox\docker-compose.exe" -f <full-path>/docker-compose.yml up -d
Command executed to inspect one of the containers:
docker container inspect <container_id>
Any help would be appreciated! :)
Just figured out that I had provided only the docker-compose.yml file to the Run Configuration created in Pycharm, as it is mandatory to provide at least one compose file there.
The command used by Pycharm explicitly mentions the .yml files using the -f option when running the configuration. Adding the docker-compose.override.yml file to the Run Configuration changed the command to
"C:\Program Files\Docker Toolbox\docker-compose.exe" -f <full_path>\docker-compose.yml -f <full_path>\docker-compose.override.yml up -d
This solved the issue. Thanks to Exadra37 for directing me to look at the command that was being executed.
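A handy way to catch this kind of problem is docker-compose config, which prints the fully merged configuration for exactly the files you pass with -f:
$ docker-compose -f docker-compose.yml -f docker-compose.override.yml config
# if the volumes are missing here, they will be missing in the containers too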
I run this command manually:
$ docker run -it --rm \
    --network app-tier \
    bitnami/cassandra:latest cqlsh --username cassandra --password cassandra cassandra-server
But I don't know how to convert it to a docker-compose file, especially the custom arguments such as --username and --password.
What should I write in a docker-compose.yaml file to obtain the same result?
Thanks
Here is a sample of how others have done it. http://abiasforaction.net/apache-cassandra-cluster-docker/
Run the command via the command: key, and set arguments via the environment: key (sketched below).
Remember, just because you can doesn't mean you should: Compose is not always the best way to launch something. Often it can be the lazy way.
If you're running this as a service, I'd suggest building the Dockerfile first and then creating systemd/init scripts to remove/relaunch it.
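As a concrete sketch of that mapping (the service name cqlsh-client is made up; app-tier matches the network from the question and is assumed to already exist):
version: '2'
services:
  cqlsh-client:
    image: bitnami/cassandra:latest
    command: cqlsh --username cassandra --password cassandra cassandra-server
    networks:
      - app-tier
networks:
  app-tier:
    external: true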
An example Cassandra docker-compose.yml might be:
version: '2'
services:
  cassandra:
    image: 'bitnami/cassandra:latest'
    ports:
      - '7000:7000'
      - '7001:7001'
      - '9042:9042'
      - '9160:9160'
    volumes:
      - 'cassandra_data:/bitnami'
volumes:
  cassandra_data:
    driver: local
although this will not include your command-line arguments; it starts the container with the default CMD or ENTRYPOINT.
As you are actually running a command other than the default, you might not want to do this with docker-compose. Alternatively, you can create a new Docker image with this command as the default and provide the username and password as ENVs,
e.g. something like this (untested)
FROM bitnami/cassandra:latest
ENV USER=cassandra
ENV PASSWORD=password
CMD ["cqlsh", "--username", "$USER", "--password", "$PASSWORD", "cassandra-server"]
and you can build it
docker build -t mycassandra .
and run it with something like:
docker run -it -e "USER=foo" -e "PASSWORD=bar" mycassandra
or in docker-compose:
services:
  cassandra:
    image: 'mycassandra'
    ports:
      - '7000:7000'
      - '7001:7001'
      - '9042:9042'
      - '9160:9160'
    environment:
      USER: user
      PASSWORD: pass
    volumes:
      - 'cassandra_data:/bitnami'
volumes:
  cassandra_data:
    driver: local
You might be looking for something like the following; not sure if it is going to help you.
version: '3'
services:
  my_app:
    image: bitnami/cassandra:latest
    command: /bin/sh -c "cqlsh --username cassandra --password cassandra cassandra-server"
    ports:
      - "8080:8080"
    networks:
      - app-tier
networks:
  app-tier:
    external: true
I'm new to Docker. I am writing a docker-compose file which creates 2 containers, foo and bar, sharing a volume data:
services:
  foo:
    container_name: foo
    build: ./foo
    volumes:
      - data:/var/lib/
  bar:
    container_name: bar
    build: ./bar
    volumes:
      - data:/var/lib/
    depends_on:
      - foo
volumes:
  data:
Now, I want to use the environment variable TAG to tag containers and volumes, in order to specify whether it's for test or production. I expect something like this:
services:
  foo:
    container_name: foo_${TAG}
    build: ./foo
    volumes:
      - data_${TAG}:/var/lib/
  bar:
    container_name: bar_${TAG}
    build: ./bar
    volumes:
      - data_${TAG}:/var/lib/
    depends_on:
      - foo
volumes:
  data_${TAG}:
Obviously, docker-compose is unhappy because of the last line containing data_${TAG}:.
How can I name my volume with TAG env variable?
If you create your volumes in advance, you can use the variable in external volume names like this (note that the reference inside the compose file is a fixed name, but it points to a variable external volume name):
$ cat docker-compose.volvar.yml
version: '2'
volumes:
  data:
    external:
      name: test-data-${TAG}
services:
  source:
    image: busybox
    command: /bin/sh -c 'echo From ${TAG} >>/data/common.log && sleep 10m'
    environment:
      - TAG
    volumes:
      - data:/data
  target:
    image: busybox
    command: tail -f /data/common.log
    depends_on:
      - source
    environment:
      - TAG
    volumes:
      - data:/data
Create your volumes in advance with a docker volume create command:
$ docker volume create test-data-dev
test-data-dev
$ docker volume create test-data-uat
test-data-uat
$ docker volume create test-data-stage
test-data-stage
And here's an example of running it (I didn't use different directories or change the project name, so compose just replaced my containers each time, but I could have easily changed the project to run them all concurrently with the same results):
$ TAG=dev docker-compose -f docker-compose.volvar.yml up -d
Creating test_source_1
Creating test_target_1
$ docker logs test_target_1
From dev
$ TAG=uat docker-compose -f docker-compose.volvar.yml up -d
Recreating test_source_1
Recreating test_target_1
$ docker logs test_target_1
From uat
$ TAG=stage docker-compose -f docker-compose.volvar.yml up -d
Recreating test_source_1
Recreating test_target_1
$ docker logs test_target_1
From stage
$ # just to show that the volumes are saved and unique,
$ # rerunning uat generates a second line
$ TAG=uat docker-compose -f docker-compose.volvar.yml up -d
Recreating test_source_1
Recreating test_target_1
$ docker logs test_target_1
From uat
From uat
I don't know if that exact approach is possible, but here is what I do:
I have a docker-compose.yml file like this:
services:
  foo:
    container_name: foo_${TAG}
    build: ./foo
    volumes:
      - /var/lib/
  bar:
    container_name: bar_${TAG}
    build: ./bar
    volumes:
      - /var/lib/
    depends_on:
      - foo
And then I create a file docker-compose.override.yml that contains
services:
  foo:
    volumes:
      - data_dev:/var/lib/
  bar:
    volumes:
      - data_dev:/var/lib/
This way, when you launch docker-compose, it uses the main file and applies the override file on top of it.
You should then have 3 files:
docker-compose.yml
docker-compose.override-prod.yml
docker-compose.override-dev.yml
And then when you build, you have the choice between these 2 options:
(What I do) Copy the desired docker-compose.override-<env>.yml to docker-compose.override.yml, and Docker Compose automatically picks up both files.
You can provide the two files explicitly to docker-compose with the -f parameter, as shown below.
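For example, to bring up the dev environment with both files passed explicitly:
$ docker-compose -f docker-compose.yml -f docker-compose.override-dev.yml up -d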
I hope it helps.