I'm fairly new to Docker. I was trying to containerize a project with separate development and production configurations. I came up with a very basic docker-compose configuration and then tried the override feature, which doesn't seem to work.
I added volume overrides for the web and celery services, but they are not actually mounted in the containers; I can confirm this by looking at the inspect output of both containers.
Contents of the compose files:
docker-compose.yml
version: '3'
services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    depends_on:
      - redis
  redis:
    image: redis:5.0.9-alpine
  celery:
    build: .
    command: celery worker -A facedetect.celeryapp -l INFO --concurrency=1 --without-gossip --without-heartbeat
    depends_on:
      - redis
    environment:
      - C_FORCE_ROOT=true
docker-compose.override.yml
version: '3'
services:
  web:
    volumes:
      - .:/code
    ports:
      - "8000:8000"
  celery:
    volumes:
      - .:/code
I use Docker with PyCharm on Windows 10.
Command executed to deploy the compose configuration:
"C:\Program Files\Docker Toolbox\docker-compose.exe" -f <full-path>/docker-compose.yml up -d
Command executed to inspect one of the containers:
docker container inspect <container_id>
Any help would be appreciated! :)
Just figured out the issue: I had provided the docker-compose.yml file explicitly in the Run Configuration created in PyCharm, since it is mandatory to provide at least one compose file there.
The command PyCharm uses mentions the .yml files explicitly with the -f option when running the configuration. Adding the docker-compose.override.yml file to the Run Configuration changed the command to
"C:\Program Files\Docker Toolbox\docker-compose.exe" -f <full_path>\docker-compose.yml -f <full_path>\docker-compose.override.yml up -d
This solved the issue. Thanks to Exadra37 for directing me to look at the command that was being executed.
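For anyone who hits the same thing: when any -f flag is passed, Compose stops picking up docker-compose.override.yml automatically, so every file has to be listed explicitly. You can check what will actually be deployed by printing the merged configuration with the standard config subcommand (the paths are placeholders):

docker-compose -f docker-compose.yml -f docker-compose.override.yml config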
I am working on my django + celery + docker-compose project.
Problem
I changed the Django code, but the update only takes effect after docker-compose up --build.
How can I enable code updates without a rebuild?
I found this answer, Developing with celery and docker, but didn't understand how to apply it.
docker-compose.yml
version: '3.9'
services:
  django:
    build: ./project # path to Dockerfile
    command: sh -c "
      gunicorn --bind 0.0.0.0:8000 core_app.wsgi"
    volumes:
      - ./project:/project
      - ./project/static:/project/static
      - media-volume:/project/media
    expose:
      - 8000
  celery:
    build: ./project
    command: celery -A documents_app worker --loglevel=info
    volumes:
      - ./project:/usr/src/app
      - media-volume:/project/media
    depends_on:
      - django
      - redis
.........
volumes:
  pg_data:
  static:
  media-volume:
Code update without rebuild is achievable and is best practice when working with containers; otherwise it takes too much time and effort to create a new image every time you change the code.
The most popular way of doing this is to mount your code directory into the container using one of the two methods below.
In your docker-compose.yml
services:
  web:
    volumes:
      - ./codedir:/app/codedir # where 'codedir' is your code directory
In the CLI, starting a new container:
$ docker run -it --mount "type=bind,source=$(pwd)/codedir,target=/app/codedir" celery bash
So you're effectively mounting the directory that your code lives in on your computer over /app/codedir inside the celery container. Now you can change your code and...
the local directory overwrites the one from the image when the container is started. You only need to build the image once and use it until the installed dependencies or OS-level package versions need to be changed. Not every time your code is modified. - Quoted from this awesome article
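One caveat with the compose file in this question: a bind mount gets the new code into the container, but gunicorn does not reload changed modules by default, and the celery service mounts ./project to /usr/src/app while django mounts it to /project, so it's worth confirming the mount target matches the path the code actually runs from. A hedged development sketch (--reload is a real gunicorn flag; the paths are assumptions to adapt, not the asker's verified setup):

services:
  django:
    build: ./project
    # --reload makes gunicorn restart its workers when source files change
    command: sh -c "gunicorn --reload --bind 0.0.0.0:8000 core_app.wsgi"
    volumes:
      - ./project:/project
  celery:
    build: ./project
    # mount over the same path the worker imports code from inside the image
    volumes:
      - ./project:/project

celery itself doesn't watch files either; in development it is often wrapped with a file watcher such as watchmedo auto-restart from the watchdog package.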
I want to execute a command using a docker-compose file, and the code sometimes fails because of connection timeouts. I thought that adding restart: on-failure would automatically restart the container when it fails.
The command looks like this:
docker-compose run --rm \
  -e VAR1=value1 \
  [...] \
  web flask tasks my_failing_task
My docker-compose.yml looks like this:
version: "3"
services:
web:
user: root
image: my-image
network_mode: "host"
environment:
APPLICATION: "web"
GOOGLE_APPLICATION_CREDENTIALS: "mysecret.json"
volumes:
- ../../../stuff/:/stuff/
restart: on-failure:3
I have noticed that the container does not restart when I use docker-compose run.
I then tried to move the command inside the docker-compose.yml, like this:
version: "3"
services:
web:
user: root
image: my-image
network_mode: "host"
command: flask tasks my_failing_task
environment:
APPLICATION: "web"
GOOGLE_APPLICATION_CREDENTIALS: "mysecret.json"
VAR1: value1
volumes:
- ../../../stuff/:/stuff/
restart: on-failure:3
And executed docker-compose up, but got the same result.
It seems that restart only works with docker-compose up when I add another container, like a redis for example:
version: "3"
services:
web:
user: root
image: my-image
network_mode: "host"
command: flask tasks my_failing_task
environment:
APPLICATION: "web"
GOOGLE_APPLICATION_CREDENTIALS: "mysecret.json"
VAR1: value1
volumes:
- ../../../stuff/:/stuff/
restart: on-failure:3
redis:
hostname: redis
image: redis
ports:
- "6379:6379"
Then it actually restarts, up to 3 times, if it fails.
So my questions are:
Why doesn't restart work with run?
Why does restart only work with up if there is more than one container in the docker-compose.yml file?
Thanks!
In the code, docker-compose run always implies restart: no. The GitHub issue docker/compose#6302 describes this a little bit further.
docker-compose run is used to run a "one-off" container, running a single alternate command with mostly the same specification as what's described in docker-compose.yml. Imagine Compose didn't have this override. If you did docker-compose run a command that failed, it would restart, potentially forever. If it ran in the background, you'd leak a restarting container; if it ran in the foreground, you'd have to go to another terminal to kill off the container. That's a harsh penalty for typing docker-compose run web bsah by accident.
Otherwise, most of the options, including restart:, are passed through directly to the Docker API. It shouldn't make any difference to docker-compose up whether there is one container or several.
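If you genuinely need retries for a one-off command, one workaround is to put the retry loop in the shell rather than in Compose; a minimal sketch reusing the command from the question:

# retry the one-off task up to 3 times before giving up
for i in 1 2 3; do
  docker-compose run --rm -e VAR1=value1 web flask tasks my_failing_task && break
done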
I've been using: docker build -t devstack .
docker run --rm -p 443:443 -it -v ~/code:/code devstack
That has been working fine for me so far. I've been able to access the site as expected through my browser. I set my hosts file to point devstack.com to 127.0.0.1 and the site loads nicely. Now I'm trying to use docker-compose so I can use some of the functionality there to more easily connect with AWS.
services:
  web:
    build:
      context: .
    network_mode: "bridge"
    ports:
      - "443"
      - "80"
    volumes:
      - ~/code:/code
    image: devstack:latest
So I run docker-compose build, which gives me the familiar build output from the Dockerfile.
Then I run docker-compose run web, which puts me into the container, where I start apache (doing it manually at the moment), run top to verify it's running, then tail the log files. But when I attempt to hit the site in my browser, I get devstack.com refused to connect, and there are no entries in the apache log files, so the request isn't even reaching apache. So something about the ports isn't opening up to me. Any idea what I need to change to make this work?
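One thing worth checking with docker-compose run: it does not publish the ports declared on the service unless you opt in with the --service-ports flag:

docker-compose run --service-ports web

Otherwise the port mappings in the file are ignored for the one-off container.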
Edit: Updated file. Still same problem:
version: "3"
services:
web:
build:
context: .
# Same issue with both of these:
# network_mode: "bridge"
# network_mode: "host"
ports:
- "443:443"
- "80:80"
volumes:
- ~/code:/code
tty: true
This is what I did to get it working. I used the example project from the Docker Compose documentation, which runs a test app on port 5000. That worked, so I knew it could be done.
I updated my docker-compose.yml to be very similar to the one in the test project. So it looks like this now:
version: "3"
services:
web:
build: .
ports:
- "443:443"
- "80:80"
volumes:
- ~/code:/code
Then I created an entry.sh file which will start apache, and added this to my Dockerfile:
# copy the entry file which will start apache
COPY entry.sh entry.sh
RUN chmod +x entry.sh
# start apache
CMD ./entry.sh; tail -f /var/log/apache2/*.log
So now when I do docker-compose up, it starts apache and tails the apache log files, so I immediately see the apache log output in the terminal, and I'm able to access the site. Basically the problem was just the container exiting. This was the only way I could find to keep it from exiting without setting tty: true in the docker-compose file, which kept it from exiting but wouldn't publish the ports.
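The answer doesn't include entry.sh itself; a minimal sketch, assuming a Debian/Ubuntu-based image with apache2 installed (adjust to however apache is installed in your image):

#!/bin/sh
# start apache in the background; the tail in CMD keeps the container in the foreground
service apache2 start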
I have a docker compose file that links my server to a redis image:
version: '3'
services:
  api:
    build: .
    command: npm run dev
    environment:
      NODE_ENV: development
    volumes:
      - .:/home/node/code
      - /home/node/code/node_modules
      - /home/node/code/build/Release
    ports:
      - "1389:1389"
    depends_on:
      - redis
  redis:
    image: redis:alpine
I am wondering how I could open a redis-cli session against the Redis container started by docker-compose to directly modify key/value pairs. I tried docker attach, but it does not open a shell.
Use docker exec -it your_container_name /bin/bash to get a shell inside the redis container, then run redis-cli to modify key/value pairs.
See https://docs.docker.com/engine/reference/commandline/exec/
Install the Redis CLI on your host, and edit the YAML file to publish Redis's port:
services:
  redis:
    image: redis:alpine
    ports: ["6379:6379"]
Then run docker-compose up to redeploy the container; after that you can run redis-cli from the host without needing to interact with Docker directly.
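For example, once the port is published (mykey/myvalue are placeholders):

$ redis-cli -h 127.0.0.1 -p 6379 set mykey myvalue
OK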
Using /bin/bash as the command (as suggested in the accepted solution) doesn't work for me with the latest redis:alpine image on Linux; the Alpine-based image doesn't include bash.
Instead, this worked:
docker exec -it your_container_name redis-cli
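With Compose you can also address the service by its name from the compose file instead of the generated container name (redis in the file above):

docker-compose exec redis redis-cli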
docker-compose.yml
version: '2'
services:
  app:
    build:
      context: .
    command: python src/app.py
    restart: on-failure
    depends_on:
      - db
    environment:
      - TJBOT_DB_HOST=db
      - TJBOT_API_KEY
      - TJBOT_AUTO_QUESTION_TIME
    env_file:
      - .env
  db:
    image: mongo:3.0.14
    volumes:
      - mongodbdata:/data/db
volumes:
  mongodbdata:
If I change the .env file, how could I reload the container to use the new environment variables with minimum downtime?
If you are running the YAML with docker-compose, you can just run docker-compose up -d again; it will recreate any containers whose configuration has changed and leave all unchanged services untouched.
$ cat docker-compose.env2.yml
version: '2'
services:
  test:
    image: busybox
    # command: env
    command: tail -f /dev/null
    environment:
      - MY_VAR=hello
      - MY_VAR2=world
  test2:
    image: busybox
    command: tail -f /dev/null
    environment:
      - MY_VAR=same ole same ole
$ docker-compose -f docker-compose.env2.yml up -d
Creating network "test_default" with the default driver
Creating test_test_1
Creating test_test2_1
$ vi docker-compose.env2.yml # edit the file to change MY_VAR
$ docker-compose -f docker-compose.env2.yml up -d
Recreating test_test_1
test_test2_1 is up-to-date
If you run the containers with docker stack deploy -c docker-compose.yml using a version 3 file format, you can do a rolling update of the service, which prevents downtime when you have multiple replicas of your service running. This functionality is still very new; you'll want Docker 1.13.1 to fix some of the issues with updates, and as with anything this new, bugs are still being worked out.
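As a sketch of the rolling-update side (the service details are placeholders, not taken from the question's file), the deploy section of a version 3 file controls how docker stack deploy replaces replicas:

version: '3'
services:
  app:
    image: myapp:latest
    deploy:
      replicas: 2
      update_config:
        parallelism: 1  # replace one replica at a time
        delay: 10s      # wait between replacements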