Docker-compose: how to update Celery without a rebuild?

I am working on my django + celery + docker-compose project.
Problem
I changed my django code, but the update only takes effect after docker-compose up --build.
How can I enable code updates without rebuilding?
I found this answer, Developing with celery and docker, but didn't understand how to apply it.
docker-compose.yml
version: '3.9'
services:
  django:
    build: ./project  # path to Dockerfile
    command: sh -c "
      gunicorn --bind 0.0.0.0:8000 core_app.wsgi"
    volumes:
      - ./project:/project
      - ./project/static:/project/static
      - media-volume:/project/media
    expose:
      - 8000
  celery:
    build: ./project
    command: celery -A documents_app worker --loglevel=info
    volumes:
      - ./project:/usr/src/app
      - media-volume:/project/media
    depends_on:
      - django
      - redis
  .........
volumes:
  pg_data:
  static:
  media-volume:

Updating code without a rebuild is achievable and is best practice when working with containers; otherwise it takes too much time and effort to create a new image every time you change the code.
The most popular way of doing this is to mount your code directory into the container using one of the two methods below.
In your docker-compose.yml
services:
  web:
    volumes:
      - ./codedir:/app/codedir  # where 'codedir' is your code directory
Or in the CLI, when starting a new container
$ docker run -it --mount "type=bind,source=$(pwd)/codedir,target=/app/codedir" celery bash
So you're effectively mounting the directory that your code lives in on your computer into the Celery container (at /app/codedir in the examples above). Now you can change your code and...
the local directory overwrites the one from the image when the container is started. You only need to build the image once and use it until the installed dependencies or OS-level package versions need to be changed. Not every time your code is modified. - Quoted from this awesome article
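Applied to the compose file in the question, a minimal sketch of the celery service could look like the one below. It assumes the code inside the image lives under /project, as it does for the django service; the celery service currently mounts ./project to /usr/src/app, so the target path should match wherever the worker actually imports the code from. Also note that the bind mount only updates the files on disk: the running worker still needs a restart (e.g. docker-compose restart celery) to pick up the changes, since Celery does not reload code automatically.
  celery:
    build: ./project
    command: celery -A documents_app worker --loglevel=info
    volumes:
      - ./project:/project          # assumption: /project is where the image expects the code
      - media-volume:/project/media
    depends_on:
      - django
      - redis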

Related

Issue with docker not acknowledging docker-compose.override.yml

I'm fairly new to Docker. I was trying to containerize a project with separate development and production versions. I came up with a very basic docker-compose configuration and then tried the override feature, which doesn't seem to work.
I added volume overrides for the web and celery services, but they do not actually get mounted into the containers; I can confirm this by looking at the inspect output of both containers.
Contents of the compose files:
docker-compose.yml
version: '3'
services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    depends_on:
      - redis
  redis:
    image: redis:5.0.9-alpine
  celery:
    build: .
    command: celery worker -A facedetect.celeryapp -l INFO --concurrency=1 --without-gossip --without-heartbeat
    depends_on:
      - redis
    environment:
      - C_FORCE_ROOT=true
docker-compose.override.yml
version: '3'
services:
  web:
    volumes:
      - .:/code
    ports:
      - "8000:8000"
  celery:
    volumes:
      - .:/code
I use Docker with Pycharm on Windows 10.
Command executed to deploy the compose configuration:
"C:\Program Files\Docker Toolbox\docker-compose.exe" -f <full-path>/docker-compose.yml up -d
Command executed to inspect one of the containers:
docker container inspect <container_id>
Any help would be appreciated! :)
Just figured out the problem: I had provided only the docker-compose.yml file explicitly to the Run Configuration created in Pycharm, since it is mandatory to provide at least one compose file there.
The command used by Pycharm explicitly passes the .yml files with the -f option when running the configuration. Adding the docker-compose.override.yml file to the Run Configuration changed the command to
"C:\Program Files\Docker Toolbox\docker-compose.exe" -f <full_path>\docker-compose.yml -f <full_path>/docker-compose.override.yml up -d
This solved the issue. Thanks to Exadra37 for directing me to look at the command that was being executed.
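If you are ever unsure whether an override file is being picked up, a quick sanity check (independent of Pycharm) is to print the merged configuration that Compose will actually use:
docker-compose -f docker-compose.yml -f docker-compose.override.yml config
The volumes and ports from the override file should show up in the output; if they don't, the override is not being merged.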

How do I run a website in bitnami+docker+nginx

I'm taking over a website https://www.funfun.io. Unfortunately, I cannot reach the previous developer anymore.
This is an AngularJS+Node+Express+MongoDB application. He decided to use bitnami+docker+nginx on the server. Here is docker-compose.yml:
version: "3"
services:
funfun-node:
image: funfun
restart: always
build: .
environment:
- MONGODB_URI=mongodb://mongodb:27017/news
env_file:
- ./.env
depends_on:
- mongodb
funfun-nginx:
image: funfun-nginx
restart: always
build:
context: .
dockerfile: Dockerfile.nginx
ports:
- "3000:8443"
depends_on:
- funfun-node
mongodb:
image: mongo:3.4
restart: always
volumes:
- "10studio-mongo:/data/db"
ports:
- "27018:27017"
networks:
default:
external:
name: 10studio
volumes:
10studio-mongo:
driver: local
Dockerfile.nginx:
FROM bitnami/nginx:1.16
COPY ./funfun.io /opt/bitnami/nginx/conf/server_blocks/default.conf
COPY ./ssl/MyCompanyLocalhost.cer /opt/MyCompanyLocalhost.cer
COPY ./ssl/MyCompanyLocalhost.pvk /opt/MyCompanyLocalhost.pvk
Dockerfile:
FROM node:12
RUN npm install -g yarn nrm --registry=https://registry.npm.taobao.org && nrm use cnpm
COPY ./package.json /opt/funfun/package.json
WORKDIR /opt/funfun
RUN yarn
COPY ./ /opt/funfun/
CMD yarn start
On my local machine, I could use npm start to test the website in a web browser.
I have access to the Ubuntu server, but I'm new to bitnami+docker+nginx, so I have the following questions:
In the command line of Ubuntu server, how could I check if the service is running (besides launching the website in a browser)?
How could I shut down and restart the service?
Previously, without docker, we could start mongodb by sudo systemctl enable mongod. Now, with docker, how could we start mongodb?
First of all, to deploy the services defined in the compose file locally, you should run one of the commands below:
docker-compose up
docker-compose up -d # in the background
After running the above command, the docker containers will be created and available on your machine.
To list the running containers:
docker ps
docker-compose ps
To stop containers:
docker stop ${container name}
docker-compose stop
mongodb is part of the docker-compose file and will be running once you start the other services. It will also be restarted automatically in case it crashes or you restart your machine.
One final note: since you are using an external network, you may need to create it before starting the services, as shown below.
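For the compose file above, which expects an external network named 10studio, that would be something like (run once before docker-compose up):
docker network create 10studio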
1.
docker-compose ps will give you the state of your containers
2.
docker-compose stop will stop your containers while keeping their state, so you can start them again as they are with docker-compose up
docker-compose down will stop and remove your containers (docker-compose kill only force-stops them without removing anything)
docker-compose restart will restart your containers
3.
Because your mongodb service is declared with the official mongo image, its container starts when you run docker-compose up without any further intervention.
Or you can add command: mongod --auth directly to your docker-compose.yml, for example:
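Here is a sketch against the mongodb service from the compose file in the question (only the relevant keys shown):
  mongodb:
    image: mongo:3.4
    restart: always
    command: mongod --auth   # enable authentication
    volumes:
      - "10studio-mongo:/data/db"
    ports:
      - "27018:27017"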
The official Docker documentation is very detailed and helps a lot with all of this; keep referring to it: https://docs.docker.com/compose/

Is there a way to RUN a command after building one of two containers in docker-compose

Following case:
I want to build two containers with docker-compose. One is MySQL, the other is a .war file run with Spring Boot that depends on MySQL and needs a working db. After the mysql container is built, I want to fill the db with my mysqldump file before the other container is built.
My first idea was to have it in my mysql Dockerfile as
#RUN mysql -u root -p"$MYSQL_ROOT_PASSWORD"' < /appsb.sql
but of course it wants to execute it while building.
I have no idea how to do it as a command in the docker-compose file; maybe that would work. Or do I need to write a script?
docker-compose.yml
version: "3"
services:
mysqldb:
networks:
- appsb-mysql
environment:
- MYSQL_ROOT_PASSWORD=rootpw
- MYSQL_DATABASE=appsb
build: ./mysql
app-sb:
image: openjdk:8-jdk-alpine
build: ./app-sb/
ports:
- "8080:8080"
networks:
- appsb-mysql
depends_on:
- mysqldb
networks:
- appsb-mysql:
Dockerfile for mysqldb:
FROM mysql:5.7
COPY target/appsb.sql /
#RUN mysql -u root -p"$MYSQL_ROOT_PASSWORD"' < /appsb.sql
Dockerfile for the other springboot appsb:
FROM openjdk:8-jdk-alpine
VOLUME /tmp
COPY target/appsb.war /
RUN java -jar /appsb.war
Here is a similar issue (loading a dump.sql at startup) for a MySQL container: Setting up MySQL and importing dump within Dockerfile.
Option 1: import via a command in the Dockerfile.
Option 2: execute a bash script from docker-compose.yml.
Option 3: execute an import command from docker-compose.yml.
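A variant of option 1 that avoids running mysql during the build: the official mysql image executes any *.sql or *.sh files found in /docker-entrypoint-initdb.d the first time it initializes an empty data directory, so the dump only needs to be copied there. A minimal sketch of the mysqldb Dockerfile:
FROM mysql:5.7
# Executed automatically on first startup, after MYSQL_DATABASE (appsb) has been created
COPY target/appsb.sql /docker-entrypoint-initdb.d/appsb.sql
Keep in mind this import only happens on the first initialization; recreating the container with an already populated data volume will not re-run it.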

Docker config: Celery + RabbitMQ

How do I run Celery and RabbitMQ in a docker container? Can you point me to sample dockerfile or compose files?
This is what I have:
Dockerfile:
FROM python:3.4
ENV PYTHONBUFFERED 1
WORKDIR /tasker
ADD requirements.txt /tasker/
RUN pip install -r requirements.txt
ADD . /tasker/
docker-compose.yml
rabbitmq:
  image: tutum/rabbitmq
  environment:
    - RABBITMQ_PASS=mypass
  ports:
    - "5672:5672"
    - "15672:15672"
celery:
  build: .
  command: celery worker --app=tasker.tasks
  volumes:
    - .:/tasker
  links:
    - rabbitmq:rabbit
The issue I'm having is I can't get Celery to stay alive or running. It keeps exiting.
I had a similar Celery exiting problem while dockerizing my application. You should use the rabbit service name (in your case it's rabbitmq) as the host name in your celery configuration. That is, use broker_url = 'amqp://guest:guest@rabbitmq:5672//' instead of broker_url = 'amqp://guest:guest@localhost:5672//'. In my case, the major components are Flask, Celery and Redis. My problem is described HERE; please check the link, you may find it useful.
Update 2018: as commented below by Floran Gmehlin, the celery image is now officially deprecated in favor of the official python image.
As commented in celery/issue 1:
Using this image seems ridiculous. If you have an application container, as you usually have with Django, you need all dependencies (things you import in tasks.py) installed in this container again.
That's why other projects (e.g. cookiecutter-django) reuse the application container for Celery, and only run a different command (command: celery ... worker) against it with docker-compose.
Note: the docker-compose.yml there is now called local.yml and uses start.sh.
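In compose terms, that "reuse the application container" pattern might look roughly like the sketch below (hypothetical service names; both services share the same build and only the command differs, with the worker app path taken from the question):
services:
  app:
    build: .
    command: gunicorn myproject.wsgi    # hypothetical web entrypoint
  celeryworker:
    build: .
    command: celery worker --app=tasker.tasks --loglevel=info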
Original answer:
You can try and emulate the official celery Dockerfile, which does a bit more setup before the CMD ["celery", "worker"].
See the usage of that image to run it properly.
start a celery worker (RabbitMQ Broker)
$ docker run --link some-rabbit:rabbit --name some-celery -d celery
check the status of the cluster
$ docker run --link some-rabbit:rabbit --rm celery celery status
If you can use that image in your docker-compose, then you can try building your own starting FROM celery instead of FROM python.
Something I used in my docker-compose.yml; it works for me. Check the details in this medium post.
version: '2'
services:
  rabbit:
    hostname: rabbit
    image: rabbitmq:latest
    environment:
      - RABBITMQ_DEFAULT_USER=admin
      - RABBITMQ_DEFAULT_PASS=mypass
    ports:
      - "5672:5672"
  worker:
    build:
      context: .
      dockerfile: dockerfile
    volumes:
      - .:/app
    links:
      - rabbit
    depends_on:
      - rabbit

Dockerfile and docker-compose not updating with new instructions

When I try to build a container using docker-compose like so
nginx:
  build: ./nginx
  ports:
    - "5000:80"
the COPY instruction isn't working when my Dockerfile simply looks like this
FROM nginx
#Expose port 80
EXPOSE 80
COPY html /usr/share/nginx/test
#Start nginx server
RUN service nginx restart
What could be the problem?
It seems that when using the docker-compose command it keeps an intermediate image that it doesn't show you and constantly reuses, never updating it correctly.
Sadly the documentation regarding this is poor. The way to fix it is to build first with no cache and then bring it up, like so:
docker-compose build --no-cache
docker-compose up -d
I had the same issue and a one-liner that does it for me is:
docker-compose up --build --remove-orphans --force-recreate
--build does the biggest part of the job and triggers the build.
--remove-orphans is useful if you have changed the name of one of your services. Otherwise, you might get a leftover warning telling you about the old, now wrongly named service dangling around.
--force-recreate is a little drastic but will force the recreation of the containers.
Reference: https://docs.docker.com/compose/reference/up/
Warning: I could do this on my project because I was toying around with really small container images. Recreating everything, every time, could take significant time depending on your situation.
If you need docker-compose to pick up your files every time you run the up command, I suggest declaring a volumes option for your service in the compose.yml file. It will persist your data and also make the files in that folder visible inside the container.
More info here: volume-configuration-reference
server:
  image: server
  container_name: server
  build:
    context: .
    dockerfile: server.Dockerfile
  env_file:
    - .envs/.server
  working_dir: /app
  volumes:
    - ./server_data:/app # <= here it is
  ports:
    - "9999:9999"
  command: ["command", "to", "run", "the", "server", "--some-options"]
Optionally, you can add the following section to the end of the compose.yml file. It will keep that folder persisted: the data in it will not be removed by the docker-compose stop command or the docker-compose down command. To remove it you will need to run the down command with the additional -v flag:
docker-compose down -v
For example, including volumes:
services:
  server:
    image: server
    container_name: server
    build:
      context: .
      dockerfile: server.Dockerfile
    env_file:
      - .envs/.server
    working_dir: /app
    volumes:
      - ./server_data:/app # <= here it is
    ports:
      - "9999:9999"
    command: ["command", "to", "run", "the", "server", "--some-options"]
volumes: # at the root level, the same as services
  server_data:
