I have a Docker Compose file for my application, and Celery is one of the services.
The celery worker command works, but the celery multi command does not.
celery:
  container_name: celery_application
  build:
    context: .
    dockerfile: deploy/Dockerfile
  # restart: always
  networks:
    - internal_network
  env_file:
    - deploy/.common.env
  # command: ["celery", "--app=tasks", "worker", "--queues=upload_queue", "--pool=prefork", "--hostname=celery_worker_upload_queue", "--concurrency=1", "--loglevel=INFO", "--statedb=/external/celery/worker.state"] # This is working
  command: ["celery", "-A", "tasks", "multi", "start", "default", "model", "upload", "--pidfile=/external/celery/%n.pid", "--logfile=/external/celery/%n%I.log", "--loglevel=INFO", "--concurrency=1", "-Q:default", "default_queue", "-Q:model", "model_queue", "-Q:upload", "upload_queue"] # This is not working
  # tty: true
  # stdin_open: true
  depends_on:
    - redis
    - db
    - pgadmin
    - web
  volumes:
    - my_volume:/external
Getting this output
celery | celery multi v5.2.7 (dawn-chorus)
celery | > Starting nodes...
celery | > default@be788ec5974d:
celery | OK
celery | > model@be788ec5974d:
celery | OK
celery | > upload@be788ec5974d:
celery exited with code 0
All the services come up except celery, which exited with code 0.
What am I missing when using celery multi?
Please suggest.
The celery multi command does not wait for the celery workers to finish; it starts multiple workers in the background and then exits. Unfortunately, in a Docker container the termination of the foreground process causes the child workers to be terminated as well.
It is not good practice to use celery multi with Docker like this, because an issue in a single worker may not be reflected on the container console: a worker can crash, die, or get stuck in an endless loop inside the container without any signal for management or monitoring. With the single worker command, the exit code is returned to the Docker container, and it will restart the service if the worker terminates.
If you still really need to use celery multi like this, you can use bash to append another long-running command to prevent the container from exiting:
command: ["bash", "-c", "celery -A tasks multi start default model upload --pidfile=/external/celery/%n.pid --logfile=/external/celery/%n%I.log --loglevel=INFO --concurrency=1 -Q:default default_queue -Q:model model_queue -Q:upload upload_queue; tail -f /dev/null"]
The tail -f /dev/null keeps your container alive forever, no matter whether the celery workers are running or not. Of course, your container must have bash installed.
My assumption is that you would like to containerize all celery workers into a single container for ease of use. If so, you can try https://github.com/just-containers/s6-overlay instead of celery multi. S6 Overlay can monitor your celery workers, restart them when they exit, and provides process-supervision utilities similar to celery multi, but it is designed for exactly this purpose.
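For reference, here is a minimal sketch of what one s6-overlay service could look like for one of the workers; the path /etc/services.d/upload/run and the worker options are assumptions based on the compose file above, not a drop-in configuration (s6-overlay starts and supervises every service defined under /etc/services.d/ when the image's entrypoint is /init):

#!/bin/bash
# /etc/services.d/upload/run (hypothetical path; s6-overlay supervises this script)
# exec keeps the worker in the foreground so s6 can monitor and restart it
exec celery --app=tasks worker \
    --queues=upload_queue \
    --hostname=celery_worker_upload_queue \
    --concurrency=1 \
    --loglevel=INFO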
I have a Docker Compose application which works fine locally. I would like to create an image from it and upload it to Docker Hub so that I can pull it from my Azure virtual machine without copying over all the files. Is this possible? How can I do it?
I tried to upload the image I see in Docker Desktop and then pull it from the VM, but the container does not start up.
Here I attach my .yml file. There is only one service at the moment, but in the future there will be multiple microservices; this is why I want to use Compose.
version: "3.8"
services:
dbmanagement:
build: ./dbmanagement
container_name: dbmanagement
command: python manage.py runserver 0.0.0.0:8000
volumes:
- ./dbmanagement:/dbmandj
ports:
- "8000:8000"
environment:
- POSTGRES_HOST=*******
- POSTGRES_NAME=*******
- POSTGRES_USER=*******
- POSTGRES_PASSWORD=*******
Thank you for your help
The answer is: yes, you can, but you should not.
According to the official Docker docs:
It is generally recommended that you separate areas of concern by using one service per container
Also check this:
https://stackoverflow.com/a/68593731/3957754
docker-compose is enough
docker-compose exists for exactly that: running several services with one command (and minimal configuration), commonly on the same server.
foreground process
In order to work, a Docker container needs a foreground process. To understand what that is, check the following links. As an extreme summary: a foreground process is one that, when you launch it from the shell, takes over the shell so you cannot enter more commands. You need to press Ctrl+C to kill the process and get your shell back.
https://unix.stackexchange.com/questions/175741/what-is-background-and-foreground-processes-in-jobs
https://linuxconfig.org/understanding-foreground-and-background-linux-processes
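As a very small illustration that you can try in any shell (a sketch, nothing Docker-specific):

# Foreground: the shell is blocked until the process exits; press Ctrl+C to get it back.
ping localhost

# Background: the trailing & returns the shell immediately while the process keeps running.
ping localhost > /dev/null &
jobs    # lists the background job started above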
The "fat" container
Anyway, if you want to combine several services or processes in one container (and therefore one image), you can do it with supervisor.
Supervisor can act as our foreground process. Basically, you register one or more Linux processes and supervisor starts them.
how to install supervisor
sudo apt-get install supervisor
source: https://gist.github.com/hezhao/bb0bee800531b89d7be1#file-supervisor_cmd-sh
add a single config file: /etc/supervisor/conf.d/myapp.conf
[program:myapp]
autostart = true
autorestart = true
command = python /home/pi/myapp.py
environment=SECRET_ID="secret_id",SECRET_KEY="secret_key_avoiding_%_chars"
stdout_logfile = /home/pi/stdout.log
stderr_logfile = /home/pi/stderr.log
startretries = 3
user = pi
source: https://gist.github.com/hezhao/bb0bee800531b89d7be1
start it
sudo supervisorctl start myapp
sudo supervisorctl tail myapp
sudo supervisorctl status
In the previous sample, we used supervisor to start a Python process.
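Inside a container you normally do not talk to a system-wide daemon with supervisorctl like above; instead supervisord itself becomes the container's foreground process. A minimal sketch of such an entrypoint script, assuming the Debian/Ubuntu default config path (adjust it to your image):

#!/bin/bash
# entrypoint.sh (hypothetical): run supervisord in the foreground (-n / --nodaemon)
# so the container keeps running for as long as supervisord does
exec /usr/bin/supervisord -n -c /etc/supervisor/supervisord.conf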
multiple processes with supervisor
You just need to add more [program] sections to the config file:
[program:php7.2]
command=/usr/sbin/php-fpm7.2-zts
process_name=%(program_name)s
autostart=true
autorestart=true
[program:dropbox]
process_name=%(program_name)s
command=/app/.dropbox-dist/dropboxd
autostart=true
autorestart=true
Here are some examples matching your requirement of several processes in one container:
canvas lms: basically starts 3 processes: postgres, redis, and a ruby app
https://github.com/harvard-dce/canvas-docker/blob/master/assets/supervisord.conf
nginx + php + ssh
https://gist.github.com/pollend/b1f275eb7f00744800742ae7ce403048#file-supervisord-conf
nginx + php
https://gist.github.com/lovdianchel/e306b84437bfc12d7d33246d8b4cbfa6#file-supervisor-conf
mysql + redis + mongo + nginx + php
https://gist.github.com/nguyenthanhtung88/c599bfdad0b9088725ceb653304a91e3
Also you could configure a web dashboard:
https://medium.com/coinmonks/when-you-throw-a-web-crawler-to-a-devops-supervisord-562765606f7b
More samples with docker + supervisor:
https://gist.github.com/chadrien/7db44f6093682bf8320c
https://gist.github.com/damianospark/6a429099a66bfb2139238b1ce3a05d79
I have a docker-compose file that creates 3 Hello World applications and uses nginx to load balance traffic across the different containers.
The docker-compose code is as follows:
version: '3.2'
services:
  backend1:
    image: rafaelmarques7/hello-node:latest
    restart: always
  backend2:
    image: rafaelmarques7/hello-node:latest
    restart: always
  backend3:
    image: rafaelmarques7/hello-node:latest
    restart: always
  loadbalancer:
    image: nginx:latest
    restart: always
    links:
      - backend1
      - backend2
      - backend3
    ports:
      - '80:80'
    volumes:
      - ./container-balancer/nginx.conf:/etc/nginx/nginx.conf:ro
I would like to verify that the restart: always policy actually works.
The approach I tried is as follows:
First, I run my application with docker-compose up;
I identify the container IDs with docker container ps;
I kill/stop one of the containers with docker stop ID_Container or docker kill ID_Container.
I was expecting that after the third step (stopping/killing the container, which makes it exit with code 137), the restart policy would kick in and create a new container.
However, this does not happen. I have read that this is intentional, so that there is a way to manually stop containers that have a restart policy.
Despite this, I would like to know how I can kill a container in such a way that it triggers the restart policy so that I can actually verify that it is working.
Thank you for your help.
If you run ps on the host, you will be able to see the actual processes in all of your Docker containers. Once you find a container's main process's PID, you can sudo kill it (you will have to be root). That will look more like a "crash", especially if you use kill -SEGV to send SIGSEGV.
It is very occasionally useful for validation scenarios like this to have an endpoint that crashes your application, which you can enable in test builds, and other similar tricks. Just make sure you have a gate so that those endpoints don't exist in production builds. (In old-school C, an #ifdef TEST would do the job; some languages have equivalents, but many don't.)
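A minimal sketch of that approach, assuming the container is named backend3 and you have root on the host:

# find the host PID of the container's main process
pid=$(docker inspect --format '{{.State.Pid}}' backend3)
# send SIGSEGV so it looks like a crash rather than a clean stop
sudo kill -SEGV "$pid"
# with restart: always, a replacement container should show up shortly
docker ps --filter name=backend3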
You can docker exec into the running container and kill processes. If your entrypoint process (PID 1) starts a subprocess, find it and kill it:
docker exec -it backend3 /bin/sh
ps -ef
Find the process whose parent is PID 1 and kill -9 it.
If your entrypoint is the only process (PID 1), it cannot be killed with the kill command. Consider replacing your entrypoint with a script that launches your actual process; that will allow you to use the approach suggested above.
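For example, a one-liner version of that idea (a sketch that assumes pgrep exists inside the image and that PID 1 has a single child worth killing):

docker exec backend3 sh -c 'kill -9 $(pgrep -P 1 | head -n 1)'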
This should simulate a crashing container and should trigger the restart policy.
NOTES:
See explanation in https://unix.stackexchange.com/questions/457649/unable-to-kill-process-with-pid-1-in-docker-container
See why you should not run NodeJS as PID 1 in https://www.elastic.io/nodejs-as-pid-1-under-docker-images/
We have the following structure of the project right now:
Web-server that processes incoming requests from the clients.
Analytics module that provides some recommendations to the users.
We decided to keep these modules completely independent and move them to different docker containers. When a query from a user arrives at the web-server, it sends another query to the analytics module to get the recommendations.
For recommendations to be consistent, we need to do some background calculations periodically and when, for instance, new users register with our system. Some background tasks are also connected purely with the web-server logic. For these purposes we decided to use a distributed task queue, e.g., Celery.
There are following possible scenarios of task creation and execution:
Task enqueued at the web-server, executed at the web-server (e.g., process uploaded image)
Task enqueued at the web-server, executed at the analytics module (e.g., calculate recommendations for a new user)
Task enqueued at the analytics module and executed there (e.g., periodic update)
So far I see 3 rather weird possibilities to use Celery here:
I. Celery in separate container and does everything
Move Celery to the separate docker container.
Provide all of the necessary packages from both web-server and analytics to execute tasks.
Share tasks code with other containers (or declare dummy tasks at web-server and analytics)
This way, we lose isolation, as the functionality is shared between the Celery container and the other containers.
II. Celery in separate container and does much less
Same as I, but the tasks are now just requests to the web-server and analytics module, which are handled asynchronously there, with the result polled inside the task until it is ready.
This way, we get the benefits of having a broker, but all the heavy computation is moved away from the Celery workers.
III. Separate Celery in each container
Run Celery both in web-server and analytics module.
Add dummy task declarations (of analytics tasks) to web-server.
Add 2 task queues, one for web-server, one for analytics.
This way, tasks scheduled at the web-server can be executed in the analytics module. However, we still have to share the task code across the containers (or use dummy tasks), and, additionally, we need to run celery workers in each container.
What is the best way to do this, or should the logic be changed completely, e.g., by moving everything into one container?
First, let's clarify the difference between the celery library (which you get with pip install or in your setup.py) and the celery worker, which is the actual process that dequeues tasks from the broker and handles them. Of course, you might want to have multiple workers/processes (for example, to route different tasks to different workers).
Let's say you have two tasks: calculate_recommendations_task and periodic_update_task, and you want to run each on a separate worker, i.e. recommendation_worker and periodic_worker.
Another process will be celery beat, which simply enqueues the periodic_update_task into the broker every x hours.
In addition, let's say you have a simple web server implemented with bottle.
I'll assume you want to run the celery broker & backend with Docker too, and I'll pick the commonly recommended setup: RabbitMQ as the broker and Redis as the backend.
So now we have 6 containers; I'll write them in a docker-compose.yml:
version: '2'
services:
  rabbit:
    image: rabbitmq:3-management
    ports:
      - "15672:15672"
      - "5672:5672"
    environment:
      - RABBITMQ_DEFAULT_VHOST=vhost
      - RABBITMQ_DEFAULT_USER=guest
      - RABBITMQ_DEFAULT_PASS=guest
  redis:
    image: library/redis
    command: redis-server /usr/local/etc/redis/redis.conf
    expose:
      - "6379"
    ports:
      - "6379:6379"
  recommendation_worker:
    image: recommendation_image
    command: celery worker -A recommendation.celeryapp:app -l info -Q recommendation_worker -c 1 -n recommendation_worker@%h -Ofair
  periodic_worker:
    image: recommendation_image
    command: celery worker -A recommendation.celeryapp:app -l info -Q periodic_worker -c 1 -n periodic_worker@%h -Ofair
  beat:
    image: recommendation_image
    command: <not sure>
  web:
    image: web_image
    command: python web_server.py
Both Dockerfiles, which build the recommendation_image and the web_image, should install the celery library. Only the recommendation_image should contain the tasks code, because its workers are going to handle those tasks:
RecommendationDockerfile:
FROM python:2.7-wheezy
RUN pip install celery
COPY tasks_src_code..
WebDockerfile:
FROM python:2.7-wheezy
RUN pip install celery
RUN pip install bottle
COPY web_src_code..
The other images (rabbitmq:3-management & library/redis) are available on Docker Hub, and they will be pulled automatically when you run docker-compose up.
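A small usage sketch once the images are built (service names as in the compose file above):

docker-compose up -d                           # pulls rabbit/redis and starts all six services
docker-compose logs -f recommendation_worker   # confirm the worker connected to the broker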
Now here is the thing: in your web server you can trigger celery tasks by their string name and pull their results by task id (without sharing the tasks code). web_server.py:
import bottle
from bottle import request
from celery import Celery
from celery.result import AsyncResult

app = bottle.Bottle()

rabbit_path = 'amqp://guest:guest@rabbit:5672/vhost'
celeryapp = Celery('recommendation', broker=rabbit_path)
celeryapp.config_from_object('config.celeryconfig')

@app.route('/trigger_task', method='POST')
def trigger_task():
    # trigger the task by its string name, no shared code needed
    r = celeryapp.send_task('calculate_recommendations_task', args=(1, 2, 3))
    return r.id

@app.route('/trigger_task_res', method='GET')
def trigger_task_res():
    task_id = request.query['task_id']
    result = AsyncResult(task_id, app=celeryapp)
    if result.ready():
        return result.get()
    return result.state

# host/port are illustrative; the app just has to listen on a port the web container exposes
app.run(host='0.0.0.0', port=8080)
The last file, config.celeryconfig.py:
CELERY_ROUTES = {
    'calculate_recommendations_task': {
        'exchange': 'recommendation_worker',
        'exchange_type': 'direct',
        'routing_key': 'recommendation_worker'
    }
}
CELERY_ACCEPT_CONTENT = ['pickle', 'json', 'msgpack', 'yaml']
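For completeness, a hedged usage sketch of the two endpoints above, assuming the web service publishes the port used in the app.run() call of web_server.py:

task_id=$(curl -s -X POST http://localhost:8080/trigger_task)
curl -s "http://localhost:8080/trigger_task_res?task_id=${task_id}"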
I've been porting a web service to Docker recently. As mentioned in the title, I'm encountering a weird scenario where, when I run it using docker run -dit, the service runs in the background, but when I use a docker-compose.yml, the service exits.
To be clearer, I have this entrypoint in my Dockerfile:
ENTRYPOINT ["/data/start-service.sh"]
this is the code of start-service.sh:
#!/bin/bash
/usr/local/bin/uwsgi --emperor=/data/vassals/ --daemonize=/var/log/uwsgi/emperor.log
/etc/init.d/nginx start
exec "$#";
As you can see, I'm just starting uwsgi and nginx in this shell script. The last line (exec) just makes the script accept a parameter and keep it running. Then I run this using:
docker run -dit -p 8080:8080 --name=web_server webserver /bin/bash
As mentioned, the service runs OK and I can access the webservice.
Now, I tried to deploy this using a docker-compose.yml, but the service keeps exiting/shutting down. I attempted to retrieve the logs, but had no success. All I can see from docker ps -a is that it runs for a second or two (or three) and then exits.
Here's my docker-compose.yml:
version: "3"
services:
web_server:
image: webserver
entrypoint:
- /data/start-service.sh
- /bin/bash
ports:
- "8089:8080"
deploy:
resources:
limits:
cpus: "0.1"
memory: 2048M
restart_policy:
condition: on-failure
networks:
- webnet
networks:
- webnet
The entrypoint entry in the yml file is just to make sure that the start-service.sh script is run with /bin/bash as its parameter, to keep the service running. But again, the service shuts down.
bash will exit without a proper tty. Since you execute bash via exec, it becomes PID 1. Whenever PID 1 exits, the container is stopped.
To prevent this, add tty: true to the service's description in your compose file. This is basically the same thing you do with -t in the docker run command.
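You can see the same effect with plain docker by dropping the -t: without a tty, the bash that start-service.sh execs exits immediately and the container stops, while with -t it stays up. A sketch using the image from the question (run them one at a time, since both bind the same port):

# with a tty: bash stays alive, so the container keeps running
docker run -d -t -p 8080:8080 webserver /bin/bash
# without a tty: bash exits right away, PID 1 goes away and the container stops
docker run -d -p 8080:8080 webserver /bin/bash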
I have two containers that are spun up using docker-compose:
web:
  image: personal/webserver
  depends_on:
    - database
  entrypoint: /usr/bin/runmytests.sh
database:
  image: personal/database
In this example, runmytests.sh is a script that runs for a few seconds, then returns with either a zero or non-zero exit code.
When I run this setup with docker-compose, web_1 runs the script and exits. database_1 remains open, because the process running the database is still running.
I'd like to trigger a graceful exit on database_1 when web_1's tasks have been completed.
You can pass the --abort-on-container-exit flag to docker-compose up to have the other containers stop when one exits.
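For example:

# stop every service as soon as any container exits; add --exit-code-from web
# if you also want docker-compose to return that service's exit code
docker-compose up --abort-on-container-exit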
What you're describing is called a Pod in Kubernetes or a Task in AWS. It's a grouping of containers that form a unit. Docker doesn't have that notion currently (Swarm mode has "tasks" which come close but they only support one container per task at this point).
There is a hacky workaround besides scripting it as @BMitch described. You could mount the Docker daemon socket from the host. E.g.:
web:
  image: personal/webserver
  depends_on:
    - database
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  entrypoint: /usr/bin/runmytests.sh
and add the Docker client to your personal/webserver image. That would allow your runmytests.sh script to use the Docker CLI to shut down the database first. Eg: docker kill database.
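A rough sketch of what the end of runmytests.sh could look like with that approach (run_my_tests is a placeholder for the actual test command):

#!/bin/sh
run_my_tests            # placeholder for the real test run
status=$?
docker kill database    # reaches the host daemon through the mounted /var/run/docker.sock
exit $status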
Edit:
Third option. If you want to stop all containers when one fails, you can use the --abort-on-container-exit option to docker-compose as @dnephin mentions in another answer.
I don't believe docker-compose supports this use case. However, making a simple shell script would easily resolve this:
#!/bin/sh
docker run -d --name=database personal/database
docker run --rm -it --entrypoint=/usr/bin/runmytests.sh personal/webserver
docker stop database
docker rm database