I am trying to test an API that sends long-running jobs to a queue processed by Celery workers. I am using RabbitMQ, running in a Docker container, as the message queue. However, when a message is sent to the queue I get the following error: Error: [Errno 111] Connection refused
Steps to reproduce:
Start the RabbitMQ container: docker run -d -p 5672:5672 rabbitmq
Start the Celery worker: celery -A celery worker --loglevel=INFO
Build the Docker image: docker build -t fastapi .
Run the container: docker run -it -p 8000:8000 fastapi
Dockerfile:
FROM python:3.9
WORKDIR /
COPY . .
RUN pip install --no-cache-dir --upgrade -r ./requirements.txt
EXPOSE 8000
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000"]
requirements.txt:
anyio==3.6.1
asgiref==3.5.2
celery==5.2.7
click==8.1.3
colorama==0.4.4
fastapi==0.78.0
h11==0.13.0
httptools==0.4.0
idna==3.3
pydantic==1.9.1
python-dotenv==0.20.0
PyYAML==6.0
sniffio==1.2.0
starlette==0.19.1
typing_extensions==4.2.0
uvicorn==0.17.6
watchgod==0.8.2
websockets==10.3
app.py:
from fastapi import FastAPI
import tasks

app = FastAPI()

@app.get("/{num}")
async def root(num):
    tasks.update_db.delay(num)
    return {"success": True}
tasks.py:
from celery import Celery
import time

celery = Celery('tasks', broker='amqp://')

@celery.task(name='update_db')
def update_db(num: int) -> None:
    time.sleep(30)
    return
You can't connect to RabbitMQ on localhost; it's not running in the same container as your Python app. Since you've published RabbitMQ's port on your host, you can connect to it via your host's address. One way of doing that is to start the app container like this:
docker run -it -p 8000:8000 --add-host host.docker.internal:host-gateway fastapi
And then modify your code like this:
celery = Celery('tasks', broker='amqp://host.docker.internal')
With that code in place, let's re-run your example:
$ docker run -d -p 5672:5672 rabbitmq
$ docker run -d -p 8000:8000 --add-host host.docker.internal:host-gateway fastapi
$ curl http://localhost:8000/1
{"success":true}
There's no reason to publish the rabbitmq ports on your host if you only need to access it from within a container. When building an application with multiple containers, using something like docker-compose can make your life easier.
If you used the following docker-compose.yaml:
version: "3"
services:
rabbitmq:
image: rabbitmq
app:
build:
context: .
ports:
- "8000:8000"
And modified your code to connect to rabbitmq:
celery = Celery('tasks', broker='amqp://rabbitmq')
You could then run docker-compose up to bring up both containers. Your app would be exposed on host port 8000, but rabbitmq would only be available to your app container.
Incidentally, rather than hardcoding the broker uri in your code, you might want to get that from an environment variable instead:
import os

celery = Celery('tasks', broker=os.getenv('APP_BROKER_URI'))
That allows you to use different connection strings without needing to rebuild your image every time. We'd need to modify the docker-compose.yaml to include the appropriate variable:
version: "3"
services:
rabbitmq:
image: rabbitmq
app:
build:
context: .
environment:
APP_BROKER_URI: "amqp://rabbitmq"
ports:
- "8000:8000"
Update tasks.py:
from celery import Celery
import time

celery = Celery('tasks', broker='amqp://user:pass@host:port//')

@celery.task(name='update_db')
def update_db(num: int) -> None:
    time.sleep(30)
    return
Related
How can I implement the below docker-compose code, but using the docker run command? I am specifically interested in the depends_on part.
version: '2'
services:
  db:
    image: postgres
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
depends_on: doesn't map to a docker run option. When you have your two docker run commands you need to make sure you put them in the right order.
docker build -t web_image .
docker network create some_network
docker run --name db --net some_network postgres
# because web has depends_on: [db], it must be started second
docker run --name web --net some_network ... web_image ...
depends_on means:
Compose implementations MUST guarantee dependency services have been started before starting a dependent service. Compose implementations MAY wait for dependency services to be “ready” before starting a dependent service.
Hence depends_on is not only about start order.
You can also use docker-compose instead of docker run; every option available to docker run can be expressed in the docker-compose file.
Let's say you have a long docker-compose file with a lot of containers that speak to one another inside of a docker network. Let's call this a "stack". You want to launch this stack 3 times, each with a slightly different config. To do that you might say:
docker-compose -p pizza up
docker-compose -p pie up
docker-compose -p soda up
But this would fail if you have any ports exposed to the host:
nginx:
  image: nginx:alpine
  ports:
    - "80:80"
  networks:
    - my_app_net
It would fail, because the host can only expose one thing on port 80.
One alternative is to define that port declaration in different files and use different ports:
$ cat pizza.yml
services:
  nginx:
    ports:
      - "8001:80"
$ cat pie.yml
services:
  nginx:
    ports:
      - "8002:80"
$ cat soda.yml
services:
  nginx:
    ports:
      - "8003:80"
docker-compose -f docker-compose.yml -f pizza.yml -p pizza up
docker-compose -f docker-compose.yml -f pie.yml -p pie up
docker-compose -f docker-compose.yml -f soda.yml -p soda up
That works because each stack publishes the container's port 80 on a different host port. That's fine, but it's a little annoying because we have to stop/start the stack to change it.
How do we do this without publishing the port or stopping/starting the stack?
If this were a kubernetes cluster, we could use kubectl to do this with a port-forward like so:
kubectl port-forward replicaset/nginx-75f59d57f4 8001:80
That way fits my situation a little better because we don't want to stop the stack to see what's going on in there. We can start the port-forward, see what's going on and then go away.
Is there an equivalent for docker?
You can start another container on the same network that is running something like socat to forward the ports:
docker run --rm -it -p 8001:80 --net pizza_default \
nicolaka/netshoot \
socat TCP6-LISTEN:80,fork TCP:nginx:80
A more automated example of this is docker-publish, which handles spinning up a separate network, attaching containers, and automatically stopping the forwarder when the target container exits (by sharing the same pid namespace).
How can I change the port dynamodb starts on through the Amazon Docker image?
According to this answer, the -port option can be used when executing the dynamodb java file.
However, when running the Docker image with this command: docker run -p 8000:8000 amazon/dynamodb-local I do not have the option of specifying the port DynamoDB listens on, just the port mapping between my host and the container.
Would I have to make my own Dockerfile, specifying the OS and installing dynamodb and whatnot, so I can run the java command and specify my port?
You don't need to rebuild the image. Doing
docker inspect amazon/dynamodb-local
shows that the Entrypoint is set as
"Entrypoint": [
"java"
],
So running the below command gives an error:
$ docker run -p 8001:8001 amazon/dynamodb-local -port 8001
Unrecognized option: -port
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
because we are trying to pass the -port argument to java itself, when we need to pass it to DynamoDBLocal.jar.
Once we know that, we can add the jar to the docker run and the following works:
$ docker run -p 8001:8001 amazon/dynamodb-local -jar DynamoDBLocal.jar -port 8001
Initializing DynamoDB Local with the following configuration:
Port: 8001
InMemory: false
DbPath: null
SharedDb: false
shouldDelayTransientStatuses: false
CorsParams: *
I would raise it as a bug, but https://hub.docker.com/r/amazon/dynamodb-local/ doesn't mention a public GitHub repo where the issue could be raised.
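Once DynamoDB Local is listening on the custom port, a client only needs to point at that endpoint. For example, with boto3 (an assumption on my part, it is not part of the image; DynamoDB Local accepts dummy credentials):
import boto3

# DynamoDB Local doesn't validate credentials; only the endpoint/port matter.
dynamodb = boto3.resource(
    "dynamodb",
    endpoint_url="http://localhost:8001",  # the host port published above
    region_name="us-east-1",
    aws_access_key_id="dummy",
    aws_secret_access_key="dummy",
)

print(list(dynamodb.tables.all()))  # empty list on a fresh instance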
For some reason docker-compose does not map ports to the host for DynamoDB. The following configuration will not work for the dynamodb-local image:
ports:
  - "9000:8000"
The workaround is to put the port directly into the command:
-port 9000 is the equivalent of ports: - "9000:9000" in docker-compose.
version: "3.8"
services:
service-dynamodb:
image: amazon/dynamodb-local
image: "amazon/dynamodb-local:latest"
working_dir: /home/dynamodblocal
# -port {number} is the same as
# ports: - {number}:{number} in Docker
command: "-jar DynamoDBLocal.jar -port 9000 -sharedDb -optimizeDbBeforeStartup -dbPath ./data"
volumes:
- ./data:/home/dynamodblocal/data
I tried overriding the entrypoint of the official image directly, but hit an unknown error; the following approach works, though.
Just create a new Docker image using amazon/dynamodb-local as the base image.
Build it:
docker build -t mydb .
and run it:
docker run -it --rm -p 8003:8003 mydb
Below is the Dockerfile
FROM amazon/dynamodb-local
WORKDIR /home/dynamodblocal
ENTRYPOINT ["java", "-jar", "DynamoDBLocal.jar", "-port", "8003"]
As you will see, DynamoDB Local now listens on port 8003, the port baked into the ENTRYPOINT.
I am running ZooKeeper image on Docker via docker-compose.
My files are:
--repo
----script.zk (contains ZooKeeper script commands such as `create`)
----docker-compose.yaml
----Dockerfile.zookeeper
----zoo.cfg
docker-compose.yaml contains names and properties:
services:
  zoo1:
    restart: always
    hostname: zoo1
    container_name: zoo1
    build:
      context: .
      dockerfile: Dockerfile.zookeeper
    volumes:
      - $PWD/kyc_zookeeper.cfg:/conf/zoo.cfg
    ports:
      - 2181:2181
    environment:
    .... and two more nodes
Dockerfile.zookeeper currently contains only the base image:
FROM zookeeper:3.4
Locally I can run zkCli.sh and communicate with ZooKeeper, but I wish to do it automatically when the Dockerfile.zookeeper image runs.
Do I need to create a container with a VM, install ZooKeeper and copy zkCli.sh into the container in order to run commands?
Or is it possible to run ZooKeeper commands via the Dockerfile?
I've tried attaching to the container and using CMD in the Dockerfile, but it's not working.
Any idea how I can do it?
Thank you
To resolve that I wrote a bash script that takes a ZooKeeper host and a ZooKeeper script (a file with ZooKeeper commands, line by line)
and runs all the commands inside the remote container running the ZooKeeper image:
config_script_file="$2"
zookeeper_host_url="$1"
#Retrieve all commands from file
TMPVAR=""
while read -r line
do
  if [ -z "$TMPVAR" ]; then
    TMPVAR="$line"
  else
    TMPVAR="$TMPVAR\n$line"
  fi
done < "$config_script_file"
#Run ZooKeeper commands on remote machine
docker exec -i $zookeeper_host_url bash << EOF
./bin/zkCli.sh -server localhost:2181
$(echo -e ${TMPVAR})
quit
exit
EOF
Example of a ZooKeeper script:
create /x 1
create /y 2
Usage:
./zkCliHelper.sh <zookeeper_url> <script.zk file>
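If a Python client is acceptable instead of driving zkCli.sh, the same commands could also be issued with the kazoo library (a sketch, assuming kazoo is installed and port 2181 is published as in the compose file above):
from kazoo.client import KazooClient

# Connect to the ZooKeeper node published on the host (ports: 2181:2181).
zk = KazooClient(hosts="localhost:2181")
zk.start()

# Equivalent of "create /x 1" and "create /y 2" from the example script.
zk.create("/x", b"1")
zk.create("/y", b"2")

zk.stop()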
How do I run Celery and RabbitMQ in Docker containers? Can you point me to a sample Dockerfile or compose file?
This is what I have:
Dockerfile:
FROM python:3.4
ENV PYTHONUNBUFFERED 1
WORKDIR /tasker
ADD requirements.txt /tasker/
RUN pip install -r requirements.txt
ADD . /tasker/
docker-compose.yml
rabbitmq:
  image: tutum/rabbitmq
  environment:
    - RABBITMQ_PASS=mypass
  ports:
    - "5672:5672"
    - "15672:15672"
celery:
  build: .
  command: celery worker --app=tasker.tasks
  volumes:
    - .:/tasker
  links:
    - rabbitmq:rabbit
The issue I'm having is that I can't keep Celery alive and running; it keeps exiting.
I had a similar problem with Celery exiting while dockerizing my application. You should use the rabbit service name (in your case rabbitmq) as the host name in your Celery configuration. That is, use broker_url = 'amqp://guest:guest@rabbitmq:5672//' instead of broker_url = 'amqp://guest:guest@localhost:5672//'. In my case the major components are Flask, Celery and Redis. My problem is HERE, please check the link; you may find it useful.
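For example, tasker/tasks.py could configure the broker like this (a sketch; the guest credentials assume the broker's defaults and the task is only illustrative):
from celery import Celery

# Use the RabbitMQ service/container name as the broker host, not localhost.
celery = Celery('tasker', broker='amqp://guest:guest@rabbitmq:5672//')

@celery.task
def add(x, y):
    # Illustrative task; replace with your real work.
    return x + y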
Update 2018: as commented below by Floran Gmehlin, the celery image is now officially deprecated in favor of the official python image.
As commented in celery/issue 1:
Using this image seems ridiculous. If you have an application container, as you usually have with Django, you need all dependencies (things you import in tasks.py) installed in this container again.
That's why other projects (e.g. cookiecutter-django) reuse the application container for Celery, and only run a different command (command: celery ... worker) against it with docker-compose.
Note: the docker-compose.yml is now called local.yml and uses start.sh.
Original answer:
You can try and emulate the official celery Dockerfile, which does a bit more setup before the CMD ["celery", "worker"].
See the usage of that image to run it properly.
start a celery worker (RabbitMQ Broker)
$ docker run --link some-rabbit:rabbit --name some-celery -d celery
check the status of the cluster
$ docker run --link some-rabbit:rabbit --rm celery celery status
If you can use that image in your docker-compose, then you can try building your own starting FROM celery instead of FROM python.
This is something I used in my docker-compose.yml; it works for me. Check the details in this Medium post.
version: '2'
services:
  rabbit:
    hostname: rabbit
    image: rabbitmq:latest
    environment:
      - RABBITMQ_DEFAULT_USER=admin
      - RABBITMQ_DEFAULT_PASS=mypass
    ports:
      - "5672:5672"
  worker:
    build:
      context: .
      dockerfile: dockerfile
    volumes:
      - .:/app
    links:
      - rabbit
    depends_on:
      - rabbit