I am new to Docker and I want to dockerize my Flask-Celery-RabbitMQ app, which connects to an SQLite database.
This is my file tree:
--dockerized_app
    --celery_app
        --app
            --__init__.py
            --routes.py
            --modules.py
            --celery_endpoints.py
        run.py
        config.py
    Dockerfile
    requirements.txt
    docker-compose.yml
    my_db.db
If I want to run this app without Docker I usually do:
PATH=$PATH:/usr/local/sbin
rabbitmq-server
export FLASK_APP=run.py
flask run
celery -A run.celery worker --loglevel=info
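For context, `celery -A run.celery worker` assumes that run.py exposes a Celery instance named celery. Below is a minimal sketch of such a run.py, assuming a default guest/guest broker on localhost; it is an illustration only, not the actual project code:

# run.py -- minimal sketch of the assumed layout, not the actual project code
from flask import Flask
from celery import Celery

app = Flask(__name__)
# Assumed broker URL; outside Docker this points at the locally started rabbitmq-server.
app.config['CELERY_BROKER_URL'] = 'amqp://guest:guest@localhost:5672//'

celery = Celery(app.name, broker=app.config['CELERY_BROKER_URL'])

@celery.task
def add(x, y):
    # Example task so that `celery -A run.celery worker` has something registered.
    return x + y

if __name__ == '__main__':
    app.run()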
Dockerfile
FROM python:3
ADD requirements.txt /app/requirements.txt
ADD ./celery_app/ /app/
WORKDIR /app/
RUN pip install -r requirements.txt
ENTRYPOINT celery -A celery_app worker --loglevel=info
docker-compose.yml
version: '3'
services:
  rabbit:
    hostname: rabbit
    image: rabbitmq:latest
    environment:
      - RABBITMQ_DEFAULT_USER=admin
      - RABBITMQ_DEFAULT_PASS=mypass
    ports:
      - "5673:5672"
  worker:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/app
    links:
      - rabbit
    depends_on:
      - rabbit
requirements.txt
Flask==1.0.2
Flask-SQLAlchemy==2.3.2
SQLAlchemy==1.2.7
celery==4.2.1
I run docker-compose build and then create the image. Celery starts fine, but RabbitMQ doesn't, and I get this error:
[2018-07-25 14:51:12,218: ERROR/MainProcess] consumer: Cannot connect
to amqp://guest:**@XXX.X.X.X:5672//: [Errno 111] Connection refused.
I think the problem is that I am not declaring
PATH=$PATH:/usr/local/sbin
which is necessary to start RabbitMQ, but I don't know how to do it.
Also, I don't know whether Flask is starting correctly.
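A hedged note on the error itself: inside the worker container, localhost is the container, not the broker, so the broker host has to be the Compose service name rabbit, with the credentials and the container-internal port from docker-compose.yml. A sketch, assuming Celery is configured via a broker URL:

# Assumed Celery broker setting for the containerised worker.
# 'rabbit' is the service name in docker-compose.yml; 5672 is the port inside
# the container ("5673:5672" only maps the host side).
broker_url = 'amqp://admin:mypass@rabbit:5672//'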
Related
I've found a lot of questions on this topic; the answer is always to use the '0.0.0.0' IP address. But I'm already doing this and still get the error.
So I'm running a Docker Compose file that runs a database and a Flask front end. The Dockerfile runs fine on my own computer, but on the server in the cloud I am getting this error.
This code launches the application:
app.run(host="0.0.0.0", port=5000, ssl_context="adhoc", debug=cfg.app_debug_mode)
This is my docker compose file:
version: "3.8"
services:
app:
image: app_image #the beginning is the unique uri of my amazon qccount. Then follows the repository name (ratio) and then the tag of the image (app) https://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-push-ecr-image.html
build: ./app
links:
- database
ports:
- "5000:5000"
environment:
AM_I_IN_A_DOCKER_CONTAINER: 'Yes'
CONFIG_NAME: 'config' #Name of the config file to use
database:
image: database_image
container_name: database
build: ./sql
restart: always
ports:
- "32000:32000"
environment:
MYSQL_ROOT_HOST: '%' #This allows the root user to access the database from any ip. For some reason amazon requires it.
MYSQL_ROOT_PASSWORD: 'zv2yRCt79AsGvz'
MYSQL_DATABASE: 'education'
volumes:
- ratio_volume:/var/lib/mysql #use a named volume for the database.
networks:
default:
name: my-network
volumes:
ratio_volume:
This is the Dockerfile:
FROM python:3
RUN pip3 install --upgrade pip
WORKDIR /app
COPY . /app
RUN pip3 --no-cache-dir install -r requirements.txt
EXPOSE 5000
ENTRYPOINT ["python3"]
CMD ["/app/run.py"]
What am I doing wrong? The error is also not really helpful; I am not getting any errors in the console, just the browser complaining.
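A generic way to narrow this down (a debugging sketch, not specific to this project) is to check, on the server itself, whether the container is up and responding before involving the browser:

docker-compose ps                  # is the app service running with port 5000 published?
docker-compose logs app            # did Flask actually start, and on which host/port?
curl -vk https://localhost:5000/   # -k because ssl_context="adhoc" uses a self-signed cert

If that works on the server but not from outside, the cloud firewall/security group for port 5000 is the next thing to check.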
I'm trying to run Neo4j in one container and a Flask app in another. I have a docker-compose.yml like so:
version: '3'
services:
  app1:
    container_name: app1
    image: python:3.7.3-slim
    build: ./APP1/
    volumes:
      - ./APP1/:/usr/src/app/
    environment:
      PORT: 5000
      FLASK_DEBUG: 1
    ports:
      - 5000:5000
    tty: true
  neo4:
    container_name: neo4j
    image: neo4j:3.5
    environment:
      - NEO4J_dbms_memory_pagecache_size=2G
      - dbms_connector_bolt_tls__level=OPTIONAL
      - NEO4J_dbms_memory_heap_max__size=3500M
      - NEO4J_AUTH=user/pwd
    volumes:
      - $HOME/neo4j/data:/data
      - $HOME/neo4j/logs:/logs
      - $HOME/neo4j/import:/import
      - $HOME/neo4j/plugins:/plugins
    ports:
      - 7474:7474
      - 7687:7687
My app.py:
from flask import Flask

app = Flask(__name__)

if __name__ == '__main__':
    app.run(host="0.0.0.0",
            port=5000,
            debug=True
            )
And the Dockerfile for APP1:
FROM python:3.7.3-slim
RUN mkdir /usr/src/app/
COPY . /usr/src/app/
WORKDIR /usr/src/app/
EXPOSE 5000
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
ENTRYPOINT ["python3", "app.py"]
I then use docker-compose up to execute. I can access Neo4j through my browser as normal (http://localhost:7474/), but I cannot access the Flask app (http://0.0.0.0:5000/). Where in my configuration am I going wrong?
http://0.0.0.0:5000 doesn't mean localhost; 0.0.0.0 means "all network interfaces", so it is not an address you can browse to. Use http://localhost:5000/ (or the host's IP) instead.
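Put differently (a small illustration, assuming the 5000:5000 mapping above): the container binds to 0.0.0.0 so it accepts connections on every interface, but from the host you address it via localhost:

curl http://localhost:5000/    # or open http://localhost:5000/ in the browser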
I solved it, and it was actually a dumb mistake, but one that could happen to others, I guess...
In the docker-compose.yml:
build: ./APP1/
needs to be in quotes, so:
build: './APP1/'
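For reference, the relevant stanza from the compose file above would then read (only the build line changes):

app1:
  container_name: app1
  image: python:3.7.3-slim
  build: './APP1/'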
I have been playing around with Docker, Celery, Redis and Flask for the past 2-3 days. After successfully setting up a Flask, Celery and Redis server, I decided to move on to the next step: dockerizing it. I have successfully created a Docker image and a Compose file which seem to work just fine when building. I am using a local Redis server and I am able to access it by using docker.for.mac.localhost as the host name from inside the container, but when I try to access the Flask app from outside the container while it's running, it doesn't work.
Having done some research, I have tried the following:
Running with server host as 0.0.0.0
Exposing and using a different port other than 5000
This is my Dockerfile:
WORKDIR /app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 5000
CMD ["python3", "./app.py"]
And this is my docker-compose.yml file:
version: "3"
services:
web:
container_name: web
build: ./api
ports:
- "5000:5001"
links:
- redis
depends_on:
- redis
environment:
- FLASK_ENV=development
volumes:
- ./api:/app
redis:
container_name: redis
image: redis:5.0.5
hostname: redis
worker:
build:
context: ./api
hostname: worker
entrypoint: celery
command: -A app.celery worker --loglevel=info
volumes:
- ./api:/app
links:
- redis
depends_on:
- redis
Thanks for any help in advance!
Your port mapping is backwards. The format is external to internal, i.e. host:container.
ports:
  - "5001:5000"
I have an issue running my docker-compose.yml file with 4 services: my Go microservice, a Phoenix web server, and the MongoDB and Redis images.
In both my Phoenix and Go Dockerfiles I change the working directory before running the services. I currently get the following errors when I do docker-compose up:
The task "phx.server" could not be found
main.go: no such file or directory
Here is my Dockerfile.go.development:
# base image: golang
FROM golang:latest
# create app folder
RUN mkdir /goApp
COPY ./genesys-api /goApp
WORKDIR /goApp/cmd/genesys-server
# install dependencies
RUN go get gopkg.in/redis.v2
RUN go get github.com/gorilla/handlers
RUN go get github.com/dgrijalva/jwt-go
RUN go get github.com/gorilla/context
RUN go get github.com/gorilla/mux
RUN go get gopkg.in/mgo.v2/bson
RUN go get github.com/graphql-go/graphql
# run the Go server on port 8080
CMD go run main.go
Here is my Dockerfile.phoenix.development:
# base image: elixir
FROM elixir:1.6
# install hex package manager
RUN mix local.hex --force
RUN mix local.rebar --force
# install the latest phoenix
RUN mix archive.install https://github.com/phoenixframework/archives/raw/master/phx_new.ez --force
# create app folder
RUN mkdir /app
COPY ./my_app /app
WORKDIR /app
# install dependencies
RUN mix deps.get
# run phoenix in *dev* mode on port 4000
CMD mix phx.server
Here is my docker-compose.yml file:
version: '3.6'
services:
  go:
    build:
      context: .
      dockerfile: Dockerfile.go.development
    ports:
      - 8080:8080
    volumes:
      - .:/goApp
    depends_on:
      - db
      - redis
  phoenix:
    # tell docker-compose which Dockerfile it needs to build
    build:
      context: .
      dockerfile: Dockerfile.phoenix.development
    # map the port of phoenix to the local dev port
    ports:
      - 4000:4000
    # mount the code folder inside the running container for easy development
    volumes:
      - .:/app
    # make sure we start mongodb when we start this service
    # links:
    #   - db
    depends_on:
      - db
      - redis
    environment:
      GOOGLE_CLIENT_ID: ${GOOGLE_CLIENT_ID}
      GOOGLE_CLIENT_SECRET: ${GOOGLE_CLIENT_SECRET}
      FACEBOOK_CLIENT_ID: ${FACEBOOK_CLIENT_ID}
      FACEBOOK_CLIENT_SECRET: ${FACEBOOK_CLIENT_SECRET}
  db:
    container_name: db
    image: mongo:latest
    volumes:
      - ./data/db:/data/db
    ports:
      - 27017:27017
  redis:
    container_name: redis
    image: redis:latest
    ports:
      - "6379:6379"
    volumes:
      - ./data/redis:/data/redis
    entrypoint: redis-server
    restart: always
For the error related to the Go microservice: since the go binary is not found in PATH, you may need to set the GOPATH env variable via your Dockerfile for Go:
export GOPATH=
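In a Dockerfile this would be an ENV instruction rather than export; the value below is purely illustrative (the official golang image already defaults GOPATH to /go):

# illustrative only -- use the path that matches where your code lives
ENV GOPATH=/go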
How do I run Celery and RabbitMQ in a Docker container? Can you point me to a sample Dockerfile or compose file?
This is what I have:
Dockerfile:
FROM python:3.4
ENV PYTHONUNBUFFERED 1
WORKDIR /tasker
ADD requirements.txt /tasker/
RUN pip install -r requirements.txt
ADD . /tasker/
docker-compose.yml
rabbitmq:
  image: tutum/rabbitmq
  environment:
    - RABBITMQ_PASS=mypass
  ports:
    - "5672:5672"
    - "15672:15672"
celery:
  build: .
  command: celery worker --app=tasker.tasks
  volumes:
    - .:/tasker
  links:
    - rabbitmq:rabbit
The issue I'm having is that I can't get Celery to stay alive or running. It keeps exiting.
I had a similar Celery exiting problem while dockerizing my application. You should use the rabbit service name (in your case it's rabbitmq) as the host name in your Celery configuration. That is, use broker_url = 'amqp://guest:guest@rabbitmq:5672//' instead of broker_url = 'amqp://guest:guest@localhost:5672//'. In my case the major components are Flask, Celery and Redis. My problem is described HERE; please check the link, you may find it useful.
Update 2018: as commented below by Floran Gmehlin, the celery image is now officially deprecated in favor of the official python image.
As commented in celery/issue 1:
Using this image seems ridiculous. If you have an application container, as you usually have with Django, you need all dependencies (things you import in tasks.py) installed in this container again.
That's why other projects (e.g. cookiecutter-django) reuse the application container for Celery, and only run a different command (command: celery ... worker) against it with docker-compose.
Note that the docker-compose.yml is now called local.yml and uses start.sh.
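The approach described above, sketched as a compose fragment (the module and service names are placeholders, not taken from cookiecutter-django): web and worker build the same image, and only the command differs:

services:
  web:
    build: .
    environment:
      FLASK_APP: myapp      # placeholder module name
    command: flask run --host=0.0.0.0
  worker:
    build: .                # same image, same code, same installed dependencies
    command: celery -A myapp.celery worker --loglevel=info
    depends_on:
      - rabbit
  rabbit:
    image: rabbitmq:latest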
Original answer:
You can try and emulate the official celery Dockerfile, which does a bit more setup before the CMD ["celery", "worker"].
See the usage of that image to run it properly.
start a celery worker (RabbitMQ Broker)
$ docker run --link some-rabbit:rabbit --name some-celery -d celery
check the status of the cluster
$ docker run --link some-rabbit:rabbit --rm celery celery status
If you can use that image in your docker-compose, then you can try building your own starting FROM celery instead of FROM python.
Here is something I used in my docker-compose.yml; it works for me. Check the details in this Medium post.
version: '2'
services:
  rabbit:
    hostname: rabbit
    image: rabbitmq:latest
    environment:
      - RABBITMQ_DEFAULT_USER=admin
      - RABBITMQ_DEFAULT_PASS=mypass
    ports:
      - "5672:5672"
  worker:
    build:
      context: .
      dockerfile: dockerfile
    volumes:
      - .:/app
    links:
      - rabbit
    depends_on:
      - rabbit
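The dockerfile referenced by the worker service is not shown above; a minimal sketch of what it could contain (the app module name and the celery invocation are assumptions):

FROM python:3
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
# 'rabbit' resolves to the rabbit service from docker-compose.yml
CMD celery -A app.celery worker --loglevel=info

With that in place, docker-compose up --build starts the broker and the worker together.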