Hi, I'm trying to add a django-crontab (https://pypi.org/project/django-crontab/) service to my Docker setup. Unfortunately, despite reading a bunch of threads on the forum, I still can't manage to configure the Dockerfile and docker-compose.yaml correctly. I would like to ask for help and clarification on what I am doing wrong.
Thank you in advance for your replies.
Dockerfile
FROM python:3.10
RUN apt-get update && apt-get install -y cron
RUN alias py=python
ENV PYTHONUNBUFFERED 1
WORKDIR /fieldguard
COPY . /fieldguard/
RUN pip install --upgrade pip
COPY requirements.txt /fieldguard/
RUN pip install -r requirements.txt
RUN chmod +x docker-entrypoint.sh
ENTRYPOINT ["./docker-entrypoint.sh"]
CMD ["python3", "manage.py", "runserver", "0.0.0.0:8080"]
docker-compose.yaml
version: "3.9"

services:
  db:
    image: postgres:14.5
    restart: unless-stopped
    container_name: database
    environment:
      - POSTGRES_DB=${POSTGRES_DB}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
    ports:
      - "5432:5432"

  web:
    build: .
    container_name: webapp
    restart: unless-stopped
    ports:
      - "8000:8000"
    depends_on:
      - db
    environment:
      - SECRET_KEY=${SECRET_KEY}
      - JWT_SECRET_KEY=${JWT_SECRET_KEY}
      - FRONT_URL=${FRONT_URL}
      - EMAIL_PORT=${EMAIL_PORT}
      - EMAIL_HOST=${EMAIL_HOST}
      - EMAIL_HOST_USER=${EMAIL_HOST_USER}
      - EMAIL_HOST_PASSWORD=${EMAIL_HOST_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - WRONG_SECRET_KEY=${WRONG_SECRET_KEY}
      - DEVICE_SECRET_KEY_REGISTER=${DEVICE_SECRET_KEY_REGISTER}
      - DEVICE_SECRET_KEY_LOGIN=${DEVICE_SECRET_KEY_LOGIN}
      - TOKEN_EXPIRATION=${TOKEN_EXPIRATION}
      - DEVICE_TOKEN_EXPIRATION=${DEVICE_TOKEN_EXPIRATION}

  cron:
    build: .  # same as the main application
    restart: unless-stopped
    env_file:
      - .env
    depends_on:
      - db
    command: cron -f  # run cron as a long-running foreground process
docker-entrypoint.sh
#!/bin/sh
# docker-entrypoint.sh
# If this is going to be a cron container, set up the crontab.
if [ "$1" = "cron" ]; then
    python manage.py crontab add
fi

# Launch the main container command passed as arguments.
exec "$@"
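For completeness, this is how I check whether the jobs actually get registered once everything is up, using the cron service name from the compose file above (django-crontab also provides a show subcommand):

# Register the jobs manually if needed, then list what ended up in the crontab
docker compose exec cron python manage.py crontab show
docker compose exec cron crontab -l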
settings.py
INSTALLED_APPS = [
    'rest_framework',
    'fg_backend',
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'django_crontab',
]

CRONJOBS = [
    ('*/1 * * * *', 'fg_backend.cron.my_scheduled_job'),
]
And cron.py located in fg_backend/
import datetime


def my_scheduled_job():
    try:
        with open("logs/{}_{}.log".format(
                datetime.datetime.now().strftime('%Y-%m-%d'), 'CRONJOB'), "a+") as f:
            f.write('{}_SomeTextOnly for test'.format(datetime.datetime.now()))
        # TODO: create a cron job to delete old logs
        # self.delete_older_logs()
    except Exception as e:
        print(e)
I will be very grateful for your help
I would like to configure django-crontab so that I can add some small scheduled jobs. I know Celery is also used for this kind of thing, but I don't need such a big tool.
Thank you very much for your help.
Related
When I build the following Dockerfile, the process stops in the middle: after Prisma is installed, the RUN db-migrate up command fails. But when I ran the same commands inside the running container with docker exec, they worked without any problem. I don't think I can run them before the app and its database are actually up. A workaround would be to put the migration commands into a service in docker-compose.yml, but how do I achieve that (see the sketch after my docker-compose.yml below)? Or is there any way to run those migration RUN commands from this Dockerfile?
Dockerfile
FROM node:16.15.0-alpine
WORKDIR /app
COPY package*.json ./
# generated prisma files
COPY prisma ./prisma/
# COPY ENV variable
COPY .env ./
# COPY
COPY . .
RUN npm install
RUN npm install -g db-migrate
RUN npm install -g prisma
RUN db-migrate up
RUN prisma db pull
RUN prisma generate
EXPOSE 3000
CMD ["npm", "run", "dev"]
docker-compose.yml
version: '3.8'

services:
  mysqldb:
    image: mysql:5.7
    restart: unless-stopped
    env_file: ./.env
    environment:
      - MYSQL_ROOT_PASSWORD=$MYSQLDB_ROOT_PASSWORD
      - MYSQL_DATABASE=$MYSQLDB_DATABASE
    ports:
      - $MYSQLDB_LOCAL_PORT:$MYSQLDB_DOCKER_PORT
    volumes:
      - db:/var/lib/mysql

  auth:
    depends_on:
      - mysqldb
    build: ./auth
    restart: unless-stopped
    env_file: ./.env
    ports:
      - $NODE_LOCAL_PORT:$NODE_DOCKER_PORT
    environment:
      - DB_HOST=mysqldb
      - DB_USER=$MYSQLDB_USER
      - DB_PASSWORD=$MYSQLDB_ROOT_PASSWORD
      - DB_NAME=$MYSQLDB_DATABASE
      - DB_PORT=$MYSQLDB_DOCKER_PORT
    stdin_open: true
    tty: true

volumes:
  db:
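A sketch of the workaround mentioned above (not verified): remove the RUN db-migrate up / prisma db pull / prisma generate lines from the Dockerfile, so the build no longer needs a reachable database, and run them instead as a one-shot service that reuses the auth image:

services:
  # ... mysqldb and auth as above ...
  migrate:
    build: ./auth              # same image as the auth service
    env_file: ./.env
    depends_on:
      - mysqldb
    # One-shot container: run the migrations once the DB is reachable, then exit.
    command: sh -c "db-migrate up && prisma db pull && prisma generate"

With a recent Compose version, auth can additionally wait for it via depends_on with condition: service_completed_successfully.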
I am following along with the book Microservices with Docker, Flask and React from testdriven.io.
After creating and configuring the Flask and Postgres containers and running docker-compose -f docker-compose-dev.yml up -d --build, everything looks fine, but I can't access the web server from the host.
The output is
docker-compose -f docker-compose-dev.yml up -d
[+] Running 2/2
⠿ Container testdriven-app-users-db-1 Running 0.0s
⠿ Container testdriven-app-users-1 Started 1.1s
When I go to the URL http://docker_machine_ip:5001/users/ping, this is what I should get:
{
    "message": "pong!",
    "status": "success"
}
But I can't access the web server, and I still don't understand why.
Dockerfile-dev
# base image
FROM python:3.9.12-alpine
# new
# install dependencies
RUN apk update && \
apk add --virtual build-deps gcc python3-dev musl-dev && \
apk add postgresql-dev && \
apk add netcat-openbsd
# set working directory
WORKDIR /usr/src/app
# add and install requirements
COPY ./requirements.txt /usr/src/app/requirements.txt
RUN pip install -r requirements.txt
# new
# add entrypoint.sh
COPY ./entrypoint.sh /usr/src/app/entrypoint.sh
RUN chmod +x /usr/src/app/entrypoint.sh
# add app
COPY . /usr/src/app
# run server
CMD ["/usr/src/app/entrypoint.sh"]
docker-compose-dev.yml
version: '3.6'

services:
  users:
    build:
      context: ./services/users
      dockerfile: Dockerfile-dev
    volumes:
      - './services/users:/usr/src/app'
    ports:
      - 5001:5000
    environment:
      - FLASK_APP=project/__init__.py
      - FLASK_ENV=development
      - APP_SETTINGS=project.config.DevelopmentConfig
      - DATABASE_URL=postgres://postgres:postgres@users-db:5432/users_dev
      - DATABASE_TEST_URL=postgres://postgres:postgres@users-db:5432/users_test
    depends_on:
      - users-db

  users-db:
    build:
      context: ./services/users/project/db
      dockerfile: Dockerfile
    ports:
      - 5435:5432
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
You need to create a network:
networks:
  user_network:
Then connect both services to it:
networks:
  - user_network
Then you will need a DATABASE_HOST=users-db (the name of the database service, which is also its hostname on that network).
You can then make use of this host in Flask like this:
app = Flask(__name__)
app.config['HOST'] = os.getenv('DATABASE_HOST', 'users-db')
app.config['USER'] = <you know what it is>
app.config['PASSWORD'] = <you know what it is>
app.config['DB'] = <you know what it is>
You can now connect using the above info.
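For example, a minimal sketch, assuming the project uses Flask-SQLAlchemy the way the testdriven.io course does (the fallback URI below is only an illustration built from the compose values):

import os
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
db_host = os.getenv('DATABASE_HOST', 'users-db')
# DATABASE_URL comes straight from docker-compose; the fallback is built from the host.
app.config['SQLALCHEMY_DATABASE_URI'] = os.getenv(
    'DATABASE_URL',
    'postgresql://postgres:postgres@{}:5432/users_dev'.format(db_host),
)
db = SQLAlchemy(app)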
Altogether it looks like this:
version: '3.6'

services:
  users:
    build:
      context: ./services/users
      dockerfile: Dockerfile-dev
    volumes:
      - './services/users:/usr/src/app'
    ports:
      - 5001:5000
    environment:
      - FLASK_APP=project/__init__.py
      - FLASK_ENV=development
      - APP_SETTINGS=project.config.DevelopmentConfig
      - DATABASE_HOST=users-db
      - DATABASE_URL=postgres://postgres:postgres@users-db:5432/users_dev
      - DATABASE_TEST_URL=postgres://postgres:postgres@users-db:5432/users_test
    networks:
      - user_network
    depends_on:
      - users-db

  users-db:
    build:
      context: ./services/users/project/db
      dockerfile: Dockerfile
    ports:
      - 5435:5432
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    networks:
      - user_network

networks:
  user_network:
I am trying to containerise my application, which is developed using technologies like DRF, Celery, and Redis (as the broker).
I want to prepare a docker-compose file that will start all three services (DRF, Celery, and Redis).
I also want to prepare a Dockerfile.prod for deployment.
Here is what I have done so far -
version: "3"
services:
redis:
container_name: Redis-Container
image: "redis:latest"
ports:
- "6379:6379"
expose:
- "6379"
command: "redis-server"
dropoff-backend:
container_name: Dropoff-Backend
build:
context: .
dockerfile: Dockerfile
volumes:
- .:/logistics_backend
ports:
- "8080:8080"
expose:
- "8080"
restart: always
command: "python manage.py runserver 0.0.0.0:8080"
links:
- redis
depends_on:
- redis
# - celery
celery:
container_name: celery-container
build: .
command: "celery -A logistics_project worker -l INFO"
volumes:
- .:/code
links:
- redis
Dockerfile (not for deployment)
FROM python:3.7-slim
# FROM python:3.6
ENV PYTHONUNBUFFERED 1
RUN apt-get update &&\
apt-get install python3-dev default-libmysqlclient-dev gcc -y &&\
mkdir /logistics_backend
WORKDIR /logistics_backend
COPY ./requirements.txt /requirements.txt
COPY . /logistics_backend
EXPOSE 80
RUN pip install -r /requirements.txt
RUN pip install -U "celery[redis]"
RUN python manage.py makemigrations &&\
python manage.py migrate
RUN python manage.py loaddata roles businesses route_status route_type order_status service_city payment_status
CMD [ "python", "manage.py", "runserver", "0.0.0.0:80"]
The problem with the existing docker-compose setup is that it returns the error below:
celery-container | [2020-10-08 16:59:25,843: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@127.0.0.1:5672//: [Errno 111] Connection refused.
celery-container | Trying again in 32.00 seconds... (16/100)
In settings.py I have defined this for the Redis connection:
REDIS_HOST = 'localhost'
REDIS_PORT = '6379'
BROKER_URL = 'redis://' + REDIS_HOST + ':' + REDIS_PORT + '/0'
BROKER_TRANSPORT_OPTIONS = {'visibility_timeout': 3600}
CELERY_RESULT_BACKEND = 'redis://' + REDIS_HOST + ':' + REDIS_PORT + '/0'
I don't know how I should extend my Dockerfile, which is currently used for development, into a deployable Dockerfile.prod (a rough sketch of what I had in mind follows below).
All three of my containers are running.
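For reference, this is the rough direction I had in mind for Dockerfile.prod. It is only a sketch, not something I have verified: it assumes logistics_project.wsgi is the WSGI module and moves migrations to container start, since they need a running database:

FROM python:3.7-slim
ENV PYTHONUNBUFFERED=1
RUN apt-get update && \
    apt-get install -y python3-dev default-libmysqlclient-dev gcc && \
    mkdir /logistics_backend
WORKDIR /logistics_backend
COPY ./requirements.txt /requirements.txt
RUN pip install -r /requirements.txt && \
    pip install -U "celery[redis]" gunicorn
COPY . /logistics_backend
EXPOSE 80
# Migrations (and loaddata, if needed) run at container start rather than at
# build time, because they need the database to be reachable.
CMD ["sh", "-c", "python manage.py migrate --noinput && gunicorn logistics_project.wsgi:application --bind 0.0.0.0:80"]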
You need to change REDIS_HOST in your settings.py to "redis" instead of "localhost"; "redis" is the name of the Redis service in docker-compose, and the other containers can resolve it as a hostname.
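For example, a minimal sketch of the corresponding settings.py change, reading the host from the environment with "redis" as the default:

import os

# "redis" is the docker-compose service name; containers on the compose
# network can resolve it as a hostname.
REDIS_HOST = os.getenv('REDIS_HOST', 'redis')
REDIS_PORT = '6379'
BROKER_URL = 'redis://' + REDIS_HOST + ':' + REDIS_PORT + '/0'
BROKER_TRANSPORT_OPTIONS = {'visibility_timeout': 3600}
CELERY_RESULT_BACKEND = 'redis://' + REDIS_HOST + ':' + REDIS_PORT + '/0'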
I am setting some environment variables in docker-compose that are being used by a python application being run by a cron job.
docker-compose.yaml:
version: '2.1'

services:
  zookeeper:
    container_name: zookeeper
    image: zookeeper:3.3.6
    restart: always
    hostname: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOO_MY_ID: 1

  kafka:
    container_name: kafka
    image: wurstmeister/kafka:1.1.0
    hostname: kafka
    links:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_CREATE_TOPICS: "topic:1:1"
      KAFKA_LOG_MESSAGE_TIMESTAMP_TYPE: LogAppendTime
      KAFKA_MESSAGE_TIMESTAMP_TYPE: LogAppendTime
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181

  data-collector:
    container_name: data-collector
    #image: mystreams:0.1
    build:
      context: /home/junaid/eMumba/CASB/Docker/data_collector/
      dockerfile: Dockerfile
    links:
      - kafka
    environment:
      - KAFKA_HOST=kafka
      - OFFICE_365_APP_ID=98aff1c5-7a69-46b7-899c-186851054b43
      - OFFICE_365_APP_SECRET=zVyS/V694ffWe99QpCvYqE1sqeqLo36uuvTL8gmZV0A=
      - OFFICE_365_APP_TENANT=2f6cb1a6-ecb8-4578-b680-bf84ded07ff4
      - KAFKA_CONTENT_URL_TOPIC=o365_activity_contenturl
      - KAFKA_STORAGE_DATA_TOPIC=o365_storage
      - KAFKA_PORT=9092
      - POSTGRES_DB_NAME=casb
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=pakistan
      - POSTGRES_HOST=postgres_database
    depends_on:
      postgres_database:
        condition: service_healthy

  postgres_database:
    container_name: postgres_database
    build:
      context: /home/junaid/eMumba/CASB/Docker/data_collector/
      dockerfile: postgres.dockerfile
    #image: ayeshaemumba/casb-postgres:v3
    #volumes:
    #  - ./postgres/data:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: pakistan
      POSTGRES_DB: casb
    expose:
      - "5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 30s
      timeout: 30s
      retries: 3
When I exec into the data-collector container and echo any of the environment variables, I can see that it is set:
# docker exec -it data-collector sh
# echo $KAFKA_HOST
kafka
But my cron job's logs show KeyError: 'KAFKA_HOST'.
That means my cron job cannot see the environment variables.
Now I have two questions:
1) Why are the environment variables not visible to the cron job?
2) I know that I can set environment variables in a shell script and run it while building the image, but is there a way to pass environment variables from docker-compose?
Update:
The cron job is defined in the Dockerfile of the Python application.
Dockerfile:
FROM python:3.5-slim
# Creating Application Source Code Directory
RUN mkdir -p /usr/src/app
# Setting Home Directory for containers
WORKDIR /usr/src/app
# Installing python dependencies
COPY requirements.txt /usr/src/app/
RUN pip install --no-cache-dir -r requirements.txt
# Copying src code to Container
COPY . /usr/src/app
# Add storage crontab file in the cron directory
ADD crontab-storage /etc/cron.d/storage-cron
# Give execution rights on the storage cron job
RUN chmod 0644 /etc/cron.d/storage-cron
RUN chmod 0644 /usr/src/app/cron_storage_data.sh
# Create the log file to be able to run tail
RUN touch /var/log/cron.log
#Install Cron
RUN apt-get update
RUN apt-get -y install cron
# Run the command on container startup
CMD cron && tail -f /var/log/cron.log
crontab-storage:
*/1 * * * * sh /usr/src/app/cron_storage_data.sh
# Don't remove the empty line at the end of this file. It is required to run the cron job
cron_storage_data.sh:
#!/bin/bash
cd /usr/src/app
/usr/local/bin/python3.5 storage_data_collector.py
Cron doesn't inherit docker-compose environment variables by default. A potential workaround for this situation is to:
1. Write the environment variables from docker-compose into a local .env file (the entrypoint sketch further below shows one way to generate this file at container start):
touch .env
echo "export KAFKA_HOST=$KAFKA_HOST" > /usr/src/app/.env
2. Source the .env file before the cron task executes:
* * * * * <username> . /usr/src/app/.env && sh /usr/src/app/cron_storage_data.sh
This is how the environment seen by the cron job will look:
Before:
{'SHELL': '/bin/sh', 'PWD': '/root', 'LOGNAME': 'root', 'PATH': '/usr/bin:/bin', 'HOME': '/root'}
After:
{'SHELL': '/bin/sh', 'PWD': '/root', 'KAFKA_HOST': 'kafka', 'LOGNAME': 'root', 'PATH': '/usr/bin:/bin', 'HOME': '/root'}
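If several variables are needed, a way to generalise step 1 is to dump them from an entrypoint script at container start, since the docker-compose variables are visible to the entrypoint process but not to the jobs cron spawns. This is only a sketch with a hypothetical docker-entrypoint.sh; the Dockerfile would need a COPY for it and CMD ["sh", "/usr/src/app/docker-entrypoint.sh"] in place of the current CMD:

#!/bin/sh
# docker-entrypoint.sh (hypothetical)
# Write the variables the cron job needs into the file sourced from the crontab line.
# Note: values containing spaces or quotes would need extra quoting here.
printenv | grep -E 'KAFKA_|OFFICE_365_|POSTGRES_' | sed 's/^/export /' > /usr/src/app/.env
# Start cron and keep the container alive, mirroring the original CMD.
cron && tail -f /var/log/cron.log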
In my Docker container, I'm trying to install several packages with pip along with installing Bower via npm. It seems, however, that whichever of pip or npm runs first, the other's contents in /usr/local/bin are overwritten (specifically, gunicorn is missing with the Dockerfile below, or Bower is missing if I swap the order of my FROM..RUN blocks).
Is this the expected behavior of Docker, and if so, how can I go about installing both my pip packages and Bower into the same directory, /usr/local/bin?
Here's my Dockerfile:
FROM python:3.4.3
RUN mkdir /code
WORKDIR /code
ADD ./requirements/ /code/requirements/
RUN pip install -r /code/requirements/docker.txt
ADD ./ /code/
FROM node:0.12.7
RUN npm install bower
Here's my docker-compose.yml file:
web:
  restart: always
  build: .
  expose:
    - "8000"
  links:
    - postgres:postgres
    #- redis:redis
  volumes:
    - .:/code
  env_file: .env
  command: /usr/local/bin/gunicorn myapp.wsgi:application -w 2 -b :8000 --reload

webstatic:
  restart: always
  build: .
  volumes:
    - /usr/src/app/static
  env_file: .env
  command: bash -c "/code/manage.py bower install && /code/manage.py collectstatic --noinput"

nginx:
  restart: always
  #build: ./config/nginx
  image: nginx
  ports:
    - "80:80"
  volumes:
    - /www/static
    - config/nginx/conf.d:/etc/nginx/conf.d
  volumes_from:
    - webstatic
  links:
    - web:web

postgres:
  restart: always
  image: postgres:latest
  volumes:
    - /var/lib/postgresql
  ports:
    - "5432:5432"
Update: I went ahead and cross-posted this as a docker-compose issue since it's unclear if there is an actual bug or if my configuration is a problem. I'll keep both posts updated, but do post in either if you have an idea of what is going on. Thanks!
Multiple FROM instructions in a Dockerfile do not merge two base images: each FROM starts a new build stage, and only the last stage ends up in the final image, which is why one of the two tool sets disappears. So if you need Node and Python in the same image, you have to either add Node to the Python image or add Python to the Node image.
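For example, a sketch of the first option (Node added on top of the Python image). The exact package names and Node version depend on the Debian release behind the python base image, so treat this as a starting point rather than a verified setup:

FROM python:3.4.3
# Install Node and npm from the distribution packages, then Bower globally.
# Some Debian releases install the binary as "nodejs", hence the symlink.
RUN apt-get update && \
    apt-get install -y nodejs npm && \
    ln -sf /usr/bin/nodejs /usr/local/bin/node && \
    npm install -g bower
RUN mkdir /code
WORKDIR /code
ADD ./requirements/ /code/requirements/
RUN pip install -r /code/requirements/docker.txt
ADD ./ /code/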