I am setting some environment variables in docker-compose that are used by a Python application run by a cron job.
docker-compose.yaml:
version: '2.1'
services:
  zookeeper:
    container_name: zookeeper
    image: zookeeper:3.3.6
    restart: always
    hostname: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOO_MY_ID: 1
  kafka:
    container_name: kafka
    image: wurstmeister/kafka:1.1.0
    hostname: kafka
    links:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_CREATE_TOPICS: "topic:1:1"
      KAFKA_LOG_MESSAGE_TIMESTAMP_TYPE: LogAppendTime
      KAFKA_MESSAGE_TIMESTAMP_TYPE: LogAppendTime
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
  data-collector:
    container_name: data-collector
    #image: mystreams:0.1
    build:
      context: /home/junaid/eMumba/CASB/Docker/data_collector/
      dockerfile: Dockerfile
    links:
      - kafka
    environment:
      - KAFKA_HOST=kafka
      - OFFICE_365_APP_ID=98aff1c5-7a69-46b7-899c-186851054b43
      - OFFICE_365_APP_SECRET=zVyS/V694ffWe99QpCvYqE1sqeqLo36uuvTL8gmZV0A=
      - OFFICE_365_APP_TENANT=2f6cb1a6-ecb8-4578-b680-bf84ded07ff4
      - KAFKA_CONTENT_URL_TOPIC=o365_activity_contenturl
      - KAFKA_STORAGE_DATA_TOPIC=o365_storage
      - KAFKA_PORT=9092
      - POSTGRES_DB_NAME=casb
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=pakistan
      - POSTGRES_HOST=postgres_database
    depends_on:
      postgres_database:
        condition: service_healthy
  postgres_database:
    container_name: postgres_database
    build:
      context: /home/junaid/eMumba/CASB/Docker/data_collector/
      dockerfile: postgres.dockerfile
    #image: ayeshaemumba/casb-postgres:v3
    #volumes:
    #  - ./postgres/data:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: pakistan
      POSTGRES_DB: casb
    expose:
      - "5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 30s
      timeout: 30s
      retries: 3
When I exec inside the data-collector container and echo any of the environment variables, I can see that it's set:
># docker exec -it data-collector sh
># echo $KAFKA_HOST
> kafka
But my cron job logs show KeyError: 'KAFKA_HOST'
That means the cron job cannot see the environment variables.
Now I have two questions:
1) Why are the environment variables not set for the cron job?
2) I know that I can pass environment variables through a shell script and run it while building the image. But is there a way to pass environment variables from docker-compose?
Update:
The cron job is defined in the Dockerfile for the Python application.
Dockerfile:
FROM python:3.5-slim
# Creating Application Source Code Directory
RUN mkdir -p /usr/src/app
# Setting Home Directory for containers
WORKDIR /usr/src/app
# Installing python dependencies
COPY requirements.txt /usr/src/app/
RUN pip install --no-cache-dir -r requirements.txt
# Copying src code to Container
COPY . /usr/src/app
# Add storage crontab file in the cron directory
ADD crontab-storage /etc/cron.d/storage-cron
# Give execution rights on the storage cron job
RUN chmod 0644 /etc/cron.d/storage-cron
RUN chmod 0644 /usr/src/app/cron_storage_data.sh
# Create the log file to be able to run tail
RUN touch /var/log/cron.log
# Install cron
RUN apt-get update && apt-get -y install cron
# Run the command on container startup
CMD cron && tail -f /var/log/cron.log
crontab-storage:
*/1 * * * * sh /usr/src/app/cron_storage_data.sh
# Don't remove the empty line at the end of this file. It is required to run the cron job
cron_storage_data.sh:
#!/bin/bash
cd /usr/src/app
/usr/local/bin/python3.5 storage_data_collector.py
Cron doesn't inherit docker-compose environment variables by default. A potential workaround is to:
1. Pass the environment variables from docker-compose to a local .env file:
touch .env
echo "export KAFKA_HOST=$KAFKA_HOST" > /usr/src/app/.env
2. Source the .env file before the cron task executes:
* * * * * <username> . /usr/src/app/.env && sh /usr/src/app/cron_storage_data.sh
This is how the environment seen by the cron job will look:
Before:
{'SHELL': '/bin/sh', 'PWD': '/root', 'LOGNAME': 'root', 'PATH': '/usr/bin:/bin', 'HOME': '/root'}
After:
{'SHELL': '/bin/sh', 'PWD': '/root', 'KAFKA_HOST': 'kafka', 'LOGNAME': 'root', 'PATH': '/usr/bin:/bin', 'HOME': '/root'}
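If you don't want to list every variable by hand, a variant of step 1 (a sketch, not tested against the exact image above) is to dump the whole container environment at startup, before cron launches, by replacing the Dockerfile's CMD with a small startup script:
#!/bin/sh
# Write every variable docker-compose set as an export line for cron to source.
# Caveat: values containing spaces or quotes would need extra escaping.
printenv | sed 's/^/export /' > /usr/src/app/.env
cron && tail -f /var/log/cron.log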
Hi, I'm trying to add a django-crontab service (https://pypi.org/project/django-crontab/) to my Docker setup. Unfortunately, despite reading a bunch of threads on the forum, I still can't manage to configure the Dockerfile and docker-compose.yaml correctly. I would like to ask for help and clarification on what I am doing wrong.
Thank you for your reply.
Dockerfile
FROM python:3.10
RUN apt update
RUN apt-get install cron -y
RUN alias py=python
ENV PYTHONUNBUFFERED 1
WORKDIR /fieldguard
COPY . /fieldguard/
RUN pip install --upgrade pip
COPY requirments.txt /fieldguard/
RUN pip install -r requirments.txt
RUN chmod +x docker-entrypoint.sh
ENTRYPOINT ["./docker-entrypoint.sh"]
CMD ["python3", "manage.py", "runserver", "0.0.0.0:8080"]
docker-compose.yaml
version: "3.9"
services:
db:
image: postgres:14.5
restart: unless-stopped
container_name: database
environment:
- POSTGRES_DB=${POSTGRES_DB}
- POSTGRES_USER=${POSTGRES_USER}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
ports:
- "5432:5432"
web:
build: .
container_name: webapp
restart: unless-stopped
ports:
- "8000:8000"
depends_on:
- db
environment:
- SECRET_KEY=${SECRET_KEY}
- JWT_SECRET_KEY=${JWT_SECRET_KEY}
- FRONT_URL=${FRONT_URL}
- EMAIL_PORT=${EMAIL_PORT}
- EMAIL_HOST=${EMAIL_HOST}
- EMAIL_HOST_USER=${EMAIL_HOST_USER}
- EMAIL_HOST_PASSWORD=${EMAIL_HOST_PASSWORD}
- POSTGRES_DB=${POSTGRES_DB}
- POSTGRES_USER=${POSTGRES_USER}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
- WRONG_SECRET_KEY=${WRONG_SECRET_KEY}
- DEVICE_SECRET_KEY_REGISTER=${DEVICE_SECRET_KEY_REGISTER}
- DEVICE_SECRET_KEY_LOGIN=${DEVICE_SECRET_KEY_LOGIN}
- TOKEN_EXPIRATION=${TOKEN_EXPIRATION}
- DEVICE_TOKEN_EXPIRATION=${DEVICE_TOKEN_EXPIRATION}
cron:
build: . # same as main application
restart: unless-stopped
env_file:
- .env
depends_on:
- db
command: cron -f # as a long-running foreground process
docker-entrypoint.sh
#!/bin/sh
# docker-entrypoint.sh

# If this is going to be a cron container, set up the crontab.
if [ "$1" = cron ]; then
    python manage.py crontab add
fi

# Launch the main container command passed as arguments.
exec "$@"
settings.py
INSTALLED_APPS = [
    'rest_framework',
    'fg_backend',
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'django_crontab',
]

CRONJOBS = [
    ('*/1 * * * *', 'fg_backend.cron.my_scheduled_job'),
]
And cron.py located in fg_backend/
from datetime import datetime, timedelta
import datetime


def my_scheduled_job():
    try:
        with open("logs/{}_{}.log".format(datetime.datetime.now().strftime('%Y-%m-%d'),
                                          'CRONJOB'), "a+") as f:
            f.write('{}_SomeTextOnly for test'.format(datetime.datetime.now()))
        # TODO:: CREATE CRON JOB TO DELETE LOGS
        # self.delete_older_logs()
    except Exception as e:
        print(e)
I will be very grateful for your help.
I would like to configure django-crontab so that I can add some small scheduled jobs. I know Celery is also used for this, but I don't need such a big tool.
Thank you very much for your help.
I want to load a database dump from an Amazon URL into my MariaDB container, but it doesn't work.
Here is my Dockerfile
FROM amazoncorretto:11.0.15-alpine
LABEL website = "PHEDON"
VOLUME /phedon-app
RUN apk update && apk add --update --no-cache curl
RUN curl https://myurl/dbdump/dump.sql --output dump.sql
COPY . .
RUN chmod +x ./gradlew
RUN ./gradlew assemble
RUN mv ./build/libs/phedon-spring-server.jar app.jar
ENTRYPOINT ["java","-jar","-Dspring.profiles.active=dev", "app.jar"]
EXPOSE 8080
And here is the db part of the docker compose file.
phedon_db:
  image: "mariadb:10.6"
  container_name: mariadb
  restart: always
  command: [ --lower_case_table_names=1 ]
  healthcheck:
    test: [ "CMD", "mariadb-admin", "--protocol", "tcp", "ping" ]
    timeout: 3m
    interval: 10s
    retries: 10
  ports:
    - "3307:3306"
  networks:
    - phedon
  volumes:
    - container-volume:/var/lib/mysql
    - ./dump.sql:/docker-entrypoint-initdb.d/dump.sql
  environment:
    MYSQL_DATABASE: "phedondb"
    MYSQL_USER: "phedon"
    MYSQL_PASSWORD: "12345"
    MARIADB_ALLOW_EMPTY_ROOT_PASSWORD: "true"
  env_file:
    - .env

networks:
  phedon:

volumes:
  container-volume:
I have tried to use the ADD command too, but still no results: my MariaDB database is still empty.
Method 1:
You can download the SQL file you want to restore directly onto the host system and execute the command below to restore it:
docker exec -i some-mysql sh -c 'exec mysql -u<user> -p<password> <database>' < /<path to the file>/dump.sql
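With the compose file above, that could look like this (a sketch; it assumes the container name mariadb and the credentials shown earlier):
docker exec -i mariadb sh -c 'exec mysql -uphedon -p12345 phedondb' < ./dump.sql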
Method 2:
As you are exposing the database on port 3307, you can try connecting to it with a client. If the mysql command-line client is installed on the host, you can execute the command below to restore the dump:
$ mysql --host=127.0.0.1 --port=3307 -u phedon -p phedondb < dump.sql
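Also worth checking, since the bind mount was the original approach: the curl in the Dockerfile above runs inside the Java image, so it never creates ./dump.sql on the host, and the MariaDB entrypoint only executes the files in /docker-entrypoint-initdb.d when the data volume is empty. A sketch of the recovery steps, assuming the dump should live next to the compose file:
# on the host, next to docker-compose.yml
curl https://myurl/dbdump/dump.sql --output dump.sql
# remove container-volume so the init scripts run again on a fresh data directory
docker-compose down --volumes
docker-compose up -d phedon_db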
I have a file docker-compose.production.yml that contains the configuration of 5 services. I start them all with the command sudo docker-compose -f docker-compose.production.yml up --build in the directory where the file is. When I want to stop all the services, I simply call sudo docker-compose stop in the same directory. Strangely, 4 out of 5 services stop correctly, but 1 keeps running, and if I want to stop it, I must use sudo docker stop [CONTAINER]. The service is not even listed among the services being stopped after the stop command is run. It's like the service somehow "detaches" from the group. What could be causing this strange behaviour?
Here's an example of the docker-compose.production.yml file:
version: '3'
services:
  fe:
    build:
      context: ./fe
      dockerfile: Dockerfile.production
    ports:
      - 5000:80
    restart: always
  be:
    image: strapi/strapi:3.4.6-node12
    environment:
      NODE_ENV: production
      DATABASE_CLIENT: mysql
      DATABASE_NAME: some_db
      DATABASE_HOST: db
      DATABASE_PORT: 3306
      DATABASE_USERNAME: someuser
      DATABASE_PASSWORD: ${DATABASE_PASSWORD:?no database password specified}
      URL: https://some-url.com
    volumes:
      - ./be:/srv/app
      - ${SOME_DIRECTORY:?no directory specified}:/srv/something:ro
      - ./some-directory:/srv/something-else
    expose:
      - 1447
    ports:
      - 5001:1337
    depends_on:
      - db
    command: bash -c "yarn install && yarn build && yarn start"
    restart: always
  watcher:
    build:
      context: ./watcher
      dockerfile: Dockerfile
    environment:
      LICENSE_KEY: ${LICENSE_KEY:?no license key specified}
    volumes:
      - ./watcher:/usr/src/app
      - ${SOME_DIRECTORY:?no directory specified}:/usr/src/something:ro
  db:
    image: mysql:8.0.23
    environment:
      MYSQL_ROOT_PASSWORD: ${DATABASE_PASSWORD:?no database password specified}
      MYSQL_DATABASE: some_db
    volumes:
      - ./db:/var/lib/mysql
    restart: always
  db-backup:
    build:
      context: ./db-backup
      dockerfile: Dockerfile.production
    environment:
      MYSQL_HOST: db
      MYSQL_DATABASE: some_db
      MYSQL_ROOT_PASSWORD: ${DATABASE_PASSWORD:?no database password specified}
    volumes:
      - ./db-backup/backups:/backups
    restart: always
The service that doesn't stop together with the others is the last one, db-backup. Here's an example of its Dockerfile.production:
FROM alpine:3.13.1
COPY ./scripts/startup.sh /usr/local/startup.sh
RUN chmod +x /usr/local/startup.sh
# NOTE used for testing when needs to run cron tasks more frequently
# RUN mkdir /etc/periodic/1min
COPY ./cron/daily/* /etc/periodic/daily
RUN chmod +x /etc/periodic/daily/*
RUN sh /usr/local/startup.sh
CMD [ "crond", "-f", "-l", "8"]
And here's an example of the ./scripts/startup.sh:
#!/bin/sh

echo "Running startup script"
echo "Checking if mysql-client is installed"
apk update
if ! apk info | grep -Fxq "mysql-client"; then
    echo "Installing MySQL client"
    apk add mysql-client
    echo "MySQL client installed"
fi

# NOTE this was used for testing. backups should run daily, thus the script
# should normally be placed in /etc/periodic/daily/
# cron_task_line="* * * * * run-parts /etc/periodic/1min"
# if ! crontab -l | grep -Fxq "$cron_task_line";
# then
#     echo "Enabling cron 1min periodic tasks"
#     echo -e "${cron_task_line}\n" >> /etc/crontabs/root
# fi

echo "Startup script finished"
All this happens on all the Ubuntu 18.04 machines that I've tried running this on. Didn't try it on anything else.
I have a Golang app that depends on an FTP server.
So, in docker-compose, I build an FTP service and refer to it in my tests.
In my docker-compose.yml I have:
version: '3'
services:
  mygoapp:
    build:
      dockerfile: ./Dockerfile.local
      context: ./
    volumes:
      - ./volume:/go
      - ./test_files:/var/test_files
    networks:
      mygoapp_network:
    env_file:
      - test.env
    tty: true
  ftpd-server:
    container_name: ftpd-server
    image: stilliard/pure-ftpd:hardened
    environment:
      PUBLICHOST: "0.0.0.0"
      FTP_USER_NAME: "julien"
      FTP_USER_PASS: "test"
      FTP_USER_HOME: "/home/www/julien"
    restart: on-failure
    networks:
      mygoapp_network:
networks:
  mygoapp_network:
    external: true
In my gitlab-ci.yml I have:
variables:
  PACKAGE_PATH: /go/src/gitlab.com/xxx
  VOLUME_PATH: /var/test_files

stages:
  - test

# A hack to make Golang-in-Gitlab happy
.anchors:
  - &inject-gopath
    mkdir -p $(dirname ${PACKAGE_PATH})
    && ln -s ${CI_PROJECT_DIR} ${PACKAGE_PATH}
    && cd ${PACKAGE_PATH}

test:
  image: docker:18
  services:
    - docker:dind
  stage: test
  # only:
  #   - production
  before_script:
    - touch test.env
    - apk update
    - apk upgrade
    - apk add --no-cache py-pip
    - pip install docker-compose
    - docker network create mygoapp_network
    - mkdir -p volume/log
  script:
    - docker-compose -f docker-local.yaml up --build -d
    - docker exec project-0_mygoapp_1 ls /var/test_files
    - docker exec project-0_mygoapp_1 echo $VOLUME_PATH
    - docker exec project-0_mygoapp_1 go test ./... -v
All my services are up.
But when I run
- docker exec project-0_mygoapp_1 echo $VOLUME_PATH
I can see $VOLUME_PATH is equal to /var/test_files.
But inside the code, when I do:
os.Getenv("VOLUME_PATH")
the variable is empty.
Also, locally with a docker exec, the variable is OK.
I also tried to put the variables into the test definition, but it still doesn't work.
EDIT: The only way I could make it work is by setting the environment variables in docker-compose, but that is not great.
Any idea how to fix it?
The behaviour of your script is predictable: all environment variables are expanded when they are encountered (unless they are in single quotes). So your line
docker exec project-0_mygoapp_1 echo $VOLUME_PATH
is expanded before being executed, and $VOLUME_PATH is taken from the GitLab runner's environment, not from the container.
One way to get the script to print an environment variable from inside the container is to put the command into an sh file and call that file (it must be inside the container, executable, and on the PATH):
doit.sh
#!/bin/sh
echo "$VOLUME_PATH"
gitlab-ci.yml
docker exec project-0_mygoapp_1 doit.sh
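Alternatively, building on the single-quote rule mentioned above, quoting the command keeps the runner from expanding the variable, so the shell inside the container does it (a one-liner sketch, same assumed container name):
# single quotes defer expansion to the shell started inside the container
docker exec project-0_mygoapp_1 sh -c 'echo "$VOLUME_PATH"'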
I am trying to create a service consisting of a web server and a database, all in Docker containers. Currently I am trying to create a single environment file for both of them that would contain the database credentials. Unfortunately, when I try to build the database with it, it turns out the variables are empty. How can I create a project with a single environment file for both components? Here is my docker-compose.yml:
version: '2'
services:
  db:
    build:
      context: .
      dockerfile: Dockerfile-db
    ports:
      - '5433:5432'
    env_file:
      - env
  web:
    build: .
    ports:
      - '8000:8000'
    command: venv/bin/python manage.py runserver 0.0.0.0:8000
    depends_on:
      - db
    env_file:
      - env
Here is the part of my Dockerfile-db file responsible for creating the database:
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y postgresql-9.5-postgis-2.2
USER postgres
ARG DB_PASSWORD
ARG DB_NAME
RUN echo $DB_PASSWORD
RUN /etc/init.d/postgresql start && \
    psql --command "ALTER USER postgres WITH PASSWORD '$DB_PASSWORD';" && \
    psql --command "CREATE DATABASE $DB_NAME;" && \
    psql --command "\\c $DB_NAME" && \
    psql --command "CREATE EXTENSION postgis;" && \
    psql --command "CREATE EXTENSION postgis_topology;"
And my env file has the following structure:
DB_NAME=some_name
DB_USER=postgres
DB_PASSWORD=some_password
DB_HOST=db
DB_PORT=5432
The environment file is not part of the build process; it is used when running the container.
You need to use build args instead. In docker-compose you can specify build args in the file:
build:
  context: .
  dockerfile: Dockerfile-db
  args:
    DB_NAME: some_name
    DB_USER: postgres
    ...
This might not be a good idea if you want to publish the compose file, as you are storing credentials in it. You can instead build explicitly and pass --build-arg:
docker-compose build --build-arg DB_NAME=some_name ...
And when running, skip the build with docker-compose up --no-build.
Update:
As suggested by @gonczor, a shorter and cleaner syntax to pass the env file as build args is:
docker-compose build --build-arg $(cat envfile)
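Note that --build-arg takes one KEY=VALUE pair per flag, so with several variables in the file, a sketch like this is needed (assuming no spaces in the values):
# prefix every line of the env file with its own --build-arg flag
docker-compose build $(sed 's/^/--build-arg /' envfile)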
Your configuration is correct; it seems like you may not be passing the actual path or name of the .env file. Have you tried the following (assuming your .env file is in the same directory)?
version: '2'
services:
  db:
    build:
      context: .
      dockerfile: Dockerfile-db
    ports:
      - '5433:5432'
    env_file:
      - ./.env
  web:
    build: .
    ports:
      - '8000:8000'
    command: venv/bin/python manage.py runserver 0.0.0.0:8000
    depends_on:
      - db
    env_file:
      - ./.env