Pass environment variable to dockerfile build - docker

I am trying to create a service consisting of a web server and a database, all in Docker containers. Currently I am trying to create a single environment file for both of them that would contain the database credentials. Unfortunately, when I try to build the database with it, the variables turn out to be empty. How can I create a project with a single environment file for both components? Here is my docker-compose.yml:
version: '2'
services:
  db:
    build:
      context: .
      dockerfile: Dockerfile-db
    ports:
      - '5433:5432'
    env_file:
      - env
  web:
    build: .
    ports:
      - '8000:8000'
    command: venv/bin/python manage.py runserver 0.0.0.0:8000
    depends_on:
      - db
    env_file:
      - env
Here is the part of my Dockerfile-db file responsible for creating the database:
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y postgresql-9.5-postgis-2.2
USER postgres
ARG DB_PASSWORD
ARG DB_NAME
RUN echo $DB_PASSWORD
RUN /etc/init.d/postgresql start && \
    psql --command "ALTER USER postgres WITH PASSWORD '$DB_PASSWORD';" && \
    psql --command "CREATE DATABASE $DB_NAME;" && \
    psql --command "\\c $DB_NAME" && \
    psql --command "CREATE EXTENSION postgis;" && \
    psql --command "CREATE EXTENSION postgis_topology;"
And my env file has the following structure:
DB_NAME=some_name
DB_USER=postgres
DB_PASSWORD=some_password
DB_HOST=db
DB_PORT=5432

The environment file is not part of the build process; it is used when running the container.
You need to use build args instead. In docker-compose you can specify build args in the compose file:
build:
  context: .
  dockerfile: Dockerfile-db
  args:
    DB_NAME: some_name
    DB_USER: postgres
    ...
This might not be a good idea if you want to publish the compose file, as you are storing credentials in it. Alternatively, you can build explicitly and pass --build-arg:
docker-compose build --build-arg DB_NAME=some_name ...
And when running, skip the build with docker-compose up --no-build.
Update:
As suggested by @gonczor, a shorter and cleaner syntax to pass the env file as build args is:
docker-compose build --build-arg $(cat envfile)
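If each variable needs its own flag, one way to expand the env file is (a sketch, assuming the env file contains plain KEY=VALUE lines with no comments, quoting or spaces):
docker-compose build $(sed 's/^/--build-arg /' env | tr '\n' ' ')
Here sed prefixes every line of the env file with --build-arg and tr joins them into a single argument list for the build command.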

Your configuration is correct; it seems like you may not be passing the actual path or name of the .env file. Have you tried the following (assuming your .env file is in the same directory)?
version: '2'
services:
  db:
    build:
      context: .
      dockerfile: Dockerfile-db
    ports:
      - '5433:5432'
    env_file:
      - ./.env
  web:
    build: .
    ports:
      - '8000:8000'
    command: venv/bin/python manage.py runserver 0.0.0.0:8000
    depends_on:
      - db
    env_file:
      - ./.env

Related

one of the services started with docker-compose up doesn't stop with docker-compose stop

I have the file docker-compose.production.yml that contains the configuration of 5 services. I start them all with the command sudo docker-compose -f docker-compose.production.yml up --build in the directory where the file is. When I want to stop all the services, I simply call sudo docker-compose stop in the directory where the file is. Strangely, 4 out of 5 services stop correctly, but 1 keeps running, and to stop it I must use sudo docker stop [CONTAINER]. The service is not even listed among the services being stopped after the stop command is run. It's like the service somehow "detaches" from the group. What could be causing this strange behaviour?
Here's an example of the docker-compose.production.yml file:
version: '3'
services:
  fe:
    build:
      context: ./fe
      dockerfile: Dockerfile.production
    ports:
      - 5000:80
    restart: always
  be:
    image: strapi/strapi:3.4.6-node12
    environment:
      NODE_ENV: production
      DATABASE_CLIENT: mysql
      DATABASE_NAME: some_db
      DATABASE_HOST: db
      DATABASE_PORT: 3306
      DATABASE_USERNAME: someuser
      DATABASE_PASSWORD: ${DATABASE_PASSWORD:?no database password specified}
      URL: https://some-url.com
    volumes:
      - ./be:/srv/app
      - ${SOME_DIRECTORY:?no directory specified}:/srv/something:ro
      - ./some-directory:/srv/something-else
    expose:
      - 1447
    ports:
      - 5001:1337
    depends_on:
      - db
    command: bash -c "yarn install && yarn build && yarn start"
    restart: always
  watcher:
    build:
      context: ./watcher
      dockerfile: Dockerfile
    environment:
      LICENSE_KEY: ${LICENSE_KEY:?no license key specified}
    volumes:
      - ./watcher:/usr/src/app
      - ${SOME_DIRECTORY:?no directory specified}:/usr/src/something:ro
  db:
    image: mysql:8.0.23
    environment:
      MYSQL_ROOT_PASSWORD: ${DATABASE_PASSWORD:?no database password specified}
      MYSQL_DATABASE: some_db
    volumes:
      - ./db:/var/lib/mysql
    restart: always
  db-backup:
    build:
      context: ./db-backup
      dockerfile: Dockerfile.production
    environment:
      MYSQL_HOST: db
      MYSQL_DATABASE: some_db
      MYSQL_ROOT_PASSWORD: ${DATABASE_PASSWORD:?no database password specified}
    volumes:
      - ./db-backup/backups:/backups
    restart: always
The service that doesn't stop together with the others is the last one, db-backup. Here's an example of its Dockerfile.production:
FROM alpine:3.13.1
COPY ./scripts/startup.sh /usr/local/startup.sh
RUN chmod +x /usr/local/startup.sh
# NOTE used for testing when needs to run cron tasks more frequently
# RUN mkdir /etc/periodic/1min
COPY ./cron/daily/* /etc/periodic/daily
RUN chmod +x /etc/periodic/daily/*
RUN sh /usr/local/startup.sh
CMD [ "crond", "-f", "-l", "8"]
And here's an example of the ./scripts/startup.sh:
#!/bin/sh
echo "Running startup script"
echo "Checking if mysql-client is installed"
apk update
if ! apk info | grep -Fxq "mysql-client";
then
    echo "Installing MySQL client"
    apk add mysql-client
    echo "MySQL client installed"
fi
# NOTE this was used for testing. backups should run daily, thus script should
# normally be placed in /etc/periodic/daily/
# cron_task_line="* * * * * run-parts /etc/periodic/1min"
# if ! crontab -l | grep -Fxq "$cron_task_line";
# then
#     echo "Enabling cron 1min periodic tasks"
#     echo -e "${cron_task_line}\n" >> /etc/crontabs/root
# fi
echo "Startup script finished"
All this happens on all the Ubuntu 18.04 machines that I've tried running this on. Didn't try it on anything else.
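docker-compose stop only acts on containers whose com.docker.compose.project label matches the current project, so one way to investigate is to compare the labels of a container that stops correctly with the one that doesn't (a diagnostic sketch; <container-name> is a placeholder):
docker inspect --format '{{ index .Config.Labels "com.docker.compose.project" }}' <container-name>
docker inspect --format '{{ index .Config.Labels "com.docker.compose.service" }}' <container-name>
A mismatch in the project label would explain why the container is skipped by docker-compose stop.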

How to connect to another container from a Dockerfile during docker-compose build

I am trying to configure Docker Compose for my PHP project. On deploy I want to update the source code, update the Composer dependencies and run database migrations.
So I have a docker-compose.yml file:
version: '3.0'
services:
  php:
    build:
      context: .
      dockerfile: php/Dockerfile
    depends_on:
      - postgres
  postgres:
    image: "postgres:13-alpine"
    restart: always
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB_NAME}
The php container is built from the following Dockerfile:
# Install dependencies
RUN apt-get update \
    && apt-get install -y git libicu-dev postgresql-server-dev-all zip libzip-dev postgresql-client \
    && docker-php-ext-install intl pdo pdo_pgsql zip
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
# Copy source files
COPY ./app /var/www/my-site
# Update project files
WORKDIR /var/www/my-site
RUN composer install
RUN php ./yii migrate --interactive=0 # This command needs to connect to the database and fails
CMD [ "php-fpm"]
When I run docker-compose build, I get this error: could not translate host name "postgres" to address: Name or service not known.
How can I get access to the database container while the other one is building?
Both php and postgres need to be on the same network, and php can access postgres using the container_name, which is postgres. depends_on will make sure postgres starts before php.
version: '3.0'
services:
  php:
    build:
      context: .
      dockerfile: php/Dockerfile
    restart: on-failure
    depends_on:
      - postgres
    networks:
      - test-network
  postgres:
    container_name: 'postgres'
    image: "postgres:13-alpine"
    restart: always
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB_NAME}
    networks:
      - test-network
networks:
  test-network:
    driver: bridge
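Note that this name resolution is only available once the containers are running, not while an image is being built, so it can be verified from the php service at run time (a sketch; it assumes the php image provides getent, which Debian-based PHP images do):
docker-compose up -d postgres
docker-compose run --rm php getent hosts postgres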

Set secret variable when using Docker in TravisCI

I am building a backend with Node.js and would like to use Travis CI and Docker to run tests.
In my code, I have a secret env: process.env.SOME_API_KEY
This is my Dockerfile.dev
FROM node:alpine
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "run", "dev"]
My docker compose:
version: "3"
services:
api:
build:
context: .
dockerfile: Dockerfile.dev
volumes:
- /app/node_modules
- .:/app
ports:
- "3000:3000"
depends_on:
- mongo
mongo:
image: mongo:4.0.6
ports:
- "27017:27017"
And this is my Travis CI config:
sudo: required
services:
  - docker
before_script:
  - docker-compose up -d --build
script:
  - docker-compose exec api npm run test
I also set SOME_API_KEY='xxx' in my Travis settings (environment variables). However, it seems that the container doesn't receive SOME_API_KEY.
How can I pass SOME_API_KEY from Travis CI to Docker? Thanks
Containers in general do not inherit the environment from which they are run. Consider something like this:
export SOMEVARIABLE=somevalue
docker run --rm alpine sh -c 'echo $SOMEVARIABLE'
That will never print out the value of $SOMEVARIABLE because there is no magic process to import environment variables from your local shell into the container. If you want a Travis environment variable exposed inside your Docker containers, you will need to do that explicitly by creating an appropriate environment block in your docker-compose.yml. For example, I use the following docker-compose.yml:
version: "3"
services:
example:
image: alpine
command: sh -c 'echo $SOMEVARIABLE'
environment:
SOMEVARIABLE: "${SOMEVARIABLE}"
I can then run the following:
export SOMEVARIABLE=somevalue
docker-compose up
And see the following output:
Recreating docker_example_1 ... done
Attaching to docker_example_1
example_1 | somevalue
docker_example_1 exited with code 0
So you will need to write something like:
version: "3"
services:
api:
build:
context: .
dockerfile: Dockerfile.dev
volumes:
- /app/node_modules
- .:/app
ports:
- "3000:3000"
depends_on:
- mongo
environment:
SOME_API_KEY: "${SOME_API_KEY}"
mongo:
image: mongo:4.0.6
ports:
- "27017:27017"
I had a similar issue and solved it by passing the environment variable to the container in the docker-compose exec command. If the variable is set in the Travis environment, you can do:
sudo: required
services:
  - docker
before_script:
  - docker-compose up -d --build
script:
  - docker-compose exec -e SOME_API_KEY=$SOME_API_KEY api npm run test
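To confirm that the variable actually reaches the test container, a quick inline check can be run the same way (a sketch; it assumes node is on the image's PATH, which it is for node:alpine):
docker-compose exec -e SOME_API_KEY=$SOME_API_KEY api node -e "console.log(process.env.SOME_API_KEY ? 'key present' : 'key missing')"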

How to set up separate .env for development and production using Docker

Coming from an environment where I was manually SSHing into the remote server, doing a git pull and creating my .env (since it is gitignored), how do I separate the development .env from the production .env? I used docker-machine to create an AWS EC2 instance. I created a production.yml and ran docker-compose -f production.yml up -d. The container on the EC2 machine picked up my development .env, which is not what I want.
Dockerfile
FROM python:3.6-alpine
ENV PYTHONUNBUFFERED 1
RUN apk update && apk add postgresql-dev gcc python3-dev musl-dev git jpeg-dev zlib-dev libmagic
RUN python -m pip install --upgrade pip
RUN mkdir /writer-api
COPY requirements.txt /writer-api/
RUN pip install --no-cache-dir -r /writer-api/requirements.txt
COPY . /writer-api/
WORKDIR /writer-api
production.yml
version: "3"
services:
postgres:
restart: always
image: postgres
ports:
- "5432:5432"
volumes:
- pgdata:/var/lib/postgresql/data/
web:
restart: always
build: .
command: gunicorn writer.wsgi:application -w 2 -b :8000
environment:
DEBUG: ${DEBUG}
SECRET_KEY: ${SECRET_KEY}
DB_HOST: ${DB_HOST}
DB_NAME: ${DB_NAME}
DB_USER: ${DB_USER}
DB_PORT: ${DB_PORT}
DB_PASSWORD: ${DB_PASSWORD}
SENDGRID_API_KEY: ${SENDGRID_API_KEY}
AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
AWS_STORAGE_BUCKET_NAME: ${AWS_STORAGE_BUCKET_NAME}
depends_on:
- postgres
- redis
expose:
- "8000"
redis:
restart: always
image: "redis:alpine"
celery:
restart: always
build: .
command: celery -A writer worker -l info
volumes:
- .:/writer-api
depends_on:
- postgres
- redis
celery-beat:
restart: always
build: .
command: celery -A writer beat -l info
volumes:
- .:/writer-api
depends_on:
- postgres
- redis
nginx:
restart: always
build: ./nginx/
ports:
- "80:80"
depends_on:
- web
volumes:
pgdata:
I guess you can export a shell environment variable and then pick the .env file that matches the environment. Create a dev.env and a prod.env file in the workspace.
Sample compose -
version: '3'
services:
  nginx:
    image: nginx
    ports:
      - '80'
    env_file:
      - ${ENVIRON}.env
Build for DEV -
export ENVIRON=dev
docker-compose up -d
Build for PROD -
export ENVIRON=prod
docker-compose up -d
This way you will be able to leverage the same compose file for the DEV and PROD environments.
Set up the compose files for production and dev in separate folders and put a .env file in each of those folders.
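A minimal sketch of that layout (the directory names are hypothetical); docker-compose picks up the .env file from the directory it is run in, so switching environments is just a matter of changing directories:
# hypothetical layout:
#   deploy/dev/docker-compose.yml    deploy/dev/.env
#   deploy/prod/docker-compose.yml   deploy/prod/.env
cd deploy/prod && docker-compose up -d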

CRON job cannot find environment variable set in docker-compose

I am setting some environment variables in docker-compose that are being used by a python application being run by a cron job.
docker-compose.yaml:
version: '2.1'
services:
  zookeeper:
    container_name: zookeeper
    image: zookeeper:3.3.6
    restart: always
    hostname: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOO_MY_ID: 1
  kafka:
    container_name: kafka
    image: wurstmeister/kafka:1.1.0
    hostname: kafka
    links:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_CREATE_TOPICS: "topic:1:1"
      KAFKA_LOG_MESSAGE_TIMESTAMP_TYPE: LogAppendTime
      KAFKA_MESSAGE_TIMESTAMP_TYPE: LogAppendTime
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
  data-collector:
    container_name: data-collector
    #image: mystreams:0.1
    build:
      context: /home/junaid/eMumba/CASB/Docker/data_collector/
      dockerfile: Dockerfile
    links:
      - kafka
    environment:
      - KAFKA_HOST=kafka
      - OFFICE_365_APP_ID=98aff1c5-7a69-46b7-899c-186851054b43
      - OFFICE_365_APP_SECRET=zVyS/V694ffWe99QpCvYqE1sqeqLo36uuvTL8gmZV0A=
      - OFFICE_365_APP_TENANT=2f6cb1a6-ecb8-4578-b680-bf84ded07ff4
      - KAFKA_CONTENT_URL_TOPIC=o365_activity_contenturl
      - KAFKA_STORAGE_DATA_TOPIC=o365_storage
      - KAFKA_PORT=9092
      - POSTGRES_DB_NAME=casb
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=pakistan
      - POSTGRES_HOST=postgres_database
    depends_on:
      postgres_database:
        condition: service_healthy
  postgres_database:
    container_name: postgres_database
    build:
      context: /home/junaid/eMumba/CASB/Docker/data_collector/
      dockerfile: postgres.dockerfile
    #image: ayeshaemumba/casb-postgres:v3
    #volumes:
    #  - ./postgres/data:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: pakistan
      POSTGRES_DB: casb
    expose:
      - "5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 30s
      timeout: 30s
      retries: 3
When I exec into the data-collector container and echo any of the environment variables, I can see that it is set:
# docker exec -it data-collector sh
# echo $KAFKA_HOST
kafka
But my cron job logs show KeyError: 'KAFKA_HOST'
It means my cron job cannot find the environment variables.
Now I have two questions:
1) Why are the environment variables not set for the cron job?
2) I know that I can pass environment variables in a shell script and run it while building the image. But is there a way to pass environment variables from docker-compose?
Update:
The cron job is defined in the Dockerfile for the Python application.
Dockerfile:
FROM python:3.5-slim
# Creating Application Source Code Directory
RUN mkdir -p /usr/src/app
# Setting Home Directory for containers
WORKDIR /usr/src/app
# Installing python dependencies
COPY requirements.txt /usr/src/app/
RUN pip install --no-cache-dir -r requirements.txt
# Copying src code to Container
COPY . /usr/src/app
# Add storage crontab file in the cron directory
ADD crontab-storage /etc/cron.d/storage-cron
# Give execution rights on the storage cron job
RUN chmod 0644 /etc/cron.d/storage-cron
RUN chmod 0644 /usr/src/app/cron_storage_data.sh
# Create the log file to be able to run tail
RUN touch /var/log/cron.log
#Install Cron
RUN apt-get update
RUN apt-get -y install cron
# Run the command on container startup
CMD cron && tail -f /var/log/cron.log
crontab-storage:
*/1 * * * * sh /usr/src/app/cron_storage_data.sh
# Don't remove the empty line at the end of this file. It is required to run the cron job
cron_storage_data.sh:
#!/bin/bash
cd /usr/src/app
/usr/local/bin/python3.5 storage_data_collector.py
Cron doesn't inherit docker-compose environment variables by default. A potential workaround for this situation is to:
1. Pass the environment variables from docker-compose into a local .env file:
touch .env
echo "export KAFKA_HOST=$KAFKA_HOST" > /usr/src/app/.env
2. Source the .env file before the cron task executes:
* * * * * <username> . /usr/src/app/.env && sh /usr/src/app/cron_storage_data.sh
This is how the cron job's environment will look before and after the change:
Before:
{'SHELL': '/bin/sh', 'PWD': '/root', 'LOGNAME': 'root', 'PATH': '/usr/bin:/bin', 'HOME': '/root'}
After:
{'SHELL': '/bin/sh', 'PWD': '/root', 'KAFKA_HOST': 'kafka', 'LOGNAME': 'root', 'PATH': '/usr/bin:/bin', 'HOME': '/root'}
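One way to generate that .env file automatically is to dump the container's environment when it starts, before cron is launched; a sketch of a startup script (the variable prefixes and the file path are assumptions based on the compose file above, and values containing spaces or quotes would need extra escaping):
#!/bin/sh
# write the variables passed in by docker-compose where the cron job can source them
printenv | grep -E '^(KAFKA_|OFFICE_365_|POSTGRES_)' | sed 's/^/export /' > /usr/src/app/.env
# then start cron in the foreground as before
cron && tail -f /var/log/cron.log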
