I have part of a docker-compose file like so:
docker-compose.yml
pitchjob-fpm01:
  container_name: pitchjob-fpm01
  env_file:
    - env/base.env
  build:
    context: ./pitch
    dockerfile: PitchjobDockerfile
  volumes:
    - "/Sites/pitch/pitchjob/:/Sites"
  restart: always
  depends_on:
    - memcached01
    - memcached02
  links:
    - memcached01
    - memcached02
  extends:
    file: "shared/common.yml"
    service: pitch-common-env
My extended yml file is:
compose.yml
version: '2.0'
services:
  pitch-common-env:
    environment:
      APP_VOL_DIR: Sites
      WEB_ROOT_FOLDER: web
      CONFIG_FOLDER: app/config
      APP_NAME: sony_pitch
In the Dockerfile for pitchjob-fpm01 I have a command like so:
PitchjobDockerfile
# Set folder groups
RUN chown -Rf www-data:www-data /$APP_VOL_DIR
But when I run the command to bring up the stack
docker-compose -f docker-compose-base.yml up --build --force-recreate --remove-orphans
I get the following error
failed to build: The command '/bin/sh -c chown -Rf www-data:www-data /$APP_VOL_DIR' returned a non-zero code: 1
I'm guessing this is because it doesn't have $APP_VOL_DIR, but why is that, given that the docker-compose file extends another compose file that defines the environment variables?
You can use build-time arguments for that.
In the Dockerfile define:
ARG APP_VOL_DIR=app_vol_dir
# Set folder groups
RUN chown -Rf www-data:www-data /$APP_VOL_DIR
Then in docker-compose.yml set APP_VOL_DIR as a build argument:
pitchjob-fpm01:
  container_name: pitchjob-fpm01
  env_file:
    - env/base.env
  build:
    context: ./pitch
    dockerfile: PitchjobDockerfile
    args:
      - APP_VOL_DIR=Sites
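If the value is also needed inside the running container and not just at build time, one option (a sketch of my own, not part of the original answer; the Sites default is only illustrative) is to copy the ARG into an ENV so the same value survives into the image's runtime environment:
# Build-time value, overridable from docker-compose's build args
ARG APP_VOL_DIR=Sites
# Persist the same value into the image's runtime environment
ENV APP_VOL_DIR=${APP_VOL_DIR}
# Set folder groups
RUN chown -Rf www-data:www-data /$APP_VOL_DIR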
I think your problem is not with the overrides, but with the way you are trying to do environment variable substitution. From the docs:
Note: Unlike the shell form, the exec form does not invoke a command shell. This means that normal shell processing does not happen. For example, RUN [ "echo", "$HOME" ] will not do variable substitution on $HOME. If you want shell processing then either use the shell form or execute a shell directly, for example: RUN [ "sh", "-c", "echo $HOME" ].
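To make the difference concrete, here is a minimal Dockerfile illustration (mine, not from the original post) of how each RUN form treats variable expansion:
# Exec form: no shell is involved, so $HOME is printed literally
RUN ["echo", "$HOME"]
# Shell form: the line is run through /bin/sh -c, so $HOME is expanded
RUN echo $HOME
# Exec form that invokes a shell explicitly, so expansion happens again
RUN ["sh", "-c", "echo $HOME"]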
Related
Is it possible to RUN a command within a docker-compose.yml file? That is, instead of having a Dockerfile with something like RUN mkdir foo, can I have the same command within the docker-compose.yml file?
services:
  server:
    container_name: nginx
    image: nginx:stable-alpine
    volumes:
      - ./public:/var/www/html/public
    ports:
      - "${PORT:-80}:80"
    ???: 'mkdir foo' # <--- pseudo code
I am trying to create a CI/CD pipeline using GitLab and am now facing an issue with a GitLab variable: it is not accessible inside the docker-compose file.
This is my GitLab CI yml file:
step-production:
  stage: production
  before_script:
    - export APP_ENVIRONMENT="$PRODUCTION_APP_ENVIRONMENT"
  only:
    - /^release.*$/
  tags:
    - release-tag
  script:
    - echo production env value is "$PRODUCTION_APP_ENVIRONMENT"
    - sudo curl -L "https://github.com/docker/compose/releases/download/1.26.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
    - sudo chmod +x /usr/local/bin/docker-compose
    - sudo docker-compose -f docker-compose.prod.yml build --no-cache
    - sudo docker-compose -f docker-compose.prod.yml up -d
  when: manual
And this is my docker-compose file:
version: "3"
services:
  redis:
    image: redis:latest
  app:
    build:
      context: .
    environment:
      - APP_ENVIRONMENT=${PRODUCTION_APP_ENVIRONMENT}
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./app:/app
    ports:
      - "8000:8000"
    restart: on-failure:5
    # network_mode: "host"
Can someone help me with accessing the GitLab variable inside the docker-compose file? I have spent more than a day on this issue.
The issue has been resolved by the following method:
Edit the following line in the GitLab CI yml file:
sudo docker-compose -f docker-compose.prod.yml build --build-arg DB_NAME=$DEVELOPMENT_DB_NAME --build-arg DB_HOST=$DEVELOPMENT_DB_HOST --no-cache
Define the values of $DEVELOPMENT_DB_NAME and $DEVELOPMENT_DB_HOST in the GitLab variables section.
In the Dockerfile, add ARG and ENV sections as follows:
ARG DB_NAME
ARG DB_HOST
ENV DB_NAME=${DB_NAME}
ENV DB_HOST=${DB_HOST}
Make sure that no environment variables with the same name are defined in the docker-compose yml file.
That's it !!!
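A possible alternative (my own sketch, not part of the original answer) is to declare the same build arguments in docker-compose.prod.yml under build.args, so docker-compose substitutes them from the job's shell environment at build time; note that if docker-compose is run with sudo, the environment may need to be preserved explicitly (e.g. sudo -E):
version: "3"
services:
  app:
    build:
      context: .
      args:
        # substituted by docker-compose from the environment GitLab exports to the job
        - DB_NAME=${DEVELOPMENT_DB_NAME}
        - DB_HOST=${DEVELOPMENT_DB_HOST}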
I'm using Docker v19.03.13. In my shell, I have defined some env vars ...
davea$ echo $AZ_SQL_TP_SRVR
localhost
davea$ echo $AZ_SQL_TP_DB
myDB
I would like to reference these in my docker-compose.yml file, which is below ...
version: "3.2"
services:
  sqlserver-db:
    build:
      context: ./
      args:
        - AZ_SQL_TP_SRVR=${AZ_SQL_TP_SRVR}
        - AZ_SQL_TP_PORT=1433
        - AZ_SQL_TP_DB=${AZ_SQL_TP_DB}
        - AZ_SQL_TP_USERNAME=${AZ_SQL_TP_USERNAME}
        - AZ_SQL_TP_PASSWORD=${AZ_SQL_TP_PASSWORD}
    container_name: sqlserver-db
    ports:
      - ${AZ_SQL_TP_PORT}:1433
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=${AZ_SQL_TP_PASSWORD}
      - TZ=${TZ}
    volumes:
      - ../../mydb/mypb:/sqlscripts
    tty: true
    command: /bin/bash entrypoint.sh
but when I bring up my containers, I get these warnings complaining that the vars cannot be found ...
davea$ docker-compose up -d
WARNING: The AZ_SQL_TP_SRVR variable is not set. Defaulting to a blank string.
WARNING: The AZ_SQL_TP_DB variable is not set. Defaulting to a blank string.
What else do I need to do to make my docker-compose file recognize my env vars defined in my shell?
As the comments mentioned, assigning the variable with AZ_SQL_TP_DB=myDB alone is not enough, because a plain (non-exported) shell assignment is not visible to docker-compose.
You can choose any of the following to make it work.
Option 1
$ export AZ_SQL_TP_DB=myDB
$ docker-compose up -d
Option 2
$ AZ_SQL_TP_DB=myDB docker-compose up -d
Option 3
Add a .env file in the same folder as docker-compose.yaml with the following contents:
$ cat .env
AZ_SQL_TP_SRVR=localhost
AZ_SQL_TP_DB=myDB
$ docker-compose up -d
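To check the substitution before starting anything, one option (my suggestion, not part of the original answer) is to render the resolved configuration with docker-compose config:
$ export AZ_SQL_TP_SRVR=localhost
$ export AZ_SQL_TP_DB=myDB
$ docker-compose config | grep AZ_SQL_TP
If the variables are visible to docker-compose, the output shows the substituted values instead of blank strings.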
I created a Dockerfile with an OpenJDK base image that runs an init.sh script.
I want to inherit from that Dockerfile and override init.sh with a test.sh script.
Is it possible for the "test" Dockerfile to inherit from (extend) the "my-app" Dockerfile and override the ENTRYPOINT?
Should I define both the "my-app" and "test" containers in docker-compose?
Can I run only the test container with docker-compose, and not both?
My goal is to run only the "my-app" container in production, but for tests I want to extend it, run the tests, and apply some additional configuration.
my-app/Dockerfile:
FROM openjdk:11-jre-slim
COPY init.sh /path/
ENTRYPOINT ["/bin/bash", "-c", "/path/init.sh"]
test/Dockerfile:
FROM my-app
COPY test.sh /path/
ENTRYPOINT ["/bin/bash", "-c", "/path/test.sh"]
Here is one idea,
You can have different docker-compose files.
docker-compose.yml: Contains the definition of all images needed for running your app.
my-api:
  image: yourImage
  build:
    context: .
    dockerfile: Dockerfile
  depends_on:
    - sqldata
docker-compose.override.yml: Contains the base config for all images of the previous file.
my-api:
  environment:
    - ENVIRONMENT=Development
    - DEBUG
  ports:
    - "6105:80"
Using these two files together from the CLI:
docker-compose -f docker-compose.yml -f docker-compose.override.yml up
This should start your app with all its containers and the default environment.
PRODUCTION
docker-compose.prod.yml: This is a replacement for docker-compose.override.yml, but it contains configuration and environment variables suitable for a production environment.
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up
TESTING
Override the default entrypoint of the image
docker-compose-test.override.yml:
app-test:
  environment:
    - ENVIRONMENT=Development
  ports:
    - "6103:80"
  entrypoint:
    - YourScript
    - memory=1
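To address the "run only the test container" part of the question: docker-compose up accepts service names, so (a sketch of my own, assuming the test override targets the same my-api service defined in docker-compose.yml) you can combine the base file with the test override and start just that service:
# starts only my-api (plus its depends_on dependencies), with the test entrypoint applied
docker-compose -f docker-compose.yml -f docker-compose-test.override.yml up -d my-api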
I'm building a backend with NodeJS and would like to use TravisCI and Docker to run tests.
In my code, I have a secret env: process.env.SOME_API_KEY
This is my Dockerfile.dev
FROM node:alpine
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "run", "dev"]
My docker compose:
version: "3"
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile.dev
    volumes:
      - /app/node_modules
      - .:/app
    ports:
      - "3000:3000"
    depends_on:
      - mongo
  mongo:
    image: mongo:4.0.6
    ports:
      - "27017:27017"
And this is my TravisCI config:
sudo: required
services:
  - docker
before_script:
  - docker-compose up -d --build
script:
  - docker-compose exec api npm run test
I also set SOME_API_KEY='xxx' in my Travis settings variables. However, it seems that the container doesn't receive SOME_API_KEY.
How can I pass SOME_API_KEY from TravisCI to Docker? Thanks.
Containers in general do not inherit the environment from which they are run. Consider something like this:
export SOMEVARIABLE=somevalue
docker run --rm alpine sh -c 'echo $SOMEVARIABLE'
That will never print out the value of $SOMEVARIABLE because there is no magic process to import environment variables from your local shell into the container. If you want a travis environment variable exposed inside your docker containers, you will need to do that explicitly by creating an appropriate environment block in your docker-compose.yml. For example, I use the following docker-compose.yml:
version: "3"
services:
  example:
    image: alpine
    command: sh -c 'echo $SOMEVARIABLE'
    environment:
      SOMEVARIABLE: "${SOMEVARIABLE}"
I can then run the following:
export SOMEVARIABLE=somevalue
docker-compose up
And see the following output:
Recreating docker_example_1 ... done
Attaching to docker_example_1
example_1 | somevalue
docker_example_1 exited with code 0
So you will need to write something like:
version: "3"
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile.dev
    volumes:
      - /app/node_modules
      - .:/app
    ports:
      - "3000:3000"
    depends_on:
      - mongo
    environment:
      SOME_API_KEY: "${SOME_API_KEY}"
  mongo:
    image: mongo:4.0.6
    ports:
      - "27017:27017"
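To confirm the variable actually reaches the running container (my own check, reusing the api service name from the compose file above), you can exec into it and echo the value:
# after docker-compose up -d --build, print the variable inside the api container
docker-compose exec api sh -c 'echo $SOME_API_KEY'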
I had a similar issue and solved it by passing the environment variable to the container in the docker-compose exec command. If the variable is in the Travis environment, you can do:
sudo: required
services:
  - docker
before_script:
  - docker-compose up -d --build
script:
  - docker-compose exec -e SOME_API_KEY=$SOME_API_KEY api npm run test