There is a start.sh script under the application:
scripts/start.sh
#!/bin/bash
source /env/set.sh
env/set.sh content:
#!/bin/bash
export DB_USERNAME=a
export DB_PASSWORD=b
docker-compose.yml
version: '3.4'
services:
  web:
    build: .
    expose:
      - "5000"
    command: scripts/start.sh
    tty: true
    stdin_open: true
After running docker-compose build && docker-compose up and logging into the container, the env values had not been set; I have to run source /env/set.sh manually.
Why didn't command: scripts/start.sh work?
Let's assume you need to set it after the container has started and can't use the docker-compose environment variables.
You are sourcing the script that exports all the variables needed, but then that bash session closes, so what you need is to make the exports permanent. I described a way to do it in a similar question here.
What you need is an entrypoint:
#!/bin/bash
# if env variable is not set, set it
if [ -z "$DB_USERNAME" ];
then
# env variable is not set
export DB_USERNAME=username;
fi
# pass the arguments received by the entrypoint.sh
# to /bin/bash with command (-c) option
exec /bin/bash -c "$*"
Then add this script as entrypoint in your docker-compose file.
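For example (a minimal sketch; the entrypoint.sh filename and its place in the build context are assumptions, not part of the original setup):
docker-compose.yml
version: '3.4'
services:
  web:
    build: .
    expose:
      - "5000"
    entrypoint: ./entrypoint.sh   # the script above, must be executable in the image
    command: scripts/start.sh     # passed to the entrypoint as its arguments
    tty: true
    stdin_open: true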
Related
How to pass environment variables to a RUN command in a Dockerfile? In my scenario, I want to pass env variables to a RUN command that runs a script which uses these variables.
.env
NAME=John
script.sh
#!/bin/sh
echo $NAME
Dockerfile
FROM alpine:3.14
COPY . .
RUN chmod +x script.sh
RUN ./script.sh
docker-compose.yml
version: "3.1"
services:
foo:
container_name: foo
build:
context: .
dockerfile: Dockerfile
restart: unless-stopped
How can I pass the NAME env variable to the last RUN command in the Dockerfile (to be used by the script)?
I am aware of --build-arg, but it is inconvenient when there are hundreds of env variables. Even then, how can I format the docker-compose command to read all the arguments from an env file and pass them as build arguments?
I have:
docker-compose.yml
version: "3.9"
services:
test_name:
image: ${PROJECT_NAME}/test_service
build:
dockerfile: Dockerfile
env_file: .env
Dockerfile
FROM alpine:3.15
RUN echo $TEST >> test1.txt
CMD echo $TEST >> test2.txt
As a result, test1.txt is empty and test2.txt contains the data.
My problem is that there are too many of these variables; can I get environment variables in a RUN command from the .env file without enumerating all of them as ARGs?
To use variables in a RUN instruction, you need to use ARG. ARG values are available at build time, while ENV values are available when the container runs.
FROM alpine:3.15
ARG FOO="you see me on build"
ENV BAR="you see me on run"
RUN echo $FOO >> test1.txt
CMD echo $BAR >> test2.txt
docker build --build-arg FOO="hi" --tag test .
docker run --env BAR="there" test
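To check the result (a sketch; overriding the command skips the CMD, so this only shows the file written at build time):
docker run --rm test cat test1.txt   # prints "hi"
test2.txt only exists inside a container that has actually run the default CMD with BAR set.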
There is one thing that comes close to using env variables, but you still need to provide the --build-arg flag.
You can define an env variable with the same name as the build arg and reference it by name without setting a value. The value will then be taken from the env variable in your shell.
export FOO="bar"
docker build --build-arg FOO --tag test .
This also works in compose.
Additionally, when you use Compose you can place a .env file next to your compose file. Variables found there are read and can be used in the build: args key as well as the environment key, but you still have to name them.
# env file
FOO=bar
BAZ=qux
services:
  test_name:
    build:
      context: ./
      args:
        FOO:
        BAZ:
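With that in place, a plain Compose build picks the values up from the .env file next to the compose file, with no extra flags (a sketch; it assumes the Dockerfile declares ARG FOO and ARG BAZ):
# FOO=bar and BAZ=qux are read from .env and passed as build args
docker compose build test_name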
I have this Dockerfile:
FROM mongo:4.4.6
# setup the environment variables
ARG MONGO_INITDB_ROOT_USERNAME
ARG MONGO_INITDB_ROOT_PASSWORD
ARG MONGO_INITDB_DATABASE
# copy the initialisation file to the mongo db entrypoint so that it gets executed on startup
COPY /mongo-init/init.js /docker-entrypoint-initdb.d/
RUN sed -i "s|database_user|${MONGO_INITDB_ROOT_USERNAME}|g" /docker-entrypoint-initdb.d/init.js
RUN sed -i "s/database_password/${MONGO_INITDB_ROOT_PASSWORD}/g" /docker-entrypoint-initdb.d/init.js
RUN sed -i "s;database_db;${MONGO_INITDB_DATABASE};g" /docker-entrypoint-initdb.d/init.js
CMD cat /docker-entrypoint-initdb.d/init.js && echo ${MONGO_INITDB_DATABASE}
EXPOSE 27017
And this file that gets copied into the container:
db.createUser({
user: "database_user",
pwd: "database_password",
roles: [
{
role: "readWrite",
db: "database_db",
},
],
});
db.createCollection("checklists");
db.createCollection("user");
When the container gets created with docker compose, the file's contents are like this:
db.createUser({
user: "",
pwd: "",
roles: [
{
role: "readWrite",
db: "",
},
],
});
db.createCollection("checklists");
db.createCollection("user");
Is there anything I'm missing that puts literally nothing into the file? I already made sure that when hardcoding the value instead of ${MONGO_INITDB_ROOT_USERNAME}, the value gets correctly inserted.
Edit:
docker-compose file:
version: "3.7"
services:
database:
image: "jupiter/database"
container_name: "jupiter-stack-database"
build:
context: ./database
dockerfile: ./Dockerfile
ports:
- "7040:27017"
environment:
MONGO_INITDB_ROOT_USERNAME: ${MONGO_INITDB_ROOT_USERNAME}
MONGO_INITDB_ROOT_PASSWORD: ${MONGO_INITDB_ROOT_PASSWORD}
MONGO_INITDB_DATABASE: ${MONGO_INITDB_DATABASE}
backend:
image: "jupiter/backend"
container_name: "jupiter-stack-backend"
build:
context: ./backend
dockerfile: ./Dockerfile
ports:
- "7020:3000"
depends_on:
- database
environment:
WAIT_HOSTS: database:27017
DB_HOST: database
DB_PORT: 27017
DB_DB: ${MONGO_INITDB_DATABASE}
DB_USER: ${MONGO_INITDB_ROOT_USERNAME}
DB_PASS: ${MONGO_INITDB_ROOT_PASSWORD}
The variables are taken from the .env file in the same directory. The same values are used in the backend service, where they are correct.
Running RUN sed -i "s|database_user|without variable|g" /docker-entrypoint-initdb.d/init.js and RUN echo "${MONGO_INITDB_ROOT_USERNAME}" > /tmp/my_var.txt results in the init.js file containing without variable in the correct place (instead of database_user), while my_var.txt remains empty.
When Compose starts a service, it works in two stages. First it builds the image if required; this uses only the settings in the build: block, not any of the others. Then it runs the built image with the remaining settings. In this sequence, that means the image is built first with the configuration files baked in, and the environment: settings are only considered afterwards.
Since details like the database credentials are runtime settings, you probably don't want to rebuild the image when they change. That means you need to write a script to rewrite the file when the container starts, then run the main container CMD.
#!/bin/sh
# Rewrite the config file
sed -i.bak \
-e "s|database_user|${MONGO_INITDB_ROOT_USERNAME}|g" \
-e "s/database_password/${MONGO_INITDB_ROOT_PASSWORD}/g" \
-e "s;database_db;${MONGO_INITDB_DATABASE};g" \
/docker-entrypoint-initdb.d/init.js
# Run the main container CMD (but see below)
exec "$#"
In your Dockerfile, you would typically make this script be the ENTRYPOINT; leave the CMD as it is. You don't need to mention any of the environment variables.
However, there's one further complication. Since you're extending a Docker Hub image, it comes with its own ENTRYPOINT and CMD. Setting ENTRYPOINT resets CMD. Look up the image's page on Docker Hub, click "the full list of tags" link, then click the link for your specific tag; that takes you to the image's Dockerfile which ends with
# from the standard mongo:4.4.6 Dockerfile
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["mongod"]
That means your entrypoint script needs to re-run the original entrypoint script
# (instead of "run the main container CMD")
# Run the original image's ENTRYPOINT
exec docker-entrypoint.sh "$#"
In your Dockerfile, you need to COPY your custom entrypoint script in, and repeat the original image's CMD.
FROM mongo:4.4.6
COPY /mongo-init/init.js /docker-entrypoint-initdb.d/
COPY custom-entrypoint.sh /usr/local/bin
ENTRYPOINT ["custom-entrypoint.sh"] # must be JSON-array form
CMD ["mongod"]
You don't need to change the docker-compose.yml file at all. You can double-check that this works by launching a temporary container; the overriding command replaces the CMD but not the ENTRYPOINT, so it runs after the setup in the entrypoint script.
docker-compose run --rm database \
cat /docker-entrypoint-initdb.d/init.js
I have a docker-compose.yml that has a section:
myservice:
  env_file:
    - myvars.env
My env variable file has:
myvars.env:
SOME_VAL=123
And then in my Dockerfile I have this:
..
RUN echo "some_val ${SOME_VAL}"
ENTRYPOINT bash ${APP_BASE}/run.sh SOME_VAL=${SOME_VAL}
When I run docker-compose up, the value of some_val is empty.
Why is SOME_VAL not accessible in my Dockerfile?
How do I pass the env variable SOME_VAL to my run.sh script?
You need to declare the variable with ENV in the Dockerfile before using it:
ENV variables are also available during the build, as soon as you introduce them with an ENV instruction.
Dockerfile
ENV SOME_VAL=""
RUN echo "some_val ${SOME_VAL}"
ENTRYPOINT bash ${APP_BASE}/run.sh SOME_VAL=${SOME_VAL}
When you docker-compose build an image, it only considers the build: sections of the docker-compose.yml file. Nothing else from any other part of the file is considered. environment: and env_file: settings aren't available, nor are volumes: nor networks:. The only way to pass settings in is through the Dockerfile ARG and the corresponding Compose args: settings, and even then, you only want to use this for things you'd "compile in" to the image.
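For completeness, the build-time route would look roughly like this (a sketch only; BUILD_SETTING is a hypothetical name, not something from the original files):
# Dockerfile
ARG BUILD_SETTING=default
RUN echo "compiled in at build time: ${BUILD_SETTING}"
# docker-compose.yml (inside the service definition)
build:
  context: .
  args:
    BUILD_SETTING: some-value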
Conveniently, shell scripts can directly access environment variables already, so you don't need to do anything at all; just use the variable.
#!/bin/sh
# run.sh
echo "SOME_VAL is $SOME_VAL"
# Dockerfile
FROM busybox
WORKDIR /app
COPY run.sh .
# RUN chmod +x run.sh
CMD ["./run.sh"]
# docker-compose.yml
version: '3.8'
services:
  echoer:
    build: .
    env_file:
      - my_vars.env
    # environment:
    #   SOME_VAL: foo
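To try it out (a sketch, assuming my_vars.env sits next to the compose file and contains SOME_VAL=123):
docker-compose up --build
# expected output from the echoer service: SOME_VAL is 123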
I have part of a docker-compose file as so
docker-compose.yml
pitchjob-fpm01:
  container_name: pitchjob-fpm01
  env_file:
    - env/base.env
  build:
    context: ./pitch
    dockerfile: PitchjobDockerfile
  volumes:
    - "/Sites/pitch/pitchjob/:/Sites"
  restart: always
  depends_on:
    - memcached01
    - memcached02
  links:
    - memcached01
    - memcached02
  extends:
    file: "shared/common.yml"
    service: pitch-common-env
my extended yml file is
compose.yml
version: '2.0'
services:
  pitch-common-env:
    environment:
      APP_VOL_DIR: Sites
      WEB_ROOT_FOLDER: web
      CONFIG_FOLDER: app/config
      APP_NAME: sony_pitch
in the docker file for pitchjob-fpm01 i have a command like so
PitchjobDockerfile
# Set folder groups
RUN chown -Rf www-data:www-data /$APP_VOL_DIR
But when I run the command to bring up the stack
docker-compose -f docker-compose-base.yml up --build --force-recreate --remove-orphans
I get the following error
failed to build: The command '/bin/sh -c chown -Rf www-data:www-data /$APP_VOL_DIR' returned a non-zero code: 1
I'm guessing this is because it doesn't have $APP_VOL_DIR, but why is that, given that the docker-compose file extends another compose file that defines the environment: variables?
You can use build-time arguments for that.
In Dockerfile define:
ARG APP_VOL_DIR=app_vol_dir
# Set folder groups
RUN chown -Rf www-data:www-data /$APP_VOL_DIR
Then in docker-compose.yml set app_vol_dir as build argument:
pitchjob-fpm01:
  container_name: pitchjob-fpm01
  env_file:
    - env/base.env
  build:
    context: ./pitch
    dockerfile: PitchjobDockerfile
    args:
      - app_vol_dir=Sites
I think your problem is not with the overrides, but with the way you are trying to do environment variable substitution. From the docs:
Note: Unlike the shell form, the exec form does not invoke a command shell. This means that normal shell processing does not happen. For example, RUN [ "echo", "$HOME" ] will not do variable substitution on $HOME. If you want shell processing then either use the shell form or execute a shell directly, for example: RUN [ "sh", "-c", "echo $HOME" ].