I have this Dockerfile:
FROM mongo:4.4.6
# set up the environment variables
ARG MONGO_INITDB_ROOT_USERNAME
ARG MONGO_INITDB_ROOT_PASSWORD
ARG MONGO_INITDB_DATABASE
# copy the initialisation file into the mongo entrypoint directory so that it gets executed on startup
COPY /mongo-init/init.js /docker-entrypoint-initdb.d/
RUN sed -i "s|database_user|${MONGO_INITDB_ROOT_USERNAME}|g" /docker-entrypoint-initdb.d/init.js
RUN sed -i "s/database_password/${MONGO_INITDB_ROOT_PASSWORD}/g" /docker-entrypoint-initdb.d/init.js
RUN sed -i "s;database_db;${MONGO_INITDB_DATABASE};g" /docker-entrypoint-initdb.d/init.js
CMD cat /docker-entrypoint-initdb.d/init.js && echo ${MONGO_INITDB_DATABASE}
EXPOSE 27017
And this file that gets copied into the container:
db.createUser({
user: "database_user",
pwd: "database_password",
roles: [
{
role: "readWrite",
db: "database_db",
},
],
});
db.createCollection("checklists");
db.createCollection("user");
When the container gets created with docker compose, the file's contents are like this:
db.createUser({
user: "",
pwd: "",
roles: [
{
role: "readWrite",
db: "",
},
],
});
db.createCollection("checklists");
db.createCollection("user");
Is there anything I'm missing that puts literally nothing into the file? I already made sure that when I hardcode the value instead of using ${MONGO_INITDB_ROOT_USERNAME}, the value gets inserted correctly.
Edit:
docker-compose file:
version: "3.7"
services:
database:
image: "jupiter/database"
container_name: "jupiter-stack-database"
build:
context: ./database
dockerfile: ./Dockerfile
ports:
- "7040:27017"
environment:
MONGO_INITDB_ROOT_USERNAME: ${MONGO_INITDB_ROOT_USERNAME}
MONGO_INITDB_ROOT_PASSWORD: ${MONGO_INITDB_ROOT_PASSWORD}
MONGO_INITDB_DATABASE: ${MONGO_INITDB_DATABASE}
backend:
image: "jupiter/backend"
container_name: "jupiter-stack-backend"
build:
context: ./backend
dockerfile: ./Dockerfile
ports:
- "7020:3000"
depends_on:
- database
environment:
WAIT_HOSTS: database:27017
DB_HOST: database
DB_PORT: 27017
DB_DB: ${MONGO_INITDB_DATABASE}
DB_USER: ${MONGO_INITDB_ROOT_USERNAME}
DB_PASS: ${MONGO_INITDB_ROOT_PASSWORD}
The variables are taken from the .env file in the same directory. The same variables are used by the backend and contain the correct values there.
Running RUN sed -i "s|database_user|without variable|g" /docker-entrypoint-initdb.d/init.js and RUN echo "${MONGO_INITDB_ROOT_USERNAME}" > /tmp/my_var.txt results in the init.js file containing without variable in the correct place (instead of database_user), while the output in my_var.txt remains empty.
When Compose brings up a service, it works in two stages. First it builds the image, if required; the build sees only the settings in the build: block and none of the others. Then it runs the built image with the remaining settings. That means the image is built first, with the configuration files baked in, and the environment: settings are only applied afterwards.
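To make the split concrete, here is the relevant shape of your service (a sketch; only the build: block is visible to docker-compose build, everything else applies at docker-compose up):
services:
  database:
    build:                    # read at build time only
      context: ./database
      dockerfile: ./Dockerfile
    environment:              # read at run time only
      MONGO_INITDB_ROOT_USERNAME: ${MONGO_INITDB_ROOT_USERNAME}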
Since details like the database credentials are runtime settings, you probably don't want to rebuild the image when they change. That means you need to write a script to rewrite the file when the container starts, then run the main container CMD.
#!/bin/sh
# Rewrite the config file
sed -i.bak \
-e "s|database_user|${MONGO_INITDB_ROOT_USERNAME}|g" \
-e "s/database_password/${MONGO_INITDB_ROOT_PASSWORD}/g" \
-e "s;database_db;${MONGO_INITDB_DATABASE};g" \
/docker-entrypoint-initdb.d/init.js
# Run the main container CMD (but see below)
exec "$#"
In your Dockerfile, you would typically make this script be the ENTRYPOINT; leave the CMD as it is. You don't need to mention any of the environment variables.
However, there's one further complication. Since you're extending a Docker Hub image, it comes with its own ENTRYPOINT and CMD. Setting ENTRYPOINT resets CMD. Look up the image's page on Docker Hub, click "the full list of tags" link, then click the link for your specific tag; that takes you to the image's Dockerfile which ends with
# from the standard mongo:4.4.6 Dockerfile
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["mongod"]
That means your entrypoint script needs to re-run the original entrypoint script
# (instead of "run the main container CMD")
# Run the original image's ENTRYPOINT
exec docker-entrypoint.sh "$@"
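Putting the pieces together, the complete custom-entrypoint.sh becomes:
#!/bin/sh
# Rewrite the init file using the runtime environment settings
sed -i.bak \
    -e "s|database_user|${MONGO_INITDB_ROOT_USERNAME}|g" \
    -e "s/database_password/${MONGO_INITDB_ROOT_PASSWORD}/g" \
    -e "s;database_db;${MONGO_INITDB_DATABASE};g" \
    /docker-entrypoint-initdb.d/init.js
# Hand off to the original image's entrypoint, passing the CMD along
exec docker-entrypoint.sh "$@"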
In your Dockerfile, you need to COPY your custom entrypoint script in, and repeat the original image's CMD.
FROM mongo:4.4.6
COPY /mongo-init/init.js /docker-entrypoint-initdb.d/
COPY custom-entrypoint.sh /usr/local/bin
ENTRYPOINT ["custom-entrypoint.sh"] # must be JSON-array form
CMD ["mongod"]
You don't need to change the docker-compose.yml file at all. You can double-check that this works by launching a temporary container; the override command replaces CMD but not ENTRYPOINT, so it runs after the setup in the entrypoint script.
docker-compose run --rm database \
cat /docker-entrypoint-initdb.d/init.js
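If everything is wired correctly, the printed init.js should show your real credentials in place of the database_user, database_password, and database_db placeholders, since docker-compose run still applies the environment: settings and the entrypoint script runs before the override command.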
Related
I have a docker-compose.yml that has a section:
myservice:
env_file:
- myvars.env
My env variable file has:
myvars.env:
SOME_VAL=123
And then in my Dockerfile I have this:
..
RUN echo "some_val ${SOME_VAL}"
ENTRYPOINT bash ${APP_BASE}/run.sh SOME_VAL=${SOME_VAL}
When I run docker-compose up, the value of some_val is empty.
Why is SOME_VAL not accessible in my dockerfile?
How do I pass the env variable SOME_VAL to my run.sh script?
You need to declare the variable with ENV in the Dockerfile before using it:
ENV variables are also available during the build, as soon as you introduce them with an ENV instruction.
Dockerfile
ENV SOME_VAL=""
RUN echo "some_val ${SOME_VAL}"
ENTRYPOINT bash ${APP_BASE}/run.sh SOME_VAL=${SOME_VAL}
When you docker-compose build an image, it only considers the build: sections of the docker-compose.yml file. Nothing else from any other part of the file is considered. environment: and env_file: settings aren't available, nor are volumes: nor networks:. The only way to pass settings in is through the Dockerfile ARG and the corresponding Compose args: settings, and even then, you only want to use this for things you'd "compile in" to the image.
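For contrast, here is what the ARG route would look like (a sketch; note the value gets baked into the image at build time, which you usually don't want for anything secret):
# Dockerfile
ARG SOME_VAL
RUN echo "some_val ${SOME_VAL}"

# docker-compose.yml
services:
  myservice:
    build:
      context: .
      args:
        SOME_VAL: 123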
Conveniently, shell scripts can directly access environment variables already, so you don't need to do anything at all; just use the variable.
#!/bin/sh
# run.sh
echo "SOME_VAL is $SOME_VAL"
# Dockerfile
FROM busybox
WORKDIR /app
COPY run.sh .
# RUN chmod +x run.sh
CMD ["./run.sh"]
# docker-compose.yml
version: '3.8'
services:
echoer:
build: .
env_file:
- myvars.env
# environment:
# SOME_VAL: foo
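A quick way to check this works (assuming the three files above sit next to myvars.env in the current directory):
docker-compose up --build echoer
# expected container log line:
# SOME_VAL is 123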
I recently tried to clone our production code to a local setup; this same code is running in production.
The docker file looks like
FROM jboss/keycloak
COPY km.json /opt/jboss
COPY entrypoint.sh /opt/jboss
USER root
RUN chown jboss /opt/jboss/entrypoint.sh && chmod +x /opt/jboss/entrypoint.sh
USER 1000
ENTRYPOINT ["/opt/jboss/entrypoint.sh"]
CMD [""]
I can build the Docker image successfully, but when I try to run it I get this error:
Caused by: java.io.FileNotFoundException: km.json (No such file or directory)
Repo structure
km/keycloak-images/km.json
km/keycloak-images/DockerFile
km/keycloak-images/entrypoint.sh
Docker compose file structure
/km/docker-compose.yml
/km/docker-compose.dev.yml
The docker-compose.dev.yml looks like
version: '3'
# The only service we expose in local dev is the keycloak server
# running an h2 database.
services:
keycloak:
build: keycloak-image
image: dt-keycloak
environment:
DB_VENDOR: h2
KEYCLOAK_USER: admin
KEYCLOAK_PASSWORD: password
KEYCLOAK_HOSTNAME: localhost
ports:
- 8080:8080
I run the command from /km
docker-compose -f docker-compose.dev.yml up --build
Basically I was not able to find the file inside the docker container to check:
$ docker run --rm -it <imageName> /bin/bash  # run the image and get a shell inside the container
cd /opt/jboss  # check whether km.json is there or not
Edited: Basically the path for the source in COPY (km.json) is incorrect. Try using an absolute path, or make it relative to the build context.
FROM jboss/keycloak
# changed this line
COPY ./km.json /opt/jboss
COPY entrypoint.sh /opt/jboss
USER root
RUN chown jboss /opt/jboss/entrypoint.sh && chmod +x /opt/jboss/entrypoint.sh
USER 1000
ENTRYPOINT ["/opt/jboss/entrypoint.sh"]
CMD [""]
Your copy operation is wrong.
If you run from
/km
you probably need to change the COPY to
COPY keycloak-images/km.json /opt/jboss
If you run on Mac, try using ADD instead of COPY, since Mac has many issues with COPY.
Try with this compose file:
version: '3'
services:
keycloak:
build:
context: ./keycloak-images
image: dt-keycloak
environment:
- DB_VENDOR=h2
- KEYCLOAK_USER=admin
- KEYCLOAK_PASSWORD=password
- KEYCLOAK_HOSTNAME=localhost
ports:
- 8080:8080
You have to specify the docker build context so that the files you need to copy are passed to the daemon.
Note that you need to adapt this context path if you do not execute docker-compose from the km directory. This is because in your Dockerfile you have specified
COPY km.json /opt/jboss
COPY entrypoint.sh /opt/jboss
That is to say, the build context sent to the Docker daemon should be a directory containing these files.
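To illustrate with the paths from your repo: with context: ./keycloak-images and docker-compose run from /km, the COPY sources resolve like this:
# build context sent to the daemon: /km/keycloak-images
COPY km.json /opt/jboss        # found at /km/keycloak-images/km.json
COPY entrypoint.sh /opt/jboss  # found at /km/keycloak-images/entrypoint.sh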
There is a start.sh script under the application:
scripts/start.sh
#!/bin/bash
source /env/set.sh
env/set.sh content:
#!/bin/bash
export DB_USERNAME=a
export DB_PASSWORD=b
docker-compose.yml
version: '3.4'
services:
web:
build: .
expose:
- "5000"
command: scripts/start.sh
tty: true
stdin_open: true
After running docker-compose build && docker-compose up and logging into the container, the env values had not been set; I had to run source /env/set.sh manually.
Why didn't command: scripts/start.sh work?
Let's assume you need to set it after the container has started and can't use the docker-compose environment variables.
You are sourcing the script that exports all the needed variables, but then that bash session ends; what you need is to make the exports permanent. I described a way to do it in a similar question here.
What you need is an entrypoint:
#!/bin/bash
# if env variable is not set, set it
if [ -z "$DB_USERNAME" ];
then
# env variable is not set
export DB_USERNAME=username;
fi
# pass the arguments received by the entrypoint.sh
# to /bin/bash with command (-c) option
exec /bin/bash -c "$@"
Then add this script as entrypoint in your docker-compose file.
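For example, based on your compose file (a sketch; this assumes entrypoint.sh is copied into the image's working directory and is executable):
version: '3.4'
services:
  web:
    build: .
    expose:
      - "5000"
    entrypoint: ./entrypoint.sh
    command: scripts/start.sh
    tty: true
    stdin_open: true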
So I'm trying to set up a docker-compose file and do some data initialisation in it. However, when I run docker-compose up in my terminal (a Windows one; I also tried a bash one) I get sh: 1: ./entrypoint: not found, despite it showing the file when I add ls to my command.
mssql_1 | data
mssql_1 | entrypoint.sh
mssql_1 | init.sql
mssql_1 | table
My docker-compose file:
version: '2.1'
services:
mssqldata:
image: microsoft/mssql-server-linux:latest
entrypoint: /bin/bash
mssql:
image: microsoft/mssql-server-linux:latest
ports:
- 1433:1433
volumes:
- /var/opt/mssql
- ./sql:/usr/src/app
working_dir: /usr/src/app
command: sh -c 'chmod +x ./entrypoint.sh; ./entrypoint.sh & /opt/mssql/bin/sqlservr;'
environment:
ACCEPT_EULA: Y
SA_PASSWORD: P#55w0rd
volumes_from:
- mssqldata
Folder structure:
docker-compose.yml
sql/
data/
table/
entrypoint.sh
init.sql
In my opinion this should be happening in your Dockerfile instead of in your docker-compose.yml file. Generally the idea behind docker-compose is to get a multi-container application running and to get its containers to talk to each other, for example an ASP.NET + MSSQL + IIS application with one container for each piece.
In any case, what you are trying to achieve can be done in your Dockerfile.
I'll try to write this Dockerfile for you as far as possible:
FROM microsoft/mssql-server-linux:latest
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
EXPOSE 1433
COPY entrypoint.sh /entrypoint.sh
# I suggest you start "/opt/mssql/bin/sqlservr" from within entrypoint.sh; it would simplify things a bit.
ENTRYPOINT ["/bin/bash", "/entrypoint.sh"]
Here is the docker-compose.yml file that you need:
version: '2'
services:
ms-sql-server-container:
image: mssql-container:latest
# This refers to the Docker image created by building our Dockerfile with:
# docker build -t mssql-container .
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=P#55w0rd
volumes:
- ./sql:/usr/src/app
# I don't really understand the reason for the second volume you had; if you can explain,
# I can edit my answer to accommodate it if I think you need it.
ports:
- 1433:1433
Let me know if this works.
I have part of a docker-compose file, like so:
docker-compose.yml
pitchjob-fpm01:
container_name: pitchjob-fpm01
env_file:
- env/base.env
build:
context: ./pitch
dockerfile: PitchjobDockerfile
volumes:
- "/Sites/pitch/pitchjob/:/Sites"
restart: always
depends_on:
- memcached01
- memcached02
links:
- memcached01
- memcached02
extends:
file: "shared/common.yml"
service: pitch-common-env
My extended yml file is:
compose.yml
version: '2.0'
services:
pitch-common-env:
environment:
APP_VOL_DIR: Sites
WEB_ROOT_FOLDER: web
CONFIG_FOLDER: app/config
APP_NAME: sony_pitch
In the Dockerfile for pitchjob-fpm01 I have a command like so:
PitchjobDockerfile
# Set folder groups
RUN chown -Rf www-data:www-data /$APP_VOL_DIR
But when I run the command to bring up the stack
docker-compose -f docker-compose-base.yml up --build --force-recreate --remove-orphans
I get the following error
failed to build: The command '/bin/sh -c chown -Rf www-data:www-data
/$APP_VOL_DIR' returned a non-zero code: 1
I'm guessing this is because it doesn't have $APP_VOL_DIR, but why is that, if the docker-compose file extends another compose file that defines environment: variables?
You can use build-time arguments for that.
In Dockerfile define:
ARG APP_VOL_DIR=app_vol_dir
# Set folder groups
RUN chown -Rf www-data:www-data /$APP_VOL_DIR
Then in docker-compose.yml set app_vol_dir as build argument:
pitchjob-fpm01:
container_name: pitchjob-fpm01
env_file:
- env/base.env
build:
context: ./pitch
dockerfile: PitchjobDockerfile
args:
- app_vol_dir=Sites
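To verify the argument actually reached the build, a throwaway debugging line in PitchjobDockerfile works (remove it afterwards):
# temporary: print the value during the build
RUN echo "APP_VOL_DIR is: $APP_VOL_DIR"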
I think your problem is not with the overrides, but with the way you are trying to do environment variable substitution. From the docs:
Note: Unlike the shell form, the exec form does not invoke a command shell. This means that normal shell processing does not happen. For example, RUN [ "echo", "$HOME" ] will not do variable substitution on $HOME. If you want shell processing then either use the shell form or execute a shell directly, for example: RUN [ "sh", "-c", "echo $HOME" ].