I am struggling with passing ENV variables using docker-compose.
I have a Dockerfile to build the container with a Java app:
FROM alpine:latest
ENV ftp_ip 127.0.0.1
ENV mongo_ip 127.0.0.1
ENV quorum_ip http://localhost:22000
RUN apk add --update openjdk8 && mkdir /var/backend/
RUN apk update
COPY license-system-0.0.1-SNAPSHOT.jar /var/backend/
EXPOSE 8080
ENTRYPOINT [ "java", "-jar", "-Dspring.quorum.host=${quorum_ip}", "-Dspring.ftp.server=${ftp_ip}", "-Dspring.data.mongodb.host=${mongo_ip}","/var/backend/license-system-0.0.1-SNAPSHOT.jar" ]
Then, the docker compose file:
version: "3"
services:
backend:
network_mode: host
build: backend
ports:
- "8080:8080"
environment:
- mongo_ip=${mongo_ip}
- ftp_ip=${ftp_ip}
- quorum_ip=${quorum_ip}
Finally, the container is started by a bash command:
quorum_ip="$1" mongo_ip="$2" ftp_ip="$3" docker-compose up -d --build
but docker inspect shows nothing promising. The variables are not set properly (they still show the default values from the Dockerfile), and the entrypoint parameters aren't substituted at all, not even with those defaults...
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"ftp_ip=127.0.0.1",
"mongo_ip=127.0.0.1",
"quorum_ip=http://localhost:22000"
],
"Cmd": null,
"ArgsEscaped": true,
"Image": "sha256:3ce51f52d70127f22462eafdb60321a4e477a4bec5aa092e860b8485e8575c26",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"java",
"-jar",
"-Dspring.quorum.host=${quorum_ip}",
"-Dspring.ftp.server=${ftp_ip}",
"-Dspring.data.mongodb.host=${mongo_ip}",
"/var/backend/license-system-0.0.1-SNAPSHOT.jar"
]
Am I missing something? Or am I doing something wrong?
If you want to use environment variables in your ENTRYPOINT, you should use the shell form instead of the exec form.
ENTRYPOINT java -jar -Dspring.quorum.host=${quorum_ip} -Dspring.ftp.server=${ftp_ip} -Dspring.data.mongodb.host=${mongo_ip} /var/backend/license-system-0.0.1-SNAPSHOT.jar
You could probably make it work with the exec form, but it only complicates the syntax.
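For completeness, a sketch of that exec-form workaround: wrapping the command in sh -c reintroduces a shell to do the expansion, which amounts to the shell form with extra quoting:
ENTRYPOINT ["sh", "-c", "java -jar -Dspring.quorum.host=${quorum_ip} -Dspring.ftp.server=${ftp_ip} -Dspring.data.mongodb.host=${mongo_ip} /var/backend/license-system-0.0.1-SNAPSHOT.jar"]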
You have to use build args instead of ENVs to build the image from the Dockerfile.
Dockerfile
FROM alpine:latest
ARG ftp_ip
ARG mongo_ip
ARG quorum_ip
RUN apk add --update openjdk8 && mkdir /var/backend/
RUN apk update
COPY license-system-0.0.1-SNAPSHOT.jar /var/backend/
EXPOSE 8080
ENTRYPOINT [ "java", "-jar", "-Dspring.quorum.host="${quorum_ip}, "-Dspring.ftp.server="${ftp_ip}, "-Dspring.data.mongodb.host="${mongo_ip},"/var/backend/license-system-0.0.1-SNAPSHOT.jar" ]
docker-compose
version: "3"
services:
backend:
network_mode: host
build:
context: .
dockerfile: ./path/to/backend/Dockerfile
args:
- mongo_ip=${mongo_ip}
- ftp_ip=${ftp_ip}
- quorum_ip=${quorum_ip}
ports:
- "8080:8080"
.env (to pass variables for use in docker-compose; docker-compose automatically reads variables from a .env file in the project directory, if it exists)
ftp_ip=127.0.0.1
mongo_ip=127.0.0.1
quorum_ip=http://localhost:22000
Then run docker-compose build to build the image with the correct values.
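One caveat (an addition, not part of the original answer): ARG values only exist while the image is being built, and the exec-form ENTRYPOINT above still won't expand ${...} at runtime. A common pattern is to copy each build arg into an ENV and switch to the shell form, so the values survive into the running container and a shell expands them:
ARG ftp_ip
ARG mongo_ip
ARG quorum_ip
# bake the build args into environment variables that exist at runtime
ENV ftp_ip=${ftp_ip} mongo_ip=${mongo_ip} quorum_ip=${quorum_ip}
# shell form, so the variables are expanded when the container starts
ENTRYPOINT java -jar -Dspring.quorum.host=${quorum_ip} -Dspring.ftp.server=${ftp_ip} -Dspring.data.mongodb.host=${mongo_ip} /var/backend/license-system-0.0.1-SNAPSHOT.jar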
Related
I am learning Docker with ASP.NET Core. I created an MVC application, a Dockerfile, and a docker-compose.yml file. I have the Dockerfile as below:
FROM mcr.microsoft.com/dotnet/sdk:7.0 as debug
#install debugger for NET Core
RUN apt-get update
RUN apt-get install -y unzip
RUN curl -sSL https://aka.ms/getvsdbgsh | /bin/sh /dev/stdin -v latest -l ~/vsdbg
RUN mkdir /app/
WORKDIR /app/
COPY ./src/testapp.csproj /app/testapp.csproj
RUN dotnet restore
COPY ./src/ /app/
RUN mkdir /out/
RUN dotnet publish --no-restore --output /out/ --configuration Release
EXPOSE 80
CMD dotnet run --urls "http://0.0.0.0:80"
Also, I have a docker-compose file as below:
version: "3.0"
services:
db:
image: postgres
restart: always
ports:
- "5432"
volumes:
- postgres:/var/lib/postgresql/data
environment:
POSTGRES_PASSWORD: password
POSTGRES_USER: sachin
pg_admin:
image: dpage/pgadmin4
restart: always
ports:
- "5555:80"
volumes:
- pg_admin:/var/lib/pgadmin
environment:
PGADMIN_DEFAULT_EMAIL: sachin.maharjan#dishhome.com.np
PGADMIN_DEFAULT_PASSWORD: password
web:
container_name: csharp
build:
context: .
target: debug
ports:
- "5000:80"
volumes:
- ./src:/app/
depends_on:
- db
volumes:
postgres:
pg_admin:
I have installed the Docker extension and the C# extension in my VS Code. I have a launch.json file inside the .vscode folder.
The content of launch.json:
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": ".NET Core Docker Attach",
      "type": "coreclr",
      "request": "attach",
      "processId": "${command:pickRemoteProcess}",
      "pipeTransport": {
        "pipeProgram": "docker",
        "pipeArgs": [ "exec", "-i", "csharp" ],
        "debuggerPath": "/root/vsdbg/vsdbg",
        "pipeCwd": "${workspaceRoot}",
        "quoteArgs": false
      },
      "sourceFileMap": {
        "/work": "${workspaceRoot}/src/"
      }
    }
  ]
}
The debugger stops at the breakpoint, but it shows an error like this.
Is this an error with the volume mapping, or did I set up launch.json wrong?
I noticed that in launch.json, in the sourceFileMap section, I had mapped the wrong folder on the local file system to the file system inside Docker.
Previously, the sourceFileMap section was like below:
"sourceFileMap":
{
"/work": "${workspaceRoot}/src/"
}
Then I changed the folder mapping to the right one and the error was solved.
"sourceFileMap": {
"/src": "${workspaceRoot}/src/"
}
I have this Dockerfile:
FROM mongo:4.4.6
# setup the environment variables
ARG MONGO_INITDB_ROOT_USERNAME
ARG MONGO_INITDB_ROOT_PASSWORD
ARG MONGO_INITDB_DATABASE
# copy the initialisation file to the mongo db entrypoint so that it gets executed on startup
COPY /mongo-init/init.js /docker-entrypoint-initdb.d/
RUN sed -i "s|database_user|${MONGO_INITDB_ROOT_USERNAME}|g" /docker-entrypoint-initdb.d/init.js
RUN sed -i "s/database_password/${MONGO_INITDB_ROOT_PASSWORD}/g" /docker-entrypoint-initdb.d/init.js
RUN sed -i "s;database_db;${MONGO_INITDB_DATABASE};g" /docker-entrypoint-initdb.d/init.js
CMD cat /docker-entrypoint-initdb.d/init.js && echo ${MONGO_INITDB_DATABASE}
EXPOSE 27017
And this file that gets copied into the container:
db.createUser({
  user: "database_user",
  pwd: "database_password",
  roles: [
    {
      role: "readWrite",
      db: "database_db",
    },
  ],
});
db.createCollection("checklists");
db.createCollection("user");
When the container gets created with docker compose, the file's contents are like this:
db.createUser({
  user: "",
  pwd: "",
  roles: [
    {
      role: "readWrite",
      db: "",
    },
  ],
});
db.createCollection("checklists");
db.createCollection("user");
Is there anything I'm missing that puts literally nothing into the file? I already made sure that, when hardcoding the value instead of ${MONGO_INITDB_ROOT_USERNAME}, the value gets inserted correctly.
Edit:
docker-compose file:
version: "3.7"
services:
database:
image: "jupiter/database"
container_name: "jupiter-stack-database"
build:
context: ./database
dockerfile: ./Dockerfile
ports:
- "7040:27017"
environment:
MONGO_INITDB_ROOT_USERNAME: ${MONGO_INITDB_ROOT_USERNAME}
MONGO_INITDB_ROOT_PASSWORD: ${MONGO_INITDB_ROOT_PASSWORD}
MONGO_INITDB_DATABASE: ${MONGO_INITDB_DATABASE}
backend:
image: "jupiter/backend"
container_name: "jupiter-stack-backend"
build:
context: ./backend
dockerfile: ./Dockerfile
ports:
- "7020:3000"
depends_on:
- database
environment:
WAIT_HOSTS: database:27017
DB_HOST: database
DB_PORT: 27017
DB_DB: ${MONGO_INITDB_DATABASE}
DB_USER: ${MONGO_INITDB_ROOT_USERNAME}
DB_PASS: ${MONGO_INITDB_ROOT_PASSWORD}
The variables are taken from the .env file in the same directory. The same variables are used in the backend service, where they contain the correct values.
Running RUN sed -i "s|database_user|without variable|g" /docker-entrypoint-initdb.d/init.js and RUN echo "${MONGO_INITDB_ROOT_USERNAME}" > /tmp/my_var.txt results in the init.js file containing without variable in the correct place (instead of database_user), while my_var.txt remains empty.
When Compose runs an image, it works in two stages. First it builds the image, if required; this uses only the settings in the build: block and none of the others. Then it runs the built image with the remaining settings. In this sequence, that means the image is built first with the configuration files baked in, and the environment: settings are only applied afterwards.
Since details like the database credentials are runtime settings, you probably don't want to rebuild the image when they change. That means you need to write a script to rewrite the file when the container starts, then run the main container CMD.
#!/bin/sh
# Rewrite the config file
sed -i.bak \
    -e "s|database_user|${MONGO_INITDB_ROOT_USERNAME}|g" \
    -e "s/database_password/${MONGO_INITDB_ROOT_PASSWORD}/g" \
    -e "s;database_db;${MONGO_INITDB_DATABASE};g" \
    /docker-entrypoint-initdb.d/init.js
# Run the main container CMD (but see below)
exec "$@"
In your Dockerfile, you would typically make this script be the ENTRYPOINT; leave the CMD as it is. You don't need to mention any of the environment variables.
However, there's one further complication. Since you're extending a Docker Hub image, it comes with its own ENTRYPOINT and CMD. Setting ENTRYPOINT resets CMD. Look up the image's page on Docker Hub, click "the full list of tags" link, then click the link for your specific tag; that takes you to the image's Dockerfile which ends with
# from the standard mongo:4.4.6 Dockerfile
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["mongod"]
That means your entrypoint script needs to re-run the original entrypoint script
# (instead of "run the main container CMD")
# Run the original image's ENTRYPOINT
exec docker-entrypoint.sh "$@"
In your Dockerfile, you need to COPY your custom entrypoint script in, and repeat the original image's CMD.
FROM mongo:4.4.6
COPY /mongo-init/init.js /docker-entrypoint-initdb.d/
COPY custom-entrypoint.sh /usr/local/bin
ENTRYPOINT ["custom-entrypoint.sh"] # must be JSON-array form
CMD ["mongod"]
You don't need to change the docker-compose.yml file at all. You can double-check that this works by launching a temporary container; the override command replaces the CMD but not the ENTRYPOINT, so it runs after the setup in the entrypoint script.
docker-compose run --rm database \
    cat /docker-entrypoint-initdb.d/init.js
So I have a Node.js API which I created, and it depends on a private NPM package in its package.json.
I have the following Docker files in this project:
Dockerfile
FROM node:10
WORKDIR /usr/src/app
ARG NPM_TOKEN
COPY .npmrc ./
COPY package.json ./
RUN npm install
RUN rm -f ./.npmrc
COPY . .
CMD [ "npm", "start" ]
.npmrc
//registry.npmjs.org/:_authToken=${NPM_TOKEN}
I have managed to build this API by running the command:
docker build --build-arg NPM_TOKEN=${MY_NPM_TOKEN} -t api:1.0.0 .
This successfully builds the image.
In my main application, I have a docker-compose.yml with which I want to run this image:
version: '3'
services:
  redis:
    container_name: redis
    image: redis:3.2.8
    ports:
      - "6379:6379"
    volumes:
      - ./data:/data
  api:
    container_name: api
    image: api:1.0.0
    build:
      context: .
      args:
        - NPM_TOKEN={MY_NPM_TOKEN}
    ports:
      - "3001:3001"
When I run docker-compose up it fails with the error:
Failed to replace env in config: ${NPM_TOKEN}
Does anyone have an idea, why my image is not taking in the ARG that is passed?
From my understanding, you are trying to pass the NPM_TOKEN argument from the environment variable named MY_NPM_TOKEN.
However, there is a syntax error; you should update your docker-compose.yml file
from - NPM_TOKEN={MY_NPM_TOKEN}
to - NPM_TOKEN=${MY_NPM_TOKEN}
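With that fix in place, Compose substitutes the value from the shell environment at build time; for example (with a hypothetical token value):
export MY_NPM_TOKEN=xxxxxxxx
docker-compose build api
docker-compose up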
I want to include the Xvfb binary from the alpine image in the nodejs image.
Right now, if I docker exec into the container for the nodejs image, Xvfb doesn't exist, but it does exist in the alpine container.
docker-compose.yml:
version: '2'
services:
  nodejs:
    build:
      context: .
      dockerfile: ./dockerfiles/Dockerfile-nodejs
    ports:
      - "3000:3000"
    volumes:
      - .:/usr/src/nodejs
    depends_on:
      - alpine
  alpine:
    build:
      context: .
      dockerfile: ./dockerfiles/Dockerfile-alpine
Dockerfile-nodejs:
FROM node:6.2.0
RUN mkdir -p /usr/src/nodejs
WORKDIR /usr/src/nodejs
COPY package.json /usr/src/nodejs/
RUN npm install
COPY . /usr/src/nodejs
CMD [ "npm", "start" ]
Dockerfile-alpine:
FROM alpine:3.3
RUN apk update
RUN apk add xvfb
CMD ["/bin/sh"]
I have part of a docker-compose file like so:
docker-compose.yml
pitchjob-fpm01:
  container_name: pitchjob-fpm01
  env_file:
    - env/base.env
  build:
    context: ./pitch
    dockerfile: PitchjobDockerfile
  volumes:
    - "/Sites/pitch/pitchjob/:/Sites"
  restart: always
  depends_on:
    - memcached01
    - memcached02
  links:
    - memcached01
    - memcached02
  extends:
    file: "shared/common.yml"
    service: pitch-common-env
My extended yml file is:
compose.yml
version: '2.0'
services:
  pitch-common-env:
    environment:
      APP_VOL_DIR: Sites
      WEB_ROOT_FOLDER: web
      CONFIG_FOLDER: app/config
      APP_NAME: sony_pitch
In the Dockerfile for pitchjob-fpm01 I have a command like so:
PitchjobDockerfile
# Set folder groups
RUN chown -Rf www-data:www-data /$APP_VOL_DIR
But when I run the command to bring up the stack
docker-compose -f docker-compose-base.yml up --build --force-recreate --remove-orphans
I get the following error
failed to build: The command '/bin/sh -c chown -Rf www-data:www-data
/$APP_VOL_DIR' returned a non-zero code: 1
I'm guessing this is because it doesn't have $APP_VOL_DIR, but why is that so, if the docker-compose file is extending another compose file that defines the environment: variables?
You can use build-time arguments for that.
In Dockerfile define:
ARG APP_VOL_DIR=app_vol_dir
# Set folder groups
RUN chown -Rf www-data:www-data /$APP_VOL_DIR
Then in docker-compose.yml set app_vol_dir as a build argument:
pitchjob-fpm01:
  container_name: pitchjob-fpm01
  env_file:
    - env/base.env
  build:
    context: ./pitch
    dockerfile: PitchjobDockerfile
    args:
      - app_vol_dir=Sites
I think your problem is not with the overrides, but with the way you are trying to do environment variable substitution. From the docs:
Note: Unlike the shell form, the exec form does not invoke a command shell. This means that normal shell processing does not happen. For example, RUN [ "echo", "$HOME" ] will not do variable substitution on $HOME. If you want shell processing then either use the shell form or execute a shell directly, for example: RUN [ "sh", "-c", "echo $HOME" ].