Docker environment variable cannot be overridden in entrypoint

I set an env in my Dockerfile with a default value, like this:
ENV HOST=abc
And I override this env in my docker-compose YAML file as follows:
environment:
  HOST: "efg"
But when I echo this env in my entrypoint script, it is abc.
And when I use docker exec to run echo $HOST in the container, it is efg.
Can anyone tell me why?

Related

Env variables not substituted in Dockerfile

FROM alpine:3.11
COPY out/ /bin/
CMD ["command", "--flag1", "${HOST}", "--flag2", "${PORT}", "--flag3", "${AUTH_TOKEN}"]
This is the Dockerfile used. I am loading the env variables at run time through an env file.
But the variables are not substituted when the command runs. If I override the CMD and exec into the container, I am able to see the envs though.
What am I missing here?
You are running CMD in exec form. Switch to shell form and it will work, because environment variable substitution requires a shell (see the Dockerfile reference on shell vs. exec form for more reading).
Your example:
CMD command --flag1 ${HOST} --flag2 ${PORT} --flag3 ${AUTH_TOKEN}
Full generic example:
Dockerfile:
FROM debian:stretch-slim
CMD echo ${env}
Run:
docker build .
docker run --rm -e env=hi <image id from build step>
hi
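If you prefer to keep the exec form, a common alternative (not part of the original answer) is to invoke the shell explicitly, so the ${...} expansion still happens when the container starts:
CMD ["sh", "-c", "command --flag1 ${HOST} --flag2 ${PORT} --flag3 ${AUTH_TOKEN}"]
Here sh -c is the shell that performs the expansion at run time.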

Why can't I load environment variables in the Dockerfile? [duplicate]

I'm building a container for a ruby app. My app's configuration is contained within environment variables (loaded inside the app with dotenv).
One of those configuration variables is the public IP of the app, which is used internally to make links.
I need to add a dnsmasq entry pointing this IP to 127.0.0.1 inside the container, so it can fetch the app's links as if it were not containerized.
I'm therefore trying to set an ENV in my Dockerfile which would pass an environment variable to the container.
I tried a few things.
ENV REQUEST_DOMAIN $REQUEST_DOMAIN
ENV REQUEST_DOMAIN `REQUEST_DOMAIN`
Everything passes the "REQUEST_DOMAIN" string instead of the value of the environment variable though.
Is there a way to pass environment variables values from the host machine to the container?
You should use the ARG instruction in your Dockerfile, which is meant for this purpose.
The ARG instruction defines a variable that users can pass at build-time to the builder with the docker build command using the --build-arg <varname>=<value> flag.
So your Dockerfile will have this line:
ARG request_domain
or if you'd prefer a default value:
ARG request_domain=127.0.0.1
Now you can reference this variable inside your Dockerfile:
ENV request_domain=$request_domain
then you will build your container like so:
$ docker build --build-arg request_domain=mydomain .
Note 1: If you reference an ARG in your Dockerfile but don't pass it with --build-arg, it falls back to its default value, or to an empty string if no default was declared.
Note 2: If a user specifies a build argument that was not defined in the Dockerfile, the build outputs a warning:
[Warning] One or more build-args [foo] were not consumed.
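Putting it together, a minimal sketch (the Ruby base image and tag are just assumptions for illustration):
FROM ruby:3.2
# Build-time argument with a default, overridable via --build-arg.
ARG request_domain=127.0.0.1
# Promote the build arg to an environment variable so it is visible at run time.
ENV REQUEST_DOMAIN=$request_domain
Then build with:
docker build --build-arg request_domain=mydomain -t my-ruby-app .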
Another option is to pre-process the Dockerfile with envsubst. So you can do:
cat Dockerfile | envsubst | docker build -t my-target -
Then have a Dockerfile with something like:
ENV MY_ENV_VAR $MY_ENV_VAR
I guess there might be a problem with some special characters, but this works for most cases at least.
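For example (assuming MY_ENV_VAR is exported in the calling shell):
export MY_ENV_VAR="some value"
cat Dockerfile | envsubst | docker build -t my-target -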
This is for those looking to pass env variables from docker-compose via a .env file to the Dockerfile during build, and then expose those args as environment variables in the container.
Typical docker-compose file
services:
  web:
    build:
      context: ./api
      dockerfile: Dockerfile
      args:
        - SECRET_KEY=$SECRET_KEY
        - DATABASE_URL=$DATABASE_URL
        - AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID
This passes the env variables present in the .env file to the args of the build section.
Typical .env file
SECRET_KEY=blahblah
DATABASE_URL=dburl
Now when you run the docker-compose up -d command, docker-compose takes the values from the .env file and substitutes them into the compose file. The Dockerfile of web then receives all those variables as build args. A typical Dockerfile for web:
FROM python:3.6-alpine
ARG SECRET_KEY
ARG DATABASE_URL
ARG AWS_ACCESS_KEY_ID
ARG AWS_SECRET_ACCESS_KEY
ARG AWS_BUCKET
ARG AWS_REGION
ARG CLOUDFRONT_DOMAIN
ENV CELERY_BROKER_URL redis://redis:6379/0
ENV CELERY_RESULT_BACKEND redis://redis:6379/0
ENV C_FORCE_ROOT true
ENV SECRET_KEY ${SECRET_KEY?secretkeynotset}
ENV DATABASE_URL ${DATABASE_URL?envdberror}
Now we have received SECRET_KEY and DATABASE_URL as ARGs in the Dockerfile, and assigned them to environment variables with ENV SECRET_KEY ${SECRET_KEY?secretkeynotset}, so the running container also has those variables in its environment.
Remember not to write ARG $SECRET_KEY (which I did at first). It should be ARG SECRET_KEY.
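To sanity-check the substitution before building, you can print the resolved compose file (a quick check, not part of the original answer):
docker-compose config
This prints the compose file with the values from .env already filled in, so you can confirm the args are what you expect.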
An alternative using envsubst without losing the ability to use commands like COPY or ADD, and without using intermediate files would be to use Bash's Process Substitution:
docker build -f <(envsubst < Dockerfile) -t my-target .
Load environment variables from a file you generate just before the build.
export MYVAR="my_var_outside"
cat > build/env.sh <<EOF
MYVAR=${MYVAR}
EOF
... then in the Dockerfile
ADD build /build
RUN /build/test.sh
where test.sh loads MYVAR from env.sh
#!/bin/bash
. /build/env.sh
echo $MYVAR > /tmp/testfile
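To verify (the my-image tag here is just an assumption for illustration):
docker build -t my-image .
docker run --rm my-image cat /tmp/testfile
my_var_outside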
If you just want to find and replace all environment variables ($ExampleEnvVar) in a Dockerfile and then build it, this would work:
envsubst < /path/to/Dockerfile | docker build -t myDockerImage . -f -
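Note that envsubst replaces every $reference it finds, including ones you might want left for the container to expand at run time. If that matters, you can list the variables to substitute explicitly (a small sketch):
envsubst '$ExampleEnvVar' < /path/to/Dockerfile | docker build -t myDockerImage . -f -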
When using build-arg...
docker build --build-arg CODE_VERSION=1.2 .
...consider that the variable is not available after FROM:
ARG CODE_VERSION=latest
FROM base:${CODE_VERSION}
An ARG declared before a FROM is outside of a build stage, so it can’t be used in any instruction after a FROM.
Generally ARGs should be placed after FROM if not required during FROM:
FROM base:xy
ARG ABC=123
To use the default value of an ARG declared before the first FROM use an ARG instruction without a value inside of a build stage:
ARG VERSION=latest
FROM busybox:$VERSION
ARG VERSION
RUN echo $VERSION > image_version
https://docs.docker.com/engine/reference/builder/#understand-how-arg-and-from-interact
Add the -e flag to pass environment variables to the container.
Example:
$ MYSQLHOSTIP=$(sudo docker inspect --format="{{ .NetworkSettings.IPAddress }}" $MYSQL_CONTAINER_ID)
$ sudo docker run -e DBIP=$MYSQLHOSTIP -i -t myimage /bin/bash
root@87f235949a13:/# echo $DBIP
172.17.0.2

Dockerfile ENV available in the .env file

I have an ENV set in Dockerfile:
...
ENV APP_HOME=/myapp
...
I also have a .env file:
DATA_DIR=$APP_HOME/data
If I run the image with the env file:
docker run --env-file .env app_image
and echo the env variables in the shell (bash),
$ echo $DATA_DIR
$APP_HOME/data
I was expecting:
/myapp/data
My question is, how can I pass the ENV set during the docker build to the .env file?
My question is, how can I pass the ENV set during the docker build to
the .env file?
The short answer is that it is not possible.
The reason is that the .env file is not processed by a shell: it will not execute any command or expand other environment variables, so docker run just passes in the literal string.
As a result, you will see the literal value $APP_HOME/data.
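If you control the Dockerfile, one workaround (a sketch, not part of the original answer) is to do the expansion there instead, since ENV values are substituted at build time:
ENV APP_HOME=/myapp
# $APP_HOME is expanded during the build, so DATA_DIR becomes /myapp/data.
ENV DATA_DIR=$APP_HOME/data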

Docker Environment Variable Not Overridden

Given I have a Dockerfile like:
ARG MAX_MEMORY_PER_NODE="10GB"
ENV P_MAX_MEMORY_PER_NODE="${MAX_MEMORY_PER_NODE}"
ENTRYPOINT ["/var/p/entrypoint.sh"]
And the entrypoint.sh does something like:
echo "Max memory ${P_MAX_MEMORY_PER_NODE}"
If I were to run the container using the defaults, I would expect
Max Memory 10GB
And that works, but if I run
docker run me/mycontainer:latest -e P_MAX_MEMORY_PER_NODE=1GB
The script still uses the default value (does not print 1GB instead). In fact if I ran:
docker run me/mycontainer:latest -e A_TEST=Hello
And the script had
echo "My test: ${A_TEST}"
It would output
My test:
What am I doing wrong here? Why can't I override (or even set) the environment variables used in the entrypoint script from docker run?
Pass the -e flag before the image name; anything after the image name is treated as arguments to the entrypoint, not as docker run options:
docker run -e "A_TEST=hello" alpine env
For docker-compose
Similar to this answer: https://stackoverflow.com/a/48915478/11406645
When using docker-compose and passing an environment variable into the docker-compose.yaml file, or overriding one from env_file, set it on the command line like so: DEBUG=1 docker-compose up
Another problem I faced is that docker commands require sudo permissions:
If you are using sudo before the docker-compose command, add the environment variable after the sudo like so: sudo DEBUG=1 docker-compose up.
The wrong way:
DEBUG=1 sudo docker-compose up
The right way:
sudo DEBUG=1 docker-compose up
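If you really do need to set the variable before sudo, sudo's -E flag can preserve the caller's environment (subject to your sudoers policy); a sketch:
DEBUG=1 sudo -E docker-compose up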

User-provided environment variable within docker CMD

I have successfully pushed my docker image to the swisscom app cloud (similar to this example: https://ict.swisscom.ch/2016/05/docker-and-cloudfoundry/).
Now I would like to use a user-provided environment variable within my docker CMD. Something like this:
ADD target/app.jar app.jar
CMD java -jar app.jar -S $USER_PROVIDED_ENV_VARIABLE
I also tried system-provided environment variables:
ADD target/app.jar app.jar
CMD java -jar app.jar -S $VCAP_APPLICATION
What am I doing wrong here?
If your Dockerfile is built like that, you'll simply need to pass the -e flag when running the image.
Example Dockerfile:
FROM ubuntu:16.10
ENV MY_VAR "default value" # Optional - set a default value.
CMD echo $MY_VAR
Build the image:
docker build -t my_image .
Run a container from the image:
docker run -e MY_VAR="my value here" my_image
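Note that this relies on the shell form of CMD. With the exec form the variable would not be expanded unless you invoke a shell explicitly, as in the earlier answer:
CMD ["sh", "-c", "echo $MY_VAR"]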
