How to use compose environment variables in docker ENTRYPOINT

Environment variables don't appear to work in ENTRYPOINT. It is my understanding that the shell form of ENTRYPOINT will expand ENV variables at run time, but this doesn't appear to work for ENV_CONFIG_INT in the example below. What have I done wrong in the following example?
Dockerfile
ENTRYPOINT [ "yarn", "run", "app-${ENV_CONFIG_INT}" ]
Compose yaml
test:
  image: testimage/test:v1.0
  build:
    context: .
    dockerfile: Dockerfile
  env_file:
    - ./docker.env
  environment:
    - ENV_CONFIG_INT=1
Error:
error Command "app-${ENV_CONFIG_INT}" not found.
Replacing the value with a static int of say 1 fixes the issue, however I want the value to be dynamic at runtime.
Thanks in advance.

I wouldn't try to use an environment variable to specify the command to run. Remove the line you show from the Dockerfile, and instead specify the command: in your docker-compose.yml file:
test:
  image: testimage/test:v1.0
  build: .
  env_file:
    - ./docker.env
  command: yarn run app-1 # <--
As you note, the shell form of ENTRYPOINT (and CMD and RUN) will expand environment variables, but you're not using the shell form: you're using the exec form, which doesn't expand variables or handle any other shell constructs. If you remove the JSON-array layout and just specify a flat command, the environment variable will be expanded the way you expect.
# Without JSON layout
CMD yarn run "app-${ENV_CONFIG_INT:-0}"
(I tend to prefer specifying CMD to ENTRYPOINT for the main application command. There are two reasons for this: it's easier to override CMD in a plain docker run invocation, and there's a useful pattern of using ENTRYPOINT as a wrapper script that does some initial setup and then runs the CMD.)
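If you do want to keep the exec (JSON-array) form, one sketch is to make the first element a shell, so the expansion still happens when the container starts:
# Sketch only: exec form, but the command is routed through a shell, which expands the variable
ENTRYPOINT ["/bin/sh", "-c", "exec yarn run \"app-${ENV_CONFIG_INT:-0}\""]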

The usual way to ensure that environment variable expansion works in the entrypoint or command of an image is to use bash (or sh) as the entrypoint.
version: "3.8"
services:
test:
image: alpine
environment:
FOO: "bar"
entrypoint: ["/bin/sh", "-c", "echo $${FOO}"]
$ docker-compose run test
Creating so_test_run ... done
bar
The other thing you need to do is properly escape the environment variable, so it's not expanded on the host system.
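For illustration, a sketch of the two entrypoint variants side by side: with a single $ Compose substitutes the value from the host shell before the container starts, while $$ passes a literal $ through to the shell inside the container.
entrypoint: ["/bin/sh", "-c", "echo ${FOO}"]   # substituted by Compose on the host; empty (with a warning) if FOO is not set there
entrypoint: ["/bin/sh", "-c", "echo $${FOO}"]  # literal ${FOO} reaches the container's shell, which expands it to "bar"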

Related

Accessing shell environment variables from docker-compose?

How do you access environment variables exported in Bash from inside docker-compose?
I'm essentially trying to do what's described in this answer but I don't want to define a .env file.
I just want to make a call like:
export TEST_NAME=test_widget_abc
docker-compose -f docker-compose.yml -p myproject up --build --exit-code-from myproject_1
and have it pass TEST_NAME to the command inside my Dockerfile, which runs a unittest suite like:
ENV TEST_NAME ${TEST_NAME}
CMD python manage.py test $TEST_NAME
My goal is to be able to run my docker container and execute a specific unit test without rebuilding the entire image, by simply pulling in the test name from the shell at container runtime. Otherwise, if no test name is given, the command will run all tests.
As I understand, you can define environment variables in a .env file and then reference them in your docker-compose.yml like:
version: "3.6"
services:
app_test:
build:
args:
- TEST_NAME=$TEST_NAME
context: ..
dockerfile: Dockerfile
but that doesn't pull from the shell.
How would you do this with docker-compose?
For the setup you describe, I'd docker-compose run a temporary container
export COMPOSE_PROJECT_NAME=myproject
docker-compose run app_test python manage.py test test_widget_abc
This uses all of the setup from the docker-compose.yml file except the ports:, and it uses the command you provide instead of the Compose command: or Dockerfile CMD. It will honor depends_on: constraints to start related containers (you may need an entrypoint wrapper script to actually wait for them to be running).
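A sketch of such a wait-then-run wrapper (the nc command and the DB_HOST/DB_PORT variables are placeholders, not from the question):
#!/bin/sh
# Hypothetical entrypoint wrapper: block until a dependency is reachable,
# then run whatever command was passed to the container.
until nc -z "${DB_HOST:-db}" "${DB_PORT:-5432}"; do
  echo "waiting for ${DB_HOST:-db}:${DB_PORT:-5432}..."
  sleep 1
done
exec "$@"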
If the test code is built into your "normal" image you may not even need special Compose setup to do this; just point docker-compose run at your existing application service definition without defining a dedicated service for the integration tests.
Since Compose does (simple) environment variable substitution, you could also provide the per-execution command: in your Compose file:
version: "3.6"
services:
  app_test:
    build: ..
    command: python manage.py test $TEST_NAME # uses the host variable
Or, with the Dockerfile you have, pass through the host's environment variable; the CMD will run a shell to interpret the string when it starts up
version: "3.6"
services:
app_test:
build: ..
environment:
- TEST_NAME # without a specific value here passes through from the host
These would both work with the Dockerfile and Compose setup you show in the question.
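With either variant, the invocation from the host would look something like this sketch:
export TEST_NAME=test_widget_abc   # picked up by Compose for ${TEST_NAME}, or passed through to the container
docker-compose up --build app_test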
Environment variables in your docker-compose.yaml will be substituted with values from the environment. For example, if I write:
version: "3"
services:
app_test:
image: docker.io/alpine:latest
environment:
TEST_NAME: ${TEST_NAME}
command:
- env
Then if I export TEST_NAME in my local environment:
$ export TEST_NAME=foo
And bring up the stack:
$ docker-compose up
Creating network "docker_default" with the default driver
Creating docker_app_test_1 ... done
Attaching to docker_app_test_1
app_test_1 | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
app_test_1 | HOSTNAME=be3c12e33290
app_test_1 | TEST_NAME=foo
app_test_1 | HOME=/root
docker_app_test_1 exited with code 0
I see that TEST_NAME inside the container has received the value from my local environment.
It looks like you're trying to pass the environment variable into your image build process, rather than passing it in at runtime. Even if that works once, it's not going to be useful, because docker-compose won't rebuild your image every time you run it, so whatever value was in TEST_NAME at the time the image was built is what you would see inside the container.
It's better to pass the environment into the container at run time.
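A sketch of why baking the value in at build time is fragile (test_a and test_b are made-up test names):
$ export TEST_NAME=test_a
$ docker-compose build        # TEST_NAME=test_a is baked into the image
$ export TEST_NAME=test_b
$ docker-compose up           # the image is not rebuilt, so the container still sees test_a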

Passing arguments to a docker container when calling docker-compose

My intention is to be able to pass some arguments to a docker container when calling docker-compose for that container.
Something like
docker-compose up -d myContainer myArg1=hallo myArg2=world
My Docker file looks something like:
FROM mcr.microsoft.com/dotnet/core/runtime:2.2 AS final
WORKDIR /app
COPY Myproj/bin/Debug/netcoreapp2.1 .
ENTRYPOINT ["dotnet", "myproj.dll", myArg1, myArg2]
And the docker-compose.yml file looks like this:
version: '3.4'
services:
  myContainer:
    image: ${DOCKER_REGISTRY-}myContainerImage
    build:
      context: ../..
      dockerfile: MyProj/Dockerfile
First of all, I wonder whether this is doable, and secondly, how to do it.
There are two "halves" to the command line Docker ultimately runs, the entrypoint and the command. There are corresponding Docker Compose entrypoint: and command: directives to set them. Their interaction is fairly straightforward: they're just concatenated together to produce a single command line.
Given what you show, in your Dockerfile you could write
ENTRYPOINT ["dotnet", "myproj.dll"]
and then in your docker-compose.yml
command: myArg1=hallo myArg2=world
and that would ultimately launch that process with those arguments.
If you want to be able to specify this from the command line when you spin up the Compose stack, Compose supports ${VARIABLE} references everywhere so you could write something like
command: myArg1=${MY_ARG1:-hallo} myArg2=world
and then launch this with a command like
MY_ARG1=greetings docker-compose up
A very typical pattern in Docker is to use ENTRYPOINT for a wrapper script that does some first-time setup, and then runs the command as the main container process with exec "$@". The entrypoint gets the command-line arguments as parameters and can do whatever it likes with them, including reformatting them or ignoring them entirely, and you could also use this to implement @Kevin's answer of configuring this primarily with environment variables, but translating that back to var=value type options.
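A minimal sketch of such a wrapper for this project (the MY_ARG1/MY_ARG2 variable names are made up for illustration):
#!/bin/sh
# Hypothetical entrypoint wrapper: translate environment variables back into
# var=value options, then append any extra arguments given on the command line.
exec dotnet myproj.dll "myArg1=${MY_ARG1:-hallo}" "myArg2=${MY_ARG2:-world}" "$@"
With ENTRYPOINT ["/entrypoint.sh"] in the Dockerfile, a docker-compose run myContainer extraArg would then append extraArg after the two translated options.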
I think you need to use environment variables; you can launch your command with them.
Something like this: (there are several ways to do this)
In compose:
https://docs.docker.com/compose/environment-variables/
environment:
  - myVar
In dockerfile:
https://docs.docker.com/engine/reference/builder/#env
ENTRYPOINT ["dotnet", "myproj.dll", "$myVar"]

How to set environment variables in docker-compose

I have a docker-compose file in which I use env_file to read and set a bunch of env variables at run time. These env variables are required for a command that I need to run at run time using command. However it looks like the command section is executed before the env variables are set at run time, and this causes an error. How can I ensure that setting the env variables occurs before executing the command section in docker-compose?
Here is my docker-compose file
services:
  mlx-python-hdfs:
    image: image_name
    container_name: cname
    env_file: ./variables.txt
    command:
      - microservice $VAR1 $VAR2
$VAR1 and $VAR2 are read from the variables.txt file, but when I start the container it complains about the "microservice $VAR1 $VAR2" line and shows $VAR1 and $VAR2 as empty.
Rename your file to .env (with no base name), i.e. mv variables.txt .env
Then edit your compose file:
services:
  mlx-python-hdfs:
    image: image_name
    container_name: cname
    command:
      - microservice $VAR1 $VAR2
then run it normally
see this
The Docker Compose command: directive has two forms. If you specify it as a list, it is read as a list of explicit individual arguments; no shell is invoked over it, and there is no argument expansion.
command:
  - /bin/ls
  - -l
  - /app
If you specify it as a simple string, it is implicitly wrapped in sh -c '...', and that shell will do variable expansion, which is what you want in your case.
command: microservice $VAR1 $VAR2
(Your form is not only not doing variable expansion, but because you specified the command in a single list item, it is looking for a file literally named microservice $VAR1 $VAR2, spaces and dollar signs included, to be the main container process.)
Environment variables are most likely being set inside the container. However, the $ syntax is expanded by the compose file parser to inject settings from your shell on the host. To expand them inside the container, you need to escape them with the $$ syntax:
services:
  mlx-python-hdfs:
    image: image_name
    container_name: cname
    env_file: ./variables.txt
    command:
      - microservice $$VAR1 $$VAR2
That will pass a literal $ into the container which will be expanded by a shell inside the container.
See the compose file documentation for more details: https://docs.docker.com/compose/compose-file/#variable-substitution
Note that renaming the file to .env results in the variables being set inside docker-compose itself, not inside your container. That will also work if you do not escape your variables.
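If you prefer not to rely on how Compose parses command:, a hedged alternative is to reuse the explicit-shell pattern from the first question together with the $$ escaping, so that a shell inside the container expands the values set by env_file:
services:
  mlx-python-hdfs:
    image: image_name
    container_name: cname
    env_file: ./variables.txt
    entrypoint: ["/bin/sh", "-c", "microservice $$VAR1 $$VAR2"]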

Docker compose : how to define env_file pointing to some file inside the container and not the physical server

I've some services running within a docker-compose file:
myService:
  image: 127.0.0.1:myimage
  stdin_open: true
  tty: true
  ports:
    - target: 8800
      published: 8800
      protocol: tcp
      mode: host
  deploy:
    mode: global
    resources:
      limits:
        memory: 1024M
    placement:
      constraints:
        - node.labels.myLabel == one
  env_file:
    - /opt/app/myFile.list # I WANT TO REUSE SOME FILE INSIDE THE CONTAINER
  healthcheck:
    disable: true
As you can see I need to declare an env_file:
env_file:
  - /opt/app/myFile.list # I WANT TO REUSE SOME FILE INSIDE THE CONTAINER
My goal is to reuse a file that is inside the container, rather than pointing to one on the physical machine.
Suggestions?
Docker (Compose) doesn't quite directly support this on its own, but it's fairly easy to add to your image.
Remember that there are two mechanisms to pass command lines to Docker. If you use both an entrypoint and a command, then the entrypoint is launched as the main container process, and passed the command as arguments. This lets you do first-time setup (like set environment variables) and then exec the command.
A typical entrypoint script for this sort of application could look like
#!/bin/sh
if [ -n "$ENV_FILE" ]; then
  . "$ENV_FILE"
fi
exec "$@"
You'd add it to your Docker image
...
COPY entrypoint.sh /
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["same", "as", "before"]
(There are several variants that use ENTRYPOINT to name the main application or just a language interpreter. For this pattern you need to move that to the CMD.)
Then when you launch the container set the environment variable the entrypoint script is looking for.
services:
  myservice:
    environment:
      ENV_FILE: /opt/app/myFile.list # inside the container
If you launch a debug shell with docker run --rm myimage sh, that goes through the entrypoint script and you will get to see these environment variables. docker exec bypasses the entrypoint and you do not get the same environment. Low-level debugging tools like docker inspect won't show the environment variables either.
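For example, a quick check (a sketch; it assumes the image is tagged myimage and that myFile.list exports its variables, e.g. export SOME_SETTING=value):
$ docker run --rm -e ENV_FILE=/opt/app/myFile.list myimage sh -c 'echo "$SOME_SETTING"'
value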

Docker environment variables in multi-stage builds

given this .env file:
TEST=33333
given this docker-compose.yml file:
service_name:
  image: test
  env_file: .env
  environment:
    TEST: 22222
given this Dockerfile file:
FROM an_image AS builder
FROM another_image
ENV TEST 11111
CMD ["/bin/echo $TEST"]
Whenever I build and run this image in a container, it prints 11111.
If I remove the ENV TEST 11111 line from the Dockerfile, my TEST environment variable is empty...
Is the parent image receiving the environment variables but not the child one?
Thanks!
EDIT:
trying ENV TEST ${TEST} didn't work ($TEST is empty)
removing ENV TEST didn't work ($TEST is empty)
So this is not a multi-stage issue.
It appears ENV variables are only applied when running containers (docker-compose up), not at build time (docker-compose build). So you have to use build arguments:
.env:
TEST=11111
docker-compose.yaml:
version: '3'
services:
  test:
    build:
      context: .
      args:
        TEST: ${TEST}
Dockerfile:
FROM nginx:alpine
ARG TEST
ENV TEST ${TEST}
CMD ["sh", "-c", "echo $TEST"]
test command:
docker rmi test_test:latest ; docker-compose build && docker run -it --rm test_test:latest
Seriously the documentation is somewhat lacking.
Reference: https://github.com/docker/compose/issues/1837
The problem is not about multi-stage specifically.
It's about differences between Dockerfile ARG & docker-compose YAML build args ("build arguments"); and Dockerfile ENV & docker-compose YAML environment/.env.
The docs were updated (more recently than the original post), and it is fairly clear now:
args
Add build arguments, which are environment variables accessible only during the build process.
Example from the docker-compose docs
Starting simple, just showing the interaction between Dockerfile and the YAML:
ARG buildno
ARG gitcommithash
RUN echo "Build number: $buildno"
RUN echo "Based on commit: $gitcommithash"
build:
  context: .
  args:
    buildno: 1
    gitcommithash: cdc3b19

build:
  context: .
  args:
    - buildno=1
    - gitcommithash=cdc3b19
Example to tie it back to the question:
See the other answer in this thread.
Docs & deepening your understanding
Learn one layer of abstraction at a time
I recommend going from the Dockerfile level of abstraction upward, making sure you understand each layer before you add the next layer of abstraction.
Dockerfile (and then play with running containers from your Dockerfile ... using default ENV, then playing with --env, then playing with ARG and --build-arg)
Then add docker-compose details in, and play with those.
Then loop back to Dockerfiles and understanding multi-stage builds.
Dockerfile
A helpful blog post that focuses on the Dockerfile; in all cases, it's best to understand Dockerfiles alone before adding extra layers of abstraction, such as docker-compose YAML, on top of that.
https://vsupalov.com/docker-arg-env-variable-guide/
docker-compose
Then docker-compose official docs:
https://docs.docker.com/compose/environment-variables/
https://docs.docker.com/compose/env-file/
https://docs.docker.com/compose/compose-file/#environment
https://docs.docker.com/compose/compose-file/#env_file
multi-stage Dockerfiles
https://docs.bitnami.com/containers/how-to/optimize-docker-images-multistage-builds/
https://medium.com/@tonistiigi/advanced-multi-stage-build-patterns-6f741b852fae
https://github.com/garethr/multi-stage-build-example
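To tie this back to the multi-stage part of the question: one detail worth knowing is that an ARG declared before the first FROM is only usable in FROM lines, and each stage that wants a build argument must redeclare it. A small hedged sketch (nginx:alpine is just a stand-in base image):
ARG BASE=nginx:alpine        # available only to the FROM lines below
FROM ${BASE} AS builder
ARG TEST                     # must be redeclared inside the stage to be usable here
RUN echo "builder stage sees: $TEST"

FROM ${BASE}
ARG TEST                     # redeclared again for the final stage
ENV TEST=${TEST}             # promoted to ENV so it survives into the running container
CMD ["sh", "-c", "echo $TEST"]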
