Passing arguments to a docker container when calling docker-compose - docker

My intention is to be able to pass some arguments to a Docker container when calling docker-compose for that container.
Something like
docker-compose up -d myContainer myArg1=hallo myArg2=world
My Dockerfile looks something like:
FROM mcr.microsoft.com/dotnet/core/runtime:2.2 AS final
WORKDIR /app
COPY Myproj/bin/Debug/netcoreapp2.1 .
ENTRYPOINT ["dotnet", "myproj.dll", myArg1, myArg2]
And the docker-compose.yml file looks like this:
version: '3.4'
services:
  myContainer:
    image: ${DOCKER_REGISTRY-}myContainerImage
    build:
      context: ../..
      dockerfile: MyProj/Dockerfile
First of all, I wonder whether this is doable, and secondly, how to do it.

There are two "halves" to the command line Docker ultimately runs, the entrypoint and the command. There are corresponding Docker Compose entrypoint: and command: directives to set them. Their interaction is fairly straightforward: they're just concatenated together to produce a single command line.
Given what you show, in your Dockerfile you could write
ENTRYPOINT ["dotnet", "myproj.dll"]
and then in your docker-compose.yml
command: myArg1=hallo myArg2=world
and that would ultimately launch that process with those arguments.
If you want to be able to specify this from the command line when you spin up the Compose stack, Compose supports ${VARIABLE} references everywhere so you could write something like
command: myArg1=${MY_ARG1:-hallo} world
and then launch this with a command like
MY_ARG1=greetings docker-compose up
A very typical pattern in Docker is to use ENTRYPOINT for a wrapper script that does some first-time setup, and then runs the command as the main container process with exec "$@". The entrypoint gets the command-line arguments as parameters and can do whatever it likes with them, including reformatting them or ignoring them entirely. You could also use this to implement @Kevin's answer of configuring this primarily with environment variables, but translating that back to var=value type options.
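A minimal sketch of that argument-handling logic, runnable outside Docker (the function name is illustrative; a real docker-entrypoint.sh would put this at top level and finish with exec "$@"):

```shell
# Export any leading var=value arguments, then run the rest of the
# command line as the main process.
run_with_env() {
  while [ $# -gt 0 ]; do
    case "$1" in
      *=*) export "$1"; shift ;;   # var=value: export it and consume it
      *)   break ;;                # first real command word stops the scan
    esac
  done
  "$@"                             # in a real entrypoint: exec "$@"
}

run_with_env myArg1=hallo myArg2=world sh -c 'echo "$myArg1 $myArg2"'
# prints: hallo world
```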

I think you need to use environment variables; you can launch your command with them.
Something like this: (there are several ways to do this)
In compose:
https://docs.docker.com/compose/environment-variables/
environment:
- myVar
In the Dockerfile (use the shell form of ENTRYPOINT, since the JSON-array form doesn't expand variables):
https://docs.docker.com/engine/reference/builder/#env
ENTRYPOINT dotnet myproj.dll "$myVar"
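A minimal sketch of wiring this together in Compose (the service name and the value "hallo" are assumptions for illustration):

```yaml
# docker-compose.yml (sketch)
version: '3.4'
services:
  myContainer:
    build: .
    environment:
      - myVar=hallo   # or just "- myVar" to pass it through from the host
```

If the ENTRYPOINT uses the shell form (ENTRYPOINT dotnet myproj.dll "$myVar"), the container's shell expands $myVar when the process starts, so changing the value only requires restarting the container, not rebuilding the image.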


Accessing shell environment variables from docker-compose?

How do you access environment variables exported in Bash from inside docker-compose?
I'm essentially trying to do what's described in this answer but I don't want to define a .env file.
I just want to make a call like:
export TEST_NAME=test_widget_abc
docker-compose -f docker-compose.yml -p myproject up --build --exit-code-from myproject_1
and have it pass TEST_NAME to the command inside my Dockerfile, which runs a unittest suite like:
ENV TEST_NAME ${TEST_NAME}
CMD python manage.py test $TEST_NAME
My goal is to allow running my docker container to execute a specific unittest without having to rebuild the entire image, by simply pulling in the test name from the shell at container runtime. Otherwise, if no test name is given, the command will run all tests.
As I understand, you can define environment variables in a .env file and then reference them in your docker-compose.yml like:
version: "3.6"
services:
  app_test:
    build:
      args:
        - TEST_NAME=$TEST_NAME
      context: ..
      dockerfile: Dockerfile
but that doesn't pull from the shell.
How would you do this with docker-compose?
For the setup you describe, I'd docker-compose run a temporary container
export COMPOSE_PROJECT_NAME=myproject
docker-compose run app_test python manage.py test test_widget_abc
This uses all of the setup from the docker-compose.yml file except the ports:, and it uses the command you provide instead of the Compose command: or Dockerfile CMD. It will honor depends_on: constraints to start related containers (you may need an entrypoint wrapper script to actually wait for them to be running).
If the test code is built into your "normal" image you may not even need special Compose setup to do this; just point docker-compose run at your existing application service definition without defining a dedicated service for the integration tests.
Since Compose does (simple) environment variable substitution you could also provide the per-execution command: in your Compose file
version: "3.6"
services:
  app_test:
    build: ..
    command: python manage.py test $TEST_NAME # uses the host variable
Or, with the Dockerfile you have, pass through the host's environment variable; the CMD will run a shell to interpret the string when it starts up
version: "3.6"
services:
  app_test:
    build: ..
    environment:
      - TEST_NAME # without a specific value here, passes through from the host
These would both work with the Dockerfile and Compose setup you show in the question.
Environment variables in your docker-compose.yaml will be substituted with values from the environment. For example, if I write:
version: "3"
services:
  app_test:
    image: docker.io/alpine:latest
    environment:
      TEST_NAME: ${TEST_NAME}
    command:
      - env
Then if I export TEST_NAME in my local environment:
$ export TEST_NAME=foo
And bring up the stack:
$ docker-compose up
Creating network "docker_default" with the default driver
Creating docker_app_test_1 ... done
Attaching to docker_app_test_1
app_test_1 | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
app_test_1 | HOSTNAME=be3c12e33290
app_test_1 | TEST_NAME=foo
app_test_1 | HOME=/root
docker_app_test_1 exited with code 0
I see that TEST_NAME inside the container has received the value from my local environment.
It looks like you're trying to pass the environment variable into your image build process, rather than passing it in at runtime. Even if that works once, it's not going to be useful, because docker-compose won't rebuild your image every time you run it, so whatever value was in TEST_NAME at the time the image was built is what you would see inside the container.
It's better to pass the environment into the container at run time.

Execute a host system command and pass the result as build arg in docker-compose

I'm building a docker-compose.yml file that builds my custom Dockerfile, and I need to execute a bash command on my host system first, then pass the result as a build argument to the Dockerfile.
Here is an example in practice:
Dockerfile:
#...
ARG SSH_KEY_BASE64
RUN echo "Build SSH_KEY_BASE64: $SSH_KEY_BASE64"
#...
docker-compose.yml:
#...
version: '3.4'
services:
  container:
    container_name: my-container
    build:
      context: .
      dockerfile: Dockerfile
      args:
        SSH_KEY_BASE64: ${SSH_KEY_BASE64_COMMAND}
    env_file: .env
#...
.env:
SSH_KEY_BASE64_COMMAND=$(cat ~/.ssh/id_rsa_mykey | base64)
At the moment the value of $SSH_KEY_BASE64 in the Dockerfile is unresolved and it prints just $(cat ~/.ssh/id_rsa_mykey | base64), but I want it to evaluate that command and print the base64 of the content of my key.
I would like to avoid manually running $(cat ~/.ssh/id_rsa_mykey | base64) before running docker-compose up --build; that's why I'm asking for an automatic way to do this.
What options do I have?
Thanks
Compose doesn't support this syntax and can't directly execute commands on the host system. The only substitution syntaxes it supports are the $VARIABLE, ${VARIABLE}, ${VARIABLE:-default}, and ${VARIABLE:?error} environment-variable expansions, and only in the main docker-compose.yml file. The values in an env_file: file aren't interpreted or expanded at all.
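These substitution forms mirror their POSIX shell counterparts, so you can experiment with their behavior directly in a terminal (the variable names below are throwaway examples):

```shell
export PRESENT=value
unset MISSING

echo "${PRESENT}"            # prints: value
echo "${MISSING:-default}"   # prints: default — used when MISSING is unset or empty
# "${MISSING:?error}" would abort with "error" if MISSING were unset or empty
```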
In most cases you don't actually want to build an image that depends on the specific host system it's built on; an image is intended to be reused in multiple environments. In the particular case of an ssh key it's particularly dangerous to pass it as an ARG since it can be pretty easily extracted from the final image (docker-compose run container cat /root/.id_rsa). You might need to do whatever operation needs the ssh key (for example, an authenticated git clone) on the host system outside of Docker.
The only workaround is to set a host environment variable and reference that instead, but it's probably better to get rid of the ARG entirely.
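If you do go the workaround route, a small wrapper script on the host can compute the value and export it before Compose runs. The variable name comes from the question's Compose file; the wrapper itself and the stand-in key material are assumptions:

```shell
#!/bin/sh
# build.sh (hypothetical): compute the value on the host, then let Compose
# substitute ${SSH_KEY_BASE64_COMMAND}-style references in docker-compose.yml.
SSH_KEY_BASE64_COMMAND=$(printf '%s' 'fake-key-material' | base64)  # stand-in for: base64 < ~/.ssh/id_rsa_mykey
export SSH_KEY_BASE64_COMMAND
printf '%s\n' "$SSH_KEY_BASE64_COMMAND"
# docker-compose up --build    # uncomment on a machine with Docker installed
```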

How to use compose environment variables in docker ENTRYPOINT

Environment variables don't appear to work in ENTRYPOINT. It's my understanding that the shell form of ENTRYPOINT expands ENV variables at run time, but this doesn't appear to work for ENV_CONFIG_INT in the example below. What have I done wrong?
Dockerfile
ENTRYPOINT [ "yarn", "run", "app-${ENV_CONFIG_INT}" ]
Compose yaml
test:
  image: testimage/test:v1.0
  build:
    context: .
    dockerfile: Dockerfile
  env_file:
    - ./docker.env
  environment:
    - ENV_CONFIG_INT=1
Error:
error Command "app-${ENV_CONFIG_INT}" not found.
Replacing the value with a static int of say 1 fixes the issue, however I want the value to be dynamic at runtime.
Thanks in advance.
I wouldn't try to use an environment variable to specify the command to run. Remove the line you show from the Dockerfile, and instead specify the command: in your docker-compose.yml file:
test:
  image: testimage/test:v1.0
  build: .
  env_file:
    - ./docker.env
  command: yarn run app-1 # <--
As you note, the shell form of ENTRYPOINT (and CMD and RUN) will expand environment variables, but you're not using the shell form: you're using the exec form, which doesn't expand variables or handle any other shell constructs. If you remove the JSON-array layout and just specify a flat command, the environment variable will be expanded the way you expect.
# Without JSON layout
CMD yarn run "app-${ENV_CONFIG_INT:-0}"
(I tend to prefer specifying CMD to ENTRYPOINT for the main application command. There are two reasons for this: it's easier to override CMD in a plain docker run invocation, and there's a useful pattern of using ENTRYPOINT as a wrapper script that does some initial setup and then runs the CMD.)
The usual way to ensure that environment variable expansion works in the entrypoint or command of an image is to utilize bash - or sh - as the entrypoint.
version: "3.8"
services:
  test:
    image: alpine
    environment:
      FOO: "bar"
    entrypoint: ["/bin/sh", "-c", "echo $${FOO}"]
$ docker-compose run test
Creating so_test_run ... done
bar
The other thing you need to do is properly escape the environment variable (hence the doubled $$), so it's not expanded on the host system before Compose ever passes it to the container.
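The run-time expansion itself is plain shell behavior and can be checked without Docker (FOO and bar are the throwaway names from the example above):

```shell
# The entrypoint ["/bin/sh", "-c", "echo ${FOO}"] works because the shell
# inside the container expands ${FOO} when it runs, reading the environment
# that Compose injected.
export FOO=bar
sh -c 'echo "${FOO}"'   # prints: bar
```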

How can I add a file to a volume in a Docker image, using values from the docker-compose.yml?

I have this .env file:
admin=admin
password=adminsPassword
stackName=integration-demo
the values of which are used in the docker-compose.yml file, like this:
myService:
  build:
    context: .
    dockerfile: myService.Dockerfile
    args:
      - instance=${stackName}.local
      - admin=${admin}
      - password=${password}
  volumes:
    - ./config:/config
I want to use them in the Dockerfile, like this:
FROM openjdk:8-jdk-alpine
ARG docker_properties_file=Username=$admin\nPassword=$password\nHost=$instance
RUN $docker_proprties_file >> config/gradle-docker.properties
so that I have a gradle-docker.properties file that looks like:
username=admin
password=adminsPassword
host=integration.demo.local
in the /config directory.
However, no gradle-docker.properties file is getting written.
How can I use the variable in a docker-compose.yml file to add data to a volume?
Plain Docker and Docker Compose don’t have this capability. You can create the file outside of Docker on the host and mount it into the container as you show, but neither Docker nor Compose has the templating capability you would need to be able to do this.
The overall approach you’re describing in the question builds a custom image for each set of configuration options. That’s not really a best practice: imagine needing to recompile ls because you attached a USB drive you needed to look at.
One thing you can do in plain Docker is teach the image how to create its own configuration file at startup time. You can do that with a script like, for example:
#!/bin/sh
# I am docker-entrypoint.sh
# Create the config file
cat >config/gradle-docker.properties <<EOF
username=$USERNAME
et=$CETERA
EOF
# Run the main container process
exec "$@"
In your Dockerfile, COPY this file into the image and set it as the ENTRYPOINT; leave your CMD unchanged. You must use the JSON-array form of the ENTRYPOINT directive.
...
COPY docker-entrypoint.sh .
RUN chmod +x docker-entrypoint.sh
ENTRYPOINT ["./docker-entrypoint.sh"]
CMD ["java", "-jar", "application.jar"]
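The heredoc in the entrypoint script expands whatever environment variables exist at container start; you can demonstrate that part outside Docker (the variable values and the /tmp path are made up for the demo):

```shell
# Same heredoc technique as the entrypoint script above, run locally:
# unquoted EOF means $USERNAME and $CETERA are expanded into the file.
export USERNAME=admin CETERA=demo
cat >/tmp/gradle-docker.properties <<EOF
username=$USERNAME
et=$CETERA
EOF
cat /tmp/gradle-docker.properties
# prints:
# username=admin
# et=demo
```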
(In Kubernetes, the Helm package manager does have a templating system that can create content for a ConfigMap object that can be injected into a pod; but that’s a significant amount of extra machinery.)

How do you use docker-compose to run commands between the docker image and the entrypoint?

docker compose v3
I'm trying to run some app-specific commands like composer update whenever I run docker-compose up, having my docker-compose.yml file look something along the lines of this
version: '3'
services:
  app1:
    image: laraedit/laraedit
    ports:
      - 3000:80
    volumes:
      - ./appfolder:/var/www/appfolder
If I run my first-run commands in the entrypoint, it will override all the commands that default laraedit/laraedit is running. (At least I think so, because the container always stops when my entrypoint commands finish)
I don't want to bother the process of laraedit/laraedit starting up, I just want to execute a couple of commands on the side.
If I weren't using docker-compose, I would have the laraedit/laraedit Dockerfile locally, and I could then edit it and add a RUN statement somewhere in there.
But since I don't have the Dockerfile, and I can't make an entrypoint without throwing off the container's normal startup, I don't know how to go about automating the process of running these boring commands every single time I run docker-compose up.
Things I've tried:
adding my own Dockerfile (that replaces laraedit's)
running an entrypoint script (that blocks laraedit's startup)
running them as a command (the commands did not execute)
You need to extend the laraedit/laraedit image with a custom one.
You can use a Dockerfile as simple as this:
FROM laraedit/laraedit
COPY my_entrypoint.sh /my_entrypoint.sh
RUN chmod +x /my_entrypoint.sh
ENTRYPOINT ["/my_entrypoint.sh"]
my_entrypoint.sh is a script that contains your initialization commands and execs the original entrypoint at the end, for example:
#!/bin/sh
my_init_cmd1
my_init_cmd2
...
exec /original/entrypoint/script/path "$@"
You can get the /original/entrypoint/script/path value by reading the original laraedit Dockerfile.
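The hand-off can be simulated outside Docker to see the ordering (the /tmp paths here are stand-ins for the real entrypoint paths):

```shell
# my_entrypoint.sh runs its init commands first, then execs the original
# entrypoint so the main process replaces the wrapper shell.
cat > /tmp/original_entrypoint.sh <<'EOF'
#!/bin/sh
echo "original entrypoint running"
EOF
cat > /tmp/my_entrypoint.sh <<'EOF'
#!/bin/sh
echo "init command 1"
exec /tmp/original_entrypoint.sh "$@"
EOF
chmod +x /tmp/original_entrypoint.sh /tmp/my_entrypoint.sh
/tmp/my_entrypoint.sh
# prints:
# init command 1
# original entrypoint running
```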
Let's say you put the two files above in a directory called docker alongside your docker-compose.yml; then you need to adjust your docker-compose.yml like this:
version: '3'
services:
  app1:
    build: ./docker/
    ports:
      - 3000:80
    volumes:
      - ./appfolder:/var/www/appfolder