Passing parameters to app on docker image

I was looking around and I saw some simple examples of HelloWorld running on a docker container like this one:
http://dotnet.dzone.com/articles/docker-%E2%80%98hello-world-mono
At the end of the Dockerfile, the author calls:
CMD ["mono", "/src/hello.exe"]
What I want to do is have a reusable image as we build our Console App. Put that on a docker image using a Dockerfile. That part makes sense to me. But then I want to be able to pass the ConsoleApp parameters. Is that possible?
For example,
sudo docker run crystaltwix/helloworld -n "crystal twix"
where -n was a parameter I defined in my helloworld app.

You can use ENTRYPOINT foo rather than CMD foo to achieve this. Everything after the image name in docker run is passed as arguments to foo.
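For illustration, a minimal sketch of the Dockerfile from the linked article with CMD swapped for ENTRYPOINT (the mono base image and the /src path are taken from the question; treat them as assumptions):

FROM mono
COPY hello.exe /src/hello.exe
# exec-form ENTRYPOINT: arguments given to docker run are appended to this command
ENTRYPOINT ["mono", "/src/hello.exe"]

With that, the example from the question works as intended:

sudo docker run crystaltwix/helloworld -n "crystal twix"
# runs: mono /src/hello.exe -n "crystal twix" inside the container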

@seanmcl's answer is the simplest... but if you have to pass secret values like application keys, you may have to worry about exposing them in process lists... So, you could use environment variables that your app looks for during startup:
SECRET_KEY="crystal twix"
docker run -e APP_KEY=$SECRET_KEY crystaltwix/helloworld

Related

Docker pass arguments to docker without replacing default CMD

I am writing a Dockerfile that runs an app that will use the same argument most of the time.
ENTRYPOINT [ "myapp" ]
CMD [ "defaultarg" ]
But every now and then it needs to take another argument. Since the users will mostly be on Windows, I want to do this without many scripts or extra arguments:
docker run my/app somearg
The issue is they need to use a bind volume. However, when I add a -v switch, it is taken as an override for the CMD:
docker run my/app -v datadir:/app/datadir
This replaces 'defaultarg' with '-v'.
Is there a way to stop this from happening without moving the CMD arg to an env/ARG?
I have used the build-args switch because I couldn't find any info on this, but resolving this without build-args would be nicer, since it means fewer switches when running the container.
Parameters on docker run come in 2 flavours. Parameters for docker are placed before the image name and parameters for the container are placed after the image name. As you've discovered, parameters for the container replace any CMD statement in the image.
The -v parameter is for docker itself, so it should go before the image name, like this:
docker run -v datadir:/app/datadir my/app
That will leave your CMD intact, so it'll work as before.
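A minimal sketch of the whole pattern, pieced together from this thread (the image name and app paths are assumptions):

# Dockerfile: ENTRYPOINT fixes the executable, CMD supplies the default argument
ENTRYPOINT [ "myapp" ]
CMD [ "defaultarg" ]

# docker options go before the image name, container args (if any) after it:
docker run -v datadir:/app/datadir my/app            # runs: myapp defaultarg
docker run -v datadir:/app/datadir my/app somearg    # runs: myapp somearg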

Pass NEPTUNE_API_TOKEN environment variable via docker run command

Using the docker run command, I'm trying to pass my NEPTUNE_API_TOKEN to my container.
My understanding is that I should use the -e flag as follows: -e ENV_VAR='env_var_value', and that works.
I wish, however, to use the value existing in the already-running session, as follows:
docker run -e NEPTUNE_API_TOKEN=$(NEPTUNE_API_TOKEN) <my_image>
However, after doing so, NEPTUNE_API_TOKEN is set to empty when checking the value inside the container.
My question is whether I'm doing something wrong or if this is not possible and I must provide an explicit Neptune API token as a string.
$(NEPTUNE_API_TOKEN) is the syntax for running a command and grabbing the output. Use $NEPTUNE_API_TOKEN.
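So, assuming NEPTUNE_API_TOKEN is exported in the current shell session, either of these should work (in the second form, giving -e only the name tells docker to copy the value from the host environment):

docker run -e NEPTUNE_API_TOKEN="$NEPTUNE_API_TOKEN" <my_image>
docker run -e NEPTUNE_API_TOKEN <my_image>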
You can set up and pass NEPTUNE_API_TOKEN as a:
1. docker run command option (environment variable)
Example: docker run -e NEPTUNE_API_TOKEN="<YOUR_API_TOKEN>" <image-name>
2. Dockerfile environment variable
3. Docker secret
Neptune will work with any of the methods described above.
For your case, I believe methods 2 and 3 will work best, as you will set the API token only once and all containers can reuse it. Additionally, they are more secure methods.
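For illustration, a minimal sketch of method 2 (the base image is a placeholder; note that a token baked in with ENV is visible to anyone who can pull or inspect the image):

# Dockerfile
FROM python:3.10-slim
# placeholder value; every container started from this image inherits it
ENV NEPTUNE_API_TOKEN="<YOUR_API_TOKEN>"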
You can read this guide that I created last year on how to use Neptune with Docker.
Docs: https://docs.neptune.ai/how-to-guides/automation-pipelines/how-to-use-neptune-with-docker

Use Docker to Distribute CLI Application

I'm somewhat new to Docker. I would like to be able to use Docker to distribute a CLI program, but run the program normally once it has been installed. To be specific, after running docker build on the system, I need to be able to simply run my-program in the terminal, not docker run my-program. How can I do this?
I tried something with a Makefile which runs docker build -t my-program . and then writes a shell script to ~/.local/bin/ called my-program that runs docker run my-program, but this adds another container every time I run the script.
EDIT: I realize this is the expected behavior of docker run, but it does not work for my use case.
Any help is greatly appreciated!
If you want to keep your script, add the remove flag --rm to the docker run command. The remove flag removes the container automatically after the entry-point process has exited.
Additionally, I would personally prefer an alias for this. Simply add something like alias my-program="docker run --rm my-program" to your ~/.bashrc or ~/.zshrc file. This even has the advantage that all parameters after the alias (my-program param1 param2) are automatically forwarded to the entry-point of your image without any additional effort.
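If you would rather keep the wrapper script from the question instead of an alias, a minimal sketch of ~/.local/bin/my-program (the image name is an assumption) could be:

#!/bin/sh
# --rm discards the container once the entry-point process exits;
# "$@" forwards all command-line arguments to the image's entry point
exec docker run --rm my-program "$@"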

How do I pass in configuration settings to a docker image for local development?

I'm working on a dotnet core docker container (not aspnet), and I'd like to specify configuration options for it through appsettings.json. These values will eventually be filled in through environment variables in kubernetes.
However, for local development, how do we easily pass in these settings without storing them in the container?
You can map local files or directories into the container with docker run -v local_path:container_path.
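For example, a sketch that overlays a local appsettings.json at the path the app reads it from (both paths here are assumptions):

docker run -v "$(pwd)/appsettings.json:/app/appsettings.json" my-dotnet-app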
If you're going to use kubernetes, you can use a ConfigMap as well.
You can pass env variables while running the container with -e flag of the command docker run.
With this method, you’ll have to pass each variable in the command line. For example, docker run -e VAR1=value1 -e VAR2=value2
If this gets cumbersome, you can write these values to an env file and use this file like so: docker run --env-file=filename
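A minimal sketch of the env-file approach (file name, variables, and image name are placeholders):

# settings.env
VAR1=value1
VAR2=value2

docker run --env-file=settings.env my-dotnet-app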
For reference, you can check out the official docs.

Is it possible to customize environment variable by linking two docker containers?

I've created a docker image for my database server and one for the web application. Using the documentation, I'm able to link the two containers using environment variables as follows:
value="jdbc:postgresql://${DB_PORT_5432_TCP_ADDR}:${DB_PORT_5432_TCP_PORT}/db_name"
It works fine now, but it would be better if the environment variables were more general and did not contain a static port number. Something like:
value="jdbc:postgresql://${DB_URL}:${DB_PORT}/db_name"
Is there any way to link the environment variables? For example, by using the ENV command in the Dockerfile (ENV DB_URL=$DB_PORT_5432_TCP_ADDR), or by using the --env argument when running the image (docker run ... -e DB_URL=$DB_PORT_5432_TCP_ADDR docker_image)?
Without building this kind of functionality into your docker startup shell scripts or another orchestration mechanism, it is not currently possible to create environment variables like the ones you describe. You do mention a couple of workarounds. However, the problem, at least with using -e DB_URL=... in your docker run command, is that $DB_PORT_5432_TCP_ADDR is expanded by your host shell, where it is not set, so you will not be able to pass its value when you run the container. Typically, this is what your orchestration layer is used for: service discovery and passing this kind of data among your containers. There is at least one workaround mentioned here on SO that involves constructing a special shell script that you put in your CMD or ENTRYPOINT directives and that passes the environment variable to the container.
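A minimal sketch of that entrypoint-script workaround (the script name and final app command are assumptions): the script runs inside the container, where the link variables do exist, derives the general variables from them, and then hands off to the real command.

#!/bin/sh
# entrypoint.sh -- runs inside the container, where the link variables are set
export DB_URL="$DB_PORT_5432_TCP_ADDR"
export DB_PORT="$DB_PORT_5432_TCP_PORT"
# hand off to the application command given as CMD / docker run arguments
exec "$@"

In the Dockerfile you would then point ENTRYPOINT at this script (e.g. ENTRYPOINT ["/entrypoint.sh"]) and keep your normal application command as CMD.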
