Pass NEPTUNE_API_TOKEN environment variable via docker run command - docker

Using the docker run command, I'm trying to pass my NEPTUNE_API_TOKEN to my container.
My understanding is that I should use the -e flag, as in -e ENV_VAR='env_var_value', and that this would work.
I wish, however, to use the value existing in the already-running session, as follows:
docker run -e NEPTUNE_API_TOKEN=$(NEPTUNE_API_TOKEN) <my_image>
However, after doing so, NEPTUNE_API_TOKEN is set to empty when checking the value inside the container.
My question is whether I'm doing something wrong or if this is not possible and I must provide an explicit Neptune API token as a string.

$(NEPTUNE_API_TOKEN) is the shell's command substitution syntax: it runs a command named NEPTUNE_API_TOKEN and substitutes its output, which is empty here because no such command exists. Use $NEPTUNE_API_TOKEN instead.
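A quick shell session makes the difference visible (abc123 is a stand-in token value, not a real one):

```shell
# $(NEPTUNE_API_TOKEN) is command substitution: the shell tries to run a
# program called NEPTUNE_API_TOKEN and substitutes its output, which is
# empty here because no such program exists.
NEPTUNE_API_TOKEN="abc123"                    # stand-in value for illustration

wrong="$(NEPTUNE_API_TOKEN 2>/dev/null || true)"  # command substitution -> empty
right="$NEPTUNE_API_TOKEN"                        # variable expansion -> abc123

echo "wrong='$wrong' right='$right'"

# The working docker invocation therefore looks like:
#   docker run -e NEPTUNE_API_TOKEN=$NEPTUNE_API_TOKEN <my_image>
```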

You can set up and pass NEPTUNE_API_TOKEN as:
1. An environment variable on the docker run command line
   Example: docker run -e NEPTUNE_API_TOKEN="<YOUR_API_TOKEN>" <image-name>
2. A Dockerfile environment variable
3. A Docker secret
Neptune will work with any of the methods described above.
For your case, I believe methods 2 and 3 will work best, as you set the API token only once and all containers can reuse it. They are also more secure.
You can read this guide on using Neptune with Docker that I created last year.
Docs: https://docs.neptune.ai/how-to-guides/automation-pipelines/how-to-use-neptune-with-docker
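A minimal sketch of method 2; the base image and file names are placeholders, not from the original answer. Note that anything set with ENV is stored in the image metadata, which is one reason a Docker secret (method 3) is the safer choice for sensitive values:

```dockerfile
# Hypothetical Dockerfile sketch for method 2 (names are placeholders).
# The token is accepted at build time and persisted for every container
# started from this image.
FROM python:3.11-slim
ARG NEPTUNE_API_TOKEN
ENV NEPTUNE_API_TOKEN=${NEPTUNE_API_TOKEN}
WORKDIR /app
COPY . /app
CMD ["python", "train.py"]
```

Built with docker build --build-arg NEPTUNE_API_TOKEN=$NEPTUNE_API_TOKEN ., every container from the image then sees the variable without any -e flag at run time.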

Related

If docker run has multiple -e VAR=VALUE options for the same VAR, which one is used?

If I have docker run ... -e VAR=x ... -e VAR=y ..., what is the value of VAR in the container? From a quick test it looks like the last one is used, but is this guaranteed anywhere in the documentation?
No, this is not guaranteed as far as I can tell. Here are the relevant locations in the documentation:
The docker run reference
The docker run command line reference
Also the docker daemon API does not specify what happens on duplicate entries in the Env array.
There is also the documentation on Environment variables precedence for docker-compose. But this also does not mention duplicate keys in one of the layers.
You are probably best advised not to rely on this behavior staying the way it is currently implemented.
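As a point of comparison outside Docker (not a guarantee about docker run itself), the standard env utility resolves duplicate assignments by applying them left to right, so the last one wins:

```shell
# env applies its NAME=VALUE arguments sequentially, each assignment
# overwriting the previous one, so the child process sees only the last.
result="$(env VAR=x VAR=y sh -c 'echo "$VAR"')"
echo "$result"   # prints: y
```

docker run may well behave the same way today, but since none of the documents above pin it down, treating duplicate -e flags as unspecified is the safe assumption.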

How do I pass in configuration settings to a docker image for local development?

I'm working on a .NET Core Docker container (not ASP.NET). I'd like to specify configuration options for it through appsettings.json. These values will eventually be filled in through environment variables in Kubernetes.
However, for local development, how do we easily pass in these settings without storing them in the container?
You can map local files into the container with docker run -v local_path:container_path.
If you're going to use Kubernetes, you can use a ConfigMap as well.
You can pass env variables while running the container with -e flag of the command docker run.
With this method, you’ll have to pass each variable in the command line. For example, docker run -e VAR1=value1 -e VAR2=value2
If this gets cumbersome, you can write the values to an env file and use that file like so: docker run --env-file=filename
For reference, you can check out the official docs.
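The env-file approach can be sketched like this (the file and image names are placeholders); the file holds one VAR=value per line and never gets baked into the image:

```shell
# Write the settings to a local env file. The --env-file format takes one
# VAR=value per line, with no quotes and no 'export' keywords.
cat > local.env <<'EOF'
VAR1=value1
VAR2=value2
EOF

# Then pass the whole file at run time (image name is a placeholder):
#   docker run --env-file=local.env my-dotnet-image
```

Keeping local.env out of version control (e.g. via .gitignore) gives you per-developer settings without touching the image.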

Dynamically assigns different value for the environment variable upon a Docker service scale

In some situations, we might need to scale a service up and assign a different value to an environment variable for each instance, for example NODE_ID (to be used internally).
Usually, I create a script that runs my service once per instance with a different parameter, up to the preferred scale:
$ docker run -e NODE_ID=node_01 ...
$ docker run -e NODE_ID=node_02 ...
...
$ docker run -e NODE_ID=node_20 ...
Question
Is there any way to achieve this with the docker swarm mode, e.g.
$ docker service create ... ?
I believe the only value you can get is the container ID, by using $(hostname). That gives you a unique value for each container. There is no way to provide a custom one.
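A sketch of the hostname approach; and, though the answer above predates it, newer Docker releases also accept Go templates for --env in docker service create, which does give each task a custom value (service and image names below are placeholders):

```shell
# Inside each container the hostname defaults to the container ID, so a
# startup script can derive a unique NODE_ID with no outside help:
NODE_ID="node_$(hostname)"
echo "$NODE_ID"

# Newer Docker versions also support Go templates for --env in swarm mode,
# giving stable per-replica values (names are placeholders):
#   docker service create --name my_service --replicas 20 \
#     --env NODE_ID="node_{{.Task.Slot}}" my_image
```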

Is it possible to customize environment variable by linking two docker containers?

I've created a Docker image for my database server and one for the web application. Using the documentation, I'm able to link the two containers using environment variables as follows:
value="jdbc:postgresql://${DB_PORT_5432_TCP_ADDR}:${DB_PORT_5432_TCP_PORT}/db_name"
It works fine now, but it would be better if the environment variables were more general and did not contain a static port number. Something like:
value="jdbc:postgresql://${DB_URL}:${DB_PORT}/db_name"
Is there any way to link the environment variables, for example by using the ENV instruction in the Dockerfile (ENV DB_URL=$DB_PORT_5432_TCP_ADDR), or by using the --env argument when running the image (docker run ... -e DB_URL=$DB_PORT_5432_TCP_ADDR docker_image)?
Without building this kind of functionality into your startup shell scripts or another orchestration mechanism, it is not currently possible to create environment variables like the ones you describe. You do mention a couple of workarounds; the problem with -e DB_URL=... in your docker run command, however, is that $DB_PORT_5432_TCP_ADDR is not known on the host at run time, so you cannot set its value there. Typically this is what your orchestration layer is for: service discovery and passing this kind of data among your containers. There is at least one workaround mentioned here on SO that involves a small shell script, set as your CMD or ENTRYPOINT, that passes the environment variable to the container.
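The entrypoint-script workaround can be sketched like this (the file name and variable mapping are illustrative): the script runs inside the container, where the link variables do exist, maps them to the generic names the application expects, then hands off to the real command:

```shell
# entrypoint.sh: translate Docker link variables into the generic names the
# application expects, then exec the real command so signals are forwarded.
cat > entrypoint.sh <<'EOF'
#!/bin/sh
export DB_URL="${DB_PORT_5432_TCP_ADDR}"
export DB_PORT="${DB_PORT_5432_TCP_PORT}"
exec "$@"
EOF
chmod +x entrypoint.sh

# Simulate the variable Docker's --link would inject, and check the mapping:
out="$(DB_PORT_5432_TCP_ADDR=172.17.0.5 ./entrypoint.sh sh -c 'echo "$DB_URL"')"
echo "$out"   # prints: 172.17.0.5
```

In the Dockerfile you would then set ENTRYPOINT ["/entrypoint.sh"] so the mapping runs before every command.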

How to dynamically set environment variables of linked containers?

I have two containers, webinterface and db. webinterface is started with the --link option (for db), which generates these environment variables:
DB_PORT_1111_TCP=tcp://172.17.0.5:5432
DB_PORT_1111_TCP_PROTO=tcp
DB_PORT_1111_TCP_PORT=1111
DB_PORT_1111_TCP_ADDR=172.17.0.5
...
Now my webinterface container uses a Dockerfile where some static environment variables are defined to define the connection:
ENV DB_HOST localhost
ENV DB_PORT 2222
Knowing that there is also an -e option for docker run, the problem is that I want to use those variables in the Dockerfile (they are used in some scripts) but overwrite them with the values generated by the --link option, i.e. something like:
docker run -d -e DB_HOST=$DB_PORT_1111_TCP_ADDR
This would use the host's defined environment variable which doesn't work here.
Is there a way to handle this?
This is a variable expansion issue; to resolve it, try the following:
docker run -d -e DB_HOST="$DB_PORT"_1111_TCP_ADDR
Once a Unix process is running, its environment variables can only be changed from inside the process, not from the outside, so they are somewhat non-dynamic by nature.
If you find Docker links limiting, you are not the only person out there. One simple solution would be to use WeaveDNS. With WeaveDNS you can simply use default ports (with a Weave overlay network there is no need to expose/publish/remap any internal ports) and resolve each component via DNS: your app would just look up db.weave.local, and doesn't need to be aware of the clunky environment-variable scheme that Docker links present. To get a better idea of how WeaveDNS works, check out one of the official getting-started guides. WeaveDNS effectively gives you service discovery without having to modify your application.
