How to dynamically set environment variables of linked containers? - docker

I have two containers, webinterface and db. webinterface is started with the --link option (linking it to db), which generates environment variables such as:
DB_PORT_1111_TCP=tcp://172.17.0.5:5432
DB_PORT_1111_TCP_PROTO=tcp
DB_PORT_1111_TCP_PORT=1111
DB_PORT_1111_TCP_ADDR=172.17.0.5
...
Now my webinterface container uses a Dockerfile in which some static environment variables are set to configure the connection:
ENV DB_HOST localhost
ENV DB_PORT 2222
I know there is also an -e option for docker run. The problem is that I want to use those variables in the Dockerfile (they are used in some scripts) but override them with the values generated by the --link option, i.e. something like:
docker run -d -e DB_HOST=$DB_PORT_1111_TCP_ADDR
This expands the variable in the host shell, where it is not defined, so it doesn't work here.
Is there a way to handle this?

This is a variable expansion issue; to resolve it, try the following:
docker run -d -e DB_HOST="$DB_PORT"_1111_TCP_ADDR
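Note that any $-expansion on the docker run command line happens in the host shell, where the link-generated variables don't exist. A minimal sketch of an alternative, assuming the image contains a POSIX shell and a hypothetical start script /app/start.sh, is to single-quote the expression so a shell inside the container expands it instead:
# single quotes keep the host shell from expanding the variable;
# the container's shell expands it, where DB_PORT_1111_TCP_ADDR exists
docker run -d --link db:db webinterface \
  sh -c 'export DB_HOST=$DB_PORT_1111_TCP_ADDR; exec /app/start.sh'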

With a Unix process that is already running, its environment variables can only be changed from inside the process, not from the outside, so they are somewhat non-dynamic by nature.
If you find Docker links limiting, you are not the only person out there. One simple solution would be to use WeaveDNS. With WeaveDNS you can simply use default ports (with a Weave overlay network there is no need to expose/publish/remap any internal ports) and resolve each component via DNS: your app would just look up db.weave.local, and doesn't need to be aware of the clunky environment-variable scheme that Docker links present. To get a better idea of how WeaveDNS works, check out one of the official getting-started guides. WeaveDNS effectively gives you service discovery without having to modify your application.
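As an illustrative sketch, assuming the db container is registered as db on the Weave network, the connection settings from the question reduce to a plain DNS name and the default PostgreSQL port:
# hypothetical: with WeaveDNS the app resolves the database by name,
# so no link-generated variables are needed
docker run -d -e DB_HOST=db.weave.local -e DB_PORT=5432 webinterface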

Related

Pass NEPTUNE_API_TOKEN environment variable via docker run command

Using the docker run command, I'm trying to pass my NEPTUNE_API_TOKEN to my container.
My understanding is that I should use the -e flag as follows: -e ENV_VAR='env_var_value' and that might work.
I wish, however, to use the value existing in the already-running session, as follows:
docker run -e NEPTUNE_API_TOKEN=$(NEPTUNE_API_TOKEN) <my_image>
However, after doing so, NEPTUNE_API_TOKEN is set to empty when checking the value inside the container.
My question is whether I'm doing something wrong or if this is not possible and I must provide an explicit Neptune API token as a string.
$(NEPTUNE_API_TOKEN) is the syntax for running a command and grabbing its output (command substitution). Use $NEPTUNE_API_TOKEN instead.
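Assuming NEPTUNE_API_TOKEN is exported in the current session, either of the following should work (the second form, which copies the value from the host environment, is the documented docker run shorthand):
# expand the host variable explicitly (quotes guard against spaces)
docker run -e NEPTUNE_API_TOKEN="$NEPTUNE_API_TOKEN" <my_image>
# or let docker copy the value from the host environment
docker run -e NEPTUNE_API_TOKEN <my_image>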
You can set up and pass NEPTUNE_API_TOKEN as:
1. A docker run environment variable, e.g. docker run -e NEPTUNE_API_TOKEN="<YOUR_API_TOKEN>" <image-name>
2. A Dockerfile environment variable
3. A Docker secret
Neptune will work with any of the methods described above.
For your case, I believe methods 2 and 3 will work best, as you will set the API token only once and all containers can reuse it. Additionally, they are more secure methods.
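As a minimal sketch of method 2 (the base image is a hypothetical choice, and note that this bakes the token into the image, so it is only appropriate if the image itself stays private):
# hypothetical Dockerfile fragment: the token is set once at build
# time and inherited by every container started from this image
FROM python:3.10
ENV NEPTUNE_API_TOKEN="<YOUR_API_TOKEN>"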
You can read this guide I created last year on how to use Neptune with Docker.
Docs: https://docs.neptune.ai/how-to-guides/automation-pipelines/how-to-use-neptune-with-docker

Binding ports when running Docker images in Singularity

I am currently working on a distributed graph processing platform which maintains an Akka cluster inside of docker containers and have recently been granted access to a large cluster to test this. Unfortunately, this cluster does not run docker, only singularity.
This did not initially seem to be an issue, as Singularity supports Docker images. However, due to the nature of the Akka cluster, I have to pass several environment variables and bind several ports. As an example, a 'Partition Manager' within the system would be run with the following command:
docker run -p $PM0Port:2551 --rm -e "HOST_IP=$IP" -e "HOST_PORT=$PM0Port" -v $entityLogs:/logs/entityLogs $Image partitionManager $PM0ID $NumberOfPartitions $ZooKeeper
From looking through the Singularity documentation I can see that I can create a 'Singularity' definition file and specify the environment variables there, but there doesn't seem to be any documentation on binding custom ports. Nor does it explain how I could pass arguments to the default entrypoint (the project is built with 'sbt docker:publish', so I am not sure where the entrypoint is defined in order to reassign it).
Even if this were the solution, since there are multiple actor types (and several instances of each), specifying environment variables and ports in a definition file would seem to require templating, creating the files at run time, and building an image for each individual actor.
I am sure I have completely missed a page somewhere which would nicely translate this docker command into the equivalent singularity, but I just can't find it.
There is no network isolation in Singularity, so there is no need to map any port. If the process inside the container binds to an IP:port, it will be immediately reachable on the host.
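As a hedged sketch of the equivalent invocation (assuming Singularity 3.x, where SINGULARITYENV_-prefixed host variables are injected into the container, --bind replaces -v, and arguments after the image are passed to the image's entrypoint/runscript):
# no -p needed: the container shares the host's network namespace
SINGULARITYENV_HOST_IP=$IP \
SINGULARITYENV_HOST_PORT=$PM0Port \
singularity run --bind $entityLogs:/logs/entityLogs \
  docker://$Image partitionManager $PM0ID $NumberOfPartitions $ZooKeeper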

Is it possible to customize environment variables by linking two docker containers?

I've created a docker image for my database server and one for the web application. Using the documentation, I'm able to link the two containers using environment variables as follows:
value="jdbc:postgresql://${DB_PORT_5432_TCP_ADDR}:${DB_PORT_5432_TCP_PORT}/db_name"
It works fine now, but it would be better if the environment variables were more general and did not contain a static port number. Something like:
value="jdbc:postgresql://${DB_URL}:${DB_PORT}/db_name"
Is there any way to alias the environment variables? For example, by using the ENV instruction in the Dockerfile (ENV DB_URL=$DB_PORT_5432_TCP_ADDR), or by using the --env argument when running the image (docker run ... -e DB_URL=$DB_PORT_5432_TCP_ADDR docker_image)?
Without building this kind of functionality into your docker startup shell scripts or another orchestration mechanism, it is not possible at the moment to create environment variables like the ones you are describing. You do mention a couple of workarounds; however, the problem with using -e DB_URL=... in your docker run command is that $DB_PORT_5432_TCP_ADDR is set inside the container, not in the host shell where docker run is invoked, so the host cannot expand it. Typically this is what your orchestration layer is for: service discovery and passing this kind of data among your containers. There is at least one workaround mentioned here on SO that involves constructing a small shell script, referenced from your CMD or ENTRYPOINT directive, that maps the environment variables inside the container, as sketched below.
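A minimal sketch of that entrypoint workaround (the script name and the generic variable names are assumptions for illustration):
#!/bin/sh
# entrypoint.sh: map the link-generated variables to the generic
# names the application expects, then hand off to the real command
export DB_URL="$DB_PORT_5432_TCP_ADDR"
export DB_PORT="$DB_PORT_5432_TCP_PORT"
exec "$@"
In the Dockerfile you would then point ENTRYPOINT at this script (ENTRYPOINT ["/entrypoint.sh"]) and keep your usual CMD.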

How to restrict environment variables passed to linked containers

We've started to use docker extensively (and we love it), but have discovered a rather nasty security issue. Linked containers have full access to the source container's environment settings.
For example, say you create a mysql container.
docker run --name db -e MYSQL_ROOT_PASSWORD=mysecretpassword -d mysql
And now you create a wordpress container
docker run --name wp --link db:db \
-e WORDPRESS_DB_USER=wp \
-e WORDPRESS_DB_PASSWORD=1234 \
-d wordpress
If you now inspect the environment in the wordpress container, you'll be able to see the mysql root password.
docker exec -i wp sh -c "env | grep ^MYSQL_ENV"
MYSQL_ENV_MYSQL_MAJOR=5.7
MYSQL_ENV_MYSQL_ROOT_PASSWORD=mysecretpassword
MYSQL_ENV_MYSQL_VERSION=5.7.5-m15
This is a major security hole! Any random code or module within the wordpress container could use the mysql root password to connect and wreak havoc. And if the mysql database is shared with multiple wordpress containers (and joomla containers), the havoc could be global.
My question is, is there a way to limit what environment variables are passed between linked containers?
A secondary question -- I've scrutinized the docs on linking containers https://docs.docker.com/userguide/dockerlinks/#environment-variables
But it does NOT describe this behavior. I was thinking maybe this was an unintended side effect, and perhaps I should open a bug report?
My question is, is there a way to limit what environment variables are passed between linked containers?
If this is a concern in your environment, your best bet may be to adopt a solution other than container linking for service discovery. For example, you could use one of the various etcd-backed discovery mechanisms out there: either using etcd directly, or something like consul, registrator, or skydns.
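As an illustrative sketch, assuming a registrator + consul setup in which the mysql container has registered itself under the name db, the wordpress container would discover the database via consul's DNS interface, so no credentials or variables cross container boundaries:
# inside the wordpress container: resolve the service by DNS name
# instead of reading link-generated environment variables
dig +short db.service.consul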

Packaging an app in docker that can be configured at run time

I have packaged a web app I've been working on as a docker image.
I want to be able to start the image with some configuration, like this is the url of the couchdb server to use, etc.
What is the best way of supplying configuration? My app relies on environment variables; can I set these at run time?
In addition to setting environment variables during docker run (using -e/--env and --env-file), as you already discovered, there are other options available (each is illustrated in the sketch after this list):
Using --link to link your container to (for instance) your couchdb server. This works if your server is also a container (or if you use an ambassador container to another server). Linking containers makes some environment variables available, including the server's IP and port, which your script can use. This is suitable if you only need to set references to services.
Using volumes. Volumes defined in the Dockerfile can be mapped to host folders, so you can use them to access configuration files, for instance. This is useful for very complex configurations.
Extending the image. You can create a new image based on your original one and ADD custom configuration files or ENV entries. This is the least flexible option, but it is useful for simplifying the launch when the configuration is mostly static (probably a bad idea for services/hostnames, but it can work for frameworks that are configured differently for dev and production). It can be combined with any of the above.
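A brief sketch of the three options (image names and paths are hypothetical):
# 1. link to a couchdb container; the app reads the generated variables
docker run --link couchdb:db mywebapp
# 2. mount a host configuration file into the container
docker run -v /srv/myapp/config.json:/app/config.json mywebapp
# 3. extend the image with a Dockerfile such as:
#      FROM mywebapp
#      ENV COUCHDB_URL=http://db:5984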
It seems docker supports setting env variables - should have read the manual!
docker run -e MYVAR1 --env MYVAR2=foo --env-file ./env.list ubuntu bash
http://docs.docker.com/reference/commandline/cli/#run
