Docker Compose - Command using Container Environment Variable - docker

I'm using Docker Compose to link a master and a slave service together. Compose automatically injects the slave container with environment variables containing the various ports and IPs needed to connect to the master container.
The service accepts the IP/port of the master via a command-line argument, so I set this in my commands:
master:
  command: myservice
  ports:
    - '29015'
slave:
  command: myservice --master ${MASTER_PORT_29015_TCP_ADDR}:${MASTER_PORT_29015_TCP_PORT}
  links:
    - master:master
The problem is that the environment variables like MASTER_PORT_29015_TCP_PORT are evaluated when the compose command is run, and not from within the container itself where they are actually set.
When starting the cluster, you see the warning: WARNING: The MASTER_PORT_29015_TCP_ADDR variable is not set. Defaulting to a blank string.
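The substitution follows plain POSIX-shell semantics: an unset variable expands to an empty string. A minimal sketch of what the slave's command collapses to (the variable name is the one from the compose file above):

```shell
# With the variable unset in the environment that runs docker-compose,
# Compose substitutes a blank string, so the --master argument collapses:
unset MASTER_PORT_29015_TCP_ADDR
arg="--master ${MASTER_PORT_29015_TCP_ADDR}:29015"
echo "$arg"   # → --master :29015
```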
I tried setting entrypoint: ["/bin/sh", "-c"], but this produced unusual behaviour where the service wouldn't see any variables at all. (For information, the service I'm actually using is RethinkDB.)
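That unusual behaviour is consistent with how sh -c consumes its arguments: with entrypoint ["/bin/sh", "-c"], Compose appends the command words after -c, so only the first word becomes the script and the rest turn into positional parameters. A small sketch (myservice and the flag are placeholders):

```shell
# Everything after the -c string becomes $0, $1, ... of the script,
# not part of the script itself:
out=$(sh -c 'echo "ran with \$0=$0 \$1=$1"' myservice --master addr)
echo "$out"   # → ran with $0=myservice $1=--master
```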

As stated in the documentation, link environment variables are now discouraged, and you should just write master instead of $MASTER_PORT_29015_TCP_ADDR. Moreover, there doesn't seem to be any point in writing $MASTER_PORT_29015_TCP_PORT when you know its value is going to be 29015.
Hence, change the command to:
myservice --master master:29015


How to run service with deploy.replica=0 in docker-compose.yml?

I have services in my docker-compose.yml configuration that I would occasionally use, such as for end-to-end testing, linting, or some one-off service.
Something along the lines of this:
app:
  ...
e2e-or-linter-or-one-off:
  ...
  deploy:
    replicas: 0
With replicas set to 0, docker-compose up would not spin up the e2e-or-linter-or-one-off service when I just want to run my regular app container(s).
And when I would need that e2e-or-linter-or-one-off service, I want to do something like this:
docker-compose run e2e-or-linter-or-one-off bash
Is there a way to define a service that doesn't spin up on docker-compose up but is still able to be used with docker-compose run?
docker-compose up has a --scale flag that I can use if I wanted to spin everything up, such as:
docker-compose up --scale "e2e-or-linter-or-one-off"=1 e2e-or-linter-or-one-off
But docker-compose run doesn't have a similar flag, and I need docker-compose run so I can run the container interactively. Without it, this:
docker-compose run e2e bash
won't work and Docker returns: no containers to start
Thank you for your help 🙏
This article shows a way to use an environment variable for the replica count, allowing you to change the value at invocation-time:
app:
  ...
e2e-or-linter-or-one-off:
  ...
  deploy:
    replicas: ${E2E_REPLICAS:-0}
I modified the example a bit so you don't need to have the env var set 100% of the time. The :- in the variable expansion is an operator that says "use the default value to the right if the name to the left is unset or empty".
Now running docker-compose up will start every service with one or more replicas defined, while invoking E2E_REPLICAS=1 docker-compose run --rm e2e-or-linter-or-one-off bash will set the environment variable (overriding the default of 0), create the container, and run bash. When you're done with your shell session, the --rm flag tears the container down so the environment returns to its normal operational state.
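The ${VAR:-default} expansion Compose uses here follows POSIX shell semantics, so its behaviour is easy to verify in a shell:

```shell
# :- substitutes the default when the variable is unset or empty
unset E2E_REPLICAS
first="${E2E_REPLICAS:-0}"     # unset → 0
E2E_REPLICAS=""
second="${E2E_REPLICAS:-0}"    # empty → 0
E2E_REPLICAS=1
third="${E2E_REPLICAS:-0}"     # set   → 1
echo "$first $second $third"   # → 0 0 1
```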

Naming Dask Worker with Docker Swarm Templating

I'm currently using Docker Swarm to deploy/manage multiple Dask workers across a cluster. For easier debugging I'd like to be able to name each worker based on which node in the Swarm it is running on.
The dask-worker command has a --name parameter; however, Docker's templating doesn't seem to work in the entrypoint or cmd options, e.g.:
...
worker:
  image: myapp:latest
  restart: always
  entrypoint: ["dask-worker", "tcp://scheduler:8786", "--name", "{{.Node.Hostname}}"]
  deploy:
    mode: global
...
Unfortunately, the {{.Node.Hostname}} templating only appears to work in the environment section of a docker-compose.yml file. So my next option was to try and set it via an environment variable like this:
...
worker:
  image: myapp:latest
  restart: always
  entrypoint: ["dask-worker", "tcp://scheduler:8786"]
  environment:
    DASK_DISTRIBUTED__WORKER__NAME: "{{.Node.Hostname}}"
  deploy:
    mode: global
...
I've also been unsuccessful with this, as I assume the worker's name cannot be set via an environment variable, though I've not been able to find exhaustive documentation of all the supported environment-variable config names for Dask, so it could be a typo or an incorrect guess.
Finally, I tried to take the environment variable and bring it back into the entrypoint command via shell substitution. This also did not work, as it appeared that the environment variable was not set at the time the command was evaluated:
...
worker:
  image: myapp:latest
  restart: always
  entrypoint: ["sh", "-c", "dask-worker tcp://scheduler:8786 --name $FOO"]
  environment:
    FOO: "{{.Node.Hostname}}"
  deploy:
    mode: global
...
I'm out of ideas at this point, and wondering if anyone could figure out a way.
Thanks.
You can use service template variables in service environments and hostnames and, if my memory serves me right, inside volume declarations.
That said, wouldn't services.worker.hostname: "{{.Node.Hostname}}" do the trick? Also, your last attempt might work if you escape $FOO in your command. If my memory serves right, you need to escape the $ sign with another $ sign in front, so instead of $FOO, try $$FOO. Otherwise Compose applies variable substitution on the host when the container starts, and NOT when the command executes inside the container.
If that does not work, you can still write a small entrypoint script that wraps your command and uses the environment variable you declared.
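A sketch of that wrapper entrypoint, assuming the FOO variable from the last attempt (the script name is hypothetical, and echo stands in for the real exec of dask-worker):

```shell
#!/bin/sh
# entrypoint.sh (hypothetical): by the time this runs inside the
# container, Swarm has already filled FOO via {{.Node.Hostname}}.
FOO="${FOO:-unnamed-node}"   # simulated fallback for this sketch
cmd="dask-worker tcp://scheduler:8786 --name $FOO"
echo "$cmd"                  # the real script would: exec $cmd
```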

Passing environmental variables when deploying docker to remote host

I am having some trouble with my Docker containers and environment variables.
Currently I have a docker-compose.yml with the following defined:
version: '2.1'
services:
  some-service:
    build:
      context: .
    image: image/replacedvalues
    ports:
      - 8080
    environment:
      - PROFILE=acc
      - ENVA
      - ENVB
      - TZ=Europe/Berlin
  some-service-acc:
    extends:
      service: some-service
    environment:
      - SERVICE_NAME=some-service-acc
Now when I deploy this manually (via the SSH command line directly) on server A, it will take the environment variables from server A and put them in my container, so I have the values of ENVA and ENVB from the host in my container. I use the following command (after building the image, of course): docker-compose up some-service-acc.
We are currently developing a better infrastructure and want to deploy services via Jenkins. Jenkins is up and running in a docker container on server B.
I can deploy the service via Jenkins (Job-DSL, setting DOCKER_HOST="tcp://serverA:2375" temporarily), so it will run all docker (compose) commands on server A from the Jenkins container on server B. The service is up and running, except that it doesn't have values for ENVA and ENVB.
Jenkins runs the following with the Job-DSL groovy script:
withEnv(["DOCKER_HOST=tcp://serverA:2375"]) {
    sh "docker-compose pull some-service-acc"
    sh "docker-compose -p some-service-acc up -d some-service-acc"
}
I tried setting them in my Jenkins container and on server B itself, but neither worked. Only when I deploy manually, directly on server A, does it work.
When I use docker inspect on the running container, I get the following output for the env block:
"Env": [
    "PROFILE=acc",
    "affinity:container==JADFG09gtq340iggIN0jg53ij0gokngfs",
    "TZ=Europe/Berlin",
    "SERVICE_NAME=some-service-acc",
    "ENVA",
    "ENVB",
    "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
    "LANG=C.UTF-8",
    "JAVA_VERSION=8",
    "JAVA_UPDATE=121",
    "JAVA_BUILD=13",
    "JAVA_PATH=e9e7ea248e2c4826b92b3f075a80e441",
    "JAVA_HOME=/usr/lib/jvm/default-jvm",
    "JAVA_OPTS="
]
Where do I need to set the environment variables so that they will be passed to the container? I'd prefer to store the variables on server A. But if this is not possible, can someone explain to me how it could be done? It is not an option to hardcode the values in the compose file or anywhere else in the source, as they contain sensitive data.
If I am asking this in the wrong place, please redirect me to where I should be.
Thanks!
You need to set the environment variables in the shell that is running the docker-compose command line. In Jenkins, that's best done inside your Groovy script (Jenkins doesn't use the host environment within the build slave):
withEnv(["DOCKER_HOST=tcp://serverA:2375", "ENVA=hello", "ENVB=world"]) {
    sh "docker-compose pull some-service-acc"
    sh "docker-compose -p some-service-acc up -d some-service-acc"
}
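The underlying rule is ordinary environment inheritance: docker-compose (and through it, the container) only sees variables that exist in the process invoking it, which is exactly what withEnv arranges. In shell terms (hello/world are example values):

```shell
# Inline assignments before a command are exported to that child process:
out=$(ENVA=hello ENVB=world sh -c 'echo "$ENVA $ENVB"')
echo "$out"   # → hello world
```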
Edit: from the comments, you also want to pass secrets.
To do that, there are plugins like Mask Passwords that allow you to pass variables without them showing up in the logs or job configuration. (I'm fairly certain a determined intruder could still get to the values, since Jenkins itself knows them and passes them to your script in clear text.)
The better option IMO is to use a secrets-management tool inside of Docker. HashiCorp has their Vault product, which implements an encrypted K/V store where values are accessed with a time-limited token, and offers the ability to generate new passwords per request with integration into the target system. I'd consider this the highest level of security when fully configured, but you can configure it countless ways to suit your own needs. You'll need to write something to pull the secret and inject it into your container's environment (it's a REST protocol that you can add to your entrypoint).
The latest option from Docker itself is secrets management, which requires the new Swarm mode. You save your secret in the swarm and add it to the containers you want as a file, using an entry in the version 3 docker-compose.yml format. If you already use Swarm mode and can start your containers with docker stack deploy instead of docker-compose, this is a fairly easy solution to implement.

Variable substitution not working on Windows 10 with docker compose

I'm wondering if I've stumbled on a bug or that there's something not properly documented about variable substitution on Windows in combination with Docker Machine and Compose (installed version of docker is 1.11.1).
If I run the "docker-compose up" command for a yml file that looks like this:
volumes:
  - ${FOOBAR}/build/:/usr/share/nginx/html/
If this variable doesn't exist, Docker Compose will correctly complain about it:
The FOOBAR variable is not set. Defaulting to a blank string.
However, when I change it to an existing environment variable:
volumes:
  - ${PROJECT_DIR}/build/:/usr/share/nginx/html/
It will then fail to start the container properly and display the following error (trying to access the nginx container will give you a host-is-unreachable message):
ERROR: for nginx rpc error: code = 2 desc = "oci runtime error: could not synchronise with container process: not a directory"
If I run the echo command in the Docker Quickstart Terminal it will output the correct path that I've set in the environment variable. If I replace the ${PROJECT_DIR} with the environment variable value the container runs correctly.
I get the same type of error message if I try to use the environment variable for the official php image instead of the official nginx image. In both cases the docker compose file works if I substitute ${PROJECT_DIR} text with the content of the environment variable.
So is this a bug or am I missing something?
After some mucking about I've managed to get the containers to start correctly without error messages if I use the following (contains the full path to the local files):
volumes:
  - ${PROJECT_DIR}:/usr/share/nginx/html/
The nginx container is then up and running, though it cannot find the files anymore. If I replace the variable with the path it contains, it can find the files again.
The above behaviour isn't consistent. When I added a second environment variable for substitution, it gave the oci runtime error; it kept giving it when I removed that second variable, and only started working again when I also removed the first variable. After that it suddenly accepted ${PROJECT_DIR}/build/, but still without finding files.
Starting a bash session to the nginx container shows that the mount point for the volume contains no files.
I'm really at a loss here what docker is doing and what it expects from me. Especially as I have no idea to what it is expanding the variables in the compose file.
In the end the conclusion is that variable substitution is too quirky on Windows with Docker Machine to be useful. However, there is an alternative to variable substitution.
If you need a docker environment that does the following:
Can deploy on different computers that don't run the same OS
Doesn't care if the host uses Docker natively or via VirtualBox (this can require path changes)
Then your best bet is to use extending.
First you create the docker-compose.yml file that contains the images you'll need, for example a PHP image with MySQL:
php:
  image: php:5.5-apache
  links:
    - php_db:mysql
    - maildev:maildev
  ports:
    - 8080:80
php_db:
  image: mariadb
  ports:
    - 3306:3306
  environment:
    MYSQL_ROOT_PASSWORD: examplepass
You might notice that there aren't any volumes defined in this docker-compose file. That is something we're going to define in a file called docker-compose.override.yml:
php:
  volumes:
    - /workspaces/Eclipse/project/:/var/www/html/
When you have both files in one directory, docker-compose does something interesting: it combines them into one, adding/overwriting the settings in docker-compose.yml with those present in docker-compose.override.yml.
Then when running the command docker-compose up it will result in a docker run that is configured for the machine you're working on.
You can get similar behaviour with custom file names if you change a few things in your docker-compose command:
docker-compose -f docker-compose.yml -f docker-compose.conf.yml up
The trick is that docker-compose can accept multiple compose files and will combine them into one. This happens from left to right.
Both methods allow you to create a basic compose file that configures the containers you need. You can then override/add the settings you need for the specific computer you're running Docker on.
The page Overview of docker-compose CLI has more details on how these commands work.

Alias service environment var in a Docker container

I use Docker Compose to spin up my containers. I have a RethinkDB service container that exposes (amongst others) the host port in the following env var: APP_RETHINKDB_1_PORT_28015_TCP_ADDR.
However, my app must receive this host as an env var named RETHINKDB_HOST.
My question is: how can I alias the given env var to the desired one when starting the container (preferably in the most Dockerish way)? I tried:
env_file: .env
environment:
  - RETHINKDB_HOST=$APP_RETHINKDB_1_PORT_28015_TCP_ADDR
but first, it doesn't work and second, it doesn't look as if it's the best way to go.
When one container is linked to another, it sets environment variables, but also a hosts entry. For example,
ubuntu:
  links:
    - rethinkdb:rethinkdb
will allow ubuntu to ping rethinkdb and have it resolve the IP address. This would allow you to set RETHINKDB_HOST=rethinkdb. This won't work if you are relying on that variable for the port, however, but that's the only thing I can think of besides adding a startup script or modifying your CMD.
If you want to modify your CMD, which is currently set to command: service rethink start, for example, just change it to prepend the variable assignment, e.g.
command: sh -c 'RETHINKDB_HOST=$$APP_RETHINKDB_1_PORT_28015_TCP_ADDR service rethink start'
The $$ stops Compose from substituting the variable on the host, and the prefix assignment (rather than an assignment followed by &&) exports it to the service process. The approach would be similar if you were using a startup script; you would just add that variable assignment as a line before the service starts.
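One shell detail to be aware of with this pattern: a plain VAR=value assignment inside sh -c sets an unexported shell variable that child processes (like the service) won't inherit, whereas a prefix assignment (VAR=value command) is exported to that command. A quick sketch with placeholder values:

```shell
# Prefix assignment: the child process sees the variable
seen=$(RETHINKDB_HOST=db.internal sh -c 'printf %s "$RETHINKDB_HOST"')
# Plain (unexported) assignment: a child process does not see it
RETHINKDB_UNEXPORTED=db.internal
unseen=$(sh -c 'printf %s "${RETHINKDB_UNEXPORTED:-<empty>}"')
echo "seen=$seen unseen=$unseen"   # → seen=db.internal unseen=<empty>
```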
The environment variable name APP_RETHINKDB_1_PORT_28015_TCP_ADDR you are trying to use already contains the port number; it is already kind of "hard-coded". I think you simply have to use this:
environment:
  - RETHINKDB_HOST=28015
