Alias service environment var in a Docker container

I use Docker Compose to spin up my containers. I have a RethinkDB service container that exposes (amongst others) the host port in the following env var: APP_RETHINKDB_1_PORT_28015_TCP_ADDR.
However, my app must receive this host as an env var named RETHINKDB_HOST.
My question is: how can I alias the given env var to the desired one when starting the container (preferably in the most Dockerish way)? I tried:
env_file: .env
environment:
  - RETHINKDB_HOST=$APP_RETHINKDB_1_PORT_28015_TCP_ADDR
but first, it doesn't work, and second, it doesn't look like the best way to go.

When one container is linked to another, it sets the environment variable, but also a host entry. For example,
ubuntu:
  links:
    - rethinkdb:rethinkdb
will allow ubuntu to ping rethinkdb and resolve its IP address. This would allow you to set RETHINKDB_HOST=rethinkdb. This won't work if you are relying on that variable for the port, though; in that case the only alternatives I can think of are adding a startup script or modifying your CMD.
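As a sketch (the app service name here is an assumption based on the question), the compose entry could look like:
app:
  links:
    - rethinkdb:rethinkdb
  environment:
    - RETHINKDB_HOST=rethinkdb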
If you want to modify your CMD, which is currently set to command: service rethink start, for example, change it so the variable is exported before the service starts. Note the export (a bare VAR=x && cmd assignment would not reach the service) and, with recent versions of Compose, the doubled $$, which stops Compose from substituting the variable itself and defers expansion to the shell inside the container:
command: sh -c 'export RETHINKDB_HOST=$$APP_RETHINKDB_1_PORT_28015_TCP_ADDR && service rethink start'
The approach is similar if you are using a startup script: add that variable assignment (with an export) on a line before the service starts.
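For instance, a minimal startup script could look like this (a sketch; the service command is taken from the example above):
#!/bin/sh
# Alias the Compose-injected variable before the service starts
export RETHINKDB_HOST="$APP_RETHINKDB_1_PORT_28015_TCP_ADDR"
service rethink start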

The environment variable name APP_RETHINKDB_1_PORT_28015_TCP_ADDR you are trying to use already contains the port number in its name, so the port is already "hard-coded". I think you simply have to use this:
environment:
  - RETHINKDB_HOST=28015

Related

Docker container name resolution inside and outside

I have a flask app that uses rabbitmq, where both are docker containers (along with other components, such as celery workers). I want to use a common .env environment file for both dev and container use in my docker-compose.
Example .env
RABBITMQ_DEFAULT_HOST=localhost
Now, if I use this with flask run, it works fine, as the container's rabbitmq port is mapped to the host. If I run this inside the flask docker container, it fails, because localhost in the flask container is not the same as the host. If I change localhost to my container name, rabbitmq:
RABBITMQ_DEFAULT_HOST=rabbitmq
It resolves nicely inside the flask container via docker to the dynamic IP of the rabbitmq container (a local port mapping is not even necessary); however, flask run during development has no knowledge of this name/IP mapping and will fail.
Is there any easy way to handle this so it's easily portable to other devs and just "works" when either outside using flask run or inside the container via docker-compose?
I'd also like to limit the port exposure if possible, such as 127.0.0.1:5672:5672.
Update
So far, this is the best I've come up with: in the program, I use a socket to check whether the name resolves; if not, it falls back to the env var with a default of localhost.
import os
import socket

def get_rabbitmq_host() -> str:
    try:
        return socket.gethostbyname("rabbitmq")  # container name
    except socket.gaierror:
        return os.getenv("RABBITMQ_DEFAULT_HOST", "localhost")
Here is another method I tried that's a lot faster (no DNS timeout), but it changes the order a bit.
def get_rabbitmq_host() -> str:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(1)
    result = sock.connect_ex(("127.0.0.1", 5672))
    sock.close()
    if result == 0:
        return "127.0.0.1"
    elif os.getenv("RABBITMQ_DEFAULT_HOST") in ("localhost", "127.0.0.1"):
        return "rabbitmq"
    else:
        return os.getenv("RABBITMQ_DEFAULT_HOST", "rabbitmq")
Well no, not really. Or yes, depending on how you view it.
Since you have now found out that localhost does not mean the same thing in every context, maybe you should split up the variables, even though in some situations they may have the same value.
So just something like
rabbit_mq_internal_host=localhost
rabbit_mq_external_host=rabbitmq #container name!
Is there any easy way to handle this so it's easily portable to other devs and just "works" when either outside using flask run or inside the container via docker-compose?
Well: that is the point of the .env files. You have two different environments there, so make two different .env files. Or let everyone adjust the .env file according to their preferred way of running the app.
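For example (the file names here are just a suggestion):
# .env.host: used when running flask run on the host
RABBITMQ_DEFAULT_HOST=localhost
# .env.docker: used inside docker-compose
RABBITMQ_DEFAULT_HOST=rabbitmq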
I'd also like to limit the port exposure if possible, such as 127.0.0.1:5672:5672
If you connect from container to container within a docker network, you do not need to publish the port at all; only ports that have to be accessed from outside the network need publishing.
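A sketch of what that could look like for the rabbitmq service (the image tag is an assumption):
rabbitmq:
  image: rabbitmq:3
  ports:
    - "127.0.0.1:5672:5672"
With this, the port is published only on the host's loopback interface, while other containers on the same compose network can still reach rabbitmq:5672 without any ports entry.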
I am not sure I completely understood your situation. I am assuming that you are developing the application and have environments you would like to keep separated, for example localhost, development, test, etc.
With that assumption, I would suggest having one env file per environment, such as env_localhost and env_development, where each key=value is set according to that environment. Also, keep an env.template file with empty key= entries so that anyone who does not want a Docker-based run can set things up accordingly in a new file and call it .env.
Once the above is created, you can modify the docker build for the app (the Dockerfile, I mean) using the following snippet. The important parts are the build argument called SETUP and the renaming of the chosen env file to .env during the build process:
# ... Other build commands follow
WORKDIR /usr/src/backend
COPY ./backend .
ARG SETUP=development # Build argument we will pass in at build time; defaults to development.
COPY ./backend/env_${SETUP} .env # Renames the chosen env file to .env; how SETUP is passed during docker-compose build is shown next.
# ... Other build commands follow
After modifying the Dockerfile, you can run docker-compose build for a given environment by passing SETUP as a build argument, as follows:
docker-compose build --build-arg SETUP=localhost your_service_here
Additionally, once this process is stable, you can create a Makefile and have make build-local, make build-dev, and so on.
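A minimal sketch of such a Makefile (the service name is taken from the build command above; recipe lines must be indented with tabs):
build-local:
	docker-compose build --build-arg SETUP=localhost your_service_here
build-dev:
	docker-compose build --build-arg SETUP=development your_service_here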

Makefile: extract an env variable from a docker container and use it in another docker container

I have an app that runs on several docker containers. To simplify my problem, let's say I have 3 containers: one for MySQL and 2 for 2 instances of the api (sharing the same volume where the code is, but with a different env specifying different database settings), as configured in the following docker-compose.yml:
services:
  api-1:
    image: mynamespace/my-image-name:1.0
    environment:
      DB_NAME: db_api_1
  api-2:
    image: mynamespace/my-image-name:1.0
    environment:
      DB_NAME: db_api_2
In a Makefile I have rules for deploying the containers and installing the database for each of my api instances.
What I am trying to achieve is to create a make rule that dumps a database for a given env. Knowing that I have no MySQL client installed on my api instances, I thought there should be a way to extract the env variables I need (with printenv VARNAME) from an api container and then use them in the database container.
Does anyone know how this could be achieved?
Assuming that it's an environment variable that you set using the -e option to docker run, you could do something like this:
docker exec api_container sh -c 'echo $VARNAME'
If it is an environment variable that was set inside the container, e.g. from a script, then you're mostly out of luck. You could of course inspect /proc/<pid>/environ, but that's hacky and I wouldn't recommend it.
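Putting the first variant to work for the dump use case, a rough sketch (the container names api_1 and mysql_1 and the mysqldump credentials are assumptions):
DB_NAME=$(docker exec api_1 printenv DB_NAME)
docker exec mysql_1 sh -c "exec mysqldump -uroot -p\"\$MYSQL_ROOT_PASSWORD\" $DB_NAME" > "$DB_NAME.sql"
The escaped \$MYSQL_ROOT_PASSWORD is expanded inside the database container rather than on the host.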
It also sounds as if you would benefit from using something like docker-compose to manage your containers.

How to share a value between all docker containers spun up by the same "docker-compose up" call?

Context
We are migrating an older application to docker, and as a first step we're working against some constraints. The database cannot yet be put in a container; moreover, it is shared between all developers in our team. So this question is about a fix for a temporary problem.
To avoid clashing with other developers using the same database, there is a system in place where each developer machine starts the application with a value that is unique to that machine. Each container should use this same value.
Question
We are using docker-compose to start the containers. Is there a way to provide a (environment) variable to it that gets propagated to all containers?
How I'm trying to do it:
My docker-compose.yml looks kind of like this:
my_service:
  image: my_service:latest
  command: ./my_service.sh
  extends:
    file: base.yml
    service: base
  environment:
    - batch.id=${BATCH_ID}
then I thought running BATCH_ID=$somevalue docker-compose up my_service would fill in the ${BATCH_ID}, but it doesn't seem to work that way.
Is there another way? A better way?
Optional: ideally everything would be self-contained, so that a developer can just call docker-compose up my_service and compose itself calculates a value to pass to all the containers. But from what I see online, I think this is not possible.
You are correct. Alternatively you can just specify the env var name:
my_service:
  environment:
    - BATCH_ID
The variable BATCH_ID is then taken from the environment in which docker-compose runs and passed to the container under the same name.
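For example, with the compose snippet above (the value is illustrative):
export BATCH_ID=some-unique-value
docker-compose up my_service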
I don't know what I changed, but suddenly it works as described.
BATCH_ID is the name of the environment variable on the host.
batch.id will be the name of the environment variable inside the container.
my_service:
  environment:
    - batch.id=${BATCH_ID}

Passing environment variables when deploying docker to remote host

I am having some trouble with my docker containers and environment variables.
Currently I have a docker-compose.yml with the following defined:
version: '2.1'
services:
  some-service:
    build:
      context: .
    image: image/replacedvalues
    ports:
      - 8080
    environment:
      - PROFILE=acc
      - ENVA
      - ENVB
      - TZ=Europe/Berlin
  some-service-acc:
    extends:
      service: some-service
    environment:
      - SERVICE_NAME=some-service-acc
Now when I deploy this manually (via the SSH command line, directly on Server A), it takes the environment variables from Server A and puts them in my container, so I have the values of ENVA and ENVB from the host in my container. I use the following command (after building the image, of course): docker-compose up some-service-acc.
We are currently developing a better infrastructure and want to deploy services via Jenkins. Jenkins is up and running in a docker container on server B.
I can deploy the service via Jenkins (Job-DSL, setting DOCKER_HOST="tcp://serverA:2375" temporarily), so it runs all docker (compose) commands on Server A from the Jenkins container on Server B. The service comes up and runs, except that it has no values for ENVA and ENVB.
Jenkins runs the following with the Job-DSL groovy script:
withEnv(["DOCKER_HOST=tcp://serverA:2375"]) {
sh "docker-compose pull some-service-acc"
sh "docker-compose -p some-service-acc up -d some-service-acc"
}
I tried setting them in my Jenkins container and on Server B itself, but neither worked. It only works when I deploy manually, directly on Server A.
When I use docker inspect on the running container, I get the following output for the Env block:
"Env": [
"PROFILE=acc",
"affinity:container==JADFG09gtq340iggIN0jg53ij0gokngfs",
"TZ=Europe/Berlin",
"SERVICE_NAME=some-service-acc",
"ENVA",
"ENVB",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"LANG=C.UTF-8",
"JAVA_VERSION=8",
"JAVA_UPDATE=121",
"JAVA_BUILD=13",
"JAVA_PATH=e9e7ea248e2c4826b92b3f075a80e441",
"JAVA_HOME=/usr/lib/jvm/default-jvm",
"JAVA_OPTS="
]
Where do I need to set the environment variables so that they are passed to the container? I would prefer to store the variables on Server A, but if this is not possible, can someone explain how it could be done? It is not an option to hardcode the values in the compose file or anywhere else in the source, as they contain sensitive data.
If I am asking this in the wrong place, please redirect me to where I should be.
Thanks!
You need to set the environment variables in the shell that runs the docker-compose command line. In Jenkins, that's best done inside your Groovy script (Jenkins doesn't use the host environment within the build slave):
withEnv(["DOCKER_HOST=tcp://serverA:2375", "ENVA=hello", "ENVB=world"]) {
sh "docker-compose pull some-service-acc"
sh "docker-compose -p some-service-acc up -d some-service-acc"
}
Edit: from the comments, you also want to pass secrets.
To do that, there are plugins like Mask Passwords that allow you to pass variables without them showing up in the logs or job configuration. (I'm fairly certain a determined intruder could still get to the values, since Jenkins itself knows them and passes them to your script in clear text.)
The better option IMO is to use a secrets management tool inside of docker. Hashicorp has their Vault product, which implements an encrypted K/V store where values are accessed with a time-limited token, and it offers the ability to generate new passwords per request, with integration into the target system. I'd consider this the highest level of security when fully configured, but you can configure it in countless ways to suit your own needs. You'll need to write something to pull the secret and inject it into your container's environment (it exposes a REST API that you can call from your entrypoint).
The latest option from Docker itself is secrets management that requires the new Swarm Mode. You save your secret in the swarm and add it to the containers you want as a file using an entry in the docker-compose.yml version 3 format. If you already use Swarm Mode and can start your containers with docker stack deploy instead of docker-compose, this is a fairly easy solution to implement.
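A sketch of the Swarm secrets approach in the version 3 format (the secret name is illustrative; the service name is taken from the question); the value then appears inside the container as the file /run/secrets/enva_value:
version: '3.1'
services:
  some-service-acc:
    image: image/replacedvalues
    secrets:
      - enva_value
secrets:
  enva_value:
    external: true
You would create the secret first with something like docker secret create enva_value - and then deploy the stack with docker stack deploy.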

Docker Compose - Command using Container Environment Variable

I'm using Docker Compose to link a master and a slave service together. Compose thus automatically injects the slave container with environment variables containing the various ports and IPs needed to connect to the master container.
The service accepts the IP/port of the master via a command-line argument, so I set this in my commands:
master:
  command: myservice
  ports:
    - '29015'
slave:
  command: myservice --master ${MASTER_PORT_29015_TCP_ADDR}:${MASTER_PORT_29015_TCP_PORT}
  links:
    - master:master
The problem is that environment variables like MASTER_PORT_29015_TCP_PORT are evaluated when the compose command is run, not within the container itself where they are actually set.
When starting the cluster, you see the warning: WARNING: The MASTER_PORT_29015_TCP_ADDR variable is not set. Defaulting to a blank string.
I tried setting entrypoint: ["/bin/sh", "-c"], but that produced unusual behaviour where the service wouldn't see any variables at all. (For information, the service I'm actually using is RethinkDB.)
As stated in the documentation, link environment variables are now discouraged, and you should just write master instead of $MASTER_PORT_29015_TCP_ADDR. Moreover, there doesn't seem to be any point in writing $MASTER_PORT_29015_TCP_PORT when you know its value is going to be 29015.
Hence, change the command to:
myservice --master master:29015
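With that change, the compose file from the question becomes simply:
master:
  command: myservice
  ports:
    - '29015'
slave:
  command: myservice --master master:29015
  links:
    - master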
