Docker: how to provide secret information to the container? - docker

I have my app inside a container and it's reading environment variables for passwords and API keys to access services. If I run the app on my machine (not inside docker), I just export SERVICE_KEY='wefhsuidfhda98' and the app can use it.
What's the standard approach to this? I was thinking of having a secret file which would get added to the server with export commands and then run a source on that file.
I'm using docker & fig.

The solution I settled on was the following: save the environment variables in a secret file and pass those on to the container using fig.
have a secret_env file with secret info, e.g.
export GEO_BING_SERVICE_KEY='98hfaidfaf'
export JIRA_PASSWORD='asdf8jriadf9'
have secret_env in my .gitignore
have a secret_env.template file for developers, e.g.
export GEO_BING_SERVICE_KEY='' # can leave empty if you wish
export JIRA_PASSWORD='' # write your pass
in my fig.yml I send the variables through:
environment:
  - GEO_BING_SERVICE_KEY
  - JIRA_PASSWORD
call source secret_env before building
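Putting the steps together, the developer workflow looks roughly like this (key values are the placeholders from above):

```shell
# secret_env is created once per machine and listed in .gitignore
cat > secret_env <<'EOF'
export GEO_BING_SERVICE_KEY='98hfaidfaf'
export JIRA_PASSWORD='asdf8jriadf9'
EOF

# Load the secrets into the current shell...
. ./secret_env

# ...so fig can forward them: variables listed under `environment:`
# with no value are taken from the shell that runs `fig up`.
echo "JIRA_PASSWORD is now set for fig: $JIRA_PASSWORD"
```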

docker run provides environment variables:
docker run -e SERVICE_KEY=wefsud your/image
Then your application would read SERVICE_KEY from the environment.
https://docs.docker.com/reference/run/
In fig, you'd use
environment:
  - SERVICE_KEY=wefsud
in your app spec. http://www.fig.sh/yml.html
From a security perspective, the former solution is no worse than running it on your host if your docker binary requires root access. If you're allowing 'docker' group users to run docker, it's less secure, since any docker user could docker inspect the running container. Running on your host, you'd need to be root to inspect the environment variables of a running process.
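To make the comparison concrete: `docker inspect` will print a container's environment to any docker-group user, while on the host the same data sits in /proc/&lt;pid&gt;/environ, readable only by the process owner or root. A small Linux illustration (a child process reading its own environ, which is always permitted) that prints `SERVICE_KEY=wefsud`:

```shell
# Any docker-group user could run (no root needed):
#   docker inspect --format '{{.Config.Env}}' <container>

# On the host, a process's environment is in /proc/<pid>/environ,
# NUL-separated; here a child process reads its own:
SERVICE_KEY='wefsud' sh -c 'tr "\0" "\n" < /proc/self/environ' | grep '^SERVICE_KEY='
```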

Related

Docker container name resolution inside and outside

I have a Flask app that uses RabbitMQ, where both are Docker containers (along with other components, such as Celery workers). I want to use a common .env environment file for both dev and container use in my docker-compose.
Example .env
RABBITMQ_DEFAULT_HOST=localhost
Now, if I use this with flask run it works fine, as the container rabbitmq port is mapped to the host. If I run this inside the flask docker container, it fails because localhost of the flask container is not the same as the host. If I change localhost to my container name, rabbitmq:
RABBITMQ_DEFAULT_HOST=rabbitmq
It will resolve nicely inside the flask container via Docker to the dynamic IP of the rabbitmq container (a local port map is not even necessary); however, my flask run during development has no knowledge of this name/IP mapping and will fail.
Is there any easy way to handle this so it's easily portable to other devs and just "works" when either outside using flask run or inside the container via docker-compose?
I'd also like to limit the port exposure if possible, such as 127.0.0.1:5672:5672.
Update
So far, this is the best I've come up with: in the program, I use a socket to check if the name resolves; if not, it falls back to the env var with a default of localhost.
import os
import socket

def get_rabbitmq_host() -> str:
    try:
        return socket.gethostbyname("rabbitmq")  # container name
    except socket.gaierror:
        return os.getenv("RABBITMQ_DEFAULT_HOST", "localhost")
Here is another method I tried that's a lot faster (no DNS timeout), but it changes the order a bit.
def get_rabbitmq_host() -> str:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(1)
    result = sock.connect_ex(("127.0.0.1", 5672))
    sock.close()
    if result == 0:
        return "127.0.0.1"
    elif os.getenv("RABBITMQ_DEFAULT_HOST") in ("localhost", "127.0.0.1"):
        return "rabbitmq"
    else:
        return os.getenv("RABBITMQ_DEFAULT_HOST", "rabbitmq")
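The connect_ex probe generalizes to any host/port pair; a sketch (the helper name `port_open` is mine) that can be checked against a throwaway local listener:

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connect to (host, port) succeeds within timeout."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(timeout)
    try:
        return sock.connect_ex((host, port)) == 0
    finally:
        sock.close()

# Demo against a throwaway listener on an ephemeral port:
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]
print(port_open("127.0.0.1", port))  # True while the listener is up
listener.close()
```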
Well no, not really. Or yes, depending on how you view it.
Since you have now found out that localhost does not mean the same thing in every context, maybe you should split up the variables, even though in some situations they may have the same value.
So just something like
rabbit_mq_internal_host=localhost
rabbit_mq_external_host=rabbitmq #container name!
Is there any easy way to handle this so it's easily portable to other devs and just "works" when either outside using flask run or inside the container via docker-compose?
Well: that is the point of the .env files. You have two different environments there, so make two different .env files. Or let everyone adjust the .env file according to her/his preferred way of running the app.
I'd also like to limit the port exposure if possible, such as 127.0.0.1:5672:5672
If you connect from container to container within a docker network, you do not need to publish the port at all. Only ports that have to be accessed from outside the network.
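Tying the split variables back to code: a sketch (the function and the detection mechanism are my own; Docker creates a /.dockerenv marker file inside containers) that picks the right host per context:

```python
import os

def rabbitmq_host(in_container=None) -> str:
    """Pick the broker host: the container name inside Docker,
    the host-mapped address outside."""
    if in_container is None:
        # Docker creates /.dockerenv inside every container
        in_container = os.path.exists("/.dockerenv")
    if in_container:
        return os.getenv("rabbit_mq_external_host", "rabbitmq")
    return os.getenv("rabbit_mq_internal_host", "localhost")
```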
I am not sure if I completely understood your situation. I am assuming that you are developing the application and have environments you would like to keep separated, for example localhost, development, test, etc.
With that assumption, I would suggest having an env file per environment, like env_localhost and env_development, where each key=value pair is set in accordance with the environment. Also, have an env.template file with empty keys so that anyone who does not want a Docker-based run can set things up accordingly in a new file called .env.
Once the above is created, you can modify the Dockerfile for the app to use the following snippet. The important parts are the build argument called SETUP and the renaming of the environment file to .env during the build:
# ... Other build commands follow
WORKDIR /usr/src/backend
COPY ./backend .
# SETUP is passed at build time; defaults to development.
ARG SETUP=development
# Copy the matching env file as .env (docker-compose build passes SETUP, see below).
COPY ./backend/env_${SETUP} .env
# ... Other build commands follow
After modifying the Dockerfile, you can run docker-compose build for a given environment by passing SETUP as a build argument:
docker-compose build --build-arg SETUP=localhost your_service_here
Additionally, once this process is stable you can create a Makefile with targets like make build-local, make build-dev, and so on.
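If you'd rather not type --build-arg each time, Compose can also supply the argument with a default via variable substitution; a sketch (the service name is a placeholder):

```yaml
services:
  backend:
    build:
      context: .
      args:
        # Resolved from the shell (or a .env file); falls back to development
        SETUP: ${SETUP:-development}
```

Then `SETUP=localhost docker-compose build backend` selects the env_localhost file.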

Storing default environment variables in Vault instead of env files in docker-compose for standard services

I have a docker-compose stack which uses standard software containers like:
InfluxDB
MariaDB
Node-Red
running on an industrial single-board computer (which may not be connected to the internet)
For the initial setup (bringing the stack up), I pass some standard credentials like admin credentials via their environment variable files, e.g. influxdb.env, mariadb.env, etc.
A typical example of a docker-compose.yml here is:
services:
  influxdb:
    image: influxdb:2.0
    env_file:
      - influxdb.env
  nodered:
    image: nodered/node-red:2.2.2
    env_file:
      - node-red.env
An example of influxdb.env could be:
INFLUXDB_ADMIN_USER=admin
INFLUXDB_ADMIN_PASSWORD=password!#$2
# other env vars that might be crucial for initial stack boot up
These files are on the disk and can still be vulnerable. I wish to understand if Hashicorp Vault can provide a plausible solution where such credentials (secrets) can be stored as key-value pairs and be made available to the docker-compose services upon runtime.
I understand one bottleneck: since I am using standard (ready-to-use) containers, they may not have Vault integration. However, can I still use Vault to store the env vars and let the services access them at runtime? Or do I have to write sidecars for these containers and then let them accept these env var values?
You have a few constraints to work with here:
Not storing secrets permanently in storage
docker-compose command line
Vault's output format
Docker Compose can read its environment variables from a file. I suggest that you create that file and provide it to docker-compose with the --env-file parameter.
I can think of two approaches to write that file:
Write the output of multiple vault kv get to a file, in NAME=VALUE format
Use vault agent's template engine
The first option is quite straightforward. Call a function that outputs the secrets and send its output to a file:
#!/bin/bash
function write_vault_secret_to_env_file() {
  local ENVIRONMENT_VARIABLE_NAME=$1
  local SECRET_PATH=$2
  local SECRET_NAME=$3
  echo "$ENVIRONMENT_VARIABLE_NAME=$(vault kv get --field "$SECRET_NAME" "$SECRET_PATH")"
}
write_vault_secret_to_env_file FIRST_ENVIRONMENT_VAR secret/my-path/things first-secret >> my-env-file.sh
write_vault_secret_to_env_file SECOND_ENVIRONMENT_VAR secret/my-path/stuff second-secret >> my-env-file.sh
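The same NAME=VALUE format is easy to generate from any language; a Python sketch in which `fake_vault` stands in for the `vault kv get --field` call (both helper names are mine):

```python
def write_env_file(path, secrets, fetch_secret):
    """Write one NAME=VALUE line per secret, the format docker-compose --env-file expects."""
    with open(path, "w") as fh:
        for env_name, (secret_path, field) in secrets.items():
            fh.write(f"{env_name}={fetch_secret(secret_path, field)}\n")

# Dummy fetcher standing in for `vault kv get --field <field> <path>`:
def fake_vault(secret_path, field):
    return f"value-of-{field}"

write_env_file("my-env-file.sh", {
    "FIRST_ENVIRONMENT_VAR": ("secret/my-path/things", "first-secret"),
    "SECOND_ENVIRONMENT_VAR": ("secret/my-path/stuff", "second-secret"),
}, fake_vault)
```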
Vault Agent's template engine is much more powerful, but more complex to set up.
Another suggestion would be to use Vault's dynamic secrets for databases (InfluxDB is supported). But you need to give Vault DBA privileges in your database. If you create the database from scratch every time, you could make the DBA password dba-root, give Vault that password, and instruct it to rotate it for you.

How do I generate a secret key and share it between two containers in docker-compose? Is it possible?

The problem: I'd like to use imgproxy in my project, but I need some way to generate a signing key when the containers are first run. I then need to share that key between two containers: imgproxy, which accepts it in an environment variable, and my server application, where I could read it from wherever needed. The key needs to be unique and random for each deployment. It would be great to avoid having to run any additional commands before docker-compose up to generate these keys.
What I considered so far:
There are docker-compose secrets. Those live in files. You still need to create and fill those files before you start anything.
I can simply instruct the users to generate the key and edit docker-compose.yml to add it there.
Anyway, what's the best/correct way to approach this? This feels like a popular use case, so surely there has to be something I missed?
The best way to handle this is to create the secret externally; in Compose, perhaps in a .env file. This will translate well to other environments and doesn't require changing code at all. This also works well with secrets that require some user intervention to set up (for example, signing a TLS certificate), it will survive a docker-compose down, and it works even if you split the two halves of the application into separate environments.
If these considerations don't matter to you, and it's really important that the startup be autonomous, you could put the secret into a shared file. Decide that one of the containers is "first". Write a script that runs at startup time that generates the secret:
#!/bin/sh
# Create a random token if it doesn't already exist
if [ ! -f /secrets/token ]; then
  dd if=/dev/random bs=48 count=1 | base64 > /secrets/token
fi
# Read back the token and export it so the exec'd process inherits it
export SECRET_TOKEN="$(cat /secrets/token)"
# Run the main container process
exec "$@"
In your Dockerfile, COPY this script in and make it be the image's ENTRYPOINT. This must use JSON-array syntax, ENTRYPOINT ["entrypoint.sh"]. If you're launching your application via ENTRYPOINT, move that command into CMD instead (or combine a split ENTRYPOINT/CMD into CMD).
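A minimal Dockerfile following that pattern might look like this (base image and command are placeholders):

```dockerfile
FROM node:16
WORKDIR /app
COPY . .
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
# Generates or reads /secrets/token, then execs the CMD
ENTRYPOINT ["/entrypoint.sh"]
CMD ["node", "server.js"]
```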
Now in your Compose setup, you need to create a volume and share it between the two containers.
version: '3.8'
volumes:
  secrets:  # empty
services:
  imgproxy:
    image: ...
    volumes:
      - secrets:/secrets  # matches the path in the entrypoint script
  server:
    image: ...
    volumes:
      - secrets:/secrets  # could be a different path
(In particular if you're considering eventually running this application on Kubernetes, this approach won't work well. Of the volume types Kubernetes supports, few can be mounted into multiple containers at the same time. There is a native Kubernetes Secret object that's intended for this use, but that then gets back to the original pattern of "create the secret separately".)

Passing environmental variables when deploying docker to remote host

I am having some trouble with my Docker containers and environment variables.
Currently I have a docker-compose.yml with the following defined:
version: '2.1'
services:
  some-service:
    build:
      context: .
    image: image/replacedvalues
    ports:
      - 8080
    environment:
      - PROFILE=acc
      - ENVA
      - ENVB
      - TZ=Europe/Berlin
  some-service-acc:
    extends:
      service: some-service
    environment:
      - SERVICE_NAME=some-service-acc
Now when I deploy this manually (via SSH command line directly) on server A, it will take the environment variables from server A and put them in my container. So I have the values of ENVA and ENVB from the host in my container. Using the following command (after building the image of course): docker-compose up some-service-acc.
We are currently developing a better infrastructure and want to deploy services via Jenkins. Jenkins is up and running in a docker container on server B.
I can deploy the service via Jenkins (Job-DSL, temporarily setting DOCKER_HOST="tcp://serverA:2375"). So it will run all docker (compose) commands on server A from the Jenkins container on server B. The service is up and running, except that it doesn't have values for ENVA and ENVB.
Jenkins runs the following with the Job-DSL groovy script:
withEnv(["DOCKER_HOST=tcp://serverA:2375"]) {
    sh "docker-compose pull some-service-acc"
    sh "docker-compose -p some-service-acc up -d some-service-acc"
}
I tried setting them in my Jenkins container and on server B itself, but neither worked. Only when I deploy manually directly on server A does it work.
When I use docker inspect on the running container, I get the following output for the env block:
"Env": [
    "PROFILE=acc",
    "affinity:container==JADFG09gtq340iggIN0jg53ij0gokngfs",
    "TZ=Europe/Berlin",
    "SERVICE_NAME=some-service-acc",
    "ENVA",
    "ENVB",
    "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
    "LANG=C.UTF-8",
    "JAVA_VERSION=8",
    "JAVA_UPDATE=121",
    "JAVA_BUILD=13",
    "JAVA_PATH=e9e7ea248e2c4826b92b3f075a80e441",
    "JAVA_HOME=/usr/lib/jvm/default-jvm",
    "JAVA_OPTS="
]
Where do I need to set the environment variables so that they will be passed to the container? I prefer to store the variables on server A. But if this is not possible, can someone explain to me how it could be done? It is not an option to hardcode the values in the compose file or anywhere else in the source, as they contain sensitive data.
If I am asking this in the wrong place, please redirect me to where I should be.
Thanks!
You need to set the environment variables in the shell that is running the docker-compose command line. In Jenkins, that's best done inside your Groovy script (Jenkins doesn't use the host environment within the build slave):
withEnv(["DOCKER_HOST=tcp://serverA:2375", "ENVA=hello", "ENVB=world"]) {
    sh "docker-compose pull some-service-acc"
    sh "docker-compose -p some-service-acc up -d some-service-acc"
}
Edit: from the comments, you also want to pass secrets.
To do that, there are plugins like Mask Passwords that would allow you to pass variables without them showing up in the logs or job configuration. (I'm fairly certain a determined intruder could still get to the values, since Jenkins itself knows them and passes them to your script in clear text.)
The better option IMO is to use a secrets management tool inside of Docker. HashiCorp has their Vault product, which implements an encrypted K/V store where values are accessed with a time-limited token, and which offers the ability to generate new passwords per request with integration into the target system. I'd consider this the highest level of security when fully configured, but you can configure it countless ways to suit your own needs. You'll need to write something to pull the secret and inject it into your container's environment (Vault exposes a REST API that you can call from your entrypoint).
The latest option from Docker itself is secrets management that requires the new Swarm Mode. You save your secret in the swarm and add it to the containers you want as a file using an entry in the docker-compose.yml version 3 format. If you already use Swarm Mode and can start your containers with docker stack deploy instead of docker-compose, this is a fairly easy solution to implement.
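A sketch of the version 3 compose form (the secret name is illustrative); note the value arrives as a file under /run/secrets/, not as an environment variable:

```yaml
version: '3.1'
secrets:
  enva:
    external: true  # created beforehand: echo -n hello | docker secret create enva -
services:
  some-service-acc:
    image: image/replacedvalues
    secrets:
      - enva  # readable inside the container at /run/secrets/enva
```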

Managing dev/test/prod environments with Docker

There seems to be sparse, conflicting information around on this subject. I'm new to Docker and need some help. I have several Docker containers to run an application; some require different config files for local development than they do for production. I don't seem to be able to find a neat way to automate this with Docker.
My containers that include custom config are Nginx and Freeradius, and my code/data container runs Laravel, which therefore requires a .env.php file (L4.2 at the moment).
I have tried Docker's environment variables in Docker Compose:
docker-compose.yml:
freeradius:
  env_file: ./env/freeradius.env
./env/freeradius.env
DB_HOST=123.56.12.123
DB_DATABASE=my_database
DB_USER=me
DB_PASS=itsasecret
Except I can't pick those variables up in /etc/freeradius/mods-enabled/sql, where they need to be.
How can I get Docker to run as a 'local' container with local config, or as a 'production' container with production config, without having to actually build different containers and without having to attach to each container to configure them manually? I need it automated, as this will eventually be used on quite a large production environment with a large cluster of servers and many instances.
Happy to learn Ansible if this is how people achieve this.
If you can't use environment variables to configure the application (which is my understanding of the problem), then the other option is to use volumes to provide the config files.
You can use either "data volume containers" (which are containers with the sole purpose of sharing files and directories) with volumes_from, or you can use a named volume.
Data Volume container
If you go with the "data volume container" route, you would create a container holding all the environment's configuration files. Every service that needs a file uses volumes_from: - configs. In dev you'd have something like:
configs:
  build: dev-configs/
freeradius:
  volumes_from:
    - configs
The dev-configs directory will need a Dockerfile to build the image, which will have a bunch of VOLUME directives for all the config paths.
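A sketch of dev-configs/Dockerfile (the file names are assumptions based on the question; the base image is my choice):

```dockerfile
FROM busybox
# Bake the dev config files into the image...
COPY sql /etc/freeradius/mods-enabled/sql
COPY nginx.conf /etc/nginx/nginx.conf
# ...and export their directories so volumes_from can see them
VOLUME /etc/freeradius/mods-enabled
VOLUME /etc/nginx
```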
For production (and other environments) you can create an override file which replaces the configs service with a different container:
docker-compose.prod.yml:
configs:
  build: prod-configs/
You'll probably have other settings you want to change between dev and prod, which can go into this file as well. Then you run compose with the override file:
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
You can learn more about this here: http://docs.docker.com/compose/extends/#multiple-compose-files
Named Volume
If you go with the "named volume" route, it's a bit easier to configure. On dev you create a volume with docker volume create thename and put some files into it. In your config you use it directly:
freeradius:
  volumes:
    - thename:/etc/freeradius/mods-enabled/sql
In production you'll either need to create that named volume on every host, or use a volume driver plugin that supports multihost (I believe flocker is one example of this).
Runtime configs using Dockerize
Finally, another option that doesn't involve volumes is to use https://github.com/jwilder/dockerize which lets you generate the configs at runtime from environment variables.
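For instance, dockerize can render a Go template from environment variables at container start (the template contents and paths here are illustrative):

```shell
# mods-enabled/sql.tmpl would contain lines like:
#   server = "{{ .Env.DB_HOST }}"
#   login  = "{{ .Env.DB_USER }}"

# At container start, render the template and exec the real process:
dockerize -template /templates/sql.tmpl:/etc/freeradius/mods-enabled/sql \
  freeradius -f
```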
