Pass all environment variables to swarm services - docker

How can I pass all the common environment variables for a single domain to Docker swarm services at once?
Is there any third-party application, Docker image, or service for this?
Right now I have to give environment variables to Docker swarm services one by one. It would be great if there were a single place from which all the services automatically got their environment variables.

You can use the docker secret command to manage secrets for your services.
For example, to create a secret called my_secret:
$ echo "my_secret_value" | docker secret create my_secret -
To pass the secret to a service, reference it in the service's definition and declare it in the top-level secrets block:
services:
  my_service:
    secrets:
      - my_secret
secrets:
  my_secret:
    external: true
The secret will be made available inside the service's containers as a file at /run/secrets/my_secret (not as an environment variable).
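If the goal is simply to share ordinary, non-secret environment variables across several services from one place, a shared env_file is the usual approach. A minimal sketch, where the file name common.env and the image names are placeholders:
services:
  service_one:
    image: myorg/service-one
    env_file:
      - common.env
  service_two:
    image: myorg/service-two
    env_file:
      - common.env
Every service that lists common.env picks up all of the variables defined in that single file.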

Related

Consume secret inside dockerfile

Is it possible to access the machine's environment variables inside a Dockerfile? I was thinking of passing the SECRET as a build ARG, like so:
docker-compose:
version: '3.5'
services:
  service:
    ...
    build:
      ...
      args:
        SECRET: ${SECRET}
    ...
dockerfile:
FROM image
ARG SECRET
RUN script-${SECRET}
Note: the container is built in Kubernetes; I cannot pass any arguments to the build command or run any commands at all.
Edit 1: It is okay to pass SECRET as an ARG because this is not sensitive data. I'm using SECRETS to access microservice data, and I can only store data using secrets. Think of this as a machine environment.
Edit 2: This was not a problem with Docker but with the infrastructure that I was working with, which does not allow any arguments to be passed to the docker build.
Secrets should be used at run time and provided by the execution environment.
Also, everything executed during a container build is written down as layers and available later to anyone who can get access to the image. That's why it's hard to consume secrets securely during a build.
In order to address this, Docker recently introduced a special option, --secret. To make it work, you will need the following:
Set the environment variable DOCKER_BUILDKIT=1
Pass the --secret argument to the docker build command
DOCKER_BUILDKIT=1 docker build --secret id=mysecret,src=mysecret.txt ...
Add a syntax comment to the very top of your Dockerfile
# syntax = docker/dockerfile:1.0-experimental
Use the --mount argument for every RUN directive that needs the secret
RUN --mount=type=secret,id=mysecret cat /run/secrets/mysecret
Please note that this needs Docker version 18.09 or later.
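Putting those steps together, a minimal sketch (the id mysecret and the file mysecret.txt come from the commands above; the base image is a placeholder):
# syntax = docker/dockerfile:1.0-experimental
FROM alpine
# The secret is mounted only for this RUN step and is never written into an image layer.
RUN --mount=type=secret,id=mysecret cat /run/secrets/mysecret
Build it with:
DOCKER_BUILDKIT=1 docker build --secret id=mysecret,src=mysecret.txt .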
ARG is a build-time argument. You want to keep secrets secret and not write them into your artifacts. Keep secrets in external environment variables or in external files.
docker run -e SECRET_NAME=SECRET_VALUE
and in docker-compose:
services:
  app-name:
    environment:
      - SECRET_NAME=YOUR_VALUE
or
services:
  app-name:
    env_file:
      - secret-values.env
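Here secret-values.env is just a plain file with one KEY=value pair per line, for example:
SECRET_NAME=YOUR_VALUE
ANOTHER_SECRET=another_value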
Kubernetes
When you run exactly the same container image in Kubernetes, you mount the secret from a Secret object.
containers:
  - name: app-name
    image: app-image-name
    env:
      - name: SECRET_NAME
        valueFrom:
          secretKeyRef:
            name: name-of-secret-object
            key: token
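The Secret object referenced above can be created with kubectl, for example (the literal token value is a placeholder):
kubectl create secret generic name-of-secret-object --from-literal=token=SECRET_VALUE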
Yes, pass the secret data as an ARG if you need to access the secret during the container build; you have no (!?) alternative.
ARG values are only available for the duration of the build, so you need to be able to trust the build process and that it is cleaned up appropriately at its conclusion; if a malicious actor were able to access the build process (or the image after the fact), they could access the secret data.
It's curious that you wish to use the secret as script-${SECRET}, as I assumed the secret would be used to access an external service. Anyone would be able to determine the script name from the resulting Docker image, and this would expose your secret.
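To see how that leaks, you can inspect the image's layer metadata: any RUN step that consumed the build ARG records the expanded value. A rough illustration (the image name is a placeholder, and the exact output format varies by Docker version):
$ docker history --no-trunc my-image
... |1 SECRET=the_actual_secret /bin/sh -c script-the_actual_secret ...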

Why do I need to be in Swarm mode to use Docker secrets?

I am playing around with a single-container Docker image. I would like to store my db password as a secret without using compose (having problems with that and Gradle for now). I thought I could still use secrets even without compose, but when I try I get...
$ echo "helloSecret" | docker secret create helloS -
Error response from daemon: This node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again.
Why do I need to use swarm mode just to use secrets? Why can't I use them without a cluster?
You need to run swarm mode for secrets because that's how Docker implemented secrets. The value of secrets is that workers never write the secret to disk, the secret is shared on a need-to-know basis (workers do not receive the secret until a task that uses it is scheduled there), and managers encrypt the secret on disk. The storage of the secret on the managers uses the raft database.
You can easily deploy a single node swarm cluster with the command docker swarm init. From there, docker-compose up gets changed to docker stack deploy -c docker-compose.yml $stack_name.
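For example, on a single node (the stack name mystack is a placeholder):
docker swarm init
docker stack deploy -c docker-compose.yml mystack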
Secrets and configs in swarm mode provide a replacement for mounting single file volumes into containers for configuration. So without swarm mode on a single node, you can always make the following definition:
version: '2'
services:
  app:
    image: myapp:latest
    volumes:
      - ./secrets:/run/secrets:ro
Or you can separate the secrets from your app slightly by loading those secrets into a named volume. For that, you could do something like:
tar -cC ./secrets . | docker run -i -v secrets:/secrets busybox tar -xC /secrets
And then mount that named volume:
version: '2'
volumes:
  secrets:
    external: true
services:
  app:
    image: myapp:latest
    volumes:
      - secrets:/run/secrets:ro
Check out this answer (https://serverfault.com/a/936262), as provided by user sel-en-ium:
You can use secrets if you use a compose file. (You don't need to run a swarm.)
You use a compose file with docker-compose: there is documentation for "secrets" in a docker-compose.yml file.
I switched to docker-compose because I wanted to use secrets. I am happy I did, it seems much more clean. Each service maps to a container. And if you ever want to switch to running a swarm instead, you are basically already there.
Unfortunately the secrets are not loaded into the container's environment, they are mounted to /run/secrets/

How to read external secrets when using docker-compose

I wonder how I can pass external secrets into services spawned by docker-compose. I do the following:
I create a new secret:
printf "some secret value goes here" | docker secret create wallet_password -
My docker-compose.yml:
version: "3.4"
services:
test:
image: alpine
command: 'cat /run/secrets/wallet_password'
secrets:
- wallet_password
secrets:
wallet_password:
external: true
Then I run:
docker-compose -f services/debug/docker-compose.yml up -d --build
and
docker-compose -f services/debug/docker-compose.yml up
I get the following response:
WARNING: Service "test" uses secret "wallet_password" which is external. External secrets are not available to containers created by docker-compose.
WARNING: The Docker Engine you're using is running in swarm mode.
Compose does not use swarm mode to deploy services to multiple nodes in a swarm. All containers will be scheduled on the current node.
To deploy your application across the swarm, use `docker stack deploy`.
Starting debug_test_1 ...
Starting debug_test_1 ... done
Attaching to debug_test_1
test_1 | cat: can't open '/run/secrets/wallet_password': No such file or directory
Sooo... is there any way of passing an external secret into a container spawned by docker-compose?
Nope.
External secrets are not available to containers created by docker-compose.
The error message sums it up pretty nicely. Secrets are a swarm mode feature; the secret is stored inside of the swarm manager engine. That manager does not expose those secrets to externally launched containers. Only swarm services with the secret can run containers with the secret loaded.
You can run a service in swarm mode that extracts the secret, since it's just a file inside the container and the application inside the container can simply cat out the file contents. You can also replicate the functionality of secrets in containers started with compose by mounting a file as a volume in the location of the secret, as sketched below. For that, you'd want a separate compose file, since the volume mount and secret mount would conflict with each other.
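A minimal sketch of that compose-only workaround, assuming the secret value is kept in a local file wallet_password.txt (the file name is an assumption for illustration):
version: "3.4"
services:
  test:
    image: alpine
    command: 'cat /run/secrets/wallet_password'
    volumes:
      # bind-mount the local file where the app expects the swarm secret
      - ./wallet_password.txt:/run/secrets/wallet_password:ro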
You need to run a swarm. This is how it goes:
Create a swarm:
docker swarm init
Create your secrets (as many as you need):
docker secret create <secret_name> <secret_file>
(or pipe the value in on stdin: echo "<secret_content>" | docker secret create <secret_name> -)
Check all the available secrets with:
docker secret ls
Now, use the docker-compose file to deploy the service as a stack:
docker stack deploy --compose-file <path_to_compose> <service_name>
Be aware that you'll find your secrets in a plain text file located at /run/secrets/<secret_name>.

Passing environment variables when deploying Docker to a remote host

I am having some trouble with my Docker containers and environment variables.
Currently I have a docker-compose.yml with the following defined:
version: '2.1'
services:
  some-service:
    build:
      context: .
    image: image/replacedvalues
    ports:
      - 8080
    environment:
      - PROFILE=acc
      - ENVA
      - ENVB
      - TZ=Europe/Berlin
  some-service-acc:
    extends:
      service: some-service
    environment:
      - SERVICE_NAME=some-service-acc
Now when I deploy this manually (via the SSH command line, directly on server A), it will take the environment variables from server A and put them in my container. So I have the values of ENVA and ENVB from the host in my container, using the following command (after building the image, of course): docker-compose up some-service-acc.
We are currently developing a better infrastructure and want to deploy services via Jenkins. Jenkins is up and running in a docker container on server B.
I can deploy the service via Jenkins (Job DSL, setting DOCKER_HOST="tcp://serverA:2375" temporarily). So it will run all docker (compose) commands on server A from the Jenkins container on server B. The service is up and running, except that it doesn't have values for ENVA and ENVB.
Jenkins runs the following with the Job-DSL groovy script:
withEnv(["DOCKER_HOST=tcp://serverA:2375"]) {
    sh "docker-compose pull some-service-acc"
    sh "docker-compose -p some-service-acc up -d some-service-acc"
}
I tried setting them in my Jenkins container and on server B itself, but neither worked. Only when I deploy manually, directly on server A, does it work.
When I use docker inspect on the running container, I get the following output for the env block:
"Env": [
"PROFILE=acc",
"affinity:container==JADFG09gtq340iggIN0jg53ij0gokngfs",
"TZ=Europe/Berlin",
"SERVICE_NAME=some-service-acc",
"ENVA",
"ENVB",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"LANG=C.UTF-8",
"JAVA_VERSION=8",
"JAVA_UPDATE=121",
"JAVA_BUILD=13",
"JAVA_PATH=e9e7ea248e2c4826b92b3f075a80e441",
"JAVA_HOME=/usr/lib/jvm/default-jvm",
"JAVA_OPTS="
]
Where do I need to set the environment variables so that they are passed to the container? I prefer to store the variables on server A. But if this is not possible, can someone explain to me how it could be done? It is not an option to hardcode the values in the compose file or anywhere else in the source, as they contain sensitive data.
If I am asking this in the wrong place, please redirect me to where I should be.
Thanks!
You need to set the environment variables in the shell that is running the docker-compose command line. In Jenkins, that's best done inside your Groovy script (Jenkins doesn't use the host environment within the build slave):
withEnv(["DOCKER_HOST=tcp://serverA:2375", "ENVA=hello", "ENVB=world"]) {
    sh "docker-compose pull some-service-acc"
    sh "docker-compose -p some-service-acc up -d some-service-acc"
}
Edit: from the comments, you also want to pass secrets.
To do that, there are plugins like Mask Passwords that allow you to pass variables without them showing up in the logs or job configuration. (I'm fairly certain a determined intruder could still get to the values, since Jenkins itself knows them and passes them to your script in clear text.)
The better option IMO is to use a secrets management tool inside of Docker. Hashicorp has their Vault product, which implements an encrypted K/V store where values are accessed with a time-limited token, and offers the ability to generate new passwords per request, with integration into the target system. I'd consider this the highest level of security when fully configured, but you can configure it countless ways to suit your own needs. You'll need to write something to pull the secret and inject it into your container's environment (it's a REST protocol that you can add to your entrypoint).
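As a rough illustration of that last point, an entrypoint script could fetch a value over Vault's HTTP API before starting the app. Everything here (the VAULT_ADDR and VAULT_TOKEN variables, the secret path secret/data/myapp, the field name ENVA, and the availability of curl and jq in the image) is an assumption for the sketch:
#!/bin/sh
# fetch one field from Vault's KV (v2) API and export it before exec'ing the real command
export ENVA="$(curl -s -H "X-Vault-Token: $VAULT_TOKEN" \
  "$VAULT_ADDR/v1/secret/data/myapp" | jq -r '.data.data.ENVA')"
exec "$@"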
The latest option from Docker itself is secrets management that requires the new Swarm Mode. You save your secret in the swarm and add it to the containers you want as a file using an entry in the docker-compose.yml version 3 format. If you already use Swarm Mode and can start your containers with docker stack deploy instead of docker-compose, this is a fairly easy solution to implement.

how do you manage secret values with docker-compose v3.1?

Version 3.1 of the docker-compose.yml specification introduces support for secrets.
I tried this:
version: '3.1'
services:
  a:
    image: tutum/hello-world
    secret:
      password: the_password
  b:
    image: tutum/hello-world
$ docker-compose up returns:
Unsupported config option for services.secret: 'password'
How can we use the secrets feature in practice?
You can read the corresponding section from the official documentation.
To use secrets you need to add two things into your docker-compose.yml file. First, a top-level secrets: block that defines all of the secrets. Then, another secrets: block under each service that specifies which secrets the service should receive.
As an example, create the two types of secrets that Docker will understand: external secrets and file secrets.
1. Create an 'external' secret using docker secret create
First thing: to use secrets with Docker, the node you are on must be part of a swarm.
$ docker swarm init
Next, create an 'external' secret:
$ echo "This is an external secret" | docker secret create my_external_secret -
(Make sure to include the final dash, -. It's easy to miss.)
2. Write another secret into a file
$ echo "This is a file secret." > my_file_secret.txt
3. Create a docker-compose.yml file that uses both secrets
Now that both types of secrets are created, here is the docker-compose.yml file that will read both of those and write them to the web service:
version: '3.1'
services:
  web:
    image: nginxdemos/hello
    secrets: # secrets block only for 'web' service
      - my_external_secret
      - my_file_secret
secrets: # top level secrets block
  my_external_secret:
    external: true
  my_file_secret:
    file: my_file_secret.txt
Docker can read secrets either from its own database (e.g. secrets made with docker secret create) or from a file. The above shows both examples.
4. Deploy your test stack
Deploy the stack using:
$ docker stack deploy --compose-file=docker-compose.yml secret_test
This will create one instance of the web service, named secret_test_web.
5. Verify that the container created by the service has both secrets
Use docker exec -ti [container] /bin/sh to verify that the secrets exist.
(Note: in the below docker exec command, the m2jgac... portion will be different on your machine. Run docker ps to find your container name.)
$ docker exec -ti secret_test_web.1.m2jgacogzsiaqhgq1z0yrwekd /bin/sh
# Now inside secret_test_web; secrets are contained in /run/secrets/
root@secret_test_web:~$ cd /run/secrets/
root@secret_test_web:/run/secrets$ ls
my_external_secret my_file_secret
root@secret_test_web:/run/secrets$ cat my_external_secret
This is an external secret
root@secret_test_web:/run/secrets$ cat my_file_secret
This is a file secret.
If all is well, the two secrets we created in steps 1 and 2 should be inside the web container that was created when we deployed our stack.
Given you have a service myapp and a secrets file secrets.yml:
Create a compose file:
version: '3.1'
services:
  myapp:
    build: .
    secrets:
      - secrets_yaml
Provision a secret using this command:
docker secret create secrets_yaml secrets.yml
Deploy your service using this command:
docker stack deploy --compose-file docker-compose.yml myappstack
Now your app can access the secret file at /run/secrets/secrets_yaml. You can either hardcode this path in your application or create a symbolic link.
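For example, if the application expects its configuration at a fixed path, the link can be created in the Dockerfile (the target path /app/config/secrets.yml is a placeholder):
RUN mkdir -p /app/config && ln -sf /run/secrets/secrets_yaml /app/config/secrets.yml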
The different question
This answer is probably to the question "how do you provision your secrets to your docker swarm cluster".
The original question "how do you manage secret values with docker compose" implies that the docker-compose file contains secret values. It doesn't.
There's a different question: "Where do you store the canonical source of the secrets.yml file". This is up to you. You can store it in your head, print on a sheet of paper, use a password manager, use a dedicated secrets application/database. Heck, you can even use a git repository if it's safely secured itself. Of course, never store it inside the system you're securing with it :)
I would recommend vault. To store a secret:
# write the contents of secrets.yml into vault
cat secrets.yml | vault write secret/myappsecrets -
To retrieve a secret and put it into your docker swarm:
vault read -field=value secret/myappsecrets | docker secret create secrets_yaml -
Of course, you can use the Docker cluster itself as the single source of truth for your secrets, but if your Docker cluster breaks, you'd lose your secrets. So make sure to have a backup elsewhere.
The question nobody asked
The third question (that nobody asked) is how to provision secrets to developers' machines. It might be needed when there's an external service which is impossible to mock locally or a large database which is impossible to copy.
Again, docker has nothing to do with it (yet). It doesn't have access control lists which specify which developers have access to which secrets. Nor does it have any authentication mechanism.
The ideal solution appears to be this:
A developer opens some web application.
Authenticates using some single sign on mechanism.
Copies some long list of docker secret create commands and executes them in the terminal.
We have yet to see if such an application pops up.
You can also specify secrets stored locally in a file, using the file: key in the secrets object. Then you don't have to docker secret create them yourself; Compose / docker stack deploy will do it for you.
version: '3.1'
secrets:
  password:
    file: ./password
services:
  password_consumer:
    image: alpine
    secrets:
      - password
Reference: Compose file version 3 reference: Secrets
One question was raised here in the comments: why should I initialize a swarm if I only need secrets? My answer is that secrets were created for swarm, where you have more than one node and you want to manage and share secrets in a secure way. If you have only one node, this adds (almost) no extra security: anyone who can access your host machine, where the one-node swarm runs, can retrieve the secrets from the running containers, or directly on the host if the secret was created from a file, like a private key.
Check this blog: https://www.docker.com/blog/docker-secrets-management/
And read the comments:
"Thank you very much for the introductory article. The steps are mentioned to view the contents of secrets in container will not work when the redis container is created on a worker node."
Is that the exact indentation of your docker-compose.yml file? I think the secrets should be nested under a (i.e. one of the services), not directly under the services section.
I guess the keyword is secrets not secret. That is at least what I understand from reading the schema.
The keyword is secrets instead of secret.
It should also be properly indented under service a, for example:
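A corrected sketch of the original file, assuming the password secret has already been created with docker secret create:
version: '3.1'
services:
  a:
    image: tutum/hello-world
    secrets:
      - password
  b:
    image: tutum/hello-world
secrets:
  password:
    external: true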
