How do you manage secret values with docker-compose v3.1?

Version 3.1 of the docker-compose.yml specification introduces support for secrets.
I tried this:
version: '3.1'
services:
  a:
    image: tutum/hello-world
    secret:
      password: the_password
  b:
    image: tutum/hello-world
$ docker-compose up returns:
Unsupported config option for services.secret: 'password'
How can we use the secrets feature in practice?

You can read the corresponding section from the official documentation.
To use secrets you need to add two things to your docker-compose.yml file. First, a top-level secrets: block that defines all of the secrets. Then, another secrets: block under each service that specifies which secrets that service should receive.
As an example, create the two types of secrets that Docker will understand: external secrets and file secrets.
1. Create an 'external' secret using docker secret create
First thing: to use secrets with Docker, the node you are on must be part of a swarm.
$ docker swarm init
Next, create an 'external' secret:
$ echo "This is an external secret" | docker secret create my_external_secret -
(Make sure to include the final dash, -. It's easy to miss.)
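If you want to confirm that the swarm stored it, docker secret ls and docker secret inspect are handy; neither shows the secret's value, only its metadata (the name follows the example above):
$ docker secret ls
$ docker secret inspect my_external_secret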
2. Write another secret into a file
$ echo "This is a file secret." > my_file_secret.txt
3. Create a docker-compose.yml file that uses both secrets
Now that both types of secrets are created, here is the docker-compose.yml file that will read both of those and write them to the web service:
version: '3.1'
services:
  web:
    image: nginxdemos/hello
    secrets:                    # secrets block only for 'web' service
      - my_external_secret
      - my_file_secret
secrets:                        # top-level secrets block
  my_external_secret:
    external: true
  my_file_secret:
    file: my_file_secret.txt
Docker can read secrets either from its own database (e.g. secrets made with docker secret create) or from a file. The above shows both examples.
4. Deploy your test stack
Deploy the stack using:
$ docker stack deploy --compose-file=docker-compose.yml secret_test
This will create one instance of the web service, named secret_test_web.
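To confirm the service came up before poking inside it, the usual swarm commands work (names follow the example above):
$ docker stack services secret_test
$ docker service ps secret_test_web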
5. Verify that the container created by the service has both secrets
Use docker exec -ti [container] /bin/sh to verify that the secrets exist.
(Note: in the below docker exec command, the m2jgac... portion will be different on your machine. Run docker ps to find your container name.)
$ docker exec -ti secret_test_web.1.m2jgacogzsiaqhgq1z0yrwekd /bin/sh
# Now inside secret_test_web; secrets are contained in /run/secrets/
root@secret_test_web:~$ cd /run/secrets/
root@secret_test_web:/run/secrets$ ls
my_external_secret  my_file_secret
root@secret_test_web:/run/secrets$ cat my_external_secret
This is an external secret
root@secret_test_web:/run/secrets$ cat my_file_secret
This is a file secret.
If all is well, the two secrets we created in steps 1 and 2 should be inside the web container that was created when we deployed our stack.
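When you are done experimenting, you can tear the test down again (names from this example; the external secret from step 1 is not part of the stack, so it has to be removed separately):
$ docker stack rm secret_test
$ docker secret rm my_external_secret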

Given you have a service myapp and a secrets file secrets.yml:
Create a compose file:
version: '3.1'
services:
  myapp:
    build: .
    secrets:
      - secrets_yaml
secrets:
  secrets_yaml:
    external: true
Provision a secret using this command:
docker secret create secrets_yaml secrets.yml
Deploy your service using this command:
docker stack deploy --compose-file docker-compose.yml myappstack
Now your app can access the secret file at /run/secrets/secrets_yaml. You can either hardcode this path in your application or create a symbolic link.
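For example, if your application expects the file somewhere else, a hypothetical entrypoint line could link it into place (the target path is only an illustration):
ln -s /run/secrets/secrets_yaml /app/config/secrets.yml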
The different question
This is probably an answer to the question "how do you provision your secrets to your docker swarm cluster".
The original question "how do you manage secret values with docker compose" implies that the docker-compose file contains secret values. It doesn't.
There's a different question: "Where do you store the canonical source of the secrets.yml file?" This is up to you. You can keep it in your head, print it on a sheet of paper, use a password manager, or use a dedicated secrets application/database. Heck, you can even use a git repository if it is itself well secured. Of course, never store it inside the system you're securing with it :)
I would recommend vault. To store a secret:
# store the contents of secrets.yml in vault
cat secrets.yml | vault write secret/myappsecrets value=-
To retrieve a secret and put it into your docker swarm:
vault read -field=value secret/myappsecrets | docker secret create secrets_yaml -
Of course, you can use the docker cluster itself as the single source of truth for your secrets, but if your docker cluster breaks, you'd lose your secrets. So make sure to have a backup elsewhere.
The question nobody asked
The third question (that nobody asked) is how to provision secrets to developers' machines. It might be needed when there's an external service which is impossible to mock locally or a large database which is impossible to copy.
Again, docker has nothing to do with it (yet). It doesn't have access control lists which specify which developers have access to which secrets. Nor does it have any authentication mechanism.
The ideal solution appears to be this:
A developer opens some web application.
Authenticates using some single sign on mechanism.
Copies some long list of docker secret create commands and executes them in the terminal.
We have yet to see if such an application pops up.

You can also specify secrets stored locally in a file using the file: key in the secrets object. Then you don't have to docker secret create them yourself; Compose / docker stack deploy will do it for you.
version: '3.1'
secrets:
  password:
    file: ./password
services:
  password_consumer:
    image: alpine
    secrets:
      - password
Reference: Compose file version 3 reference: Secrets
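After docker stack deploy, Compose/Swarm creates the file-based secret for you, prefixed with the stack name, so a quick check looks something like this (the stack name demo is arbitrary; you should see a secret named demo_password in the list):
$ docker stack deploy -c docker-compose.yml demo
$ docker secret ls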

One question was raised in the comments: why should I initialize a swarm if I only need secrets? My answer is that secrets were created for swarm, where you have more than one node and you want to manage and share secrets in a secure way. If you have only one node, this adds (almost) no extra security: anyone who can access the host machine running your one-node swarm can retrieve the secrets from the running containers, or directly from the host if the secret was created from a file, like a private key.
Check this blog: https://www.docker.com/blog/docker-secrets-management/
And read the comments:
"Thank you very much for the introductory article. The steps are mentioned to view the contents of secrets in container will not work when the redis container is created on a worker node."

Is that the exact indentation of your docker-compose.yml file? I think secret should be nested under a (i.e. one of the services), not directly under the services section.

I guess the keyword is secrets not secret. That is at least what I understand from reading the schema.

The keyword is secrets instead of secret.
It should also be properly indented under service a.
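In other words, the snippet from the question would become something like this (a sketch; the top-level secrets definition and the ./password.txt file are assumptions added so the file is complete):
version: '3.1'
services:
  a:
    image: tutum/hello-world
    secrets:
      - password
  b:
    image: tutum/hello-world
secrets:
  password:
    file: ./password.txt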

Related

Pass all environment variables to swarm services

How can I pass all common environment variables in a single domain to docker swarm services at once?
Is there any third party application, docker image or service for this?
I have to give environment variables to docker swarm services one by one. It would be great if there was only one system and all its services would automatically get their environment variables from there.
You can use the docker secret commands to manage secrets for your services.
For example, to create a secret called my_secret:
$ echo "my_secret_value" | docker secret create my_secret -
To pass the secret to a service, you can specify it in the service's definition:
services:
  my_service:
    secrets:
      - my_secret
The secret will be made available to the service as a file at /run/secrets/my_secret (secrets are not injected as environment variables); your application or entrypoint can read it from there.
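If the application insists on environment variables, a common workaround is to read the secret file in the service's command or entrypoint before starting the real process. A minimal sketch (image, service and secret names are just examples; note the $$ so Compose does not interpolate the variable itself):
version: '3.1'
services:
  my_service:
    image: alpine
    # read the secret file into an environment variable, then keep the container alive
    command: sh -c 'export MY_SECRET="$$(cat /run/secrets/my_secret)"; exec sleep 1d'
    secrets:
      - my_secret
secrets:
  my_secret:
    external: true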

Advice needed: How to correctly handle self-signed ssl cert+key pairs for encrypting inter-container communication?

Probably some rather noob questions, but I have searched around and haven't been able to figure out the best way (from an opsec perspective) to handle self-signed certs for encrypting communication between dockerized services on my Debian server, like Redis, Authelia, Portainer, etc.
The certificates are created and signed and all the containers in question are prepared for host-mounting volumes for cert-key pairs.
So the question is simply:
Do I just store the cert-key pairs in the folders already mounted to the containers like e.g. /docker/appdata/portainer/config?
Who should be the owner:group of the certs+keys pairs, root or the user running the container or something third?
Which permissions should be set for the certs+key pairs?
By the way, my Docker is set up as a single-node swarm, all containers are deployed with stack deploy and properly connected to one another with custom Docker bridge networks, and communication is working, so my question is only about the handling of the cert+key pairs.
Many thanks in advance...
Do I just store the cert-key pairs in the folders already mounted to the containers like e.g. /docker/appdata/portainer/config?
I would recommend using docker secrets. You have the option of creating secrets with docker secret create or loading them in from files on the host. e.g.
version: '3.9'
services:
  app:
    image: ubuntu
    secrets:
      - source: my-cert
        target: /certs/my-cert.crt        # Load it into /certs/my-cert.crt, default is /run/secrets/{secret_name}
        mode: 0400                        # Readable only by owner, default is 0444
        # uid: '0'                        # Default UID is root, change if needed.
        # gid: '0'                        # Default GID is root, change if needed.
      - source: my-other-cert
        target: /certs/my-other-cert.crt
        mode: 0400
secrets:
  my-cert:
    external: true                        # Load cert from a secret made with "docker secret create"
  my-other-cert:
    file: ./some-cert.crt                 # Load cert from a file on your host machine
Who should be the owner:group of the certs+keys pairs, root or the user running the container or something third?
Almost certainly root, unless another user runs the process inside the container. You can configure this via the uid and gid options on secrets.
Which permissions should be set for the certs+key pairs?
You want 400 or 444 on everything generally. Using secrets has the added benefit of making these files un-writeable from the container (even to root). You can configure this with the mode option on secrets.
If you are going to load a secret from a file on disk instead of using docker secret create, it should be from shared storage that's accessible on all swarm nodes. I know you said it's a swarm of one, just added for clarity.
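Once deployed, you can sanity-check the ownership and permissions from inside a running task, assuming the service keeps a process running (the stack name certs_demo and the name filter are just examples):
$ docker stack deploy -c docker-compose.yml certs_demo
$ docker exec $(docker ps -q -f name=certs_demo_app) ls -l /certs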

Consume secret inside dockerfile

Is it possible to access the machine's environment variables inside a Dockerfile? I was thinking of passing the SECRET as a build ARG, like so:
docker-compose:
version: '3.5'
services:
  service:
    ...
    build:
      ...
      args:
        SECRET: ${SECRET}
    ...
dockerfile:
FROM image
ARG SECRET
RUN script-${SECRET}
Note: the container is built in Kubernetes; I cannot pass any arguments to the build command or run any command at all.
Edit 1: It is okay to pass SECRET as ARG because this is not sensitive data. I'm using SECRETS to access micro service data, and I can only store data using secrets. Think of this as machine environment.
Edit 2: This was not a problem with docker but with the infrastructure that I was working with which does not allow any arguments to be passed to the docker build.
Secrets should be used at run time and provided by the execution environment.
Also, everything that executes during a container build is written down as layers and is available later to anyone who can get access to the image. That's why it's hard to consume secrets during the build in a secure way.
In order to address this, Docker recently introduced a special option --secret. To make it work, you will need the following:
Set environment variable DOCKER_BUILDKIT=1
Use the --secret argument to docker build command
DOCKER_BUILDKIT=1 docker build --secret id=mysecret,src=mysecret.txt...
Add a syntax comment to the very top of your Docker file
# syntax = docker/dockerfile:1.0-experimental
Use the --mount argument to mount the secret for every RUN directive that needs it
RUN --mount=type=secret,id=mysecret cat /run/secrets/mysecret
Please note that this needs Docker version 18.09 or later.
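Putting those pieces together, a minimal sketch (the secret file mysecret.txt and the image tag are just examples):
Dockerfile:
# syntax = docker/dockerfile:1.0-experimental
FROM alpine
RUN --mount=type=secret,id=mysecret cat /run/secrets/mysecret
Build command:
$ DOCKER_BUILDKIT=1 docker build --secret id=mysecret,src=mysecret.txt -t buildkit-secret-demo .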
ARG is a build-time argument. You want to keep secrets secret and not write them into the artifacts. Keep secrets in external environment variables or in external files.
docker run -e SECRET_NAME=SECRET_VALUE
and in docker-compose:
services:
  app-name:
    environment:
      - SECRET_NAME=YOUR_VALUE
or
services:
  app-name:
    env_file:
      - secret-values.env
Kubernetes
When you run exactly the same container image in Kubernetes, you mount the secret from a Secret object.
containers:
  - name: app-name
    image: app-image-name
    env:
      - name: SECRET_NAME
        valueFrom:
          secretKeyRef:
            name: name-of-secret-object
            key: token
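For completeness, the referenced Secret object could be created with something like this (the name and literal value are placeholders):
$ kubectl create secret generic name-of-secret-object --from-literal=token=YOUR_VALUE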
Yes, pass the secret data as an ARG if you need to access the secret during the container build; you have no (!?) alternative.
ARG values are only available for the duration of the build, so you need to be able to trust the build process and that it is cleaned up appropriately at its conclusion; if a malicious actor were able to access the build process (or the image after the fact), they could access the secret data.
It's curious that you wish to use the secret as script-${SECRET}, as I assumed the secret would be used to access an external service. Someone would be able to determine the script name from the resulting Docker image, and this would expose your secret.
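You can check this leakage yourself: build arguments consumed by RUN steps are recorded in the image's layer history (the image name here is a placeholder):
$ docker history --no-trunc myimage:latest | grep SECRET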

Why do I need to be in Swarm mode to use Docker secrets?

I am playing around with a single container docker image. I would like to store my db password as a secret without using compose (having probs with that and Gradle for now). I thought I could still use secrets even without compose but when I try I get...
$ echo "helloSecret" | docker secret create helloS -
Error response from daemon: This node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again.
Why do I need to use swarm mode just to use secrets? Why can't I use them without a cluster?
You need to run swarm mode for secrets because that's how docker implemented secrets. The value of secrets is that workers never write the secret to disk, the secret is handed out on a need-to-know basis (a worker does not receive a secret until a task that uses it is scheduled there), and managers encrypt the secret on disk. The managers store secrets in the raft database.
You can easily deploy a single node swarm cluster with the command docker swarm init. From there, docker-compose up gets changed to docker stack deploy -c docker-compose.yml $stack_name.
Secrets and configs in swarm mode provide a replacement for mounting single file volumes into containers for configuration. So without swarm mode on a single node, you can always make the following definition:
version: '2'
services:
  app:
    image: myapp:latest
    volumes:
      - ./secrets:/run/secrets:ro
Or you can separate the secrets from your app slightly by loading those secrets into a named volume. For that, you could do something like:
tar -cC ./secrets . | docker run -i -v secrets:/secrets busybox tar -xC /secrets
And then mount that named volume:
version: '2'
volumes:
  secrets:
    external: true
services:
  app:
    image: myapp:latest
    volumes:
      - secrets:/run/secrets:ro
Check out this answer (https://serverfault.com/a/936262) provided by user sel-en-ium:
You can use secrets if you use a compose file. (You don't need to run a swarm.)
You use a compose file with docker-compose: there is documentation for "secrets" in a docker-compose.yml file.
I switched to docker-compose because I wanted to use secrets. I am happy I did, it seems much more clean. Each service maps to a container. And if you ever want to switch to running a swarm instead, you are basically already there.
Unfortunately the secrets are not loaded into the container's environment, they are mounted to /run/secrets/

How to read external secrets when using docker-compose

I wonder how I can pass external secrets into services spawned by docker-compose. I do the following:
I create new secret
printf "some secret value goes here" | docker secret create wallet_password -
My docker-compose.yml:
version: "3.4"
services:
test:
image: alpine
command: 'cat /run/secrets/wallet_password'
secrets:
- wallet_password
secrets:
wallet_password:
external: true
Then I run:
docker-compose -f services/debug/docker-compose.yml up -d --build
and
docker-compose -f services/debug/docker-compose.yml up
I get the following response:
WARNING: Service "test" uses secret "wallet_password" which is external. External secrets are not available to containers created by docker-compose.
WARNING: The Docker Engine you're using is running in swarm mode.
Compose does not use swarm mode to deploy services to multiple nodes in a swarm. All containers will be scheduled on the current node.
To deploy your application across the swarm, use `docker stack deploy`.
Starting debug_test_1 ...
Starting debug_test_1 ... done
Attaching to debug_test_1
test_1 | cat: can't open '/run/secrets/wallet_password': No such file or directory
Sooo.... is there any way of passing external secret into container spawned by docker-compose?
Nope.
External secrets are not available to containers created by docker-compose.
The warning message sums it up pretty nicely. Secrets are a swarm mode feature; the secret is stored inside the swarm manager engine. That manager does not expose those secrets to externally launched containers. Only swarm services granted the secret can run containers with the secret loaded.
You can run a service in swarm mode that extracts the secret since it's just a file inside the container and the application inside the container can simply cat out the file contents. You can also replicate the functionality of secrets in containers started with compose by mounting a file as a volume in the location of the secret. For that, you'd want to have a separate compose file since the volume mount and secret mount would conflict with each other.
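For that compose-only fallback, a sketch of the volume-based stand-in (service and file names taken from the question; a plain wallet_password file is assumed to sit next to the compose file):
version: "3.4"
services:
  test:
    image: alpine
    command: 'cat /run/secrets/wallet_password'
    volumes:
      # bind-mount a local file where the application expects the secret
      - ./wallet_password:/run/secrets/wallet_password:ro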
You need to run a swarm. This is how it goes:
Create a swarm:
docker swarm init
Create your secrets (as many as you need):
printf "<secret_content>" | docker secret create <secret_name> -
Check all the available secrets with:
docker secret ls
Now, use the docker-compose file as the basis for the service:
docker stack deploy --compose-file <path_to_compose> <service_name>
Be aware that you'll find your secrets in a plain text file located at /run/secrets/<secret_name>.
