Passing Docker secrets as environment variables

I put Docker in swarm mode and did the following:
echo "'admin'" | docker secret create password -
docker service create \
--network="host" \
--secret source=password,target=password \
-e PASSWORD='/run/secrets/password' \
<image>
I was not able to retrieve the secret I created via the environment variable in the Docker service.
Please help me figure out where I am going wrong.

You are misunderstanding the concept of docker secrets.
The whole point of creating secrets is avoiding putting sensitive information into environment variables.
In your example, the PASSWORD environment variable will simply carry the value /run/secrets/password, which is a file name and not the password admin.
A valid use case for Docker secrets would be for your Docker image to read the password from that file.
Check out the docs here, especially the example about MySQL:
the environment variables MYSQL_PASSWORD_FILE and MYSQL_ROOT_PASSWORD_FILE to point to the files /run/secrets/mysql_password and /run/secrets/mysql_root_password. The mysql image reads the password strings from those files when initializing the system database for the first time.
In short: your Docker image should read the content of the file /run/secrets/password.
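Following that MySQL example, the service wiring would look something like this (a sketch; the secret name is illustrative):
# The env variable carries only the *path*; the image reads the file itself.
printf '%s' 'admin' | docker secret create mysql_root_password -
docker service create \
  --secret source=mysql_root_password,target=mysql_root_password \
  -e MYSQL_ROOT_PASSWORD_FILE=/run/secrets/mysql_root_password \
  mysql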

There is no standard here.
The Docker docs discourage using environment variables, but there is confusion over whether that means setting the password directly as a string in the "environment" section or any other use of environment variables within the container.
Also, using a plain string instead of a secret when the same value is needed in multiple services means checking and changing it in multiple places instead of in one secret.
Some images, like mariadb, use env variables with a _FILE suffix to populate the suffixless version of the variable with the secret file's contents. This seems to be OK.
Using Docker should not require redesigning your application architecture just to support secrets in files. Most other orchestration tools, like Kubernetes, support putting secrets into env variables directly. Nowadays this is not generally considered bad practice. Docker Swarm simply lacks good practices and proper examples for passing a secret to an env variable.
IMHO the best way is to use the entrypoint as a "decorator" to prepare the environment from secrets.
A proper entrypoint script can be written as an almost universal way of processing secrets, because we can pass the original image's entrypoint as an argument to our new entrypoint script, so the original "decorator" does its own work after our script prepares the container.
Personally I am using the following entrypoint with images containing /bin/sh:
https://github.com/DevilaN/docker-entrypoint-example
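A minimal sketch of such a decorator (an illustration of the idea, not the script from the linked repository), assuming the image has /bin/sh:
#!/bin/sh
# For every VAR_FILE env variable, export plain VAR with the file's contents,
# then hand control to the original entrypoint/command passed as arguments.
for var in $(env | sed -n 's/^\([^=]*_FILE\)=.*/\1/p'); do
  file=$(eval echo "\$$var")
  if [ -r "$file" ]; then
    export "${var%_FILE}=$(cat "$file")"
    unset "$var"
  fi
done
exec "$@"
The image's original entrypoint and command are then simply passed as arguments to this script.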

Related

Docker compose secrets

The newer docker compose (vs docker-compose) allows you to set secrets in the build section. This is nice because if you pass secrets at runtime, the file is readable by anyone who can get into the container and read /run/secrets/<my_secret>.
Unfortunately, it appears that it's only possible to pass the secrets via either the environment or a file. Doing it via the environment doesn't seem like a great idea because someone on the box could read /proc/<pid>/environ while the image is being built to snag the secrets. Doing it via a file on disk isn't good because then the secret is stored on disk unencrypted.
It seems like the best way to do this would be with something like
docker swarm init
(read -sp "Enter your secret: "; echo "$REPLY") | docker secret create my_secret -
docker compose build --no-cache
docker swarm leave --force
Alas, it appears that Docker can't read build-time secrets from the swarm, for some unknown reason.
What is the best way to do this? This seems like a slight oversight, along the lines of docker secret create not having a way to prompt for the value, forcing you to resort to hacks like the above to keep the secret out of your bash history.
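For reference, newer BuildKit-based builders can source a build secret from an environment variable instead of a file (a sketch; my_secret and MY_SECRET are illustrative names, and the Dockerfile would consume the secret with RUN --mount=type=secret,id=my_secret):
# Sketch, assuming a BuildKit-enabled builder (names illustrative):
read -sp "Enter your secret: " MY_SECRET && export MY_SECRET
docker build --secret id=my_secret,env=MY_SECRET --no-cache .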
UPDATE: This is for Swarm/remote Docker systems, not for local build-time secrets. (I realised you were asking about those primarily and just mentioned Swarm in the second part of the question. I believe it still holds good advice for some, so I'll leave the answer undeleted.)
Docker Swarm can only read runtime secrets you create with the docker secret create command, and they must already exist on the cluster when you deploy the stack. We had been in the same situation before. We solved the "issue" using Docker contexts. You can create an SSH-based Docker context which points to a manager (we just use the first one). Then on your LOCAL device (we use Windows as the base platform and WSL2/a Linux VM for the UNIX part), you can simply run docker commands with the inline --context property. More on contexts in the official docs. For instance: docker --context production secret create .... And so on.
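A sketch of that context workflow (the host, secret, and stack names are illustrative):
# Create an SSH-based context pointing at a swarm manager, then use it inline.
docker context create production --docker "host=ssh://deploy@swarm-manager-1"
read -sp "Enter your secret: "; printf '%s' "$REPLY" | docker --context production secret create my_secret -
docker --context production stack deploy -c docker-compose.yml my_stack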

Do Docker Hub image repositories contain the environment variables from .env file?

I am building out my product environments using Docker and want to make sure my secrets and keys are secure. To do so, I would like to use .env files. For convenience's sake, I would like to avoid dealing with Docker secrets.
For local development I am using OpenFaas, which requires the image to be pushed to Docker Hub for use with k3s OpenFaas. I am concerned that the Docker Hub image may contain the variables from the .env file used with docker-compose.yaml.
Are the environment variables included in the Docker Hub image repository?
The docs and this Stack Overflow response suggest that environment variables are only used at "runtime", which I understand to mean they are not included. That should mean only someone with admin access to the server would be able to inspect the running container for the secrets. Am I wrong in this assumption? Should I be using Docker secrets?
Update 08/19/21
Through testing I feel more confident that the .env variables are not included in the image on Docker Hub. Note that I am using the OpenFaas faas-cli to handle Docker deployment. To test this I did the following:
Comment out the environment_file: section of the .yml file
Upload the image using faas-cli up -f <functions>.yml (this builds and pushes the Docker image to a Docker Hub repository, then deploys the OpenFaas function)
Invoke the function. The function simply returns the environment variable. With the environment variables "commented out" of the .yml file, the function returns "undefined", meaning the variable is not available to the function.
This gave me some confidence, but not as much as I'd like, so next I did the following:
Uncomment the environment_file: section of the .yml file
Run the command faas-cli up -f <functions>.yml --no-cache --skip-push.
Invoke the function
--no-cache ensures that the image is built fresh, without cached layers, and --skip-push skips the step where the Docker image is pushed to the Docker Hub repository. Thus the deployment should still use the image that was pushed with environment_file commented out, but now deploy with environment_file uncommented in the local .yml file.
After invoking the function this time, the variable is available and returns appropriately. As long as my interpretation of how the build, push, and deploy portions of the faas-cli work is correct, I feel confident the .env file and variables are not part of the Docker Hub image.
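For extra confidence, the pushed image can also be checked directly (a sketch; the image name is illustrative):
# Inspect the pushed image yourself for baked-in values:
docker pull myuser/myfunction:latest
docker image inspect --format '{{.Config.Env}}' myuser/myfunction:latest
docker history --no-trunc myuser/myfunction:latest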
If you use docker-compose for building images (for some odd reason), there are two ways to pass variables:
passing them directly in the docker-compose file, or pointing to a file which contains the variables.
In both cases, as you said, the variables are passed at runtime. This means that if you use the variable file, whoever runs the container has to have that file on their system, and if you pass the variables directly in the docker-compose file, someone has to pass them again when running the Docker image directly (they need to pass them along with the docker run command).
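The same two runtime options with plain docker run look like this (a sketch with illustrative values):
docker run -e API_KEY=abc123 myimage    # variable passed inline on the command
docker run --env-file ./.env myimage    # variables read from a file on the runner's system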

How To Store and Retrieve Secrets From Hashicorp Vault using Docker-Compose?

I have set up an instance of Hashicorp Vault. I have successfully written and read secrets to and from it. Getting Vault up and running is the easy part. Now, how do I use Vault as a store to replace the .env file in docker-compose.yml? How do I read secrets from Vault in all of my docker-compose files?
Even more difficult: how do I dynamically generate keys to access the secrets in Vault, then use those keys in my docker-compose.yml files, without editing those files each time I restart a stack? How is that process automated? In short, just exactly how can I leverage Hashicorp Vault to secure the secrets that are otherwise exposed in the .env files?
I have read all of their literature and blog posts, and haven't been able to find anything that outlines that process. I am stuck and any tips will be greatly appreciated.
Note: This is not a question about running a Hashicorp Vault container with docker-compose; I have successfully done that already.
Also Note: I cannot modify the containers themselves; I can only modify the docker-compose.yml file
You would need to query the Vault API to populate either your .env file or the environment in the entrypoint of your container. My preference would be the container entrypoint at worst, and ideally doing it directly in your application. The reason is that Vault secrets can be short lived, and any container running for longer than that period would need to refresh its secrets.
If you go with the worst case of doing this in the entrypoint, there are a few tools that come to mind: confd from Kelsey Hightower, and gomplate.
confd can run as a daemon and restart your app inside the container when the configuration changes. My only concern is that it is an older and less maintained project.
gomplate would be run by your entrypoint to expand a template file with the needed values. That file could just be an env.sh that you then source into your environment if you need env vars. Or you can run it within your command line as a subshell, e.g.
your-app --arg "$(gomplate ...sometemplate...)"
If you only use these tools to set the value once and then start your app, make sure to configure a healthcheck and/or a graceful exit for your app when the credentials expire. Then run your container with orchestration (Kubernetes/Swarm mode) or set a restart policy so that the container restarts when its credentials expire and picks up new ones.
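As a rough illustration of the entrypoint approach (without confd or gomplate; it assumes curl and jq exist in the image, and the Vault path and field names are made up):
#!/bin/sh
# Read one secret from Vault's KV v2 API at container start, export it,
# then hand off to the original command.
DB_PASSWORD=$(curl -s -H "X-Vault-Token: $VAULT_TOKEN" \
  "$VAULT_ADDR/v1/secret/data/myapp" | jq -r '.data.data.db_password')
export DB_PASSWORD
exec "$@"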

Securing/Encrypting the sensitive environment variables

I'm using an env file which contains sensitive information for creating Docker containers.
The thing is, these values are not secure. They can easily be viewed via docker inspect, and hence they are available to any user that can run docker commands.
I'm looking for a way to secure these values from outside users, without using Docker Swarm.
Is there a way to achieve this?
For variables needed at build time (image creation):
ARG: --build-arg
For env variables needed when the container starts:
--env-file: lets you keep the values out of your CLI command, so nobody can see your variables by inspecting your shell history.
Use Docker secrets: possible in Swarm mode and Docker Enterprise. (docker swarm secrets)
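Sketches of those three options (file, image, and variable names are illustrative):
docker build --build-arg DB_USER=admin .                   # build-time ARG
docker run --env-file ./app.env myimage                    # values kept out of CLI history
printf '%s' 'admin' | docker secret create db_password -   # Swarm only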

Is it possible to customize environment variables by linking two Docker containers?

I've created a Docker image for my database server and one for the web application. Using the documentation, I'm able to link the two containers using environment variables as follows:
value="jdbc:postgresql://${DB_PORT_5432_TCP_ADDR}:${DB_PORT_5432_TCP_PORT}/db_name"
It works fine now, but it would be better if the environment variables were more general, without containing a static port number. Something like:
value="jdbc:postgresql://${DB_URL}:${DB_PORT}/db_name"
Is there any way to link the environment variables? For example, by using the ENV command in the Dockerfile (ENV DB_URL=$DB_PORT_5432_TCP_ADDR) or by using the --env argument when running the image (docker run ... -e DB_URL=$DB_PORT_5432_TCP_ADDR docker_image)?
Without building this kind of functionality into your Docker startup shell scripts or another orchestration mechanism, it is not possible at the moment to create environment variables like the ones you are describing. You do mention a couple of workarounds. However, the problem with -e DB_URL=$DB_PORT_5432_TCP_ADDR in your docker run command is that $DB_PORT_5432_TCP_ADDR is only set by the link inside the container, not in the shell where you run the command, so it will expand to nothing at that point. Typically, this is what your orchestration layer is used for: service discovery and passing this kind of data among your containers. There is at least one workaround mentioned here on SO that involves constructing a small shell script for your CMD or ENTRYPOINT directives that maps the environment variable inside the container.
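A minimal sketch of that entrypoint workaround (the script itself is illustrative; the variable names come from the question):
#!/bin/sh
# Map the link-generated variables to the generic names at container start,
# then hand off to the original command.
export DB_URL="$DB_PORT_5432_TCP_ADDR"
export DB_PORT="$DB_PORT_5432_TCP_PORT"
exec "$@"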
