How to handle JWT and App Secrets in a prebuilt docker environment

We are pre-building the Shopware 6 code (composer install, storefront + admin build, but not the theme build) and copying it into a Docker container.
What is the best way to generate or supply the JWT secrets when running such a prebuilt container?
Normally we would do a
bin/console secrets:generate-keys
bin/console system:generate-jwt-secret
on the first installation.
But can these secrets also be kept in an ENV variable to avoid the need for a persistent /var volume?

You can override secrets locally as described here.
So in theory:
Run secrets:generate-keys to generate keys once.
Run secrets:decrypt-to-local to get the secrets added to your env file.
Run secrets:encrypt-from-local on deployment to set secrets from your env file.
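As a rough sketch of how that could fit a prebuilt container (hedged: it assumes the standard Shopware/Symfony console commands above and that your deployment injects the values as environment variables):

# once, on a machine that holds the encryption keys
bin/console secrets:generate-keys
bin/console system:generate-jwt-secret
# export the decrypted values so they can be supplied as env vars later,
# instead of living on a persistent /var volume
bin/console secrets:decrypt-to-local
# on deployment, rebuild the secrets from the injected env values
bin/console secrets:encrypt-from-local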

Related

How to supply env file for a docker GCP CloudRun Service

I have .env file for my docker-compose, and was able to run using "docker-compose up"
Now I pushed to cloud registry, and want to Cloud Run
How can I supply the various environment variables?
I did create secrets in secret manager, but how can I integrate both, so that my container starts reading all those needed secrets?
Note: My docker-compose is an app with a database, but I can split them into 2 containers if needed; they would still need secrets.
Edit: Added secret references.
EDIT:
I am unable to run my container
If the env file has X=x, and the docker-compose environment has app.prop=${X},
then should I create secret X or x?
Is Cloud Run using the Dockerfile or docker-compose? The image I pushed was built from docker-compose only. Sorry, I am getting confused (not assuming trivial things, as it is not working).
It is not possible to use docker-compose on Cloud Run, as it is designed for individual stateless containers. My suggestion is to create an image from your application service, upload the image to Google Container Registry in order to use it for your Cloud Run service, and connect it to Cloud SQL following the attached documentation. You can store database credentials with Secret Manager and pass them to your Cloud Run service as environment variables (check this documentation).
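A minimal sketch of that setup (the service name, image path, region, and secret name below are placeholders, not values from the question) could look like:

gcloud run deploy my-app \
  --image gcr.io/MY_PROJECT/my-app:latest \
  --region us-central1 \
  --add-cloudsql-instances MY_PROJECT:us-central1:my-sql-instance \
  --set-env-vars APP_ENV=prod \
  --set-secrets DB_PASSWORD=db-password:latest

Here --set-secrets exposes the Secret Manager secret db-password to the container as the environment variable DB_PASSWORD, and --add-cloudsql-instances attaches the Cloud SQL instance.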

How To Store and Retrieve Secrets From Hashicorp Vault using Docker-Compose?

I have set up an instance of Hashicorp Vault. I have successfully written and read secrets to and from it. Getting Vault up and running is the easy part. Now, how do I use Vault as a store to replace the .env file in docker-compose.yml? How do I read secrets from Vault in all of my docker-compose files?
Even more difficult: how do I dynamically generate keys to access the secrets in Vault, then use those keys in my docker-compose.yml files, without editing those files each time I restart a stack? How is that process automated? In short, how exactly can I leverage Hashicorp Vault to secure the secrets that are otherwise exposed in the .env files?
I have read all of their literature and blog posts, and haven't been able to find anything that outlines that process. I am stuck and any tips will be greatly appreciated.
Note: This is not a question about running a Hashicorp Vault container with docker-compose, I have successfully done that already.
Also Note: I cannot modify the containers themselves; I can only modify the docker-compose.yml file
You would need to query the Vault API to populate either your .env file or the environment in your container's entrypoint. My preference would be the container entrypoint at worst, and ideally doing it directly in your application. The reason is that Vault secrets can be short-lived, and any container running for longer than that period would need to refresh its secrets.
If you go with the worst case of doing this in the entrypoint, there are a few tools that come to mind: confd from Kelsey Hightower, and gomplate.
confd can run as a daemon and restart your app inside the container when the configuration changes. My only concern is that it is an older and less maintained project.
gomplate would be run by your entrypoint to expand a template file with the needed values. That file could just be an env.sh that you then source into your environment if you need env vars. Or you can run it within your command line as a subshell, e.g.
your-app --arg "$(gomplate ...sometemplate...)"
If you only use these tools to set the value once and then start your app, make sure to configure a healthcheck and/or gracefully exit your app when the credentials expire. Then run your container with orchestration (Kubernetes/Swarm Mode) or set a restart policy so that it restarts after the credentials expire and picks up the new ones.
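For illustration, a bare-bones entrypoint that queries Vault directly (a sketch only: the secret path secret/data/myapp, the field db_password, and the VAULT_ADDR/VAULT_TOKEN variables are assumptions) might look like:

#!/bin/sh
# fetch one secret from Vault's KV v2 HTTP API and export it before starting the app
set -e
DB_PASSWORD="$(curl -sf -H "X-Vault-Token: ${VAULT_TOKEN}" \
  "${VAULT_ADDR}/v1/secret/data/myapp" | jq -r '.data.data.db_password')"
export DB_PASSWORD
# hand off to the original image command so the container behaves as before
exec "$@"

Note that this fetches the secret only once at startup, which is exactly the case the answer above warns about for short-lived credentials.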

Handling secrets inside docker container without using docker swarm

One question: how do you handle secrets inside a Dockerfile without using Docker Swarm? Let's say you have some private repo on npm and restore packages using an .npmrc inside the Dockerfile by providing credentials. After the package restore, I am obviously deleting the .npmrc file from the container. The same goes for NuGet.config when restoring private repos inside the container. Currently, I am supplying these credentials as --build-arg while building the Dockerfile.
But a command like docker history --no-trunc will show the password in the log. Is there any decent way to handle this? Currently, I am not on Kubernetes, hence the need to handle this in Docker itself.
One way I can think of is mounting /run/secrets/ and storing the credentials there, either in some text file containing the password or via a .env file. But then this .env file has to be part of the pipeline to complete the CI/CD process, which means it has to be part of source control. Is there any way to avoid this, or can something be done via the pipeline itself, or can some kind of encryption/decryption logic be applied here?
Thanks.
First, keep in mind that files deleted in one layer still exist in previous layers. So deleting files doesn't help either.
There are three ways that are secure:
Download all code in advance outside of the Docker build, where you have access to the secret, and then just COPY in the stuff you downloaded.
Use BuildKit, which is an experimental Docker feature that enables secrets in a secure way (https://docs.docker.com/develop/develop-images/build_enhancements/#new-docker-build-secret-information).
Serve secrets from a network server running locally (e.g. in another container). See here for detailed explanation of how to do so: https://pythonspeed.com/articles/docker-build-secrets/
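As a sketch of the BuildKit option (assuming an npm-based project; the base image and file names are placeholders), the Dockerfile could mount the .npmrc only for the install step so it never ends up in a layer or in docker history:

# syntax=docker/dockerfile:1
FROM node:18
WORKDIR /app
COPY package*.json ./
# the secret is available only during this RUN step, not in the image history
RUN --mount=type=secret,id=npmrc,target=/root/.npmrc npm ci
COPY . .

The build would then be invoked with something like:
DOCKER_BUILDKIT=1 docker build --secret id=npmrc,src=$HOME/.npmrc -t myapp .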
Let me try to explain docker secret here.
Docker secrets work with Docker Swarm. For that, you need to run
$ docker swarm init --advertise-addr=$(hostname -i)
This makes the node a swarm manager. Now you can create your secret, like this:
Create a file /db_pass and put your password in this file.
$ docker secret create db_pass /db_pass
This creates your secret. Now if you want to list the secrets created, run the command
$ docker secret ls
Let's use the secret while running the service:
$ docker service create --name mysql-service \
    --secret source=db_pass,target=mysql_root_password \
    --secret source=db_pass,target=mysql_password \
    -e MYSQL_ROOT_PASSWORD_FILE="/run/secrets/mysql_root_password" \
    -e MYSQL_PASSWORD_FILE="/run/secrets/mysql_password" \
    -e MYSQL_USER="wordpress" -e MYSQL_DATABASE="wordpress" \
    mysql:latest
In the above command, /run/secrets/mysql_root_password and /run/secrets/mysql_password are file locations inside the container; each file holds the data of the source secret (db_pass):
source=db_pass,target=mysql_root_password (creates the file /run/secrets/mysql_root_password inside the container with the db_pass value)
source=db_pass,target=mysql_password (creates the file /run/secrets/mysql_password inside the container with the db_pass value)

In MLflow project using docker environment, how to setup aws credentials

I am working on using MLflow projects, and one use case is like this.
The MLflow run target/environment is Docker.
Data lives on aws s3
When developing on a laptop, the laptop has an AWS profile to access the data.
(When developing on EC2, the EC2 have role attached to access s3)
Currently, I have credentials stored on the host in '~/.aws/credentials', and I can access S3 from the host. The question is: in an MLflow project, how do I make a program running in Docker access S3 files?
Note that the question is not "in general" how to setup docker. The question is the recommended way to do the aws setup/configuration in a MLflow project. Thanks!
You can use a volume for application data.
Specifically, for AWS credentials, you can mount the credentials directory itself.
Obviously, you'll need to make sure to install any required dependencies for aws or mlflow, but here are the required parts for adding a user and mounting the credentials as a volume.
First, in your Dockerfile,
# add user with home directory
RUN useradd -m mlflow
# set default user
USER mlflow
# set working directory
WORKDIR /home/mlflow
Then to mount during run,
docker run -it -v "${HOME}"/.aws:/home/mlflow/.aws \
mlflow
Note: make sure to never hard-code credentials inside of any Docker containers.
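If mounting the directory is not possible (for example in CI), a hedged alternative, assuming the process in the container uses the standard AWS SDK environment variables, is to forward the credentials at run time:

docker run -it \
  -e AWS_ACCESS_KEY_ID \
  -e AWS_SECRET_ACCESS_KEY \
  -e AWS_DEFAULT_REGION=us-east-1 \
  mlflow

Passing -e VAR without a value copies the value from the host environment; keep in mind that the values remain visible via docker inspect.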

Docker secrets passing as environment variable

I put the docker in swarm mode and did the following
echo "'admin'" | docker secret create password -
docker service create \
--network="host" \
--secret source=password,target=password \
-e PASSWORD='/run/secrets/password' \
<image>
I was not able to pass the password secret I created as an environment variable through the docker service.
Please help me out where I am going wrong.
You are misunderstanding the concept of docker secrets.
The whole point of creating secrets is avoiding putting sensitive information into environment variables.
In your example the PASSWORD environment variable will simply carry the value /run/secrets/password which is a file name and not the password admin.
A valid use case of docker secrets would be that your docker image reads the password from that file.
Check out the docs here, especially the example about MySQL:
The example sets the environment variables MYSQL_PASSWORD_FILE and MYSQL_ROOT_PASSWORD_FILE to point to the files /run/secrets/mysql_password and /run/secrets/mysql_root_password. The mysql image reads the password strings from those files when initializing the system database for the first time.
In short: your docker image should read the content of the file /run/secrets/password
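For example, if you control the image's entrypoint, a minimal wrapper (a sketch, assuming the original command is passed as arguments) could read the mounted file and export the value:

#!/bin/sh
# read the secret Docker mounted at /run/secrets/password and expose it as PASSWORD
export PASSWORD="$(cat /run/secrets/password)"
# then start the original command
exec "$@"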
There is no standard here.
The Docker docs discourage using environment variables, but there is confusion over whether that means setting the password directly as a string in the "environment" section or any use of environment variables within the container.
Also, using a plain string instead of a secret means that when the same value is used in multiple services, it has to be checked and changed in multiple places instead of in one secret.
Some images, like mariadb, use env variables with a _FILE suffix to populate the suffix-less version of the variable with the secret file's contents. This seems to be fine.
Using Docker should not require redesigning the application architecture only to support secrets in files. Most other orchestration tools, like Kubernetes, support putting secrets into env variables directly, and nowadays this is not generally considered bad practice. Docker Swarm simply lacks good practices and proper examples for passing secrets to env variables.
IMHO the best way is to use the entrypoint as a "decorator" that prepares the environment from secrets.
A proper entrypoint script can be written as an almost universal way of processing secrets, because we can pass the original image's entrypoint as an argument to our new entrypoint script, so the original entrypoint does its own work after our script has prepared the container.
Personally, I am using the following entrypoint with images containing /bin/sh:
https://github.com/DevilaN/docker-entrypoint-example
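A rough sketch of such a decorator (an illustration of the idea, not the contents of the linked repository) could convert every *_FILE variable into its plain counterpart and then hand off to the original entrypoint:

#!/bin/sh
# for each VAR_FILE environment variable, read the referenced file and export VAR
for var in $(env | grep '_FILE=' | cut -d= -f1); do
  file="$(eval echo "\$$var")"
  [ -r "$file" ] && export "${var%_FILE}"="$(cat "$file")"
done
# run the original image entrypoint/command passed as arguments
exec "$@"

This mirrors the mariadb-style _FILE convention mentioned above while keeping the application itself unaware of where the secret came from.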
