Using Docker Secrets in a Container

I am using Docker CE 17.09-1. I am leveraging Docker Swarm and have deployed a Stack with multiple services.
I've decided to use Docker Secrets for various credentials. One of the services I am running requires the database username and password in a configuration file. I have created a secret for each required credential (two in total), and I can see the read-only files under /run/secrets/ in the container. How do I insert the contents of those files into my configuration file? My config file is a .ini file and contains a number of values.
Thank you in advance for any suggestions.

One approach I have considered is to modify the ENTRYPOINT or CMD script so that it generates (or modifies) the local config file from a template, filling in the secrets read at runtime from /run/secrets.
Once the configuration files are properly generated, the same script launches the service in the foreground.
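A minimal sketch of such an entrypoint script, assuming hypothetical secret names (db_user, db_password), placeholder tokens, and paths:

```sh
#!/bin/sh
# docker-entrypoint.sh -- render the config from a template, then start the service
set -e

# Read the secrets that swarm mounted read-only into the container
DB_USER="$(cat /run/secrets/db_user)"
DB_PASS="$(cat /run/secrets/db_password)"

# Substitute the placeholders in the template to produce the real .ini file
sed -e "s|@DB_USER@|${DB_USER}|g" \
    -e "s|@DB_PASS@|${DB_PASS}|g" \
    /app/config.ini.template > /app/config.ini

# Hand off to the service in the foreground (PID 1)
exec "$@"
```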

Depending on the service, you may be able to set the path to the secrets file (within /run/secrets) in an environment variable; alternatively, you can point to the secrets file from within the .ini file, or mount the secrets file at the path where the image expects it.
For an example of the former, look at the mysql image on Docker Hub. As noted at https://hub.docker.com/_/mysql/:

"As an alternative to passing sensitive information via environment variables, _FILE may be appended to the previously listed environment variables, causing the initialization script to load the values for those variables from files present in the container. In particular, this can be used to load passwords from Docker secrets stored in /run/secrets/<secret_name> files."
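A compose-file sketch of that pattern, assuming a secret named mysql_root_password has already been created with docker secret create:

```yaml
version: "3.1"
services:
  db:
    image: mysql:5.7
    environment:
      # The image's init script reads the root password from this file
      MYSQL_ROOT_PASSWORD_FILE: /run/secrets/mysql_root_password
    secrets:
      - mysql_root_password
secrets:
  mysql_root_password:
    external: true
```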
For an example of the latter, see the rabbitmq image on Docker Hub. As noted at https://hub.docker.com/_/rabbitmq/:

"If you wish to provide the cookie via a file (such as with Docker Secrets), it needs to be mounted at /var/lib/rabbitmq/.erlang.cookie."
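Docker 17.06 and later accept an absolute target path for a secret, so a sketch of that mount could look like this (the secret name is an assumption):

```sh
# Create the secret, then mount it where the image expects the cookie
echo -n "secret-cookie-value" | docker secret create erlang_cookie -
docker service create --name rabbitmq \
  --secret source=erlang_cookie,target=/var/lib/rabbitmq/.erlang.cookie,mode=0600 \
  rabbitmq:3
```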

Related

Best practice for handling service configuration in docker

I want to deploy a docker application in a production environment (single host) using a docker-compose file provided by the application creator. The docker based solution is being used as a drop-in replacement for a monolithic binary installer.
The application ships with a default configuration but with an expectation that the administrator will want to apply moderate configuration changes.
There appear to be a few ways to apply custom configuration to the services defined in the docker-compose.yml file, but I am not sure which is considered best practice. The two I am considering at the moment are:
Bake the configuration into a new image. Here, I would add a build step for each service defined in the docker-compose file and create a minimal Dockerfile that uses COPY to replace the existing configuration files in the image with my custom config files. Alternatively, sed and echo in CMD statements could change configuration inline without replacing the files wholesale.
Use a bind mount with configuration stored on the host. In this case, I would store all custom configuration files in a directory on the host machine and define bind mounts in the volumes parameter for each service in the docker-compose file.
The first option seems the cleanest to me, as the application is completely self-contained; however, I would need to rebuild the image for any further configuration change. The second option seems the easiest, as I can make configuration changes on the fly (restarting services in the container as required).
Is there a recommended method for injecting custom configuration into Docker services?
Given your context, I think using a bind mount would be better.
A Docker image is supposed to be reusable in different contexts, and building an entire image solely for a specific configuration (i.e. environment) would defeat that purpose:
- instead of the generic configuration provided by the base image, you create an environment-specific image
- every time you need to change the configuration you have to rebuild the entire image, whereas with a bind mount a simple restart, or a re-read of the configuration file by the application, is sufficient
The Docker documentation recommends this as well. From the Dockerfile best practices:

"You are strongly encouraged to use VOLUME for any mutable and/or user-serviceable parts of your image."

And under good use cases for bind mounts:

"Sharing configuration files from the host machine to containers."
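As a sketch, with hypothetical image and paths, option 2 in the docker-compose.yml looks like:

```yaml
services:
  app:
    image: vendor/app:latest
    volumes:
      # Custom config lives on the host; edit it and restart the service to apply
      - ./config/app.conf:/etc/app/app.conf:ro
```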

Is there any security advantage to mounting secrets as a file instead of passing them as environment variables with Docker on Kubernetes?

With Docker, there is discussion (consensus?) that passing secrets through runtime environment variables is not a good idea because they remain available as a system variable and because they are exposed with docker inspect.
In Kubernetes, there is a system for handling secrets, but then you are left to either pass the secrets as env vars (using envFrom) or mount them as files accessible in the file system.
Are there any reasons that mounting secrets as a file would be preferred to passing them as env vars?
I got all warm and fuzzy thinking things were so much more secure now that I was handling my secrets with k8s. But then I realized that in the end the 'secrets' are treated just the same as if I had passed them with docker run -e when launching the container myself.
Environment variables aren't treated very securely by the OS or applications. Forking a process shares its full environment with the forked process. Logs and traces often include environment variables. And the environment is visible to the entire application as effectively a global variable.
A file can be read directly into the application and handled by the routine that needs it, as a local variable that is not shared with other methods or forked processes. With swarm mode secrets, these secret files are injected into a tmpfs filesystem on the workers and are never written to disk.
Secrets injected as environment variables into the configuration of the container are also visible to anyone that has access to inspect the containers. Quite often those variables are committed into version control, making them even more visible. Separating the secret into a separate object that is flagged for privacy allows you to manage it differently from open configuration like environment variables.
Yes, since when you mount them the actual values are not visible through docker inspect or other pod management tools. Moreover, you can enforce file-level access controls at the host's filesystem level for those files.
Suggested further reading: Kubernetes Secrets.
Secrets in Kubernetes are used to store sensitive information like passwords and SSL certificates. You definitely want to mount SSL certs as files in the container rather than sourcing them from environment variables.
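As a sketch of the file-mount approach, assuming a Secret named db-creds already exists:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: example/app:latest
      volumeMounts:
        - name: db-creds
          mountPath: /etc/secrets   # each key in the Secret appears as a file here
          readOnly: true
  volumes:
    - name: db-creds
      secret:
        secretName: db-creds
```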

Docker - how to dynamically configure an application that uses flat text config files?

I'm ramping up on Docker and k8s, and am running into an issue with a 3rd-party application I'm containerizing: the application is configured via flat text files, with no environment-variable overrides.
What is the best way to dynamically configure this app? I'm immediately leaning towards a sidecar container that accepts environment variables, renders the text config file to a shared volume in the pod, from which the application container then reads it. Is this correct?
What is the best practice here?
Create a ConfigMap from this configuration file. Then mount the ConfigMap into the pod; this creates the configuration file in the mounted directory, and you can use it as usual.
Related examples:
Create a ConfigMap from a file.
Mount a ConfigMap as a volume.
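A minimal sketch, assuming a hypothetical flat config file app.ini:

```sh
# Build the ConfigMap directly from the existing config file
kubectl create configmap app-config --from-file=app.ini
```

```yaml
# Pod spec fragment: the app sees the file at /etc/app/app.ini
spec:
  containers:
    - name: app
      image: example/app:latest
      volumeMounts:
        - name: config
          mountPath: /etc/app
  volumes:
    - name: config
      configMap:
        name: app-config
```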

Build a Docker Linux Container with Jenkins, including dynamically added passwords

I need to build a complex Docker container based on Alpine Linux with sftp, vsftpd, and a special application which relies on input via SFTP/FTPS from a list of customers.
Admittedly, this container acts more like a full-blown VM. In any case, I have to dynamically create the container, including all user accounts with their passwords.
I managed it with a bash script, supervisord and a nice Dockerfile:
Result -> it works like a charm.
While I copy my script plus all the config files, app data, etc. into the container, I also provide a classic CSV list to add my users with their group memberships and passwords.
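The provisioning step looks roughly like this sketch (the CSV layout, name,group,password, is an assumption):

```sh
#!/bin/sh
# Create accounts from a CSV of name,group,password (Alpine/busybox tools)
while IFS=, read -r name group pass; do
  addgroup "$group" 2>/dev/null || true   # create the group if missing
  adduser -D -G "$group" "$name"          # create the user without a password
  echo "$name:$pass" | chpasswd           # then set the password
done < /tmp/users.csv
```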
So far so good. OK, now the problem: the whole content should be stored in a Git repo, and absolutely no passwords should be in it, with these constraints:
- no git encryption
- no docker compose
- no docker services etc.
Now my task is to provide all passwords via environment variables through Jenkins (not Dockerfile ENVs), and I have no idea how I can do this. I mean, environment variables live in Jenkins' environment, not inside the container? Any recommendations?

How to update docker container image but keep the generated files by container app

What are the best practices for updating a container in the following scenario?
I have images built from my web app project, and I publish new images based on updated source code once a month.
But my web app generates files, or updates existing ones, over time while running in the container. For example, the app creates a new XML file under the user folder for each web user. Another example is files uploaded by users.
I want to keep these files when running a new, updated image, without losing them.
/bin/
    /first.dll
    /second.dll
/other-sources/
    /some.cs
    /other.cs
/user/
    /user-1.xml
    /user-2.xml
/uploads/
    /images/
        /image-1.jpg
/web.config
Should I use the volume feature of Docker? Is there another strategy?
Short answer, yes, you do want a volume for these directories. More specifically, two volumes: /user and /uploads.
This gets into a fundamental practice of image and container design that is best done by dividing your application into three parts:
The application code, binaries, libraries, and other runtime dependencies.
The persistent data that the application accesses and creates.
The configuration that modifies how the application runs, particularly in different environments with the same code.
Each of these parts should go in a different place in docker.
The first part, the code and binaries, goes in your image. This is what you ship to run your container on different nodes in docker, and what you store in a registry for later reuse.
The second part, your persistent data, gets stored in a volume. There are two main types of volumes to pick from: a named volume and a host volume (aka bind mount).

A named volume has a particular feature that improves portability: it is initialized to the contents of your image at the volume location when the volume is created for the first time. This initialization includes directory and file permissions and ownership, and can be used to seed your volume with an initial state.

The host volume (bind mount) is just a directory mounted from the docker host into the container, and you get exactly what was on the host, including the uid/gid of the files and directories, with no initialization procedure. The host volume is very easy for developers to access, but it lacks portability if you move to a multi-node swarm cluster, and it suffers from host uid/gids mapping to different users inside the container, since usernames inside the container can differ for the same ids.

Any files you write inside the container that are not written to a volume should be considered disposable; they will be lost when you recreate the container to update to a new image. And any directory you define as a volume should be considered owned by that volume: it will not receive updates from the image when you replace the container.
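To make the distinction concrete (the image name and host path are assumptions):

```sh
# Named volume: initialized from the image's /user contents on first use
docker run -d -v user-data:/user myapp:latest

# Host volume (bind mount): the container sees exactly what is on the host
docker run -d -v /srv/myapp/user:/user myapp:latest
```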
The last piece, configuration, is often overlooked but equally important. This is anything injected into the application at startup to tell it where to connect for external data, config files that alter its behavior, and anything that needs to be separated out so the same image is reusable in different environments. This is how you get portability from development to production with the same image, and how you get reusability of publicly provided images.

The configuration is injected with environment variables, command line parameters, bind mounts of a config file (when you run on a single node), and configs + secrets, which are essentially the same bind mount of a config file, now stored in docker's swarm rather than locally on a single host. In your situation, /web.config looks suspiciously like a config file that you'll want to move out of the image and inject as a bind mount or swarm config.
To put these all together, you will want a compose file that defines your image, the volumes to use, and any configs or environment variables to set.
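A sketch of such a compose file for this app, with the volume and config names as assumptions (configs require swarm mode; on a single host, a bind mount of web.config achieves the same thing):

```yaml
version: "3.3"
services:
  web:
    image: myapp:latest
    volumes:
      - user-data:/user       # persistent data survives image updates
      - uploads:/uploads
    configs:
      - source: web_config
        target: /web.config   # config injected at runtime, not baked into the image
volumes:
  user-data:
  uploads:
configs:
  web_config:
    external: true
```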
