For the last few months I've managed passwords for my Docker containers by putting them in environment variables.
Example:
web:
  environment:
    - PASSWORD=123456
Then I bumped into Docker Secrets. So my questions are:
What are the reasons why I should use them?
Are they more secure? How?
Can you provide a simple example to show their functionalities?
It depends on the use case.
If you're running one application on your own machine for development that accesses just one secret, you don't need Docker secrets.
If you're running dozens of machines in production with a dozen clustered services that all require secrets from each other, you do need secret management.
Apart from the security concerns, it's just plain easier to have a standardized way of accessing, creating and removing your secrets.
A basic docker inspect (among other things) will show all your environment variables, so this is not secure at all.
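To answer the example part of the question: with swarm mode enabled (docker swarm init) you can create a secret and reference it from a compose file; the value is then mounted as a file under /run/secrets/ inside the container instead of showing up in the environment or in docker inspect. A minimal sketch, assuming the secret was created beforehand with docker secret create db_password ./db_password.txt (the image and the names are just placeholders):

version: "3.1"
services:
  web:
    image: nginx              # placeholder image; use your own
    secrets:
      - db_password           # available inside the container at /run/secrets/db_password
secrets:
  db_password:
    external: true            # refers to the secret created with `docker secret create`

Deploying with docker stack deploy -c docker-compose.yml mystack makes the secret available to the service; external secrets like this require swarm mode and are not handled by a plain docker-compose up.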
You can also have a look at Keywhiz (square.github.io/keywhiz) or Vault (hashicorp.com/blog/vault.html).
From: https://github.com/moby/moby/issues/13490
Environment Variables. Environment variables are discouraged, because they are:
Accessible by any process in the container, thus easily "leaked"
Preserved in intermediate layers of an image, and visible in docker inspect
Shared with any container linked to the container
Related
I have been building a backend for the last few days that I am launching with docker-compose. I use Docker secrets so that I don't have to store passwords, for example for the database, in an environment variable.
Since I want to use AWS ECS to run the Docker containers, and unfortunately Docker Compose is not supported in the way I want, I'm trying to rewrite the whole thing as an ECS compose file. However, I am still stuck on the secrets. I would like to include them like this:
version: 1
task_definition:
  ...
  services:
    my-service:
      ...
      secrets:
        - value_from: DB_USERNAME
          name: DB_USERNAME
        - value_from: DB_PASSWORD
          name: DB_PASSWORD
By doing this, the secrets end up stored in environment variables, don't they? That is not best practice, or is this case different from other cases?
Can I access these variables without problems inside the container by getting the environment variables?
I hope I have made my question clear enough; if not, please just ask.
Thanks for the help in advance.
It is not best practice to store sensitive information in environment variables. There is an option within AWS ECS where you can configure the environment variables so that their values are pulled from AWS Secrets Manager. This way, the environment variables are only resolved within the container at run time.
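If you are using the ecs-params.yml format from the question, a sketch of that option could look like this; the ARN is a placeholder, and the task execution role has to be allowed to read the secret:

version: 1
task_definition:
  services:
    my-service:
      secrets:
        # value_from points at a Secrets Manager (or SSM Parameter Store) ARN
        # instead of a literal value; ECS resolves it when the container starts
        - value_from: arn:aws:secretsmanager:us-east-1:123456789012:secret:db_password
          name: DB_PASSWORD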
But this still means that the container is going to store the variables as environment variables.
I have faced a similar situation while deploying apps onto EKS. I set up a central Vault server for secrets management within AWS and configured my application to call the Vault endpoint directly to get the secrets. I had to complicate my architecture because I had to meet PCI compliance standards. If you are not keen on using Vault due to its complexity, you can try knox-app (https://knox-app.com/), an online secrets management tool built by Lyft engineers.
And to answer the second part of your question: yes. If you set the env variables, you will be able to access them within the container without any problem.
I have a front-end (React) application. I want to build it and deploy it to 3 environments: dev, test and production. Like every front-end app, it needs to call some APIs. The API addresses will vary between the environments, so they should be stored as environment variables.
I use the OpenShift S2I build strategy to create the image. The image should be built and, in a sense, sealed against changes; then, before deployment to each particular environment, the variables should be injected.
So I believe the proper solution is a chained, two-stage build: the first stage is S2I, which compiles the sources and puts them into an Nginx/Apache/other container, and the second takes the result of the first, adds the environment variables and produces the final images, which are then deployed to dev, test and production.
Is this the correct approach, or does a simpler solution exist?
I would not bake your environment-specific information into your runtime container image. One of the major benefits of containerization is using the same runtime image in all of your environments. Generating a different image for each environment would increase the chance that your production deployments behave differently than those you tested in your lower environments.
For non-sensitive information, the typical approach to parameterizing your runtime image is to use one or more of the following (see the sketch after these lists):
ConfigMaps
Secrets
Pod Environment Variables
For sensitive information the typical approach is to use:
Secrets (which are not very secret, as anyone with root access to the hosts or cluster-admin rights in the cluster RBAC can read them)
A vault solution like HashiCorp Vault or CyberArk
A custom solution you develop in-house that meets your security needs
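For example, for the API addresses in the question, a ConfigMap plus a Deployment that consumes it might look roughly like this; all names, the URL and the image are placeholders, and only the ConfigMap changes between dev, test and production while the image stays identical:

apiVersion: v1
kind: ConfigMap
metadata:
  name: frontend-config                  # placeholder name
data:
  API_URL: https://api.dev.example.com   # differs per environment
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: registry.example.com/frontend:1.0.0   # same image in every environment
          envFrom:
            - configMapRef:
                name: frontend-config                  # injected as environment variables at runtime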
With Docker, there is discussion (consensus?) that passing secrets through runtime environment variables is not a good idea because they remain available as a system variable and because they are exposed with docker inspect.
In Kubernetes, there is a system for handling secrets, but then you are left to either pass the secrets as env vars (using envFrom) or mount them as a file accessible in the file system.
Are there any reasons that mounting secrets as a file would be preferred to passing them as env vars?
I got all warm and fuzzy thinking things were so much more secure now that I was handling my secrets with k8s. But then I realized that in the end the 'secrets' are treated just the same as if I had passed them with docker run -e when launching the container myself.
Environment variables aren't treated very securely by the OS or by applications. Forking a process shares its full environment with the forked process. Logs and traces often include environment variables. And the environment is visible to the entire application, effectively as a global variable.
A file can be read directly into the application and handled by the routine that needs it as a local variable that is not shared with other methods or forked processes. With swarm mode secrets, these secret files are injected into a tmpfs filesystem on the workers that is never written to disk.
Secrets injected as environment variables into the configuration of the container are also visible to anyone who has access to inspect the container. Quite often those variables are committed to version control, making them even more visible. Separating the secret into its own object that is flagged for privacy allows you to manage it differently from open configuration like environment variables.
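As a minimal sketch of the file-mount approach in Kubernetes (names, paths and the image are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0.0    # placeholder image
      volumeMounts:
        - name: db-credentials
          mountPath: /etc/secrets               # the app reads e.g. /etc/secrets/password
          readOnly: true
  volumes:
    - name: db-credentials
      secret:
        secretName: db-credentials              # e.g. kubectl create secret generic db-credentials --from-file=password

The same Secret could instead be exposed as an environment variable via valueFrom.secretKeyRef, but then it inherits all the environment-variable drawbacks described above.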
Yes, since when you mount them the actual value is not visible through docker inspect or other Pod management tools. Moreover, you can enforce file-level access control on the host's file system for those files.
More suggested reading: Kubernetes Secrets.
Secrets in Kubernetes are used to store sensitive information like passwords and SSL certificates.
You definitely want to mount SSL certs as files in the container rather than sourcing them from environment variables.
As per the Kubernetes docs: http://kubernetes.io/docs/user-guide/configmap/
Kubernetes has a ConfigMap API resource which holds key-value pairs of configuration data that can be consumed in pods.
This looks like a very useful feature, as many containers require configuration via some combination of config files and environment variables.
Is there a similar feature in Docker 1.12 Swarm?
Sadly, Docker (even in 1.12 with swarm mode) does not support the variety of use cases that you could solve with ConfigMaps (and there are no Secrets either).
The only thing supported is external env files, in both Docker (https://docs.docker.com/engine/reference/commandline/run/#/set-environment-variables-e-env-env-file) and Compose (https://docs.docker.com/compose/compose-file/#/env-file).
These are good for keeping configuration out of the image, but they rely on environment variables, so you cannot just externalize your whole config file (e.g. for nginx or Prometheus). Also, you cannot update the env file separately from the deployment/service, which is possible with K8s.
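For reference, the env-file approach looks like this in Compose; the file and variable names are placeholders, and db.env lives outside the image and outside version control:

# db.env (separate file):
#   POSTGRES_PASSWORD=change-me

version: "2"
services:
  db:
    image: postgres              # placeholder image
    env_file:
      - db.env                   # variables from db.env are injected into the container environment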
Workaround: you could perhaps build your configuration files in a way that substitutes in those variables from the env file.
I'd guess that sooner or later Docker will add that functionality. Currently, Swarm is still in its early days, so for advanced use cases you'd need to either wait (mid to long term all platforms will have similar features), build your own hack/workaround, or go with K8s, which has that stuff integrated.
Side note: for secrets storage I would recommend HashiCorp's Vault. However, for configuration it might not be the right tool.
I like the idea of modularizing an application into containers (db, frontend, backend...). However, according to the Docker docs, "Compose is great for development, testing, and staging environments".
The sentence says nothing about the production environment, so I am confused.
Is it better to use a Dockerfile to build the production image from scratch and install the whole LAMP stack (etc.) there?
Or is it better to build the production environment with docker-compose.yml? Is there any reason (overhead, linking, etc.) why Docker doesn't explicitly say that Compose is great for production?
Really you need to define "production" in your case.
Compose simply starts and stops multiple containers with a single command. It doesn't add anything to the mix you couldn't do with regular docker commands.
If "production" is a single docker host, with all instances and relationships defined, then compose can do that.
But if instead you want multiple hosts and dynamic scaling across the cluster then you are really looking at swarm or another option.
Just to extend what @ChrisSainty already mentioned: Compose is just an orchestration tool, and you can use your own images, built with your own Dockerfiles, with your Compose settings on a single host. But note that it is possible to run Compose against a Swarm cluster, since it exposes the same API as a single Docker host.
In my opinion it is an easy way to implement a microservice architecture using containers, tailoring services that are efficient and highly available. In addition, I recommend checking the official documentation on good practices for using Compose in production environments.
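To illustrate the Swarm route mentioned above: a compose file in version 3 format can carry a deploy section that docker stack deploy uses to schedule replicas across the cluster (plain docker-compose up ignores it). A sketch with a placeholder image:

version: "3"
services:
  web:
    image: registry.example.com/web:1.0.0   # placeholder image
    deploy:
      replicas: 3                           # honoured only by `docker stack deploy` on a swarm
      restart_policy:
        condition: on-failure

Running docker stack deploy -c docker-compose.yml mystack then spreads the three replicas across the swarm nodes.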