Getting AWS parameter store secrets inside ecs-cli compose docker container

I have been building a backend for the last few days that I launch with docker-compose. I use docker secrets so that I don't have to store passwords (for example for the database) in environment variables.
Since I want to use AWS ECS to run the docker containers online, and unfortunately docker compose is not supported the way I want, I'm trying to rewrite the whole thing into an ECS-compose file. However, I am still stuck on the secrets. I would like to include them like this:
version: 1
task_definition:
  ...
  services:
    my-service:
      ...
      secrets:
        - value_from: DB_USERNAME
          name: DB_USERNAME
        - value_from: DB_PASSWORD
          name: DB_PASSWORD
By doing this, the secrets end up stored in environment variables, don't they? That is not best practice, or is this case different from other cases?
And can I access these variables without problems inside the container, just by reading the environment variables?
I hope I have made my question clear enough, if not, please just ask again.
Thanks for the help in advance.

It is not best practice to store sensitive information in environment variables. However, AWS ECS has an option to configure environment variables whose values are fetched from AWS Secrets Manager. This way, the environment variables are only resolved within the container at run time.
But this still means that the container is going to store the variables as environment variables.
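For reference, a hedged sketch of what this looks like in an ecs-params.yml: value_from points at the full ARN of an SSM Parameter Store parameter or a Secrets Manager secret (the ARN, region, account id and names below are placeholders):

```yaml
version: 1
task_definition:
  services:
    my-service:
      secrets:
        # value_from is the ARN of the parameter/secret; name becomes the
        # environment variable inside the container (placeholders below)
        - value_from: arn:aws:ssm:us-east-1:111111111111:parameter/prod/DB_PASSWORD
          name: DB_PASSWORD
```

Note that the task execution role also needs permission to read those ARNs (ssm:GetParameters or secretsmanager:GetSecretValue), otherwise the task will fail to start.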
I have faced a similar situation while deploying apps onto EKS. I set up a central Vault server for secrets management within AWS and configured my application to call the Vault endpoint directly to fetch the secrets. I had to complicate my architecture because I had to meet PCI compliance standards. If you are not keen on using Vault due to its complexity, you can try knox-app (https://knox-app.com/), an online secrets management tool built by Lyft engineers.
And to answer the second part of your question: yep. If you set the env variables, you will be able to access them within the container without any problem.
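As a minimal sanity check of that claim (plain shell; the export here stands in for what the ECS agent would inject at run time):

```shell
# simulate the injected secret, then read it the way the app would
export DB_USERNAME=admin   # placeholder: in ECS the agent injects the real value
echo "DB user is $DB_USERNAME"
```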


k8s management/handling of secrets inside container

I'm currently migrating my docker deployment to k8s manifests, and I was wondering about the handling of secrets. Currently my docker container fetches /run/secrets/app_secret_key to get the sensitive information inside the container as an env var. But does that have any benefit compared to k8s secrets handling? Because on the other side I can also do something like this in my manifest.yaml:
env:
  - name: MYSQL_PASSWORD
    valueFrom:
      secretKeyRef:
        name: mysql-password
        key: password
which then directly brings the secret as an env variable into the container ...
The only difference I was able to notice is that if I fetch /run/secrets/app_secret_key inside the container like so (docker-entrypoint.sh):
export APP_SECRET_KEY="$(cat /run/secrets/app_secret_key)"
the env var is not visible when I access the container after deployment; it seems the env var is only available in the "session" where docker-entrypoint.sh was initially triggered (at container/pod startup).
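That behaviour is exactly what the usual entrypoint pattern produces: the variable exists only in the entrypoint's process tree. A sketch of such an entrypoint (the SECRET_FILE override is an assumption added here so the snippet can be exercised outside a container):

```shell
#!/bin/sh
# docker-entrypoint.sh (sketch): read the mounted secret once, export it
# for the main process, then exec so the application becomes PID 1
SECRET_FILE="${SECRET_FILE:-/run/secrets/app_secret_key}"
if [ -f "$SECRET_FILE" ]; then
    export APP_SECRET_KEY="$(cat "$SECRET_FILE")"
fi
exec "$@"   # a later `kubectl exec` shell starts fresh and won't see the var
```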
So my question now is what makes more sense here: simply go with the env: statement shown above, or stay with manually fetching /run/secrets/app_secret_key inside the container ...
Thanks in advance
To be frank, both are different implementations of the same thing and you can choose either one, but I would prefer the Kubernetes approach of mounting the secret over having the container read it at run time, simply because of visibility.
It won't matter if you look at one container, but when you have 30-40+ microservices running across 4-5+ environments, with 100 or even 200 secrets, it does. In that case, if one deployment goes wrong, you can look at the deployment manifests and figure out the entire application. You don't have to search through Dockerfiles to understand what is happening.
Exposing the secret as an env var or as a file is just a flavor of consuming the secret the k8s way.
Some secrets, like a password, are just a one-line string, so it's convenient to use them as env vars. Other secrets, like an ssh private key or a TLS certificate, can span multiple lines; that's why you can mount the secret as a volume instead.
Still, it's recommended to declare your secrets as k8s Secret resources. That way you can fetch the values via kubectl without having to go inside the container. You can also make a template, like a Helm chart, that generates the secret manifests at deployment. With RBAC, you can also control who can read the secret manifests.
As per your comments: yes, any user who can go inside the container will have access to the resources available to the shell user.
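For completeness, the mysql-password Secret referenced above could itself be declared as a manifest like this (a sketch; the value is a placeholder, and stringData spares you the manual base64 step):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysql-password
type: Opaque
stringData:
  password: changeme   # placeholder; stored base64-encoded under data
```

You can then fetch the value without entering the container, e.g. `kubectl get secret mysql-password -o jsonpath='{.data.password}' | base64 -d`.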

Is there any security advantage to mounting secrets as a file instead of passing them as environment variables with Docker on Kubernetes?

With Docker, there is discussion (consensus?) that passing secrets through runtime environment variables is not a good idea because they remain available as a system variable and because they are exposed with docker inspect.
In kubernetes, there is a system for handling secrets, but then you are left to either pass the secrets as env vars (using envFrom) or mount them as a file accessible in the file system.
Are there any reasons that mounting secrets as a file would be preferred to passing them as env vars?
I got all warm and fuzzy thinking things were so much more secure now that I was handling my secrets with k8s. But then I realized that in the end the 'secrets' are treated just the same as if I had passed them with docker run -e when launching the container myself.
Environment variables aren't treated very securely by the OS or applications. Forking a process shares its full environment with the forked process. Logs and traces often include environment variables. And the environment is visible to the entire application, effectively as a global variable.
A file can be read directly into the application and handled by the routine that needs it, as a local variable that is not shared with other methods or forked processes. With swarm mode secrets, these secret files are injected into a tmpfs filesystem on the workers that is never written to disk.
Secrets injected as environment variables into the configuration of the container are also visible to anyone that has access to inspect the containers. Quite often those variables are committed into version control, making them even more visible. By separating the secret into a separate object that is flagged for privacy allows you to more easily manage it differently than open configuration like environment variables.
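The fork-inheritance point is easy to demonstrate with a plain shell, no Docker required:

```shell
# any child process inherits the parent's full environment, so an
# exported secret "leaks" into everything the application spawns
export DB_PASSWORD=hunter2            # placeholder value
sh -c 'echo "child sees: $DB_PASSWORD"'
```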
Yes: when you mount the secret, the actual value is not visible through docker inspect or other Pod management tools. Moreover, you can enforce file-level access on the host's filesystem for those files.
More suggested reading is here: Kubernetes Secrets.
Secrets in Kubernetes are used to store sensitive information like passwords and SSL certificates.
You definitely want to mount SSL certs as files in the container rather than sourcing them from environment variables.
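A sketch of mounting a certificate secret as files in a Pod spec (all names are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: my-app:latest        # placeholder image
      volumeMounts:
        - name: tls
          mountPath: /etc/tls     # cert and key appear as files here
          readOnly: true
  volumes:
    - name: tls
      secret:
        secretName: my-tls-cert   # placeholder Secret holding tls.crt/tls.key
```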

Create environment variables for Kubernetes main container in Kubernetes Init container

I use a Kubernetes init container to provision the application's database. After this is done, I want to provide the DB's credentials to the main container via environment variables.
How can this be achieved?
I don't want to create a Kubernetes Secret inside the Init container, since I don't want to save the credentials there!
I see several ways to achieve what you want:
From my perspective, the best way is to use a Kubernetes Secret. @Nebril has already suggested that in the comments. You can, for example, generate it in the init container and remove it with a PreStop hook. But you don't want to go that way.
You can use a shared volume that both the InitContainer and your main pod use. The InitContainer generates an environment variables file db_cred.env in the volume, which you can mount, for example, at /env. After that, you can load it by modifying the command of your container in the Pod spec, adding source /env/db_cred.env before the main script that starts your application. @user2612030 already gave you that idea.
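That shared-volume idea can be sketched roughly like this (the image names and the provisioning command are placeholders; an in-memory emptyDir keeps the credentials off disk):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  initContainers:
    - name: provision-db
      image: provisioner:latest   # placeholder
      # placeholder command that writes e.g. `export DB_USER=...` lines
      command: ["sh", "-c", "provision-db > /env/db_cred.env"]
      volumeMounts:
        - name: env-share
          mountPath: /env
  containers:
    - name: app
      image: my-app:latest        # placeholder
      # `.` is the POSIX spelling of `source`
      command: ["sh", "-c", ". /env/db_cred.env && exec start-app"]
      volumeMounts:
        - name: env-share
          mountPath: /env
  volumes:
    - name: env-share
      emptyDir:
        medium: Memory            # tmpfs: never written to disk
```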
An alternative is Vault by HashiCorp, which you can use as storage for all your credentials.
You can use some custom solution to write and read directly to Etcd from Kubernetes apps. Here is a library example - k8s-kv.
But anyway, the best and the most proper way to store credentials in Kubernetes is Secrets. It is more secure and easier than almost any other way.

Rancher development environment

I started to use rancher recently for a project.
Within a few days I set up a standard microservice architecture with 4 basic services (hosted on DigitalOcean), trying to make it as production-ready as possible.
Services:
Api Gateway
GraphQL Api
OAuth2 Server
Frontend
it also includes Loadbalancers, Health checks etc...
I'm amazed at how good it is; as such, I heavily used all the features provided by rancher in my configs, for example the DNS conventions <service>.<stack>, sidekicks, rancher-compose etc...
The above services live in their own repositories, and each has its own Dockerfile, docker-compose.yml and rancher-compose.yml for production, so that they can be deployed independently.
Now that I proved myself that rancher will be my new "friend", I need a strategy to run the same application on my local environment and being able to develop my services, just like I would do with Vagrant.
I'm wondering what's the best approach to port an application that runs on rancher to a development environment.
I had some ideas on how to tackle this; however, none of them seemed to let me achieve it without reconfiguring all the services for development.
1 - Rancher on local machine
This is the first approach I took: install a rancher-server and a rancher-client locally and deploy the whole stack just like in production. It seemed the most logical idea to me. However, this wouldn't allow me to change the code of the services and have the changes reflected live in the containers. Maybe using shared volumes might work, but it doesn't look trivial to me; if you have any idea, please let me know. For me, this solution is gone :(
2 - Docker compose
My second attempt was to plainly use docker compose and shared volumes, omitting load balancers and all the features of rancher :( This might work, but I would need to change the configurations of all my services where they point to a rancher-specific DNS domain <service>.<stack> to use just <service> over the bridge network. But this means maintaining 2 different configurations for different environments, which is weird and not fun to do.
3 - Vagrant
As the second solution is already messy (double docker-compose files and double configuration for the services), why not just re-create the whole environment in Vagrant (without rancher features, maybe with Ansible), where one nginx does reverse proxying and resolves requests between services? However, this also requires quite a lot of work and double effort again :(
Is there any other approach that makes rancher suitable for a development environment in a non-painful way? How do companies that rely on rancher or other platform tools solve this issue?
Rancher on the local machine is a common pattern. If you run Rancher on a VM, or locally on a Linux box, when you launch your stacks the subtle change is that you add volumes to the host:
services:
  myapp:
    volumes:
      - /Users/myhome/code:/src
    ...
You could then use the templating features of the compose files and the Rancher CLI. Something like:
services:
  myapp:
    {{- if eq .Values.dev_local "true" }}
    volumes:
      - /Users/blah:/src
    {{- end }}
    ...
Then you could have an answers file that just has:
dev_local="false"

Why should I use Docker Secrets?

For the last few months I've managed passwords for my docker containers by putting them in the ENV variables.
Example:
web:
  environment:
    - PASSWORD=123456
Then I bumped into Docker Secrets. So my questions are:
Which are the reasons why I should use them?
Are they more secure? How?
Can you provide a simple example to show their functionalities?
It depends on a use case.
If you're running one application on your own machine for development that accesses just one secret, you don't need docker secrets.
If you're running dozens of machines in production with a dozen clustered services that all require secrets from each other, you do need secret management.
Apart from security concern, it's just plain easier to have a standardized way of accessing, creating and removing your secrets.
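To give the simple example the question asks for: with swarm mode (compose file format 3.1+, deployed via docker stack deploy), a file-backed secret can be declared and consumed roughly like this (image and names are placeholders):

```yaml
version: "3.1"
services:
  web:
    image: myapp:latest        # placeholder image
    secrets:
      - db_password            # mounted in the container at /run/secrets/db_password
secrets:
  db_password:
    file: ./db_password.txt    # stays out of the image and out of the environment
```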
A basic docker inspect will (among other things) show all your environment variables, so this is not secure at all.
You can also have a look at keywhiz (square.github.io/keywhiz) or Vault (hashicorp.com/blog/vault.html).
From: https://github.com/moby/moby/issues/13490
Environment Variables. Environment variables are discouraged, because they are:
Accessible by any process in the container, thus easily "leaked"
Preserved in intermediate layers of an image, and visible in docker inspect
Shared with any container linked to the container
