I'm currently migrating my Docker deployment to Kubernetes manifests and was wondering about the handling of secrets. Currently my Docker container reads /run/secrets/app_secret_key to get the sensitive information inside the container as an env var. Does that have any benefit compared to the Kubernetes secrets handling? On the other side, I can also do something like this in my manifest.yaml:
env:
  - name: MYSQL_PASSWORD
    valueFrom:
      secretKeyRef:
        name: mysql-password
        key: password
which then directly exposes the secret as an env variable inside the container ...
The only difference I was able to notice is that if I fetch /run/secrets/app_secret_key inside the container like so (docker-entrypoint.sh):
export APP_SECRET_KEY="$(cat /run/secrets/app_secret_key)"
the env var is not visible when I exec into the container after deployment; it seems the env var is only available in the "session" where docker-entrypoint.sh is initially run (at container/pod startup).
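For reference, the pattern I'm using boils down to this sketch (a temp file stands in for /run/secrets/app_secret_key, which only exists inside the container):

```shell
# Stand-in for /run/secrets/app_secret_key (only exists in the real container)
secret_file=$(mktemp)
printf 's3cr3t' > "$secret_file"

# Read the secret file into an exported variable, as docker-entrypoint.sh does.
# Only this shell and the processes it spawns (e.g. via `exec "$@"`) see it;
# a later `docker exec`/`kubectl exec` starts a fresh shell without it.
APP_SECRET_KEY="$(cat "$secret_file")"
export APP_SECRET_KEY

echo "$APP_SECRET_KEY"
rm -f "$secret_file"
```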
So my question now is: what makes more sense here, simply going with the env: statement shown above, or staying with manually reading /run/secrets/app_secret_key inside the container?
Thanks in advance
To be frank, both are different implementations of the same thing; you can choose either one. But I would prefer the Kubernetes approach of mounting the secret over the container reading it at runtime, simply because of visibility.
It won't matter if you look at one container, but it does when you have 30-40+ microservices running across 4-5+ environments with 100 or even 200 secrets. In that case, if one deployment goes wrong, we can look at the deployment manifests and figure out the entire application; we don't have to dig through Dockerfiles to understand what is happening.
Exposing a secret as an env var or as a file is just a flavor of consuming the secret the k8s way.
Some secrets, like a password, are just a one-line string, so it's convenient to use them as env vars. Other secrets, like an SSH private key or a TLS certificate, can span multiple lines; that's why you can mount the secret as a volume instead.
Still, it's recommended to declare your secrets as k8s Secret resources. That way you can fetch the values via kubectl without having to go inside the container. You can also make a template, like a Helm chart, that generates the Secret manifests at deployment. With RBAC, you can also control who can read the Secret manifests.
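As a sketch, here is one Secret consumed both ways (names and values are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysql-password
stringData:
  password: changeme            # placeholder value
---
# Fragment of the pod spec: the same Secret as an env var and as a file
spec:
  containers:
    - name: app
      image: my-app:latest      # placeholder image
      env:
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-password
              key: password
      volumeMounts:
        - name: secrets
          mountPath: /run/secrets
          readOnly: true
  volumes:
    - name: secrets
      secret:
        secretName: mysql-password   # the key appears as /run/secrets/password
```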
As per your comments: yes, any user that can go inside the container will have access to the resources available to the shell user.
I have been building a backend for the last few days that I am launching with docker-compose. I use docker secrets to not have to store passwords - for example for the database - in an environment variable.
Since I want to use AWS ECS to run the docker containers online, and unfortunately docker compose is not supported the way I want, I'm trying to rewrite the whole thing into an ECS-compose file. However, I am still stuck on the secrets. I would like to include them like this:
version: 1
task_definition:
  ...
  services:
    my-service:
      ...
      secrets:
        - value_from: DB_USERNAME
          name: DB_USERNAME
        - value_from: DB_PASSWORD
          name: DB_PASSWORD
By doing this, the secrets get saved inside environment variables, don't they? That is not best practice, or is this case different from other cases?
Can I access these variables without problems inside the container by getting the environment variables?
I hope I have made my question clear enough, if not, please just ask again.
Thanks for the help in advance.
It is not best practice to store sensitive information in environment variables. There is an option in AWS ECS where you can configure environment variables to get their values from AWS Secrets Manager. This way, the environment variables are only resolved inside the container at runtime.
But this still means that the container is going to store the variables as environment variables.
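With the ecs-cli params file, that looks roughly like this; note that value_from then takes a Secrets Manager (or SSM Parameter Store) ARN rather than a plain name (the ARN below is a placeholder):

```yaml
version: 1
task_definition:
  services:
    my-service:
      secrets:
        - value_from: arn:aws:secretsmanager:eu-west-1:123456789012:secret:db-password  # placeholder ARN
          name: DB_PASSWORD
```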
I have faced a similar situation while deploying apps onto EKS. I set up a central Vault server for secrets management within AWS and configured my application to call the Vault endpoint directly to get the secrets. I had to complicate my architecture because I had to meet PCI compliance standards. If you are not keen on using Vault due to its complexity, you can try knox-app (https://knox-app.com/), an online secrets management tool built by Lyft engineers.
And to answer the second part of your question: yep, if you set the env variables, you will be able to access them within the container without any problem.
I deploy my Rails 6 app to a Kubernetes cluster, and I am thinking about how to handle my ENV vars.
Regularly in Rails apps I use dotenv with regular ENV vars on the host. But it seems I can omit them now and use Rails credentials instead. But just because a feature exists doesn't mean it has to be used or must be better, right?
So I am not sure how to solve this env/security puzzle:
Approach ConfigMap
create a ConfigMap on the cluster to provide ENV vars
put all ENV vars in the ConfigMap
omit Rails credentials
Approach Credentials
provide a Kubernetes Secret or ConfigMap with the RAILS_MASTER_KEY
use Rails credentials for all vars I need
(Do some ENV vars, like RAILS_ENV, have to stay in a ConfigMap?)
The downside I worry about is that when I want to change an ENV var (fix typos, scale workers, switch DBs, rotate credentials...), I have to go through a lot of steps: make a git push, build and tag the container, and wait for a deploy.
With a ConfigMap I simply kubectl apply the change.
I like the Rails way of "convention over configuration", so scattering vars across two or three different places doesn't seem very practical to me, but I'm afraid I have to.
Which approach is more secure?
Which one is more "productive" or "developer friendly"?
When to use credentials then?
What's best practice in 2021?
You wouldn't use (only) a ConfigMap, since that's not safe, but you might use a Secret to hold all the env vars in the same way as you describe. Really it's up to which workflow you prefer. No matter how you do it, you have some Kubernetes Secret object somewhere; it's just a question of its scope. So you 100% need a workflow for that side of things. But if you prefer the day-to-day secrets edits to go through the Rails workflow so you don't have to touch the Kubernetes side as much, that's cool.
Personally I think having fewer systems touching secrets data is better even if it means the Rails devs need to learn a new tool.
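A sketch of the Secret-only approach (names and values are placeholders); envFrom injects every key of the Secret as an env var:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: rails-env
stringData:                                 # placeholder values
  RAILS_MASTER_KEY: "0123abcd"
  DATABASE_URL: "postgres://user:pass@db/app"
---
# Fragment of the deployment's pod spec:
containers:
  - name: rails
    image: my-rails-app:latest              # placeholder image
    envFrom:
      - secretRef:
          name: rails-env
```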
I am new to Kubernetes deployments, so I wanted to know: is it possible to pull images from a private repo without using imagePullSecrets in the deployment YAML files, or is it mandatory to create a docker-registry secret and pass it via imagePullSecrets?
I also looked at adding imagePullSecrets to a service account, but that is not the requirement. I would love to know whether, if I set up the creds in variables, Kubernetes can use them to pull those images.
I also wanted to know how this can be achieved; a reference to a document would help.
Thanks in advance.
As long as you're using Docker on your Kubernetes nodes (note that Docker support itself has recently been deprecated in Kubernetes), you can authenticate the Docker engine on the nodes themselves against your private registry.
Essentially, this boils down to running docker login on your machine and then copying the resulting credentials JSON file directly onto your nodes. This, of course, only works if you have direct control over your node configuration.
See the documentation for more information:
If you run Docker on your nodes, you can configure the Docker container runtime to authenticate to a private container registry.
This approach is suitable if you can control node configuration.
Docker stores keys for private registries in the $HOME/.dockercfg or $HOME/.docker/config.json file. If you put the same file in the search paths list below, kubelet uses it as the credential provider when pulling images.
{--root-dir:-/var/lib/kubelet}/config.json
{cwd of kubelet}/config.json
${HOME}/.docker/config.json
/.docker/config.json
{--root-dir:-/var/lib/kubelet}/.dockercfg
{cwd of kubelet}/.dockercfg
${HOME}/.dockercfg
/.dockercfg
Note: You may have to set HOME=/root explicitly in the environment of the kubelet process.
Here are the recommended steps for configuring your nodes to use a private registry. In this example, run these on your desktop/laptop:
Run docker login [server] for each set of credentials you want to use. This updates $HOME/.docker/config.json on your PC.
View $HOME/.docker/config.json in an editor to ensure it contains just the credentials you want to use.
Get a list of your nodes; for example:
if you want the names: nodes=$( kubectl get nodes -o jsonpath='{range .items[*].metadata}{.name} {end}' )
if you want the IP addresses: nodes=$( kubectl get nodes -o jsonpath='{range .items[*].status.addresses[?(@.type=="ExternalIP")]}{.address} {end}' )
Copy your local .docker/config.json to one of the search paths list above.
for example, to test this out: for n in $nodes; do scp ~/.docker/config.json root@"$n":/var/lib/kubelet/config.json; done
Note: For production clusters, use a configuration management tool so that you can apply this setting to all the nodes where you need it.
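For reference, the copied config.json has roughly this shape (the registry host and the base64-encoded user:password are placeholders):

```json
{
  "auths": {
    "registry.example.com": {
      "auth": "dXNlcjpwYXNzd29yZA=="
    }
  }
}
```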
If the Kubernetes cluster is private, you can deploy your own, private (and free) JFrog Container Registry using its Helm Chart in the same cluster.
Once it's running, you should allow anonymous access to the registry to avoid the need for a login in order to pull images.
If you prevent external access, you can still access the internal k8s service created and use it as your "private registry".
Read through the documentation and see the various options.
Another benefit is that JCR (JFrog Container Registry) is also a Helm repository and a generic file repository, so it can be used for more than just Docker images.
I use a Kubernetes init container to provision the application's database. After this is done, I want to provide the DB's credentials to the main container via environment variables.
How can this be achieved?
I don't want to create a Kubernetes Secret inside the Init container, since I don't want to save the credentials there!
I see several ways to achieve what you want:
From my perspective, the best way is to use a Kubernetes Secret. @Nebril has already suggested that in the comments. You could generate it in an init container and remove it with a PreStop hook, for example. But you don't want to go that way.
You can use a shared volume mounted by both the init container and your main container. The init container generates an environment file, db_cred.env, in the volume, which you can mount at /env, for example. You can then load it by modifying your container's command in the pod spec to run source /env/db_cred.env before the main script that starts your application. @user2612030 already gave you that idea.
Another alternative is Vault by HashiCorp; you can use it as storage for all your credentials.
You can use some custom solution to write and read directly to Etcd from Kubernetes apps. Here is a library example - k8s-kv.
But anyway, the best and the most proper way to store credentials in Kubernetes is Secrets. It is more secure and easier than almost any other way.
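The shared-volume option could look roughly like this; the image names and the generate-password helper are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  initContainers:
    - name: provision-db
      image: db-provisioner:latest       # placeholder image
      # generate-password is a hypothetical helper; it writes the env file
      # into the shared volume for the main container to source.
      command:
        - sh
        - -c
        - echo "export DB_PASSWORD=$(generate-password)" > /env/db_cred.env
      volumeMounts:
        - name: env
          mountPath: /env
  containers:
    - name: app
      image: my-app:latest               # placeholder image
      command:
        - sh
        - -c
        - . /env/db_cred.env && exec /app/start.sh
      volumeMounts:
        - name: env
          mountPath: /env
  volumes:
    - name: env
      emptyDir:
        medium: Memory                   # keep the credentials off disk
```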
For the last few months I've managed passwords for my docker containers by putting them in the ENV variables.
Example:
web:
  environment:
    - PASSWORD=123456
Then I bumped into Docker Secrets. So my questions are:
What are the reasons why I should use them?
Are they more secure? How?
Can you provide a simple example to show their functionalities?
It depends on a use case.
If you're running one application on your own machine for development that accesses just one secret, you don't need docker secrets.
If you're running dozens of machines in production, with a dozen clustered services all requiring secrets for each other, you do need secret management.
Apart from the security concern, it's just plain easier to have a standardized way of accessing, creating, and removing your secrets.
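For comparison, a minimal Compose sketch (v3 format, swarm mode, placeholder image) using a Docker secret instead of the environment variable; the container then sees the value at /run/secrets/db_password:

```yaml
version: "3.7"
services:
  web:
    image: my-web-app:latest   # placeholder image
    secrets:
      - db_password            # mounted at /run/secrets/db_password

secrets:
  db_password:
    external: true             # created beforehand with: docker secret create db_password -
```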
A basic docker inspect (among other things) will show all your environment variables, so this is not secure at all.
You can also have a look at Keywhiz (square.github.io/keywhiz) or Vault (hashicorp.com/blog/vault.html).
From: https://github.com/moby/moby/issues/13490
Environment Variables. Environment variables are discouraged, because they are:
Accessible by any process in the container, thus easily "leaked"
Preserved in intermediate layers of an image, and visible in docker inspect
Shared with any container linked to the container