Create environment variables for the Kubernetes main container in a Kubernetes init container

I use a Kubernetes init container to provision the application's database. After this is done, I want to provide the DB's credentials to the main container via environment variables.
How can this be achieved?
I don't want to create a Kubernetes Secret inside the Init container, since I don't want to save the credentials there!

I see several ways to achieve what you want:
From my perspective, the best way is to use a Kubernetes Secret. @Nebril has already suggested that in the comments. You could, for example, generate it from the init container and remove it with a PreStop hook. But you don't want to go that way.
You can use a shared volume between the init container and your main container. The init container generates an environment file, db_cred.env, in the volume, which you can mount, for example, at /env. You then modify the command of your container in the Pod spec so it runs source /env/db_cred.env before the main script that starts your application. @user2612030 already gave you that idea.
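A minimal sketch of that shared-volume approach could look like the following; the image names, the provisioning script and the /env/db_cred.env path are placeholders I'm assuming, not anything from your setup:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  volumes:
    - name: env-volume
      emptyDir:
        medium: Memory                    # tmpfs, so the credentials never touch disk
  initContainers:
    - name: provision-db
      image: my-db-provisioner:latest     # hypothetical provisioning image
      command: ["sh", "-c"]
      args:
        - |
          # hypothetical: provision the database and capture the generated credentials
          DB_USER=app
          DB_PASSWORD="$(./provision-db.sh)"   # assume the script prints the generated password
          printf 'export DB_USER=%s\nexport DB_PASSWORD=%s\n' "$DB_USER" "$DB_PASSWORD" > /env/db_cred.env
      volumeMounts:
        - name: env-volume
          mountPath: /env
  containers:
    - name: app
      image: my-app:latest                # hypothetical application image
      # '.' is the POSIX form of 'source'; load the credentials before starting the app
      command: ["sh", "-c", ". /env/db_cred.env && exec ./start-app.sh"]
      volumeMounts:
        - name: env-volume
          mountPath: /env
          readOnly: true
```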
Another alternative is Vault by HashiCorp; you can use it as a central store for all your credentials.
You could also use a custom solution that reads and writes directly to etcd from your Kubernetes apps. Here is an example library: k8s-kv.
In any case, the best and most appropriate way to store credentials in Kubernetes is Secrets. It is more secure and simpler than almost any other approach.

Related

What is the best approach to having the certs inside a container in kubernetes?

I see that normally a new image is created (that is, a Dockerfile), but is it good practice to pass the cert through environment variables, with a script that picks it up and places it inside the container?
Another approach I see is to mount the certificate on a volume.
What would be the best approach to have a single image for all environments?
Just like what happens with software artifacts, I mean.
Creating a new image for each environment or certificate renewal seems tedious to me, but if it has to be that way...
Definitely do not bake certificates into the image.
Because you tagged your question with azure-aks, I recommend using the Secrets Store CSI Driver to mount your certificates from Key Vault.
See the plugin's project page on GitHub.
See also this doc: Getting Certificates and Keys using Azure Key Vault Provider.
This doc is more thorough and worth going through even if you're not using the NGINX ingress controller: Enable NGINX Ingress Controller with TLS.
And so for different environments, you'd pull in different certificates from one or more key vaults and mount them to your cluster. Please also remember to use different credentials/identities to grab those certs.
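For illustration only, a setup along these lines might look as follows; the key vault name, tenant ID, object names and identity settings are placeholders, and the exact apiVersion and parameters depend on the driver and provider version you install:

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: azure-tls-cert
spec:
  provider: azure
  parameters:
    usePodIdentity: "false"
    useVMManagedIdentity: "true"      # or configure a client ID / workload identity instead
    keyvaultName: "my-keyvault"       # placeholder
    tenantId: "<tenant-id>"           # placeholder
    objects: |
      array:
        - |
          objectName: my-tls-cert     # certificate stored in Key Vault
          objectType: secret          # 'secret' returns the cert together with its private key
---
apiVersion: v1
kind: Pod
metadata:
  name: app-with-kv-cert
spec:
  containers:
    - name: app
      image: my-app:latest            # placeholder
      volumeMounts:
        - name: tls
          mountPath: /mnt/secrets-store
          readOnly: true
  volumes:
    - name: tls
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: azure-tls-cert
```

Per environment you would then point the SecretProviderClass (and its identity) at a different key vault while keeping the application image unchanged.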
The cloud native approach would be to not have your application handle the certificates but to externalize that completely to a different mechanism.
You could have a look at service meshes. They mostly work with the sidecar pattern, where a sidecar container runs in the pod and handles en-/decryption of your traffic. The iptables rules inside the pod are configured so that all traffic must go through the sidecar.
Depending on your requirements, you can check out Istio and Linkerd as service mesh solutions.
If a service mesh is not an option, I would recommend storing your certs as a Secret and mounting it as a volume into your container.
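A minimal sketch of that plain-Secret approach, with names, paths and the certificate data as placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-tls
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded cert>   # e.g. created with: kubectl create secret tls app-tls --cert=... --key=...
  tls.key: <base64-encoded key>
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: my-app:latest          # placeholder
      volumeMounts:
        - name: tls
          mountPath: /etc/tls       # the app reads its cert and key from here
          readOnly: true
  volumes:
    - name: tls
      secret:
        secretName: app-tls
```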

Is it possible to start a docker container with some env variables from the docker API

I'm using the Docker API to manage my containers from a front-end application, and I would like to know whether it is possible to use /containers/{id}/start with some environment variables; I can't find it in the official docs.
Thanks!
You can only specify environment variables when creating a container. Starting it just starts the main process in the container that already exists with its existing settings; the “start” API call has almost no options beyond the container ID. If you’ve stopped a container and want to restart it with different options, you need to delete and recreate it.
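Roughly, the environment goes into the body of the create call, and the start call only references the container by ID. An illustrative request body for POST /containers/create (a JSON body, shown here in YAML form for readability; the image and variables are placeholders):

```yaml
# POST /containers/create?name=web
Image: "nginx:latest"            # illustrative image
Env:                             # the environment is fixed at create time
  - "DB_HOST=db.example.com"
  - "DB_USER=app"
# POST /containers/<id>/start takes no fields for adding or changing Env;
# to change the environment, remove the container and create a new one.
```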

Is there any security advantage to mounting secrets as a file instead of passing them as environment variables with Docker on Kubernetes?

With Docker, there is discussion (consensus?) that passing secrets through runtime environment variables is not a good idea because they remain available as a system variable and because they are exposed with docker inspect.
In Kubernetes, there is a system for handling secrets, but then you are left to either pass the secrets as env vars (using envFrom) or mount them as files accessible in the filesystem.
Are there any reasons that mounting secrets as a file would be preferred to passing them as env vars?
I got all warm and fuzzy thinking things were so much more secure now that I was handling my secrets with k8s. But then I realized that in the end the 'secrets' are treated just the same as if I had passed them with docker run -e when launching the container myself.
Environment variables aren't treated very securely by the OS or applications. Forking a process shares its full environment with the forked process. Logs and traces often include environment variables. And the environment is visible to the entire application, effectively as a global variable.
A file can be read directly by the application, handled by the routine that needs it, and kept as a local variable that is not shared with other methods or forked processes. With swarm mode secrets, these secret files are injected into a tmpfs filesystem on the workers that is never written to disk.
Secrets injected as environment variables into the configuration of the container are also visible to anyone who has access to inspect the containers. Quite often those variables are committed into version control, making them even more visible. Separating the secret into a separate object that is flagged for privacy allows you to manage it differently from open configuration like environment variables.
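To make the contrast concrete, here is a sketch of the two ways a Kubernetes Secret can reach a container (secret, key and image names are placeholders): the env form ends up in the process environment and in the pod/container configuration, while the volume form only appears as a tmpfs-backed file inside the container.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: my-app:latest                 # placeholder
      env:
        - name: DB_PASSWORD                # visible in the pod spec and inherited by child processes
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: password
      volumeMounts:
        - name: db-credentials
          mountPath: /var/run/secrets/db   # read by the app as a file instead
          readOnly: true
  volumes:
    - name: db-credentials
      secret:
        secretName: db-credentials
```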
Yes, since when you mount the secret as a file, the actual value is not visible through docker inspect or other pod management tools. Moreover, you can enforce file-level access controls at the host's filesystem level for those files.
More suggested reading here: Kubernetes Secrets.
Secrets in Kubernetes are used to store sensitive information like passwords and SSL certificates.
You definitely want to mount SSL certs as files in the container rather than sourcing them from environment variables.

Where are you supposed to store your docker config files?

I'm new to docker so I have a very simple question: Where do you put your config files?
Say you want to install MongoDB. You install it, but then you need to create/edit a config file. I don't think these files fit on GitHub since they're used for deployment, though it's not a bad place to store them.
I was just wondering whether Docker has any support for storing such config files so you can add them as part of running an image.
Do you have to use swarms?
Typically you'll store the configuration files on the Docker host and then use volumes to bind-mount them into the container. This allows you to manage the configuration file separately from the running containers. When you make a change to the configuration, you can just restart the container.
You can then use a configuration management tool like Salt, Puppet, or Chef to manage copying/storing the configuration file onto the Docker host. Things like passwords can be managed by the secrets capabilities of the tool. When set up this way, changing a configuration file just means you need to restart your container and not build a new image.
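For example, with Compose the config file lives on the host and is bind-mounted read-only into the container; the paths, file names and image tag below are placeholders:

```yaml
# docker-compose.yml (illustrative)
services:
  mongo:
    image: mongo:7
    volumes:
      - ./config/mongod.conf:/etc/mongod.conf:ro   # config file managed on the host
    command: ["mongod", "--config", "/etc/mongod.conf"]
```

After editing ./config/mongod.conf on the host, restarting the container is enough; no image rebuild is needed.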
Yes, in most cases you definitely want to keep your Dockerfiles in version control. If your org (or you personally) use GitHub for this, that's fine, but stick them wherever your other repos are. One of the main ideas in DevOps is to treat infrastructure as code. In fact, one of the main benefits of something like a Dockerfile (or a chef cookbook, or a puppet file, etc) is that it is "used for deployment" but can also be version-controlled, meaningfully diffed, etc.

Amazon ECS container for configuration files

At the moment I'm trying to figure out a good setup for my application in Amazon ECS.
My application needs a config file. I want to have a container that holds my config file, so that when I want to change something I don't need to redeploy my application.
I can't find any best-practice method for this. What I found out is that ECS tasks just do a docker run, and you can't do a docker create.
Does anyone have an idea how I can manage my config files for my applications?
Most likely using Docker for this is overkill. How complex is the data? If it's simple key-value pairs I would use DynamoDB and get rid of the file completely. Another option would be using EFS for the file, or attaching/detaching an EBS volume.
You should not do that; it makes the setup fragile, and you're not guaranteed to be able to access the config from all containers across a cluster (or you end up having it on every instance, which wastes resources). Why not package it with the container as-is, or package as much as possible and provide environment variables to fill the gap? If you really want to go this route, I highly suggest something like S3.
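If you take the environment-variable route, the per-environment values can be set in the task definition; a fragment might look like the following (normally JSON, shown here in YAML form; the names and ARNs are placeholders), with sensitive values pulled at task start from SSM Parameter Store or Secrets Manager via the secrets field instead of being stored in the file:

```yaml
containerDefinitions:
  - name: app
    image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest   # placeholder
    environment:                  # plain, non-sensitive configuration
      - name: LOG_LEVEL
        value: info
    secrets:                      # sensitive values resolved by ECS at task start
      - name: DB_PASSWORD
        valueFrom: arn:aws:ssm:us-east-1:123456789012:parameter/my-app/db-password   # placeholder ARN
```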
