I am deploying a Spring Boot application to Kubernetes. My Dockerfile is as follows:
FROM alpine-jdk1.8:latest
RUN mkdir -p /ext/app
COPY target/app-service.war /ext/app
ENV JAVA_OPTS="" \
APPLICATION_ARGS=""
CMD java ${JAVA_OPTS} -jar /ext/app/app-service.war ${APPLICATION_ARGS}
I have many config files under the conf directory, but some of them contain secrets.
So I moved some of them to Secrets and some to ConfigMaps in Kubernetes, creating more than one ConfigMap and Secret in order to group related configs and secrets.
Since there are many ConfigMaps and Secrets, I had to create many volumes and volume mounts, and used the Spring config location to add all these mount paths as comma-separated values:
- name: APPLICATION_ARGS
value: --spring.config.location=file:/conf,.....
Is there any other better approach?
That is a good approach for Secrets, but less so for ConfigMaps.
If your WAR application can rely on environment variables, a possible approach is to convert that ConfigMap into an rc file (a file with properties), which can then be read once by the application and used.
You can see an example of such an approach in "The Kubernetes Wars" from knu:t hæugen:
How to deal with configuration?
Kubernetes likes app config in environment variables, not config files.
This is easy in our node apps using convict, pretty easy in our ruby apps and ranging from relatively easy to bloody hard in our java apps.
But how to get config into the replication controllers? We opted for using ConfigMaps (a Kubernetes object) to store the config, reference the variables from the rc files, and maintain it in git-controlled files.
So when we want to change the app config, we update the config files and run a script which updates the ConfigMap and reloads all the pods for the app.
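For illustration, a minimal sketch of that pattern (the ConfigMap name app-config and its keys are made up for this example); the container spec references the ConfigMap values as environment variables:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config             # hypothetical ConfigMap holding non-secret settings
data:
  DB_HOST: "db.internal"
  LOG_LEVEL: "INFO"
---
# Container spec fragment (inside a replication controller or Deployment template)
containers:
  - name: app-service
    image: app-service:latest
    env:
      - name: DB_HOST
        valueFrom:
          configMapKeyRef:
            name: app-config
            key: DB_HOST
      - name: LOG_LEVEL
        valueFrom:
          configMapKeyRef:
            name: app-config
            key: LOG_LEVEL

Updating the ConfigMap and recreating the pods then picks up the new values, which is essentially the script-driven reload described above.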
I am working on a migration task from an on-premise system to Cloud Composer. The thing is that Cloud Composer is a fully managed version of Airflow which restricts access to the underlying file system. On my on-premise system I have a lot of environment variables for paths, saved like /opt/application/folder_1/subfolder_2/....
Looking at the Cloud Composer documentation, it says that you can access and save your data in the data folder, which is mapped to /home/airflow/gcs/data/. This implies that if I go ahead with that mapping, I will have to change my environment variable values to something like /home/airflow/gcs/data/application/folder_1/folder_2, which could be a bit painful, knowing that I'm running many bash scripts that rely on those values.
Is there any approach to solve such a problem?
You can specify your env variables during Composer creation/update process [1]. These vars are then stored in the YAML files that create the GKE cluster where Composer is hosted. If you SSH into a VM running the Composer GKE cluster, then enter one of the worker containers and run env, you can see the env variables you specified.
[1] https://cloud.google.com/composer/docs/how-to/managing/environment-variables
With Docker, there is discussion (consensus?) that passing secrets through runtime environment variables is not a good idea because they remain available as a system variable and because they are exposed with docker inspect.
In Kubernetes, there is a system for handling secrets, but then you are left to either pass the secrets as env vars (using envFrom) or mount them as a file accessible in the file system.
Are there any reasons that mounting secrets as a file would be preferred to passing them as env vars?
I got all warm and fuzzy thinking things were so much more secure now that I was handling my secrets with k8s. But then I realized that in the end the 'secrets' are treated just the same as if I had passed them with docker run -e when launching the container myself.
Environment variables aren't treated very securely by the OS or applications. Forking a process shares its full environment with the forked process. Logs and traces often include environment variables. And the environment is visible to the entire application, effectively as a global variable.
A file can be read directly into the application and handled by the routine that needs it as a local variable that is not shared with other methods or forked processes. With swarm mode secrets, these secret files are injected into a tmpfs filesystem on the workers that is never written to disk.
Secrets injected as environment variables into the configuration of the container are also visible to anyone that has access to inspect the containers. Quite often those variables are committed into version control, making them even more visible. Separating the secret into its own object that is flagged for privacy allows you to more easily manage it differently than open configuration like environment variables.
Yes, since when you mount the secret, the actual value is not visible through docker inspect or other pod management tools. Moreover, you can enforce file-level access at the filesystem level of the host for those files.
More suggested reading is here: Kubernetes Secrets.
Secrets in Kubernetes are used to store sensitive information like passwords and SSL certificates.
You definitely want to mount SSL certs as files in the container rather than sourcing them from environment variables.
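As a rough sketch of that file-based approach (the Secret name tls-secret and the mount path /etc/tls are assumptions), the certificate and key end up as read-only files on a tmpfs mount instead of in the environment:

apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: app:latest
      volumeMounts:
        - name: tls
          mountPath: /etc/tls          # the app reads tls.crt / tls.key from here
          readOnly: true
  volumes:
    - name: tls
      secret:
        secretName: tls-secret         # hypothetical Secret containing tls.crt and tls.key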
I'm ramping up on Docker and k8s, and am running into an issue with a 3rd-party application I'm containerizing where the application is configured via flat text files, with no environment-variable overrides.
What is the best way to dynamically configure this app? I'm immediately leaning towards a sidecar container that accepts environment variables, writes the text config file to a shared volume in the pod, and then has the application container read the config file from there. Is this correct?
What is the best practice here?
Create a ConfigMap with this configuration file. Then mount the ConfigMap into the pod; this will create the configuration file in the mounted directory, and you can use that configuration file as usual.
Here are related examples (see the sketch after this list):
Create ConfigMap from file.
Mount ConfigMap as volume.
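A minimal sketch of those two steps, assuming a made-up config file app.properties and mount path /etc/app: the ConfigMap could be created with kubectl create configmap app-config --from-file=app.properties, and then mounted so the file reappears where the application expects it:

apiVersion: v1
kind: Pod
metadata:
  name: legacy-app
spec:
  containers:
    - name: legacy-app
      image: legacy-app:latest        # hypothetical image of the 3rd-party application
      volumeMounts:
        - name: config
          mountPath: /etc/app         # app.properties appears as /etc/app/app.properties
          readOnly: true
  volumes:
    - name: config
      configMap:
        name: app-config              # holds the flat config file as a key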
I am using Docker CE 17.09-1. I am leveraging Docker Swarm and have deployed a Stack with multiple services.
I've decided to use Docker Secrets for various credentials. One of the services I am running requires that I enter the database username and password in a configuration file. I have created two secrets, one for each required credential, and I see the read-only files under /run/secrets/ in the container. How do I insert the contents of those files into my configuration file? My config file is a .ini file and contains a number of values.
Thank you in advance for any suggestions.
What I considered before is to modify my ENTRYPOINT or CMD script so that it modifies or generates my local config file (from a template), filled in with the secrets read at runtime from /run/secrets.
Then the same script would launch the service in the foreground, once the configuration files are properly generated.
Depending on the service, you may be able to set the path to the secrets file (within /run/secrets) in an environment variable, or else either point to the secrets file from the .ini file or mount the secrets file where the image expects the secret.
For an example of the former, look at the mysql image on Docker Hub, as indicated at https://hub.docker.com/_/mysql/:
As an alternative to passing sensitive information via environment variables, _FILE may be appended to the previously listed environment variables, causing the initialization script to load the values for those variables from files present in the container. In particular, this can be used to load passwords from Docker secrets stored in /run/secrets/<secret_name> files.
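To make that concrete, here is a rough stack-file sketch of the _FILE pattern (the secret name db_root_password is an assumption and must already exist, e.g. created with docker secret create):

version: "3.1"
services:
  db:
    image: mysql:5.7
    environment:
      # the official entrypoint reads the password from this file instead of an env var
      MYSQL_ROOT_PASSWORD_FILE: /run/secrets/db_root_password
    secrets:
      - db_root_password
secrets:
  db_root_password:
    external: true                    # created beforehand with docker secret create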
For an example of the latter, see the rabbitmq image on Docker Hub. As noted at https://hub.docker.com/_/rabbitmq/:
If you wish to provide the cookie via a file (such as with Docker Secrets), it needs to be mounted at /var/lib/rabbitmq/.erlang.cookie.
As per the Kubernetes docs (http://kubernetes.io/docs/user-guide/configmap/):
Kubernetes has a ConfigMap API resource which holds key-value pairs of configuration data that can be consumed in pods.
This looks like a very useful feature, as many containers require configuration via some combination of config files and environment variables.
Is there a similar feature in Docker 1.12 Swarm?
Sadly, Docker (even in 1.12 with swarm mode) does not support the variety of use cases that you could solve with ConfigMaps (and there are no Secrets either).
The only things supported are external env files in both Docker (https://docs.docker.com/engine/reference/commandline/run/#/set-environment-variables-e-env-env-file) and Compose (https://docs.docker.com/compose/compose-file/#/env-file).
These are good for keeping configuration out of the image, but they rely on environment variables, so you cannot just externalize your whole config file (e.g. for use in nginx or Prometheus). Also, you cannot update the env file separately from the deployment/service, which is possible with K8s.
Workaround: you could build your configuration files in a way that substitutes in those variables from the env file.
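A minimal sketch of that workaround, assuming a made-up app.env file kept next to the compose file; its KEY=value lines become environment variables in the container, which an entrypoint script could then substitute into a config template:

version: "2"
services:
  web:
    image: nginx:alpine               # placeholder service for illustration
    env_file:
      - ./app.env                     # hypothetical file with KEY=value lines, kept out of the image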
I'd guess sooner or later Docker will add that functionality. Currently, Swarm is still in its early days, so for advanced use cases you'd need to either wait (mid to long term all platforms will have similar features), build your own hack/workaround, or go with K8s, which has that stuff integrated.
Sidenote: for secrets storage I would recommend HashiCorp's Vault. However, for configuration it might not be the right tool.