How can I access or read a Windows environment variable in Kubernetes? I achieved the same thing in a Docker Compose file.
How can I do the same in Kubernetes, since I am unable to read the Windows environment variables?
Nothing in the standard Kubernetes ecosystem can be configured using host environment variables.
If you're using the core kubectl tool, the YAML files you'd feed into kubectl apply are self-contained manifests; they cannot depend on host files or environment variables. This can be wrapped in a second tool, Kustomize, which can apply some modifications, but that explicitly does not support host environment variables. Helm lets you build Kubernetes manifests using a templating language, but that also specifically does not use host environment variables.
You'd need to somehow inject the environment variable value into one of these deployment systems. With all three of these tools, you could include those in a file (a Kubernetes YAML manifest, a Kustomize overlay, a Helm values file) that could be checked into source control; you may also be able to retrieve these values from some sort of external storage. But just relaying host environment variables into a container isn't an option in Kubernetes.
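If you do decide to relay a host variable at deploy time, one common workaround outside of these tools is to render a manifest template with a wrapper script before applying it. A minimal sketch, assuming a hypothetical MY_SETTING variable and the envsubst utility (or an equivalent substitution step on Windows); the names and image are placeholders:

```
# deployment.template.yaml - ${MY_SETTING} is substituted before kubectl ever sees the file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: registry.example.com/my-app:1.0   # placeholder image
          env:
            - name: MY_SETTING
              value: "${MY_SETTING}"
# render and apply from a shell where MY_SETTING is set:
#   envsubst < deployment.template.yaml | kubectl apply -f -
```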
Trying to set up our .NET Framework application in Windows AKS, and we need an elegant way to pass in ApplicationSettings.config and ConnectionStrings.config per environment. We've tried lifecycle hooks and init containers, but no luck so far.
Any recommendations?
Thanks
When delivering applications in a containerized format such as a Docker image on a k8s cluster, a common pattern is for configuration to be read from environment variables.
When drawing configuration from environment variables at runtime, you can set up your pods so that data stored in ConfigMaps is injected into environment variables.
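For example, a minimal sketch of that injection (the ConfigMap name app-config, the key LOG_LEVEL, and the image are placeholders, not anything from the question):

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "debug"
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
    - name: app
      image: registry.example.com/my-app:1.0   # placeholder image
      # inject a single key as an environment variable
      env:
        - name: LOG_LEVEL
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: LOG_LEVEL
      # or pull in every key of the ConfigMap at once
      envFrom:
        - configMapRef:
            name: app-config
```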
If you want to avoid a code change, here is an excellent article on generating your config files based on environment variables using a startup script.
I am deploying some apps in Kubernetes, and my apps use a config management tool called Apollo. This tool needs the apps' running environment (develop/test/production/...) to be defined in one of these ways: 1. Java args, 2. application.properties, 3. /etc/settings/data.properties. Now that I am running the apps in Kubernetes, the question is: how do I define the running environment?
1. If I choose Java args, I have to keep scripts like start-develop-env.sh, start-test-env.sh, start-pro-env.sh.
2. If I choose application.properties, I have to keep application-develop.properties, application-test.properties, and so on.
3. If I choose /etc/settings/data.properties, it is impossible to log into every Docker container to define the config file for each environment.
What is the best way to solve this? If I write it in the Kubernetes Deployment YAML, my apps cannot read it (and defining the variable in one place for a whole collection of pods would be better).
You can implement #2 and #3 using a ConfigMap. You can define the properties file as a ConfigMap and mount that into the containers, either as application.properties or data.properties. The relevant section in the k8s docs is:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/
Using Java args might be more involved. You can define a script as you said, and run that script to set up the environment for the container. You can store that script as a ConfigMap as well. Or, you can define individual environment variables in your Deployment YAML, define a ConfigMap containing properties, and populate those environment variables from the ConfigMap. The section linked above also describes how to set up environment variables from a ConfigMap.
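A rough sketch combining a mounted properties file (#2/#3) with a single environment variable, where the ConfigMap name app-properties, the APP_ENV key, the mount path, and the image are all placeholders:

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-properties
data:
  APP_ENV: develop                    # simple key, used for an env var below
  application.properties: |           # whole file, mounted into the container
    env=DEV
    # ...whatever other keys the app expects
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      volumes:
        - name: props
          configMap:
            name: app-properties
            items:
              - key: application.properties
                path: application.properties
      containers:
        - name: app
          image: registry.example.com/my-app:1.0   # placeholder image
          volumeMounts:
            - name: props
              mountPath: /etc/settings             # app reads /etc/settings/application.properties
          env:
            - name: APP_ENV
              valueFrom:
                configMapKeyRef:
                  name: app-properties
                  key: APP_ENV
```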
With Docker, there is discussion (consensus?) that passing secrets through runtime environment variables is not a good idea because they remain available as a system variable and because they are exposed with docker inspect.
In kubernetes, there is a system for handling secrets, but then you are left to either pass the secrets as env vars (using envFrom) or mount them as a file accessible in the file system.
Are there any reasons that mounting secrets as a file would be preferred to passing them as env vars?
I got all warm and fuzzy thinking things were so much more secure now that I was handling my secrets with k8s. But then I realized that in the end the 'secrets' are treated just the same as if I had passed them with docker run -e when launching the container myself.
Environment variables aren't treated very securely by the OS or applications. Forking a process shares its full environment with the forked process. Logs and traces often include environment variables. And the environment is visible to the entire application as effectively a global variable.
A file can be read directly into the application by the routine that needs it and kept as a local variable that is not shared with other methods or forked processes. With swarm mode secrets, these secret files are injected into a tmpfs filesystem on the workers that is never written to disk.
Secrets injected as environment variables into the configuration of the container are also visible to anyone that has access to inspect the containers. Quite often those variables are committed into version control, making them even more visible. Separating the secret into its own object that is flagged for privacy allows you to more easily manage it differently than open configuration like environment variables.
Yes, since when you mount the secret the actual value is not visible through docker inspect or other Pod management tools. Moreover, you can enforce file-level access at the host's filesystem level for those files.
More suggested reading is here: Kubernetes Secrets
Secrets in Kubernetes are used to store sensitive information like passwords and SSL certificates.
You definitely want to mount SSL certs as files in the container rather than sourcing them from environment variables.
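For example, a minimal sketch of a Secret mounted as files rather than exposed as env vars (the Secret name db-credentials, the mount path, and the image are placeholders):

```
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  password: "example-only"            # illustrative value
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  volumes:
    - name: creds
      secret:
        secretName: db-credentials
  containers:
    - name: app
      image: registry.example.com/my-app:1.0   # placeholder image
      volumeMounts:
        - name: creds
          mountPath: /etc/secrets
          readOnly: true
      # the application reads /etc/secrets/password itself; nothing shows up
      # in the container's environment or in `kubectl describe pod`
```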
I would like to be able to test my Docker application locally before sending it to the cluster. I want to use minikube for this. Meanwhile, instead of having multiple Kubernetes config files which would define env variables for the cloud environment and for my local machine, I would like to override some of the env variables when running locally. I can see that you can do something like that with Docker Compose:
docker-compose -f docker-compose.yml -f docker-compose.e2e.yml up
The second file would only have the overriding values. Yes, there are two files but I find it clean.
Is there a way to do something similar with Kubernetes/minikube? Or even something better?
I think you are asking how to pass different environment values into your Pods depending upon which environment they are deployed to. One pattern to achieve this is to deploy with helm. Then you use templated versions of your kubernetes descriptors for deployment. You also have a values.yaml file that contains values to be injected into the descriptors. You can switch and overlay values.yaml files at the time of install to control which values are injected for a given installation.
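A minimal sketch of that pattern, where the value names, file names, and chart layout are all placeholders rather than anything from the question:

```
# values.yaml - defaults, e.g. for the cloud cluster
api:
  baseUrl: https://api.example.com
  logLevel: info
---
# values-local.yaml - only the overrides for minikube
api:
  baseUrl: http://host.minikube.internal:8080
  logLevel: debug
---
# templates/deployment.yaml (excerpt) - the chart injects whichever values win:
#   env:
#     - name: API_BASE_URL
#       value: {{ .Values.api.baseUrl | quote }}
#     - name: LOG_LEVEL
#       value: {{ .Values.api.logLevel | quote }}
#
# install locally with the overlay applied last, so its values take precedence:
#   helm install myapp ./chart -f values.yaml -f values-local.yaml
```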
If you are asking how to switch whether a kubectl command runs against local or cloud without having to keep switching your kubeconfig file, then you can add both contexts to your kubeconfig and use kubectl config use-context to switch between them, as @Ijaz Khan suggests.
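For reference, a single kubeconfig can hold both contexts side by side; a trimmed sketch, with the cloud cluster and user names being placeholders:

```
# ~/.kube/config (excerpt; clusters and users sections omitted)
apiVersion: v1
kind: Config
contexts:
  - name: minikube
    context:
      cluster: minikube
      user: minikube
  - name: cloud
    context:
      cluster: my-cloud-cluster
      user: my-cloud-user
current-context: minikube
# switch targets without editing any files:
#   kubectl config use-context cloud
#   kubectl config use-context minikube
```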
As per kubernetes docs: http://kubernetes.io/docs/user-guide/configmap/
Kubernetes has a ConfigMap API resource which holds key-value pairs of configuration data that can be consumed in pods.
This looks like a very useful feature, as many containers require configuration via some combination of config files and environment variables.
Is there a similar feature in Docker 1.12 swarm mode?
Sadly, Docker (even in 1.12 with swarm mode) does not support the variety of use cases that you could solve with ConfigMaps (also no Secrets).
The only things supported are external env files in both Docker (https://docs.docker.com/engine/reference/commandline/run/#/set-environment-variables-e-env-env-file) and Compose (https://docs.docker.com/compose/compose-file/#/env-file).
These are good to keep configuration out of the image, but they rely on environment variables, so you cannot just externalize your whole config file (e.g. for use in nginx or Prometheus). Also you cannot update the env file separately from the deployment/service, which is possible with K8s.
Workaround: You could build your configuration files in a way that fills them in from the variables in the env file, as sketched below.
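A minimal Compose-era sketch of that workaround, where the env file name, the variable, and the entrypoint behaviour are assumptions for illustration:

```
# docker-compose.yml - keeps the values out of the image
version: "2"
services:
  web:
    image: nginx:alpine
    env_file:
      - ./app.env                 # e.g. contains BACKEND_HOST=api.example.com
    # an entrypoint script baked into the image could render nginx.conf
    # from these variables (for example with envsubst) before starting nginx
```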
I'd guess that sooner or later Docker will add that functionality. Currently, Swarm is still in its early days, so for advanced use cases you'd need to either wait (mid to long term all platforms will have similar features), build your own hack/workaround, or go with K8s, which has that stuff integrated.
Side note: For secrets storage I would recommend HashiCorp's Vault. However, for configuration it might not be the right tool.