Windows AKS - Copy config files during container startup - azure-aks

Trying to set up our .NET Framework application in Windows AKS and need an elegant way to pass in ApplicationSettings.config & ConnectionStrings.config per environment. Trying to use lifecycle hooks & init containers, but no luck so far.
Any recommendations?
Thanks

When delivering applications in a containerized format, such as a Docker image on a k8s cluster, a common pattern is to read configuration from environment variables.
When drawing configuration from environment variables at runtime, you can set up your pods so that data stored in ConfigMaps is injected into environment variables.
If you want to avoid a code change, here is an excellent article on generating your config files based on environment variables using a startup script.
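As a minimal sketch of the ConfigMap-to-environment-variable approach (all names and values below are hypothetical): every key in the ConfigMap becomes an environment variable in the container, and a startup script inside the image could then render ApplicationSettings.config and ConnectionStrings.config from those variables.

```yaml
# Hypothetical ConfigMap holding the per-environment settings.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  APP_ENVIRONMENT: "Production"
  DB_CONNECTION_STRING: "Server=sql.example.com;Database=app;"
---
# Pod spec: envFrom turns every key in the ConfigMap into an env variable.
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
    - name: myapp
      image: myregistry.example.com/myapp:1.0   # placeholder image name
      envFrom:
        - configMapRef:
            name: app-config
```

You would keep one `app-config` ConfigMap per environment (dev/test/prod) while the image stays identical.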

Related

Kubernetes Access Windows Environment Variable

How can I access or read a Windows environment variable in Kubernetes? I achieved this in a Docker Compose file.
How can I do the same in Kubernetes, since I am unable to read the Windows environment variables?
Nothing in the standard Kubernetes ecosystem can be configured using host environment variables.
If you're using the core kubectl tool, the YAML files you'd feed into kubectl apply are self-contained manifests; they cannot depend on host files or environment variables. This can be wrapped in a second tool, Kustomize, which can apply some modifications, but that explicitly does not support host environment variables. Helm lets you build Kubernetes manifests using a templating language, but that also specifically does not use host environment variables.
You'd need to somehow inject the environment variable value into one of these deployment systems. With all three of these tools, you could include those in a file (a Kubernetes YAML manifest, a Kustomize overlay, a Helm values file) that could be checked into source control; you may also be able to retrieve these values from some sort of external storage. But just relaying host environment variables into a container isn't an option in Kubernetes.
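As a sketch of the "put the value in a file" option with Helm (file and key names are hypothetical): instead of reading a host environment variable, the value lives in a per-environment values file that is checked into source control.

```yaml
# values-production.yaml -- committed to source control and selected at
# deploy time, e.g.: helm install myapp ./chart -f values-production.yaml
env:
  API_URL: "https://api.example.com"
  LOG_LEVEL: "warn"
```

The chart's templates then reference `.Values.env.API_URL` rather than anything from the host.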

DevOps and environment variables with Docker images and containers

I am a newbie with Docker and want to understand how to deal with environment variables in images/containers and how to configure CI/CD pipelines.
In the first instance I need the big picture before deep-diving into commands etc. I searched a lot on the Internet, but in most cases I only found detailed commands for how to create, build, and publish images.
I have a .NET Core web application. As you all know, there are appsettings.json files for each environment, like appsettings.development.json or appsettings.production.json.
During the build you can supply the environment information so .NET can build the application with the specified environment variables, like connection strings.
I can define the same steps in the Dockerfile and pass the environment as a parameter or define it as variables. That part works fine.
My question is: do I have to create separate images for each of my environments? If not, how can I create one image and use it to create containers for all of my environments? What is the best practice?
I hope I am understanding the question correctly. If the environments run the same framework, then no. In each project, import the necessary files for Docker and then update the docker-compose.yml for the project; it will then create an image for that project. Using Docker Desktop (if you prefer it over the CLI) you can start and stop your containers.
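To illustrate the single-image pattern the question is asking about (service, image, and variable names are hypothetical): the same image is reused everywhere, and only the environment supplied by the compose file differs. `ASPNETCORE_ENVIRONMENT` tells a .NET Core app which appsettings.&lt;env&gt;.json to layer on at runtime.

```yaml
# docker-compose.yml: one image for all environments; per-environment
# settings come from the runtime environment, not from the build.
services:
  web:
    image: myregistry.example.com/mywebapp:1.0
    environment:
      # Compose substitutes ${APP_ENV} from the shell, defaulting to Development.
      - ASPNETCORE_ENVIRONMENT=${APP_ENV:-Development}
    env_file:
      - ./env/${APP_ENV:-Development}.env   # connection strings, API keys, ...
```

With this setup the CI pipeline builds and publishes the image once, and deployment to each environment only changes `APP_ENV` and the referenced env file.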

how to define and share pods environment variable to pod's apps in kubernetes

I am deploying some apps in Kubernetes, and my apps use a config management tool called Apollo. This tool needs the apps' running environment (develop/test/production/...) to be defined in one of these ways:
1. Java args
2. application.properties
3. /etc/settings/data.properties
Now that I am running the apps in Kubernetes, the question is: how do I define the running environment variable?
1. If I choose Java args, I have to keep several scripts like start-develop-env.sh/start-test-env.sh/start-pro-env.sh.
2. If I choose application.properties, I have to keep application-develop.properties/application-test.properties/...
3. If I choose /etc/settings/data.properties, it is impossible to log into every Docker container to define the config file for each environment.
What is the best way to solve this problem? If I write it in the Kubernetes deployment YAML, my apps cannot read it (defining the variable for a batch of pods in one place would be better).
You can implement #2 and #3 using a configmap. You can define the properties file as a configmap, and mount that into the containers, either as application.properties or data.properties. The relevant section in k8s docs is:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/
Using java args might be more involved. You can define a script as you said, and run that script to setup the environment for the container. You can store that script as a ConfigMap as well. Or, you can define individual environment variables in your deployment yaml, define a ConfigMap containing properties, and populate those environment variables from the configmap. The above section also describes how to setup environment variables from a configmap.
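As a sketch of option #3 with a ConfigMap mounted as a file (ConfigMap name, keys, and values here are hypothetical; the mount path matches the /etc/settings/data.properties location the question mentions):

```yaml
# Hypothetical per-environment ConfigMap holding the properties file.
apiVersion: v1
kind: ConfigMap
metadata:
  name: apollo-env
data:
  data.properties: |
    env=DEV
    apollo.meta=http://apollo-meta.example.com
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:1.0            # placeholder image
          volumeMounts:
            - name: settings
              mountPath: /etc/settings   # data.properties appears here
      volumes:
        - name: settings
          configMap:
            name: apollo-env
```

You then keep one small ConfigMap per cluster/namespace instead of baking the file into the image or logging into containers.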

How can I present environmental information (like external service URLs or passwords) to an OpenShift / Kubernetes deployment?

I have a front-end (React) application. I want to build it once and deploy it to 3 environments: dev, test, and production. Like every front-end app, it needs to call some APIs. The API addresses will vary between the environments, so they should be stored as environment variables.
I use the S2I OpenShift build strategy to create the image. The image should be built and then sealed against changes; before deployment to each particular environment the variables should be injected.
So I believe the proper solution is a chained, two-stage build: the first stage is S2I, which compiles the sources and puts the result into an Nginx/Apache/other container, and the second takes the result of the first, adds the environment variables, and produces the final images to be deployed to dev, test, and production.
Is this the correct approach, or does a simpler solution exist?
I would not bake your environmental information into your runtime container image. One of the major benefits of containerization is using the same runtime image in all of your environments. Generating a different image for each environment increases the chance that your production deployments behave differently than those you tested in your lower environments.
For non-sensitive information the typical approach for parameterizing your runtime image is to use one or more of:
ConfigMaps
Secrets
Pod Environment Variables
For sensitive information the typical approach is to use:
Secrets (which are not very secret, as anyone with root access to the hosts or cluster-admin in the cluster RBAC can read them)
A vault solution like Hashicorp Vault or Cyberark
A custom solution you develop in-house that meets your security needs
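As a sketch of the non-sensitive/sensitive split (ConfigMap, Secret, and key names are hypothetical), a container spec fragment might pull the API URL from a ConfigMap and a credential from a Secret:

```yaml
# Container spec fragment from a Deployment/DeploymentConfig; the image is
# identical in dev, test, and prod -- only the referenced objects differ.
containers:
  - name: frontend
    image: image-registry.example.com/frontend:1.0
    env:
      - name: API_URL
        valueFrom:
          configMapKeyRef:
            name: frontend-config    # a ConfigMap per environment
            key: api.url
      - name: API_PASSWORD
        valueFrom:
          secretKeyRef:
            name: frontend-secrets   # a Secret per environment
            key: api.password
```

This keeps the single sealed image the question describes, with the environment-specific values injected at deploy time.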

docker 1.12 swarm : Does Swarm have a configuration store like kubernetes configMap

As per kubernetes docs: http://kubernetes.io/docs/user-guide/configmap/
Kubernetes has a ConfigMap API resource which holds key-value pairs
of configuration data that can be consumed in pods.
This looks like a very useful feature, as many containers require configuration via some combination of config files and environment variables.
Is there a similar feature in Docker 1.12 swarm?
Sadly, Docker (even in 1.12 with swarm mode) does not support the variety of use cases that you could solve with ConfigMaps (also no Secrets).
The only things supported are external env files in both Docker (https://docs.docker.com/engine/reference/commandline/run/#/set-environment-variables-e-env-env-file) and Compose (https://docs.docker.com/compose/compose-file/#/env-file).
These are good to keep configuration out of the image, but they rely on environment variables, so you cannot just externalize your whole config file (e.g. for use in nginx or Prometheus). Also you cannot update the env file separately from the deployment/service, which is possible with K8s.
Workaround: you could build your configuration files in a way that draws on the variables from the env file.
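For instance (file and variable names hypothetical), the env file keeps the values out of the image, and an entrypoint script inside the image could render nginx.conf from a template using those variables:

```yaml
# docker-compose.yml fragment: configuration stays outside the image.
services:
  proxy:
    image: nginx:1.11
    env_file:
      - ./config/prod.env   # e.g. BACKEND_HOST=api.internal
```

You still cannot update `prod.env` independently of a redeploy, which is the limitation noted above.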
I'd guess that sooner or later Docker will add this functionality. Currently, Swarm is still in its early days, so for advanced use cases you'd need to either wait (mid to long term, all platforms will have similar features), build your own hack/workaround, or go with K8s, which has this stuff integrated.
Sidenote: for secrets storage I would recommend HashiCorp's Vault. However, for configuration it might not be the right tool.
