I would like to be able to test my Docker application locally before sending it to the cluster, and I want to use minikube for this. Instead of having multiple Kubernetes config files that define env variables for the cloud environment and for my local machine, I would like to override some of the env variables when running locally. I can see that you can do something like that with Docker Compose:
docker-compose -f docker-compose.yml -f docker-compose.e2e.yml up
The second file would only have the overriding values. Yes, there are two files but I find it clean.
Is there a way to do something similar with Kubernetes/minikube, or even something better?
I think you are asking how to pass different environment values into your Pods depending on which environment they are deployed to. One pattern for this is to deploy with Helm: you use templated versions of your Kubernetes descriptors for deployment, along with a values.yaml file containing the values to be injected into those descriptors. You can switch and overlay values.yaml files at install time to control which values are injected for a given installation.
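As a rough sketch of that pattern (the chart layout, value names, and release name here are assumptions, not something from the question), the templated descriptor reads values that a per-environment file overrides at install time:

# templates/deployment.yaml (excerpt of the container spec in the Helm chart)
env:
  - name: LOG_LEVEL
    value: {{ .Values.logLevel | quote }}
  - name: API_URL
    value: {{ .Values.apiUrl | quote }}

# values.local.yaml: only the local overrides; the chart's own values.yaml
# keeps the cloud defaults. Install locally with, for example:
#   helm install myapp ./chart -f values.local.yaml
logLevel: debug
apiUrl: http://localhost:8080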
If you are instead asking how to switch whether a kubectl command runs against the local or the cloud cluster without having to keep swapping kubeconfig files, you can add both contexts to your kubeconfig and use kubectl config use-context to switch between them, as #Ijaz Khan suggests.
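For reference, a kubeconfig can hold both contexts side by side (this excerpt is a sketch; the cloud cluster and user names are assumptions):

# ~/.kube/config (excerpt)
contexts:
  - name: minikube
    context:
      cluster: minikube
      user: minikube
  - name: cloud
    context:
      cluster: my-cloud-cluster   # hypothetical cluster name
      user: my-cloud-user         # hypothetical user name
current-context: minikube
# Switch with: kubectl config use-context cloud   (and back with: kubectl config use-context minikube)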
How can I access or read a Windows environment variable in Kubernetes? I achieved this in a Docker Compose file.
How can I do the same in Kubernetes, given that I am unable to read the Windows environment variables?
Nothing in the standard Kubernetes ecosystem can be configured using host environment variables.
If you're using the core kubectl tool, the YAML files you'd feed into kubectl apply are self-contained manifests; they cannot depend on host files or environment variables. This can be wrapped in a second tool, Kustomize, which can apply some modifications, but that explicitly does not support host environment variables. Helm lets you build Kubernetes manifests using a templating language, but that also specifically does not use host environment variables.
You'd need to somehow inject the environment variable value into one of these deployment systems. With all three of these tools, you could include those in a file (a Kubernetes YAML manifest, a Kustomize overlay, a Helm values file) that could be checked into source control; you may also be able to retrieve these values from some sort of external storage. But just relaying host environment variables into a container isn't an option in Kubernetes.
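For example, a value that would otherwise come from the host environment could instead live in a ConfigMap manifest that is checked into source control and applied together with everything else (a minimal sketch; the names and value are assumptions):

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-settings
data:
  # the value is written into the file rather than read from the host at deploy time
  SOME_SETTING: "copied-into-source-control"

A container can then pick it up through env.valueFrom.configMapKeyRef or envFrom in its Pod spec.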
I'm trying to run my application on Kubernetes. My Docker container has environment variables such as PATH and LD_LIBRARY_PATH, which are set in the Dockerfile. I tried to change them in the YAML file like this:
env:
  - name: LD_LIBRARY_PATH
    value: "foo:$(LD_LIBRARY_PATH)"
The above configuration doesn't work; I just see LD_LIBRARY_PATH=foo:$(LD_LIBRARY_PATH) in the pod. This method seems to work for Kubernetes env variables such as KUBERNETES_PORT_443_TCP_PROTO, but not for Docker env variables.
My questions are:
I think the env settings in the YAML are injected into Docker before the container starts running, so Kubernetes cannot read the value of LD_LIBRARY_PATH and therefore can't change the variable. Do I understand that right?
How can I change container environment variables with the Kubernetes env field? I know that I can set env variables in the command field of the YAML file, but that doesn't seem clean; are there other ways to do it?
If Kubernetes can't change existing env variables, does that mean the env field in the YAML file is designed only to add new ones?
Thank you!
The Kubernetes variable expansion syntax only works on things Kubernetes directly knows about. Inside a container an environment variable could come from a couple of places (the Dockerfile ENV directive, the base container environment itself, setup in an entrypoint script) and Kubernetes doesn't consider any of these; it only considers things in the same container spec. The API definition of EnvVar hints at this:
Variable references $(VAR_NAME) are expanded using the previous defined environment variables in the container and any service environment variables.
You can't use this Kubernetes syntax to change environment variables in the way you're describing. You can only refer to other things in the same env: block (which may come from ConfigMaps or Secrets) and the implicit variables that come from other Services.
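What does work is referring to a variable defined earlier in the same env: block, for example (a minimal sketch; the variable names and paths are assumptions):

env:
  - name: EXTRA_LIBS
    value: /opt/app/lib
  - name: LD_LIBRARY_PATH
    # $(EXTRA_LIBS) expands because it is defined just above in this same block;
    # the image's own LD_LIBRARY_PATH is invisible to Kubernetes and would not expand.
    value: "$(EXTRA_LIBS):/usr/lib"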
(Changing path-type variables at the Kubernetes level doesn't make a lot of sense. Since an image is self-contained, it already contains all of the commands and libraries it would need. It's difficult in Kubernetes to inject more tools or libraries; it'd be better to install them directly in your image, ideally in /usr/lib or /usr/local/lib, but failing that you can update ENV in a Dockerfile similar to how you suggest here.)
I am working on a migration task from an on-premise system to Cloud Composer. Cloud Composer is a fully managed version of Airflow that restricts access to the underlying file system, and on my on-premise system I have a lot of environment variables holding paths such as /opt/application/folder_1/subfolder_2/....
Looking at the Cloud Composer documentation, it says you can access and save your data in the data folder, which is mapped to /home/airflow/gcs/data/. This implies that if I adopt that mapping, I will have to change my environment variable values to something like /home/airflow/gcs/data/application/folder_1/folder_2, which could be a bit painful given that I'm running many bash scripts that rely on those values.
Is there any approach to solving this problem?
You can specify your env variables during the Composer creation/update process [1]. These vars are then stored in the YAML files that create the GKE cluster where Composer is hosted. If you SSH into a VM of the GKE cluster where Composer runs, then enter one of the worker containers and run env, you can see the env variables you specified.
[1] https://cloud.google.com/composer/docs/how-to/managing/environment-variables
I am deploying some apps in Kubernetes, and my apps use a config management tool called Apollo. This tool needs the apps' running environment (develop/test/production/...) to be defined in one of the following ways: 1. Java args, 2. application.properties, 3. /etc/settings/data.properties. Now that I am running the apps in Kubernetes, the question is: how do I define the running-environment variable?
1. If I choose Java args, I have to keep scripts such as start-develop-env.sh / start-test-env.sh / start-pro-env.sh.
2. If I choose application.properties, I have to keep application-develop.properties / application-test.properties, and so on.
3. If I choose /etc/settings/data.properties, it is impractical to log into every Docker container to define the config file for each environment.
What is the best way to solve this problem? If I define it in the Kubernetes Deployment YAML, my apps cannot read it. (Defining the variable for a whole collection of Pods in one place would be better.)
You can implement #2 and #3 using a ConfigMap. You can define the properties file as a ConfigMap and mount it into the containers, either as application.properties or as data.properties. The relevant section in the Kubernetes docs is:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/
Using Java args might be more involved. You can define a script as you said and run it to set up the environment for the container; that script can be stored as a ConfigMap as well. Alternatively, you can define individual environment variables in your Deployment YAML, define a ConfigMap containing the properties, and populate those environment variables from the ConfigMap. The page above also describes how to set up environment variables from a ConfigMap.
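A minimal sketch of the mounted-ConfigMap approach (the names, property keys, and image are assumptions):

apiVersion: v1
kind: ConfigMap
metadata:
  name: apollo-env-config
data:
  data.properties: |
    env=DEV
    some.other.property=value
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: app
      image: my-app:latest              # hypothetical image
      volumeMounts:
        - name: env-config
          mountPath: /etc/settings      # the key appears as /etc/settings/data.properties
  volumes:
    - name: env-config
      configMap:
        name: apollo-env-config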
The business requirement is the following:
Stop running container
Modify the environment (e.g. change the value of the DEBUG_LEVEL environment variable)
Start container
This is easily achievable using the Docker CLI:
docker create/docker stop/docker start
How do I do this with Kubernetes?
Additional info:
We are migrating from Cloud Foundry to Kubernetes. In CF, you deploy an application, stop it, set an environment variable, and start it again. The same functionality is needed.
For those who are not familiar with CF applications: a CF application is like a Docker container running a single (micro)service.
Typically, you would run your application as a Deployment or as a StatefulSet. In this case, just change the value of the environment variable in the template and reapply the Deployment (or StatefulSet). Kubernetes will do the rest for you.
See the Kubernetes documentation for details.
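As a rough sketch (the names, image, and variable value here are assumptions), the relevant part of such a Deployment looks like this; edit the value and run kubectl apply -f again, and Kubernetes rolls the Pods for you:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: app
          image: my-service:1.0        # hypothetical image
          env:
            - name: DEBUG_LEVEL
              value: "debug"           # change this value, then re-apply the file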
Let's say you are creating a pod/deployment/statefulset using the following command.
kubectl apply -f blueprint.yaml
blueprint.yaml is the YAML file which contains the blueprint of your pod/deployment/statefulset object.
Method 1 - If you specify the environment variables in the YAML file
Then you can edit blueprint.yaml to modify the value of the environment variable.
https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/
Then execute the same command again to apply the changes.
Method 2 - If you specify the environment variables in the Dockerfile
You should build your Docker image with a new tag, then change the image tag in the blueprint.yaml file and execute the same command again to apply the changes.
Method 3
You can also delete and create the pod/deployment/statefulset again.
kubectl delete -f blueprint.yaml
kubectl apply -f blueprint.yaml
There is also another possibility:
Define container environment variables using configmap data
Let Kubernetes react to ConfigMap changes. A ConfigMap change does not trigger a restart of the Pods by default, unless you change the Pod spec somehow. A common way to achieve that is to include a SHA-256 hash of the ConfigMap in the Pod template (for example as an annotation), so that any change to the ConfigMap also changes the Pod spec and triggers a rollout.
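A rough sketch of that pattern (the names, image, and checksum value are assumptions): the environment comes from a ConfigMap, and an annotation on the Pod template carries a hash of the ConfigMap contents, so changing the ConfigMap changes the Pod spec and triggers a rollout.

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-env
data:
  DEBUG_LEVEL: "info"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
      annotations:
        # regenerate this (e.g. the sha256sum of the ConfigMap manifest) whenever the ConfigMap changes
        checksum/config: "replace-with-sha256-of-the-configmap"
    spec:
      containers:
        - name: app
          image: my-app:latest         # hypothetical image
          envFrom:
            - configMapRef:
                name: app-env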