I'm using config maps to inject env variables into my containers. Some of the variables are created by concatenating variables, for example:
~/.env file
HELLO=hello
WORLD=world
HELLO_WORLD=${HELLO}_${WORLD}
I then create the config map
kubectl create configmap env-variables --from-env-file ~/.env
The deployment manifests reference the config map.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    spec:
      containers:
      - name: my-image
        image: us.gcr.io/my-image
        envFrom:
        - configMapRef:
            name: env-variables
When I exec into my running pods, and execute the command
$ printenv HELLO_WORLD
I expect to see hello_world, but instead I see ${HELLO}_${WORLD}. The variables aren't expanded, and therefore my applications that refer to these variables will get the unexpanded value.
How do I ensure the variables get expanded?
If it matters, my images are using alpine.
I can't find any documentation on interpolating environment variables, but I was able to get this to work by removing the interpolated variable from the configmap and listing it directly in the deployment. It also works if all variables are listed directly in the deployment. It looks like kubernetes doesn't apply interpolation to variables loaded from configmaps.
For instance, this will work:
Configmap
apiVersion: v1
data:
  HELLO: hello
  WORLD: world
kind: ConfigMap
metadata:
  name: env-variables
  namespace: default
Deployment:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    spec:
      containers:
      - name: my-image
        image: us.gcr.io/my-image
        envFrom:
        - configMapRef:
            name: env-variables
        env:
        - name: HELLO_WORLD
          value: $(HELLO)_$(WORLD)
I'm thinking about just expanding the variables before creating the ConfigMap and uploading it to Kubernetes.
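A minimal sketch of that approach, assuming the ~/.env file is valid shell syntax and that envsubst (from GNU gettext) is available; the temporary file path is just illustrative:
set -a; . ~/.env; set +a                 # export HELLO, WORLD, ... so envsubst can see them
envsubst < ~/.env > /tmp/expanded.env    # HELLO_WORLD=${HELLO}_${WORLD} becomes HELLO_WORLD=hello_world
kubectl create configmap env-variables --from-env-file /tmp/expanded.env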
Another parallel approach would be to use kustomize:
kustomize lets you customize raw, template-free YAML files for multiple purposes, leaving the original YAML untouched and usable as is.
It's like make, in that what it does is declared in a file, and it's like sed, in that it emits edited text.
The sed part should be able to generate the right expanded value in your yaml file.
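For example, a minimal kustomization.yaml with a configMapGenerator could look like this (a sketch; kustomize itself does not interpolate ${VAR} references, so the combined value has to be written out, or produced by a sed/envsubst step, already expanded):
# kustomization.yaml
configMapGenerator:
  - name: env-variables
    literals:
      - HELLO=hello
      - WORLD=world
      - HELLO_WORLD=hello_world   # pre-expanded; kustomize will not expand ${HELLO}_${WORLD}
# kustomize appends a content-hash suffix to the generated name unless
# generatorOptions: disableNameSuffixHash: true is set
Build and apply with kubectl apply -k . or kustomize build . | kubectl apply -f -.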
I have a Kubernetes Job, job.yaml :
---
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace
---
apiVersion: batch/v1
kind: Job
metadata:
  name: my-job
  namespace: my-namespace
spec:
  template:
    spec:
      containers:
      - name: my-container
        image: gcr.io/project-id/my-image:latest
        command: ["sh", "run-vpn-script.sh", "/to/download/this"] # need to run this multiple times
        securityContext:
          privileged: true
          allowPrivilegeEscalation: true
      restartPolicy: Never
I need to run the command with different parameters; I have about 30 parameters to run. I'm not sure what the best solution is here. I'm thinking of creating a container in a loop to run all parameters. How can I do this? I want to run the commands or containers all simultaneously.
Some of the ways that you could do it, outside of the solutions proposed in other answers, are the following:
With a templating tool like Helm where you would template the exact specification of your workload and then iterate over it with different values (see the example)
Use the Kubernetes official documentation on work queue topics:
Indexed Job for Parallel Processing with Static Work Assignment - alpha
Parallel Processing using Expansions
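As a rough sketch of the first option, an Indexed Job gives every pod its own JOB_COMPLETION_INDEX environment variable which you can map to a parameter; the way the index is turned into a download path here is only illustrative, and it requires a cluster version with Indexed Jobs available:
apiVersion: batch/v1
kind: Job
metadata:
  name: my-job
  namespace: my-namespace
spec:
  completions: 30
  parallelism: 30          # run all 30 pods at the same time
  completionMode: Indexed  # each pod gets JOB_COMPLETION_INDEX=0..29
  template:
    spec:
      containers:
      - name: my-container
        image: gcr.io/project-id/my-image:latest
        command: ["sh", "-c", "run-vpn-script.sh /to/download/part-$JOB_COMPLETION_INDEX"]
      restartPolicy: Never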
Helm example:
Helm, in short, is a templating tool that will allow you to template your manifests (YAML files). With it you could have multiple instances of Jobs, each with a different name and a different command.
Assuming that you've installed Helm by following this guide:
Helm.sh: Docs: Intro: Install
You can create an example Chart that you will modify to run your Jobs:
helm create chart-name
You will need to delete everything in chart-name/templates/ and clear the chart-name/values.yaml file.
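For example (paths assume the chart was created as chart-name above):
rm -rf chart-name/templates/*
: > chart-name/values.yaml   # truncate values.yaml to an empty file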
After that you can create the values.yaml file which you will iterate over:
jobs:
  - name: job1
    command: ['"perl", "-Mbignum=bpi", "-wle", "print bpi(3)"']
    image: perl
  - name: job2
    command: ['"perl", "-Mbignum=bpi", "-wle", "print bpi(20)"']
    image: perl
templates/job.yaml
{{- range $jobs := .Values.jobs }}
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ $jobs.name }}
  namespace: default # <-- FOR EXAMPLE PURPOSES ONLY!
spec:
  template:
    spec:
      containers:
      - name: my-container
        image: {{ $jobs.image }}
        command: {{ $jobs.command }}
        securityContext:
          privileged: true
          allowPrivilegeEscalation: true
      restartPolicy: Never
---
{{- end }}
If you have the above files created, you can run the following command to see what will be applied to the cluster beforehand:
$ helm template . (inside the chart-name folder)
---
# Source: chart-name/templates/job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: job1
  namespace: default
spec:
  template:
    spec:
      containers:
      - name: my-container
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(3)"]
        securityContext:
          privileged: true
          allowPrivilegeEscalation: true
      restartPolicy: Never
---
# Source: chart-name/templates/job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: job2
  namespace: default
spec:
  template:
    spec:
      containers:
      - name: my-container
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(20)"]
        securityContext:
          privileged: true
          allowPrivilegeEscalation: true
      restartPolicy: Never
A side note #1!
This example will create X amount of Jobs where each one will be separate from the other. Please refer to the documentation on data persistency if the downloaded files need to be stored persistently (example: GKE).
A side note #2!
You can also add your namespace definition in the templates (templates/namespace.yaml) so it will be created before running your Jobs.
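A minimal templates/namespace.yaml for that, reusing the namespace from the question, could look like:
# templates/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace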
You can also deploy the above Chart by running:
$ helm install chart-name . (inside the chart-name folder)
After that you should see 2 completed Jobs:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
job1-2dcw5 0/1 Completed 0 82s
job2-9cv9k 0/1 Completed 0 82s
And the output that they've created:
$ echo "one:"; kubectl logs job1-2dcw5; echo "two:"; kubectl logs job2-9cv9k
one:
3.14
two:
3.1415926535897932385
Additional resources:
Stackoverflow.com: Questions: Kubernetes creation of multiple deployment with one deployment file
In simpler terms, you want to run multiple commands. The following is a sample format to execute multiple commands in a pod:
command: ["/bin/bash","-c","touch /foo && echo 'here' && ls /"]
When we apply this logic to your requirement for two different operations:
command: ["sh", "-c", "run-vpn-script.sh /to/download/this && run-vpn-script.sh /to/download/another"]
If you want to run the same command multiple times you can deploy the same YAML multiple times by just changing the name.
You can go with the sed command to replace the values in the YAML and then apply the resulting YAML to the cluster to create the container.
Example job.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace
---
apiVersion: batch/v1
kind: Job
metadata:
  name: my-job
  namespace: my-namespace
spec:
  template:
    spec:
      containers:
      - name: my-container
        image: gcr.io/project-id/my-image:latest
        command: COMMAND # need to run this multiple times
        securityContext:
          privileged: true
          allowPrivilegeEscalation: true
      restartPolicy: Never
Command:
sed -i 's|COMMAND|["sh", "run-vpn-script.sh", "/to/download/this"]|' job.yaml
The above command replaces the placeholder in the YAML, and you can then apply the YAML to the cluster to create the container. You can do the same for other variables.
You can pass different parameters as needed into the command that is set in the YAML.
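For example, a sketch of a shell loop over 30 parameters (the my-job-$i names and /to/download/part-$i paths are only illustrative):
for i in $(seq 1 30); do
  sed -e "s/name: my-job/name: my-job-$i/" \
      -e "s|COMMAND|[\"sh\", \"run-vpn-script.sh\", \"/to/download/part-$i\"]|" job.yaml \
    | kubectl apply -f -
done
Because every Job gets a unique name, all 30 run simultaneously.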
You can also create Jobs directly from the command line, for example:
kubectl create job test-job --from=cronjob/a-cronjob
https://www.mankier.com/1/kubectl-create-job
and pass other parameters into the command as needed.
If you just want to run a Pod rather than a Job, you can also try:
kubectl run nginx --image=nginx --command -- <cmd> <arg1> ... <argN>
https://jamesdefabia.github.io/docs/user-guide/kubectl/kubectl_run/
I'm creating an application that is using Helm (v3.3.0) + k3s. A program in a container uses different configuration files. As of now there are just a few config files (that I added manually before building the image), but I'd like to be able to add them dynamically while the container is running and not lose them once the container/pod is dead. In Docker I'd do that by exposing a folder like this:
docker run [image] -v /host/path:/container/path
Is there an equivalent for helm?
If not how would you suggest to solve this issue without stopping using helm/k3s?
In Kubernetes (Helm is just a tool on top of it) you need to do two things to mount a host path inside a container:
spec:
  volumes:
    # 1. Declare a 'hostPath' volume under pod's 'volumes' key:
    - name: name-me
      hostPath:
        path: /path/on/host
  containers:
    - name: foo
      image: bar
      # 2. Mount the declared volume inside container using volume name
      volumeMounts:
        - name: name-me
          mountPath: /path/in/container
There are lots of other volume types and examples in the Kubernetes documentation.
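If the goal is for the files to survive pod restarts, a PersistentVolumeClaim is usually a better fit than hostPath; a minimal sketch (the claim name is arbitrary, and k3s typically ships a default local-path StorageClass, so storageClassName can be omitted):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: config-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 100Mi
In the pod spec you would then reference it with persistentVolumeClaim: {claimName: config-data} instead of hostPath.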
Kubernetes has a dedicated construct for holding configuration files, ConfigMaps. Helm in turn has support for Accessing Files Inside Templates which can help you copy them into ConfigMap objects. A minimal setup here would look like:
# templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  config.ini: |
{{ .Files.Get "config.ini" | indent 4 }}
# templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata: { ... }
spec:
  template:
    spec:
      volumes:
        - name: config-data
          configMap:
            name: my-config # matches ConfigMap metadata: { name: }
      containers:
        - volumeMounts:
            - name: config-data # matches volume name: in this file
              mountPath: /container/path
You can use Helm's templating constructs in various ways here: to dynamically construct the contents of the ConfigMap, to set an environment variable saying which file to use, and so on.
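For instance, a sketch that copies every file under a conf/ directory of the chart into the ConfigMap (the conf/ directory is an assumption here; .Files.Glob and AsConfig are Helm built-ins):
# templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
{{ (.Files.Glob "conf/*").AsConfig | indent 2 }}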
Do not use hostPath volumes here. Since Kubernetes is designed as a clustered environment, you do not have much control over which node a given pod will run on; you would have to copy these config files to every node in the cluster and try to update them all when a file changed. That's a huge maintenance problem, especially if you don't have direct filesystem access to the nodes.
I have one kubernetes deployment file for e.g:
I want the image tag to be passed at the command line when running the kubectl create -f deployment.yaml command. Suppose I did export IMAGE_TAG=1.4.3 and want that environment variable's value to be inserted at the position of the image tag.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:{IMAGE_TAG}
        ports:
        - containerPort: 80
I do this:
sed -i "s/{IMAGE_TAG}/${IMAGE_TAG}/" deployment.yml
kubectl apply -f deployment.yml
since this kind of substitution is not supported by kubectl.
The most elegant way; you need to install the envsubst binary:
export key1=val1
export key2=val2
envsubst < deployment.yaml | kubectl apply -f -
Note that envsubst substitutes $VAR / ${VAR} style references, so the placeholder in the manifest would need to be ${IMAGE_TAG} rather than {IMAGE_TAG}.
kubectl doesn't support variables out of the box, since manifests are plain, template-free YAML. For what you want to achieve you have different options. The most popular option is Helm, but Kustomize, a newer player, is getting a lot of traction from the community. You can also use other tools like Terraform, which I think is a very decent option but is unfortunately overlooked.
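As a sketch of the Kustomize route for this particular case (it assumes deployment.yml references the image as plain nginx rather than a {IMAGE_TAG} placeholder):
# kustomization.yaml
resources:
  - deployment.yml
images:
  - name: nginx
    newTag: "1.4.3"
Apply with kubectl apply -k ., or update the tag from a script with kustomize edit set image nginx=nginx:${IMAGE_TAG} before applying.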
I want to pass some values from Kubernetes yaml file to the containers. These values will be read in my Java app using System.getenv("x_slave_host").
I have this dockerfile:
FROM jetty:9.4
...
ARG slave_host
ENV x_slave_host $slave_host
...
$JETTY_HOME/start.jar -Djetty.port=9090
The Kubernetes YAML file contains this part, where I added an env section:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: master
spec:
  template:
    metadata:
      labels:
        app: master
    spec:
      volumes:
      - name: shared-data
        emptyDir: {}
      containers:
      - name: master
        image: xregistry.azurecr.io/Y:latest
        ports:
        - containerPort: 9090
        volumeMounts:
        - name: shared-data
          mountPath: ~/.X/experiment
      - env:
        - name: slave_host
          value: slavevalue
      - name: jupyter
        image: xregistry.azurecr.io/X:latest
        ports:
        - containerPort: 8000
        - containerPort: 8888
        volumeMounts:
        - name: shared-data
          mountPath: /var/folder/experiment
      imagePullSecrets:
      - name: acr-auth
Locally when I did the same thing using docker compose, it worked using args. This is a snippet:
master:
  image: master
  build:
    context: ./master
    args:
      - slave_host=slavevalue
  ports:
    - "9090:9090"
So now I am trying to do the same thing but in Kubernetes. However, I am getting the following error (deploying it on Azure):
error: error validating "D:\\a\\r1\\a\\_X\\deployment\\kub-deploy.yaml": error validating data: field spec.template.spec.containers[1].name for v1.Container is required; if you choose to ignore these errors, turn validation off with --validate=false
In other words, how do I rewrite my docker-compose file for Kubernetes and pass this argument?
Thanks!
The env section should be added under the container entry, like this:
containers:
- name: master
  env:
  - name: slave_host
    value: slavevalue
To elaborate on #Kun Li's answer: besides adding environment variables directly in the Deployment manifest, you can create a ConfigMap (or a Secret, depending on the data being stored) and reference it in your manifests. This is a good way of sharing the same environment variables across applications, compared to manually adding environment variables to several different applications.
Note that a ConfigMap can consist of one or more key: value pairs and it's not limited to storing environment variables; that's just one of the use cases. And as I mentioned before, consider using a Secret if the data is classified as sensitive.
Example of a ConfigMap manifest, in this case used for storing an environment variable:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-env-var
data:
  slave_host: slavevalue
To create a ConfigMap holding one key=value pair using kubectl create:
kubectl create configmap my-env --from-literal=slave_host=slavevalue
To get hold of all environment variables configured in a ConfigMap use the following in your manifest:
containers:
- envFrom:
  - configMapRef:
      name: my-env-var
Or if you want to pick one specific environment variable from your ConfigMap containing several variables:
containers:
- env:
  - name: slave_host
    valueFrom:
      configMapKeyRef:
        name: my-env-var
        key: slave_host
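If the value were sensitive, the Secret equivalent would look very similar; a sketch (my-secret-var is just a name chosen here for illustration):
kubectl create secret generic my-secret-var --from-literal=slave_host=slavevalue
and then in the manifest:
containers:
- env:
  - name: slave_host
    valueFrom:
      secretKeyRef:
        name: my-secret-var
        key: slave_host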
See this page for more examples of using ConfigMap's in different situations.
Many applications require configuration via some combination of config files, command line arguments, and environment variables. These configuration artifacts should be decoupled from image content in order to keep containerized applications portable. The ConfigMap API resource provides mechanisms to inject containers with configuration data while keeping containers agnostic of Kubernetes. ConfigMap can be used to store fine-grained information like individual properties or coarse-grained information like entire config files or JSON blobs.
I am unable to find where ConfigMaps are saved. I know they are created; however, I can only read them via the minikube dashboard.
ConfigMaps in Kubernetes can be consumed in many different ways and mounting it as a volume is one of those ways.
You can choose where you would like to mount the ConfigMap on your Pod. Example from K8s documentation:
ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
name: special-config
namespace: default
data:
special.how: very
special.type: charm
Pod
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: gcr.io/google_containers/busybox
      command: [ "/bin/sh", "-c", "cat /etc/config/special.how" ]
      volumeMounts:
      - name: config-volume
        mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        name: special-config
  restartPolicy: Never
Note the volumes definition and the corresponding volumeMounts.
Other ways include:
Consumption via environment variables
Consumption via command-line arguments
Refer to the documentation for full examples.
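For example, a minimal sketch of the environment-variable route using the same ConfigMap (the SPECIAL_HOW name and the pod name are chosen here just for illustration), which also shows the value being consumed as a command-line argument via $(VAR) expansion:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod-env
spec:
  containers:
    - name: test-container
      image: gcr.io/google_containers/busybox
      env:
        - name: SPECIAL_HOW
          valueFrom:
            configMapKeyRef:
              name: special-config
              key: special.how
      # $(SPECIAL_HOW) is substituted by Kubernetes from the env var defined above
      command: [ "/bin/sh", "-c", "echo $(SPECIAL_HOW)" ]
  restartPolicy: Never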