helm range exclude #values - devops

I am trying to generate a ConfigMap by reading env.config from the Helm values file, and it works as expected. However, I need to exclude any entry whose key starts with # from the values under env.config in the function below.
Is there any method to achieve this?
data:
{{- range $key, $val := .Values.env.config }}
  {{ $key }}: {{ $val | quote }}
{{- end }}
values file:
env:
  config:
    value1: valueone
    value2: valuetwo
    #value3: valuethree
    value4: valuefour
Here value3 should be ignored while generating the ConfigMap.
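One possible approach (a sketch, not from the original thread) is to skip any key that starts with # using the hasPrefix function from Helm's built-in Sprig library:
data:
{{- range $key, $val := .Values.env.config }}
{{- if not (hasPrefix "#" $key) }}
  {{ $key }}: {{ $val | quote }}
{{- end }}
{{- end }}
Note that if #value3 is written as a plain, unquoted key in values.yaml, the YAML parser already treats that line as a comment; the hasPrefix check only matters when the key is quoted (e.g. "#value3") and therefore survives parsing.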

Related

Helm resolve .Release.Namespace that was passed into template file

I'm trying to set a variable in my values.yaml that defaults to {{ .Release.Namespace }}. But when I check the end result using --dry-run, the value is "{{ .Release.Namespace }}" and not the actual namespace.
If I set ticker.secretNamespace to a plain string, e.g. "foo", it works. How can I get this working...
Thanks!
values.yaml
ticker:
  secretNamespace: "{{ .Release.Namespace }}"
/templates/prep.yaml
...
containers:
  - name: prep
    command:
      - /bin/bash
      - -exuc
      - |
        DEBUG A: {{ $.Release.Namespace }}
        {{- $namespace := $.Release.Namespace }}
        DEBUG B: {{ $namespace }}
        {{- with .Values.ticker }}
        {{- if not (eq .secretNamespace $namespace) }}
        DEBUG C: {{ .secretNamespace }}
        kubectl --namespace "{{ .secretNamespace }}" create secret ..."
        {{- end }}
        {{- end }}
...
dry-run result
containers:
  - name: prep
    command:
      - /bin/bash
      - -exuc
      - |
        DEBUG A: test-ns
        DEBUG B: test-ns
        DEBUG C: {{ .Release.Namespace }}
        kubectl --namespace "{{ .Release.Namespace }}" create secret ...
I tried some different notations inside values.yaml, but had no luck:
secretNamespace: {{ .Release.Namespace }}
secretNamespace: {{ $.Release.Namespace }}
secretNamespace: .Release.Namespace
secretNamespace: "{{ .Release.Namespace | quote }}"
secretNamespace: "{{ .Release.Namespace | tpl }}"

How to evaluate helmv3 printf output as jinja expression

Based on the number of nodes, I need to look up the certificate and key from the values file.
{{- $root := . -}}
{{ range $k, $v := until (int ($root.Values.data.nodeCount)) }}
apiVersion: v1
kind: Secret
metadata:
  name: {{ $root.Values.clusterName }}-nodes-{{ $k }}-secret
type: Opaque
data:
  crt: {{ printf "$root.Values.data.node_%d_key" $k }}
---
{{- end }}
Example output: it doesn't show the resolved value, it only shows the printf output, which is a plain string. How can I evaluate the printf output so that the result is looked up from values.yaml?
---
apiVersion: v1
kind: Secret
metadata:
  name: test-nodes-0-secret
type: Opaque
data:
  crt: $root.Values.data.node_0_key
---
apiVersion: v1
kind: Secret
metadata:
  name: test-nodes-1-secret
type: Opaque
data:
  crt: $root.Values.data.node_1_key
example values.yml
nodeCount: 3
clusterName: "test"
data:
  nodeCount: 2
  node_0_crt: "test"
  node_0_key: "test"
  node_1_crt: "test1"
  node_1_key: "test1"
Note that Helm doesn't use Jinja templating syntax. If you were to try using a Jinja expression in a Helm template it wouldn't work. Helm uses Go templates (with a bunch of custom functions).
For the behavior you want, I think you're looking for the tpl function, which lets you evaluate a string as a Helm template. That might look like this:
{{ range until (int $.Values.data.nodeCount) }}
apiVersion: v1
kind: Secret
metadata:
  name: {{ $.Values.clusterName }}-nodes-{{ . }}-secret
type: Opaque
data:
  crt: {{ tpl (printf "{{ .Values.data.node_%d_key }}" .) $ }}
---
{{- end }}
Note that I've also removed your use of $root; you can just refer to the $ variable if you need to explicitly refer to the root context. I've also slightly simplified the outer range loop.
Given the above template and your sample data, I get the following output:
---
# Source: example/templates/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: test-nodes-0-secret
type: Opaque
data:
  crt: test
---
# Source: example/templates/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: test-nodes-1-secret
type: Opaque
data:
  crt: test1
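As an aside (not part of the original answer): Kubernetes expects the values under a Secret's data field to be base64 encoded, so you may want to pipe the looked-up value through Sprig's b64enc function, roughly like this:
  crt: {{ tpl (printf "{{ .Values.data.node_%d_key }}" .) $ | b64enc }}
Alternatively, the stringData field accepts plain strings and lets the API server handle the encoding.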

How to pass dynamic argument to a helm chart from values

I have a process running in an Ubuntu Docker container which I run with the following command:
docker run container_name:latest -ms server_name -msp server_port -ma server_address -ir receiver_ip_address -pr receiver_port -s sleep_time -r true
An ENTRYPOINT is defined in the Dockerfile; the container starts from it, and the arguments passed to docker run are appended to it.
With the following deployment.yaml:
containers:
  - name: container_name
    image: {{ .Values.global.container_name }}:{{ .Values.global.container_name.tag }}
    args:
    {{ range .Values.global.container_args }}
    - {{ . }}
    {{ end }}
And the following values.yaml:
global:
  container_args: ['-ms','server_name','-msp','server_port','-ma','server_address','-ir','receiver_ip_address','-pr','receiver_port','-s','sleep_time','-r','true']
In particular, the -r and -s flags are passed from the values.yaml file as boolean and integer values inside single quotes, as is server_port.
I get the following error:
INSTALLATION FAILED: Deployment in version "v1" cannot be handled as a Deployment: v1.Deployment.Spec: v1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.Args: []string: ReadString: expects " or n, but found 1, error found in #10 byte of ...|","-msp",server_port,"-ma","|..., bigger context ...|args":["-ms","server_name","-msp",server_port,"-ma","server_address","-m|...
Removing the integer and boolean values from the values.yaml file and inserting random strings instead, I noticed that the pod was running (with an application error) and the previous error did not appear.
Trying to figure out the reason for this problem I found this post:
Error when running Helm Chart with environment variables
I changed the deployment.yaml template from:
containers:
  - name: container_name
    image: {{ .Values.global.container_name }}:{{ .Values.global.container_name.tag }}
    args:
    {{ range .Values.global.container_args }}
    - {{ . }}
    {{ end }}
to:
containers:
  - name: container_name
    image: {{ .Values.global.container_name }}:{{ .Values.container_name.tag }}
    args:
    {{ range .Values.global.container_args }}
    - {{ . | quote }}
    {{ end }}
using the quote function. This renders every argument as a quoted YAML string, so values that look like integers or booleans are no longer emitted as bare scalars, which the Kubernetes API rejects for args (a list of strings).
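For illustration (a sketch, not from the original post), with | quote the rendered container spec ends up with every argument as a quoted string:
args:
  - "-ms"
  - "server_name"
  - "-msp"
  - "server_port"
which the Deployment schema accepts, since args must be a list of strings.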

Unable to interpolate environment variable inside POD via the helm chart for stable/elasticsearch-curator

I am trying to make a Helm chart for my custom ELK stack. I use the stable/elasticsearch-curator chart as a dependency.
In my values.yaml file, I use some env variables to pass the elasticsearch host:
esClusterName: &esClusterName "elasticsearch-logs"
...
elasticsearch-curator:
  env:
    ES_CLUSTER_NAME: *esClusterName
    ELASTICSEARCH_HOST: $(ES_CLUSTER_NAME)-master
  configMaps:
    config_yml: |-
      ---
      client:
        hosts:
          - ${ELASTICSEARCH_HOST}
        port: 9200
But the variable is not correctly interpolated as shown by this error message:
HTTP N/A error: HTTPConnectionPool(host='$(es_cluster_name)-master', port=9200): Max retries exceeded with ...
Inside my pod, ELASTICSEARCH_HOST is '$(es_cluster_name)-master' (the name of my variable in lowercase, plus "-master") instead of "elasticsearch-logs-master".
I cannot wrap my head around this. I have used the same technique, env variable interpolation, for the other dependencies and it works.
The only difference I see is that the helm chart for elasticsearch-curator passes env variables differently from the other charts:
# stable/elasticsearch-curator/templates/cronjob.yaml (the file is here)
env:
{{- if .Values.env }}
{{- range $key, $value := .Values.env }}
  - name: {{ $key | upper | quote }}
    value: {{ $value | quote }}
{{- end }}
{{- end }}
And this template expects the values to be passed in values.yaml like so (the file is here):
env:
  MY_ENV_VAR: value1
  MY_OTHER_VAR: value2
whereas all other templates use this way (example file):
env: {{ toYaml .Values.extraEnvs | nindent 10 }}
with a values.yaml like so (example file):
extraEnvs:
  - name: MY_ENVIRONMENT_VAR
    value: the_value_goes_here
But I'm not sure if this difference explains my problem. So my question is: how do I make it work?
I replaced ELASTICSEARCH_HOST with ES_HOST like so:
elasticsearch-curator:
  env:
    ES_CLUSTER_NAME: *esClusterName
    ES_HOST: $(ES_CLUSTER_NAME)-master
  configMaps:
    config_yml: |-
      ---
      client:
        hosts:
          - ${ES_HOST}
        port: 9200
and it just worked!
I think it comes from the fact that when the chart's template ranges over the env: map, the keys are rendered in alphabetical order:
env:
  ELASTICSEARCH_HOST: $(ES_CLUSTER_NAME)-master
  ES_CLUSTER_NAME: "elasticsearch-logs"
Then, when the pod tries to interpolate the value of ES_CLUSTER_NAME inside ELASTICSEARCH_HOST, it does not work since it doesn't know the value of ES_CLUSTER_NAME yet.
It would be nice to have confirmation (or refutation) of this.
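That explanation matches how Kubernetes handles dependent environment variables: a $(VAR) reference is only resolved if VAR is defined earlier in the container's env list, and Go templates iterate map keys in sorted order. A sketch of the two rendered env lists (assuming that ordering) would be:
env:
  - name: "ELASTICSEARCH_HOST"
    value: "$(ES_CLUSTER_NAME)-master"   # ES_CLUSTER_NAME not defined yet, reference left unresolved
  - name: "ES_CLUSTER_NAME"
    value: "elasticsearch-logs"
versus, after the rename:
env:
  - name: "ES_CLUSTER_NAME"
    value: "elasticsearch-logs"
  - name: "ES_HOST"
    value: "$(ES_CLUSTER_NAME)-master"   # resolves to elasticsearch-logs-master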

Kubernetes custom jenkins chart throws error associated with override_config_map

I want to spin up a Jenkins server using the latest jenkins 1.3.6 Helm chart. However, when applying the chart I receive the following error: no template "override_config_map" associated with template "gotpl".
Error: render error in "my-chart/charts/jenkins/templates/jenkins-master-deployment.yaml":
template: my-chart/charts/jenkins/templates/jenkins-master-deployment.yaml:42:28: executing "my-chart/charts/jenkins/templates/jenkins-master-deployment.yaml" at : error calling include:
template: my-chart/charts/jenkins/templates/config.yaml:335:3: executing "my-chart/charts/jenkins/templates/config.yaml" at : error calling include:
template: no template "override_config_map" associated with template "gotpl"
As fdlk mentioned in a comment on CustomConfigMap option yields error with override_config_map missing #4040:
It helped me to run helm lint my-chart to figure out how many brackets I needed.
My config.tpl now looks like this:
{{- define "override_config_map" }}
apiVersion: v1
kind: ConfigMap
[...]
plugins.txt: |-
{{- if .Values.Master.InstallPlugins }}
{{- range $index, $val := .Values.Master.InstallPlugins }}
{{ $val | indent 4 }}
{{- end }}
{{- end }}
{{- end }}
Using 0.16.4
