Resolve .Release.Namespace passed into a template file via values.yaml (Helm 3)

I'm trying to set a value in my values.yaml that defaults to {{ .Release.Namespace }}. But when I check the rendered output with --dry-run, the value is the literal string "{{ .Release.Namespace }}" rather than the actual namespace.
If I set ticker.secretNamespace to a plain string, e.g. "foo", it works. How can I get this working?
Thanks!
values.yaml
ticker:
  secretNamespace: "{{ .Release.Namespace }}"
/templates/prep.yaml
...
containers:
  - name: prep
    command:
      - /bin/bash
      - -exuc
      - |
        DEBUG A: {{ $.Release.Namespace }}
        {{- $namespace := $.Release.Namespace }}
        DEBUG B: {{ $namespace }}
        {{- with .Values.ticker }}
        {{- if not (eq .secretNamespace $namespace) }}
        DEBUG C: {{ .secretNamespace }}
        kubectl --namespace "{{ .secretNamespace }}" create secret ..."
        {{- end }}
        {{- end }}
...
dry-run result
containers:
  - name: prep
    command:
      - /bin/bash
      - -exuc
      - |
        DEBUG A: test-ns
        DEBUG B: test-ns
        DEBUG C: {{ .Release.Namespace }}
        kubectl --namespace "{{ .Release.Namespace }}" create secret ...
I tried a few different notations inside values.yaml, but no luck:
secretNamespace: {{ .Release.Namespace }}
secretNamespace: {{ $.Release.Namespace }}
secretNamespace: .Release.Namespace
secretNamespace: "{{ .Release.Namespace | quote }}"
secretNamespace: "{{ .Release.Namespace | tpl }}"
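For reference: Helm does not render template expressions inside values files, so whatever is in values.yaml arrives in the template as a literal string. One common workaround (a sketch, not part of the original question) is to keep the template string in values.yaml and expand it in the template with the built-in tpl function, which renders a string as a template against a supplied context:

{{- with .Values.ticker }}
{{- /* tpl renders the string from values.yaml against the root context ($), so .Release is visible */}}
{{- $secretNs := tpl .secretNamespace $ }}
{{- if not (eq $secretNs $.Release.Namespace) }}
DEBUG C: {{ $secretNs }}
kubectl --namespace "{{ $secretNs }}" create secret ...
{{- end }}
{{- end }}

With that change, DEBUG C should print the same namespace as DEBUG A and B.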

Related

Run docker commands within pod using helm

I am trying to run a docker container within a job that I am deploying with helm using AKS. The purpose of this is to run some tests using Selenium and make some postgres calls to automate web ui tests.
When trying to run within the job, the following error is received:
"Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
I ran into this problem locally, but can work around it using
docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock web-ui-auto:latest /bin/bash
The problem is that I am deploying the job with helm, separately from the running tasks, since it can take about an hour to complete.
I tried adding a deployment.yaml to my helm like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "web-ui-auto.fullname" . }}
  labels:
    app: {{ template "web-ui-auto.name" . }}
    chart: {{ template "web-ui-auto.chart" . }}
    draft: {{ .Values.draft | default "draft-app" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  revisionHistoryLimit: 5
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ template "web-ui-auto.name" . }}
      release: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ template "web-ui-auto.name" . }}
        draft: {{ .Values.draft | default "draft-app" }}
        release: {{ .Release.Name }}
      annotations:
        buildID: {{ .Values.buildID | default "" | quote }}
        container.apparmor.security.beta.kubernetes.io/{{ .Chart.Name }}: runtime/default
    spec:
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            runAsNonRoot: {{ .Values.securityContext.runAsNonRoot }}
            runAsUser: {{ .Values.securityContext.runAsUser }}
            runAsGroup: {{ .Values.securityContext.runAsGroup }}
            allowPrivilegeEscalation: {{ .Values.securityContext.allowPrivilegeEscalation }}
            seccompProfile:
              type: RuntimeDefault
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: {{ .Values.deployment.containerPort }}
              protocol: TCP
          volumeMounts:
            - name: dockersock
              mountPath: "/var/run/docker.sock"
      volumes:
        - name: dockersock
          hostPath:
            path: /var/run/docker.sock
but was still met with failures. My question is: what is the best method to run docker within the job successfully when deploying with helm? Any help is appreciated.
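For comparison, here is a minimal sketch (not from the original post) of a Helm-templated Job that mounts the host Docker socket. It assumes the image already contains the docker CLI, that the process runs with enough privilege to use the socket, and that the nodes actually run a Docker daemon (newer AKS node pools use containerd, in which case there is no /var/run/docker.sock to mount):

apiVersion: batch/v1
kind: Job
metadata:
  name: {{ template "web-ui-auto.fullname" . }}-tests
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          # Talking to the host daemon usually requires root or membership in the
          # docker group, which conflicts with the runAsNonRoot setting above.
          securityContext:
            runAsUser: 0
          volumeMounts:
            - name: dockersock
              mountPath: /var/run/docker.sock
      volumes:
        - name: dockersock
          hostPath:
            path: /var/run/docker.sock
            type: Socket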

Helm chart volumes and volumeMounts in deployment file

I can't get my chart to use my volumes and volumeMounts values. In my values.yaml file I have something like this:
volumes:
  - name: docker1
    hostPath:
      path: /var/
  - name: docker2
    hostPath:
      path: /usr/
  - name: docker3
    hostPath:
      path: /opt/
volumeMounts:
  - name: docker1
    mountPath: /var/
  - name: docker2
    mountPath: /usr/
  - name: docker3
    mountPath: /opt/
In my _deployment.tpl file I have something like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "fullname" . }}
  namespace: {{ .Values.namespace }}
  labels:
    {{- include "labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  revisionHistoryLimit: {{ .Values.revisionHistory | default 2 }}
  selector:
    matchLabels:
      {{- include "selectorLabels" . | nindent 6 }}
  template:
    metadata:
      annotations:
        {{- toYaml .Values.podAnnotations | nindent 8 }}
      labels:
        {{- include "labels" . | nindent 8 }}
    spec:
      imagePullSecrets:
        {{- toYaml .Values.imagePullSecrets | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            {{- toYaml .Values.image.ports | nindent 10 }}
          env:
            {{- toYaml .Values.env | nindent 10 }}
          volumeMounts:
            - name: {{- toYaml .Values.volumeMounts | default "" | nindent 10 }}
          volumes:
            - name: {{- toYaml .Values.volumes | default "" | nindent 10 }}
      nodeSelector:
        {{- toYaml .Values.nodeSelector | nindent 8 }}
      tolerations:
        {{- toYaml .Values.tolerations | nindent 8 }}
{{- end }}
I tried to wire up volumes and volumeMounts the same way I do the env variables (which work), but sadly it doesn't work.
There is a problem with the indentation of your code.
volumes should be at the same indentation level as containers, and the toYaml output should stand in for the list items themselves rather than follow a - name: key.
As follows:
    spec:
      imagePullSecrets:
        {{- toYaml .Values.imagePullSecrets | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            {{- toYaml .Values.image.ports | nindent 12 }}
          env:
            {{- toYaml .Values.env | nindent 12 }}
          volumeMounts:
            {{- toYaml .Values.volumeMounts | default "" | nindent 12 }}
      volumes:
        {{- toYaml .Values.volumes | default "" | nindent 8 }}
      nodeSelector:
        {{- toYaml .Values.nodeSelector | nindent 8 }}
      tolerations:
        {{- toYaml .Values.tolerations | nindent 8 }}
If you want to debug the template, you can refer to the official Helm documentation:
helm debug
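For example, with Helm 3 syntax (chart path and release name here are illustrative placeholders, not taken from the question):

# Render the chart locally and print the generated manifests
helm template my-release ./mychart --debug

# Ask the cluster to validate the render without installing anything
helm install my-release ./mychart --dry-run --debug

# Static checks for common chart mistakes
helm lint ./mychart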

Kubernetes Rolling Update with Helm2

I am trying to perform a Kubernetes Rolling Update using Helm v2; however, I'm unable to.
When I perform a helm upgrade on a slow Tomcat image, the original pod is destroyed.
I would like to figure out how to achieve zero downtime by incrementally updating Pods instances with new ones, and draining old ones.
To demonstrate, I created a sample slow Tomcat Docker image, and a Helm chart.
To install:
helm install https://github.com/h-q/slowtom/raw/master/docs/slowtom.tgz --name slowtom \
-f https://github.com/h-q/slowtom/raw/master/docs/slowtom/environments/initial.yaml
You can follow the logs by running kubectl logs -f slowtom-sf-0, and once ready you can access the application on http://localhost:30901
To upgrade:
(and that's where I need help)
helm upgrade slowtom https://github.com/h-q/slowtom/raw/master/docs/slowtom.tgz \
-f https://github.com/h-q/slowtom/raw/master/docs/slowtom/environments/upgrade.yaml
The upgrade.yaml is identical to the initial.yaml deployment file with the exception of the tag version number.
Here the original pod is destroyed, and the new one starts. Meanwhile, users are unable to access the application on http://localhost:30901
To Delete:
helm del slowtom --purge
Reference
Local Helm Chart
Download helm chart:
curl -LO https://github.com/h-q/slowtom/raw/master/docs/slowtom.tgz
tar vxfz ./slowtom.tgz
Install from local helm-chart:
helm install --debug ./slowtom --name slowtom -f ./slowtom/environments/initial.yaml
Upgrade from local helm-chart:
helm upgrade --debug slowtom ./slowtom -f ./slowtom/environments/upgrade.yaml
Docker Image
Dockerfile
FROM tomcat:8.5-jdk8-corretto
RUN mkdir /usr/local/tomcat/webapps/ROOT && \
echo '<html><head><title>Slow Tomcat</title></head><body><h1>Slow Tomcat Now Ready</h1></body></html>' >> /usr/local/tomcat/webapps/ROOT/index.html
RUN echo '#!/usr/bin/env bash' >> /usr/local/tomcat/bin/slowcatalina.sh && \
echo 'x=2' >> /usr/local/tomcat/bin/slowcatalina.sh && \
echo 'secs=$(($x * 60))' >> /usr/local/tomcat/bin/slowcatalina.sh && \
echo 'while [ $secs -gt 0 ]; do' >> /usr/local/tomcat/bin/slowcatalina.sh && \
echo ' >&2 echo -e "Blast off in $secs\033[0K\r"' >> /usr/local/tomcat/bin/slowcatalina.sh && \
echo ' sleep 1' >> /usr/local/tomcat/bin/slowcatalina.sh && \
echo ' : $((secs--))' >> /usr/local/tomcat/bin/slowcatalina.sh && \
echo 'done' >> /usr/local/tomcat/bin/slowcatalina.sh && \
echo '>&2 echo "slow cataline done. will now start real catalina"' >> /usr/local/tomcat/bin/slowcatalina.sh && \
echo 'exec catalina.sh run' >> /usr/local/tomcat/bin/slowcatalina.sh && \
chmod +x /usr/local/tomcat/bin/slowcatalina.sh
ENTRYPOINT ["/usr/local/tomcat/bin/slowcatalina.sh"]
Helm Chart Content
slowtom/Chart.yaml
apiVersion: v1
description: slow-tomcat Helm chart for Kubernetes
name: slowtom
version: 1.1.2 # whatever
slowtom/values.yaml
# Do not use this file; use the ones from the environments folder instead
slowtom/environments/initial.yaml
# Storefront
slowtom_sf:
  name: "slowtom-sf"
  hasHealthcheck: "true"
  isResilient: "false"
  replicaCount: 2
  aspect_values:
    - name: y_aspect
      value: "storefront"
image:
  repository: hqasem/slow-tomcat
  pullPolicy: IfNotPresent
  tag: 1
env:
  - name: y_env
    value: whatever
slowtom/environments/upgrade.yaml
# Storefront
slowtom_sf:
  name: "slowtom-sf"
  hasHealthcheck: "true"
  isResilient: "false"
  replicaCount: 2
  aspect_values:
    - name: y_aspect
      value: "storefront"
image:
  repository: hqasem/slow-tomcat
  pullPolicy: IfNotPresent
  tag: 2
env:
  - name: y_env
    value: whatever
slowtom/templates/deployment.yaml
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ .Values.slowtom_sf.name }}
  labels:
    chart: "{{ .Chart.Name | trunc 63 }}"
    chartVersion: "{{ .Chart.Version | trunc 63 }}"
    visualize: "true"
    app: {{ .Values.slowtom_sf.name }}
spec:
  replicas: {{ .Values.slowtom_sf.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Values.slowtom_sf.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.slowtom_sf.name }}
        visualize: "true"
    spec:
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: {{ .Values.slowtom_sf.name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          command: ["/usr/local/tomcat/bin/slowcatalina.sh"]
          args: ["whatever"]
          env:
{{ toYaml .Values.env | indent 12 }}
{{ toYaml .Values.slowtom_sf.aspect_values | indent 12 }}
          resources:
{{ toYaml .Values.resources | indent 12 }}
---
slowtom/templates/service.yaml
kind: Service
apiVersion: v1
metadata:
  name: {{.Values.slowtom_sf.name}}
  labels:
    chart: "{{ .Chart.Name | trunc 63 }}"
    chartVersion: "{{ .Chart.Version | trunc 63 }}"
    app: {{.Values.slowtom_sf.name}}
    visualize: "true"
    hasHealthcheck: "{{ .Values.slowtom_sf.hasHealthcheck }}"
    isResilient: "{{ .Values.slowtom_sf.isResilient }}"
spec:
  type: NodePort
  selector:
    app: {{.Values.slowtom_sf.name}}
  sessionAffinity: ClientIP
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      name: http
      nodePort: 30901
---
Unlike Deployment, StatefulSet does not start a new pod before destroying the old one during a rolling update. Instead, the expectation is that you have multiple pods, and they will be replaced one-by-one. Since you only have 1 replica configured, it must destroy it first. Either increase your replica count to 2 or more, or switch to a Deployment template.
https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#rolling-update
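If you do switch to a Deployment, a rolling-update strategy along these lines keeps old pods serving until their replacements are Ready (a sketch adapted from the chart above, not part of the original answer):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.slowtom_sf.name }}
spec:
  replicas: {{ .Values.slowtom_sf.replicaCount }}
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # bring up one new pod before taking an old one down
      maxUnavailable: 0  # never drop below the desired replica count
  selector:
    matchLabels:
      app: {{ .Values.slowtom_sf.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.slowtom_sf.name }}
    spec:
      containers:
        - name: {{ .Values.slowtom_sf.name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          # remaining container settings (command, env, resources, probes)
          # as in the StatefulSet template above

Note that this only helps if the pods report readiness accurately, which is what the probe in the next answer addresses.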
I solved this problem by adding Readiness or Startup Probes to my deployment.yaml
slowtom/templates/deployment.yaml
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ .Values.slowtom_sf.name }}
  labels:
    chart: "{{ .Chart.Name | trunc 63 }}"
    chartVersion: "{{ .Chart.Version | trunc 63 }}"
    visualize: "true"
    app: {{ .Values.slowtom_sf.name }}
spec:
  replicas: {{ .Values.slowtom_sf.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Values.slowtom_sf.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.slowtom_sf.name }}
        visualize: "true"
    spec:
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: {{ .Values.slowtom_sf.name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          command: ["/usr/local/tomcat/bin/slowcatalina.sh"]
          args: ["whatever"]
          env:
{{ toYaml .Values.env | indent 12 }}
{{ toYaml .Values.slowtom_sf.aspect_values | indent 12 }}
          resources:
{{ toYaml .Values.resources | indent 12 }}
          readinessProbe:
            httpGet:
              path: /
              port: 8080
            initialDelaySeconds: 60
            periodSeconds: 30
            timeoutSeconds: 1
            failureThreshold: 3
---
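On clusters where it is available (startupProbe has been GA since Kubernetes 1.20), a startup probe is an alternative way to cover the slow boot without stretching the readiness settings. One possible variant of the probe block, assuming the same HTTP endpoint:

          startupProbe:
            httpGet:
              path: /
              port: 8080
            failureThreshold: 30   # allow up to 30 * 10s = 5 minutes to start
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /
              port: 8080
            periodSeconds: 10
            timeoutSeconds: 1
            failureThreshold: 3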

Problem with definition of Kubernetes Ingress in helm

I'm trying to deploy the following Ingress with helm:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    io.ctl.cd/ssl: "ui.releasename"
  name: ui
  labels:
    chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
  rules:
  {{ if eq .Values.nodeSelector.location "minikube" }}
  - host: ui.{{ .Release.Namespace }}.minikube.test
  {{ else }}
  - host: ui.{{ .Release.Namespace }}.devhost
  {{ end }}
    http:
      paths:
        - backend:
            serviceName: api
            servicePort: {{ .Values.api.service.port }}
          path: /
And I'm getting the following error
Error: release x-**** failed: Ingress in version "v1beta1" cannot be handled as a Ingress: only encoded map or array can be decoded into a struct
I have a very similar Ingress that works fine; I don't know what is happening with this one.
I think the problem is in this line:
chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
As a test, try:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    io.ctl.cd/ssl: "ui.releasename"
  name: ui
  labels:
    chart: "{{ .Chart.Name }}"
spec:
  rules:
  {{ if eq .Values.nodeSelector.location "minikube" }}
  - host: ui.{{ .Release.Namespace }}.minikube.test
  {{ else }}
  - host: ui.{{ .Release.Namespace }}.devhost
  {{ end }}
    http:
      paths:
        - backend:
            serviceName: api
            servicePort: {{ .Values.api.service.port }}
          path: /
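Either way, rendering just this manifest and letting kubectl validate it usually makes this kind of decoding error easier to pin down (chart path and release name below are placeholders):

# Helm 2: render only the ingress template
helm template ./mychart -x templates/ingress.yaml

# Helm 3 equivalent
helm template my-release ./mychart --show-only templates/ingress.yaml

# Feed the rendered YAML to the API server without creating anything
# (older kubectl uses plain --dry-run instead of --dry-run=client)
helm template my-release ./mychart --show-only templates/ingress.yaml \
  | kubectl apply --dry-run=client -f -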

ansible template using with_items and with_dict in same task

ansible 2.3.0.0 with python version 2.7.5
host_vars:
manager: Tom
asst_managers:
  - Gail
  - Susan
  - Larry
hotels:
  hotel1:
    address: 1113 Mockingbird ln
    rooms: 40
  hotel2:
    address: 2222 BlueJay Ln
    rooms: 20
task
- name: hot hotels
  template:
    src: hothotels.j2
    dest: /abc
  with_items: "{{ asst_manager }}"
  with_dict: "{{ hotels }}"
**ERROR! duplicate loop in task: items**
template: hothotels.j2
"{{ manager }}"
assistant managers:
{% for asst in asst_managers %}
"{{ asst }}"
{% endfor %}
{% for key, value in hotels.iteritems() %}
{{ item.key }}
address: {{ item.value.address }}
rooms: {{ item.value.address }}
{% endfor %}
If I run the task without with_items, and the template without the manager and assistant manager info, it runs, but hotel2 is in the file twice and hotel1 isn't there at all.
thanks
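For what it's worth, since manager, asst_managers and hotels are all host_vars, they are visible inside the template without any with_* loop at all. One way to sidestep the duplicate-loop error (a sketch, not from the original question) is to drop both loops from the task and iterate purely in Jinja, using the for-loop variables instead of item:

task
- name: hot hotels
  template:
    src: hothotels.j2
    dest: /abc

template: hothotels.j2
"{{ manager }}"
assistant managers:
{% for asst in asst_managers %}
"{{ asst }}"
{% endfor %}
{% for name, hotel in hotels.items() %}
{{ name }}
address: {{ hotel.address }}
rooms: {{ hotel.rooms }}
{% endfor %}

This would also likely explain the behaviour you saw without with_items: with_dict renders the same dest once per dictionary entry while the template prints item (the current outer entry) inside its own for loop, so each render contains one hotel twice and the last render overwrites the rest.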
