I can't make my chart use my volumes and volumeMounts values. In my values.yaml file I have something like this:
volumes:
  - name: docker1
    hostPath:
      path: /var/
  - name: docker2
    hostPath:
      path: /usr/
  - name: docker3
    hostPath:
      path: /opt/
volumeMounts:
  - name: docker1
    mountPath: /var/
  - name: docker2
    mountPath: /usr/
  - name: docker3
    mountPath: /opt/
In my _deployment.tpl file I have something like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "fullname" . }}
  namespace: {{ .Values.namespace }}
  labels:
    {{- include "labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  revisionHistoryLimit: {{ .Values.revisionHistory | default 2 }}
  selector:
    matchLabels:
      {{- include "selectorLabels" . | nindent 6 }}
  template:
    metadata:
      annotations:
        {{- toYaml .Values.podAnnotations | nindent 8 }}
      labels:
        {{- include "labels" . | nindent 8 }}
    spec:
      imagePullSecrets:
        {{- toYaml .Values.imagePullSecrets | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            {{- toYaml .Values.image.ports | nindent 10 }}
          env:
            {{- toYaml .Values.env | nindent 10 }}
          volumeMounts:
            - name: {{- toYaml .Values.volumeMounts | default "" | nindent 10 }}
          volumes:
            - name: {{- toYaml .Values.volumes | default "" | nindent 10 }}
      nodeSelector:
        {{- toYaml .Values.nodeSelector | nindent 8 }}
      tolerations:
        {{- toYaml .Values.tolerations | nindent 8 }}
{{- end }}
I tried to mount volumes and volumeMounts the same way I do with env variables (which work), but sadly it doesn't work.
There is a problem with the indentation of your code: volumes should be at the same indentation level as containers, and the lists should be expanded with toYaml directly rather than behind a - name: key. As follows:
    spec:
      imagePullSecrets:
        {{- toYaml .Values.imagePullSecrets | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            {{- toYaml .Values.image.ports | nindent 12 }}
          env:
            {{- toYaml .Values.env | nindent 12 }}
          volumeMounts:
            {{- toYaml .Values.volumeMounts | default "" | nindent 12 }}
      volumes:
        {{- toYaml .Values.volumes | default "" | nindent 8 }}
      nodeSelector:
        {{- toYaml .Values.nodeSelector | nindent 8 }}
      tolerations:
        {{- toYaml .Values.tolerations | nindent 8 }}
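With the values.yaml from the question, the volume-related part of the corrected template should render to roughly this (a sketch; other fields omitted for brevity):

```yaml
    spec:
      containers:
        - name: mychart
          volumeMounts:
            - name: docker1
              mountPath: /var/
            - name: docker2
              mountPath: /usr/
            - name: docker3
              mountPath: /opt/
      volumes:
        - name: docker1
          hostPath:
            path: /var/
        - name: docker2
          hostPath:
            path: /usr/
        - name: docker3
          hostPath:
            path: /opt/
```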
If you want to debug the template, you can refer to the official Helm documentation on debugging templates.
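A couple of debug commands that help here (Helm 3 syntax; the release name and chart path are placeholders):

```shell
# Render the chart locally and show the generated YAML, without contacting the cluster:
helm template ./mychart --debug

# Render server-side without actually installing anything:
helm install myrelease ./mychart --dry-run --debug
```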
I am facing a "CrashLoopBackOff" error when I deploy a .NET Core API with helm upgrade --install flextoeco . :
NAME                            READY   STATUS             RESTARTS        AGE
flextoecoapi-6bb7cdd846-r6c67   0/1     CrashLoopBackOff   4 (38s ago)     3m8s
flextoecoapi-fb7f7b556-tgbrv    0/1     CrashLoopBackOff   219 (53s ago)   10h
mssql-depl-86c86b5f44-ldj48     0/1     Pending
I have run kubectl describe pod flextoecoapi-6bb7cdd846-r6c67 and part of the output is as below:
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  5m4s                    default-scheduler  Successfully assigned default/flextoecoapi-6bb7cdd846-r6c67 to fbcdcesdn02
  Normal   Pulling    5m3s                    kubelet            Pulling image "golide/flextoeco:1.1.1"
  Normal   Pulled     4m57s                   kubelet            Successfully pulled image "golide/flextoeco:1.1.1" in 6.2802081s
  Normal   Killing    4m34s                   kubelet            Container flextoeco failed liveness probe, will be restarted
  Normal   Created    4m33s (x2 over 4m57s)   kubelet            Created container flextoeco
  Normal   Started    4m33s (x2 over 4m56s)   kubelet            Started container flextoeco
  Normal   Pulled     4m33s                   kubelet            Container image "golide/flextoeco:1.1.1" already present on machine
  Warning  Unhealthy  4m14s (x12 over 4m56s)  kubelet            Readiness probe failed: Get "http://10.244.6.59:80/": dial tcp 10.244.0.59:80: connect: connection refused
  Warning  Unhealthy  4m14s (x5 over 4m54s)   kubelet            Liveness probe failed: Get "http://10.244.6.59:80/": dial tcp 10.244.0.59:80: connect: connection refused
  Warning  BackOff    3s (x10 over 2m33s)     kubelet            Back-off restarting failed container
Taking from the suggestions here, it appears I have a number of options to fix this, the most notable being:
i) Add a command to the Dockerfile that will ensure there is some foreground process running
ii) Extend the livenessProbe initialDelaySeconds
I have opted for the first and edited my Dockerfile as below:
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build-env
WORKDIR /app
# Copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore
# Copy everything else and build
COPY . ./
RUN dotnet publish -c Release -o out
# Build runtime image
FROM mcr.microsoft.com/dotnet/aspnet:3.1
WORKDIR /app
ENV ASPNETCORE_URLS http://+:5000
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "FlexToEcocash.dll"]
CMD tail -f /dev/null
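Note that with an exec-form ENTRYPOINT and a shell-form CMD in the same image, the CMD does not replace the entrypoint; it is appended to it as arguments. The container effectively runs something like this (illustrative, not meant to be executed):

```shell
dotnet FlexToEcocash.dll /bin/sh -c 'tail -f /dev/null'
```

so the tail never runs as a separate foreground process, which may be why the change had no effect.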
After this change I am still getting the same error.
UPDATE
Skipped: The deployment works perfectly when I am not using Helm, i.e. I can do a kubectl apply for the deployment/service/nodeport/clusterip and the API is deployed without issues.
I have tried to update values.yaml and service.yaml as below, but after redeploy the CrashLoopBackOff error persists :
templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ include "flextoeco.fullname" . }}
  labels:
    {{- include "flextoeco.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
  selector:
    {{- include "flextoeco.selectorLabels" . | nindent 4 }}
values.yaml
I have explicitly specified the CPU and memory usage here
replicaCount: 1

image:
  repository: golide/flextoeco
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: "1.1.2"

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

serviceAccount:

podAnnotations: {}

podSecurityContext: {}
  # fsGroup: 2000

securityContext: {}

service:
  type: ClusterIP
  port: 80

ingress:
  enabled: false
  className: ""
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: flextoeco.local
      paths:
        - path: /
          pathType: ImplementationSpecific
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources:
  limits:
    cpu: 1
    memory: 1Gi
  requests:
    cpu: 100m
    memory: 250Mi

autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 80
  # targetMemoryUtilizationPercentage: 80

nodeSelector: {}
templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "flextoeco.fullname" . }}
  labels:
    {{- include "flextoeco.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "flextoeco.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "flextoeco.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "flextoeco.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            tcpSocket:
              port: 8085
            initialDelaySeconds: 300
            periodSeconds: 30
            timeoutSeconds: 20
          readinessProbe:
            tcpSocket:
              port: 8085
            initialDelaySeconds: 300
            periodSeconds: 30
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
In the Deployment spec, I need to use port 5000 as the containerPort: value and also the port: in the probes. My application is listening on port 5000 :
            - name: http
              containerPort: 5000
              protocol: TCP
          livenessProbe:
            tcpSocket:
              port: 5000
            initialDelaySeconds: 300
            periodSeconds: 30
            timeoutSeconds: 20
          readinessProbe:
            tcpSocket:
              port: 5000
            initialDelaySeconds: 300
            periodSeconds: 30
The configuration in service.yaml is correct: if the Deployment maps the name http to port 5000, then referring to targetPort: http in the Service is right.
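To double-check which port the application actually listens on inside the pod, something like the following can be used (the deployment name here is assumed from the pod names in the question, and the check assumes a shell plus iproute2 or net-tools inside the image):

```shell
kubectl exec deploy/flextoecoapi -- sh -c 'ss -tlnp 2>/dev/null || netstat -tlnp'
```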
I am trying to run a docker container within a job that I am deploying with helm using AKS. The purpose of this is to run some tests using Selenium and make some postgres calls to automate web ui tests.
When trying to run within the job, the following error is received:
"Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
I ran into this problem locally, but can work around it using
docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock web-ui-auto:latest /bin/bash
The problem is I am using helm to deploy the job separate from the running tasks since it can take about an hour to complete.
I tried adding a deployment.yaml to my helm like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "web-ui-auto.fullname" . }}
  labels:
    app: {{ template "web-ui-auto.name" . }}
    chart: {{ template "web-ui-auto.chart" . }}
    draft: {{ .Values.draft | default "draft-app" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  revisionHistoryLimit: 5
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ template "web-ui-auto.name" . }}
      release: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ template "web-ui-auto.name" . }}
        draft: {{ .Values.draft | default "draft-app" }}
        release: {{ .Release.Name }}
      annotations:
        buildID: {{ .Values.buildID | default "" | quote }}
        container.apparmor.security.beta.kubernetes.io/{{ .Chart.Name }}: runtime/default
    spec:
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            runAsNonRoot: {{ .Values.securityContext.runAsNonRoot }}
            runAsUser: {{ .Values.securityContext.runAsUser }}
            runAsGroup: {{ .Values.securityContext.runAsGroup }}
            allowPrivilegeEscalation: {{ .Values.securityContext.allowPrivilegeEscalation }}
            seccompProfile:
              type: RuntimeDefault
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: {{ .Values.deployment.containerPort }}
              protocol: TCP
          volumeMounts:
            - name: dockersock
              mountPath: "/var/run/docker.sock"
      volumes:
        - name: dockersock
          hostPath:
            path: /var/run/docker.sock
but was still met with failures. My question is: what is the best method to run Docker within the job successfully when deploying with Helm? Any help is appreciated.
I am trying to perform a Kubernetes Rolling Update using Helm v2; however, I'm unable to.
When I perform a helm upgrade on a slow Tomcat image, the original pod is destroyed.
I would like to figure out how to achieve zero downtime by incrementally updating Pods instances with new ones, and draining old ones.
To demonstrate, I created a sample slow Tomcat Docker image, and a Helm chart.
To install:
helm install https://github.com/h-q/slowtom/raw/master/docs/slowtom.tgz --name slowtom \
-f https://github.com/h-q/slowtom/raw/master/docs/slowtom/environments/initial.yaml
You can follow the logs by running kubectl logs -f slowtom-sf-0, and once ready you can access the application on http://localhost:30901
To upgrade:
(and that's where I need help)
helm upgrade slowtom https://github.com/h-q/slowtom/raw/master/docs/slowtom.tgz \
-f https://github.com/h-q/slowtom/raw/master/docs/slowtom/environments/upgrade.yaml
The upgrade.yaml is identical to the initial.yaml deployment file with the exception of the tag version number.
Here the original pod is destroyed, and the new one starts. Meanwhile, users are unable to access the application on http://localhost:30901
To Delete:
helm del slowtom --purge
Reference
Local Helm Chart
Download helm chart:
curl -LO https://github.com/h-q/slowtom/raw/master/docs/slowtom.tgz
tar vxfz ./slowtom.tgz
Install from local helm-chart:
helm install --debug ./slowtom --name slowtom -f ./slowtom/environments/initial.yaml
Upgrade from local helm-chart:
helm upgrade --debug slowtom ./slowtom -f ./slowtom/environments/upgrade.yaml
Docker Image
Dockerfile
FROM tomcat:8.5-jdk8-corretto
RUN mkdir /usr/local/tomcat/webapps/ROOT && \
echo '<html><head><title>Slow Tomcat</title></head><body><h1>Slow Tomcat Now Ready</h1></body></html>' >> /usr/local/tomcat/webapps/ROOT/index.html
RUN echo '#!/usr/bin/env bash' >> /usr/local/tomcat/bin/slowcatalina.sh && \
echo 'x=2' >> /usr/local/tomcat/bin/slowcatalina.sh && \
echo 'secs=$(($x * 60))' >> /usr/local/tomcat/bin/slowcatalina.sh && \
echo 'while [ $secs -gt 0 ]; do' >> /usr/local/tomcat/bin/slowcatalina.sh && \
echo ' >&2 echo -e "Blast off in $secs\033[0K\r"' >> /usr/local/tomcat/bin/slowcatalina.sh && \
echo ' sleep 1' >> /usr/local/tomcat/bin/slowcatalina.sh && \
echo ' : $((secs--))' >> /usr/local/tomcat/bin/slowcatalina.sh && \
echo 'done' >> /usr/local/tomcat/bin/slowcatalina.sh && \
echo '>&2 echo "slow cataline done. will now start real catalina"' >> /usr/local/tomcat/bin/slowcatalina.sh && \
echo 'exec catalina.sh run' >> /usr/local/tomcat/bin/slowcatalina.sh && \
chmod +x /usr/local/tomcat/bin/slowcatalina.sh
ENTRYPOINT ["/usr/local/tomcat/bin/slowcatalina.sh"]
Helm Chart Content
slowtom/Chart.yaml
apiVersion: v1
description: slow-tomcat Helm chart for Kubernetes
name: slowtom
version: 1.1.2 # whatever
slowtom/values.yaml
# Do not use this file, but the ones from the environments folder
slowtom/environments/initial.yaml
# Storefront
slowtom_sf:
name: "slowtom-sf"
hasHealthcheck: "true"
isResilient: "false"
replicaCount: 2
aspect_values:
- name: y_aspect
value: "storefront"
image:
repository: hqasem/slow-tomcat
pullPolicy: IfNotPresent
tag: 1
env:
- name: y_env
value: whatever
slowtom/environments/upgrade.yaml
# Storefront
slowtom_sf:
  name: "slowtom-sf"
  hasHealthcheck: "true"
  isResilient: "false"
  replicaCount: 2
  aspect_values:
    - name: y_aspect
      value: "storefront"
image:
  repository: hqasem/slow-tomcat
  pullPolicy: IfNotPresent
  tag: 2
env:
  - name: y_env
    value: whatever
slowtom/templates/deployment.yaml
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ .Values.slowtom_sf.name }}
  labels:
    chart: "{{ .Chart.Name | trunc 63 }}"
    chartVersion: "{{ .Chart.Version | trunc 63 }}"
    visualize: "true"
    app: {{ .Values.slowtom_sf.name }}
spec:
  replicas: {{ .Values.slowtom_sf.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Values.slowtom_sf.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.slowtom_sf.name }}
        visualize: "true"
    spec:
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: {{ .Values.slowtom_sf.name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          command: ["/usr/local/tomcat/bin/slowcatalina.sh"]
          args: ["whatever"]
          env:
{{ toYaml .Values.env | indent 12 }}
{{ toYaml .Values.slowtom_sf.aspect_values | indent 12 }}
          resources:
{{ toYaml .Values.resources | indent 12 }}
---
slowtom/templates/service.yaml
kind: Service
apiVersion: v1
metadata:
  name: {{ .Values.slowtom_sf.name }}
  labels:
    chart: "{{ .Chart.Name | trunc 63 }}"
    chartVersion: "{{ .Chart.Version | trunc 63 }}"
    app: {{ .Values.slowtom_sf.name }}
    visualize: "true"
    hasHealthcheck: "{{ .Values.slowtom_sf.hasHealthcheck }}"
    isResilient: "{{ .Values.slowtom_sf.isResilient }}"
spec:
  type: NodePort
  selector:
    app: {{ .Values.slowtom_sf.name }}
  sessionAffinity: ClientIP
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      name: http
      nodePort: 30901
---
Unlike a Deployment, a StatefulSet does not start a new pod before destroying the old one during a rolling update. Instead, the expectation is that you have multiple pods, and they will be replaced one by one. Since you only have one replica configured, it must be destroyed first. Either increase your replica count to 2 or more, or switch to a Deployment template.
https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#rolling-update
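If you do switch to a Deployment, the zero-downtime behaviour can be pinned down explicitly with an update strategy (a sketch; only the strategy stanza is shown, the rest of the spec stays as in the chart):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # start one new pod first
      maxUnavailable: 0  # never take an old pod down before its replacement is Ready
```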
I solved this problem by adding Readiness or Startup Probes to my deployment.yaml
slowtom/templates/deployment.yaml
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ .Values.slowtom_sf.name }}
  labels:
    chart: "{{ .Chart.Name | trunc 63 }}"
    chartVersion: "{{ .Chart.Version | trunc 63 }}"
    visualize: "true"
    app: {{ .Values.slowtom_sf.name }}
spec:
  replicas: {{ .Values.slowtom_sf.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Values.slowtom_sf.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.slowtom_sf.name }}
        visualize: "true"
    spec:
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: {{ .Values.slowtom_sf.name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          command: ["/usr/local/tomcat/bin/slowcatalina.sh"]
          args: ["whatever"]
          env:
{{ toYaml .Values.env | indent 12 }}
{{ toYaml .Values.slowtom_sf.aspect_values | indent 12 }}
          resources:
{{ toYaml .Values.resources | indent 12 }}
          readinessProbe:
            httpGet:
              path: /
              port: 8080
            initialDelaySeconds: 60
            periodSeconds: 30
            timeoutSeconds: 1
            failureThreshold: 3
---
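With the probe in place, the rollout can be observed replacing pods one by one (the resource and label names here are assumed from the chart's values):

```shell
kubectl rollout status statefulset/slowtom-sf
kubectl get pods -l app=slowtom-sf -w
```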
I have a config for deploying 4 pods (hence, 4 workers) for Airflow on Kubernetes using Docker. However, all of a sudden, worker-0 is unable to make a certain curl request, whereas the other workers can. This is resulting in the failure of pipelines.
I have tried reading about mismatched configs and StatefulSets, but in my case there is one config for all the workers, and it is the only single source of truth.
statefulsets-workers.yaml file is as follows:
# Workers are not in deployment, but in StatefulSet, to allow each worker expose a mini-server
# that only serve logs, that will be used by the web server.
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: {{ template "airflow.fullname" . }}-worker
  labels:
    app: {{ template "airflow.name" . }}-worker
    chart: {{ template "airflow.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  serviceName: "{{ template "airflow.fullname" . }}-worker"
  updateStrategy:
    type: RollingUpdate
  # Use experimental burst mode for faster StatefulSet scaling
  # https://github.com/kubernetes/kubernetes/commit/****
  podManagementPolicy: Parallel
  replicas: {{ .Values.celery.num_workers }}
  template:
    metadata:
      {{- if .Values.airflow.pallet.config_path }}
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
      {{- end }}
      labels:
        app: {{ template "airflow.name" . }}-worker
        release: {{ .Release.Name }}
    spec:
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      securityContext:
        runAsUser: 1002
        fsGroup: 1002
      containers:
        - name: {{ .Chart.Name }}-worker
          imagePullPolicy: {{ .Values.airflow.image_pull_policy }}
          image: "{{ .Values.airflow.image }}:{{ .Values.airflow.imageTag }}"
          volumeMounts:
            {{- if .Values.airflow.storage.enabled }}
            - name: google-cloud-key
              mountPath: /var/secrets/google
              readOnly: true
            {{- end }}
            - name: worker-logs
              mountPath: /usr/local/airflow/logs
            - name: data
              mountPath: /usr/local/airflow/rootfs
          env:
            {{- if .Values.airflow.storage.enabled }}
            - name: GOOGLE_APPLICATION_CREDENTIALS
              value: /var/secrets/google/key.json
            {{- end }}
            {{- range $setting, $option := .Values.airflow.config }}
            - name: {{ $setting }}
              value: {{ $option }}
            {{- end }}
          securityContext:
            allowPrivilegeEscalation: false
          envFrom:
            - configMapRef:
                name: pallet-env-file
          args: ["worker"]
          ports:
            - name: wlog
              containerPort: 8793
              protocol: TCP
      {{- if .Values.airflow.image_pull_secret }}
      imagePullSecrets:
        - name: {{ .Values.airflow.image_pull_secret }}
      {{- end }}
      {{- if .Values.airflow.storage.enabled }}
      volumes:
        - name: google-cloud-key
          secret:
            secretName: {{ .Values.airflow.storage.secretName }}
      {{- end }}
  volumeClaimTemplates:
    - metadata:
        name: worker-logs
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 50Gi
    - metadata:
        name: data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 50Gi
I expect all the workers to be able to connect to the service to which I am making the curl request.
It turns out that the environment was indeed the same; however, the receiving machine didn't have the new IP of the node whitelisted.
When all the pods crashed, they took the node down with them, and restarting the node gave it a new IP. Hence, the connection timed out for the worker on that node.
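To spot this kind of drift, the current node addresses can be listed and compared against the remote whitelist (a quick check, assuming kubectl access to the cluster):

```shell
# The -o wide output includes INTERNAL-IP and EXTERNAL-IP columns per node
kubectl get nodes -o wide
```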
I'm trying to deploy the following Ingress with helm:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    io.ctl.cd/ssl: "ui.releasename"
  name: ui
  labels:
    chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
  rules:
{{ if eq .Values.nodeSelector.location "minikube" }}
    - host: ui.{{ .Release.Namespace }}.minikube.test
{{ else }}
    - host: ui.{{ .Release.Namespace }}.devhost
{{ end }}
      http:
        paths:
          - backend:
              serviceName: api
              servicePort: {{ .Values.api.service.port }}
            path: /
And I'm getting the following error:
Error: release x-**** failed: Ingress in version "v1beta1" cannot be handled as a Ingress: only encoded map or array can be decoded into a struct
I have a very similar Ingress that is working fine; I don't know what is happening with this one.
I think the problem is in this string:
chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
For a test, try:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    io.ctl.cd/ssl: "ui.releasename"
  name: ui
  labels:
    chart: "{{ .Chart.Name }}"
spec:
  rules:
{{ if eq .Values.nodeSelector.location "minikube" }}
    - host: ui.{{ .Release.Namespace }}.minikube.test
{{ else }}
    - host: ui.{{ .Release.Namespace }}.devhost
{{ end }}
      http:
        paths:
          - backend:
              serviceName: api
              servicePort: {{ .Values.api.service.port }}
            path: /
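If the chart label does turn out to be the culprit, an equivalent that keeps the version suffix while avoiding the nested double quotes can be sketched with printf (a hypothetical rewrite, not from the original answer; printf, replace, and quote are standard Helm template functions):

```yaml
labels:
  chart: {{ printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | quote }}
```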