My values.yaml file looks like this:
myApp:
  jenkinsCreds:
    usernamePassword:
      - credentialsId: 'github-jenkins'
        passwordVariable: 'pass'
        usernameVariable: 'USERNAME'
        helm:
          foo: " "
      - credentialsId: 'rabbitmq-dev'
        passwordVariable: 'rabbitmq_username'
        usernameVariable: 'rabbitmq_password'
        helm:
          rabbitmqUser: ""
          rabbitMqPass: ""
I want to access the rabbitmqUser and rabbitMqPass items in a Secret. How can I do this in Helm? I can't figure out how to grab them.
This is what my secret used to look like before I added all the new data to values.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: SuperSecret
type: Opaque
stringData:
  rabbitmq_pass: {{ .Values.rabbitmqPass | quote }}
  rabbitmq_user: {{ .Values.rabbitmqUser | quote }}
I am not sure how I would get the values out of my values.yaml file.
I thought that if I changed it to {{ .Values.myApp.jenkinsCreds.usernamePassword.helm.rabbitmqUser }} and {{ .Values.myApp.jenkinsCreds.usernamePassword.helm.rabbitMqPass }} I would be able to access the values, but this is not the case.
What's the best way to access rabbitmqUser and rabbitMqPass from my Secret? I'd prefer not to change the shape of my values.yaml file, but if I need to, that's OK.
Just add the necessary loop and conditional where the values are used. For example:
values.yaml
myApp:
  jenkinsCreds:
    usernamePassword:
      - credentialsId: 'github-jenkins'
        passwordVariable: 'pass'
        usernameVariable: 'USERNAME'
        helm:
          foo: " "
      - credentialsId: 'rabbitmq-dev'
        passwordVariable: 'rabbitmq_username'
        usernameVariable: 'rabbitmq_password'
        helm:
          rabbitmqUser: "root"
          rabbitmqPass: "123456"
secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: SuperSecret
type: Opaque
stringData:
  {{- range $i, $v := .Values.myApp.jenkinsCreds.usernamePassword }}
  {{- if eq $v.credentialsId "rabbitmq-dev" }}
  rabbitmq_pass: {{ $v.helm.rabbitmqPass | quote }}
  rabbitmq_user: {{ $v.helm.rabbitmqUser | quote }}
  {{- end }}
  {{- end }}
output:
# Source: test/templates/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: SuperSecret
type: Opaque
stringData:
  rabbitmq_pass: "123456"
  rabbitmq_user: "root"
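If you want to sanity-check the rendered Secret before installing, you can render it locally (assuming Helm 3; adjust the release name and the template path to match your chart):
# Render only the secret template and inspect the generated stringData
helm template my-release . --show-only templates/secret.yaml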
Related
Based on the number of nodes, I need to reference the certificate and key for each node from the values file.
{{- $root := . -}}
{{ range $k, $v := until (int ($root.Values.data.nodeCount) ) }}
apiVersion: v1
kind: Secret
metadata:
  name: {{ $root.Values.clusterName }}-nodes-{{ $k }}-secret
type: Opaque
data:
  crt: {{ printf "$root.Values.data.node_%d_key" $k }}
---
{{- end }}
Example output: it doesn't show the looked-up value, it only shows the printf output, which is a string. How can I evaluate the printf output so that the lookup actually retrieves the result from values.yaml?
---
apiVersion: v1
kind: Secret
metadata:
  name: test-nodes-0-secret
type: Opaque
data:
  crt: $root.Values.data.node_0_key
---
apiVersion: v1
kind: Secret
metadata:
  name: test-nodes-1-secret
type: Opaque
data:
  crt: $root.Values.data.node_1_key
example values.yml
nodeCount: 3
clusterName: "test"
data:
  nodeCount: 2
  node_0_crt: "test"
  node_0_key: "test"
  node_1_crt: "test1"
  node_1_key: "test1"
Note that Helm doesn't use Jinja templating syntax. If you were to try using a Jinja expression in a Helm template, it wouldn't work. Helm uses Go templates (with a bunch of custom functions).
For the behavior you want, I think you're looking for the tpl function, which lets you evaluate a string as a Helm template. That might look like this:
{{ range until (int $.Values.data.nodeCount) }}
apiVersion: v1
kind: Secret
metadata:
  name: {{ $.Values.clusterName }}-nodes-{{ . }}-secret
type: Opaque
data:
  crt: {{ tpl (printf "{{ .Values.data.node_%d_key }}" .) $ }}
---
{{- end }}
Note that I've also removed your use of $root; you can just refer to the $ variable if you need to explicitly refer to the root context. I've also slightly simplified the outer range loop.
Given the above template and your sample data, I get the following output:
---
# Source: example/templates/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: test-nodes-0-secret
type: Opaque
data:
  crt: test
---
# Source: example/templates/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: test-nodes-1-secret
type: Opaque
data:
  crt: test1
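As an aside, if you'd rather not nest templates with tpl, the built-in index function can do the same dynamic lookup by composing the key name as a plain string. A minimal sketch of just the data line, under the same values layout:
data:
  crt: {{ index $.Values.data (printf "node_%d_key" .) }}
index takes a map and a key, so any key name you can build with printf can be looked up without a second round of template evaluation.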
I was looking into an entirely separate issue and then came across this question which raised some concerns:
https://stackoverflow.com/a/50510753/3123109
I'm doing something pretty similar. I'm using the CSI Driver for Azure to integrate Azure Kubernetes Service with Azure Key Vault. My manifests for the integration are something like:
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentity
metadata:
  name: aks-akv-identity
  namespace: prod
spec:
  type: 0
  resourceID: $identityResourceId
  clientID: $identityClientId
---
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentityBinding
metadata:
  name: aks-akv-identity-binding
  namespace: prod
spec:
  azureIdentity: aks-akv-identity
  selector: aks-akv-identity-binding-selector
---
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: aks-akv-secret-provider
  namespace: prod
spec:
  provider: azure
  secretObjects:
    - secretName: ${resourcePrefix}-prod-secrets
      type: Opaque
      data:
        - objectName: PROD-PGDATABASE
          key: PGDATABASE
        - objectName: PROD-PGHOST
          key: PGHOST
        - objectName: PROD-PGPORT
          key: PGPORT
        - objectName: PROD-PGUSER
          key: PGUSER
        - objectName: PROD-PGPASSWORD
          key: PGPASSWORD
  parameters:
    usePodIdentity: "true"
    keyvaultName: ${resourceGroupName}akv
    cloudName: ""
    objects: |
      array:
        - |
          objectName: PROD-PGDATABASE
          objectType: secret
          objectVersion: ""
        - |
          objectName: PROD-PGHOST
          objectType: secret
          objectVersion: ""
        - |
          objectName: PROD-PGPORT
          objectType: secret
          objectVersion: ""
        - |
          objectName: PROD-PGUSER
          objectType: secret
          objectVersion: ""
        - |
          objectName: PROD-PGPASSWORD
          objectType: secret
          objectVersion: ""
    tenantId: $tenantId
Then in the microservice manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-deployment-prod
  namespace: prod
spec:
  replicas: 3
  selector:
    matchLabels:
      component: api
  template:
    metadata:
      labels:
        component: api
        aadpodidbinding: aks-akv-identity-binding-selector
    spec:
      containers:
        - name: api
          image: appacr.azurecr.io/app-api
          ports:
            - containerPort: 5000
          env:
            - name: PGDATABASE
              valueFrom:
                secretKeyRef:
                  name: app-prod-secrets
                  key: PGDATABASE
            - name: PGHOST
              value: postgres-cluster-ip-service-prod
            - name: PGPORT
              valueFrom:
                secretKeyRef:
                  name: app-prod-secrets
                  key: PGPORT
            - name: PGUSER
              valueFrom:
                secretKeyRef:
                  name: app-prod-secrets
                  key: PGUSER
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  name: app-prod-secrets
                  key: PGPASSWORD
          volumeMounts:
            - name: secrets-store01-inline
              mountPath: /mnt/secrets-store
              readOnly: true
      volumes:
        - name: secrets-store01-inline
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: aks-akv-secret-provider
---
apiVersion: v1
kind: Service
metadata:
  name: api-cluster-ip-service-prod
  namespace: prod
spec:
  type: ClusterIP
  selector:
    component: api
  ports:
    - port: 5000
      targetPort: 5000
Then in my application settings.py:
import os  # needed for os.environ below

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': os.environ['PGDATABASE'],
        'USER': os.environ['PGUSER'],
        'PASSWORD': os.environ['PGPASSWORD'],
        'HOST': os.environ['PGHOST'],
        'PORT': os.environ['PGPORT'],
    }
}
Nothing in my Dockerfile refers to any of these variables; only the Django microservice code does.
According to the link, one of the comments was:
current best practices advise against doing this exactly. secrets managed through environment variables in docker are easily viewed and should not be considered secure.
So I'm second guessing this approach.
Do I need to look into revising what I have here?
The suggestion in the link is to replace the os.environ[] calls with a call to a method that pulls the credentials from a key vault... but the credentials to even access the key vault would need to be stored in secrets, so I'm not seeing how it is any different.
Note: one thing I noticed is that using env: and also mounting the secrets as a volume is redundant. The latter was done per the documentation for the integration, but it makes the secrets available under /mnt/secrets-store, so you can do something like cat /mnt/secrets-store/PROD-PGUSER. That means os.environ[] and the env: entries aren't really necessary, because you could read each secret from that location in the Pod instead.
At least doing something like the following prints out the secret value:
kubectl exec -it $(kubectl get pods -l component=api -o custom-columns=:metadata.name -n prod) -n prod -- cat /mnt/secrets-store/PROD-PGUSER
The comment on the answer you linked was incorrect; I've left a note there to explain the confusion. What you have is fine, if possibly over-built :) You're not actually gaining any security over just using Kubernetes Secrets directly, but if you prefer the workflow around AKV then this looks fine. You might want to look at External Secrets rather than this somewhat sideways feature of the CSI driver, though; the CSI driver is really aimed at exposing things as files rather than the external -> Secret -> env var path.
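For reference, a minimal External Secrets Operator sketch of that alternative might look like the following. The store name and the v1beta1 API version are assumptions, and you would still need a SecretStore pointing at the Key Vault, but the result is an ordinary Kubernetes Secret (app-prod-secrets) that the Deployment above can keep consuming via secretKeyRef:
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: app-prod-secrets          # hypothetical name, matching the secretKeyRef above
  namespace: prod
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: azure-keyvault-store    # a SecretStore you would define separately (assumed name)
    kind: SecretStore
  target:
    name: app-prod-secrets        # the Kubernetes Secret the operator creates
  data:
    - secretKey: PGUSER           # key in the generated Secret
      remoteRef:
        key: PROD-PGUSER          # object name in Azure Key Vault
    - secretKey: PGPASSWORD
      remoteRef:
        key: PROD-PGPASSWORD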
I am trying to perform a Kubernetes Rolling Update using Helm v2; however, I'm unable to.
When I perform a helm upgrade on a slow Tomcat image, the original pod is destroyed.
I would like to figure out how to achieve zero downtime by incrementally updating Pods instances with new ones, and draining old ones.
To demonstrate, I created a sample slow Tomcat Docker image, and a Helm chart.
To install:
helm install https://github.com/h-q/slowtom/raw/master/docs/slowtom.tgz --name slowtom \
-f https://github.com/h-q/slowtom/raw/master/docs/slowtom/environments/initial.yaml
You can follow the logs by running kubectl logs -f slowtom-sf-0, and once ready you can access the application on http://localhost:30901
To upgrade:
(and that's where I need help)
helm upgrade slowtom https://github.com/h-q/slowtom/raw/master/docs/slowtom.tgz \
-f https://github.com/h-q/slowtom/raw/master/docs/slowtom/environments/upgrade.yaml
The upgrade.yaml is identical to the initial.yaml deployment file with the exception of the tag version number.
Here the original pod is destroyed, and the new one starts. Meanwhile, users are unable to access the application on http://localhost:30901
To delete:
helm del slowtom --purge
Reference
Local Helm Chart
Download helm chart:
curl -LO https://github.com/h-q/slowtom/raw/master/docs/slowtom.tgz
tar vxfz ./slowtom.tgz
Install from local helm-chart:
helm install --debug ./slowtom --name slowtom -f ./slowtom/environments/initial.yaml
Upgrade from local helm-chart:
helm upgrade --debug slowtom ./slowtom -f ./slowtom/environments/upgrade.yaml
Docker Image
Dockerfile
FROM tomcat:8.5-jdk8-corretto
RUN mkdir /usr/local/tomcat/webapps/ROOT && \
echo '<html><head><title>Slow Tomcat</title></head><body><h1>Slow Tomcat Now Ready</h1></body></html>' >> /usr/local/tomcat/webapps/ROOT/index.html
RUN echo '#!/usr/bin/env bash' >> /usr/local/tomcat/bin/slowcatalina.sh && \
echo 'x=2' >> /usr/local/tomcat/bin/slowcatalina.sh && \
echo 'secs=$(($x * 60))' >> /usr/local/tomcat/bin/slowcatalina.sh && \
echo 'while [ $secs -gt 0 ]; do' >> /usr/local/tomcat/bin/slowcatalina.sh && \
echo ' >&2 echo -e "Blast off in $secs\033[0K\r"' >> /usr/local/tomcat/bin/slowcatalina.sh && \
echo ' sleep 1' >> /usr/local/tomcat/bin/slowcatalina.sh && \
echo ' : $((secs--))' >> /usr/local/tomcat/bin/slowcatalina.sh && \
echo 'done' >> /usr/local/tomcat/bin/slowcatalina.sh && \
echo '>&2 echo "slow cataline done. will now start real catalina"' >> /usr/local/tomcat/bin/slowcatalina.sh && \
echo 'exec catalina.sh run' >> /usr/local/tomcat/bin/slowcatalina.sh && \
chmod +x /usr/local/tomcat/bin/slowcatalina.sh
ENTRYPOINT ["/usr/local/tomcat/bin/slowcatalina.sh"]
Helm Chart Content
slowtom/Chart.yaml
apiVersion: v1
description: slow-tomcat Helm chart for Kubernetes
name: slowtom
version: 1.1.2 # whatever
slowtom/values.yaml
# Do not use this file; use the ones from the environments folder
slowtom/environments/initial.yaml
# Storefront
slowtom_sf:
  name: "slowtom-sf"
  hasHealthcheck: "true"
  isResilient: "false"
  replicaCount: 2
  aspect_values:
    - name: y_aspect
      value: "storefront"

image:
  repository: hqasem/slow-tomcat
  pullPolicy: IfNotPresent
  tag: 1

env:
  - name: y_env
    value: whatever
slowtom/environments/upgrade.yaml
# Storefront
slowtom_sf:
  name: "slowtom-sf"
  hasHealthcheck: "true"
  isResilient: "false"
  replicaCount: 2
  aspect_values:
    - name: y_aspect
      value: "storefront"

image:
  repository: hqasem/slow-tomcat
  pullPolicy: IfNotPresent
  tag: 2

env:
  - name: y_env
    value: whatever
slowtom/templates/deployment.yaml
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ .Values.slowtom_sf.name }}
  labels:
    chart: "{{ .Chart.Name | trunc 63 }}"
    chartVersion: "{{ .Chart.Version | trunc 63 }}"
    visualize: "true"
    app: {{ .Values.slowtom_sf.name }}
spec:
  replicas: {{ .Values.slowtom_sf.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Values.slowtom_sf.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.slowtom_sf.name }}
        visualize: "true"
    spec:
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: {{ .Values.slowtom_sf.name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          command: ["/usr/local/tomcat/bin/slowcatalina.sh"]
          args: ["whatever"]
          env:
{{ toYaml .Values.env | indent 12 }}
{{ toYaml .Values.slowtom_sf.aspect_values | indent 12 }}
          resources:
{{ toYaml .Values.resources | indent 12 }}
---
slowtom/templates/service.yaml
kind: Service
apiVersion: v1
metadata:
  name: {{ .Values.slowtom_sf.name }}
  labels:
    chart: "{{ .Chart.Name | trunc 63 }}"
    chartVersion: "{{ .Chart.Version | trunc 63 }}"
    app: {{ .Values.slowtom_sf.name }}
    visualize: "true"
    hasHealthcheck: "{{ .Values.slowtom_sf.hasHealthcheck }}"
    isResilient: "{{ .Values.slowtom_sf.isResilient }}"
spec:
  type: NodePort
  selector:
    app: {{ .Values.slowtom_sf.name }}
  sessionAffinity: ClientIP
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      name: http
      nodePort: 30901
---
Unlike a Deployment, a StatefulSet does not start a new pod before destroying the old one during a rolling update. Instead, the expectation is that you have multiple pods, and they will be replaced one by one. If you only have 1 replica configured, it must be destroyed first. Either increase your replica count to 2 or more, or switch to a Deployment template.
https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#rolling-update
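If you do switch to a Deployment, a surge-style rolling update keeps an old pod serving until its replacement reports Ready. A minimal sketch (not the chart's actual template; pair it with the readiness probe from the next answer so that Ready actually means Tomcat has finished its slow start):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.slowtom_sf.name }}
spec:
  replicas: {{ .Values.slowtom_sf.replicaCount }}
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never remove an old pod before a replacement is Ready
      maxSurge: 1         # start one extra pod during the rollout
  selector:
    matchLabels:
      app: {{ .Values.slowtom_sf.name }}
  template:
    # ...same pod template (labels, container, probes) as in deployment.yaml above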
I solved this problem by adding Readiness or Startup Probes to my deployment.yaml
slowtom/templates/deployment.yaml
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ .Values.slowtom_sf.name }}
  labels:
    chart: "{{ .Chart.Name | trunc 63 }}"
    chartVersion: "{{ .Chart.Version | trunc 63 }}"
    visualize: "true"
    app: {{ .Values.slowtom_sf.name }}
spec:
  replicas: {{ .Values.slowtom_sf.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Values.slowtom_sf.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.slowtom_sf.name }}
        visualize: "true"
    spec:
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: {{ .Values.slowtom_sf.name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          command: ["/usr/local/tomcat/bin/slowcatalina.sh"]
          args: ["whatever"]
          env:
{{ toYaml .Values.env | indent 12 }}
{{ toYaml .Values.slowtom_sf.aspect_values | indent 12 }}
          resources:
{{ toYaml .Values.resources | indent 12 }}
          readinessProbe:
            httpGet:
              path: /
              port: 8080
            initialDelaySeconds: 60
            periodSeconds: 30
            timeoutSeconds: 1
            failureThreshold: 3
---
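If you'd rather not guess a fixed initialDelaySeconds for a container whose startup time varies, the startup-probe variant mentioned above holds off readiness and liveness checking until the first successful probe. A sketch with assumed timings, placed alongside readinessProbe in the container spec:
          startupProbe:
            httpGet:
              path: /
              port: 8080
            periodSeconds: 10
            failureThreshold: 30   # allows up to roughly 300 seconds for Tomcat to come up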
I have a config for deploying 4 pods (hence, 4 workers) for Airflow on Kubernetes using Docker. However, all of a sudden, worker-0 is unable to make a certain curl request, whereas the other workers can. This is resulting in pipeline failures.
I have tried reading about mismatched configs and StatefulSets, but in my case there is one config for all the workers, and it is the single source of truth.
My statefulsets-workers.yaml file is as follows:
# Workers are not in a Deployment, but in a StatefulSet, to allow each worker to expose a mini-server
# that only serves logs, which will be used by the web server.
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: {{ template "airflow.fullname" . }}-worker
  labels:
    app: {{ template "airflow.name" . }}-worker
    chart: {{ template "airflow.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  serviceName: "{{ template "airflow.fullname" . }}-worker"
  updateStrategy:
    type: RollingUpdate
  # Use experimental burst mode for faster StatefulSet scaling
  # https://github.com/kubernetes/kubernetes/commit/****
  podManagementPolicy: Parallel
  replicas: {{ .Values.celery.num_workers }}
  template:
    metadata:
      {{- if .Values.airflow.pallet.config_path }}
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
      {{- end }}
      labels:
        app: {{ template "airflow.name" . }}-worker
        release: {{ .Release.Name }}
    spec:
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      securityContext:
        runAsUser: 1002
        fsGroup: 1002
      containers:
        - name: {{ .Chart.Name }}-worker
          imagePullPolicy: {{ .Values.airflow.image_pull_policy }}
          image: "{{ .Values.airflow.image }}:{{ .Values.airflow.imageTag }}"
          volumeMounts:
            {{- if .Values.airflow.storage.enabled }}
            - name: google-cloud-key
              mountPath: /var/secrets/google
              readOnly: true
            {{- end }}
            - name: worker-logs
              mountPath: /usr/local/airflow/logs
            - name: data
              mountPath: /usr/local/airflow/rootfs
          env:
            {{- if .Values.airflow.storage.enabled }}
            - name: GOOGLE_APPLICATION_CREDENTIALS
              value: /var/secrets/google/key.json
            {{- end }}
            {{- range $setting, $option := .Values.airflow.config }}
            - name: {{ $setting }}
              value: {{ $option }}
            {{- end }}
          securityContext:
            allowPrivilegeEscalation: false
          envFrom:
            - configMapRef:
                name: pallet-env-file
          args: ["worker"]
          ports:
            - name: wlog
              containerPort: 8793
              protocol: TCP
      {{- if .Values.airflow.image_pull_secret }}
      imagePullSecrets:
        - name: {{ .Values.airflow.image_pull_secret }}
      {{- end }}
      {{- if .Values.airflow.storage.enabled }}
      volumes:
        - name: google-cloud-key
          secret:
            secretName: {{ .Values.airflow.storage.secretName }}
      {{- end }}
  volumeClaimTemplates:
    - metadata:
        name: worker-logs
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 50Gi
    - metadata:
        name: data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 50Gi
I expect all the workers to be able to connect to the service to which I am making the curl request.
It turns out that the environment was indeed the same; however, the receiving machine didn't have the node's new IP whitelisted.
When all the pods crashed, they took the node down with them, and restarting the node gave it a new IP. Hence, the connection timed out for the worker on that node.
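A quick way to see which external IP a node picked up after such a restart (and therefore what needs to be whitelisted on the receiving side) is:
kubectl get nodes -o wide   # the EXTERNAL-IP column shows each node's current address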
I'm trying to deploy the following Ingress with helm:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    io.ctl.cd/ssl: "ui.releasename"
  name: ui
  labels:
    chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
  rules:
  {{ if eq .Values.nodeSelector.location "minikube" }}
  - host: ui.{{ .Release.Namespace }}.minikube.test
  {{ else }}
  - host: ui.{{ .Release.Namespace }}.devhost
  {{ end }}
    http:
      paths:
      - backend:
          serviceName: api
          servicePort: {{ .Values.api.service.port }}
        path: /
And I'm getting the following error:
Error: release x-**** failed: Ingress in version "v1beta1" cannot be handled as a Ingress: only encoded map or array can be decoded into a struct
I have a very similar Ingress that is working fine; I don't know what is happening with this one.
I think the problem is in this string:
chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
To test, try:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    io.ctl.cd/ssl: "ui.releasename"
  name: ui
  labels:
    chart: "{{ .Chart.Name }}"
spec:
  rules:
  {{ if eq .Values.nodeSelector.location "minikube" }}
  - host: ui.{{ .Release.Namespace }}.minikube.test
  {{ else }}
  - host: ui.{{ .Release.Namespace }}.devhost
  {{ end }}
    http:
      paths:
      - backend:
          serviceName: api
          servicePort: {{ .Values.api.service.port }}
        path: /
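In either case, rendering the chart without submitting it to the cluster usually pinpoints which manifest comes out as invalid YAML, for example with helm template ./yourchart or helm install --dry-run --debug. And if the replace filter does turn out to be the culprit, a commonly used pattern (a sketch, not necessarily your chart's convention) builds the whole label in one expression:
labels:
  chart: {{ printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | quote }}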